CN115268628A - Man-machine interaction method and device for character robot - Google Patents

Man-machine interaction method and device for character robot

Info

Publication number
CN115268628A
Authority
CN
China
Prior art keywords
action
mode
feedback
robot
human
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210648115.6A
Other languages
Chinese (zh)
Inventor
孙启瑞 (Sun Qirui)
米海鹏 (Mi Haipeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN202210648115.6A
Publication of CN115268628A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/803 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of input or preprocessed data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/011 Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to a human-computer interaction method and device for a character robot. The method comprises: acquiring multi-source information in a target area; determining a behavior mode of a pedestrian in the target area according to the multi-source information; determining an emotion feedback mode of the character robot based on the behavior mode; and determining a feedback action corresponding to the emotion feedback mode, acquiring action parameters corresponding to the feedback action, and generating a control instruction according to the action parameters, wherein the control instruction is used for controlling the character robot to execute the feedback action. The method fuses multi-source information to judge the current behavior of the interacting subject, and calls a corresponding feedback action according to that behavior, so that the character robot's interactive reaction is better matched and its interaction effect is improved.

Description

Man-machine interaction method and device for character robot
Technical Field
The invention relates to the field of natural human-computer interaction, and in particular to a human-computer interaction method and device for a character robot.
Background
Emerging fields such as digital entertainment, stage performance, spatial experience, educational presentation, and new media art offer typical application scenarios for character robot interaction. The salient characteristic of these scenarios is that the character robot brings multi-modal sensory narrative and impact to people through interaction with audiences (people), environments (objects), and intelligent agents (machines), thereby transmitting information and emotion in the three-dimensional human-machine-object world. A character robot that can give the audience good interactive feedback is therefore of great importance in such application scenarios.
However, currently built character robots still mainly perform in display shows, executing actions designed in advance through pre-written action programs. The result is serious homogenization and a lack of innovation, and the interactive feedback given to the audience falls short of expectations.
Therefore, the interaction of character robots needs to be redesigned to improve their interaction capability and to increase the attractiveness of character robot interaction scenarios to the audience.
Disclosure of Invention
The invention aims to provide a human-computer interaction method for a character robot that solves the prior-art problem of character robots failing to give the audience good interactive feedback, so that the character robot can make interactive reactions better matched to audience behavior.
In a first aspect, the present invention provides a human-computer interaction method for a character robot, the method comprising:
acquiring multi-source information in a target area;
determining a behavior mode of a pedestrian in the target area according to the multi-source information;
determining an emotional feedback mode of the character robot based on the behavior mode;
and determining a feedback action corresponding to the emotion feedback mode, acquiring action parameters corresponding to the feedback action, and generating a control instruction according to the action parameters, wherein the control instruction is used for controlling the character robot to execute the feedback action.
According to the human-computer interaction method for a character robot provided by the invention, acquiring multi-source information in the target area specifically comprises:
and collecting multi-source information in the target area by using a multi-source sensor.
According to the human-computer interaction method for a character robot provided by the invention, collecting multi-source information in the target area by using a multi-source sensor specifically comprises at least two of the following:
the multi-source sensor comprises an image sensor, and the image sensor is used for collecting a scene image of the target area, the target area being an area within a certain range around the character robot;
the multi-source sensor comprises a distance sensor, and the distance sensor is used for collecting distance information between the character robot and pedestrians in the target area;
the multi-source sensor comprises a pressure sensor, and the pressure sensor is used for collecting pressure information of the character robot;
the multi-source sensor comprises a vibration sensor, and the vibration sensor is used for collecting vibration information of the character robot.
According to the human-computer interaction method for a character robot provided by the invention, a scene image of the target area, distance information between the character robot and pedestrians in the target area, pressure information of the character robot, and vibration information of the character robot are used as the multi-source information;
the determining of the behavior mode of the pedestrian in the target area according to the multi-source information comprises the following steps:
carrying out portrait recognition on the scene image of the target area;
if the pressure information/vibration information indicates that the character robot is being pressed/vibrated, recording the touch Boolean value/tap Boolean value as positive; otherwise, not recording the touch Boolean value/tap Boolean value as positive;
and determining the behavior mode of the pedestrian in the target area based on the portrait recognition result, the distance information between the character robot and the pedestrian in the target area, the touch Boolean value and the tap Boolean value.
According to the human-computer interaction method for a character robot provided by the invention, determining the behavior mode of the pedestrian in the target area based on the portrait recognition result, the distance information between the character robot and the pedestrian in the target area, the touch Boolean value and the tap Boolean value comprises:
if the portrait recognition result is that no human-shaped image envelope surface is recognized, the behavior mode is the no-person interaction mode;
if the portrait recognition result is that at least one human-shaped image envelope surface is recognized and all the recognized human-shaped image envelope surfaces meet the preset condition, the behavior mode is the passing mode;
if the portrait recognition result is that a plurality of human-shaped image envelope surfaces are recognized and at least two of them do not meet the preset condition, the behavior mode is the multi-person interaction mode;
if the portrait recognition result is that one human-shaped image envelope surface is recognized, it does not meet the preset condition, and the touch Boolean value is positive, the behavior mode is the touch mode;
if the portrait recognition result is that one human-shaped image envelope surface is recognized, it does not meet the preset condition, and the tap Boolean value is positive, the behavior mode is the tap mode;
if the portrait recognition result is that one human-shaped image envelope surface is recognized, it does not meet the preset condition, its spatial position changes only within a preset spatial range, and neither the touch Boolean value nor the tap Boolean value is positive, the behavior mode is the stare mode;
wherein the change in the spatial position of the recognized human-shaped image envelope surface is determined from the distance information between the character robot and the pedestrian in the target area;
and the preset condition is that the time for which the human-shaped image envelope surface passes the character robot is less than a preset time.
According to the human-computer interaction method for a character robot provided by the invention, the priority of each behavior mode is recorded in a behavior mode priority sequence;
the determining of the emotional feedback mode of the character robot based on the behavior mode comprises:
under the condition that the behavior mode is not the multi-user interaction mode, searching an emotion feedback mode associated with the behavior mode from a prestored behavior mode-emotion feedback mode comparison table, and taking the searched emotion feedback mode as the emotion feedback mode of the character robot;
under the condition that the behavior mode is the multi-person interaction mode, the behavior mode of each pedestrian participating in interaction is refined, and the behavior mode with the highest priority is selected;
and searching the emotion feedback mode associated with the behavior mode with the highest priority from a prestored behavior mode-emotion feedback mode comparison table, and taking the searched emotion feedback mode as the emotion feedback mode of the character robot.
According to the human-computer interaction method for a character robot provided by the invention, determining the feedback action corresponding to the emotion feedback mode comprises the following steps:
searching a feedback action corresponding to the emotion feedback mode from a prestored emotion feedback mode-feedback action comparison table;
the obtaining of the action parameter corresponding to the feedback action includes:
calling the action parameters corresponding to the feedback action from a pre-stored action library, wherein each feedback action designed for the character robot and its corresponding action parameters are stored in the action library;
wherein the emotion feedback mode comprises: a boredom mode, a startle mode, a curiosity mode, a clinginess mode and an excitement mode;
the feedback action is a combination of interaction actions of different body parts of the character robot.
According to the human-computer interaction method for the character robot provided by the invention, the generation process of the action parameters corresponding to each feedback action comprises the following steps:
determining an interaction action contained in each feedback action;
generating action parameters corresponding to each interactive action in each feedback action by adopting a three-dimensional modeling technique and an animation retargeting technique;
performing interpolation processing on the action parameters corresponding to each interactive action respectively so as to enable the character robot to smoothly execute the corresponding interactive actions;
performing action arrangement on the interaction action in each feedback action based on the action parameter corresponding to each interaction action after interpolation processing to obtain an action parameter corresponding to each feedback action;
verifying the action parameters corresponding to each feedback action in an action simulation system; if the verification result is that the control instruction generated according to the action parameters corresponding to each feedback action can control the character robot to smoothly execute each feedback action, outputting the action parameters corresponding to each feedback action; and if the verification result is that the control instruction generated according to the action parameters corresponding to each feedback action cannot control the character robot to smoothly execute each feedback action, re-executing the above operations.
According to the human-computer interaction method for a character robot provided by the invention, generating the action parameters corresponding to each interactive action in each feedback action by adopting a three-dimensional modeling technique and an animation retargeting technique comprises:
generating, based on the three-dimensional modeling technique, a robot animation model with a virtual animation skeleton according to the number, positions and degrees of freedom of the character robot's joints;
generating the action animation corresponding to each interactive action by motion capture or by editing in animation software;
and generating, by the animation retargeting technique, the action parameters corresponding to each interactive action that can be executed on the character robot, based on the action animation corresponding to each interactive action and the robot animation model.
In a second aspect, the present invention also provides a human-computer interaction device for a character robot, the device comprising:
the acquisition module is used for acquiring multi-source information in the target area;
the behavior mode determining module is used for determining the behavior mode of the pedestrian in the target area according to the multi-source information;
the emotion feedback mode determination module is used for determining the emotion feedback mode of the character robot based on the behavior mode;
and the interaction module is used for determining the feedback action corresponding to the emotion feedback mode, acquiring the action parameter corresponding to the feedback action, and generating a control instruction according to the action parameter, wherein the control instruction is used for controlling the character robot to execute the feedback action.
In a third aspect, the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the processor implements the human-machine interaction method for character robots as described in the first aspect.
The present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the human-computer interaction method for character robots according to the first aspect.
The invention provides a human-computer interaction method and device for a character robot, which acquire multi-source information in a target area; determine a behavior mode of the pedestrian in the target area according to the multi-source information; determine an emotion feedback mode of the character robot based on the behavior mode; and determine a feedback action corresponding to the emotion feedback mode, acquire action parameters corresponding to the feedback action, and generate a control instruction according to the action parameters, wherein the control instruction is used for controlling the character robot to execute the feedback action. The method fuses multi-source information to judge the current behavior of the interacting subject, and calls the corresponding feedback action according to that behavior, so that the character robot's interactive reaction is better matched and its interaction effect is improved.
Drawings
In order to more clearly illustrate the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a human-computer interaction method for a character robot provided by the invention;
FIG. 2 is a schematic diagram illustrating a generation method of an action parameter corresponding to an interactive action provided by the present invention;
FIG. 3 is a schematic structural diagram of a human-computer interaction device for a character robot provided by the invention;
fig. 4 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The human-computer interaction method for a character robot provided by the invention is described below with reference to FIG. 1 to FIG. 4.
For a character robot used for public exhibition, the requirements in terms of motion accuracy, load characteristics, and sensing perception are roughly the same as those of industrial and service robots. In addition, the character robot should, as far as possible, reflect its designed character attributes and interaction background in its motion characteristics, interactive feedback, and action realization, achieving natural human-machine interaction while preserving the motion characteristics of its character. To bring the motion characteristics closer to the design target, the structural design of a character robot generally has to realize motions with many, and complexly coupled, degrees of freedom. This undoubtedly increases the difficulty of human-computer interaction. In view of this, the present invention provides a human-computer interaction method for a character robot, as shown in FIG. 1, the method comprising:
s11, obtaining multi-source information in a target area;
it is to be understood that multi-source information refers to the collective term for different types of information obtained from different sources of information.
S12, determining a behavior mode of the pedestrian in the target area according to the multi-source information;
When processing multi-source information, the character robot cannot simply map each input one-to-one to a reaction when producing interactive feedback, as that would make the interaction stiff and mechanical. Therefore, the multi-source information must be fused to judge the current behavior of the interacting subject. For example, when the character robot is touched or knocked from outside, the orientation and age of the interacting subject can be judged from the image information, so that a better-matched interactive reaction can be made.
S13, determining an emotion feedback mode of the character robot based on the behavior mode;
the type of the behavior pattern and the emotion feedback pattern related to the behavior pattern are already determined in the design stage, and the emotion feedback pattern can correspond to a query in actual application.
S14, determining a feedback action corresponding to the emotion feedback mode, acquiring action parameters corresponding to the feedback action, and generating a control instruction according to the action parameters, wherein the control instruction is used for controlling the character robot to execute the feedback action.
It is understood that the action parameters are parameters such as the rotation angle and rotation speed used to control the joints of the character robot.
In terms of interactive feedback, the character robot mainly uses its own body dynamics to give anthropomorphic action feedback. For example, the robot's current emotional state is expressed through combinations of head and tail actions and the overall body posture.
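For illustration, the four steps S11 to S14 might be chained as in the following minimal Python sketch. Every function name, threshold, and table entry below is an illustrative assumption, not part of the disclosed implementation; the simplified stand-in logic is expanded in the embodiments that follow.

```python
# A minimal, self-contained sketch of the S11-S14 interaction loop.
import random
import time

def acquire_multi_source_info():
    # S11: stand-in for reading the multi-source sensors.
    return {"person_count": random.randint(0, 2),
            "distance_m": random.uniform(0.3, 5.0),
            "touched": random.random() < 0.1,
            "tapped": random.random() < 0.05}

def determine_behavior_mode(info):
    # S12: a drastically simplified fusion rule (the full rules appear below).
    if info["person_count"] == 0:
        return "no_person"
    if info["touched"]:
        return "touch"
    if info["tapped"]:
        return "tap"
    return "stare"

# S13/S14: illustrative lookup tables; the patent stores these as
# pre-designed comparison tables and a pre-stored action library.
EMOTION_OF = {"no_person": "boredom", "touch": "clinginess",
              "tap": "startle", "stare": "curiosity"}
ACTION_OF = {"boredom": "idle_stretch", "clinginess": "lean_against",
             "startle": "recoil_and_look", "curiosity": "head_tilt_approach"}

if __name__ == "__main__":
    for _ in range(3):
        info = acquire_multi_source_info()
        mode = determine_behavior_mode(info)
        emotion = EMOTION_OF[mode]
        print(f"behavior={mode} emotion={emotion} action={ACTION_OF[emotion]}")
        time.sleep(0.1)
```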
The invention provides a human-computer interaction method for a character robot that fuses multi-source information to judge the current behavior of the interacting subject, and calls the corresponding feedback action according to that behavior, so that the character robot's interactive reaction is better matched and its interaction effect is improved.
On the basis of the foregoing embodiments, as an optional embodiment, the acquiring multi-source information in the target area specifically includes:
and acquiring multi-source information in the target area by using a multi-source sensor.
For character robots, the various sensors are the windows through which the robot interacts with the outside world. The character robot acquires multi-source information from the outside (such as image information, voice information, touch information, and knocking information) through the sensing devices arranged on it, converts physical information such as sound, light, vibration, and pressure into electrical signals, and inputs them into the character robot control system, which then gives different feedback according to its programming.
On the basis of the above embodiments, as an optional embodiment, collecting multi-source information in the target area by using a multi-source sensor specifically includes at least two of the following:
the multi-source sensor comprises an image sensor, and the image sensor is used for collecting a scene image of the target area, the target area being an area within a certain range around the character robot;
the multi-source sensor comprises a distance sensor, and the distance sensor is used for acquiring distance information between the character robot and the pedestrian in the target area;
the multi-source sensor comprises a pressure sensor, and pressure information of the character robot is acquired by using the pressure sensor;
the multi-source sensor comprises a vibration sensor, and the vibration sensor is used for collecting vibration information of the character robot.
In addition to the above sensors, the multi-source sensor may also include a voice sensor (e.g., a microphone array), a light sensor, and the like.
The interaction devices arranged on the character robot are described below.
Image sensor (camera): obtains various visual information and can be used to detect faces and gesture actions. If a depth camera or dual-camera system is used, position information of objects in front can also be obtained, allowing limb actions to be recognized better and thus richer interaction.
Microphone array: detects surrounding sound information and, combined with speech recognition and a dialogue system, enables human-machine voice interaction.
Pressure sensor: detects pressure signals on the robot's 'skin', identifying touch information such as stroking and pressing by the interacting subject, and so enables limb interaction.
Vibration sensor: recognizes vibration information around the robot and supplements the pressure sensor, perfecting touch interaction. It can also be arranged around the robot to detect special interaction behaviors such as a person approaching or knocking.
Distance sensor: detects whether an object in front approaches the robot; this is generally used to judge close-range interactive behavior.
Light sensor: senses changes in ambient brightness around the robot and can support some special interaction behaviors.
It should be noted that, for different interaction scenarios of the character robot, the required multi-source sensors are different and should be adjusted appropriately according to actual situations.
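One synchronized multi-source reading might be represented as in the sketch below; the field names, types, and units are assumptions for illustration, not part of the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorFrame:
    """One synchronized snapshot of the multi-source sensors.

    All field names and units are illustrative assumptions.
    """
    scene_image: bytes           # raw camera frame for portrait recognition
    distance_m: Optional[float]  # nearest-pedestrian distance; None if nothing detected
    pressure: float              # reading from the 'skin' pressure sensor
    vibration: float             # reading from the vibration sensor
    light_lux: Optional[float] = None  # optional ambient-light reading

# Example: a frame in which a pedestrian stands 1.2 m away, not touching the robot.
frame = SensorFrame(scene_image=b"", distance_m=1.2, pressure=0.0, vibration=0.0)
print(frame)
```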
On the basis of the above embodiments, as an optional embodiment, the scene image of the target area, the distance information between the character robot and the pedestrian in the target area, the pressure information of the character robot, and the vibration information of the character robot are used as the multi-source information;
the determining the behavior mode of the pedestrian in the target area according to the multi-source information comprises the following steps:
carrying out portrait recognition on the scene image of the target area;
if the pressure information/vibration information indicates that the character robot is being pressed/vibrated, recording the touch Boolean value/tap Boolean value as positive; otherwise, not recording the touch Boolean value/tap Boolean value as positive;
and determining the behavior mode of the pedestrian in the target area based on the portrait recognition result, the distance information between the character robot and the pedestrian in the target area, the touch Boolean value and the tap Boolean value.
In this embodiment, the scene image of the target area, the distance information between the character robot and the pedestrian in the target area, the pressure information of the character robot and the vibration information of the character robot are used as the multi-source information for judging the pedestrian's behavior mode; on this basis, the scene image, the pressure information and the vibration information are preprocessed to obtain the preprocessed data.
The invention fuses this multi-source information to judge the current behavior of the interacting subject, laying the foundation for generating the character robot's interactive reaction.
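The thresholding that records the touch and tap Boolean values might look like the following sketch; the thresholds and the record layout are assumptions, since the patent does not specify numeric values.

```python
from typing import Optional

# Assumed thresholds; the patent leaves the concrete values unspecified.
PRESSURE_THRESHOLD = 0.5   # in the pressure sensor's native units
VIBRATION_THRESHOLD = 0.2  # in the vibration sensor's native units

def preprocess(pressure: float, vibration: float,
               distance_m: Optional[float]) -> dict:
    """Fuse raw readings into the record used for behavior-mode judgment."""
    return {
        # Recorded positive only while the robot is actually pressed/vibrated.
        "touch_bool": pressure > PRESSURE_THRESHOLD,
        "tap_bool": vibration > VIBRATION_THRESHOLD,
        "distance_m": distance_m,
    }

print(preprocess(pressure=0.8, vibration=0.0, distance_m=0.6))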
On the basis of the foregoing embodiments, as an optional embodiment, determining the behavior mode of the pedestrian in the target area based on the portrait recognition result, the distance information between the character robot and the pedestrian in the target area, the touch Boolean value, and the tap Boolean value comprises:
if the portrait recognition result is that no human-shaped image envelope surface is recognized, the behavior mode is the no-person interaction mode;
if the portrait recognition result is that at least one human-shaped image envelope surface is recognized and all the recognized human-shaped image envelope surfaces meet the preset condition, the behavior mode is the passing mode;
if the portrait recognition result is that a plurality of human-shaped image envelope surfaces are recognized and at least two of them do not meet the preset condition, the behavior mode is the multi-person interaction mode;
if the portrait recognition result is that one human-shaped image envelope surface is recognized, it does not meet the preset condition, and the touch Boolean value is positive, the behavior mode is the touch mode;
if the portrait recognition result is that one human-shaped image envelope surface is recognized, it does not meet the preset condition, and the tap Boolean value is positive, the behavior mode is the tap mode;
if the portrait recognition result is that one human-shaped image envelope surface is recognized, it does not meet the preset condition, its spatial position changes only within a preset spatial range, and neither the touch Boolean value nor the tap Boolean value is positive, the behavior mode is the stare mode;
wherein the change in the spatial position of the recognized human-shaped image envelope surface is determined from the distance information between the character robot and the pedestrian in the target area;
and the preset condition is that the time for which the human-shaped image envelope surface passes the character robot is less than a preset time.
Because the interactive input of the character robot is mainly the behavior of pedestrians/audiences in a public place, the purpose of the interaction is to attract pedestrians'/audiences' attention through anthropomorphic character interaction design, increase participation in the interactive display and spontaneous multi-user interaction, and raise the rate at which people stop in the public space. Pedestrian behavior states in a public scene are therefore classified; from observation, they can be divided into no-person interaction, quick passing, touch, tapping, stationary staring, and the like. To avoid monotonous interaction behavior, a large number of pedestrian mode recognition parameters, including the pedestrian's standing distance, standing time, and intensity of interactive behavior, are introduced into the character robot's interaction design to improve the discriminability of the modes.
The present embodiment is only one possible way to determine pedestrian states; it can be adjusted on the same principle to suit different application scenarios, and gesture and voice information can also be used to distinguish behavior modes and achieve deeper interaction. One possible implementation is sketched below.
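The following sketch implements the six rules above under stated assumptions: the preset time value, the silhouette representation, and the fallback for the one case the rules leave open are all illustrative, not taken from the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Silhouette:
    """A recognized human-shaped image envelope surface (illustrative fields)."""
    dwell_s: float             # time spent in front of the robot so far
    within_preset_range: bool  # spatial position changes only within the preset range

PRESET_TIME_S = 2.0  # assumed value for the patent's unspecified preset time

def meets_preset_condition(s: Silhouette) -> bool:
    # Preset condition: the envelope passes the robot in less than the preset time.
    return s.dwell_s < PRESET_TIME_S

def behavior_mode(silhouettes: List[Silhouette],
                  touch_bool: bool, tap_bool: bool) -> str:
    if not silhouettes:
        return "no_person"
    staying = [s for s in silhouettes if not meets_preset_condition(s)]
    if not staying:
        return "passing"        # everyone merely passes by
    if len(staying) >= 2:
        return "multi_person"
    s = staying[0]
    if touch_bool:
        return "touch"
    if tap_bool:
        return "tap"
    if s.within_preset_range:
        return "stare"
    return "passing"  # fallback; the patent leaves this case open

print(behavior_mode([Silhouette(dwell_s=5.0, within_preset_range=True)],
                    touch_bool=False, tap_bool=False))  # -> stare
```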
On the basis of the foregoing embodiments, as an alternative embodiment, the priority of the behavior pattern is recorded in the behavior pattern priority sequence;
the determining of the emotional feedback mode of the character robot based on the behavior mode comprises:
under the condition that the behavior mode is not the multi-user interaction mode, searching an emotion feedback mode associated with the behavior mode from a prestored behavior mode-emotion feedback mode comparison table, and taking the searched emotion feedback mode as the emotion feedback mode of the character robot;
under the condition that the behavior mode is the multi-person interaction mode, the behavior mode of each pedestrian participating in interaction is refined, and the behavior mode with the highest priority is selected;
and searching the emotion feedback mode associated with the behavior mode with the highest priority from a prestored behavior mode-emotion feedback mode comparison table, and taking the searched emotion feedback mode as the emotion feedback mode of the character robot.
According to the relative strengths of interactions, when the character robot interacts with multiple people, a strong interaction behavior draws the robot's attention away from weaker ones. This gives the robot the ability to communicate with several people at once while encouraging pedestrians to interact with it at a deeper level, as the following sketch illustrates.
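This priority-based selection might be implemented as below; the concrete priority ordering and table entries are assumptions, since the patent keeps them in pre-stored tables that are not published in text.

```python
# Assumed priority ordering: stronger interaction behaviors rank higher.
PRIORITY = {"passing": 0, "stare": 1, "tap": 2, "touch": 3}

# Assumed behavior mode -> emotion feedback mode comparison table.
EMOTION_TABLE = {
    "no_person": "boredom", "passing": "curiosity", "stare": "curiosity",
    "tap": "startle", "touch": "clinginess",
}

def emotion_mode(mode: str, per_person_modes=None) -> str:
    if mode == "multi_person":
        # Refine to each participating pedestrian's mode; keep the strongest.
        mode = max(per_person_modes, key=PRIORITY.__getitem__)
    return EMOTION_TABLE[mode]

print(emotion_mode("multi_person", ["stare", "touch", "passing"]))  # -> clinginess
```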
On the basis of the foregoing embodiments, as an optional embodiment, the determining a feedback action corresponding to the emotion feedback mode includes:
searching a feedback action corresponding to the emotion feedback mode from a prestored emotion feedback mode-feedback action comparison table;
the obtaining of the action parameter corresponding to the feedback action includes:
calling the action parameters corresponding to the feedback action from a pre-stored action library, wherein each feedback action designed for the character robot and its corresponding action parameters are stored in the action library;
it can be understood that the emotion feedback mode-feedback action comparison table and each feedback action of the character robot and corresponding action parameters are designed by engineering designers when the character robot interaction scene is designed.
Wherein the emotion feedback mode comprises: a boredom mode, a startle mode, a curiosity mode, a clinginess mode and an excitement mode;
the feedback action is a combination of interaction actions of different body parts of the character robot.
For a pedestrian's interactive behavior, the character robot's reaction needs to be guided by the personality defined for the robot as a whole. Emotional factors are taken into account when designing the feedback behaviors, and corresponding feedback behaviors are designed for the character robot's different emotion models.
Table 1 is an interaction feedback design table for a quadruped character robot;
the feedback action corresponding to each emotion feedback mode, and the emotion feedback mode corresponding to each behavior mode, can be read from Table 1.
TABLE 1
(The contents of Table 1 appear only as a drawing in the original publication.)
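Since Table 1 is published only as a drawing, the two lookups it describes can only be sketched with invented placeholder entries, as below; none of the action names or keyframe values are from the patent.

```python
# Assumed emotion feedback mode -> feedback action comparison table.
FEEDBACK_ACTION = {
    "boredom": "idle_stretch",
    "startle": "recoil_and_look",
    "curiosity": "head_tilt_approach",
    "clinginess": "lean_against",
    "excitement": "tail_wag_hop",
}

# Assumed action library: action name -> (joint, angle in degrees, duration in s)
# keyframes combining interactive actions of different body parts.
ACTION_LIBRARY = {
    "head_tilt_approach": [("neck_yaw", 20.0, 0.5), ("neck_pitch", -10.0, 0.4)],
    "recoil_and_look": [("spine_pitch", 15.0, 0.2), ("neck_yaw", -30.0, 0.3)],
}

def action_parameters(emotion: str):
    """Look up the feedback action for an emotion mode, then its parameters."""
    return ACTION_LIBRARY.get(FEEDBACK_ACTION[emotion], [])

print(action_parameters("curiosity"))
```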
On the basis of the foregoing embodiments, as an optional embodiment, the generating process of the action parameter corresponding to each feedback action includes:
determining an interaction action contained in each feedback action;
generating action parameters corresponding to each interactive action in each feedback action by adopting a three-dimensional modeling technique and an animation retargeting technique;
performing interpolation processing on the action parameters corresponding to each interactive action respectively so as to enable the character robot to smoothly execute the corresponding interactive actions;
performing action arrangement on the interaction action in each feedback action based on the action parameter corresponding to each interaction action after interpolation processing to obtain an action parameter corresponding to each feedback action;
verifying the action parameters corresponding to each feedback action in an action simulation system; if the verification result is that the control instruction generated according to the action parameters corresponding to each feedback action can control the character robot to smoothly execute each feedback action, outputting the action parameters corresponding to each feedback action; and if the verification result is that the control instruction generated according to the action parameters corresponding to each feedback action cannot control the character robot to smoothly execute each feedback action, re-executing the above operations.
The design and realization of actions are core functions of a character robot centered on interaction and performance, so generating vivid, faithful actions, executing them smoothly, and arranging them simply and conveniently are all very important. For a typical industrial robot or quadruped robot on the market, the actions are relatively simple and unvarying; only parameters such as motor running time, angle, and speed need to be considered, so the action parameters can be obtained purely through programming. A character robot, however, has many degrees of freedom and many motion couplings; generating action parameters in a purely programmed way is not only tedious and time-consuming, but also makes the action execution stiff and unnatural.
The action generation process of the character robot is divided into four stages:
Action generation stage: in this stage, considering that action creation for a character robot is closer to animation creation, three-dimensional modeling and animation retargeting are performed for different kinds of character robots with the ue4 game engine and the three-dimensional software Blender, yielding action parameters that can be executed on the character robot.
Action smoothing stage: after the character robot's motion data is generated, the robot system is required to execute the actions smoothly and fluently. This involves the trajectory planning and motion smoothing problems of robotics. In general robotics these problems involve a balance between precise position control and smooth motion, which makes them relatively difficult; for a character robot, however, precise control of position and speed matters little, while a smooth and natural overall impression of the motion matters much more. The invention therefore mainly applies pvt (position-velocity-time polynomial interpolation) or ptp (point-to-point linear interpolation) to the robot motion data through discrete control, obtaining a relatively smooth overall motion plan, as sketched below.
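As one concrete reading of the pvt step, the sketch below uses the textbook cubic (Hermite) polynomial joining two position-velocity-time waypoints; the patent does not disclose its exact interpolation formulas, so this is an assumption about what the step could look like.

```python
def pvt_segment(p0, v0, p1, v1, T, steps=10):
    """Sample the cubic satisfying p(0)=p0, p'(0)=v0, p(T)=p1, p'(T)=v1."""
    a = p0
    b = v0
    c = 3 * (p1 - p0) / T**2 - (2 * v0 + v1) / T
    d = 2 * (p0 - p1) / T**3 + (v0 + v1) / T**2
    samples = []
    for i in range(steps + 1):
        t = T * i / steps
        samples.append((t, a + b * t + c * t**2 + d * t**3))
    return samples

if __name__ == "__main__":
    # A joint moves from 0 deg (at rest) to 30 deg (at rest) over 1 s.
    for t, p in pvt_segment(0.0, 0.0, 30.0, 0.0, 1.0, steps=5):
        print(f"t={t:.2f}s angle={p:.2f} deg")
```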
Action arranging stage: a large amount of action data pre-stored in the character robot must be reasonably fused and scheduled into a complete interaction flow, so the previously generated action data needs to be arranged. The invention uses the animation system of the open-source software Blender to pre-store, fuse, and arrange the character robot's actions.
Virtual simulation and real-time mapping stage: simulating the robot is an important step in verifying its final running effect. Existing robot simulation software, however, is usually based on fixed robot models, lacks flexibility, places high demands on the robot's underlying system for communication, is hard for people outside robotics to pick up, and is not friendly to designers. A game engine can fully meet a character robot's design needs in animation production, physical simulation, and richness of interfaces, so the invention builds an action simulation system capable of real-time communication and action mapping on the ue4 engine. On one hand, ue4 offers convenient interfaces and can receive real-time motion-capture bvh data as well as animation skeleton data from animation software; on the other hand, ue4's graphical Blueprint programming makes communication with the robot convenient, enabling real-time simulation and real-time transmission of interactive actions.
On the ue4 platform, an animation skeleton remapping algorithm matches motion-captured or animation-edited robot motions to the robot's joint data, generates joint angle data in real time, and uploads it to the robot for execution over the communication link. This greatly simplifies the traditional robot simulation and execution process and eases the debugging work of robot operators.
On the basis of the foregoing embodiments, as an optional embodiment, generating the action parameters corresponding to each interactive action in each feedback action by adopting a three-dimensional modeling technique and an animation retargeting technique comprises:
generating, based on the three-dimensional modeling technique, a robot animation model with a virtual animation skeleton according to the number, positions and degrees of freedom of the character robot's joints;
generating the action animation corresponding to each interactive action by motion capture or by editing in animation software;
and generating, by the animation retargeting technique, the action parameters corresponding to each interactive action that can be executed on the character robot, based on the action animation corresponding to each interactive action and the robot animation model.
Fig. 2 illustrates the generation method of the action parameters corresponding to an interactive action, where ROS2 is an open-source robot operating system. As shown in fig. 2, in the three-dimensional modeling of the present invention, a robot animation model with a virtual animation skeleton is generated according to the number, positions, and degrees of freedom of the character robot's joints; an action animation of the robot model is then produced by motion capture or by an animator's manual keyframing; the angle data of each degree of freedom of the robot model is extracted in the game engine; and through motion mapping and modification, executable motion data for the robot is obtained and held ready for the robot to call, as sketched below.
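The per-frame mapping from animation skeleton rotations to executable joint angles might look like the following sketch; the bone/joint names, scale factors, and limits are illustrative assumptions, and the patent performs this step inside ue4's retargeting rather than in standalone code.

```python
# Assumed mapping: animation bone -> (robot joint, scale, (min_deg, max_deg)).
JOINT_MAP = {
    "neck_01": ("neck_pitch_servo", 1.0, (-45.0, 45.0)),
    "tail_01": ("tail_yaw_servo", 0.8, (-60.0, 60.0)),
}

def retarget_frame(bone_rotations_deg: dict) -> dict:
    """Map one animation frame onto executable robot joint angles."""
    frame = {}
    for bone, angle in bone_rotations_deg.items():
        if bone not in JOINT_MAP:
            continue  # bones with no physical counterpart are dropped
        joint, scale, (lo, hi) = JOINT_MAP[bone]
        frame[joint] = max(lo, min(hi, angle * scale))  # clamp to joint limits
    return frame

# A frame where the animated neck over-rotates; the output is clamped to 45 deg.
print(retarget_frame({"neck_01": 60.0, "tail_01": -10.0, "spine_02": 5.0}))
```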
In a second aspect, the human-computer interaction device for a character robot according to the present invention is described, and the human-computer interaction device for a character robot described below and the human-computer interaction method for a character robot described above may be referred to with each other. Fig. 3 illustrates a schematic structural diagram of a human-computer interaction device for a character robot, wherein the device comprises:
an obtaining module 21, configured to obtain multi-source information in a target area;
the behavior mode determining module 22 is configured to determine a behavior mode of a pedestrian in the target area according to the multi-source information;
an emotion feedback mode determination module 23, configured to determine an emotion feedback mode of the character robot based on the behavior mode;
and the interaction module 24 is configured to determine a feedback action corresponding to the emotion feedback mode, acquire an action parameter corresponding to the feedback action, and generate a control instruction according to the action parameter, where the control instruction is used to control the character robot to execute the feedback action.
The human-computer interaction device for a character robot according to the embodiments of the present invention specifically executes the flows of the above-mentioned human-computer interaction method for a character robot, and please refer to the contents of the above-mentioned human-computer interaction method for a character robot in detail, which is not described herein again.
The invention provides a human-computer interaction device for a character robot that fuses multi-source information to judge the current behavior of the interacting subject, and calls the corresponding feedback action according to that behavior, so that the character robot's interactive reaction is better matched and its interaction effect is improved.
In a third aspect, fig. 4 illustrates a physical structure diagram of an electronic device. As shown in fig. 4, the electronic device may include: a processor (processor) 410, a communication Interface 420, a memory (memory) 430 and a communication bus 440, wherein the processor 410, the communication Interface 420 and the memory 430 are communicated with each other via the communication bus 440. Processor 410 may invoke logic instructions in memory 430 to perform a human-machine interaction method for a character robot, the method comprising: acquiring multi-source information in a target area; determining a behavior mode of the pedestrian in the target area according to the multi-source information; determining an emotional feedback mode of the character robot based on the behavior mode; and determining a feedback action corresponding to the emotion feedback mode, acquiring action parameters corresponding to the feedback action, and generating a control instruction according to the action parameters, wherein the control instruction is used for controlling the character robot to execute the feedback action.
In addition, the logic instructions in the memory 430 may be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In a fourth aspect, the present invention also provides a computer program product comprising a computer program; the computer program may be stored on a non-transitory computer-readable storage medium and, when executed by a processor, performs the human-computer interaction method for a character robot, the method comprising: acquiring multi-source information in a target area; determining a behavior mode of the pedestrian in the target area according to the multi-source information; determining an emotion feedback mode of the character robot based on the behavior mode; and determining a feedback action corresponding to the emotion feedback mode, acquiring action parameters corresponding to the feedback action, and generating a control instruction according to the action parameters, wherein the control instruction is used for controlling the character robot to execute the feedback action.
In a fifth aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program to execute a human-computer interaction method for a character robot, the method including: acquiring multi-source information in a target area; determining a behavior mode of the pedestrian in the target area according to the multi-source information; determining an emotional feedback mode of the character robot based on the behavior mode; and determining a feedback action corresponding to the emotion feedback mode, acquiring action parameters corresponding to the feedback action, and generating a control instruction according to the action parameters, wherein the control instruction is used for controlling the character robot to execute the feedback action.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment may be implemented by software plus a necessary general hardware platform, and may also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (12)

1. A human-computer interaction method for a character robot, the method comprising:
acquiring multi-source information in a target area;
determining a behavior mode of the pedestrian in the target area according to the multi-source information;
determining an emotional feedback mode of the character robot based on the behavior mode;
and determining a feedback action corresponding to the emotion feedback mode, acquiring action parameters corresponding to the feedback action, and generating a control instruction according to the action parameters, wherein the control instruction is used for controlling the character robot to execute the feedback action.
2. The human-computer interaction method for the character robot according to claim 1, wherein the obtaining of multi-source information in the target area specifically comprises:
and acquiring multi-source information in the target area by using a multi-source sensor.
3. The human-computer interaction method for a character robot according to claim 2, wherein collecting the multi-source information in the target area by using a multi-source sensor specifically comprises at least two of the following:
the multi-source sensor comprises an image sensor, and the image sensor is used for collecting a scene image of the target area, the target area being an area within a certain range around the character robot;
the multi-source sensor comprises a distance sensor, and the distance sensor is used for collecting distance information between the character robot and pedestrians in the target area;
the multi-source sensor comprises a pressure sensor, and the pressure sensor is used for collecting pressure information of the character robot;
the multi-source sensor comprises a vibration sensor, and the vibration sensor is used for collecting vibration information of the character robot.
4. The human-computer interaction method for a character robot according to claim 3, wherein a scene image of the target area, distance information between the character robot and a pedestrian in the target area, pressure information of the character robot, and vibration information of the character robot are taken as the multi-source information;
the determining the behavior mode of the pedestrian in the target area according to the multi-source information comprises the following steps:
carrying out portrait recognition on the scene image of the target area;
if the pressure information/vibration information indicates that the character robot is being pressed/vibrated, recording the touch Boolean value/tap Boolean value as positive; otherwise, not recording the touch Boolean value/tap Boolean value as positive;
and determining the behavior mode of the pedestrian in the target area based on the portrait recognition result, the distance information between the character robot and the pedestrian in the target area, the touch Boolean value and the tap Boolean value.
5. The human-computer interaction method for a character robot according to claim 4, wherein the determining of the behavior mode of the pedestrian in the target area based on the portrait recognition result, the distance information between the character robot and the pedestrian in the target area, the touch Boolean value and the tap Boolean value comprises:
if the portrait recognition result is that no human-shaped image envelope surface is recognized, the behavior mode is an unmanned interaction mode;
if the portrait recognition result is that at least one human-shaped image envelope surface is recognized and all the recognized human-shaped image envelope surfaces meet a preset condition, the behavior mode is a passing mode;
if the portrait recognition result is that a plurality of human-shaped image envelope surfaces are recognized and at least two of the recognized human-shaped image envelope surfaces do not meet the preset condition, the behavior mode is a multi-person interaction mode;
if the portrait recognition result is that one human-shaped image envelope surface is recognized, the recognized human-shaped image envelope surface does not meet the preset condition, and the touch Boolean value is positive, the behavior mode is a touch mode;
if the portrait recognition result is that one human-shaped image envelope surface is recognized, the recognized human-shaped image envelope surface does not meet the preset condition, and the tap Boolean value is positive, the behavior mode is a tap mode;
if the portrait recognition result is that one human-shaped image envelope surface is recognized, the recognized human-shaped image envelope surface does not meet the preset condition, the spatial position of the recognized human-shaped image envelope surface changes only within a preset spatial range, and neither the touch Boolean value nor the tap Boolean value is positive, the behavior mode is a staring mode;
the change in the spatial position of the recognized human-shaped image envelope surface is determined from the distance information between the character robot and the pedestrian in the target area;
and the preset condition is that the time for which a human-shaped image envelope surface passes by the character robot is less than a preset time.
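Claim 5's rules form a priority-ordered decision cascade. The sketch below implements that cascade under stated assumptions (each recognized envelope surface is an object exposing whether it met the preset condition and whether its position stayed within the preset spatial range; all names are hypothetical and the sketch simplifies the single-envelope cases):

```python
def classify_behavior(envelopes, touch: bool, tap: bool) -> str:
    """Simplified decision cascade of claim 5. Each recognized envelope surface
    is assumed to expose .meets_preset (it passed the robot in under the preset
    time) and .stayed_in_range (its position changed only within the preset
    spatial range, inferred from the distance sensor)."""
    if not envelopes:
        return "unmanned interaction mode"
    lingering = [e for e in envelopes if not e.meets_preset]
    if not lingering:                        # every envelope passed by quickly
        return "passing mode"
    if len(lingering) >= 2:
        return "multi-person interaction mode"
    # Exactly one lingering envelope surface from here on.
    if touch:
        return "touch mode"
    if tap:
        return "tap mode"
    if lingering[0].stayed_in_range:
        return "staring mode"
    return "passing mode"                    # fallback for a case the claim leaves open
```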
6. The human-computer interaction method for a character robot according to claim 5, wherein the priority of each behavior mode is recorded in a behavior mode priority sequence;
the determining of the emotion feedback mode of the character robot based on the behavior mode comprises:
under the condition that the behavior mode is not the multi-user interaction mode, searching an emotion feedback mode associated with the behavior mode from a prestored behavior mode-emotion feedback mode comparison table, and taking the searched emotion feedback mode as the emotion feedback mode of the character robot;
under the condition that the behavior mode is the multi-person interaction mode, determining the behavior mode of each pedestrian participating in the interaction separately, and selecting the behavior mode with the highest priority;
and searching the emotion feedback mode associated with the behavior mode with the highest priority from a prestored behavior mode-emotion feedback mode comparison table, and taking the searched emotion feedback mode as the emotion feedback mode of the character robot.
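Claim 6's comparison table and priority sequence can be pictured as plain look-up structures; the concrete ordering and pairings below are illustrative assumptions, since the claim only requires that such a sequence and table exist:

```python
# Hypothetical priority sequence, highest priority first (the patent fixes no order).
BEHAVIOR_PRIORITY = ["touch mode", "tap mode", "staring mode",
                     "passing mode", "unmanned interaction mode"]

# Hypothetical behavior mode -> emotion feedback mode comparison table.
MODE_TO_EMOTION = {
    "unmanned interaction mode": "bored mode",
    "passing mode": "curious mode",
    "touch mode": "clingy mode",
    "tap mode": "frightened mode",
    "staring mode": "excited mode",
}


def emotion_feedback_mode(behavior_mode: str, per_pedestrian_modes=None) -> str:
    """Look up the emotion feedback mode, refining the multi-person case first."""
    if behavior_mode == "multi-person interaction mode":
        # Refine to each participating pedestrian's behavior mode and
        # keep the one with the highest priority.
        behavior_mode = min(per_pedestrian_modes, key=BEHAVIOR_PRIORITY.index)
    return MODE_TO_EMOTION[behavior_mode]
```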
7. The human-computer interaction method for a character robot according to claim 1, wherein the determining of the feedback action corresponding to the emotion feedback mode comprises:
searching a feedback action corresponding to the emotion feedback mode from a prestored emotion feedback mode-feedback action comparison table;
the obtaining of the action parameter corresponding to the feedback action includes:
calling action parameters corresponding to the feedback action from a pre-stored action library; each feedback action designed for the character robot and its corresponding action parameters are stored in the action library;
wherein the emotion feedback mode comprises: a bored mode, a frightened mode, a curious mode, a clingy mode and an excited mode;
the feedback action is a combination of interactive actions of different body parts of the character robot.
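A sketch of claim 7's two look-ups with a hypothetical action library; the feedback actions, joints, and keyframes shown are invented for illustration and do not come from the patent:

```python
# Hypothetical emotion feedback mode -> feedback action comparison table.
EMOTION_TO_ACTION = {
    "bored mode": "slow_look_around",
    "frightened mode": "recoil_and_cover",
    "curious mode": "lean_in_and_tilt_head",
    "clingy mode": "reach_out_arms",
    "excited mode": "wave_and_bounce",
}

# Hypothetical pre-stored action library: feedback action -> per-joint keyframes,
# each keyframe being (time_s, joint_angle_rad). A real library would cover
# every designed feedback action and every joint involved in it.
ACTION_LIBRARY = {
    "wave_and_bounce": {
        "right_shoulder": [(0.0, 0.0), (0.5, 1.2), (1.0, 0.8), (1.5, 1.2)],
        "head_yaw": [(0.0, 0.0), (0.75, 0.3), (1.5, 0.0)],
    },
}


def feedback_action_parameters(emotion_mode: str):
    """Claim 7's two look-ups: emotion mode -> feedback action -> action parameters."""
    feedback_action = EMOTION_TO_ACTION[emotion_mode]
    return feedback_action, ACTION_LIBRARY.get(feedback_action)
```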
8. The human-computer interaction method for a character robot according to claim 7, wherein the generation process of the action parameters corresponding to each feedback action comprises:
determining the interactive actions contained in each feedback action;
generating action parameters corresponding to each interactive action in each feedback action by adopting three-dimensional modeling technology and animation redirection technology;
performing interpolation processing on the action parameters corresponding to each interactive action, so that the character robot can smoothly execute the corresponding interactive action;
arranging the interactive actions in each feedback action based on the interpolated action parameters corresponding to each interactive action, to obtain the action parameters corresponding to each feedback action;
verifying the action parameters corresponding to each feedback action in an action simulation system; if the verification result is that a control instruction generated according to the action parameters corresponding to each feedback action controls the character robot to smoothly execute each feedback action, outputting the action parameters corresponding to each feedback action; and if the verification result is that the control instruction generated according to the action parameters corresponding to each feedback action cannot control the character robot to smoothly execute each feedback action, re-executing the above operations.
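Claim 8's interpolation and verification steps might look as follows; linear interpolation is one common choice (splines are another), and the simulator interface is an assumed stand-in for the action simulation system, not a real API:

```python
import numpy as np


def interpolate_keyframes(keyframes, dt: float = 0.02):
    """Resample sparse (time_s, angle_rad) keyframes onto a fixed control
    period so the character robot moves through the action smoothly."""
    times, angles = zip(*keyframes)
    grid = np.arange(times[0], times[-1] + dt, dt)
    return np.interp(grid, times, angles)    # linear; splines would be smoother


def verified_parameters(generate_params, simulator, max_attempts: int = 3):
    """Claim 8's verification loop: generate, simulate, re-execute on failure.
    generate_params would typically call interpolate_keyframes per joint;
    simulator.executes_smoothly is an assumed interface."""
    for _ in range(max_attempts):
        params = generate_params()           # modeling, redirection, arrangement
        if simulator.executes_smoothly(params):
            return params                    # verification passed: output parameters
    raise RuntimeError("action parameters failed simulation verification")
```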
9. The human-computer interaction method for a character robot according to claim 8, wherein the generating of action parameters corresponding to each interactive action in each feedback action by adopting three-dimensional modeling technology and animation redirection technology comprises:
generating, based on three-dimensional modeling technology, a robot animation model with a virtual animation skeleton according to the number, positions and degrees of freedom of the joints of the character robot;
generating an action animation corresponding to each interactive action by means of motion capture or animation-software editing;
and generating, by adopting animation redirection technology and based on the action animation corresponding to each interactive action and the robot animation model, action parameters corresponding to each interactive action that can be executed on the character robot.
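One way to read claim 9's animation redirection (often called retargeting): map each virtual animation bone onto the corresponding physical joint and clamp its keyframes to that joint's range of motion. The mapping and names below are deliberately simplified assumptions:

```python
def redirect_animation(animation_curves, bone_to_joint):
    """Map virtual animation bone curves onto executable robot joint curves
    (claim 9's animation redirection, a.k.a. retargeting).

    animation_curves: {bone_name: [(time_s, angle_rad), ...]} produced by motion
                      capture or animation-software editing.
    bone_to_joint:    {bone_name: (joint_name, (min_rad, max_rad))}, built from
                      the robot's joint count, placement and degrees of freedom.
    """
    joint_curves = {}
    for bone, curve in animation_curves.items():
        if bone not in bone_to_joint:
            continue                          # bone has no physical counterpart
        joint, (lo, hi) = bone_to_joint[bone]
        # Clamp every keyframe to the physical joint's range of motion.
        joint_curves[joint] = [(t, max(lo, min(hi, a))) for t, a in curve]
    return joint_curves
```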
10. A human-computer interaction device for a character robot, the device comprising:
an acquisition module, configured to acquire multi-source information in a target area;
a behavior mode determining module, configured to determine a behavior mode of a pedestrian in the target area according to the multi-source information;
an emotion feedback mode determining module, configured to determine an emotion feedback mode of the character robot based on the behavior mode;
and an interaction module, configured to determine a feedback action corresponding to the emotion feedback mode, acquire an action parameter corresponding to the feedback action, and generate a control instruction according to the action parameter, wherein the control instruction is used for controlling the character robot to execute the feedback action.
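The four modules of claim 10 compose a linear pipeline; a skeletal wiring is sketched below, with all module internals elided and every name invented:

```python
class HumanRobotInteractionDevice:
    """Skeleton of the claim-10 device: acquisition -> behavior mode ->
    emotion feedback mode -> control instruction. All names are invented."""

    def __init__(self, acquisition, behavior, emotion, interaction):
        self.acquisition = acquisition   # acquisition module
        self.behavior = behavior         # behavior mode determining module
        self.emotion = emotion           # emotion feedback mode determining module
        self.interaction = interaction   # interaction module

    def step(self):
        info = self.acquisition.collect()             # multi-source information
        behavior_mode = self.behavior.determine(info)
        emotion_mode = self.emotion.determine(behavior_mode)
        return self.interaction.control_instruction(emotion_mode)
```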
11. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the human-computer interaction method for a character robot according to any one of claims 1 to 9.
12. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the human-computer interaction method for a character robot according to any one of claims 1 to 9.
CN202210648115.6A 2022-06-08 2022-06-08 Man-machine interaction method and device for character robot Pending CN115268628A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210648115.6A CN115268628A (en) 2022-06-08 2022-06-08 Man-machine interaction method and device for character robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210648115.6A CN115268628A (en) 2022-06-08 2022-06-08 Man-machine interaction method and device for character robot

Publications (1)

Publication Number Publication Date
CN115268628A 2022-11-01

Family

ID=83760292

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210648115.6A Pending CN115268628A (en) 2022-06-08 2022-06-08 Man-machine interaction method and device for character robot

Country Status (1)

Country Link
CN (1) CN115268628A (en)

Similar Documents

Publication Publication Date Title
Martinez-Gonzalez et al. Unrealrox: an extremely photorealistic virtual reality environment for robotics simulations and synthetic data generation
Wang et al. A comprehensive survey of augmented reality assembly research
CN107861714B (en) Development method and system of automobile display application based on Intel RealSense
CN103258338A (en) Method and system for driving simulated virtual environments with real data
Wolfartsberger et al. A virtual reality supported 3D environment for engineering design review
US11957995B2 (en) Toy system for augmented reality
CN111643899A (en) Virtual article display method and device, electronic equipment and storage medium
CN107844195B (en) Intel RealSense-based development method and system for virtual driving application of automobile
Cao et al. Ani-bot: A modular robotics system supporting creation, tweaking, and usage with mixed-reality interactions
Fu et al. Real-time multimodal human–avatar interaction
CN115268628A (en) Man-machine interaction method and device for character robot
CN112686990A (en) Three-dimensional model display method and device, storage medium and computer equipment
Gosselin et al. Robot Companion, an intelligent interactive robot coworker for the Industry 5.0
CN112233208B (en) Robot state processing method, apparatus, computing device and storage medium
Shen Application and Implementation Methods of VR Technology in Higher Education Mechanical Manufacturing Programs
Lisboa et al. 3D virtual environments for manufacturing automation
Kirakosian et al. Immersive simulation and training of person-to-3d character dance in real-time
Ganal et al. PePUT: A Unity Toolkit for the Social Robot Pepper
Ernst et al. Creating virtual worlds with a graspable user interface
Zhang et al. Tele-immersive interaction with intelligent virtual agents based on real-time 3D modeling
Sharma et al. Exploring The Potential of VR Interfaces in Animation: A Comprehensive Review
Lahdenperä Design and Implementation of a Virtual Reality Application for Mechanical Assembly Training
CN116204167A (en) Method and system for realizing full-flow visual editing Virtual Reality (VR)
Profanter Implementation and Evaluation of multimodal input/output channels for task-based industrial robot programming
Frijns et al. Programming Robot Animation Through Human Body Movement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination