CN108363492A - A kind of man-machine interaction method and interactive robot - Google Patents
- Publication number
- CN108363492A CN108363492A CN201810193098.5A CN201810193098A CN108363492A CN 108363492 A CN108363492 A CN 108363492A CN 201810193098 A CN201810193098 A CN 201810193098A CN 108363492 A CN108363492 A CN 108363492A
- Authority
- CN
- China
- Prior art keywords
- user
- interaction
- robot
- information
- content
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/0005—Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Software Systems (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Mathematical Physics (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Mechanical Engineering (AREA)
- Robotics (AREA)
- Multimedia (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Manipulator (AREA)
Abstract
The present invention provides a human-computer interaction method and an interactive robot. The method includes: S1, when the robot detects a user requiring active interaction, obtaining the user's integrated information, which includes the personal information of the current user and the environmental information of the current robot system; S2, generating active interaction content matching the user's integrated information; S3, actively interacting with the user according to the active interaction content. During human-computer interaction, the present invention can realize personalized active interaction according to the different user information of different users and the information of the current environment.
Description
Technical field
The present invention relates to the field of human-computer interaction, and in particular to a human-computer interaction method and an interactive robot.
Background technology
Robotics is an emerging interdisciplinary field that has developed over recent decades. It draws together the latest research results of mechanical engineering, electronic engineering, information science, automatic control, artificial intelligence and other disciplines, and is one of the most active research fields in current scientific and technological development. With the development of science and technology, service robots have come to be widely applied.
For service robots, a good human-computer interaction experience is the key to their service performance, and is also the user's most basic demand on the robot. Current mainstream service robots generally have a certain degree of human-computer interaction capability. Common forms of human-computer interaction include mouse-and-keyboard interaction on PCs, touch-and-slide interaction on tablets and mobile phones, and voice interaction. Among these, voice interaction, owing to its convenience, naturalness and low learning cost, has increasingly become one of the most important interaction modes for service robots.
Mainstream voice interaction mainly uses passive voice activation to trigger the whole interaction flow: the robot continuously monitors the user's voice commands, begins speech recognition after receiving a specific command, and then makes the corresponding answer and feedback for the user according to the recognized content. Such an interaction mode is comparatively passive; the robot never initiates an exchange with the user, so its appeal to the user is limited. Moreover, the answers produced by this kind of interaction are rigid and inflexible, and the same content is answered to everyone, with no personalization, which lowers the user's sense of experience.
Therefore, to overcome the above drawbacks, the present invention provides active, personalized human-computer interaction.
Summary of the invention
The object of the present invention is to provide a human-computer interaction method and an interactive robot that, during human-computer interaction, can realize personalized active interaction according to the different user information of different users, combined with the information of the current environment.
The technical solution provided by the present invention is as follows:
The present invention provides a human-computer interaction method, including the steps of: S1, when the robot detects a user requiring active interaction, obtaining the user's integrated information, which includes the personal information of the user and the environmental information of the current robot system; S2, generating active interaction content matching the user's integrated information; S3, actively interacting with the user according to the active interaction content.
Preferably, before step S1 the method includes the step: S0, according to the information of the robot's current environment, setting an application scenario of the robot matching that environmental information, the application scenario including the interaction rules of the robot's interaction and the interaction resources used during interaction.
Preferably, step S2 specifically includes: S20, according to the interaction rules, obtaining the interaction resources matching the user's integrated information, so as to generate the active interaction content.
Preferably, the interaction resources include: voice content, motion content or multimedia content; the interaction rules include multiple rule nodes, and each rule node embodies a mapping relation between different user integrated information and different interaction resources.
Preferably, the personal information of the current user includes: gender, age, expression, facial angle, face spatial position, face occurrence count, user name, the number of faces currently detected, and voice information; the environmental information of the current robot system includes: time, place, temperature, weather, network connection state, and system language.
Preferably, step S1 specifically includes the step: S10, when the robot detects a user requiring active interaction, obtaining the user's integrated information, assigning to each user feature in the current user's personal information a corresponding user feature keyword and user feature parameter value, and assigning to each environmental feature in the current robot system's environmental information a corresponding environmental feature keyword and environmental feature parameter value.
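A minimal sketch of what step S10 could look like in code, assuming the feature keyword/parameter-value pairs are kept in a flat map; the function and key names (`build_user_profile`, `user.`/`env.` prefixes) are illustrative and not taken from the patent:

```python
# Hypothetical sketch of step S10: tag each detected user feature and each
# environment feature with a keyword and its parameter value, merging both
# into one "user integrated information" map.

def build_user_profile(person: dict, environment: dict) -> dict:
    """Combine per-user features and robot-system environment features
    into one feature-keyword -> parameter-value map."""
    profile = {}
    # User feature keywords: gender, age, expression, facial angle, ...
    for feature, value in person.items():
        profile[f"user.{feature}"] = value
    # Environment feature keywords: time, place, weather, ...
    for feature, value in environment.items():
        profile[f"env.{feature}"] = value
    return profile

profile = build_user_profile(
    {"gender": "female", "age": 30, "emotion": "smile"},
    {"time": "10:30", "place": "mall", "weather": "sunny"},
)
```

The prefixes keep user feature keywords and environmental feature keywords distinct even when, say, a detector and the system clock both report a `time` field.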
Preferably, step S21 specifically includes the steps: S211, filtering out, from the interaction rules, several candidate rule nodes matching the user's integrated information; S212, generating interaction content according to the interaction resources corresponding to the several candidate rule nodes, the interaction content including voice interaction content, motion interaction content and multimedia interaction content.
Preferably, step S211 specifically includes the steps: S2111, judging, node by node, whether all the preset feature keywords and corresponding preset feature parameter values in each rule node are identical to some of the feature keywords and corresponding feature parameter values in the user's integrated information, where the feature keywords include user feature keywords and environmental feature keywords, and the feature parameter values include user feature parameter values and environmental feature parameter values; S2112, if so, taking the rule node satisfying the condition as a candidate rule node.
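Steps S2111–S2112 amount to a subset check: a rule node qualifies when every one of its preset keyword/value pairs appears, with an identical value, in the user integrated information. A sketch under that reading (rule names and fields are invented for illustration):

```python
# Sketch of candidate-rule-node selection (steps S2111-S2112, assumed
# semantics): all of a node's preset pairs must match a subset of the
# user integrated information exactly.

def is_candidate(rule_node: dict, integrated_info: dict) -> bool:
    presets = rule_node["presets"]
    return all(integrated_info.get(k) == v for k, v in presets.items())

def select_candidates(rule_nodes: list, integrated_info: dict) -> list:
    return [n for n in rule_nodes if is_candidate(n, integrated_info)]

rules = [
    {"name": "greet_adult_female",
     "presets": {"gender": "female", "age_group": "adult"}},
    {"name": "dance_for_child",
     "presets": {"age_group": "child"}},
]
info = {"gender": "female", "age_group": "adult", "time": "10:30"}
candidates = select_candidates(rules, info)
```

Note the asymmetry the claim describes: all of the node's preset features must match, but only part of the user integrated information needs to be consumed by any one node.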
Preferably, step S212 specifically includes the steps: S2121, analysing the respective priority values of the several candidate rule nodes and sorting the candidate rule nodes according to those priority values, where for multiple candidate rule nodes of the same priority value, one of them is chosen at random, or by weighted random selection, to participate in the sorting; S2122, combining, in order, the interaction resources corresponding to the sorted candidate rule nodes to generate the interaction content.
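A compact sketch of steps S2121–S2122, assuming larger priority values sort first and that ties are broken by picking one node per priority level at random (a weighted variant could use `random.choices` with per-node weights); the data layout is hypothetical:

```python
import random

# Sketch of steps S2121-S2122: group candidate rule nodes by priority
# value, keep one randomly chosen node per priority level, then
# concatenate the chosen nodes' interaction resources in priority order.

def compose_interaction(candidates: list, rng=random) -> list:
    by_priority = {}
    for node in candidates:
        by_priority.setdefault(node["priority"], []).append(node)
    content = []
    for priority in sorted(by_priority, reverse=True):  # higher first
        chosen = rng.choice(by_priority[priority])      # random tie-break
        content.extend(chosen["resources"])
    return content

candidates = [
    {"priority": 2, "resources": ["say:Good morning, madam!"]},
    {"priority": 1, "resources": ["act:wave"]},
    {"priority": 1, "resources": ["act:bow"]},
]
plan = compose_interaction(candidates)
```

With this input, the priority-2 greeting always leads the plan, followed by exactly one of the two priority-1 actions.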
The present invention also provides an interactive robot, characterized by including: an information acquisition module, for obtaining the user's integrated information when the robot detects a user requiring active interaction, the user's integrated information including the personal information of the user and the environmental information of the current robot system; a processing module, electrically connected with the information acquisition module, for generating active interaction content matching the user's integrated information; and an interaction module, for actively interacting with the user according to the active interaction content.
Preferably, a scene setting module is further used for setting, according to the information of the robot's current environment, an application scenario of the robot matching that environmental information, the application scenario including the interaction rules of the robot's interaction and the interaction resources used during interaction.
Preferably, the processing module is further used for obtaining, according to the interaction rules, the interaction resources matching the user's integrated information, so as to generate the active interaction content.
Preferably, the interaction resources include: voice content, motion content or multimedia content; the interaction rules include multiple rule nodes, and each rule node embodies a mapping relation between different user integrated information and different interaction resources.
Preferably, the personal information of the current user includes: gender, age, expression, facial angle, face spatial position, face occurrence count, user name, the number of faces currently detected, and voice information; the environmental information of the current robot system includes: time, place, temperature, weather, network connection state, and system language.
Preferably, the information acquisition module is further used for obtaining the user's integrated information when the robot detects a user requiring active interaction, assigning to each user feature in the current user's personal information a corresponding user feature keyword and user feature parameter value, and assigning to each environmental feature in the current robot system's environmental information a corresponding environmental feature keyword and environmental feature parameter value.
Preferably, a matching submodule is used for filtering out, from the interaction rules, several candidate rule nodes matching the user's integrated information, each rule node including multiple preset feature keywords and corresponding preset feature parameter values; an interaction content generation submodule generates interaction content according to the interaction resources corresponding to the several candidate rule nodes, the interaction content including voice interaction content, motion interaction content and multimedia interaction content.
Preferably, the matching submodule is further used for judging, node by node, whether all the preset feature keywords and corresponding preset feature parameter values in each rule node are identical to some of the feature keywords and corresponding feature parameter values in the user's integrated information, the feature keywords including user feature keywords and environmental feature keywords, and the feature parameter values including user feature parameter values and environmental feature parameter values; if so, the rule node satisfying the condition is taken as a candidate rule node.
Preferably, the interaction content generation submodule is further used for analysing the respective priority values of the several candidate rule nodes and sorting the candidate rule nodes according to those priority values, where for multiple candidate rule nodes of the same priority value, one of them is chosen at random, or by weighted random selection, to participate in the sorting; the interaction content generation submodule is further used for combining, in order, the interaction resources corresponding to the sorted candidate rule nodes to generate the interaction content.
The human-computer interaction method and interactive robot provided by the present invention can bring at least one of the following beneficial effects:
1. In the present invention, the robot's interaction mode differs from current passive interaction: after recognizing a user, the robot can actively interact with that user, which better attracts users to participate and improves the interactive experience. Meanwhile, during interaction the robot obtains the personal information of the current user and the environmental information of the current robot system and synthesizes them to form the active interaction content, so that the interaction content better fits the current environment and the user's personal characteristics, draws the user further into the human-computer interaction, and improves the sense of experience.
2. After recognizing the current user, the robot obtains the user's personal information, such as age, gender and expression, through face recognition, including the user's current facial expression; if the user speaks, voice information is also obtained. This information is synthesized into personalized current user information. Through the network or the robot's own system, the robot can obtain environmental information such as the current date, time, location and weather. Based on this user integrated information carrying the user's individuality, the robot generates corresponding interaction content, making the interaction closer to the user and more intelligent.
Description of the drawings
In the following, preferred embodiments are described with reference to the drawings in a clear and understandable manner, to further explain the above characteristics, technical features and advantages of the human-computer interaction method and interactive robot, and their implementation.
Fig. 1 is a flow chart of one embodiment of the human-computer interaction method of the present invention;
Fig. 2 is a flow chart of another embodiment of the human-computer interaction method of the present invention;
Fig. 3 is a flow chart of a further embodiment of the human-computer interaction method of the present invention;
Fig. 4 is a structural schematic diagram of one embodiment of the interactive robot of the present invention.
Explanation of reference numerals:
1 - scene setting module; 2 - information acquisition module; 3 - user integrated information module; 4 - processing module; 41 - matching submodule; 42 - interaction content generation submodule; 5 - interaction module.
Specific embodiments
To explain the technical solutions in the embodiments of the present invention, or in the prior art, more clearly, specific embodiments of the present invention are described below with reference to the drawings. Obviously, the drawings in the following description are only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings, and other embodiments, from them without creative effort.
For simplicity, each figure only schematically shows the parts relevant to the present invention; they do not represent the actual structure of the product. In addition, to keep the figures simple and easy to understand, where several components have the same structure or function, some figures only depict, or only label, one of them. Herein, "one" means not only "only one" but may also mean "more than one".
The present invention provides one embodiment of a human-computer interaction method, as shown in Fig. 1, including:
S1. When the robot detects a user requiring active interaction, obtaining the user's integrated information, which includes the personal information of the current user and the environmental information of the current robot system;
S2. Generating active interaction content matching the user's integrated information;
S3. Actively interacting with the user according to the active interaction content.
Preferably, before step S1 the method includes the step: S0, according to the information of the robot's current environment, setting an application scenario of the robot matching that environmental information, the application scenario including the interaction rules of the robot's interaction and the interaction resources used during interaction.
In this embodiment, the application scenario includes the interaction rules the robot abides by and the interaction resources required during interaction under that scene. By making selections, or through platforms such as web pages and applications, the user can customize a user scenario and deploy it to the robot in real time, so that the robot can be quickly applied in different environments while avoiding changes at the system level.
Specifically, before the robot is used, the user can set an application scenario better suited to the current use environment according to the scene where the robot is applied, so that the interaction process fits the current environment better, draws the user further into the human-computer interaction, and improves the sense of experience. An application scenario includes interaction rules and interaction resources. If the robot is used in a shopping mall, shopping-related interaction rules and resources can be added. For example, an interaction rule may be that when the robot recognizes a user, it asks which goods the user has bought, or asks which services of the mall are still inadequate; the interaction resources may include cheerful songs, or actions such as clapping and bowing, so that when the user answers which goods were bought, the robot can clap to encourage the user to continue shopping.
If the robot is used in a hospital, interaction rules and resources related to medical care, drugs, epidemic prevention and the like can be added. For example, an interaction rule may be that when the robot recognizes that a user is sad, it asks why the user is unhappy, comforts the user, makes the action of patting the user's shoulder, and plays some cheerful music; the interaction resources may include cheerful music and actions such as patting a shoulder and cheering.
After recognizing a user, the robot obtains current user information through face recognition, obtains the environmental information of the robot through its internal system, and finally forms user integrated information carrying the user's individuality. The robot can then, according to the interaction rules in the application scenario, determine what interaction is currently to be completed, retrieve the corresponding interaction resources, form the active interaction content, and actively interact with the user. This differs from current interaction modes, many of which are passive: the robot continuously monitors the user's voice commands, starts speech recognition after receiving a specific command, and makes a corresponding answer and feedback according to the recognized content. Such an interaction mode is comparatively passive; the robot never initiates an exchange with the user, so its appeal is limited. In the present invention, after detecting a user, the robot forms active interaction content from the acquired user integrated information carrying the user's individuality and actively interacts with the user, attracting more users to participate in human-computer interaction.
For example, if the robot is used in a shopping mall, the user can add shopping-related interaction rules and resources. The interaction rule may be: after recognizing a user, the robot first identifies the user and obtains the current user information and environmental information; if an adult is identified, it asks which goods the user has bought and responds, then asks which services of the mall are still inadequate and responds, and finally says goodbye to the user; if a child is identified, it first greets the child and then dances for the child.
In the interaction resources the user can set cheerful songs, or actions such as clapping, bowing, waving and squatting; when the user answers which goods were bought, the robot can clap to encourage the user to continue shopping.
When the user the robot recognizes is a lady and the current time is 10:30 in the morning, the robot may say: "Good morning, madam! May I ask which goods you have bought today?" while making a waving greeting. The lady answers: "I bought a dress." The robot may respond: "Thank you for supporting our mall!" while bowing. It then continues: "Which services of the mall are still inadequate?" and records what the user says. Finally it says "Goodbye" to the user.
When the user the robot recognizes is a little girl, the current time is 5 o'clock in the afternoon, and the girl's current expression is a smile, the robot may say: "Good afternoon, little one! What makes you so happy?" while squatting down to stay at the same height as the girl. The girl answers: "Mum bought me a new dress today." The robot may respond: "Let me dance for you" while playing dance music.
It can be seen that, because the user integrated information embodied by each user differs, the robot's interaction also differs greatly; whereas in the prior art the robot's interaction content is the same for every user, and after recognizing a user it can only say: "Hello, may I ask what I can help you with?" In the present invention, by contrast, personalized interaction content is output according to the different characteristics carried by different users.
The above interaction rules and interaction resources can be set by the user. According to the user integrated information, the robot finds the corresponding response in the interaction rules and combines it with the interaction resources to generate interaction content for human-computer interaction. It can be seen that the user integrated information of different users differs, and the robot can generate different interaction content according to the different genders, expressions and ages of different users, making the interaction content personalized.
As shown in Fig. 2, the present invention also provides another embodiment of the human-computer interaction method, including:
S0. According to the information of the robot's current environment, setting an application scenario of the robot matching that environmental information, the application scenario including the interaction rules of the robot's interaction and the interaction resources used during interaction.
S1. When the robot detects a user requiring active interaction, obtaining the user's integrated information, which includes the personal information of the current user and the environmental information of the current robot system;
S201. Filtering out, from the interaction rules, several candidate rule nodes matching the user's integrated information;
S202. Generating interaction content according to the interaction resources corresponding to the several candidate rule nodes, the interaction content including voice interaction content, motion interaction content and multimedia interaction content.
S3. Actively interacting with the user according to the active interaction content.
The interaction resources include: voice content, motion content or multimedia content; the interaction rules include multiple rule nodes, and each rule node embodies a mapping relation between different user integrated information and different interaction resources.
The interaction rules include: whether the robot responds to certain specific detection results in interaction, for example whether to ask the user's name, or whether to detect the system time; and, for detection results to be responded to, the selection of the specific content of the corresponding feedback, for example different forms of address for different genders, or different actions for different users.
The interaction resources include the resources required under the corresponding interaction rules, such as all voice text content, all optional motion content, and all music and video content.
The personal information of the current user includes: gender, age, expression, facial angle, face spatial position, face occurrence count, user name, the number of faces currently detected, and voice information. The environmental information of the current robot system includes: time, place, temperature, weather, network connection state and system language.
Specifically, in this embodiment the system integrates various recognition and detection results, such as human detection, face recognition and environmental system detection, and outputs a group of user information descriptions that describe the specific information of the current interactive user. The user integrated information is formatted as follows:
head;key1:value1;key2:value2;key3:value3;...;
Here head is fixed and serves as the identifier of the word string, identifying its particular content so as to distinguish it from other types of strings. Each key is a feature in the user integrated information, i.e., a user feature keyword or an environmental feature keyword; each key denotes one feature describing the current user. These features may include: face count, face name, gender, age, time, expression, facial angle, face position, face size, weather, place, temperature, motion type, network connection status, and so on. Each value is the specific parameter value corresponding to the current key, i.e., the user feature parameter value or environmental feature parameter value described in the present invention. The <key:value> pairs can change with the output of the various detection and recognition facilities (human detection, face recognition, system detection, motion detection, etc.), changing the number and content of the features describing the user.
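The format above can be parsed by splitting on semicolons: fields without a colon are read as part of the fixed head, fields with a colon as key/value pairs. A sketch under that assumed reading (the patent gives the format but no parsing algorithm):

```python
# Hypothetical parser for the description-string format
#   head;key1:value1;key2:value2;...;
# Colon-free fields are accumulated into the head identifier; fields
# containing a colon become feature keyword -> parameter value entries.

def parse_user_description(word_string: str):
    head_parts, features = [], {}
    for field in filter(None, word_string.split(";")):
        if ":" in field:
            key, _, value = field.partition(":")
            features[key.strip()] = value.strip()
        else:
            head_parts.append(field.strip())
    return ";".join(head_parts), features

head, feats = parse_user_description("rvn;ultrasound;event;payload;type:passby;")
```

Applied to Example 1 below, this yields the head `rvn;ultrasound;event;payload` and the single feature pair `type -> passby`.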
A simple example of user integrated information is as follows:
Example 1:
rvn;ultrasound;event;payload;type:passby;
Here rvn;ultrasound;event;payload; is the string head, indicating that the string is a user information description containing ultrasonic sensor data. This description is relatively simple, merely indicating that the robot has perceived, via the ultrasonic sensor, that someone has passed by in front of it.
Example 2:
rvf;vison;event;face;payload;type:face;stop:yes;name:avatar;gender:masculine;age:53;time:13.04;emotion:none;roll:-13.61209;pitch:23.196611;yaw:23.330135;faceID:1;number:1;sequence:1;px:646;py:189;pw:352;ph:352;
Here rvf;vison;event;face;payload; is the string head, indicating that the string is a user integrated information description containing visual sensor information. Each key,value pair describes one information feature of this user. It can be read as: the user's face information is continuous; user name: avatar; gender: male; age: 53; record generation time: 13:04; facial expression: none; face angle roll: -13.61209 degrees; face angle pitch: 23.196611 degrees; face angle yaw: 23.330135 degrees; face record number: 1; number of faces in the current picture: 1; this user's face is the first among all faces; face position X: 646 px; face position Y: 189 px; face width: 352 px; face height: 352 px.
Different users, different environments, different levels of interaction familiarity, and different detection and recognition facilities produce different user descriptions. These integrated descriptions are personalized user descriptions; by parsing them, the system generates the corresponding interaction content according to the rules and resources of the current scene.
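The word string format above can be parsed mechanically. The following is a minimal Python sketch, not part of the patent itself; the function name `parse_user_info` is an illustrative assumption, and the head/field layout follows the patent's two examples (head fields carry no colon, feature fields are `key:value` pairs separated by semicolons):

```python
def parse_user_info(word_string):
    """Split a word string into its head and a key -> value dict."""
    fields = [f for f in word_string.split(";") if f]
    head_parts, pairs = [], {}
    for field in fields:
        if ":" in field:
            # feature field: user-feature or environment-feature pair
            key, _, value = field.partition(":")
            pairs[key] = value
        else:
            # head field: fixed identifier marking the string type
            head_parts.append(field)
    return ";".join(head_parts) + ";", pairs

head, info = parse_user_info("rvn;ultrasound;event;payload;type:passby;")
# head -> "rvn;ultrasound;event;payload;", info -> {"type": "passby"}
```

`str.partition` is used rather than `split(":")` so that values containing further colons would survive unchanged.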
The method generates a group of corresponding interaction content from the input user integrated information. The interaction content fed back comprises three kinds: voice content, motion content, and multimedia content. Voice content is the active voice prompt played by the robot; motion content is a group of motion commands for moving parts such as the head and limbs; multimedia content includes pictures, music, video, applications, etc., played on the display platform on the front of the robot. Multimedia content can be played simultaneously with the voice prompt or after it, to meet the needs of different scenes.
The rule nodes are Node nodes, and the interaction rules store, in a tree-shaped data structure, the voice, action, and multimedia content corresponding to each recognition result. The interaction rule tree contains multiple Node nodes; each Node node contains several preset default feature keywords with their corresponding default feature parameter values, together with multiple interaction resources such as voice, action, and multimedia.
The key values in a Node node describe the necessary conditions for selecting that group of sentences, actions, and multimedia. First, when a user integrated information string arrives, every group of Node nodes is matched against the currently input user integrated information. If the current user integrated information fully satisfies the necessary conditions of a Node node, that node becomes a candidate node awaiting selection. If the current user integrated information does not fully satisfy the node's necessary conditions, the node does not become a candidate rule node.
As shown in Figure 3, the present invention also provides an embodiment of a human-computer interaction method, comprising:
S0: according to the robot's current environment information, setting the application scenario of the robot matched to that environment information, the application scenario including the robot's interaction rules and the interaction resources used during interaction.
S10: when the robot detects a user requiring active interaction, acquiring the user integrated information, assigning to each user feature in the current user's personal information a corresponding user-feature keyword and user-feature parameter value, and assigning to each environment feature in the current robot system's environment information a corresponding environment-feature keyword and environment-feature parameter value.
S2000: judging, one by one, whether all the default feature keywords and corresponding default feature parameter values in each rule node are identical to some of the feature keywords and corresponding feature parameter values in the user integrated information; the feature keywords include user-feature keywords and environment-feature keywords, and the feature parameter values include user-feature parameter values and environment-feature parameter values.
S2001: if so, taking the rule node that satisfies the condition as a candidate rule node.
S2010: analyzing the respective priority values of the several candidate rule nodes, and sorting the candidate rule nodes by priority value; for multiple candidate rule nodes with the same priority value, choosing one of them at random or by weighted random selection to participate in the ordering.
S2011: combining, in order, the interaction resources corresponding to the sorted candidate rule nodes to generate the interaction content.
S3: actively interacting with the user according to the active interaction content.
Specifically, in this embodiment, the feature keywords are key values, the feature parameter values are the Value values corresponding to the keys, and the rule nodes are Node nodes.
The specific steps of filtering out the several candidate rule nodes that match the user integrated information are shown in steps S411 and S412: if the Value of every feature in a Node node matches the Value of the same feature in the user integrated information, that Node node is taken as a candidate rule node. If a feature's value in a Node node is All, then any Value for the corresponding feature in the user information is considered a match. The user integrated information usually contains more features than a Node node requires; the surplus features are not used in the judging and filtering.
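The filtering rule just described can be sketched as a small predicate. This is an illustrative Python sketch, not the patent's implementation; the function name `is_candidate` and dict representation are assumptions, but the semantics follow the text: every preset key/Value pair must match, `All` is a wildcard, and surplus user features are ignored:

```python
def is_candidate(node_conditions, user_info):
    """True if user_info fully satisfies the node's preset conditions."""
    for key, value in node_conditions.items():
        if value == "All":
            continue  # wildcard: any value of this feature matches
        if user_info.get(key) != value:
            return False  # necessary condition not fully met
    return True  # surplus features in user_info are simply ignored

user_info = {"type": "face", "gender": "masculine", "age": "53"}
ok = is_candidate({"type": "face", "gender": "All"}, user_info)        # True
bad = is_candidate({"type": "face", "gender": "feminine"}, user_info)  # False
```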
While candidate Node nodes are being matched, the nodes selected as candidates are combined, according to their Priority values, into one complete voice prompt. The method decomposes a complete voice interaction content into different sentence segments; each sentence segment is one part of a complete voice prompt. The Priority value in each Node node indicates the position of its sentence segment within the complete sentence.
Put simply, a complete sentence can be divided into multiple sentence segments, such as an address segment, a greeting segment, a content segment, and a question segment. The present invention does not limit the number of divisions; the user may segment freely according to sentence completeness. Each segment combines freely with the next, so the complete sentence that is finally assembled becomes very flexible. For multiple candidate nodes at the same position, the method selects one qualifying Node node at random.
For example, suppose the voice interaction content to be generated comprises an address segment, a greeting segment, and a content segment. The address segment has two Node nodes with the same priority value, "Elder" and "Uncle". The greeting segment has three Node nodes with the same priority value, "Hello!", "Good morning!", and "Good day!". The content segment has one Node node, "You are in good health." For the address and greeting segments, one of the contents is randomly chosen to take part in the voice message. The voice content therefore has 6 possible combinations: under the same conditions, the content of the robot's voice interaction keeps changing rather than becoming rigid and harming the user experience.
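The segment assembly above can be sketched in a few lines. This is a hypothetical Python illustration (the function name `assemble_prompt` and the tuple representation are assumptions); candidate texts are grouped by priority value, the groups are visited in sorted order, and one text is chosen at random within each group, yielding the 2 x 3 x 1 = 6 combinations of the example:

```python
import random

def assemble_prompt(candidates, rng=random):
    """candidates: list of (priority, text); one text is chosen per priority."""
    by_priority = {}
    for priority, text in candidates:
        by_priority.setdefault(priority, []).append(text)
    # visit segments in position order, picking randomly among equals
    return "".join(rng.choice(by_priority[p]) for p in sorted(by_priority))

candidates = [
    (1, "Elder, "), (1, "Uncle, "),                          # address segment
    (2, "hello! "), (2, "good morning! "), (2, "good day! "),  # greeting segment
    (3, "You are in good health."),                          # content segment
]
prompt = assemble_prompt(candidates)  # one of 6 possible prompts
```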
In some Node nodes, multiple Item options (i.e., contents the robot can execute) can be preset. Among the multiple Item options in a Node node, the final result satisfying the condition is selected according to the node's Key values. If the same Key value corresponds to multiple Item options, the method selects one result, sequentially or at random, and outputs it as the final result.
The interaction content thus depends both on the combination of candidate rule nodes and on the Item option selected within each candidate rule node. Under the same scene, only one candidate rule node of each priority value is randomly chosen to participate in the ordering, and within each candidate rule node one of a key's multiple Item options is chosen at random, by weighted random selection, or in a fixed order. The number of possible combinations is therefore very large: even in the same scene, the interaction content differs from one interaction to the next, and the pattern of interaction is never rigidly fixed.
For example, when greeting, a key value may correspond to three action Item options: shake hands, wave, and salute. In this case, any one of the Items can be selected for output, so the robot's greeting does not become dull through an overly uniform action. Likewise, the greeting language can be set to multiple variants, each assigned to a different Item option; when an Item is chosen at random, the output language also varies, and the interaction content appears diversified.
Motion content and multimedia content exist by being attached to specific speech sentences. By defining the actions and multimedia content appended after each specific sentence, the corresponding actions and multimedia content are generated along with the active feedback content once its combination is complete. The action and multimedia content are determined by the appended content of the last sentence component. Therefore, different contents and sentences of different lengths can produce prompts with different actions and multimedia content. With flexible use of voice scripts and resource scripts, the feedback information generated by this method varies correspondingly as the input user integrated information varies.
Through face recognition, combined with personal information on the Internet, certain personal information of the user can be obtained, such as name, age, gender, and other basic information. This embodiment gives an example to illustrate the interaction system of the present invention.
For example, when the robot recognizes a crying little boy in front of it, the acquired user integrated information may include: male; 8 years old; name; facial expression crying; current time 10 a.m.; current weather 2 degrees Celsius; current location a hospital; air quality good. After assignment, these form a word string for the system to use. According to the user integrated information, the robot finds the candidate rule nodes satisfying the matching condition among the rule nodes, sorts them by priority value, and combines the interaction resources corresponding to the candidate rule nodes in order. After recognizing the little boy, the robot can actively greet him. When greeting, the Item values corresponding to a key in a Node node may be many (for example, the language contents "Hello, kid!", "Hello, little handsome boy!", and "Hi, kid", and the motion contents wave, shake hands, bow, and salute); the robot only needs to take one of the language contents or motion contents as the interaction content. Therefore, the interaction content output each time keeps changing and never becomes dull.
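The crying-boy walkthrough can be tied together in one end-to-end sketch. This is a hypothetical Python illustration of the pipeline (match a rule node against the user integrated information, then pick one speech and one action Item at random); the node contents mirror the example above, while the function `react` and the data layout are assumptions:

```python
import random

user_info = {"gender": "masculine", "age": "8", "emotion": "cry",
             "time": "10.00", "place": "hospital"}

# Hypothetical rule nodes: preset conditions plus Item options.
rule_nodes = [
    {"conditions": {"emotion": "cry", "place": "hospital"},
     "speech": ["Hello, kid!", "Hello, little handsome boy!", "Hi, kid"],
     "action": ["wave", "shake hands", "bow", "salute"]},
    {"conditions": {"emotion": "smile"},
     "speech": ["You look happy today!"],
     "action": ["wave"]},
]

def react(user_info, nodes, rng=random):
    """Return (speech, action) from the first fully matching node."""
    for node in nodes:
        if all(user_info.get(k) == v for k, v in node["conditions"].items()):
            return rng.choice(node["speech"]), rng.choice(node["action"])
    return None  # no node matched: no active interaction

speech, action = react(user_info, rule_nodes)
```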
As shown in Figure 4, the present invention provides an embodiment of an interactive robot, comprising:
an information acquisition module 2 for acquiring current user information and environment information;
a user integrated information module 3, electrically connected with the information acquisition module 2, for generating user integrated information according to the current user information and environment information;
a processing module 4, electrically connected with the user integrated information module 3, for generating interaction content according to the user integrated information and the application scenario;
an interaction module 5 for carrying out active human-computer interaction according to the interaction content.
Preferably, the interactive robot further comprises a scene setting module 1 for setting, according to the robot's current environment information, the application scenario of the robot matched to that environment information, the application scenario including the robot's interaction rules and the interaction resources used during interaction.
Preferably, the scene setting module 1 also stores the preset interaction rules of the robot and the interaction resources the robot requires when interacting under those rules, and takes the interaction rules and the application resources as the application scenario. The interaction rules include multiple rule nodes, each containing multiple default feature keywords and corresponding default feature parameter values.
Preferably, the user integrated information module 3 also assigns to each user feature in the current user information a corresponding user-feature keyword and user-feature parameter value, and to each environment feature in the environment information a corresponding environment-feature keyword and environment-feature parameter value. The user integrated information module 3 further combines the user-feature keywords with their corresponding user-feature parameter values, and the environment-feature keywords with their corresponding environment-feature parameter values, into a word string, assigns the word string a corresponding word string identifier, and takes the word string with its identifier as the current user integrated information.
Preferably, the processing module 4 specifically includes: a matching submodule 41 for filtering out, within the application scenario, several candidate rule nodes that match the user integrated information; and an interaction content generation submodule 42 for generating the interaction content from the interaction resources corresponding to the several candidate rule nodes. The interaction content includes voice interaction content, action interaction content, and multimedia interaction content.
Preferably, the matching submodule 41 also judges, one by one, whether all the default feature keywords and corresponding default feature parameter values in each rule node are identical to some of the feature keywords and corresponding feature parameter values in the user integrated information; the feature keywords include user-feature keywords and environment-feature keywords, and the feature parameter values include user-feature parameter values and environment-feature parameter values. If so, the rule node satisfying the condition is taken as a candidate rule node.
Preferably, the interaction content generation submodule 42 also analyzes the respective priority values of the several candidate rule nodes and sorts the candidate rule nodes by priority value; for multiple candidate rule nodes with the same priority value, it chooses one of them to participate in the ordering, and it combines, in order, the interaction resources corresponding to the sorted candidate rule nodes to generate the interaction content.
Specifically, the information acquisition module 2 serves two major purposes. The first is identifying the current user to obtain user resources: for example, when a user is detected within a preset range, identification of the user begins, mainly of the user's facial features. Through face recognition technology, the current user's expression can be identified, and, combined with Internet big data, some of the user's basic data can be obtained. The second is obtaining, from the current robot system, environment information such as the robot's current location, the time, and the weather.
The processing module 4 can consist of the robot's processor, and the interaction module 5 includes the voice control system, display system, drive system, and so on used during the robot's interaction. The robot's interaction process may refer to the method embodiments above and is not repeated here.
It should be noted that the above embodiments can be freely combined as needed. The above are only preferred embodiments of the present invention. It should be pointed out that those of ordinary skill in the art can make several improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as within the protection scope of the present invention.
Claims (10)
1. A human-computer interaction method, characterized by comprising the steps of:
S1: when the robot detects a user requiring active interaction, acquiring user integrated information, the user integrated information including the user's personal information and the current robot system's environment information;
S2: generating active interaction content matching the user integrated information;
S3: actively interacting with the user according to the active interaction content.
2. The human-computer interaction method according to claim 1, characterized by comprising, before step S1, the step of:
S0: according to the robot's current environment information, setting the application scenario of the robot matched to the environment information, the application scenario including the robot's interaction rules and the interaction resources used during interaction.
3. The human-computer interaction method according to claim 2, characterized in that step S2 specifically comprises:
S20: obtaining, according to the interaction rules, the interaction resources matching the user integrated information, so as to generate the active interaction content.
4. The human-computer interaction method according to claim 3, characterized in that:
the interaction resources include voice content, motion content, or multimedia content; the interaction rules include multiple rule nodes, each rule node embodying the mapping relation between different user integrated information and different interaction resources.
5. The human-computer interaction method according to any one of claims 1-4, characterized in that:
the user's personal information includes: gender, age, expression, face angle, face spatial position, face occurrence count, user name, number of faces currently detected, and voice information;
the current robot system's environment information includes: time, location, temperature, weather, network connection state, and system language.
6. An interactive robot, characterized by comprising:
an information acquisition module for acquiring user integrated information when the robot detects a user requiring active interaction, the user integrated information including the user's personal information and the current robot system's environment information;
a processing module, electrically connected with the information acquisition module, for generating active interaction content matching the user integrated information;
an interaction module for actively interacting with the user according to the active interaction content.
7. The interactive robot according to claim 6, characterized by further comprising:
a scene setting module for setting, according to the robot's current environment information, the application scenario of the robot matched to the environment information, the application scenario including the robot's interaction rules and the interaction resources used during interaction.
8. The interactive robot according to claim 7, characterized in that:
the processing module is further configured to obtain, according to the interaction rules, the interaction resources matching the user integrated information, so as to generate the active interaction content.
9. The interactive robot according to claim 8, characterized in that:
the interaction resources include voice content, motion content, or multimedia content; the interaction rules include multiple rule nodes, each rule node embodying the mapping relation between different user integrated information and different interaction resources.
10. The interactive robot according to any one of claims 6-9, characterized in that:
the current user's personal information includes: gender, age, expression, face angle, face spatial position, face occurrence count, user name, number of faces currently detected, and voice information;
the current robot system's environment information includes: time, location, temperature, weather, network connection state, and system language.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810193098.5A CN108363492B (en) | 2018-03-09 | 2018-03-09 | Man-machine interaction method and interaction robot |
PCT/CN2018/106780 WO2019169854A1 (en) | 2018-03-09 | 2018-09-20 | Human-computer interaction method, and interactive robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810193098.5A CN108363492B (en) | 2018-03-09 | 2018-03-09 | Man-machine interaction method and interaction robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108363492A true CN108363492A (en) | 2018-08-03 |
CN108363492B CN108363492B (en) | 2021-06-25 |
Family
ID=63003702
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810193098.5A Active CN108363492B (en) | 2018-03-09 | 2018-03-09 | Man-machine interaction method and interaction robot |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108363492B (en) |
WO (1) | WO2019169854A1 (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109409063A (en) * | 2018-10-10 | 2019-03-01 | 北京小鱼在家科技有限公司 | A kind of information interacting method, device, computer equipment and storage medium |
CN110097400A (en) * | 2019-04-29 | 2019-08-06 | 贵州小爱机器人科技有限公司 | Information recommendation method, apparatus and system, storage medium, intelligent interaction device |
CN110154048A (en) * | 2019-02-21 | 2019-08-23 | 北京格元智博科技有限公司 | Control method, control device and the robot of robot |
WO2019169854A1 (en) * | 2018-03-09 | 2019-09-12 | 南京阿凡达机器人科技有限公司 | Human-computer interaction method, and interactive robot |
CN110716634A (en) * | 2019-08-28 | 2020-01-21 | 北京市商汤科技开发有限公司 | Interaction method, device, equipment and display equipment |
CN111176503A (en) * | 2019-12-16 | 2020-05-19 | 珠海格力电器股份有限公司 | Interactive system setting method and device and storage medium |
WO2020103455A1 (en) * | 2018-11-22 | 2020-05-28 | 广州小鹏汽车科技有限公司 | Weather information-based smart greeting method, system, storage medium and vehicle |
CN111327772A (en) * | 2020-02-25 | 2020-06-23 | 广州腾讯科技有限公司 | Method, device, equipment and storage medium for automatic voice response processing |
CN111428637A (en) * | 2020-03-24 | 2020-07-17 | 新石器慧通(北京)科技有限公司 | Method for actively initiating human-computer interaction by unmanned vehicle and unmanned vehicle |
CN111949773A (en) * | 2019-05-17 | 2020-11-17 | 华为技术有限公司 | Reading equipment, server and data processing method |
CN113147771A (en) * | 2021-05-10 | 2021-07-23 | 前海七剑科技(深圳)有限公司 | Active interaction method and device based on vehicle-mounted virtual robot |
WO2023098090A1 (en) * | 2021-11-30 | 2023-06-08 | 达闼机器人股份有限公司 | Smart device control method and apparatus, server, and storage medium |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112527095A (en) * | 2019-09-18 | 2021-03-19 | 奇酷互联网络科技(深圳)有限公司 | Man-machine interaction method, electronic device and computer storage medium |
CN111993438A (en) * | 2020-08-26 | 2020-11-27 | 陕西工业职业技术学院 | Intelligent robot |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105598972A (en) * | 2016-02-04 | 2016-05-25 | 北京光年无限科技有限公司 | Robot system and interactive method |
CN106297789A (en) * | 2016-08-19 | 2017-01-04 | 北京光年无限科技有限公司 | The personalized interaction method of intelligent robot and interactive system |
CN106462255A (en) * | 2016-06-29 | 2017-02-22 | 深圳狗尾草智能科技有限公司 | A method, system and robot for generating interactive content of robot |
CN106462254A (en) * | 2016-06-29 | 2017-02-22 | 深圳狗尾草智能科技有限公司 | Robot interaction content generation method, system and robot |
CN106489114A (en) * | 2016-06-29 | 2017-03-08 | 深圳狗尾草智能科技有限公司 | A kind of generation method of robot interactive content, system and robot |
CN106537294A (en) * | 2016-06-29 | 2017-03-22 | 深圳狗尾草智能科技有限公司 | Method, system and robot for generating interactive content of robot |
CN106537293A (en) * | 2016-06-29 | 2017-03-22 | 深圳狗尾草智能科技有限公司 | Method and system for generating robot interactive content, and robot |
CN106625711A (en) * | 2016-12-30 | 2017-05-10 | 华南智能机器人创新研究院 | Method for positioning intelligent interaction of robot |
CN106843463A (en) * | 2016-12-16 | 2017-06-13 | 北京光年无限科技有限公司 | A kind of interactive output intent for robot |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105701211A (en) * | 2016-01-13 | 2016-06-22 | 北京光年无限科技有限公司 | Question-answering system-oriented active interaction data processing method and system |
CN106774845B (en) * | 2016-11-24 | 2020-01-31 | 北京儒博科技有限公司 | intelligent interaction method, device and terminal equipment |
CN107045587A (en) * | 2016-12-30 | 2017-08-15 | 北京光年无限科技有限公司 | A kind of interaction output intent and robot for robot |
CN108363492B (en) * | 2018-03-09 | 2021-06-25 | 南京阿凡达机器人科技有限公司 | Man-machine interaction method and interaction robot |
2018
- 2018-03-09 CN CN201810193098.5A patent/CN108363492B/en active Active
- 2018-09-20 WO PCT/CN2018/106780 patent/WO2019169854A1/en active Application Filing
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105598972A (en) * | 2016-02-04 | 2016-05-25 | 北京光年无限科技有限公司 | Robot system and interactive method |
CN106462255A (en) * | 2016-06-29 | 2017-02-22 | 深圳狗尾草智能科技有限公司 | A method, system and robot for generating interactive content of robot |
CN106462254A (en) * | 2016-06-29 | 2017-02-22 | 深圳狗尾草智能科技有限公司 | Robot interaction content generation method, system and robot |
CN106489114A (en) * | 2016-06-29 | 2017-03-08 | 深圳狗尾草智能科技有限公司 | A kind of generation method of robot interactive content, system and robot |
CN106537294A (en) * | 2016-06-29 | 2017-03-22 | 深圳狗尾草智能科技有限公司 | Method, system and robot for generating interactive content of robot |
CN106537293A (en) * | 2016-06-29 | 2017-03-22 | 深圳狗尾草智能科技有限公司 | Method and system for generating robot interactive content, and robot |
CN106297789A (en) * | 2016-08-19 | 2017-01-04 | 北京光年无限科技有限公司 | The personalized interaction method of intelligent robot and interactive system |
CN106843463A (en) * | 2016-12-16 | 2017-06-13 | 北京光年无限科技有限公司 | A kind of interactive output intent for robot |
CN106625711A (en) * | 2016-12-30 | 2017-05-10 | 华南智能机器人创新研究院 | Method for positioning intelligent interaction of robot |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019169854A1 (en) * | 2018-03-09 | 2019-09-12 | 南京阿凡达机器人科技有限公司 | Human-computer interaction method, and interactive robot |
CN109409063A (en) * | 2018-10-10 | 2019-03-01 | 北京小鱼在家科技有限公司 | A kind of information interacting method, device, computer equipment and storage medium |
WO2020103455A1 (en) * | 2018-11-22 | 2020-05-28 | 广州小鹏汽车科技有限公司 | Weather information-based smart greeting method, system, storage medium and vehicle |
CN110154048A (en) * | 2019-02-21 | 2019-08-23 | 北京格元智博科技有限公司 | Control method, control device and the robot of robot |
CN110097400A (en) * | 2019-04-29 | 2019-08-06 | 贵州小爱机器人科技有限公司 | Information recommendation method, apparatus and system, storage medium, intelligent interaction device |
CN111949773A (en) * | 2019-05-17 | 2020-11-17 | 华为技术有限公司 | Reading equipment, server and data processing method |
CN110716634A (en) * | 2019-08-28 | 2020-01-21 | 北京市商汤科技开发有限公司 | Interaction method, device, equipment and display equipment |
CN111176503A (en) * | 2019-12-16 | 2020-05-19 | 珠海格力电器股份有限公司 | Interactive system setting method and device and storage medium |
CN111327772A (en) * | 2020-02-25 | 2020-06-23 | 广州腾讯科技有限公司 | Method, device, equipment and storage medium for automatic voice response processing |
CN111327772B (en) * | 2020-02-25 | 2021-09-17 | 广州腾讯科技有限公司 | Method, device, equipment and storage medium for automatic voice response processing |
CN111428637A (en) * | 2020-03-24 | 2020-07-17 | 新石器慧通(北京)科技有限公司 | Method for actively initiating human-computer interaction by unmanned vehicle and unmanned vehicle |
CN113147771A (en) * | 2021-05-10 | 2021-07-23 | 前海七剑科技(深圳)有限公司 | Active interaction method and device based on vehicle-mounted virtual robot |
WO2023098090A1 (en) * | 2021-11-30 | 2023-06-08 | 达闼机器人股份有限公司 | Smart device control method and apparatus, server, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN108363492B (en) | 2021-06-25 |
WO2019169854A1 (en) | 2019-09-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108363492A (en) | A kind of man-machine interaction method and interactive robot | |
AU2020239704B2 (en) | Emotion type classification for interactive dialog system | |
US20220148271A1 (en) | Immersive story creation | |
KR102306624B1 (en) | Persistent companion device configuration and deployment platform | |
US11222632B2 (en) | System and method for intelligent initiation of a man-machine dialogue based on multi-modal sensory inputs | |
US20220020360A1 (en) | System and method for dialogue management | |
US20170206064A1 (en) | Persistent companion device configuration and deployment platform | |
CN105917404B (en) | For realizing the method, apparatus and system of personal digital assistant | |
US20190206402A1 (en) | System and Method for Artificial Intelligence Driven Automated Companion | |
JP2019523714A (en) | Multi-interaction personality robot | |
US20160193732A1 (en) | Engaging in human-based social interaction with members of a group using a persistent companion device | |
US11003860B2 (en) | System and method for learning preferences in dialogue personalization | |
CN110070879A (en) | A method of intelligent expression and phonoreception game are made based on change of voice technology | |
JP2018014094A (en) | Virtual robot interaction method, system, and robot | |
KR20020071917A (en) | User interface/entertainment device that simulates personal interaction and charges external database with relevant data | |
JPWO2005093650A1 (en) | Will expression model device, psychological effect program, will expression simulation method | |
US20230009454A1 (en) | Digital character with dynamic interactive behavior | |
Corradini et al. | Animating an interactive conversational character for an educational game system | |
JP2001249949A (en) | Feeling generation method, feeling generator and recording medium | |
Sommerer et al. | Interface cultures: artistic aspects of interaction | |
WO2018183812A1 (en) | Persistent companion device configuration and deployment platform | |
CN112138410B (en) | Interaction method of virtual objects and related device | |
Hacker et al. | Incorporating intentional and emotional behaviors into a virtual human for better customer-engineer-interaction | |
KR20230099936A (en) | A dialogue friends porviding system based on ai dialogue model | |
Höök et al. | and WP9 members |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |