CN106462255A - A method, system and robot for generating interactive content of robot - Google Patents
- Publication number
- CN106462255A, CN201680001745.7A, CN201680001745A
- Authority
- CN
- China
- Prior art keywords
- robot
- signal
- variable element
- parameter
- conjunction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Manipulator (AREA)
- Toys (AREA)
Abstract
The present invention provides a method for generating robot interactive content, comprising: acquiring a multi-modal signal; determining a user's intention from the multi-modal signal; and generating robot interactive content from the multi-modal signal and the user's intention, in combination with the robot's current variable parameters. By adding the robot's variable parameters to the generation of interactive content, the robot can generate content that reflects its earlier variable parameters, making it more anthropomorphic when interacting with people and giving it a human-like way of life along its life-time axis. The method enhances the anthropomorphism of the generated interactive content, improves the human-computer interaction experience, and increases intelligence.
Description
Technical field
The present invention relates to the technical field of robot interaction, and more particularly to a method, system, and robot for generating robot interactive content.
Background technology
When humans interact, an expression is usually produced after the eyes see or the ears hear something: the brain analyzes the input and feeds back a reasonable expression. People also follow the living scenes on the time axis of a day, such as eating, sleeping, and exercising, and changes in those scene values affect the feedback in human expressions. For robots, by contrast, expressive feedback is currently produced mainly through pre-designed rules and corpora trained with deep learning. Expression feedback trained from pre-designed programs and corpora has the following drawback: the output expression depends on text supplied by humans, so the robot behaves like a question-and-answer machine in which different user utterances trigger different expressions. In this case the robot's expressive output is still driven by pre-designed human interaction patterns, so the robot cannot be made more anthropomorphic and cannot produce expressive feedback the way humans do, based on the number of interactions, the interaction behavior, the degree of intimacy, and so on. Generating such expressions requires a large amount of human-robot interaction, which makes the robot far less intelligent.
Therefore, how to propose an expression generation method based on multi-modal input and actively interacting variable parameters, capable of enhancing the anthropomorphism of the robot's generated interactive content, is a technical problem urgently awaiting a solution in this field.
Content of the invention
It is an object of the present invention to provide a method, system, and robot for generating robot interactive content that, based on multi-modal input and actively interacting variable parameters, can enhance the anthropomorphism of the generated interactive content, improve the human-computer interaction experience, and increase intelligence.
The purpose of the present invention is achieved through the following technical solutions:
A method for generating robot interactive content, comprising:
acquiring a multi-modal signal;
determining a user intention according to the multi-modal signal;
generating robot interactive content according to the multi-modal signal and the user intention, in combination with the robot's current variable parameters.
Preferably, the method for generating the robot's variable parameters comprises:
fitting the parameters of the robot's self-cognition to the parameters of the scenes in the variable parameters, thereby generating the robot's variable parameters.
Preferably, the variable parameters at least include the user's original behavior, the behavior after the change, and parameter values representing the original behavior and the changed behavior.
Preferably, the step of generating robot interactive content according to the multi-modal signal and the user intention, in combination with the robot's current variable parameters, further comprises: generating robot interactive content according to the user intention and the multi-modal signal, in combination with a fitted curve of the robot's current variable parameters and parameter-change probabilities.
Preferably, the method for generating the fitted curve of parameter-change probabilities comprises: using a probabilistic algorithm, making a probability estimate over the network of parameters between robots, and computing the probability of each parameter change after a scene parameter on the robot's life-time axis changes, thereby forming the fitted curve of parameter-change probabilities.
Preferably, the multi-modal signal at least includes an image signal, and the step of generating robot interactive content according to the multi-modal signal and the user intention, in combination with the robot's current variable parameters, specifically comprises:
generating robot interactive content according to the image signal and the user intention, in combination with the robot's current variable parameters.
Preferably, the multi-modal signal at least includes a voice signal, and the step of generating robot interactive content according to the multi-modal signal and the user intention, in combination with the robot's current variable parameters, specifically comprises:
generating robot interactive content according to the voice signal and the user intention, in combination with the robot's current variable parameters.
Preferably, the multi-modal signal at least includes a gesture signal, and the step of generating robot interactive content according to the multi-modal signal and the user intention, in combination with the robot's current variable parameters, specifically comprises:
generating robot interactive content according to the gesture signal and the user intention, in combination with the robot's current variable parameters.
The present invention further provides a system for generating robot interactive content, characterized in that it comprises:
an acquisition module for acquiring a multi-modal signal;
an intention recognition module for determining a user intention according to the multi-modal signal;
a content generation module for generating robot interactive content according to the multi-modal signal and the user intention, in combination with the robot's current variable parameters.
Preferably, the system includes a time-axis-based artificial intelligence cloud processing module for:
fitting the parameters of the robot's self-cognition to the parameters of the scenes in the variable parameters, thereby generating the robot's variable parameters.
Preferably, the variable parameters at least include the user's original behavior, the behavior after the change, and parameter values representing the original behavior and the changed behavior.
Preferably, the time-axis-based artificial intelligence cloud processing module is further used for: generating robot interactive content according to the user intention and the multi-modal signal, in combination with a fitted curve of the robot's current variable parameters and parameter-change probabilities.
Preferably, the time-axis-based artificial intelligence cloud processing module is further used for: using a probabilistic algorithm to compute the probability of each parameter change after a time-axis scene parameter changes for the robot on the life-time axis, thereby forming the fitted curve.
Preferably, the multi-modal signal at least includes an image signal, and the content generation module is specifically used for: generating robot interactive content according to the image signal and the user intention, in combination with the robot's current variable parameters.
Preferably, the multi-modal signal at least includes a voice signal, and the content generation module is specifically used for: generating robot interactive content according to the voice signal and the user intention, in combination with the robot's current variable parameters.
Preferably, the multi-modal signal at least includes a gesture signal, and the content generation module is specifically used for: generating robot interactive content according to the gesture signal and the user intention, in combination with the robot's current variable parameters.
The present invention further discloses a robot comprising a system for generating robot interactive content as described in any of the above.
Compared with the prior art, the present invention has the following advantages. Existing methods for generating robot interactive content are generally based on question-and-answer interaction within fixed application scenes and cannot generate the robot's expressions accurately based on the current scene. The method of the present invention comprises: acquiring a multi-modal signal; determining a user intention according to the multi-modal signal; and generating robot interactive content according to the multi-modal signal and the user intention, in combination with the robot's current variable parameters. In this way, robot interactive content can be generated more accurately from multi-modal signals such as image and voice signals combined with the robot's variable parameters, so that the robot interacts and communicates with people more accurately and more anthropomorphically. A variable parameter is a parameter the user actively controls during human-robot interaction, for example making the robot exercise or converse. The present invention adds the robot's variable parameters to the generation of interactive content, so that when generating content the robot can take its earlier variable parameters into account. For example, if a variable parameter records that the robot has already exercised for an hour, then when the user orders it to clean, the robot will say that it is tired and refuse. This makes the robot more anthropomorphic when interacting with people, giving it a human-like way of life along its life-time axis. The method enhances the anthropomorphism of the generated interactive content, improves the human-computer interaction experience, and increases intelligence.
Brief description of the drawings
Fig. 1 is a flowchart of a method for generating robot interactive content according to Embodiment 1 of the present invention;
Fig. 2 is a schematic diagram of a system for generating robot interactive content according to Embodiment 2 of the present invention.
Specific embodiment
Although the flowcharts describe operations as a sequential process, many of the operations can be performed in parallel, concurrently, or simultaneously. The order of the operations can be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not shown in the drawings. A process may correspond to a method, function, procedure, subroutine, subprogram, and so on.
Computer equipment includes user equipment and network equipment. User equipment or clients include, but are not limited to, computers, smartphones, and PDAs; network equipment includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of computers or network servers based on cloud computing. The computer equipment can operate alone to realize the present invention, or can access a network and realize the present invention through interactive operation with other computer equipment in the network. The network in which the computer equipment resides includes, but is not limited to, the Internet, wide area networks, metropolitan area networks, local area networks, and VPNs.
The terms "first", "second", and so on may be used here to describe units, but the units should not be limited by these terms; the terms are used only to distinguish one unit from another. The term "and/or" as used here includes any and all combinations of one or more of the listed associated items. When a unit is referred to as being "connected" or "coupled" to another unit, it can be directly connected or coupled to the other unit, or intermediate units may be present.
The terminology used here is for the purpose of describing specific embodiments only and is not intended to limit the exemplary embodiments. Unless the context clearly indicates otherwise, the singular forms "a" and "an" used here are intended to include the plural as well. It should also be understood that the terms "include" and/or "comprise" as used here specify the presence of the stated features, integers, steps, operations, units, and/or components, and do not preclude the presence or addition of one or more other features, integers, steps, operations, units, components, and/or combinations thereof.
The invention will be further described below with preferred embodiments in conjunction with the accompanying drawings.
Embodiment one
As shown in Fig. 1, the present embodiment discloses a method for generating robot interactive content, comprising:
S101: acquiring a multi-modal signal;
S102: determining a user intention according to the multi-modal signal;
S103: generating robot interactive content according to the multi-modal signal and the user intention, in combination with the robot's current variable parameters.
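Steps S101–S103 can be sketched as a minimal pipeline. The patent contains no code, so every function name, threshold, and reply string below is an illustrative assumption rather than part of the claimed method:

```python
# Minimal sketch of the three-step method S101-S103 (illustrative names only).

def acquire_multimodal_signal():
    # S101: gather signals from several modalities at once.
    return {"image": "user_smiling.jpg", "voice": "keep singing", "gesture": "wave"}

def determine_user_intent(signal):
    # S102: infer the user intention from the combined signals (placeholder logic).
    return "request_song" if "sing" in signal.get("voice", "") else "unknown"

def generate_interactive_content(signal, intent, variable_params):
    # S103: combine the signal, the intention, and the robot's current
    # variable parameters (e.g. how long it has been active) to pick a reply.
    if variable_params.get("minutes_active", 0) > 60:
        return "I'm tired; let me rest first."
    return "Sure, here is another song!" if intent == "request_song" else "How can I help?"

signal = acquire_multimodal_signal()
intent = determine_user_intent(signal)
reply = generate_interactive_content(signal, intent, {"minutes_active": 75})
print(reply)  # the robot declines because it has been active over an hour
```

The point the sketch illustrates is S103: the same signal and intention can yield different content depending on the robot's current variable parameters.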
In terms of application scenes, existing methods for generating robot interactive content are generally based on question-and-answer interaction within fixed scenes and cannot generate the robot's expressions accurately based on the current scene. The method of the present invention comprises: acquiring a multi-modal signal; determining a user intention according to the multi-modal signal; and generating robot interactive content according to the multi-modal signal and the user intention, in combination with the robot's current variable parameters. In this way, robot interactive content can be generated more accurately from multi-modal signals such as image and voice signals combined with the robot's variable parameters, so that the robot interacts and communicates with people more accurately and more anthropomorphically. A variable parameter is a parameter the user actively controls during human-robot interaction, for example making the robot exercise or converse. The present invention adds the robot's variable parameters to the generation of interactive content, so that when generating content the robot can take its earlier variable parameters into account. For example, if a variable parameter records that the robot has already exercised for an hour, then when the user orders it to clean, the robot will say that it is tired and refuse. This makes the robot more anthropomorphic when interacting with people, giving it a human-like way of life along its life-time axis, enhancing the anthropomorphism of the generated content, improving the human-computer interaction experience, and increasing intelligence. The multi-modal signal is generally a combination of several signals, for example an image signal plus a voice signal, or an image signal plus a voice signal plus a gesture signal. The robot's variable parameters 300 are fitted and provided in advance; specifically, the robot's variable parameters 300 are a set of parameters that is passed to the system to generate the interactive content.
In the present embodiment, a variable parameter specifically captures sudden changes that people cause to the robot. For example, life on the time axis consists of eating, sleeping, interacting, running, eating, sleeping. If the scene is suddenly changed — for example, the user takes the robot to the beach during the time period set aside for running — then these active human changes serve as variable parameters for the robot and cause changes in the robot's self-cognition. The life-time axis and the variable parameters can change attributes in the self-cognition, such as the mood value or the fatigue value, and can also add new self-cognition information automatically; for example, if there was previously no anger value, the scenes based on the life-time axis and the variable factors will automatically simulate human self-cognition scenes as before, thereby extending the robot's self-cognition.
For example, according to the life-time axis, twelve o'clock at noon should be mealtime. If this scene is changed — say the robot goes out shopping at noon — then the robot records this as one of its variable parameters, and within that time period, when the user interacts with the robot, the robot generates interactive content attached to going out shopping at noon rather than to the earlier default of eating at noon. When generating the concrete interactive content, the robot combines the acquired multi-modal signals, such as a combination of voice and image information or a combination with a video signal, with the variable parameters. In this way, unexpected events in human life can be added to the robot's life axis, making the robot's interaction more anthropomorphic. The multi-modal signal is generally a combination of several signals, for example an image signal plus a voice signal, or an image signal plus a voice signal plus a gesture signal.
As another example, the multi-modal signal may include the expression and the textual emotion the robot acquires, which can come from voice input, video input, gesture input, or a combination of these. Suppose the expression input to the robot is happy but the text analysis is unhappy, and the user has repeatedly ordered the robot to exercise. The robot will then refuse the instruction, and the interaction is: "I am very tired at the moment and need to rest."
According to one example, the method for generating the robot's variable parameters comprises: fitting the parameters of the robot's self-cognition to the parameters of the scenes in the variable parameters, thereby generating the robot's variable parameters. By combining the scenes in the variable parameters with the robot, the robot's own self-cognition is extended: the parameters in the self-cognition are fitted to the parameters of the scenes on the variable parameter axis, producing an anthropomorphic effect.
According to one example, the variable parameters at least include the user's original behavior, the behavior after the change, and parameter values representing the original behavior and the changed behavior.
A variable parameter captures the situation in which the user, who was in one state according to the original plan, is suddenly put into another state by an unexpected change. The variable parameter represents the user's state or behavior after the change of behavior or state, together with the change itself. For example, the plan for five o'clock in the afternoon was running, but something else comes up, say playing ball; then the change from running to playing ball is a variable parameter, and the probability of this change is also studied.
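The variable-parameter record described above (original behavior, changed behavior, representative values, and the probability of the change) might be held in a structure like this; all field names and values are assumptions for illustration:

```python
# Sketch of a variable-parameter record: the user's original behavior,
# the behavior after the change, values representing both, and the
# observed probability of that change (illustrative fields only).

from dataclasses import dataclass

@dataclass
class VariableParameter:
    original_behavior: str     # what was planned, e.g. "running" at 5 pm
    changed_behavior: str      # what actually happened, e.g. "playing ball"
    original_value: float      # parameter value of the original behavior
    changed_value: float       # parameter value of the changed behavior
    change_probability: float  # estimated probability of this change

vp = VariableParameter("running", "playing ball", 1.0, 2.0, 0.3)
print(vp.changed_behavior)  # "playing ball"
```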
According to some other examples, the step of generating robot interactive content according to the multi-modal signal and the user intention, in combination with the robot's current variable parameters, further comprises: generating robot interactive content according to the user intention and the multi-modal signal, in combination with a fitted curve of the robot's current variable parameters and parameter-change probabilities. The fitted curve can thus be generated by probability training on the variable parameters and then used to generate robot interactive content.
According to some other examples, the method for generating the fitted curve of parameter-change probabilities comprises: using a probabilistic algorithm, making a probability estimate over the network of parameters between robots, and computing the probability of each parameter change after a scene parameter on the robot's life-time axis changes, thereby forming the fitted curve of parameter-change probabilities. The probabilistic algorithm may be a Bayesian probability algorithm.
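A minimal stand-in for the probability-estimation step is shown below. The patent names a Bayesian network over the parameters; this sketch replaces it with a simple smoothed conditional-frequency estimate, which is an assumption for illustration, not the claimed algorithm:

```python
# Estimate P(parameter changes | scene change) from observed history,
# with Laplace smoothing so unseen events keep a small nonzero probability.

from collections import Counter

def estimate_change_probs(history):
    # history: list of (scene_change, set_of_params_that_changed)
    totals = Counter()
    changes = Counter()
    for scene, changed_params in history:
        totals[scene] += 1
        for p in changed_params:
            changes[(scene, p)] += 1
    # Laplace smoothing: (count + 1) / (total + 2).
    return {key: (changes[key] + 1) / (totals[key[0]] + 2) for key in changes}

history = [
    ("went_shopping", {"mood", "fatigue"}),
    ("went_shopping", {"fatigue"}),
    ("skipped_meal", {"mood"}),
]
probs = estimate_change_probs(history)
print(round(probs[("went_shopping", "fatigue")], 2))  # 0.75 = (2+1)/(2+2)
```

The resulting probabilities are the raw material for the fitted curve the patent describes; a real implementation would fit a curve over many such estimates along the life-time axis.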
By combining the scenes in the variable parameters with the robot, the robot's own self-cognition is extended: the parameters in the self-cognition are fitted to the parameters of the scenes on the variable parameter axis, producing an anthropomorphic effect. At the same time, recognition of the current place and scene is added, so that the robot knows its own geographical position and can change the way interactive content is generated according to the geographical environment it is in. In addition, a Bayesian probability algorithm is used: the parameters between robots are treated as a Bayesian network for probability estimation, and the probability of each parameter change is computed after a time-axis scene parameter on the robot's life-time axis changes, forming a fitted curve that dynamically affects the robot's own self-cognition. This innovative module gives the robot itself a human way of life; as for expressions, changes of expression can be made according to the scene of the place the robot is in.
According to some other examples, the multi-modal signal at least includes an image signal, and the step of generating robot interactive content according to the multi-modal signal and the user intention, in combination with the robot's current variable parameters, specifically comprises:
generating robot interactive content according to the image signal and the user intention, in combination with the robot's current variable parameters. Having the multi-modal signal include at least an image signal lets the robot grasp the user's intention; to understand the intention better, other signals such as voice signals or gesture signals are usually added, which makes it possible to recognize more accurately whether the user really means what is expressed or is merely joking or probing.
According to some other examples, the multi-modal signal at least includes a voice signal, and the step of generating robot interactive content according to the multi-modal signal and the user intention, in combination with the robot's current variable parameters, specifically comprises:
generating robot interactive content according to the voice signal and the user intention, in combination with the robot's current variable parameters.
According to some other examples, the multi-modal signal at least includes a gesture signal, and the step of generating robot interactive content according to the multi-modal signal and the user intention, in combination with the robot's current variable parameters, specifically comprises:
generating robot interactive content according to the gesture signal and the user intention, in combination with the robot's current variable parameters.
For example, suppose the robot has been singing continuously for a while, and the user then tells the robot by voice to keep singing. If the image signal shows that the user is serious, the robot will reply that it is too tired and needs a rest, accompanied by a tired expression. If the image signal shows that the user has a happy face, the robot will reply, "Master, let me rest first and then sing for you again," accompanied by a happy expression. Different replies can thus be generated depending on the multi-modal signals. Voice and image signals together are usually enough to recognize the user's meaning fairly accurately and reply accordingly; adding other signals, such as gesture signals or video signals, makes the recognition more accurate.
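The singing example above can be sketched as a small decision function; the expressions, the threshold, and the reply strings are illustrative assumptions, not values from the patent:

```python
# The same voice request yields different replies depending on the image
# signal (the user's expression) and the robot's variable parameter
# (how long it has already been singing).

def choose_reply(voice, expression, minutes_singing):
    if voice == "keep singing" and minutes_singing > 30:
        if expression == "serious":
            return ("I'm too tired, let me rest.", "tired face")
        if expression == "happy":
            return ("Master, let me rest first and then sing for you again.",
                    "happy face")
    return ("Okay!", "neutral face")

print(choose_reply("keep singing", "serious", 45))
print(choose_reply("keep singing", "happy", 45))
```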
Embodiment two
As shown in Fig. 2, the present embodiment discloses a system for generating robot interactive content, characterized in that it comprises:
an acquisition module 201 for acquiring a multi-modal signal;
an intention recognition module 202 for determining a user intention according to the multi-modal signal;
a content generation module 203 for generating robot interactive content according to the multi-modal signal and the user intention, in combination with the robot's current variable parameters.
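The three claimed modules can be wired together as plain classes. This is a minimal sketch under the assumption that each module exposes a single method; the class and method names are illustrative, not from the patent:

```python
# Sketch of the acquisition, intention recognition, and content
# generation modules composed into one system.

class AcquisitionModule:
    def acquire(self):
        return {"voice": "hello", "image": "smiling"}

class IntentionRecognitionModule:
    def recognize(self, signal):
        return "greeting" if signal.get("voice") == "hello" else "unknown"

class ContentGenerationModule:
    def generate(self, signal, intent, variable_params):
        # The variable parameters can override the default reply.
        if variable_params.get("fatigue", 0) > 0.8:
            return "I need to rest."
        return "Hello there!" if intent == "greeting" else "Hmm?"

class RobotInteractionSystem:
    def __init__(self):
        self.acq = AcquisitionModule()
        self.intent = IntentionRecognitionModule()
        self.gen = ContentGenerationModule()

    def step(self, variable_params):
        signal = self.acq.acquire()
        intent = self.intent.recognize(signal)
        return self.gen.generate(signal, intent, variable_params)

print(RobotInteractionSystem().step({"fatigue": 0.2}))  # "Hello there!"
```

The design choice mirrors the claims: each module has a single responsibility, and only the content generation module consults the variable parameters.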
In this way, robot interactive content can be generated more accurately from multi-modal signals such as image and voice signals combined with the robot's variable parameters, so that the robot interacts and communicates with people more accurately and more anthropomorphically. A variable parameter is a parameter the user actively controls during human-robot interaction, for example making the robot exercise or converse. The present invention adds the robot's variable parameters to the generation of interactive content, so that when generating content the robot can take its earlier variable parameters into account. For example, if a variable parameter records that the robot has already exercised for an hour, then when the user orders it to clean, the robot will say that it is tired and refuse. This makes the robot more anthropomorphic when interacting with people, giving it a human-like way of life along its life-time axis, enhancing the anthropomorphism of the generated content, improving the human-computer interaction experience, and increasing intelligence. The multi-modal signal is generally a combination of several signals, for example an image signal plus a voice signal, or an image signal plus a voice signal plus a gesture signal.
The multi-modal information in the present embodiment can be one or several of the following: the user's expression, voice information, gesture information, scene information, image information, video information, face information, pupil and iris information, light-sensing information, and fingerprint information. In the present embodiment, an image signal plus a voice signal plus a gesture signal is preferred, since this makes recognition accurate and efficient.
For example, a variable parameter can record something the robot did during a preset time period; say the robot spent the last time period talking with the user for an hour. If the user then expresses, through the multi-modal signal, the intention to keep talking, the robot may say that it is tired and needs a rest, accompanied by tired state content such as a tired expression. If the multi-modal signal shows that the user is joking, the robot may say, "Stop teasing me," accompanied by a happy expression. The multi-modal signal is generally a combination of several signals, for example an image signal plus a voice signal, or an image signal plus a voice signal plus a gesture signal.
According to one example, the system includes a time-axis-based artificial intelligence cloud processing module for: fitting the parameters of the robot's self-cognition to the parameters of the scenes in the variable parameters, thereby generating the robot's variable parameters.
By combining the scenes in the variable parameters with the robot, the robot's own self-cognition is extended: the parameters in the self-cognition are fitted to the parameters of the scenes on the variable parameter axis, producing an anthropomorphic effect.
According to one example, the variable parameters at least include the user's original behavior, the behavior after the change, and parameter values representing the original behavior and the changed behavior.
A variable parameter captures the situation in which the user, who was in one state according to the original plan, is suddenly put into another state by an unexpected change. The variable parameter represents the user's state or behavior after the change of behavior or state, together with the change itself. For example, the plan for five o'clock in the afternoon was running, but something else comes up, say playing ball; then the change from running to playing ball is a variable parameter, and the probability of this change is also studied.
According to some other examples, the time-axis-based artificial intelligence cloud processing module is further used for: generating robot interactive content according to the user intention and the multi-modal signal, in combination with a fitted curve of the robot's current variable parameters and parameter-change probabilities. The fitted curve can thus be generated by probability training on the variable parameters and then used to generate robot interactive content.
According to another example, the time-axis and artificial-intelligence cloud processing module is further configured to: use a probability algorithm to calculate, for the robot on the life-time axis, the probability of each parameter change after the time-axis scene parameters change, and form the fitted curve.
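One plausible reading of this probability step is to estimate change probabilities from observed scene/behavior transitions and treat the estimates as sample points of the fitted curve. This is a sketch under assumptions: the observation data, all names, and the counting estimator are invented; the patent itself only names "a probability algorithm".

```python
from collections import Counter, defaultdict

# Invented toy observations: (scene on the time axis, behavior that occurred).
observations = [
    ("5 p.m.", "run"), ("5 p.m.", "run"), ("5 p.m.", "play ball"),
    ("evening", "sing"), ("evening", "rest"),
]

counts = defaultdict(Counter)
for scene, behavior in observations:
    counts[scene][behavior] += 1

def change_probability(scene, behavior):
    """Estimate P(behavior | scene) from the observed counts."""
    total = sum(counts[scene].values())
    return counts[scene][behavior] / total if total else 0.0

# The per-scene estimates serve as sample points of the fitted curve.
curve = {scene: dict(c) for scene, c in counts.items()}
p = change_probability("5 p.m.", "play ball")  # 1/3 with this toy data
```

An unseen scene yields probability 0.0, so the curve degrades gracefully when the robot encounters a scene with no history.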
By combining the scenes of the variable parameters in the robot, the robot's self-cognition is extended, and the self-cognition parameters are fitted to the scene parameters on the variable-parameter axis, producing a personified effect. At the same time, recognition of the location scene is added, so that the robot knows its own geographical position and can change the way it generates interaction content according to the geographical environment it is in. In addition, a Bayesian probability algorithm is used: probability estimation is performed on the parameters between robots through a Bayesian network, and the probability of each parameter change after the robot's time-axis scene parameters change on the life-time axis is calculated to form a fitted curve, which dynamically affects the robot's own self-cognition. This module gives the robot itself a human-like way of life; for the expression aspect in particular, the robot can change its expression according to the location scene it is in.
According to another example, the multi-modal signal at least includes an image signal, and the expression generation module is specifically configured to: generate robot interaction content according to the image signal and the user intention, in combination with the current robot variable parameters.
The multi-modal signal at least includes an image signal, which allows the robot to grasp the user's intention. To recognize that intention more accurately, other signals such as a voice signal or a gesture signal are typically added, so that the robot can more reliably tell whether the user really means what is expressed or is merely joking or probing.
According to another example, the multi-modal signal at least includes a voice signal, and the expression generation module is specifically configured to: generate robot interaction content according to the voice signal and the user intention, in combination with the current robot variable parameters.
According to another example, the multi-modal signal at least includes a gesture signal, and the expression generation module is specifically configured to: generate robot interaction content according to the gesture signal and the user intention, in combination with the current robot variable parameters.
For example, the robot has been singing continuously for some time when the user tells it by voice to keep singing. If the image signal shows that the user is serious, the robot replies that it is too tired and asks to rest, with a tired expression. If the image signal shows that the user has a happy face, the robot replies, "Master, let me rest first, then I will sing for you," with a happy expression. Different replies can thus be generated according to differences in the multi-modal signal. The combination of a voice signal and an image signal is usually enough to recognize the user's meaning fairly accurately, so that the user can be answered more precisely.
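The singing example above can be sketched as a reply-selection function; the names, the wording of the replies, and the fatigue threshold are all assumptions added for illustration:

```python
# Invented reply selection: the same voice request ("keep singing") yields a
# different reply and facial expression depending on the image signal and an
# assumed fatigue variable parameter.

def reply_to_sing_request(face_emotion, fatigue):
    """Return (spoken reply, expression) for a 'keep singing' request."""
    if fatigue > 0.7 and face_emotion == "serious":
        return ("Too tired, let me rest for a while.", "tired face")
    if fatigue > 0.7 and face_emotion == "happy":
        return ("Master, let me rest first, then I will sing for you.",
                "happy face")
    return ("OK, singing now.", "neutral face")

reply, expression = reply_to_sing_request("serious", fatigue=0.9)
```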
The present invention further discloses a robot, comprising a generation system for robot interaction content as described in any of the above.
The above content is a further detailed description of the present invention with reference to specific preferred embodiments, and it cannot be concluded that the specific implementation of the present invention is limited to these descriptions. For a person of ordinary skill in the technical field of the present invention, simple deductions or substitutions made without departing from the concept of the present invention shall all be regarded as falling within the protection scope of the present invention.
Claims (17)
1. A method for generating robot interaction content, characterized by comprising:
acquiring a multi-modal signal;
determining a user intention according to the multi-modal signal;
generating robot interaction content according to the multi-modal signal and the user intention, in combination with current robot variable parameters.
2. The generation method according to claim 1, characterized in that the generation of the robot variable parameters comprises:
fitting the parameters of the robot's self-cognition to the parameters of the scenes in the variable parameters, to generate the robot variable parameters.
3. The generation method according to claim 2, characterized in that the variable parameters at least include a behavior that changes the user's original script and the behavior after the change, as well as parameter values representing the original and changed behaviors.
4. The generation method according to claim 1, characterized in that the step of generating robot interaction content according to the multi-modal signal and the user intention in combination with current robot variable parameters further comprises: generating robot interaction content according to the user intention and the multi-modal signal, in combination with the current robot variable parameters and the fitted curve of the parameter-change probability.
5. The generation method according to claim 4, characterized in that the generation of the fitted curve of the parameter-change probability comprises: using a probability algorithm to perform probability estimation on the parameter network between robots, calculating the probability of each parameter change after the scene parameters of the robot on the life-time axis change, and forming the fitted curve of the parameter-change probability.
6. The generation method according to claim 1, characterized in that the multi-modal signal at least includes an image signal, and the step of generating robot interaction content according to the multi-modal signal and the user intention in combination with current robot variable parameters specifically comprises:
generating robot interaction content according to the image signal and the user intention, in combination with the current robot variable parameters.
7. The generation method according to claim 1, characterized in that the multi-modal signal at least includes a voice signal, and the step of generating robot interaction content according to the multi-modal signal and the user intention in combination with current robot variable parameters specifically comprises:
generating robot interaction content according to the voice signal and the user intention, in combination with the current robot variable parameters.
8. The generation method according to claim 1, characterized in that the multi-modal signal at least includes a gesture signal, and the step of generating robot interaction content according to the multi-modal signal and the user intention in combination with current robot variable parameters specifically comprises:
generating robot interaction content according to the gesture signal and the user intention, in combination with the current robot variable parameters.
9. A system for generating robot interaction content, characterized by comprising:
an acquisition module, configured to acquire a multi-modal signal;
an intention recognition module, configured to determine a user intention according to the multi-modal signal;
a content generation module, configured to generate robot interaction content according to the multi-modal signal and the user intention, in combination with current robot variable parameters.
10. The generation system according to claim 9, characterized in that the system includes a time-axis and artificial-intelligence cloud processing module, configured to:
fit the parameters of the robot's self-cognition to the parameters of the scenes in the variable parameters, to generate the robot variable parameters.
11. The generation system according to claim 10, characterized in that the variable parameters at least include a behavior that changes the user's original script and the behavior after the change, as well as parameter values representing the original and changed behaviors.
12. The generation system according to claim 9, characterized in that the time-axis and artificial-intelligence cloud processing module is further configured to: generate robot interaction content according to the user intention and the multi-modal signal, in combination with the current robot variable parameters and the fitted curve of the parameter-change probability.
13. The generation system according to claim 12, characterized in that the time-axis and artificial-intelligence cloud processing module is further configured to: use a probability algorithm to calculate the probability of each parameter change after the time-axis scene parameters of the robot on the life-time axis change, and form the fitted curve.
14. The generation system according to claim 9, characterized in that the multi-modal signal at least includes an image signal, and the content generation module is specifically configured to: generate robot interaction content according to the image signal and the user intention, in combination with the current robot variable parameters.
15. The generation system according to claim 9, characterized in that the multi-modal signal at least includes a voice signal, and the content generation module is specifically configured to: generate robot interaction content according to the voice signal and the user intention, in combination with the current robot variable parameters.
16. The generation system according to claim 9, characterized in that the multi-modal signal at least includes a gesture signal, and the content generation module is specifically configured to: generate robot interaction content according to the gesture signal and the user intention, in combination with the current robot variable parameters.
17. A robot, characterized by comprising a system for generating robot interaction content according to any one of claims 9 to 16.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2016/087752 WO2018000267A1 (en) | 2016-06-29 | 2016-06-29 | Method for generating robot interaction content, system, and robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106462255A true CN106462255A (en) | 2017-02-22 |
Family
ID=58215718
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201680001745.7A Pending CN106462255A (en) | 2016-06-29 | 2016-06-29 | A method, system and robot for generating interactive content of robot |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106462255A (en) |
WO (1) | WO2018000267A1 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107564522A (en) * | 2017-09-18 | 2018-01-09 | 郑州云海信息技术有限公司 | A kind of intelligent control method and device |
CN108363492A (en) * | 2018-03-09 | 2018-08-03 | 南京阿凡达机器人科技有限公司 | A kind of man-machine interaction method and interactive robot |
WO2018171223A1 (en) * | 2017-03-24 | 2018-09-27 | 华为技术有限公司 | Data processing method and nursing robot device |
CN110154048A (en) * | 2019-02-21 | 2019-08-23 | 北京格元智博科技有限公司 | Control method, control device and the robot of robot |
CN110228065A (en) * | 2019-04-29 | 2019-09-13 | 北京云迹科技有限公司 | Motion planning and robot control method and device |
CN112775991A (en) * | 2021-02-10 | 2021-05-11 | 溪作智能(深圳)有限公司 | Head mechanism of robot, robot and control method of robot |
CN113450436A (en) * | 2021-06-28 | 2021-09-28 | 武汉理工大学 | Face animation generation method and system based on multi-mode correlation |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080195566A1 (en) * | 2007-02-08 | 2008-08-14 | Samsung Electronics Co., Ltd. | Apparatus and method for expressing behavior of software robot |
CN102103707A (en) * | 2009-12-16 | 2011-06-22 | 群联电子股份有限公司 | Emotion engine, emotion engine system and control method of electronic device |
CN104951077A (en) * | 2015-06-24 | 2015-09-30 | 百度在线网络技术(北京)有限公司 | Man-machine interaction method and device based on artificial intelligence and terminal equipment |
CN105511608A (en) * | 2015-11-30 | 2016-04-20 | 北京光年无限科技有限公司 | Intelligent robot based interaction method and device, and intelligent robot |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103294725A (en) * | 2012-03-03 | 2013-09-11 | 李辉 | Intelligent response robot software |
-
2016
- 2016-06-29 CN CN201680001745.7A patent/CN106462255A/en active Pending
- 2016-06-29 WO PCT/CN2016/087752 patent/WO2018000267A1/en active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080195566A1 (en) * | 2007-02-08 | 2008-08-14 | Samsung Electronics Co., Ltd. | Apparatus and method for expressing behavior of software robot |
CN102103707A (en) * | 2009-12-16 | 2011-06-22 | 群联电子股份有限公司 | Emotion engine, emotion engine system and control method of electronic device |
CN104951077A (en) * | 2015-06-24 | 2015-09-30 | 百度在线网络技术(北京)有限公司 | Man-machine interaction method and device based on artificial intelligence and terminal equipment |
CN105511608A (en) * | 2015-11-30 | 2016-04-20 | 北京光年无限科技有限公司 | Intelligent robot based interaction method and device, and intelligent robot |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018171223A1 (en) * | 2017-03-24 | 2018-09-27 | 华为技术有限公司 | Data processing method and nursing robot device |
KR20190126906A (en) * | 2017-03-24 | 2019-11-12 | 후아웨이 테크놀러지 컴퍼니 리미티드 | Data processing method and device for care robot |
KR102334942B1 (en) | 2017-03-24 | 2021-12-06 | 후아웨이 테크놀러지 컴퍼니 리미티드 | Data processing method and device for caring robot |
US11241789B2 (en) | 2017-03-24 | 2022-02-08 | Huawei Technologies Co., Ltd. | Data processing method for care-giving robot and apparatus |
CN107564522A (en) * | 2017-09-18 | 2018-01-09 | 郑州云海信息技术有限公司 | A kind of intelligent control method and device |
CN108363492A (en) * | 2018-03-09 | 2018-08-03 | 南京阿凡达机器人科技有限公司 | A kind of man-machine interaction method and interactive robot |
CN108363492B (en) * | 2018-03-09 | 2021-06-25 | 南京阿凡达机器人科技有限公司 | Man-machine interaction method and interaction robot |
CN110154048A (en) * | 2019-02-21 | 2019-08-23 | 北京格元智博科技有限公司 | Control method, control device and the robot of robot |
CN110228065A (en) * | 2019-04-29 | 2019-09-13 | 北京云迹科技有限公司 | Motion planning and robot control method and device |
CN112775991A (en) * | 2021-02-10 | 2021-05-11 | 溪作智能(深圳)有限公司 | Head mechanism of robot, robot and control method of robot |
CN112775991B (en) * | 2021-02-10 | 2021-09-07 | 溪作智能(深圳)有限公司 | Head mechanism of robot, robot and control method of robot |
CN113450436A (en) * | 2021-06-28 | 2021-09-28 | 武汉理工大学 | Face animation generation method and system based on multi-mode correlation |
Also Published As
Publication number | Publication date |
---|---|
WO2018000267A1 (en) | 2018-01-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106462255A (en) | A method, system and robot for generating interactive content of robot | |
Sinha et al. | Human computer interaction | |
CN106462254A (en) | Robot interaction content generation method, system and robot | |
Egges et al. | Generic personality and emotion simulation for conversational agents | |
CN106537294A (en) | Method, system and robot for generating interactive content of robot | |
Rázuri et al. | Automatic emotion recognition through facial expression analysis in merged images based on an artificial neural network | |
US9690784B1 (en) | Culturally adaptive avatar simulator | |
Tao et al. | Affective information processing | |
CN106503786A (en) | Multi-modal exchange method and device for intelligent robot | |
CN106022294A (en) | Intelligent robot-oriented man-machine interaction method and intelligent robot-oriented man-machine interaction device | |
CN106502382A (en) | Active exchange method and system for intelligent robot | |
Basori | Emotion walking for humanoid avatars using brain signals | |
CN106471572A (en) | A kind of method of simultaneous voice and virtual acting, system and robot | |
Ochs et al. | 18 facial expressions of emotions for virtual characters | |
CN106462804A (en) | Method and system for generating robot interaction content, and robot | |
Sobhan et al. | A communication aid system for deaf and mute using vibrotactile and visual feedback | |
CN106489114A (en) | A kind of generation method of robot interactive content, system and robot | |
Turk | Moving from guis to puis | |
CN106537293A (en) | Method and system for generating robot interactive content, and robot | |
Liao et al. | A systematic review of global research on natural user interface for smart home system | |
Tyler et al. | The MIDAS human performance model | |
CN106537425A (en) | Method and system for generating robot interaction content, and robot | |
Ali et al. | A framework for modeling and designing of intelligent and adaptive interfaces for human computer interaction | |
Ruiz et al. | Multimodal input | |
Korhonen et al. | Training Hard Skills in Virtual Reality: Developing a Theoretical Framework for AI-Based Immersive Learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | | |
PB01 | Publication | | |
C10 | Entry into substantive examination | | |
SE01 | Entry into force of request for substantive examination | | |
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20170222 |