CN106471444A - Interaction method and system for a virtual 3D robot, and robot - Google Patents
Interaction method and system for a virtual 3D robot, and robot Download PDF Info
- Publication number
- CN106471444A CN106471444A CN201680001725.XA CN201680001725A CN106471444A CN 106471444 A CN106471444 A CN 106471444A CN 201680001725 A CN201680001725 A CN 201680001725A CN 106471444 A CN106471444 A CN 106471444A
- Authority
- CN
- China
- Prior art keywords
- robot
- user
- interaction
- modal information
- variable element
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
Abstract
The present invention provides an interaction method for a virtual 3D robot, including: acquiring multi-modal information from a user; generating interaction content according to the multi-modal information and variable parameters; converting the interaction content into machine code recognizable by the robot; and outputting, by the robot, according to the interaction content, where the output modes at least include couple interaction, chance-encounter interaction and pet interaction. The robot can thus identify the specific information in the interaction content, so that it can be output and controlled to interact with the user, making its forms of expression more diverse and human-like and improving the user's experience of interacting with the robot. Because the output modes at least include couple interaction, chance-encounter interaction and pet interaction, the robot can exhibit different functions for different needs and support more kinds of interaction, improving the robot's range of application and the user experience.
Description
Technical field
The present invention relates to the technical field of robot interaction, and more particularly to an interaction method and system for a virtual 3D robot, and to a robot.
Background technology
As interactive companions for humans, robots are used in more and more settings; for example, lonely elderly people and children can interact with a robot through conversation, entertainment and the like. To make robots more human-like when interacting with people, the inventors devised a display device and imaging system for a virtual robot that can render a 3D animated figure. A host machine receives human instructions, such as voice, and interacts with the human, and the virtual 3D animated figure replies with sound and actions according to the host's instructions. The robot is thereby made more human-like: it can interact not only through sound and expressions but also through actions, greatly improving the interactive experience.
However, how to control a virtual robot is an important and rather complicated problem. Providing an easily controlled interaction method and system for a virtual 3D robot, and a robot, so as to improve the human-machine interaction experience, has therefore become a technical problem in urgent need of a solution.
Summary of the invention
An object of the present invention is to provide an easily controlled interaction method and system for a virtual 3D robot, and a robot, to improve the human-machine interaction experience.
The object of the present invention is achieved through the following technical solutions:
An interaction method for a virtual 3D robot, including:
acquiring multi-modal information from a user;
generating interaction content according to the multi-modal information and variable parameters;
converting the interaction content into machine code recognizable by the robot;
outputting, by the robot, according to the interaction content, where the output modes at least include couple interaction, chance-encounter interaction and pet interaction.
Preferably, the chance-encounter interaction specifically includes: acquiring multi-modal information from a user;
storing the multi-modal information in a database;
if a stranger user obtains the multi-modal information from the database, establishing an interaction with that stranger user.
Preferably, the couple interaction specifically includes: acquiring multi-modal information from a user;
identifying the user's intention according to the multi-modal information and scene information;
sending, according to the user's multi-modal information and intention, multi-modal information processed by the robot to the partner user associated with this user.
Preferably, the pet interaction specifically includes: acquiring multi-modal information from a user;
generating interaction content according to the multi-modal information and variable parameters;
sending the interaction content to a display unit and establishing an interaction with the user.
Preferably, the method for generating the robot's variable parameters includes: fitting the parameters of the robot's self-cognition to the parameters of the scenes in the variable parameters, to generate the robot's variable parameters.
Preferably, the variable parameters at least include the user's original behavior and the changed behavior, as well as parameter values representing the change from the original behavior to the changed behavior.
Preferably, the step of generating interaction content according to the multi-modal information and variable parameters specifically includes: generating interaction content according to the multi-modal information, the variable parameters and a fitted curve of parameter-change probabilities.
Preferably, the method for generating the fitted curve of parameter-change probabilities includes: using a probabilistic algorithm, performing probability estimation on the network of parameters between robots, and calculating the probability of each parameter changing after the scene parameters of a robot on its life timeline change, to form the fitted curve of parameter-change probabilities.
An interaction system for a virtual 3D robot, including:
an acquisition module, for acquiring multi-modal information from a user;
an artificial intelligence module, for generating interaction content according to the multi-modal information and variable parameters;
a conversion module, for converting the interaction content into machine code recognizable by the robot;
a control module, for the robot to output according to the interaction content, where the output modes at least include couple interaction, chance-encounter interaction and pet interaction.
Preferably, the chance-encounter interaction specifically includes: acquiring multi-modal information from a user;
storing the multi-modal information in a database;
if a stranger user obtains the multi-modal information from the database, establishing an interaction with that stranger user.
Preferably, the couple interaction specifically includes: acquiring multi-modal information from a user;
identifying the user's intention according to the multi-modal information and scene information;
sending, according to the user's multi-modal information and intention, multi-modal information processed by the robot to the partner user associated with this user.
Preferably, the pet interaction specifically includes: acquiring multi-modal information from a user;
generating interaction content according to the multi-modal information and variable parameters;
sending the interaction content to a display unit and establishing an interaction with the user.
Preferably, the system further includes a processing module, for fitting the parameters of the robot's self-cognition to the parameters of the scenes in the variable parameters, to generate the variable parameters.
Preferably, the variable parameters at least include the user's original behavior and the changed behavior, as well as parameter values representing the change from the original behavior to the changed behavior.
Preferably, the artificial intelligence module is specifically configured to: generate interaction content according to the multi-modal information, the variable parameters and a fitted curve of parameter-change probabilities.
Preferably, the system includes a fitted-curve generation module, for using a probabilistic algorithm to perform probability estimation on the network of parameters between robots, and calculating the probability of each parameter changing after the scene parameters of a robot on its life timeline change, to form the fitted curve of parameter-change probabilities.
The present invention further discloses a robot, including an interaction system for a virtual 3D robot as described in any of the above.
Compared with the prior art, the present invention has the following advantages. The interaction method of the virtual 3D robot of the present invention includes: acquiring multi-modal information from a user; generating interaction content according to the multi-modal information and variable parameters; and outputting, by the robot, according to the interaction content, where the output modes at least include couple interaction, chance-encounter interaction and pet interaction. After acquiring the user's multi-modal information, interaction content is generated in combination with the robot's variable parameters, so the robot can identify the specific information in the interaction content and thereby be output and controlled, letting the 3D figure give a corresponding presentation and interact with the user. The robot then expresses itself in interaction not only through voice but also through actions and other forms, making its forms of expression more diverse and human-like and improving the user's experience of interacting with the robot. Because the output modes at least include couple interaction, chance-encounter interaction and pet interaction, the robot can exhibit different functions for different needs and support more kinds of interaction, improving the robot's range of application and the user experience.
Brief description of the drawings
Fig. 1 is a flow chart of an interaction method for a virtual 3D robot according to embodiment one of the present invention;
Fig. 2 is a schematic diagram of an interaction system for a virtual 3D robot according to embodiment two of the present invention.
Detailed description of the embodiments
Although the flow charts describe the operations as a sequential process, many of the operations may be performed in parallel, concurrently or simultaneously. The order of the operations may also be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not included in the drawings. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram and the like.
Computer equipment includes user equipment and network equipment. The user equipment or client includes but is not limited to computers, smart phones, PDAs and the like; the network equipment includes but is not limited to a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of computers or network servers based on cloud computing. The computer equipment may operate alone to realize the present invention, or may access a network and realize the present invention through interaction with other computer equipment in the network. The network in which the computer equipment resides includes but is not limited to the Internet, wide area networks, metropolitan area networks, local area networks, VPNs and the like.
The terms "first", "second" and the like may be used here to describe units, but the units should not be limited by these terms, which are used only to distinguish one unit from another. The term "and/or" used here includes any and all combinations of one or more of the associated listed items. When a unit is said to be "connected" or "coupled" to another unit, it may be directly connected or coupled to the other unit, or intermediate units may be present.
The terms used here are only for describing specific embodiments and are not intended to limit the exemplary embodiments. Unless the context clearly indicates otherwise, the singular forms "a" and "an" used here are also intended to include the plural. It should also be understood that the terms "include" and/or "comprise" used here specify the presence of the stated features, integers, steps, operations, units and/or components, without excluding the presence or addition of one or more other features, integers, steps, operations, units, components and/or combinations thereof.
The present invention is further described below with reference to the accompanying drawings and preferred embodiments.
Embodiment one
As shown in Fig. 1, this embodiment discloses an interaction method for a virtual 3D robot. The method in this embodiment is mainly used in virtual 3D robots, for example in VR (Virtual Reality). The method includes:
S101, acquiring multi-modal information from a user;
S102, generating interaction content according to the multi-modal information and variable parameters 300;
S103, outputting, by the robot, according to the interaction content, where the output modes at least include couple interaction, chance-encounter interaction and pet interaction.
The interaction method of the virtual 3D robot of the present invention includes: acquiring multi-modal information from a user; generating interaction content according to the multi-modal information and variable parameters; and outputting, by the robot, according to the interaction content, where the output modes at least include couple interaction, chance-encounter interaction and pet interaction. After acquiring the user's multi-modal information, interaction content is generated in combination with the robot's variable parameters, so the robot can identify the specific information in the interaction content and thereby be output and controlled, letting the 3D figure give a corresponding presentation and interact with the user. The robot then expresses itself in interaction not only through voice but also through actions and other forms, making its forms of expression more diverse and human-like and improving the user's experience of interacting with the robot. Because the output modes at least include couple interaction, chance-encounter interaction and pet interaction, the robot can exhibit different functions for different needs and support more kinds of interaction, improving the robot's range of application and the user experience.
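The pipeline of steps S101-S103 (together with the conversion step of the summary) can be sketched minimally as follows. All class, function and opcode names are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass


@dataclass
class MultiModalInput:
    """Hypothetical container for the user's multi-modal information (S101)."""
    speech: str = ""
    gesture: str = ""
    scene: str = ""


def generate_interaction_content(info: MultiModalInput, variable_params: dict) -> dict:
    """Combine multi-modal information with variable parameters (S102)."""
    activity = variable_params.get("current_activity", "idle")
    return {
        "speech": f"Reply to '{info.speech}' while {activity}",
        "action": "wave" if info.gesture else "nod",
    }


def to_machine_code(content: dict) -> list:
    """Convert interaction content into robot-recognizable opcodes (conversion step)."""
    return [("SAY", content["speech"]), ("DO", content["action"])]


info = MultiModalInput(speech="hello", gesture="wave")
ops = to_machine_code(generate_interaction_content(info, {"current_activity": "shopping"}))
```

The output stage (S103) would then dispatch each opcode to the 3D figure's voice and animation channels.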
In this embodiment, the interaction content may include voice information, action information and the like, so that multi-modal output can be performed, enriching the forms in which the robot gives feedback.
In addition, in this embodiment, the interaction content may include both voice information and action information. To match the action information to the voice information, the two can be adjusted to each other when the interaction content is generated, for example by adjusting the voice information and the action information to the same time length. Concretely, the adjustment preferably compresses or stretches the time length of the voice information and/or the action information, or speeds up or slows down playback: for example, multiplying the playback speed of the voice information by 2, or multiplying the playback time of the action information by 0.8.
For example, suppose that in the interaction content the robot generates from the user's multi-modal information, the time length of the voice information is 1 minute and the time length of the action information is 2 minutes. The playback speed of the action information can then be doubled, so that its playback time after adjustment becomes 1 minute, synchronizing it with the voice information. Alternatively, the playback speed of the voice information can be slowed to 0.5 times the original, so that the voice information is stretched to 2 minutes and synchronized with the action information. Both can also be adjusted at once, for example slowing the voice information while speeding up the action information so that both become 1 minute 30 seconds, which likewise synchronizes voice and action.
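The duration-matching rule above can be sketched numerically. The function below is a minimal illustration that synchronizes the two streams by speeding both up to meet at the shorter duration; the patent equally allows stretching only one stream, or meeting in the middle (both at 1 minute 30 seconds in the example):

```python
def match_durations(voice_len: float, action_len: float) -> tuple:
    """Return (voice_speed, action_speed) playback multipliers so that
    both streams finish together at the shorter of the two durations."""
    target = min(voice_len, action_len)
    return voice_len / target, action_len / target


# 1-minute voice, 2-minute action: play the action at 2x speed,
# leaving the voice unchanged, so both finish in 1 minute.
voice_speed, action_speed = match_durations(60.0, 120.0)
```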
The multi-modal information in this embodiment may be one or more of user expressions, voice information, gesture information, scene information, image information, video information, face information, pupil/iris information, light-sensing information, fingerprint information and the like.
In this embodiment, the variable parameters specifically capture sudden changes that happen to the person or the robot. For example, the life on the timeline consists of eating, sleeping, interacting, running, eating, sleeping. If the robot's scene suddenly changes, for instance being taken to the beach during the time slot for running, such actively made human changes serve as variable parameters for the robot and can cause the robot's self-cognition to change. The life timeline and the variable parameters can change attributes in the self-cognition, such as mood values and fatigue values, and can also automatically add new self-cognition information: for example, if there was previously no anger value, a scene based on the life timeline and variable factors will automatically simulate a human self-cognition scene and add it to the robot's self-cognition.
For example, according to the life timeline, 12 noon should be mealtime. If this scene is changed, for instance by going out shopping at 12 noon, the robot writes this down as one of the variable parameters, and when the user interacts with the robot within this time period, the robot combines "going out shopping at 12 noon" when generating the interaction content, rather than the earlier "eating at 12 noon". When generating the concrete interaction content, the robot combines the acquired multi-modal information of the user, such as voice information, video information and picture information, with the variable parameters. Unexpected events in human life can thus be added to the robot's life timeline, making the robot's interaction more human-like.
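The noon example can be sketched as a life timeline with a variable-parameter override; the hour keys and activity names below are illustrative assumptions:

```python
# Scheduled activities on the life timeline, keyed by hour of day.
life_timeline = {8: "eating", 12: "eating", 22: "sleeping"}


def current_activity(hour: int, variable_params: dict) -> str:
    """A sudden change recorded as a variable parameter overrides the
    activity scheduled on the life timeline for that hour."""
    return variable_params.get(hour, life_timeline.get(hour, "idle"))


# Sudden change at noon: going out shopping instead of eating.
variable_params = {12: "shopping"}
```

Interaction content generated at noon would then be combined with `"shopping"` rather than the originally scheduled `"eating"`.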
According to one example, the chance-encounter interaction specifically includes: acquiring multi-modal information from a user;
storing the multi-modal information in a database;
if a stranger user obtains the multi-modal information from the database, establishing an interaction with that stranger user.
In this embodiment, the multi-modal information may of course be voice information, but may also be other information, such as video information or action information. For example, a user records a segment of voice, which is then stored in the database; after another, unknown user randomly obtains this segment of voice, an interaction can be established with the first user for communication and exchange.
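A minimal sketch of the chance-encounter flow, with an in-memory list standing in for the patent's database (all names are assumptions):

```python
import random

message_db = []  # stands in for the database of stored multi-modal information


def store_message(user_id: str, info: str) -> None:
    """Store a user's recorded multi-modal information (e.g. a voice clip)."""
    message_db.append({"user": user_id, "info": info})


def fetch_for_stranger(stranger_id: str):
    """A stranger randomly obtains someone else's stored information,
    which is the trigger for establishing an interaction between the two."""
    candidates = [m for m in message_db if m["user"] != stranger_id]
    return random.choice(candidates) if candidates else None


store_message("alice", "recorded voice clip")
match = fetch_for_stranger("bob")
```

Once `match` is found, the system would open a communication channel between the two users.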
According to one example, the couple interaction specifically includes: acquiring multi-modal information from a user;
identifying the user's intention according to the multi-modal information and scene information;
sending, according to the user's multi-modal information and intention, multi-modal information processed by the robot to the partner user associated with this user.
In this embodiment, the multi-modal information may of course be voice information, but may also be other information, such as video information or action information. For example, a user records the voice message "wife, go to bed earlier"; after analyzing and recognizing this segment of voice, the robot transforms it and sends it to the user's partner robot, which replies "dear so-and-so, your husband asks you to go to bed earlier". This facilitates communication and exchange between users and makes the exchange between partners more intimate. Of course, the partner robots are bound and configured to each other in advance. In addition, after the robot receives the voice information, the interaction information can also be presented multi-modally, improving the user experience.
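The couple-interaction rewrite can be sketched as follows; the binding table and the rewritten wording are illustrative assumptions, not the patent's actual processing:

```python
# Partner robots are bound to each other in advance.
bindings = {"userA": "userB"}


def forward_to_partner(sender: str, message: str) -> tuple:
    """Analyze the sender's message, rewrite it, and address it to the
    partner user bound to the sender."""
    partner = bindings[sender]
    rewritten = f"Dear {partner}, your partner says: {message}"
    return partner, rewritten


recipient, text = forward_to_partner("userA", "go to bed earlier")
```

A real system would apply intention recognition to the voice before rewriting, as described above.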
According to one example, the pet interaction specifically includes: acquiring multi-modal information from a user;
generating interaction content according to the multi-modal information and variable parameters;
sending the interaction content to a display unit and establishing an interaction with the user.
In this embodiment, the multi-modal information may of course be voice information, but may also be other information, such as video information or action information. For example, the user says "how is the weather today"; the robot, after acquiring this, queries today's weather and sends the result to be shown on a display unit such as a mobile terminal (a phone, a tablet and the like), informing the user of today's weather, for example "sunny". Actions, expressions and other modes can also be mixed into the feedback at the same time.
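A minimal sketch of the pet-interaction flow, with a stub standing in for the weather service and hypothetical expression/action fields for the mixed-mode feedback:

```python
def pet_interaction(query: str, weather_service) -> dict:
    """Answer a query and package the reply, together with an expression
    and an action, for a display unit such as a phone or tablet."""
    if "weather" in query:
        report = weather_service(query)
        return {"text": report, "expression": "smile", "action": "jump"}
    return {"text": "...", "expression": "neutral", "action": "idle"}


# Stub weather service for illustration; a real robot would query a live API.
result = pet_interaction("how is the weather today", lambda q: "sunny")
```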
According to one example, the method for generating the robot's variable parameters includes: fitting the parameters of the robot's self-cognition to the parameters of the scenes in the variable parameters, to generate the robot's variable parameters. By combining the scenes of the variable parameters with the robot, the robot's own self-cognition is extended, and the parameters in the self-cognition are fitted to the parameters of the scenes on the variable-parameter axis, producing a human-like effect.
According to one example, the variable parameters at least include the user's original behavior and the changed behavior, as well as parameter values representing the change from the original behavior to the changed behavior.
A variable parameter captures the case where, according to the original plan, the user would be in one state, but a sudden change puts the user in another state; the variable parameter represents the user's state or behavior after the change, together with the change itself. For example, running was originally scheduled for 5 p.m., but something else comes up, say playing ball; the change from running to playing ball is then a variable parameter, and the probability of such a change is also studied.
According to one example, the step of generating interaction content according to the multi-modal information and variable parameters specifically includes: generating interaction content according to the multi-modal information, the variable parameters and a fitted curve of parameter-change probabilities. The fitted curve can thus be produced by probability training on the variable parameters, and the robot's interaction content generated from it.
According to one example, the method for generating the fitted curve of parameter-change probabilities includes: using a probabilistic algorithm, performing probability estimation on the network of parameters between robots, and calculating the probability of each parameter changing after the scene parameters of a robot on its life timeline change, to form the fitted curve of parameter-change probabilities. The probabilistic algorithm may be a Bayesian probability algorithm.
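As a loose stand-in for the Bayesian estimate, the probability of a behavior change in a given scene can be estimated from logged frequencies; a real implementation would use a Bayesian network over the robot's parameters rather than this simple count, and the log entries here are invented for illustration:

```python
from collections import Counter

# Hypothetical log of (scene, behavior) observations on the life timeline.
log = [("17:00", "run"), ("17:00", "run"), ("17:00", "play_ball"), ("17:00", "run")]


def change_probability(scene: str, behavior: str) -> float:
    """Estimate P(behavior | scene) as a relative frequency over the log."""
    in_scene = [b for s, b in log if s == scene]
    return Counter(in_scene)[behavior] / len(in_scene) if in_scene else 0.0
```

Evaluating such estimates over the timeline would give the points through which the fitted curve of parameter-change probabilities is drawn.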
By combining the scenes of the robot's variable parameters, the robot's own self-cognition is extended, and the parameters in the self-cognition are fitted to the parameters of the scenes on the variable-parameter axis, producing a human-like effect. Meanwhile, recognition of the current place and scene is added, so that the robot knows its own geographical position and can change the way interaction content is generated according to the geographical environment it is in. In addition, a Bayesian probability algorithm is used: probability estimation is performed on the Bayesian network of parameters between robots, and the probability of each parameter change after the scene parameters on the robot's own life timeline change is calculated to form the fitted curve, which dynamically affects the robot's own self-cognition. This innovative module gives the robot itself a human lifestyle; as for expressions, changes of expression can be made according to the place and scene the robot is in.
Embodiment two
As shown in Fig. 2, this embodiment discloses an interaction system for a virtual 3D robot, including:
an acquisition module 201, for acquiring multi-modal information from a user;
an artificial intelligence module 202, for generating interaction content according to the multi-modal information and variable parameters, where the variable parameters are generated by a variable-parameter modulator 301;
a conversion module 203, for converting the interaction content into machine code recognizable by the robot;
a control module 204, for the robot to output according to the interaction content, where the output modes at least include couple interaction, chance-encounter interaction and pet interaction.
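The four modules 201-204 can be wired together as in the following sketch; the class and method names are assumptions for illustration, not the patent's actual implementation:

```python
class AcquisitionModule:            # module 201
    def acquire(self) -> dict:
        """Acquire the user's multi-modal information."""
        return {"speech": "hello"}


class AIModule:                     # module 202
    def generate(self, info: dict, variable_params: dict) -> dict:
        """Generate interaction content from info and variable parameters."""
        return {"speech": "hi", "action": "wave"}


class ConverterModule:              # module 203
    def convert(self, content: dict) -> list:
        """Convert interaction content into robot-recognizable machine code."""
        return [("SAY", content["speech"]), ("DO", content["action"])]


class ControlModule:                # module 204
    def output(self, machine_code: list) -> list:
        """Drive the robot's output from the machine code."""
        return [op for op, _ in machine_code]


def run_once() -> list:
    info = AcquisitionModule().acquire()
    content = AIModule().generate(info, {})
    return ControlModule().output(ConverterModule().convert(content))
```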
The robot can thus identify the specific information in the interaction content, so that it can be output and controlled, letting the 3D figure give a corresponding presentation and interact with the user. The robot then expresses itself in interaction not only through voice but also through actions and other forms, making its forms of expression more diverse and human-like and improving the user's experience of interacting with the robot. Because the output modes at least include couple interaction, chance-encounter interaction and pet interaction, the robot can exhibit different functions for different needs and support more kinds of interaction, improving the robot's range of application and the user experience.
In this embodiment, the interaction content may include voice information, action information and the like, so that multi-modal output can be performed, enriching the forms in which the robot gives feedback.
In addition, in this embodiment, the interaction content may also include voice information. To match the action information to the voice information, the two can be adjusted to each other when the interaction content is generated, for example by adjusting the voice information and the action information to the same time length. Concretely, the adjustment preferably compresses or stretches the time length of the voice information and/or the action information, or speeds up or slows down playback: for example, multiplying the playback speed of the voice information by 2, or multiplying the playback time of the action information by 0.8.
For example, suppose that in the interaction content the robot generates from the user's multi-modal information, the time length of the voice information is 1 minute and the time length of the action information is 2 minutes. The playback speed of the action information can then be doubled, so that its playback time after adjustment becomes 1 minute, synchronizing it with the voice information. Alternatively, the playback speed of the voice information can be slowed to 0.5 times the original, so that the voice information is stretched to 2 minutes and synchronized with the action information. Both can also be adjusted at once, for example slowing the voice information while speeding up the action information so that both become 1 minute 30 seconds, which likewise synchronizes voice and action.
Multi-modal information in the present embodiment can be user's expression, voice messaging, gesture information, scene information, image
The one of which therein or several such as information, video information, face information, pupil iris information, light sensation information and finger print information.
In the present embodiment, variable element is specifically:The burst that people is occurred with machine changes, and on such as time shafts is born
Work is to have a meal, sleep, interacting, running, having a meal, sleeping.That in that case, if the scene of suddenly change robot, than
As gone to the beach etc. in the time period band run, for the parameter of robot, as variable element, these change for these mankind's actives
Change can make the self cognition of robot produce change.Life-time axle and variable element can to the attribute in self cognition,
Such as mood value, the change of fatigue data etc., it is also possible to be automatically added to new self cognition information, does not such as have indignation before
Value, the scene based on life-time axle and variable factor will automatically according to front simulation mankind's self cognition scene, thus
The self cognition of robot is added.
For example, according to the life-time axis, 12 noon should be mealtime; but if this scene is changed, say the user goes out shopping at 12 noon, the robot writes this down as one of the variable parameters. When the user interacts with the robot within this time period, the robot generates interaction content in combination with "going out shopping at 12 noon", rather than combining with the earlier "eating at 12 noon". When generating the concrete interaction content, the robot combines the obtained multi-modal information of the user, such as voice information, screen information, and picture information, with the variable parameter. In this way, unexpected events from human life can be added to the robot's life axis, making the robot's interaction more human-like.
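The life-time-axis override described above can be sketched as a small data structure. This is a minimal illustrative sketch, not the patented mechanism; the class, field, and activity names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class LifeAxis:
    """A default daily schedule plus user-introduced variable-parameter
    overrides (hypothetical structure; names are illustrative)."""
    schedule: dict = field(default_factory=lambda: {
        7: "sleep", 12: "eat", 17: "run", 22: "sleep"})
    overrides: dict = field(default_factory=dict)  # hour -> new activity

    def record_change(self, hour: int, activity: str) -> None:
        # A sudden change of scene is written down as a variable parameter.
        self.overrides[hour] = activity

    def context_at(self, hour: int) -> str:
        # Interaction content is generated against the override, if any,
        # rather than against the originally scheduled activity.
        return self.overrides.get(hour, self.schedule.get(hour, "idle"))

axis = LifeAxis()
axis.record_change(12, "shopping")   # noon: went shopping instead of eating
context = axis.context_at(12)
```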
According to one example, the chance-encounter interaction specifically includes: obtaining the multi-modal information of the user;
storing the multi-modal information in a database;
if a stranger user obtains the multi-modal information from the database, establishing an interaction with that stranger user.
In the present embodiment, the multi-modal information may naturally be voice information, but it may also be other information, for example video information or action information. For instance, a user records a piece of voice, which is then stored in the database; after another, stranger user randomly obtains this piece of voice, an interaction can be established with the first user for communication and exchange.
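The chance-encounter flow can be sketched as a deposit-and-draw pool. This is an illustrative sketch under assumed names; the real system would use a persistent database and a session service rather than the in-memory list shown here.

```python
import random

class EncounterPool:
    """Minimal sketch of the chance-encounter flow: users deposit
    multi-modal messages; a stranger draws one at random and a session
    is set up between the two users (identifiers are illustrative)."""
    def __init__(self):
        self._messages = []            # stands in for the database

    def deposit(self, user_id: str, message: str) -> None:
        self._messages.append((user_id, message))

    def draw(self, stranger_id: str):
        if not self._messages:
            return None
        author, message = random.choice(self._messages)
        # In the described system this would open an interactive session.
        return {"between": (stranger_id, author), "opening": message}

pool = EncounterPool()
pool.deposit("alice", "hello from the beach")
session = pool.draw("bob")
```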
According to one example, the lovers interaction specifically includes: obtaining the multi-modal information of the user;
identifying the user's intent according to the multi-modal information and scene information;
according to the user's multi-modal information and intent, sending robot-processed multi-modal information to the lover user associated with this user.
In the present embodiment, the multi-modal information may naturally be voice information, but it may also be other information, for example video information or action information. For instance, a user records a piece of voice, "Wife, go to bed earlier". After the robot analyzes and recognizes this voice, it transforms it, and after it is sent to the user's lover robot, it is rendered as "Dear so-and-so, your husband asks you to go to bed earlier". This facilitates communication and exchange between users and makes the exchange between lovers more intimate. Of course, the lover robots are bound and paired with each other in advance. In addition, after the robot receives the voice information, it can also present the interaction information multimodally, improving the user experience.
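The rewrite-and-forward step can be sketched as below. This is purely illustrative: the rewriting template stands in for the intent recognition and transformation the description mentions, and all names are assumptions.

```python
def relay_to_partner(sender_role: str, partner_name: str, text: str) -> str:
    """Hedged sketch of the lovers-interaction relay: the robot analyses
    the sender's utterance and re-phrases it before forwarding it to the
    pre-bound partner robot. A real system would run intent recognition
    here; this fixed template only illustrates the transformation step."""
    return f"Dear {partner_name}, your {sender_role} says: {text}"

message = relay_to_partner("husband", "Lily", "go to bed earlier")
```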
According to one example, the pet interaction specifically includes: obtaining the multi-modal information of the user;
generating interaction content according to the multi-modal information and the variable parameter;
sending the interaction content to a display unit to establish an interaction with the user.
In the present embodiment, the multi-modal information may naturally be voice information, but it may also be other information, for example video information or action information. For instance, the user says, "How is the weather today?". After obtaining this, the robot queries today's weather and sends the result to be shown on a display unit such as a mobile terminal (a mobile phone or a tablet), informing the user of today's weather, for example sunny. The feedback can also be mixed with actions, expressions, and other display modes.
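The pet-interaction pipeline (parse the request, fetch a result, push it to a display unit) can be sketched as follows. The weather lookup is stubbed with fixed data, since the description names no weather service; everything here is an illustrative assumption.

```python
def pet_interaction(utterance: str, display) -> None:
    """Sketch of the pet-interaction pipeline: interpret the request,
    obtain a result, and send it to a display unit such as a phone or
    tablet. The weather answer is a stub; a real system would query a
    weather service and the variable parameters."""
    if "weather" in utterance:
        result = {"text": "Sunny today", "expression": "smile",
                  "action": "tail_wag"}       # multimodal feedback
    else:
        result = {"text": "Sorry, I didn't catch that"}
    display(result)                           # e.g. render on the phone UI

shown = []
pet_interaction("how is the weather today", shown.append)
```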
According to one example, the system further includes a processing module, configured to fit the parameters of the robot's self-cognition with the parameters of the scenes in the variable parameter, generating the variable parameter.
By combining the scenes in the robot's variable parameters in this way, the robot's own self-cognition is extended; the parameters in the self-cognition are fitted with the parameters of the scenes on the variable-parameter axis, producing a human-like effect.
According to one example, the variable parameter includes at least the user's original behavior and the behavior after the change, together with parameter values representing the original behavior and the changed behavior.
A variable parameter arises when the user, proceeding in one state according to the original plan, is put into another state by a sudden change; the variable parameter represents the user's state or behavior before and after this change. For example, 5 p.m. was originally scheduled for running, but something else comes up, say going to play ball; the change from running to playing ball is then a variable parameter. In addition, the probability of such a change is also studied.
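A variable parameter as just defined can be sketched as a small record: the original behavior, the changed behavior, and the studied probability of the change. Field names are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class VariableParameter:
    """A variable parameter: the originally planned behavior, the
    behavior after the sudden change, its position on the life-time
    axis, and a learned likelihood of the change occurring."""
    original: str       # e.g. "run"
    changed: str        # e.g. "play ball"
    hour: int           # position on the life-time axis
    probability: float  # studied probability of this change

vp = VariableParameter(original="run", changed="play ball",
                       hour=17, probability=0.2)
```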
According to one example, the artificial-intelligence module is specifically configured to generate interaction content according to the multi-modal information, the variable parameter, and the fitting curve of parameter-change probabilities.
The fitting curve can thus be generated by training on the probabilities of the variable parameters, thereby generating the robot's interaction content.
According to one example, the system includes a fitting-curve generation module, configured to use a probabilistic algorithm to perform probability estimation over the parameter network among robots, calculate the probability of each parameter changing after the scene parameters of a robot on the life-time axis change, and form the fitting curve of the parameter-change probabilities. The probabilistic algorithm may be a Bayesian probability algorithm.
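A toy stand-in for this probability estimation is sketched below: given observed (scene, changed-parameter) pairs, it estimates P(parameter change | scene) with Laplace smoothing. This is a minimal Bayesian-flavoured frequency estimate for illustration only, not the patented algorithm or its Bayesian network.

```python
from collections import Counter, defaultdict

def change_probabilities(observations, alpha=1.0):
    """Estimate P(parameter change | scene) from observed
    (scene, changed_parameter) pairs using add-alpha (Laplace)
    smoothing. The resulting per-scene distributions are the raw
    material from which a fitting curve could be formed."""
    by_scene = defaultdict(Counter)
    for scene, param in observations:
        by_scene[scene][param] += 1
    probs = {}
    for scene, counts in by_scene.items():
        total = sum(counts.values())
        k = len(counts)  # number of distinct changed parameters seen
        probs[scene] = {p: (c + alpha) / (total + alpha * k)
                        for p, c in counts.items()}
    return probs

obs = [("noon_shopping", "mood+"), ("noon_shopping", "fatigue+"),
       ("noon_shopping", "mood+")]
probs = change_probabilities(obs)
# P(mood+ | noon_shopping) = (2 + 1) / (3 + 2) = 0.6
```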
By combining the scenes in the robot's variable parameters, the robot's own self-cognition is extended, and the parameters in the self-cognition are fitted with the parameters of the scenes on the variable-parameter axis, producing a human-like effect. Meanwhile, recognition of the current scene is added, so that the robot knows its own geographical position and can change the way interaction content is generated according to the geographical environment it is in. In addition, a Bayesian probability algorithm is used to perform probability estimation over the Bayesian network of parameters among robots, calculating the probability of each parameter changing after the scene parameters on the robot's own life-time axis change, and forming a fitting curve that dynamically influences the robot's own self-cognition. This module gives the robot itself a human lifestyle; as for facial expression, changes of expression can be made according to the scene of the current place.
The present invention also discloses a robot, including an interaction system for a virtual 3D robot as described in any of the above.
The above content is a further detailed description of the present invention with reference to specific preferred embodiments, and it cannot be concluded that the specific implementation of the present invention is confined to these descriptions. For those of ordinary skill in the technical field of the present invention, a number of simple deductions or substitutions may also be made without departing from the concept of the present invention, and all of these should be regarded as falling within the protection scope of the present invention.
Claims (17)
1. An interaction method for a virtual 3D robot, characterized in that it comprises:
obtaining multi-modal information of a user;
generating interaction content according to the multi-modal information and a variable parameter;
the robot outputting according to the interaction content, the output modes including at least lovers interaction, chance-encounter interaction, and pet interaction.
2. The interaction method according to claim 1, characterized in that the chance-encounter interaction specifically comprises:
obtaining the multi-modal information of the user;
storing the multi-modal information in a database;
if a stranger user obtains the multi-modal information from the database, establishing an interaction with the stranger user.
3. The interaction method according to claim 1, characterized in that the lovers interaction specifically comprises:
obtaining the multi-modal information of the user;
identifying the user's intent according to the multi-modal information and scene information;
according to the user's multi-modal information and intent, sending robot-processed multi-modal information to a lover user associated with the user.
4. The interaction method according to claim 1, characterized in that the pet interaction specifically comprises:
obtaining the multi-modal information of the user;
generating interaction content according to the multi-modal information and the variable parameter;
sending the interaction content to a display unit to establish an interaction with the user.
5. The interaction method according to claim 1, characterized in that the method for generating the variable parameter of the robot comprises: fitting the parameters of the robot's self-cognition with the parameters of the scenes in the variable parameter, generating the variable parameter of the robot.
6. The interaction method according to claim 5, characterized in that the variable parameter includes at least the user's original behavior and the behavior after the change, and parameter values representing the user's original behavior and the changed behavior.
7. The interaction method according to claim 1, characterized in that the step of generating interaction content according to the multi-modal information and the variable parameter specifically comprises: generating interaction content according to the multi-modal information, the variable parameter, and a fitting curve of parameter-change probabilities.
8. The interaction method according to claim 7, characterized in that the method for generating the fitting curve of parameter-change probabilities comprises: using a probabilistic algorithm to perform probability estimation over the parameter network among robots, calculating the probability of each parameter changing after the scene parameters of a robot on the life-time axis change, and forming the fitting curve of the parameter-change probabilities.
9. An interaction system for a virtual 3D robot, characterized in that it comprises:
an acquisition module, configured to obtain multi-modal information of a user;
an artificial-intelligence module, configured to generate interaction content according to the multi-modal information and a variable parameter;
a conversion module, configured to convert the interaction content into machine code recognizable by the robot;
a control module, configured for the robot to output according to the interaction content, the output modes including at least lovers interaction, chance-encounter interaction, and pet interaction.
10. The interaction system according to claim 9, characterized in that the chance-encounter interaction specifically comprises:
obtaining the multi-modal information of the user;
storing the multi-modal information in a database;
if a stranger user obtains the multi-modal information from the database, establishing an interaction with the stranger user.
11. The interaction system according to claim 9, characterized in that the lovers interaction specifically comprises:
obtaining the multi-modal information of the user;
identifying the user's intent according to the multi-modal information and scene information;
according to the user's multi-modal information and intent, sending robot-processed multi-modal information to a lover user associated with the user.
12. The interaction system according to claim 9, characterized in that the pet interaction specifically comprises:
obtaining the multi-modal information of the user;
generating interaction content according to the multi-modal information and the variable parameter;
sending the interaction content to a display unit to establish an interaction with the user.
13. The interaction system according to claim 9, characterized in that the system further comprises a processing module, configured to fit the parameters of the robot's self-cognition with the parameters of the scenes in the variable parameter, generating the variable parameter.
14. The interaction system according to claim 13, characterized in that the variable parameter includes at least the user's original behavior and the behavior after the change, and parameter values representing the user's original behavior and the changed behavior.
15. The interaction system according to claim 9, characterized in that the artificial-intelligence module is specifically configured to: generate interaction content according to the multi-modal information, the variable parameter, and a fitting curve of parameter-change probabilities.
16. The interaction system according to claim 15, characterized in that the system comprises a fitting-curve generation module, configured to use a probabilistic algorithm to perform probability estimation over the parameter network among robots, calculate the probability of each parameter changing after the scene parameters of a robot on the life-time axis change, and form the fitting curve of the parameter-change probabilities.
17. A robot, characterized in that it comprises the interaction system for a virtual 3D robot according to any one of claims 9 to 16.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2016/089214 WO2018006370A1 (en) | 2016-07-07 | 2016-07-07 | Interaction method and system for virtual 3d robot, and robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106471444A true CN106471444A (en) | 2017-03-01 |
Family
ID=58230938
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201680001725.XA Pending CN106471444A (en) | 2016-07-07 | 2016-07-07 | A kind of exchange method of virtual 3D robot, system and robot |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106471444A (en) |
WO (1) | WO2018006370A1 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018006370A1 (en) * | 2016-07-07 | 2018-01-11 | 深圳狗尾草智能科技有限公司 | Interaction method and system for virtual 3d robot, and robot |
CN107632706A (en) * | 2017-09-08 | 2018-01-26 | 北京光年无限科技有限公司 | The application data processing method and system of multi-modal visual human |
CN107678617A (en) * | 2017-09-14 | 2018-02-09 | 北京光年无限科技有限公司 | The data interactive method and system of Virtual robot |
CN107765852A (en) * | 2017-10-11 | 2018-03-06 | 北京光年无限科技有限公司 | Multi-modal interaction processing method and system based on visual human |
CN109202925A (en) * | 2018-09-03 | 2019-01-15 | 深圳狗尾草智能科技有限公司 | Realize robot motion method, system and the equipment synchronous with voice |
CN110941329A (en) * | 2018-09-25 | 2020-03-31 | 未来市股份有限公司 | Artificial intelligence system and interactive response method |
CN114747505A (en) * | 2022-04-07 | 2022-07-15 | 神马人工智能科技(深圳)有限公司 | Smart pet training assistant system based on artificial intelligence |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111045582B (en) * | 2019-11-28 | 2023-05-23 | 深圳市木愚科技有限公司 | Personalized virtual portrait activation interaction system and method |
CN111063346A (en) * | 2019-12-12 | 2020-04-24 | 第五维度(天津)智能科技有限公司 | Cross-media star emotion accompany interaction system based on machine learning |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1392826A (en) * | 2000-10-05 | 2003-01-22 | 索尼公司 | Robot apparatus and its control method |
CN102103707A (en) * | 2009-12-16 | 2011-06-22 | 群联电子股份有限公司 | Emotion engine, emotion engine system and control method of electronic device |
CN104951077A (en) * | 2015-06-24 | 2015-09-30 | 百度在线网络技术(北京)有限公司 | Man-machine interaction method and device based on artificial intelligence and terminal equipment |
CN105005614A (en) * | 2015-07-17 | 2015-10-28 | 深圳狗尾草智能科技有限公司 | Robot lover social system and interaction method thereof |
CN105094315A (en) * | 2015-06-25 | 2015-11-25 | 百度在线网络技术(北京)有限公司 | Method and apparatus for smart man-machine chat based on artificial intelligence |
CN105446953A (en) * | 2015-11-10 | 2016-03-30 | 深圳狗尾草智能科技有限公司 | Intelligent robot and virtual 3D interactive system and method |
CN105740948A (en) * | 2016-02-04 | 2016-07-06 | 北京光年无限科技有限公司 | Intelligent robot-oriented interaction method and device |
CN105739688A (en) * | 2016-01-21 | 2016-07-06 | 北京光年无限科技有限公司 | Man-machine interaction method and device based on emotion system, and man-machine interaction system |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5963663A (en) * | 1996-07-08 | 1999-10-05 | Sony Corporation | Land mark recognition method for mobile robot navigation |
JP3178393B2 (en) * | 1997-11-11 | 2001-06-18 | オムロン株式会社 | Action generation device, action generation method, and action generation program recording medium |
US6754560B2 (en) * | 2000-03-31 | 2004-06-22 | Sony Corporation | Robot device, robot device action control method, external force detecting device and external force detecting method |
CN105427865A (en) * | 2015-11-04 | 2016-03-23 | 百度在线网络技术(北京)有限公司 | Voice control system and method of intelligent robot based on artificial intelligence |
CN106471444A (en) * | 2016-07-07 | 2017-03-01 | 深圳狗尾草智能科技有限公司 | A kind of exchange method of virtual 3D robot, system and robot |
- 2016-07-07 CN CN201680001725.XA patent/CN106471444A/en active Pending
- 2016-07-07 WO PCT/CN2016/089214 patent/WO2018006370A1/en active Application Filing
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1392826A (en) * | 2000-10-05 | 2003-01-22 | 索尼公司 | Robot apparatus and its control method |
CN102103707A (en) * | 2009-12-16 | 2011-06-22 | 群联电子股份有限公司 | Emotion engine, emotion engine system and control method of electronic device |
CN104951077A (en) * | 2015-06-24 | 2015-09-30 | 百度在线网络技术(北京)有限公司 | Man-machine interaction method and device based on artificial intelligence and terminal equipment |
CN105094315A (en) * | 2015-06-25 | 2015-11-25 | 百度在线网络技术(北京)有限公司 | Method and apparatus for smart man-machine chat based on artificial intelligence |
CN105005614A (en) * | 2015-07-17 | 2015-10-28 | 深圳狗尾草智能科技有限公司 | Robot lover social system and interaction method thereof |
CN105446953A (en) * | 2015-11-10 | 2016-03-30 | 深圳狗尾草智能科技有限公司 | Intelligent robot and virtual 3D interactive system and method |
CN105739688A (en) * | 2016-01-21 | 2016-07-06 | 北京光年无限科技有限公司 | Man-machine interaction method and device based on emotion system, and man-machine interaction system |
CN105740948A (en) * | 2016-02-04 | 2016-07-06 | 北京光年无限科技有限公司 | Intelligent robot-oriented interaction method and device |
Non-Patent Citations (1)
Title |
---|
涂序彦: "《拟人学与拟人系统》" (Personification Studies and Personified Systems), 31 August 2013 *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018006370A1 (en) * | 2016-07-07 | 2018-01-11 | 深圳狗尾草智能科技有限公司 | Interaction method and system for virtual 3d robot, and robot |
CN107632706A (en) * | 2017-09-08 | 2018-01-26 | 北京光年无限科技有限公司 | The application data processing method and system of multi-modal visual human |
CN107678617A (en) * | 2017-09-14 | 2018-02-09 | 北京光年无限科技有限公司 | The data interactive method and system of Virtual robot |
CN107765852A (en) * | 2017-10-11 | 2018-03-06 | 北京光年无限科技有限公司 | Multi-modal interaction processing method and system based on visual human |
CN109202925A (en) * | 2018-09-03 | 2019-01-15 | 深圳狗尾草智能科技有限公司 | Realize robot motion method, system and the equipment synchronous with voice |
CN110941329A (en) * | 2018-09-25 | 2020-03-31 | 未来市股份有限公司 | Artificial intelligence system and interactive response method |
CN114747505A (en) * | 2022-04-07 | 2022-07-15 | 神马人工智能科技(深圳)有限公司 | Smart pet training assistant system based on artificial intelligence |
Also Published As
Publication number | Publication date |
---|---|
WO2018006370A1 (en) | 2018-01-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106471444A (en) | A kind of exchange method of virtual 3D robot, system and robot | |
CN107340859B (en) | Multi-modal interaction method and system of multi-modal virtual robot | |
US8555164B2 (en) | Method for customizing avatars and heightening online safety | |
CN107632706B (en) | Application data processing method and system of multi-modal virtual human | |
CN106663219A (en) | Methods and systems of handling a dialog with a robot | |
CN106471572B (en) | Method, system and the robot of a kind of simultaneous voice and virtual acting | |
WO2017173141A1 (en) | Persistent companion device configuration and deployment platform | |
CN106462255A (en) | A method, system and robot for generating interactive content of robot | |
CN106462124A (en) | Method, system and robot for identifying and controlling household appliances based on intention | |
CN107704169A (en) | The method of state management and system of visual human | |
CN106663127A (en) | An interaction method and system for virtual robots and a robot | |
DE112021001301T5 (en) | DIALOGUE-BASED AI PLATFORM WITH RENDERED GRAPHIC OUTPUT | |
CN106462254A (en) | Robot interaction content generation method, system and robot | |
CN107808191A (en) | The output intent and system of the multi-modal interaction of visual human | |
CN106463118B (en) | Method, system and the robot of a kind of simultaneous voice and virtual acting | |
CN106503786A (en) | Multi-modal exchange method and device for intelligent robot | |
CN106462804A (en) | Method and system for generating robot interaction content, and robot | |
JP2019008513A (en) | Virtual reality system and program | |
CN106537293A (en) | Method and system for generating robot interactive content, and robot | |
CN106537425A (en) | Method and system for generating robot interaction content, and robot | |
DE102023102142A1 (en) | CONVERSATIONAL AI PLATFORM WITH EXTRAACTIVE QUESTION ANSWER | |
Bilvi et al. | Communicative and statistical eye gaze predictions | |
CN106662931A (en) | Robot man-machine interactive system, device and method | |
Aylett et al. | An architecture for emotional facial expressions as social signals | |
CN111844055A (en) | Multi-mode man-machine interaction robot with auditory, visual, tactile and emotional feedback functions |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20170301 |
|