CN106294726A - Processing method and device based on robot role interaction - Google Patents
Processing method and device based on robot role interaction
- Publication number
- CN106294726A CN106294726A CN201610647418.0A CN201610647418A CN106294726A CN 106294726 A CN106294726 A CN 106294726A CN 201610647418 A CN201610647418 A CN 201610647418A CN 106294726 A CN106294726 A CN 106294726A
- Authority
- CN
- China
- Prior art keywords
- role
- output
- robot
- modal
- interaction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Theoretical Computer Science (AREA)
- Artificial Intelligence (AREA)
- Human Computer Interaction (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Toys (AREA)
Abstract
The present invention provides a data processing method based on robot role interaction. The method comprises the following steps: receiving multi-modal input data from a user; according to the received multi-modal input data, invoking a language output model generated in combination with the role that the robot represents, to produce multi-modal output data; and outputting the produced multi-modal output data. A robot according to the present invention not only resembles the imitated cartoon character in appearance, but can also imitate that character when interacting with the user, so that the companionship function of the physical robot is more complete and better meets the human need for companionship.
Description
Technical field
The present invention relates to the field of intelligent robotics, and in particular to a processing method and device based on robot role interaction.
Background art
The robot industry is currently developing rapidly, and a variety of home companion robots have appeared. Among these robots, some are physical embodiments of popular animation IP (Intellectual Property) characters. For example, for the currently popular animation character Calabash Baby, a physical robot can be designed with the appearance of Calabash Baby. However, it is not enough for the physical robot to resemble the animation IP only in appearance; it also needs to resemble the IP in everyday expression and behavioral habits before the user experience can be improved.
To this end, a technical scheme is needed for designing a physical robot that performs multi-modal output by imitating an animation character, so that the physical robot can better meet the user's need for companionship.
Summary of the invention
An object of the present invention is to solve the problem that physical robots in the prior art cannot meet the user's need for companionship, by providing a data processing method based on robot role interaction. The method comprises the following steps:
receiving multi-modal input data from a user;
according to the received multi-modal input data, invoking a language output model generated in combination with the role that the robot represents, to produce multi-modal output data;
outputting the produced multi-modal output data.
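The three steps above can be sketched as a minimal pipeline. All function names and stub data below are illustrative assumptions for this sketch, not structures defined by the patent.

```python
# Minimal sketch of the claimed three-step method: receive multi-modal
# input, invoke a role-specific language output model, emit the result.
# All names and stub data here are illustrative, not from the patent.

def receive_multimodal_input():
    # A real robot would fuse speech, gesture and expression recognition;
    # this stub returns already-recognized values.
    return {"speech": "hello", "expression": "smile", "action": None}

def produce_output(inputs, language_model):
    # Invoke the language output model generated for the represented role.
    reply = language_model(inputs["speech"])
    return {"speech": reply, "action": "wave"}

def output_multimodal(data):
    return "say: {speech} | do: {action}".format(**data)

# Toy stand-in for a trained language output model: echo plus catchphrase.
model = lambda text: text + ", rest a while!"
result = output_multimodal(produce_output(receive_multimodal_input(), model))
```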
According to the data processing method based on robot role interaction of the present invention, the generated language output model includes a classic-sentence output model, a role-feature output model, and a scenario output model.
According to the data processing method based on robot role interaction of the present invention, generating the language output model comprises the following steps:
obtaining in advance historical dialogue text related to the role that the robot represents;
training on the historical dialogue text to generate a language output model adapted to the role.
According to the data processing method based on robot role interaction of the present invention, when training on the dialogue text, an RNN (Recurrent Neural Network) algorithm is used.
According to one embodiment of the present invention, when outputting the multi-modal output data, a customized TTS (Text to Speech) matched to the represented role is used for speech synthesis output.
According to another aspect of the present invention, a data processing device based on robot role interaction is also provided. The device includes the following units:
a multi-modal input data receiving unit, for receiving multi-modal input data from a user;
a multi-modal output data generating unit, for invoking, according to the received multi-modal input data, the language output model generated in combination with the role that the robot represents, to produce multi-modal output data;
an output unit, for outputting the produced multi-modal output data.
According to the data processing device based on robot role interaction of the present invention, the generated language output model includes a classic-sentence output model, a role-feature output model, and a scenario output model.
According to the data processing device based on robot role interaction of the present invention, the unit for generating the language output model includes:
an advance acquisition unit, which obtains in advance historical dialogue text related to the role that the robot represents;
a training unit, for training on the historical dialogue text to generate a language output model adapted to the role.
According to the data processing device based on robot role interaction of the present invention, when training on the dialogue text, an RNN algorithm is used. The data processing device based on robot role interaction according to the present invention is characterized in that, when outputting the multi-modal output data, a customized TTS matched to the represented role is used for speech synthesis output.
The benefit brought by the present invention is that a robot according to the present invention not only resembles the imitated cartoon character in appearance, but can also imitate that character when interacting with the user, so that the companionship function of the physical robot is more complete and better meets the human need for companionship. Furthermore, by combining customized TTS technology, the robot's speaking voice can be set to the voice of the cartoon character, so that the multi-modal output of the physical robot according to the present invention comes even closer to the role it represents.
Other features and advantages of the present invention will be set forth in the following description and will in part become apparent from the description, or be understood by practicing the present invention. The objects and other advantages of the present invention can be realized and obtained through the structures particularly pointed out in the description, the claims, and the accompanying drawings.
Brief description of the drawings
The accompanying drawings are provided for a further understanding of the present invention and constitute a part of the description; together with the embodiments of the present invention they serve to explain the present invention and do not constitute a limitation of the present invention. In the drawings:
Fig. 1 is a flowchart of a role-based interactive data processing method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the classification of language output models according to an embodiment of the present invention;
Fig. 3 is a flowchart of invoking the language output model routine according to an embodiment of the present invention; and
Fig. 4 is a structural block diagram of a data processing device based on robot role interaction according to an embodiment of the present invention.
Detailed description of the invention
In order to make the object, technical solutions, and advantages of the present invention clearer, embodiments of the present invention are further described in detail below with reference to the accompanying drawings.
As described above, the robot industry is developing rapidly, and a wide variety of home companion robots have appeared. Among these robots, some are physical embodiments of popular IP characters. For example, for the popular animation character Calabash Baby, a physical Calabash Baby robot (hereinafter referred to as an IP entity) can be made; the present invention includes, but is not limited to, interactive robots and toys that support natural human-machine dialogue.
To this end, the present invention provides a new way to transfer an IP character's experience, background, and other information from the cartoon onto the physical robot relatively completely, so that the physical robot can converse with people as the character does in the cartoon or animation. Furthermore, by combining customized Text-to-Speech (TTS) technology, the robot's speaking voice can be set to the voice of the cartoon character, bringing it even closer to the role.
The implementation and basic principles of the present invention are described in detail through the following embodiments.
Fig. 1 shows a flowchart of a role-based interactive data processing method according to an embodiment of the present invention.
In Fig. 1, the method starts at step S101. When a user wants to interact with a robot designed as a favorite cartoon character, the user can issue a specific voice command or action to instruct the robot entity to begin the interaction. For example, the user can say a specific greeting or other statement, or perform an action. After the physical robot receives this instruction, then, in step S102, it receives the user's multi-modal input data.
In a multi-modal interaction scenario, the user's multi-modal input data includes expression input, voice input, action input, and the like. A physical robot according to the present invention needs to recognize these categories of input in order to obtain the user's intention.
Next, in step S103, according to the received multi-modal input data, such as the human's expression input, voice input, and action input, the physical robot invokes the language output model generated in combination with the role that the physical robot represents, thereby producing multi-modal output data.
As shown in Fig. 2, the language output model produced according to the present invention can be classified into a classic-sentence output model 201, a role-feature output model 202, and a scenario output model 203. With the classic-sentence output model 201, during interaction the IP entity can say the character's most classic statements in combination with different scenes. With the role-feature output model 202, during interaction the IP physical robot can give accurate feedback when some feature of the IP character is involved. With the scenario output model 203, during interaction the IP physical robot can fully express a plot from the animation when some scene in the animation is involved, connecting seamlessly with the real situation.
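One plausible way to read the three model categories is as a priority dispatch: scenario lines first, then role-feature facts, then a classic catchphrase as fallback. The sketch below uses invented Doraemon-style data; the dispatch order and all strings are assumptions for illustration, not taken from the patent.

```python
# Illustrative dispatch over the three model categories (201-203).
# The data and the priority order are invented for this sketch.

SCENE_LINES = {                            # scenario output model (203)
    "memory bread": "Who told you not to work hard!",
}
ROLE_FACTS = {                             # role-feature output model (202)
    "like to eat": "I like dorayaki best!",
}
CLASSIC = "Rest a while, rest a while!"    # classic-sentence model (201)

def respond(utterance):
    for cue, line in SCENE_LINES.items():
        if cue in utterance:
            return line                    # animation plot takes precedence
    for cue, line in ROLE_FACTS.items():
        if cue in utterance:
            return line                    # accurate role-feature feedback
    return CLASSIC                         # fall back to a classic catchphrase

answers = [respond("What do you like to eat best?"),
           respond("The memory bread failed me!"),
           respond("Good morning")]
```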
In a specific example of the classic-sentence output model, suppose the physical robot embodies the classic character Ikkyū-san; when the user interacts with it, the robot can say the character's most classic sentences in combination with different scenes, such as its catchphrase "Rest a while, rest a while!" Further preferably, the robot can also be equipped with corresponding classic actions, so that the imitation is even more lifelike.
In a specific example of the role-feature output model, suppose the physical robot is a Doraemon robot; when the user interacts with it and asks "What do you like to eat best?", the robot can answer "I like dorayaki best!"; when asked "Whom does Nobita like best?", the robot can answer "Shizuka, of course!"
In a specific example of the scenario output model, again with a Doraemon robot, when the user interacts with it and says "The memory bread I ate from you still failed me!", the robot can answer with the special scene dialogue from the animation: "Who told you not to work hard!"
These specific language outputs require the physical robot to learn continuously from a large corpus of the character's lines in the animation input beforehand, thereby producing specific language output models, such as the scenario-speech output model, the classic-sentence output model, and the role-feature output model.
The building of the language output model according to the present invention is described below.
As shown in Fig. 3, in step S301, when the language output model generated in combination with the role represented by the physical robot is invoked, the language output model is not something that exists from the start; it is produced through continuous learning from the input corpus. In step S302, the physical robot obtains in advance historical dialogue text related to the role it represents. The historical dialogue text need not be the entire conversation history; through computational analysis, only some classic sentences may be input, and learning can then be performed with a recurrent neural network algorithm to generate the required language output model, step S303. By learning from the historical data, the robot can imitate the speech characteristics of the specific role and express itself with the role's distinctive manner of speaking in particular contexts.
When training the model on the obtained historical dialogue text, an RNN (recurrent neural network) algorithm is used to train on the dialogue text, thereby obtaining the language output model. Besides using an RNN algorithm to generate the language model, the present invention can also apply any published existing algorithm to generate this kind of dialogue model.
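As a concrete illustration of the RNN the description names, here is a minimal Elman-style recurrent cell in NumPy, forward pass only. The dimensions and random weights are toy values; a real language output model would be trained with backpropagation through time on the role's dialogue corpus.

```python
import numpy as np

# Minimal Elman RNN cell: h_t = tanh(W_xh x_t + W_hh h_{t-1} + b_h),
# followed by a softmax over the next token. Forward pass only; the
# weights are random toy values, not a trained language output model.

rng = np.random.default_rng(0)
vocab, hidden = 5, 8
W_xh = rng.normal(scale=0.1, size=(hidden, vocab))
W_hh = rng.normal(scale=0.1, size=(hidden, hidden))
W_hy = rng.normal(scale=0.1, size=(vocab, hidden))
b_h = np.zeros(hidden)

def step(x_onehot, h):
    h = np.tanh(W_xh @ x_onehot + W_hh @ h + b_h)
    logits = W_hy @ h
    probs = np.exp(logits) / np.exp(logits).sum()   # next-token distribution
    return h, probs

h = np.zeros(hidden)
for token in [0, 2, 4]:          # a toy 3-token input sequence
    h, probs = step(np.eye(vocab)[token], h)
```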
Finally, in step S104, the physical robot outputs the corresponding multi-modal expression according to the dialogue model. The output multi-modal output data can correspondingly include emotion expression data, action expression data, and speech expression data.
According to one embodiment of the present invention, when outputting the multi-modal output data, a customized TTS matched to the represented role can be used for speech synthesis output. Customized TTS means speech synthesis specially customized for the robot, so that the robot imitates not only the cartoon character's language but also its voice. If further equipped with corresponding characteristic action output, the physical robot becomes a true representative of the cartoon character.
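A customized-TTS hookup might look like the sketch below. `CustomTTS`, `voice_id`, and `synthesize` are hypothetical names invented for this sketch; the patent names no concrete engine, and a real deployment would call an actual speech-synthesis service.

```python
# Hypothetical interface for role-matched customized TTS output.
# No real TTS engine is called; synthesize() returns a placeholder
# tag instead of a waveform.

class CustomTTS:
    def __init__(self, voice_id):
        # voice_id selects a voice customized to match the cartoon role
        self.voice_id = voice_id

    def synthesize(self, text):
        # stand-in for waveform generation by a real engine
        return "<audio voice=%s>%s</audio>" % (self.voice_id, text)

tts = CustomTTS(voice_id="ikkyu-san")
audio = tts.synthesize("Rest a while, rest a while!")
```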
As shown in Fig. 4, the present invention also provides a data processing device 400 based on robot role interaction. The device includes the following units:
a multi-modal input data receiving unit 401, for receiving multi-modal input data from a user;
a multi-modal output data generating unit 402, for invoking, according to the received multi-modal input data, the language output model generated in combination with the role that the robot represents, to produce multi-modal output data;
an output unit 403, for outputting the produced multi-modal output data.
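The three units of Fig. 4 can be sketched as one class with one method per unit; the class and method names are illustrative translations, not code from the patent, and the trivial language model passed in is a placeholder.

```python
# Sketch of device 400 of Fig. 4: units 401-403 as methods.

class RoleInteractionDevice:
    def __init__(self, language_model):
        self.language_model = language_model

    def receive(self, multimodal_input):      # unit 401: input receiving
        return multimodal_input.get("speech", "")

    def generate(self, text):                 # unit 402: output generating
        return self.language_model(text)

    def output(self, reply):                  # unit 403: output
        return {"speech": reply}

device = RoleInteractionDevice(lambda t: t.upper())
out = device.output(device.generate(device.receive({"speech": "hi"})))
```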
The generated language output model includes a classic-sentence output model, a role-feature output model, and a scenario output model.
The unit for generating the language output model includes:
an advance acquisition unit, which obtains in advance historical dialogue text related to the role that the robot represents;
a training unit, for training on the historical dialogue text to generate a language output model adapted to the role.
According to the present invention, when training on the dialogue text, an RNN (recurrent neural network) algorithm is used; the model is trained on the obtained historical dialogue text with the RNN algorithm to obtain the language output model. Besides using an RNN algorithm to generate the language model, the present invention can also apply any published existing algorithm to generate this kind of dialogue model.
When outputting the multi-modal output data, a customized TTS matched to the represented role is used for speech synthesis output.
Although the embodiments of the present invention are disclosed as above, the described content is only an embodiment adopted to facilitate understanding of the present invention, and is not intended to limit the present invention. Any person skilled in the technical field of the present invention may make modifications and changes in the form and details of implementation without departing from the spirit and scope disclosed by the present invention, but the scope of patent protection of the present invention shall still be defined by the appended claims.
Claims (10)
1. A data processing method based on robot role interaction, characterized in that the method comprises the following steps:
receiving multi-modal input data from a user;
according to the received multi-modal input data, invoking a language output model generated in combination with the role that the robot represents, to produce multi-modal output data;
outputting the produced multi-modal output data.
2. The data processing method based on robot role interaction of claim 1, characterized in that the generated language output model includes a classic-sentence output model, a role-feature output model, and a scenario output model.
3. The data processing method based on robot role interaction of claim 1, characterized in that generating the language output model comprises the following steps:
obtaining in advance historical dialogue text related to the role that the robot represents;
training on the historical dialogue text to generate a language output model adapted to the role.
4. The data processing method based on robot role interaction of claim 3, characterized in that, when training on the dialogue text, an RNN (recurrent neural network) algorithm is used to train on the dialogue text.
5. The data processing method based on robot role interaction of any one of claims 1-4, characterized in that, when outputting the multi-modal output data, a customized TTS matched to the represented role is used for speech synthesis output.
6. A data processing device based on robot role interaction, characterized in that the device includes the following units:
a multi-modal input data receiving unit, for receiving multi-modal input data from a user;
a multi-modal output data generating unit, for invoking, according to the received multi-modal input data, the language output model generated in combination with the role that the robot represents, to produce multi-modal output data;
an output unit, for outputting the produced multi-modal output data.
7. The data processing device based on robot role interaction of claim 6, characterized in that the generated language output model includes a classic-sentence output model, a role-feature output model, and a scenario output model.
8. The data processing device based on robot role interaction of claim 6, characterized in that the unit for generating the language output model includes:
an advance acquisition unit, which obtains in advance historical dialogue text related to the role that the robot represents;
a training unit, for training on the historical dialogue text to generate a language output model adapted to the role.
9. The data processing device based on robot role interaction of claim 8, characterized in that, when training on the dialogue text, an RNN (recurrent neural network) algorithm is used to train on the dialogue text.
10. The data processing device based on robot role interaction of any one of claims 6-9, characterized in that, when outputting the multi-modal output data, a customized TTS matched to the represented role is used for speech synthesis output.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610647418.0A CN106294726A (en) | 2016-08-09 | 2016-08-09 | Processing method and device based on robot role interaction |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610647418.0A CN106294726A (en) | 2016-08-09 | 2016-08-09 | Processing method and device based on robot role interaction |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106294726A true CN106294726A (en) | 2017-01-04 |
Family
ID=57666993
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610647418.0A Pending CN106294726A (en) | 2016-08-09 | 2016-08-09 | Processing method and device based on robot role interaction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106294726A (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106874472A (en) * | 2017-02-16 | 2017-06-20 | 深圳追科技有限公司 | A kind of anthropomorphic robot's client service method |
CN107272885A (en) * | 2017-05-09 | 2017-10-20 | 北京光年无限科技有限公司 | A kind of man-machine interaction method and device for intelligent robot |
CN107294837A (en) * | 2017-05-22 | 2017-10-24 | 北京光年无限科技有限公司 | Engaged in the dialogue interactive method and system using virtual robot |
CN107480122A (en) * | 2017-06-26 | 2017-12-15 | 迈吉客科技(北京)有限公司 | A kind of artificial intelligence exchange method and artificial intelligence interactive device |
CN107688983A (en) * | 2017-07-27 | 2018-02-13 | 北京光年无限科技有限公司 | Intelligent robot custom service processing method and system based on business platform |
CN107704530A (en) * | 2017-09-19 | 2018-02-16 | 百度在线网络技术(北京)有限公司 | Speech ciphering equipment exchange method, device and equipment |
CN107729983A (en) * | 2017-09-21 | 2018-02-23 | 北京深度奇点科技有限公司 | A kind of method, apparatus and electronic equipment using realizing of Robot Vision man-machine chess |
WO2018196684A1 (en) * | 2017-04-24 | 2018-11-01 | 北京京东尚科信息技术有限公司 | Method and device for generating conversational robot |
WO2019010678A1 (en) * | 2017-07-13 | 2019-01-17 | 深圳前海达闼云端智能科技有限公司 | Robot role switching method and apparatus, and robot |
CN110399474A (en) * | 2019-07-18 | 2019-11-01 | 腾讯科技(深圳)有限公司 | A kind of Intelligent dialogue method, apparatus, equipment and storage medium |
CN110609620A (en) * | 2019-09-05 | 2019-12-24 | 深圳追一科技有限公司 | Human-computer interaction method and device based on virtual image and electronic equipment |
CN110730953A (en) * | 2017-10-03 | 2020-01-24 | 谷歌有限责任公司 | Customizing interactive dialog applications based on creator-provided content |
CN111832691A (en) * | 2020-07-01 | 2020-10-27 | 娄兆文 | Role-substituted scalable multi-object intelligent accompanying robot |
CN114818609A (en) * | 2022-06-29 | 2022-07-29 | 阿里巴巴达摩院(杭州)科技有限公司 | Interaction method for virtual object, electronic device and computer storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1846228A (en) * | 2003-09-01 | 2006-10-11 | 松下电器产业株式会社 | Electronic device having user authentication function |
US8941643B1 (en) * | 2010-12-28 | 2015-01-27 | Lucasfilm Entertainment Company Ltd. | Quality assurance testing of virtual environments |
CN105183712A (en) * | 2015-08-27 | 2015-12-23 | 北京时代焦点国际教育咨询有限责任公司 | Method and apparatus for scoring English composition |
CN105425953A (en) * | 2015-11-02 | 2016-03-23 | 小天才科技有限公司 | Man-machine interaction method and system |
CN105550173A (en) * | 2016-02-06 | 2016-05-04 | 北京京东尚科信息技术有限公司 | Text correction method and device |
CN105787560A (en) * | 2016-03-18 | 2016-07-20 | 北京光年无限科技有限公司 | Dialogue data interaction processing method and device based on recurrent neural network |
- 2016
- 2016-08-09 CN CN201610647418.0A patent/CN106294726A/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1846228A (en) * | 2003-09-01 | 2006-10-11 | 松下电器产业株式会社 | Electronic device having user authentication function |
US8941643B1 (en) * | 2010-12-28 | 2015-01-27 | Lucasfilm Entertainment Company Ltd. | Quality assurance testing of virtual environments |
CN105183712A (en) * | 2015-08-27 | 2015-12-23 | 北京时代焦点国际教育咨询有限责任公司 | Method and apparatus for scoring English composition |
CN105425953A (en) * | 2015-11-02 | 2016-03-23 | 小天才科技有限公司 | Man-machine interaction method and system |
CN105550173A (en) * | 2016-02-06 | 2016-05-04 | 北京京东尚科信息技术有限公司 | Text correction method and device |
CN105787560A (en) * | 2016-03-18 | 2016-07-20 | 北京光年无限科技有限公司 | Dialogue data interaction processing method and device based on recurrent neural network |
Non-Patent Citations (1)
Title |
---|
Xie Guangming et al.: 《机器人概论》 (Introduction to Robotics), 30 September 2013 *
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106874472A (en) * | 2017-02-16 | 2017-06-20 | 深圳追科技有限公司 | A kind of anthropomorphic robot's client service method |
CN108733722B (en) * | 2017-04-24 | 2020-07-31 | 北京京东尚科信息技术有限公司 | Automatic generation method and device for conversation robot |
WO2018196684A1 (en) * | 2017-04-24 | 2018-11-01 | 北京京东尚科信息技术有限公司 | Method and device for generating conversational robot |
CN108733722A (en) * | 2017-04-24 | 2018-11-02 | 北京京东尚科信息技术有限公司 | A kind of dialogue robot automatic generation method and device |
CN107272885A (en) * | 2017-05-09 | 2017-10-20 | 北京光年无限科技有限公司 | A kind of man-machine interaction method and device for intelligent robot |
CN107272885B (en) * | 2017-05-09 | 2020-06-26 | 北京光年无限科技有限公司 | Man-machine interaction method and device for intelligent robot |
CN107294837A (en) * | 2017-05-22 | 2017-10-24 | 北京光年无限科技有限公司 | Engaged in the dialogue interactive method and system using virtual robot |
CN107480122A (en) * | 2017-06-26 | 2017-12-15 | 迈吉客科技(北京)有限公司 | A kind of artificial intelligence exchange method and artificial intelligence interactive device |
CN107480122B (en) * | 2017-06-26 | 2020-05-08 | 迈吉客科技(北京)有限公司 | Artificial intelligence interaction method and artificial intelligence interaction device |
WO2019001127A1 (en) * | 2017-06-26 | 2019-01-03 | 迈吉客科技(北京)有限公司 | Virtual character-based artificial intelligence interaction method and artificial intelligence interaction device |
WO2019010678A1 (en) * | 2017-07-13 | 2019-01-17 | 深圳前海达闼云端智能科技有限公司 | Robot role switching method and apparatus, and robot |
CN107688983A (en) * | 2017-07-27 | 2018-02-13 | 北京光年无限科技有限公司 | Intelligent robot custom service processing method and system based on business platform |
CN107704530A (en) * | 2017-09-19 | 2018-02-16 | 百度在线网络技术(北京)有限公司 | Speech ciphering equipment exchange method, device and equipment |
CN107729983A (en) * | 2017-09-21 | 2018-02-23 | 北京深度奇点科技有限公司 | A kind of method, apparatus and electronic equipment using realizing of Robot Vision man-machine chess |
CN107729983B (en) * | 2017-09-21 | 2021-06-25 | 北京深度奇点科技有限公司 | Method and device for realizing man-machine chess playing by using machine vision and electronic equipment |
CN110730953A (en) * | 2017-10-03 | 2020-01-24 | 谷歌有限责任公司 | Customizing interactive dialog applications based on creator-provided content |
CN110730953B (en) * | 2017-10-03 | 2023-08-29 | 谷歌有限责任公司 | Method and system for customizing interactive dialogue application based on content provided by creator |
CN110399474A (en) * | 2019-07-18 | 2019-11-01 | 腾讯科技(深圳)有限公司 | A kind of Intelligent dialogue method, apparatus, equipment and storage medium |
CN110609620A (en) * | 2019-09-05 | 2019-12-24 | 深圳追一科技有限公司 | Human-computer interaction method and device based on virtual image and electronic equipment |
CN111832691A (en) * | 2020-07-01 | 2020-10-27 | 娄兆文 | Role-substituted scalable multi-object intelligent accompanying robot |
CN111832691B (en) * | 2020-07-01 | 2024-01-09 | 娄兆文 | Role-substituted upgradeable multi-object intelligent accompanying robot |
CN114818609A (en) * | 2022-06-29 | 2022-07-29 | 阿里巴巴达摩院(杭州)科技有限公司 | Interaction method for virtual object, electronic device and computer storage medium |
CN114818609B (en) * | 2022-06-29 | 2022-09-23 | 阿里巴巴达摩院(杭州)科技有限公司 | Interaction method for virtual object, electronic device and computer storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106294726A (en) | Processing method and device based on robot role interaction | |
CN110688911B (en) | Video processing method, device, system, terminal equipment and storage medium | |
WO2022048403A1 (en) | Virtual role-based multimodal interaction method, apparatus and system, storage medium, and terminal | |
KR101925440B1 (en) | Method for providing vr based live video chat service using conversational ai | |
CN108000526B (en) | Dialogue interaction method and system for intelligent robot | |
CN110427472A (en) | The matched method, apparatus of intelligent customer service, terminal device and storage medium | |
CN105807933B (en) | A kind of man-machine interaction method and device for intelligent robot | |
CN110400251A (en) | Method for processing video frequency, device, terminal device and storage medium | |
CN107797663A (en) | Multi-modal interaction processing method and system based on visual human | |
JP7352115B2 (en) | Non-linguistic information generation device, non-linguistic information generation model learning device, non-linguistic information generation method, non-linguistic information generation model learning method and program | |
CN106531162A (en) | Man-machine interaction method and device used for intelligent robot | |
JP7157340B2 (en) | Nonverbal information generation device, nonverbal information generation model learning device, method, and program | |
CN109461435A (en) | A kind of phoneme synthesizing method and device towards intelligent robot | |
CN109800295A (en) | The emotion session generation method being distributed based on sentiment dictionary and Word probability | |
Ritschel et al. | Multimodal joke generation and paralinguistic personalization for a socially-aware robot | |
CN115953521A (en) | Remote digital human rendering method, device and system | |
Gjaci et al. | Towards culture-aware co-speech gestures for social robots | |
Esposito et al. | On the recognition of emotional vocal expressions: motivations for a holistic approach | |
Gamborino et al. | Towards effective robot-assisted photo reminiscence: Personalizing interactions through visual understanding and inferring | |
Schiller et al. | Human-inspired socially-aware interfaces | |
Cerezo et al. | Interactive agents for multimodal emotional user interaction | |
KR102120936B1 (en) | System for providing customized character doll including smart phone | |
Chandrasiri et al. | Internet communication using real-time facial expression analysis and synthesis | |
CN111897434A (en) | System, method, and medium for signal control of virtual portrait | |
Zhang et al. | Towards a Framework for Social Robot Co-speech Gesture Generation with Semantic Expression |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20170104 |
|
RJ01 | Rejection of invention patent application after publication |