CN105797374A - Method and terminal for emitting a corresponding voice following facial expressions - Google Patents

Method and terminal for emitting a corresponding voice following facial expressions

Info

Publication number
CN105797374A
Authority
CN
China
Prior art keywords
facial expression
feature value
voice
terminal
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410856659.7A
Other languages
Chinese (zh)
Inventor
刘美鸿
陈易华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen magic eye Technology Co., Ltd.
Original Assignee
Shenzhen Estar Displaytech Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Estar Displaytech Co filed Critical Shenzhen Estar Displaytech Co
Priority to CN201410856659.7A priority Critical patent/CN105797374A/en
Publication of CN105797374A publication Critical patent/CN105797374A/en
Pending legal-status Critical Current


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method and a terminal for emitting a corresponding voice following the user's facial expressions. The method comprises the following steps: the terminal obtains a facial expression image of a user and extracts a first feature value from the image to set the initial facial expression of a role model; the terminal monitors changes in the user's facial expression and associates the changed second feature value with the role model in real time; and when the role model is associated with the second feature value, the terminal emits the voice set by a matching pattern. In this way, a game role model can follow the user's facial expressions while emitting the corresponding voice, the hardware of the device is fully used to extend game play, and the fun of the game is enhanced.

Description

Method and terminal for emitting a corresponding voice following facial expressions
Technical field
The present invention relates to the field of image recognition, and in particular to a method and terminal for emitting a corresponding voice following facial expressions.
Background technology
Virtual role model games are currently popular on mobile terminals. The role model reacts quickly when the user touches different parts of it, or it records what the user says and plays it back with an altered voice while moving its mouth, which is highly amusing.
At present, however, this kind of game play is limited: user operations merely trigger fixed actions of the role model, or the role model speaks by imitating the user's speech, failing to make full use of the hardware to extend game play.
Summary of the invention
The technical problem mainly solved by the present invention is to provide a method and terminal for emitting a corresponding voice following facial expressions, so that a virtual role model can follow the user's facial expressions while emitting the corresponding voice, making full use of the hardware to extend game play and enhancing the fun of the game.
To solve the above technical problem, the present invention adopts the following technical scheme: a method for emitting a corresponding voice following facial expressions is provided, comprising the following steps: the terminal obtains a facial expression image of the user and extracts a first feature value from the image to set the initial facial expression of a role model; the terminal monitors changes in the user's facial expression and associates the changed second feature value with the role model in real time; and while the role model is associated with the second feature value, the terminal emits the voice set by a matching pattern. The feature values include one or more of the positions of the eyes, the distance between the eyes, and the positions of the mouth, nose, and chin; the matching pattern records the correspondence between the second feature value and the role model's utterance.
Wherein the step of the terminal emitting the voice set by the matching pattern while the role model is associated with the second feature value comprises: the terminal compares the first feature value and the second feature value of the facial expression change; according to the comparison result and the matching pattern, the terminal queries its database for the pre-stored voice corresponding to the second feature value; and the terminal plays the voice according to the second feature value when the role model's facial expression changes.
Wherein the database of the terminal includes at least one voice corresponding to each change in the user's facial expression feature values, and the context of the voice matches the current facial expression of the role model.
Wherein, in the step of querying the voice corresponding to the second feature value, the voice is preset in the terminal's database, or is added to the database by the user in the form of text or a recording.
Wherein, in the step of setting the initial expression of the role model, the terminal associates the first feature value obtained from the facial expression image with the role model as the baseline for the role model's facial expression feature values; after the terminal detects a change in the user's facial expression, the change is compared against this baseline.
To solve the above technical problem, another technical scheme adopted by the present invention is to provide a terminal for emitting a corresponding voice following facial expressions. The terminal includes: a facial expression extraction module, a facial expression association module, and a role model control module. The facial expression extraction module is used to obtain the user's facial expression image, extract the first feature value from the image to set the role model's initial expression, and extract the second feature value after the user's facial expression changes; the facial expression association module is used to monitor changes in the user's facial expression and associate the first and second feature values with the role model in real time; the role model control module is used to cause the terminal to emit the voice set by the matching pattern. The feature values include one or more of the positions of the eyes, the distance between the eyes, and the positions of the mouth, nose, and chin; the matching pattern records the correspondence between the second feature value and the role model's utterance.
Wherein the role model control module includes an expression feature comparison unit, a database query unit, and a voice playback unit. The expression feature comparison unit compares the first feature value and the second feature value of the facial expression change; the database query unit queries the terminal's database, according to the comparison result and the matching pattern, for the pre-stored voice corresponding to the second feature value; and the voice playback unit plays the voice according to the second feature value when the role model's facial expression changes.
Wherein the database of the terminal includes at least one voice corresponding to each change in the user's facial expression feature values, and the context of the voice matches the current facial expression of the role model.
Wherein the voice is preset in the terminal's database, or is added to the database by the user in the form of text or a recording.
Wherein the first feature value associated with the role model's facial expression serves as the baseline for the role model's facial expression feature values, and changes in the user's facial expression are compared against this baseline.
The beneficial effects of the invention are as follows. Unlike the prior art, in the method of the present invention for emitting a corresponding voice following facial expressions, after the terminal detects a change in the facial expression features through its face recognition function, it associates the changed facial expression features with the role model and simultaneously emits, in the voice of the role model, the voice corresponding to the current facial expression. This makes full use of the hardware to extend game play, increases the variety of game play, and greatly enhances the fun of the game.
Brief description of the drawings
Fig. 1 is a schematic flowchart of a first embodiment of the method for emitting a corresponding voice following facial expressions according to the present invention;
Fig. 2 is a schematic flowchart of a second embodiment of the method for emitting a corresponding voice following facial expressions according to the present invention;
Fig. 3 is a schematic flowchart of a third embodiment of the method for emitting a corresponding voice following facial expressions according to the present invention;
Fig. 4 is a schematic structural diagram of a first embodiment of the terminal for emitting a corresponding voice following facial expressions according to the present invention.
Detailed description of the invention
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a first embodiment of the method for emitting a corresponding voice following facial expressions according to the present invention. The method comprises the following steps:
S101: the terminal obtains the user's facial expression image and extracts the first feature value from the image to set the initial facial expression of the role model.
The main technical flow of face recognition comprises the steps of image capture, processing, feature value extraction, and recognition. The technology is commonly used in identity verification and security, where processing and recognition must be accurate. Mobile terminals also provide a basic face recognition function, which can be used to follow facial expressions: the captured facial expression features are given to the virtual role in an imitation virtual-pet game. Six basic emotions are generally recognized: happiness, sadness, disgust, fear, surprise, and anger, and the expressions corresponding to these six basic emotions can be effectively identified through face recognition.
In this embodiment, after the game is opened, the terminal obtains the user's facial expression image through the camera, performs face recognition on the image, and extracts the first feature value of the facial expression from the image. The most salient facial features are the positions of the eyes, the distance between the eyes, and the positions of the mouth, nose, and chin. The terminal extracts the relative positions of these salient feature points and obtains the face recognition feature data from the distances between them; the feature components typically include the Euclidean distances, curvatures, and angles between feature points. Each feature point of the role model's face is then made to correspond to the position of the corresponding feature point in the user's facial expression image. At this point the role model is considered to have been given the user's expression at that moment.
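By way of illustration only (this sketch is not part of the patent disclosure), a feature value of the kind described above could be computed from facial landmark coordinates roughly as follows. The landmark names, the dictionary structure, and the restriction to Euclidean distances (omitting curvatures and angles) are all assumptions made for the example.

```python
import math

def feature_value(landmarks):
    """Compute a simple feature value from facial landmark coordinates.

    `landmarks` is assumed to map point names to (x, y) tuples, as produced
    by any off-the-shelf face-landmark detector; the point names here are
    illustrative.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    return {
        # distance between the eyes
        "eye_distance": dist(landmarks["left_eye"], landmarks["right_eye"]),
        # eyelid spacing, used to detect widening or narrowing eyes
        "eye_opening": dist(landmarks["upper_eyelid"], landmarks["lower_eyelid"]),
        # lip spacing, used to detect an opening or pursing mouth
        "mouth_opening": dist(landmarks["upper_lip"], landmarks["lower_lip"]),
        # lower-face proportion from the nose and chin positions
        "nose_to_chin": dist(landmarks["nose"], landmarks["chin"]),
    }
```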
S102: the terminal monitors changes in the user's facial expression and associates the changed second feature value with the role model in real time.
The terminal's camera monitors the user's facial expression in front of the camera in real time and extracts its feature value. If the extracted feature value is identical to the first feature value from step S101, no action is taken and monitoring continues; once a change in the user's facial expression is detected, the changed facial expression image is captured and its feature value is extracted as the second feature value. By the same method as in step S101, the second feature value of the user's facial expression is given to the role model in the game, changing the role model's facial expression so that it matches the user's current expression.
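A minimal sketch of this monitoring loop, reusing the hypothetical feature_value helper above; the capture_frame, detect_landmarks, role_model, and on_change interfaces, the polling period, and the tolerance are all assumptions:

```python
import time

def monitor(capture_frame, detect_landmarks, role_model, on_change,
            period=0.05, tol=2.0):
    # S101: extract the first feature value and set the initial expression.
    first = feature_value(detect_landmarks(capture_frame()))
    role_model.apply(first)
    while True:
        second = feature_value(detect_landmarks(capture_frame()))
        # No change within tolerance: keep monitoring without processing.
        if any(abs(second[k] - first[k]) > tol for k in first):
            role_model.apply(second)   # S102: associate the second feature value
            on_change(first, second)   # S103: emit the matched voice (see below)
        time.sleep(period)
```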
S103: while the role model is associated with the second feature value, the terminal emits the voice set by the matching pattern.
Among the most salient facial features, namely the positions of the eyes, the distance between the eyes, and the positions of the mouth, nose, and chin, a change in any one or several of these feature points yields a new facial expression, and the combination of the feature points forms the second feature value. In this embodiment, the terminal presets a matching pattern that records the correspondence between second feature values and the role model's utterances. That is, a change in any one or several components of the first feature value of the user's facial expression is recorded in the matching pattern, and a corresponding voice is associated with it. In a specific embodiment, when the terminal detects that the two feature points of the user's mouth and eyes change in front of the camera, with the mouth opening wide and the eyes widening, the facial expression of the role model changes accordingly and it emits a voice such as "Oh my god" or "My friends and I are all stunned"; or, when the terminal detects that the user's mouth and eyes change such that the mouth purses and shrinks and the eyes narrow, the role model's facial expression changes accordingly and it emits a voice such as "I'm angry".
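The matching pattern might be represented as follows. The emotion names and voice lines are taken from the embodiments above; the data structures, change descriptors, and threshold are assumptions made for illustration:

```python
import random

# Combinations of feature changes map to an emotion type; each emotion type
# maps to fitting voice lines (emotions and lines from the embodiments above).
MATCH_PATTERN = {
    frozenset({"mouth_opening:up", "eye_opening:up"}): "shock",
    frozenset({"mouth_opening:down", "eye_opening:down"}): "anger",
}

VOICES = {
    "shock": ["Oh my god", "My friends and I are all stunned"],
    "anger": ["I'm angry"],
}

def describe_changes(first, second, tol=2.0):
    """Collect which feature values moved up or down between two readings."""
    return frozenset(
        f"{k}:{'up' if second[k] > first[k] else 'down'}"
        for k in first if abs(second[k] - first[k]) > tol
    )

def emit_voice(first, second, play):
    """Look up the emotion for the observed change and play a matching voice."""
    emotion = MATCH_PATTERN.get(describe_changes(first, second))
    # Unrecognized change: fall back to the full pool of voices.
    pool = VOICES.get(emotion, sum(VOICES.values(), []))
    play(random.choice(pool))  # `play` is an assumed audio-playback callable
```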
Unlike the prior art, in the method of this embodiment for emitting a corresponding voice following facial expressions, after the terminal detects a change in the facial expression features through its face recognition function, it associates the changed facial expression features with the role model and simultaneously emits, in the voice of the role model, the voice corresponding to the current facial expression. This makes full use of the hardware to extend game play, increases the variety of game play, and greatly enhances the fun of the game.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of a second embodiment of the method for emitting a corresponding voice following facial expressions according to the present invention. The method comprises the following steps:
S201: the terminal obtains the user's facial expression image and extracts the first feature value from the image to set the initial facial expression of the role model.
S202: the terminal monitors changes in the user's facial expression and associates the changed second feature value with the role model in real time.
In this embodiment, the terminal obtains the user's facial expression image, extracts the first feature value from it, sets the initial facial expression of the role model according to the first feature value, and continues to monitor changes in the user's facial expression; after extracting the second feature value from the changed facial expression image and associating it with the role model, the method proceeds to step S203.
S203: the terminal compares the first feature value and the second feature value of the facial expression change.
After obtaining the second feature value, the terminal compares it with the first feature value, checking each feature value in the user's facial expression image one by one and collecting all the differences between the second feature value and the first feature value. Once the comparison is complete, the method proceeds to step S204.
S204: according to the comparison result and the matching pattern, the terminal queries its database for the pre-stored voice corresponding to the second feature value.
In this embodiment, the terminal's database enumerates the ways in which any of the features, that is, the positions of the eyes, the distance between the eyes, and the positions of the mouth, nose, and chin, may change, for instance by monitoring changes in the spacing between the upper and lower eyelids. Random combinations of these possible changes can form a particular human emotion. The database establishes a correspondence between emotion types and facial expressions: in the earlier embodiment, widening eyes and mouth represent the emotion of shock. The database then establishes a correspondence between voices and emotion types: in the earlier embodiment, voices such as "Oh my god" or "My friends and I are all stunned" correspond to shock. For a second feature value composed of multiple feature changes, the emotion type can essentially be determined; the terminal determines this emotion type and queries the database for a voice that matches this emotion category. For a single feature change, the emotion type cannot be determined unambiguously, and the terminal randomly matches a currently popular internet phrase, a line of classical poetry, or song lyrics. Combining the comparison result of the first and second feature values obtained in step S203: if multiple facial expression features have changed, the terminal first determines the emotion type represented by the multiple changes and selects the corresponding voice from the database; if only a single feature has changed and the emotion type cannot be determined, the terminal draws a voice at random from the voice library.
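A sketch of this query logic, refining the emit_voice example after step S103; the single-change fallback to a generic pool mirrors the rule just described, and all structures and placeholder strings remain assumptions:

```python
import random

# Generic pool used when the emotion type cannot be determined: popular
# internet phrases, classical poetry, and song lyrics (placeholders here).
GENERIC_POOL = ["<internet phrase>", "<classical poetry line>", "<song lyrics>"]

def query_voice(changes, match_pattern, voices):
    """Select a voice for the collected feature changes (step S204).

    `changes` is the set of change descriptors collected in S203;
    `match_pattern` and `voices` are as in the sketch after step S103.
    """
    if len(changes) <= 1:
        # A single feature change does not determine an emotion type,
        # so draw randomly from the generic pool.
        return random.choice(GENERIC_POOL)
    emotion = match_pattern.get(frozenset(changes))
    if emotion is None:
        return random.choice(GENERIC_POOL)
    # Multiple changes forming a known emotion: pick a matching voice.
    return random.choice(voices[emotion])
```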
S205: the terminal plays the voice according to the second feature value when the role model's facial expression changes.
After the user's facial expression changes, the terminal extracts the feature value, determines the emotion type of the changed expression, and matches the voice corresponding to the current expression; the role model's facial expression follows the change in the user's facial expression and the voice is emitted at the same time, without any time gap.
Unlike the prior art, in the method of this embodiment for emitting a corresponding voice following facial expressions, the terminal determines the user's emotion type according to the user's facial expression after the change and finds the voice corresponding to that emotion type in its database, making full use of the hardware to extend game play while enhancing the fun of the game.
Referring to Fig. 3, Fig. 3 is a schematic flowchart of a third embodiment of the method for emitting a corresponding voice following facial expressions according to the present invention. The method comprises the following steps:
S301: the terminal presets the voices in its database.
Voices for the game are preset in the terminal's database, forming a voice library. The library includes voices shipped with the game and voices added by the user. Through the network and other channels, the terminal collects popular internet phrases, classical poetry, and lyrics that suit human emotions. Such voices can be recorded by the terminal and played after voice-changing processing, or played directly in the original voice. The user can also add voices to the library: the user may upload his or her own songs, catchphrases, or local dialect to the terminal's voice library, assign the uploaded voices to the voices corresponding to a given emotion type, and set the voices added by the user to be played with priority. The user's voice is recorded into the terminal's voice library, and when the user has assigned the voice to a certain emotion type, the terminal, upon detecting that the user's facial expression represents that emotion type, changes the role model's facial expression and plays that voice. The user can also add a voice in text form: the user edits a sentence or lyric as text, and the terminal obtains the corresponding voice through software editing, the network, or other channels and downloads it into the voice library. The user can add voices at any time during subsequent use of this embodiment. Once the voice library is set up, the subsequent steps can be carried out.
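One possible shape for such a voice library, with the user-priority behavior described above; the class, method names, and file name are hypothetical:

```python
import random

class VoiceLibrary:
    """Illustrative voice library for step S301 (structure is an assumption).

    Each emotion type holds two pools: voices shipped with the game and
    voices added by the user (recordings, or text turned into speech).
    """
    def __init__(self):
        self.preset = {}        # emotion type -> list of built-in clips
        self.user_added = {}    # emotion type -> list of user clips
        self.prefer_user = False

    def add_user_voice(self, emotion, clip):
        self.user_added.setdefault(emotion, []).append(clip)

    def pick(self, emotion):
        user = self.user_added.get(emotion, [])
        if self.prefer_user and user:
            return random.choice(user)  # user voices play with priority
        pool = self.preset.get(emotion, []) + user
        return random.choice(pool) if pool else None

# Usage sketch:
lib = VoiceLibrary()
lib.preset["shock"] = ["Oh my god", "My friends and I are all stunned"]
lib.add_user_voice("shock", "user_recording_001.wav")  # hypothetical file
lib.prefer_user = True
print(lib.pick("shock"))  # -> "user_recording_001.wav"
```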
S302: the terminal obtains the user's facial expression image and extracts the first feature value from the image to set the initial facial expression of the role model.
When setting the initial facial expression of the role model, the first feature value is extracted from the user's facial expression image captured by the camera. The terminal may set the role model's initial facial expression to be fixed: the initial expression is associated once with a first feature value extracted from a user's facial expression image, and when the game is opened subsequently, the role model keeps the previous initial expression and the first feature value is extracted from the role model's initial facial expression. Alternatively, each time the game is opened, the role model's initial facial expression is associated with a first feature value extracted in real time from the user's facial expression image. After the role model's initial facial expression is determined, the method proceeds to step S303.
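A small sketch of the two initialization modes just described; the mode flag, the persisted `saved` value, and all interface names are assumptions:

```python
def initial_feature_value(mode, capture_frame, detect_landmarks, saved=None):
    """Choose the role model's initial expression (step S302).

    `mode` is either "fixed" (reuse the feature value associated on a
    previous launch, passed in as `saved`) or "realtime" (extract a fresh
    first feature value from the camera). All names are illustrative.
    """
    if mode == "fixed" and saved is not None:
        return saved
    return feature_value(detect_landmarks(capture_frame()))
```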
S303: the terminal monitors changes in the user's facial expression and associates the changed second feature value with the role model in real time.
S304: while the role model is associated with the second feature value, the terminal emits the voice set by the matching pattern.
The above two steps are identical to the corresponding steps in the first embodiment of the method for emitting a corresponding voice following facial expressions according to the present invention, and are not repeated here.
Unlike the prior art, in the method of this embodiment for emitting a corresponding voice following facial expressions, the user can add voices to the voice library in the terminal's database, so that after the game's role model changes its facial expression it can emit a voice set by the user, possibly in the user's own voice, making full use of the hardware to extend game play while enhancing the fun of the game.
Referring to Fig. 4, Fig. 4 is a schematic structural diagram of a first embodiment of the terminal for emitting a corresponding voice following facial expressions according to the present invention. The terminal 400 includes a facial expression extraction module 410, a facial expression association module 420, and a role model control module 430. The facial expression extraction module 410 extracts the first feature value from the user's facial expression image when the game is opened, or, when the user selects a fixed role model facial expression, extracts the first feature value from the fixed role model facial expression; the first feature value is one or more of the positions of the eyes, the distance between the eyes, and the positions of the mouth, nose, and chin, and these feature values determine the user's facial expression. The facial expression extraction module 410 is also used to extract the second feature value after the facial expression association module 420 detects that the user's facial expression has changed. The facial expression association module 420 associates the first and second feature values extracted by the facial expression extraction module 410 with the role model according to the user's facial expression. The role model control module 430 includes an expression feature comparison unit 431, a database query unit 432, and a voice playback unit 433. The expression feature comparison unit 431 compares the first feature value with the second feature value, checking each feature value in the user's facial expression image one by one and collecting all the differences between them; the first feature value associated with the role model serves as the baseline, and each feature value extracted after a subsequent change in the user's facial expression is a second feature value. The database query unit 432 checks whether the differences between the second feature value and the baseline first feature value collected by the expression feature comparison unit 431 match an emotion type preset in the database. Random combinations of the variation patterns of certain facial feature points form emotion types, so the comparison result of the expression feature comparison unit 431 falls into two cases: either the changed facial feature points combine into an emotion type, or the changed feature values are too few to match any expression type recorded in the database. The database query unit 432 queries accordingly: in the first case it queries the voices corresponding to that emotion type and randomly selects one of them; in the second case it randomly selects a voice. If the user has set user-added voices to be selected with priority, the database query unit 432 preferentially selects user-added voices when selecting in the above manner. User-added voices are added to the terminal's database by the user in the form of text or recordings. The database of the terminal includes at least one voice corresponding to each change in the user's facial expression feature values, and the context of the voice matches the current facial expression of the role model. After the database query unit 432 selects a voice, the voice playback unit 433 plays the selected voice when the role model's facial expression changes.
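How the three modules and three units might be wired together, mirroring the reference numerals above; every class, method, and parameter name here is an assumption, not part of the disclosure:

```python
class Terminal:
    """Illustrative wiring of terminal 400: extraction module 410,
    association module 420, and control module 430 (which holds the
    comparison unit 431, query unit 432, and playback unit 433)."""

    def __init__(self, extraction, association, control):
        self.extraction = extraction    # module 410
        self.association = association  # module 420
        self.control = control          # module 430

    def on_frame(self, frame):
        # Module 410 extracts the current feature value from the frame.
        value = self.extraction.extract(frame)
        # Module 420 binds it to the role model if the expression changed.
        if self.association.has_changed(value):
            self.association.bind(value)
            diffs = self.control.compare(value)  # unit 431: diff vs. baseline
            voice = self.control.query(diffs)    # unit 432: emotion -> voice
            self.control.play(voice)             # unit 433: play on change
```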
In contrast to the prior art, the terminal of the present invention for emitting a corresponding voice following facial expressions detects changes in the facial expression features through its face recognition function, associates the changed facial expression features with the role model, and simultaneously emits, as the role model's utterance, the voice corresponding to the current facial expression, where the voice can be set by the user. This makes full use of the hardware to extend game play, increases the variety of game play, and greatly enhances the fun of the game.
The foregoing describes only embodiments of the present invention and does not thereby limit the scope of the claims. Any equivalent structure or equivalent process transformation made using the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

1. A method for emitting a corresponding voice following facial expressions, characterized in that the method comprises the following steps:
a terminal obtains a facial expression image of a user and extracts a first feature value from the image to set an initial facial expression of a role model;
the terminal monitors changes in the user's facial expression and associates a changed second feature value with the role model in real time;
while the role model is associated with the second feature value, the terminal emits a voice set by a matching pattern;
wherein the first feature value and the second feature value include one or more of the positions of the eyes, the distance between the eyes, and the positions of the mouth, nose, and chin; and the matching pattern records a correspondence between the second feature value and the role model's utterance.
2. The method according to claim 1, characterized in that the step of the terminal emitting the voice set by the matching pattern while the role model is associated with the second feature value comprises:
the terminal compares the first feature value and the second feature value of the facial expression change;
according to the comparison result and the matching pattern, the terminal queries a database of the terminal for a pre-stored voice corresponding to the second feature value; and
the terminal plays the voice according to the second feature value when the role model's facial expression changes.
3. The method according to claim 2, characterized in that the database of the terminal includes at least one voice corresponding to each change in the user's facial expression feature values, and the context of the voice matches the current facial expression of the role model.
4. The method according to claim 2, characterized in that, in the step of querying the voice corresponding to the second feature value, the voice is preset in the database of the terminal, or is added to the database of the terminal by the user in the form of text or a recording.
5. The method according to claim 1, characterized in that, in the step of setting the initial expression of the role model, the terminal associates the first feature value obtained from the facial expression image with the role model as a baseline for the role model's facial expression feature values, and after the terminal detects a change in the user's facial expression, the change is compared against the baseline.
6. A terminal for emitting a corresponding voice following facial expressions, characterized in that the terminal comprises: a facial expression extraction module, a facial expression association module, and a role model control module;
wherein the facial expression extraction module is configured to obtain a facial expression image of a user, extract a first feature value from the image to set an initial expression of a role model, and extract a second feature value after the user's facial expression changes; the facial expression association module is configured to monitor changes in the user's facial expression and associate the first feature value and the second feature value with the role model in real time; the role model control module is configured to cause the terminal to emit a voice set by a matching pattern; the feature values include one or more of the positions of the eyes, the distance between the eyes, and the positions of the mouth, nose, and chin; and the matching pattern records a correspondence between the second feature value and the role model's utterance.
7. The terminal according to claim 6, characterized in that the role model control module comprises: an expression feature comparison unit, a database query unit, and a voice playback unit;
wherein the expression feature comparison unit is configured to compare the first feature value and the second feature value of the facial expression change; the database query unit is configured to query a database of the terminal, according to the comparison result and the matching pattern, for a pre-stored voice corresponding to the second feature value; and the voice playback unit is configured to play the voice according to the second feature value when the role model's facial expression changes.
8. The terminal according to claim 7, characterized in that the database of the terminal includes at least one voice corresponding to each change in the user's facial expression feature values, and the context of the voice matches the current facial expression of the role model.
9. The terminal according to claim 7, characterized in that the voice is preset in the database of the terminal, or is added to the database of the terminal by the user in the form of text or a recording.
10. The terminal according to claim 6, characterized in that the first feature value associated with the role model's facial expression serves as a baseline for the role model's facial expression feature values, and changes in the user's facial expression are compared against the baseline.
CN201410856659.7A 2014-12-31 2014-12-31 Method for giving out corresponding voice in following way by being matched with face expressions and terminal Pending CN105797374A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410856659.7A CN105797374A (en) 2014-12-31 2014-12-31 Method for giving out corresponding voice in following way by being matched with face expressions and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410856659.7A CN105797374A (en) 2014-12-31 2014-12-31 Method for giving out corresponding voice in following way by being matched with face expressions and terminal

Publications (1)

Publication Number Publication Date
CN105797374A (en) 2016-07-27

Family

ID=56465385

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410856659.7A Pending CN105797374A (en) 2014-12-31 2014-12-31 Method for giving out corresponding voice in following way by being matched with face expressions and terminal

Country Status (1)

Country Link
CN (1) CN105797374A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101393599A (en) * 2007-09-19 2009-03-25 中国科学院自动化研究所 Game role control method based on human face expression
CN101836219A (en) * 2007-11-01 2010-09-15 索尼爱立信移动通讯有限公司 Generating music playlist based on facial expression
US20120148161A1 (en) * 2010-12-09 2012-06-14 Electronics And Telecommunications Research Institute Apparatus for controlling facial expression of virtual human using heterogeneous data and method thereof
CN102271241A (en) * 2011-09-02 2011-12-07 北京邮电大学 Image communication method and system based on facial expression/action recognition
CN102541259A (en) * 2011-12-26 2012-07-04 鸿富锦精密工业(深圳)有限公司 Electronic equipment and method for same to provide mood service according to facial expression

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018137455A1 (en) * 2017-01-25 2018-08-02 迈吉客科技(北京)有限公司 Image interaction method and interaction apparatus
CN109150690A (en) * 2017-06-16 2019-01-04 腾讯科技(深圳)有限公司 Interaction data processing method, device, computer equipment and storage medium
CN109150690B (en) * 2017-06-16 2021-05-25 腾讯科技(深圳)有限公司 Interactive data processing method and device, computer equipment and storage medium
CN108176056A (en) * 2018-01-08 2018-06-19 深圳市原力科创发展有限公司 Face-changing sounding doll
CN108176056B (en) * 2018-01-08 2023-12-29 东莞市金钐科技有限公司 Face-changing sounding doll
CN111787986A (en) * 2018-02-28 2020-10-16 苹果公司 Voice effects based on facial expressions
CN108905192A (en) * 2018-06-01 2018-11-30 北京市商汤科技开发有限公司 Information processing method and device, storage medium
CN109045688A (en) * 2018-07-23 2018-12-21 广州华多网络科技有限公司 Game interaction method, apparatus, electronic equipment and storage medium
CN109045688B (en) * 2018-07-23 2022-04-26 广州方硅信息技术有限公司 Game interaction method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN105797374A (en) Method for giving out corresponding voice in following way by being matched with face expressions and terminal
CN108075892B (en) Voice processing method, device and equipment
CN105512228B (en) A kind of two-way question and answer data processing method and system based on intelligent robot
CN108804698A (en) Man-machine interaction method, system, medium based on personage IP and equipment
CN107340865A (en) Multi-modal virtual robot exchange method and system
US20180085928A1 (en) Robot, robot control method, and robot system
CN109887484A (en) A kind of speech recognition based on paired-associate learning and phoneme synthesizing method and device
CN106649704A (en) Intelligent dialogue control method and intelligent dialogue control system
CN107766506A (en) A kind of more wheel dialog model construction methods based on stratification notice mechanism
CN111672098A (en) Virtual object marking method and device, electronic equipment and storage medium
CN107368572A (en) Multifunctional intellectual man-machine interaction method and system
CN109176535A (en) Exchange method and system based on intelligent robot
JPWO2017130497A1 (en) Communication system and communication control method
WO2017191696A1 (en) Information processing system and information processing method
Zhao et al. Chatbridge: Bridging modalities with large language model as a language catalyst
TWI736054B (en) Avatar facial expression generating system and method of avatar facial expression generation
CN105744368A (en) Method for television account-based user management by employing voiceprint recognition technology
CN106502382A (en) Active exchange method and system for intelligent robot
CN107924482A (en) Emotional control system, system and program
CN108052250A (en) Virtual idol deductive data processing method and system based on multi-modal interaction
WO2020129959A1 (en) Computer program, server device, terminal device, and display method
CN109324515A (en) A kind of method and controlling terminal controlling intelligent electric appliance
CN105797375A (en) Method and terminal for changing role model expressions along with user facial expressions
CN110442867A (en) Image processing method, device, terminal and computer storage medium
CN110324650A (en) Method, apparatus, electronic equipment and the storage medium of Data Matching

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20161214

Address after: Room 806, Global Digital Building, No. 9 Gaoxinzhong 3rd Avenue, Yuehai Subdistrict, Nanshan District, Shenzhen, Guangdong 518000

Applicant after: Shenzhen magic eye Technology Co., Ltd.

Address before: Room 806, Global Digital Building, Gaoxinzhong 3rd Avenue, Hi-Tech Zone, Nanshan District, Shenzhen 518000

Applicant before: SHENZHEN ESTAR DISPLAYTECH CO., LTD.

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20160727

WD01 Invention patent application deemed withdrawn after publication