CN105797375A - Method and terminal for changing role model expressions along with user facial expressions - Google Patents

Method and terminal for changing role model expressions along with user facial expressions

Info

Publication number
CN105797375A
CN105797375A (application CN201410856666.7A)
Authority
CN
China
Prior art keywords
eigenvalue
emotion
type
terminal
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410856666.7A
Other languages
Chinese (zh)
Inventor
刘美鸿
陈易华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen magic eye Technology Co., Ltd.
Original Assignee
Shenzhen Estar Displaytech Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Estar Displaytech Co filed Critical Shenzhen Estar Displaytech Co
Priority to CN201410856666.7A
Publication of CN105797375A
Legal status: Pending


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method and terminal for changing a role model's expression to follow the user's facial expression. The method comprises the steps that the terminal acquires an image of the user's facial expression in real time and extracts a first feature value from the image; the terminal determines the emotion type corresponding to the first feature value according to a first preset mode; and, according to the determined emotion type, a second feature value is selected according to a second preset mode so as to change the role model's facial expression. In this way, a game role model can make correspondingly different expressions following the user's facial expression, the hardware of the device is fully exploited to extend the gameplay, and the fun of the game is enhanced.

Description

Method and terminal for changing a role model's expression to follow a user's facial expression
Technical field
The present invention relates to the field of image recognition, and in particular to a method and a terminal for changing a role model's expression to follow a user's facial expression.
Background art
In currently popular virtual-role games on mobile terminals, the role model reacts quickly when the user touches different parts of the model, or the terminal records speech the user addresses to the role model and plays it back with an altered voice accompanied by mouth movements of the role model, which is highly amusing.
However, this kind of gameplay is monotonous: user operations merely trigger fixed actions of the role model, or the role model imitates the user's speech. The hardware of the device is not fully exploited to extend the gameplay.
Summary of the invention
The technical problem mainly solved by the present invention is to provide a method and terminal for changing a role model's expression to follow the user's facial expression, so that the virtual role model can make correspondingly different expressions following the user's facial expression, making full use of the hardware to extend the gameplay and enhancing the fun of the game.
To solve the above technical problem, the technical solution adopted by the present invention is to provide a method for changing a role model's facial expression to follow a user's facial expression. The steps of the method include: the terminal acquires the user's facial expression image in real time and extracts a first feature value from the image; the terminal determines the emotion type corresponding to the first feature value according to a first preset mode; according to the determined emotion type, a second feature value is selected according to a second preset mode so as to change the role model's facial expression. The first feature value and the second feature value each include one or more of the positions of the eyes, the distance between the two eyes, and the positions of the eyebrows, mouth, nose and chin; the second feature value is triggered by the determined emotion type and differs from the first feature value.
Wherein, in the step in which the terminal acquires the user's facial expression image in real time, the terminal also acquires the user's speech in real time; the user's speech is speech addressed by the user to the role model, and its tone is any one of describing, ordering, scolding and praising.
Wherein, in the step of determining the emotion type corresponding to the user's facial expression according to the first preset mode, if the emotion type cannot be determined, the terminal determines the emotion type in combination with the first feature value and the user's speech; if the emotion type still cannot be determined, the terminal determines an emotion type directly.
Wherein, the first preset mode records the correspondence between first feature values and emotion types, and the second preset mode records the related or inverse correspondence between the emotion types of the first preset mode and second feature values.
Wherein, while changing the role model's facial expression, the terminal plays a voice corresponding to the role model's expression.
To solve the above technical problem, another technical solution adopted by the present invention is to provide a terminal for changing a role model's expression to follow a user's facial expression. The terminal includes an image collection module, a first processing module and a second processing module. The image collection module acquires the user's facial expression image in real time and extracts a first feature value from the image; the first processing module determines the corresponding emotion type according to a first preset mode; the second processing module selects, according to the emotion type and a second preset mode, a second feature value to be applied to the role model. The first feature value and the second feature value each include one or more of the positions of the eyes, the distance between the two eyes, and the positions of the eyebrows, mouth, nose and chin; the second feature value is triggered by the determined emotion type and differs from the first feature value.
Wherein, the terminal further includes a voice acquisition module, which acquires the user's speech in real time while the image collection module acquires the user's facial expression image in real time; the user's speech is speech addressed by the user to the role model, and its tone is any one of describing, ordering, scolding and praising.
Wherein, if the first processing module cannot determine the emotion type from the first feature value, the emotion type is determined from the user's speech; if the emotion type still cannot be determined, the terminal determines an emotion type directly.
Wherein, the first preset mode records the correspondence between first feature values and emotion types, and the second preset mode records the related or inverse correspondence between the emotion types of the first preset mode and second feature values.
Wherein, the terminal further includes a voice playing module, which plays a voice corresponding to the role model's expression while the role model's facial expression is being changed.
The beneficial effects of the invention are as follows. Unlike the prior art, the terminal of the present invention uses image recognition to identify the user's facial expression, determines the emotion type represented by the user's current facial expression, selects the facial-expression feature value corresponding to that emotion type from a preset mode stored in the terminal and applies it to the role model, thereby fully exploiting the hardware to extend the gameplay, enriching the gameplay and greatly enhancing the fun of the game.
Brief description of the drawings
Fig. 1 is a schematic flowchart of a first embodiment of the method for changing a role model's expression to follow a user's facial expression according to the present invention;
Fig. 2 is a schematic flowchart of a second embodiment of the method for changing a role model's expression to follow a user's facial expression according to the present invention;
Fig. 3 is a schematic flowchart of a third embodiment of the method for changing a role model's expression to follow a user's facial expression according to the present invention;
Fig. 4 is a schematic structural diagram of a first embodiment of the terminal for changing a role model's expression to follow a user's facial expression according to the present invention.
Detailed description of the invention
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a first embodiment of the method for changing a role model's expression to follow a user's facial expression according to the present invention. The steps of the method are as follows:
S101: the terminal acquires the user's facial expression image in real time and extracts a first feature value from the image.
The technical flow of face recognition mainly comprises the steps of acquisition, processing, feature extraction and recognition. The technology is commonly used in identity authentication and security, where processing and recognition must be accurate. Many mobile phones now also provide face recognition, which can be used to follow the user's facial expression and transfer the captured expression features to the virtual role in a virtual-pet game. Six basic emotions are generally recognized at present: happiness, sadness, disgust, fear, surprise and anger. The expressions corresponding to these six basic emotions can be effectively identified by face recognition.
In this embodiment, after the game is opened, the terminal captures an image of the user's facial expression through the camera, performs face recognition on the image and extracts a first feature value from it; the first feature value is a combination of one or more feature points of the user's facial expression. The most salient facial features are the positions of the eyes, the distance between the two eyes, and the positions of the mouth, nose and chin. The terminal extracts the relative positions of these salient feature points and derives the face-recognition feature data from the distances between them; the feature components typically include the Euclidean distances between feature points, curvatures and angles. After the first feature value is extracted, the method proceeds to step S102.
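As a concrete illustration of such a feature value (a minimal Python sketch, not part of the original disclosure; the feature-point names, coordinates and the normalization choice are assumptions), the following fragment derives a first feature value from hypothetical feature-point coordinates using Euclidean distances:

    import math

    # Hypothetical feature-point coordinates (x, y) from a face detector.
    # Only points named in the disclosure are used: eyes, eyebrows,
    # mouth, nose and chin.
    landmarks = {
        "left_eye": (120.0, 150.0), "right_eye": (180.0, 150.0),
        "left_brow": (118.0, 135.0), "nose_tip": (150.0, 185.0),
        "mouth_left": (130.0, 215.0), "mouth_right": (170.0, 215.0),
        "chin": (150.0, 250.0),
    }

    def dist(a, b):
        """Euclidean distance between two feature points."""
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def first_feature_value(lm):
        """Relative distances between salient points, normalized by the
        distance between the two eyes so the value is scale-invariant."""
        eye_gap = dist(lm["left_eye"], lm["right_eye"])
        return {
            "brow_to_eye": dist(lm["left_brow"], lm["left_eye"]) / eye_gap,
            "mouth_width": dist(lm["mouth_left"], lm["mouth_right"]) / eye_gap,
            "nose_to_chin": dist(lm["nose_tip"], lm["chin"]) / eye_gap,
        }

    print(first_feature_value(landmarks))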
S102: the terminal determines the emotion type corresponding to the first feature value according to the first preset mode.
A person's emotion is usually shown directly by the facial expression, so a combination of several feature points of the user's face can be taken to represent the person's emotion type. After obtaining the first feature value in step S101, the terminal uses the state of each feature point and the relative positions between the feature points to look up, in the first preset mode stored in the terminal, the emotion type corresponding to the current first feature value. The first preset mode records the correspondence between combinations of facial feature points in different states and human emotion types; that is, the emotion type to which the user's real-time facial expression belongs is determined according to the first preset mode. In this embodiment, the terminal's database stores the combinations of the feature points in different states and the corresponding emotion types. For example, the terminal analyses the first feature value obtained in step S101, scanning the eye positions, the distance between the eyes and the positions of the mouth, nose and chin; it takes the overall state of these feature points, matches it against the feature-point state combinations stored in the database, retrieves the emotion type stored for the matching combination, and proceeds to step S103.
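A minimal sketch of such a database match follows (the stored combinations, state names and emotion labels are assumptions, not contents of the actual first preset mode):

    # First preset mode: combinations of feature-point states mapped to
    # emotion types; each row is an illustrative assumption.
    FIRST_PRESET_MODE = {
        frozenset({("eyebrows", "raised"), ("mouth_corners", "up"),
                   ("eyes", "narrowed")}): "happy",
        frozenset({("brows", "knitted"), ("mouth_corners", "down"),
                   ("eyes", "narrowed")}): "sad",
        frozenset({("eyebrows", "raised"), ("mouth", "open"),
                   ("eyes", "wide")}): "surprised",
    }

    def lookup_emotion(observed_states):
        """Return the emotion type whose stored combination is contained
        in the observed feature-point states, or None if none matches."""
        observed = frozenset(observed_states)
        for combination, emotion in FIRST_PRESET_MODE.items():
            if combination <= observed:  # every stored state is present
                return emotion
        return None

    states = {("eyebrows", "raised"), ("mouth_corners", "up"),
              ("eyes", "narrowed"), ("nose", "neutral")}
    print(lookup_emotion(states))  # -> "happy"

Returning None here corresponds to the "emotion type cannot be determined" branch handled in the second embodiment below.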
S103: according to the determined emotion type, a second feature value is selected according to the second preset mode so as to change the role model's facial expression.
The terminal's database also stores a second preset mode. The second preset mode records the related or inverse correspondence between emotion types and facial-expression feature points. The second feature value is triggered by the emotion type determined in step S102 and differs from the first feature value: the two differ in at least one feature point. The facial expression corresponding to the second feature value does not necessarily match the emotion type given by the second preset mode. When the facial expression corresponding to the second feature value is related to that emotion type, the second feature value must still differ from the first feature value. For example, if the analysis in step S102 shows that the first feature value corresponds to anger, the second feature value triggered by that emotion may express grievance; or, if the analysis shows that the first feature value corresponds to surprise, the second feature value may also express surprise, but the second feature value must differ from the first, for example in the state of some facial feature points. After the second feature value is selected, the role model's facial expression is changed according to it.
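The selection and the forced difference from the first feature value could look like the following sketch (the table rows and the perturbation rule are assumptions):

    # Second preset mode: determined emotion type -> feature value for
    # the role model's expression. Both rows are illustrative.
    SECOND_PRESET_MODE = {
        # inverse correspondence: user anger triggers a grievance look
        "angry": {"eyebrows": "lowered", "mouth_corners": "down",
                  "eyes": "wide"},
        # related correspondence: surprise is answered with surprise
        "surprised": {"eyebrows": "raised", "mouth": "open", "eyes": "wide"},
    }

    def second_feature_value(emotion, first_value):
        """Select the role model's feature value for the emotion type and
        force at least one feature point to differ from the user's."""
        second = dict(SECOND_PRESET_MODE[emotion])
        if second == first_value:  # identical expressions are not allowed
            # perturb a single feature point, e.g. wink one eye
            second["eyes"] = ("winking" if first_value.get("eyes") == "wide"
                              else "wide")
        return second

    user_value = {"eyebrows": "raised", "mouth": "open", "eyes": "wide"}
    print(second_feature_value("surprised", user_value))
    # -> same surprise, but with "eyes": "winking"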
Unlike the prior art, in the method of the present invention the terminal uses image recognition to identify the user's facial expression, analyses the current expression, determines the emotion type it represents, selects from a preset mode in the terminal the facial-expression feature value corresponding to that emotion type and applies it to the role model, thereby fully exploiting the hardware to extend the gameplay, enriching the gameplay and greatly enhancing the fun of the game.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of a second embodiment of the method for changing a role model's expression to follow a user's facial expression according to the present invention. The steps of the method are as follows:
S201: the terminal acquires the user's facial expression image in real time and extracts a first feature value from the image.
The steps of acquiring the user's facial expression image and extracting the first feature value from the image are similar or identical to those of the previous embodiment and are not repeated.
S202: while acquiring the user's facial expression image in real time, the terminal acquires the user's speech in real time.
While playing the game, the user may speak to the role model of the game as his or her facial expression changes. The tone of the user's speech is any one of describing, ordering, scolding and praising; the terminal judges the tone type of the speech by recognizing keywords in it, and the tone type of the speech corresponds to the user's emotion. While the terminal's camera captures the user's facial expression image in real time, the terminal's voice collector records the user's speech in real time and saves the recorded speech in the terminal's database.
S203: the terminal determines the emotion type corresponding to the first feature value according to the first preset mode.
As in the previous embodiment, the terminal analyses the first feature value obtained in step S201 and looks up, in the first preset mode stored in the terminal, the emotion type corresponding to the first feature value currently obtained.
S204: if the emotion type cannot be determined, the terminal determines the emotion type in combination with the first feature value and the user's speech.
If in step S203 the terminal cannot determine a corresponding emotion type stored in the database from the first feature value and the first preset mode, a secondary determination is made in combination with the speech obtained in step S202. The tone type of the speech is judged by the keywords it contains. The keywords are preset in the terminal's database and may be chosen from the names of the emotion types and their synonyms and near-synonyms; keywords may also be set by the user in the terminal. In this embodiment, the user's speech and the user's facial expression image are obtained simultaneously; the tone type of the user's speech is determined, and the emotion type is determined according to the tone type of the speech.
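A minimal sketch of this keyword-based secondary determination, together with the direct fallback of step S205 below, might look as follows (the keyword lists and the tone-to-emotion table are assumptions; a real terminal would feed in the output of its speech recognizer):

    import random

    # Keywords per tone type and the tone -> emotion table are assumed;
    # the disclosure lets users extend the keywords in the terminal.
    TONE_KEYWORDS = {
        "praise": {"good", "great", "lovely"},
        "berate": {"bad", "stupid", "naughty"},
        "order": {"sit", "jump", "stop"},
        "describe": {"today", "weather", "tired"},
    }
    TONE_TO_EMOTION = {"praise": "happy", "berate": "sad",
                       "order": "surprised", "describe": "calm"}
    FALLBACK_EMOTIONS = ["happy", "sad", "surprised"]

    def emotion_from_speech(transcript):
        """Judge the tone type by keywords, then map it to an emotion."""
        words = set(transcript.lower().split())
        for tone, keywords in TONE_KEYWORDS.items():
            if words & keywords:
                return TONE_TO_EMOTION[tone]
        return None  # no preset keyword found

    def determine_emotion(emotion_from_image, transcript):
        """S203-S205: image first, speech second, direct choice last."""
        emotion = emotion_from_image or emotion_from_speech(transcript)
        return emotion or random.choice(FALLBACK_EMOTIONS)

    print(determine_emotion(None, "what a lovely pet"))  # -> "happy"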
S205: if the emotion type still cannot be determined, the terminal judges the emotion type directly.
If no emotion type can be determined from the first feature value, and the recorded speech contains none of the keywords preset in the terminal, or the terminal's voice collector captured no speech at all in step S202, then rather than taking yet another judgment mechanism the terminal determines the emotion type directly, so as to guarantee the terminal's response speed. The emotion type determined directly by the terminal need not be related to the first feature value or to the user's speech; it may be configured by the user in the terminal, or selected at random by the terminal.
S206: according to the determined emotion type, a second feature value is selected according to the second preset mode so as to change the role model's facial expression.
The step of selecting a second feature value according to the second preset mode to change the role model's facial expression is similar or identical to that of the previous embodiment and is not repeated.
Unlike the prior art, the method of the present invention judges the emotion type represented by the user's expression in two passes, which improves the correctness of the judgment; if the two passes are inconclusive, the terminal decides directly, which reduces the judgment steps and guarantees the terminal's response speed, fully exploiting the hardware to extend the gameplay, enriching the gameplay and greatly enhancing the fun of the game.
Referring to Fig. 3, Fig. 3 is a schematic flowchart of a third embodiment of the method for changing a role model's expression to follow a user's facial expression according to the present invention. The steps of the method are as follows:
S301: the terminal acquires the user's facial expression image in real time and extracts a first feature value from the image.
In this embodiment, the above step is similar or identical to the corresponding step of the first embodiment of the method of the present invention and is not repeated.
S302: the terminal determines the emotion type corresponding to the first feature value according to the first preset mode.
The first preset mode records the correspondence between the first feature values extracted from the user's facial expression image and emotion types. For example, if, in the extracted first feature value, the eyebrows are raised, the corners of the mouth also point upward and the eyes are narrowed, and these feature points occur together, the user's emotion type can be judged to be happiness; if the eyes are narrowed, the distance between the two eyebrows also shrinks and the corners of the mouth are pulled down, and these feature points occur together, the user's emotion type can be judged to be sadness. By such analysis, the emotion type to which the user's current facial expression belongs is determined.
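These two rules can be encoded directly; the sketch below derives qualitative feature-point states from hypothetical measurements and applies them (the thresholds and field names are assumptions, with image coordinates growing downward):

    def landmark_states(m):
        """Derive qualitative feature-point states from raw measurements;
        all thresholds are illustrative assumptions."""
        states = set()
        if m["eye_y"] - m["brow_y"] > 18:          # brows well above eyes
            states.add(("eyebrows", "raised"))
        if m["mouth_corner_y"] < m["mouth_center_y"]:
            states.add(("mouth_corners", "up"))    # corners above center
        else:
            states.add(("mouth_corners", "down"))
        if m["eye_opening"] < 6:
            states.add(("eyes", "narrowed"))
        if m["brow_gap"] < 40:                     # brows drawn together
            states.add(("brows", "knitted"))
        return states

    def classify(states):
        """The two example rules from the paragraph above."""
        if {("eyebrows", "raised"), ("mouth_corners", "up"),
            ("eyes", "narrowed")} <= states:
            return "happy"
        if {("eyes", "narrowed"), ("brows", "knitted"),
            ("mouth_corners", "down")} <= states:
            return "sad"
        return None

    m = {"eye_y": 150, "brow_y": 128, "mouth_corner_y": 210,
         "mouth_center_y": 214, "eye_opening": 5, "brow_gap": 55}
    print(classify(landmark_states(m)))  # -> "happy"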
S303: according to the determined emotion type, a second feature value is selected according to the second preset mode so as to change the role model's facial expression.
The second preset mode records the correspondence between the emotion type determined in step S302 and the facial expression the role model should make. The second feature value is triggered by the determined emotion type and differs from the first feature value. In the second preset mode, the emotion type and the role model's expression may be related or unrelated; when they are related, the role model makes an expression related to the emotion type. For example, when the user's emotion type is determined to be happiness, and the role model's facial expression recorded for that emotion type by the second preset mode in the terminal database also shows happiness, the feature points of the role model's facial expression in the second feature value are largely the same as in the first feature value, but some feature points may differ: the user's eyes may be wide open while the role keeps one eye open and one closed, showing the user a mischievous expression. If the role model's facial expression recorded by the second preset mode for the determined emotion type is unrelated, for example for surprise, the role model's facial expression is changed according to the second feature value, and the role model may, for instance, show a frightened expression.
S304: while changing the role model's facial expression, the terminal plays the voice corresponding to the role model's expression.
In this embodiment, a third preset mode is also stored in the terminal's database. The third preset mode records the correspondence between emotion types and role-model voices; voices may be filed under the corresponding emotion type by the terminal, or added and categorized by the user, and the user can set user-added voices to be selected preferentially. In step S303, while the role model's facial expression is being changed, the voice corresponding to the currently determined emotion type is fetched from the third preset mode in the terminal's database, and the selected voice is played by the terminal.
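One way the third preset mode could be organized is sketched below (the clip names and the preference flag are assumptions):

    # Third preset mode: emotion type -> candidate voice clips, with
    # clips filed by the terminal ("builtin") or added by the user.
    THIRD_PRESET_MODE = {
        "happy": {"builtin": ["giggle.ogg"], "user": []},
        "surprised": {"builtin": ["gasp.ogg"], "user": ["my_gasp.ogg"]},
    }

    def pick_voice(emotion, prefer_user=True):
        """Return the clip to play while the expression changes,
        preferring user-added clips when the user enables that option."""
        entry = THIRD_PRESET_MODE.get(emotion, {"builtin": [], "user": []})
        pool = (entry["user"] if prefer_user and entry["user"]
                else entry["builtin"])
        return pool[0] if pool else None

    print(pick_voice("surprised"))  # -> "my_gasp.ogg"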
Unlike the prior art, in the method of the present invention the role model's facial expression changes with the user's facial expression, may change into the same expression as the user's, and a voice corresponding to the role model's expression is played at the same time as the expression changes, fully exploiting the hardware to extend the gameplay, enriching the gameplay and greatly enhancing the fun of the game.
Referring to Fig. 4, Fig. 4 is a schematic structural diagram of a first embodiment of the terminal for changing a role model's expression to follow a user's facial expression according to the present invention. The terminal 400 includes an image collection module 410, a first processing module 420, a second processing module 430 and a voice acquisition module 440.
After the game is opened on the terminal, the image collection module 410 uses face recognition to acquire the user's facial expression image in real time and extracts the feature points of the user's facial expression from the image, combining them into a first feature value. The first feature value includes one or more of the positions of the eyes, the distance between the two eyes, and the positions of the eyebrows, mouth, nose and chin. Several monitoring units 411 of the image collection module 410 monitor the feature points in real time; when any monitoring unit 411 detects that its feature point has changed, the image collection module 410 acquires the user's current facial expression image, and a first-feature-value extraction unit 412 extracts the first feature value of the user's facial expression.
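How the monitoring units 411 might gate feature extraction is sketched below (the tolerance and the per-frame interface are assumptions):

    class MonitoringUnit:
        """Unit 411: watches one feature point and reports when it has
        moved by more than a small tolerance (illustrative value)."""
        def __init__(self, name, tolerance=2.0):
            self.name, self.tolerance, self.last = name, tolerance, None

        def changed(self, pos):
            moved = (self.last is not None and
                     abs(pos[0] - self.last[0]) +
                     abs(pos[1] - self.last[1]) > self.tolerance)
            self.last = pos
            return moved

    POINTS = ("left_eye", "right_eye", "eyebrows", "mouth", "nose", "chin")
    units = {name: MonitoringUnit(name) for name in POINTS}

    def frame_arrived(landmarks):
        """Trigger extraction unit 412 only when some unit fires."""
        fired = [units[n].changed(p) for n, p in landmarks.items()]
        return any(fired)

    frame_arrived({n: (0.0, 0.0) for n in POINTS})          # baseline frame
    print(frame_arrived({n: ((5.0, 0.0) if n == "eyebrows" else (0.0, 0.0))
                         for n in POINTS}))                 # -> True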
After the first feature value is extracted, the first processing module 420 determines the emotion type corresponding to the first feature value according to the first preset mode of the terminal 400. The first preset mode records the correspondence between combinations of facial feature points in different states and human emotion types; that is, the emotion type to which the user's real-time facial expression belongs is determined according to the first preset mode.
The voice acquisition module 440 acquires the user's speech in real time while the image collection module 410 acquires the user's facial expression image, and saves it in the terminal 400. If the first processing module 420 cannot determine the emotion type corresponding to the first feature value according to the first preset mode, a secondary determination is made in combination with the speech obtained by the voice acquisition module 440. The tone of the user's speech is any one of describing, ordering, scolding and praising; the terminal judges the tone type of the speech by recognizing keywords in it, and the tone of the speech corresponds to the user's emotion. The keywords are preset in the terminal 400 and may be chosen from the names of the emotion types and their synonyms and near-synonyms; keywords may also be set by the user in the terminal 400. If no emotion type can be determined after the above two determinations, the terminal 400 determines the emotion type directly, so as to guarantee the response speed. The emotion type determined directly by the terminal 400 need not be related to the first feature value or to the user's speech; it may be set by the user in the terminal 400, or selected at random by the terminal.
The second processing module 430 determines a second feature value according to the second preset mode. The second preset mode records the correspondence between human emotion types and combinations of facial feature points. After the first processing module 420 determines the emotion type represented by the user's real-time facial expression, the second processing module 430 looks up in the second preset mode the facial-expression feature value corresponding to the determined emotion type and takes it as the second feature value. The second feature value includes one or more of the positions of the eyes, the distance between the two eyes, and the positions of the eyebrows, mouth, nose and chin. The second feature value is triggered by the determined emotion type and differs from the first feature value. The second feature value is set according to the emotion type, and the state of each feature point may be set by the user. After the second feature value is determined, it is applied to the role model so that the role model's facial expression matches the second feature value.
After the user changes expression, the image collection module 410 acquires the facial expression image in real time and extracts the first feature value, the voice acquisition module 440 acquires the user's speech in real time, the first processing module 420 determines the user's emotion type from the first feature value and the user's speech, and the second processing module 430 selects a second feature value according to the determined emotion type and changes the role model's facial expression so that it matches the second feature value.
While the role model's facial expression is being changed, the voice playing module 450 plays, through the audio output of the terminal 400, the voice corresponding to the role model's current expression. The terminal 400 stores a third preset mode, which records the correspondence between emotion types and role-model voices; voices may be filed under the corresponding emotion type by the terminal, or added and categorized by the user, and the user can set user-added voices to be selected preferentially.
In contrast to the prior art, the terminal of the present invention monitors, through its face recognition function, changes in the features of the user's facial expression, changes the role model's expression accordingly as the user's expression changes, and plays a voice corresponding to the role model's expression at the same time as the expression changes, fully exploiting the hardware to extend the gameplay, enriching the gameplay and greatly enhancing the fun of the game.
It should be understood that the above description takes a mobile electronic device as an example, but the terminal of the present invention is not limited thereto; it may also be a multimedia entertainment device or a multimedia information terminal.
The foregoing is only embodiments of the present invention and does not thereby limit the scope of the claims of the present invention. Any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, or any direct or indirect use in other related technical fields, is likewise included in the scope of patent protection of the present invention.

Claims (10)

1. A method for changing a role model's facial expression to follow a user's facial expression, characterized in that the steps of the method comprise:
a terminal acquires the user's facial expression image in real time and extracts a first feature value from the image;
the terminal determines the emotion type corresponding to the first feature value according to a first preset mode;
according to the determined emotion type, a second feature value is selected according to a second preset mode so as to change the facial expression of the role model;
wherein the first feature value and the second feature value each include one or more of the positions of the eyes, the distance between the two eyes, and the positions of the eyebrows, mouth, nose and chin, and the second feature value is triggered by the determined emotion type and differs from the first feature value.
2. The method according to claim 1, characterized in that, in the step in which the terminal acquires the user's facial expression image in real time, the terminal also acquires the user's speech in real time; the user's speech is speech addressed by the user to the role model, and its tone is any one of describing, ordering, scolding and praising.
3. The method according to claim 2, characterized in that, in the step of determining the emotion type corresponding to the user's facial expression according to the first preset mode, if the emotion type cannot be determined, the terminal determines the emotion type in combination with the first feature value and the user's speech; if the emotion type still cannot be determined, the terminal determines the emotion type directly.
4. The method according to claim 1, characterized in that the first preset mode records the correspondence between the first feature value and the emotion type, and the second preset mode records the related or inverse correspondence between the emotion type of the first preset mode and the second feature value.
5. The method according to claim 1, characterized in that, while changing the role model's facial expression, the terminal plays a voice corresponding to the role model's expression.
6. the terminal following user's countenance change actor model expression, it is characterised in that described terminal includes: image collection module, the first processing module and the second processing module;
Wherein, described image collection module is used for user in real countenance image, extracts the First Eigenvalue of described image;Described first processing module is for determining corresponding type of emotion according to the first preset mode;Described second processing module is for according to described type of emotion, selecting Second Eigenvalue according to the second preset mode, to change described actor model countenance;Described the First Eigenvalue, Second Eigenvalue include position of human eye, two eye distances from, eyebrow, face, nose, chin position in one or more combination, described Second Eigenvalue is triggered by the type of emotion of described confirmation, and is different from the First Eigenvalue.
7. The terminal according to claim 6, characterized in that the terminal further includes a voice acquisition module configured to acquire the user's speech in real time while the image collection module acquires the user's facial expression image in real time; the user's speech is speech addressed by the user to the role model, and its tone is any one of describing, ordering, scolding and praising.
8. The terminal according to claim 7, characterized in that, if the first processing module cannot determine the emotion type from the first feature value, the emotion type is determined from the user's speech; if the emotion type still cannot be determined, the terminal determines the emotion type directly.
9. The terminal according to claim 6, characterized in that the first preset mode records the correspondence between the first feature value and the emotion type, and the second preset mode records the related or inverse correspondence between the emotion type of the first preset mode and the second feature value.
10. The terminal according to claim 6, characterized in that the terminal further includes a voice playing module configured to play a voice corresponding to the role model's expression while the role model's facial expression is being changed.
CN201410856666.7A 2014-12-31 2014-12-31 Method and terminal for changing role model expressions along with user facial expressions Pending CN105797375A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410856666.7A CN105797375A (en) 2014-12-31 2014-12-31 Method and terminal for changing role model expressions along with user facial expressions


Publications (1)

Publication Number Publication Date
CN105797375A true CN105797375A (en) 2016-07-27

Family

ID=56465378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410856666.7A Pending CN105797375A (en) 2014-12-31 2014-12-31 Method and terminal for changing role model expressions along with user facial expressions

Country Status (1)

Country Link
CN (1) CN105797375A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120148161A1 (en) * 2010-12-09 2012-06-14 Electronics And Telecommunications Research Institute Apparatus for controlling facial expression of virtual human using heterogeneous data and method thereof
CN102722246A (en) * 2012-05-30 2012-10-10 南京邮电大学 Human face information recognition-based virtual pet emotion expression method
CN103413113A (en) * 2013-01-15 2013-11-27 上海大学 Intelligent emotional interaction method for service robot
CN103488293A (en) * 2013-09-12 2014-01-01 北京航空航天大学 Man-machine motion interaction system and method based on expression recognition

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108961431A (en) * 2018-07-03 2018-12-07 百度在线网络技术(北京)有限公司 Generation method, device and the terminal device of facial expression
CN109621418A (en) * 2018-12-03 2019-04-16 网易(杭州)网络有限公司 The expression adjustment and production method, device of virtual role in a kind of game
CN109621418B (en) * 2018-12-03 2022-09-30 网易(杭州)网络有限公司 Method and device for adjusting and making expression of virtual character in game
CN110837294A (en) * 2019-10-14 2020-02-25 成都西山居世游科技有限公司 Facial expression control method and system based on eyeball tracking
CN110837294B (en) * 2019-10-14 2023-12-12 成都西山居世游科技有限公司 Facial expression control method and system based on eyeball tracking
CN113908553A (en) * 2021-11-22 2022-01-11 广州简悦信息科技有限公司 Game character expression generation method and device, electronic equipment and storage medium


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right
Effective date of registration: 20161206
Address after: Room 806, Global Digital Building, Nanshan District, Shenzhen, Guangdong, 518000
Applicant after: Shenzhen magic eye Technology Co., Ltd.
Address before: Room 806, Global Digital Building, Hi-tech Zone, Nanshan District, Shenzhen, 518000
Applicant before: SHENZHEN ESTAR DISPLAYTECH CO., LTD.
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 20160727