CN107526994A - Information processing method, device and mobile terminal - Google Patents

Information processing method, device and mobile terminal

Info

Publication number
CN107526994A
CN107526994A (application number CN201610448961.8A)
Authority
CN
China
Prior art keywords
information
user
characteristic
characteristic information
facial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610448961.8A
Other languages
Chinese (zh)
Inventor
郭辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN201610448961.8A priority Critical patent/CN107526994A/en
Priority to PCT/CN2016/093112 priority patent/WO2017219450A1/en
Publication of CN107526994A publication Critical patent/CN107526994A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/174 Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)

Abstract

The present invention provides an information processing method, an information processing device and a mobile terminal, relating to the field of communication technology. The information processing method includes: when it is detected that the mobile terminal is operated, starting a camera to dynamically collect user image information at a predetermined period; obtaining, according to the user image information, characteristic information of a facial characteristic part of the user; continuously comparing and analyzing the characteristic information, and matching it against pre-stored template characteristic information of the corresponding facial characteristic part, to determine the emotional state of the user. The solution of the present invention dynamically analyzes changes of the user's facial characteristic parts during the static recognition process, thereby solving the problem that existing facial-recognition approaches to user emotion have relatively low accuracy.

Description

Information processing method, device and mobile terminal
Technical field
The present invention relates to the field of communication technology, and in particular to an information processing method, an information processing device and a mobile terminal.
Background technology
Current facial recognition techniques mainly use a camera to collect facial information of a person, and then apply face detection or recognition technology to identify the facial features of the user.
However, existing facial emotion recognition is only a static recognition process: after facial information is collected by the camera, facial features are extracted and compared with preset characteristic values to obtain a similarity score, from which the user's emotion is judged. This approach often has relatively low accuracy and cannot capture changes of the user's facial features in real time.
Summary of the invention
An object of the present invention is to provide an information processing method, an information processing device and a mobile terminal that dynamically analyze changes of the user's facial characteristic parts during the static recognition process, so as to determine the emotional state of the user more precisely.
To achieve the above object, an embodiment of the present invention provides an information processing method, including:
When it is detected that a mobile terminal is operated, starting a camera to dynamically collect user image information at a predetermined period;
Obtaining, according to the user image information, characteristic information of a facial characteristic part of the user;
Continuously comparing and analyzing the characteristic information, and matching it against pre-stored template characteristic information of the corresponding facial characteristic part, to determine the emotional state of the user.
Wherein, the step of obtaining, according to the user image information, the characteristic information of the facial characteristic part of the user includes:
Determining a facial region in the user image information by face detection;
Collecting facial feature information within the facial region;
Distinguishing, according to the facial feature information, the facial characteristic parts of the user, and extracting the characteristic information corresponding to a facial characteristic part.
Wherein, the step of distinguishing the facial characteristic parts of the user according to the facial feature information and extracting the characteristic information corresponding to a facial characteristic part includes:
Distinguishing the mouth region of the user according to the facial feature information;
Obtaining the feature point positions of the mouth region;
Obtaining the characteristic information of the user's mouth according to the feature point positions.
Wherein, the step of continuously comparing and analyzing the characteristic information, matching it against the pre-stored template characteristic information of the corresponding facial characteristic part, and determining the emotional state of the user includes:
Matching the characteristic information against the pre-stored template characteristic information of the corresponding facial characteristic part to obtain a matching result;
Comparing and analyzing the characteristic information against the characteristic information of the corresponding facial characteristic part obtained in the previous period to obtain an analysis result;
Determining the emotional state of the user according to the matching result and the analysis result.
Wherein, after it is detected that the mobile terminal is operated, the method further includes:
Collecting user voice information;
Determining the voice-based emotion of the user according to the user voice information and preset emotion sound models.
Wherein, the method further includes:
Verifying the emotional state of the user with reference to the voice-based emotion.
Wherein, the method further includes:
Storing the characteristic information for which the emotional state of the user has been determined into the template characteristic information of the corresponding facial characteristic part.
Wherein, the method further includes:
Obtaining a preset correspondence between emotional states and session templates;
Selecting, according to the correspondence, the session template corresponding to the determined emotional state of the user;
Initiating a session with the user according to the session template.
To achieve the above object, an embodiment of the present invention further provides an information processing device, including:
A first processing module, configured to start a camera to dynamically collect user image information at a predetermined period when it is detected that a mobile terminal is operated;
A first acquisition module, configured to obtain, according to the user image information, characteristic information of a facial characteristic part of the user;
A second processing module, configured to continuously compare and analyze the characteristic information, and match it against pre-stored template characteristic information of the corresponding facial characteristic part, to determine the emotional state of the user.
Wherein, the first acquisition module includes:
A determination submodule, configured to determine a facial region in the user image information by face detection;
A collection submodule, configured to collect facial feature information within the facial region;
An extraction submodule, configured to distinguish, according to the facial feature information, the facial characteristic parts of the user, and to extract the characteristic information corresponding to a facial characteristic part.
Wherein, the extraction submodule includes:
A distinguishing unit, configured to distinguish the mouth region of the user according to the facial feature information;
An acquiring unit, configured to obtain the feature point positions of the mouth region;
A processing unit, configured to obtain the characteristic information of the user's mouth according to the feature point positions.
Wherein, the second processing module includes:
A matching submodule, configured to match the characteristic information against the pre-stored template characteristic information of the corresponding facial characteristic part to obtain a matching result;
An analysis submodule, configured to compare and analyze the characteristic information against the characteristic information of the corresponding facial characteristic part obtained in the previous period to obtain an analysis result;
A processing submodule, configured to determine the emotional state of the user according to the matching result and the analysis result.
Wherein, the information processing device further includes:
A collection module, configured to collect user voice information;
A third processing module, configured to determine the voice-based emotion of the user according to the user voice information and preset emotion sound models.
Wherein, the information processing device further includes:
A verification module, configured to verify the emotional state of the user with reference to the voice-based emotion.
Wherein, the information processing device further includes:
A fourth processing module, configured to store the characteristic information for which the emotional state of the user has been determined into the template characteristic information of the corresponding facial characteristic part.
Wherein, the information processing device further includes:
A second acquisition module, configured to obtain a preset correspondence between emotional states and session templates;
A selection module, configured to select, according to the correspondence, the session template corresponding to the determined emotional state of the user;
A session initiation module, configured to initiate a session with the user according to the session template.
To achieve the above object, an embodiment of the present invention further provides a mobile terminal, including the information processing device as described above.
The above technical solutions of the present invention have the following beneficial effects:
With the information processing method of the embodiment of the present invention, when it is detected that the mobile terminal is operated, the camera can be automatically started to collect user image information at a predetermined period. Then, according to the user image information, the characteristic information of the user's facial characteristic part is obtained. Because the camera dynamically collects user image information at different points in time, the characteristic information of the user's facial characteristic part obtained accordingly also corresponds to different points in time. These pieces of characteristic information can then be continuously compared and analyzed, and matched against the pre-stored template characteristic information of the corresponding facial characteristic part, to determine the emotional state of the user. In this way, the characteristic information of the facial characteristic part is not only statically matched against the pre-stored template characteristic information, but also processed dynamically: the change of the characteristic information of the facial characteristic part allows the user's emotional state to be analyzed more accurately, giving the user a better experience.
Brief description of the drawings
Fig. 1 is a flowchart of the steps of the information processing method of the embodiment of the present invention;
Fig. 2 is a flowchart of the specific steps of obtaining the characteristic information of a facial characteristic part in the information processing method of the embodiment of the present invention;
Fig. 3 is a flowchart of the specific steps of obtaining the characteristic information of the mouth in the information processing method of the embodiment of the present invention;
Fig. 4 is a flowchart of the specific steps of determining the emotional state of the user in the information processing method of the embodiment of the present invention;
Fig. 5 is a schematic diagram of a concrete application of the information processing method of the embodiment of the present invention;
Fig. 6 is a schematic structural diagram of the information processing device of the embodiment of the present invention.
Detailed description of the embodiments
To make the technical problem to be solved by the present invention, the technical solutions and the advantages clearer, a detailed description is given below with reference to the accompanying drawings and specific embodiments.
Addressing the problem that the existing way of recognizing user emotion from the face is only a static recognition process that cannot capture changes of the user's facial features in real time and therefore has relatively low accuracy, the present invention provides an information processing method that dynamically analyzes changes of the user's facial characteristic parts during the static recognition process, so as to determine the emotional state of the user more precisely.
As shown in Fig. 1, an information processing method of the embodiment of the present invention includes:
Step 101: when it is detected that a mobile terminal is operated, starting a camera to dynamically collect user image information at a predetermined period;
Step 102: obtaining, according to the user image information, characteristic information of a facial characteristic part of the user;
Step 103: continuously comparing and analyzing the characteristic information, and matching it against pre-stored template characteristic information of the corresponding facial characteristic part, to determine the emotional state of the user.
It should be appreciated that, for the information processing method of the embodiment of the present invention, the mobile terminal to which the method is applied pre-stores, as template characteristic information, the characteristic information of facial characteristic parts (such as the eyes and the mouth) under different emotions (such as laughing, crying and anger). Taking the mouth as an example, the pre-stored characteristic information may include the mouth-corner arc (turning up or turning down) and the height difference between the upper and lower lip feature points, obtained in a coordinate system built from the left and right mouth-corner feature points and the upper and lower lip feature points; for different emotions, the mouth-corner arc range and the height difference of the upper and lower lip feature points each have certain predetermined thresholds. These pieces of characteristic information are classified by emotion in advance and stored as template characteristic information in a database for subsequent matching.
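By way of illustration only, the following Python sketch shows one way such a template database could be organized. The field names, threshold values and emotion labels are assumptions made for the example, not values taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class MouthTemplate:
    """Pre-stored template characteristic information for one emotional state."""
    emotion: str
    arc_range: tuple       # (min, max) mouth-corner arc; positive means corners raised
    lip_gap_range: tuple   # (min, max) height difference of upper/lower lip feature points

# Hypothetical template database, classified by emotion for subsequent matching.
TEMPLATE_DB = {
    "laugh": MouthTemplate("laugh", arc_range=(0.15, 0.60), lip_gap_range=(0.10, 0.50)),
    "cry":   MouthTemplate("cry",   arc_range=(-0.60, -0.10), lip_gap_range=(0.00, 0.20)),
    "anger": MouthTemplate("anger", arc_range=(-0.30, 0.05),  lip_gap_range=(0.00, 0.10)),
}
```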
With the information processing method of the embodiment of the present invention, when it is detected that the mobile terminal is operated, that is, when the user is using the mobile terminal, the camera (usually the front camera) is automatically started to collect user image information at a predetermined period. Then, according to the user image information, the characteristic information of the user's facial characteristic part is obtained. Because the camera dynamically collects user image information at different points in time, the characteristic information of the user's facial characteristic part obtained accordingly also corresponds to different points in time. These pieces of characteristic information can then be continuously compared and analyzed, and matched against the pre-stored template characteristic information of the corresponding facial characteristic part, to determine the emotional state of the user.
Because a person's facial expression goes through a process of formation and change, the state obtained at a single moment cannot accurately reflect the user's current emotional characteristics. The information processing method of the embodiment of the present invention not only statically matches the obtained characteristic information of the facial characteristic part against the pre-stored template characteristic information, but also processes it dynamically: the change of the characteristic information of the facial characteristic part allows the user's emotional state to be analyzed more accurately, giving the user a better experience.
Considering that as much user image information as possible should be captured for the comparative analysis, so that the changes of the user's expression can be clearly understood, the time interval at which the camera collects image information needs to be reduced; preferably, the predetermined period is less than or equal to 1 s. In addition, to avoid the collected user image information not being processed in time, each frame of collected user image information can be added to a collection queue, from which it is subsequently extracted and processed one by one.
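A minimal sketch of such a periodic capture loop feeding a collection queue is shown below; the camera object and its read() call are stand-ins, since the disclosure does not fix a camera API, and the queue size is an assumption.

```python
import queue
import threading
import time

frame_queue = queue.Queue(maxsize=64)  # collection queue; the size is an assumption

def capture_loop(camera, stop_event, period_s=0.5):
    """Collect one frame per period (the description prefers a period <= 1 s)."""
    while not stop_event.is_set():
        frame = camera.read()          # stand-in for the platform camera call
        try:
            frame_queue.put_nowait((time.time(), frame))
        except queue.Full:
            frame_queue.get_nowait()   # drop the oldest frame so capture never stalls
            frame_queue.put_nowait((time.time(), frame))
        time.sleep(period_s)

# Typical usage: run the loop in a background thread, e.g.
# threading.Thread(target=capture_loop, args=(camera, threading.Event()), daemon=True).start()
```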
The scenarios in which the mobile terminal is operated include, but are not limited to, the user lighting the screen with the power key, unlocking the phone, or the screen being detected as touched by the user. When the mobile terminal is not being operated, the camera is not started, so as to avoid the power consumption caused by keeping the camera running in the background.
As shown in Fig. 2, in the information processing method of the embodiment of the present invention, step 102 specifically includes:
Step 1021: determining a facial region in the user image information by face detection.
The user image information collected after the camera is started may contain not only a face image but also images of other body parts and/or background images, and the user image information collected at some points in time may not even include a face image. Therefore, to ensure the validity of the finally obtained characteristic information, in this step the user image information is extracted one by one from the collection queue, and the facial region is determined by face detection, avoiding possible interference from other, useless image regions.
Step 1022: collecting facial feature information within the facial region.
In this step, facial feature information is collected within the facial region determined in step 1021, where the facial feature information includes information such as the position and region of each facial organ. Specifically, the facial feature information may be obtained by face recognition from the shape descriptions of the facial organs and the distance characteristics between them, or by characterization methods based on algebraic features or statistical learning, which will not be repeated here.
Step 1023: distinguishing, according to the facial feature information, the facial characteristic parts of the user, and extracting the characteristic information corresponding to a facial characteristic part.
Since the facial feature information including the position and region of each facial organ has been collected in step 1022, in this step the facial characteristic parts of the user, such as the eyes and the mouth, can be distinguished according to that information, and the characteristic information of the corresponding facial characteristic part can be extracted.
It is well known that, among a person's facial expressions, the features of the mouth change more obviously under different emotions than those of other parts. Therefore, on the basis of the above embodiment of the present invention, as shown in Fig. 3, step 1023 preferably includes:
Step 10231: distinguishing the mouth region of the user according to the facial feature information.
As stated above, the facial feature information includes information such as the position and region of each facial organ, so in this step the mouth region of the user can be distinguished according to the facial feature information.
Step 10232: obtaining the feature point positions of the mouth region.
To finally obtain the characteristic information of the mouth region, feature points of the mouth region, such as the left and right mouth corners and the vertices of the upper and lower lips, can be preset. In this step, the corresponding feature points are determined in the mouth region, and their positions are obtained. To ensure the authenticity of the data, a predetermined number of sampling point positions between the feature point positions can also be chosen for auxiliary analysis.
Step 10233: obtaining the characteristic information of the user's mouth according to the feature point positions.
In this step, according to the left and right mouth-corner positions and the upper and lower lip vertex positions obtained in step 10232, assisted by the sampling point positions, the characteristic information of the user's mouth is obtained, such as geometric feature information including the mouth-corner arc and the height difference between the upper and lower lip feature points. Specifically, a coordinate system can be built from the left and right mouth-corner positions, the upper and lower lip vertex positions and the sampling point positions, and the mouth-corner arc and the height difference of the upper and lower lip feature points can be calculated from the coordinates of each position.
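The following sketch illustrates one plausible version of this geometric computation. The exact arc measure is not specified in the disclosure, so the angle-based proxy used here is an assumption; image coordinates are taken with y growing downward.

```python
import math

def mouth_features(left_corner, right_corner, upper_lip, lower_lip):
    """Return (arc, lip_gap) from four (x, y) mouth feature points."""
    # Local coordinate origin: midpoint of the upper and lower lip vertices.
    mid_x = (upper_lip[0] + lower_lip[0]) / 2.0
    mid_y = (upper_lip[1] + lower_lip[1]) / 2.0

    def corner_angle(corner):
        # Positive when the corner sits above the lip midline (corner raised).
        return math.atan2(mid_y - corner[1], abs(corner[0] - mid_x))

    arc = (corner_angle(left_corner) + corner_angle(right_corner)) / 2.0
    lip_gap = lower_lip[1] - upper_lip[1]  # height difference of the lip feature points
    return arc, lip_gap
```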
In addition, a person's emotion can also be shown by the eyes, so the facial characteristic part can also be the eye region, with the pupil size selected as the corresponding characteristic information. On the basis of extracting the mouth characteristic information, the eye characteristic information can be processed jointly with it to obtain a more accurate analysis result. Of course, besides the mouth region and the eye region, other parts can also be combined in the processing, which will not be enumerated here.
It should be appreciated that, to improve the accuracy of judging the user's emotional state, the information processing method of the embodiment of the present invention takes into account the forming process of changes in a person's facial expression. In step 103, while matching against the stored template characteristic information, the corresponding characteristic information at different points in time is also continuously compared and analyzed. More specifically, as shown in Fig. 4, step 103 includes:
Step 1031: matching the characteristic information against the pre-stored template characteristic information of the corresponding facial characteristic part to obtain a matching result.
In this step, the characteristic information of the facial characteristic part obtained in step 102 is extracted one by one and matched against the template characteristic information of the same facial characteristic part pre-stored in the database, so that the corresponding matching result can be obtained, giving the degree of match between the characteristic information of the facial characteristic part and each piece of template characteristic information. Taking the mouth characteristic information as an example, the mouth-corner arc and the height difference of the upper and lower lip feature points stored in the database for different emotional states each have their own threshold ranges. For example, when the emotional state is laughing, the mouth-corner contour turns up, and the mouth-corner arc and the height difference of the upper and lower lip feature points are both positive and within preset threshold ranges. When the currently obtained mouth-corner arc and lip height difference of the user both fall within the threshold ranges corresponding to the laughing template, it can be concluded that the user's current mouth characteristic information has the highest degree of match with the laughing emotional state.
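Reusing the hypothetical TEMPLATE_DB sketched earlier, the threshold-range matching of this step might be expressed as follows; the 0-to-2 match score is an illustrative simplification of the "degree of match" in the description.

```python
def match_emotion(arc, lip_gap, templates=TEMPLATE_DB):
    """Return the emotion whose threshold ranges best contain the features, or None."""
    best, best_score = None, 0
    for name, t in templates.items():
        score = (t.arc_range[0] <= arc <= t.arc_range[1]) + \
                (t.lip_gap_range[0] <= lip_gap <= t.lip_gap_range[1])
        if score > best_score:
            best, best_score = name, score
    return best
```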
Step 1032: comparing and analyzing the characteristic information against the characteristic information of the corresponding facial characteristic part obtained in the previous period to obtain an analysis result.
In this step, the characteristic information of the facial characteristic part obtained in step 102 is compared, in chronological order, with the characteristic information of the corresponding facial characteristic part obtained in the previous period, and the corresponding analysis result is obtained, revealing how the characteristic information is changing.
Step 1033: determining the emotional state of the user according to the matching result and the analysis result.
In this step, combining the degree of match between the user's facial characteristic part and each piece of template characteristic information identified from the matching result with the change of the characteristic information given by the analysis result, the emotional state of the user can be determined.
Taking the user image information at a certain point in time as an example, after the current mouth characteristic information is obtained in step 102, the matching result of step 1031 identifies that the user's current mouth characteristic information has the highest degree of match with the laughing emotional state. However, while a person is laughing, the upward arc of the mouth corners first keeps growing and then keeps shrinking. Therefore, by analyzing the trend of the mouth-corner arc, the change of the user's emotional state can be determined: if the arc stays constant or keeps growing, the user is laughing; if it is found to be shrinking, the laughing state is coming to an end. Besides analyzing the mouth-corner arc, the change of the height difference between the upper and lower lip feature points (which keeps changing as the mouth opens and closes) also needs to be combined to complete the comparative analysis more accurately. In this way, not only the specific type of the user's emotional state but also the phase of its change can be determined, further improving the accuracy of the information processing.
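An illustrative sketch of this per-period trend analysis follows; the phase labels and the eps tolerance are assumptions, and match_emotion is the hypothetical helper sketched above.

```python
def analyze_trend(current_arc, previous_arc, eps=0.01):
    """Classify how the mouth-corner arc changed since the previous period."""
    if current_arc > previous_arc + eps:
        return "intensifying"   # e.g. the laugh is still building up
    if current_arc < previous_arc - eps:
        return "fading"         # e.g. the laughing state is coming to an end
    return "steady"

def determine_state(arc, lip_gap, previous_arc):
    emotion = match_emotion(arc, lip_gap)        # step 1031: template matching
    phase = analyze_trend(arc, previous_arc)     # step 1032: comparison with last period
    return emotion, phase                        # step 1033: combined determination
```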
In addition, considering the adaptability of the mobile terminal to the user, the information processing method of the embodiment of the present invention further includes: storing the characteristic information for which the emotional state of the user has been determined into the template characteristic information of the corresponding facial characteristic part.
Through this step the mobile terminal performs self-learning: by storing the characteristic information of the facial characteristic part after the user's emotional state has been determined, it enriches the template characteristic information of that facial characteristic part in the database. In this way, a personalized data set can be built for the user, higher degrees of match will be obtained in subsequent processing, and recognition becomes faster and more accurate.
In the information processing method of the embodiment of the present invention, after it is detected that the mobile terminal is operated, the method further includes:
Collecting user voice information;
Determining the voice-based emotion of the user according to the user voice information and preset emotion sound models.
In this way, in addition to the facial emotion analysis, voice can be combined as auxiliary analysis. After it is detected that the mobile terminal is operated, collection of user voice information is started (the call microphone is turned on), and the voice-based emotion of the user is determined by recognizing the collected sound against the emotion sound models pre-stored in the mobile terminal for different emotional states, such as laughter, sobbing and roaring.
Afterwards, the method further includes: verifying the emotional state of the user with reference to the voice-based emotion.
Using the recognized voice-based emotion, the emotional state determined from the characteristic information of the user's current facial characteristic part is verified, and the user's true emotional state is identified by comprehensive analysis.
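A small sketch of this cross-modal verification is given below; the reconciliation rule is an assumption, since the disclosure only states that the voice-based emotion verifies the facial result.

```python
def verify_state(face_emotion, voice_emotion):
    """Confirm the facial result with the voice result (illustrative rule)."""
    if face_emotion == voice_emotion:
        return face_emotion, "confirmed"
    # On disagreement, keep the facial result but mark it for re-analysis.
    return face_emotion, "unconfirmed"
```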
Generally speaking, although the mobile terminal has become an important part of daily life, it remains only a physical instrument, an "entrance" for reaching external information, and is rather dull. Therefore, in the embodiment of the present invention, after the emotional state of the user is determined, the method further includes: obtaining a preset correspondence between emotional states and session templates; selecting, according to the correspondence, the session template corresponding to the determined emotional state of the user; and initiating a session with the user according to the session template.
In this way, after the mobile terminal accurately determines the emotional state of the user, it can select a suitable session template according to that emotional state, automatically start the voice function and actively initiate a dialogue with the user, giving the user a better experience; the mobile terminal is then no longer merely a physical instrument, but a "friend" that can chat.
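The emotion-to-session-template correspondence might be stored as a simple lookup table, as in the sketch below; the template texts are invented placeholders and the speak function is a stand-in for the terminal's voice capability.

```python
# Hypothetical correspondence between emotional states and session templates.
SESSION_TEMPLATES = {
    ("laugh", "intensifying"): "You look happy! Want a playlist to match the mood?",
    ("laugh", "fading"):       "Glad you had a laugh. Anything I can help with next?",
    ("cry", "steady"):         "You seem down. Would you like to hear something soothing?",
}

def initiate_session(emotion, phase, speak):
    template = SESSION_TEMPLATES.get((emotion, phase))
    if template is not None:
        speak(template)  # stand-in for starting the voice function
```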
It can be understood that, after the emotional state of the user is determined, besides starting an adapted voice dialogue, other functions can also be performed, such as playing music, which will not be enumerated here.
As shown in Fig. 5, a concrete application example of the information processing method of the embodiment of the present invention on a mobile terminal is as follows:
S501: with the mobile terminal in the normal standby state, detect whether it is operated; if yes, perform S502; if no, continue detecting;
S502: automatically start the front camera and the microphone in the background, collect user image information and sound information respectively, and add the collected information to the collection message queue, where the user image information is collected at a predetermined period of 0.5 s;
S503: obtain the current user image information from the queue, determine the facial region by face detection, and obtain the user's facial feature information;
S504: extract the characteristic information of the mouth region from the facial feature information by face recognition;
S505: match the characteristic information of the mouth region one by one against the template characteristic information of the mouth region pre-stored on the mobile terminal;
S506: according to the matching result, find the emotional state with the highest degree of match;
S507: compare the characteristic information of the current mouth region with the characteristic information of the mouth region in the previous period's user image information; if the two correspond to different emotional states, perform S511; if the two correspond to the same emotional state, perform S508;
S508: preliminarily determine the emotional state of the user according to the current mouth-corner arc, the height difference of the upper and lower lip feature points, and their previous changes;
S509: perform auxiliary analysis according to the collected sound information to determine the user's current emotional state;
S510: start the voice function and begin a session adapted to the user's current emotional state;
S511: store the characteristic information for which the emotional state has been determined as new template characteristic information under the corresponding emotional state.
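Tying the sketches above together, one hedged rendering of the S503 to S510 pipeline for a single queued frame could look like this; detect_mouth_points is a stub for a landmark detector, which the disclosure does not specify.

```python
def process_frame(frame, previous_arc, detect_mouth_points, speak):
    """One illustrative pass over a queued frame (returns the arc for the next period)."""
    points = detect_mouth_points(frame)           # S503/S504: face + mouth landmarks
    if points is None:
        return previous_arc                       # no face in this frame
    arc, lip_gap = mouth_features(*points)        # S504: geometric characteristic info
    emotion = match_emotion(arc, lip_gap)         # S505/S506: template matching
    if emotion is not None and previous_arc is not None:
        phase = analyze_trend(arc, previous_arc)  # S507/S508: change across periods
        initiate_session(emotion, phase, speak)   # S510: adapted session
    return arc
```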
In summary, with the information processing method of the embodiment of the present invention, when it is detected that the mobile terminal is operated, that is, when the user is using the mobile terminal, the camera (usually the front camera) is automatically started to collect user image information at a predetermined period. Then, according to the user image information, the characteristic information of the user's facial characteristic part is obtained. Because the camera dynamically collects user image information at different points in time, the characteristic information of the user's facial characteristic part obtained accordingly also corresponds to different points in time. These pieces of characteristic information can then be continuously compared and analyzed, and matched against the pre-stored template characteristic information of the corresponding facial characteristic part, to determine the emotional state of the user. In this way, the characteristic information of the facial characteristic part is not only statically matched against the pre-stored template characteristic information, but also processed dynamically: the change of the characteristic information of the facial characteristic part allows the user's emotional state to be analyzed more accurately, giving the user a better experience.
As shown in Fig. 6, an embodiment of the present invention further provides an information processing device, including:
A first processing module 601, configured to start a camera to dynamically collect user image information at a predetermined period when it is detected that a mobile terminal is operated;
A first acquisition module 602, configured to obtain, according to the user image information, characteristic information of a facial characteristic part of the user;
A second processing module 603, configured to continuously compare and analyze the characteristic information, and match it against pre-stored template characteristic information of the corresponding facial characteristic part, to determine the emotional state of the user.
Wherein, the first acquisition module includes:
A determination submodule, configured to determine a facial region in the user image information by face detection;
A collection submodule, configured to collect facial feature information within the facial region;
An extraction submodule, configured to distinguish, according to the facial feature information, the facial characteristic parts of the user, and to extract the characteristic information corresponding to a facial characteristic part.
Wherein, the extraction submodule includes:
A distinguishing unit, configured to distinguish the mouth region of the user according to the facial feature information;
An acquiring unit, configured to obtain the feature point positions of the mouth region;
A processing unit, configured to obtain the characteristic information of the user's mouth according to the feature point positions.
Wherein, the second processing module includes:
A matching submodule, configured to match the characteristic information against the pre-stored template characteristic information of the corresponding facial characteristic part to obtain a matching result;
An analysis submodule, configured to compare and analyze the characteristic information against the characteristic information of the corresponding facial characteristic part obtained in the previous period to obtain an analysis result;
A processing submodule, configured to determine the emotional state of the user according to the matching result and the analysis result.
Wherein, the information processing device further includes:
A collection module, configured to collect user voice information;
A third processing module, configured to determine the voice-based emotion of the user according to the user voice information and preset emotion sound models.
Wherein, the information processing device further includes:
A verification module, configured to verify the emotional state of the user with reference to the voice-based emotion.
Wherein, the information processing device further includes:
A fourth processing module, configured to store the characteristic information for which the emotional state of the user has been determined into the template characteristic information of the corresponding facial characteristic part.
Wherein, the information processing device further includes:
A second acquisition module, configured to obtain a preset correspondence between emotional states and session templates;
A selection module, configured to select, according to the correspondence, the session template corresponding to the determined emotional state of the user;
A session initiation module, configured to initiate a session with the user according to the session template.
With the information processing device of the embodiment of the present invention, when it is detected that the mobile terminal is operated, that is, when the user is using the mobile terminal, the first processing module automatically starts the camera (usually the front camera) to collect user image information at a predetermined period. Then the first acquisition module obtains, according to the user image information, the characteristic information of the user's facial characteristic part. Because the camera dynamically collects user image information at different points in time, the characteristic information of the user's facial characteristic part obtained accordingly also corresponds to different points in time. The second processing module can then continuously compare and analyze these pieces of characteristic information and match them against the pre-stored template characteristic information of the corresponding facial characteristic part, to determine the emotional state of the user. In this way, the characteristic information of the facial characteristic part is not only statically matched against the pre-stored template characteristic information, but also processed dynamically: the change of the characteristic information of the facial characteristic part allows the user's emotional state to be analyzed more accurately, giving the user a better experience.
It should be noted that this device is the device to which the above information processing method is applied; the implementations of the embodiments of the above information processing method apply to this device and can achieve the same technical effect.
An embodiment of the present invention further provides a mobile terminal, including the information processing device as described above.
When the mobile terminal detects that it is operated, that is, used by a user, it can automatically start the camera (usually the front camera) to collect user image information at a predetermined period. Then, according to the user image information, the characteristic information of the user's facial characteristic part is obtained. Because the camera dynamically collects user image information at different points in time, the characteristic information of the user's facial characteristic part obtained accordingly also corresponds to different points in time. These pieces of characteristic information can then be continuously compared and analyzed, and matched against the pre-stored template characteristic information of the corresponding facial characteristic part, to determine the emotional state of the user. In this way, the characteristic information of the facial characteristic part is not only statically matched against the pre-stored template characteristic information, but also processed dynamically: the change of the characteristic information of the facial characteristic part allows the user's emotional state to be analyzed more accurately, giving the user a better experience.
It should be noted that this mobile terminal is the mobile terminal to which the above information processing method is applied; the implementations of the embodiments of the above information processing method apply to this mobile terminal and can achieve the same technical effect.
It should be further noted that the mobile terminal described in this specification includes, but is not limited to, a smartphone, a tablet computer and the like, and that many of the functional components described are referred to as modules specifically to emphasize the independence of their implementation.
In the embodiment of the present invention, a module may be implemented in software so as to be executed by various types of processors. For example, an identified module of executable code may include one or more physical or logical blocks of computer instructions, which may, for instance, be built as an object, a process or a function. Nevertheless, the executable code of an identified module need not be physically located together, but may include different instructions stored in different locations which, when logically combined, constitute the module and achieve its stated purpose.
Indeed, a module of executable code may be a single instruction or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Similarly, operational data may be identified within a module, realized in any appropriate form, and organized in any appropriate type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations (including over different storage devices), and may exist, at least in part, merely as electronic signals on a system or network.
Where a module can be implemented in software, considering the level of existing hardware technology and leaving cost aside, those skilled in the art can also build corresponding hardware circuits to realize the corresponding functions; such hardware circuits include conventional very-large-scale integration (VLSI) circuits or gate arrays and existing semiconductors such as logic chips and transistors, or other discrete components. A module may also be implemented with programmable hardware devices, such as field-programmable gate arrays, programmable logic arrays, or programmable logic devices.
The exemplary embodiments described above with reference to the accompanying drawings may take many different forms without departing from the spirit and teaching of the present invention; therefore, the present invention should not be construed as limited to the exemplary embodiments set forth here. Rather, these exemplary embodiments are provided so that this disclosure will be thorough and complete, and will convey the scope of the invention to those skilled in the art. In the drawings, component sizes and relative sizes may be exaggerated for clarity. The terms used here serve only to describe particular example embodiments and are not intended to be limiting. As used here, unless the context clearly indicates otherwise, the singular forms "a", "an" and "the" are intended to include the plural forms as well. The terms "comprising" and/or "including", when used in this specification, indicate the presence of the stated features, integers, steps, operations, elements and/or components, but do not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. Unless otherwise indicated, a stated range of values includes the upper and lower bounds of the range and any subrange therebetween.
The above is a preferred embodiment of the present invention. It should be pointed out that, for those of ordinary skill in the art, several improvements and modifications may also be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (11)

  1. An information processing method, characterized by comprising:
    when it is detected that a mobile terminal is operated, starting a camera to dynamically collect user image information at a predetermined period;
    obtaining, according to the user image information, characteristic information of a facial characteristic part of the user;
    continuously comparing and analyzing the characteristic information, and matching it against pre-stored template characteristic information of the corresponding facial characteristic part, to determine the emotional state of the user.
  2. The information processing method according to claim 1, characterized in that the step of obtaining, according to the user image information, the characteristic information of the facial characteristic part of the user comprises:
    determining a facial region in the user image information by face detection;
    collecting facial feature information within the facial region;
    distinguishing, according to the facial feature information, the facial characteristic parts of the user, and extracting the characteristic information corresponding to a facial characteristic part.
  3. The information processing method according to claim 2, characterized in that the step of distinguishing the facial characteristic parts of the user according to the facial feature information and extracting the characteristic information corresponding to a facial characteristic part comprises:
    distinguishing the mouth region of the user according to the facial feature information;
    obtaining the feature point positions of the mouth region;
    obtaining the characteristic information of the user's mouth according to the feature point positions.
  4. The information processing method according to claim 1, characterized in that the step of continuously comparing and analyzing the characteristic information, matching it against the pre-stored template characteristic information of the corresponding facial characteristic part, and determining the emotional state of the user comprises:
    matching the characteristic information against the pre-stored template characteristic information of the corresponding facial characteristic part to obtain a matching result;
    comparing and analyzing the characteristic information against the characteristic information of the corresponding facial characteristic part obtained in the previous period to obtain an analysis result;
    determining the emotional state of the user according to the matching result and the analysis result.
  5. The information processing method according to claim 1, characterized in that, after it is detected that the mobile terminal is operated, the method further comprises:
    collecting user voice information;
    determining the voice-based emotion of the user according to the user voice information and preset emotion sound models.
  6. The information processing method according to claim 5, characterized by further comprising:
    verifying the emotional state of the user with reference to the voice-based emotion.
  7. The information processing method according to claim 1, characterized by further comprising:
    storing the characteristic information for which the emotional state of the user has been determined into the template characteristic information of the corresponding facial characteristic part.
  8. The information processing method according to claim 1, characterized by further comprising:
    obtaining a preset correspondence between emotional states and session templates;
    selecting, according to the correspondence, the session template corresponding to the determined emotional state of the user;
    initiating a session with the user according to the session template.
  9. An information processing device, characterized by comprising:
    a first processing module, configured to start a camera to dynamically collect user image information at a predetermined period when it is detected that a mobile terminal is operated;
    a first acquisition module, configured to obtain, according to the user image information, characteristic information of a facial characteristic part of the user;
    a second processing module, configured to continuously compare and analyze the characteristic information, and match it against pre-stored template characteristic information of the corresponding facial characteristic part, to determine the emotional state of the user.
  10. The information processing device according to claim 9, characterized in that the first acquisition module comprises:
    a determination submodule, configured to determine a facial region in the user image information by face detection;
    a collection submodule, configured to collect facial feature information within the facial region;
    an extraction submodule, configured to distinguish, according to the facial feature information, the facial characteristic parts of the user, and to extract the characteristic information corresponding to a facial characteristic part.
  11. A mobile terminal, characterized by comprising the information processing device according to claim 9 or 10.
CN201610448961.8A 2016-06-21 2016-06-21 Information processing method, device and mobile terminal Pending CN107526994A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610448961.8A CN107526994A (en) 2016-06-21 2016-06-21 Information processing method, device and mobile terminal
PCT/CN2016/093112 WO2017219450A1 (en) 2016-06-21 2016-08-03 Information processing method and device, and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610448961.8A CN107526994A (en) 2016-06-21 2016-06-21 Information processing method, device and mobile terminal

Publications (1)

Publication Number Publication Date
CN107526994A true CN107526994A (en) 2017-12-29

Family

ID=60734797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610448961.8A Pending CN107526994A (en) 2016-06-21 2016-06-21 Information processing method, device and mobile terminal

Country Status (2)

Country Link
CN (1) CN107526994A (en)
WO (1) WO2017219450A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509041A (en) * 2018-03-29 2018-09-07 百度在线网络技术(北京)有限公司 Method and apparatus for executing operation
CN108804893A (en) * 2018-03-30 2018-11-13 百度在线网络技术(北京)有限公司 A kind of control method, device and server based on recognition of face
CN108830265A (en) * 2018-08-29 2018-11-16 奇酷互联网络科技(深圳)有限公司 Method, communication terminal and the storage device that mood in internet exchange is reminded
CN109192050A (en) * 2018-10-25 2019-01-11 重庆鲁班机器人技术研究院有限公司 Experience type language teaching method, device and educational robot
CN109343919A (en) * 2018-08-30 2019-02-15 深圳市口袋网络科技有限公司 A kind of rendering method and terminal device, storage medium of bubble of chatting

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111773676A (en) * 2020-07-23 2020-10-16 网易(杭州)网络有限公司 Method and device for determining virtual role action
CN114125145B (en) * 2021-10-19 2022-11-18 华为技术有限公司 Method for unlocking display screen, electronic equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101789990A (en) * 2009-12-23 2010-07-28 宇龙计算机通信科技(深圳)有限公司 Method and mobile terminal for judging emotion of opposite party in conservation process
WO2013085193A1 (en) * 2011-12-06 2013-06-13 경북대학교 산학협력단 Apparatus and method for enhancing user recognition
CN103309449A (en) * 2012-12-17 2013-09-18 广东欧珀移动通信有限公司 Mobile terminal and method for automatically switching wall paper based on facial expression recognition
CN104091153A (en) * 2014-07-03 2014-10-08 苏州工业职业技术学院 Emotion judgment method applied to chatting robot
CN104900007A (en) * 2015-06-19 2015-09-09 四川分享微联科技有限公司 Monitoring watch triggering wireless alarm based on voice
CN105549841A (en) * 2015-12-02 2016-05-04 小天才科技有限公司 Voice interaction method, device and equipment


Also Published As

Publication number Publication date
WO2017219450A1 (en) 2017-12-28

Similar Documents

Publication Publication Date Title
CN107526994A (en) A kind of information processing method, device and mobile terminal
TWI714834B (en) Human face live detection method, device and electronic equipment
TWI661363B (en) Smart robot and human-computer interaction method
CN110110715A (en) Text detection model training method, text filed, content determine method and apparatus
US11914787B2 (en) Method for dynamic interaction and electronic device thereof
CN107358241A (en) Image processing method, device, storage medium and electronic equipment
CN103353935A (en) 3D dynamic gesture identification method for intelligent home system
CN103745235A (en) Human face identification method, device and terminal device
CN108198159A (en) A kind of image processing method, mobile terminal and computer readable storage medium
CN105279492A (en) Iris identification method and device
CN104143097A (en) Classification function obtaining method and device, face age recognition method and device and equipment
KR102203720B1 (en) Method and apparatus for speech recognition
CN108388553A (en) Talk with method, electronic equipment and the conversational system towards kitchen of disambiguation
CN104965589A (en) Human living body detection method and device based on human brain intelligence and man-machine interaction
CN102890777A (en) Computer system capable of identifying facial expressions
CN108318042A (en) Navigation mode-switching method, device, terminal and storage medium
CN105677636A (en) Information processing method and device for intelligent question-answering system
CN111382655A (en) Hand-lifting behavior identification method and device and electronic equipment
CN110866962B (en) Virtual portrait and expression synchronization method based on convolutional neural network
CN109877834A (en) Multihead display robot, method and apparatus, display robot and display methods
US11819996B2 (en) Expression feedback method and smart robot
CN104317392A (en) Information control method and electronic equipment
US20160351185A1 (en) Voice recognition device and method
CN106708950B (en) Data processing method and device for intelligent robot self-learning system
WO2020244160A1 (en) Terminal device control method and apparatus, computer device, and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20171229