Content of the invention
In order to solve the problems in the prior art that expression input is slow and the input process is complicated, embodiments of the present invention provide an expression input method and device. The technical scheme is as follows:
In a first aspect, an expression input method is provided, applied to an electronic device, the method including:
collecting an input signal through an input unit on the electronic device;
extracting an expression feature value from the input signal;
selecting the expression to be input from a feature database according to the extracted expression feature value, the feature database storing correspondences between different expression feature values and different expressions.
Optionally, the extracting an expression feature value from the input signal includes:
if the input signal includes an input signal in speech form, extracting an expression feature value in speech form from the input signal in speech form;
if the input signal includes an input signal in picture form, determining a face region from the input signal in picture form, and extracting an expression feature value in face form from the face region;
if the input signal includes an input signal in video form, extracting an expression feature value in gesture-track form from the input signal in video form.
Optionally, when the extracted expression feature value is any one of the expression feature value in speech form, the expression feature value in face form, and the expression feature value in gesture-track form, the selecting the expression to be input from the feature database according to the extracted expression feature value includes:
matching the extracted expression feature value against the expression feature values stored in the feature database;
taking the n expressions corresponding to the m expression feature values whose matching degree exceeds a predetermined threshold as candidate expressions, n ≥ m ≥ 1;
selecting at least one sorting criterion according to preset priorities and sorting the n candidate expressions, the sorting criterion including any one of historical usage count, most recent usage time, and the matching degree;
filtering out one candidate expression according to the sorting result as the expression to be input.
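The threshold-match, sort, and pick steps above can be sketched as follows. The feature encoding, the `match_degree` similarity function, and the sample usage statistics are illustrative assumptions of this sketch; the method itself only fixes the threshold/sort/select structure.

```python
# Sketch of the single-modality selection flow. The feature values are
# modeled as keyword tuples and similarity as set overlap; both are
# hypothetical stand-ins for the real feature representation.

def match_degree(extracted, stored):
    # Hypothetical similarity: overlap ratio of two keyword sets.
    common = len(set(extracted) & set(stored))
    return common / max(len(stored), 1)

def select_expression(extracted, feature_db, usage_stats, threshold=0.8):
    # 1. Match against every stored feature value; keep expressions whose
    #    matching degree exceeds the predetermined threshold.
    candidates = []
    for stored, expressions in feature_db.items():
        degree = match_degree(extracted, stored)
        if degree > threshold:
            candidates.extend((expr, degree) for expr in expressions)
    if not candidates:
        return None  # no match: the user would be prompted instead
    # 2. Sort by preset priority: matching degree, then historical usage
    #    count, then most recent usage time.
    candidates.sort(
        key=lambda c: (c[1], usage_stats[c[0]]["count"], usage_stats[c[0]]["last_used"]),
        reverse=True,
    )
    # 3. The top-ranked candidate is the expression to be input.
    return candidates[0][0]
```

Note that the user never chooses from the candidate list; the sort fully determines the result, which is the point of the claimed flow.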
Optionally, when the extracted expression feature values include the expression feature value in speech form and also include the expression feature value in face form or the expression feature value in gesture-track form, the selecting the expression to be input from the feature database according to the extracted expression feature values includes:
matching the extracted expression feature value in speech form against the first expression feature values stored in a first feature database;
obtaining the a first expression feature values whose matching degree exceeds a first threshold, a ≥ 1;
matching the extracted expression feature value in face form or in gesture-track form against the second expression feature values stored in a second feature database;
obtaining the b second expression feature values whose matching degree exceeds a second threshold, b ≥ 1;
taking the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values as candidate expressions, x ≥ a, y ≥ b;
selecting at least one sorting criterion according to preset priorities and sorting the candidate expressions, the sorting criterion including any one of repetition count, historical usage count, most recent usage time, and the matching degree;
filtering out one candidate expression according to the sorting result as the expression to be input;
wherein the feature database includes the first feature database and the second feature database, and the expression feature values include the first expression feature values and the second expression feature values.
Optionally, before the selecting the expression to be input from the feature database according to the extracted expression feature value, the method further includes:
collecting environment information around the electronic device, the environment information including at least one of time information, ambient volume information, ambient light intensity information, and ambient image information;
determining the current usage environment according to the environment information;
selecting, from at least one candidate feature database, the candidate feature database corresponding to the current usage environment as the feature database.
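The environment-driven selection above can be sketched as follows. The specific environment rules, thresholds, and database names are assumptions invented for illustration; the method only specifies that collected environment information determines which candidate feature database is used.

```python
# Illustrative sketch of selecting a candidate feature database by usage
# environment. The classification rules and environment names ("party",
# "night", "daily") are hypothetical.

def determine_environment(info):
    # Hypothetical rules: loud surroundings or late hours imply
    # different usage contexts.
    if info.get("volume_db", 0) > 70:
        return "party"
    if info.get("hour", 12) >= 23 or info.get("hour", 12) < 6:
        return "night"
    return "daily"

def choose_feature_database(info, candidate_databases):
    env = determine_environment(info)
    # Fall back to the default database when no dedicated one exists.
    return candidate_databases.get(env, candidate_databases["daily"])
```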
Optionally, the collecting an input signal through the input unit on the electronic device includes:
if the input signal includes the input signal in speech form, collecting the input signal in speech form through a microphone;
if the input signal includes the input signal in picture form or the input signal in video form, collecting the input signal in picture form or the input signal in video form through a camera.
Optionally, before the selecting the expression to be input from the feature database according to the extracted expression feature value, the method further includes:
for each expression, recording at least one training signal for training the expression;
extracting at least one training feature value from the at least one training signal;
taking the training feature value that occurs the most times as the expression feature value corresponding to the expression;
storing the correspondence between the expression and the expression feature value in the feature database.
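The training step above can be sketched as follows. The `extract_feature` function is a hypothetical stand-in for the real extractor; the sketch only shows the "most frequent training feature value wins" rule and the storage of the resulting correspondence.

```python
from collections import Counter

# Sketch of the training step: record several training signals per
# expression, extract a feature value from each, and keep the value that
# occurs most often as that expression's stored feature value.

def train_expression(expression, training_signals, extract_feature, feature_db):
    values = [extract_feature(s) for s in training_signals]
    # The most frequently occurring training feature value is chosen.
    best_value, _ = Counter(values).most_common(1)[0]
    # Store the correspondence between feature value and expression.
    feature_db[best_value] = expression
    return best_value
```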
Optionally, after the selecting the expression to be input from the feature database according to the extracted expression feature value, the method further includes:
directly displaying the expression to be input in an input box or a chat bar.
In a second aspect, an expression input device is provided, applied to an electronic device, the device including:
a signal collection module, configured to collect an input signal through an input unit on the electronic device;
a feature extraction module, configured to extract an expression feature value from the input signal;
an expression selection module, configured to select the expression to be input from a feature database according to the extracted expression feature value, the feature database storing correspondences between different expression feature values and different expressions.
Optionally, the feature extraction module includes: a first extraction unit, and/or a second extraction unit, and/or a third extraction unit;
the first extraction unit is configured to, if the input signal includes an input signal in speech form, extract an expression feature value in speech form from the input signal in speech form;
the second extraction unit is configured to, if the input signal includes an input signal in picture form, determine a face region from the input signal in picture form and extract an expression feature value in face form from the face region;
the third extraction unit is configured to, if the input signal includes an input signal in video form, extract an expression feature value in gesture-track form from the input signal in video form.
Optionally, when the extracted expression feature value is any one of the expression feature value in speech form, the expression feature value in face form, and the expression feature value in gesture-track form, the expression selection module includes: a feature matching unit, a candidate selection unit, an expression sorting unit, and an expression determining unit;
the feature matching unit is configured to match the extracted expression feature value against the expression feature values stored in the feature database;
the candidate selection unit is configured to take the n expressions corresponding to the m expression feature values whose matching degree exceeds a predetermined threshold as candidate expressions, n ≥ m ≥ 1;
the expression sorting unit is configured to select at least one sorting criterion according to preset priorities and sort the n candidate expressions, the sorting criterion including any one of historical usage count, most recent usage time, and the matching degree;
the expression determining unit is configured to filter out one candidate expression according to the sorting result as the expression to be input.
Optionally, when the extracted expression feature values include the expression feature value in speech form and also include the expression feature value in face form or the expression feature value in gesture-track form, the expression selection module includes: a first matching unit, a first obtaining unit, a second matching unit, a second obtaining unit, a candidate determining unit, a candidate sorting unit, and an expression selection unit;
the first matching unit is configured to match the extracted expression feature value in speech form against the first expression feature values stored in a first feature database;
the first obtaining unit is configured to obtain the a first expression feature values whose matching degree exceeds a first threshold, a ≥ 1;
the second matching unit is configured to match the extracted expression feature value in face form or in gesture-track form against the second expression feature values stored in a second feature database;
the second obtaining unit is configured to obtain the b second expression feature values whose matching degree exceeds a second threshold, b ≥ 1;
the candidate determining unit is configured to take the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values as candidate expressions, x ≥ a, y ≥ b;
the candidate sorting unit is configured to select at least one sorting criterion according to preset priorities and sort the candidate expressions, the sorting criterion including any one of repetition count, historical usage count, most recent usage time, and the matching degree;
the expression selection unit is configured to filter out one candidate expression according to the sorting result as the expression to be input;
wherein the feature database includes the first feature database and the second feature database, and the expression feature values include the first expression feature values and the second expression feature values.
Optionally, the device further includes:
an information collection module, configured to collect environment information around the electronic device, the environment information including at least one of time information, ambient volume information, ambient light intensity information, and ambient image information;
an environment determining module, configured to determine the current usage environment according to the environment information;
a feature selection module, configured to select, from at least one candidate feature database, the candidate feature database corresponding to the current usage environment as the feature database.
Optionally, the signal collection module includes: a voice collection unit, and/or an image collection unit;
the voice collection unit is configured to, if the input signal includes the input signal in speech form, collect the input signal in speech form through a microphone;
the image collection unit is configured to, if the input signal includes the input signal in picture form or the input signal in video form, collect the input signal in picture form or the input signal in video form through a camera.
Optionally, the device further includes:
a signal recording module, configured to, for each expression, record at least one training signal for training the expression;
a feature recording module, configured to extract at least one training feature value from the at least one training signal;
a feature selecting module, configured to take the training feature value that occurs the most times as the expression feature value corresponding to the expression;
a feature storage module, configured to store the correspondence between the expression and the expression feature value in the feature database.
Optionally, the device further includes:
an expression display module, configured to directly display the expression to be input in an input box or a chat bar.
The technical schemes provided by the embodiments of the present invention have the following beneficial effects:
an input signal is collected through an input unit on the electronic device, an expression feature value is extracted from the input signal, and the expression to be input is selected from a feature database according to the extracted expression feature value, the feature database storing correspondences between different expression feature values and different expressions. This solves the problems in the prior art that expression input is slow and the process is complicated, and achieves the effects of simplifying the expression input process and increasing the speed of expression input.
Specific embodiment
To make the objectives, technical solutions, and advantages of the present invention clearer, embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
In each embodiment of the present invention, the electronic device may be a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, a desktop computer, a smart television, or the like.
Referring to Fig. 1, which shows a flowchart of an expression input method provided by an embodiment of the present invention. This embodiment is illustrated with the expression input method applied to an electronic device. The expression input method includes the following steps:
Step 102: collect an input signal through an input unit on the electronic device.
Step 104: extract an expression feature value from the input signal.
Step 106: select the expression to be input from a feature database according to the extracted expression feature value, the feature database storing correspondences between different expression feature values and different expressions.
In summary, in the expression input method provided by this embodiment, an input signal is collected through an input unit on the electronic device, an expression feature value is extracted from the input signal, and the expression to be input is selected from a feature database according to the extracted expression feature value, the feature database storing correspondences between different expression feature values and different expressions. This solves the problems in the prior art that expression input is slow and the process is complicated, and achieves the effects of simplifying the expression input process and increasing the speed of expression input.
Referring to Fig. 2a, which shows a flowchart of an expression input method provided by another embodiment of the present invention. This embodiment is illustrated with the expression input method applied to an electronic device. The expression input method includes the following steps:
Step 201: determine whether the electronic device is in an automatic collection state or a manual collection state.
The electronic device determines whether it is in the automatic collection state or the manual collection state. The automatic collection state means that the electronic device automatically turns on the input unit to collect the input signal; the manual collection state means that the user turns on the input unit to collect the input signal.
Step 202: if the determination result is that the electronic device is in the automatic collection state, turn on the input unit.
If the determination result is that the electronic device is in the automatic collection state, the electronic device automatically turns on the input unit. The input unit includes a microphone and/or a camera. The input unit may be built into the electronic device or be an external input unit of the electronic device.
After the electronic device turns on the input unit, the following step 204 is executed.
Step 203: if the determination result is that the electronic device is in the manual collection state, detect whether the input unit is in an on state.
If the determination result is that the electronic device is in the manual collection state, the electronic device detects whether the input unit is in the on state. Since the manual collection state means that the user turns on the input unit to collect the input signal, the electronic device here detects whether the user has turned on the input unit. The user may turn on the input unit through a control such as a button or a switch.
When the input unit is a microphone, refer also to Fig. 2b, which shows a typical chat interface of an instant messaging application. A microphone button 22 is located in an input box 24. The user can keep the microphone in the on state by pressing and holding the microphone button 22; when the user releases the microphone button 22, the microphone is turned off.
If the detection result is yes, that is, the input unit is in the on state, the following step 204 is executed; if the detection result is no, that is, the input unit is not in the on state, the following steps are not executed.
Step 204: collect the input signal through the input unit on the electronic device.
Whether the electronic device is in the automatic collection state or the manual collection state, once the input unit is turned on, the electronic device collects the input signal through the input unit.
In a first possible implementation, if the input unit includes a microphone, the input signal in speech form is collected through the microphone. The input signal in speech form may be words spoken by the user, or a sound made by the user or another object.
In a second possible implementation, if the input unit includes a camera, the input signal in picture form or in video form is collected through the camera. The input signal in picture form may be the facial expression of the user; the input signal in video form may be a movement posture of the user, a gesture track of the user, or the like.
Step 205: extract an expression feature value from the input signal.
After the electronic device collects the input signal, it extracts an expression feature value from the input signal.
In a first possible implementation, if the input signal includes the input signal in speech form, the expression feature value in speech form is extracted from the input signal in speech form.
The electronic device may extract the expression feature value in speech form from the input signal in speech form through a data dimensionality reduction method or a feature value selection method. A data dimensionality reduction method is a commonly used method for analyzing high-dimensional signals such as speech or images in a more simplified and effective way: by reducing the dimensionality of the high-dimensional signal, data that does not reflect the essential characteristics of the signal can be removed. A feature value, that is, data capable of reflecting the essential characteristics of the input signal, can therefore be obtained through the dimensionality reduction method. In this embodiment, since the feature value in speech form is extracted from the input signal in speech form and is used in the expression input method provided by this embodiment, this feature value is referred to as an expression feature value.
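The dimensionality reduction idea above can be illustrated with a deliberately simple sketch: dimensions of the signal with little variance carry little of its essential character, so only the most variable dimensions are kept. A real system would use a method such as PCA; this stand-in only shows the principle of discarding non-essential data.

```python
# Simplified stand-in for dimensionality reduction: keep the k
# highest-variance dimensions of a set of signal frames and drop the
# rest. This is an illustration of the principle, not the patent's method.

def reduce_dimensions(samples, k):
    # samples: list of equal-length numeric vectors (one per signal frame).
    dims = len(samples[0])
    means = [sum(s[d] for s in samples) / len(samples) for d in range(dims)]
    variances = [
        sum((s[d] - means[d]) ** 2 for s in samples) / len(samples)
        for d in range(dims)
    ]
    # Keep the indices of the k highest-variance dimensions, in order.
    keep = sorted(sorted(range(dims), key=lambda d: variances[d], reverse=True)[:k])
    return [[s[d] for d in keep] for s in samples]
```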
In addition, the expression feature value may also be extracted from the input signal through a feature value selection method. The electronic device may preset at least one expression feature value and, after collecting the input signal, analyze the input signal to search for whether any preset expression feature value is present.
In this embodiment, assume the input signal in speech form collected by the electronic device through the microphone is "of course, no problem, haha"; after analyzing the input signal in speech form, the electronic device extracts from it the expression feature value in speech form "haha".
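The feature value selection method just described can be sketched as a simple search over preset values. The preset list and the assumption that the speech has already been converted to text are both simplifications of this sketch.

```python
# Sketch of the feature value selection method: a set of expression
# feature values is preset, and the collected speech (shown here after a
# hypothetical speech-to-text step) is searched for any of them.

PRESET_FEATURE_VALUES = ["haha", "sob", "wow"]  # illustrative presets

def select_feature_value(text, presets=PRESET_FEATURE_VALUES):
    # Return the first preset feature value found in the input signal.
    for value in presets:
        if value in text:
            return value
    return None
```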
In a second possible implementation, if the input signal includes the input signal in picture form, a face region is determined from the input signal in picture form, and the expression feature value in face form is extracted from the face region.
The electronic device may first determine the face region from the input signal in picture form through image recognition technology, and then extract the expression feature value in face form from the face region through the dimensionality reduction method or the feature value selection method.
For example, after a picture of the user's face is taken through the camera, the face region in the picture is determined, and the face region is then analyzed to extract from it an expression feature value in face form such as "happy", "sad", "crying", or "frantic".
In a third possible implementation, if the input signal includes the input signal in video form, the expression feature value in gesture-track form is extracted from the input signal in video form.
When the input signal is an input signal in video form collected by the electronic device over a period of time, such as the user's posture movements or gesture tracks, the electronic device can extract the expression feature value in gesture-track form from the input signal in video form.
Step 206: select the expression to be input from the feature database according to the extracted expression feature value.
Since correspondences between different expression feature values and different expressions are stored in the feature database, the electronic device selects the expression to be input according to the extracted expression feature value and the correspondences stored in the feature database, and then inserts the selected expression in the input box 24 for the user to send, or directly displays it in the chat bar 26.
Specifically, when the extracted expression feature value is any one of the expression feature value in speech form, the expression feature value in face form, and the expression feature value in gesture-track form, this step may include the following sub-steps:
(1) Match the extracted expression feature value against the expression feature values stored in the feature database.
The electronic device matches the extracted expression feature value against the expression feature values stored in the feature database. The expression feature values stored in the feature database are specific expression feature values (for example, an expression feature value in speech form recorded by a particular person), so the expression feature value extracted by the electronic device differs to some degree from the values stored in the feature database. The electronic device therefore matches the two and obtains a matching degree.
(2) Take the n expressions corresponding to the m expression feature values whose matching degree exceeds a predetermined threshold as candidate expressions, n ≥ m ≥ 1.
The electronic device takes the n expressions corresponding to the m expression feature values whose matching degree exceeds the predetermined threshold as candidate expressions, n ≥ m ≥ 1. One expression feature value corresponds to at least one expression. The predetermined threshold may be set in advance according to the actual situation, for example set to 80%.
In this embodiment, assume the candidate expressions obtained by the electronic device are: three expressions a, b, and c corresponding to an expression feature value whose matching degree is 98%, and one expression d corresponding to another expression feature value whose matching degree is 90%.
(3) Select at least one sorting criterion according to preset priorities and sort the n candidate expressions.
The electronic device selects at least one sorting criterion according to preset priorities and sorts the n candidate expressions; the sorting criterion includes any one of historical usage count, most recent usage time, and the matching degree. The priority order among the sorting criteria may be preset according to the actual situation, for example, from high to low: matching degree, historical usage count, most recent usage time. When the electronic device cannot filter out the expression to be input according to the first sorting criterion, it selects the second sorting criterion to continue filtering, and so on, until one candidate expression is finally filtered out as the expression to be input.
In this embodiment, after the electronic device first sorts the four expressions a, b, c, and d by matching degree, it obtains a, b, c, and d in order and finds that the three expressions a, b, and c all have a matching degree of 98%. The electronic device then sorts the three expressions a, b, and c by historical usage count and obtains b, a, and c in order (assuming the sorting rule is from the most historical usages to the fewest, and the historical usage count of expression a is 15, that of expression b is 20, and that of expression c is 3). The electronic device now finds that expression b has the highest historical usage count, and therefore selects expression b as the expression to be input.
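The fallback through successive sorting criteria can be sketched as follows; the sample statistics mirror the a/b/c/d example above, and the scoring dictionaries are an illustrative encoding.

```python
# Sketch of the priority fallback: apply each sorting criterion in
# priority order, narrowing the candidate pool to the tied leaders, and
# stop as soon as one criterion yields a unique leader.

def pick_by_priority(candidates, criteria):
    # criteria: list of dicts mapping candidate -> score, higher is better,
    # ordered from the highest-priority criterion to the lowest.
    pool = list(candidates)
    for scores in criteria:
        best = max(scores[c] for c in pool)
        pool = [c for c in pool if scores[c] == best]
        if len(pool) == 1:
            return pool[0]
    return pool[0]  # still tied after all criteria: take the first
```

With the example's numbers, matching degree leaves a, b, and c tied at 98%, and historical usage count then singles out b.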
(4) Filter out one candidate expression according to the sorting result as the expression to be input.
The electronic device filters out one candidate expression according to the sorting result as the expression to be input. In the expression input method provided by the embodiments of the present invention, the electronic device automatically selects one candidate expression from multiple candidate expressions as the expression to be input, without requiring the user to choose or confirm, which simplifies the expression input flow and makes expression input more efficient and convenient.
When the extracted expression feature values include the expression feature value in speech form and also include the expression feature value in face form or the expression feature value in gesture-track form, this step may include the following sub-steps:
(1) Match the extracted expression feature value in speech form against the first expression feature values stored in the first feature database.
Unlike the above manner of selecting the expression to be input, here the electronic device comprehensively analyzes expression feature values in two forms to determine the expression to be input, so that the selected expression is more accurate and better meets the user's needs.
The electronic device matches the extracted expression feature value in speech form against the first expression feature values stored in the first feature database and, likewise, obtains the matching degree between the extracted expression feature value in speech form and the first expression feature values stored in the first feature database. In this embodiment, assume the expression feature value in speech form extracted by the electronic device is "haha".
(2) Obtain the a first expression feature values whose matching degree exceeds the first threshold, a ≥ 1.
The electronic device obtains the a first expression feature values whose matching degree exceeds the first threshold, a ≥ 1. In this embodiment, assume a = 1.
(3) Match the extracted expression feature value in face form or in gesture-track form against the second expression feature values stored in the second feature database.
The electronic device matches the extracted expression feature value in face form or in gesture-track form against the second expression feature values stored in the second feature database. In this embodiment, assume the expression feature value in face form extracted by the electronic device is a laughing facial expression.
(4) Obtain the b second expression feature values whose matching degree exceeds the second threshold, b ≥ 1.
The electronic device obtains the b second expression feature values whose matching degree exceeds the second threshold, b ≥ 1. In this embodiment, assume b = 2.
(5) Take the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values as candidate expressions, x ≥ a, y ≥ b.
The electronic device takes the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values as candidate expressions, x ≥ a, y ≥ b. In this embodiment, assume the candidate expressions are: the three expressions "laugh", "smile", and "grin" corresponding to the one first expression feature value whose matching degree exceeds the first threshold, the "smile" expression corresponding to the first of the two second expression feature values whose matching degree exceeds the second threshold, and the "pout" expression corresponding to the second of the two second expression feature values whose matching degree exceeds the second threshold.
(6) Select at least one sorting criterion according to preset priorities and sort the candidate expressions.
The electronic device selects at least one sorting criterion according to preset priorities and sorts the candidate expressions; the sorting criterion includes any one of repetition count, historical usage count, most recent usage time, and the matching degree. The priority order among the sorting criteria may be preset according to the actual situation, for example, from high to low: repetition count, historical usage count, most recent usage time, matching degree. When the electronic device cannot filter out the expression to be input according to the first sorting criterion, it selects the second sorting criterion to continue filtering, and so on, until one candidate expression is finally filtered out as the expression to be input.
In this embodiment, assume the expressions "laugh", "smile", "grin", and "pout" are first sorted by repetition count; the "smile" expression is found to have the highest repetition count, so the "smile" expression is directly selected as the expression to be input.
(7) Select one candidate expression as the expression to be input according to the sorting result.
The electronic device selects one candidate expression as the expression to be input according to the sorting result. In the expression input method provided by this embodiment of the present invention, the electronic device automatically selects one candidate expression from the multiple candidates as the expression to be input, without requiring the user to choose or confirm, which simplifies the flow of expression input and makes expression input more efficient and convenient.
In addition, after the electronic device matches the extracted expression feature value against the expression feature values stored in the feature library, if no expression feature value whose matching degree exceeds the threshold is found, the user can be prompted that no matching result could be found, for example, in the form of a pop-up window.
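The screening flow of steps (3)–(7) above — matching against two feature libraries, pooling the candidates, then ranking by the highest-priority sorting criterion — can be sketched as follows. This is a minimal illustration only: the function names, the example matches, and the tie-breaking behavior are assumptions for the sketch, not definitions from the embodiment.

```python
from collections import Counter

def pick_expression(first_matches, second_matches):
    """first_matches / second_matches: the x and y expressions whose feature
    values exceeded the first / second threshold (steps 3-4 and 5).
    Screens by the highest-priority criterion, repetition count (step 6),
    and returns the single winning expression (step 7)."""
    candidates = first_matches + second_matches      # pooled candidate expressions
    counts = Counter(candidates)                     # repetition count per expression
    best, _ = counts.most_common(1)[0]
    # A fuller implementation would fall through to historical usage count,
    # most recent usage time, and matching degree when this criterion ties.
    return best
```

With the example of this embodiment, `pick_expression(["laugh", "smile", "snagging"], ["smile", "pout"])` returns `"smile"`, since "smile" appears in both candidate lists and thus has the highest repetition count.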
Step 207: directly display the expression to be input in the input box or the chat panel.
After the electronic device chooses the expression to be input from the feature library, the expression to be input is directly displayed in the input box or the chat panel. With reference to Fig. 2b, the electronic device may insert the selected expression into the input box 24 for the user to send, or directly display it in the chat panel 26.
It should be noted that the expression input method provided by this embodiment can also select expressions in combination with the environment in which the electronic device is located. Specifically, before step 206 above, the following steps may also be included:
(1) Collect environmental information around the electronic device.
The electronic device collects surrounding environmental information, which includes at least one of time information, ambient volume information, ambient light intensity information, and ambient image information. Ambient volume information can be collected by a microphone, ambient light intensity information can be collected by a light intensity sensor, and ambient image information can be collected by a camera.
(2) Determine the current usage environment according to the environmental information.
The electronic device determines the current usage environment according to the environmental information. After collecting the surrounding environmental information, the electronic device comprehensively analyzes each piece of environmental information to determine the current usage environment. For example, when the time information is 22:00, the ambient volume is 2 decibels, and the ambient light intensity is very weak, it may be determined that the current usage environment is the user sleeping. As another example, when the time information is 14:00, the ambient volume is 75 decibels, the ambient light intensity is relatively strong, and the ambient image shows a street, it may be determined that the current usage environment is the user out shopping.
(3) Select, from at least one candidate feature library, the candidate feature library corresponding to the current usage environment as the feature library.
The electronic device prestores correspondences between different usage environments and different candidate feature libraries. After the electronic device obtains the current usage environment, it selects the corresponding candidate feature library as the feature library. Afterwards, the electronic device chooses the expression to be input from this feature library according to the extracted expression feature value.
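The three environment steps above can be sketched in code. The classification rules, thresholds, and library names below are invented for illustration and are not specified by the embodiment; a real device would combine whatever sensor inputs are available.

```python
# Hypothetical mapping from usage environment to candidate feature library.
CANDIDATE_LIBRARIES = {
    "sleep": "quiet_expressions",
    "shopping": "outdoor_expressions",
    "default": "general_expressions",
}

def determine_environment(hour, volume_db, light_level):
    """Step (2): combine time, ambient volume, and light intensity
    into a usage environment. Rules here are illustrative only."""
    if hour >= 22 and volume_db < 10 and light_level == "weak":
        return "sleep"
    if 12 <= hour <= 18 and volume_db > 60 and light_level == "strong":
        return "shopping"
    return "default"

def select_feature_library(hour, volume_db, light_level):
    """Steps (1)-(3): pick the candidate feature library for the environment."""
    env = determine_environment(hour, volume_db, light_level)
    return CANDIDATE_LIBRARIES[env]
```

Under these assumed rules, the two examples of this embodiment map to `"quiet_expressions"` (22:00, 2 dB, weak light) and `"outdoor_expressions"` (14:00, 75 dB, strong light).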
It should also be noted that the correspondences between different expression feature values and different expressions stored in the feature library can be set in advance by the system or by a designer. For example, when the user installs an expression pack, the pack already carries a feature library: after designing the expressions, the designer also sets the correspondences between different expression feature values and different expressions, creates the feature library, and then packages the expressions together with the feature library into the expression pack. In addition, the correspondences between different expression feature values and different expressions stored in the feature library can also be set by the user. When they are set by the user, the expression input method provided by this embodiment further includes the following steps:
First, for each expression, record at least one training signal for training that expression.
For each expression, the electronic device records at least one training signal used to train that expression. The user can train expressions so that the correspondences between different expression feature values and different expressions are user-defined. For example, the user chooses four commonly used expressions from an expression selection interface: expression a, expression b, expression c, and expression d. Taking the training of expression a as an example, the user selects expression a and repeats "snagging" three times, and the electronic device records these three training signals.
Of course, the electronic device still collects and records the training signals through an input unit such as a microphone or a camera.
Second, extract at least one training feature value from the at least one training signal.
The electronic device extracts at least one training feature value from the at least one training signal. As in step 205 above, the electronic device can extract training feature values from the training signals by a data mining method or a feature value selection method. A training signal can be a training signal in speech form, in image form, or in video form.
Third, take the training feature value that occurs most often as the expression feature value corresponding to the expression.
The electronic device takes the training feature value that occurs most often as the expression feature value corresponding to the expression. When the training signals recorded by the electronic device are identical, the training feature values extracted from them are generally identical. For example, when the three training signals recorded by the electronic device are all the user saying "snagging", the three extracted training feature values are usually all "snagging".
However, when the electronic device collects training signals through an input unit such as a microphone or a camera, there may be interference from the surroundings, such as noise or image interference, in which case the training feature values extracted from the training signals may differ. The electronic device therefore takes the training feature value that occurs most often as the expression feature value corresponding to the expression. For example, when the three training signals recorded by the electronic device are the user saying "snagging", and two of the three extracted training feature values are "snagging" while the other is a different value, the electronic device chooses "snagging" as the expression feature value corresponding to expression a.
Fourth, store the correspondence between the expression and the expression feature value in the feature library.
The electronic device stores the correspondence between the expression and the expression feature value in the feature library. In practical applications, the trained correspondence can be stored in the original feature library; alternatively, the user can create a user-defined feature library and store the trained correspondence in that user-defined feature library.
Through the above four steps, the correspondences between expressions and expression feature values can be set by the user, further improving the user experience.
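The four training steps above amount to a majority vote over the extracted training feature values. A minimal sketch follows; the function signature is invented, and feature extraction itself is assumed to have already happened (the actual data-mining or feature-selection method is not specified here):

```python
from collections import Counter

def train_expression(expression, training_values, feature_library):
    """training_values: feature values already extracted from the recorded
    training signals (steps 1-2). Takes the most frequent value (step 3)
    and stores the value -> expression correspondence (step 4)."""
    most_common_value, _ = Counter(training_values).most_common(1)[0]
    feature_library[most_common_value] = expression
    return most_common_value
```

For the example of this embodiment, training expression a with the values `["snagging", "snagging", "other"]` (one value corrupted by noise) stores "snagging" as the expression feature value for expression a.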
It should also be noted that, so that the expression input method provided by this embodiment can distinguish when the user needs to input an expression, a step of detecting whether the cursor is located in the input box may be performed before step 201. The cursor indicates the position at which the user inputs content such as text, expressions, or pictures. With reference to Fig. 2b, the cursor 28 is located in the input box 24. The electronic device detects, according to the position of the cursor 28, whether the user is currently using the input box 24 to input content such as text, expressions, or pictures. When the cursor 28 is located in the input box 24, it is assumed by default that the user is currently using the input box 24, and step 201 above is then executed.
In summary, in the expression input method provided by this embodiment, an input signal is collected by an input unit on the electronic device, an expression feature value is extracted from the input signal, and the expression to be input is chosen from the feature library according to the extracted expression feature value, the feature library storing correspondences between different expression feature values and different expressions. This solves the prior-art problems of slow and complicated expression input, simplifies the expression input process, and improves the speed of expression input.
In addition, the input signal in speech form is collected by a microphone, or the input signal in image or video form is collected by a camera, and expression input is then performed on that basis, which enriches the ways of inputting expressions; furthermore, the user can set the correspondences between different expression feature values and different expressions, fully meeting the user's needs.
Moreover, the above embodiment provides two ways of choosing the expression to be input. The first way determines the expression to be input by analyzing an expression feature value in a single form, which is relatively simple and fast; the second way determines the expression to be input by comprehensively analyzing expression feature values in two forms, which makes the chosen expression more accurate and fully meets the user's needs.
In a specific example, Xiao Ming opens application software with a messaging function installed on a smart television, and at the same time turns on the front camera of the smart television to capture pictures of his face region. The corners of Xiao Ming's mouth turn up slightly, showing a smiling expression. The smart television extracts an expression feature value from the captured picture of the face region and, after finding the correspondence between the expression feature value and an expression in the feature library, inserts a smiling expression into the input box of the chat interface. Afterwards, when Xiao Ming shows a sad expression, the smart television inserts a sad expression into the input box of the chat interface.
In another specific example, Xiao Hong uses an instant messaging application installed on her mobile phone and, by training expressions, sets several correspondences between expression feature values and expressions herself. Afterwards, during a chat, when the phone receives an input signal in speech form saying "I'm so happy today", it inserts the corresponding expression into the input box of the chat interface according to the correspondence between the expression feature value "happy" and that expression; when the phone receives an input signal in speech form saying "it's snowing outside", it inserts the corresponding expression into the input box of the chat interface according to the correspondence between the expression feature value "snowing" and that expression; and when the phone receives an input signal in speech form saying "this snow is so beautiful, I really like it", it inserts the corresponding expression into the input box of the chat interface according to the correspondence between the expression feature value "like" and that expression.
The following are apparatus embodiments of the present invention, which can be used to carry out the method embodiments of the present invention. For details not disclosed in the apparatus embodiments, please refer to the method embodiments of the present invention.
Referring to Fig. 3, which shows a block diagram of an expression input apparatus provided by an embodiment of the present invention, the expression input apparatus is used in an electronic device. The expression input apparatus can be implemented, through software, hardware, or a combination of the two, as all or part of the electronic device. The expression input apparatus includes: a signal acquisition module 310, a feature extraction module 320, and an expression selection module 330.
The signal acquisition module 310 is configured to collect an input signal through an input unit on the electronic device.
The feature extraction module 320 is configured to extract an expression feature value from the input signal.
The expression selection module 330 is configured to choose the expression to be input from a feature library according to the extracted expression feature value, the feature library storing correspondences between different expression feature values and different expressions.
In summary, the expression input apparatus provided by this embodiment collects an input signal through an input unit on the electronic device, extracts an expression feature value from the input signal, and chooses the expression to be input from the feature library according to the extracted expression feature value, the feature library storing correspondences between different expression feature values and different expressions. This solves the prior-art problems of slow and complicated expression input, simplifies the expression input process, and improves the speed of expression input.
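As a rough illustration, the cooperation of the three modules of Fig. 3 can be sketched in code. This is a hypothetical sketch only: the class name, the stand-in signal format, and the library contents are invented and do not come from the embodiment.

```python
class ExpressionInputApparatus:
    """Minimal sketch of the pipeline formed by modules 310, 320, and 330."""

    def __init__(self, feature_library):
        self.feature_library = feature_library     # feature value -> expression

    def acquire_signal(self):                      # signal acquisition module 310
        return "voice:haha"                        # stand-in for a collected input signal

    def extract_feature(self, signal):             # feature extraction module 320
        return signal.split(":", 1)[1]             # stand-in feature value extraction

    def choose_expression(self, feature_value):    # expression selection module 330
        return self.feature_library.get(feature_value)

    def input_expression(self):
        """Run the full pipeline: acquire -> extract -> choose."""
        return self.choose_expression(self.extract_feature(self.acquire_signal()))
```

With a toy library `{"haha": "laugh"}`, `input_expression()` returns `"laugh"`.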
Referring to Fig. 4, which shows a block diagram of an expression input apparatus provided by another embodiment of the present invention, the expression input apparatus is used in an electronic device. The expression input apparatus can be implemented, through software, hardware, or a combination of the two, as all or part of the electronic device. The expression input apparatus includes: a signal acquisition module 310, a feature extraction module 320, an information acquisition module 321, an environment determination module 322, a feature selection module 323, an expression selection module 330, and an expression display module 331.
The signal acquisition module 310 is configured to collect an input signal through an input unit on the electronic device.
Specifically, the signal acquisition module 310 includes: a voice acquisition unit 310a, and/or an image acquisition unit 310b.
The voice acquisition unit 310a is configured to, if the input signal includes an input signal in speech form, collect the input signal in speech form through a microphone.
The image acquisition unit 310b is configured to, if the input signal includes an input signal in image form or an input signal in video form, collect the input signal in image form or the input signal in video form through a camera.
The feature extraction module 320 is configured to extract an expression feature value from the input signal.
Specifically, the feature extraction module 320 includes: a first extraction unit 320a, and/or a second extraction unit 320b, and/or a third extraction unit 320c.
The first extraction unit 320a is configured to, if the input signal includes an input signal in speech form, extract an expression feature value in speech form from the input signal in speech form.
The second extraction unit 320b is configured to, if the input signal includes an input signal in image form, determine a face region from the input signal in image form and extract an expression feature value in face form from the face region.
The third extraction unit 320c is configured to, if the input signal includes an input signal in video form, extract an expression feature value in attitude track form from the input signal in video form.
Optionally, the expression input apparatus further includes: an information acquisition module 321, an environment determination module 322, and a feature selection module 323.
The information acquisition module 321 is configured to collect environmental information around the electronic device, the environmental information including at least one of time information, ambient volume information, ambient light intensity information, and ambient image information.
The environment determination module 322 is configured to determine the current usage environment according to the environmental information.
The feature selection module 323 is configured to select, from at least one candidate feature library, the candidate feature library corresponding to the current usage environment as the feature library.
The expression selection module 330 is configured to choose the expression to be input from the feature library according to the extracted expression feature value, the feature library storing correspondences between different expression feature values and different expressions.
When the extracted expression feature value is any one of the expression feature value in speech form, the expression feature value in face form, and the expression feature value in attitude track form, the expression selection module 330 includes: a feature matching unit 330a, a candidate selection unit 330b, an expression sorting unit 330c, and an expression determination unit 330d.
The feature matching unit 330a is configured to match the extracted expression feature value against the expression feature values stored in the feature library.
The candidate selection unit 330b is configured to take, as candidate expressions, the n expressions corresponding to the m expression feature values whose matching degree is greater than a predetermined threshold, n ≥ m ≥ 1.
The expression sorting unit 330c is configured to select at least one sorting criterion according to a preset priority and sort the n candidate expressions, the sorting criteria including any one of historical usage count, most recent usage time, and matching degree.
The expression determination unit 330d is configured to select one candidate expression as the expression to be input according to the sorting result.
When the extracted expression feature value includes the expression feature value in speech form and also includes the expression feature value in face form or the expression feature value in attitude track form, the expression selection module 330 includes: a first matching unit 330e, a first acquisition unit 330f, a second matching unit 330g, a second acquisition unit 330h, a candidate determination unit 330i, a candidate sorting unit 330j, and an expression selection unit 330k.
The first matching unit 330e is configured to match the extracted expression feature value in speech form against the first expression feature values stored in a first feature library.
The first acquisition unit 330f is configured to obtain a first expression feature values whose matching degree is greater than a first threshold, a ≥ 1.
The second matching unit 330g is configured to match the extracted expression feature value in face form or the extracted expression feature value in attitude track form against the second expression feature values stored in a second feature library.
The second acquisition unit 330h is configured to obtain b second expression feature values whose matching degree is greater than a second threshold, b ≥ 1.
The candidate determination unit 330i is configured to take, as candidate expressions, the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values, x ≥ a, y ≥ b.
The candidate sorting unit 330j is configured to select at least one sorting criterion according to a preset priority and sort the candidate expressions, the sorting criteria including any one of number of repetitions, historical usage count, most recent usage time, and matching degree.
The expression selection unit 330k is configured to select one candidate expression as the expression to be input according to the sorting result.
The feature library includes the first feature library and the second feature library, and the expression feature values include the first expression feature values and the second expression feature values.
The expression display module 331 is configured to directly display the expression to be input in the input box or the chat panel.
Optionally, the expression input apparatus further includes: a signal recording module, a feature recording module, a feature selecting module, and a feature storage module.
The signal recording module is configured to record, for each expression, at least one training signal for training that expression.
The feature recording module is configured to extract at least one training feature value from the at least one training signal.
The feature selecting module is configured to take the training feature value that occurs most often as the expression feature value corresponding to the expression.
The feature storage module is configured to store the correspondence between the expression and the expression feature value in the feature library.
In summary, the expression input apparatus provided by this embodiment collects an input signal through an input unit on the electronic device, extracts an expression feature value from the input signal, and chooses the expression to be input from the feature library according to the extracted expression feature value, the feature library storing correspondences between different expression feature values and different expressions. This solves the prior-art problems of slow and complicated expression input, simplifies the expression input process, and improves the speed of expression input. In addition, the input signal in speech form is collected by a microphone, or the input signal in image or video form is collected by a camera, and expression input is then performed on that basis, which enriches the ways of inputting expressions; furthermore, the user can set the correspondences between different expression feature values and different expressions, fully meeting the user's needs.
It should be understood that, when the expression input apparatus provided by the above embodiment inputs an expression, the division into the above functional modules is only used as an example; in practical applications, the above functions can be assigned to different functional modules as needed, that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above. In addition, the expression input apparatus provided by the above embodiment belongs to the same concept as the method embodiments of the expression input method; for its specific implementation process, refer to the method embodiments, which will not be repeated here.
It should be appreciated that, as used herein, unless the context clearly supports an exception, the singular forms "a", "an" and "the" are intended to include the plural forms as well. It should be further understood that "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
A person of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments can be implemented by hardware, or can be implemented by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.