CN103823561B - expression input method and device - Google Patents

expression input method and device

Info

Publication number
CN103823561B
CN103823561B (application CN201410069166.9A)
Authority
CN
China
Prior art keywords
expression
expressive features value
input
input signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410069166.9A
Other languages
Chinese (zh)
Other versions
CN103823561A (en)
Inventor
陈超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Huaduo Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huaduo Network Technology Co Ltd
Priority to CN201410069166.9A
Publication of CN103823561A
Priority to PCT/CN2014/095872 (WO2015127825A1)
Application granted
Publication of CN103823561B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/011 Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an expression input method and device, belonging to the field of the Internet. The method includes: collecting an input signal through an input unit on an electronic device; extracting an expression feature value from the input signal; and choosing, according to the extracted expression feature value, the expression that needs to be input from a feature database, the feature database storing correspondences between different expression feature values and different expressions. The method solves the problems in the prior art that expression input is slow and the input process is complicated; it simplifies the expression input process and increases the speed of expression input.

Description

Expression input method and device
Technical field
The present invention relates to the field of the Internet, and in particular to an expression input method and device.
Background technology
With the popularization of IM (instant messaging) applications, blogs, and SMS (Short Message Service) applications, users increasingly rely on these applications with message sending and receiving functions to communicate and keep in contact with one another.
When communicating through the above applications, a user often needs to input expressions (emoticons) to convey a particular meaning or to enrich the input content and make it more interesting. In a typical implementation, when one user needs to input an expression, that user opens an expression selection interface, chooses the desired expression from it, and sends the chosen expression to the other user. Correspondingly, the other user receives and views the expression sent by the first user.
In the process of realizing the present invention, the inventor found that the prior art has at least the following problem: in order to meet user demand as far as possible, an application usually contains tens or even hundreds of expressions for the user to choose from. When the expression selection interface contains many expressions, the expressions must be displayed by category and/or across multiple pages. To input an expression, the user must first find the category and/or page containing the desired expression, and then choose the expression from it. As a result, the speed of expression input is very slow, and the complexity of the expression input process is increased.
Summary of the invention
In order to solve the problems in the prior art that expression input is slow and the input process is complicated, embodiments of the present invention provide an expression input method and device. The technical scheme is as follows:
In a first aspect, an expression input method is provided, applied to an electronic device. The method includes:
collecting an input signal through an input unit on the electronic device;
extracting an expression feature value from the input signal;
choosing, according to the extracted expression feature value, the expression that needs to be input from a feature database, the feature database storing correspondences between different expression feature values and different expressions.
Optionally, extracting the expression feature value from the input signal includes:
if the input signal includes an input signal in speech form, extracting an expression feature value in speech form from the speech-form input signal;
if the input signal includes an input signal in picture form, determining a face region from the picture-form input signal, and extracting an expression feature value in face form from the face region;
if the input signal includes an input signal in video form, extracting an expression feature value in gesture-track form from the video-form input signal.
Optionally, when the extracted expression feature value is any one of the speech-form feature value, the face-form feature value, and the gesture-track-form feature value, choosing the expression that needs to be input from the feature database according to the extracted expression feature value includes:
matching the extracted expression feature value against the expression feature values stored in the feature database;
taking the n expressions corresponding to the m expression feature values whose matching degree exceeds a predetermined threshold as candidate expressions, n ≥ m ≥ 1;
sorting the n candidate expressions according to at least one sorting condition chosen by preset priority, the sorting condition including any one of history use count, most recent use time, and the matching degree;
selecting one candidate expression according to the sorting result as the expression that needs to be input.
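A minimal sketch of this matching-and-ranking flow is given below. The feature library contents, the cosine-similarity matcher, and the threshold are all invented for illustration; the patent does not prescribe a particular matching function.

```python
# Hypothetical sketch: match an extracted feature vector against stored
# feature values, keep matches above a threshold, and rank the resulting
# candidate expressions by matching degree, then by history use count.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Feature library: stored feature value -> expressions it corresponds to.
FEATURE_LIBRARY = {
    (1.0, 0.0, 0.2): ["smile"],
    (0.9, 0.1, 0.3): ["grin", "laugh"],
    (0.0, 1.0, 0.0): ["cry"],
}

def choose_expression(extracted, threshold=0.8, history_counts=None):
    history_counts = history_counts or {}
    candidates = []
    for stored, expressions in FEATURE_LIBRARY.items():
        degree = cosine_similarity(extracted, stored)
        if degree > threshold:               # keep the m feature values above the threshold
            for expression in expressions:   # their n expressions become candidates
                candidates.append((expression, degree))
    # Sorting conditions by preset priority: matching degree, then history use count.
    candidates.sort(key=lambda c: (c[1], history_counts.get(c[0], 0)), reverse=True)
    return candidates[0][0] if candidates else None

print(choose_expression((0.95, 0.05, 0.25)))  # -> smile
```

When the top matching degrees tie, the history use count in the sort key acts as the second sorting condition, mirroring the priority order described above.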
Optionally, when the extracted expression feature values include the speech-form feature value and also include the face-form feature value or the gesture-track-form feature value, choosing the expression that needs to be input from the feature database according to the extracted expression feature values includes:
matching the extracted speech-form expression feature value against the first expression feature values stored in a first feature database;
obtaining the a first expression feature values whose matching degree exceeds a first threshold, a ≥ 1;
matching the extracted face-form or gesture-track-form expression feature value against the second expression feature values stored in a second feature database;
obtaining the b second expression feature values whose matching degree exceeds a second threshold, b ≥ 1;
taking the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values as candidate expressions, x ≥ a, y ≥ b;
sorting the candidate expressions according to at least one sorting condition chosen by preset priority, the sorting condition including any one of repetition count, history use count, most recent use time, and the matching degree;
selecting one candidate expression according to the sorting result as the expression that needs to be input;
wherein the feature database includes the first feature database and the second feature database, and the expression feature values include the first expression feature values and the second expression feature values.
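The two-library variant above can be sketched as follows. For simplicity the sketch matches on exact keys rather than computing a matching degree, and uses only the repetition count (how many libraries proposed an expression) as the sorting condition; all library contents and names are hypothetical.

```python
# Hypothetical sketch of the two-library flow: voice features are matched
# against a first library, face (or gesture-track) features against a
# second, and the merged candidate set is ranked by repetition count.
from collections import Counter

FIRST_LIBRARY = {"haha": ["laugh"], "sigh": ["sad"]}               # voice key -> expressions
SECOND_LIBRARY = {"smiling-face": ["laugh", "smile"], "frown": ["sad"]}

def choose_expression(voice_key, face_key):
    candidates = FIRST_LIBRARY.get(voice_key, []) + SECOND_LIBRARY.get(face_key, [])
    if not candidates:
        return None
    repetition = Counter(candidates)   # how many libraries proposed each expression
    # Highest repetition count wins; ties broken alphabetically for determinism.
    return min(repetition, key=lambda e: (-repetition[e], e))

print(choose_expression("haha", "smiling-face"))  # -> laugh (proposed by both libraries)
```

An expression proposed by both the voice match and the face match repeats in the candidate set, which is what gives the repetition-count sorting condition its discriminating power.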
Optionally, before choosing the expression that needs to be input from the feature database according to the extracted expression feature value, the method further includes:
collecting environment information around the electronic device, the environment information including at least one of time information, ambient volume information, ambient light intensity information, and ambient image information;
determining the current usage environment according to the environment information;
choosing, from at least one alternative feature database, the alternative feature database corresponding to the current usage environment as the feature database.
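One way this environment-dependent library selection could look in code. The environment rules, thresholds, and library names are invented for the example; the patent leaves them unspecified.

```python
# Illustrative sketch: derive a usage environment from sensed time and
# ambient volume, then pick the matching alternative feature library.
def determine_environment(hour, ambient_volume_db):
    if ambient_volume_db > 70:
        return "noisy"   # e.g. prefer image/video features over voice
    return "night" if hour >= 22 or hour < 6 else "day"

ALTERNATIVE_LIBRARIES = {
    "noisy": "face_feature_library",
    "night": "low_light_library",
    "day": "default_library",
}

def choose_library(hour, ambient_volume_db):
    return ALTERNATIVE_LIBRARIES[determine_environment(hour, ambient_volume_db)]

print(choose_library(23, 40))  # -> low_light_library
```

The real method could equally key on ambient light intensity or ambient images; only the mapping from sensed environment to a pre-built library is essential.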
Optionally, collecting the input signal through the input unit on the electronic device includes:
if the input signal includes the speech-form input signal, collecting the speech-form input signal through a microphone;
if the input signal includes the picture-form input signal or the video-form input signal, collecting the picture-form or video-form input signal through a camera.
Optionally, before choosing the expression that needs to be input from the feature database according to the extracted expression feature value, the method further includes:
for each expression, recording at least one training signal for training the expression;
extracting at least one training feature value from the at least one training signal;
taking the most frequently occurring training feature value as the expression feature value corresponding to the expression;
storing the correspondence between the expression and the expression feature value in the feature database.
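The training step above can be sketched as follows. `extract_feature` is a stand-in for the real signal analysis (here it just picks the dominant token of a text stand-in for the signal), and keeping the most frequently occurring training feature value approximates the selection rule described above.

```python
# Sketch of training: several training signals are recorded per expression,
# a feature value is extracted from each, and the most frequently occurring
# value is stored in the feature library as that expression's feature value.
from collections import Counter

def extract_feature(training_signal):
    tokens = training_signal.split()
    return max(tokens, key=tokens.count)  # dominant token as a toy "feature"

def train(expression, training_signals, feature_library):
    values = [extract_feature(s) for s in training_signals]
    most_common_value, _ = Counter(values).most_common(1)[0]
    feature_library[most_common_value] = expression   # store the correspondence
    return most_common_value

library = {}
train("laugh", ["haha so funny", "haha haha yes", "well haha"], library)
print(library)  # -> {'haha': 'laugh'}
```

Storing only the majority value makes the library robust to the occasional off-target training signal, which appears to be the point of recording several signals per expression.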
Optionally, after choosing the expression that needs to be input from the feature database according to the extracted expression feature value, the method further includes:
directly displaying the expression that needs to be input in an input box or a chat bar.
In a second aspect, an expression input device is provided, applied to an electronic device. The device includes:
a signal collecting module, configured to collect an input signal through an input unit on the electronic device;
a feature extracting module, configured to extract an expression feature value from the input signal;
an expression choosing module, configured to choose, according to the extracted expression feature value, the expression that needs to be input from a feature database, the feature database storing correspondences between different expression feature values and different expressions.
Optionally, the feature extracting module includes a first extracting unit, and/or a second extracting unit, and/or a third extracting unit;
the first extracting unit is configured to, if the input signal includes an input signal in speech form, extract an expression feature value in speech form from the speech-form input signal;
the second extracting unit is configured to, if the input signal includes an input signal in picture form, determine a face region from the picture-form input signal and extract an expression feature value in face form from the face region;
the third extracting unit is configured to, if the input signal includes an input signal in video form, extract an expression feature value in gesture-track form from the video-form input signal.
Optionally, when the extracted expression feature value is any one of the speech-form feature value, the face-form feature value, and the gesture-track-form feature value, the expression choosing module includes a feature matching unit, a candidate selecting unit, an expression sorting unit, and an expression determining unit;
the feature matching unit is configured to match the extracted expression feature value against the expression feature values stored in the feature database;
the candidate selecting unit is configured to take the n expressions corresponding to the m expression feature values whose matching degree exceeds a predetermined threshold as candidate expressions, n ≥ m ≥ 1;
the expression sorting unit is configured to sort the n candidate expressions according to at least one sorting condition chosen by preset priority, the sorting condition including any one of history use count, most recent use time, and the matching degree;
the expression determining unit is configured to select one candidate expression according to the sorting result as the expression that needs to be input.
Optionally, when the extracted expression feature values include the speech-form feature value and also include the face-form feature value or the gesture-track-form feature value, the expression choosing module includes a first matching unit, a first obtaining unit, a second matching unit, a second obtaining unit, a candidate determining unit, a candidate sorting unit, and an expression selecting unit;
the first matching unit is configured to match the extracted speech-form expression feature value against the first expression feature values stored in a first feature database;
the first obtaining unit is configured to obtain the a first expression feature values whose matching degree exceeds a first threshold, a ≥ 1;
the second matching unit is configured to match the extracted face-form or gesture-track-form expression feature value against the second expression feature values stored in a second feature database;
the second obtaining unit is configured to obtain the b second expression feature values whose matching degree exceeds a second threshold, b ≥ 1;
the candidate determining unit is configured to take the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values as candidate expressions, x ≥ a, y ≥ b;
the candidate sorting unit is configured to sort the candidate expressions according to at least one sorting condition chosen by preset priority, the sorting condition including any one of repetition count, history use count, most recent use time, and the matching degree;
the expression selecting unit is configured to select one candidate expression according to the sorting result as the expression that needs to be input;
wherein the feature database includes the first feature database and the second feature database, and the expression feature values include the first expression feature values and the second expression feature values.
Optionally, the device further includes:
an information collecting module, configured to collect environment information around the electronic device, the environment information including at least one of time information, ambient volume information, ambient light intensity information, and ambient image information;
an environment determining module, configured to determine the current usage environment according to the environment information;
a feature choosing module, configured to choose, from at least one alternative feature database, the alternative feature database corresponding to the current usage environment as the feature database.
Optionally, the signal collecting module includes a voice collecting unit and/or an image collecting unit;
the voice collecting unit is configured to, if the input signal includes the speech-form input signal, collect the speech-form input signal through a microphone;
the image collecting unit is configured to, if the input signal includes the picture-form input signal or the video-form input signal, collect the picture-form or video-form input signal through a camera.
Optionally, the device further includes:
a signal recording module, configured to, for each expression, record at least one training signal for training the expression;
a feature recording module, configured to extract at least one training feature value from the at least one training signal;
a feature selecting module, configured to take the most frequently occurring training feature value as the expression feature value corresponding to the expression;
a feature storing module, configured to store the correspondence between the expression and the expression feature value in the feature database.
Optionally, the device further includes:
an expression displaying module, configured to directly display the expression that needs to be input in an input box or a chat bar.
The technical schemes provided by the embodiments of the present invention have the following beneficial effects:
An input signal is collected through the input unit on the electronic device, an expression feature value is extracted from the input signal, and the expression that needs to be input is chosen from the feature database according to the extracted expression feature value, the feature database storing correspondences between different expression feature values and different expressions. This solves the problems in the prior art that expression input is slow and the input process is complicated, simplifying the expression input process and increasing the speed of expression input.
Brief description of the drawings
To illustrate the technical schemes in the embodiments of the present invention more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a method flowchart of the expression input method provided by one embodiment of the present invention;
Fig. 2a is a method flowchart of the expression input method provided by another embodiment of the present invention;
Fig. 2b is a schematic diagram of the chat interface of a typical instant messaging application;
Fig. 3 is a block diagram of the expression input device provided by one embodiment of the present invention;
Fig. 4 is a block diagram of the expression input device provided by another embodiment of the present invention.
Detailed description of the embodiments
To make the objects, technical schemes, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the drawings.
In the embodiments of the present invention, the electronic device may be a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, a desktop computer, a smart television, or the like.
Referring to Fig. 1, a method flowchart of the expression input method provided by one embodiment of the present invention is shown. This embodiment is illustrated by applying the expression input method to an electronic device. The expression input method includes the following steps:
Step 102: collect an input signal through the input unit on the electronic device.
Step 104: extract an expression feature value from the input signal.
Step 106: choose, according to the extracted expression feature value, the expression that needs to be input from the feature database, the feature database storing correspondences between different expression feature values and different expressions.
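Steps 102 to 106 can be illustrated as a minimal pipeline. The "input unit" is simulated by a plain string and the extraction step is a stand-in keyword spotter; the real method operates on audio, picture, or video signals.

```python
# Toy end-to-end pipeline for steps 102-106 (all data is illustrative).
FEATURE_DATABASE = {"haha": "laughing-face", "sigh": "sad-face"}  # value -> expression

def collect_input_signal():
    return "of course, no problem, haha"        # step 102: simulated microphone input

def extract_feature_value(signal):
    for value in FEATURE_DATABASE:              # step 104: spot a known feature value
        if value in signal:
            return value
    return None

def choose_expression(feature_value):
    return FEATURE_DATABASE.get(feature_value)  # step 106: look up the correspondence

print(choose_expression(extract_feature_value(collect_input_signal())))  # -> laughing-face
```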
In summary, the expression input method provided by this embodiment collects an input signal through the input unit on the electronic device, extracts an expression feature value from the input signal, and chooses the expression that needs to be input from the feature database according to the extracted expression feature value, the feature database storing correspondences between different expression feature values and different expressions. This solves the problems in the prior art that expression input is slow and the input process is complicated, simplifying the expression input process and increasing the speed of expression input.
Refer to Fig. 2 a, the method flow diagram of the expression input method of another embodiment of the present invention offer is provided, this Embodiment is applied to electronic equipment to illustrate with this expression input method.This expression input method includes several steps as follows Rapid:
Step 201, judges that electronic equipment is in automatic data collection state or manual acquisition state.
Electronic equipment judges that its own is in automatic data collection state or manual acquisition state.Wherein, automatic data collection state Refer to automatically turn on, by electronic equipment, the collection that input block carries out input signal;Manual acquisition state refers to be opened by user defeated Enter the collection that unit carries out input signal.
Step 202: if the determination result is that the electronic device is in the automatic collection state, turn on the input unit.
If the determination result is that the electronic device is in the automatic collection state, the electronic device automatically turns on the input unit. The input unit includes a microphone and/or a camera, and may be a unit built into the electronic device or an external unit connected to it.
After the electronic device turns on the input unit, the following step 204 is executed.
Step 203: if the determination result is that the electronic device is in the manual collection state, detect whether the input unit is turned on.
If the determination result is that the electronic device is in the manual collection state, the electronic device detects whether the input unit is turned on. Because the manual collection state means that the user turns on the input unit to collect the input signal, the electronic device detects whether the user has turned on the input unit. The user may turn on the input unit through a control such as a button or a switch.
When the input unit is a microphone, refer also to Fig. 2b, which shows the chat interface of a typical instant messaging application. A microphone button 22 is located in an input box 24. The microphone can be kept on by long-pressing the microphone button 22; when the user releases the microphone button 22, the microphone is turned off.
If the detection result is yes, that is, the input unit is turned on, the following step 204 is executed; if the detection result is no, that is, the input unit is not turned on, the following steps are not executed.
Step 204: collect the input signal through the input unit on the electronic device.
Whether the electronic device is in the automatic collection state or the manual collection state, once the input unit is turned on, the electronic device collects the input signal through the input unit.
In a first possible implementation, if the input unit includes a microphone, the speech-form input signal is collected through the microphone. The speech-form input signal may be words spoken by the user, or a sound made by the user or another object.
In a second possible implementation, if the input unit includes a camera, the picture-form or video-form input signal is collected through the camera. The picture-form input signal may be the facial expression of the user; the video-form input signal may be a body movement of the user, a gesture track of the user, or the like.
Step 205: extract an expression feature value from the input signal.
After collecting the input signal, the electronic device extracts an expression feature value from it.
In a first possible implementation, if the input signal includes the speech-form input signal, an expression feature value in speech form is extracted from the speech-form input signal.
The electronic device may extract the speech-form expression feature value from the speech-form input signal through a windowing (dimensionality reduction) method or through a feature value selection method. The windowing method is a commonly used way to simplify and effectively analyze high-dimensional signals such as speech or images: by reducing the dimensionality of a high-dimensional signal, data that does not reflect the essential characteristics of the signal can be removed. The feature value obtained through the windowing method is therefore data capable of reflecting the essential characteristics of the input signal. In this embodiment, because the feature value is extracted from a speech-form input signal and is used in the expression input method provided herein, it is called an expression feature value.
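As a toy illustration of such windowed dimensionality reduction (not the patent's actual algorithm), the sketch below splits a signal into fixed-size frames and reduces each frame to its average energy, turning a long sample sequence into a short feature vector:

```python
# Windowed dimensionality reduction, toy version: frame the signal and keep
# one value per frame (here, its mean energy). The resulting short vector
# still reflects the signal's essential shape (quiet vs. loud regions).
def windowed_energy(samples, frame_size):
    features = []
    for start in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[start:start + frame_size]
        features.append(sum(x * x for x in frame) / frame_size)
    return features

signal = [0, 0, 0, 0, 3, 3, 3, 3]      # quiet half, loud half
print(windowed_energy(signal, 4))      # -> [0.0, 9.0]
```

Eight samples collapse to two feature values, yet the quiet/loud structure of the signal survives, which is the sense in which the method "removes data that does not reflect the essential characteristics".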
In addition, the expression feature value may be extracted from the input signal through a feature value selection method: the electronic device presets at least one expression feature value and, after collecting the input signal, analyzes the input signal to find whether any preset expression feature value is present.
In this embodiment, assume that the speech-form input signal collected by the electronic device through the microphone is "of course, no problem, haha"; after analyzing this speech-form input signal, the electronic device extracts from it the speech-form expression feature value "haha".
In a second possible implementation, if the input signal includes the picture-form input signal, a face region is determined from the picture-form input signal, and an expression feature value in face form is extracted from the face region.
The electronic device may first determine the face region from the picture-form input signal through image recognition technology, and then extract the face-form expression feature value from the face region through the windowing method or the feature value selection method.
For example, after a picture of the user's face is taken through the camera, the face region in the picture is determined, and the face region is then analyzed to extract a face-form expression feature value such as "happy", "sad", "crying", or "furious".
In a third possible implementation, if the input signal includes the video-form input signal, an expression feature value in gesture-track form is extracted from the video-form input signal.
When the input signal is a video-form input signal collected by the electronic device over a period of time, such as a body movement or a gesture track of the user, the electronic device can extract a gesture-track-form expression feature value from the video-form input signal.
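The three implementations of step 205 amount to dispatching on the modality of the input signal. A hedged sketch is given below; the extractors are stand-ins for the real voice, face, and gesture-track analysis, and the payload shapes are invented for the example.

```python
# Dispatch on input-signal modality, as in step 205's three implementations.
# Each branch returns a (form, feature value) pair; the payloads are toy
# stand-ins, not real audio/image/video data.
def extract_feature_value(signal_form, payload):
    if signal_form == "speech":
        return ("speech", payload.lower())           # e.g. a spotted keyword
    if signal_form == "picture":
        face_region = payload["face_region"]         # from image recognition
        return ("face", face_region["expression"])
    if signal_form == "video":
        return ("gesture-track", tuple(payload))     # a sequence of positions
    raise ValueError("unsupported input signal form: %s" % signal_form)

print(extract_feature_value("speech", "HAHA"))  # -> ('speech', 'haha')
```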
Step 206, being chosen from feature database according to the expressive features value extracted needs the expression of input.
Due to the corresponding relation being stored with feature database between different expressive features values and different expressions, electronic equipment according to In the expressive features value extracted and feature database, the corresponding relation of storage chooses the expression of required input, the table that then will choose Feelings are inserted in input frame 24 treats that user sends or directly displays in chat hurdle 26.
Specifically, when the expressive features value extracted be the expressive features value of speech form, the expression of face form special During any one in the expressive features value of value indicative and attitude track form, this step can include several sub-steps as follows:
(1) expressive features extracted value is mated with the expressive features value of storage in feature database.
The expressive features extracted value is mated by electronic equipment with the expressive features value of storage in feature database.Due to spy The expressive features value levying storage in storehouse is specific expressive features value, and the expressive features value of such as speech form is specific by certain People's typing, the expressive features value that electronic equipment extracts has a certain degree of difference with the expressive features value of storage in feature database Different, therefore electronic equipment needs to be mated both, obtains matching degree.
(2) The n expressions corresponding to the m expressive feature values whose matching degree exceeds a predetermined threshold are taken as candidate expressions, n ≥ m ≥ 1.
The electronic device takes the n expressions corresponding to the m expressive feature values whose matching degree exceeds the predetermined threshold as candidate expressions, n ≥ m ≥ 1. One expressive feature value corresponds to at least one expression. The predetermined threshold can be set in advance according to the actual situation, for example at 80%.
In this embodiment, assume the candidate expressions obtained by the electronic device are: the three expressions a, b and c corresponding to an expressive feature value with a matching degree of 98%, and the expression d corresponding to another expressive feature value with a matching degree of 90%.
(3) At least one sorting criterion is chosen according to a preset priority, and the n candidate expressions are sorted by it.
The electronic device chooses at least one sorting criterion according to the preset priority and sorts the n candidate expressions. The sorting criteria include any of historical use count, most recent use time and matching degree. The priority order among the criteria can be preset according to the actual situation, for example, from high to low: matching degree, historical use count, most recent use time. When the electronic device cannot single out the expression to be input using the first sorting criterion, it applies the second criterion to continue filtering, and so on, until one candidate expression is filtered out as the expression to be input.
In this embodiment, the electronic device first sorts the four expressions a, b, c and d by matching degree, obtaining the order a, b, c, d, and finds that a, b and c all have a matching degree of 98%. It then sorts a, b and c by historical use count, obtaining b, a, c (assuming the sorting rule is from most used to least used, and that expression a has been used 15 times, expression b 20 times, and expression c 3 times). The electronic device now finds that expression b has the highest historical use count, and therefore selects expression b as the expression to be input.
(4) One candidate expression is filtered out according to the sorting result as the expression to be input.
The electronic device filters out one candidate expression according to the sorting result as the expression to be input. In the expression input method provided by this embodiment of the present invention, the electronic device automatically selects one candidate expression from the multiple candidates as the expression to be input, without requiring the user to choose or confirm, which simplifies the expression input flow and makes expression input more efficient and convenient.
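The threshold-then-rank selection of sub-steps (1) to (4) can be sketched as follows. The candidate data structure, the field names and the 0.8 threshold are illustrative assumptions, not the patent's implementation.

```python
# Sketch of sub-steps (1)-(4): threshold filtering, then cascaded tie-breaking.
# Field names and the 0.8 threshold are illustrative assumptions.

# Preset priority of sorting criteria, from high to low.
PRIORITY = ("matching_degree", "history_use_count", "recency")

def choose_expression(candidates, threshold=0.8):
    # (2) Keep only candidates whose matching degree exceeds the threshold.
    shortlist = [c for c in candidates if c["matching_degree"] > threshold]
    if not shortlist:
        return None  # no match: the device would prompt the user instead
    # (3) One multi-key descending sort reproduces applying the criteria one
    # by one: later keys only break ties left by higher-priority keys.
    shortlist.sort(key=lambda c: tuple(-c[k] for k in PRIORITY))
    # (4) The top-ranked candidate is the expression to be input.
    return shortlist[0]["name"]

candidates = [
    {"name": "a", "matching_degree": 0.98, "history_use_count": 15, "recency": 4},
    {"name": "b", "matching_degree": 0.98, "history_use_count": 20, "recency": 2},
    {"name": "c", "matching_degree": 0.98, "history_use_count": 3,  "recency": 3},
    {"name": "d", "matching_degree": 0.90, "history_use_count": 50, "recency": 9},
]
```

This mirrors the worked example above: d is ranked last by matching degree despite its high use count, and b wins the 98% three-way tie on historical use count.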
When the extracted expressive feature values include an expressive feature value in speech form and also include an expressive feature value in facial form or in gesture-track form, this step may include the following sub-steps:
(1) The extracted speech-form expressive feature value is matched against the first expressive feature values stored in a first feature library.
Unlike the selection procedure described above, here the electronic device determines the expression to be input by jointly analyzing expressive feature values of two forms, so that the selected expression is more accurate and better meets the user's needs.
The electronic device matches the extracted speech-form expressive feature value against the first expressive feature values stored in the first feature library, and likewise obtains the matching degree between them. In this embodiment, assume that the speech-form expressive feature value extracted by the electronic device is the sound of hearty laughter.
(2) a first expressive feature values whose matching degree exceeds a first threshold are obtained, a ≥ 1.
The electronic device obtains the a first expressive feature values whose matching degree exceeds the first threshold, a ≥ 1. In this embodiment, assume a = 1.
(3) The extracted facial-form or gesture-track-form expressive feature value is matched against the second expressive feature values stored in a second feature library.
The electronic device matches the extracted facial-form or gesture-track-form expressive feature value against the second expressive feature values stored in the second feature library. In this embodiment, assume that the facial-form expressive feature value extracted by the electronic device is a laughing facial expression.
(4) b second expressive feature values whose matching degree exceeds a second threshold are obtained, b ≥ 1.
The electronic device obtains the b second expressive feature values whose matching degree exceeds the second threshold, b ≥ 1. In this embodiment, assume b = 2.
(5) The x expressions corresponding to the a first expressive feature values and the y expressions corresponding to the b second expressive feature values are taken as candidate expressions, x ≥ a, y ≥ b.
The electronic device takes the x expressions corresponding to the a first expressive feature values and the y expressions corresponding to the b second expressive feature values as candidate expressions, x ≥ a, y ≥ b. In this embodiment, assume the candidates are: the three expressions "laugh", "smile" and "snicker" corresponding to the one first expressive feature value whose matching degree exceeds the first threshold; the "smile" expression corresponding to the first of the two second expressive feature values whose matching degree exceeds the second threshold; and the "pout" expression corresponding to the second of them.
(6) At least one sorting criterion is chosen according to the preset priority, and the candidate expressions are sorted by it.
The electronic device chooses at least one sorting criterion according to the preset priority and sorts the candidate expressions. The sorting criteria include any of repetition count, historical use count, most recent use time and matching degree. The priority order among the criteria can be preset according to the actual situation, for example, from high to low: repetition count, historical use count, most recent use time, matching degree. When the electronic device cannot single out the expression to be input using the first sorting criterion, it applies the second criterion to continue filtering, and so on, until one candidate expression is filtered out as the expression to be input.
In this embodiment, assume the "laugh", "smile", "snicker" and "pout" expressions are first sorted by repetition count; the "smile" expression is found to have the highest repetition count, so "smile" is directly selected as the expression to be input.
(7) One candidate expression is filtered out according to the sorting result as the expression to be input.
The electronic device filters out one candidate expression according to the sorting result as the expression to be input. In the expression input method provided by this embodiment of the present invention, the electronic device automatically selects one candidate expression from the multiple candidates as the expression to be input, without requiring the user to choose or confirm, which simplifies the expression input flow and makes expression input more efficient and convenient.
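The two-library flow of sub-steps (1) to (7) can be sketched with repetition count as the first sorting criterion, as in the worked example. The match structures and thresholds are hypothetical, and only the repetition-count criterion is shown.

```python
from collections import Counter

def choose_combined(voice_matches, face_matches, t1=0.8, t2=0.8):
    # (1)-(5): pool the expressions mapped from first-library (speech) feature
    # values above the first threshold and from second-library (face/gesture)
    # feature values above the second threshold.
    pool = []
    for degree, expressions in voice_matches:
        if degree > t1:
            pool.extend(expressions)
    for degree, expressions in face_matches:
        if degree > t2:
            pool.extend(expressions)
    # (6)-(7): rank by repetition count across the two modalities; an
    # expression suggested by both speech and face outranks the rest.
    return Counter(pool).most_common(1)[0][0]

voice_matches = [(0.95, ["laugh", "smile", "snicker"])]  # a = 1 feature value
face_matches = [(0.92, ["smile"]), (0.85, ["pout"])]     # b = 2 feature values
```

Here "smile" appears in both the speech-derived and face-derived candidates, so its repetition count of 2 makes it the selected expression without consulting the lower-priority criteria.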
In addition, after the electronic device matches the extracted expressive feature value against the feature values stored in the feature library, if no feature value with a matching degree above the threshold is found, the user can be prompted that no matching result was found, for example by a pop-up notification.
Step 207: the expression to be input is displayed directly in the input box or the chat panel.
After selecting the expression to be input from the feature library, the electronic device displays it directly in the input box or the chat panel. With reference to Fig. 2b, the electronic device may insert the selected expression into the input box 24 for the user to send, or display it directly in the chat panel 26.
It should be noted that the expression input method provided by this embodiment can also select expressions in combination with the environment the electronic device is in. Specifically, before step 206 above, the following steps may also be included:
(1) Environmental information around the electronic device is collected.
The electronic device collects surrounding environmental information, which includes at least one of time information, ambient volume information, ambient light intensity information and ambient image information. Ambient volume information can be collected by the microphone, ambient light intensity information by a light sensor, and ambient image information by the camera.
(2) The current use environment is determined according to the environmental information.
The electronic device determines the current use environment according to the environmental information. After collecting the surrounding environmental information, the electronic device analyzes the pieces of information jointly to determine the current use environment. For example, when the time is 22:00, the ambient volume is 2 decibels and the ambient light is very weak, it may be determined that the current use environment is the user sleeping. As another example, when the time is 14:00, the ambient volume is 75 decibels, the ambient light is relatively strong and the ambient image shows a street, it may be determined that the current use environment is the user out shopping.
(3) The alternative feature library corresponding to the current use environment is chosen from at least one alternative feature library as the feature library.
The electronic device prestores correspondences between different use environments and different alternative feature libraries. After obtaining the current use environment, it chooses the corresponding alternative feature library as the feature library, and then selects the expression to be input from that feature library according to the extracted expressive feature value.
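The environment-aware library selection can be sketched as below. The concrete thresholds, environment names and library names are illustrative assumptions; the patent leaves these correspondences to the implementer.

```python
from datetime import time

# Illustrative environment-to-library correspondences.
LIBRARY_FOR_ENVIRONMENT = {
    "sleeping": "nighttime_library",
    "shopping": "outdoor_library",
    "default": "base_library",
}

def infer_environment(clock, volume_db, light):
    # Jointly analyze the collected environmental information; the rules and
    # thresholds here are hypothetical examples.
    if clock >= time(22, 0) and volume_db < 10 and light == "weak":
        return "sleeping"
    if volume_db > 60 and light == "strong":
        return "shopping"
    return "default"

def pick_feature_library(clock, volume_db, light):
    env = infer_environment(clock, volume_db, light)
    return LIBRARY_FOR_ENVIRONMENT[env]
```

For the 22:00 / 2 dB / weak-light example above, the nighttime library would be used for the subsequent matching in step 206.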
It should also be noted that the correspondences between different expressive feature values and different expressions stored in the feature library may be preset by the system or by a designer. For example, when the user installs an expression pack, the pack already carries a feature library: after finishing the expressions, the designer also sets the correspondences between the different expressive feature values and the different expressions, creates the feature library, and packages the expressions and the feature library together into the expression pack. Alternatively, the correspondences stored in the feature library can be set by the user. When they are set by the user, the expression input method provided by this embodiment further includes the following steps:
First, for each expression, at least one training signal for training that expression is recorded.
For each expression, the electronic device records at least one training signal used to train it. The user can train expressions, thereby customizing the correspondences between different expressive feature values and different expressions. For example, the user picks four commonly used expressions from the expression selection interface: expression a, expression b, expression c and expression d. Taking the training of expression a as an example, the user selects expression a and says "snicker" three times, and the electronic device records these three training signals.
Naturally, the electronic device still collects and records the training signals through an input unit such as the microphone or the camera.
Second, at least one training feature value is extracted from the at least one training signal.
The electronic device extracts at least one training feature value from the at least one training signal. As in step 205 above, the electronic device can extract training feature values from the training signals by data mining or feature selection methods. A training signal may be in speech form, in image form, or in video form.
Third, the most frequently occurring training feature value is taken as the expressive feature value corresponding to the expression.
The electronic device takes the most frequently occurring training feature value as the expressive feature value corresponding to the expression. When the training signals recorded by the electronic device are identical, the training feature values extracted from them are generally identical as well. For example, when the three recorded training signals are all the user saying "snicker", the three extracted training feature values are usually all "snicker".
However, when the electronic device collects training signals through an input unit such as the microphone or the camera, interference from the surroundings may be present, such as noise or image interference, and the training feature values extracted from the signals may then differ. The electronic device therefore takes the most frequently occurring training feature value as the expressive feature value corresponding to the expression. For example, when the three recorded training signals are the user saying "snicker", and two of the three extracted training feature values are "snicker" while the third is different, the electronic device chooses "snicker" as the expressive feature value corresponding to expression a.
Fourth, the correspondence between the expression and the expressive feature value is stored in the feature library.
The electronic device stores the correspondence between the expression and the expressive feature value in the feature library. In practice, the trained correspondence can be stored in the original feature library, or the user can create a custom feature library and store the trained correspondence there.
Through the above four steps, the correspondences between expressions and expressive feature values can be set by the user, further improving the user experience.
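The four training steps reduce to a majority vote over the extracted training feature values. A minimal sketch, assuming feature values are already extracted as strings and the feature library is a plain mapping:

```python
from collections import Counter

def train_expression(expression_id, training_feature_values, feature_db):
    # The most frequently occurring training feature value becomes the
    # feature value bound to this expression; this tolerates an occasional
    # noisy recording among the repetitions.
    value, _count = Counter(training_feature_values).most_common(1)[0]
    feature_db[value] = expression_id
    return value

feature_db = {}
# Three recordings of "snicker"; one was corrupted by background noise.
train_expression("expression_a", ["snicker", "snicker", "snick-"], feature_db)
```

After training, looking up the clean feature value "snicker" in `feature_db` yields expression a, matching the example in the text.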
It should also be noted that, to determine when the user needs to input an expression with the method provided by this embodiment, a step of detecting whether the cursor is located in the input box can be performed before step 201. The cursor indicates the position where the user inputs content such as text, expressions or pictures. With reference to Fig. 2b, the cursor 28 is located in the input box 24. The electronic device detects from the position of the cursor 28 whether the user is currently using the input box 24 to input content such as text, expressions or pictures. When the cursor 28 is located in the input box 24, the user is presumed to be using the input box 24, and step 201 above is then executed.
In summary, in the expression input method provided by this embodiment, an input signal is collected through an input unit on the electronic device, an expressive feature value is extracted from the input signal, and the expression to be input is selected from a feature library according to the extracted expressive feature value, the feature library storing correspondences between different expressive feature values and different expressions. This solves the problems of slow and complicated expression input in the prior art, simplifying the expression input process and improving the speed of expression input.
In addition, the speech-form input signal is collected by the microphone, or the image-form or video-form input signal by the camera, and expression input is then performed, enriching the ways of inputting expressions; and the user can personally set the correspondences between different expressive feature values and different expressions, fully meeting the user's needs.
Furthermore, the above embodiment provides two ways of selecting the expression to be input. The first determines the expression to be input by analyzing an expressive feature value of a single form, which is relatively simple and fast; the second determines it by jointly analyzing expressive feature values of two forms, so that the selected expression is more accurate and fully meets the user's needs.
In a specific example, Xiao Ming opens application software with a messaging function installed on a smart television, and at the same time turns on the smart television's front camera to capture pictures of his face. Xiao Ming's mouth corners rise slightly into a smile. The smart television extracts an expressive feature value from the captured picture of the facial region, finds the correspondence between the expressive feature value and an expression in the feature library, and inserts a smiling expression into the input box of the chat interface. Afterwards, when Xiao Ming shows a sad face, the smart television inserts a sad expression into the input box of the chat interface.
In another specific example, Xiao Hong uses an instant messaging program installed on her mobile phone and trains expressions, personally setting several correspondences between expressive feature values and expressions. Afterwards, during a chat, when the phone receives a speech-form input signal saying "I'm so happy today", it inserts the corresponding expression into the input box of the chat interface according to the correspondence between the expressive feature value "happy" and that expression; when the phone receives a speech-form input signal saying "it's snowing outside", it inserts the corresponding expression according to the correspondence between the expressive feature value "snowing" and that expression; and when the phone receives a speech-form input signal saying "the snow is beautiful, I really like it", it inserts the corresponding expression according to the correspondence between the expressive feature value "like" and that expression.
The following are apparatus embodiments of the present invention, which can be used to carry out the method embodiments of the present invention. For details not disclosed in the apparatus embodiments, please refer to the method embodiments of the present invention.
Referring to Fig. 3, which shows a block diagram of an expression input apparatus provided by an embodiment of the present invention, the expression input apparatus is used in an electronic device. The expression input apparatus can be implemented, by software, hardware or a combination of both, as part or all of the electronic device, and includes: a signal collection module 310, a feature extraction module 320 and an expression selection module 330.
The signal collection module 310 is configured to collect an input signal through an input unit on the electronic device.
The feature extraction module 320 is configured to extract an expressive feature value from the input signal.
The expression selection module 330 is configured to select the expression to be input from a feature library according to the extracted expressive feature value, the feature library storing correspondences between different expressive feature values and different expressions.
In summary, in the expression input apparatus provided by this embodiment, an input signal is collected through an input unit on the electronic device, an expressive feature value is extracted from the input signal, and the expression to be input is selected from a feature library according to the extracted expressive feature value, the feature library storing correspondences between different expressive feature values and different expressions. This solves the problems of slow and complicated expression input in the prior art, simplifying the expression input process and improving the speed of expression input.
Referring to Fig. 4, which shows a block diagram of an expression input apparatus provided by another embodiment of the present invention, the expression input apparatus is used in an electronic device. The expression input apparatus can be implemented, by software, hardware or a combination of both, as part or all of the electronic device, and includes: a signal collection module 310, a feature extraction module 320, an information collection module 321, an environment determination module 322, a feature selection module 323, an expression selection module 330 and an expression display module 331.
The signal collection module 310 is configured to collect an input signal through an input unit on the electronic device.
Specifically, the signal collection module 310 includes a voice collection unit 310a and/or an image collection unit 310b.
The voice collection unit 310a is configured to, if the input signal includes a speech-form input signal, collect the speech-form input signal through a microphone.
The image collection unit 310b is configured to, if the input signal includes an image-form input signal or a video-form input signal, collect the image-form or video-form input signal through a camera.
The feature extraction module 320 is configured to extract an expressive feature value from the input signal.
Specifically, the feature extraction module 320 includes a first extraction unit 320a, and/or a second extraction unit 320b, and/or a third extraction unit 320c.
The first extraction unit 320a is configured to, if the input signal includes a speech-form input signal, extract a speech-form expressive feature value from the speech-form input signal.
The second extraction unit 320b is configured to, if the input signal includes an image-form input signal, determine a facial region in the image-form input signal and extract a facial-form expressive feature value from the facial region.
The third extraction unit 320c is configured to, if the input signal includes a video-form input signal, extract a gesture-track-form expressive feature value from the video-form input signal.
Optionally, the expression input apparatus further includes an information collection module 321, an environment determination module 322 and a feature selection module 323.
The information collection module 321 is configured to collect environmental information around the electronic device, the environmental information including at least one of time information, ambient volume information, ambient light intensity information and ambient image information.
The environment determination module 322 is configured to determine the current use environment according to the environmental information.
The feature selection module 323 is configured to choose, from at least one alternative feature library, the alternative feature library corresponding to the current use environment as the feature library.
The expression selection module 330 is configured to select the expression to be input from the feature library according to the extracted expressive feature value, the feature library storing correspondences between different expressive feature values and different expressions.
When the extracted expressive feature value is any one of a speech-form expressive feature value, a facial-form expressive feature value and a gesture-track-form expressive feature value, the expression selection module 330 includes: a feature matching unit 330a, a candidate selection unit 330b, an expression sorting unit 330c and an expression determination unit 330d.
The feature matching unit 330a is configured to match the extracted expressive feature value against the expressive feature values stored in the feature library.
The candidate selection unit 330b is configured to take the n expressions corresponding to the m expressive feature values whose matching degree exceeds a predetermined threshold as candidate expressions, n ≥ m ≥ 1.
The expression sorting unit 330c is configured to choose at least one sorting criterion according to a preset priority and sort the n candidate expressions, the sorting criteria including any of historical use count, most recent use time and matching degree.
The expression determination unit 330d is configured to filter out one candidate expression according to the sorting result as the expression to be input.
When the extracted expressive feature values include a speech-form expressive feature value and also include a facial-form or gesture-track-form expressive feature value, the expression selection module 330 includes: a first matching unit 330e, a first acquisition unit 330f, a second matching unit 330g, a second acquisition unit 330h, a candidate determination unit 330i, a candidate sorting unit 330j and an expression selection unit 330k.
The first matching unit 330e is configured to match the extracted speech-form expressive feature value against the first expressive feature values stored in a first feature library.
The first acquisition unit 330f is configured to obtain the a first expressive feature values whose matching degree exceeds a first threshold, a ≥ 1.
The second matching unit 330g is configured to match the extracted facial-form or gesture-track-form expressive feature value against the second expressive feature values stored in a second feature library.
The second acquisition unit 330h is configured to obtain the b second expressive feature values whose matching degree exceeds a second threshold, b ≥ 1.
The candidate determination unit 330i is configured to take the x expressions corresponding to the a first expressive feature values and the y expressions corresponding to the b second expressive feature values as candidate expressions, x ≥ a, y ≥ b.
The candidate sorting unit 330j is configured to choose at least one sorting criterion according to the preset priority and sort the candidate expressions, the sorting criteria including any of repetition count, historical use count, most recent use time and matching degree.
The expression selection unit 330k is configured to filter out one candidate expression according to the sorting result as the expression to be input.
The feature library includes the first feature library and the second feature library, and the expressive feature values include the first expressive feature values and the second expressive feature values.
The expression display module 331 is configured to display the expression to be input directly in the input box or the chat panel.
Optionally, the expression input apparatus further includes: a signal recording module, a feature recording module, a feature selection module and a feature storage module.
The signal recording module is configured to, for each expression, record at least one training signal for training that expression.
The feature recording module is configured to extract at least one training feature value from the at least one training signal.
The feature selection module is configured to take the most frequently occurring training feature value as the expressive feature value corresponding to the expression.
The feature storage module is configured to store the correspondence between the expression and the expressive feature value in the feature library.
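The cooperation of the core modules can be sketched as a simple pipeline. The callable wiring and stand-in implementations below are assumptions for illustration; the patent only specifies the division into modules, not their interfaces.

```python
class ExpressionInputDevice:
    # Each module is injected as a callable, mirroring the division into a
    # signal collection module (310), feature extraction module (320),
    # expression selection module (330) and expression display module (331).
    def __init__(self, collect_signal, extract_feature, select_expression, display):
        self.collect_signal = collect_signal
        self.extract_feature = extract_feature
        self.select_expression = select_expression
        self.display = display

    def input_expression(self):
        signal = self.collect_signal()                # module 310
        feature = self.extract_feature(signal)        # module 320
        expression = self.select_expression(feature)  # module 330
        self.display(expression)                      # module 331
        return expression

device = ExpressionInputDevice(
    collect_signal=lambda: "haha",           # stand-in microphone input
    extract_feature=lambda s: "happy",       # stand-in feature extractor
    select_expression={"happy": ":-D"}.get,  # stand-in feature library lookup
    display=lambda e: None,                  # stand-in input-box display
)
```

Because the modules are plain callables, the same pipeline accommodates any of the input forms (speech, image, video) by swapping the collection and extraction stand-ins.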
In summary, in the expression input apparatus provided by this embodiment, an input signal is collected through an input unit on the electronic device, an expressive feature value is extracted from the input signal, and the expression to be input is selected from a feature library according to the extracted expressive feature value, the feature library storing correspondences between different expressive feature values and different expressions. This solves the problems of slow and complicated expression input in the prior art, simplifying the expression input process and improving the speed of expression input. In addition, the speech-form input signal is collected by the microphone, or the image-form or video-form input signal by the camera, and expression input is then performed, enriching the ways of inputting expressions; and the user can personally set the correspondences between different expressive feature values and different expressions, fully meeting the user's needs.
It should be understood that, when the expression input apparatus provided by the above embodiment inputs an expression, the division into the above functional modules is merely illustrative. In practical applications, the above functions can be assigned to different functional modules as needed; that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above. In addition, the expression input apparatus provided by the above embodiment and the method embodiments of the expression input method belong to the same conception; for its specific implementation, refer to the method embodiments, which will not be repeated here.
It should be understood that, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
The serial numbers of the embodiments of the present invention are for description only and do not imply any ranking of the embodiments.
One of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent substitution, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (12)

1. An expression input method, characterized in that the method is applied to an electronic device and comprises:
collecting an input signal through an input unit on the electronic device;
extracting an expressive feature value from the input signal;
selecting an expression to be input from a feature database according to the extracted expressive feature value, the feature database storing correspondences between different expressive feature values and different expressions;
wherein, when the extracted expressive feature value is any one of a speech-form expressive feature value, a face-form expressive feature value, and a gesture-track-form expressive feature value, selecting the expression to be input from the feature database according to the extracted expressive feature value comprises: matching the extracted expressive feature value against the expressive feature values stored in the feature database; taking the n expressions corresponding to the m expressive feature values whose matching degree exceeds a predetermined threshold as candidate expressions, n ≥ m ≥ 1; sorting the n candidate expressions according to at least one sorting criterion chosen by preset priority; and selecting one candidate expression as the expression to be input according to the sorting result;
or, when the extracted expressive feature value includes a speech-form expressive feature value and also includes a face-form expressive feature value or a gesture-track-form expressive feature value, selecting the expression to be input from the feature database according to the extracted expressive feature value comprises: matching the extracted speech-form expressive feature value against first expressive feature values stored in a first feature database; obtaining the a first expressive feature values whose matching degree exceeds a first threshold, a ≥ 1; matching the extracted face-form or gesture-track-form expressive feature value against second expressive feature values stored in a second feature database; obtaining the b second expressive feature values whose matching degree exceeds a second threshold, b ≥ 1; taking the x expressions corresponding to the a first expressive feature values and the y expressions corresponding to the b second expressive feature values as candidate expressions, x ≥ a, y ≥ b; sorting the candidate expressions according to at least one sorting criterion chosen by preset priority; and selecting one candidate expression as the expression to be input according to the sorting result; wherein the feature database includes the first feature database and the second feature database, and the expressive feature values include the first expressive feature values and the second expressive feature values.
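As an illustrative note (not part of the claims), the two-database branch of claim 1 — speech-form features matched against a first database, face- or gesture-track-form features against a second, with the surviving candidates pooled and sorted — might look like the following sketch. The distance-based similarity measure, the thresholds, and the toy databases are all assumptions.

```python
from math import dist  # Python 3.8+

def similarity(a, b):
    """Toy matching degree in (0, 1]: closer feature vectors score higher."""
    return 1.0 / (1.0 + dist(a, b))

def match_two_databases(speech_feature, second_feature, first_db, second_db,
                        first_threshold=0.8, second_threshold=0.8):
    """Pool candidate expressions from the speech-feature database and the
    face/gesture-track-feature database, then sort the merged pool."""
    candidates = []
    for stored, expressions in first_db:      # first feature database (speech form)
        degree = similarity(speech_feature, stored)
        if degree > first_threshold:
            candidates += [(degree, e) for e in expressions]
    for stored, expressions in second_db:     # second feature database (face / gesture track)
        degree = similarity(second_feature, stored)
        if degree > second_threshold:
            candidates += [(degree, e) for e in expressions]
    candidates.sort(key=lambda c: -c[0])      # one possible sorting criterion
    return candidates[0][1] if candidates else None

first_db = [((0.0, 0.0), ["happy"])]
second_db = [((1.0, 1.0), ["wink"])]
```

A candidate surviving either database can win; which one does is decided by the sorting criteria, which the claim deliberately leaves open.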
2. The method according to claim 1, characterized in that extracting the expressive feature value from the input signal comprises:
if the input signal includes a speech-form input signal, extracting the speech-form expressive feature value from the speech-form input signal;
if the input signal includes a picture-form input signal, determining a face region in the picture-form input signal and extracting the face-form expressive feature value from the face region;
if the input signal includes a video-form input signal, extracting the gesture-track-form expressive feature value from the video-form input signal.
3. The method according to claim 1, characterized in that, before selecting the expression to be input from the feature database according to the extracted expressive feature value, the method further comprises:
collecting environmental information around the electronic device, the environmental information including at least one of time information, ambient volume information, ambient light intensity information, and ambient image information;
determining a current usage environment according to the environmental information;
selecting, from at least one candidate feature database, the candidate feature database corresponding to the current usage environment as the feature database.
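As an illustrative note (not part of the claims), the environment-dependent database selection of claim 3 could be sketched as below. The environment labels, the cut-off values, and the candidate databases are invented for the example; the patent only requires that some mapping from environmental readings to a candidate feature database exists.

```python
def determine_environment(hour, ambient_volume_db, light_lux):
    """Map raw readings (time, ambient volume, ambient light) to a usage
    environment; the labels and cut-offs here are invented for illustration."""
    if hour >= 22 or hour < 7:
        return "night"
    if ambient_volume_db > 70:
        return "noisy"
    if light_lux < 50:
        return "dim"
    return "default"

def select_feature_db(environment, candidate_dbs):
    """Choose the candidate feature database matching the current usage
    environment, falling back to a default database."""
    return candidate_dbs.get(environment, candidate_dbs["default"])

candidate_dbs = {
    "night":   ["yawn", "sleepy"],
    "noisy":   ["shout"],
    "default": ["smile", "laugh"],
}
```

Unlisted environments (e.g. "dim" above) fall back to the default database, so selection never fails even when no specialized database exists.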
4. The method according to claim 2, characterized in that collecting the input signal through the input unit on the electronic device comprises:
if the input signal includes the speech-form input signal, collecting the speech-form input signal through a microphone;
if the input signal includes the picture-form input signal or the video-form input signal, collecting the picture-form input signal or the video-form input signal through a camera.
5. The method according to claim 1, characterized in that, before selecting the expression to be input from the feature database according to the extracted expressive feature value, the method further comprises:
for each expression, recording at least one training signal for training the expression;
extracting at least one training feature value from the at least one training signal;
taking the training feature value that occurs most frequently across the training iterations as the expressive feature value corresponding to the expression;
storing the correspondence between the expression and the expressive feature value in the feature database.
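As an illustrative note (not part of the claims), the training procedure of claim 5 — record several signals per expression, extract a feature value from each, keep the most frequent value — could be sketched as follows. The toy extractor (quantized mean amplitude) and all names are assumptions; a real feature extractor would be far richer.

```python
from collections import Counter

def toy_extract(signal):
    """Stand-in feature extractor: quantize the signal's mean amplitude so
    that repeated recordings of the same expression collide on one value."""
    return round(sum(signal) / len(signal), 1)

def train_expression(expression, training_signals, extract_feature, feature_db):
    """Extract one feature value per recorded training signal, keep the most
    frequently occurring value, and store its mapping to the expression."""
    values = [extract_feature(s) for s in training_signals]
    best_value, _count = Counter(values).most_common(1)[0]
    feature_db[best_value] = expression
    return best_value

db = {}
train_expression("smile", [(0.5, 0.52), (0.49, 0.5), (0.9, 0.7)], toy_extract, db)
```

Keeping only the modal feature value makes the stored correspondence robust to an occasional outlier recording, at the cost of discarding the minority values.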
6. The method according to any one of claims 1 to 5, characterized in that, after selecting the expression to be input from the feature database according to the extracted expressive feature value, the method further comprises:
directly displaying the expression to be input in an input box or a chat panel.
7. An expression input device, characterized in that the device is applied to an electronic device and comprises:
a signal collection module, configured to collect an input signal through an input unit on the electronic device;
a feature extraction module, configured to extract an expressive feature value from the input signal;
an expression selection module, configured to select an expression to be input from a feature database according to the extracted expressive feature value, the feature database storing correspondences between different expressive feature values and different expressions;
wherein, when the extracted expressive feature value is any one of a speech-form expressive feature value, a face-form expressive feature value, and a gesture-track-form expressive feature value, the expression selection module comprises: a feature matching unit, a candidate selection unit, an expression sorting unit, and an expression determining unit; the feature matching unit is configured to match the extracted expressive feature value against the expressive feature values stored in the feature database; the candidate selection unit is configured to take the n expressions corresponding to the m expressive feature values whose matching degree exceeds a predetermined threshold as candidate expressions, n ≥ m ≥ 1; the expression sorting unit is configured to sort the n candidate expressions according to at least one sorting criterion chosen by preset priority; and the expression determining unit is configured to select one candidate expression as the expression to be input according to the sorting result;
or, when the extracted expressive feature value includes a speech-form expressive feature value and also includes a face-form expressive feature value or a gesture-track-form expressive feature value, the expression selection module comprises: a first matching unit, a first obtaining unit, a second matching unit, a second obtaining unit, a candidate determining unit, a candidate sorting unit, and an expression selection unit; the first matching unit is configured to match the extracted speech-form expressive feature value against first expressive feature values stored in a first feature database; the first obtaining unit is configured to obtain the a first expressive feature values whose matching degree exceeds a first threshold, a ≥ 1; the second matching unit is configured to match the extracted face-form or gesture-track-form expressive feature value against second expressive feature values stored in a second feature database; the second obtaining unit is configured to obtain the b second expressive feature values whose matching degree exceeds a second threshold, b ≥ 1; the candidate determining unit is configured to take the x expressions corresponding to the a first expressive feature values and the y expressions corresponding to the b second expressive feature values as candidate expressions, x ≥ a, y ≥ b; the candidate sorting unit is configured to sort the candidate expressions according to at least one sorting criterion chosen by preset priority; and the expression selection unit is configured to select one candidate expression as the expression to be input according to the sorting result; wherein the feature database includes the first feature database and the second feature database, and the expressive feature values include the first expressive feature values and the second expressive feature values.
8. The device according to claim 7, characterized in that the feature extraction module comprises: a first extraction unit, and/or a second extraction unit, and/or a third extraction unit;
the first extraction unit is configured to, if the input signal includes a speech-form input signal, extract the speech-form expressive feature value from the speech-form input signal;
the second extraction unit is configured to, if the input signal includes a picture-form input signal, determine a face region in the picture-form input signal and extract the face-form expressive feature value from the face region;
the third extraction unit is configured to, if the input signal includes a video-form input signal, extract the gesture-track-form expressive feature value from the video-form input signal.
9. The device according to claim 7, characterized in that the device further comprises:
an information collection module, configured to collect environmental information around the electronic device, the environmental information including at least one of time information, ambient volume information, ambient light intensity information, and ambient image information;
an environment determining module, configured to determine a current usage environment according to the environmental information;
a feature selection module, configured to select, from at least one candidate feature database, the candidate feature database corresponding to the current usage environment as the feature database.
10. The device according to claim 8, characterized in that the signal collection module comprises: a voice collection unit, and/or an image collection unit;
the voice collection unit is configured to, if the input signal includes the speech-form input signal, collect the speech-form input signal through a microphone;
the image collection unit is configured to, if the input signal includes the picture-form input signal or the video-form input signal, collect the picture-form input signal or the video-form input signal through a camera.
11. The device according to claim 7, characterized in that the device further comprises:
a signal recording module, configured to, for each expression, record at least one training signal for training the expression;
a feature recording module, configured to extract at least one training feature value from the at least one training signal;
a feature selecting module, configured to take the training feature value that occurs most frequently across the training iterations as the expressive feature value corresponding to the expression;
a feature storage module, configured to store the correspondence between the expression and the expressive feature value in the feature database.
12. The device according to any one of claims 7 to 11, characterized in that the device further comprises:
an expression display module, configured to directly display the expression to be input in an input box or a chat panel.
CN201410069166.9A 2014-02-27 2014-02-27 expression input method and device Active CN103823561B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201410069166.9A CN103823561B (en) 2014-02-27 2014-02-27 expression input method and device
PCT/CN2014/095872 WO2015127825A1 (en) 2014-02-27 2014-12-31 Expression input method and apparatus and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410069166.9A CN103823561B (en) 2014-02-27 2014-02-27 expression input method and device

Publications (2)

Publication Number Publication Date
CN103823561A CN103823561A (en) 2014-05-28
CN103823561B true CN103823561B (en) 2017-01-18

Family

ID=50758662

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410069166.9A Active CN103823561B (en) 2014-02-27 2014-02-27 expression input method and device

Country Status (2)

Country Link
CN (1) CN103823561B (en)
WO (1) WO2015127825A1 (en)

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103823561B (en) * 2014-02-27 2017-01-18 广州华多网络科技有限公司 expression input method and device
US10387717B2 (en) 2014-07-02 2019-08-20 Huawei Technologies Co., Ltd. Information transmission method and transmission apparatus
CN106789543A (en) * 2015-11-20 2017-05-31 腾讯科技(深圳)有限公司 The method and apparatus that facial expression image sends are realized in session
CN106886396B (en) * 2015-12-16 2020-07-07 北京奇虎科技有限公司 Expression management method and device
CN105677059A (en) * 2015-12-31 2016-06-15 广东小天才科技有限公司 Method and system for inputting expression pictures
WO2017120924A1 (en) * 2016-01-15 2017-07-20 李强生 Information prompting method for use when inserting emoticon, and instant communication tool
CN105872838A (en) * 2016-04-28 2016-08-17 徐文波 Sending method and device of special media effects of real-time videos
CN106020504B (en) * 2016-05-17 2018-11-27 百度在线网络技术(北京)有限公司 Information output method and device
CN107623830B (en) * 2016-07-15 2019-03-15 掌赢信息科技(上海)有限公司 A kind of video call method and electronic equipment
CN106175727B (en) * 2016-07-25 2018-11-20 广东小天才科技有限公司 A kind of expression method for pushing and wearable device applied to wearable device
CN106293120B (en) * 2016-07-29 2020-06-23 维沃移动通信有限公司 Expression input method and mobile terminal
WO2018023576A1 (en) * 2016-08-04 2018-02-08 薄冰 Method for adjusting emoji sending technique according to market feedback, and emoji system
CN106339103A (en) * 2016-08-15 2017-01-18 珠海市魅族科技有限公司 Image checking method and device
CN106293131A (en) * 2016-08-16 2017-01-04 广东小天才科技有限公司 expression input method and device
CN106503630A (en) * 2016-10-08 2017-03-15 广东小天才科技有限公司 A kind of expression sending method, equipment and system
CN106503744A (en) * 2016-10-26 2017-03-15 长沙军鸽软件有限公司 Input expression in chat process carries out the method and device of automatic error-correcting
CN106682091A (en) * 2016-11-29 2017-05-17 深圳市元征科技股份有限公司 Method and device for controlling unmanned aerial vehicle
CN107315820A (en) * 2017-07-01 2017-11-03 北京奇虎科技有限公司 The expression searching method and device of User Interface based on mobile terminal
CN107153496B (en) 2017-07-04 2020-04-28 北京百度网讯科技有限公司 Method and device for inputting emoticons
CN109254669B (en) * 2017-07-12 2022-05-10 腾讯科技(深圳)有限公司 Expression picture input method and device, electronic equipment and system
CN110019885B (en) * 2017-08-01 2021-10-15 北京搜狗科技发展有限公司 Expression data recommendation method and device
CN107450746A (en) * 2017-08-18 2017-12-08 联想(北京)有限公司 A kind of insertion method of emoticon, device and electronic equipment
CN107479723B (en) * 2017-08-18 2021-01-15 联想(北京)有限公司 Emotion symbol inserting method and device and electronic equipment
CN109165072A (en) * 2018-08-28 2019-01-08 珠海格力电器股份有限公司 A kind of expression packet generation method and device
CN109412935B (en) * 2018-10-12 2021-12-07 北京达佳互联信息技术有限公司 Instant messaging sending method, receiving method, sending device and receiving device
CN114173258B (en) * 2022-02-07 2022-05-10 深圳市朗琴音响技术有限公司 Intelligent sound box control method and intelligent sound box

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1735240A (en) * 2004-10-29 2006-02-15 康佳集团股份有限公司 Method for realizing expression notation and voice in handset short message
CN101183294A (en) * 2007-12-17 2008-05-21 腾讯科技(深圳)有限公司 Expression input method and apparatus
CN102104658A (en) * 2009-12-22 2011-06-22 康佳集团股份有限公司 Method, system and mobile terminal for sending expression by using short messaging service (SMS)
CN103353824A (en) * 2013-06-17 2013-10-16 百度在线网络技术(北京)有限公司 Method for inputting character strings through voice, device and terminal equipment
CN103529946A (en) * 2013-10-29 2014-01-22 广东欧珀移动通信有限公司 Input method and device
CN103530313A (en) * 2013-07-08 2014-01-22 北京百纳威尔科技有限公司 Searching method and device of application information

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102255820B (en) * 2010-05-18 2016-08-03 腾讯科技(深圳)有限公司 Instant communication method and device
CN102890776B (en) * 2011-07-21 2017-08-04 爱国者电子科技有限公司 The method that expression figure explanation is transferred by facial expression
CN102662961B (en) * 2012-03-08 2015-04-08 北京百舜华年文化传播有限公司 Method, apparatus and terminal unit for matching semantics with image
CN103823561B (en) * 2014-02-27 2017-01-18 广州华多网络科技有限公司 expression input method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1735240A (en) * 2004-10-29 2006-02-15 康佳集团股份有限公司 Method for realizing expression notation and voice in handset short message
CN101183294A (en) * 2007-12-17 2008-05-21 腾讯科技(深圳)有限公司 Expression input method and apparatus
CN102104658A (en) * 2009-12-22 2011-06-22 康佳集团股份有限公司 Method, system and mobile terminal for sending expression by using short messaging service (SMS)
CN103353824A (en) * 2013-06-17 2013-10-16 百度在线网络技术(北京)有限公司 Method for inputting character strings through voice, device and terminal equipment
CN103530313A (en) * 2013-07-08 2014-01-22 北京百纳威尔科技有限公司 Searching method and device of application information
CN103529946A (en) * 2013-10-29 2014-01-22 广东欧珀移动通信有限公司 Input method and device

Also Published As

Publication number Publication date
WO2015127825A1 (en) 2015-09-03
CN103823561A (en) 2014-05-28

Similar Documents

Publication Publication Date Title
CN103823561B (en) expression input method and device
Kazakos et al. Epic-fusion: Audio-visual temporal binding for egocentric action recognition
Aran et al. Broadcasting oneself: Visual discovery of vlogging styles
CN110519636B (en) Voice information playing method and device, computer equipment and storage medium
CN109800744A (en) Image clustering method and device, electronic equipment and storage medium
CN110266879A (en) Broadcast interface display methods, device, terminal and storage medium
CN106789543A (en) The method and apparatus that facial expression image sends are realized in session
CN110147467A (en) A kind of generation method, device, mobile terminal and the storage medium of text description
CN106250553A (en) A kind of service recommendation method and terminal
CN106528859A (en) Data pushing system and method
CN108227950A (en) A kind of input method and device
CN105635519B (en) Method for processing video frequency, apparatus and system
KR101934280B1 (en) Apparatus and method for analyzing speech meaning
CN105868686A (en) Video classification method and apparatus
CN102905233A (en) Method and device for recommending terminal function
CN110263220A (en) A kind of video highlight segment recognition methods and device
CN108038243A (en) Music recommends method, apparatus, storage medium and electronic equipment
CN111444357A (en) Content information determination method and device, computer equipment and storage medium
CN109877834A (en) Multihead display robot, method and apparatus, display robot and display methods
Peixoto et al. Harnessing high-level concepts, visual, and auditory features for violence detection in videos
CN115114395A (en) Content retrieval and model training method and device, electronic equipment and storage medium
CN109697676A (en) Customer analysis and application method and device based on social group
CN103984415B (en) A kind of information processing method and electronic equipment
CN113450804A (en) Voice visualization method and device, projection equipment and computer readable storage medium
Galvan et al. Audiovisual affect recognition in spontaneous filipino laughter

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 511446 Guangzhou City, Guangdong Province, Panyu District, South Village, Huambo Business District Wanda Plaza, block B1, floor 28

Applicant after: Guangzhou Huaduo Network Technology Co., Ltd.

Address before: 510655, Guangzhou, Whampoa Avenue, No. 2, creative industrial park, building 3-08,

Applicant before: Guangzhou Huaduo Network Technology Co., Ltd.

COR Change of bibliographic data
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20210111

Address after: 511442 3108, 79 Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Patentee after: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 511446 28th floor, block B1, Wanda Plaza, Wanbo business district, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Patentee before: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

EE01 Entry into force of recordation of patent licensing contract
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20140528

Assignee: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

Assignor: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Contract record no.: X2021440000053

Denomination of invention: Expression input method and device

Granted publication date: 20170118

License type: Common License

Record date: 20210208