WO2015127825A1 - Expression input method and apparatus and electronic device - Google Patents

Expression input method and apparatus and electronic device

Info

Publication number
WO2015127825A1
WO2015127825A1 (PCT/CN2014/095872)
Authority
WO
WIPO (PCT)
Prior art keywords: expression, feature value, feature, input information, input
Prior art date: 2014-02-27
Application number: PCT/CN2014/095872
Other languages
French (fr)
Chinese (zh)
Inventor
陈超 (Chen Chao)
Original Assignee
广州华多网络科技有限公司 (Guangzhou Huaduo Network Technology Co., Ltd.)
Priority claimed from CN201410069166.9A (granted as CN103823561B)
Application filed by 广州华多网络科技有限公司
Publication of WO2015127825A1 publication Critical patent/WO2015127825A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/011Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Abstract

The present invention relates to the field of the Internet, and discloses an expression input method and apparatus and an electronic device. The method comprises: collecting input information; extracting an expression feature value from the input information; and selecting, from a feature library according to the expression feature value, the expression that needs to be input, wherein the feature library stores the correspondence between different expression feature values and different expressions. By collecting input information, extracting an expression feature value from it, and selecting the expression to be input from the feature library according to the extracted value, the present invention solves the problems of slow expression input and a complex input process, thereby simplifying the expression input process and increasing the expression input speed.

Description

Expression input method, device and electronic device

The present application claims priority to Chinese Patent Application No. 201410069166.9, filed on February 27, 2014 and entitled "Expression Input Method and Apparatus", the entire disclosure of which is incorporated herein by reference.

Technical field

The present invention relates to the field of the Internet, and in particular, to an expression input method, device, and electronic device.

Background

With the promotion and popularization of IM (Instant Messenger) applications, blogs, and SMS (Short Messaging Service) applications, users increasingly depend on these applications with information transceiving functions to communicate and stay in contact with each other.

When users communicate through the above applications, they often need to input expressions to express special meanings or to enrich the input content and make it more interesting. In a typical implementation, when one user needs to input an expression, he or she opens the expression selection interface, selects the expression to be input, and sends the selected expression to the other user. Correspondingly, the other user receives and reads the expression.

In the process of implementing the present invention, the inventors found that the related art has at least the following problems: in order to satisfy users' needs as far as possible, an application often contains dozens or even hundreds of expressions for the user to select from. When the expression selection interface contains many expressions, they must be displayed in a paginated manner. To input an expression, the user must first find the page on which the desired expression is located and then select it from that page. This makes expression input very slow and increases the complexity of the expression input process.

Summary of the invention

In order to solve the problem that the expression input speed is slow and the process is complicated in the related art, the embodiment of the invention provides an expression input method, device and electronic device. The technical solution is as follows:

In a first aspect, an expression input method is provided, the method comprising:

Collect input information;

Extracting an expression feature value from the input information;

Selecting an expression to be input from the feature library according to the expression feature value, wherein the feature library stores a correspondence between different expression feature values and different expressions.
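The three steps above can be sketched as a small pipeline. All names here (collect_input, extract_feature, FEATURE_LIBRARY, and the sample entries) are illustrative assumptions, not part of the patent:

```python
# Illustrative feature library: expression feature value -> expression.
FEATURE_LIBRARY = {
    "haha": "smile.png",
    "sob": "cry.png",
    "angry": "angry.png",
}

def collect_input():
    """Stand-in for collecting voice, picture, or video input information."""
    return "Haha"

def extract_feature(input_info):
    """Stand-in for extracting an expression feature value from the input."""
    return input_info.strip().lower()

def select_expression(feature_value):
    """Select the expression to be input by looking up the feature library."""
    return FEATURE_LIBRARY.get(feature_value)

expression = select_expression(extract_feature(collect_input()))
assert expression == "smile.png"
```

A real implementation would replace the stand-ins with speech recognition or face-feature extraction; the lookup structure stays the same.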

Optionally, the extracting the expression feature value from the input information includes:

If the input information includes voice input information, performing voice recognition on the voice input information to obtain a first specified feature value;

If the input information includes picture input information, determining a face area in the picture input information, and extracting a second specified feature value from the face area;

If the input information includes video input information, extracting a third specified feature value from the video input information.

Optionally, when the expression feature value is any one of the first specified feature value, the second specified feature value, and the third specified feature value, the selecting of the expression to be input from the feature library according to the expression feature value includes:

Matching the expression feature value with the expression feature value stored in the feature library;

Using the n expressions corresponding to the m expression feature values whose matching degree is greater than a predetermined threshold as candidate expressions, n≥m≥1;

Selecting at least one sorting condition according to the preset priority, and sorting the n candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of historical usage times, latest usage time, and the matching degree;

Filtering out one candidate expression according to the sorting result, and using that candidate expression as the expression to be input.
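The match, threshold, sort-by-priority, and filter steps above can be sketched as follows. The patent does not specify how the matching degree is computed, so string similarity (difflib) is used as a stand-in, and the library entries and preset priority are illustrative assumptions:

```python
import difflib

# (feature value, expression, historical usage count, last-used timestamp)
LIBRARY = [
    ("haha", "smile.png", 12, 1700000300),
    ("hahaha", "laugh.png", 5, 1700000900),
    ("sob", "cry.png", 8, 1700000100),
]
THRESHOLD = 0.6

def pick_expression(feature_value):
    # 1. Match the extracted feature value against every stored value.
    scored = [
        (difflib.SequenceMatcher(None, feature_value, fv).ratio(), expr, uses, last)
        for fv, expr, uses, last in LIBRARY
    ]
    # 2. Keep candidates whose matching degree exceeds the threshold.
    candidates = [c for c in scored if c[0] > THRESHOLD]
    if not candidates:
        return None
    # 3. Sort by an assumed preset priority: historical usage count,
    #    then most recent usage time, then matching degree.
    candidates.sort(key=lambda c: (c[2], c[3], c[0]), reverse=True)
    # 4. Filter out the top candidate as the expression to be input.
    return candidates[0][1]

print(pick_expression("haha"))  # → smile.png
```

Swapping the key tuple in step 3 changes which sorting condition takes priority, which is how a "preset priority" among conditions can be realized.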

Optionally, when the expression feature value includes the first specified feature value and further includes the second specified feature value or the third specified feature value, the selecting of the expression to be input from the feature library according to the expression feature value includes:

Matching the first specified feature value with a first expression feature value stored in the first feature library;

Obtaining a first expression feature values whose matching degree is greater than a first threshold, a≥1;

Matching the second specified feature value or the third specified feature value with a second expression feature value stored in the second feature library;

Obtaining b second expression feature values whose matching degree is greater than a second threshold, b≥1;

Using the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values as candidate expressions, x≥a, y≥b;

Selecting at least one sorting condition according to the preset priority, and sorting the candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of the repetition count, the historical usage count, the most recent usage time, and the matching degree;

Filtering out one candidate expression according to the sorting result, and using that candidate expression as the expression to be input;

The feature library includes the first feature library and the second feature library, and the expression feature value includes the first expression feature value and the second expression feature value.
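The two-library variant above can be sketched by merging the hits from the first (voice) and second (face/video) feature libraries, with the "repetition count" ranking expressions matched by both libraries first. All data and names are illustrative assumptions:

```python
from collections import Counter

def merge_candidates(voice_hits, face_hits):
    """voice_hits / face_hits: expressions whose matching degree exceeded
    the first / second threshold in the respective feature library."""
    # An expression hit by both libraries gets repetition count 2.
    repetitions = Counter(voice_hits) + Counter(face_hits)
    # Assumed preset priority: repetition count first (matched by both
    # libraries means the strongest evidence of the user's intent);
    # further tie-breaking (history, recency, match degree) is left out.
    ranked = sorted(repetitions, key=lambda e: repetitions[e], reverse=True)
    return ranked[0]

best = merge_candidates(["smile.png", "laugh.png"], ["smile.png"])
assert best == "smile.png"
```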

Optionally, before the selecting the expression to be input from the feature library according to the expression feature value, the method further includes:

Collecting environment information around the electronic device, the environment information including at least one of time information, environment volume information, ambient light intensity information, and environment image information;

Determining a current use environment according to the environmental information;

A candidate feature library corresponding to the current use environment is selected from at least one candidate feature library, and that candidate feature library is used as the feature library.
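Selecting a candidate feature library from sensed environment information could look like the sketch below. The thresholds, library names, and the mapping from environment to library are assumptions for illustration; the patent only specifies that such a selection is made:

```python
def choose_feature_library(hour, ambient_db, lux):
    """Pick a candidate feature library from time, ambient volume (dB),
    and ambient light intensity (lux). Thresholds are illustrative."""
    if 22 <= hour or hour < 7 or ambient_db < 30:
        return "quiet_library"   # late night or a silent room
    if lux < 10:
        return "dark_library"    # cinema-like, dark environment
    return "default_library"

assert choose_feature_library(hour=23, ambient_db=50, lux=300) == "quiet_library"
assert choose_feature_library(hour=12, ambient_db=50, lux=300) == "default_library"
```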

Optionally, the collecting input information includes:

If the input information includes the voice input information, collecting the voice input information through a microphone;

If the input information includes the picture input information or the video input information, the picture input information or the video input information is collected by a camera.

Optionally, before the selecting the expression to be input from the feature library according to the expression feature value, the method further includes:

For each of the expressions, recording at least one training information for training the expression;

Extracting at least one training feature value from the at least one training information;

The training feature value having the largest number of repetitions is used as the expression feature value corresponding to the expression;

A correspondence between the expression and the expression feature value is stored in the feature library.
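The training flow above (record samples per expression, extract a feature value from each, keep the most repeated value as the stored key) can be sketched as follows; `extract` stands in for the real, unspecified feature extractor:

```python
from collections import Counter

def train(expression, training_samples, extract, feature_library):
    """Store the most repeated training feature value as the key for
    the given expression in the feature library."""
    feature_values = [extract(s) for s in training_samples]
    # The training feature value with the largest number of repetitions
    # becomes the expression feature value for this expression.
    best, _count = Counter(feature_values).most_common(1)[0]
    feature_library[best] = expression
    return best

lib = {}
normalize = lambda s: s.strip("!").lower().replace(" ", "")  # toy extractor
train("smile.png", ["Haha", "haha!", "ha ha"], normalize, lib)
assert lib == {"haha": "smile.png"}
```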

Optionally, after the selecting the expression that needs to be input from the feature library according to the expression feature value, the method further includes:

Display the expression that needs to be input in the input box or chat bar.

In a second aspect, an expression input device is provided, the device comprising:

a first information collecting module, configured to collect input information;

a feature extraction module, configured to extract an expression feature value from the input information;

The expression selection module is configured to select an expression to be input from the feature library according to the expression feature value, and the feature library stores a correspondence between different expression feature values and different expressions.

Optionally, the feature extraction module includes at least one of the following extraction units: a first extraction unit, a second extraction unit, and a third extraction unit;

The first extracting unit is configured to perform voice recognition on the voice input information to obtain a first specified feature value, if the input information includes voice input information;

The second extracting unit is configured to: if the input information includes picture input information, determine a face area in the picture input information, and extract a second specified feature value from the face area;

The third extracting unit is configured to: if the input information includes video input information, extract a third specified feature value from the video input information.

Optionally, when the expression feature value is any one of the first specified feature value, the second specified feature value, and the third specified feature value, the expression selection module includes: a feature matching unit, a candidate selecting unit, an expression arranging unit, and an expression determining unit;

The feature matching unit is configured to match the expression feature value with the expression feature value stored in the feature library;

The candidate selecting unit is configured to use n expressions corresponding to m expression feature values whose matching degree is greater than a predetermined threshold as an alternative expression, n≥m≥1;

The expression arranging unit is configured to select at least one sorting condition according to the preset priority and sort the n candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of the historical usage count, the most recent usage time, and the matching degree;

The expression determining unit is configured to filter out one of the candidate expressions according to the sorting result, and use the candidate expression as the expression to be input.

Optionally, when the expression feature value includes the first specified feature value and further includes the second specified feature value or the third specified feature value, the expression selection module includes: a first matching unit, a first obtaining unit, a second matching unit, a second obtaining unit, a candidate determining unit, a candidate sorting unit, and an expression selecting unit;

The first matching unit is configured to match the first specified feature value with a first expression feature value stored in the first feature database;

The first acquiring unit is configured to obtain a first expression feature values whose matching degree is greater than a first threshold, a≥1;

The second matching unit is configured to match the second specified feature value or the third specified feature value with a second expression feature value stored in the second feature library;

The second acquiring unit is configured to obtain b second expression feature values whose matching degree is greater than a second threshold, b≥1;

The candidate determining unit is configured to use, as an alternative expression, x expressions corresponding to the a first expression feature values and y expressions corresponding to the b second expression feature values, x≥a, y≥b;

The candidate sorting unit is configured to select at least one sorting condition according to a preset priority and sort the candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of the repetition count, the historical usage count, the most recent usage time, and the matching degree;

The expression selection unit is configured to filter out one of the candidate expressions according to the sorting result, and use the candidate expression as the expression that needs to be input;

The feature library includes the first feature library and the second feature library, and the expression feature value includes the first expression feature value and the second expression feature value.

Optionally, the device further includes:

a second information collecting module, configured to collect environment information around the electronic device, where the environment information includes at least one of time information, environment volume information, ambient light intensity information, and environment image information;

An environment determining module, configured to determine a current usage environment according to the environment information;

And a feature selection module, configured to select, from the at least one candidate feature library, a candidate feature library corresponding to the current use environment, and use that candidate feature library as the feature library.

Optionally, the first information collection module includes: a voice collection unit, and an image collection unit;

The voice collecting unit is configured to collect the voice input information by using a microphone if the input information includes the voice input information;

The image collecting unit is configured to collect the picture input information or the video input information by using a camera if the input information includes the picture input information or the video input information.

Optionally, the device further includes:

An information recording module, configured to record, for each of the expressions, at least one training information for training the expression;

a feature recording module, configured to extract at least one training feature value from the at least one training information;

a feature selection module, configured to use the training feature value with the largest number of repetitions as the expression feature value corresponding to the expression;

a feature storage module, configured to store a correspondence between the expression and the expression feature value in the feature library.

Optionally, the device further includes:

An expression display module, configured to display the expression that needs to be input in an input box or a chat bar.

In a third aspect, an electronic device is provided, the electronic device comprising: a central processing unit, a network interface unit, a sensor, a microphone, a display, and a system memory, wherein the system memory stores a set of program code, and the central processing unit invokes, via the system bus, the program code stored in the system memory to perform the following operations:

Collecting input information; extracting an expression feature value from the input information; selecting an expression to be input from the feature library according to the expression feature value, wherein the feature library stores a correspondence between different expression feature values and different expressions .

Preferably, the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:

If the input information includes voice input information, performing voice recognition on the voice input information to obtain a first specified feature value; if the input information includes picture input information, determining a face region in the picture input information Extracting a second specified feature value from the face region; and if the input information includes video input information, extracting a third specified feature value from the video input information.

Preferably, the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:

When the expression feature value is any one of the first specified feature value, the second specified feature value, and the third specified feature value: matching the expression feature value with the expression feature values stored in the feature library; using the n expressions corresponding to the m expression feature values whose matching degree is greater than a predetermined threshold as candidate expressions, n≥m≥1; selecting at least one sorting condition according to the preset priority and sorting the n candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of the historical usage count, the most recent usage time, and the matching degree; and filtering out one candidate expression according to the sorting result and using that candidate expression as the expression to be input.

Preferably, the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:

When the expression feature value includes the first specified feature value and further includes the second specified feature value or the third specified feature value: matching the first specified feature value with the first expression feature values stored in the first feature library; obtaining a first expression feature values whose matching degree is greater than a first threshold, a≥1; matching the second specified feature value or the third specified feature value with the second expression feature values stored in the second feature library; obtaining b second expression feature values whose matching degree is greater than a second threshold, b≥1; using the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values as candidate expressions, x≥a, y≥b; selecting at least one sorting condition according to the preset priority and sorting the candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of the repetition count, the historical usage count, the most recent usage time, and the matching degree; and filtering out one candidate expression according to the sorting result and using that candidate expression as the expression to be input. The feature library includes the first feature library and the second feature library, and the expression feature value includes the first expression feature value and the second expression feature value.

Preferably, the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:

Collecting environment information around the electronic device, the environment information including at least one of time information, environment volume information, ambient light intensity information, and environment image information; determining a current use environment according to the environment information; and selecting, from at least one candidate feature library, a candidate feature library corresponding to the current use environment, and using that candidate feature library as the feature library.

Preferably, the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:

If the input information includes the voice input information, the voice input information is collected by using a microphone; if the input information includes the picture input information or the video input information, the picture input information or the video input information is collected by using a camera.

Preferably, the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:

For each expression, recording at least one piece of training information for training the expression; extracting at least one training feature value from the at least one piece of training information; using the training feature value with the largest number of repetitions as the expression feature value corresponding to the expression; and storing a correspondence between the expression and the expression feature value in the feature library.

Preferably, the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:

Display the expression that needs to be input in the input box or chat bar.

The beneficial effects brought by the technical solutions provided by the embodiments of the present invention are:

By collecting the input information, the expression feature value is extracted from the input information, and the expression to be input is selected from the feature library according to the extracted expression feature value, and the correspondence relationship between different expression feature values and different expressions is stored in the feature library; The problem that the expression input speed is slow and the process is complicated; the effect of simplifying the expression input process and improving the expression input speed is achieved.

Brief Description of the Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those of ordinary skill in the art may obtain other drawings from these drawings without creative effort.

FIG. 1 is a flowchart of an expression input method according to an embodiment of the present invention;

FIG. 2A is a flowchart of an expression input method according to another embodiment of the present invention;

FIG. 2B is a schematic diagram of a chat interface of a typical instant messaging application;

FIG. 3 is a structural block diagram of an expression input device according to an embodiment of the present invention;

FIG. 4 is a structural block diagram of an expression input device according to another embodiment of the present invention;

FIG. 5 is an illustrative terminal architecture of an electronic device 500 used in an embodiment of the present invention;

FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.

Detailed Description

The embodiments of the present invention will be further described in detail below with reference to the accompanying drawings.

In various embodiments of the present invention, the electronic device may be a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, a desktop computer, a smart TV, or the like.

Please refer to FIG. 1 , which is a flowchart of a method for inputting an expression according to an embodiment of the present invention. The embodiment is illustrated by using the expression input method in an electronic device. The expression input method includes the following steps:

In step 102, input information is collected.

Step 104, extracting an expression feature value from the input information.

Step 106: Select an expression to be input from the feature library according to the expression feature value, and store a correspondence between different expression feature values and different expressions in the feature library.

In summary, in the expression input method provided by this embodiment, input information is collected, an expression feature value is extracted from the input information, and the expression to be input is selected from the feature library according to the extracted expression feature value, where the feature library stores the correspondence between different expression feature values and different expressions. This solves the problems of slow expression input and a complex input process, thereby simplifying the expression input process and increasing the expression input speed.

Preferably, extracting the expression feature value from the input information comprises:

If the input information includes voice input information, performing voice recognition on the voice input information to obtain a first specified feature value;

If the input information includes picture input information, determining a face area in the picture input information, and extracting a second specified feature value from the face area;

If the input information includes video input information, the third specified feature value is extracted from the video input information.

Preferably, when the expression feature value is any one of the first specified feature value, the second specified feature value, and the third specified feature value, the expression to be input is selected from the feature library according to the expression feature value, including:

Matching the expression feature values with the expression feature values stored in the feature library;

n expressions corresponding to m expression feature values whose matching degree is greater than a predetermined threshold are used as alternative expressions, n≥m≥1;

Selecting at least one sorting condition according to a preset priority, and sorting the n candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of historical usage times, recent usage time, and matching degree;

One candidate expression is filtered out according to the sorting result and used as the expression to be input.

Preferably, when the expression feature value includes the first specified feature value, and further includes the second specified feature value or the third specified feature value, selecting an expression to be input from the feature library according to the expression feature value includes:

Matching the first specified feature value with the first expression feature value stored in the first feature library;

Obtaining a first expression feature value whose matching degree is greater than the first threshold, a≥1;

Matching the second specified feature value or the third specified feature value with the second expression feature value stored in the second feature library;

Obtaining b second expression feature values whose matching degree is greater than a second threshold, b≥1;

The x expressions corresponding to the a first expression feature value and the y expressions corresponding to the b second expression feature values are used as alternative expressions, x≥a, y≥b;

Selecting at least one sorting condition according to a preset priority, and sorting the candidate expressions according to at least one sorting condition, the sorting condition includes any one of a repetition number, a history usage count, a recent usage time, and a matching degree;

Filtering out one candidate expression according to the sorting result, and using that candidate expression as the expression to be input;

The feature library includes a first feature library and a second feature library, and the expression feature value includes the first expression feature value and the second expression feature value.

Preferably, before selecting an expression to be input from the feature library according to the expression feature value, the method further includes:

Collecting environment information around the electronic device, where the environment information includes at least one of time information, environment volume information, ambient light intensity information, and environment image information;

Determine the current usage environment based on environmental information;

The candidate feature library corresponding to the current use environment is selected from the at least one candidate feature library, and the candidate feature library is used as the feature library.

Preferably, the input information is collected, including:

If the input information includes voice input information, the voice input information is collected through the microphone;

If the input information includes picture input information or video input information, the picture input information or the video input information is collected through the camera.

Preferably, before selecting an expression to be input from the feature library according to the expression feature value, the method further includes:

For each expression, record at least one training information for training the expression;

Extracting at least one training feature value from the at least one training information;

The training feature value with the largest number of repetitions is used as the expression feature value corresponding to the expression;

The correspondence between the expression and the expression feature value is stored in the feature library.

Preferably, after selecting an expression to be input from the feature library according to the expression feature value, the method further includes:

Displaying the expression to be input in the input box or the chat bar.

All of the above optional technical solutions may be used in any combination to form an optional embodiment of the present invention, and will not be further described herein.

Please refer to FIG. 2A , which is a flowchart of a method for inputting an expression according to another embodiment of the present invention. The embodiment is illustrated by using the expression input method in an electronic device. The expression input method includes the following steps:

Step 201: Determine whether the electronic device is in an automatic collection state or a manual collection state; if the electronic device is in the automatic collection state, perform step 202; if the electronic device is in the manual collection state, perform step 203.

The automatic acquisition state means that the electronic device automatically turns on the input unit to collect input information; the manual acquisition state means that the user manually turns on the input unit to collect input information.

Step 202: If the electronic device is in an automatic acquisition state, the input unit is turned on.

If the electronic device is in an automatic acquisition state, the electronic device automatically turns on the input unit. The input unit includes a microphone and/or a camera. The input unit may be an input unit built in the electronic device, or may be an input unit external to the electronic device, which is not specifically limited in the embodiment of the present invention.

After the electronic device turns on the input unit, the following step 204 is performed.

Step 203: If the electronic device is in the manual collection state, it is detected whether the input unit is in an open state.

If the electronic device is in the manual acquisition state, the electronic device detects whether the input unit is in an open state. Since the manual acquisition state refers to the collection of input information by the user turning on the input unit, the electronic device detects at this time whether the user turns on the input unit. The user can turn on the input unit with a control such as a button or a switch.

When the input unit is a microphone, please refer to FIG. 2B in combination with a chat interface of a typical instant messaging application. The microphone button 22 is located in the input box 24. The user presses the microphone button 22 to turn the microphone on, and the microphone turns off when the user releases the microphone button 22.

If the input unit is in the on state, the following step 204 is performed; if the input unit is not in the on state, the following steps are not performed.

Step 204: Acquire input information through an input unit on the electronic device.

Regardless of whether the electronic device is in an automatic acquisition state or a manual acquisition state, after the input unit is turned on, the electronic device collects input information through the input unit.

In a first possible implementation, if the input unit includes a microphone, the voice input information is collected through the microphone. The voice input information can be what the user says, or a sound made by the user or other object.

In a second possible implementation, if the input unit includes a camera, the picture input information or the video input information is collected through the camera. The picture input information may be a facial expression of the user, and the video input information may be a gesture of the user or a gesture trajectory of the user, and the like.

Step 205: Extract an expression feature value from the input information.

After the electronic device collects the input information, the expression feature value is extracted from the input information.

In a first possible implementation manner, if the input information includes voice input information, the voice input information is voice-recognized, and then the first specified feature value is extracted from the voice input information. The first specified feature value is used to represent the user's voice.

The electronic device may extract the first specified feature value from the voice input information by a data dimensionality reduction method or a feature value selection method. The data dimensionality reduction method is a commonly used method for simplifying and effectively analyzing high-dimensional information such as speech or images. By reducing the dimensionality of high-dimensional information, data that does not reflect the essential characteristics of the information can be removed. Therefore, the feature value in the input information can be obtained by the data dimensionality reduction method, where the feature value is data capable of reflecting the essential characteristics of the input information. In the present embodiment, the first specified feature value is extracted from the voice input information and used in the expression input method provided in this embodiment, so the first specified feature value is referred to as an expression feature value.

In addition, the expression feature value can also be extracted from the input information by the feature value selection method. The electronic device may preset at least one expression feature value, and after collecting the input information, analyze the input information and find out whether there is a preset expression feature value.

In this embodiment, it is assumed that the voice input information collected by the electronic device through the microphone is “of course, no problem haha”, and the electronic device extracts the first specified feature value “haha” from the voice input information.
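The feature value selection method described above can be sketched as a simple keyword scan over the recognized transcript. This is only an illustrative assumption; the preset list, function name, and matching rule below are hypothetical and not specified by the embodiment.

```python
# Hypothetical sketch of the feature value selection method: after speech
# recognition yields a transcript, scan it for any preset expression feature
# value. The preset list below is illustrative only.

PRESET_FEATURE_VALUES = ["haha", "happy", "snowing", "like"]

def extract_expression_feature_values(transcript: str) -> list[str]:
    """Return every preset expression feature value found in the transcript."""
    text = transcript.lower()
    return [v for v in PRESET_FEATURE_VALUES if v in text]

# The transcript from the example, "of course, no problem haha", yields ["haha"].
print(extract_expression_feature_values("of course, no problem haha"))
```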

In a second possible implementation manner, if the input information includes picture input information, the face area is determined from the picture input information, and the second specified feature value is extracted from the face area. The second specified feature value is used to represent a facial expression of the person.

The electronic device may first determine a face region from the picture input information by using an image recognition technology, and then extract a second specified feature value from the face region by a data dimensionality reduction method or a feature value selection method.

For example, after capturing a picture of the user's face through the camera, the face area in the picture is determined. After analyzing the face region, the second specified feature value corresponding to the expressions such as "happy", "sad", "cry" or "crazy" is extracted therefrom.

In a third possible implementation, if the input information includes video input information, the third specified feature value is extracted from the video input information. Wherein, the third specified feature value is used to represent the gesture trajectory of the person.

When the input information is video input information such as a user gesture or a gesture track collected by the camera, the electronic device may extract a third specified feature value from the video input information.

Step 206: Select an expression to be input from the feature library according to the extracted expression feature value.

Since the feature library stores the correspondence between different expression feature values and different expressions, the electronic device can select the expression to be input according to the extracted expression feature value and the correspondence stored in the feature library. The selected expression is then inserted into the input box 24 for the user to send, or directly displayed in the chat bar 26.

Specifically, when the extracted expression feature value is any one of the first specified feature value, the second specified feature value, and the third specified feature value, the step may include the following sub-steps:

(1) Matching the extracted expression feature values with the expression feature values stored in the feature library.

Since the expression feature value stored in the feature library is a specific expression feature value (for example, the first specified feature value is entered by a specific person), there is a certain degree of difference between the expression feature value extracted by the electronic device and the expression feature value stored in the feature library, so the electronic device needs to match the two to obtain a matching degree.

(2) n expressions corresponding to m expression feature values whose matching degree is greater than a predetermined threshold are used as alternative expressions, n≥m≥1.

Wherein an expression feature value corresponds to at least one expression. The predetermined threshold can be preset according to the actual situation, for example, set to 80%.

In this embodiment, it is assumed that the candidate expressions obtained by the electronic device are: three expressions A, B, and C corresponding to an expression feature value with a matching degree of 98%, and a D expression corresponding to another expression feature value with a matching degree of 90%.

(3) selecting at least one sorting condition according to a preset priority, and sorting the n candidate expressions according to at least one sorting condition.

The sorting condition includes any one of historical usage count, most recent usage time, and matching degree. The priority order among the sorting conditions may be preset according to actual conditions; for example, the priority order from high to low may be matching degree, historical usage count, and most recent usage time. When the electronic device cannot filter out a single expression to be input according to the first sorting condition, it selects the second sorting condition to continue the screening, and so on, until one candidate expression is finally selected as the expression to be input.

In this embodiment, the electronic device first sorts the four expressions A, B, C, and D according to the matching degree, obtaining A, B, C, and D in turn, and finds that the matching degrees of the three expressions A, B, and C are all 98%. The electronic device then sorts the three expressions A, B, and C according to the historical usage count, obtaining B, A, and C in turn (assuming expressions are sorted by historical usage count in descending order, and expression A has been used 15 times, expression B 20 times, and expression C 3 times). At this point, the electronic device finds that expression B has the highest historical usage count, so expression B is selected as the expression to be input.

(4) Filtering an alternative expression according to the sorting result, and using the candidate expression as an expression to be input.
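Sub-steps (1) through (4) can be illustrated with the following sketch. It uses a generic string-similarity measure (`difflib.SequenceMatcher`) as a stand-in for the matching computation; the library layout, field names, and the priority order of matching degree, then historical usage count, then most recent usage time are assumptions drawn from the example in this embodiment.

```python
# Illustrative sketch of sub-steps (1)-(4): match the extracted feature value
# against the library, keep expressions whose matching degree exceeds a
# predetermined threshold, sort candidates by the priority conditions, and
# pick the top one. Field names and the similarity measure are assumptions.
from difflib import SequenceMatcher

def select_expression(extracted, library, threshold=0.8):
    # library: list of dicts with keys
    #   "feature", "expression", "history_count", "last_used"
    candidates = []
    for entry in library:
        # sub-step (1): compute a matching degree between the two values
        degree = SequenceMatcher(None, extracted, entry["feature"]).ratio()
        if degree > threshold:                  # sub-step (2): threshold filter
            candidates.append({**entry, "match": degree})
    # sub-step (3): sort by matching degree, then historical usage count,
    # then most recent usage time (priority order from the example above)
    candidates.sort(key=lambda c: (c["match"], c["history_count"],
                                   c["last_used"]), reverse=True)
    # sub-step (4): the best-ranked candidate is the expression to be input
    return candidates[0]["expression"] if candidates else None
```

With a library mirroring the A/B/C/D example (A, B, C matching fully, B used most often), the function returns B without any user confirmation.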

In the expression input method provided by the embodiment of the present invention, the electronic device automatically filters out one candidate expression from the plurality of candidate expressions as the expression to be input, without requiring the user to select or confirm, which simplifies the flow of expression input and makes expression input more efficient and convenient.

When the extracted expression feature value includes the first specified feature value, and further includes the second specified feature value or the third specified feature value, the step may include the following steps:

(1) Matching the first specified feature value with the first expression feature value stored in the first feature library.

Different from the above manner of selecting an expression to be input, the electronic device comprehensively analyzes two forms of expression feature values to determine an expression to be input, which can make the selected expression more accurate and fully satisfy the user's needs.

The electronic device matches the first specified feature value against the first expression feature values stored in the first feature library and obtains a matching degree between them. In this embodiment, it is assumed that the first specified feature value extracted by the electronic device is "haha".

(2) Obtaining a first expression feature values whose matching degree is greater than the first threshold, a≥1.

The electronic device acquires a first expression feature values whose matching degree is greater than the first threshold, a≥1. In this embodiment, a = 1 is assumed.

(3) Matching the second specified feature value or the third specified feature value with the second expression feature value stored in the second feature library.

In this embodiment, the description takes as an example the case where the second specified feature value represents a laughing facial expression.

(4) Obtaining b second expression feature values whose matching degree is greater than the second threshold, b≥1.

The electronic device acquires b second expression feature values whose matching degree is greater than a second threshold, b≥1. In this embodiment, it is assumed that b = 2.

(5) The x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values are used as alternative expressions, x≥a, y≥b.

In this embodiment, it is assumed that the candidate expressions are: three expressions "laugh", "smile", and "fangs" corresponding to the first expression feature value whose matching degree is greater than the first threshold; a "smile" expression corresponding to the first of the second expression feature values whose matching degree is greater than the second threshold; and a "beep" expression corresponding to the second of the second expression feature values whose matching degree is greater than the second threshold.

(6) selecting at least one sorting condition according to the preset priority, and sorting the alternative expressions according to at least one sorting condition.

The sorting condition includes any one of repetition count, historical usage count, most recent usage time, and matching degree. The priority order among the sorting conditions may be preset according to actual conditions; for example, the priority order from high to low may be repetition count, historical usage count, most recent usage time, and matching degree. When the electronic device cannot filter out a single expression to be input according to the first sorting condition, it selects the second sorting condition to continue the screening, and so on, until one candidate expression is finally selected as the expression to be input.

In this embodiment, it is assumed that the "laugh", "smile", "fangs", and "beep" expressions are first sorted according to the repetition count; the "smile" expression is found to have the most repetitions, since it appears in both candidate sets, so the "smile" expression is directly selected as the expression to be input.

(7) Filtering an alternative expression according to the sorting result, and using the alternative expression as an expression to be input.
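A minimal sketch of the combined selection follows, assuming (as in the example above) that the highest-priority sorting condition, repetition count, already decides the result. Pooling the two candidate lists and counting repetitions is an illustrative simplification of sub-steps (5) through (7); all names are hypothetical.

```python
# Hypothetical sketch of sub-steps (5)-(7): pool the expressions matched from
# the first (voice) feature library and the second (picture/video) feature
# library, count how often each expression repeats across the two pools, and
# select the most repeated one as the expression to be input.
from collections import Counter

def select_from_two_libraries(voice_candidates, image_candidates):
    # voice_candidates / image_candidates: expression names whose feature
    # values exceeded the first / second threshold, respectively
    pool = voice_candidates + image_candidates   # sub-step (5)
    counts = Counter(pool)                       # repetition count, sub-step (6)
    return counts.most_common(1)[0][0]           # sub-step (7)

# Example from the text: the voice library yields "laugh", "smile", "fangs";
# the picture library yields "smile", "beep"; "smile" repeats and is selected.
print(select_from_two_libraries(["laugh", "smile", "fangs"], ["smile", "beep"]))
```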

In the expression input method provided by the embodiment of the present invention, the electronic device automatically filters out one candidate expression from the plurality of candidate expressions as the expression to be input, without requiring the user to select or confirm, which simplifies the flow of expression input and makes expression input more efficient and convenient.

In addition, after the electronic device matches the extracted expression feature value with the expression feature values stored in the feature library, if no expression feature value whose matching degree is greater than the threshold is found, the user may be prompted that no matching result was found, for example, in the form of a pop-up window.

In step 207, the expression that needs to be input is displayed in the input box or the chat bar.

After the electronic device selects an expression to be input from the feature library, the expression to be input is directly displayed in the input box or the chat bar. Referring to FIG. 2B, the electronic device can insert the selected emoticons into the input box 24 for the user to send or directly display in the chat bar 26.

It should be noted that the expression input method provided in this embodiment may also select an expression in combination with an environment in which the electronic device is located. Specifically, before the foregoing step 206, the following steps may also be included:

(1) Collect environmental information around the electronic device.

The environment information includes at least one of time information, environment volume information, ambient light intensity information, and environment image information. The ambient volume information can be collected by the microphone, the ambient light intensity information can be collected by the light intensity sensor, and the environmental image information can be collected by the camera.

(2) Determine the current usage environment based on the environmental information.

After the electronic device collects the surrounding environment information, the various pieces of environment information are comprehensively analyzed to determine the current usage environment. For example, when the time information is 22:00, the environment volume information is 2 decibels, and the ambient light intensity is weak, it can be determined that the current usage environment is one in which the user is sleeping. As another example, when the time information is 14:00, the environment volume information is 75 decibels, the ambient light intensity is strong, and the environment image information shows a street, it can be determined that the current usage environment is one in which the user is shopping.

(3) Selecting, from the at least one candidate feature library, an candidate feature library corresponding to the current use environment, and using the candidate feature library as the feature library.

The correspondence between different usage environments and different candidate feature libraries is pre-stored in the electronic device. After the electronic device acquires the current usage environment, the corresponding candidate feature library is selected as the feature library. Then, the electronic device selects an expression to be input from the feature library according to the extracted expression feature value.
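Steps (1) through (3) can be sketched as a simple classifier over the collected readings followed by a library lookup. The thresholds, environment names, and library contents below are illustrative guesses only (the environment image information is omitted for brevity):

```python
# Hypothetical sketch of environment-aware library selection: classify the
# usage environment from time, volume, and light readings, then pick the
# candidate feature library registered for that environment. All thresholds
# and names are assumptions, not values from the embodiment.

def determine_environment(hour, volume_db, light):
    """Step (2): map collected readings to a usage environment."""
    if hour >= 22 and volume_db < 10 and light == "weak":
        return "sleeping"
    if 9 <= hour <= 18 and volume_db > 60 and light == "strong":
        return "shopping"
    return "default"

# Pre-stored correspondence between usage environments and candidate libraries
CANDIDATE_LIBRARIES = {
    "sleeping": {"yawn": "sleepy-face"},
    "shopping": {"haha": "big-smile"},
    "default":  {"haha": "smile"},
}

def select_feature_library(hour, volume_db, light):
    """Step (3): use the matching candidate library as the feature library."""
    return CANDIDATE_LIBRARIES[determine_environment(hour, volume_db, light)]
```

With the readings from the first example (22:00, 2 decibels, weak light), `determine_environment` returns "sleeping" and the corresponding candidate library is used.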

It should also be noted that the correspondence between different expression feature values stored in the feature library and different expressions may be previously set by the system or designer. For example, when a user installs an emoticon package, the emoticon package carries a feature library. After designing the expression, the designer also sets the correspondence between different expression feature values and different expressions, and creates a feature library, and then packs the expression together with the feature library into an expression package. In addition, the correspondence between different expression feature values stored in the feature library and different expressions may also be set by the user. When set by the user, the expression input method provided in this embodiment further includes the following steps:

First, for each expression, at least one piece of training information for training the expression is recorded.

For each expression, the electronic device records at least one piece of training information for training the expression. By training expressions, the user can customize the correspondence between different expression feature values and different expressions. For example, the user selects four commonly used expressions from the expression selection interface, namely: expression A, expression B, expression C, and expression D. Taking the training of expression A as an example, the user selects expression A and says "fangs" three times, and the electronic device records the three pieces of training information.

Of course, the electronic device still collects and records the training information through an input unit such as a microphone or a camera.

Second, at least one training feature value is extracted from the at least one training information.

Similar to the above step 205, the electronic device may extract the training feature value from the training information by a data dimensionality reduction method or a feature value selection method. The training information may be training information in the form of voice, training information in the form of pictures, or training information in the form of video.

Third, the training feature value with the largest number of repetitions is used as the expression feature value corresponding to the expression.

When the pieces of training information recorded by the electronic device are the same, the training feature values extracted from them are normally the same. For example, when the three pieces of training information recorded by the electronic device are all the user saying "fangs", the three extracted training feature values are usually all "fangs".

However, when the electronic device collects training information through an input unit such as a microphone or a camera, there may be interference from the surrounding environment, such as noise or image interference, in which case the training feature values extracted from the training information may differ. Therefore, the electronic device takes the most repeated training feature value as the expression feature value corresponding to the expression. For example, when the three pieces of training information recorded by the electronic device are the user saying "fangs", and two of the three extracted training feature values are "fangs" while the other is "in the case", the electronic device selects "fangs" as the expression feature value corresponding to expression A.

Fourth, the correspondence between the expression and the expression feature value is stored in the feature library.

In the actual application, the trained correspondence can be stored in the original feature database; the user can also create a custom feature database and store the trained correspondence in the custom feature database.

Through the above four steps, the correspondence between the expression and the expression feature value is set by the user, thereby further improving the user experience.
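The four training steps above can be sketched as follows, with a pluggable `extract_feature` callable standing in for the real speech or image analysis; all names and the example data are illustrative:

```python
# Hypothetical sketch of the four training steps: record training information
# for an expression, extract one training feature value per recording, keep
# the most repeated value (majority vote against noisy recordings), and store
# the expression-to-feature correspondence in the feature library.
from collections import Counter

def train_expression(expression, training_infos, feature_library,
                     extract_feature=lambda info: info):
    # Steps 1-2: extract one training feature value from each recording
    values = [extract_feature(info) for info in training_infos]
    # Step 3: the most repeated training feature value wins
    chosen = Counter(values).most_common(1)[0][0]
    # Step 4: store the correspondence in the feature library
    feature_library[chosen] = expression
    return chosen

# Example from the text: two recordings recognized as "fangs", one
# misrecognized; "fangs" wins the vote and is bound to expression A.
library = {}
print(train_expression("A", ["fangs", "fangs", "in the case"], library))
```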

It should be noted that, in order to identify when the user needs to use the expression input method provided by this embodiment, a step of detecting whether the cursor is located in the input box may be performed before step 201. The cursor indicates the location where the user inputs text, an expression, or a picture. Referring to FIG. 2B, the cursor 28 is located in the input box 24. The electronic device detects, based on the position of the cursor 28, whether the user is using the input box 24 to input content such as characters, expressions, or pictures. When the cursor 28 is in the input box 24, it is assumed by default that the user is using the input box 24, at which point the above step 201 is performed.

In summary, the expression input method provided by this embodiment collects input information through an input unit on the electronic device, extracts an expression feature value from the input information, and selects an expression to be input from the feature library according to the extracted expression feature value, where the feature library stores the correspondence between different expression feature values and different expressions. This solves the problem in the related art that expression input is slow and the process is complicated, and achieves the effects of simplifying the expression input process and improving the speed of expression input.

In addition, voice input information is collected through the microphone, or picture input information or video input information is collected through the camera, and expression input is performed accordingly, which enriches the manner of expression input; the user can also set the correspondence between different expression feature values and different expressions, fully satisfying the user's needs.

In addition, the foregoing embodiment provides two ways of selecting the expression to be input. The first, which analyzes one form of expression feature value to determine the expression to be input, is simple and fast; the second, which comprehensively analyzes two forms of expression feature values to determine the expression to be input, makes the selected expression more accurate and fully satisfies the user's needs.

In a specific example, Xiao Ming opens an application with an information transceiving function installed on his smart TV, and turns on the front camera of the smart TV to collect pictures of his face area. Xiao Ming's mouth turns up slightly, showing a smiling expression. The smart TV extracts the expression feature value from the collected face region picture, finds the correspondence between the expression feature value and the expression in the feature library, and inserts a smile expression into the input box of the chat interface. After that, Xiao Ming shows a sad expression, and the smart TV inserts a sad expression into the input box of the chat interface.

In another specific example, Xiaohong uses instant messaging software installed on her mobile phone to train expressions and set the correspondence between several groups of expression feature values and expressions. After that, while Xiaohong chats with others, when the mobile phone receives the voice input information "Today is so happy", it inserts the emoticon (Figure PCTCN2014095872-appb-000002) into the input box of the chat interface according to the correspondence between the expression feature value "happy" and the expression (Figure PCTCN2014095872-appb-000001). When the mobile phone receives the voice input information "snowing outside", it inserts the emoticon (Figure PCTCN2014095872-appb-000004) into the input box of the chat interface according to the correspondence between the expression feature value "snowing" and the expression (Figure PCTCN2014095872-appb-000003). When the mobile phone receives the voice input information "This snow is really beautiful, I like it", it inserts the emoticon (Figure PCTCN2014095872-appb-000006) into the input box of the chat interface according to the correspondence between the expression feature value "like" and the expression (Figure PCTCN2014095872-appb-000005).

The following is an embodiment of the apparatus of the present invention, which can be used to carry out the method embodiments of the present invention. For details not disclosed in the embodiment of the device of the present invention, please refer to the method embodiment of the present invention.

Please refer to FIG. 3, which is a structural block diagram of an expression input device according to an embodiment of the present invention, which is used in an electronic device. The expression input device can be implemented as part or all of the electronic device by software, hardware or a combination of the two. The expression input device includes: a first information collection module 310, a feature extraction module 320, and an expression selection module 330.

The first information collection module 310 is configured to collect input information.

The feature extraction module 320 is configured to extract an expression feature value from the input information.

The expression selection module 330 is configured to select an expression that needs to be input from the feature library according to the expression feature value, and the feature library stores a correspondence between different expression feature values and different expressions.

In summary, the expression input device provided by this embodiment collects input information, extracts an expression feature value from the input information, and selects an expression to be input from the feature library according to the expression feature value, where the feature library stores the correspondence between different expression feature values and different expressions. This solves the problem in the related art that expression input is slow and the process is complicated, and achieves the effects of simplifying the expression input process and improving the speed of expression input.

Please refer to FIG. 4, which is a structural block diagram of an expression input device according to another embodiment of the present invention, used in an electronic device. The expression input device can be implemented as part or all of the electronic device by software, hardware, or a combination of the two. The expression input device includes: a first information collection module 310, a feature extraction module 320, a second information collection module 321, an environment determination module 322, a feature selection module 323, an expression selection module 330, and an expression display module 331.

The first information collection module 310 is configured to collect input information.

Specifically, the first information collecting module 310 includes: a voice collecting unit 310a and an image collecting unit 310b.

The voice collection unit 310a is configured to collect voice input information through a microphone if the input information includes voice input information.

The image capturing unit 310b is configured to collect image input information or video input information through the camera if the input information includes picture input information or video input information.

The feature extraction module 320 is configured to extract an expression feature value from the input information.

Specifically, the feature extraction module 320 includes at least one extraction unit: a first extraction unit 320a, a second extraction unit 320b, and a third extraction unit 320c.

The first extracting unit 320a is configured to perform voice recognition on the voice input information if the input information includes voice input information, to obtain a first specified feature value.

The second extracting unit 320b is configured to determine a face area in the picture input information and extract a second specified feature value from the face area, if the input information includes picture input information.

The third extracting unit 320c is configured to extract a third specified feature value from the video input information if the input information includes video input information.

Optionally, the expression input device further includes: a second information collection module 321, an environment determination module 322, and a feature selection module 323.

The second information collecting module 321 is configured to collect environment information around the electronic device, where the environment information includes at least one of time information, environment volume information, ambient light intensity information, and environment image information.

The environment determining module 322 is configured to determine a current usage environment according to the environment information.

The feature selection module 323 is configured to select an candidate feature library corresponding to the current use environment from the at least one candidate feature library, and use the candidate feature library as a feature library.

The expression selection module 330 is configured to select an expression that needs to be input from the feature library according to the expression feature value, and the feature library stores a correspondence between different expression feature values and different expressions.

When the expression feature value is any one of the first specified feature value, the second specified feature value, and the third specified feature value, the expression selection module 330 includes: a feature matching unit 330a, a candidate selection unit 330b, an expression arrangement unit 330c, and an expression determination unit 330d.

The feature matching unit 330a is configured to match the expression feature value with the expression feature value stored in the feature library.

The candidate selecting unit 330b is configured to use the n expressions corresponding to the m expression feature values whose matching degree is greater than a predetermined threshold as candidate expressions, n ≥ m ≥ 1.

The expression arranging unit 330c is configured to select at least one sorting condition according to a preset priority and sort the n candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of a historical usage count, a most recent usage time, and the matching degree.

The expression determining unit 330d is configured to filter out one candidate expression according to the sorting result and use that candidate expression as the expression to be input.
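The match–sort–filter pipeline implemented by units 330a to 330d can be sketched as follows. The sketch assumes feature values are numeric vectors compared by cosine similarity with a threshold of 0.8; none of these specifics come from the embodiment, which leaves the matching function open.

```python
# Illustrative sketch of the single-feature-value selection pipeline.
# Feature values as numeric vectors, cosine similarity, and the 0.8
# threshold are all assumptions for the sake of the example.

def matching_degree(a, b):
    """Cosine similarity between two feature-value vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def select_expression(feature_value, feature_library, threshold=0.8):
    # Step 1: match against every stored expression feature value; keep the
    # m values above the threshold and the n expressions they map to (n >= m >= 1).
    candidates = []
    for entry in feature_library:
        degree = matching_degree(feature_value, entry["feature_value"])
        if degree > threshold:
            for expr in entry["expressions"]:
                candidates.append(dict(expr, degree=degree))
    # Step 2: sort by a preset priority of sorting conditions: matching
    # degree, then historical usage count, then most recent usage time.
    candidates.sort(key=lambda e: (e["degree"], e["uses"], e["last_used"]),
                    reverse=True)
    # Step 3: filter out the top-ranked candidate as the expression to input.
    return candidates[0]["name"] if candidates else None

library = [
    {"feature_value": [1.0, 0.0],
     "expressions": [{"name": "smile", "uses": 5, "last_used": 100}]},
    {"feature_value": [0.5, 0.9],
     "expressions": [{"name": "wink", "uses": 9, "last_used": 200}]},
]
chosen = select_expression([0.98, 0.05], library)
```

The tuple key encodes the priority order of the sorting conditions, so a single stable sort realizes the "select at least one sorting condition according to the preset priority" step.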

When the expression feature value includes the first specified feature value and further includes the second specified feature value or the third specified feature value, the expression selection module 330 includes: a first matching unit 330e, a first obtaining unit 330f, a second matching unit 330g, a second obtaining unit 330h, a candidate determining unit 330i, a candidate sorting unit 330j, and an expression selecting unit 330k.

The first matching unit 330e is configured to match the first specified feature value with the first expression feature value stored in the first feature library.

The first obtaining unit 330f is configured to obtain the a first expression feature values whose matching degree is greater than a first threshold, a ≥ 1.

The second matching unit 330g is configured to match the second specified feature value or the third specified feature value with the second expression feature value stored in the second feature library.

The second obtaining unit 330h is configured to obtain the b second expression feature values whose matching degree is greater than a second threshold, b ≥ 1.

The candidate determining unit 330i is configured to use, as candidate expressions, the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values, x ≥ a, y ≥ b.

The candidate sorting unit 330j is configured to select at least one sorting condition according to a preset priority and sort the candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of a repetition count, a historical usage count, a most recent usage time, and the matching degree.

The expression selecting unit 330k is configured to filter out one candidate expression according to the sorting result and use that candidate expression as the expression to be input.

The feature library includes the first feature library and the second feature library, and the expression feature value includes the first expression feature value and the second expression feature value.
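When both a voice-derived feature value and an image- or video-derived feature value are present, the candidate lists obtained from the two feature libraries are combined, and the repetition count (an expression confirmed by both modalities) becomes the highest-priority sorting condition. A minimal sketch, assuming each library lookup has already produced the list of expression names above its threshold:

```python
from collections import Counter

def merge_candidates(first_library_matches, second_library_matches):
    """Combine the x + y candidate expressions from both libraries and rank
    them by repetition count (ties broken alphabetically for a stable order)."""
    repetitions = Counter(first_library_matches) + Counter(second_library_matches)
    return sorted(repetitions, key=lambda name: (-repetitions[name], name))

# "smile" is matched by both the voice features and the facial features,
# so its repetition count is 2 and it ranks first.
ranked = merge_candidates(["smile", "laugh"], ["smile", "wink"])
```

Adding two `Counter` objects sums per-expression counts, which directly yields the repetition count used as the leading sort key.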

The expression display module 331 is configured to display an expression that needs to be input in an input box or a chat bar.

Optionally, the expression input device further includes: an information recording module, a feature recording module, a feature selection module, and a feature storage module.

An information recording module, configured to record, for each expression, at least one piece of training information used to train the expression.

A feature recording module, configured to extract at least one training feature value from the at least one piece of training information.

The feature selection module is configured to use the training feature value with the largest number of repetitions as the expression feature value corresponding to the expression.

The feature storage module is configured to store the correspondence between the expression and the expression feature value in the feature library.
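The training flow of these four modules — record training information, extract a training feature value from each piece, keep the most-repeated value, and store the expression-to-value correspondence — can be sketched as follows. The extractor shown (quantizing a raw reading) is a stand-in assumption; the embodiment does not fix the extraction method.

```python
from collections import Counter

def train_expression(expression_name, training_samples, extract, feature_library):
    # Extract one training feature value from each piece of training information.
    values = [extract(sample) for sample in training_samples]
    # The training feature value with the largest repetition count becomes
    # the expression feature value for this expression.
    most_common_value, _count = Counter(values).most_common(1)[0]
    # Store the correspondence between the expression and its feature value.
    feature_library[most_common_value] = expression_name
    return most_common_value

feature_library = {}
# Illustrative extractor: quantize a raw reading into a coarse bucket so that
# repeated-but-noisy samples collapse to the same training feature value.
value = train_expression("smile", [0.91, 0.88, 0.93, 0.52],
                         lambda x: round(x, 1), feature_library)
```

Picking the most-repeated value makes the stored feature value robust to outlier training samples, matching the "largest number of repetitions" rule above.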

In summary, the expression input device provided by this embodiment collects input information, extracts an expression feature value from the input information, and selects the expression to be input from a feature library according to the extracted expression feature value, where the feature library stores correspondences between different expression feature values and different expressions. This solves the problem that expression input is slow and the process is complicated, simplifying the expression input process and increasing expression input speed. In addition, voice input information can be collected through the microphone, or picture or video input information can be captured through the camera, to perform expression input, enriching the ways in which expressions can be input; the user can also set the correspondences between different expression feature values and different expressions, fully meeting user needs.

It should be noted that the expression input device provided by the above embodiment is illustrated only by the division of the above functional modules when inputting an expression. In practical applications, these functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the device embodiment and the method embodiments provided above belong to the same inventive concept; for the specific implementation process, refer to the method embodiments, which is not repeated here.

Referring to Figure 5, a schematic architecture of an electronic device 500 used in one embodiment of the present invention is shown. The electronic device 500 may be a mobile phone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, a desktop computer, a smart TV, or the like. The electronic device 500 includes a central processing unit (CPU) 501, a system memory 504 including a random access memory (RAM) 502 and a read-only memory (ROM) 503, and a system bus 505 that connects the system memory 504 and the central processing unit 501. The electronic device 500 also includes a basic input/output system (I/O system) 506 that facilitates transfer of information between devices within the electronic device, and a mass storage device 507 for storing an operating system 513, application programs 514, and other program modules 515.

The basic input/output system 506 includes a display 508 for displaying information and an input device 509, such as a mouse or a keyboard, for the user to input information. Both the display 508 and the input device 509 are connected to the central processing unit 501 via an input/output controller 510 coupled to the system bus 505. The basic input/output system 506 may also include the input/output controller 510 for receiving and processing input from a plurality of other devices, such as a keyboard, a mouse, or an electronic stylus. Similarly, the input/output controller 510 also provides output to a display screen, a printer, or another type of output device.

The mass storage device 507 is connected to the central processing unit 501 by a mass storage controller (not shown) connected to the system bus 505. The mass storage device 507 and its associated electronic device readable medium provide non-volatile storage for the electronic device 500. That is, the mass storage device 507 may include an electronic device readable medium (not shown) such as a hard disk or a CD-ROM drive.

Without loss of generality, the computer readable medium can include computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media include RAM, ROM, EPROM, EEPROM, flash memory or other solid state storage technologies, CD-ROM, DVD or other optical storage, tape cartridges, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage medium is not limited to the above.

According to various embodiments of the present invention, the electronic device 500 may also be connected, through a network such as the Internet, to a remote computer on the network for operation. That is, the electronic device 500 may be connected to a network 512 through a network interface unit 511 connected to the system bus 505, or may be connected to another type of network or a remote computer system (not shown) using the network interface unit 511.

FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device may be used to implement the expression input method provided in the foregoing embodiment. Specifically:

The electronic device 600 may include a radio frequency (RF) circuit 110, a memory 120 including one or more computer readable storage media, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a wireless fidelity (WiFi) module 170, a processor 180 having one or more processing cores, a power supply 190, and the like. It will be understood by those skilled in the art that the electronic device structure shown in FIG. 6 does not constitute a limitation on the electronic device, which may include more or fewer components than those illustrated, combine some components, or use a different arrangement of components. Among them:

The RF circuit 110 may be used for receiving and sending signals during information transmission and reception or during a call. Specifically, after receiving downlink information from a base station, the RF circuit 110 hands the information to one or more processors 180 for processing; in addition, it sends uplink data to the base station. Generally, the RF circuit 110 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 110 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.

The memory 120 may be used to store software programs and modules, and the processor 180 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to use of the electronic device 600 (such as audio data or a phone book) and the like. Moreover, the memory 120 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory 120 may also include a memory controller to provide access to the memory 120 by the processor 180 and the input unit 130.

The input unit 130 may be configured to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal input related to user settings and function control. Specifically, the input unit 130 may include a touch-sensitive surface 131 and other input devices 132. The touch-sensitive surface 131, also referred to as a touch display screen or a touchpad, may collect touch operations by the user on or near it (such as operations performed by the user on or near the touch-sensitive surface 131 using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connecting device according to a preset program. Optionally, the touch-sensitive surface 131 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 180, and can receive and execute commands sent by the processor 180. In addition, the touch-sensitive surface 131 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave types. Besides the touch-sensitive surface 131, the input unit 130 may also include the other input devices 132. Specifically, the other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as a volume control button or a switch button), a trackball, a mouse, a joystick, and the like.

The display unit 140 may be used to display information entered by the user or information provided to the user and the various graphical user interfaces of the electronic device 600, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 140 may include a display panel 141. Optionally, the display panel 141 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch-sensitive surface 131 may cover the display panel 141; when the touch-sensitive surface 131 detects a touch operation on or near it, the operation is transmitted to the processor 180 to determine the type of the touch event, and the processor 180 then provides a corresponding visual output on the display panel 141 according to the type of the touch event. Although in FIG. 6 the touch-sensitive surface 131 and the display panel 141 are implemented as two separate components to implement input and output functions, in some embodiments the touch-sensitive surface 131 may be integrated with the display panel 141 to implement the input and output functions.

The electronic device 600 may also include at least one type of sensor 150, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor may adjust the brightness of the display panel 141 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 141 and/or the backlight when the electronic device 600 moves to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in all directions (usually three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that identify the posture of the mobile phone (such as landscape/portrait switching, related games, and magnetometer attitude calibration), vibration-recognition related functions (such as a pedometer or tapping), and the like. The electronic device 600 may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described herein.

The audio circuit 160, a speaker 161, and a microphone 162 may provide an audio interface between the user and the electronic device 600. On one hand, the audio circuit 160 may convert received audio data into an electrical signal and transmit it to the speaker 161, which converts it into a sound signal for output; on the other hand, the microphone 162 converts a collected sound signal into an electrical signal, which the audio circuit 160 receives and converts into audio data. After being processed by the processor 180, the audio data is transmitted, for example via the RF circuit 110, to another electronic device, or output to the memory 120 for further processing. The audio circuit 160 may also include an earbud jack to provide communication between a peripheral earphone and the electronic device 600.

WiFi is a short-range wireless transmission technology. Through the WiFi module 170, the electronic device 600 can help the user send and receive e-mail, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although FIG. 6 shows the WiFi module 170, it can be understood that the module is not an essential component of the electronic device 600 and may be omitted as needed without changing the essence of the invention.

The processor 180 is the control center of the electronic device 600. It connects the various parts of the entire device through various interfaces and lines, and performs the various functions of the electronic device 600 and processes data by running or executing the software programs and/or modules stored in the memory 120 and calling the data stored in the memory 120, thereby monitoring the device as a whole. Optionally, the processor 180 may include one or more processing cores; preferably, the processor 180 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 180.

The electronic device 600 also includes a power supply 190 (such as a battery) for powering the various components. Preferably, the power supply may be logically coupled to the processor 180 through a power management system, so that functions such as charging, discharging, and power consumption management are handled through the power management system. The power supply 190 may also include any one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.

Although not shown, the electronic device 600 may further include a camera, a Bluetooth module, and the like, which are not described herein. Specifically, in this embodiment, the display unit of the electronic device is a touch screen display, and the electronic device further includes a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors; the one or more programs include instructions for performing the operations described in the embodiment corresponding to FIG. 1 above or the embodiment corresponding to FIG. 2A.

In another aspect, still another embodiment of the present invention provides a computer readable storage medium, which may be the computer readable storage medium included in the memory in the above embodiment, or may be a computer readable storage medium that exists separately and is not assembled into the terminal. The computer readable storage medium stores one or more programs, and the one or more programs are used by one or more processors to perform an expression input method, the method comprising:

Collect input information;

Extracting expression feature values from the input information;

The expressions to be input are selected from the feature library according to the expression feature values, and the correspondence between the different expression feature values and the different expressions is stored in the feature library.

Preferably, extracting the expression feature value from the input information comprises:

If the input information includes voice input information, performing voice recognition on the voice input information to obtain a first specified feature value;

If the input information includes picture input information, determining a face area in the picture input information, and extracting a second specified feature value from the face area;

If the input information includes video input information, the third specified feature value is extracted from the video input information.
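These three branches amount to a dispatch on the kind of input information. A sketch with stub extractors follows; the stubs stand in for real speech recognition, face detection, and video analysis, which the method leaves to existing components.

```python
# Illustrative dispatcher for the extraction step; the stub extractors are
# placeholders, not the recognition algorithms themselves.

def speech_recognize(audio):
    return "ha ha"                      # stub: text recognized from voice input

def locate_face(picture):
    return picture.get("face_region")   # stub: face area within the picture

def face_features(face_region):
    return ("mouth_open", "eyes_narrowed")  # stub: features from the face area

def video_features(video):
    return ("smiling_sequence",)        # stub: features across video frames

def extract_feature_value(input_info):
    if "voice" in input_info:
        # Voice recognition on the voice input yields the first specified value.
        return "first", speech_recognize(input_info["voice"])
    if "picture" in input_info:
        # Determine the face area, then extract the second specified value from it.
        return "second", face_features(locate_face(input_info["picture"]))
    if "video" in input_info:
        # The third specified feature value is extracted from the video input.
        return "third", video_features(input_info["video"])
    raise ValueError("unsupported input information")

kind, feature_value = extract_feature_value({"voice": b"\x00\x01"})
```

Tagging each result with the kind of specified feature value lets the later selection step decide whether to use the single-library or the dual-library matching path.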

Preferably, when the expression feature value is any one of the first specified feature value, the second specified feature value, and the third specified feature value, the expression to be input is selected from the feature library according to the expression feature value, including:

Matching the expression feature values with the expression feature values stored in the feature library;

n expressions corresponding to m expression feature values whose matching degree is greater than a predetermined threshold are used as alternative expressions, n≥m≥1;

Selecting at least one sorting condition according to a preset priority, and sorting the n candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of historical usage times, recent usage time, and matching degree;

An alternative expression is filtered according to the sorting result, and the alternative expression is used as an expression to be input.

Preferably, when the expression feature value includes the first specified feature value, and further includes the second specified feature value or the third specified feature value, selecting an expression to be input from the feature library according to the expression feature value includes:

Matching the first specified feature value with the first expression feature value stored in the first feature library;

Obtaining the a first expression feature values whose matching degree is greater than a first threshold, a ≥ 1;

Matching the second specified feature value or the third specified feature value with the second expression feature value stored in the second feature library;

Obtaining b second expression feature values whose matching degree is greater than a second threshold, b≥1;

The x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values are used as alternative expressions, x ≥ a, y ≥ b;

Selecting at least one sorting condition according to a preset priority, and sorting the candidate expressions according to at least one sorting condition, the sorting condition includes any one of a repetition number, a history usage count, a recent usage time, and a matching degree;

Filtering an alternative expression according to the sorting result, and using the alternative expression as an expression to be input;

The feature library includes a first feature library and a second feature library, and the expression feature values include a first expression feature value and a second expression feature value.

Preferably, before selecting an expression to be input from the feature library according to the expression feature value, the method further includes:

Collecting environment information around the electronic device, where the environment information includes at least one of time information, environment volume information, ambient light intensity information, and environment image information;

Determine the current usage environment based on environmental information;

The candidate feature library corresponding to the current use environment is selected from the at least one candidate feature library, and the candidate feature library is used as the feature library.

Preferably, collecting the input information includes:

If the input information includes voice input information, the voice input information is collected through the microphone;

If the input information includes picture input information or video input information, the picture input information or the video input information is collected through the camera.

Preferably, before selecting an expression to be input from the feature library according to the expression feature value, the method further includes:

For each expression, record at least one training information for training the expression;

Extracting at least one training feature value from the at least one training information;

The training feature value with the largest number of repetitions is used as the expression feature value corresponding to the expression;

The correspondence between the expression and the expression feature value is stored in the feature library.

Preferably, after selecting an expression to be input from the feature library according to the expression feature value, the method further includes:

Display the expression you want to enter in the input box or chat bar.

The computer readable storage medium provided by this embodiment of the present invention collects input information, extracts an expression feature value from the input information, and selects the expression to be input from a feature library according to the extracted expression feature value, where the feature library stores correspondences between different expression feature values and different expressions. This solves the problem that expression input is slow and the process is complicated, simplifying the expression input process and increasing expression input speed.

It is to be understood that the singular forms "a", "an", and "the" used herein are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that "and/or" as used herein includes any and all possible combinations of one or more of the associated listed items.

The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.

A person skilled in the art may understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.

The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (24)

  1. An expression input method, characterized in that the method comprises:
    Collect input information;
    Extracting an expression feature value from the input information;
    Selecting an expression to be input from the feature library according to the expression feature value, wherein the feature library stores a correspondence between different expression feature values and different expressions.
  2. The method according to claim 1, wherein the extracting the expression feature value from the input information comprises:
    If the input information includes voice input information, performing voice recognition on the voice input information to obtain a first specified feature value;
    If the input information includes picture input information, determining a face area in the picture input information, and extracting a second specified feature value from the face area;
    If the input information includes video input information, extracting a third specified feature value from the video input information.
  3. The method according to claim 2, wherein when the expression feature value is any one of the first specified feature value, the second specified feature value, and the third specified feature value, The selecting an expression to be input from the feature library according to the expression feature value includes:
    Matching the expression feature value with the expression feature value stored in the feature library;
    n expressions corresponding to m expression feature values whose matching degree is greater than a predetermined threshold are used as alternative expressions, n≥m≥1;
    Selecting at least one sorting condition according to the preset priority, and sorting the n candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of historical usage times, latest usage time, and the matching degree;
    An alternative expression is filtered according to the sorting result, and the candidate expression is used as the expression to be input.
  4. The method according to claim 2, wherein when the expression feature value includes the first specified feature value and further includes the second specified feature value or the third specified feature value, Selecting an expression to be input from the feature library according to the expression feature value, including:
    Matching the first specified feature value with a first expression feature value stored in the first feature library;
    Obtaining the a first expression feature values whose matching degree is greater than a first threshold, a ≥ 1;
    Matching the second specified feature value or the third specified feature value with a second expression feature value stored in the second feature library;
    Obtaining b second expression feature values whose matching degree is greater than a second threshold, b≥1;
    The x expressions corresponding to the a first expression feature value and the y expressions corresponding to the b second expression feature values are used as alternative expressions, x≥a, y≥b;
    Selecting at least one sorting condition according to the preset priority, and sorting the candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of a repetition count, a historical usage count, a most recent usage time, and the matching degree;
    Filtering an alternative expression according to the sorting result, and using the candidate expression as the expression to be input;
    The feature library includes the first feature library and the second feature library, and the expression feature value includes the first expression feature value and the second expression feature value.
  5. The method according to claim 1, wherein before the selecting the expression to be input from the feature library according to the expression feature value, the method further comprises:
    Collecting environment information around the electronic device, the environment information including at least one of time information, environment volume information, ambient light intensity information, and environment image information;
    Determining a current use environment according to the environmental information;
    An candidate feature library corresponding to the current use environment is selected from at least one candidate feature library, and the candidate feature library is used as the feature library.
  6. The method of claim 2, wherein the collecting input information comprises:
    If the input information includes the voice input information, collecting the voice input information through a microphone;
    If the input information includes the picture input information or the video input information, the picture input information or the video input information is collected by a camera.
  7. The method according to claim 1, wherein, before the selecting of the expression to be input from the feature library according to the expression feature value, the method further comprises:
    For each of the expressions, recording at least one training information for training the expression;
    Extracting at least one training feature value from the at least one training information;
    The training feature value having the largest number of repetitions is used as the expression feature value corresponding to the expression;
    A correspondence between the expression and the expression feature value is stored in the feature library.
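The training procedure of claim 7 (keep, per expression, the training feature value that repeats most often) can be sketched as follows. Feature values are simplified to hashable tokens here, and all names are hypothetical.

```python
# Sketch of the training step in claim 7: the most-repeated training
# feature value becomes the expression's stored feature value.
from collections import Counter

def build_feature_library(training_data):
    """training_data maps each expression to the list of training feature
    values extracted from its recorded training information."""
    library = {}
    for expression, feature_values in training_data.items():
        # most_common(1) yields the (value, count) pair with the most repeats
        most_repeated, _count = Counter(feature_values).most_common(1)[0]
        library[expression] = most_repeated
    return library
```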
  8. The method according to any one of claims 1 to 7, further comprising, after the selecting of the expression to be input from the feature library according to the expression feature value:
    Display the expression that needs to be input in the input box or chat bar.
  9. An expression input device, characterized in that the device comprises:
    a first information collecting module, configured to collect input information;
    a feature extraction module, configured to extract an expression feature value from the input information;
    The expression selection module is configured to select an expression to be input from the feature library according to the expression feature value, and the feature library stores a correspondence between different expression feature values and different expressions.
  10. The device according to claim 9, wherein the feature extraction module comprises at least one of the following extraction units: a first extraction unit, a second extraction unit, and a third extraction unit;
    The first extracting unit is configured to perform voice recognition on the voice input information to obtain a first specified feature value, if the input information includes voice input information;
    The second extracting unit is configured to: if the input information includes picture input information, determine a face area in the picture input information, and extract a second specified feature value from the face area;
    The third extracting unit is configured to: if the input information includes video input information, extract a third specified feature value from the video input information.
  11. The apparatus according to claim 10, wherein, when the extracted expression feature value is any one of the first specified feature value, the second specified feature value, and the third specified feature value, the expression selection module comprises: a feature matching unit, a candidate selecting unit, an expression arranging unit, and an expression determining unit;
    The feature matching unit is configured to match the expression feature value with the expression feature value stored in the feature library;
    The candidate selecting unit is configured to use n expressions, corresponding to m expression feature values whose matching degree is greater than a predetermined threshold, as candidate expressions, n≥m≥1;
    The expression arranging unit is configured to select at least one sorting condition according to a preset priority, and to sort the n candidate expressions according to the at least one sorting condition, the sorting condition including any one of a historical usage count, a most recent usage time, and the matching degree;
    The expression determining unit is configured to filter out one candidate expression according to the sorting result, and to use the filtered-out candidate expression as the expression that needs to be input.
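The threshold matching performed by the feature matching unit and the candidate selecting unit can be sketched as below. The claim does not specify how the matching degree is computed; cosine similarity over numeric feature vectors is used here purely as a stand-in.

```python
# Illustrative single-library matching for claim 11. The matching-degree
# metric (cosine similarity here) is a placeholder, not part of the claim.
import math

def match_degree(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def candidate_expressions(query, library, threshold):
    """library is a list of (stored_feature_value, expressions) pairs;
    keep every expression whose stored value matches above the threshold."""
    result = []
    for stored_value, expressions in library:
        if match_degree(query, stored_value) > threshold:
            result.extend(expressions)
    return result
```

Note that one stored feature value may map to several expressions, which is why the claim allows n expressions for m feature values with n≥m.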
  12. The apparatus according to claim 10, wherein, when the expression feature value includes the first specified feature value and further includes the second specified feature value or the third specified feature value, the expression selection module comprises: a first matching unit, a first acquiring unit, a second matching unit, a second acquiring unit, a candidate determining unit, a candidate sorting unit, and an expression selecting unit;
    The first matching unit is configured to match the first specified feature value with a first expression feature value stored in the first feature database;
    The first acquiring unit is configured to obtain a first expression feature values whose matching degree is greater than a first threshold, a≥1;
    The second matching unit is configured to match the second specified feature value or the third specified feature value with a second expression feature value stored in the second feature library;
    The second acquiring unit is configured to obtain b second expression feature values whose matching degree is greater than a second threshold, b≥1;
    The candidate determining unit is configured to use, as candidate expressions, x expressions corresponding to the a first expression feature values and y expressions corresponding to the b second expression feature values, x≥a, y≥b;
    The candidate sorting unit is configured to select at least one sorting condition according to a preset priority, and to sort the candidate expressions according to the at least one sorting condition, the sorting condition including any one of a repetition count, a historical usage count, a most recent usage time, and the matching degree;
    The expression selection unit is configured to filter out one candidate expression according to the sorting result, and to use the filtered-out candidate expression as the expression that needs to be input;
    The feature library includes the first feature library and the second feature library, and the expression feature value includes the first expression feature value and the second expression feature value.
  13. The device according to claim 9, wherein the device further comprises:
    a second information collecting module, configured to collect environment information around the electronic device, where the environment information includes at least one of time information, environment volume information, ambient light intensity information, and environment image information;
    An environment determining module, configured to determine a current usage environment according to the environment information;
    a feature selection module, configured to select, from the at least one candidate feature library, a candidate feature library corresponding to the current use environment, and to use the selected candidate feature library as the feature library.
  14. The device according to claim 10, wherein the first information collecting module comprises: a voice collecting unit and an image collecting unit;
    The voice collecting unit is configured to collect the voice input information by using a microphone if the input information includes the voice input information;
    The image collecting unit is configured to collect the picture input information or the video input information by using a camera if the input information includes the picture input information or the video input information.
  15. The device according to claim 9, wherein the device further comprises:
    An information recording module, configured to record, for each expression, at least one training information for training the expression;
    a feature recording module, configured to extract at least one training feature value from the at least one training information;
    a feature selection module, configured to use the training feature value with the largest number of repetitions as the expression feature value corresponding to the expression;
    And a feature storage module, configured to store, in the feature library, a correspondence between the expression and the expression feature value.
  16. The device according to any one of claims 9 to 15, wherein the device further comprises:
    An expression display module, configured to display the expression that needs to be input in an input box or a chat bar.
  17. An electronic device, comprising: a central processing unit, a network interface unit, a sensor, a microphone, a display, and a system memory, wherein the system memory stores a set of program code, and the central processing unit calls, through a system bus, the program code stored in the system memory to perform the following operations:
    Collecting input information; extracting an expression feature value from the input information; and selecting an expression to be input from a feature library according to the expression feature value, wherein the feature library stores correspondences between different expression feature values and different expressions.
  18. The electronic device according to claim 17, wherein said central processing unit is configured to invoke program code stored in said system memory for performing the following operations:
    If the input information includes voice input information, performing voice recognition on the voice input information to obtain a first specified feature value; if the input information includes picture input information, determining a face region in the picture input information, and extracting a second specified feature value from the face region; and if the input information includes video input information, extracting a third specified feature value from the video input information.
  19. The electronic device according to claim 18, wherein the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:
    When the expression feature value is any one of the first specified feature value, the second specified feature value, and the third specified feature value: matching the expression feature value with the expression feature values stored in the feature library; using n expressions, corresponding to m expression feature values whose matching degree is greater than a predetermined threshold, as candidate expressions, n≥m≥1; selecting at least one sorting condition according to a preset priority, and sorting the n candidate expressions according to the at least one sorting condition, the sorting condition including any one of a historical usage count, a most recent usage time, and the matching degree; and filtering out one candidate expression according to the sorting result, and using the filtered-out candidate expression as the expression that needs to be input.
  20. The electronic device according to claim 18, wherein the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:
    When the expression feature value includes the first specified feature value and further includes the second specified feature value or the third specified feature value: matching the first specified feature value with first expression feature values stored in a first feature library; obtaining a first expression feature values whose matching degree is greater than a first threshold, a≥1; matching the second specified feature value or the third specified feature value with second expression feature values stored in a second feature library; obtaining b second expression feature values whose matching degree is greater than a second threshold, b≥1; using x expressions corresponding to the a first expression feature values and y expressions corresponding to the b second expression feature values as candidate expressions, x≥a, y≥b; selecting at least one sorting condition according to a preset priority, and sorting the candidate expressions according to the at least one sorting condition, the sorting condition including any one of a repetition count, a historical usage count, a most recent usage time, and the matching degree; and filtering out one candidate expression according to the sorting result, and using the filtered-out candidate expression as the expression to be input; wherein the feature library includes the first feature library and the second feature library, and the expression feature value includes the first expression feature value and the second expression feature value.
  21. The electronic device according to claim 17, wherein said central processing unit is configured to invoke program code stored in said system memory for performing the following operations:
    Collecting environment information around the electronic device, the environment information including at least one of time information, environment volume information, ambient light intensity information, and environment image information; determining a current use environment according to the environment information; and selecting, from at least one candidate feature library, a candidate feature library corresponding to the current use environment, and using the selected candidate feature library as the feature library.
  22. The electronic device according to claim 18, wherein the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:
    If the input information includes the voice input information, collecting the voice input information through a microphone; and if the input information includes the picture input information or the video input information, collecting the picture input information or the video input information through a camera.
  23. The electronic device according to claim 17, wherein said central processing unit is configured to invoke program code stored in said system memory for performing the following operations:
    For each expression, recording at least one piece of training information for training the expression; extracting at least one training feature value from the at least one piece of training information; using the training feature value having the largest number of repetitions as the expression feature value corresponding to the expression; and storing a correspondence between the expression and the expression feature value in the feature library.
  24. The electronic device according to any one of claims 17 to 23, wherein the central processing unit is configured to call program code stored in the system memory for performing the following operations:
    Display the expression that needs to be input in the input box or chat bar.
PCT/CN2014/095872 2014-02-27 2014-12-31 Expression input method and apparatus and electronic device WO2015127825A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201410069166.9A CN103823561B (en) 2014-02-27 2014-02-27 expression input method and device
CN201410069166.9 2014-02-27

Publications (1)

Publication Number Publication Date
WO2015127825A1 true WO2015127825A1 (en) 2015-09-03

Family

ID=50758662

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/095872 WO2015127825A1 (en) 2014-02-27 2014-12-31 Expression input method and apparatus and electronic device

Country Status (2)

Country Link
CN (1) CN103823561B (en)
WO (1) WO2015127825A1 (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103823561B (en) * 2014-02-27 2017-01-18 广州华多网络科技有限公司 expression input method and device
JP6289662B2 (en) 2014-07-02 2018-03-07 ホアウェイ・テクノロジーズ・カンパニー・リミテッド Information transmitting method and transmitting apparatus
CN106886396A (en) * 2015-12-16 2017-06-23 北京奇虎科技有限公司 expression management method and device
CN105677059A (en) * 2015-12-31 2016-06-15 广东小天才科技有限公司 Method and system for inputting expression pictures
WO2017120924A1 (en) * 2016-01-15 2017-07-20 李强生 Information prompting method for use when inserting emoticon, and instant communication tool
CN105872838A (en) * 2016-04-28 2016-08-17 徐文波 Sending method and device of special media effects of real-time videos
CN106020504B (en) * 2016-05-17 2018-11-27 百度在线网络技术(北京)有限公司 Information output method and device
CN107623830B (en) * 2016-07-15 2019-03-15 掌赢信息科技(上海)有限公司 A kind of video call method and electronic equipment
CN106175727B (en) * 2016-07-25 2018-11-20 广东小天才科技有限公司 A kind of expression method for pushing and wearable device applied to wearable device
CN106293120A (en) * 2016-07-29 2017-01-04 维沃移动通信有限公司 Expression input method and mobile terminal
WO2018023576A1 (en) * 2016-08-04 2018-02-08 薄冰 Method for adjusting emoji sending technique according to market feedback, and emoji system
CN106339103A (en) * 2016-08-15 2017-01-18 珠海市魅族科技有限公司 Image checking method and device
CN106293131A (en) * 2016-08-16 2017-01-04 广东小天才科技有限公司 expression input method and device
CN106503630A (en) * 2016-10-08 2017-03-15 广东小天才科技有限公司 A kind of expression sending method, equipment and system
CN106503744A (en) * 2016-10-26 2017-03-15 长沙军鸽软件有限公司 Input expression in chat process carries out the method and device of automatic error-correcting
CN106682091A (en) * 2016-11-29 2017-05-17 深圳市元征科技股份有限公司 Method and device for controlling unmanned aerial vehicle
CN107315820A (en) * 2017-07-01 2017-11-03 北京奇虎科技有限公司 The expression searching method and device of User Interface based on mobile terminal
CN107153496A (en) * 2017-07-04 2017-09-12 北京百度网讯科技有限公司 Method and apparatus for inputting emotion icons
CN107450746A (en) * 2017-08-18 2017-12-08 联想(北京)有限公司 A kind of insertion method of emoticon, device and electronic equipment
CN107479723A (en) * 2017-08-18 2017-12-15 联想(北京)有限公司 A kind of insertion method of emoticon, device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101183294A (en) * 2007-12-17 2008-05-21 腾讯科技(深圳)有限公司 Expression input method and apparatus
CN102255820A (en) * 2010-05-18 2011-11-23 腾讯科技(深圳)有限公司 Instant communication method and device
CN102662961A (en) * 2012-03-08 2012-09-12 北京百舜华年文化传播有限公司 Method, apparatus and terminal unit for matching semantics with image
CN102890776A (en) * 2011-07-21 2013-01-23 爱国者电子科技(天津)有限公司 Method for searching emoticons through facial expression
CN103823561A (en) * 2014-02-27 2014-05-28 广州华多网络科技有限公司 Expression input method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1735240A (en) * 2004-10-29 2006-02-15 康佳集团股份有限公司 Method for realizing expression notation and voice in handset short message
CN102104658A (en) * 2009-12-22 2011-06-22 康佳集团股份有限公司 Method, system and mobile terminal for sending expression by using short messaging service (SMS)
CN103353824B (en) * 2013-06-17 2016-08-17 百度在线网络技术(北京)有限公司 The method of phonetic entry character string, device and terminal unit
CN103530313A (en) * 2013-07-08 2014-01-22 北京百纳威尔科技有限公司 Searching method and device of application information
CN103529946B (en) * 2013-10-29 2016-06-01 广东欧珀移动通信有限公司 A kind of input method and device


Also Published As

Publication number Publication date
CN103823561A (en) 2014-05-28
CN103823561B (en) 2017-01-18


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14883827

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase in:

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.01.2017)

122 Ep: pct application non-entry in european phase

Ref document number: 14883827

Country of ref document: EP

Kind code of ref document: A1