WO2015127825A1 - Expression input method and apparatus and electronic device - Google Patents

Info

Publication number
WO2015127825A1
Authority
WO
WIPO (PCT)
Prior art keywords
expression
feature value
feature
input information
input
Prior art date
Application number
PCT/CN2014/095872
Other languages
French (fr)
Chinese (zh)
Inventor
陈超 (Chen Chao)
Original Assignee
广州华多网络科技有限公司 (Guangzhou Huaduo Network Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州华多网络科技有限公司 (Guangzhou Huaduo Network Technology Co., Ltd.)
Publication of WO2015127825A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 Indexing scheme relating to G06F3/01
    • G06F 2203/011 Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Definitions

  • the present invention relates to the field of the Internet, and in particular, to an expression input method, device, and electronic device.
  • the expression selection interface is opened to select the expression to be input, and the selected expression is then sent to the other user.
  • the other user receives and reads the expression that was sent.
  • the inventors have found that the related art has at least the following problems: in order to satisfy the user's needs as much as possible, an application often contains dozens or even hundreds of expressions for the user to select.
  • when the emoticon selection interface contains many emoticons, it is necessary to display these emoticons in a paginated manner.
  • when the user inputs an expression, he or she needs to first find the page where the desired expression is located, and then select the expression to be input from it. This makes expression input very slow and increases the complexity of the expression input process.
  • the embodiment of the invention provides an expression input method, device and electronic device.
  • the technical solution is as follows:
  • an expression input method comprising:
  • the extracting the expression feature value from the input information includes:
  • if the input information includes voice input information, performing voice recognition on the voice input information to obtain a first specified feature value;
  • if the input information includes picture input information, determining a face area in the picture input information, and extracting a second specified feature value from the face area;
  • if the input information includes video input information, extracting a third specified feature value from the video input information.
  • selecting the expression to be input from the feature library includes:
  • n expressions corresponding to m expression feature values whose matching degree is greater than a predetermined threshold are used as alternative expressions, n ≥ m ≥ 1;
  • selecting the expression to be input from the feature library according to the expression feature value includes:
  • the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values are used as alternative expressions, x ≥ a, y ≥ b;
  • the sorting condition includes any one of the number of repetitions, the number of historical uses, the most recently used time, and the matching degree;
  • the feature library includes the first feature library and the second feature library, and the expression feature value includes the first expression feature value and the second expression feature value.
  • before the expression to be input is selected from the feature library according to the expression feature value, the method further includes:
  • the environment information including at least one of time information, environment volume information, ambient light intensity information, and environment image information;
  • a candidate feature library corresponding to the current use environment is selected from at least one candidate feature library, and the candidate feature library is used as the feature library.
  • the collecting input information includes:
  • if the input information includes the voice input information, collecting the voice input information through a microphone;
  • if the input information includes the picture input information or the video input information, collecting the picture input information or the video input information through a camera.
  • before the expression to be input is selected from the feature library according to the expression feature value, the method further includes:
  • the training feature value having the largest number of repetitions is used as the expression feature value corresponding to the expression
  • a correspondence between the expression and the expression feature value is stored in the feature library.
  • the method further includes:
  • an expression input device comprising:
  • a first information collecting module configured to collect input information
  • a feature extraction module configured to extract an expression feature value from the input information
  • the expression selection module is configured to select an expression to be input from the feature library according to the expression feature value, and the feature library stores a correspondence between different expression feature values and different expressions.
  • the feature extraction module includes at least one extraction unit: a first extraction unit, a second extraction unit, and a third extraction unit;
  • the first extracting unit is configured to perform voice recognition on the voice input information to obtain a first specified feature value, if the input information includes voice input information;
  • the second extracting unit is configured to: if the input information includes picture input information, determine a face area in the picture input information, and extract a second specified feature value from the face area;
  • the third extracting unit is configured to: if the input information includes video input information, extract a third specified feature value from the video input information.
  • the expression selection module includes: a feature matching unit, an alternative selection unit, an expression arrangement unit, and an expression determination unit;
  • the feature matching unit is configured to match the expression feature value with the expression feature value stored in the feature library
  • the candidate selecting unit is configured to use n expressions corresponding to m expression feature values whose matching degree is greater than a predetermined threshold as alternative expressions, n ≥ m ≥ 1;
  • the expression arranging unit is configured to select at least one sorting condition according to the preset priority, and sort the n candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of a historical usage count, a latest usage time, and the matching degree;
  • the expression determining unit is configured to filter out one of the candidate expressions according to the sorting result, and use the candidate expression as the expression to be input.
  • the expression selection module includes: a first matching unit, a first obtaining unit, a second matching unit, a second obtaining unit, an alternative determining unit, an alternative sorting unit, and an expression selecting unit;
  • the first matching unit is configured to match the first specified feature value with a first expression feature value stored in the first feature database
  • the first acquiring unit is configured to obtain a first expression feature values whose matching degree is greater than the first threshold, a ≥ 1;
  • the second matching unit is configured to match the second specified feature value or the third specified feature value with a second expression feature value stored in the second feature library;
  • the second acquiring unit is configured to obtain b second expression feature values whose matching degree is greater than a second threshold, b ≥ 1;
  • the candidate determining unit is configured to use, as alternative expressions, the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values, x ≥ a, y ≥ b;
  • the candidate sorting unit is configured to select at least one sorting condition according to a preset priority, and sort the candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of a repetition quantity, a historical usage count, a recent usage time, and the matching degree;
  • the expression selection unit is configured to filter out one of the candidate expressions according to the sorting result, and use the candidate expression as the expression that needs to be input;
  • the feature library includes the first feature library and the second feature library, and the expression feature value includes the first expression feature value and the second expression feature value.
  • the device further includes:
  • a second information collecting module configured to collect environment information around the electronic device, where the environment information includes at least one of time information, environment volume information, ambient light intensity information, and environment image information;
  • An environment determining module configured to determine a current usage environment according to the environment information
  • a feature selection module configured to select, from the at least one candidate feature library, a candidate feature library corresponding to the current use environment, and use the candidate feature library as the feature library.
  • the first information collection module includes: a voice collection unit, and an image collection unit;
  • the voice collecting unit is configured to collect the voice input information by using a microphone if the input information includes the voice input information;
  • the image collecting unit is configured to collect the picture input information or the video input information by using a camera if the input information includes the picture input information or the video input information.
  • the device further includes:
  • An information recording module configured to record, for each of the expressions, at least one training information for training the expression
  • a feature recording module configured to extract at least one training feature value from the at least one training information
  • a feature selection module configured to use the training feature value with the largest number of repetitions as the expression feature value corresponding to the expression
  • a feature storage module configured to store a correspondence between the expression and the expression feature value in the feature library.
  • the device further includes:
  • An expression display module configured to display the expression that needs to be input in an input box or a chat bar.
  • an electronic device comprising: a central processing unit, a network interface unit, a sensor, a microphone, a display, and a system memory, wherein the system memory stores a set of program codes, and the central processing unit invokes, via the system bus, the program code stored in the system memory to perform the following operations:
  • collecting input information; extracting an expression feature value from the input information; and selecting an expression to be input from the feature library according to the expression feature value, wherein the feature library stores a correspondence between different expression feature values and different expressions.
  • the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:
  • if the input information includes voice input information, performing voice recognition on the voice input information to obtain a first specified feature value; if the input information includes picture input information, determining a face region in the picture input information and extracting a second specified feature value from the face region; and if the input information includes video input information, extracting a third specified feature value from the video input information.
  • the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:
  • if the expression feature value is any one of the first specified feature value, the second specified feature value, and the third specified feature value, matching the expression feature value with the expression feature values stored in the feature library; using the n expressions corresponding to the m expression feature values whose matching degree is greater than the predetermined threshold as candidate expressions, n ≥ m ≥ 1; selecting at least one sorting condition according to the preset priority and sorting the n candidate expressions according to the at least one sorting condition, the sorting condition including any one of historical usage times, latest usage time, and the matching degree; and filtering out one candidate expression according to the sorting result and using that candidate expression as the expression that needs to be input.
  • the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:
  • if the expression feature value includes the first specified feature value and further includes the second specified feature value or the third specified feature value: matching the first specified feature value with the first expression feature values stored in the first feature library; obtaining a first expression feature values whose matching degree is greater than the first threshold, a ≥ 1; matching the second specified feature value or the third specified feature value with the second expression feature values stored in the second feature library; obtaining b second expression feature values whose matching degree is greater than the second threshold, b ≥ 1; using the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values as candidate expressions, x ≥ a, y ≥ b; selecting at least one sorting condition according to the preset priority and sorting the candidate expressions according to the at least one sorting condition, the sorting condition including any one of a repetition number, a historical usage count, a recent usage time, and the matching degree; and filtering out one candidate expression according to the sorting result and using that candidate expression as the expression to be input.
  • the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:
  • collecting environment information around the electronic device, the environment information including at least one of time information, environment volume information, ambient light intensity information, and environment image information; determining a current use environment according to the environment information; and selecting, from at least one candidate feature library, a candidate feature library corresponding to the current use environment and using the candidate feature library as the feature library.
  • the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:
  • if the input information includes the voice input information, the voice input information is collected by using a microphone; if the input information includes the picture input information or the video input information, the picture input information or the video input information is collected by a camera.
  • the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:
  • for each expression, recording at least one training signal for training the expression; extracting at least one training feature value from the at least one training signal; using the training feature value with the largest number of repetitions as the expression feature value corresponding to the expression; and storing a correspondence between the expression and the expression feature value in the feature library.
  • the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:
  • the expression feature value is extracted from the input information, and the expression to be input is selected from the feature library according to the extracted expression feature value, the feature library storing the correspondence between different expression feature values and different expressions; this simplifies the expression input process and increases the expression input speed.
  • FIG. 1 is a flowchart of an expression input method according to an embodiment of the present invention;
  • FIG. 2A is a flowchart of an expression input method according to another embodiment of the present invention;
  • FIG. 2B is a schematic diagram of a chat interface of a typical instant messaging application;
  • FIG. 3 is a block diagram showing the structure of an expression input device according to an embodiment of the present invention.
  • FIG. 4 is a block diagram showing the structure of an expression input device according to another embodiment of the present invention.
  • FIG. 5 is an illustrative terminal architecture of an electronic device 500 used in an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • the electronic device may be a mobile phone, a tablet computer, an e-book reader, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a laptop computer, a desktop computer, a smart TV, or the like.
  • FIG. 1 is a flowchart of a method for inputting an expression according to an embodiment of the present invention.
  • the embodiment is illustrated by using the expression input method in an electronic device.
  • the expression input method includes the following steps:
  • Step 102: Input information is collected.
  • Step 104: An expression feature value is extracted from the input information.
  • Step 106: An expression to be input is selected from the feature library according to the expression feature value; the feature library stores a correspondence between different expression feature values and different expressions.
  • the expression input method provided by this embodiment collects input information, extracts the expression feature value from it, and selects the expression to be input from the feature library according to the extracted expression feature value, where the feature library stores the correspondence between different expression feature values and different expressions; this solves the problem that expression input is slow and its process complicated, thereby simplifying the expression input process and increasing the expression input speed.
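As a rough illustration of the three-step flow summarized above (collect → extract → select), consider the following sketch. All function names and the feature-library contents are invented for illustration; the patent does not prescribe any particular data structures.

```python
# Hedged sketch of the expression input flow (steps 102-106).
# FEATURE_LIBRARY maps expression feature values to expressions; its
# contents and every name below are illustrative assumptions.

FEATURE_LIBRARY = {
    "haha": "laughing_emoji",
    "sad": "crying_emoji",
}

def collect_input_information():
    # Step 102: placeholder for microphone/camera capture.
    return "of course, no problem haha"

def extract_expression_feature_value(input_info):
    # Step 104: scan the input for any preset expression feature value.
    for feature_value in FEATURE_LIBRARY:
        if feature_value in input_info:
            return feature_value
    return None

def select_expression(feature_value):
    # Step 106: look up the expression mapped to the feature value.
    return FEATURE_LIBRARY.get(feature_value)

info = collect_input_information()
feature = extract_expression_feature_value(info)
expression = select_expression(feature)
```

With the sample input above, the extracted feature value is "haha" and the selected expression is the one the library maps it to.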
  • extracting the expression feature value from the input information comprises:
  • if the input information includes voice input information, performing voice recognition on the voice input information to obtain a first specified feature value;
  • if the input information includes picture input information, determining a face area in the picture input information, and extracting a second specified feature value from the face area;
  • if the input information includes video input information, the third specified feature value is extracted from the video input information.
  • the expression to be input is selected from the feature library according to the expression feature value, including:
  • n expressions corresponding to m expression feature values whose matching degree is greater than a predetermined threshold are used as alternative expressions, n ≥ m ≥ 1;
  • an alternative expression is filtered out according to the sorting result, and the alternative expression is used as the expression to be input.
  • selecting an expression to be input from the feature library according to the expression feature value includes:
  • the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values are used as alternative expressions, x ≥ a, y ≥ b;
  • the sorting condition includes any one of a repetition number, a history usage count, a recent usage time, and a matching degree;
  • the feature library includes a first feature library and a second feature library, and the expression feature value includes the first expression feature value and the second expression feature value.
  • before an expression to be input is selected from the feature library according to the expression feature value, the method further includes:
  • environment information includes at least one of time information, environment volume information, ambient light intensity information, and environment image information;
  • the candidate feature library corresponding to the current use environment is selected from the at least one candidate feature library, and the candidate feature library is used as the feature library.
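One way the environment-dependent library selection described above might look in practice is sketched below. The environment rules, thresholds, and library contents are invented for illustration; the patent only enumerates the kinds of environment information considered (time, ambient volume, ambient light, environment images).

```python
# Hedged sketch: pick a candidate feature library based on time of day and
# ambient volume. All rules, names, and library contents are hypothetical.

def determine_usage_environment(hour, ambient_volume_db):
    # Toy rules for deciding the current use environment.
    if hour >= 22 or hour < 7:
        return "night"
    if 9 <= hour < 18 and ambient_volume_db > 50:
        return "office"
    return "default"

CANDIDATE_LIBRARIES = {
    "office": {"ok": "thumbs_up"},
    "night": {"sleepy": "zzz_emoji"},
    "default": {"haha": "laughing_emoji"},
}

def select_feature_library(hour, ambient_volume_db):
    # Use the candidate library matching the current environment as
    # the feature library for subsequent matching.
    env = determine_usage_environment(hour, ambient_volume_db)
    return CANDIDATE_LIBRARIES[env]

library = select_feature_library(hour=23, ambient_volume_db=30)
```

Keeping per-environment libraries small narrows the matching search and lets the same feature value map to different expressions in different contexts.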
  • the input information is collected, including:
  • the voice input information is collected through the microphone
  • the picture input information or the video input information is collected through the camera.
  • before an expression to be input is selected from the feature library according to the expression feature value, the method further includes:
  • the training feature value with the largest number of repetitions is used as the expression feature value corresponding to the expression
  • the correspondence between the expression and the expression feature value is stored in the feature library.
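The training procedure above (keep the training feature value with the largest number of repetitions as the expression's stored feature value) can be sketched as follows; the sample training values are hypothetical.

```python
from collections import Counter

# Sketch of the training step: several training samples are collected for
# one expression, a feature value is extracted from each, and the most
# frequently repeated value becomes the stored expression feature value.

def train_expression_feature(training_feature_values):
    # Pick the training feature value with the largest repetition count.
    counts = Counter(training_feature_values)
    value, _ = counts.most_common(1)[0]
    return value

feature_library = {}

# Hypothetical training feature values extracted for a "laugh" expression.
samples = ["haha", "haha", "hehe", "haha", "lol"]
feature_library[train_expression_feature(samples)] = "laugh_emoji"
```

Storing only the most repeated value makes the library robust to occasional mis-recognized training samples.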
  • the method further includes:
  • FIG. 2A is a flowchart of a method for inputting an expression according to another embodiment of the present invention.
  • the embodiment is illustrated by using the expression input method in an electronic device.
  • the expression input method includes the following steps:
  • Step 201: Determine whether the electronic device is in an automatic acquisition state or a manual acquisition state; if the electronic device is in the automatic acquisition state, perform step 202; if the electronic device is in the manual acquisition state, perform step 203.
  • the automatic acquisition state means that the electronic device automatically turns on the input unit to collect input information, while the manual acquisition state means that the user manually turns on the input unit to collect input information.
  • Step 202: If the electronic device is in the automatic acquisition state, the input unit is turned on.
  • the input unit includes a microphone and/or a camera.
  • the input unit may be an input unit built in the electronic device, or may be an input unit external to the electronic device, which is not specifically limited in the embodiment of the present invention.
  • step 204 is performed.
  • Step 203: If the electronic device is in the manual acquisition state, detect whether the input unit is in an open state.
  • the electronic device detects whether the input unit is in an open state. Since the manual acquisition state means that input information is collected after the user turns on the input unit, the electronic device at this point detects whether the user has turned on the input unit. The user can turn on the input unit with a control such as a button or a switch.
  • the microphone button 22 is located in the input box 24. The user presses the microphone button 22 to turn the microphone on, and the microphone turns off when the user releases the microphone button 22.
  • step 204 is performed; if the input unit is not in the on state, the following steps are not performed.
  • Step 204: Acquire input information through an input unit on the electronic device.
  • the electronic device collects input information through the input unit.
  • the voice input information is collected through the microphone.
  • the voice input information can be what the user says, or a sound made by the user or other object.
  • if the input unit includes a camera, the picture input information or the video input information is collected by the camera.
  • Picture input information may be a facial expression of the user
  • the video input information may be a gesture of the user or a gesture track of the user, and the like.
  • Step 205: Extract an expression feature value from the input information.
  • the expression feature value is extracted from the input information.
  • the voice input information is voice-recognized, and then the first specified feature value is extracted from the voice input information.
  • the first specified feature value is used to represent the user's voice.
  • the electronic device may extract the first specified feature value from the voice input information by a data dimensionality reduction method or a feature value selection method.
  • the data dimensionality reduction method is a commonly used method for simplifying and effectively analyzing information such as high-dimensional speech or images. By reducing the dimensionality of high-dimensional information, it is possible to remove some data that does not reflect the essential characteristics of the information. Therefore, the feature value in the input information can be obtained by the data dimensionality reduction method, and the feature value is data capable of reflecting the essential characteristics of the input information.
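The patent does not name a specific dimensionality reduction algorithm; as one common choice, principal component analysis (PCA) computed via SVD can reduce high-dimensional speech frames to a compact representation that keeps the data's essential variation. This sketch assumes NumPy and invented dimensions.

```python
import numpy as np

# Illustrative only: PCA is used here as one common data dimensionality
# reduction method; the patent does not prescribe this algorithm.

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components."""
    X_centered = X - X.mean(axis=0)
    # SVD of the centered data; the rows of Vt are the principal directions.
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:k].T

rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 40))  # e.g. 100 frames of 40-dim speech features
reduced = pca_reduce(frames, 8)      # keep 8 components as the compact features
```

The reduced rows can then serve as feature values to be matched against the feature library.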
  • the first specified feature value is extracted from the voice input information and used in the expression input method provided in this embodiment, so the first specified feature value is referred to as an expression feature value.
  • the expression feature value can also be extracted from the input information by the feature value selection method.
  • the electronic device may preset at least one expression feature value, and after collecting the input information, analyze the input information and find out whether there is a preset expression feature value.
  • the voice input information collected by the electronic device through the microphone is “of course, no problem haha”, and the electronic device extracts the first specified feature value “haha” from the voice input information.
  • the face area is determined from the picture input information, and the second specified feature value is extracted from the face area.
  • the second specified feature value is used to represent a facial expression of the person.
  • the electronic device may first determine a face region from the picture input information by using an image recognition technology, and then extract a second specified feature value from the face region by a data dimensionality reduction method or a feature value selection method.
  • the face area in the picture is determined.
  • the second specified feature value corresponding to the expressions such as "happy”, “sad”, “cry” or “crazy” is extracted therefrom.
  • the third specified feature value is extracted from the video input information.
  • the third specified feature value is used to represent the gesture trajectory of the person.
  • the electronic device may extract a third specified feature value from the video input information.
  • Step 206: Select an expression to be input from the feature library according to the extracted expression feature value.
  • the electronic device can select the expression to be input according to the extracted expression feature values and the corresponding relationship stored in the feature library.
  • the selected emoticons are then inserted into the input box 24 for the user to send or directly display in the chat bar 26.
  • the step may include the following sub-steps:
  • since the expression feature values stored in the feature library are specific expression feature values (for example, a first specified feature value entered by a specific person), there is a certain degree of difference between the expression feature value extracted by the electronic device and the expression feature values stored in the feature library, so the electronic device needs to match the two to obtain a matching degree.
  • n expressions corresponding to m expression feature values whose matching degree is greater than a predetermined threshold are used as alternative expressions, n ≥ m ≥ 1.
  • an expression feature value corresponds to at least one expression.
  • the predetermined threshold can be preset according to the actual situation, for example, set to 80%.
  • the alternative expressions obtained by the electronic device are: the three expressions A, B, and C corresponding to an expression feature value with a matching degree of 98%, and the D expression corresponding to another expression feature value with a matching degree of 90%.
  • the sorting condition includes any one of historical usage times, recent usage time, and matching degree.
  • the order of priority between the various sorting conditions may be preset according to actual conditions, for example, the order of priority from high to low is the degree of matching, the number of historical uses, and the most recently used time.
  • the electronic device first sorts the four expressions A, B, C, and D according to the matching degree, obtaining A, B, C, and D in turn, and finds that the matching degrees of the three expressions A, B, and C are all 98%; the electronic device then sorts the three expressions A, B, and C according to the historical usage count, obtaining B, A, and C in turn (assuming the sorting rule sorts by the number of historical uses, and the historical usage count of the A expression is 15, that of the B expression is 20, and that of the C expression is 3); at this point, the electronic device finds that the B expression has the highest historical usage count, so the B expression is selected as the expression to be input.
  • In this way, the electronic device automatically filters out one candidate expression from the plurality of candidate expressions as the expression to be input, without requiring the user to select or confirm, which simplifies the flow of expression input and makes expression input more efficient and convenient.
  • This step may include the following sub-steps:
  • the electronic device comprehensively analyzes two forms of expression feature values to determine an expression to be input, which can make the selected expression more accurate and fully satisfy the user's needs.
  • The electronic device matches the first specified feature value with the first expression feature value stored in the first feature library, and similarly obtains a matching degree between the two. In this embodiment, it is assumed that the first specified feature value extracted by the electronic device is “haha”.
  • The electronic device acquires a first expression feature values whose matching degree is greater than the first threshold, a ≥ 1.
  • In this embodiment, a = 1 is assumed.
  • Here, the case where the second specified feature value is a laughing facial expression is taken as an example for illustration.
  • The electronic device acquires b second expression feature values whose matching degree is greater than the second threshold, b ≥ 1.
  • In this embodiment, b = 2 is assumed.
  • The candidate expressions are the three expressions “laughing”, “smile”, and “fang” corresponding to the first expression feature value whose matching degree is greater than the first threshold, together with the expressions corresponding to the two second expression feature values whose matching degree is greater than the second threshold.
  • the sorting condition includes any one of a repetition number, a history usage count, a recent usage time, and a matching degree.
  • The order of priority between the various sorting conditions may be preset according to actual conditions; for example, the order of priority from high to low is the number of repetitions, the number of historical usages, the most recent usage time, and the matching degree.
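A minimal sketch of the two-form selection, assuming the repetition count is simply how many of the two feature libraries produced a given expression (the expression names and the list representation are illustrative; a full implementation would fall back to the remaining sorting conditions on ties):

```python
from collections import Counter

def select_from_two_forms(first_matches, second_matches):
    """Candidates come from the first (e.g. voice) and second
    (e.g. facial) feature libraries; an expression appearing in both
    lists has repetition count 2 and is preferred."""
    counts = Counter(first_matches) + Counter(second_matches)
    # Highest repetition count wins; further ties would be broken by
    # historical usage, recency, and matching degree as described above.
    return counts.most_common(1)[0][0]

# "laughing" is matched both by the voice "haha" and by the smiling
# face, so it repeats twice and is selected (example values assumed).
print(select_from_two_forms(
    ["laughing", "smile", "fang"],   # from the first feature library
    ["laughing", "grin"],            # from the second feature library
))  # prints "laughing"
```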
  • In this way, the electronic device automatically filters out one candidate expression from the plurality of candidate expressions as the expression to be input, without requiring the user to select or confirm, which simplifies the flow of expression input and makes expression input more efficient and convenient.
  • In addition, when the electronic device matches the extracted expression feature value against the expression feature values stored in the feature library, if no expression feature value with a matching degree greater than the threshold is found, the user may be prompted that no matching result was found, for example, in the form of a pop-up window.
  • In step 207, the expression that needs to be input is displayed in the input box or the chat bar.
  • After the electronic device selects the expression to be input from the feature library, the expression is directly displayed in the input box or the chat bar.
  • For example, the electronic device can insert the selected expression into the input box 24 for the user to send, or directly display it in the chat bar 26.
  • the expression input method provided in this embodiment may also select an expression in combination with an environment in which the electronic device is located. Specifically, before the foregoing step 206, the following steps may also be included:
  • the environment information includes at least one of time information, environment volume information, ambient light intensity information, and environment image information.
  • the ambient volume information can be collected by the microphone, the ambient light intensity information can be collected by the light intensity sensor, and the environmental image information can be collected by the camera.
  • The various environmental information is comprehensively analyzed to determine the current usage environment. For example, when the time information is 22:00, the environment volume information is 2 decibels, and the ambient light intensity is weak, it can be determined that the current usage environment is one in which the user is sleeping. Likewise, when the time information is 14:00, the ambient volume information is 75 decibels, the ambient light intensity is strong, and the environmental image information shows a street, it can be determined that the current usage environment is one in which the user is shopping.
  • the correspondence between different usage environments and different candidate feature libraries is pre-stored in the electronic device. After the electronic device acquires the current usage environment, the corresponding candidate feature library is selected as the feature library. Then, the electronic device selects an expression to be input from the feature library according to the extracted expression feature value.
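The environment-to-library selection described above can be sketched as follows; the rule thresholds and library names are assumptions chosen to mirror the sleeping and shopping examples:

```python
def determine_environment(hour, volume_db, light):
    """Toy rules mirroring the examples in the text (values assumed)."""
    if hour >= 22 and volume_db < 10 and light == "weak":
        return "sleeping"
    if 9 <= hour < 18 and volume_db > 60 and light == "strong":
        return "shopping"
    return "default"

# Pre-stored correspondence between usage environments and candidate
# feature libraries (library names are hypothetical).
CANDIDATE_LIBRARIES = {
    "sleeping": "quiet_library",
    "shopping": "outdoor_library",
    "default": "general_library",
}

# 22:00, 2 dB, weak light -> the "sleeping" candidate library is used
# as the feature library for the subsequent matching step.
env = determine_environment(hour=22, volume_db=2, light="weak")
print(CANDIDATE_LIBRARIES[env])  # prints "quiet_library"
```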
  • the correspondence between different expression feature values stored in the feature library and different expressions may be previously set by the system or designer.
  • the emoticon package carries a feature library.
  • After designing the expressions, the designer also sets the correspondence between different expression feature values and different expressions, creates a feature library, and then packs the expressions together with the feature library into an expression package.
  • the correspondence between different expression feature values stored in the feature library and different expressions may also be set by the user.
  • the expression input method provided in this embodiment further includes the following steps:
  • For each expression, the electronic device records at least one piece of training information for training that expression.
  • the user can train the expression, and the user can customize the correspondence between different expression feature values and different expressions.
  • For example, the user selects four commonly used expressions from the expression selection interface, namely expression A, expression B, expression C, and expression D. Taking the training of expression A as an example, the user selects expression A and says “fangs” three times, and the electronic device records the three pieces of training information.
  • the electronic device still collects and records the training information through an input unit such as a microphone or a camera.
  • At least one training feature value is extracted from the at least one training information.
  • the electronic device may extract the training feature value from the training information by a data dimensionality reduction method or a feature value selection method.
  • the training information may be training information in the form of voice, training information in the form of pictures, or training information in the form of video.
  • the training feature value with the largest number of repetitions is used as the expression feature value corresponding to the expression.
  • Normally, the training feature values extracted from the training information are the same.
  • For example, when the three pieces of training information recorded by the electronic device are the word “fangs” spoken by the user, the three extracted training feature values are usually all “fangs”.
  • However, since the electronic device collects training information through an input unit such as a microphone or a camera, there may be interference from the surrounding environment, such as noise or image interference.
  • As a result, the training feature values extracted by the electronic device from the training information may differ. The electronic device therefore takes the most repeated training feature value as the expression feature value corresponding to the expression. For example, when the three pieces of training information recorded by the electronic device are the word “fangs” spoken by the user, but only two of the three extracted training feature values are “fangs” while the other is a different value, the electronic device selects “fangs” as the expression feature value corresponding to expression A.
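Picking the most repeated training feature value can be sketched with a simple frequency count; the value names and the feature-library shape are assumptions:

```python
from collections import Counter

def expression_feature_value(training_values):
    """Use the training feature value with the largest number of
    repetitions as the expression feature value for the expression."""
    return Counter(training_values).most_common(1)[0][0]

# Two of three recordings were recognized as "fangs"; one was corrupted
# by ambient noise, so "fangs" wins and is bound to expression A.
feature_library = {}  # maps feature value -> expression (assumed shape)
feature_library[expression_feature_value(["fangs", "noise", "fangs"])] = "A"
print(feature_library)  # prints {'fangs': 'A'}
```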
  • the trained correspondence can be stored in the original feature database; the user can also create a custom feature database and store the trained correspondence in the custom feature database.
  • the correspondence between the expression and the expression feature value is set by the user, thereby further improving the user experience.
  • the step of detecting whether the cursor is located in the input box may be performed before step 201.
  • The cursor is used to indicate the location where the user enters text, an expression, or a picture. Referring to FIG. 2B, the cursor 28 is located in the input box 24.
  • the electronic device detects whether the user is using the input box 24 to input content such as characters, expressions, or pictures based on the position of the cursor 28.
  • When the cursor 28 is in the input box 24, it is assumed by default that the user is using the input box 24, at which point step 201 above is performed.
  • the expression input method provided by the embodiment collects input information through an input unit on the electronic device, extracts an expression feature value from the input information, and selects an expression to be input from the feature library according to the extracted expression feature value.
  • The feature library stores the correspondence between different expression feature values and different expressions; this solves the problem in the related art that expression input is slow and the process is complicated, and achieves the effect of simplifying the expression input process and improving the expression input speed.
  • Voice input information is collected through the microphone, or input information in picture or video form is collected through the camera, and expression input is then performed on that basis, enriching the manner of expression input; the user can also set the correspondence between different expression feature values and different expressions, fully meeting the user's needs.
  • the foregoing embodiment further provides two ways of selecting an expression that needs to be input.
  • The first method, which analyzes one form of expression feature value to determine the expression to be input, is simple and fast; the second method, which comprehensively analyzes two forms of expression feature values to determine the expression to be input, can make the selected expression more accurate and fully satisfy the user's needs.
  • Xiao Ming opens an application software with information transceiving function installed in the smart TV, and simultaneously opens the front camera of the smart TV to collect pictures of the face area thereof.
  • Xiao Ming's mouth is slightly raised, showing a smiling expression.
  • the smart TV extracts the expression feature value from the collected face region picture, and finds the correspondence between the expression feature value and the expression in the feature library, and then inserts a smile expression in the input box of the chat interface. After that, Xiao Ming showed a sad expression, and the smart TV inserted a sad expression in the input box of the chat interface.
  • Xiaohong uses an instant messaging software installed in the mobile phone to train the expressions and set the correspondence between several sets of expression feature values and expressions.
  • When the mobile phone receives the voice input information “Today is so happy”, it inserts a corresponding emoticon in the input box of the chat interface according to the correspondence between the expression feature value “happy” and an expression.
  • When the mobile phone receives the voice input information “snowing outside”, it inserts a corresponding emoticon in the input box of the chat interface according to the correspondence between the expression feature value “snowing” and an expression.
  • When the mobile phone receives the voice input information “This snow is really beautiful, I like it”, it inserts a corresponding emoticon in the input box of the chat interface according to the correspondence between the expression feature value “like” and an expression.
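The three voice examples above can be sketched as a lookup against a user-trained feature library; the emoticon names are placeholders, since the original emoticon images do not survive in this text:

```python
# User-trained correspondence between expression feature values and
# expressions (all names below are illustrative placeholders).
FEATURE_LIBRARY = {
    "happy": "happy_emoticon",
    "snowing": "snow_emoticon",
    "like": "like_emoticon",
}

def expression_for_utterance(utterance):
    """Scan recognized speech for any trained feature value and return
    the corresponding expression, or None if nothing matches."""
    for feature_value, expression in FEATURE_LIBRARY.items():
        if feature_value in utterance:
            return expression
    return None

print(expression_for_utterance("Today is so happy"))    # happy_emoticon
print(expression_for_utterance("snowing outside"))      # snow_emoticon
print(expression_for_utterance("This snow is really beautiful, I like it"))
```

A real system would match on the recognized feature value with a tolerance (the matching degree discussed earlier) rather than an exact substring test.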
  • FIG. 3 is a structural block diagram of an expression input device according to an embodiment of the present invention, which is used in an electronic device.
  • the expression input device can be implemented as part or all of the electronic device by software, hardware or a combination of the two.
  • the expression input device includes: a first information collection module 310, a feature extraction module 320, and an expression selection module 330.
  • the first information collection module 310 is configured to collect input information.
  • the feature extraction module 320 is configured to extract an expression feature value from the input information.
  • the expression selection module 330 is configured to select an expression that needs to be input from the feature library according to the expression feature value, and the feature library stores a correspondence between different expression feature values and different expressions.
  • the expression input device provided by the embodiment extracts an expression feature value from the input information by collecting input information, and selects an expression to be input from the feature library according to the expression feature value, and the feature library stores different expression features. Corresponding relationship between value and different expressions; solving the problem that the expression input speed is slow and the process is complicated in the related art; the effect of simplifying the expression input process and improving the speed of expression input is achieved.
  • FIG. 4 is a structural block diagram of an expression input device according to another embodiment of the present invention, which is used in an electronic device.
  • the expression input device can be implemented as part or all of the electronic device by software, hardware or a combination of the two.
  • the expression input device includes: a first information collection module 310, a feature extraction module 320, a second information collection module 321, and an environment determination.
  • the first information collection module 310 is configured to collect input information.
  • the first information collecting module 310 includes: a voice collecting unit 310a and an image collecting unit 310b.
  • the voice collection unit 310a is configured to collect voice input information through a microphone if the input information includes voice input information.
  • the image capturing unit 310b is configured to collect image input information or video input information through the camera if the input information includes picture input information or video input information.
  • the feature extraction module 320 is configured to extract an expression feature value from the input information.
  • the feature extraction module 320 includes at least one extraction unit: a first extraction unit 320a, a second extraction unit 320b, and a third extraction unit 320c.
  • the first extracting unit 320a is configured to perform voice recognition on the voice input information if the input information includes voice input information, to obtain a first specified feature value.
  • the second extracting unit 320b is configured to determine a face area in the picture input information and extract a second specified feature value from the face area, if the input information includes picture input information.
  • the third extracting unit 320c is configured to extract a third specified feature value from the video input information if the input information includes video input information.
  • the expression input device further includes: a second information collection module 321, an environment determination module 322, and a feature selection module 323.
  • the second information collecting module 321 is configured to collect environment information around the electronic device, where the environment information includes at least one of time information, environment volume information, ambient light intensity information, and environment image information.
  • the environment determining module 322 is configured to determine a current usage environment according to the environment information.
  • the feature selection module 323 is configured to select an candidate feature library corresponding to the current use environment from the at least one candidate feature library, and use the candidate feature library as a feature library.
  • the expression selection module 330 is configured to select an expression that needs to be input from the feature library according to the expression feature value, and the feature library stores a correspondence between different expression feature values and different expressions.
  • The expression selection module 330 includes: a feature matching unit 330a, a candidate selecting unit 330b, an expression arranging unit 330c, and an expression determining unit 330d.
  • the feature matching unit 330a is configured to match the expression feature value with the expression feature value stored in the feature library.
  • The candidate selecting unit 330b is configured to use n expressions corresponding to m expression feature values whose matching degree is greater than a predetermined threshold as candidate expressions, n ≥ m ≥ 1.
  • The expression arranging unit 330c is configured to select at least one sorting condition according to the preset priority and sort the n candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of historical usage count, recent usage time, and matching degree.
  • the expression determining unit 330d is configured to filter out an alternative expression according to the sorting result, and use the candidate expression as an expression to be input.
  • The expression selection module 330 includes: a first matching unit 330e, a first obtaining unit 330f, a second matching unit 330g, a second obtaining unit 330h, a candidate determining unit 330i, a candidate sorting unit 330j, and an expression selecting unit 330k.
  • the first matching unit 330e is configured to match the first specified feature value with the first expression feature value stored in the first feature library.
  • The first obtaining unit 330f is configured to obtain a first expression feature values whose matching degree is greater than the first threshold, a ≥ 1.
  • The second matching unit 330g is configured to match the second specified feature value or the third specified feature value with the second expression feature value stored in the second feature library.
  • The second obtaining unit 330h is configured to obtain b second expression feature values whose matching degree is greater than the second threshold, b ≥ 1.
  • The candidate determining unit 330i is configured to use, as candidate expressions, x expressions corresponding to the a first expression feature values and y expressions corresponding to the b second expression feature values, x ≥ a, y ≥ b.
  • The candidate sorting unit 330j is configured to select at least one sorting condition according to the preset priority and sort the candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of the number of repetitions, the number of historical usages, the most recent usage time, and the matching degree.
  • the expression selection unit 330k is configured to filter out an alternative expression according to the sorting result, and use the candidate expression as an expression to be input.
  • The feature library includes a first feature library and a second feature library, and the expression feature value includes a first expression feature value and a second expression feature value.
  • the expression display module 331 is configured to display an expression that needs to be input in an input box or a chat bar.
  • the expression input device further includes: an information recording module, a feature recording module, a feature selection module, and a feature storage module.
  • An information recording module for recording at least one training information for training an expression for each expression.
  • a feature recording module configured to extract at least one training feature value from the at least one training information.
  • the feature selection module is configured to use the training feature value with the largest number of repetitions as the expression feature value corresponding to the expression.
  • the feature storage module is configured to store the correspondence between the expression and the expression feature value in the feature library.
  • the expression input device extracts the expression feature value from the input information by collecting the input information, and selects an expression to be input from the feature library according to the extracted expression feature value, and the feature library stores Corresponding relationship between different expression feature values and different expressions; solving the problem that the expression input speed is slow and the process is complicated; the effect of simplifying the expression input process and improving the expression input speed is achieved.
  • Voice input information is collected through the microphone, or input information in picture or video form is collected through the camera, and expression input is then performed on that basis, enriching the manner of expression input; the user can also set the correspondence between different expression feature values and different expressions, fully meeting the user's needs.
  • It should be noted that the expression input device provided by the above embodiment is illustrated only by way of the division of the above functional modules when inputting an expression; in practical applications, the function distribution may be completed by different functional modules as needed, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above.
  • In addition, the expression input device provided by the above embodiment belongs to the same concept as the embodiments of the expression input method; for the specific implementation process, refer to the method embodiments, which is not described here again.
  • the electronic device 500 can be a mobile phone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, a desktop computer, a smart TV, and the like.
  • the electronic device 500 includes a central processing unit (CPU) 501, a system memory 504 including a random access memory (RAM) 502 and a read only memory (ROM) 503, and a system bus 505 that connects the system memory 504 and the central processing unit 501.
  • The electronic device 500 also includes a basic input/output system (I/O system) 506 that facilitates the transfer of information between various components within the electronic device, and a mass storage device 507 for storing an operating system 513, application programs 514, and other program modules 515.
  • the basic input/output system 506 includes a display 508 for displaying information and an input device 509 such as a mouse or keyboard for user input of information. Both the display 508 and the input device 509 are connected to the central processing unit 501 via an input and output controller 510 that is coupled to the system bus 505.
  • the basic input/output system 506 can also include an input and output controller 510 for receiving and processing input from a plurality of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input and output controller 510 also provides output to a display screen, printer, or other type of output device.
  • the mass storage device 507 is connected to the central processing unit 501 by a mass storage controller (not shown) connected to the system bus 505.
  • the mass storage device 507 and its associated electronic device readable medium provide non-volatile storage for the electronic device 500. That is, the mass storage device 507 may include an electronic device readable medium (not shown) such as a hard disk or a CD-ROM drive.
  • the computer readable medium can include computer storage media and communication media.
  • Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Computer storage media include RAM, ROM, EPROM, EEPROM, flash memory or other solid state storage technologies, CD-ROM, DVD or other optical storage, tape cartridges, magnetic tape, magnetic disk storage or other magnetic storage devices.
  • The electronic device 500 may also operate through a remote computer connected to a network, such as the Internet. That is, the electronic device 500 can be connected to the network 512 through a network interface unit 511 connected to the system bus 505, or the network interface unit 511 can be used to connect to other types of networks or remote computer systems (not shown).
  • FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • the electronic device may be used to implement the expression input method provided in the foregoing embodiment. Specifically:
  • The electronic device 600 may include an RF (Radio Frequency) circuit 110, a memory 120 including one or more computer-readable storage media, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a WiFi (Wireless Fidelity) module 170, a processor 180 having one or more processing cores, a power supply 190, and the like. It will be understood by those skilled in the art that the electronic device structure shown in FIG. 6 does not constitute a limitation on the electronic device, which may include more or fewer components than those illustrated, combine some components, or use a different arrangement of components. Specifically:
  • The RF circuit 110 can be used to receive and send signals in the process of sending and receiving information or during a call. Specifically, after downlink information from a base station is received, it is handed over to one or more processors 180 for processing; in addition, uplink data is sent to the base station.
  • the RF circuit 110 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier). , duplexer, etc.
  • RF circuitry 110 can also communicate with the network and other devices via wireless communication.
  • The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), e-mail, SMS (Short Messaging Service), and the like.
  • the memory 120 can be used to store software programs and modules, and the processor 180 executes various functional applications and data processing by running software programs and modules stored in the memory 120.
  • The memory 120 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required for at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the electronic device 600 (such as audio data and a phone book), and the like.
  • The memory 120 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 120 may also include a memory controller to provide the processor 180 and the input unit 130 with access to the memory 120.
  • the input unit 130 can be configured to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function controls.
  • input unit 130 can include touch-sensitive surface 131 as well as other input devices 132.
  • The touch-sensitive surface 131, also referred to as a touch display screen or a touchpad, can collect touch operations by the user on or near it (such as operations by the user on or near the touch-sensitive surface 131 using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connecting device according to a preset program.
  • the touch-sensitive surface 131 can include two portions of a touch detection device and a touch controller.
  • The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 180; it can also receive commands from the processor 180 and execute them.
  • the touch-sensitive surface 131 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the input unit 130 can also include other input devices 132.
  • other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
  • the display unit 140 can be used to display information entered by the user or information provided to the user and various graphical user interfaces of the electronic device 600, which can be composed of graphics, text, icons, video, and any combination thereof.
  • The display unit 140 may include a display panel 141; optionally, the display panel 141 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like.
  • The touch-sensitive surface 131 may cover the display panel 141; when the touch-sensitive surface 131 detects a touch operation on or near it, the operation is transmitted to the processor 180 to determine the type of the touch event, and the processor 180 then provides a corresponding visual output on the display panel 141 according to the type of the touch event.
  • Although in FIG. 6 the touch-sensitive surface 131 and the display panel 141 are implemented as two separate components to implement input and output functions, in some embodiments the touch-sensitive surface 131 can be integrated with the display panel 141 to implement both input and output functions.
  • Electronic device 600 may also include at least one type of sensor 150, such as a light sensor, motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 141 according to the brightness of the ambient light, and the proximity sensor may close the display panel 141 when the electronic device 600 moves to the ear. And / or backlight.
  • the gravity acceleration sensor can detect the magnitude of acceleration in all directions (usually three axes). When it is stationary, it can detect the magnitude and direction of gravity.
  • the electronic device 600 can also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described herein.
  • the audio circuit 160, the speaker 161, and the microphone 162 can provide an audio interface between the user and the electronic device 600.
  • on one hand, the audio circuit 160 can convert received audio data into an electrical signal and transmit it to the speaker 161, which converts it into a sound signal for output; on the other hand, the microphone 162 converts a collected sound signal into an electrical signal, which the audio circuit 160 receives and converts into audio data; after being processed by the processor 180, the audio data may be transmitted to another electronic device via the RF circuit 110, or output to the memory 120 for further processing.
  • the audio circuit 160 may also include an earphone jack to provide communication between peripheral earphones and the electronic device 600.
  • WiFi is a short-range wireless transmission technology; through the WiFi module 170, the electronic device 600 can help users send and receive e-mails, browse web pages, and access streaming media, providing users with wireless broadband Internet access.
  • although FIG. 6 shows the WiFi module 170, it can be understood that it is not a necessary component of the electronic device 600 and can be omitted as needed without changing the essence of the invention.
  • the processor 180 is the control center of the electronic device 600; it connects the various parts of the entire device through various interfaces and lines, and performs the various functions of the electronic device 600 and processes data by running or executing the software programs and/or modules stored in the memory 120 and recalling the data stored in the memory 120, thereby monitoring the device as a whole.
  • the processor 180 may include one or more processing cores; preferably, the processor 180 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like.
  • the modem processor primarily handles wireless communications. It can be understood that the above modem processor may not be integrated into the processor 180.
  • the electronic device 600 also includes a power source 190 (such as a battery) for powering various components.
  • the power source can be logically coupled to the processor 180 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
  • Power supply 190 may also include any one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
  • the electronic device 600 may further include a camera, a Bluetooth module, and the like, and details are not described herein.
  • the display unit of the electronic device is a touch screen display;
  • the electronic device further includes a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising instructions for performing the operations described in the embodiment corresponding to FIG. 1 or the embodiment corresponding to FIG. 2A above.
  • still another embodiment of the present invention provides a computer readable storage medium, which may be the computer readable storage medium included in the memory in the above embodiment, or may be a computer readable storage medium that exists separately and is not assembled into the terminal.
  • the computer readable storage medium stores one or more programs, the one or more programs being used by one or more processors to perform an expression input method, the method comprising:
  • collecting input information; extracting an expression feature value from the input information; and selecting the expression to be input from the feature library according to the expression feature value, where the feature library stores the correspondences between different expression feature values and different expressions.
  • extracting the expression feature value from the input information comprises:
  • the input information includes voice input information, performing voice recognition on the voice input information to obtain a first specified feature value;
  • the input information includes picture input information, determining a face area in the picture input information, and extracting a second specified feature value from the face area;
  • if the input information includes video input information, the third specified feature value is extracted from the video input information.
  • the expression to be input is selected from the feature library according to the expression feature value, including:
  • n expressions corresponding to m expression feature values whose matching degree is greater than a predetermined threshold are used as alternative expressions, n ⁇ m ⁇ 1;
  • one candidate expression is selected according to the sorting result and used as the expression to be input.
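As a rough illustration of this matching-and-ranking flow, the following Python sketch models feature values as strings, uses a generic string-similarity score as a stand-in for the matching degree (which the patent leaves unspecified), and applies historical usage count before matching degree as the preset priority; all names and data here are illustrative assumptions, not the patented implementation:

```python
# Illustrative sketch only: difflib's ratio() stands in for whatever
# matching-degree computation a real implementation would use.
from difflib import SequenceMatcher

def select_expression(feature_value, feature_library, usage_counts, threshold=0.6):
    """Match against the library, keep candidates above the threshold, rank them."""
    candidates = []  # (expression, matching_degree) pairs
    for stored_value, expressions in feature_library.items():
        degree = SequenceMatcher(None, feature_value, stored_value).ratio()
        if degree > threshold:  # the predetermined threshold from the method
            candidates.extend((expr, degree) for expr in expressions)
    if not candidates:
        return None
    # Sorting conditions applied by preset priority: historical usage count
    # first, then matching degree as a tie-breaker; the top candidate wins.
    candidates.sort(key=lambda c: (usage_counts.get(c[0], 0), c[1]), reverse=True)
    return candidates[0][0]

library = {"haha": [":laugh:", ":smile:"], "sob": [":cry:"]}
usage = {":laugh:": 12, ":smile:": 3, ":cry:": 7}
print(select_expression("haha", library, usage))  # prints ":laugh:"
```

Because the n candidate expressions are filtered to a single result automatically, the user never pages through an expression picker, which is the speed gain the method claims.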
  • selecting an expression to be input from the feature library according to the expression feature value includes:
  • the second specified feature value or the third specified feature value is matched against the second expression feature values stored in the second feature library;
  • the x expressions corresponding to the a first expression feature value and the y expressions corresponding to the b second expression feature values are used as alternative expressions, x ⁇ a, y ⁇ b;
  • the sorting condition includes any one of a repetition number, a history usage count, a recent usage time, and a matching degree;
  • the feature library includes a first feature library and a second feature library, and the expression feature values include a first expression feature value and a second expression feature value.
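A minimal sketch of this two-library variant, under the same illustrative assumptions (exact-match scoring stands in for any matching-degree computation; the library contents and function names are hypothetical):

```python
def select_from_two_libraries(first_value, second_value, first_lib, second_lib,
                              first_threshold=0.5, second_threshold=0.5):
    """Collect candidates from both feature libraries, then rank by matching degree."""
    def match(a, b):
        # Stand-in matching degree: 1.0 on exact match, 0.0 otherwise.
        return 1.0 if a == b else 0.0

    candidates = []
    for stored, exprs in first_lib.items():       # e.g. first (voice) feature values
        degree = match(first_value, stored)
        if degree > first_threshold:
            candidates.extend((e, degree) for e in exprs)
    for stored, exprs in second_lib.items():      # e.g. second (face) feature values
        degree = match(second_value, stored)
        if degree > second_threshold:
            candidates.extend((e, degree) for e in exprs)
    # A single sorting condition (matching degree) stands in here for the preset
    # priority over repetition count, historical usage count, and recency.
    candidates.sort(key=lambda c: c[1], reverse=True)
    return candidates[0][0] if candidates else None

voice_lib = {"haha": [":laugh:"]}
face_lib = {"smiling_mouth": [":smile:"]}
print(select_from_two_libraries("haha", "frown", voice_lib, face_lib))  # prints ":laugh:"
```

The x + y candidates form one pool, so an expression supported by both the voice evidence and the face evidence can naturally rank above one supported by only a single modality.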
  • before the expression to be input is selected from the feature library according to the expression feature value, the method further includes:
  • environment information includes at least one of time information, environment volume information, ambient light intensity information, and environment image information;
  • the candidate feature library corresponding to the current use environment is selected from the at least one candidate feature library, and the candidate feature library is used as the feature library.
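The environment-dependent library switch could be sketched as follows; the classification rules and environment names are purely hypothetical, since the patent does not specify how environment information maps to a use environment:

```python
# Hypothetical mapping from collected environment information (time, ambient
# volume) to a use environment; real rules could also use light or images.
def classify_environment(hour, ambient_volume_db):
    """Map collected time and ambient-volume information to a use environment."""
    if hour >= 22 or hour < 7:
        return "night"
    return "noisy" if ambient_volume_db > 70 else "daytime"

def pick_feature_library(candidate_libraries, hour, ambient_volume_db):
    """Select the candidate feature library matching the current use environment."""
    return candidate_libraries[classify_environment(hour, ambient_volume_db)]

libraries = {
    "night":   {"yawn": [":sleepy:"]},
    "daytime": {"haha": [":laugh:"]},
    "noisy":   {"what": [":confused:"]},
}
print(pick_feature_library(libraries, hour=23, ambient_volume_db=40))
```

Selecting a smaller, environment-specific library before matching narrows the candidate set, which both speeds up matching and biases the result toward expressions plausible in the current context.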
  • the input information is collected, including:
  • the voice input information is collected through the microphone
  • the picture input information or the video input information is collected through the camera.
  • before the expression to be input is selected from the feature library according to the expression feature value, the method further includes:
  • the training feature value with the largest number of repetitions is used as the expression feature value corresponding to the expression
  • the correspondence between the expression and the expression feature value is stored in the feature library.
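This training step amounts to taking the mode of the feature values extracted from repeated training inputs; a minimal sketch (the training values shown are invented for illustration):

```python
from collections import Counter

def train_expression_feature(training_values):
    """Return the most repeated training feature value among those extracted."""
    value, _count = Counter(training_values).most_common(1)[0]
    return value

feature_library = {}
expression = ":laugh:"
# Feature values extracted from several recorded training inputs (illustrative).
chosen = train_expression_feature(["haha", "hehe", "haha"])
feature_library[chosen] = expression  # store the correspondence in the library
print(feature_library)  # prints {'haha': ':laugh:'}
```

Using the most repeated value makes the stored feature robust to occasional recognition errors in individual training inputs.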
  • after the expression to be input is selected from the feature library according to the expression feature value, the method further includes: displaying the expression to be input in an input box or a chat bar.
  • the computer readable storage medium provided by the embodiment of the present invention collects input information, extracts an expression feature value from the input information, and selects the expression to be input from the feature library according to the extracted expression feature value, where the feature library stores the correspondences between different expression feature values and different expressions; this solves the problem that expression input is slow and the process is complicated, thereby simplifying the expression input process and improving the expression input speed.
  • a person skilled in the art may understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing related hardware; the program may be stored in a computer readable storage medium.
  • the storage medium mentioned may be a read only memory, a magnetic disk or an optical disk or the like.

Abstract

The present invention relates to the field of the Internet. Disclosed are an expression input method and apparatus and an electronic device. The method comprises: collecting input information; extracting an expression feature value from the input information; and selecting the expression to be input from a feature library according to the expression feature value, where the feature library stores the correspondences between different expression feature values and different expressions. By collecting input information, extracting an expression feature value from it, and selecting the expression to be input from a feature library that stores the correspondences between different expression feature values and different expressions, the present invention solves the problems that expression input is slow and the process is complex; the expression input process is simplified and the expression input speed is increased.

Description

Expression input method, device and electronic device
The present application claims priority to Chinese Patent Application No. 201410069166.9, filed with the Chinese Patent Office on February 27, 2014 and entitled "Expression Input Method and Apparatus", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of the Internet, and in particular to an expression input method, device, and electronic device.
Background
With the promotion and popularization of IM (Instant Messenger) applications, blogs, and SMS (Short Messaging Service) applications, users have become increasingly dependent on these applications with messaging functions to communicate and stay in contact with each other.
When users communicate through the above applications, they often need to input expressions to convey special meanings or to enrich the input content and make it more interesting. In a specific implementation, when one user needs to input an expression, the user opens an expression selection interface, selects the desired expression from it, and sends the selected expression to the other user. Correspondingly, the other user receives and reads the sent expression.
In the process of implementing the present invention, the inventors found that the related art has at least the following problems: to satisfy users' needs as far as possible, an application often contains dozens or even hundreds of expressions for the user to choose from. When the expression selection interface contains many expressions, they must be displayed in pages. To input an expression, the user must first find the page on which the desired expression is located and then select it from that page. This makes expression input slow and adds complexity to the expression input process.
Summary
In order to solve the problem in the related art that expression input is slow and the process is complicated, embodiments of the present invention provide an expression input method, device, and electronic device. The technical solutions are as follows:
In a first aspect, an expression input method is provided, the method comprising:
collecting input information;
extracting an expression feature value from the input information;
selecting an expression to be input from a feature library according to the expression feature value, where the feature library stores correspondences between different expression feature values and different expressions.
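The three steps above (collect, extract, select) can be sketched end to end; every stage below is a toy stand-in (keyword spotting in place of real speech recognition, a dict in place of the feature library), not the claimed implementation:

```python
def collect_input():
    # Stand-in for capturing voice input via the microphone.
    return {"type": "voice", "data": "haha that is funny"}

def extract_feature_value(input_info):
    # Stand-in for speech recognition: spot a known keyword in the transcript.
    keywords = {"haha", "sob", "wow"}
    for word in input_info["data"].split():
        if word in keywords:
            return word
    return None

def select_expression(feature_value, feature_library):
    # The feature library stores correspondences between feature values and expressions.
    return feature_library.get(feature_value)

feature_library = {"haha": ":laugh:", "sob": ":cry:", "wow": ":surprised:"}
print(select_expression(extract_feature_value(collect_input()), feature_library))  # prints ":laugh:"
```

The point of the pipeline is that the expression is chosen from the captured input itself, so the user never has to page through an expression picker.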
Optionally, extracting the expression feature value from the input information includes:
if the input information includes voice input information, performing voice recognition on the voice input information to obtain a first specified feature value;
if the input information includes picture input information, determining a face region in the picture input information and extracting a second specified feature value from the face region;
if the input information includes video input information, extracting a third specified feature value from the video input information.
Optionally, when the expression feature value is any one of the first specified feature value, the second specified feature value, and the third specified feature value, selecting the expression to be input from the feature library according to the expression feature value includes:
matching the expression feature value against the expression feature values stored in the feature library;
taking the n expressions corresponding to the m expression feature values whose matching degree is greater than a predetermined threshold as candidate expressions, n≥m≥1;
selecting at least one sorting condition according to a preset priority and sorting the n candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of historical usage count, most recent usage time, and the matching degree;
selecting one candidate expression according to the sorting result and using it as the expression to be input.
Optionally, when the expression feature value includes the first specified feature value and also includes the second specified feature value or the third specified feature value, selecting the expression to be input from the feature library according to the expression feature value includes:
matching the first specified feature value against the first expression feature values stored in a first feature library;
obtaining a first expression feature values whose matching degree is greater than a first threshold, a≥1;
matching the second specified feature value or the third specified feature value against the second expression feature values stored in a second feature library;
obtaining b second expression feature values whose matching degree is greater than a second threshold, b≥1;
taking the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values as candidate expressions, x≥a, y≥b;
selecting at least one sorting condition according to a preset priority and sorting the candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of repetition count, historical usage count, most recent usage time, and the matching degree;
selecting one candidate expression according to the sorting result and using it as the expression to be input;
where the feature library includes the first feature library and the second feature library, and the expression feature values include the first expression feature values and the second expression feature values.
Optionally, before the expression to be input is selected from the feature library according to the expression feature value, the method further includes:
collecting environment information around the electronic device, where the environment information includes at least one of time information, ambient volume information, ambient light intensity information, and ambient image information;
determining a current use environment according to the environment information;
selecting, from at least one candidate feature library, the candidate feature library corresponding to the current use environment, and using the candidate feature library as the feature library.
Optionally, the collecting input information includes:
if the input information includes the voice input information, collecting the voice input information through a microphone;
if the input information includes the picture input information or the video input information, collecting the picture input information or the video input information through a camera.
Optionally, before the expression to be input is selected from the feature library according to the expression feature value, the method further includes:
for each expression, recording at least one piece of training information used for training the expression;
extracting at least one training feature value from the at least one piece of training information;
taking the most repeated training feature value as the expression feature value corresponding to the expression;
storing the correspondence between the expression and the expression feature value in the feature library.
Optionally, after the expression to be input is selected from the feature library according to the expression feature value, the method further includes:
displaying the expression to be input in an input box or a chat bar.
In a second aspect, an expression input device is provided, the device comprising:
a first information collecting module, configured to collect input information;
a feature extraction module, configured to extract an expression feature value from the input information;
an expression selection module, configured to select an expression to be input from a feature library according to the expression feature value, where the feature library stores correspondences between different expression feature values and different expressions.
Optionally, the feature extraction module includes at least one of the following extraction units: a first extraction unit, a second extraction unit, and a third extraction unit;
the first extraction unit is configured to, if the input information includes voice input information, perform voice recognition on the voice input information to obtain a first specified feature value;
the second extraction unit is configured to, if the input information includes picture input information, determine a face region in the picture input information and extract a second specified feature value from the face region;
the third extraction unit is configured to, if the input information includes video input information, extract a third specified feature value from the video input information.
Optionally, when the expression feature value is any one of the first specified feature value, the second specified feature value, and the third specified feature value, the expression selection module includes: a feature matching unit, a candidate selection unit, an expression sorting unit, and an expression determining unit;
the feature matching unit is configured to match the expression feature value against the expression feature values stored in the feature library;
the candidate selection unit is configured to take the n expressions corresponding to the m expression feature values whose matching degree is greater than a predetermined threshold as candidate expressions, n≥m≥1;
the expression sorting unit is configured to select at least one sorting condition according to a preset priority and sort the n candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of historical usage count, most recent usage time, and the matching degree;
the expression determining unit is configured to select one candidate expression according to the sorting result and use it as the expression to be input.
Optionally, when the expression feature value includes the first specified feature value and also includes the second specified feature value or the third specified feature value, the expression selection module includes: a first matching unit, a first obtaining unit, a second matching unit, a second obtaining unit, a candidate determining unit, a candidate sorting unit, and an expression selection unit;
the first matching unit is configured to match the first specified feature value against the first expression feature values stored in a first feature library;
the first obtaining unit is configured to obtain a first expression feature values whose matching degree is greater than a first threshold, a≥1;
the second matching unit is configured to match the second specified feature value or the third specified feature value against the second expression feature values stored in a second feature library;
the second obtaining unit is configured to obtain b second expression feature values whose matching degree is greater than a second threshold, b≥1;
the candidate determining unit is configured to take the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values as candidate expressions, x≥a, y≥b;
the candidate sorting unit is configured to select at least one sorting condition according to a preset priority and sort the candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of repetition count, historical usage count, most recent usage time, and the matching degree;
the expression selection unit is configured to select one candidate expression according to the sorting result and use it as the expression to be input;
where the feature library includes the first feature library and the second feature library, and the expression feature values include the first expression feature values and the second expression feature values.
Optionally, the device further includes:
a second information collecting module, configured to collect environment information around the electronic device, where the environment information includes at least one of time information, ambient volume information, ambient light intensity information, and ambient image information;
an environment determining module, configured to determine a current use environment according to the environment information;
a feature selection module, configured to select, from at least one candidate feature library, the candidate feature library corresponding to the current use environment, and use the candidate feature library as the feature library.
Optionally, the first information collecting module includes: a voice collecting unit and an image collecting unit;
the voice collecting unit is configured to, if the input information includes the voice input information, collect the voice input information through a microphone;
the image collecting unit is configured to, if the input information includes the picture input information or the video input information, collect the picture input information or the video input information through a camera.
Optionally, the device further includes:
an information recording module, configured to record, for each expression, at least one piece of training information used for training the expression;
a feature recording module, configured to extract at least one training feature value from the at least one piece of training information;
a feature selecting module, configured to take the most repeated training feature value as the expression feature value corresponding to the expression;
a feature storage module, configured to store the correspondence between the expression and the expression feature value in the feature library.
Optionally, the device further includes:
an expression display module, configured to display the expression to be input in an input box or a chat bar.
In a third aspect, an electronic device is provided, the electronic device comprising: a central processing unit, a network interface unit, a sensor, a microphone, a display, and a system memory, where the system memory stores a set of program code, and the central processing unit is configured to call, through a system bus, the program code stored in the system memory to perform the following operations:
collecting input information; extracting an expression feature value from the input information; and selecting an expression to be input from a feature library according to the expression feature value, where the feature library stores correspondences between different expression feature values and different expressions.
Preferably, the central processing unit is configured to call the program code stored in the system memory to perform the following operations:
if the input information includes voice input information, performing voice recognition on the voice input information to obtain a first specified feature value; if the input information includes picture input information, determining a face region in the picture input information and extracting a second specified feature value from the face region; if the input information includes video input information, extracting a third specified feature value from the video input information.
Preferably, the central processing unit is configured to call the program code stored in the system memory to perform the following operations:
when the expression feature value is any one of the first specified feature value, the second specified feature value, and the third specified feature value, matching the expression feature value against the expression feature values stored in the feature library; taking the n expressions corresponding to the m expression feature values whose matching degree is greater than a predetermined threshold as candidate expressions, n≥m≥1; selecting at least one sorting condition according to a preset priority and sorting the n candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of historical usage count, most recent usage time, and the matching degree; and selecting one candidate expression according to the sorting result and using it as the expression to be input.
Preferably, the central processing unit is configured to call the program code stored in the system memory to perform the following operations:
when the expression feature value includes the first specified feature value and also includes the second specified feature value or the third specified feature value, matching the first specified feature value against the first expression feature values stored in a first feature library; obtaining a first expression feature values whose matching degree is greater than a first threshold, a≥1; matching the second specified feature value or the third specified feature value against the second expression feature values stored in a second feature library; obtaining b second expression feature values whose matching degree is greater than a second threshold, b≥1; taking the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values as candidate expressions, x≥a, y≥b; selecting at least one sorting condition according to a preset priority and sorting the candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of repetition count, historical usage count, most recent usage time, and the matching degree; and selecting one candidate expression according to the sorting result and using it as the expression to be input; where the feature library includes the first feature library and the second feature library, and the expression feature values include the first expression feature values and the second expression feature values.
Preferably, the central processing unit is configured to call the program code stored in the system memory to perform the following operations:
collecting environment information around the electronic device, where the environment information includes at least one of time information, ambient volume information, ambient light intensity information, and ambient image information; determining a current use environment according to the environment information; and selecting, from at least one candidate feature library, the candidate feature library corresponding to the current use environment, and using the candidate feature library as the feature library.
Preferably, the central processing unit is configured to call the program code stored in the system memory to perform the following operations:
if the input information includes the voice input information, collecting the voice input information through a microphone; if the input information includes the picture input information or the video input information, collecting the picture input information or the video input information through a camera.
Preferably, the central processing unit is configured to call the program code stored in the system memory to perform the following operations:
for each expression, recording at least one training signal used for training the expression; extracting at least one training feature value from the at least one training signal; taking the most repeated training feature value as the expression feature value corresponding to the expression; and storing the correspondence between the expression and the expression feature value in the feature library.
优选地,所述中央处理单元用于调用所述系统存储器中存储的程序代码,用于执行以下操作:Preferably, the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:
将所述需要输入的表情显示于输入框或者聊天栏中。Display the expression that needs to be input in the input box or chat bar.
本发明实施例提供的技术方案带来的有益效果是:The beneficial effects brought by the technical solutions provided by the embodiments of the present invention are:
通过采集输入信息,从输入信息中提取表情特征值,根据提取到的表情特征值从特征库中选取需要输入的表情,特征库中存储有不同表情特征值与不同表情之间的对应关系;解决了表情输入速度慢且过程复杂的问题;达到了简化表情输入过程,提高表情输入速度的效果。By collecting the input information, the expression feature value is extracted from the input information, and the expression to be input is selected from the feature library according to the extracted expression feature value, and the correspondence relationship between different expression feature values and different expressions is stored in the feature library; The problem that the expression input speed is slow and the process is complicated; the effect of simplifying the expression input process and improving the expression input speed is achieved.
BRIEF DESCRIPTION OF THE DRAWINGS

To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.

FIG. 1 is a flowchart of an expression input method according to an embodiment of the present invention;

FIG. 2A is a flowchart of an expression input method according to another embodiment of the present invention;

FIG. 2B is a schematic diagram of a chat interface of a typical instant messaging application;

FIG. 3 is a structural block diagram of an expression input apparatus according to an embodiment of the present invention;

FIG. 4 is a structural block diagram of an expression input apparatus according to another embodiment of the present invention;

FIG. 5 is an illustrative terminal architecture of an electronic device 500 used in an embodiment of the present invention;

FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.

DETAILED DESCRIPTION

To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are further described in detail below with reference to the accompanying drawings.
In the embodiments of the present invention, the electronic device may be a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, a desktop computer, a smart television, or the like.

Referring to FIG. 1, which shows a flowchart of an expression input method according to an embodiment of the present invention, this embodiment is described by taking the application of the expression input method to an electronic device as an example. The expression input method includes the following steps:

Step 102: Collect input information.

Step 104: Extract an expression feature value from the input information.

Step 106: Select an expression to be input from a feature library according to the expression feature value, where the feature library stores correspondences between different expression feature values and different expressions.

In summary, in the expression input method provided by this embodiment, input information is collected, an expression feature value is extracted from the input information, and the expression to be input is selected from a feature library according to the extracted expression feature value, where the feature library stores correspondences between different expression feature values and different expressions. This solves the problem that expression input is slow and the process is complicated, simplifies the expression input process, and increases the expression input speed.
Preferably, extracting the expression feature value from the input information includes:

if the input information includes voice input information, performing voice recognition on the voice input information to obtain a first specified feature value;

if the input information includes picture input information, determining a face region in the picture input information and extracting a second specified feature value from the face region;

if the input information includes video input information, extracting a third specified feature value from the video input information.

Preferably, when the expression feature value is any one of the first specified feature value, the second specified feature value, and the third specified feature value, selecting the expression to be input from the feature library according to the expression feature value includes:

matching the expression feature value against the expression feature values stored in the feature library;

taking the n expressions corresponding to the m expression feature values whose matching degree is greater than a predetermined threshold as candidate expressions, n ≥ m ≥ 1;

selecting at least one sorting condition according to a preset priority and sorting the n candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of historical usage count, most recent usage time, and the matching degree;

filtering out one candidate expression according to the sorting result and taking that candidate expression as the expression to be input.
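The match-threshold-sort selection above can be sketched as follows. The feature-library contents, the `match_degree` similarity (a string-based stand-in for comparing acoustic or image feature values), and the 80% threshold are illustrative assumptions, not the concrete implementation of this disclosure:

```python
from difflib import SequenceMatcher

# Hypothetical feature library: expression feature value -> corresponding expressions.
FEATURE_LIBRARY = {
    "haha": ["laugh", "grin"],
    "sigh": ["sad"],
}

def match_degree(extracted, stored):
    # Stand-in similarity measure; a real device would compare
    # acoustic or image feature vectors rather than strings.
    return SequenceMatcher(None, extracted, stored).ratio()

def select_expression(extracted_value, threshold=0.8, usage_count=None):
    usage_count = usage_count or {}
    # Match against stored feature values; keep candidates above the threshold.
    candidates = []
    for stored, expressions in FEATURE_LIBRARY.items():
        degree = match_degree(extracted_value, stored)
        if degree > threshold:
            candidates.extend((expr, degree) for expr in expressions)
    if not candidates:
        return None  # no match; the device may prompt the user
    # Sort by matching degree first, then by historical usage count.
    candidates.sort(key=lambda c: (c[1], usage_count.get(c[0], 0)), reverse=True)
    return candidates[0][0]

print(select_expression("haha", usage_count={"grin": 5, "laugh": 2}))  # grin
```

With both "laugh" and "grin" tied at full matching degree, the historical usage count decides, mirroring the fallback between sorting conditions described above.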
Preferably, when the expression feature value includes the first specified feature value and further includes the second specified feature value or the third specified feature value, selecting the expression to be input from the feature library according to the expression feature value includes:

matching the first specified feature value against the first expression feature values stored in a first feature library;

obtaining a first expression feature values whose matching degree is greater than a first threshold, a ≥ 1;

matching the second specified feature value or the third specified feature value against the second expression feature values stored in a second feature library;

obtaining b second expression feature values whose matching degree is greater than a second threshold, b ≥ 1;

taking the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values as candidate expressions, x ≥ a, y ≥ b;

selecting at least one sorting condition according to a preset priority and sorting the candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of repetition count, historical usage count, most recent usage time, and the matching degree;

filtering out one candidate expression according to the sorting result and taking that candidate expression as the expression to be input;

wherein the feature library includes the first feature library and the second feature library, and the expression feature value includes the first expression feature value and the second expression feature value.

Preferably, before selecting the expression to be input from the feature library according to the expression feature value, the method further includes:

collecting environment information around the electronic device, where the environment information includes at least one of time information, ambient volume information, ambient light intensity information, and ambient image information;

determining a current use environment according to the environment information;

selecting, from at least one candidate feature library, the candidate feature library corresponding to the current use environment, and taking that candidate feature library as the feature library.
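The environment-based library selection could look like the following sketch. The disclosure leaves the mapping from environment information to use environment open, so the time/volume rule, the environment names, and the library contents here are all invented for illustration:

```python
import datetime

# Hypothetical candidate feature libraries keyed by use environment.
CANDIDATE_LIBRARIES = {
    "work": {"ok": ["thumbs-up"]},
    "home": {"haha": ["laugh"]},
}

def determine_environment(now=None, ambient_volume_db=None):
    # Toy rule combining time information with ambient volume information;
    # this mapping is an assumption, not part of the disclosure.
    now = now or datetime.datetime.now()
    if 9 <= now.hour < 18 and (ambient_volume_db is None or ambient_volume_db < 60):
        return "work"
    return "home"

def select_feature_library(now=None, ambient_volume_db=None):
    env = determine_environment(now, ambient_volume_db)
    return CANDIDATE_LIBRARIES[env]

lib = select_feature_library(datetime.datetime(2014, 2, 25, 10, 30), 40)
print("haha" in lib)  # → False: the work library has no "haha" entry
```

Once the candidate feature library is chosen this way, the matching steps above run against it unchanged.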
Preferably, collecting the input information includes:

if the input information includes voice input information, collecting the voice input information through a microphone;

if the input information includes picture input information or video input information, collecting the picture input information or the video input information through a camera.

Preferably, before selecting the expression to be input from the feature library according to the expression feature value, the method further includes:

for each expression, recording at least one piece of training information used for training the expression;

extracting at least one training feature value from the at least one piece of training information;

taking the training feature value with the largest number of repetitions as the expression feature value corresponding to the expression;

storing the correspondence between the expression and the expression feature value in the feature library.
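The four training steps above can be sketched directly. The token-based `extract_training_feature` is a placeholder for real speech or image feature extraction:

```python
from collections import Counter

def extract_training_feature(signal):
    # Placeholder extraction: in practice this would be acoustic or image
    # feature extraction; here each training signal is already a token.
    return signal.strip().lower()

def train_expression(feature_library, expression, training_signals):
    # Extract one training feature value per recorded training signal,
    # then keep the most frequently repeated value as the expression
    # feature value corresponding to this expression.
    values = [extract_training_feature(s) for s in training_signals]
    most_common_value, _ = Counter(values).most_common(1)[0]
    # Store the expression <-> feature value correspondence in the library.
    feature_library[most_common_value] = expression
    return most_common_value

lib = {}
print(train_expression(lib, "laugh", ["Haha", "haha ", "hehe"]))  # haha
```

After normalization, "haha" repeats twice versus once for "hehe", so it becomes the stored expression feature value for the "laugh" expression.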
Preferably, after selecting the expression to be input from the feature library according to the expression feature value, the method further includes:

displaying the expression to be input in an input box or a chat bar.

All of the foregoing optional technical solutions may be combined in any manner to form optional embodiments of the present invention, and details are not described here again.
Referring to FIG. 2A, which shows a flowchart of an expression input method according to another embodiment of the present invention, this embodiment is described by taking the application of the expression input method to an electronic device as an example. The expression input method includes the following steps:

Step 201: Determine whether the electronic device is in an automatic collection state or a manual collection state; if the electronic device is in the automatic collection state, perform step 202; if the electronic device is in the manual collection state, perform step 203.

The automatic collection state means that the electronic device automatically turns on the input unit to collect input information; the manual collection state means that the user turns on the input unit to collect input information.

Step 202: If the electronic device is in the automatic collection state, turn on the input unit.

If the electronic device is in the automatic collection state, the electronic device automatically turns on the input unit. The input unit includes a microphone and/or a camera. The input unit may be built into the electronic device, or may be externally connected to the electronic device; this is not specifically limited in this embodiment of the present invention.

After the electronic device turns on the input unit, the following step 204 is performed.

Step 203: If the electronic device is in the manual collection state, detect whether the input unit is turned on.

If the electronic device is in the manual collection state, the electronic device detects whether the input unit is turned on. Because the manual collection state means that the user turns on the input unit to collect input information, the electronic device at this point detects whether the user has turned on the input unit. The user may turn on the input unit through a control such as a button or a switch.

When the input unit is a microphone, refer to FIG. 2B, which shows a chat interface of a typical instant messaging application. A microphone button 22 is located in an input box 24. The user may long-press the microphone button 22 to keep the microphone on; when the user releases the microphone button 22, the microphone is turned off.

If the input unit is turned on, the following step 204 is performed; if the input unit is not turned on, the following steps are not performed.

Step 204: Collect input information through the input unit on the electronic device.

Regardless of whether the electronic device is in the automatic collection state or the manual collection state, after the input unit is turned on, the electronic device collects input information through the input unit.
In a first possible implementation, if the input unit includes a microphone, voice input information is collected through the microphone. The voice input information may be words spoken by the user, or a sound made by the user or another object.

In a second possible implementation, if the input unit includes a camera, picture input information or video input information is collected through the camera. The picture input information may be the user's facial expression, and the video input information may be the user's body movement, gesture trajectory, or the like.

Step 205: Extract an expression feature value from the input information.

After collecting the input information, the electronic device extracts the expression feature value from the input information.

In a first possible implementation, if the input information includes voice input information, voice recognition is performed on the voice input information, and a first specified feature value is then extracted from it. The first specified feature value is used to characterize the user's voice.

The electronic device may extract the first specified feature value from the voice input information through a data dimensionality reduction method or a feature value selection method. Data dimensionality reduction is a commonly used method for simplifying and effectively analyzing high-dimensional information such as speech or images. By reducing the dimensionality of high-dimensional information, data that does not reflect the essential characteristics of the information can be removed; the feature value obtained in this way is the data that reflects the essential characteristics of the input information. Because in this implementation the first specified feature value is extracted from the voice input information and is used in the expression input method provided by this embodiment, the first specified feature value is referred to as an expression feature value.

Alternatively, the expression feature value may be extracted from the input information through a feature value selection method. The electronic device may preset at least one expression feature value; after collecting the input information, it analyzes the input information and searches it for any preset expression feature value.

In this embodiment, assuming that the voice input information collected by the electronic device through the microphone is "当然可以没问题哈哈" ("sure, no problem, haha"), the electronic device analyzes the voice input information and extracts the first specified feature value "哈哈" ("haha") from it.
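The feature value selection path for this example can be sketched in a few lines; the preset feature value list is illustrative, and the speech-recognition step is assumed to have already produced the text:

```python
# Preset expression feature values the device searches for in the
# speech-recognition result (the values themselves are illustrative).
PRESET_FEATURE_VALUES = ["哈哈", "呜呜", "晚安"]

def extract_feature_values(recognized_text):
    # Feature value selection: scan the recognized text for any of the
    # preset expression feature values.
    return [v for v in PRESET_FEATURE_VALUES if v in recognized_text]

print(extract_feature_values("当然可以没问题哈哈"))  # → ['哈哈']
```

The extracted value "哈哈" is then matched against the feature library in step 206 below.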
In a second possible implementation, if the input information includes picture input information, a face region is determined from the picture input information, and a second specified feature value is extracted from the face region. The second specified feature value characterizes a person's facial expression.

The electronic device may first determine the face region from the picture input information through image recognition technology, and then extract the second specified feature value from the face region through a data dimensionality reduction method or a feature value selection method.

For example, after a picture of the user's face is captured through the camera, the face region in the picture is determined. After the face region is analyzed, a second specified feature value corresponding to an expression such as "happy", "sad", "crying", or "frantic" is extracted from it.

In a third possible implementation, if the input information includes video input information, a third specified feature value is extracted from the video input information. The third specified feature value characterizes a person's gesture trajectory.

When the input information is video input information, such as a user's body movement or gesture trajectory collected through the camera, the electronic device may extract the third specified feature value from it.

Step 206: Select the expression to be input from the feature library according to the extracted expression feature value.

Because the feature library stores correspondences between different expression feature values and different expressions, the electronic device can select the expression to be input according to the extracted expression feature value and the correspondences stored in the feature library. The selected expression is then inserted into the input box 24 to be sent by the user, or displayed directly in the chat bar 26.
Specifically, when the extracted expression feature value is any one of the first specified feature value, the second specified feature value, and the third specified feature value, this step may include the following sub-steps:

(1) Match the extracted expression feature value against the expression feature values stored in the feature library.

Because the expression feature values stored in the feature library are specific values, for example, a first specified feature value recorded by a particular person, the expression feature value extracted by the electronic device differs to some degree from the stored values. The electronic device therefore needs to match the two and obtain a matching degree.

(2) Take the n expressions corresponding to the m expression feature values whose matching degree is greater than a predetermined threshold as candidate expressions, n ≥ m ≥ 1.

One expression feature value corresponds to at least one expression. The predetermined threshold may be preset according to the actual situation, for example, set to 80%.

In this embodiment, assume that the candidate expressions obtained by the electronic device are: expressions A, B, and C, corresponding to an expression feature value with a matching degree of 98%, and expression D, corresponding to another expression feature value with a matching degree of 90%.

(3) Select at least one sorting condition according to a preset priority, and sort the n candidate expressions according to the at least one sorting condition.

The sorting condition includes any one of historical usage count, most recent usage time, and the matching degree. The priority order among the sorting conditions may be preset according to the actual situation, for example, in descending priority: matching degree, historical usage count, most recent usage time. When the electronic device cannot filter out the expression to be input according to the first sorting condition, it selects the second sorting condition to continue filtering, and so on, until one candidate expression is finally filtered out as the expression to be input.

In this embodiment, the electronic device first sorts the four expressions A, B, C, and D by matching degree, obtaining A, B, C, D, and finds that A, B, and C all have a matching degree of 98%. It then sorts A, B, and C by historical usage count, obtaining B, A, C (assuming descending order of historical usage count, with expression A used 15 times, expression B used 20 times, and expression C used 3 times). At this point the electronic device finds that expression B has the highest historical usage count, and therefore selects expression B as the expression to be input.

(4) Filter out one candidate expression according to the sorting result, and take that candidate expression as the expression to be input.

In the expression input method provided by this embodiment of the present invention, the electronic device automatically filters out one candidate expression from multiple candidate expressions as the expression to be input, without requiring the user to select or confirm, which simplifies the expression input process and makes expression input more efficient and convenient.
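The priority-ordered fallback between sorting conditions can be sketched generically; the candidate data below reproduces the A/B/C/D example (98%/90% matching degrees, usage counts 15/20/3), while the dictionary layout is an illustrative assumption:

```python
def filter_by_priority(candidates, conditions):
    # candidates: expression -> {condition: value}, higher values ranked first.
    # Apply the sorting conditions in preset priority order; stop as soon
    # as one condition singles out a unique best candidate.
    remaining = list(candidates)
    for cond in conditions:
        best = max(candidates[e][cond] for e in remaining)
        remaining = [e for e in remaining if candidates[e][cond] == best]
        if len(remaining) == 1:
            break
    return remaining[0]

# A, B, and C tie at 98% matching degree, so the historical usage count
# (A: 15, B: 20, C: 3) decides.
candidates = {
    "A": {"match": 0.98, "history": 15},
    "B": {"match": 0.98, "history": 20},
    "C": {"match": 0.98, "history": 3},
    "D": {"match": 0.90, "history": 50},
}
print(filter_by_priority(candidates, ["match", "history"]))  # → B
```

Expression D is eliminated by the first condition despite its high usage count, because matching degree has the higher preset priority.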
当提取到的表情特征值包括第一指定特征值,且还包括第二指定特征值或者第三指定特征值时,本步骤可以包括如下几个步骤:When the extracted expression feature value includes the first specified feature value, and further includes the second specified feature value or the third specified feature value, the step may include the following steps:
(1)将第一指定特征值与第一特征库中存储的第一表情特征值进行匹配。(1) Matching the first specified feature value with the first expression feature value stored in the first feature library.
与上述选取需要输入的表情的方式不同的是,电子设备综合分析两种形式的表情特征值确定出需要输入的表情,可以使得选取的表情更为准确,充分满足用户需求。Different from the above manner of selecting an expression to be input, the electronic device comprehensively analyzes two forms of expression feature values to determine an expression to be input, which can make the selected expression more accurate and fully satisfy the user's needs.
电子设备将第一指定特征值与第一特征库中存储的第一表情特征值进行 匹配。同样的,电子设备得到该第一指定特征值与第一特征库中存储的第一表情特征值之间的匹配度。在本实施例中,假设电子设备提取到的第一指定特征值为“哈哈”。The electronic device performs the first specified feature value and the first expression feature value stored in the first feature library match. Similarly, the electronic device obtains a matching degree between the first specified feature value and the first expression feature value stored in the first feature library. In this embodiment, it is assumed that the first specified feature value extracted by the electronic device is “haha”.
(2)获取匹配度大于第一阈值的a个第一表情特征值,a≥1。(2) Obtaining a first expression feature value whose matching degree is greater than the first threshold, a≥1.
电子设备获取匹配度大于第一阈值的a个第一表情特征值,a≥1。在本实施例中,假设a=1。The electronic device acquires a first expression feature values whose matching degree is greater than the first threshold, a≥1. In this embodiment, a = 1 is assumed.
(3)将第二指定特征值或者第三指定特征值与第二特征库中存储的第二表情特征值进行匹配。(3) Matching the second specified feature value or the third specified feature value with the second expression feature value stored in the second feature library.
在本实施例中,以第二指定特征值为大笑的面部表情为例进行举例说明。In the embodiment, the facial expression of the second designated feature value is laughed as an example for illustration.
(4)获取匹配度大于第二阈值的b个第二表情特征值,b≥1。(4) Obtaining b second expression feature values whose matching degree is greater than the second threshold, b≥1.
电子设备获取匹配度大于第二阈值的b个第二表情特征值,b≥1。在本实施例中,假设b=2。The electronic device acquires b second expression feature values whose matching degree is greater than a second threshold, b≥1. In this embodiment, it is assumed that b = 2.
(5)将a个第一表情特征值对应的x个表情以及b个第二表情特征值对应的y个表情作为备选表情,x≥a,y≥b。(5) The x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values are used as alternative expressions, x≥a, y≥b.
在本实施例中,假设备选表情为匹配度大于第一阈值的第一表情特征值对应的“大笑”、“微笑”和“龇牙”三个表情,匹配度大于第二阈值的第一个第二表情特征值对应的“微笑”表情,以及匹配度大于第二阈值的第二个第二表情特征值对应的“嘟嘴”表情。In this embodiment, it is assumed that the candidate expression is three expressions of “laughing”, “smile” and “fang” corresponding to the first expression feature value whose matching degree is greater than the first threshold, and the matching degree is greater than the second threshold. a "smile" expression corresponding to the second expression feature value, and a "beep" expression corresponding to the second second expression feature value having a matching degree greater than the second threshold.
(6)根据预设优先级选取至少一个排序条件,根据至少一个排序条件对备选表情进行排序。(6) selecting at least one sorting condition according to the preset priority, and sorting the alternative expressions according to at least one sorting condition.
其中,排序条件包括重复次数、历史使用次数、最近使用时间以及匹配度中的任意一种。各个排序条件之间的优先级顺序可以根据实际情况预先设定,比如按照优先级从高到低依次为重复次数、历史使用次数、最近使用时间、匹配度。当电子设备根据第一个排序条件无法筛选出需要输入的表情时,选取第二个排序条件继续筛选,以此类推,最终筛选出一个备选表情作为需要输入的表情。 The sorting condition includes any one of a repetition number, a history usage count, a recent usage time, and a matching degree. The order of priority between the various sorting conditions may be preset according to actual conditions, for example, the order of repetition is the order of repetition, the number of historical usages, the latest usage time, and the matching degree. When the electronic device cannot filter out the expression to be input according to the first sorting condition, select the second sorting condition to continue the screening, and so on, and finally select an alternative expression as the expression to be input.
在本实施例中,假设首先根据重复次数对“大笑”、“微笑”、“龇牙”和“嘟嘴”表情进行排序,发现“微笑”表情的重复次数最多,则直接选取“微笑”表情作为需要输入的表情。In this embodiment, it is assumed that the "Laughter", "Smile", "Tooth" and "Beep" expressions are first sorted according to the number of repetitions, and the "smile" expression is found to have the most repetitions, and the "smile" is directly selected. The expression is an expression that needs to be entered.
(7)根据排序结果筛选出一个备选表情，将备选表情作为需要输入的表情。(7) Screening out one candidate expression according to the sorting result, and using this candidate expression as the expression to be input.
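Steps (6) and (7) above, which apply the sorting conditions one by one in priority order until a single candidate remains, can be sketched as follows. This is only an illustration: the attribute names, the data layout, and the particular priority order are assumptions for the example, not part of the claimed method.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    repeat_count: int    # how many matched feature values point to this expression
    history_uses: int    # historical usage count
    last_used: float     # timestamp of the most recent use
    match_score: float   # matching degree against the feature library

# Assumed priority order of the sorting conditions, from high to low.
PRIORITY = ("repeat_count", "history_uses", "last_used", "match_score")

def pick_expression(candidates):
    """Apply the sorting conditions one by one until a single candidate remains."""
    pool = list(candidates)
    for cond in PRIORITY:
        best = max(getattr(c, cond) for c in pool)
        pool = [c for c in pool if getattr(c, cond) == best]
        if len(pool) == 1:
            break
    return pool[0]  # if still tied after all conditions, take the first

cands = [
    Candidate("laughing", 1, 10, 100.0, 0.8),
    Candidate("smile",    2, 25, 120.0, 0.9),
    Candidate("grin",     1, 30, 110.0, 0.7),
]
print(pick_expression(cands).name)  # "smile" wins on repetition count alone
```

Here "smile" is selected by the first condition, so the remaining conditions are never consulted, mirroring the early-exit behavior described above.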
在本发明实施例提供的表情输入方法中，电子设备从多个备选表情中自动筛选出一个备选表情作为需要输入的表情，不需要用户进行选取或者确认，简化表情输入的流程，使得表情输入更为高效、便捷。In the expression input method provided by this embodiment of the present invention, the electronic device automatically screens out one candidate expression from the plurality of candidate expressions as the expression to be input, without requiring the user to select or confirm, which simplifies the expression input flow and makes expression input more efficient and convenient.
另外，当电子设备将提取到的表情特征值与特征库中存储的表情特征值进行匹配之后，若发现不存在匹配度大于阈值的表情特征值，则可提示用户无法找到匹配结果。比如，以弹窗的形式告知用户。In addition, after the electronic device matches the extracted expression feature value against the expression feature values stored in the feature library, if no expression feature value with a matching degree greater than the threshold is found, the user may be prompted that no matching result can be found, for example, in the form of a pop-up window.
步骤207,将需要输入的表情显示于输入框或者聊天栏中。In step 207, the expression that needs to be input is displayed in the input box or the chat bar.
电子设备从特征库中选取需要输入的表情之后，将需要输入的表情直接显示于输入框或者聊天栏中。结合参考图2B，电子设备可以将选取的表情插入至输入框24中，以待用户发送或者直接显示于聊天栏26中。After the electronic device selects the expression to be input from the feature library, it displays the expression directly in the input box or the chat bar. With reference to FIG. 2B, the electronic device may insert the selected expression into the input box 24 to be sent by the user, or display it directly in the chat bar 26.
需要说明的是,本实施例提供的表情输入方法还可以结合电子设备所处环境对表情进行选取。具体地,在上述步骤206之前,还可以包括如下几个步骤:It should be noted that the expression input method provided in this embodiment may also select an expression in combination with an environment in which the electronic device is located. Specifically, before the foregoing step 206, the following steps may also be included:
(1)采集电子设备周围的环境信息。(1) Collect environmental information around the electronic device.
其中,环境信息包括时间信息、环境音量信息、环境光强信息以及环境图像信息中的至少一种。其中,环境音量信息可以通过麦克风采集、环境光强信息可以通过光强传感器采集、环境图像信息可以通过摄像头采集。The environment information includes at least one of time information, environment volume information, ambient light intensity information, and environment image information. The ambient volume information can be collected by the microphone, the ambient light intensity information can be collected by the light intensity sensor, and the environmental image information can be collected by the camera.
(2)根据环境信息确定当前使用环境。(2) Determine the current usage environment based on the environmental information.
电子设备采集周围的环境信息之后，综合分析各个环境信息以确定当前使用环境。比如，当时间信息为22:00、环境音量信息为2分贝且环境光强信息很弱时，可以确定当前使用环境为用户在睡觉的环境。再比如，当时间信息为14:00、环境音量信息为75分贝、环境光强信息较强且环境图像信息为街道时，可以确定当前使用环境为用户在逛街的环境。After the electronic device collects the surrounding environment information, it comprehensively analyzes each piece of environment information to determine the current usage environment. For example, when the time information is 22:00, the ambient volume information is 2 decibels, and the ambient light intensity is weak, it can be determined that the current usage environment is one in which the user is sleeping. As another example, when the time information is 14:00, the ambient volume information is 75 decibels, the ambient light intensity is strong, and the environmental image information shows a street, it can be determined that the current usage environment is one in which the user is out shopping.
(3)从至少一个备选特征库中，选取与当前使用环境对应的备选特征库，将该备选特征库作为特征库。(3) Selecting, from the at least one candidate feature library, the candidate feature library corresponding to the current usage environment, and using that candidate feature library as the feature library.
电子设备中预先存储不同使用环境与不同备选特征库之间的对应关系。当电子设备获取当前使用环境后,选取对应的备选特征库作为特征库。之后,电子设备再根据提取到的表情特征值从特征库中选取需要输入的表情。The correspondence between different usage environments and different candidate feature libraries is pre-stored in the electronic device. After the electronic device acquires the current usage environment, the corresponding candidate feature library is selected as the feature library. Then, the electronic device selects an expression to be input from the feature library according to the extracted expression feature value.
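The three environment-related steps above can be sketched as follows. The decision rules simply mirror the two examples in the text, and every threshold, environment name, and library name here is a hypothetical placeholder.

```python
def determine_environment(hour, volume_db, light, scene=None):
    """Toy rules mirroring the two examples above; thresholds are assumptions."""
    if hour >= 22 and volume_db < 10 and light == "weak":
        return "sleeping"
    if 9 <= hour <= 18 and volume_db > 60 and light == "strong" and scene == "street":
        return "shopping"
    return "default"

# Pre-stored correspondence between usage environments and candidate feature
# libraries (hypothetical names).
ENV_TO_LIBRARY = {
    "sleeping": "night_feature_library",
    "shopping": "outdoor_feature_library",
    "default":  "general_feature_library",
}

env = determine_environment(hour=14, volume_db=75, light="strong", scene="street")
print(env, "->", ENV_TO_LIBRARY[env])  # shopping -> outdoor_feature_library
```

Once the candidate feature library is chosen this way, the expression selection in step 206 proceeds against that library instead of a single global one.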
还需要说明的是，特征库中存储的不同表情特征值与不同表情之间的对应关系可以是预先由系统或者设计人员设定的。比如在用户安装表情包时，该表情包中就携带有特征库。设计人员在设计完成表情之后，同时也设定了不同表情特征值与不同表情之间的对应关系，并创建了特征库，而后将表情与特征库一同打包成表情包。另外，特征库中存储的不同表情特征值与不同表情之间的对应关系还可以是由用户自行设定的。当由用户自行设定时，本实施例提供的表情输入方法还包括如下几个步骤：It should also be noted that the correspondence between the different expression feature values stored in the feature library and the different expressions may be set in advance by the system or the designer. For example, when a user installs an expression package, the package carries a feature library: after designing the expressions, the designer also sets the correspondences between the different expression feature values and the different expressions, creates the feature library, and then packages the expressions together with the feature library into the expression package. Alternatively, the correspondence between the different expression feature values stored in the feature library and the different expressions may be set by the user. When set by the user, the expression input method provided by this embodiment further includes the following steps:
第一,对于每一个表情,记录用于训练该表情的至少一个训练信息。First, for each expression, at least one piece of training information for training the expression is recorded.
对于每一个表情，电子设备记录用于训练该表情的至少一个训练信息。用户可以对表情进行训练，由用户自定义不同的表情特征值与不同的表情之间的对应关系。比如，用户从表情选择界面中选取了常用的四个表情，分别为：表情A、表情B、表情C和表情D。以对表情A进行训练为例，用户选定表情A，重复3次说“龇牙”，电子设备记录该3个训练信息。For each expression, the electronic device records at least one piece of training information used to train that expression. The user can train expressions, customizing the correspondences between different expression feature values and different expressions. For example, the user selects four commonly used expressions from the expression selection interface: expression A, expression B, expression C, and expression D. Taking the training of expression A as an example, the user selects expression A and says "grin" three times, and the electronic device records these three pieces of training information.
当然，电子设备仍然通过麦克风或者摄像头之类的输入单元对训练信息进行采集和记录。Naturally, the electronic device likewise collects and records the training information through an input unit such as a microphone or a camera.
第二,从至少一个训练信息中提取至少一个训练特征值。Second, at least one training feature value is extracted from the at least one training information.
与上述步骤205相同,电子设备可以通过数据降维方法或者特征值选择方法从训练信息中提取训练特征值。训练信息可以是语音形式的训练信息,也可以是图片形式的训练信息,还可以是视频形式的训练信息。Similar to the above step 205, the electronic device may extract the training feature value from the training information by a data dimensionality reduction method or a feature value selection method. The training information may be training information in the form of voice, training information in the form of pictures, or training information in the form of video.
第三,将重复数量最多的训练特征值作为与表情相对应的表情特征值。 Third, the training feature value with the largest number of repetitions is used as the expression feature value corresponding to the expression.
当电子设备记录的训练信息相同时，通常从训练信息中提取得到的训练特征值是相同的。比如，电子设备记录的3次训练信息均为用户说的“龇牙”时，其提取到的3个训练特征值通常均为“龇牙”。When the pieces of training information recorded by the electronic device are the same, the training feature values extracted from them are normally the same. For example, when all three pieces of training information recorded by the electronic device are the user saying "grin", the three extracted training feature values are usually all "grin".
然而，当电子设备通过麦克风或者摄像头之类的输入单元采集训练信息时，可能存在周围环境的干扰，比如噪声或者图像的干扰，此时电子设备从训练信息中提取得到的训练特征值可能会有所不同。因此，电子设备将重复数量最多的训练特征值作为与表情相对应的表情特征值。比如，电子设备记录的3次训练信息均为用户说的“龇牙”时，其提取到的3个训练特征值中两个为“龇牙”，另一个为“在啊”，此时电子设备选取“龇牙”为与表情A相对应的表情特征值。However, when the electronic device collects training information through an input unit such as a microphone or a camera, interference from the surroundings, such as noise or image interference, may be present; in that case the training feature values the electronic device extracts from the training information may differ. Therefore, the electronic device takes the most repeated training feature value as the expression feature value corresponding to the expression. For example, when all three pieces of training information recorded by the electronic device are the user saying "grin", but two of the three extracted training feature values are "grin" and the other is "在啊" (a misrecognized value), the electronic device selects "grin" as the expression feature value corresponding to expression A.
第四,将表情和表情特征值的对应关系存储在特征库中。Fourth, the correspondence between the expression and the expression feature value is stored in the feature library.
在实际应用中,可以将经训练得到的对应关系存储于原有的特征库中;也可以由用户自行创建一个自定义特征库,将经训练得到的对应关系存储于自定义特征库中。In the actual application, the trained correspondence can be stored in the original feature database; the user can also create a custom feature database and store the trained correspondence in the custom feature database.
通过上述四个步骤,实现了由用户自行设定表情和表情特征值之间的对应关系,进一步提高了用户体验。Through the above four steps, the correspondence between the expression and the expression feature value is set by the user, thereby further improving the user experience.
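The four training steps above reduce to a majority vote over the extracted training feature values. A minimal sketch follows; the actual extraction of feature values from voice, picture, or video samples is not shown, and the sample strings and library layout are illustrative assumptions.

```python
from collections import Counter

def train_expression(expression, extracted_values, feature_library):
    """Keep the most repeated training feature value and store the
    feature-value-to-expression correspondence in the feature library."""
    value, _count = Counter(extracted_values).most_common(1)[0]
    feature_library[value] = expression
    return value

library = {}
# Three voice samples for expression A: two recognized as "grin",
# one corrupted by ambient noise into a different value.
chosen = train_expression("expression A", ["grin", "grin", "在啊"], library)
print(chosen)   # grin
print(library)  # {'grin': 'expression A'}
```

The trained correspondence may then be merged into the original feature library or kept in a user-defined custom library, as described above.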
还需要说明的是，为了辨别用户何时需要运用本实施例提供的表情输入方法进行表情输入，在步骤201之前还可以执行检测光标是否位于输入框中的步骤。光标用于指示用户输入文字、表情或者图片等内容的位置。请结合参考图2B，光标28位于输入框24中。电子设备根据光标28的位置检测用户是否正在使用输入框24进行文字、表情或者图片等内容的输入。当光标28位于输入框24中时，默认用户正在使用输入框24，此时执行上述步骤201。It should also be noted that, in order to identify when the user needs to use the expression input method provided by this embodiment, a step of detecting whether the cursor is located in the input box may be performed before step 201. The cursor indicates the position where the user inputs content such as text, expressions, or pictures. With reference to FIG. 2B, the cursor 28 is located in the input box 24. The electronic device detects, based on the position of the cursor 28, whether the user is using the input box 24 to input content such as text, expressions, or pictures. When the cursor 28 is located in the input box 24, it is assumed by default that the user is using the input box 24, and step 201 above is then performed.
综上所述，本实施例提供的表情输入方法，通过电子设备上的输入单元采集输入信息，从输入信息中提取表情特征值，根据提取到的表情特征值从特征库中选取需要输入的表情，特征库中存储有不同表情特征值与不同表情之间的对应关系；解决了表情输入速度慢且过程复杂的问题；达到了简化表情输入过程，提高表情输入速度的效果。In summary, the expression input method provided by this embodiment collects input information through an input unit on the electronic device, extracts an expression feature value from the input information, and selects the expression to be input from the feature library according to the extracted expression feature value, where the feature library stores correspondences between different expression feature values and different expressions; this solves the problem that expression input is slow and the process is complicated, and achieves the effect of simplifying the expression input process and increasing expression input speed.
另外，还通过麦克风采集语音输入信息，或者摄像头采集图片形式或者视频输入信息，进而进行表情输入，丰富了表情输入的方式；而且用户还可以自行设定不同表情特征值与不同表情之间的对应关系，充分满足了用户的需求。In addition, voice input information is collected through the microphone, or picture or video input information is collected through the camera, for expression input, which enriches the ways of inputting expressions; moreover, the user can set the correspondences between different expression feature values and different expressions, which fully meets users' needs.
另外，上述实施例还提供了两种选取需要输入的表情的方式，第一种方式通过分析一种形式的表情特征值后确定出需要输入的表情，较为简单、快速；第二种方式通过综合分析两种形式的表情特征值确定出需要输入的表情，可以使得选取的表情更为准确，充分满足用户需求。In addition, the above embodiment provides two ways of selecting the expression to be input. The first way determines the expression to be input by analyzing one form of expression feature value, which is relatively simple and fast; the second way determines the expression to be input by comprehensively analyzing two forms of expression feature values, which makes the selected expression more accurate and fully meets user needs.
在一个具体的例子中，小明打开智能电视中安装的一个具有信息收发功能的应用软件，并同时打开智能电视的前置摄像头采集其人脸区域的图片。小明嘴角微微上扬，露出微笑的表情。智能电视从采集的人脸区域图片中提取表情特征值，在特征库中寻找表情特征值与表情之间的对应关系之后，在聊天界面的输入框中插入微笑表情。之后，小明露出难过的表情，智能电视在聊天界面的输入框中插入难过表情。In a specific example, Xiao Ming opens an application with a messaging function installed on a smart TV and at the same time turns on the smart TV's front camera to capture pictures of his face area. The corners of Xiao Ming's mouth turn up slightly in a smiling expression. The smart TV extracts an expression feature value from the captured face-area picture, finds the correspondence between the expression feature value and an expression in the feature library, and then inserts a smiling expression into the input box of the chat interface. Afterwards, Xiao Ming shows a sad expression, and the smart TV inserts a sad expression into the input box of the chat interface.
在另一个具体的例子中，小红使用手机中安装的一个即时通讯软件，通过对表情进行训练，自行设定了几组表情特征值与表情之间的对应关系。之后，在小红与他人聊天过程中，当手机接收到“今天好开心啊”的语音输入信息时，根据表情特征值“开心”与表情（图PCTCN2014095872-appb-000001）的对应关系，在聊天界面的输入框中插入表情（图PCTCN2014095872-appb-000002）；当手机接收到“外面下雪了”的语音输入信息时，根据表情特征值“下雪了”与表情（图PCTCN2014095872-appb-000003）的对应关系，在聊天界面的输入框中插入表情（图PCTCN2014095872-appb-000004）；当手机接收到“这雪真漂亮我好喜欢”的语音输入信息时，根据表情特征值“喜欢”与表情（图PCTCN2014095872-appb-000005）的对应关系，在聊天界面的输入框中插入表情（图PCTCN2014095872-appb-000006）。In another specific example, Xiaohong uses an instant messaging application installed on her mobile phone and, by training expressions, sets several groups of correspondences between expression feature values and expressions herself. Afterwards, while Xiaohong chats with others, when the phone receives the voice input "I'm so happy today", it inserts an expression (Figure PCTCN2014095872-appb-000002) into the input box of the chat interface according to the correspondence between the expression feature value "happy" and the expression (Figure PCTCN2014095872-appb-000001); when the phone receives the voice input "It's snowing outside", it inserts an expression (Figure PCTCN2014095872-appb-000004) according to the correspondence between the expression feature value "snowing" and the expression (Figure PCTCN2014095872-appb-000003); and when the phone receives the voice input "This snow is so beautiful, I really like it", it inserts an expression (Figure PCTCN2014095872-appb-000006) according to the correspondence between the expression feature value "like" and the expression (Figure PCTCN2014095872-appb-000005).
下述为本发明装置实施例,可以用于执行本发明方法实施例。对于本发明装置实施例中未披露的细节,请参照本发明方法实施例。The following is an embodiment of the apparatus of the present invention, which can be used to carry out the method embodiments of the present invention. For details not disclosed in the embodiment of the device of the present invention, please refer to the method embodiment of the present invention.
请参考图3,其示出了本发明一个实施例提供的表情输入装置的结构方框图,该表情输入装置用于电子设备中。该表情输入装置可以通过软件、硬件或者两者的结合实现成为电子设备的部分或者全部,该表情输入装置包括:第一信息采集模块310、特征提取模块320和表情选取模块330。Please refer to FIG. 3, which is a structural block diagram of an expression input device according to an embodiment of the present invention, which is used in an electronic device. The expression input device can be implemented as part or all of the electronic device by software, hardware or a combination of the two. The expression input device includes: a first information collection module 310, a feature extraction module 320, and an expression selection module 330.
第一信息采集模块310,用于采集输入信息。The first information collection module 310 is configured to collect input information.
特征提取模块320,用于从输入信息中提取表情特征值。The feature extraction module 320 is configured to extract an expression feature value from the input information.
表情选取模块330,用于根据表情特征值从特征库中选取需要输入的表情,特征库中存储有不同表情特征值与不同表情之间的对应关系。The expression selection module 330 is configured to select an expression that needs to be input from the feature library according to the expression feature value, and the feature library stores a correspondence between different expression feature values and different expressions.
综上所述，本实施例提供的表情输入装置，通过采集输入信息，从输入信息中提取表情特征值，根据表情特征值从特征库中选取需要输入的表情，特征库中存储有不同表情特征值与不同表情之间的对应关系；解决了相关技术中表情输入速度慢且过程复杂的问题；达到了简化表情输入过程，提高表情输入的速度的效果。In summary, the expression input apparatus provided by this embodiment collects input information, extracts an expression feature value from the input information, and selects the expression to be input from the feature library according to the expression feature value, where the feature library stores correspondences between different expression feature values and different expressions; this solves the problem in the related art that expression input is slow and the process is complicated, and achieves the effect of simplifying the expression input process and increasing the speed of expression input.
请参考图4,其示出了本发明另一实施例提供的表情输入装置的结构方框图,该表情输入装置用于电子设备中。该表情输入装置可以通过软件、硬件或者两者的结合实现成为电子设备的部分或者全部,该表情输入装置包括:第一信息采集模块310、特征提取模块320、第二信息采集模块321、环境确定模块322、特征选择模块323、表情选取模块330和表情显示模块331。Please refer to FIG. 4, which is a structural block diagram of an expression input device according to another embodiment of the present invention, which is used in an electronic device. The expression input device can be implemented as part or all of the electronic device by software, hardware or a combination of the two. The expression input device includes: a first information collection module 310, a feature extraction module 320, a second information collection module 321, and an environment determination. The module 322, the feature selection module 323, the expression selection module 330, and the expression display module 331.
第一信息采集模块310,用于采集输入信息。The first information collection module 310 is configured to collect input information.
具体来讲,第一信息采集模块310,包括:语音采集单元310a,图像采集单元310b。Specifically, the first information collecting module 310 includes: a voice collecting unit 310a and an image collecting unit 310b.
语音采集单元310a,用于若输入信息包括语音输入信息,则通过麦克风采集语音输入信息。 The voice collection unit 310a is configured to collect voice input information through a microphone if the input information includes voice input information.
图像采集单元310b,用于若输入信息包括图片输入信息或者视频输入信息,则通过摄像头采集图片输入信息或者视频输入信息。The image capturing unit 310b is configured to collect image input information or video input information through the camera if the input information includes picture input information or video input information.
特征提取模块320,用于从输入信息中提取表情特征值。The feature extraction module 320 is configured to extract an expression feature value from the input information.
具体来讲,特征提取模块320,包括下述至少一个提取单元:第一提取单元320a,第二提取单元320b,第三提取单元320c。Specifically, the feature extraction module 320 includes at least one extraction unit: a first extraction unit 320a, a second extraction unit 320b, and a third extraction unit 320c.
第一提取单元320a,用于若输入信息包括语音输入信息,则对语音输入信息进行语音识别,得到第一指定特征值。The first extracting unit 320a is configured to perform voice recognition on the voice input information if the input information includes voice input information, to obtain a first specified feature value.
第二提取单元320b,用于若输入信息包括图片输入信息,则在图片输入信息中确定人脸区域,从人脸区域中提取第二指定特征值。The second extracting unit 320b is configured to determine a face area in the picture input information and extract a second specified feature value from the face area, if the input information includes picture input information.
第三提取单元320c,用于若输入信息包括视频输入信息,则从视频输入信息中提取第三指定特征值。The third extracting unit 320c is configured to extract a third specified feature value from the video input information if the input information includes video input information.
可选的,表情输入装置还包括:第二信息采集模块321、环境确定模块322和特征选择模块323。Optionally, the expression input device further includes: a second information collection module 321, an environment determination module 322, and a feature selection module 323.
第二信息采集模块321,用于采集电子设备周围的环境信息,环境信息包括时间信息、环境音量信息、环境光强信息以及环境图像信息中的至少一种。The second information collecting module 321 is configured to collect environment information around the electronic device, where the environment information includes at least one of time information, environment volume information, ambient light intensity information, and environment image information.
环境确定模块322,用于根据环境信息确定当前使用环境。The environment determining module 322 is configured to determine a current usage environment according to the environment information.
特征选择模块323,用于从至少一个备选特征库中选取与当前使用环境对应的备选特征库,将该备选特征库作为特征库。The feature selection module 323 is configured to select an candidate feature library corresponding to the current use environment from the at least one candidate feature library, and use the candidate feature library as a feature library.
表情选取模块330,用于根据表情特征值从特征库中选取需要输入的表情,特征库中存储有不同表情特征值与不同表情之间的对应关系。The expression selection module 330 is configured to select an expression that needs to be input from the feature library according to the expression feature value, and the feature library stores a correspondence between different expression feature values and different expressions.
当表情特征值为第一指定特征值、第二指定特征值以及第三指定特征值中的任意一种时，表情选取模块330，包括：特征匹配单元330a、备选选取单元330b、表情排列单元330c和表情确定单元330d。When the expression feature value is any one of the first specified feature value, the second specified feature value, and the third specified feature value, the expression selection module 330 includes: a feature matching unit 330a, a candidate selection unit 330b, an expression arranging unit 330c, and an expression determining unit 330d.
特征匹配单元330a,用于将表情特征值与特征库中存储的表情特征值进行匹配。The feature matching unit 330a is configured to match the expression feature value with the expression feature value stored in the feature library.
备选选取单元330b，用于将匹配度大于预定阈值的m个表情特征值对应的n个表情作为备选表情，n≥m≥1。The candidate selection unit 330b is configured to use the n expressions corresponding to the m expression feature values whose matching degree is greater than a predetermined threshold as candidate expressions, n ≥ m ≥ 1.
表情排列单元330c,用于根据预设优先级选取至少一个排序条件,根据至少一个排序条件对n个备选表情进行排序,排序条件包括历史使用次数、最近使用时间以及匹配度中的任意一种。The expression arranging unit 330c is configured to select at least one sorting condition according to the preset priority, and sort the n candidate expressions according to the at least one sorting condition, and the sorting condition includes any one of historical usage times, recent usage time, and matching degree. .
表情确定单元330d,用于根据排序结果筛选出一个备选表情,将该备选表情作为需要输入的表情。The expression determining unit 330d is configured to filter out an alternative expression according to the sorting result, and use the candidate expression as an expression to be input.
当表情特征值包括第一指定特征值，且还包括第二指定特征值或者第三指定特征值时，表情选取模块330，包括：第一匹配单元330e、第一获取单元330f、第二匹配单元330g、第二获取单元330h、备选确定单元330i、备选排序单元330j和表情选取单元330k。When the expression feature values include the first specified feature value and also include the second specified feature value or the third specified feature value, the expression selection module 330 includes: a first matching unit 330e, a first obtaining unit 330f, a second matching unit 330g, a second obtaining unit 330h, a candidate determining unit 330i, a candidate sorting unit 330j, and an expression selection unit 330k.
第一匹配单元330e,用于将第一指定特征值与第一特征库中存储的第一表情特征值进行匹配。The first matching unit 330e is configured to match the first specified feature value with the first expression feature value stored in the first feature library.
第一获取单元330f，用于获取匹配度大于第一阈值的a个第一表情特征值，a≥1。The first obtaining unit 330f is configured to obtain the a first expression feature values whose matching degree is greater than the first threshold, a ≥ 1.
第二匹配单元330g,用于将第二指定特征值或者第三指定特征值与第二特征库中存储的第二表情特征值进行匹配;a second matching unit 330g, configured to match the second specified feature value or the third specified feature value with the second expression feature value stored in the second feature library;
第二获取单元330h,用于获取匹配度大于第二阈值的b个第二表情特征值,b≥1。The second obtaining unit 330h is configured to obtain b second expression feature values whose matching degree is greater than the second threshold, b≥1.
备选确定单元330i,用于将a个第一表情特征值对应的x个表情以及b个第二表情特征值对应的y个表情作为备选表情,x≥a,y≥b。The candidate determining unit 330i is configured to use, as an alternative expression, x expressions corresponding to the a first expression feature values and y expressions corresponding to the b second expression feature values, x≥a, y≥b.
备选排序单元330j，用于根据预设优先级选取至少一个排序条件，根据至少一个排序条件对备选表情进行排序，排序条件包括重复次数、历史使用次数、最近使用时间以及匹配度中的任意一种。The candidate sorting unit 330j is configured to select at least one sorting condition according to the preset priority and sort the candidate expressions according to the at least one sorting condition, the sorting conditions including any one of repetition count, historical usage count, most recent usage time, and matching degree.
表情选取单元330k,用于根据排序结果筛选出一个备选表情,将该备选表情作为需要输入的表情。The expression selection unit 330k is configured to filter out an alternative expression according to the sorting result, and use the candidate expression as an expression to be input.
其中，特征库包括第一特征库和第二特征库，且表情特征值包括第一表情特征值和第二表情特征值。The feature library includes the first feature library and the second feature library, and the expression feature values include the first expression feature value and the second expression feature value.
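The two-library selection path handled by units 330e through 330k can be sketched as follows. This is a simplified illustration under assumptions: the matching function is reduced to exact equality, the repetition count is used as the only tie-breaking condition, and all library contents are hypothetical.

```python
from collections import Counter

def select_expression(first_value, second_value, first_library, second_library,
                      first_threshold=0.7, second_threshold=0.7,
                      match=lambda a, b: 1.0 if a == b else 0.0):
    """Gather the candidate expressions whose matching degree exceeds each
    library's threshold, then let repetition count pick the winner."""
    candidates = []
    for stored_value, expression in first_library.items():
        if match(first_value, stored_value) > first_threshold:
            candidates.append(expression)
    for stored_value, expression in second_library.items():
        if match(second_value, stored_value) > second_threshold:
            candidates.append(expression)
    if not candidates:
        return None  # no matching degree exceeded a threshold: prompt the user
    return Counter(candidates).most_common(1)[0][0]

# Hypothetical first (voice) and second (face) feature libraries.
first_lib = {"happy-voice": "smile", "laugh-voice": "laughing"}
second_lib = {"happy-face": "smile", "pout-face": "pout"}
print(select_expression("happy-voice", "happy-face", first_lib, second_lib))  # smile
```

Here "smile" is matched from both libraries, so its repetition count of 2 makes it the screened-out expression, as in the worked example earlier in this embodiment.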
表情显示模块331,用于将需要输入的表情显示于输入框或者聊天栏中。The expression display module 331 is configured to display an expression that needs to be input in an input box or a chat bar.
可选的,表情输入装置,还包括:信息记录模块、特征记录模块、特征选取模块和特征存储模块。Optionally, the expression input device further includes: an information recording module, a feature recording module, a feature selection module, and a feature storage module.
信息记录模块,用于对于每一个表情,记录用于训练表情的至少一个训练信息。An information recording module for recording at least one training information for training an expression for each expression.
特征记录模块,用于从至少一个训练信息中提取至少一个训练特征值。And a feature recording module, configured to extract at least one training feature value from the at least one training information.
特征选取模块,用于将重复数量最多的训练特征值作为与表情相对应的表情特征值。The feature selection module is configured to use the training feature value with the largest number of repetitions as the expression feature value corresponding to the expression.
特征存储模块,用于将表情和表情特征值的对应关系存储在特征库中。The feature storage module is configured to store the correspondence between the expression and the expression feature value in the feature library.
综上所述，本实施例提供的表情输入装置，通过采集输入信息，从输入信息中提取表情特征值，根据提取到的表情特征值从特征库中选取需要输入的表情，特征库中存储有不同表情特征值与不同表情之间的对应关系；解决了表情输入速度慢且过程复杂的问题；达到了简化表情输入过程，提高表情输入速度的效果。另外，还通过麦克风采集语音输入信息，或者摄像头采集图片形式或者视频输入信息，进而进行表情输入，丰富了表情输入的方式；而且用户还可以自行设定不同表情特征值与不同表情之间的对应关系，充分满足了用户的需求。In summary, the expression input apparatus provided by this embodiment collects input information, extracts an expression feature value from the input information, and selects the expression to be input from the feature library according to the extracted expression feature value, where the feature library stores correspondences between different expression feature values and different expressions; this solves the problem that expression input is slow and the process is complicated, and achieves the effect of simplifying the expression input process and increasing expression input speed. In addition, voice input information is collected through the microphone, or picture or video input information is collected through the camera, for expression input, which enriches the ways of inputting expressions; moreover, the user can set the correspondences between different expression feature values and different expressions, which fully meets users' needs.
需要说明的是：上述实施例提供的表情输入装置在输入表情时，仅以上述各功能模块的划分进行举例说明，实际应用中，可以根据需要而将上述功能分配由不同的功能模块完成，即将设备的内部结构划分成不同的功能模块，以完成以上描述的全部或者部分功能。另外，上述实施例提供的表情输入装置与表情输入方法的方法实施例属于同一构思，其具体实现过程详见方法实施例，这里不再赘述。It should be noted that, when the expression input apparatus provided by the above embodiment inputs an expression, the division into the above functional modules is used only as an example; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above. In addition, the expression input apparatus provided by the above embodiment and the method embodiments of the expression input method belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not repeated here.
请参考图5，其示出了本发明的一个实施例中使用的电子设备500的说明性电子设备体系结构。所述电子设备500可为手机、平板电脑、电子书阅读器、MP3播放器、MP4播放器、膝上型便携计算机、台式计算机以及智能电视等等。所述电子设备500包括中央处理单元（CPU）501、包括随机存取存储器（RAM）502和只读存储器（ROM）503的系统存储器504，以及连接系统存储器504和中央处理单元501的系统总线505。所述电子设备500还包括帮助电子设备内的各个器件之间传输信息的基本输入/输出系统（I/O系统）506，和用于存储操作系统513、应用程序514和其他程序模块515的大容量存储设备507。Referring to FIG. 5, an illustrative architecture of an electronic device 500 used in one embodiment of the present invention is shown. The electronic device 500 may be a mobile phone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, a desktop computer, a smart TV, or the like. The electronic device 500 includes a central processing unit (CPU) 501, a system memory 504 including a random access memory (RAM) 502 and a read-only memory (ROM) 503, and a system bus 505 connecting the system memory 504 and the central processing unit 501. The electronic device 500 also includes a basic input/output system (I/O system) 506 that facilitates the transfer of information between components within the electronic device, and a mass storage device 507 for storing an operating system 513, application programs 514, and other program modules 515.
所述基本输入/输出系统506包括有用于显示信息的显示器508和用于用户输入信息的诸如鼠标、键盘之类的输入设备509。其中所述显示器508和输入设备509都通过连接到系统总线505的输入输出控制器510连接到中央处理单元501。所述基本输入/输出系统506还可以包括输入输出控制器510以用于接收和处理来自键盘、鼠标、或电子触控笔等多个其他设备的输入。类似地,输入输出控制器510还提供输出到显示屏、打印机或其他类型的输出设备。The basic input/output system 506 includes a display 508 for displaying information and an input device 509 such as a mouse or keyboard for user input of information. Both the display 508 and the input device 509 are connected to the central processing unit 501 via an input and output controller 510 that is coupled to the system bus 505. The basic input/output system 506 can also include an input and output controller 510 for receiving and processing input from a plurality of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input and output controller 510 also provides output to a display screen, printer, or other type of output device.
所述大容量存储设备507通过连接到系统总线505的大容量存储控制器(未示出)连接到中央处理单元501。所述大容量存储设备507及其相关联的电子设备可读介质为电子设备500提供非易失性存储。也就是说,所述大容量存储设备507可以包括诸如硬盘或者CD-ROM驱动器之类的电子设备可读介质(未示出)。The mass storage device 507 is connected to the central processing unit 501 by a mass storage controller (not shown) connected to the system bus 505. The mass storage device 507 and its associated electronic device readable medium provide non-volatile storage for the electronic device 500. That is, the mass storage device 507 may include an electronic device readable medium (not shown) such as a hard disk or a CD-ROM drive.
不失一般性，所述计算机可读介质可以包括计算机存储介质和通信介质。计算机存储介质包括以用于存储诸如计算机可读指令、数据结构、程序模块或其他数据等信息的任何方法或技术实现的易失性和非易失性、可移动和不可移动介质。计算机存储介质包括RAM、ROM、EPROM、EEPROM、闪存或其他固态存储技术，CD-ROM、DVD或其他光学存储、磁带盒、磁带、磁盘存储或其他磁性存储设备。当然，本领域技术人员可知所述计算机存储介质不局限于上述几种。Without loss of generality, the computer readable medium may include computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Computer storage media include RAM, ROM, EPROM, EEPROM, flash memory or other solid-state storage technologies; CD-ROM, DVD, or other optical storage; and tape cartridges, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage medium is not limited to the above.
根据本发明的各种实施例，所述电子设备500还可以通过诸如因特网等网络连接到网络上的远程计算机运行。也即电子设备500可以通过连接在所述系统总线505上的网络接口单元511连接到网络512，或者说，也可以使用网络接口单元511来连接到其他类型的网络或远程计算机系统（未示出）。According to various embodiments of the present invention, the electronic device 500 may also operate by connecting to a remote computer on a network such as the Internet. That is, the electronic device 500 can be connected to the network 512 through a network interface unit 511 connected to the system bus 505; in other words, the network interface unit 511 can also be used to connect to other types of networks or remote computer systems (not shown).
图6为本发明实施例所涉及的电子设备的结构示意图,该电子设备可以用于实施上述实施例中提供的表情输入方法。具体来讲:FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device may be used to implement the expression input method provided in the foregoing embodiment. Specifically:
电子设备600可以包括RF(Radio Frequency,射频)电路110、包括有一个或一个以上计算机可读存储介质的存储器120、输入单元130、显示单元140、传感器150、音频电路160、WiFi(wireless fidelity,无线保真)模块170、包括有一个或者一个以上处理核心的处理器180、以及电源190等部件。本领域技术人员可以理解，图6中示出的电子设备结构并不构成对电子设备的限定，可以包括比图示更多或更少的部件，或者组合某些部件，或者不同的部件布置。其中：The electronic device 600 may include an RF (Radio Frequency) circuit 110, a memory 120 including one or more computer readable storage media, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a WiFi (Wireless Fidelity) module 170, a processor 180 including one or more processing cores, a power supply 190, and other components. It will be understood by those skilled in the art that the electronic device structure shown in FIG. 6 does not constitute a limitation on the electronic device, which may include more or fewer components than those illustrated, combine some components, or use a different arrangement of components. Wherein:
RF电路110可用于收发信息或通话过程中，信号的接收和发送，特别地，将基站的下行信息接收后，交由一个或者一个以上处理器180处理；另外，将涉及上行的数据发送给基站。通常，RF电路110包括但不限于天线、至少一个放大器、调谐器、一个或多个振荡器、用户身份模块(SIM)卡、收发信机、耦合器、LNA(Low Noise Amplifier,低噪声放大器)、双工器等。此外，RF电路110还可以通过无线通信与网络和其他设备通信。所述无线通信可以使用任一通信标准或协议，包括但不限于GSM(Global System of Mobile communication,全球移动通讯系统)、GPRS(General Packet Radio Service,通用分组无线服务)、CDMA(Code Division Multiple Access,码分多址)、WCDMA(Wideband Code Division Multiple Access,宽带码分多址)、LTE(Long Term Evolution,长期演进)、电子邮件、SMS(Short Messaging Service,短消息服务)等。 The RF circuit 110 can be used for receiving and transmitting signals during the sending and receiving of information or during a call. In particular, after receiving downlink information from a base station, it hands the information to one or more processors 180 for processing; in addition, it sends uplink data to the base station. Generally, the RF circuit 110 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and the like. In addition, the RF circuit 110 can also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System of Mobile communication), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), e-mail, SMS (Short Messaging Service), and the like.
存储器120可用于存储软件程序以及模块，处理器180通过运行存储在存储器120的软件程序以及模块，从而执行各种功能应用以及数据处理。存储器120可主要包括存储程序区和存储数据区，其中，存储程序区可存储操作系统、至少一个功能所需的应用程序（比如声音播放功能、图像播放功能等）等；存储数据区可存储根据电子设备600的使用所创建的数据（比如音频数据、电话本等）等。此外，存储器120可以包括高速随机存取存储器，还可以包括非易失性存储器，例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。相应地，存储器120还可以包括存储器控制器，以提供处理器180和输入单元130对存储器120的访问。The memory 120 can be used to store software programs and modules, and the processor 180 executes various functional applications and data processing by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required for at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the electronic device 600 (such as audio data or a phone book). Moreover, the memory 120 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device. Accordingly, the memory 120 may also include a memory controller to provide the processor 180 and the input unit 130 with access to the memory 120.
输入单元130可用于接收输入的数字或字符信息，以及产生与用户设置以及功能控制有关的键盘、鼠标、操作杆、光学或者轨迹球信号输入。具体地，输入单元130可包括触敏表面131以及其他输入设备132。触敏表面131，也称为触摸显示屏或者触控板，可收集用户在其上或附近的触摸操作（比如用户使用手指、触笔等任何适合的物体或附件在触敏表面131上或在触敏表面131附近的操作），并根据预先设定的程式驱动相应的连接装置。可选的，触敏表面131可包括触摸检测装置和触摸控制器两个部分。其中，触摸检测装置检测用户的触摸方位，并检测触摸操作带来的信号，将信号传送给触摸控制器；触摸控制器从触摸检测装置上接收触摸信息，并将它转换成触点坐标，再送给处理器180，并能接收处理器180发来的命令并加以执行。此外，可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触敏表面131。除了触敏表面131，输入单元130还可以包括其他输入设备132。具体地，其他输入设备132可以包括但不限于物理键盘、功能键（比如音量控制按键、开关按键等）、轨迹球、鼠标、操作杆等中的一种或多种。The input unit 130 can be configured to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. Specifically, the input unit 130 can include a touch-sensitive surface 131 as well as other input devices 132. The touch-sensitive surface 131, also referred to as a touch display or touchpad, can collect touch operations by the user on or near it (such as operations performed by the user on or near the touch-sensitive surface 131 using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection apparatus according to a preset program. Optionally, the touch-sensitive surface 131 can include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, and sends them to the processor 180, and it can also receive commands from the processor 180 and execute them. In addition, the touch-sensitive surface 131 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave types.
In addition to the touch-sensitive surface 131, the input unit 130 can also include other input devices 132. Specifically, other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
显示单元140可用于显示由用户输入的信息或提供给用户的信息以及电子设备600的各种图形用户接口，这些图形用户接口可以由图形、文本、图标、视频和其任意组合来构成。显示单元140可包括显示面板141，可选的，可以采用LCD(Liquid Crystal Display,液晶显示器)、OLED(Organic Light-Emitting Diode,有机发光二极管)等形式来配置显示面板141。进一步的，触敏表面131可覆盖显示面板141，当触敏表面131检测到在其上或附近的触摸操作后，传送给处理器180以确定触摸事件的类型，随后处理器180根据触摸事件的类型在显示面板141上提供相应的视觉输出。虽然在图6中，触敏表面131与显示面板141是作为两个独立的部件来实现输入和输出功能，但是在某些实施例中，可以将触敏表面131与显示面板141集成而实现输入和输出功能。The display unit 140 can be used to display information entered by the user or information provided to the user, as well as the various graphical user interfaces of the electronic device 600, which can be composed of graphics, text, icons, video, and any combination thereof. The display unit 140 may include a display panel 141; optionally, the display panel 141 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface 131 may cover the display panel 141; when the touch-sensitive surface 131 detects a touch operation on or near it, the operation is transmitted to the processor 180 to determine the type of the touch event, and the processor 180 then provides a corresponding visual output on the display panel 141 according to the type of the touch event. Although in FIG. 6 the touch-sensitive surface 131 and the display panel 141 are implemented as two separate components to provide the input and output functions, in some embodiments the touch-sensitive surface 131 can be integrated with the display panel 141 to provide the input and output functions.
电子设备600还可包括至少一种传感器150，比如光传感器、运动传感器以及其他传感器。具体地，光传感器可包括环境光传感器及接近传感器，其中，环境光传感器可根据环境光线的明暗来调节显示面板141的亮度，接近传感器可在电子设备600移动到耳边时，关闭显示面板141和/或背光。作为运动传感器的一种，重力加速度传感器可检测各个方向上（一般为三轴）加速度的大小，静止时可检测出重力的大小及方向，可用于识别手机姿态的应用（比如横竖屏切换、相关游戏、磁力计姿态校准）、振动识别相关功能（比如计步器、敲击）等；至于电子设备600还可配置的陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器，在此不再赘述。 The electronic device 600 may also include at least one sensor 150, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 141 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 141 and/or the backlight when the electronic device 600 is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in all directions (usually three axes), can detect the magnitude and direction of gravity when stationary, and can be used in applications for recognizing the posture of a mobile phone (such as switching between landscape and portrait modes, related games, and magnetometer posture calibration) and in vibration-recognition-related functions (such as a pedometer or tapping). As for other sensors that may also be configured on the electronic device 600, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, details are not described herein.
音频电路160、扬声器161，传声器162可提供用户与电子设备600之间的音频接口。音频电路160可将接收到的音频数据转换后的电信号，传输到扬声器161，由扬声器161转换为声音信号输出；另一方面，传声器162将收集的声音信号转换为电信号，由音频电路160接收后转换为音频数据，再将音频数据输出处理器180处理后，经RF电路110以发送给比如另一电子设备，或者将音频数据输出至存储器120以便进一步处理。音频电路160还可能包括耳塞插孔，以提供外设耳机与电子设备600的通信。The audio circuit 160, the speaker 161, and the microphone 162 can provide an audio interface between the user and the electronic device 600. The audio circuit 160 can convert received audio data into an electrical signal and transmit it to the speaker 161, which converts it into a sound signal for output; on the other hand, the microphone 162 converts a collected sound signal into an electrical signal, which is received by the audio circuit 160 and converted into audio data. After the audio data is processed by the processor 180, it is sent, for example, to another electronic device via the RF circuit 110, or output to the memory 120 for further processing. The audio circuit 160 may also include an earbud jack to provide communication between peripheral earphones and the electronic device 600.
WiFi属于短距离无线传输技术，电子设备600通过WiFi模块170可以帮助用户收发电子邮件、浏览网页和访问流式媒体等，它为用户提供了无线的宽带互联网访问。虽然图6示出了WiFi模块170，但是可以理解的是，其并不属于电子设备600的必须构成，完全可以根据需要在不改变发明的本质的范围内而省略。WiFi is a short-range wireless transmission technology. Through the WiFi module 170, the electronic device 600 can help users send and receive e-mail, browse web pages, access streaming media, and the like, providing users with wireless broadband Internet access. Although FIG. 6 shows the WiFi module 170, it can be understood that it is not an essential part of the electronic device 600 and may be omitted as needed without changing the essence of the invention.
处理器180是电子设备600的控制中心，利用各种接口和线路连接整个手机的各个部分，通过运行或执行存储在存储器120内的软件程序和/或模块，以及调用存储在存储器120内的数据，执行电子设备600的各种功能和处理数据，从而对手机进行整体监控。可选的，处理器180可包括一个或多个处理核心；优选的，处理器180可集成应用处理器和调制解调处理器，其中，应用处理器主要处理操作系统、用户界面和应用程序等，调制解调处理器主要处理无线通信。可以理解的是，上述调制解调处理器也可以不集成到处理器180中。The processor 180 is the control center of the electronic device 600, connecting the various parts of the entire mobile phone through various interfaces and lines. By running or executing software programs and/or modules stored in the memory 120 and calling data stored in the memory 120, it executes the various functions of the electronic device 600 and processes data, thereby monitoring the mobile phone as a whole. Optionally, the processor 180 may include one or more processing cores; preferably, the processor 180 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the above modem processor may also not be integrated into the processor 180.
电子设备600还包括给各个部件供电的电源190（比如电池），优选的，电源可以通过电源管理系统与处理器180逻辑相连，从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。电源190还可以包括一个或一个以上的直流或交流电源、再充电系统、电源故障检测电路、电源转换器或者逆变器、电源状态指示器等任意组件。The electronic device 600 also includes a power supply 190 (such as a battery) for powering the various components. Preferably, the power supply can be logically coupled to the processor 180 through a power management system, so that functions such as managing charging, discharging, and power consumption are implemented through the power management system. The power supply 190 may also include any components such as one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.
尽管未示出，电子设备600还可以包括摄像头、蓝牙模块等，在此不再赘述。具体在本实施例中，电子设备的显示单元是触摸屏显示器，电子设备还包括有存储器，以及一个或者一个以上的程序，其中一个或者一个以上程序存储于存储器中，且经配置以由一个或者一个以上处理器执行，所述一个或者一个以上程序包含用于进行上述图1对应的实施例或图2A对应的实施例中所述的操作的指令。Although not shown, the electronic device 600 may further include a camera, a Bluetooth module, and the like, and details are not described herein. Specifically, in this embodiment, the display unit of the electronic device is a touch screen display. The electronic device further includes a memory and one or more programs, where the one or more programs are stored in the memory and are configured to be executed by one or more processors; the one or more programs contain instructions for performing the operations described in the embodiment corresponding to FIG. 1 above or the embodiment corresponding to FIG. 2A.
作为另一方面，本发明再一实施例还提供了一种计算机可读存储介质，该计算机可读存储介质可以是上述实施例中的存储器中所包含的计算机可读存储介质；也可以是单独存在，未装配入终端中的计算机可读存储介质。所述计算机可读存储介质存储有一个或者一个以上程序，所述一个或者一个以上程序被一个或者一个以上的处理器用来执行一个表情输入方法，该方法包括： In another aspect, still another embodiment of the present invention provides a computer readable storage medium, which may be the computer readable storage medium included in the memory in the above embodiment, or may be a computer readable storage medium that exists separately and is not assembled into the terminal. The computer readable storage medium stores one or more programs, the one or more programs being used by one or more processors to perform an expression input method, the method comprising:
采集输入信息;Collect input information;
从输入信息中提取表情特征值;Extracting expression feature values from the input information;
根据表情特征值从特征库中选取需要输入的表情,特征库中存储有不同表情特征值与不同表情之间的对应关系。The expressions to be input are selected from the feature library according to the expression feature values, and the correspondence between the different expression feature values and the different expressions is stored in the feature library.
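The three-step flow above (collect input, extract an expression feature value, look it up in a feature library) can be sketched as follows. This is a minimal illustration only: the feature library contents, feature names, and the trivial string-based "extraction" are invented assumptions, not the patent's actual data formats or recognition methods.

```python
# Hypothetical feature library mapping expression feature values to
# expressions; real entries would come from speech/face recognition.
FEATURE_LIBRARY = {
    "smile": ":smile:",
    "laugh": ":joy:",
    "cry":   ":cry:",
}

def extract_feature_value(input_info):
    # Placeholder extraction: a real system would run speech or image
    # recognition here; we simply normalize the collected text.
    return input_info.strip().lower()

def select_expression(input_info):
    # Step 2: extract the expression feature value from the input.
    feature_value = extract_feature_value(input_info)
    # Step 3: select the expression stored for that feature value, if any.
    return FEATURE_LIBRARY.get(feature_value)

print(select_expression("  Smile "))
```

The lookup returns `None` when no stored feature value corresponds to the input, which is where the threshold-based fuzzy matching described later would take over.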
优选地,从输入信息中提取表情特征值,包括:Preferably, extracting the expression feature value from the input information comprises:
若输入信息包括语音输入信息,则对语音输入信息进行语音识别,得到第一指定特征值;If the input information includes voice input information, performing voice recognition on the voice input information to obtain a first specified feature value;
若输入信息包括图片输入信息,则在图片输入信息中确定人脸区域,从人脸区域中提取第二指定特征值;If the input information includes picture input information, determining a face area in the picture input information, and extracting a second specified feature value from the face area;
若输入信息包括视频输入信息,则从视频输入信息中提取第三指定特征值。If the input information includes video input information, the third specified feature value is extracted from the video input information.
优选地,当表情特征值为第一指定特征值、第二指定特征值以及第三指定特征值中的任意一种时,根据表情特征值从特征库中选取需要输入的表情,包括:Preferably, when the expression feature value is any one of the first specified feature value, the second specified feature value, and the third specified feature value, the expression to be input is selected from the feature library according to the expression feature value, including:
将表情特征值与特征库中存储的表情特征值进行匹配;Matching the expression feature values with the expression feature values stored in the feature library;
将匹配度大于预定阈值的m个表情特征值对应的n个表情作为备选表情,n≥m≥1;n expressions corresponding to m expression feature values whose matching degree is greater than a predetermined threshold are used as alternative expressions, n≥m≥1;
根据预设优先级选取至少一个排序条件,根据至少一个排序条件对n个备选表情进行排序,排序条件包括历史使用次数、最近使用时间以及匹配度中的任意一种;Selecting at least one sorting condition according to a preset priority, and sorting the n candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of historical usage times, recent usage time, and matching degree;
根据排序结果筛选出一个备选表情,将备选表情作为需要输入的表情。An alternative expression is filtered according to the sorting result, and the alternative expression is used as an expression to be input.
优选地,当表情特征值包括第一指定特征值,且还包括第二指定特征值或者第三指定特征值时,根据表情特征值从特征库中选取需要输入的表情,包括:Preferably, when the expression feature value includes the first specified feature value, and further includes the second specified feature value or the third specified feature value, selecting an expression to be input from the feature library according to the expression feature value includes:
将第一指定特征值与第一特征库中存储的第一表情特征值进行匹配;Matching the first specified feature value with the first expression feature value stored in the first feature library;
获取匹配度大于第一阈值的a个第一表情特征值,a≥1;Obtaining a first expression feature value whose matching degree is greater than the first threshold, a≥1;
将第二指定特征值或者第三指定特征值与第二特征库中存储的第二表情特征值进行匹配；The second specified feature value or the third specified feature value is matched with the second expression feature value stored in the second feature library;
获取匹配度大于第二阈值的b个第二表情特征值,b≥1;Obtaining b second expression feature values whose matching degree is greater than a second threshold, b≥1;
将a个第一表情特征值对应的x个表情以及b个第二表情特征值对应的y个表情作为备选表情,x≥a,y≥b;The x expressions corresponding to the a first expression feature value and the y expressions corresponding to the b second expression feature values are used as alternative expressions, x≥a, y≥b;
根据预设优先级选取至少一个排序条件,根据至少一个排序条件对备选表情进行排序,排序条件包括重复次数、历史使用次数、最近使用时间以及匹配度中的任意一种;Selecting at least one sorting condition according to a preset priority, and sorting the candidate expressions according to at least one sorting condition, the sorting condition includes any one of a repetition number, a history usage count, a recent usage time, and a matching degree;
根据排序结果筛选出一个备选表情,将备选表情作为需要输入的表情;Filtering an alternative expression according to the sorting result, and using the alternative expression as an expression to be input;
其中,特征库包括第一特征库和第二特征库,且表情特征值包括第一表情特征值和第二表情特征值。The feature library includes a first feature library and a second feature library, and the expression feature values include a first expression feature value and a second expression feature value.
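A sketch of this two-library variant: candidates matched in the first library (e.g. from speech features) and in the second (e.g. from facial features) are pooled, and an expression proposed by both libraries accumulates a higher repetition count and ranks first. The library contents and the assumption that threshold filtering has already happened are illustrative only.

```python
from collections import Counter

# Hypothetical first/second feature libraries.
FIRST_LIBRARY  = {"laugh_sound": ":joy:", "sigh_sound": ":weary:"}
SECOND_LIBRARY = {"smiling_face": ":joy:", "frowning_face": ":frown:"}

def pick_expression(first_hits, second_hits):
    # first_hits / second_hits: feature values already matched above the
    # first and second thresholds in the respective libraries.
    tally = Counter()
    for fv in first_hits:
        tally[FIRST_LIBRARY[fv]] += 1
    for fv in second_hits:
        tally[SECOND_LIBRARY[fv]] += 1
    # Rank candidate expressions by repetition count (how many matched
    # feature values proposed each expression) and keep the top one.
    ranked = tally.most_common()
    return ranked[0][0] if ranked else None

print(pick_expression(["laugh_sound"], ["smiling_face", "frowning_face"]))
```

Here the repetition count acts as the highest-priority sorting condition; ties would fall through to history use count, recency, or match degree as in the single-library case.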
优选地,根据表情特征值从特征库中选取需要输入的表情之前,还包括:Preferably, before selecting an expression to be input from the feature library according to the expression feature value, the method further includes:
采集电子设备周围的环境信息,环境信息包括时间信息、环境音量信息、环境光强信息以及环境图像信息中的至少一种;Collecting environment information around the electronic device, where the environment information includes at least one of time information, environment volume information, ambient light intensity information, and environment image information;
根据环境信息确定当前使用环境;Determine the current usage environment based on environmental information;
从至少一个备选特征库中选取与当前使用环境对应的备选特征库,将备选特征库作为特征库。The candidate feature library corresponding to the current use environment is selected from the at least one candidate feature library, and the candidate feature library is used as the feature library.
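The environment-dependent library selection above can be sketched as follows. The classification rule and library contents are toy assumptions; a real implementation could combine time, ambient volume, light intensity, and environment images in any way.

```python
# Hypothetical candidate feature libraries keyed by usage environment.
CANDIDATE_LIBRARIES = {
    "quiet_night": {"yawn": ":sleepy:"},
    "noisy_day":   {"shout": ":mega:"},
}

def classify_environment(hour, ambient_volume_db):
    # Toy rule: late hours with low ambient volume -> "quiet_night".
    if (hour >= 22 or hour < 6) and ambient_volume_db < 40:
        return "quiet_night"
    return "noisy_day"

def select_feature_library(hour, ambient_volume_db):
    # Pick the candidate library matching the current usage environment;
    # it then serves as the feature library for the matching step.
    env = classify_environment(hour, ambient_volume_db)
    return CANDIDATE_LIBRARIES[env]

print(select_feature_library(23, 30))
```

Selecting a smaller, context-appropriate library before matching narrows the candidate set, which is the point of this preparatory step.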
优选地,采集输入信息,包括:Preferably, the input information is collected, including:
若输入信息包括语音输入信息,则通过麦克风采集语音输入信息;If the input information includes voice input information, the voice input information is collected through the microphone;
若输入信息包括图片输入信息或者视频输入信息,则通过摄像头采集图片输入信息或者视频输入信息。If the input information includes picture input information or video input information, the picture input information or the video input information is collected through the camera.
优选地,根据表情特征值从特征库中选取需要输入的表情之前,还包括:Preferably, before selecting an expression to be input from the feature library according to the expression feature value, the method further includes:
对于每一个表情,记录用于训练表情的至少一个训练信息;For each expression, record at least one training information for training the expression;
从至少一个训练信息中提取至少一个训练特征值;Extracting at least one training feature value from the at least one training information;
将重复数量最多的训练特征值作为与表情相对应的表情特征值;The training feature value with the largest number of repetitions is used as the expression feature value corresponding to the expression;
将表情和表情特征值的对应关系存储在特征库中。The correspondence between the expression and the expression feature value is stored in the feature library.
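The training steps above reduce to a frequency count: among the feature values extracted from an expression's training samples, the most repeated one is stored as that expression's feature value. A minimal sketch, with invented sample values:

```python
from collections import Counter

def train_expression(expression, training_feature_values, feature_library):
    # Keep the training feature value that repeats most often and store
    # the feature-value -> expression correspondence in the library.
    most_common_value, _count = Counter(training_feature_values).most_common(1)[0]
    feature_library[most_common_value] = expression
    return feature_library

lib = {}
train_expression(":smile:", ["smile", "smile", "grin"], lib)
print(lib)
```

Taking the mode of the training feature values makes the stored mapping robust to occasional outlier samples.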
优选地,根据表情特征值从特征库中选取需要输入的表情之后,还包括: Preferably, after selecting an expression to be input from the feature library according to the expression feature value, the method further includes:
将需要输入的表情显示于输入框或者聊天栏中。Display the expression you want to enter in the input box or chat bar.
本发明实施例提供的计算机可读存储介质，通过采集输入信息，从输入信息中提取表情特征值，根据提取到的表情特征值从特征库中选取需要输入的表情，特征库中存储有不同表情特征值与不同表情之间的对应关系；解决了表情输入速度慢且过程复杂的问题；达到了简化表情输入过程，提高表情输入速度的效果。The computer readable storage medium provided by the embodiment of the present invention collects input information, extracts an expression feature value from the input information, and selects an expression to be input from a feature library according to the extracted expression feature value, where the feature library stores correspondences between different expression feature values and different expressions. This solves the problem that expression input is slow and the process is complicated, achieving the effects of simplifying the expression input process and improving the expression input speed.
应当理解的是，在本文中使用的，除非上下文清楚地支持例外情况，单数形式“一个”（“a”、“an”、“the”）旨在也包括复数形式。还应当理解的是，在本文中使用的“和/或”是指包括一个或者一个以上相关联地列出的项目的任意和所有可能组合。It is to be understood that, as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly supports an exception. It should also be understood that "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items.
上述本发明实施例序号仅仅为了描述,不代表实施例的优劣。The serial numbers of the embodiments of the present invention are merely for the description, and do not represent the advantages and disadvantages of the embodiments.
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。A person skilled in the art may understand that all or part of the steps of implementing the above embodiments may be completed by hardware, or may be instructed by a program to execute related hardware, and the program may be stored in a computer readable storage medium. The storage medium mentioned may be a read only memory, a magnetic disk or an optical disk or the like.
以上所述仅为本发明的较佳实施例，并不用以限制本发明，凡在本发明的精神和原则之内，所作的任何修改、等同替换、改进等，均应包含在本发明的保护范围之内。 The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (24)

  1. 一种表情输入方法,其特征在于,所述方法包括:An expression input method, characterized in that the method comprises:
    采集输入信息;Collect input information;
    从所述输入信息中提取表情特征值;Extracting an expression feature value from the input information;
    根据所述表情特征值从特征库中选取需要输入的表情,所述特征库中存储有不同表情特征值与不同表情之间的对应关系。Selecting an expression to be input from the feature library according to the expression feature value, wherein the feature library stores a correspondence between different expression feature values and different expressions.
  2. 根据权利要求1所述的方法,其特征在于,所述从所述输入信息中提取表情特征值,包括:The method according to claim 1, wherein the extracting the expression feature value from the input information comprises:
    若所述输入信息包括语音输入信息,则对所述语音输入信息进行语音识别,得到第一指定特征值;If the input information includes voice input information, performing voice recognition on the voice input information to obtain a first specified feature value;
    若所述输入信息包括图片输入信息,则在所述图片输入信息中确定人脸区域,从所述人脸区域中提取第二指定特征值;If the input information includes picture input information, determining a face area in the picture input information, and extracting a second specified feature value from the face area;
    若所述输入信息包括视频输入信息,则从所述视频输入信息中提取第三指定特征值。If the input information includes video input information, extracting a third specified feature value from the video input information.
  3. 根据权利要求2所述的方法,其特征在于,当所述表情特征值为所述第一指定特征值、所述第二指定特征值以及所述第三指定特征值中的任意一种时,所述根据所述表情特征值从特征库中选取需要输入的表情,包括:The method according to claim 2, wherein when the expression feature value is any one of the first specified feature value, the second specified feature value, and the third specified feature value, The selecting an expression to be input from the feature library according to the expression feature value includes:
    将所述表情特征值与所述特征库中存储的表情特征值进行匹配;Matching the expression feature value with the expression feature value stored in the feature library;
    将匹配度大于预定阈值的m个表情特征值对应的n个表情作为备选表情,n≥m≥1;n expressions corresponding to m expression feature values whose matching degree is greater than a predetermined threshold are used as alternative expressions, n≥m≥1;
    根据预设优先级选取至少一个排序条件,根据所述至少一个排序条件对n个备选表情进行排序,所述排序条件包括历史使用次数、最近使用时间以及所述匹配度中的任意一种; Selecting at least one sorting condition according to the preset priority, and sorting the n candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of historical usage times, latest usage time, and the matching degree;
    根据排序结果筛选出一个备选表情,将所述备选表情作为所述需要输入的表情。An alternative expression is filtered according to the sorting result, and the candidate expression is used as the expression to be input.
  4. 根据权利要求2所述的方法,其特征在于,当所述表情特征值包括所述第一指定特征值,且还包括所述第二指定特征值或者所述第三指定特征值时,所述根据所述表情特征值从特征库中选取需要输入的表情,包括:The method according to claim 2, wherein when the expression feature value includes the first specified feature value and further includes the second specified feature value or the third specified feature value, Selecting an expression to be input from the feature library according to the expression feature value, including:
    将所述第一指定特征值与第一特征库中存储的第一表情特征值进行匹配;Matching the first specified feature value with a first expression feature value stored in the first feature library;
    获取匹配度大于第一阈值的a个第一表情特征值,a≥1;Obtaining a first expression feature value whose matching degree is greater than the first threshold, a≥1;
    将所述第二指定特征值或者所述第三指定特征值与第二特征库中存储的第二表情特征值进行匹配;Matching the second specified feature value or the third specified feature value with a second expression feature value stored in the second feature library;
    获取匹配度大于第二阈值的b个第二表情特征值,b≥1;Obtaining b second expression feature values whose matching degree is greater than a second threshold, b≥1;
    将a个第一表情特征值对应的x个表情以及b个第二表情特征值对应的y个表情作为备选表情,x≥a,y≥b;The x expressions corresponding to the a first expression feature value and the y expressions corresponding to the b second expression feature values are used as alternative expressions, x≥a, y≥b;
    根据预设优先级选取至少一个排序条件，根据所述至少一个排序条件对所述备选表情进行排序，所述排序条件包括重复次数、历史使用次数、最近使用时间以及所述匹配度中的任意一种；Selecting at least one sorting condition according to a preset priority, and sorting the candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of the number of repetitions, the number of historical uses, the most recent usage time, and the matching degree;
    根据排序结果筛选出一个备选表情,将所述备选表情作为所述需要输入的表情;Filtering an alternative expression according to the sorting result, and using the candidate expression as the expression to be input;
    其中,所述特征库包括所述第一特征库和所述第二特征库,且所述表情特征值包括所述第一表情特征值和所述第二表情特征值。The feature library includes the first feature library and the second feature library, and the expression feature value includes the first expression feature value and the second expression feature value.
  5. 根据权利要求1所述的方法,其特征在于,所述根据所述表情特征值从特征库中选取需要输入的表情之前,还包括:The method according to claim 1, wherein before the selecting the expression to be input from the feature library according to the expression feature value, the method further comprises:
    采集电子设备周围的环境信息,所述环境信息包括时间信息、环境音量信息、环境光强信息以及环境图像信息中的至少一种;Collecting environment information around the electronic device, the environment information including at least one of time information, environment volume information, ambient light intensity information, and environment image information;
    根据所述环境信息确定当前使用环境; Determining a current use environment according to the environmental information;
    从至少一个备选特征库中选取与所述当前使用环境对应的备选特征库,将所述备选特征库作为所述特征库。An candidate feature library corresponding to the current use environment is selected from at least one candidate feature library, and the candidate feature library is used as the feature library.
  6. 根据权利要求2所述的方法,其特征在于,所述采集输入信息,包括:The method of claim 2, wherein the collecting input information comprises:
    若所述输入信息包括所述语音输入信息,则通过麦克风采集所述语音输入信息;If the input information includes the voice input information, collecting the voice input information through a microphone;
    若所述输入信息包括所述图片输入信息或者所述视频输入信息,则通过摄像头采集所述图片输入信息或者所述视频输入信息。If the input information includes the picture input information or the video input information, the picture input information or the video input information is collected by a camera.
  7. 根据权利要求1所述的方法,其特征在于,所述根据所述表情特征值从特征库中选取需要输入的表情之前,还包括:The method according to claim 1, wherein before the selecting the expression to be input from the feature library according to the expression feature value, the method further comprises:
    对于每一个表情,记录用于训练所述表情的至少一个训练信息;For each of the expressions, recording at least one training information for training the expression;
    从所述至少一个训练信息中提取至少一个训练特征值;Extracting at least one training feature value from the at least one training information;
    将重复数量最多的训练特征值作为与所述表情相对应的表情特征值;The training feature value having the largest number of repetitions is used as the expression feature value corresponding to the expression;
    将所述表情和所述表情特征值的对应关系存储在所述特征库中。A correspondence between the expression and the expression feature value is stored in the feature library.
  8. 根据权利要求1至7中任一权利要求所述的方法,其特征在于,所述根据所述表情特征值从特征库中选取需要输入的表情之后,还包括:The method according to any one of claims 1 to 7, further comprising: after selecting the expression to be input from the feature library according to the expression feature value, further comprising:
    将所述需要输入的表情显示于输入框或者聊天栏中。Display the expression that needs to be input in the input box or chat bar.
  9. 一种表情输入装置,其特征在于,所述装置包括:An expression input device, characterized in that the device comprises:
    第一信息采集模块,用于采集输入信息;a first information collecting module, configured to collect input information;
    特征提取模块,用于从所述输入信息中提取表情特征值;a feature extraction module, configured to extract an expression feature value from the input information;
    表情选取模块,用于根据所述表情特征值从特征库中选取需要输入的表情,所述特征库中存储有不同表情特征值与不同表情之间的对应关系。 The expression selection module is configured to select an expression to be input from the feature library according to the expression feature value, and the feature library stores a correspondence between different expression feature values and different expressions.
  10. 根据权利要求9所述的装置,其特征在于,所述特征提取模块,包括下述至少一个提取单元:第一提取单元,第二提取单元,第三提取单元;The device according to claim 9, wherein the feature extraction module comprises at least one extraction unit: a first extraction unit, a second extraction unit, and a third extraction unit;
    所述第一提取单元,用于若所述输入信息包括语音输入信息,则对所述语音输入信息进行语音识别,得到第一指定特征值;The first extracting unit is configured to perform voice recognition on the voice input information to obtain a first specified feature value, if the input information includes voice input information;
    所述第二提取单元,用于若所述输入信息包括图片输入信息,则在所述图片输入信息中确定人脸区域,从所述人脸区域中提取第二指定特征值;The second extracting unit is configured to: if the input information includes picture input information, determine a face area in the picture input information, and extract a second specified feature value from the face area;
    所述第三提取单元,用于若所述输入信息包括视频输入信息,则从所述视频输入信息中提取第三指定特征值。The third extracting unit is configured to: if the input information includes video input information, extract a third specified feature value from the video input information.
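The three extraction units dispatch on the type of input information. A sketch under the assumption that the speech, face, and video recognizers are caller-supplied callables (all names here are illustrative):

```python
def extract_feature_values(input_info, recognize_speech, face_feature, video_feature):
    """Dispatch on the type of input information and collect the
    specified feature values, as in the claimed extraction units."""
    values = {}
    if "voice" in input_info:
        # Voice recognition on the voice input yields the first specified value.
        values["first"] = recognize_speech(input_info["voice"])
    if "picture" in input_info:
        # The extractor is assumed to locate the face region in the
        # picture and extract the second specified value from it.
        values["second"] = face_feature(input_info["picture"])
    if "video" in input_info:
        values["third"] = video_feature(input_info["video"])
    return values
```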
  11. The apparatus according to claim 10, wherein when the extracted expression feature value is any one of the first specified feature value, the second specified feature value, and the third specified feature value, the expression selection module comprises: a feature matching unit, a candidate selection unit, an expression sorting unit, and an expression determination unit;
    the feature matching unit is configured to match the expression feature value against the expression feature values stored in the feature library;
    the candidate selection unit is configured to take the n expressions corresponding to the m expression feature values whose matching degree is greater than a predetermined threshold as candidate expressions, where n ≥ m ≥ 1;
    the expression sorting unit is configured to select at least one sorting condition according to a preset priority and sort the n candidate expressions according to the at least one sorting condition, the sorting condition including any one of the historical use count, the most recent use time, and the matching degree; and
    the expression determination unit is configured to select one candidate expression based on the sorting result and take the candidate expression as the expression to be input.
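The matching, thresholding, and priority-ordered sorting in this claim can be sketched as follows. The `match` similarity function, the `usage_stats` mapping (expression → historical use count and last use time), and the condition names are assumptions for illustration:

```python
def select_expression(feature_value, feature_library, threshold,
                      sort_priority, usage_stats, match):
    """Match a feature value against the library, keep candidates above
    the threshold, sort them by the preset priority of sorting
    conditions, and return the top candidate (or None)."""
    # m feature values whose matching degree exceeds the threshold
    # yield n candidate expressions (n >= m >= 1).
    candidates = []
    for stored_value, expression in feature_library.items():
        degree = match(feature_value, stored_value)
        if degree > threshold:
            candidates.append((expression, degree))
    if not candidates:
        return None

    def sort_key(item):
        expression, degree = item
        history, last_used = usage_stats.get(expression, (0, 0))
        # Sorting conditions: historical use count, most recent use
        # time, matching degree; higher values rank first for each.
        keys = {"history": history, "recent": last_used, "match": degree}
        return tuple(keys[c] for c in sort_priority)

    candidates.sort(key=sort_key, reverse=True)
    return candidates[0][0]
```

Sorting by a tuple built in priority order means the first condition dominates and later conditions only break ties, which matches "select at least one sorting condition according to a preset priority".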
  12. The apparatus according to claim 10, wherein when the expression feature value includes the first specified feature value and further includes the second specified feature value or the third specified feature value, the expression selection module comprises: a first matching unit, a first acquisition unit, a second matching unit, a second acquisition unit, a candidate determination unit, a candidate sorting unit, and an expression selection unit;
    the first matching unit is configured to match the first specified feature value against the first expression feature values stored in a first feature library;
    the first acquisition unit is configured to acquire a first expression feature values whose matching degree is greater than a first threshold, where a ≥ 1;
    the second matching unit is configured to match the second specified feature value or the third specified feature value against the second expression feature values stored in a second feature library;
    the second acquisition unit is configured to acquire b second expression feature values whose matching degree is greater than a second threshold, where b ≥ 1;
    the candidate determination unit is configured to take the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values as candidate expressions, where x ≥ a and y ≥ b;
    the candidate sorting unit is configured to select at least one sorting condition according to a preset priority and sort the candidate expressions according to the at least one sorting condition, the sorting condition including any one of the number of repetitions, the historical use count, the most recent use time, and the matching degree; and
    the expression selection unit is configured to select one candidate expression based on the sorting result and take the candidate expression as the expression to be input;
    wherein the feature library includes the first feature library and the second feature library, and the expression feature value includes the first expression feature value and the second expression feature value.
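When both a voice feature and a picture/video feature are available, the two libraries are matched independently and the pooled candidates can be ranked by how many modalities proposed them (the "number of repetitions" condition). A sketch with illustrative names and an exact-match `match` function assumed by the test:

```python
def select_expression_multimodal(voice_value, face_value,
                                 first_library, second_library,
                                 first_threshold, second_threshold, match):
    """Match each specified feature value against its own feature
    library, pool the candidate expressions, and prefer expressions
    proposed by both modalities, then by matching degree."""
    def matches(value, library, threshold):
        # a (or b) feature values above the threshold map to
        # x (or y) candidate expressions.
        return {expr: match(value, stored)
                for stored, expr in library.items()
                if match(value, stored) > threshold}

    first = matches(voice_value, first_library, first_threshold)
    second = matches(face_value, second_library, second_threshold)

    pooled = {}  # expression -> (number of repetitions, best matching degree)
    for expr, degree in list(first.items()) + list(second.items()):
        reps, best = pooled.get(expr, (0, 0.0))
        pooled[expr] = (reps + 1, max(best, degree))
    if not pooled:
        return None
    # Tuple comparison sorts by repetition count first, then degree.
    return max(pooled, key=lambda e: pooled[e])
```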
  13. The apparatus according to claim 9, wherein the apparatus further comprises:
    a second information collection module, configured to collect environment information around the electronic device, the environment information including at least one of time information, ambient volume information, ambient light intensity information, and ambient image information;
    an environment determination module, configured to determine the current usage environment according to the environment information; and
    a feature selection module, configured to select, from at least one candidate feature library, the candidate feature library corresponding to the current usage environment and use that candidate feature library as the feature library.
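One way to read this claim: classify the readings into a usage environment, then swap in the matching library. The environment names, thresholds, and rule-based classifier below are assumptions, not part of the patent:

```python
def pick_feature_library(environment_info, candidate_libraries):
    """Infer the current usage environment from sensor readings and
    pick the corresponding candidate feature library."""
    hour = environment_info.get("hour", 12)
    volume = environment_info.get("volume", 0.0)   # normalized 0..1
    light = environment_info.get("light", 0.5)     # normalized 0..1

    # A crude rule-based classifier standing in for "determine the
    # current usage environment according to the environment information".
    if 9 <= hour < 18 and volume < 0.4:
        environment = "office"
    elif light < 0.2:
        environment = "night"
    else:
        environment = "casual"
    # Fall back to a default library when no candidate matches.
    return candidate_libraries.get(environment,
                                   candidate_libraries.get("default"))
```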
  14. The apparatus according to claim 10, wherein the first information collection module comprises a voice collection unit and an image collection unit;
    the voice collection unit is configured to collect the voice input information through a microphone if the input information includes the voice input information; and
    the image collection unit is configured to collect the picture input information or the video input information through a camera if the input information includes the picture input information or the video input information.
  15. The apparatus according to claim 9, wherein the apparatus further comprises:
    an information recording module, configured to record, for each expression, at least one piece of training information used for training the expression;
    a feature recording module, configured to extract at least one training feature value from the at least one piece of training information;
    a feature selection module, configured to take the training feature value with the largest number of repetitions as the expression feature value corresponding to the expression; and
    a feature storage module, configured to store the correspondence between the expression and the expression feature value in the feature library.
  16. The apparatus according to any one of claims 9 to 15, wherein the apparatus further comprises:
    an expression display module, configured to display the expression to be input in an input box or a chat bar.
  17. An electronic device, wherein the electronic device comprises: a central processing unit, a network interface unit, a sensor, a microphone, a display, and a system memory, the system memory storing a set of program code, and the central processing unit invoking, through a system bus, the program code stored in the system memory to perform the following operations:
    collecting input information; extracting an expression feature value from the input information; and selecting, according to the expression feature value, an expression to be input from a feature library, wherein the feature library stores correspondences between different expression feature values and different expressions.
  18. The electronic device according to claim 17, wherein the central processing unit is configured to invoke the program code stored in the system memory to perform the following operations:
    if the input information includes voice input information, performing voice recognition on the voice input information to obtain a first specified feature value; if the input information includes picture input information, determining a face region in the picture input information and extracting a second specified feature value from the face region; and if the input information includes video input information, extracting a third specified feature value from the video input information.
  19. The electronic device according to claim 18, wherein the central processing unit is configured to invoke the program code stored in the system memory to perform the following operations:
    when the expression feature value is any one of the first specified feature value, the second specified feature value, and the third specified feature value, matching the expression feature value against the expression feature values stored in the feature library; taking the n expressions corresponding to the m expression feature values whose matching degree is greater than a predetermined threshold as candidate expressions, where n ≥ m ≥ 1; selecting at least one sorting condition according to a preset priority and sorting the n candidate expressions according to the at least one sorting condition, the sorting condition including any one of the historical use count, the most recent use time, and the matching degree; and selecting one candidate expression based on the sorting result and taking the candidate expression as the expression to be input.
  20. The electronic device according to claim 18, wherein the central processing unit is configured to invoke the program code stored in the system memory to perform the following operations:
    when the expression feature value includes the first specified feature value and further includes the second specified feature value or the third specified feature value, matching the first specified feature value against the first expression feature values stored in a first feature library; acquiring a first expression feature values whose matching degree is greater than a first threshold, where a ≥ 1; matching the second specified feature value or the third specified feature value against the second expression feature values stored in a second feature library; acquiring b second expression feature values whose matching degree is greater than a second threshold, where b ≥ 1; taking the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values as candidate expressions, where x ≥ a and y ≥ b; selecting at least one sorting condition according to a preset priority and sorting the candidate expressions according to the at least one sorting condition, the sorting condition including any one of the number of repetitions, the historical use count, the most recent use time, and the matching degree; and selecting one candidate expression based on the sorting result and taking the candidate expression as the expression to be input; wherein the feature library includes the first feature library and the second feature library, and the expression feature value includes the first expression feature value and the second expression feature value.
  21. The electronic device according to claim 17, wherein the central processing unit is configured to invoke the program code stored in the system memory to perform the following operations:
    collecting environment information around the electronic device, the environment information including at least one of time information, ambient volume information, ambient light intensity information, and ambient image information; determining the current usage environment according to the environment information; and selecting, from at least one candidate feature library, the candidate feature library corresponding to the current usage environment and using that candidate feature library as the feature library.
  22. The electronic device according to claim 18, wherein the central processing unit is configured to invoke the program code stored in the system memory to perform the following operations:
    if the input information includes the voice input information, collecting the voice input information through a microphone; and if the input information includes the picture input information or the video input information, collecting the picture input information or the video input information through a camera.
  23. The electronic device according to claim 17, wherein the central processing unit is configured to invoke the program code stored in the system memory to perform the following operations:
    for each expression, recording at least one training signal used for training the expression; extracting at least one training feature value from the at least one training signal; taking the training feature value with the largest number of repetitions as the expression feature value corresponding to the expression; and storing the correspondence between the expression and the expression feature value in the feature library.
  24. The electronic device according to any one of claims 17 to 23, wherein the central processing unit is configured to invoke the program code stored in the system memory to perform the following operation:
    displaying the expression to be input in an input box or a chat bar.
PCT/CN2014/095872 2014-02-27 2014-12-31 Expression input method and apparatus and electronic device WO2015127825A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410069166.9A CN103823561B (en) 2014-02-27 2014-02-27 expression input method and device
CN201410069166.9 2014-02-27

Publications (1)

Publication Number Publication Date
WO2015127825A1 true WO2015127825A1 (en) 2015-09-03

Family

ID=50758662

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/095872 WO2015127825A1 (en) 2014-02-27 2014-12-31 Expression input method and apparatus and electronic device

Country Status (2)

Country Link
CN (1) CN103823561B (en)
WO (1) WO2015127825A1 (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103823561B (en) * 2014-02-27 2017-01-18 广州华多网络科技有限公司 expression input method and device
WO2016000219A1 (en) * 2014-07-02 2016-01-07 华为技术有限公司 Information transmission method and transmission device
CN106789543A (en) * 2015-11-20 2017-05-31 腾讯科技(深圳)有限公司 The method and apparatus that facial expression image sends are realized in session
CN106886396B (en) * 2015-12-16 2020-07-07 北京奇虎科技有限公司 Expression management method and device
CN105677059A (en) * 2015-12-31 2016-06-15 广东小天才科技有限公司 Method and system for inputting expression pictures
WO2017120924A1 (en) * 2016-01-15 2017-07-20 李强生 Information prompting method for use when inserting emoticon, and instant communication tool
CN105872838A (en) * 2016-04-28 2016-08-17 徐文波 Sending method and device of special media effects of real-time videos
CN106020504B (en) * 2016-05-17 2018-11-27 百度在线网络技术(北京)有限公司 Information output method and device
CN107623830B (en) * 2016-07-15 2019-03-15 掌赢信息科技(上海)有限公司 A kind of video call method and electronic equipment
CN106175727B (en) * 2016-07-25 2018-11-20 广东小天才科技有限公司 A kind of expression method for pushing and wearable device applied to wearable device
CN106293120B (en) * 2016-07-29 2020-06-23 维沃移动通信有限公司 Expression input method and mobile terminal
WO2018023576A1 (en) * 2016-08-04 2018-02-08 薄冰 Method for adjusting emoji sending technique according to market feedback, and emoji system
CN106339103A (en) * 2016-08-15 2017-01-18 珠海市魅族科技有限公司 Image checking method and device
CN106293131A (en) * 2016-08-16 2017-01-04 广东小天才科技有限公司 expression input method and device
CN106503630A (en) * 2016-10-08 2017-03-15 广东小天才科技有限公司 A kind of expression sending method, equipment and system
CN106503744A (en) * 2016-10-26 2017-03-15 长沙军鸽软件有限公司 Input expression in chat process carries out the method and device of automatic error-correcting
CN106682091A (en) * 2016-11-29 2017-05-17 深圳市元征科技股份有限公司 Method and device for controlling unmanned aerial vehicle
CN107315820A (en) * 2017-07-01 2017-11-03 北京奇虎科技有限公司 The expression searching method and device of User Interface based on mobile terminal
CN107153496B (en) * 2017-07-04 2020-04-28 北京百度网讯科技有限公司 Method and device for inputting emoticons
CN109254669B (en) * 2017-07-12 2022-05-10 腾讯科技(深圳)有限公司 Expression picture input method and device, electronic equipment and system
CN110019885B (en) * 2017-08-01 2021-10-15 北京搜狗科技发展有限公司 Expression data recommendation method and device
CN107450746A (en) * 2017-08-18 2017-12-08 联想(北京)有限公司 A kind of insertion method of emoticon, device and electronic equipment
CN107479723B (en) * 2017-08-18 2021-01-15 联想(北京)有限公司 Emotion symbol inserting method and device and electronic equipment
CN109165072A (en) * 2018-08-28 2019-01-08 珠海格力电器股份有限公司 A kind of expression packet generation method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101183294A (en) * 2007-12-17 2008-05-21 腾讯科技(深圳)有限公司 Expression input method and apparatus
CN102255820A (en) * 2010-05-18 2011-11-23 腾讯科技(深圳)有限公司 Instant communication method and device
CN102662961A (en) * 2012-03-08 2012-09-12 北京百舜华年文化传播有限公司 Method, apparatus and terminal unit for matching semantics with image
CN102890776A (en) * 2011-07-21 2013-01-23 爱国者电子科技(天津)有限公司 Method for searching emoticons through facial expression
CN103823561A (en) * 2014-02-27 2014-05-28 广州华多网络科技有限公司 Expression input method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1735240A (en) * 2004-10-29 2006-02-15 康佳集团股份有限公司 Method for realizing expression notation and voice in handset short message
CN102104658A (en) * 2009-12-22 2011-06-22 康佳集团股份有限公司 Method, system and mobile terminal for sending expression by using short messaging service (SMS)
CN103353824B (en) * 2013-06-17 2016-08-17 百度在线网络技术(北京)有限公司 The method of phonetic entry character string, device and terminal unit
CN103530313A (en) * 2013-07-08 2014-01-22 北京百纳威尔科技有限公司 Searching method and device of application information
CN103529946B (en) * 2013-10-29 2016-06-01 广东欧珀移动通信有限公司 A kind of input method and device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109412935A (en) * 2018-10-12 2019-03-01 北京达佳互联信息技术有限公司 The sending method and method of reseptance of instant messaging, sending device and reception device
CN109412935B (en) * 2018-10-12 2021-12-07 北京达佳互联信息技术有限公司 Instant messaging sending method, receiving method, sending device and receiving device
CN114173258A (en) * 2022-02-07 2022-03-11 深圳市朗琴音响技术有限公司 Intelligent sound box control method and intelligent sound box
CN114173258B (en) * 2022-02-07 2022-05-10 深圳市朗琴音响技术有限公司 Intelligent sound box control method and intelligent sound box

Also Published As

Publication number Publication date
CN103823561B (en) 2017-01-18
CN103823561A (en) 2014-05-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 14883827; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established
    Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.01.2017)
122 Ep: pct application non-entry in european phase
    Ref document number: 14883827; Country of ref document: EP; Kind code of ref document: A1