WO2015127825A1 - Expression input method and apparatus, and electronic device - Google Patents


Publication number
WO2015127825A1
Authority
WO
WIPO (PCT)
Prior art keywords
expression
feature value
feature
input information
input
Prior art date
Application number
PCT/CN2014/095872
Other languages
English (en)
Chinese (zh)
Inventor
陈超
Original Assignee
广州华多网络科技有限公司
Priority date
Filing date
Publication date
Application filed by 广州华多网络科技有限公司
Publication of WO2015127825A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00: Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01: Indexing scheme relating to G06F3/01
    • G06F 2203/011: Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Definitions

  • the present invention relates to the field of the Internet, and in particular, to an expression input method, device, and electronic device.
  • When a user wants to send an expression to another user, he or she opens the expression selection interface, selects the expression to be input, and sends the selected expression to the other user.
  • The other user then receives and reads the sent expression.
  • the inventors have found that the related art has at least the following problems: in order to satisfy the user's needs as much as possible, an application often contains dozens or even hundreds of expressions for the user to select.
  • Because the emoticon selection interface contains many emoticons, the emoticons must be displayed in a paginated manner.
  • When the user inputs an expression, he or she must first find the page containing the desired expression and then select the expression from that page. This makes expression input very slow and increases the complexity of the expression input process.
  • the embodiment of the invention provides an expression input method, device and electronic device.
  • the technical solution is as follows:
  • an expression input method comprising:
  • the extracting the expression feature value from the input information includes:
  • if the input information includes voice input information, performing voice recognition on the voice input information to obtain a first specified feature value;
  • if the input information includes picture input information, determining a face area in the picture input information, and extracting a second specified feature value from the face area;
  • if the input information includes video input information, extracting a third specified feature value from the video input information.
  • the selecting of the expression to be input from the feature library according to the expression feature value includes:
  • n expressions corresponding to m expression feature values whose matching degree is greater than a predetermined threshold are used as alternative expressions, n ⁇ m ⁇ 1;
  • alternatively, the selecting of the expression to be input from the feature library according to the expression feature value includes:
  • the x expressions corresponding to the a first expression feature value and the y expressions corresponding to the b second expression feature values are used as alternative expressions, x ⁇ a, y ⁇ b;
  • the sorting condition includes any one of the number of repetitions, the number of historical uses, the most recently used time, and the matching degree;
  • the feature library includes the first feature library and the second feature library, and the expression feature value includes the first expression feature value and the second expression feature value.
  • before the selecting of the expression to be input from the feature library according to the expression feature value, the method further includes:
  • collecting environment information around the electronic device, the environment information including at least one of time information, environment volume information, ambient light intensity information, and environment image information;
  • determining a current use environment according to the environment information; and
  • selecting, from at least one candidate feature library, a candidate feature library corresponding to the current use environment, and using the candidate feature library as the feature library.
  • the collecting input information includes:
  • if the input information includes the voice input information, collecting the voice input information through a microphone;
  • if the input information includes the picture input information or the video input information, collecting the picture input information or the video input information through a camera.
  • before the selecting of the expression to be input from the feature library according to the expression feature value, the method further includes:
  • for each expression, recording at least one piece of training information for training the expression;
  • extracting at least one training feature value from the at least one piece of training information;
  • using the training feature value having the largest number of repetitions as the expression feature value corresponding to the expression; and
  • storing a correspondence between the expression and the expression feature value in the feature library.
  • the method further includes: displaying the expression to be input in an input box or a chat bar.
  • an expression input device comprising:
  • a first information collecting module configured to collect input information
  • a feature extraction module configured to extract an expression feature value from the input information
  • the expression selection module is configured to select an expression to be input from the feature library according to the expression feature value, and the feature library stores a correspondence between different expression feature values and different expressions.
  • the feature extraction module includes at least one extraction unit: a first extraction unit, a second extraction unit, and a third extraction unit;
  • the first extracting unit is configured to perform voice recognition on the voice input information to obtain a first specified feature value, if the input information includes voice input information;
  • the second extracting unit is configured to: if the input information includes picture input information, determine a face area in the picture input information, and extract a second specified feature value from the face area;
  • the third extracting unit is configured to: if the input information includes video input information, extract a third specified feature value from the video input information.
  • the expression selection module includes: a feature matching unit, an alternative selection unit, an expression arrangement unit, and an expression determination unit;
  • the feature matching unit is configured to match the expression feature value with the expression feature value stored in the feature library
  • the candidate selecting unit is configured to use n expressions corresponding to m expression feature values whose matching degree is greater than a predetermined threshold as an alternative expression, n ⁇ m ⁇ 1;
  • the expression arranging unit is configured to select at least one sorting condition according to the preset priority, and sort the n candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of the historical usage count, the most recent usage time, and the matching degree;
  • the expression determining unit is configured to filter out one of the candidate expressions according to the sorting result, and use the candidate expression as the expression to be input.
  • the expression selection module includes: a first matching unit, a first obtaining unit, a second matching unit, a second obtaining unit, an alternative determining unit, an alternative sorting unit, and an expression selecting unit;
  • the first matching unit is configured to match the first specified feature value with a first expression feature value stored in the first feature database
  • the first acquiring unit is configured to obtain the a first expression feature values whose matching degree is greater than a first threshold, a ≥ 1;
  • the second matching unit is configured to match the second specified feature value or the third specified feature value with a second expression feature value stored in the second feature library;
  • the second acquiring unit is configured to obtain b second expression feature values whose matching degree is greater than a second threshold, b ⁇ 1;
  • the candidate determining unit is configured to use, as an alternative expression, x expressions corresponding to the a first expression feature values and y expressions corresponding to the b second expression feature values, x ⁇ a, y ⁇ b;
  • the candidate sorting unit is configured to select at least one sorting condition according to a preset priority, and sort the candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of the repetition count, the historical usage count, the most recent usage time, and the matching degree;
  • the expression selection unit is configured to filter out one of the candidate expressions according to the sorting result, and use the candidate expression as the expression that needs to be input;
  • the feature library includes the first feature library and the second feature library, and the expression feature value includes the first expression feature value and the second expression feature value.
  • the device further includes:
  • a second information collecting module configured to collect environment information around the electronic device, where the environment information includes at least one of time information, environment volume information, ambient light intensity information, and environment image information;
  • An environment determining module configured to determine a current usage environment according to the environment information
  • a feature selection module configured to select, from the at least one candidate feature library, a candidate feature library corresponding to the current use environment, and use the candidate feature library as the feature library.
  • the first information collection module includes: a voice collection unit, and an image collection unit;
  • the voice collecting unit is configured to collect the voice input information by using a microphone if the input information includes the voice input information;
  • the image collecting unit is configured to collect the picture input information or the video input information by using a camera if the input information includes the picture input information or the video input information.
  • the device further includes:
  • An information recording module configured to record, for each of the expressions, at least one training information for training the expression
  • a feature recording module configured to extract at least one training feature value from the at least one training information
  • a feature selection module configured to use the training feature value with the largest number of repetitions as the expression feature value corresponding to the expression
  • a feature storage module configured to store a correspondence between the expression and the expression feature value in the feature library.
  • the device further includes:
  • An expression display module configured to display the expression that needs to be input in an input box or a chat bar.
  • an electronic device comprising: a central processing unit, a network interface unit, a sensor, a microphone, a display, and a system memory, wherein the system memory stores a set of program code, and the central processing unit is connected to the system memory through a system bus and is configured to call the program code stored in the system memory to perform the following operations:
  • collecting input information; extracting an expression feature value from the input information; and selecting an expression to be input from the feature library according to the expression feature value, wherein the feature library stores a correspondence between different expression feature values and different expressions.
  • the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:
  • if the input information includes voice input information, performing voice recognition on the voice input information to obtain a first specified feature value; if the input information includes picture input information, determining a face region in the picture input information and extracting a second specified feature value from the face region; and if the input information includes video input information, extracting a third specified feature value from the video input information.
  • the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:
  • if the expression feature value is any one of the first specified feature value, the second specified feature value, and the third specified feature value: matching the expression feature value with the expression feature values stored in the feature library; using the n expressions corresponding to the m expression feature values whose matching degree is greater than a predetermined threshold as candidate expressions, n ≥ m ≥ 1; selecting at least one sorting condition according to a preset priority and sorting the n candidate expressions according to the at least one sorting condition, the sorting condition including any one of the historical usage count, the most recent usage time, and the matching degree; and filtering out one candidate expression according to the sorting result and using the candidate expression as the expression to be input.
  • the central processing unit is configured to invoke the program code stored in the system memory to perform the following operations:
  • if the expression feature value includes the first specified feature value and further includes the second specified feature value or the third specified feature value: matching the first specified feature value with the first expression feature values stored in the first feature library; obtaining the a first expression feature values whose matching degree is greater than a first threshold, a ≥ 1; matching the second specified feature value or the third specified feature value with the second expression feature values stored in the second feature library; obtaining the b second expression feature values whose matching degree is greater than a second threshold, b ≥ 1; using the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values as candidate expressions, x ≥ a, y ≥ b; selecting at least one sorting condition according to a preset priority and sorting the candidate expressions according to the at least one sorting condition, the sorting condition including any one of the repetition count, the historical usage count, the most recent usage time, and the matching degree; and filtering out one candidate expression according to the sorting result and using the candidate expression as the expression to be input.
  • the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:
  • collecting environment information around the electronic device, the environment information including at least one of time information, environment volume information, ambient light intensity information, and environment image information; determining a current use environment according to the environment information; and selecting, from at least one candidate feature library, a candidate feature library corresponding to the current use environment and using the candidate feature library as the feature library.
  • the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:
  • if the input information includes the voice input information, collecting the voice input information by using a microphone; if the input information includes the picture input information or the video input information, collecting the picture input information or the video input information by using a camera.
  • the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:
  • for each expression, recording at least one piece of training information for training the expression; extracting at least one training feature value from the at least one piece of training information; using the most-repeated training feature value as the expression feature value corresponding to the expression; and storing a correspondence between the expression and the expression feature value in the feature library.
  • the central processing unit is configured to invoke program code stored in the system memory for performing the following operations:
  • by collecting input information, extracting the expression feature value from the input information, and selecting the expression to be input from the feature library according to the extracted expression feature value, where the feature library stores the correspondence between different expression feature values and different expressions, the embodiments solve the problem that expression input is slow and complicated, simplify the expression input process, and increase the expression input speed.
  • FIG. 1 is a flowchart of an expression input method according to an embodiment of the present invention;
  • FIG. 2A is a flowchart of an expression input method according to another embodiment of the present invention;
  • FIG. 2B is a schematic diagram of a chat interface of a typical instant messaging application;
  • FIG. 3 is a block diagram showing the structure of an expression input device according to an embodiment of the present invention.
  • FIG. 4 is a block diagram showing the structure of an expression input device according to another embodiment of the present invention.
  • FIG. 5 is a schematic diagram of an illustrative terminal architecture of an electronic device 500 used in an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • the electronic device may be a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, a desktop computer, a smart TV, and the like.
  • FIG. 1 is a flowchart of a method for inputting an expression according to an embodiment of the present invention.
  • the embodiment is illustrated by using the expression input method in an electronic device.
  • the expression input method includes the following steps:
  • Step 102: Collect input information.
  • Step 104: Extract an expression feature value from the input information.
  • Step 106: Select an expression to be input from the feature library according to the expression feature value, where the feature library stores a correspondence between different expression feature values and different expressions.
  • the expression input method provided by this embodiment collects input information, extracts an expression feature value from the input information, and selects the expression to be input from the feature library according to the extracted expression feature value, where the feature library stores the correspondence between different expression feature values and different expressions. This solves the problem in the related art that expression input is slow and the process is complicated, thereby simplifying the expression input process and increasing the expression input speed.
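  • The three steps above can be pictured as a small pipeline. The following Python sketch is purely illustrative and not the claimed implementation: the feature library is modeled as a plain dictionary, and the function names (collect_input, extract_feature_value, select_expression) are hypothetical.

```python
# Minimal sketch of steps 102-106; all names and data are illustrative.
feature_library = {
    "haha": "laughing_expression",  # expression feature value -> expression
    "sad": "crying_expression",
}

def collect_input():
    """Step 102: stand-in for collecting input through a microphone or camera."""
    return "of course, no problem haha"

def extract_feature_value(input_info, known_values):
    """Step 104: return the first known expression feature value in the input."""
    return next((v for v in known_values if v in input_info), None)

def select_expression(feature_value, library):
    """Step 106: look the expression feature value up in the feature library."""
    return library.get(feature_value)

info = collect_input()
value = extract_feature_value(info, feature_library)  # iterating a dict yields its keys
print(select_expression(value, feature_library))      # -> laughing_expression
```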
  • extracting the expression feature value from the input information comprises:
  • if the input information includes voice input information, performing voice recognition on the voice input information to obtain a first specified feature value;
  • if the input information includes picture input information, determining a face area in the picture input information, and extracting a second specified feature value from the face area;
  • if the input information includes video input information, extracting a third specified feature value from the video input information.
  • the expression to be input is selected from the feature library according to the expression feature value, including:
  • n expressions corresponding to m expression feature values whose matching degree is greater than a predetermined threshold are used as alternative expressions, n ⁇ m ⁇ 1;
  • An alternative expression is filtered according to the sorting result, and the alternative expression is used as an expression to be input.
  • selecting an expression to be input from the feature library according to the expression feature value includes:
  • the x expressions corresponding to the a first expression feature value and the y expressions corresponding to the b second expression feature values are used as alternative expressions, x ⁇ a, y ⁇ b;
  • the sorting condition includes any one of a repetition number, a history usage count, a recent usage time, and a matching degree;
  • the feature library includes a first feature library and a second feature library, and the expression feature value includes the first expression feature value and the second expression feature value.
  • the method before selecting an expression to be input from the feature library according to the expression feature value, the method further includes:
  • environment information includes at least one of time information, environment volume information, ambient light intensity information, and environment image information;
  • the candidate feature library corresponding to the current use environment is selected from the at least one candidate feature library, and the candidate feature library is used as the feature library.
  • the input information is collected, including:
  • the voice input information is collected through the microphone
  • the picture input information or the video input information is collected through the camera.
  • the method before selecting an expression to be input from the feature library according to the expression feature value, the method further includes:
  • the training feature value with the largest number of repetitions is used as the expression feature value corresponding to the expression
  • the correspondence between the expression and the expression feature value is stored in the feature library.
  • the method further includes: displaying the expression to be input in an input box or a chat bar.
  • FIG. 2A is a flowchart of a method for inputting an expression according to another embodiment of the present invention.
  • the embodiment is illustrated by using the expression input method in an electronic device.
  • the expression input method includes the following steps:
  • Step 201 Determine whether the electronic device is in an automatic collection state or a manual acquisition state; if the electronic device is in an automatic collection state, perform step 202; if the electronic device is in a manual collection state, execute Step 203.
  • the automatic acquisition state means that the electronic device automatically turns on the input unit to collect input information;
  • the manual acquisition state means that the user manually turns on the input unit to collect input information.
  • Step 202 If the electronic device is in an automatic acquisition state, the input unit is turned on.
  • the input unit includes a microphone and/or a camera.
  • the input unit may be an input unit built in the electronic device, or may be an input unit external to the electronic device, which is not specifically limited in the embodiment of the present invention.
  • Then, step 204 is performed.
  • Step 203 If the electronic device is in the manual collection state, it is detected whether the input unit is in an open state.
  • the electronic device detects whether the input unit is in an open state. Since the manual acquisition state refers to the collection of input information by the user turning on the input unit, the electronic device detects at this time whether the user turns on the input unit. The user can turn on the input unit with a control such as a button or a switch.
  • the microphone button 22 is located in the input box 24. The user presses the microphone button 22 to turn the microphone on, and the microphone turns off when the user releases the microphone button 22.
  • if the input unit is in the on state, step 204 is performed; if the input unit is not in the on state, the following steps are not performed.
  • Step 204 Acquire input information through an input unit on the electronic device.
  • the electronic device collects input information through the input unit.
  • if the input unit includes a microphone, the voice input information is collected through the microphone.
  • the voice input information can be what the user says, or a sound made by the user or other object.
  • if the input unit includes a camera, the picture input information or the video input information is collected by the camera.
  • The picture input information may be a facial expression of the user; the video input information may be a gesture of the user or a gesture track of the user, and the like.
  • Step 205 Extract an expression feature value from the input information.
  • the expression feature value is extracted from the input information.
  • the voice input information is voice-recognized, and then the first specified feature value is extracted from the voice input information.
  • the first specified feature value is used to represent the user's voice.
  • the electronic device may extract the first specified feature value from the voice input information by a data dimensionality reduction method or a feature value selection method.
  • the data dimensionality reduction method is a commonly used method for simplifying and effectively analyzing information such as high-dimensional speech or images. By reducing the dimensionality of high-dimensional information, it is possible to remove some data that does not reflect the essential characteristics of the information. Therefore, the feature value in the input information can be obtained by the data dimensionality reduction method, and the feature value is data capable of reflecting the essential characteristics of the input information.
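  • As a hedged illustration of the dimensionality-reduction idea (the patent does not name a specific algorithm), principal component analysis over a batch of high-dimensional feature vectors might look like the following sketch; numpy and scikit-learn are assumed to be available, and the dimensions are arbitrary.

```python
# Illustrative only: the patent names no specific algorithm; PCA is one common
# dimensionality-reduction choice. Assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.decomposition import PCA

# 100 "frames" of 512-dimensional speech/image features (random stand-in data).
high_dim = np.random.rand(100, 512)

pca = PCA(n_components=16)             # keep the 16 highest-variance components
low_dim = pca.fit_transform(high_dim)  # drops data that does not reflect essential characteristics
print(low_dim.shape)                   # (100, 16): compact feature values for matching
```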
  • the first specified feature value is extracted from the voice input information, and because the first specified feature value is used in the expression input method provided in this embodiment, the first specified feature value is referred to as an expression feature value.
  • the expression feature value can also be extracted from the input information by the feature value selection method.
  • the electronic device may preset at least one expression feature value, and after collecting the input information, analyze the input information and find out whether there is a preset expression feature value.
  • the voice input information collected by the electronic device through the microphone is “of course, no problem haha”, and the electronic device extracts the first specified feature value “haha” from the voice input information.
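  • A hedged sketch of this voice path follows: the third-party SpeechRecognition package is an assumed choice (the patent names no speech-recognition engine), and the preset feature values mirror the "haha" example.

```python
# Illustrative sketch only: obtain a transcript with the SpeechRecognition
# package (an assumed choice; requires speech_recognition and pyaudio), then
# scan it for preset expression feature values, as in the "haha" example.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:            # collect voice input information
    audio = recognizer.listen(source)
text = recognizer.recognize_google(audio)  # perform voice recognition

PRESET_FEATURE_VALUES = ("haha", "happy", "snowing", "like")
first_specified_value = next((v for v in PRESET_FEATURE_VALUES if v in text), None)
print(first_specified_value)               # e.g. "haha"
```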
  • the face area is determined from the picture input information, and the second specified feature value is extracted from the face area.
  • the second specified feature value is used to represent a facial expression of a person.
  • the electronic device may first determine a face region from the picture input information by using an image recognition technology, and then extract a second specified feature value from the face region by a data dimensionality reduction method or a feature value selection method.
  • the face area in the picture is determined.
  • the second specified feature value corresponding to the expressions such as "happy”, “sad”, “cry” or “crazy” is extracted therefrom.
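  • A hedged sketch of the face-area step, using OpenCV's bundled Haar cascade as one possible image-recognition technique (an assumption; the patent does not prescribe a specific one):

```python
# Illustrative only: OpenCV's Haar cascade is one way to determine the face
# area; the patent does not prescribe a specific image-recognition technique.
# Requires the opencv-python package and a local test image.
import cv2

image = cv2.imread("picture_input.jpg")     # the picture input information
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    face_region = gray[y:y + h, x:x + w]    # the determined face area
    # a second specified feature value would then be extracted from
    # face_region, e.g. by the dimensionality reduction described above
    print("face area:", (x, y, w, h))
```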
  • the third specified feature value is extracted from the video input information.
  • the third specified feature value is used to represent the gesture trajectory of the person.
  • the electronic device may extract a third specified feature value from the video input information.
  • Step 206 Select an expression to be input from the feature library according to the extracted expression feature value.
  • the electronic device can select the expression to be input according to the extracted expression feature values and the corresponding relationship stored in the feature library.
  • the selected emoticons are then inserted into the input box 24 for the user to send or directly display in the chat bar 26.
  • the step may include the following sub-steps:
  • because the expression feature value stored in the feature library is a specific expression feature value (for example, a first specified feature value entered by a specific person), the expression feature value extracted by the electronic device and the expression feature value stored in the feature library usually differ to a certain degree, so the electronic device needs to match the two to obtain a matching degree.
  • n expressions corresponding to m expression feature values whose matching degree is greater than a predetermined threshold are used as alternative expressions, n ⁇ m ⁇ 1.
  • an expression feature value corresponds to at least one expression.
  • the predetermined threshold can be preset according to the actual situation, for example, set to 80%.
  • for example, the candidate expressions obtained by the electronic device are: three expressions A, B, and C corresponding to one expression feature value with a matching degree of 98%, and one expression D corresponding to another expression feature value with a matching degree of 90%.
  • the sorting condition includes any one of historical usage times, recent usage time, and matching degree.
  • the order of priority between the various sorting conditions may be preset according to actual conditions, for example, the order of priority from high to low is the degree of matching, the number of historical uses, and the most recently used time.
  • the electronic device first sorts the four expressions A, B, C, and D by matching degree, obtaining A, B, C, D in turn, and finds that the matching degrees of A, B, and C are all 98%; it then sorts the three expressions A, B, and C by historical usage count, obtaining B, A, C in turn (assuming the historical usage count of expression A is 15, that of expression B is 20, and that of expression C is 3). The electronic device finds that expression B has the highest historical usage count, so expression B is selected as the expression to be input.
  • the electronic device automatically filters out an alternative expression from the plurality of candidate expressions as an expression that needs to be input, and does not require the user to select or confirm, and simplifies the flow of the expression input, so that the expression Input is more efficient and convenient.
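  • The selection logic of this first method can be sketched as follows; the data mirrors the A/B/C/D example above, and the descending tuple-key sort is merely one straightforward way to apply the preset priority order, not the claimed implementation.

```python
# Sketch of the first selection method: threshold the matching degree, then
# sort by the prioritized conditions. Data mirrors the A/B/C/D example above.
candidates = [
    # (expression, matching degree, historical usage count, last-used timestamp)
    ("A", 0.98, 15, 1000),
    ("B", 0.98, 20, 990),
    ("C", 0.98, 3, 1005),
    ("D", 0.90, 7, 1010),
]
PREDETERMINED_THRESHOLD = 0.80

# Keep the n expressions whose matching degree exceeds the predetermined threshold.
alternatives = [c for c in candidates if c[1] > PREDETERMINED_THRESHOLD]

# Priority from high to low: matching degree, historical usage count, most
# recently used time. A descending tuple sort applies all three at once.
alternatives.sort(key=lambda c: (c[1], c[2], c[3]), reverse=True)

expression_to_input = alternatives[0][0]
print(expression_to_input)  # -> "B": the 98% ties are broken by usage count
```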
  • the step may include the following steps:
  • the electronic device comprehensively analyzes two forms of expression feature values to determine an expression to be input, which can make the selected expression more accurate and fully satisfy the user's needs.
  • the electronic device matches the first specified feature value with the first expression feature values stored in the first feature library. As before, the electronic device obtains a matching degree between the first specified feature value and the first expression feature values stored in the first feature library. In this embodiment, it is assumed that the first specified feature value extracted by the electronic device is "haha".
  • the electronic device acquires the a first expression feature values whose matching degree is greater than the first threshold, a ≥ 1.
  • Here, a = 1 is assumed.
  • the second specified feature value is illustrated by taking a laughing facial expression as an example.
  • the electronic device acquires b second expression feature values whose matching degree is greater than a second threshold, b ⁇ 1.
  • Here, b = 2 is assumed.
  • the candidate expressions are the three expressions "laughing", "smile", and "fangs" corresponding to the first expression feature value whose matching degree is greater than the first threshold, together with the expressions corresponding to the two second expression feature values whose matching degree is greater than the second threshold.
  • the sorting condition includes any one of a repetition number, a history usage count, a recent usage time, and a matching degree.
  • the order of priority between the various sorting conditions may be preset according to actual conditions; for example, the priority order from high to low is the number of repetitions, the historical usage count, the most recent usage time, and the matching degree.
  • the electronic device automatically filters out an alternative expression from the plurality of candidate expressions as an expression that needs to be input, and does not require the user to select or confirm, and simplifies the flow of the expression input, so that the expression Input is more efficient and convenient.
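  • Under the same caveats, the two-feature-library variant might be sketched like this: the first library maps voice feature values to expressions and the second maps facial or gesture feature values, the toy matching degree is exact equality, and the repetition count serves as the top-priority sorting condition, mirroring the "laughing" example.

```python
# Sketch of the two-feature-library variant; libraries, values, and the toy
# exact-equality matching degree are illustrative assumptions.
first_library = {"haha": ["laughing", "smile"]}        # voice feature values
second_library = {"big_smile": ["laughing", "fangs"]}  # facial/gesture feature values

def match(specified_value, library, threshold):
    """Return the expressions of library entries whose (toy) matching degree,
    1.0 on exact equality and 0.0 otherwise, exceeds the threshold."""
    hits = []
    for stored_value, expressions in library.items():
        degree = 1.0 if stored_value == specified_value else 0.0
        if degree > threshold:
            hits.extend(expressions)
    return hits

x_expressions = match("haha", first_library, 0.8)        # from the a first values
y_expressions = match("big_smile", second_library, 0.8)  # from the b second values
candidates = x_expressions + y_expressions

# "laughing" appears in both lists, so with the repetition count as the
# top-priority sorting condition it wins the selection.
best = max(set(candidates), key=candidates.count)
print(best)  # -> "laughing"
```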
  • if, when the electronic device matches the extracted expression feature value with the expression feature values stored in the feature library, it finds no expression feature value whose matching degree is greater than the threshold, the user may be prompted that no matching result was found, for example, in the form of a pop-up window.
  • Step 207: The expression to be input is displayed in the input box or the chat bar.
  • after the electronic device selects the expression to be input from the feature library, the expression is directly displayed in the input box or the chat bar.
  • the electronic device can insert the selected emoticons into the input box 24 for the user to send or directly display in the chat bar 26.
  • the expression input method provided in this embodiment may also select an expression in combination with an environment in which the electronic device is located. Specifically, before the foregoing step 206, the following steps may also be included:
  • the environment information includes at least one of time information, environment volume information, ambient light intensity information, and environment image information.
  • the ambient volume information can be collected by the microphone, the ambient light intensity information can be collected by the light intensity sensor, and the environmental image information can be collected by the camera.
  • the various kinds of environment information are comprehensively analyzed to determine the current usage environment. For example, when the time information is 22:00, the environment volume information is 2 decibels, and the ambient light intensity information is weak, it can be determined that the current usage environment is one in which the user is sleeping. For example, when the time information is 14:00, the environment volume information is 75 decibels, the ambient light intensity information is strong, and the environment image information is a street, it can be determined that the current usage environment is one in which the user is shopping.
  • the correspondence between different usage environments and different candidate feature libraries is pre-stored in the electronic device. After the electronic device acquires the current usage environment, the corresponding candidate feature library is selected as the feature library. Then, the electronic device selects an expression to be input from the feature library according to the extracted expression feature value.
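  • A rule-of-thumb sketch of this environment step follows, with rules that mirror the sleeping/shopping examples above; the thresholds, library contents, and function names are all illustrative assumptions.

```python
# Rule-of-thumb sketch of environment determination and feature-library
# selection; every threshold and library entry is an illustrative assumption.
candidate_feature_libraries = {
    "sleeping": {"yawn": "sleepy_expression"},
    "shopping": {"nice": "thumbs_up_expression"},
    "default": {},
}

def determine_environment(hour, volume_db, light):
    """Map collected environment information to a usage environment."""
    if hour >= 22 and volume_db < 10 and light == "weak":
        return "sleeping"
    if 9 <= hour <= 20 and volume_db > 60 and light == "strong":
        return "shopping"
    return "default"

environment = determine_environment(hour=22, volume_db=2, light="weak")
feature_library = candidate_feature_libraries[environment]  # used by step 206
print(environment)  # -> "sleeping"
```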
  • the correspondence between different expression feature values stored in the feature library and different expressions may be previously set by the system or designer.
  • for example, the emoticon package carries a feature library: after designing the expressions, the designer also sets the correspondence between different expression feature values and different expressions, creates a feature library, and then packs the expressions together with the feature library into an emoticon package.
  • the correspondence between different expression feature values stored in the feature library and different expressions may also be set by the user.
  • the expression input method provided in this embodiment further includes the following steps:
  • for each expression, the electronic device records at least one piece of training information for training the expression.
  • by training the expressions, the user can customize the correspondence between different expression feature values and different expressions.
  • for example, the user selects four commonly used expressions from the expression selection interface, namely expression A, expression B, expression C, and expression D. Taking the training of expression A as an example, the user selects expression A and says "fangs" three times, and the electronic device records the three pieces of training information.
  • the electronic device still collects and records the training information through an input unit such as a microphone or a camera.
  • At least one training feature value is extracted from the at least one training information.
  • the electronic device may extract the training feature value from the training information by a data dimensionality reduction method or a feature value selection method.
  • the training information may be training information in the form of voice, training information in the form of pictures, or training information in the form of video.
  • the training feature value with the largest number of repetitions is used as the expression feature value corresponding to the expression.
  • normally, the training feature values extracted from the training information are the same.
  • for example, when the three pieces of training information recorded by the electronic device are all the user saying "fangs", the three extracted training feature values are usually all "fangs".
  • the electronic device collects training information through an input unit such as a microphone or a camera, there may be interference of the surrounding environment, such as noise or image interference.
  • the training feature values extracted by the electronic device from the training information may therefore differ. In that case, the electronic device takes the most-repeated training feature value as the expression feature value corresponding to the expression. For example, when the three pieces of training information recorded by the electronic device are the user saying "fangs", and two of the three extracted training feature values are "fangs" while the other is "in the case", the electronic device selects "fangs" as the expression feature value corresponding to expression A.
  • the trained correspondence can be stored in the original feature database; the user can also create a custom feature database and store the trained correspondence in the custom feature database.
  • the correspondence between the expression and the expression feature value is set by the user, thereby further improving the user experience.
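  • The majority vote in the "fangs" example can be sketched in a few lines; collections.Counter is one obvious way to pick the most-repeated training feature value, and the variable names are illustrative.

```python
# Sketch of the majority vote in the "fangs" example: the most-repeated
# training feature value becomes the expression feature value.
from collections import Counter

training_feature_values = ["fangs", "fangs", "in the case"]  # from 3 recordings
most_common_value, count = Counter(training_feature_values).most_common(1)[0]
print(most_common_value, count)  # -> fangs 2

feature_library = {most_common_value: "expression_A"}  # stored correspondence
```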
  • the step of detecting whether the cursor is located in the input box may be performed before step 201.
  • the cursor is used to indicate the location where the user inputs text, an expression, or a picture. Referring to FIG. 2B, the cursor 28 is located in the input box 24.
  • the electronic device detects whether the user is using the input box 24 to input content such as characters, expressions, or pictures based on the position of the cursor 28.
  • if the cursor 28 is in the input box 24, it is assumed by default that the user is using the input box 24, at which point step 201 above is performed.
  • the expression input method provided by the embodiment collects input information through an input unit on the electronic device, extracts an expression feature value from the input information, and selects an expression to be input from the feature library according to the extracted expression feature value.
  • the feature library stores the correspondence between different expression feature values and different expressions; this solves the problem that expression input is slow and the process is complicated, thereby simplifying the expression input process and increasing the expression input speed.
  • in addition, voice input information can be collected through the microphone, or picture or video input information can be captured through the camera, to perform expression input, which enriches the ways in which expressions can be input; the user can also set the correspondence between different expression feature values and different expressions, which fully satisfies the user's needs.
  • the foregoing embodiment further provides two ways of selecting an expression that needs to be input.
  • the first method, which analyzes one form of expression feature value to determine the expression to be input, is simple and fast; the second method, which comprehensively analyzes two forms of expression feature values to determine the expression to be input, makes the selected expression more accurate and fully satisfies the user's needs.
  • Xiao Ming opens an application software with information transceiving function installed in the smart TV, and simultaneously opens the front camera of the smart TV to collect pictures of the face area thereof.
  • Xiao Ming's mouth is slightly raised, showing a smiling expression.
  • the smart TV extracts the expression feature value from the collected face region picture, and finds the correspondence between the expression feature value and the expression in the feature library, and then inserts a smile expression in the input box of the chat interface. After that, Xiao Ming showed a sad expression, and the smart TV inserted a sad expression in the input box of the chat interface.
  • Xiaohong uses an instant messaging software installed in the mobile phone to train the expressions and set the correspondence between several sets of expression feature values and expressions.
  • when the mobile phone receives the voice input information "Today is so happy", it inserts the corresponding emoticon in the input box of the chat interface according to the correspondence between the expression feature value "happy" and the expression.
  • when the mobile phone receives the voice input information "snowing outside", it inserts the corresponding emoticon in the input box of the chat interface according to the correspondence between the expression feature value "snowing" and the expression.
  • when the mobile phone receives the voice input information "This snow is really beautiful, I like it", it inserts the corresponding emoticon in the input box of the chat interface according to the correspondence between the expression feature value "like" and the expression.
  • FIG. 3 is a structural block diagram of an expression input device according to an embodiment of the present invention, which is used in an electronic device.
  • the expression input device can be implemented as part or all of the electronic device by software, hardware or a combination of the two.
  • the expression input device includes: a first information collection module 310, a feature extraction module 320, and an expression selection module 330.
  • the first information collection module 310 is configured to collect input information.
  • the feature extraction module 320 is configured to extract an expression feature value from the input information.
  • the expression selection module 330 is configured to select an expression that needs to be input from the feature library according to the expression feature value, and the feature library stores a correspondence between different expression feature values and different expressions.
  • the expression input device provided by the embodiment extracts an expression feature value from the input information by collecting input information, and selects an expression to be input from the feature library according to the expression feature value, and the feature library stores different expression features. Corresponding relationship between value and different expressions; solving the problem that the expression input speed is slow and the process is complicated in the related art; the effect of simplifying the expression input process and improving the speed of expression input is achieved.
  • FIG. 4 is a structural block diagram of an expression input device according to another embodiment of the present invention, which is used in an electronic device.
  • the expression input device can be implemented as part or all of the electronic device by software, hardware or a combination of the two.
  • the expression input device includes: a first information collection module 310, a feature extraction module 320, a second information collection module 321, an environment determination module 322, a feature selection module 323, and an expression selection module 330.
  • the first information collection module 310 is configured to collect input information.
  • the first information collecting module 310 includes: a voice collecting unit 310a and an image collecting unit 310b.
  • the voice collection unit 310a is configured to collect voice input information through a microphone if the input information includes voice input information.
  • the image capturing unit 310b is configured to collect image input information or video input information through the camera if the input information includes picture input information or video input information.
  • the feature extraction module 320 is configured to extract an expression feature value from the input information.
  • the feature extraction module 320 includes at least one extraction unit: a first extraction unit 320a, a second extraction unit 320b, and a third extraction unit 320c.
  • the first extracting unit 320a is configured to perform voice recognition on the voice input information if the input information includes voice input information, to obtain a first specified feature value.
  • the second extracting unit 320b is configured to determine a face area in the picture input information and extract a second specified feature value from the face area, if the input information includes picture input information.
  • the third extracting unit 320c is configured to extract a third specified feature value from the video input information if the input information includes video input information.
  • the expression input device further includes: a second information collection module 321, an environment determination module 322, and a feature selection module 323.
  • the second information collecting module 321 is configured to collect environment information around the electronic device, where the environment information includes at least one of time information, environment volume information, ambient light intensity information, and environment image information.
  • the environment determining module 322 is configured to determine a current usage environment according to the environment information.
  • the feature selection module 323 is configured to select an candidate feature library corresponding to the current use environment from the at least one candidate feature library, and use the candidate feature library as a feature library.
  • the expression selection module 330 is configured to select an expression that needs to be input from the feature library according to the expression feature value, and the feature library stores a correspondence between different expression feature values and different expressions.
  • the expression selection module 330 includes: a feature matching unit 330a, a candidate selection unit 330b, an expression arrangement unit 330c, and an expression determination unit 330d.
  • the feature matching unit 330a is configured to match the expression feature value with the expression feature value stored in the feature library.
  • the candidate selecting unit 330b is configured to use the n expressions corresponding to the m expression feature values whose matching degree is greater than a predetermined threshold as candidate expressions, n ≥ m ≥ 1.
  • the expression arranging unit 330c is configured to select at least one sorting condition according to the preset priority, and sort the n candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of the historical usage count, the most recent usage time, and the matching degree.
  • the expression determining unit 330d is configured to filter out an alternative expression according to the sorting result, and use the candidate expression as an expression to be input.
  • the expression selection module 330 includes: a first matching unit 330e, a first obtaining unit 330f, a second matching unit 330g, a second obtaining unit 330h, a candidate determining unit 330i, a candidate sorting unit 330j, and an expression selection unit 330k.
  • the first matching unit 330e is configured to match the first specified feature value with the first expression feature value stored in the first feature library.
  • the first obtaining unit 330f is configured to obtain a first expression feature value whose matching degree is greater than the first threshold, and a ⁇ 1.
  • a second matching unit 330g configured to match the second specified feature value or the third specified feature value with the second expression feature value stored in the second feature library
  • the second obtaining unit 330h is configured to obtain b second expression feature values whose matching degree is greater than the second threshold, b ⁇ 1.
  • the candidate determining unit 330i is configured to use, as an alternative expression, x expressions corresponding to the a first expression feature values and y expressions corresponding to the b second expression feature values, x ⁇ a, y ⁇ b.
  • the candidate sorting unit 330j is configured to select at least one sorting condition according to the preset priority and sort the candidate expressions according to the at least one sorting condition, where the sorting condition includes any one of the number of repetitions, the historical usage count, the most recent usage time, and the matching degree.
  • the expression selection unit 330k is configured to filter out an alternative expression according to the sorting result, and use the candidate expression as an expression to be input.
  • the feature library includes a first feature library and a second feature library, and the expression feature value includes the first expression feature value and the second expression feature value.
  • the expression display module 331 is configured to display an expression that needs to be input in an input box or a chat bar.
  • the expression input device further includes: an information recording module, a feature recording module, a feature selection module, and a feature storage module.
  • An information recording module for recording at least one training information for training an expression for each expression.
  • a feature recording module configured to extract at least one training feature value from the at least one training information.
  • the feature selection module is configured to use the training feature value with the largest number of repetitions as the expression feature value corresponding to the expression.
  • the feature storage module is configured to store the correspondence between the expression and the expression feature value in the feature library.
  • the expression input device provided by this embodiment collects input information, extracts the expression feature value from the input information, and selects the expression to be input from the feature library according to the extracted expression feature value, where the feature library stores the correspondence between different expression feature values and different expressions; this solves the problem that expression input is slow and the process is complicated, thereby simplifying the expression input process and increasing the expression input speed.
  • in addition, voice input information can be collected through the microphone, or picture or video input information can be captured through the camera, to perform expression input, which enriches the ways in which expressions can be input; the user can also set the correspondence between different expression feature values and different expressions, which fully satisfies the user's needs.
  • when the expression input device provided by the above embodiments inputs an expression, the division into the above functional modules is only an example; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
  • in addition, the expression input device provided by the above embodiments belongs to the same concept as the method embodiments of the present invention; its specific implementation process is described in the method embodiments and is not repeated here.
  • the electronic device 500 can be a mobile phone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, a desktop computer, a smart TV, and the like.
  • the electronic device 500 includes a central processing unit (CPU) 501, a system memory 504 including a random access memory (RAM) 502 and a read only memory (ROM) 503, and a system bus 505 that connects the system memory 504 and the central processing unit 501.
  • the electronic device 500 also includes a basic input/output system (I/O system) 506 that facilitates the transfer of information between components within the electronic device, and a mass storage device 507 for storing an operating system 513, application programs 514, and other program modules 515.
  • the basic input/output system 506 includes a display 508 for displaying information and an input device 509 such as a mouse or keyboard for user input of information. Both the display 508 and the input device 509 are connected to the central processing unit 501 via an input and output controller 510 that is coupled to the system bus 505.
  • the basic input/output system 506 can also include an input and output controller 510 for receiving and processing input from a plurality of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input and output controller 510 also provides output to a display screen, printer, or other type of output device.
  • the mass storage device 507 is connected to the central processing unit 501 by a mass storage controller (not shown) connected to the system bus 505.
  • the mass storage device 507 and its associated electronic device readable medium provide non-volatile storage for the electronic device 500. That is, the mass storage device 507 may include an electronic device readable medium (not shown) such as a hard disk or a CD-ROM drive.
  • the computer readable medium can include computer storage media and communication media.
  • Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Computer storage media include RAM, ROM, EPROM, EEPROM, flash memory or other solid state storage technologies, CD-ROM, DVD or other optical storage, tape cartridges, magnetic tape, magnetic disk storage or other magnetic storage devices.
  • The electronic device 500 may also operate with a remote computer connected through a network such as the Internet. That is, the electronic device 500 can be connected to the network 512 through a network interface unit 511 connected to the system bus 505, or the network interface unit 511 can be used to connect to other types of networks or remote computer systems (not shown).
  • FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • the electronic device may be used to implement the expression input method provided in the foregoing embodiment. Specifically:
  • The electronic device 600 may include an RF (Radio Frequency) circuit 110, a memory 120 including one or more computer readable storage media, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a WiFi (Wireless Fidelity) module 170, a processor 180 having one or more processing cores, a power supply 190, and the like. It will be understood by those skilled in the art that the electronic device structure shown in FIG. 6 does not constitute a limitation on the electronic device, which may include more or fewer components than those illustrated, combine certain components, or use a different arrangement of components. Specifically:
  • The RF circuit 110 can be used for receiving and transmitting signals in the course of sending and receiving information or during a call. In particular, after downlink information from a base station is received, it is handed over to one or more processors 180 for processing; in addition, uplink data is sent to the base station. Generally, the RF circuit 110 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and the like.
  • RF circuitry 110 can also communicate with the network and other devices via wireless communication.
  • The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), e-mail, SMS (Short Messaging Service), and the like.
  • the memory 120 can be used to store software programs and modules, and the processor 180 executes various functional applications and data processing by running software programs and modules stored in the memory 120.
  • The memory 120 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the electronic device 600 (such as audio data, a phone book, etc.), and the like.
  • The memory 120 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. Accordingly, the memory 120 may also include a memory controller to provide the processor 180 and the input unit 130 with access to the memory 120.
  • the input unit 130 can be configured to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function controls.
  • input unit 130 can include touch-sensitive surface 131 as well as other input devices 132.
  • The touch-sensitive surface 131, also referred to as a touch display screen or a trackpad, can collect touch operations by the user on or near it (such as operations performed by the user on or near the touch-sensitive surface 131 using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connecting device according to a preset program.
  • the touch-sensitive surface 131 can include two portions of a touch detection device and a touch controller.
  • The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 180; it can also receive commands sent by the processor 180 and execute them.
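Purely as an illustrative model of this data flow (the classes below are hypothetical; an actual touch controller is a hardware component, not Python code), the detection-to-coordinates-to-processor pipeline can be pictured as:

```python
# Hypothetical model of the touch pipeline: raw detection signals are
# converted into contact coordinates and forwarded to the processor.
class Processor:
    def handle_touch(self, coords: tuple) -> None:
        print(f"touch event at {coords}")

class TouchController:
    def __init__(self, processor: Processor):
        self.processor = processor

    def on_raw_touch(self, raw_x: float, raw_y: float) -> None:
        # Convert raw sensor readings into integer contact coordinates.
        self.processor.handle_touch((round(raw_x), round(raw_y)))

TouchController(Processor()).on_raw_touch(103.6, 58.2)  # touch event at (104, 58)
```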
  • The touch-sensitive surface 131 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave types.
  • the input unit 130 can also include other input devices 132.
  • other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
  • the display unit 140 can be used to display information entered by the user or information provided to the user and various graphical user interfaces of the electronic device 600, which can be composed of graphics, text, icons, video, and any combination thereof.
  • The display unit 140 may include a display panel 141; optionally, the display panel 141 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like.
  • Further, the touch-sensitive surface 131 may cover the display panel 141; when the touch-sensitive surface 131 detects a touch operation on or near it, the operation is transmitted to the processor 180 to determine the type of the touch event, and the processor 180 then provides a corresponding visual output on the display panel 141 according to the type of the touch event.
  • Although in FIG. 6 the touch-sensitive surface 131 and the display panel 141 are implemented as two separate components to realize the input and output functions, in some embodiments the touch-sensitive surface 131 can be integrated with the display panel 141 to realize the input and output functions.
  • Electronic device 600 may also include at least one type of sensor 150, such as a light sensor, motion sensor, and other sensors.
  • The light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 141 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 141 and/or the backlight when the electronic device 600 is moved to the ear.
  • the gravity acceleration sensor can detect the magnitude of acceleration in all directions (usually three axes). When it is stationary, it can detect the magnitude and direction of gravity.
  • The electronic device 600 can also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described here.
  • the audio circuit 160, the speaker 161, and the microphone 162 can provide an audio interface between the user and the electronic device 600.
  • The audio circuit 160 can convert received audio data into an electrical signal and transmit it to the speaker 161, and the speaker 161 converts it into a sound signal for output; on the other hand, the microphone 162 converts a collected sound signal into an electrical signal, which the audio circuit 160 receives and converts into audio data; after the audio data is processed by the processor 180, it is transmitted through the RF circuit 110 to, for example, another electronic device, or output to the memory 120 for further processing.
  • the audio circuit 160 may also include an earbud jack to provide communication of the peripheral earphones with the electronic device 600.
  • WiFi is a short-range wireless transmission technology. Through the WiFi module 170, the electronic device 600 can help the user send and receive e-mail, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access.
  • Although FIG. 6 shows the WiFi module 170, it can be understood that the module is not an essential component of the electronic device 600 and can be omitted as needed without changing the essence of the invention.
  • The processor 180 is the control center of the electronic device 600; it connects various parts of the entire device through various interfaces and lines, and performs the various functions of the electronic device 600 and processes data by running or executing the software programs and/or modules stored in the memory 120 and invoking the data stored in the memory 120, thereby monitoring the device as a whole.
  • the processor 180 may include one or more processing cores; preferably, the processor 180 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like.
  • the modem processor primarily handles wireless communications. It can be understood that the above modem processor may not be integrated into the processor 180.
  • the electronic device 600 also includes a power source 190 (such as a battery) for powering various components.
  • The power source can be logically coupled to the processor 180 through a power management system, so that functions such as charging, discharging, and power consumption management are handled through the power management system.
  • Power supply 190 may also include any one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
  • the electronic device 600 may further include a camera, a Bluetooth module, and the like, and details are not described herein.
  • Specifically, in this embodiment, the display unit of the electronic device is a touch screen display, and the electronic device further includes a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs containing instructions for performing the operations described in the embodiment corresponding to FIG. 1 or FIG. 2A above.
  • Still another embodiment of the present invention provides a computer readable storage medium, which may be the computer readable storage medium included in the memory of the above embodiment, or may be a computer readable storage medium that exists separately and is not assembled into the terminal.
  • the computer readable storage medium stores one or more programs, the one or more programs being used by one or more processors to perform an expression input method, the method comprising:
  • input information is collected; an expression feature value is extracted from the input information; and the expression to be input is selected from the feature library according to the expression feature value, the feature library storing correspondences between different expression feature values and different expressions.
  • extracting the expression feature value from the input information comprises:
  • if the input information includes voice input information, voice recognition is performed on the voice input information to obtain a first specified feature value;
  • if the input information includes picture input information, a face area in the picture input information is determined, and a second specified feature value is extracted from the face area;
  • if the input information includes video input information, a third specified feature value is extracted from the video input information.
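A hedged sketch of this dispatch follows; the recognizer functions are invented placeholders, since the embodiments do not prescribe concrete recognition algorithms:

```python
from typing import Optional

def speech_recognize(data: bytes) -> str:
    # Placeholder: a real system would run speech recognition here.
    return "haha"

def detect_face_area(data: bytes) -> Optional[bytes]:
    # Placeholder: a real system would locate the face region here.
    return data

def classify_face(face: bytes) -> str:
    # Placeholder: a real system would classify the facial expression.
    return "smiling_face"

def extract_video_feature(data: bytes) -> str:
    # Placeholder: e.g. classify facial expressions frame by frame.
    return "smiling_face"

def extract_expression_feature_value(kind: str, data: bytes) -> Optional[str]:
    if kind == "voice":
        return speech_recognize(data)          # first specified feature value
    if kind == "picture":
        face = detect_face_area(data)          # face area, then
        return classify_face(face) if face else None  # second specified value
    if kind == "video":
        return extract_video_feature(data)     # third specified feature value
    return None

print(extract_expression_feature_value("voice", b"..."))  # -> haha
```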
  • the expression to be input is selected from the feature library according to the expression feature value, including:
  • the expression feature value is matched against the expression feature values stored in the feature library, and the n expressions corresponding to the m expression feature values whose matching degree is greater than a predetermined threshold are used as candidate expressions, where n ≥ m ≥ 1;
  • the candidate expressions are sorted according to a sorting condition, and one candidate expression is selected according to the sorting result and used as the expression to be input.
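One possible reading of this threshold-match-then-sort selection, as a non-limiting Python sketch (the fuzzy matching-degree measure and the usage counts are illustrative assumptions, not prescribed by the embodiments):

```python
from difflib import SequenceMatcher

def matching_degree(a: str, b: str) -> float:
    # Hypothetical similarity measure between two feature values.
    return SequenceMatcher(None, a, b).ratio()

def select_from_library(value, library, usage_count, threshold=0.8):
    # library maps a feature value to a list of expressions (one value may
    # correspond to several expressions, hence n >= m >= 1 candidates).
    candidates = []
    for feature_value, expressions in library.items():
        if matching_degree(value, feature_value) > threshold:
            candidates.extend(expressions)
    # Sort the candidates by a sorting condition, e.g. historical usage count.
    candidates.sort(key=lambda e: usage_count.get(e, 0), reverse=True)
    return candidates[0] if candidates else None

library = {"smile": ["😄", "🙂"], "cry": ["😢"]}
usage = {"😄": 12, "🙂": 3}
print(select_from_library("smille", library, usage))  # -> 😄 (fuzzy match)
```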
  • selecting an expression to be input from the feature library according to the expression feature value includes:
  • the first specified feature value is matched against the first expression feature values stored in the first feature library, and the second specified feature value or the third specified feature value is matched against the second expression feature values stored in the second feature library;
  • the x expressions corresponding to the a matched first expression feature values and the y expressions corresponding to the b matched second expression feature values are used as candidate expressions, where x ≥ a and y ≥ b;
  • the sorting condition includes any one of the number of repetitions, the historical usage count, the most recent usage time, and the matching degree;
  • the feature library includes a first feature library and a second feature library, and the expression feature values include a first expression feature value and a second expression feature value.
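To illustrate the two-library arrangement (the library contents and feature-value names below are invented, and exact matching is used only for brevity):

```python
first_library = {"haha": ["😄"], "boohoo": ["😢"]}          # voice feature values
second_library = {"smiling_face": ["🙂"], "frown": ["☹️"]}  # picture/video values

def candidates_from_both(first_value=None, visual_value=None):
    # Gather x expressions from matched first feature values and
    # y expressions from matched second feature values (x >= a, y >= b).
    result = []
    if first_value is not None:
        result.extend(first_library.get(first_value, []))
    if visual_value is not None:
        result.extend(second_library.get(visual_value, []))
    return result

print(candidates_from_both(first_value="haha", visual_value="frown"))  # ['😄', '☹️']
```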
  • before the expression to be input is selected from the feature library according to the expression feature value, the method further includes:
  • collecting environment information of the current use environment, the environment information including at least one of time information, environment volume information, ambient light intensity information, and environment image information;
  • selecting the candidate feature library corresponding to the current use environment from at least one candidate feature library, and using the selected candidate feature library as the feature library.
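A minimal sketch of environment-driven library selection, assuming time of day is the environment information (the library names and hour thresholds are invented for illustration):

```python
from datetime import datetime

candidate_libraries = {
    "daytime": {"smile": ["😄"]},
    "night": {"sleepy": ["😴"]},
}

def pick_feature_library(now: datetime) -> dict:
    # Use the hour of day as the collected environment information.
    return candidate_libraries["daytime" if 7 <= now.hour < 22 else "night"]

feature_library = pick_feature_library(datetime.now())
```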
  • the collecting of the input information includes:
  • the voice input information is collected through the microphone; or
  • the picture input information or the video input information is collected through the camera.
  • before the expression to be input is selected from the feature library according to the expression feature value, the method further includes:
  • the training feature value with the largest number of repetitions is used as the expression feature value corresponding to the expression; and
  • the correspondence between the expression and the expression feature value is stored in the feature library.
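As a hedged sketch of this training step (the sample training values are fabricated; a real system would extract them from repeatedly collected training input):

```python
from collections import Counter

def train_expression(expression: str, training_values: list, library: dict) -> str:
    # The training feature value with the largest number of repetitions
    # becomes the expression feature value for this expression.
    most_common_value, _ = Counter(training_values).most_common(1)[0]
    # Store the correspondence between the expression and the feature value.
    library.setdefault(most_common_value, []).append(expression)
    return most_common_value

library = {}
print(train_expression("😄", ["haha", "haha", "hehe"], library))  # -> haha
```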
  • the method further includes: setting, according to the user's setting operation, the correspondence between different expression feature values and different expressions, and storing the set correspondence in the feature library.
  • The computer readable storage medium provided by the embodiment of the present invention collects input information, extracts an expression feature value from the input information, and selects the expression to be input from a feature library according to the extracted expression feature value, the feature library storing correspondences between different expression feature values and different expressions. This solves the problem that expression input is slow and the input process is complex, thereby simplifying the expression input process and improving the expression input speed.
  • A person skilled in the art will understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware, the program being stored in a computer readable storage medium. The storage medium mentioned may be a read-only memory, a magnetic disk, an optical disc, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to the field of the Internet, and discloses an expression input method and apparatus, and an electronic device. The method comprises: collecting input information; extracting an expression feature value from the input information; and obtaining, from a feature library and according to the expression feature value, an expression that needs to be input and corresponds to the expression feature value, the feature library storing correspondences between different expression feature values and different expressions. In the present invention, by collecting input information, extracting an expression feature value from the input information, and selecting the expression that needs to be input from a feature library according to the extracted expression feature value, the feature library storing correspondences between different expression feature values and different expressions, the problems of slow expression input speed and a complex input process are solved; the expression input process is simplified, and the expression input speed is increased.
PCT/CN2014/095872 2014-02-27 2014-12-31 Procédé et appareil d'entrée d'expression et dispositif électronique WO2015127825A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410069166.9A CN103823561B (zh) 2014-02-27 2014-02-27 表情输入方法和装置
CN201410069166.9 2014-02-27

Publications (1)

Publication Number Publication Date
WO2015127825A1 true WO2015127825A1 (fr) 2015-09-03

Family

ID=50758662

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/095872 WO2015127825A1 (fr) 2014-02-27 2014-12-31 Procédé et appareil d'entrée d'expression et dispositif électronique

Country Status (2)

Country Link
CN (1) CN103823561B (fr)
WO (1) WO2015127825A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109412935A (zh) * 2018-10-12 2019-03-01 北京达佳互联信息技术有限公司 即时通信的发送方法和接收方法、发送装置和接收装置
CN112306254A (zh) * 2019-07-31 2021-02-02 北京搜狗科技发展有限公司 一种表情处理方法、装置和介质
CN114173258A (zh) * 2022-02-07 2022-03-11 深圳市朗琴音响技术有限公司 智能音箱控制方法及智能音箱

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103823561B (zh) * 2014-02-27 2017-01-18 广州华多网络科技有限公司 表情输入方法和装置
WO2016000219A1 (fr) * 2014-07-02 2016-01-07 华为技术有限公司 Procédé de transmission d'informations et dispositif de transmission
CN106789543A (zh) * 2015-11-20 2017-05-31 腾讯科技(深圳)有限公司 会话中实现表情图像发送的方法和装置
CN106886396B (zh) * 2015-12-16 2020-07-07 北京奇虎科技有限公司 表情管理方法及装置
CN105677059A (zh) * 2015-12-31 2016-06-15 广东小天才科技有限公司 一种表情图片输入方法及系统
WO2017120924A1 (fr) * 2016-01-15 2017-07-20 李强生 Procédé de transmission d'informations destiné à être utilisé lors de l'insertion d'une émoticône, et outil de communication instantanée
CN105872838A (zh) * 2016-04-28 2016-08-17 徐文波 即时视频的媒体特效发送方法和装置
CN106020504B (zh) * 2016-05-17 2018-11-27 百度在线网络技术(北京)有限公司 信息输出方法和装置
CN107623830B (zh) * 2016-07-15 2019-03-15 掌赢信息科技(上海)有限公司 一种视频通话方法及电子设备
CN106175727B (zh) * 2016-07-25 2018-11-20 广东小天才科技有限公司 一种应用于可穿戴设备的表情推送方法及可穿戴设备
CN106293120B (zh) * 2016-07-29 2020-06-23 维沃移动通信有限公司 表情输入方法及移动终端
WO2018023576A1 (fr) * 2016-08-04 2018-02-08 薄冰 Procédé d'ajustement d'une technique d'envoi d'émoji selon une rétroaction de marché, et système d'émoji
CN106339103A (zh) * 2016-08-15 2017-01-18 珠海市魅族科技有限公司 图像查询方法和装置
CN106293131A (zh) * 2016-08-16 2017-01-04 广东小天才科技有限公司 表情输入方法及装置
CN106503630A (zh) * 2016-10-08 2017-03-15 广东小天才科技有限公司 一种表情发送方法、设备及系统
CN106503744A (zh) * 2016-10-26 2017-03-15 长沙军鸽软件有限公司 对聊天过程中的输入表情进行自动纠错的方法及装置
CN106682091A (zh) * 2016-11-29 2017-05-17 深圳市元征科技股份有限公司 一种无人机控制方法及装置
CN107315820A (zh) * 2017-07-01 2017-11-03 北京奇虎科技有限公司 基于移动终端的用户交互界面的表情搜索方法及装置
CN107153496B (zh) * 2017-07-04 2020-04-28 北京百度网讯科技有限公司 用于输入表情图标的方法和装置
CN109254669B (zh) * 2017-07-12 2022-05-10 腾讯科技(深圳)有限公司 一种表情图片输入方法、装置、电子设备及系统
CN110019885B (zh) * 2017-08-01 2021-10-15 北京搜狗科技发展有限公司 一种表情数据推荐方法及装置
CN107450746A (zh) * 2017-08-18 2017-12-08 联想(北京)有限公司 一种表情符号的插入方法、装置和电子设备
CN107479723B (zh) * 2017-08-18 2021-01-15 联想(北京)有限公司 一种表情符号的插入方法、装置和电子设备
CN109165072A (zh) * 2018-08-28 2019-01-08 珠海格力电器股份有限公司 一种表情包生成方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101183294A (zh) * 2007-12-17 2008-05-21 腾讯科技(深圳)有限公司 表情输入方法及装置
CN102255820A (zh) * 2010-05-18 2011-11-23 腾讯科技(深圳)有限公司 即时通讯方法和装置
CN102662961A (zh) * 2012-03-08 2012-09-12 北京百舜华年文化传播有限公司 一种语义与图像匹配处理方法、装置及终端设备
CN102890776A (zh) * 2011-07-21 2013-01-23 爱国者电子科技(天津)有限公司 通过面部表情调取表情图释的方法
CN103823561A (zh) * 2014-02-27 2014-05-28 广州华多网络科技有限公司 表情输入方法和装置

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1735240A (zh) * 2004-10-29 2006-02-15 康佳集团股份有限公司 一种手机短消息中表情符号及语音的实现方法
CN102104658A (zh) * 2009-12-22 2011-06-22 康佳集团股份有限公司 一种利用sms短信发送表情的方法、系统及移动终端
CN103353824B (zh) * 2013-06-17 2016-08-17 百度在线网络技术(北京)有限公司 语音输入字符串的方法、装置和终端设备
CN103530313A (zh) * 2013-07-08 2014-01-22 北京百纳威尔科技有限公司 应用信息的搜索方法及装置
CN103529946B (zh) * 2013-10-29 2016-06-01 广东欧珀移动通信有限公司 一种输入方法及装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101183294A (zh) * 2007-12-17 2008-05-21 腾讯科技(深圳)有限公司 表情输入方法及装置
CN102255820A (zh) * 2010-05-18 2011-11-23 腾讯科技(深圳)有限公司 即时通讯方法和装置
CN102890776A (zh) * 2011-07-21 2013-01-23 爱国者电子科技(天津)有限公司 通过面部表情调取表情图释的方法
CN102662961A (zh) * 2012-03-08 2012-09-12 北京百舜华年文化传播有限公司 一种语义与图像匹配处理方法、装置及终端设备
CN103823561A (zh) * 2014-02-27 2014-05-28 广州华多网络科技有限公司 表情输入方法和装置

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109412935A (zh) * 2018-10-12 2019-03-01 北京达佳互联信息技术有限公司 即时通信的发送方法和接收方法、发送装置和接收装置
CN109412935B (zh) * 2018-10-12 2021-12-07 北京达佳互联信息技术有限公司 即时通信的发送方法和接收方法、发送装置和接收装置
CN112306254A (zh) * 2019-07-31 2021-02-02 北京搜狗科技发展有限公司 一种表情处理方法、装置和介质
CN114173258A (zh) * 2022-02-07 2022-03-11 深圳市朗琴音响技术有限公司 智能音箱控制方法及智能音箱
CN114173258B (zh) * 2022-02-07 2022-05-10 深圳市朗琴音响技术有限公司 智能音箱控制方法及智能音箱

Also Published As

Publication number Publication date
CN103823561B (zh) 2017-01-18
CN103823561A (zh) 2014-05-28

Similar Documents

Publication Publication Date Title
WO2015127825A1 (fr) Procédé et appareil d'entrée d'expression et dispositif électronique
WO2021213496A1 (fr) Procédé d'affichage de message et dispositif électronique
CN108093123A (zh) 一种消息通知处理方法、终端及计算机可读存储介质
WO2021077897A1 (fr) Procédé et appareil d'envoi de fichier et dispositif électronique
WO2016110182A1 (fr) Procédé, appareil et terminal pour mettre en correspondance une image d'expression
CN106303070B (zh) 一种通知消息的提示方法、装置及移动终端
US10673790B2 (en) Method and terminal for displaying instant messaging message
CN104238893B (zh) 一种对视频预览图片进行显示的方法和装置
CN108156508B (zh) 弹幕信息处理的方法、装置、移动终端、服务器及系统
CN108885525A (zh) 菜单显示方法及终端
WO2016173453A1 (fr) Procédé d'identification de corps vivant, procédé de génération d'informations et terminal
CN108958629B (zh) 分屏退出方法、装置、存储介质和电子设备
CN110874128B (zh) 可视化数据处理方法和电子设备
CN108573064A (zh) 信息推荐方法、移动终端、服务器及计算机可读存储介质
CN109543014B (zh) 人机对话方法、装置、终端及服务器
US20170160921A1 (en) Media file processing method and terminal
CN108958680A (zh) 显示控制方法、装置、显示系统及计算机可读存储介质
CN107862059A (zh) 一种歌曲推荐方法及移动终端
CN108829444A (zh) 一种自动关闭后台应用的方法、终端和计算机存储介质
CN108628534B (zh) 一种字符展示方法及移动终端
CN110418004A (zh) 截图处理方法、终端及计算机可读存储介质
CN107819936B (zh) 一种短信分类方法、移动终端和存储介质
CN109062643A (zh) 一种显示界面调整方法、装置及终端
CN104794139B (zh) 信息检索方法、装置及系统
CN111897916A (zh) 语音指令识别方法、装置、终端设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14883827

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.01.2017)

122 Ep: pct application non-entry in european phase

Ref document number: 14883827

Country of ref document: EP

Kind code of ref document: A1