CN111078022B - Input method and device - Google Patents

Input method and device

Info

Publication number
CN111078022B
Authority
CN
China
Prior art keywords
user
words
candidate words
candidate
facial expression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811213349.8A
Other languages
Chinese (zh)
Other versions
CN111078022A (en)
Inventor
费腾
崔欣
张扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sogou Technology Development Co Ltd
Original Assignee
Beijing Sogou Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sogou Technology Development Co Ltd filed Critical Beijing Sogou Technology Development Co Ltd
Priority to CN201811213349.8A
Publication of CN111078022A
Application granted
Publication of CN111078022B
Current legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02: Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023: Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233: Character input methods
    • G06F3/0236: Character input methods using selection techniques to select from displayed items
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an input method comprising the following steps: determining a plurality of candidate words based on an input operation of a user; collecting a facial expression image of the user; determining the current mood of the user based on the facial expression image; selecting, from the plurality of candidate words, a target candidate word that matches the current mood of the user; and frequency-modulating the target candidate word (i.e., adjusting its use-frequency ranking). The input intention of the user is thereby satisfied, improving the intelligence and accuracy of the input method. The invention also provides a corresponding input device.

Description

Input method and device
Technical Field
The present invention relates to the field of electronic information technologies, and in particular, to an input method and apparatus.
Background
With the continuous development of technology, more and more electronic devices (such as smart phones and tablet computers) have entered people's lives, bringing much convenience. When using such a device, a user often relies on an input method application for more convenient text entry.
In general, when a user inputs Chinese characters with a pinyin input method, many characters are homophones, and a "nine-grid" (T9-style) keyboard additionally maps several pinyin letters onto a single key. The user therefore usually obtains a plurality of candidate words for a given input and must select the desired word among them.
Currently, candidate words are mostly ranked by inferring the user's input intention from context information, so as to recommend the candidate words the user is most likely to want. However, when context information is insufficient, it is difficult to judge the input intention accurately, and candidate words matching the user's intention cannot be recommended, which reduces the intelligence and accuracy of the input method.
Disclosure of Invention
By providing an input method and an input device, the embodiments of the present invention solve the prior-art technical problem that, when context information is insufficient, candidate words matching the user's input intention cannot be recommended. Instead, candidate words matching the user's current mood are recommended, thereby satisfying the input intention of the user and improving the intelligence and accuracy of the input method.
In a first aspect, an embodiment of the present invention provides the following technical solution:
an input method, comprising:
determining a plurality of candidate words based on an input operation of a user;
collecting a facial expression image of the user;
determining a current mood of the user based on the facial expression image;
selecting, from the plurality of candidate words, a target candidate word that matches the current mood of the user;
and frequency-modulating the target candidate word.
Preferably, the collecting a facial expression image of the user includes:
starting a photographing or video-capture function to obtain a picture or video containing the facial expression image of the user;
and identifying the facial expression image of the user from the picture or the video.
Preferably, the determining the current mood of the user based on the facial expression image includes:
analyzing the facial expression image based on an image recognition technology to determine the current mood of the user.
Preferably, the selecting a target candidate word that matches the current mood from the plurality of candidate words includes:
acquiring an emotion tag of each candidate word in the plurality of candidate words;
and selecting, from the plurality of candidate words, a candidate word whose emotion tag matches the current mood of the user as the target candidate word.
Preferably, the frequency-modulating the target candidate word so that the target candidate word is displayed ahead of the other candidate words includes:
increasing the use-frequency parameter of the target candidate word so that it is higher than that of the other candidate words;
and displaying the plurality of candidate words in a candidate bar in descending order of the use-frequency parameter.
In a second aspect, an embodiment of the present invention provides the following technical solution:
An input device, comprising:
a first determining unit, configured to determine a plurality of candidate words based on an input operation of a user;
an acquisition unit, configured to collect a facial expression image of the user;
a second determining unit, configured to determine a current mood of the user based on the facial expression image;
a selecting unit, configured to select, from the plurality of candidate words, a target candidate word that matches the current mood of the user;
and a frequency modulation unit, configured to frequency-modulate the target candidate word.
Preferably, the acquisition unit is specifically configured to:
starting a photographing or video-capture function to obtain a picture or video containing the facial expression image of the user; and identifying the facial expression image of the user from the picture or the video.
Preferably, the second determining unit is specifically configured to:
analyzing the facial expression image based on an image recognition technology to determine the current mood of the user.
Preferably, the selecting unit is specifically configured to:
acquiring an emotion tag of each candidate word in the plurality of candidate words; and selecting, from the plurality of candidate words, a candidate word whose emotion tag matches the current mood of the user as the target candidate word.
Preferably, the frequency modulation unit is specifically configured to:
increasing the use-frequency parameter of the target candidate word so that it is higher than that of the other candidate words; and displaying the plurality of candidate words in a candidate bar in descending order of the use-frequency parameter.
In a third aspect, an embodiment of the present invention provides the following technical solution:
an input device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the program:
determining a plurality of candidate words based on an input operation of a user; collecting a facial expression image of the user; determining a current mood of the user based on the facial expression image; selecting, from the plurality of candidate words, a target candidate word that matches the current mood of the user; and frequency-modulating the target candidate word.
In a fourth aspect, an embodiment of the present invention provides the following technical solution:
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the following steps:
determining a plurality of candidate words based on an input operation of a user; collecting a facial expression image of the user; determining a current mood of the user based on the facial expression image; selecting, from the plurality of candidate words, a target candidate word that matches the current mood of the user; and frequency-modulating the target candidate word.
One or more technical solutions provided in the embodiments of the present invention at least have the following technical effects or advantages:
In an embodiment of the invention, an input method is disclosed, comprising: determining a plurality of candidate words based on an input operation of a user; collecting a facial expression image of the user; determining a current mood of the user based on the facial expression image; selecting, from the plurality of candidate words, a target candidate word that matches the current mood of the user; and frequency-modulating the target candidate word. This solves the prior-art technical problem that candidate words matching the user's input intention cannot be recommended when context information is insufficient: candidate words matching the user's current mood are recommended instead, thereby satisfying the input intention of the user and improving the intelligence and accuracy of the input method.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person skilled in the art may derive other drawings from them without inventive effort.
FIG. 1 is a flow chart of an input method according to an embodiment of the invention;
FIG. 2 is a block diagram of an input device according to an embodiment of the present invention;
FIG. 3 is a block diagram of an input device according to an embodiment of the present invention;
FIG. 4 is a structural diagram of an input device serving as a server according to an embodiment of the present invention.
Detailed Description
By providing an input method and an input device, the embodiments of the present invention solve the prior-art technical problem that, when context information is insufficient, candidate words matching the user's input intention cannot be recommended. Instead, candidate words matching the user's current mood are recommended, thereby satisfying the input intention of the user and improving the intelligence and accuracy of the input method.
The technical solutions of the embodiments of the present invention aim to solve the above technical problem, and the overall idea is as follows:
an input method, comprising: determining a plurality of candidate words based on an input operation of a user; collecting a facial expression image of the user; determining a current mood of the user based on the facial expression image; selecting, from the plurality of candidate words, a target candidate word that matches the current mood of the user; and frequency-modulating the target candidate word.
In order to better understand the above technical solutions, the following detailed description will refer to the accompanying drawings and specific embodiments.
First, the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships are possible; for example, "A and/or B" may mean: A exists alone, A and B both exist, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
The term "plurality" as used herein means "two or more" and includes the case of "two".
Example 1
This embodiment provides an input method applied to a terminal device. The terminal device may be a smart phone, a tablet computer, a smart television, or the like; this embodiment does not specifically limit the type of device. In addition, an input method client is installed in the terminal device, and the program code corresponding to the method of this embodiment may be integrated in the input method client; that is, the execution subject of the method in this embodiment is the input method client.
Specifically, as shown in FIG. 1, the input method includes:
step S101: based on the input operation of the user, a plurality of candidate words are determined.
In a specific implementation, the user may perform pinyin input with either a full keyboard or a nine-grid keyboard; when the user performs an input operation, a plurality of candidate words can be determined based on that operation.
For example, taking the "Sudoku" pinyin input mode as an example, if the user presses the "7-PQRS", "4-GHI", "7-PQRS", "3-DEF", "6-MNO" keys in the keyboard in sequence, it can be determined that the candidate words are "dead", "private", "angry", "people", "odd", "even", "spell", "flager", etc.
For example, taking the "Sudoku" pinyin input mode as an example, if the user presses the "9-WXYZ", "4-GHI", "7-PQRS", "4-GHI" keys in the keyboard in turn, it can be determined that the candidate words are "gesture", "happy", "ceremony", "consciousness", and so on.
Step S102: facial expression images of the user are collected.
In a specific implementation, the electronic device is provided with a camera, through which a facial expression image of the user can be collected, for example via the front-facing camera of a smart phone.
In a specific implementation, a photographing (or video-capture) function may be started to obtain one or more pictures (or a video) containing the facial expression image of the user, and the facial expression image is then identified from the pictures (or the video).
Step S103: based on the facial expression image, a current mood of the user is determined.
In a specific implementation, different moods usually correspond to different facial expressions, so after the facial expression image of the user is obtained, it can be analyzed using an image recognition technology to determine the current mood of the user, for example happy, angry, or sad.
In a specific implementation, the camera can be kept on continuously, so that successive expression images of the user are analyzed to determine the current mood. During analysis, multiple frames of expression images within a preset time window (for example, 0.5 seconds, 1 second, or 2 seconds) may be analyzed to determine the current mood. Further, the analysis may be repeated at predetermined intervals (for example, every 0.5 seconds or every 1 second), so as to track the current mood of the user more accurately.
Any image recognition technique in the related art may be used to analyze the facial expression image; this embodiment does not limit which technique is specifically employed.
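As a rough illustration of steps S102 and S103, the sketch below captures frames with OpenCV and crops out the face region, leaving the mood classifier as a stub, since the embodiment deliberately does not commit to any particular image recognition technique. The camera index, frame count, and the `classify_mood` stub are all assumptions.

```python
# A minimal sketch of steps S102-S103, assuming OpenCV (cv2) is installed
# and the front camera is device 0. classify_mood is a hypothetical
# stand-in for whatever expression-recognition model is actually used.
import cv2

def capture_face_images(num_frames=15):
    """Step S102: grab frames and crop the user's face region out of each."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)      # front-facing camera (assumed index)
    faces = []
    for _ in range(num_frames):    # roughly the "preset time window"
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
            faces.append(gray[y:y + h, x:x + w])
    cap.release()
    return faces

def classify_mood(face_images):
    """Step S103: map expression images to a mood label such as 'happy',
    'angry', or 'sad'. Left unimplemented on purpose: any image
    recognition technique may be plugged in here."""
    raise NotImplementedError("plug in an expression classifier")
```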
Step S104: selecting, from the plurality of candidate words, a target candidate word that matches the current mood of the user.
In a specific implementation, one or more emotion tags can be added in advance to words carrying emotional color; common emotion tags include "happy", "angry", and "sad".
For example, the word "angry" may be tagged "angry", the word "dead" may be tagged "sad", the word "happy event" may be tagged "happy", and so on.
For neutral words without emotional color (such as "its people", "odd", "even", "private", "flagman", "gesture", "ceremony", "consciousness"), however, no emotion tag needs to be added.
Further, step S104 includes:
acquiring the emotion tag of each candidate word in the plurality of candidate words (specifically, for the candidate words that have emotion tags, extracting those tags); and selecting, from the plurality of candidate words, a candidate word whose emotion tag matches the current mood of the user as the target candidate word.
For example, when the candidate word is "dead", "private", "angry", "people thereof", "odd", "even", "spell", "flagman", etc., if the current mood of the user is angry, the candidate word "angry" is regarded as the target candidate word because the emotion tag of the candidate word "angry" is "angry" and matches the current mood of the user.
If the current mood of the user is sad, and the emotion label of the candidate word ' dead ' is sad ' and is matched with the current mood of the user, the candidate word ' dead ' is taken as a target candidate word.
For example, when the candidate word is "gesture", "happiness", "ceremony", "consciousness", or the like, if the current mood of the user is happy, the candidate word "happiness" is regarded as the target candidate word as the emotion label of the candidate word "happiness" matches the current mood of the user.
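The selection step then reduces to a tag lookup, as in the sketch below. The `EMOTION_TAGS` table encodes the tagging examples given earlier; neutral words have no entry, so they can never be selected as targets. The table name and set-based representation are assumptions for illustration.

```python
# Sketch of step S104: pick the candidates whose emotion tag matches the
# user's current mood. Neutral words are deliberately absent from the table.
EMOTION_TAGS = {
    "angry": {"angry"},
    "dead": {"sad"},
    "happy event": {"happy"},
}

def select_target_candidates(candidates, current_mood):
    """Return the candidate words whose emotion tags match the current mood."""
    return [w for w in candidates
            if current_mood in EMOTION_TAGS.get(w, set())]

candidates = ["dead", "private", "angry", "its people", "odd"]
print(select_target_candidates(candidates, "angry"))  # ['angry']
print(select_target_candidates(candidates, "sad"))    # ['dead']
```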
Step S105: the target candidate word is frequency-modulated so that it is displayed ahead of the other candidate words.
In the prior art, after a plurality of candidate words is determined based on the input operation of the user, a use-frequency parameter is generally maintained for each candidate word, indicating how often the user uses that word; candidate words with higher use-frequency parameters are recommended preferentially, so as to satisfy the input intention of the user as far as possible. The use-frequency parameter may serve as one weight in recommending candidate words; other weights may include the context.
In this embodiment, the target candidate word needs to be recommended to the user preferentially, as follows: increase the use-frequency parameter of the target candidate word; then display the plurality of candidate words sequentially in the candidate bar in descending order of their use-frequency parameters.
For example, when the target candidate word is "angry", its use-frequency parameter is increased, so that when the candidate words ("dead", "private", "angry", "its people", "odd", "even", "spell", "flagman", and so on) are displayed in the candidate bar, the display position of "angry" moves forward. For instance, if "angry" was originally at position 7, after frequency modulation (combined with the context) it may be displayed at position 3, moving forward by 4 positions.
Alternatively, the use-frequency parameter of "angry" may be raised above that of all the other candidate words, so that "angry" is displayed ahead of them in the candidate bar, achieving the aim of preferentially recommending the target candidate word "angry" to the user.
Similarly, when the target candidate word is "dead", its use-frequency parameter is increased, so that its display position in the candidate bar moves forward; for instance, from position 6 to position 2 after frequency modulation (combined with the context), moving forward by 4 positions. Alternatively, its use-frequency parameter may be raised above that of all the other candidate words, so that "dead" is displayed first.
Likewise, when the target candidate word is "happy event", its use-frequency parameter is increased, so that when the candidate words ("gesture", "happy event", "ceremony", "consciousness", and so on) are displayed in the candidate bar, the display position of "happy event" moves forward; for instance, from position 7 to position 2, moving forward by 5 positions. Alternatively, its use-frequency parameter may be raised above that of all the other candidate words, so that "happy event" is displayed first.
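A compact sketch of this frequency-modulation step: boost the target's use-frequency parameter above every other candidate's, then sort high-to-low for display in the candidate bar. The boost policy (current maximum plus one) and the plain dictionary of frequencies are illustrative assumptions.

```python
# Sketch of step S105: raise the target word's use-frequency parameter
# above all others, then present candidates in descending order of it.
def modulate_and_rank(use_freq, targets):
    """Boost the targets' use-frequency parameters; return display order."""
    if targets:
        boost = max(use_freq.values()) + 1  # assumed policy: top everything
        for word in targets:
            use_freq[word] = boost
    return sorted(use_freq, key=use_freq.get, reverse=True)

use_freq = {"dead": 40, "private": 90, "angry": 25, "its people": 10}
print(modulate_and_rank(use_freq, ["angry"]))
# ['angry', 'private', 'dead', 'its people'] -- the target displays first
```

In a production input method the boost would be combined with the context weight mentioned above rather than overriding it outright; the sketch shows only the ranking mechanics.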
The technical scheme provided by the embodiment of the invention at least has the following technical effects or advantages:
In an embodiment of the invention, an input method is disclosed, comprising: determining a plurality of candidate words based on an input operation of a user; collecting a facial expression image of the user; determining a current mood of the user based on the facial expression image; selecting, from the plurality of candidate words, a target candidate word that matches the current mood of the user; and frequency-modulating the target candidate word. This solves the prior-art technical problem that candidate words matching the user's input intention cannot be recommended when context information is insufficient: candidate words matching the user's current mood are recommended instead, thereby satisfying the input intention of the user and improving the intelligence and accuracy of the input method.
Example two
Based on the same inventive concept, as shown in FIG. 2, this embodiment provides an input device, including:
a first determining unit 201, configured to determine a plurality of candidate words based on an input operation of a user;
an acquisition unit 202, configured to collect a facial expression image of the user;
a second determining unit 203, configured to determine a current mood of the user based on the facial expression image;
a selecting unit 204, configured to select, from the plurality of candidate words, a target candidate word that matches the current mood of the user;
and a frequency modulation unit 205, configured to frequency-modulate the target candidate word.
As an alternative embodiment, the acquisition unit 202 is specifically configured to:
starting a photographing or video-capture function to obtain a picture or video containing the facial expression image of the user; and identifying the facial expression image of the user from the picture or the video.
As an alternative embodiment, the second determining unit 203 is specifically configured to:
analyzing the facial expression image based on an image recognition technology to determine the current mood of the user.
As an alternative embodiment, the selection unit 204 is specifically configured to:
acquiring an emotion tag of each candidate word in the plurality of candidate words; and selecting, from the plurality of candidate words, a candidate word whose emotion tag matches the current mood of the user as the target candidate word.
As an alternative embodiment, the frequency modulation unit 205 is specifically configured to:
increasing the use-frequency parameter of the target candidate word so that it is higher than that of the other candidate words; and displaying the plurality of candidate words in a candidate bar in descending order of the use-frequency parameter.
Since the input device described in this embodiment is the device used to implement the input method of the embodiments of the present invention, a person skilled in the art can, based on the input method described above, understand the specific implementation of this input device and its various modifications; how the input device implements the method is therefore not described in detail here. Any device used by a person skilled in the art to implement the input method of the embodiments of the present invention falls within the protection scope of the present invention.
The technical scheme provided by the embodiment of the invention at least has the following technical effects or advantages:
In an embodiment of the present invention, an input device is disclosed, comprising: a first determining unit, configured to determine a plurality of candidate words based on an input operation of a user; an acquisition unit, configured to collect a facial expression image of the user; a second determining unit, configured to determine a current mood of the user based on the facial expression image; a selecting unit, configured to select, from the plurality of candidate words, a target candidate word that matches the current mood of the user; and a frequency modulation unit, configured to frequency-modulate the target candidate word. This solves the prior-art technical problem that candidate words matching the user's input intention cannot be recommended when context information is insufficient: candidate words matching the user's current mood are recommended instead, thereby satisfying the input intention of the user and improving the intelligence and accuracy of the input method.
The specific manner in which the various modules perform operations in the apparatus of the above embodiment has been described in detail in the method embodiments and is not elaborated here.
FIG. 3 is a block diagram of an input device according to an exemplary embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, or the like.
Referring to fig. 3, apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operation of the apparatus 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 may include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the device 800. Examples of such data include instructions for any application or method operating on the device 800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power component 806 provides power to the various components of the device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen providing an output interface between the apparatus 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with it. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 800 is in an operational mode, such as a shooting mode or a video mode. Each front or rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the apparatus 800. For example, the sensor assembly 814 may detect the on/off state of the apparatus 800 and the relative positioning of components, such as the display and keypad of the apparatus 800; it may also detect a change in position of the apparatus 800 or of one of its components, the presence or absence of user contact with the apparatus 800, the orientation or acceleration/deceleration of the apparatus 800, and a change in its temperature. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, and may also include a light sensor, such as a CMOS or CCD image sensor, for imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the apparatus 800 and other devices. The apparatus 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements, for performing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as the memory 804 including instructions executable by the processor 820 of the apparatus 800 to perform the above-described method. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer readable storage medium: when the instructions in the storage medium are executed by a processor of the apparatus 800, the apparatus 800 is enabled to perform an input method, the method comprising: determining a plurality of candidate words based on an input operation of a user; collecting a facial expression image of the user; determining a current mood of the user based on the facial expression image; selecting, from the plurality of candidate words, a target candidate word that matches the current mood of the user; and frequency-modulating the target candidate word.
FIG. 4 is a block diagram of an input device serving as a server in an embodiment of the present invention. The server 1900 may vary considerably in configuration or performance, and may include one or more central processing units (CPUs) 1922 (e.g., one or more processors), memory 1932, and one or more storage media 1930 (e.g., one or more mass storage devices) storing applications 1942 or data 1944. The memory 1932 and the storage media 1930 may provide transient or persistent storage. The program stored in a storage medium 1930 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Further, the central processing unit 1922 may be configured to communicate with the storage medium 1930 and execute, on the server 1900, the series of instruction operations in the storage medium 1930.
The server 1900 may also include one or more power supplies 1926, one or more wired or wireless network interfaces 1950, one or more input/output interfaces 1958, one or more keyboards 1956, and/or one or more operating systems 1941, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings and described above, and that various modifications and changes may be made without departing from its scope. The scope of the present invention is limited only by the appended claims; any modifications, equivalents, improvements, and the like made within the spirit and principles of the present invention shall fall within its protection scope.

Claims (10)

1. An input method, comprising:
adding one or more emotion tags to words carrying emotional color, and adding no emotion tag to neutral words without emotional color;
determining a plurality of candidate words based on an input operation of a user;
collecting a facial expression image of the user;
determining a current mood of the user based on the facial expression image;
selecting, from the plurality of candidate words, a target candidate word that matches the current mood of the user, comprising: selecting, based on the emotion tags of the plurality of candidate words, a target candidate word that matches the current mood of the user;
and frequency-modulating the target candidate word, comprising:
increasing the use-frequency parameter of the target candidate word so that it is higher than that of the other candidate words;
and displaying the plurality of candidate words in a candidate bar in descending order of the use-frequency parameter.
2. The input method of claim 1, wherein the collecting a facial expression image of the user comprises:
starting a photographing or video-capture function to obtain a picture or video containing the facial expression image of the user;
and identifying the facial expression image of the user from the picture or the video.
3. The input method of claim 1, wherein the determining the current mood of the user based on the facial expression image comprises:
analyzing the facial expression image based on an image recognition technology to determine the current mood of the user.
4. The input method of claim 1, wherein the selecting a target candidate word that matches the current mood from the plurality of candidate words comprises:
acquiring an emotion tag of each candidate word in the plurality of candidate words;
and selecting, from the plurality of candidate words, a candidate word whose emotion tag matches the current mood of the user as the target candidate word.
5. An input device, comprising:
adding one or more emotion tags to words carrying emotional color, and adding no emotion tag to neutral words without emotional color;
a first determining unit, configured to determine a plurality of candidate words based on an input operation of a user;
an acquisition unit, configured to collect a facial expression image of the user;
a second determining unit, configured to determine a current mood of the user based on the facial expression image;
a selecting unit, configured to select, from the plurality of candidate words, a target candidate word that matches the current mood of the user, including: selecting, based on the emotion tags of the plurality of candidate words, a target candidate word that matches the current mood of the user;
and a frequency modulation unit, configured to frequency-modulate the target candidate word, the frequency modulation unit being specifically configured to:
increase the use-frequency parameter of the target candidate word so that it is higher than that of the other candidate words;
and display the plurality of candidate words in a candidate bar in descending order of the use-frequency parameter.
6. The input device of claim 5, wherein the acquisition unit is specifically configured to:
start a photographing or video-capture function to obtain a picture or video containing the facial expression image of the user; and identify the facial expression image of the user from the picture or the video.
7. The input device of claim 5, wherein the second determining unit is specifically configured to:
analyze the facial expression image based on an image recognition technology to determine the current mood of the user.
8. The input device of claim 5, wherein the selecting unit is specifically configured to:
acquire an emotion tag of each candidate word in the plurality of candidate words; and select, from the plurality of candidate words, a candidate word whose emotion tag matches the current mood of the user as the target candidate word.
9. An input device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor performs the following steps when executing the program:
adding one or more emotion tags to words carrying emotional color, and adding no emotion tag to neutral words without emotional color; determining a plurality of candidate words based on an input operation of a user; collecting a facial expression image of the user; determining a current mood of the user based on the facial expression image; selecting, from the plurality of candidate words, a target candidate word that matches the current mood of the user, comprising: selecting, based on the emotion tags of the plurality of candidate words, a target candidate word that matches the current mood of the user; and frequency-modulating the target candidate word, comprising: increasing the use-frequency parameter of the target candidate word so that it is higher than that of the other candidate words; and displaying the plurality of candidate words in a candidate bar in descending order of the use-frequency parameter.
10. A computer readable storage medium having stored thereon a computer program, wherein the program, when executed by a processor, performs the following steps:
adding one or more emotion tags to words carrying emotional color, and adding no emotion tag to neutral words without emotional color; determining a plurality of candidate words based on an input operation of a user; collecting a facial expression image of the user; determining a current mood of the user based on the facial expression image; selecting, from the plurality of candidate words, a target candidate word that matches the current mood of the user, comprising: selecting, based on the emotion tags of the plurality of candidate words, a target candidate word that matches the current mood of the user; and frequency-modulating the target candidate word, comprising: increasing the use-frequency parameter of the target candidate word so that it is higher than that of the other candidate words; and displaying the plurality of candidate words in a candidate bar in descending order of the use-frequency parameter.
CN201811213349.8A 2018-10-18 2018-10-18 Input method and device Active CN111078022B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811213349.8A CN111078022B (en) 2018-10-18 2018-10-18 Input method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811213349.8A CN111078022B (en) 2018-10-18 2018-10-18 Input method and device

Publications (2)

Publication Number Publication Date
CN111078022A CN111078022A (en) 2020-04-28
CN111078022B (en) 2024-04-23

Family

ID=70308515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811213349.8A Active CN111078022B (en) 2018-10-18 2018-10-18 Input method and device

Country Status (1)

Country Link
CN (1) CN111078022B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8756527B2 (en) * 2008-01-18 2014-06-17 Rpx Corporation Method, apparatus and computer program product for providing a word input mechanism

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102314441A (en) * 2010-06-30 2012-01-11 百度在线网络技术(北京)有限公司 Method for user to input individualized primitive data and equipment and system
CN102955569A (en) * 2012-10-18 2013-03-06 北京天宇朗通通信设备股份有限公司 Method and device for text input
CN105929976A (en) * 2016-05-25 2016-09-07 广州市久邦数码科技有限公司 Input method-based dynamic expression input method and system
CN106896932A (en) * 2016-06-07 2017-06-27 阿里巴巴集团控股有限公司 A kind of candidate word recommends method and device
CN106527752A (en) * 2016-09-23 2017-03-22 百度在线网络技术(北京)有限公司 Method and device for providing input candidate items
CN108628911A (en) * 2017-03-24 2018-10-09 微软技术许可有限责任公司 It is predicted for expression input by user
CN107943317A (en) * 2017-11-01 2018-04-20 北京小米移动软件有限公司 Input method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Instant messaging with emotion-embedded vectorized handwritings on mobile devices; Nai-Sheng Syu et al.; EURASIP Journal on Image and Video Processing; 2017-03-11; Vol. 2017, No. 23; pp. 1-15 *
Construction of a social network emotion dictionary based on emoticons; Ma Bingnan et al.; Computer Engineering and Design; 2016-07-04; Vol. 37, No. 05; pp. 1129-1133 *

Also Published As

Publication number Publication date
CN111078022A (en) 2020-04-28

Similar Documents

Publication Publication Date Title
US10509540B2 (en) Method and device for displaying a message
EP3316527A1 (en) Method and device for managing notification messages
CN110633700B (en) Video processing method and device, electronic equipment and storage medium
CN107193606B (en) Application distribution method and device
US10078422B2 (en) Method and device for updating a list
US11335348B2 (en) Input method, device, apparatus, and storage medium
US20170140254A1 (en) Method and device for adding font
CN107229403B (en) Information content selection method and device
EP3261046A1 (en) Method and device for image processing
CN106331328B (en) Information prompting method and device
CN111797262A (en) Poetry generation method and device, electronic equipment and storage medium
CN112948704A (en) Model training method and device for information recommendation, electronic equipment and medium
CN107943317B (en) Input method and device
CN113807253A (en) Face recognition method and device, electronic equipment and storage medium
CN111596832B (en) Page switching method and device
CN113920293A (en) Information identification method and device, electronic equipment and storage medium
CN110955800A (en) Video retrieval method and device
CN106447747B (en) Image processing method and device
CN110213062B (en) Method and device for processing message
CN114356476B (en) Content display method, content display device, electronic equipment and storage medium
CN112667852B (en) Video-based searching method and device, electronic equipment and storage medium
CN111078022B (en) Input method and device
US20170060822A1 (en) Method and device for storing string
CN114051157A (en) Input method and device
CN110928621B (en) Information searching method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant