CN107943317B - Input method and device - Google Patents

Input method and device

Info

Publication number
CN107943317B
Authority
CN
China
Prior art keywords
candidate
candidate word
word
image corresponding
preset
Prior art date
Legal status
Active
Application number
CN201711060047.7A
Other languages
Chinese (zh)
Other versions
CN107943317A (en)
Inventor
卢山
王熙
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201711060047.7A
Publication of CN107943317A
Application granted
Publication of CN107943317B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/02 - Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F 3/023 - Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F 3/0233 - Character input methods

Abstract

The disclosure relates to an input method and device. The method includes: receiving an input character; acquiring a plurality of candidate words corresponding to the character and an image corresponding to each candidate word; and outputting the candidate words while displaying the image corresponding to each candidate word either in a preset area near that candidate word or in the background area corresponding to that candidate word. Because an image is displayed together with each candidate word, the user gains an additional visual dimension when selecting the target word, which increases the probability of selecting the target word correctly and improves the user experience.

Description

Input method and device
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to an input method and device.
Background
When a user inputs characters into a terminal device, a pinyin input method or a wubi (five-stroke) input method is generally adopted.
At present, when a user inputs characters with an input method, the characters are entered through an input device corresponding to the terminal device. After the terminal device receives the characters, it searches a preset lexicon for candidate words associated with the characters and displays those candidate words on its screen, and the user selects the target word to be input from among the candidate words.
Disclosure of Invention
To overcome the problems in the related art, embodiments of the present disclosure provide an input method and apparatus. The technical scheme is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided an input method, including:
receiving an input character;
acquiring a plurality of candidate words corresponding to the characters and an image corresponding to each candidate word;
outputting the candidate words;
displaying an image corresponding to each candidate word in a preset area near each candidate word, or displaying an image corresponding to each candidate word in a background area corresponding to each candidate word.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects: an input character is received; a plurality of candidate words corresponding to the character and an image corresponding to each candidate word are acquired; and the candidate words are output while the image corresponding to each candidate word is displayed either in a preset area near that candidate word or in the background area corresponding to that candidate word. Because an image is displayed together with each candidate word, the user gains an additional visual dimension when selecting the target word, which increases the probability of a correct selection and improves the user experience.
In one embodiment, the obtaining a plurality of candidate words corresponding to the character and an image corresponding to each candidate word includes:
searching a plurality of candidate objects corresponding to the character from a first preset table according to the character, wherein the first preset table comprises a mapping relation between the character and the candidate objects, and each candidate object comprises: candidate words and images corresponding to the candidate words;
or, alternatively,
acquiring a plurality of candidate words corresponding to the characters from a preset word bank, wherein the preset word bank comprises a corresponding relation between the characters and the candidate words; and acquiring an image corresponding to each candidate word from a network side.
In one embodiment, the obtaining a plurality of candidate words corresponding to the character and an image corresponding to each candidate word includes:
acquiring a plurality of candidate words corresponding to the characters;
determining the part of speech of each candidate word;
acquiring an image corresponding to each candidate word from a second preset mapping table according to the part of speech of each candidate word; the second preset mapping table comprises corresponding relations between various parts of speech and images.
In one embodiment, the images include, but are not limited to: moving pictures or still pictures.
According to a second aspect of the embodiments of the present disclosure, there is provided an input device including:
the receiving module is used for receiving input characters;
the acquisition module is used for acquiring a plurality of candidate words corresponding to the characters received by the receiving module and an image corresponding to each candidate word;
the first output module is used for outputting the candidate words acquired by the acquisition module;
and the second output module is used for displaying the image corresponding to the candidate word in a preset area near each candidate word output by the first output module, or displaying the image corresponding to the candidate word in a background area corresponding to each candidate word output by the first output module.
In one embodiment, the obtaining module comprises: a search submodule;
the search sub-module is configured to search, according to the character received by the receiving module, a plurality of candidate objects corresponding to the character from a first preset table, where the first preset table includes a mapping relationship between the character and the candidate objects, and each candidate object includes: the candidate words and the images corresponding to the candidate words.
In one embodiment, the obtaining module comprises: a first obtaining submodule and a second obtaining submodule;
the first obtaining sub-module is configured to obtain, from a preset lexicon, a plurality of candidate words corresponding to the characters received by the receiving module, where the preset lexicon includes a correspondence between the characters and the candidate words;
the second obtaining sub-module is configured to obtain, from a network side, an image corresponding to each candidate word obtained by the first obtaining sub-module.
In one embodiment, the obtaining module comprises: a third obtaining submodule, a determining submodule and a fourth obtaining submodule;
the third obtaining sub-module is configured to obtain a plurality of candidate words corresponding to the characters received by the receiving module;
the determining submodule is configured to determine a part-of-speech of each candidate word acquired by the third acquiring submodule;
the fourth obtaining submodule is configured to obtain, from a second preset mapping table, an image corresponding to each candidate word according to the part-of-speech of each candidate word determined by the determining submodule; the second preset mapping table comprises corresponding relations between various parts of speech and images.
In one embodiment, the images include, but are not limited to: moving pictures or still pictures.
According to a third aspect of the embodiments of the present disclosure, there is provided an input device including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
receiving an input character;
acquiring a plurality of candidate words corresponding to the characters and an image corresponding to each candidate word;
outputting the candidate words;
displaying an image corresponding to each candidate word in a preset area near each candidate word, or displaying an image corresponding to each candidate word in a background area corresponding to each candidate word.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of:
receiving an input character;
acquiring a plurality of candidate words corresponding to the characters and an image corresponding to each candidate word;
outputting the candidate words;
displaying an image corresponding to each candidate word in a preset area near each candidate word, or displaying an image corresponding to each candidate word in a background area corresponding to each candidate word.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flowchart of an input method according to an exemplary embodiment.
FIG. 2 is a first schematic diagram of a display interface according to an exemplary embodiment.
FIG. 3 is a second schematic diagram of a display interface according to an exemplary embodiment.
FIG. 4 is a third schematic diagram of a display interface according to an exemplary embodiment.
FIG. 5 is a fourth schematic diagram of a display interface according to an exemplary embodiment.
FIG. 6 is a first flowchart of step S102 of the input method according to an exemplary embodiment.
FIG. 7 is a second flowchart of step S102 of the input method according to an exemplary embodiment.
FIG. 8 is a second flowchart of an input method according to an exemplary embodiment.
FIG. 9 is a fifth schematic diagram of a display interface according to an exemplary embodiment.
FIG. 10 is a block diagram of an input device according to an exemplary embodiment.
FIG. 11 is a first block diagram of the acquisition module 12 in an input device according to an exemplary embodiment.
FIG. 12 is a second block diagram of the acquisition module 12 in an input device according to an exemplary embodiment.
FIG. 13 is a third block diagram of the acquisition module 12 in an input device according to an exemplary embodiment.
FIG. 14 is a block diagram of an input device 80 according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
When the user selects the target word to be input from the candidate words, the user may easily select the wrong word because many candidate words are similar. For example, when "jqh" is input in the five-stroke (wubi) input method, the candidate words include "evening", "night", and "dinner", and a hasty user who intends to select the target word "dinner" may mistakenly select "evening" instead.
In the present disclosure, an input character is received; a plurality of candidate words corresponding to the character and an image corresponding to each candidate word are acquired; and the candidate words are output while the image corresponding to each candidate word is displayed either in a preset area near that candidate word or in the background area corresponding to that candidate word. Because an image is displayed together with each candidate word, the user gains an additional visual dimension when selecting the target word, which increases the probability of a correct selection and improves the user experience.
FIG. 1 is a flowchart of an input method according to an exemplary embodiment. As shown in FIG. 1, the method includes the following steps S101 to S104:
in step S101, an input character is received.
In step S102, a plurality of candidate words corresponding to the characters and an image corresponding to each candidate word are obtained.
In step S103, the candidate words are output.
In step S104, an image corresponding to the candidate word is displayed in a preset area near each candidate word, or an image corresponding to the candidate word is displayed in a background area corresponding to each candidate word.
For example, when a user wants to input a character on a terminal device, the user enters the character through an input method provided on the terminal device, and the terminal device receives the input character.
It should be noted that the character may be letters, or a handwriting trace input by the user through a handwriting pad.
After an input character is received, in order to improve the accuracy with which the user selects the target word, not only a plurality of candidate words corresponding to the character but also an image corresponding to each candidate word are acquired. The candidate words can then be displayed to the user on the screen of the terminal device, with the image corresponding to each candidate word shown in a preset area near that candidate word or in the background area corresponding to that candidate word. The screen thus no longer shows dry text alone but text accompanied by images, so the user is far less likely to select the wrong target word, which effectively improves the user experience.
When the image corresponding to a candidate word is displayed in a preset area near that candidate word, the preset area may be located in front of, behind, above, or below the candidate word.
For example, FIG. 2 shows an input method display interface. The image corresponding to each candidate word may be displayed behind that candidate word; the box behind each candidate word in FIG. 3 is the display area of the corresponding image. Alternatively, the image may be displayed below each candidate word; the box below each candidate word in FIG. 4 is the display area of the corresponding image.
Displaying the image in a preset area near each candidate word occupies additional space in the input method display interface. To save space, the image corresponding to each candidate word may instead be displayed in the background area corresponding to that candidate word.
When the image corresponding to a candidate word is displayed in the background area corresponding to that candidate word, the image may be used as a background image for the candidate word, as shown by the box in FIG. 5.
In the present disclosure, an input character is received; a plurality of candidate words corresponding to the character and an image corresponding to each candidate word are acquired; and the candidate words are output while the image corresponding to each candidate word is displayed either in a preset area near that candidate word or in the background area corresponding to that candidate word. Because an image is displayed together with each candidate word, the user gains an additional visual dimension when selecting the target word, which increases the probability of a correct selection and improves the user experience.
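By way of illustration, the flow of steps S101 to S104 can be condensed into a short sketch. The following Java code is a minimal, non-authoritative sketch that assumes the table-lookup form of step S102 described below; all type and method names (CandidateWord, DisplayMode, onCharacterReceived) are hypothetical and are not defined by the present disclosure.

```java
// Minimal illustrative sketch of steps S101-S104; not the patented
// implementation. All names are hypothetical.
import java.util.List;
import java.util.Map;

public class InputMethodSketch {
    // The two display options described in step S104.
    enum DisplayMode { NEAR_PRESET_AREA, BACKGROUND_AREA }

    // A candidate word paired with the image corresponding to it.
    record CandidateWord(String word, String imageUri) {}

    private final Map<String, List<CandidateWord>> candidateTable;
    private final DisplayMode mode;

    InputMethodSketch(Map<String, List<CandidateWord>> table, DisplayMode mode) {
        this.candidateTable = table;
        this.mode = mode;
    }

    // S101: receive an input character (letters, or an identifier for a
    // handwriting trace from a handwriting pad).
    void onCharacterReceived(String character) {
        // S102: acquire the candidate words and the image for each of them.
        List<CandidateWord> candidates =
                candidateTable.getOrDefault(character, List.of());
        // S103 + S104: output each candidate word together with its image,
        // either beside the word or as its background.
        for (CandidateWord c : candidates) {
            String placement = (mode == DisplayMode.NEAR_PRESET_AREA)
                    ? "beside the word" : "as the word's background";
            System.out.printf("%s  [image %s shown %s]%n",
                    c.word(), c.imageUri(), placement);
        }
    }
}
```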
In an implementation manner, the step S102 may be implemented as: searching a plurality of candidate objects corresponding to the characters from a first preset mapping table according to the characters, wherein the first preset mapping table comprises mapping relations between the characters and the candidate objects, and each candidate object comprises: the candidate words and the images corresponding to the candidate words.
A first preset mapping table may be preset in the terminal device, and when an input character is received, a plurality of candidate objects corresponding to the character are determined from the first preset mapping table.
For example, Table 1 shows such a first preset mapping table, in which the image corresponding to the candidate word "evening" is a clock, the image corresponding to the candidate word "dinner" is a picture of a meal, and the image corresponding to the candidate word "night" is a moon:
TABLE 1
[Table 1 is provided as an image in the original publication.]
When the character input by the user is "jqh", the candidate objects corresponding to the character "jqh" are determined from Table 1 above.
The technical solution provided by this embodiment of the disclosure may have the following beneficial effect: by looking up the candidate objects corresponding to the character in the first preset mapping table, the delay in determining the candidate words and their corresponding images can be effectively reduced.
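As a rough sketch of this variant, assume the first preset mapping table is an in-memory map from an input code to candidate objects; the sample entries loosely mirror Table 1, and the class and field names are illustrative only, not taken from the disclosure.

```java
// Illustrative sketch of the first-preset-mapping-table lookup. The table
// entries approximate Table 1 and are assumptions, not patent content.
import java.util.List;
import java.util.Map;

public class FirstPresetTableSketch {
    // A candidate object: the candidate word plus its corresponding image.
    record CandidateObject(String word, String image) {}

    static final Map<String, List<CandidateObject>> FIRST_PRESET_TABLE = Map.of(
            "jqh", List.of(
                    new CandidateObject("evening", "clock.png"),
                    new CandidateObject("dinner", "meal.png"),
                    new CandidateObject("night", "moon.png")));

    // One lookup resolves both the candidate words and their images.
    static List<CandidateObject> lookup(String character) {
        return FIRST_PRESET_TABLE.getOrDefault(character, List.of());
    }

    public static void main(String[] args) {
        lookup("jqh").forEach(c ->
                System.out.println(c.word() + " -> " + c.image()));
    }
}
```

Because a single lookup yields both the words and their images, no separate retrieval step is needed, which is where the reduced delay comes from.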
In another implementation, storing the first preset mapping table on the terminal device occupies considerable memory because the table contains images. To save memory, step S102 may instead be implemented as the following steps S1021 to S1022, as shown in FIG. 6:
in step S1021, a plurality of candidate words corresponding to the characters are obtained from a preset lexicon, where the preset lexicon includes a corresponding relationship between the characters and the candidate words.
In step S1022, an image corresponding to each candidate word is acquired from the network side.
To save the memory of the terminal device, a preset lexicon such as the one shown in Table 2 may be pre-stored on the terminal device. When an input character is received, the candidate words corresponding to the character are first looked up in the preset lexicon, and then the image corresponding to each candidate word is acquired from the network side.
TABLE 2
[Table 2 is provided as an image in the original publication.]
The terminal equipment can acquire the image corresponding to each candidate word from the network side in a wired or wireless mode.
Because the image corresponding to each candidate word is acquired from the network side, the acquired images are more diverse, which makes the display more engaging.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: by acquiring a plurality of candidate words corresponding to the characters from the preset word bank and further acquiring an image corresponding to each candidate word from the network side, the memory of the terminal equipment is effectively saved.
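A sketch of this memory-saving variant under stated assumptions: only a text lexicon is stored locally, and images are fetched from the network side over HTTP. The image-service URL is a placeholder rather than a real endpoint, and the synchronous fetch is a simplification.

```java
// Illustrative sketch of steps S1021-S1022: local text lexicon plus a
// network-side image fetch. The URL below is a made-up placeholder.
import java.io.IOException;
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.Map;

public class LexiconPlusNetworkSketch {
    // S1021: the preset lexicon stores only character -> candidate words,
    // with no images, so it occupies far less memory.
    static final Map<String, List<String>> PRESET_LEXICON =
            Map.of("jqh", List.of("evening", "dinner", "night"));

    static final HttpClient CLIENT = HttpClient.newHttpClient();

    // S1022: acquire the image for one candidate word from the network side.
    static byte[] fetchImage(String word) throws IOException, InterruptedException {
        String url = "https://image-service.example.com/search?q="
                + URLEncoder.encode(word, StandardCharsets.UTF_8);
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        return CLIENT.send(request, HttpResponse.BodyHandlers.ofByteArray()).body();
    }

    public static void main(String[] args) throws Exception {
        for (String word : PRESET_LEXICON.getOrDefault("jqh", List.of())) {
            byte[] image = fetchImage(word); // wired or wireless, per the text
            System.out.println(word + ": fetched " + image.length + " bytes");
        }
    }
}
```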
In one embodiment, as shown in FIG. 7, the above step S102 can be implemented as the following steps S1023-S1025:
in step S1023, a plurality of candidate words corresponding to the characters are acquired.
Obtaining the plurality of candidate words corresponding to the character may be performed in a manner similar to step S1021 described above, and details are not repeated here.
In step S1024, the part of speech of each candidate word is determined.
For example, the part of speech of each candidate word may be obtained from the network side, or from a preset part-of-speech table that records the correspondence between candidate words and their parts of speech. Of course, the part of speech of each candidate word may also be determined in other manners, which the present disclosure does not limit.
In step S1025, an image corresponding to each candidate word is obtained from the second preset mapping table according to the part of speech of each candidate word; the second preset mapping table includes correspondence between various parts of speech and images.
Different images can be preset for the candidate words based on their parts of speech. For example, the images may be solid-color pictures of different colors (e.g., a red picture for nouns and a green picture for verbs), or pictures of different preset types (e.g., a smiling-face picture for nouns and a crying-face picture for verbs), or other images; the present disclosure does not limit this.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: and determining an image for the candidate word according to the part of speech of the candidate word, so that the user experience can be effectively improved.
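A sketch of steps S1023 to S1025, assuming both the part-of-speech table and the second preset mapping table are in-memory maps; the POS tags and the solid-color image names follow the red-for-noun, green-for-verb example above and are otherwise assumptions.

```java
// Illustrative sketch of the part-of-speech variant (S1023-S1025).
// Table contents are assumptions modeled on the examples in the text.
import java.util.List;
import java.util.Map;

public class PosImageSketch {
    enum PartOfSpeech { NOUN, VERB, ADJECTIVE }

    // Preset part-of-speech table: candidate word -> part of speech (S1024).
    static final Map<String, PartOfSpeech> POS_TABLE = Map.of(
            "dinner", PartOfSpeech.NOUN,
            "eat", PartOfSpeech.VERB,
            "late", PartOfSpeech.ADJECTIVE);

    // Second preset mapping table: part of speech -> image (S1025); here,
    // solid-color pictures as suggested in the description.
    static final Map<PartOfSpeech, String> SECOND_PRESET_TABLE = Map.of(
            PartOfSpeech.NOUN, "red.png",
            PartOfSpeech.VERB, "green.png",
            PartOfSpeech.ADJECTIVE, "blue.png");

    static String imageFor(String candidateWord) {
        PartOfSpeech pos = POS_TABLE.get(candidateWord);          // S1024
        return pos == null ? null : SECOND_PRESET_TABLE.get(pos); // S1025
    }

    public static void main(String[] args) {
        List.of("dinner", "eat", "late").forEach(w ->
                System.out.println(w + " -> " + imageFor(w)));
    }
}
```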
Notably, the images described above include, but are not limited to: moving pictures or still pictures.
FIG. 8 is a flowchart of an input method according to an exemplary embodiment. As shown in FIG. 8, the method includes the following steps S201 to S204:
in step S201, an input character is received.
In step S202, a plurality of candidate objects corresponding to the character are searched from a first preset table according to the character, where the first preset table includes mapping relationships between the character and the candidate objects, and each candidate object includes: the candidate words and the images corresponding to the candidate words.
In step S203, the candidate words are output.
In step S204, an image corresponding to the candidate word is displayed in a preset area near each candidate word, or an image corresponding to the candidate word is displayed in a background area corresponding to each candidate word.
Taking Table 1 as an example, when the character input by the user is "jqh", the candidate objects corresponding to the character "jqh" are determined from Table 1, and the interface shown in FIG. 9 is displayed on the screen of the terminal device.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods.
Fig. 10 is a block diagram illustrating an input device that may be implemented as part or all of an electronic device via software, hardware, or a combination of both, according to an example embodiment. As shown in fig. 10, the input device includes:
a receiving module 11, configured to receive an input character;
an obtaining module 12, configured to obtain a plurality of candidate words corresponding to the characters received by the receiving module 11 and an image corresponding to each candidate word;
a first output module 13, configured to output the candidate word obtained by the obtaining module 12;
a second output module 14, configured to display an image corresponding to each candidate word in a preset area near each candidate word output by the first output module 13, or display an image corresponding to each candidate word in a background area corresponding to each candidate word output by the first output module 13.
In one embodiment, as shown in fig. 11, the obtaining module 12 includes: a search sub-module 121;
the search sub-module 121 is configured to search, according to the character received by the receiving module 11, a plurality of candidate objects corresponding to the character from a first preset table, where the first preset table includes a mapping relationship between the character and the candidate objects, and each candidate object includes: candidate words and images corresponding to the candidate words;
in one embodiment, as shown in fig. 12, the obtaining module 12 includes: a first acquisition submodule 122 and a second acquisition submodule 123;
the first obtaining sub-module 122 is configured to obtain, from a preset word library, a plurality of candidate words corresponding to the characters received by the receiving module 11, where the preset word library includes a correspondence between the characters and the candidate words;
the second obtaining sub-module 123 is configured to obtain, from a network side, an image corresponding to each candidate word obtained by the first obtaining sub-module 122.
In one embodiment, as shown in fig. 13, the obtaining module 12 includes: a third acquisition submodule 124, a determination submodule 125 and a fourth acquisition submodule 126;
the third obtaining sub-module 124 is configured to obtain a plurality of candidate words corresponding to the characters received by the receiving module 11;
the determining submodule 125 is configured to determine a part of speech of each candidate word acquired by the third acquiring submodule 124;
the fourth obtaining submodule 126 is configured to obtain, from a second preset mapping table, an image corresponding to each candidate word according to the part-of-speech of each candidate word determined by the determining submodule 125; the second preset mapping table comprises corresponding relations between various parts of speech and images.
According to a third aspect of the embodiments of the present disclosure, there is provided an input device including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
receiving an input character;
acquiring a plurality of candidate words corresponding to the characters and an image corresponding to each candidate word;
outputting the candidate words;
displaying an image corresponding to each candidate word in a preset area near each candidate word, or displaying an image corresponding to each candidate word in a background area corresponding to each candidate word.
The processor may be further configured to:
in one embodiment, the obtaining a plurality of candidate words corresponding to the character and an image corresponding to each candidate word includes:
searching a plurality of candidate objects corresponding to the character from a first preset table according to the character, wherein the first preset table comprises a mapping relation between the character and the candidate objects, and each candidate object comprises: candidate words and images corresponding to the candidate words;
or, alternatively,
acquiring a plurality of candidate words corresponding to the characters from a preset word bank, wherein the preset word bank comprises a corresponding relation between the characters and the candidate words; and acquiring an image corresponding to each candidate word from a network side.
In one embodiment, the obtaining a plurality of candidate words corresponding to the character and an image corresponding to each candidate word includes:
acquiring a plurality of candidate words corresponding to the characters;
determining the part of speech of each candidate word;
acquiring an image corresponding to each candidate word from a second preset mapping table according to the part of speech of each candidate word; the second preset mapping table comprises corresponding relations between various parts of speech and images.
In one embodiment, the images include, but are not limited to: moving pictures or still pictures.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 14 is a block diagram illustrating an input apparatus 80 for a terminal device according to an exemplary embodiment. For example, the apparatus 80 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
The apparatus 80 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 80, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 80. Examples of such data include instructions for any application or method operating on the device 80, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the device 80. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 80.
The multimedia component 808 includes a screen that provides an output interface between the device 80 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 80 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 80 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 80. For example, the sensor assembly 814 may detect the open/closed status of the device 80, the relative positioning of the components, such as a display and keypad of the device 80, the change in position of the device 80 or a component of the device 80, the presence or absence of user contact with the device 80, the orientation or acceleration/deceleration of the device 80, and the change in temperature of the device 80. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the apparatus 80 and other devices. The device 80 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 80 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the apparatus 80 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium in which instructions, when executed by a processor of an apparatus 80, enable the apparatus 80 to perform the input method described above, the method comprising:
receiving an input character;
acquiring a plurality of candidate words corresponding to the characters and an image corresponding to each candidate word;
outputting the candidate words;
displaying an image corresponding to each candidate word in a preset area near each candidate word, or displaying an image corresponding to each candidate word in a background area corresponding to each candidate word.
In one embodiment, the obtaining a plurality of candidate words corresponding to the character and an image corresponding to each candidate word includes:
searching a plurality of candidate objects corresponding to the character from a first preset table according to the character, wherein the first preset table comprises a mapping relation between the character and the candidate objects, and each candidate object comprises: candidate words and images corresponding to the candidate words;
or, alternatively,
acquiring a plurality of candidate words corresponding to the characters from a preset word bank, wherein the preset word bank comprises a corresponding relation between the characters and the candidate words; and acquiring an image corresponding to each candidate word from a network side.
In one embodiment, the obtaining a plurality of candidate words corresponding to the character and an image corresponding to each candidate word includes:
acquiring a plurality of candidate words corresponding to the characters;
determining the part of speech of each candidate word;
acquiring an image corresponding to each candidate word from a second preset mapping table according to the part of speech of each candidate word; the second preset mapping table comprises corresponding relations between various parts of speech and images.
In one embodiment, the images include, but are not limited to: moving pictures or still pictures.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (9)

1. An input method, comprising:
receiving an input character, wherein the character is input by a user through an input method provided on a terminal device;
acquiring a plurality of candidate words corresponding to the characters and an image corresponding to each candidate word;
outputting the candidate words;
displaying an image corresponding to each candidate word in a preset area near each candidate word, or displaying an image corresponding to each candidate word in a background area corresponding to each candidate word;
the obtaining of the multiple candidate words corresponding to the character and the image corresponding to each candidate word includes:
acquiring a plurality of candidate words corresponding to the characters;
determining the part of speech of each candidate word;
acquiring an image corresponding to each candidate word from a second preset mapping table according to the part of speech of each candidate word; the second preset mapping table comprises corresponding relations between various parts of speech and images.
2. The method of claim 1, wherein the obtaining the candidate words corresponding to the character and the image corresponding to each candidate word comprises:
searching a plurality of candidate objects corresponding to the character from a first preset table according to the character, wherein the first preset table comprises a mapping relation between the character and the candidate objects, and each candidate object comprises: candidate words and images corresponding to the candidate words;
or, alternatively,
acquiring a plurality of candidate words corresponding to the characters from a preset word bank, wherein the preset word bank comprises a corresponding relation between the characters and the candidate words; and acquiring an image corresponding to each candidate word from a network side.
3. The method of claim 1, wherein the image includes, but is not limited to: moving pictures or still pictures.
4. An input device, comprising:
the receiving module is used for receiving input characters, wherein the characters are input by a user through an input method provided on the terminal device;
the acquisition module is used for acquiring a plurality of candidate words corresponding to the characters received by the receiving module and an image corresponding to each candidate word;
the first output module is used for outputting the candidate words acquired by the acquisition module;
a second output module, configured to display an image corresponding to the candidate word in a preset area near each candidate word output by the first output module, or display an image corresponding to the candidate word in a background area corresponding to each candidate word output by the first output module;
the acquisition module includes: a third obtaining submodule, a determining submodule and a fourth obtaining submodule;
the third obtaining sub-module is configured to obtain a plurality of candidate words corresponding to the characters received by the receiving module;
the determining submodule is configured to determine a part-of-speech of each candidate word acquired by the third acquiring submodule;
the fourth obtaining submodule is configured to obtain, from a second preset mapping table, an image corresponding to each candidate word according to the part-of-speech of each candidate word determined by the determining submodule; the second preset mapping table comprises corresponding relations between various parts of speech and images.
5. The apparatus of claim 4, wherein the obtaining module comprises: a search submodule;
the search sub-module is configured to search, according to the character received by the receiving module, a plurality of candidate objects corresponding to the character from a first preset table, where the first preset table includes a mapping relationship between the character and the candidate objects, and each candidate object includes: the candidate words and the images corresponding to the candidate words.
6. The apparatus of claim 4, wherein the obtaining module comprises: a first obtaining submodule and a second obtaining submodule;
the first obtaining sub-module is configured to obtain, from a preset lexicon, a plurality of candidate words corresponding to the characters received by the receiving module, where the preset lexicon includes a correspondence between the characters and the candidate words;
the second obtaining sub-module is configured to obtain, from a network side, an image corresponding to each candidate word obtained by the first obtaining sub-module.
7. The apparatus of claim 4, wherein the image includes, but is not limited to: moving pictures or still pictures.
8. An input device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
receiving an input character, wherein the character is input by a user through an input method provided on a terminal device;
acquiring a plurality of candidate words corresponding to the characters and an image corresponding to each candidate word;
outputting the candidate words;
displaying an image corresponding to each candidate word in a preset area near each candidate word, or displaying an image corresponding to each candidate word in a background area corresponding to each candidate word;
the obtaining of the multiple candidate words corresponding to the character and the image corresponding to each candidate word includes:
acquiring a plurality of candidate words corresponding to the characters;
determining the part of speech of each candidate word;
acquiring an image corresponding to each candidate word from a second preset mapping table according to the part of speech of each candidate word; the second preset mapping table comprises corresponding relations between various parts of speech and images.
9. A computer readable storage medium having computer instructions stored thereon which, when executed by a processor, perform the steps of:
receiving an input character, wherein the character is input by a user through an input method provided on a terminal device;
acquiring a plurality of candidate words corresponding to the characters and an image corresponding to each candidate word;
outputting the candidate words;
displaying an image corresponding to each candidate word in a preset area near each candidate word, or displaying an image corresponding to each candidate word in a background area corresponding to each candidate word;
the obtaining of the multiple candidate words corresponding to the character and the image corresponding to each candidate word includes:
acquiring a plurality of candidate words corresponding to the characters;
determining the part of speech of each candidate word;
acquiring an image corresponding to each candidate word from a second preset mapping table according to the part of speech of each candidate word; the second preset mapping table comprises corresponding relations between various parts of speech and images.
CN201711060047.7A 2017-11-01 2017-11-01 Input method and device Active CN107943317B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711060047.7A CN107943317B (en) 2017-11-01 2017-11-01 Input method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711060047.7A CN107943317B (en) 2017-11-01 2017-11-01 Input method and device

Publications (2)

Publication Number Publication Date
CN107943317A CN107943317A (en) 2018-04-20
CN107943317B true CN107943317B (en) 2021-08-06

Family

ID=61934091

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711060047.7A Active CN107943317B (en) 2017-11-01 2017-11-01 Input method and device

Country Status (1)

Country Link
CN (1) CN107943317B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109271038B (en) * 2018-07-17 2023-06-27 努比亚技术有限公司 Candidate word recommendation method, terminal and computer readable storage medium
CN110442248A (en) * 2019-06-20 2019-11-12 上海萌家网络科技有限公司 A kind of input method and input system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102314441A (en) * 2010-06-30 2012-01-11 百度在线网络技术(北京)有限公司 Method for user to input individualized primitive data and equipment and system
CN105786207A (en) * 2016-02-25 2016-07-20 百度在线网络技术(北京)有限公司 Information input method and device

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1932730A (en) * 2005-09-14 2007-03-21 黄金富 Candidate words displaying method
CN101739143B (en) * 2009-12-03 2014-05-07 深圳市世纪光速信息技术有限公司 Character inputting method and character inputting system
CN103365833B (en) * 2012-03-28 2016-06-08 百度在线网络技术(北京)有限公司 A kind of input candidate word reminding method based on context and system
CN103793434A (en) * 2012-11-02 2014-05-14 北京百度网讯科技有限公司 Content-based image search method and device
US20140164981A1 (en) * 2012-12-11 2014-06-12 Nokia Corporation Text entry
CN103092969A (en) * 2013-01-22 2013-05-08 上海量明科技发展有限公司 Method, client side and system for conducting streaming media retrieval to input method candidate item
CN105335036A (en) * 2014-06-27 2016-02-17 北京搜狗科技发展有限公司 Input interaction method and input method system
CN106855748A (en) * 2015-12-08 2017-06-16 阿里巴巴集团控股有限公司 A kind of data inputting method, device and intelligent terminal
JP2019504413A (en) * 2015-12-29 2019-02-14 エム・ゼット・アイ・ピィ・ホールディングス・リミテッド・ライアビリティ・カンパニーMz Ip Holdings, Llc System and method for proposing emoji
CN106383595A (en) * 2016-10-28 2017-02-08 维沃移动通信有限公司 Method for adjusting display interface of input method and mobile terminal
CN107247731A (en) * 2017-05-04 2017-10-13 深圳哇哇鱼网络科技有限公司 A kind of semantics recognition recommends graphical method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102314441A (en) * 2010-06-30 2012-01-11 百度在线网络技术(北京)有限公司 Method for user to input individualized primitive data and equipment and system
CN105786207A (en) * 2016-02-25 2016-07-20 百度在线网络技术(北京)有限公司 Information input method and device

Also Published As

Publication number Publication date
CN107943317A (en) 2018-04-20

Similar Documents

Publication Publication Date Title
US10296201B2 (en) Method and apparatus for text selection
US9959487B2 (en) Method and device for adding font
US11335348B2 (en) Input method, device, apparatus, and storage medium
CN107229403B (en) Information content selection method and device
WO2016206295A1 (en) Character determination method and device
CN111797262A (en) Poetry generation method and device, electronic equipment and storage medium
CN106331328B (en) Information prompting method and device
US10229165B2 (en) Method and device for presenting tasks
CN107943317B (en) Input method and device
CN104951522B (en) Method and device for searching
CN113920293A (en) Information identification method and device, electronic equipment and storage medium
CN107179837B (en) Input method and device
CN106447747B (en) Image processing method and device
US20160349947A1 (en) Method and device for sending message
CN111596832A (en) Page switching method and device
CN109799916B (en) Candidate item association method and device
CN106919302B (en) Operation control method and device of mobile terminal
CN111092971A (en) Display method and device for displaying
CN110648657A (en) Language model training method, language model construction method and language model construction device
US20170060822A1 (en) Method and device for storing string
CN109917927B (en) Candidate item determination method and device
CN107730452B (en) Image splicing method and device
CN111797215A (en) Dialogue method, dialogue device and storage medium
CN113157703B (en) Data query method and device, electronic equipment and storage medium
CN111078022B (en) Input method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant