CN102955569A - Method and device for text input - Google Patents

Method and device for text input

Info

Publication number
CN102955569A
CN102955569A CN2012103982099A CN201210398209A
Authority
CN
China
Prior art keywords
user
information
candidate word
text input
expression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012103982099A
Other languages
Chinese (zh)
Other versions
CN102955569B (en)
Inventor
党志立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING TIANYU LANGTONG COMMUNICATION EQUIPMENT Co Ltd
Original Assignee
BEIJING TIANYU LANGTONG COMMUNICATION EQUIPMENT Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING TIANYU LANGTONG COMMUNICATION EQUIPMENT Co Ltd
Priority to CN201210398209.9A
Publication of CN102955569A
Application granted
Publication of CN102955569B
Legal status: Expired - Fee Related
Anticipated expiration

Abstract

The invention provides a method and a device for text input. The method comprises the following steps: detecting a text input instruction of a user; obtaining current expression information of the user; and outputting corresponding candidate words according to the text input instruction and the current expression information. The method and the device solve the problem in the related art that output candidate words do not take the user's expression into account, so that the output candidate words are more accurate and more user-friendly.

Description

Method and device for text input
Technical field
The present invention relates to the field of mobile terminals, and in particular to a method and a device for text input.
Background art
With the development of computer technology, terminals of all kinds have become an indispensable part of people's life and work, and the input method is one of the most frequently used functions of a terminal. How to filter out, quickly and efficiently, the words the user intends to enter according to the characters the user presses, and to present them for selection, is a difficult problem, and the design of the candidate word lexicon is particularly crucial. For mobile terminals, the limited resources impose higher requirements on the space occupied by the lexicon and on its search algorithm.
In existing schemes, the candidate lexicon of an input method generally performs matching with a fixed algorithm: the user enters characters through the keyboard or by handwriting on the touch screen, candidate words are then queried in the dictionary according to the characters or stroke information entered by the user, and the related candidate words are listed for the user to select from. This approach ignores the particular context in which the user enters text, such as the user's facial expression.
For the problem in the related art that the output candidate words do not take the user's expression into account, no effective solution has so far been proposed.
Summary of the invention
The invention provides a method and a device for text input to address at least the above problem.
According to an aspect of the present invention, a method of text input is provided, comprising: detecting a text input instruction of a user; obtaining current expression information of the user; and outputting corresponding candidate words according to the text input instruction and the current expression information.
Preferably, before the text input instruction of the user is detected, the method further comprises: setting candidate words corresponding to expression information.
Preferably, obtaining the current expression information of the user comprises: obtaining current facial feature information of the user; and recognizing the current facial feature information to obtain the current expression information.
Preferably, obtaining the current facial feature information of the user comprises: obtaining the current facial feature information of the user through a camera of the terminal.
Preferably, outputting the corresponding candidate words according to the text input instruction and the current expression information comprises: determining the candidate words corresponding to the expression information; and outputting those candidate words together with the other candidate words according to the text input instruction.
Preferably, the expression information comprises happy information, sad information or excited information.
According to another aspect of the present invention, a device for text input is provided, applied to a terminal and comprising: a detection module for detecting a text input instruction of a user; an acquisition module for obtaining current expression information of the user; and an output module for outputting corresponding candidate words according to the text input instruction and the current expression information.
Preferably, the device further comprises: a setting module for setting candidate words corresponding to expression information.
Preferably, the acquisition module is configured to obtain current facial feature information of the user and to recognize the current facial feature information to obtain the current expression information.
Preferably, the output module is configured to determine the candidate words corresponding to the expression information and to output those candidate words together with the other candidate words according to the text input instruction.
With the present invention, the text input instruction of the user is first detected, the current expression information of the user is then obtained, and the corresponding candidate words are output according to the text input instruction and the current expression information. This solves the problem in the related art that the output candidate words do not take the user's expression into account, so that the output candidate words are more accurate and more user-friendly.
Description of drawings
The accompanying drawings described herein are provided for a further understanding of the present invention and constitute a part of this application; the illustrative embodiments of the present invention and their description serve to explain the present invention and do not constitute an improper limitation of it. In the drawings:
Fig. 1 is a flowchart of a text input method according to an embodiment of the present invention;
Fig. 2 is a structural block diagram of a text input device according to an embodiment of the present invention;
Fig. 3 is a flowchart of a text input method according to a preferred embodiment of the present invention;
Fig. 4 is a flowchart of an expression setting method according to a preferred embodiment of the present invention;
Fig. 5 is a flowchart of a method of determining candidate words according to a preferred embodiment of the present invention;
Fig. 6 is a structural block diagram of a text input device according to a preferred embodiment of the present invention.
Embodiment
The present invention is described below in detail with reference to the accompanying drawings and in conjunction with the embodiments. It should be noted that, where no conflict arises, the embodiments in this application and the features in the embodiments may be combined with each other.
Embodiment one
This embodiment of the invention provides a method of text input. Fig. 1 is a flowchart of the text input method according to the embodiment of the invention. As shown in Fig. 1, the flow comprises the following steps:
Step S102: detect the text input instruction of the user;
Step S104: obtain the current expression information of the user;
Step S106: output corresponding candidate words according to the text input instruction and the current expression information.
Through the above steps, the practice in the related art of ignoring the user's expression when outputting candidate words is changed, so that the output candidate words are more accurate and more user-friendly.
Expressions and candidate words can be associated in many ways; preferably, the candidate words corresponding to expression information are set in advance.
There are likewise many ways to obtain the user's expression; for example, the user may enter the current expression directly or select the current mood. Preferably, the current facial feature information of the user is obtained and then recognized to obtain the current expression information. The current facial feature information can also be obtained in several ways, for example by photographing the user with an external camera and importing the picture into the terminal; preferably, it is captured by the terminal's own camera.
There are also many ways of outputting the corresponding candidate words according to the text input instruction and the current expression information. For example, candidate words matching the text input instruction may be searched for among the candidate words corresponding to the expression information, the matches placed at the front, and the other candidate words output according to the input instruction displayed after them. Preferably, the candidate words corresponding to the expression information are determined, and those candidate words are output together with the other candidate words according to the text input instruction. In this way, the output candidate words take the user's expression into account and are therefore more user-friendly.
Preferably, the expression information may comprise happy information, sad information, or excited information.
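The ordering logic described above can be illustrated with a minimal sketch in Python. The word lists, the helper names and the prefix-matching rule are illustrative assumptions and are not part of the patent; the point is only that expression-associated candidates are placed before the remaining matches.

ALL_CANDIDATES = ["great", "grief", "grin", "ground", "group"]

EXPRESSION_CANDIDATES = {
    "happy":   ["great", "grin"],
    "sad":     ["grief"],
    "excited": ["great"],
}

def output_candidates(text_input: str, expression: str) -> list[str]:
    # Steps S102/S106: candidate words matching the detected text input instruction.
    matching = [w for w in ALL_CANDIDATES if w.startswith(text_input)]
    # Step S104: the current expression information selects a preferred subset.
    preferred = [w for w in EXPRESSION_CANDIDATES.get(expression, []) if w in matching]
    # Expression-associated candidates are listed first, the remaining matches after.
    return preferred + [w for w in matching if w not in preferred]

print(output_candidates("gr", "happy"))
# ['great', 'grin', 'grief', 'ground', 'group']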
Embodiment two
This embodiment of the invention provides a text input device applied to a terminal. The device is used to implement the above embodiment and its preferred implementations; what has already been explained is not repeated. As used below, the term "module" may be realized by a combination of software and/or hardware that performs a predetermined function. Although the device described in the following embodiment is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and conceivable.
Fig. 2 is a structural block diagram of the text input device provided according to the embodiment of the invention. As shown in Fig. 2, the device comprises a detection module 202, an acquisition module 204, and an output module 206.
The detection module 202 is used to detect the text input instruction of the user;
the acquisition module 204 is used to obtain the current expression information of the user;
the output module 206 is used to output corresponding candidate words according to the text input instruction and the current expression information.
Preferably, the device further comprises a setting module 208 for setting candidate words corresponding to expression information.
Preferably, the acquisition module 204 is used to obtain the current facial feature information of the user and to recognize the current facial feature information to obtain the current expression information.
Preferably, the output module 206 is used to determine the candidate words corresponding to the expression information and to output those candidate words together with the other candidate words according to the text input instruction.
In the preferred implementation of this embodiment of the invention, the components of the device can cooperate according to the method described in Embodiment one to perform the corresponding functions, with the same beneficial effects; the details are not repeated here.
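For orientation, the division of responsibilities among the modules of Fig. 2 can be sketched as follows. The class and method names are assumptions chosen to mirror the module numbers; the patent does not prescribe any implementation language or interface.

class DetectionModule:
    """Detects the user's text input instruction (module 202); here a stub that
    extracts the characters already entered from an input event."""
    def detect(self, event: dict) -> str:
        return event.get("text", "")

class SettingModule:
    """Stores candidate words associated with each expression (optional module 208)."""
    def __init__(self):
        self.expression_to_words: dict[str, list[str]] = {}

    def set_candidates(self, expression: str, words: list[str]) -> None:
        self.expression_to_words[expression] = list(words)

class AcquisitionModule:
    """Obtains the user's current expression information (module 204)."""
    def get_expression(self, facial_features: dict) -> str | None:
        # Recognition of the facial features is abstracted away in this sketch.
        return facial_features.get("expression")

class OutputModule:
    """Outputs candidate words according to the input and the expression (module 206)."""
    def __init__(self, settings: SettingModule):
        self.settings = settings

    def output(self, text_input: str, expression: str | None, dictionary: list[str]) -> list[str]:
        matching = [w for w in dictionary if w.startswith(text_input)]
        preferred = [w for w in self.settings.expression_to_words.get(expression, [])
                     if w in matching]
        return preferred + [w for w in matching if w not in preferred]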
Embodiment three
Fig. 3 is a flowchart of a text input method according to a preferred embodiment of the invention. As shown in Fig. 3, the flow comprises the following steps:
Step S302: set up expressions.
The user is guided to define expressions, to set the association between expressions, facial features and candidate words, and to set the typesetting and display mode of text under particular emotions. Specifically, expression information is associated with candidate words in the lexicon, and the user is allowed to define the mapping relations between expression information and candidate words.
Step S304: output candidate words.
When the user enters a text input instruction through keys or the touch screen, the user's expression is identified automatically, the candidate words associated with that expression are queried preferentially in the lexicon, and, once found, these candidate words are displayed preferentially for the user to select from. Specifically, the user's expression is identified from the captured facial features of the user, the candidate words associated with the current expression information are determined according to the user's prior settings, and, when the candidate words are displayed, the candidate words related to the expression are shown first, improving the friendliness and convenience of text input on a mobile terminal under different expressions. In addition, the user may also be allowed to define specific formats to express the current mood, for example by representing the mood through the font color.
Embodiment four
This preferred embodiment of the present invention provides a text input method in which, when the user enters characters, the facial feature information of the user is obtained and the corresponding expression identifier is obtained from the facial feature information. The expression identifier identifies the user's expression information, and the expression information is the characteristic information of the user's face under a certain mood, including the eyes, the mouth, the facial contour, and so on. The candidate words associated with this expression information are then obtained from the lexicon and displayed preferentially for the user to select from, and the user is allowed to define the association between words and expressions. With the text input mode for a mobile terminal provided by this embodiment of the invention, when the user's expression changes, the candidate words change accordingly, and under a particular expression the candidate words associated with that expression are displayed at the front, so that the user can easily find the candidate words related to the current expression.
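As an illustration of the kind of record implied here, the expression identifier, the per-region facial feature information and the associated candidate words could be grouped as below; every field name is an assumption chosen for readability, not terminology from the patent.

from dataclasses import dataclass, field

@dataclass
class ExpressionTemplate:
    expression_id: int                    # unique ID assigned when the expression is defined
    name: str                             # e.g. "happy", "sad", "excited"
    eye_features: list[float] = field(default_factory=list)       # eye position/shape
    mouth_features: list[float] = field(default_factory=list)     # mouth position/shape
    contour_features: list[float] = field(default_factory=list)   # facial contour
    associated_words: list[str] = field(default_factory=list)     # words preposed under this expression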
Fig. 4 is a flowchart of the expression setting method according to the preferred embodiment of the invention. As shown in Fig. 4, the flow comprises the following steps:
Step S402: obtain the current facial feature information.
The current facial feature information of the user is captured by the camera of the terminal and recognized to obtain the user's expression information.
Step S404: judge whether the current expression has been set.
The expression information database is searched according to the obtained expression information, and the search result shows whether the current expression has been set in advance. If it has, step S408 is executed; otherwise, step S406 is executed.
Step S406: guide the user to set the current expression.
The user is guided to add the current expression to the expression information library, and step S408 is then executed. Of course, the user may also choose not to set an expression.
Step S408: judge whether candidate words have been set for the expression information.
Whether the user has set candidate words for this expression information in advance is judged. If not, step S410 is executed; otherwise, step S412 is executed.
Step S410: guide the user to define the association between the expression and candidate words.
Step S412: save the data.
The association between the expression set by the user and the candidate words is saved, and the setting procedure is exited.
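A hedged sketch of this setup flow (steps S402-S412) is given below. The string "feature signature" key, the guide_user callback and the JSON persistence are illustrative assumptions, not details given in the patent.

import json

def setup_expression(feature_signature, expression_db, candidate_db, guide_user,
                     settings_path="expression_settings.json"):
    # Step S402: the current facial feature information has already been captured
    # and reduced to a string signature (feature_signature).
    expression = expression_db.get(feature_signature)       # Step S404: already set?
    if expression is None:
        # Step S406: guide the user to name the current expression (optional).
        expression = guide_user("Name this expression (or cancel):")
        if not expression:
            return                                           # user chose not to set it
        expression_db[feature_signature] = expression
    # Step S408: have candidate words been associated with this expression?
    if expression not in candidate_db:
        # Step S410: guide the user to associate candidate words with it.
        candidate_db[expression] = guide_user(f"Candidate words for '{expression}':")
    # Step S412: save the expression / candidate-word association and exit.
    with open(settings_path, "w", encoding="utf-8") as f:
        json.dump({"expressions": expression_db, "candidates": candidate_db},
                  f, ensure_ascii=False)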
Fig. 5 is a flowchart of the method of determining candidate words according to the preferred embodiment of the invention. As shown in Fig. 5, the flow comprises the following steps:
Step S502: obtain the user's current facial features.
The user's current facial features are obtained and recognized to obtain the user's current expression information.
Step S504: judge whether corresponding expression information exists.
The expression information table is searched according to the obtained expression information. If corresponding expression information is retrieved, step S508 is executed; otherwise, step S506 is executed.
Step S506: display the candidate words in the default display order.
The flow then ends.
Step S508: query the candidate words associated with the expression information.
After the expression information has been retrieved, the candidate words corresponding to it are queried. If candidate words corresponding to this expression information are found, step S510 is executed; otherwise, step S506 is executed.
Step S510: place the candidate words corresponding to the current expression information at the front.
When the candidate words are output, the candidate words corresponding to the current expression information are arranged at the front, and the other candidate words output according to the entered text instruction are arranged after them.
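The lookup flow of Fig. 5 can be sketched as below, with the fall-back to the default display order (step S506) made explicit; the data structures are the same assumed ones as in the earlier sketches and are illustrative only.

def candidates_for_input(text_input, expression, candidate_db, dictionary):
    # Default display order: candidates matching the entered text instruction.
    matching = [w for w in dictionary if w.startswith(text_input)]
    # Steps S502-S504: expression recognized and looked up in the expression table.
    if expression is None or expression not in candidate_db:
        return matching                    # Step S506: default display order
    # Step S508: query the candidate words associated with the expression.
    associated = [w for w in candidate_db[expression] if w in matching]
    if not associated:
        return matching                    # Step S506 again: no associated match found
    # Step S510: associated candidates are preposed, the other candidates follow.
    return associated + [w for w in matching if w not in associated]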
Embodiment five
Fig. 6 is a structural block diagram of the text input device according to the preferred embodiment of the invention. As shown in Fig. 6, the device comprises a setting module 202, an expression monitoring module 602, a candidate word query module 604, and a text display module 606.
The setting module 202 is used to guide the user to define expressions, the mapping relations between expressions and facial features, and the candidate words associated with the expressions. Specifically, the setting module 202 mainly provides the following three functions:
First, expression definition. The user is guided to define expression information, such as happy, sad or excited, and each expression is assigned a unique ID.
Second, guiding the user to enter the facial feature information corresponding to each expression. The definition and recognition of a user's mood is complex and varies from person to person, and the facial appearance of different people may differ greatly from their actual mood. By combining computer image processing with biometric principles, the user is allowed to define the correspondence between expressions and facial feature information, which improves the recognition rate of individual expressions. The user is also allowed to set how the facial features corresponding to these expressions are entered; for example, when defining an expression, the user may capture the feature information of the current face through the camera and use it as the basis for recognizing that expression.
Third, guiding the user to set the mapping relations between expressions and candidate words, for example the display order of candidate words under different expressions such as happy, excited or sad. Since the lexicon contains a large number of candidate words, the user may be allowed to adjust the mapping relations between these candidate words and the expressions while typing, and the adjustments are saved. When the user enters the same information again under the same expression, the related candidate words are displayed preferentially according to the saved mapping relations, as in the sketch below.
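A minimal sketch of such an adjustable, persisted mapping, assuming a plain JSON file as the storage layer and a most-recently-chosen-first ordering; both choices are illustrative, not prescribed by the patent.

import json
from collections import defaultdict

class ExpressionMapping:
    def __init__(self, path="expression_mapping.json"):
        self.path = path
        self.mapping = defaultdict(list)      # expression ID -> ordered word list

    def record_choice(self, expression, word):
        # The word picked while this expression was active moves to the front.
        words = self.mapping[expression]
        if word in words:
            words.remove(word)
        words.insert(0, word)
        self.save()

    def preferred(self, expression):
        # Words to display first the next time the same expression is detected.
        return list(self.mapping.get(expression, []))

    def save(self):
        with open(self.path, "w", encoding="utf-8") as f:
            json.dump(self.mapping, f, ensure_ascii=False)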
The expression monitoring module 602 is used to obtain the user's facial feature information and to query the mapping table between facial feature information and expressions to obtain the corresponding expression information. This module mainly provides the following three functions:
First, obtaining the user's facial feature information: the position and shape information of the user's eyes, the position and shape information of the user's mouth, and the position and shape information of the user's facial contour.
Second, expression query. The expression and facial feature mapping table saved by the user is searched, and an existing face recognition technique, such as a regional feature analysis algorithm, is used for matching: the stored face templates and the acquired facial feature information of the user are analyzed, a similarity value is produced from the analysis, and this value determines whether the features correspond to one of the user-defined expressions.
Third, change notification. The user's facial expression is monitored in real time, the related modules are notified of changes, and querying the user's current expression information is supported.
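The patent names regional feature analysis as one usable face recognition technique but does not fix an algorithm. As a stand-in, the sketch below detects the face with OpenCV's Haar cascade and compares a crude grayscale-histogram feature vector against the templates recorded when the user defined each expression; the feature choice and the distance threshold are assumptions for illustration only.

import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_features(frame):
    # Detect the largest/first face and summarize it as a normalized histogram.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    roi = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
    hist = cv2.calcHist([roi], [0], None, [32], [0, 256]).flatten()
    return hist / (hist.sum() or 1.0)

def match_expression(features, templates, threshold=0.15):
    """templates: expression name -> feature vector recorded when the user defined it."""
    best, best_dist = None, float("inf")
    for name, template in templates.items():
        dist = float(np.linalg.norm(features - template))
        if dist < best_dist:
            best, best_dist = name, dist
    # The similarity value decides whether this is one of the user-defined expressions.
    return best if best_dist < threshold else None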
The candidate word query module 604 calls the expression monitoring module 602 to obtain the user's expression information while the user is typing, queries candidate words according to the obtained expression information, and searches the lexicon according to the text input instruction entered by the user and the user's current expression. Once candidate words have been found, the candidate words associated with the user's current expression are displayed preferentially.
The text display module 606 is used to display the candidate words. If the user has set a typesetting and display mode for text under a particular emotion, the text is displayed in that mode. For example, if the user has set the font to be shown in blue when melancholy and in red when angry, then as the expression changes while the user types, the color of the text currently being entered is set automatically.
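A small sketch of that typesetting rule, assuming an HTML-like renderer and two example color values; a real input method would hand the color to the terminal's text renderer instead.

MOOD_COLORS = {
    "melancholy": "#0000FF",   # blue, as in the example above
    "angry":      "#FF0000",   # red
}

def styled_text(text, mood, default_color="#000000"):
    # The color of the text currently being entered follows the recognized mood.
    color = MOOD_COLORS.get(mood, default_color)
    return f'<span style="color:{color}">{text}</span>'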
As can be seen from the above description, the present invention achieves the following technical effects: when candidate words are displayed, the candidate words related to the expression are shown first, improving the friendliness and convenience of text input on a mobile terminal under different expressions. In addition, the user may also be allowed to define specific formats to express the current mood, for example by representing the mood through the font color.
Obviously, those skilled in the art should understand that the above modules or steps of the present invention can be implemented with a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described can be performed in an order different from that given here, or the modules or steps can each be made into individual integrated circuit modules, or several of them can be combined into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above are only preferred embodiments of the present invention and are not intended to limit it; for a person skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (10)

1. A method of text input, characterized by comprising:
detecting a text input instruction of a user;
obtaining current expression information of the user; and
outputting corresponding candidate words according to the text input instruction and the current expression information.
2. The method according to claim 1, characterized in that, before the text input instruction of the user is detected, the method further comprises:
setting candidate words corresponding to expression information.
3. The method according to claim 1 or 2, characterized in that obtaining the current expression information of the user comprises:
obtaining current facial feature information of the user; and
recognizing the current facial feature information to obtain the current expression information.
4. The method according to claim 3, characterized in that obtaining the current facial feature information of the user comprises:
obtaining the current facial feature information of the user through a camera of the terminal.
5. The method according to any one of claims 1 to 4, characterized in that outputting the corresponding candidate words according to the text input instruction and the current expression information comprises:
determining the candidate words corresponding to the expression information; and
outputting those candidate words and the other candidate words according to the text input instruction.
6. The method according to claim 5, characterized in that the expression information comprises happy information, sad information, or excited information.
7. A text input device, applied to a terminal, characterized by comprising:
a detection module for detecting a text input instruction of a user;
an acquisition module for obtaining current expression information of the user; and
an output module for outputting corresponding candidate words according to the text input instruction and the current expression information.
8. The device according to claim 7, characterized in that the device further comprises:
a setting module for setting candidate words corresponding to expression information.
9. The device according to claim 7 or 8, characterized in that the acquisition module is configured to obtain current facial feature information of the user and to recognize the current facial feature information to obtain the current expression information.
10. The device according to claim 7 or 9, characterized in that the output module is configured to determine the candidate words corresponding to the expression information and to output those candidate words and the other candidate words according to the text input instruction.
CN201210398209.9A 2012-10-18 2012-10-18 Method and device for text input Expired - Fee Related CN102955569B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210398209.9A CN102955569B (en) 2012-10-18 2012-10-18 Method and device for text input

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210398209.9A CN102955569B (en) 2012-10-18 2012-10-18 Method and device for text input

Publications (2)

Publication Number Publication Date
CN102955569A true CN102955569A (en) 2013-03-06
CN102955569B CN102955569B (en) 2016-03-23

Family

ID=47764450

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210398209.9A Expired - Fee Related CN102955569B (en) 2012-10-18 2012-10-18 Method and device for text input

Country Status (1)

Country Link
CN (1) CN102955569B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101398834A (en) * 2007-09-29 2009-04-01 北京搜狗科技发展有限公司 Processing method and device for input information and input method system
CN102637071A (en) * 2011-02-09 2012-08-15 英华达(上海)电子有限公司 Multimedia input method applied to multimedia input device
CN102508903A (en) * 2011-11-09 2012-06-20 中兴通讯股份有限公司 Updating method for word bank of input method, character input method and terminal
CN102646022A (en) * 2012-04-10 2012-08-22 北京搜狗科技发展有限公司 Method and device for obtaining candidate

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103425247A (en) * 2013-06-04 2013-12-04 深圳市中兴移动通信有限公司 User reaction based control terminal and information processing method thereof
CN104423547A (en) * 2013-08-28 2015-03-18 联想(北京)有限公司 Inputting method and electronic equipment
CN104423547B (en) * 2013-08-28 2018-04-27 联想(北京)有限公司 Input method and electronic equipment
CN104750380A (en) * 2013-12-31 2015-07-01 中国移动通信集团公司 Information processing method and electronic equipment
CN103809759A (en) * 2014-03-05 2014-05-21 李志英 Face input method
CN104412258A (en) * 2014-05-22 2015-03-11 华为技术有限公司 Method and device utilizing text information to communicate
CN108521369A (en) * 2018-04-03 2018-09-11 平安科技(深圳)有限公司 Information transmitting methods, receiving terminal apparatus and transmission terminal device
CN108521369B (en) * 2018-04-03 2021-03-09 平安科技(深圳)有限公司 Information transmission method, receiving terminal device and sending terminal device
CN110389667A (en) * 2018-04-17 2019-10-29 北京搜狗科技发展有限公司 Input method and device
CN108958505A (en) * 2018-05-24 2018-12-07 维沃移动通信有限公司 Method and terminal for displaying candidate information
CN108958505B (en) * 2018-05-24 2023-05-05 维沃移动通信有限公司 Method and terminal for displaying candidate information
CN111078022A (en) * 2018-10-18 2020-04-28 北京搜狗科技发展有限公司 Input method and device
CN113641252A (en) * 2021-07-06 2021-11-12 维沃移动通信有限公司 Text input method and device and electronic equipment
CN113641252B (en) * 2021-07-06 2024-03-29 维沃移动通信有限公司 Text input method and device and electronic equipment

Also Published As

Publication number Publication date
CN102955569B (en) 2016-03-23

Similar Documents

Publication Publication Date Title
CN102955569B (en) Method and device for text input
US9696873B2 (en) System and method for processing sliding operations on portable terminal devices
KR101710465B1 (en) Search recommendation method and apparatus
CN109284261B (en) Application searching method and device, storage medium and electronic equipment
CN103136321A (en) Method and device of multimedia information processing and mobile terminal
CN104199606B (en) Sliding input method and apparatus
CN104735243B (en) Contact list displaying method and device
US9984486B2 (en) Method and apparatus for voice information augmentation and displaying, picture categorization and retrieving
CN108038102A (en) Recommendation method, apparatus, terminal and the storage medium of facial expression image
CN103218555A (en) Logging-in method and device for application program
CN103248739A (en) Contact list navigation display method and device and mobile communication equipment
KR20130086621A (en) System and method for recording and querying original handwriting and electronic device
CN115859220B (en) Data processing method, related device and storage medium
CN103139348A (en) Method and device for linkman information processing and mobile terminal
CN106371711A (en) Information input method and electronic equipment
CN102932555A (en) Method and system for fast recognizing client software of mobile phone
CN104572848B (en) Searching method based on browser and device
CN105955507B (en) Soft keyboard display method and terminal
CN103207890A (en) Method and device for acquiring contact person information
CN109388249A (en) Information input processing method, device, terminal and readable storage medium
CN104484334A (en) Fast search method and device
KR20220079431A (en) Method for extracting tag information from screenshot image and system thereof
CN108733436A (en) system setting method and terminal
CN106844717A (en) Webpage search display methods and device
CN103164504A (en) Smartphone refined picture searching system and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160323

Termination date: 20211018
