CN110413133B - Input method and device - Google Patents

Info

Publication number
CN110413133B
CN110413133B (application CN201810392678.7A)
Authority
CN
China
Prior art keywords
input
user
item
corpus
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810392678.7A
Other languages
Chinese (zh)
Other versions
CN110413133A (en)
Inventor
陈小帅
臧娇娇
张扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sogou Technology Development Co Ltd
Original Assignee
Beijing Sogou Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sogou Technology Development Co Ltd filed Critical Beijing Sogou Technology Development Co Ltd
Priority claimed from application CN201810392678.7A
Publication of CN110413133A
Application granted
Publication of CN110413133B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0237Character input methods using prediction or retrieval techniques

Abstract

Embodiments of the invention provide an input method and device. The method includes: acquiring a historical communication record of a first user and a second user; acquiring input items from the historical communication record and establishing a first corpus; and predicting a predicted input item according to the first user's on-screen output content and the characteristic attributes of the input items in the first corpus, then displaying the predicted input item. Because the first corpus is built from the historical communication record and the prediction is based on both the on-screen output content and that corpus, the user can select directly from the predicted input items instead of typing, which effectively improves input efficiency and the input experience. Since the predicted input items are related to the on-screen output content and the historical communication record, they match the input content the user expects to a certain extent, providing more accurate candidates.

Description

Input method and device
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to an input method and device.
Background
Currently, an input method application queries a system lexicon or a user lexicon according to the coded character string entered by the user, and retrieves and displays candidates matching that string. If neither lexicon contains the candidate word the user wants to input, entering it requires cumbersome operations. For example, suppose the user is chatting in instant messaging software and the other party asks "how many real machines are online", and the user wants to reply "3 real machines". Because "real machine" is not a common input word for this user, when the user types the coded string "shiji", the candidates displayed by the input method application are "opportunity", "actual", "ten times", and "century"; the desired candidate "real machine" is not offered. The user then often has to type "shi" and "ji" separately and commit the single characters "real" and "machine" one at a time to complete the input. The input methods of the prior art therefore suffer from inaccurate candidates and low user input efficiency.
Disclosure of Invention
The embodiment of the invention provides an input method and device, which aim to solve the technical problems of inaccurate candidate items and low user input efficiency in the prior art.
Therefore, the embodiment of the invention provides the following technical scheme:
In a first aspect, an embodiment of the present invention provides an input method, including:
acquiring historical communication records of a first user and a second user; acquiring each input item according to the historical communication record, and establishing a first corpus; and predicting to obtain a predicted input item according to the on-screen output content of the first user and the characteristic attribute of each input item in the first corpus, and displaying the predicted input item.
Preferably, the method further comprises: in response to a triggering operation of the first user on a predicted input item, outputting on screen the predicted input item corresponding to the triggering operation.
Preferably, the obtaining each input item according to the historical communication record and establishing a first corpus includes: obtaining, from the historical communication record, communication records associated with input data of the second user, and establishing the first corpus, the first corpus containing entries extracted from the communication records associated with the second user; or
Establishing a shared corpus associated with the first user and the second user according to the historical communication record; the shared corpus comprises entries extracted from the historical communication records;
the input item comprises an input word, an expression input or a picture input.
Preferably, the input item is specifically an input word, and the obtaining each input item according to the historical communication record includes: if the historical communication record contains a picture, performing text recognition on the picture to obtain text content corresponding to the recognition result, and obtaining input words from that text content; and if the historical communication record contains a voice record, performing speech recognition on the voice record to obtain text content corresponding to the recognition result, and obtaining input words from that text content.
Preferably, the predicting the predicted input item according to the on-screen output content of the first user and the characteristic attribute of each input item includes: and predicting to obtain a predicted input item according to the on-screen output content of the first user, the input frequency of each input item and the use time of each input item.
Preferably, the method further comprises: receiving a coded character string input and/or a voice input of the first user; determining, based on that input, first candidates matching it in a first corpus and second candidates matching it in a second corpus, the first corpus being obtained from the historical communication records of the first user and the second user, and the second corpus being a system corpus or a personalized corpus of the first user; and obtaining third candidates from the first candidates and the second candidates, sorting the third candidates, and displaying them in sorted order.
In a second aspect, an embodiment of the present invention provides an input device, including:
the communication record acquisition unit is used for acquiring the history communication records of the first user and the second user;
the corpus establishing unit is used for acquiring each input item according to the historical communication record and establishing a first corpus;
the prediction unit is used for predicting and obtaining a prediction input item according to the on-screen output content of the first user and the characteristic attribute of each input item in the first corpus, and displaying the prediction input item.
In a third aspect, embodiments of the present invention provide an apparatus for input comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising instructions for: acquiring historical communication records of a first user and a second user;
Acquiring each input item according to the historical communication record, and establishing a first corpus;
and predicting to obtain a predicted input item according to the on-screen output content of the first user and the characteristic attribute of each input item in the first corpus, and displaying the predicted input item.
In a fourth aspect, embodiments of the present invention provide a machine-readable medium having instructions stored thereon, which when executed by one or more processors, cause an apparatus to perform the input method as described in the first aspect.
According to the input method and device provided by the embodiments of the invention, input items are obtained from the historical communication records of the first user and the second user and built into a first corpus; the items the first user is about to input are then predicted according to the first user's on-screen output content and the characteristic attributes of the input items in the first corpus, and the predicted input items are displayed so that the first user can select the content to be input directly from them. Because the prediction draws on both the on-screen output content and the historical communication record, the displayed predicted input items match the input content the user expects to a certain extent, providing more accurate candidates while improving input efficiency and the input experience.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that other drawings may be derived from them by those skilled in the art without inventive effort.
FIG. 1 is a flow chart of an input method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a predicted input item effect according to an embodiment of the present invention;
FIG. 3 is a flowchart of another input method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an input device according to an embodiment of the present invention;
FIG. 5 is a block diagram illustrating an input device in accordance with an exemplary embodiment;
Fig. 6 is a block diagram of a server shown according to an example embodiment.
Detailed Description
Embodiments of the invention provide an input method and device that effectively predict the content a first user is about to input and display the predicted input items for the first user's direct use, improving input efficiency and the input experience. Because the predicted input items are related to the on-screen output content and the historical communication record, they match the input content the user expects to a certain extent, providing more accurate candidates.
In order to make the technical solution of the present invention better understood by those skilled in the art, the technical solution of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
An input method according to an exemplary embodiment of the present invention will be described with reference to fig. 1.
Referring to fig. 1, a flowchart of an input method is provided in an embodiment of the present invention.
The input method provided in this embodiment may include:
s101, acquiring historical communication records of the first user and the second user.
In this embodiment, the first user is the user of the local communication terminal, and the second user is the user of the peer communication terminal. When the first user chats using instant messaging software, the communication terminal can save the exchanges between the first user and the second user to form a historical communication record. When the first user chats with different users, a separate communication record can be kept for each of them, forming the corresponding historical communication records. When a historical communication record is stored, a correspondence can be established between the record and the identifier of the second user, so that the record can later be retrieved according to that identifier. The communication terminal may be a smart phone, desktop computer, tablet computer, notebook computer, or other communication device.
Historical communication records include not only records generated while chatting but also records generated by e-mail and other communication channels.
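The per-peer record keeping described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; all class and method names are ours.

```python
from collections import defaultdict


class CommRecordStore:
    """Keep a per-peer historical communication record.

    The patent only requires that records be retrievable by the
    second user's identifier; everything else here is illustrative.
    """

    def __init__(self):
        self._records = defaultdict(list)  # peer identifier -> messages

    def save(self, peer_id, message):
        # Associate each record with the identifier of the second user.
        self._records[peer_id].append(message)

    def history(self, peer_id):
        # Retrieve the historical communication record by peer identifier.
        return list(self._records[peer_id])
```

A separate history is kept per peer, so chatting with multiple users yields multiple independent records, matching the description above.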
S102, acquiring each input item according to the historical communication record, and establishing a first corpus.
It can be understood that during a chat a user may send not only text but also pictures, expressions, voice messages, and the like, so the stored historical communication record may contain text, pictures, expressions, and voice. An input item may accordingly be an input word, an expression input, or a picture input.
In practice, different processing methods are used to acquire input items from the different kinds of content in the historical communication record; specific embodiments for each kind of content are given below.
In some embodiments, the input item is specifically an input word, and acquiring each input item according to the historical communication record includes: if the historical communication record contains a picture, performing text recognition on the picture to obtain text content corresponding to the recognition result, and obtaining input words from that text content; and if the historical communication record contains a voice record, performing speech recognition on the voice record to obtain text content corresponding to the recognition result, and obtaining input words from that text content.
When the historical communication record includes a picture, the text in the picture can be recognized by Optical Character Recognition (OCR), the text content corresponding to the recognition result obtained, input words extracted from it, and the input words stored in the first corpus. For example, if the record contains a picture A bearing the phrase "蓝瘦香菇" (a popular internet meme), text recognition yields that phrase as text content, the input word "蓝瘦香菇" is extracted from it and stored in the first corpus. As another example, if the record contains a picture B bearing the word "本宫" ("bengong", an archaic self-reference), text recognition yields that text content; after word segmentation, all the segmented words may be added to the first corpus, or only the uncommon word "本宫" may be added.
In a specific implementation of this embodiment, after the text content corresponding to the recognition result is obtained, word segmentation may be performed on it and the segmented input words added to the first corpus. At the same time, the correspondence between each input word and its coded character strings — for example its pinyin string or its glyph code — can be saved. A segmented input word may be annotated with its pinyin and indexed by it, so that when the first user types that pinyin, the corresponding input word is returned via the index: for example, when the first user types "bengong", the input word "本宫" corresponding to that pinyin is found in the first corpus. Likewise, an input word may be disassembled according to its character components and indexed by the component sequence, so that the first user can also enter it using five-stroke (wubi) or handwriting input: when the first user enters that component sequence, the first corpus is searched for the input word corresponding to the index.
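The indexing scheme above — one input word reachable through several coded strings — can be sketched like this. The pinyin mapping is hand-written for illustration; a real input method would derive codes from a pronunciation or glyph dictionary, and the names are ours, not the patent's.

```python
class FirstCorpus:
    """Index input words by coded strings (e.g. pinyin or component codes).

    Hypothetical sketch: the patent specifies only that an index maps
    coded strings back to the stored input words.
    """

    def __init__(self):
        self._index = {}  # coded string -> list of input words

    def add(self, word, codes):
        # Store the word under every coded string that maps to it,
        # e.g. its pinyin string and its component (glyph) code.
        for code in codes:
            self._index.setdefault(code, []).append(word)

    def lookup(self, code):
        # Return the input words indexed by this coded string.
        return self._index.get(code, [])
```

For example, after `corpus.add("本宫", ["bengong"])`, typing the pinyin "bengong" retrieves "本宫" from the first corpus.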
When the history communication record comprises a voice record, voice recognition processing is carried out on voice content, text content corresponding to the voice is obtained, input words are obtained from the text content, and meanwhile the input words are stored in a first corpus.
An input word may be a complete word obtained from the user's dialogue context; for example, if the historical communication record stores the message "that is, a real machine", the input word obtained is "real machine".
When the history communication record includes pure text information, the input word can be directly obtained from the pure text information, and the input word is stored in the first corpus.
When the historical communication record includes an expression, the text identifier corresponding to the expression can be obtained, an input word derived from that identifier, and the input word stored in the first corpus. For example, for smiling expressions, tearful expressions, and so on, input words are obtained from the text identifiers corresponding to those expressions.
It should be noted that, depending on the data source of the first corpus, there are different ways to build it. For example, the first corpus may be built from the input records of the second user only, or a shared corpus may be built from the input records of both the first user and the second user. The second user may be a single user or multiple users — for example, the first user communicates with multiple users in a group chat.
The establishment of the first corpus will be described below, respectively.
In some embodiments, a communication record associated with the input data of the second user is obtained from the historical communication record, and a first corpus is established, the first corpus containing entries extracted from the communication record associated with the second user.
The communication record associated with the input data of the second user refers to information input by the second user in the chat process of the first user and the second user, that is, the first corpus only includes the input items corresponding to the input information of the second user, and does not include the input items corresponding to the input information of the first user. Therefore, the first corpus can accurately provide the input items for the first user to reply to the second user, and the input efficiency is improved.
Of course, in practical application, the first corpus may be built into a shared corpus, i.e. the corpus is built according to the historical communication records of the two parties, so as to be used by the two parties together. In specific implementation, the first corpus can be stored in the server, and the first user and the second user access the first corpus in the server through the corresponding terminal equipment, so that the storage space is saved, and a good chat environment is created.
Based on this, the present embodiment provides an implementation manner, specifically: establishing a shared corpus associated with the first user and the second user according to the historical communication record; the shared corpus contains entries extracted from the historical communications records.
The history communication record comprises input information of a first user and input information of a second user in the chat process, namely the first corpus comprises input items corresponding to the input information of the first user and input items corresponding to the input information of the second user. The first corpus is a shared corpus, and the first user and the second user can call the input items in the shared corpus through respective terminal equipment.
It should be noted that the content of the first corpus may be updated periodically or in real time. For example, after input content is received from the peer, the input items of the first corpus and their weights can be updated. One way to compute the weight of each input item is: Q = (X1 × P) / (X2 × T), where X1 and X2 are coefficients, P is the probability of the input word occurring in the current communication record — the ratio of the number of occurrences of the word to the total number of input words in the record — and T is the interval from the word's most recent use to the present. For example, if "bengong" occurs 10 times in a communication record containing 100 input words in total, its occurrence probability is 0.1. Of course, other calculation methods may also be used to obtain the weights of the entries in the first corpus.
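The weight formula above can be written directly as a function. The coefficients X1 and X2 and the time unit are not fixed by the text, so the defaults below are illustrative only.

```python
def entry_weight(occurrences, total_words, interval, x1=1.0, x2=1.0):
    """Weight Q = (X1 * occurrence probability) / (X2 * time interval).

    occurrences / total_words is the probability of the input word in
    the current communication record; interval is the time since the
    word's most recent use. x1, x2 and the time unit are assumptions.
    """
    probability = occurrences / total_words
    return (x1 * probability) / (x2 * interval)
```

With the text's example — "bengong" occurring 10 times among 100 input words — the probability term is 0.1, and the weight decays as the interval since last use grows.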
S103, predicting to obtain a predicted input item according to the on-screen output content of the first user and the characteristic attribute of each input item in the first corpus, and displaying the predicted input item.
The predicted input item is predicted from the on-screen output content of the first user and the characteristic attributes of the input items in the first corpus, whereas an ordinary candidate is obtained by matching the user's coded characters or speech input against a system lexicon or user lexicon; the predicted input item of this embodiment is therefore clearly different from an ordinary candidate.
In some embodiments, the predicting the predicted input item according to the on-screen output content of the first user and the characteristic attribute of each input item includes: and predicting to obtain a predicted input item according to the on-screen output content of the first user, the input frequency of each input item and the use time of each input item.
The on-screen output content is the content the first user has already committed to the screen; the input frequency of an input item is the number of times the item has been input in the communication record; and the use time of an input item is the duration from the moment the item entered the first corpus to the present.
Of course, the predicted input item may be predicted according to the content of the on-screen output of the first user and the weight of each input item in the first corpus, and the predicted input item may be displayed.
In this embodiment, prediction is performed based on the content output by the first user terminal device on the screen, the call times and the entering duration of each input item in the first corpus, so as to obtain a predicted input item, and the predicted input item is displayed for direct selection by the first user.
For ease of understanding, refer to fig. 2, a schematic diagram of the prediction effect provided by this embodiment. The left part shows the prior art: input items obtained by matching the coded string "shiji" typed by the first user against a system or user lexicon. The right part shows the first user's terminal using this embodiment: from the on-screen output content "right, 3 of them" and the characteristic attributes of the input items in the first corpus, predicted input items are obtained and displayed, with "real machine" in first position — indicating that this item has a high input frequency and entered the first corpus recently, so it is fresh and likely to be used. With the method of this embodiment, the user does not need to type the coded string "shiji" at all; "real machine" is offered directly as a predicted input item, effectively improving input efficiency and providing the user with accurate input items.
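A hedged sketch of such a ranking — favoring entries with a high input frequency and a short time in the corpus, as the figure discussion describes — might look like the following. The score `freq / (1 + age)` is a stand-in of our own; the patent does not fix a particular scoring function for the prediction step.

```python
def predict_items(entries, top_k=3):
    """Rank candidate entries by frequency and freshness.

    `entries` is a list of (word, input_frequency, hours_in_corpus)
    tuples. Both the tuple layout and the scoring function are
    illustrative assumptions, not the patent's specification.
    """
    scored = sorted(entries, key=lambda e: e[1] / (1.0 + e[2]), reverse=True)
    return [word for word, _, _ in scored[:top_k]]
```

A frequent, recently added entry like "real machine" then outranks older, less-used entries such as "century", mirroring the ordering shown in fig. 2.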
In some implementations, in response to a triggering operation of the first user on a predicted input item, the predicted input item corresponding to the triggering operation is output on screen. The first user can select a desired input item from the displayed predicted input items, and the selected item is output on screen according to the user's selection, completing the first user's input.
The method provided by the embodiment of the invention effectively predicts the content the first user is about to input and displays the predicted input items for direct use, improving input efficiency and the input experience. Because the predicted input items are related to the on-screen output content and the historical communication record, they match the input content the user expects to a certain extent, providing more accurate input items.
The above embodiment describes that the first corpus is established based on the historical communication records of the first user and the second user, the content to be input by the first user is predicted according to the characteristic attribute of each input item in the first corpus and the on-screen output content of the first user, and the predicted input item is displayed for the first user to directly use, so that the input operation of the first user is avoided.
In practical applications, the coded strings and/or speech input by the first user can also be matched against corresponding candidates in both a first corpus and a second corpus; after the matched candidates are processed, the desired candidates are offered to the first user, further improving the input experience. This is described below in conjunction with fig. 3.
Referring to fig. 3, another flowchart of an input method according to an embodiment of the present invention is shown.
In this embodiment, the input method may include:
s301: an encoded string input and/or a speech input of a first user is received.
S302: based on the encoding string input and/or the speech input, a first candidate matching the encoding string input and/or the speech input is determined in the first corpus.
The first corpus is obtained based on the historical communication records of the first user and the second user, and the establishment of the first corpus can be realized by referring to the method described in fig. 1, which is not described herein.
S303: a second candidate matching the encoding string and/or the speech input is determined in a second corpus.
The second corpus is a system corpus or a personalized corpus of the first user.
S304: and obtaining a third candidate item by using the first candidate item and the second candidate item, sequencing the third candidate item, and displaying the sequenced third candidate item.
In this embodiment, the first candidate may include a plurality of candidate entries, the second candidate may include a plurality of candidate entries, and considering that repeated entries may occur in the first candidate and the second candidate, any one of the repeated entries may be deleted, and then the two candidates are combined to obtain the third candidate. Of course, other methods may be used to obtain the third candidate, for example, the first three input items in the first candidate and the first three input items in the second candidate are extracted to form the third candidate, and the method for obtaining the third candidate is not limited in this embodiment.
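The merge-and-deduplicate step described above can be sketched as follows — one possible realization (with names of our choosing), since the embodiment explicitly leaves the method of obtaining the third candidate open.

```python
def merge_candidates(first, second):
    """Merge the first and second candidate lists into a third,
    dropping duplicate entries while keeping the first occurrence
    of each and preserving order."""
    seen = set()
    third = []
    for item in first + second:
        if item not in seen:
            seen.add(item)
            third.append(item)
    return third
```

With the example used later in the text — first candidates A, B, C and second candidates C, E, F — the third candidate becomes A, B, C, E, F, with the duplicate C kept only once.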
It will be appreciated that the candidate entries in the first candidate and the second candidate are each displayed in a ranked order; to ensure that the entries in the third candidate are likewise displayed in an order that offers the user accurate entries, the third candidate is sorted before being displayed.
In some embodiments, sorting the third candidate includes: ranking according to the weight scores of the first candidate and the second candidate.
In this embodiment, the weight score of each input item in the third candidate is computed from its weight scores in the first candidate and the second candidate, and the input items in the third candidate are sorted by the computed scores. The weight of an input item in the third candidate may be Q = r1 × Q1 + r2 × Q2, where r1 and r2 are preset coefficients; Q1 is the item's weight score in the first candidate, replaced by a preset default value if the item is not in the first candidate; and Q2 is the item's weight score in the second candidate, replaced by a preset default value if the item is not in the second candidate.
For example, suppose the first candidate item comprises three input items A, B and C with weight scores q1, q2 and q3 respectively, and the second candidate item comprises three input items C, E and F with weight scores q4, q5 and q6 respectively. The third candidate item then comprises A, B, C, E and F. Qa = r1×q1 + r2×b2, where b2 is the preset default value because A is not in the second candidate item; similarly, Qb = r1×q2 + r2×b2. Qc = r1×q3 + r2×q4, because the input item C is present in both the first candidate item and the second candidate item, so the weight score q4 of C in the second candidate item is used in place of the default value. Qe = r1×b1 + r2×q5, where b1 is the preset default value because E is not in the first candidate item; similarly, Qf = r1×b1 + r2×q6. The input items in the third candidate item are then sorted according to the recalculated weight scores, and the sorted third candidate item is displayed to the first user.
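The combined scoring and sorting in the example above can be sketched as follows. The function name and the concrete score values are illustrative, not from the patent; b1 and b2 are the preset defaults used when an entry is absent from a list.

```python
def rank_third_candidate(first_scores, second_scores, r1, r2, b1, b2):
    """Score each entry of the merged third candidate list as
    Q = r1*Q1 + r2*Q2, substituting the defaults b1/b2 when an entry is
    missing from the first or second list, then sort descending by Q."""
    entries = list(first_scores) + [e for e in second_scores
                                    if e not in first_scores]
    q = {e: r1 * first_scores.get(e, b1) + r2 * second_scores.get(e, b2)
         for e in entries}
    return sorted(entries, key=lambda e: q[e], reverse=True)
```

With r1 = r2 = 0.5 and defaults of 0, an entry such as C that appears in both lists accumulates contributions from both scores and is ranked accordingly.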
Here, the weight score of the first candidate item is proportional to the input frequency of the first candidate item and inversely proportional to the difference between the use time of the first candidate item and the current time. The input frequency of the first candidate item refers to the number of times the input item in the first candidate item was entered into the first corpus. The difference between the use time and the current time is the interval between the time the input item entered the first corpus and the current time. For ease of understanding: if the input frequency of an input item in the first candidate item is f, and the time the input item has been in the first corpus is t, the weight score of the input item may be q1 = (a1×f)/(a2×t), where a1 and a2 are preset coefficients.
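The frequency/recency weight above can be written directly. This is a minimal sketch; the parenthesization (a1×f)/(a2×t) is an assumption consistent with the stated proportionality (the weight grows with f and decays with t), and the function name is hypothetical.

```python
def first_corpus_weight(f, t, a1=1.0, a2=1.0):
    """Weight score q1 = (a1 * f) / (a2 * t): proportional to the input
    frequency f, inversely proportional to the time t elapsed since the
    entry entered the first corpus. a1 and a2 are preset coefficients."""
    return (a1 * f) / (a2 * t)
```

An entry typed 10 times 2 hours ago thus outscores one typed 10 times 4 hours ago, matching the stated inverse dependence on elapsed time.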
In addition, the weight may be calculated from the probability of occurrence of the input word in the communication record and the time at which the input word last occurred. For example: weight score q1 = (probability of occurrence of the input word in the current communication record × X1) / (interval between the last use time of the input word and the present time × X2), where X1 and X2 are preset coefficients.
The weight score of the second candidate item may be calculated by referring to the calculation of the weight score of the first candidate item, or by other means. For example, if the second candidate item comes from the system word stock, the word frequency of each candidate entry may be counted from the user input data collected by the input method application, and the weight score obtained from that word frequency. For another example, if the second candidate item comes from the user word stock, the word frequency of each candidate entry may be counted from the current user's history of input records, and the weight score obtained from that word frequency. Of course, the weight score may also be calculated in other manners, which are not limited herein.
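A word-frequency-based score, as described for the second candidate item, can be sketched as below. This is an illustration under the assumption that the score is the entry's relative frequency in the chosen input history (system-wide logs for the system word stock, or the current user's records for the user word stock); the function name is not from the patent.

```python
from collections import Counter

def word_frequency_scores(input_history):
    """Derive a weight score for each entry from its relative word
    frequency in an input history (a list of committed entries)."""
    counts = Counter(input_history)
    total = sum(counts.values())
    return {entry: n / total for entry, n in counts.items()}
```

For instance, an entry committed 3 times out of 5 recorded inputs receives a score of 0.6, so more frequently used entries rank higher.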
According to the input method provided by this embodiment, candidate items are matched from the first corpus and the second corpus respectively, based on the coded character string and/or voice input by the first user; a third candidate item is obtained from the first candidate item and the second candidate item, the third candidate item is sorted, and the sorted third candidate item is displayed to the first user, thereby presenting accurate candidate items to the first user.
The following describes an apparatus corresponding to the method provided by the embodiments of the present invention; the arrangement and implementation of each module of the apparatus below may be understood by referring to the methods shown in fig. 1 and fig. 3.
Referring to fig. 4, a schematic diagram of an input device according to an embodiment of the invention is shown.
An input device 400, the device comprising: a communication record acquisition unit 401, a corpus establishing unit 402, and a prediction unit 403;
the communication record acquisition unit 401 is configured to obtain historical communication records of the first user and the second user.
The corpus establishing unit 402 is configured to obtain each input item according to the historical communication record, and establish a first corpus.
The prediction unit 403 is configured to predict a predicted input item according to the on-screen output content of the first user and the characteristic attribute of each input item in the first corpus, and to display the predicted input item.
In some embodiments, the apparatus further comprises:
an output unit, configured to output, in response to a triggering operation by the first user on the predicted input item, the predicted input item corresponding to the triggering operation on the screen.
In some embodiments, the corpus establishing unit is specifically configured to:
obtain, according to the historical communication record, communication records associated with input data of the second user, and establish the first corpus, where the first corpus contains entries extracted from the communication records associated with the second user; or establish, according to the historical communication record, a shared corpus associated with the first user and the second user, where the shared corpus comprises entries extracted from the historical communication record; the input item comprises an input word, an expression input or a picture input.
In some embodiments, the corpus establishing unit is specifically configured to:
if the historical communication record contains a picture, perform word recognition processing on the picture to obtain the text content corresponding to the word recognition result, and obtain each input word from the text content; and if the historical communication record is a voice record, perform voice recognition processing on the voice record to obtain the text content corresponding to the voice recognition result, and obtain each input word from the text content.
In some embodiments, the prediction unit is specifically configured to:
predict the predicted input item according to the on-screen output content of the first user, the input frequency of each input item, and the use time of each input item.
In some embodiments, the apparatus further comprises:
a receiving subunit, configured to receive the coded character string input and/or the voice input of the first user;
A first determining subunit, configured to determine, in a first corpus, a first candidate item that matches the encoding string input and/or the speech input based on the encoding string input and/or the speech input.
a second determining subunit, configured to determine, in a second corpus, a second candidate item that matches the coded character string input and/or the voice input based on the coded character string input and/or the voice input; the first corpus is obtained based on historical communication records of the first user and the second user; the second corpus is a system corpus or a personalized corpus of the first user.
a third determining subunit, configured to obtain a third candidate item by using the first candidate item and the second candidate item, sort the third candidate item, and display the sorted third candidate item.
In some embodiments, the third determining subunit is specifically configured to:
sorting by using the weight score of the first candidate item and the weight score of the second candidate item; wherein the weight score of the first candidate is proportional to the input frequency of the first candidate and inversely proportional to the difference between the time of use of the first candidate and the current time.
It should be noted that, the arrangement of each module of the apparatus in this embodiment may be correspondingly implemented by referring to the methods shown in fig. 1 and fig. 3, which are not described herein again.
Referring to fig. 5, a block diagram of an input device according to an embodiment of the present invention is provided. For example, the apparatus 500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, or the like.
Referring to fig. 5, an apparatus 500 may include one or more of the following components: a processing component 502, a memory 504, a power supply component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls overall operation of the apparatus 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 502 may include one or more processors 520 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 502 can include one or more modules that facilitate interactions between the processing component 502 and other components. For example, the processing component 502 may include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
Memory 504 is configured to store various types of data to support operations at device 500. Examples of such data include instructions for any application or method operating on the apparatus 500, contact data, phonebook data, messages, pictures, videos, and the like. The memory 504 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 506 provides power to the various components of the device 500. The power components 506 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 500.
The multimedia component 508 includes a screen that provides an output interface between the device 500 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 508 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 500 is in an operational mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a Microphone (MIC) configured to receive external audio signals when the device 500 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 504 or transmitted via the communication component 516. In some embodiments, the audio component 510 further comprises a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 514 includes one or more sensors for providing status assessments of various aspects of the apparatus 500. For example, the sensor assembly 514 may detect the on/off state of the apparatus 500 and the relative positioning of components, such as the display and keypad of the apparatus 500; the sensor assembly 514 may also detect a change in position of the apparatus 500 or of any component of the apparatus 500, the presence or absence of user contact with the apparatus 500, the orientation or acceleration/deceleration of the apparatus 500, and a change in temperature of the apparatus 500. The sensor assembly 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 514 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate communication between the apparatus 500 and other devices in a wired or wireless manner. The apparatus 500 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 516 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
Specifically, an embodiment of the present invention provides an input apparatus 500 comprising a memory 504 and one or more programs, wherein the one or more programs are stored in the memory 504 and configured to be executed by the one or more processors 520, the one or more programs comprising instructions for: acquiring historical communication records of a first user and a second user; acquiring each input item according to the historical communication records, and establishing a first corpus; and predicting a predicted input item according to the on-screen output content of the first user and the characteristic attribute of each input item in the first corpus, and displaying the predicted input item.
Further, the one or more programs executed by the processor 520 further comprise instructions for: responding to a triggering operation of the first user on the predicted input item, and outputting the predicted input item corresponding to the triggering operation on the screen.
Further, the one or more programs executed by the processor 520 further comprise instructions in which the obtaining each input item according to the historical communication record and establishing the first corpus comprises: obtaining, according to the historical communication record, communication records associated with input data of the second user, and establishing the first corpus, where the first corpus contains entries extracted from the communication records associated with the second user; or establishing, according to the historical communication record, a shared corpus associated with the first user and the second user, where the shared corpus comprises entries extracted from the historical communication record; the input item comprises an input word, an expression input or a picture input.
Further, the one or more programs executed by the processor 520 further comprise instructions in which the input item is specifically an input word, and the obtaining each input item according to the historical communication record comprises: if the historical communication record contains a picture, performing word recognition processing on the picture to obtain text content corresponding to the word recognition result, and obtaining each input word from the text content; and if the historical communication record is a voice record, performing voice recognition processing on the voice record to obtain text content corresponding to the voice recognition result, and obtaining each input word from the text content.
Further, the one or more programs executed by the processor 520 further comprise instructions in which the predicting the predicted input item according to the on-screen output content of the first user and the characteristic attribute of each input item comprises: predicting the predicted input item according to the on-screen output content of the first user, the input frequency of each input item, and the use time of each input item.
Further, the one or more programs executed by the processor 520 further comprise instructions for: receiving a coded character string input and/or a voice input of the first user; determining, based on the coded character string input and/or the voice input, first candidate items matching the coded character string input and/or the voice input in a first corpus, and second candidate items matching the coded character string input and/or the voice input in a second corpus; the first corpus is obtained based on historical communication records of the first user and the second user; the second corpus is a system corpus or a personalized corpus of the first user; and obtaining a third candidate item by using the first candidate item and the second candidate item, sorting the third candidate item, and displaying the sorted third candidate item.
Further, the one or more programs executed by the processor 520 further comprise instructions in which the sorting the third candidate item comprises: sorting by using the weight score of the first candidate item and the weight score of the second candidate item; wherein the weight score of the first candidate item is proportional to the input frequency of the first candidate item and inversely proportional to the difference between the use time of the first candidate item and the current time.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, such as the memory 504, comprising instructions executable by the processor 520 of the apparatus 500 to perform the above-described method. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A machine-readable medium (for example, a non-transitory computer-readable storage medium) stores instructions that, when executed by a processor of an apparatus (terminal or server), cause the apparatus to perform an input method, the method comprising: acquiring historical communication records of a first user and a second user; acquiring each input item according to the historical communication records, and establishing a first corpus; and predicting a predicted input item according to the on-screen output content of the first user and the characteristic attribute of each input item in the first corpus, and displaying the predicted input item.
Optionally, the method further comprises: and responding to the triggering operation of the first user on the predicted input item, and outputting the predicted input item corresponding to the triggering operation on the screen.
Optionally, the obtaining each input item according to the historical communication record, and establishing the first corpus includes: according to the historical communication record, obtaining a communication record associated with input data of a second user, and establishing a first corpus; the first corpus contains entries extracted from communication records associated with a second user; or establishing a shared corpus associated with the first user and the second user according to the historical communication record; the shared corpus comprises entries extracted from the historical communication records; the input item comprises an input word, an expression input or a picture input.
Optionally, the input item is specifically an input word, and the obtaining each input item according to the historical communication record includes: if the historical communication record contains a picture, performing word recognition processing on the picture to obtain text content corresponding to the word recognition result, and obtaining each input word from the text content; and if the historical communication record is a voice record, performing voice recognition processing on the voice record to obtain text content corresponding to the voice recognition result, and obtaining each input word from the text content.
Optionally, the predicting the predicted input item according to the on-screen output content of the first user and the characteristic attribute of each input item includes: and predicting to obtain a predicted input item according to the on-screen output content of the first user, the input frequency of each input item and the use time of each input item.
Optionally, the method further comprises: receiving coding character string input and/or voice input of a first user; determining, based on the encoded string input and/or the speech input, first candidates matching the encoded string input and/or the speech input in a first corpus, and second candidates matching the encoded string input and/or the speech input in a second corpus; the first corpus is obtained based on historical communication records of the first user and the second user; the second corpus is a system corpus or a personalized corpus of the first user; and obtaining a third candidate item by using the first candidate item and the second candidate item, sequencing the third candidate item, and displaying the sequenced third candidate item.
Optionally, the ranking the third candidate includes: sorting by using the weight score of the first candidate item and the weight score of the second candidate item; wherein the weight score of the first candidate is proportional to the input frequency of the first candidate and inversely proportional to the difference between the time of use of the first candidate and the current time.
Fig. 6 is a schematic structural diagram of a server according to an embodiment of the present invention. The server 600 may vary considerably in configuration or performance and may include one or more central processing units (CPUs) 622 (e.g., one or more processors), memory 632, and one or more storage media 630 (e.g., one or more mass storage devices) storing applications 662 or data 666. The memory 632 and the storage medium 630 may be transitory or persistent storage. The program stored on the storage medium 630 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Further, the central processor 622 may be configured to communicate with the storage medium 630 and to execute, on the server 600, the series of instruction operations in the storage medium 630.
The server 600 may also include one or more power supplies 626, one or more wired or wireless network interfaces 660, one or more input/output interfaces 668, one or more keyboards 666, and/or one or more operating systems 661, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings and described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.
It is noted that relational terms such as first and second, and the like, are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including memory storage devices.
In this specification, each embodiment is described in a progressive manner; identical and similar parts of the embodiments may be referred to one another, and each embodiment mainly describes its differences from the other embodiments. In particular, for the device embodiments, since they are substantially similar to the method embodiments, the description is relatively brief, and reference may be made to the description of the method embodiments for relevant points. The apparatus embodiments described above are merely illustrative: the units described as separate may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden. The foregoing is merely illustrative of the embodiments of this invention, and it will be appreciated by those skilled in the art that variations and modifications may be made without departing from the principles of the invention; all such modifications and variations are intended to fall within the scope of the invention.
It should be noted that, the user related information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region.

Claims (16)

1. An input method, comprising:
Acquiring historical communication records of a first user and a second user;
Acquiring an input item corresponding to the input information of the second user according to the historical communication record, and establishing a first corpus, wherein the first corpus does not comprise the input item corresponding to the input information of the first user; the input items in the first corpus are updated in real time after receiving the input content of the second user;
Predicting to obtain a predicted input item according to the on-screen output content of the first user and the characteristic attribute of each input item in the first corpus, and displaying the predicted input item, wherein the on-screen output content of the first user is the content which is already output by the first user;
Receiving coding character string input and/or voice input of a first user;
Determining, based on the encoded string input and/or the speech input, first candidates matching the encoded string input and/or the speech input in a first corpus, and second candidates matching the encoded string input and/or the speech input in a second corpus; the first corpus is obtained based on historical communication records of the first user and the second user; the second corpus is a system corpus or a personalized corpus of the first user;
Obtaining a third candidate item by using the first candidate item and the second candidate item, sorting the third candidate item by using the weight score of the first candidate item and the weight score of the second candidate item, and displaying the sorted third candidate item; wherein the weight score of the first candidate is proportional to the input frequency of the first candidate and inversely proportional to the difference between the time of use of the first candidate and the current time.
2. The method according to claim 1, wherein the method further comprises:
and responding to the triggering operation of the first user on the predicted input item, and outputting the predicted input item corresponding to the triggering operation on the screen.
3. The method of claim 1, wherein the input item comprises an input word, an expression input, or a picture input.
4. A method according to claim 1 or 3, wherein the input items are in particular input words, and the obtaining each input item according to the historical communication record comprises:
if the historical communication record contains a picture, performing word recognition processing on the picture to obtain text content corresponding to the word recognition result, and obtaining each input word by using the text content;
and if the historical communication record is a voice record, performing voice recognition processing on the voice record to acquire text content corresponding to a voice recognition result, and acquiring each input word by using the text content.
5. The method of claim 1, wherein predicting the predicted input item based on the on-screen output content of the first user and the characteristic attribute of each input item comprises:
and predicting to obtain a predicted input item according to the on-screen output content of the first user, the input frequency of each input item and the use time of each input item.
6. An input device, comprising:
a communication record acquisition unit, configured to acquire the historical communication records of a first user and a second user;
a corpus establishing unit, configured to obtain, according to the historical communication records, the input items corresponding to the input information of the second user and to establish a first corpus, wherein the first corpus does not include input items corresponding to the input information of the first user, and the input items in the first corpus are updated in real time upon receipt of input content from the second user;
a prediction unit, configured to predict a predicted input item according to the on-screen output content of the first user and the characteristic attributes of each input item in the first corpus, and to display the predicted input item, wherein the on-screen output content of the first user is the content the first user has already output;
a receiving subunit, configured to receive an encoded character string input and/or a voice input of the first user;
a first determining subunit, configured to determine, based on the encoded character string input and/or the voice input, a first candidate item matching that input in a first corpus;
a second determining subunit, configured to determine, based on the encoded character string input and/or the voice input, a second candidate item matching that input in a second corpus; wherein the first corpus is built from the historical communication records of the first user and the second user, and the second corpus is a system corpus or a personalized corpus of the first user;
a third determining subunit, configured to obtain a third candidate item from the first candidate item and the second candidate item, sort the third candidate item according to the weight scores of the first candidate item and the second candidate item, and display the sorted third candidate item; wherein the weight score of the first candidate item is proportional to the input frequency of the first candidate item and inversely proportional to the difference between the time the first candidate item was last used and the current time.
7. The apparatus of claim 6, wherein the apparatus further comprises:
an output unit, configured to output on screen, in response to a triggering operation by the first user on the predicted input item, the predicted input item corresponding to the triggering operation.
8. The apparatus of claim 6, wherein the input item comprises an input word, an emoji input, or a picture input.
9. The apparatus according to claim 6 or 8, wherein the input items are specifically input words, and the corpus establishing unit is specifically configured to:
if the historical communication record contains a picture, perform text recognition on the picture to obtain the text content corresponding to the recognition result and obtain each input word from the text content; and if the historical communication record contains a voice record, perform speech recognition on the voice record to obtain the text content corresponding to the recognition result and obtain each input word from the text content.
10. The apparatus according to claim 6, wherein the prediction unit is specifically configured to:
predict the predicted input item according to the on-screen output content of the first user, the input frequency of each input item, and the time each input item was used.
11. An apparatus for input, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising instructions for:
acquiring the historical communication records of a first user and a second user;
obtaining, according to the historical communication records, the input items corresponding to the input information of the second user and establishing a first corpus, wherein the first corpus does not include input items corresponding to the input information of the first user, and the input items in the first corpus are updated in real time upon receipt of input content from the second user;
predicting a predicted input item according to the on-screen output content of the first user and the characteristic attributes of each input item in the first corpus, and displaying the predicted input item, wherein the on-screen output content of the first user is the content the first user has already output;
receiving an encoded character string input and/or a voice input of the first user;
determining, based on the encoded character string input and/or the voice input, a first candidate item matching that input in a first corpus and a second candidate item matching that input in a second corpus; wherein the first corpus is built from the historical communication records of the first user and the second user, and the second corpus is a system corpus or a personalized corpus of the first user;
obtaining a third candidate item from the first candidate item and the second candidate item, sorting the third candidate item according to the weight scores of the first candidate item and the second candidate item, and displaying the sorted third candidate item; wherein the weight score of the first candidate item is proportional to the input frequency of the first candidate item and inversely proportional to the difference between the time the first candidate item was last used and the current time.
12. The apparatus of claim 11, wherein the one or more programs further comprise instructions for: outputting on screen, in response to a triggering operation by the first user on the predicted input item, the predicted input item corresponding to the triggering operation.
13. The apparatus of claim 11, wherein the input item comprises an input word, an emoji input, or a picture input.
14. The apparatus of claim 11 or 13, wherein the input items are specifically input words, and the one or more programs further comprise instructions for obtaining each input item according to the historical communication record by: if the historical communication record contains a picture, performing text recognition on the picture to obtain the text content corresponding to the recognition result and obtaining each input word from the text content; and if the historical communication record contains a voice record, performing speech recognition on the voice record to obtain the text content corresponding to the recognition result and obtaining each input word from the text content.
15. The apparatus of claim 11, wherein predicting the predicted input item according to the on-screen output content of the first user and the characteristic attributes of each input item comprises: predicting the predicted input item according to the on-screen output content of the first user, the input frequency of each input item, and the time each input item was used.
16. A machine-readable medium having instructions stored thereon which, when executed by one or more processors, cause an apparatus to perform the input method of any one of claims 1 to 5.
CN201810392678.7A 2018-04-27 2018-04-27 Input method and device Active CN110413133B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810392678.7A CN110413133B (en) 2018-04-27 2018-04-27 Input method and device


Publications (2)

Publication Number Publication Date
CN110413133A CN110413133A (en) 2019-11-05
CN110413133B true CN110413133B (en) 2024-04-26

Family

ID=68346603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810392678.7A Active CN110413133B (en) 2018-04-27 2018-04-27 Input method and device

Country Status (1)

Country Link
CN (1) CN110413133B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101114298A (en) * 2007-08-31 2008-01-30 北京搜狗科技发展有限公司 Method for gaining oral vocabulary entry, device and input method system thereof
CN101276249A (en) * 2007-03-30 2008-10-01 北京三星通信技术研究有限公司 Method and device for forecasting and discriminating hand-written characters
CN101373468A (en) * 2007-08-20 2009-02-25 北京搜狗科技发展有限公司 Method for loading word stock, method for inputting character and input method system
CN102508554A (en) * 2011-10-02 2012-06-20 上海量明科技发展有限公司 Input method with communication association, personal repertoire and system
CN103825952A (en) * 2014-03-04 2014-05-28 百度在线网络技术(北京)有限公司 Cell lexicon pushing method and server
CN104268166A (en) * 2014-09-09 2015-01-07 北京搜狗科技发展有限公司 Input method, device and electronic device
CN104731364A (en) * 2015-03-30 2015-06-24 天脉聚源(北京)教育科技有限公司 Input method and input method system
CN107315487A (en) * 2016-04-27 2017-11-03 北京搜狗科技发展有限公司 A kind of input processing method, device and electronic equipment
CN107329585A (en) * 2017-06-28 2017-11-07 北京百度网讯科技有限公司 Method and apparatus for inputting word

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150199332A1 (en) * 2012-07-20 2015-07-16 Mu Li Browsing history language model for input method editor
US8918408B2 (en) * 2012-08-24 2014-12-23 Microsoft Corporation Candidate generation for predictive input using input history
US9244906B2 (en) * 2013-06-21 2016-01-26 Blackberry Limited Text entry at electronic communication device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Communicative predictions can overrule linguistic priors; Leon O. H. Kroczek et al.; Scientific Reports; 2017-12-14; 1-9 *
Design of a translation-assistance system for scientific literature based on automatic term extraction; Huang Zhenghao; Cui Rongyi; Journal of Yanbian University (Natural Science Edition); 2017-09-20 (03); 74-78 *


Similar Documents

Publication Publication Date Title
CN107247519B (en) Input method and device
CN108038102B (en) Method and device for recommending expression image, terminal and storage medium
CN108073606B (en) News recommendation method and device for news recommendation
CN109144285B (en) Input method and device
CN107918496B (en) Input error correction method and device for input error correction
CN111046210A (en) Information recommendation method and device and electronic equipment
CN110764627A (en) Input method and device and electronic equipment
CN110244860B (en) Input method and device and electronic equipment
CN110895558B (en) Dialogue reply method and related device
CN109901726B (en) Candidate word generation method and device and candidate word generation device
CN109144286B (en) Input method and device
CN111831132A (en) Information recommendation method and device and electronic equipment
CN112631435A (en) Input method, device, equipment and storage medium
CN110413133B (en) Input method and device
CN111198620A (en) Method, device and equipment for presenting input candidate items
CN107291259B (en) Information display method and device for information display
CN110471538B (en) Input prediction method and device
CN113589954A (en) Data processing method and device and electronic equipment
CN112181163A (en) Input method, input device and input device
CN108241438B (en) Input method, input device and input device
CN112462992B (en) Information processing method and device, electronic equipment and medium
CN111339263A (en) Information recommendation method and device and electronic equipment
CN110765338A (en) Data processing method and device and data processing device
CN108874170B (en) Input method and device
CN111666436B (en) Data processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant