CN113589949A - Input method and device and electronic equipment - Google Patents
- Publication number
- CN113589949A (application number CN202010366780.7A)
- Authority
- CN
- China
- Prior art keywords
- sentence
- input
- prediction model
- input sequence
- candidates
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/02—Input arrangements using manually operated switches, e.g. using keyboards or dials
- G06F3/023—Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
- G06F3/0233—Character input methods
- G06F3/0237—Character input methods using prediction or retrieval techniques
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/02—Input arrangements using manually operated switches, e.g. using keyboards or dials
- G06F3/023—Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
- G06F3/0233—Character input methods
- G06F3/0236—Character input methods using selection techniques to select from displayed items
Abstract
The embodiment of the invention provides an input method, an input device, and an electronic device. The method comprises the following steps: acquiring an input sequence entered by a user in an input method; inputting the input sequence into a sentence prediction model to obtain sentence candidates output by the sentence prediction model; and displaying the sentence candidates. Thus, even when the input sequence corresponds to only part of the text of the intended sentence, the user can commit the whole sentence to the screen directly, greatly improving the user's input efficiency.
Description
Technical Field
The present invention relates to the field of input methods, and in particular to an input method, an input device, and an electronic device.
Background
With the development of computer technology, electronic devices such as mobile phones and tablet computers have become increasingly popular, bringing great convenience to people's daily life, study, and work. These devices typically have an input method application (input method for short) installed, through which the user enters information.
During input, the input method can predict candidates that match the input sequence for the user to select directly, so as to improve input efficiency. For example, taking a pinyin sequence as the input sequence, when the user types the pinyin string "gaosuni", the input method provides candidates such as "tell you" and "kill you".

However, the basic unit of a user's expression is the sentence. Because existing input methods provide only word-level candidates, the user can input only word by word, so the user's train of thought is repeatedly interrupted and input efficiency is low. For example, suppose the content the user needs to input is "tell you one thing". The user first types the pinyin "gaosuni", and the input method offers candidates such as "tell you" and "do you" for that pinyin; after selecting "tell you", the user continues typing "yigeshiqing", the input method then predicts the candidate "one thing", and the user selects "one thing" to complete the input of the required content.
Disclosure of Invention
The embodiment of the invention provides an input method for improving input efficiency.
Correspondingly, the embodiment of the invention also provides an input device and electronic equipment, which are used for ensuring the realization and application of the method.
In order to solve the above problem, an embodiment of the present invention discloses an input method, which specifically includes:
acquiring an input sequence input by a user in an input method; inputting the input sequence into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model; and displaying the sentence candidates.
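The three claimed steps can be sketched as follows. This is a minimal, hypothetical illustration only: the "model" here is a toy lookup table standing in for the patent's sentence prediction model, and the function names are invented for the example.

```python
def predict_sentences(input_sequence, model):
    """Return sentence candidates for an input sequence (toy stand-in model)."""
    return model.get(input_sequence, [])

# Toy "sentence prediction model": maps pinyin sequences to sentence candidates.
TOY_MODEL = {
    "gaosuni": ["tell you", "tell you one thing"],
    "gaosuniyigeshiqing": ["tell you one thing"],
}

# Step 1: acquire the input sequence; Step 2: run the model; Step 3: display.
candidates = predict_sentences("gaosuni", TOY_MODEL)
for c in candidates:
    print(c)
```

Here even the partial sequence "gaosuni" yields the whole-sentence candidate "tell you one thing", which is the core efficiency claim.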
Optionally, the input information of the sentence prediction model further comprises at least one of: the above information (i.e., the text preceding the current input), the application environment information, and the candidate words of the input sequence.
Optionally, when the input information of the sentence prediction model further includes the above information, the inputting the input sequence into the sentence prediction model to obtain the sentence candidates output by the sentence prediction model includes: and inputting the input sequence and the above information into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model.
Optionally, the inputting the input sequence and the above information into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: and predicting by adopting the sentence prediction model based on the input sequence and the above information to obtain sentence candidates output by the sentence prediction model.
Optionally, the inputting the input sequence and the above information into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: predicting based on the input sequence by adopting the sentence prediction model to obtain a plurality of sentence candidates; and screening the sentence candidates by adopting the sentence prediction model according to the information, and outputting the screened sentence candidates.
Optionally, the inputting the input sequence and the above information into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: predicting by adopting the sentence prediction model based on the above information to obtain a plurality of sentence candidates; and screening the sentence candidates according to the input sequence by adopting the sentence prediction model, and outputting the screened sentence candidates.
Optionally, the input sequence comprises: a first input sequence obtained after the current input in the current input period and a second input sequence before the current input in the current input period; the inputting the input sequence into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: predicting the second input sequence by adopting the sentence prediction model to obtain a plurality of sentence candidates output by the sentence prediction model; and screening the sentence candidates according to the first input sequence by adopting the sentence prediction model, and outputting the screened sentence candidates.
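The predict-then-screen step above can be sketched as: candidates are first predicted from the earlier ("second") input sequence, then screened so that only those still consistent with the fuller ("first") input sequence survive. The `to_pinyin` helper and its lookup table are assumptions for this toy example, not part of the patent.

```python
def to_pinyin(sentence):
    """Hypothetical romanization lookup, for this toy example only."""
    table = {
        "tell you": "gaosuni",
        "sue you": "gaosuni",
        "tell you one thing": "gaosuniyigeshiqing",
    }
    return table[sentence]

def screen_candidates(candidates, first_sequence):
    """Keep candidates whose romanization starts with the new full sequence."""
    return [c for c in candidates if to_pinyin(c).startswith(first_sequence)]

# Candidates predicted from the second input sequence ("gaosuni"), then
# screened against the first input sequence after the latest keystrokes.
kept = screen_candidates(["tell you", "sue you", "tell you one thing"],
                         "gaosuniyi")
```

After the user types two more letters, only "tell you one thing" remains consistent with the typed sequence.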
Optionally, when the input information of the sentence prediction model further includes application environment information, the inputting the input sequence into the sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: predicting based on the input sequence by adopting the sentence prediction model to obtain a plurality of sentence candidates; and screening the sentence candidates by adopting the sentence prediction model according to the application environment information, and outputting the screened sentence candidates.
Optionally, the method further comprises: when the sentence candidates output by the sentence prediction model include a plurality of candidates, acquiring input association information, the input association information including: context information, input environment information, candidate words of the input sequence, application scene information, opposite-end user information, home-terminal user information, and home-terminal user historical behavior information; and reordering the plurality of sentence candidates using the input association information.
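The reordering step might look like the following hedged sketch. The scoring features (context-word overlap, past-selection count) and their weights are illustrative assumptions; the patent does not specify a scoring scheme.

```python
def rerank(candidates, context, history):
    """Reorder sentence candidates using input association information."""
    def score(c):
        s = 0.0
        # Overlap between candidate text and preceding-context words.
        s += sum(1 for w in context.split() if w in c)
        # Home-terminal user historical behavior: past selection counts.
        s += history.get(c, 0) * 0.5
        return s
    return sorted(candidates, key=score, reverse=True)

ranked = rerank(["tell you", "tell you one thing"],
                context="one thing happened",
                history={"tell you one thing": 3})
```

With that context and history, the full-sentence candidate is promoted to the front of the list.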
Optionally, the input method includes a client and a server, and the inputting the input sequence into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: the client generates a long sentence prediction request according to the input sequence and sends the long sentence prediction request to the server; and the server acquires an input sequence from the long sentence prediction request, inputs the input sequence into a sentence prediction model, obtains sentence candidates output by the sentence prediction model and returns the sentence candidates to the client.
Optionally, the input sequence includes an input sequence input in a current input cycle, and the method further includes: the client determines the total length of an input sequence input by a user in a current input period; judging whether the total length of an input sequence input by a user in the current input period reaches a preset length or not; and if the total length of the input sequence input by the user in the current input period does not reach the preset length, executing the step of generating the long sentence prediction request according to the input sequence.
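The client-side gate described above can be sketched as a single length check: a long-sentence prediction request is sent only while the input sequence of the current cycle is shorter than the preset length. `MAX_LEN` is an assumed threshold, not a value from the patent.

```python
MAX_LEN = 26  # assumed preset length for the current input cycle

def should_request_prediction(input_sequence):
    """Send a long-sentence prediction request only below the preset length."""
    return len(input_sequence) < MAX_LEN

go = should_request_prediction("gaosuni")        # short sequence: request
stop = should_request_prediction("g" * 40)       # too long: no request
```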
Optionally, the method further comprises: the client caches sentence candidates returned by the server by taking an input cycle as a unit; wherein the sentence candidates comprise sentence candidates returned by the server over time.
Optionally, the input sequence comprises: an input sequence input in a current input cycle, the input sequence input in the current input cycle comprising: an input sequence input this time in the current period;
the method further comprises the following steps: the client judges whether an input sequence input by the user at this time is matched with the currently displayed sentence candidate; when an input sequence input by a user at this time is not matched with a currently displayed sentence candidate, executing the step of generating a long sentence prediction request according to the input sequence; the presenting the sentence candidates includes: and when an input sequence input by the user at this time is matched with the currently displayed sentence candidate, continuously displaying the currently displayed sentence candidate.
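The match check above reduces to a prefix test: as long as the newly typed keys still fit the displayed candidate's romanization, the client keeps showing it; otherwise it issues a new long-sentence prediction request. The precomputed pinyin string for the candidate is an assumption here.

```python
def still_matches(displayed_candidate_pinyin, typed_so_far):
    """True while the typed sequence is a prefix of the candidate's pinyin."""
    return displayed_candidate_pinyin.startswith(typed_so_far)

keep_showing = still_matches("gaosuniyigeshiqing", "gaosuniyi")  # still fits
needs_refresh = not still_matches("gaosuniyigeshiqing", "gaosuwo")  # diverged
```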
Optionally, the method further comprises: performing other types of prediction according to the input sequence to obtain other types of prediction results, where the other types of prediction results comprise at least one of the following: a name prediction result, a word-by-word proofreading prediction result, a local cloud substitution prediction result, a cloud input prediction result, and an expression prediction result;
the presenting the sentence candidates includes: and displaying the sentence candidates according to the other types of prediction results and the priority of the sentence candidates.
Optionally, the presenting the sentence candidates comprises: splicing the content in the edit box with the sentence candidates to obtain a corresponding splicing result; and displaying the splicing result.
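The splicing step can be sketched as simple concatenation: the text already committed in the edit box is joined with the sentence candidate so that the user previews the full sentence that selecting the candidate would produce. A real input method would handle spacing and cursor position; this sketch ignores both.

```python
def splice(edit_box_text, sentence_candidate):
    """Join committed edit-box content with a candidate for preview display."""
    return edit_box_text + sentence_candidate

preview = splice("I will ", "tell you one thing")
print(preview)
```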
The embodiment of the invention also discloses an input device, which specifically comprises: the acquisition module is used for acquiring an input sequence input by a user in an input method; the determining module is used for inputting the input sequence into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model; and the display module is used for displaying the sentence candidates.
Optionally, the input information of the sentence prediction model further comprises at least one of: the above information, the application environment information, and the candidate words of the input sequence.
Optionally, when the input information of the sentence prediction model further includes the above information, the determining module includes: and the first candidate output submodule is used for inputting the input sequence and the information into a sentence prediction model to obtain the sentence candidates output by the sentence prediction model.
Optionally, the first candidate output sub-module includes: and the first prediction unit is used for predicting based on the input sequence and the above information by adopting the sentence prediction model to obtain the sentence candidates output by the sentence prediction model.
Optionally, the first candidate output sub-module includes: a second prediction unit, configured to perform prediction based on the input sequence by using the sentence prediction model to obtain a plurality of sentence candidates; and the first output unit is used for screening the sentence candidates by adopting the sentence prediction model according to the above information and outputting the screened sentence candidates.
Optionally, the first candidate output sub-module includes: a third prediction unit, configured to perform prediction based on the above information by using the sentence prediction model to obtain a plurality of sentence candidates; and the second output unit is used for screening the sentence candidates according to the input sequence by adopting the sentence prediction model and outputting the screened sentence candidates.
Optionally, the input sequence comprises: a first input sequence obtained after the current input in the current input period and a second input sequence before the current input in the current input period; the determining module comprises: the first candidate prediction submodule is used for predicting the second input sequence by adopting the sentence prediction model to obtain a plurality of sentence candidates output by the sentence prediction model; and the second candidate output submodule is used for screening the plurality of sentence candidates according to the first input sequence by adopting the sentence prediction model and outputting the screened sentence candidates.
Optionally, when the input information of the sentence prediction model further includes application environment information, the determining module includes: the second candidate prediction submodule is used for predicting based on the input sequence by adopting the sentence prediction model to obtain a plurality of sentence candidates; and the third candidate output submodule is used for screening the plurality of sentence candidates by adopting the sentence prediction model according to the application environment information and outputting the screened sentence candidates.
Optionally, the apparatus further comprises: an association information obtaining module, configured to acquire input association information when the sentence candidates output by the sentence prediction model include a plurality of candidates, the input association information including: context information, input environment information, candidate words of the input sequence, application scene information, opposite-end user information, home-terminal user information, and home-terminal user historical behavior information; and a sorting module, configured to reorder the plurality of sentence candidates using the input association information.
Optionally, the input method includes a client and a server, and the determining module includes: the sending submodule is used for calling the client to generate a long sentence prediction request according to the input sequence and sending the long sentence prediction request to the server; and the third candidate prediction submodule is used for calling the server to acquire an input sequence from the long sentence prediction request, inputting the input sequence into a sentence prediction model, acquiring a sentence candidate output by the sentence prediction model and returning the sentence candidate to the client.
Optionally, the input sequence includes an input sequence input in a current input cycle, and the apparatus further includes: the length determining module is used for calling the client to determine the total length of an input sequence input by the user in the current input period; the judging module is used for judging whether the total length of the input sequence input by the user in the current input period reaches a preset length or not; and the sending submodule is used for executing the steps of calling the client to generate a long sentence prediction request according to the input sequence and sending the long sentence prediction request to the server if the total length of the input sequence input in the current input period of the user does not reach the preset length.
Optionally, the apparatus further comprises: the cache module is used for calling the client to cache the sentence candidates returned by the server by taking the input period as a unit; wherein the sentence candidates comprise sentence candidates returned by the server over time.
Optionally, the input sequence comprises: an input sequence input in a current input cycle, the input sequence input in the current input cycle comprising: an input sequence input this time in the current period; the device further comprises: the matching module is used for calling the client to judge whether an input sequence input by the user at this time is matched with the currently displayed sentence candidate; the sending submodule is used for executing the steps of calling the client to generate a long sentence prediction request according to an input sequence when the input sequence input by the user at this time is not matched with the currently displayed sentence candidate, and sending the long sentence prediction request to the server; the display module comprises: and the first candidate display sub-module is used for continuously displaying the currently displayed sentence candidates when an input sequence input by the user at this time is matched with the currently displayed sentence candidates.
Optionally, the apparatus further comprises: an other-type prediction module, configured to perform other types of prediction according to the input sequence to obtain other types of prediction results, where the other types of prediction results comprise at least one of the following: a name prediction result, a word-by-word proofreading prediction result, a local cloud substitution prediction result, a cloud input prediction result, and an expression prediction result; the display module comprises: a second candidate display submodule, configured to display the sentence candidates according to the other types of prediction results and the priority of the sentence candidates.
Optionally, the display module comprises: the splicing submodule is used for splicing the content in the edit box with the candidate sentences to obtain a corresponding splicing result; and the result display submodule is used for displaying the splicing result.
The embodiment of the invention also discloses a readable storage medium; when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the input method according to any one of the embodiments of the invention.
An embodiment of the present invention also discloses an electronic device, including a memory, and one or more programs, where the one or more programs are stored in the memory, and configured to be executed by one or more processors, and the one or more programs include instructions for: acquiring an input sequence input by a user in an input method; inputting the input sequence into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model; and displaying the sentence candidates.
Optionally, the input information of the sentence prediction model further comprises at least one of: the above information, the application environment information, and the candidate words of the input sequence.
Optionally, when the input information of the sentence prediction model further includes the above information, the inputting the input sequence into the sentence prediction model to obtain the sentence candidates output by the sentence prediction model includes: and inputting the input sequence and the above information into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model.
Optionally, the inputting the input sequence and the above information into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: and predicting by adopting the sentence prediction model based on the input sequence and the above information to obtain sentence candidates output by the sentence prediction model.
Optionally, the inputting the input sequence and the above information into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: predicting based on the input sequence by adopting the sentence prediction model to obtain a plurality of sentence candidates; and screening the sentence candidates by adopting the sentence prediction model according to the information, and outputting the screened sentence candidates.
Optionally, the inputting the input sequence and the above information into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: predicting by adopting the sentence prediction model based on the above information to obtain a plurality of sentence candidates; and screening the sentence candidates according to the input sequence by adopting the sentence prediction model, and outputting the screened sentence candidates.
Optionally, the input sequence comprises: a first input sequence obtained after the current input in the current input period and a second input sequence before the current input in the current input period; the inputting the input sequence into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: predicting the second input sequence by adopting the sentence prediction model to obtain a plurality of sentence candidates output by the sentence prediction model; and screening the sentence candidates according to the first input sequence by adopting the sentence prediction model, and outputting the screened sentence candidates.
Optionally, when the input information of the sentence prediction model further includes application environment information, the inputting the input sequence into the sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: predicting based on the input sequence by adopting the sentence prediction model to obtain a plurality of sentence candidates; and screening the sentence candidates by adopting the sentence prediction model according to the application environment information, and outputting the screened sentence candidates.
Optionally, further comprising instructions for: when the sentence candidates output by the sentence prediction model include a plurality of candidates, acquiring input association information, the input association information including: context information, input environment information, candidate words of the input sequence, application scene information, opposite-end user information, home-terminal user information, and home-terminal user historical behavior information; and reordering the plurality of sentence candidates using the input association information.
Optionally, the inputting the input sequence into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: generating a long sentence prediction request according to the input sequence, and sending the long sentence prediction request to a server; and receiving sentence candidates returned by the server, wherein the sentence candidates are obtained by the server from the long sentence prediction request, input sequences are input into a sentence prediction model, and the sentence candidates output by the sentence prediction model are obtained and returned.
Optionally, the input sequence includes an input sequence input in a current input cycle, and further includes instructions for: determining the total length of an input sequence input by a user in the current input period; judging whether the total length of an input sequence input by a user in the current input period reaches a preset length or not; and if the total length of the input sequence input by the user in the current input period does not reach the preset length, executing the step of generating the long sentence prediction request according to the input sequence.
Optionally, further comprising instructions for: caching sentence candidates returned by the server by taking an input period as a unit; wherein the sentence candidates comprise sentence candidates returned by the server over time.
Optionally, the input sequence comprises: an input sequence input in a current input cycle, the input sequence input in the current input cycle comprising: an input sequence input this time in the current period; further comprising instructions for: judging whether an input sequence input by a user this time is matched with a currently displayed sentence candidate; when an input sequence input by a user at this time is not matched with a currently displayed sentence candidate, executing the step of generating a long sentence prediction request according to the input sequence; the presenting the sentence candidates includes: and when an input sequence input by the user at this time is matched with the currently displayed sentence candidate, continuously displaying the currently displayed sentence candidate.
Optionally, further comprising instructions for: performing other types of prediction according to the input sequence to obtain other types of prediction results, where the other types of prediction results comprise at least one of the following: a name prediction result, a word-by-word proofreading prediction result, a local cloud substitution prediction result, a cloud input prediction result, and an expression prediction result; the presenting the sentence candidates includes: displaying the sentence candidates according to the other types of prediction results and the priority of the sentence candidates.
Optionally, the presenting the sentence candidates comprises: splicing the content of the edit box with the candidate sentences to obtain a corresponding splicing result; and displaying the splicing result.
The embodiment of the invention has the following advantages:
In the embodiment of the invention, after the input sequence entered by the user is acquired, the input method can feed the input sequence into a sentence prediction model, obtain the sentence candidates output by the sentence prediction model, and display the sentence candidates; thus, even when the input sequence corresponds to only part of the text of the intended sentence, the user can commit the whole sentence to the screen directly, greatly improving the user's input efficiency.
Drawings
FIG. 1 is a flow chart of the steps of an input method embodiment of the present invention;
FIG. 2 is a flow chart of the steps of an alternative embodiment of an input method of the present invention;
FIG. 3 is a flow chart of the steps of yet another alternative embodiment of an input method of the present invention;
FIG. 4 is a flow chart of the steps of yet another alternative embodiment of an input method of the present invention;
FIG. 5 is a flow chart of steps of yet another alternative embodiment of an input method of the present invention;
FIG. 6 is a flow chart of the steps of yet another alternative embodiment of an input method of the present invention;
FIG. 7 is a flow chart of the steps of yet another alternative embodiment of an input method of the present invention;
FIG. 8 is a flow chart of the steps of yet another alternative embodiment of an input method of the present invention;
FIG. 9 is a flow chart of the steps of yet another alternative embodiment of an input method of the present invention;
FIG. 10a is a flow chart of the steps of yet another alternative embodiment of an input method of the present invention;
FIG. 10b is a diagram of a sentence candidate display interface according to an embodiment of the present invention;
FIG. 10c is a diagram of a sentence candidate display interface according to another embodiment of the present invention;
FIG. 10d is a schematic illustration of a tiled display interface according to an embodiment of the present invention;
FIG. 11 is a block diagram of an input device according to an embodiment of the present invention;
FIG. 12 is a block diagram of an alternative embodiment of an input device of the present invention;
FIG. 13 illustrates a block diagram of an electronic device for input, in accordance with an exemplary embodiment;
FIG. 14 is a schematic structural diagram of an electronic device for input according to another exemplary embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Referring to fig. 1, a flowchart illustrating steps of an embodiment of an input method according to the present invention is shown, which may specifically include the following steps:
Step 102: acquiring an input sequence entered by the user in the input method.
In the embodiment of the invention, when the user inputs the input sequence, long sentence prediction can be carried out based on the input sequence, and corresponding sentence candidates are generated.
Specifically, while the user is typing through the input method, the input sequence entered by the user is acquired; corresponding sentence candidates are then predicted based on the acquired sequence. The input sequence may be of various types, such as a pinyin sequence, a stroke sequence, or a foreign-language character string, which is not limited in this embodiment of the present invention. The input sequence may contain a single code or multiple codes, which is likewise not limited.
Step 104: inputting the input sequence into a sentence prediction model to obtain the sentence candidates output by the sentence prediction model.
In the embodiment of the invention, a sentence prediction model for predicting candidate sentences can be trained in advance; and then, long sentence prediction is carried out on the basis of the acquired input sequence by adopting a sentence prediction model, and corresponding sentence candidates are determined.
In an example of the present invention, the acquired input sequence may be input directly into the trained sentence prediction model, which predicts based on the input sequence and outputs the corresponding sentence candidates. The model may output one sentence candidate or multiple, which is not limited in this embodiment. In addition, the sentence candidates may be Chinese sentences or English sentences; the language corresponding to the input sequence is not limited either.
Step 106: displaying the sentence candidates.
In the embodiment of the invention, the sentence prediction model can output a score (such as a probability) for each sentence candidate along with the candidate itself. When the model outputs a plurality of sentence candidates, the candidates can be sorted by their scores and displayed according to the sorted result, for example with the highest-scoring candidate shown first; this is not limited in this embodiment. The user can then commit a sentence to the screen directly even when only part of its text corresponds to the input sequence, which improves input efficiency.
It should be noted that, the embodiment of the present invention does not limit the length of the input sequence, and no matter whether the input sequence is short or long, the embodiment of the present invention may input the input sequence into the sentence prediction model for prediction to obtain the candidate sentences.
As an example of the present invention, assume that the content in the edit box is "the data receiving", and the input sequence obtained from the user is the pinyin sequence "wokanhou". The pinyin sequence is input into the sentence prediction model to obtain the sentence candidates it outputs; the model can also output a score for each candidate. For example, it may output: "reply you after me see" (sentence candidate) - 0.9 (score), "give an opinion after me see" - 0.8, "speak after me see" - 0.6. The three sentence candidates can then be presented in order of their scores.
As yet another example of the present invention, assume that the content in the edit box is "tell you", and the input sequence obtained from the user is the pinyin sequence "yige". The pinyin sequence is input into the sentence prediction model to obtain the sentence candidates it outputs; the model can also output a score for each candidate. For example, it may output: "tell you one thing" (sentence candidate) - 0.8 (score), "tell you one news" - 0.7, "tell you one good message" - 0.9. The three sentence candidates can then be presented in order of their scores.
As yet another example of the present invention, assume there is no content in the edit box, and the input sequence obtained from the user is the pinyin sequence "fenxiangdao". The pinyin sequence is input into the sentence prediction model to obtain the sentence candidates it outputs; the model can also output a score for each candidate. For example, it may output: "share to circle of friends" (sentence candidate) - 0.9 (score), "share to platform" - 0.8. The two sentence candidates can then be presented in order of their scores.
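The score-based ordering used in these examples can be sketched as a small helper that sorts candidate/score pairs before display; the function name and the top-k cutoff are illustrative, not part of the patent:

```python
# Hypothetical sketch: sort (sentence, score) pairs output by a prediction
# model in descending score order and keep the top-k for display.
def rank_candidates(candidates, top_k=3):
    """Return the top_k (sentence, score) pairs, highest score first."""
    return sorted(candidates, key=lambda pair: pair[1], reverse=True)[:top_k]

# Candidates and scores mirror the "fenxiangdao" example above.
predicted = [
    ("share to platform", 0.8),
    ("share to circle of friends", 0.9),
]
print(rank_candidates(predicted))  # higher-scoring candidate listed first
```

In a real input method the display layer would typically show only the first one or two entries of this ranking, as the text notes.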
In summary, in this embodiment of the present invention, after the input method obtains the input sequence entered by the user, it may input the sequence into the sentence prediction model, obtain the sentence candidates output by the model, and display them. Even when only part of the text of the intended sentence corresponds to the input sequence, the user can then commit the whole sentence to the screen directly, which greatly improves input efficiency.
In an optional embodiment of the present invention, the sentence prediction model may perform prediction at the granularity of a single sentence or of a compound sentence; this is not limited in this embodiment. Predicting at both granularities makes the sentence candidates provided to the user more comprehensive and brings a better input experience. A single sentence is a sentence formed from phrases or single words, from which no clauses can be separated; a clause is a syntactic unit structurally similar to a single sentence but lacking a complete sentence tone. A compound sentence consists of two or more clauses that are closely related in meaning and do not structurally contain one another. Furthermore, the embodiment of the invention can predict several clauses, including the punctuation between them, as one complete sentence.
In an alternative embodiment of the present invention, information related to the user's input may be fed into the sentence prediction model together with the input sequence, so that the model can predict more accurate sentence candidates.
In an example of the present invention, the input information of the sentence prediction model further includes at least one of: the above information, the application environment information and the candidate words of the input sequence; of course, other information, such as local user information, may also be included, which is not limited in this embodiment of the present invention.
The following description takes as an example a sentence prediction model that predicts based on the input sequence and the above information.
In an optional embodiment of the present invention, a manner of inputting the input sequence into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: and inputting the input sequence and the above information into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model.
The above information includes interaction information and/or the content in the edit box; the interaction information refers to messages already sent by the local end and the peer end. When the above information contains only the interaction information, the input method can predict the beginning of the sentence the user is entering this time; when it contains both the interaction information and the edit-box content, or only the edit-box content, the input method can predict the middle or the end of the sentence the user is entering this time.
The sentence prediction model may be a statistical model (e.g., an n-gram model), a retrieval model (e.g., fastText), or a generative model (e.g., a Transformer or an LSTM (Long Short-Term Memory) network); this is not limited in this embodiment.
Referring to fig. 2, a flowchart illustrating steps of an alternative embodiment of the input method of the present invention is shown, which may specifically include the following steps:
Step 202: acquiring an input sequence and the above information entered by the user in the input method.
In the embodiment of the invention, the input sequence entered by the user and the above information can be acquired during the user's input process. The acquired input sequence and above information are then used as the input of the sentence prediction model to determine sentence candidates.
In one implementation, inputting the input sequence and the above information into the sentence prediction model to obtain the sentence candidates output by the model may refer to step 204:
Step 204: predicting, by the sentence prediction model, based on the input sequence and the above information, to obtain the sentence candidates output by the sentence prediction model.
In one example of the present invention, the sentence prediction model may be a statistical model. The frequency of each sentence, conditioned on a given combination of above information and input sequence, can be counted in advance. During prediction, the input sequence and the above information are input into the model, which looks up the frequency of each sentence under the currently acquired condition and takes the sentences whose frequency exceeds a preset threshold as sentence candidates. The preset threshold may be set as required, which is not limited in this embodiment of the present invention.
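A minimal sketch of such a frequency-table predictor, assuming a simple in-memory count table keyed by (above information, input sequence); the class and method names are hypothetical, not from the patent:

```python
from collections import defaultdict

class FrequencySentencePredictor:
    """Toy statistical sentence predictor: sentences are looked up by their
    pre-counted frequency conditioned on (above_info, input_sequence)."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold  # the "preset threshold" from the text
        self.freq = defaultdict(dict)  # (above_info, seq) -> {sentence: freq}

    def observe(self, above_info, seq, sentence, frequency):
        # In practice these frequencies would be counted offline in advance.
        self.freq[(above_info, seq)][sentence] = frequency

    def predict(self, above_info, seq):
        # Return the sentences whose frequency exceeds the preset threshold.
        table = self.freq.get((above_info, seq), {})
        return [s for s, f in table.items() if f > self.threshold]
```

A real n-gram model would factor the sentence probability rather than store whole-sentence counts; the table above only illustrates the thresholded lookup the paragraph describes.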
In yet another example of the present invention, the sentence prediction model may be a deep learning model. In one mode, the input sequence and the above information are both input into the model, which extracts the features corresponding to each, predicts according to the extracted features, and outputs the corresponding sentence candidates. The input sequence and the above information may be input into the model as information of two separate dimensions, or combined into one dimension, as required; this is not limited in this embodiment of the present invention.
In yet another example of the present invention, after both the input sequence and the above information are input into the sentence prediction model, the input sequence may be converted into corresponding word candidates; and concatenates the word candidates with the above information. And the sentence prediction model predicts based on the information obtained by splicing and outputs a plurality of corresponding sentence candidates.
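The conversion-and-splicing step just described can be sketched as follows, assuming a hypothetical lookup table mapping an input sequence to word candidates:

```python
def build_model_inputs(above_info, input_sequence, seq_to_words):
    """Convert the input sequence into word candidates via a lookup table,
    then splice each candidate onto the above information, producing the
    strings a generative sentence prediction model would condition on."""
    word_candidates = seq_to_words.get(input_sequence, [])
    return [above_info + word for word in word_candidates]
```

In the patent's setting the model would then continue each spliced string until the end of the sentence; here only the splicing itself is shown.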
And step 206, displaying the sentence candidates.
In an example of the present invention, the sentence candidates may be displayed in a designated area of the input method panel; the designated area can be set as required, such as the upper-right area, which is not limited in this embodiment of the present invention.
In summary, in the embodiment of the present invention, an input sequence and the above information input by a user in an input method may be obtained, and then the sentence prediction model is used to perform prediction based on the input sequence and the above information, so as to obtain a candidate sentence output by the sentence prediction model; and furthermore, sentence candidates are predicted in a mode of combining the input sequence and the above information, and the accuracy of the determined sentence candidates can be improved. And then displaying the predicted sentence candidates, thereby further improving the input efficiency of the user.
Referring to fig. 3, a flowchart illustrating steps of another alternative embodiment of the input method of the present invention is shown, which may specifically include the following steps:
Step 302: acquiring an input sequence and the above information entered by the user in the input method. This step 302 is similar to step 202 described above and will not be repeated here.
In one embodiment, the input sequence and the above information are input into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model, which may refer to steps 304-306:
Step 304: predicting, by the sentence prediction model, based on the input sequence, to obtain a plurality of sentence candidates.
Step 306: screening, by the sentence prediction model, the plurality of sentence candidates according to the above information, and outputting the screened sentence candidates.
In the embodiment of the invention, after the input sequence and the above information are input into the sentence prediction model, the model first predicts based on the input sequence to obtain a plurality of sentence candidates. It then screens those candidates according to the above information and outputs the screened sentence candidates.
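One simple, purely illustrative form of this screening is to keep only candidates that, when appended to the above information, form a sentence already seen in some reference corpus; the corpus and function name are assumptions for the sketch:

```python
def screen_by_above_info(candidates, above_info, seen_sentences):
    """Toy screening step: a candidate predicted from the input sequence
    alone survives only if above_info + candidate matches a sentence in a
    (hypothetical) set of previously seen sentences."""
    return [c for c in candidates if above_info + c in seen_sentences]
```

A production system would score consistency with the context rather than require an exact corpus match; the sketch only shows where the screening sits in the flow.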
Step 308: displaying the sentence candidates.
This step 308 is similar to the step 206 described above and will not be described herein again.
In summary, in the embodiment of the present invention, an input sequence and the above information input by a user in an input method may be obtained, then a sentence prediction model is used to perform prediction based on the input sequence to obtain a plurality of sentence candidates, then the sentence prediction model is used to screen the plurality of sentence candidates according to the above information, and the screened sentence candidates are output; and then sentence candidates are predicted through the input sequence, and then the sentence candidates are screened through the information, so that the accuracy of the determined sentence candidates can be improved. And then displaying the screened sentence candidates, thereby further improving the input efficiency of the user.
Referring to fig. 4, a flowchart illustrating steps of another alternative embodiment of the input method of the present invention is shown, which may specifically include the following steps:
Step 402: acquiring an input sequence and the above information entered by the user in the input method.
This step 402 is similar to the step 202 described above and will not be described herein again.
In one implementation, inputting the input sequence and the above information into the sentence prediction model to obtain the sentence candidates output by the model may refer to steps 404 to 406:
and step 404, predicting by using the sentence prediction model based on the above information to obtain a plurality of sentence candidates.
Step 406: screening, by the sentence prediction model, the plurality of sentence candidates according to the input sequence, and outputting the screened sentence candidates.
In the embodiment of the invention, after the input sequence and the above information are input into the sentence prediction model, the model first predicts based on the above information to obtain a plurality of sentence candidates. It then screens those candidates according to the input sequence and outputs the screened sentence candidates.
The above information may be input into the sentence prediction model and complete sentence candidates output first, after which the input sequence is used for screening. Alternatively, at each decoding step the model may screen the preset word candidates with the input sequence to obtain a candidate word set, select the X most probable words from that set, and output them; at the next step it predicts the X most probable words based on those words, the words predicted at the previous step, and the above information, until the end of the sentence is predicted. The X words output at each step are then spliced to obtain X sentence candidates, where X is a positive integer. The embodiment of the present invention does not limit the manner in which the input sequence is used to screen the candidates predicted from the above information.
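The per-step screening described above resembles constrained beam search. A single decoding step might look like the following sketch, where the allowed word set stands in for the words matching the input sequence and the toy scores stand in for model log-probabilities (all names are illustrative):

```python
import heapq

def constrained_beam_step(beams, step_scores, allowed_words, beam_width):
    """Extend each partial sentence (a word list plus a running score) only
    with words permitted by the input sequence, then keep the beam_width
    highest-scoring beams, i.e. the "X most probable words" of the text."""
    expanded = []
    for words, score in beams:
        for w in allowed_words:
            expanded.append((words + [w], score + step_scores.get(w, -10.0)))
    return heapq.nlargest(beam_width, expanded, key=lambda beam: beam[1])
```

Repeating this step until an end-of-sentence symbol is chosen, and splicing each surviving beam's words, yields the X sentence candidates.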
Step 408: displaying the sentence candidates. This step 408 is similar to step 206 described above and will not be repeated here.
In summary, in this embodiment of the present invention, the input sequence and the above information entered by the user in the input method may be obtained; the sentence prediction model then predicts based on the above information to obtain a plurality of sentence candidates, screens them according to the input sequence, and outputs the screened candidates. Predicting candidates from the above information and then screening them with the input sequence can improve the accuracy of the determined sentence candidates. Displaying the screened candidates can then further improve the user's input efficiency.
The following description takes an example in which the sentence prediction model predicts based on the input sequence and the application environment information.
Referring to fig. 5, a flow chart of steps of yet another alternative embodiment of an input method of the present invention is shown.
Step 502: acquiring an input sequence and application environment information entered by the user in the input method.
This step 502 is similar to the step 202, and will not be described herein again.
The application environment information may include various types, such as the type of the third-party application currently invoking the input method (e.g., instant messaging, music, or video), as well as location information, weather information, time information, and the like; this is not limited in this embodiment.
In one embodiment, a manner of inputting the input sequence into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model may refer to steps 504 to 506:
and step 504, predicting by adopting the sentence prediction model based on the input sequence to obtain a plurality of sentence candidates.
Step 506: screening, by the sentence prediction model, the plurality of sentence candidates according to the application environment information, and outputting the screened sentence candidates.
In the embodiment of the invention, after the input sequence and the application environment information are both input into the sentence prediction model, the model first predicts based on the input sequence to obtain a plurality of sentence candidates. It then screens those candidates according to the application environment information and outputs the screened sentence candidates.
Step 508: displaying the sentence candidates.
This step 508 is similar to the step 206 described above and will not be described herein again.
In summary, in the embodiment of the present invention, an input sequence and application environment information input by a user in an input method may be obtained, then a sentence prediction model is used to perform prediction based on the input sequence to obtain a plurality of sentence candidates, then the sentence prediction model is used to screen the plurality of sentence candidates according to the application environment information, the screened sentence candidates are output, and the screened sentence candidates are displayed; and then sentence candidates are determined by combining the input sequence and the application environment information, so that the accuracy of the determined sentence candidates can be improved, and the input efficiency of the user is further improved.
In an optional embodiment of the present invention, after the input sequence is obtained, the input sequence may be converted into a candidate word of the input sequence; and then inputting the input sequence and the candidate words of the input sequence into a sentence prediction model to obtain the sentence candidates output by the sentence prediction model.
In an alternative embodiment of the invention, the user may enter an input sequence comprising a plurality of codes during one input cycle. In this case, the sequence entered later by the user can be used to screen the sentence candidates predicted from the sequence entered earlier, which saves the computing resources of the sentence prediction model.
The input duration between two adjacent screen-commit operations may be referred to as one input cycle.
Referring to fig. 6, a flow chart of steps of yet another alternative embodiment of an input method of the present invention is shown.
Step 602: acquiring an input sequence entered by the user in the input method, wherein the input sequence comprises: a first input sequence obtained after the current input in the current input cycle, and a second input sequence from before the current input in the current input cycle.
In the embodiment of the invention, each time the user enters a single code within an input cycle, the input method obtains a new input sequence containing multiple codes; this sequence consists of the single code entered this time and all codes entered earlier in the cycle. For convenience of description, the sequence obtained after the current input in the current cycle is called the first input sequence, and the sequence from before the current input is called the second input sequence; the second input sequence is simply the sequence obtained after any input preceding the current one in the current cycle.
The first input sequence and the second input sequence may then be input to a sentence prediction model, determining sentence candidates. In one embodiment, a manner of inputting the input sequence into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model may refer to steps 604 to 606:
and step 604, predicting the second input sequence by using the sentence prediction model to obtain a plurality of sentence candidates output by the sentence prediction model.
Step 606: screening, by the sentence prediction model, the plurality of sentence candidates according to the first input sequence, and outputting the screened sentence candidates.
In the embodiment of the present invention, after the second input sequence is obtained, the sentence prediction model may predict the second input sequence first, and determine a plurality of corresponding sentence candidates. Then after the first input sequence is obtained, a sentence prediction model adopts the first input sequence to screen a plurality of sentence candidates predicted based on the second input sequence; and finally, outputting the screened sentence candidates.
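A minimal sketch of this screening, assuming each earlier-predicted candidate carries the full code (e.g., its complete pinyin) of the sentence so that the newly extended sequence can be matched as a prefix; the data layout is an assumption, not the patent's:

```python
def screen_with_first_sequence(candidates_with_codes, first_sequence):
    """Keep candidates predicted from the second (earlier) input sequence
    whose full code still has the first (extended) sequence as a prefix."""
    return [sentence for sentence, code in candidates_with_codes
            if code.startswith(first_sequence)]
```

Because the earlier prediction is reused, only this cheap prefix check runs on each new keystroke instead of a full model invocation.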
Step 608: displaying the sentence candidates.
This step 608 is similar to the step 206 described above, and will not be described herein again.
In summary, in the embodiment of the present invention, an input sequence input by a user in an input method may be obtained, where the input sequence includes: a first input sequence obtained after the current input in the current input period and a second input sequence obtained before the current input in the current input period; then, predicting the second input sequence by adopting the sentence prediction model to obtain a plurality of sentence candidates output by the sentence prediction model, screening the sentence candidates by adopting the sentence prediction model according to the first input sequence, outputting the screened sentence candidates, and displaying the output candidates; furthermore, the input sequence input later by the user can be adopted to screen the sentence candidates of the input sequence input earlier by the user, so that the computing resources of the sentence prediction model are saved; and the accuracy of sentence prediction can be improved.
In the manners described above, sentence candidates are determined by inputting into the sentence prediction model the input sequence together with at least one of the above information, the application environment information, and the candidate words of the input sequence. Because the information fed to the model differs among these manners, and is processed differently once inside the model, the accuracies of the resulting predictions may differ. Therefore, in one embodiment of the present invention, after sentence candidates have been obtained in the above multiple ways, all of them can be reordered using relatively comprehensive input-associated information, so that the ranking of the candidates better fits the current scene and better meets the user's needs.
Referring to fig. 7, a flow chart of steps of yet another alternative embodiment of an input method of the present invention is shown.
Sentence candidates may be determined according to any of the above flows: steps 202 to 206, 302 to 308, 402 to 408, 502 to 508, or 602 to 608.
Step 706: acquiring input-associated information, the input-associated information including: the above information, the input environment information, the candidate words of the input sequence, the application scene information, peer-user information, local-user information, and local-user historical behavior information.
In one example of the present invention, a ranking model may be trained in advance and then used to reorder the sentence candidates. The input-associated information is acquired and input into the ranking model together with each sentence candidate; the ranking model scores every candidate based on the input-associated information and determines the ranking score corresponding to each one. The acquired input-associated information may span multiple dimensions, including: the above information, the input environment information, the candidate words of the input sequence, the application scene information, peer-user information, local-user information, and local-user historical behavior information. Of course, other input-related information may also be included, which is not limited in this embodiment of the present invention.
The sentence candidates determined via the various flows can then be sorted according to their ranking scores, and the ranking result of the sentence candidates output. Alternatively, the ranking model may directly output the ranking score of each candidate, and another processing module sorts the candidates by those scores to determine the corresponding ranking result.
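A toy stand-in for such a ranking model is a weighted sum over per-candidate features derived from the input-associated information; the feature names and weights below are illustrative only, since the patent does not specify the model:

```python
def rerank(candidates, candidate_features, weights):
    """Score each candidate as a weighted sum of its input-associated
    features (e.g. context match, scene match, user history), then sort
    the candidates by descending ranking score."""
    def ranking_score(candidate):
        feats = candidate_features.get(candidate, {})
        return sum(weights.get(name, 0.0) * value for name, value in feats.items())
    return sorted(candidates, key=ranking_score, reverse=True)
```

A trained ranking model would learn the weights (or a nonlinear scoring function) from user selection behavior rather than take them as fixed inputs.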
In addition, in the embodiment of the invention, other information associated with the predicted sentence candidates can be adopted to reorder the sentence candidates; if the current scenario is a chat scenario, the information of the opposite end user, the information of the home end user, and the like may be obtained, which is not limited in the embodiment of the present invention.
In summary, in this embodiment of the present invention, after the input sequence is input into the sentence prediction model to obtain a plurality of sentence candidates, the candidates may be reordered using comprehensive input-associated information such as the above information, the application environment information, and the candidate words of the input sequence. Reordering candidates determined in various ways with this comprehensive information makes the ranking result fit the current scene better and meet the user's needs better, thereby further improving the user's input efficiency and experience.
In the embodiment of the invention, the input method can comprise a client and a server; the sentence prediction model can be deployed at a client, and sentence candidates corresponding to an input sequence are determined by the client. Of course, the sentence prediction model may also be deployed in a server, and the server determines the sentence candidates corresponding to the input sequence.
The following description will be given by taking the example that the sentence prediction model is deployed in the server.
Referring to fig. 8, a flow chart of steps of yet another alternative embodiment of an input method of the present invention is shown.
Step 804: the client generates a long sentence prediction request according to the input sequence and sends the long sentence prediction request to the server.
In the embodiment of the invention, the client can acquire an input sequence input by a user in the process of user input; and then generating a long sentence prediction request according to the input sequence, and sending the long sentence prediction request to the server so as to request the server to determine sentence candidates and return.
In an example of the present invention, each time the client detects that the user has input a single code (or that the user has pressed the key for a certain code), it may generate a long sentence prediction request from the input sequence entered by the user in the current input period and send the request to the server.
In another example of the present invention, when a user inputs a word, all of the codes composing the word (e.g., all the pinyin of a character, or all the characters of a word) may be input continuously; therefore, after detecting that a character or word input by the user corresponds to a complete code, the client may generate a long sentence prediction request from the input sequence entered in the current input period and send it to the server, so as to reduce the number of cloud requests and lighten the server load.
In another example of the present invention, if no further single code is detected within a set time after the user inputs a single code, a long sentence prediction request may be generated from the input sequence entered by the user in the current input period and sent to the server.
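The three request-trigger strategies above (per single code, on a complete code, after an idle timeout) can be sketched as follows; the class, strategy names, and the timeout value are illustrative assumptions, not the patent's implementation.

```python
import time

IDLE_TIMEOUT = 0.5  # seconds of inactivity before triggering (assumed value)

class RequestTrigger:
    """Decides when to send a long sentence prediction request (sketch)."""
    def __init__(self, strategy="per_code"):
        self.strategy = strategy      # "per_code", "complete_code", or "idle"
        self.sequence = []            # codes entered in the current input period
        self.last_input_time = None

    def on_code_input(self, code, is_complete_code=False):
        """Record one input code; return True if a request should be sent now."""
        self.sequence.append(code)
        self.last_input_time = time.monotonic()
        if self.strategy == "per_code":
            return True                 # send on every single code
        if self.strategy == "complete_code":
            return is_complete_code     # send once a word's code is complete
        return False                    # "idle" strategy waits for the timeout

    def on_idle_check(self):
        """Poll: send a request if no new code arrived within the timeout."""
        if self.strategy != "idle" or self.last_input_time is None:
            return False
        return time.monotonic() - self.last_input_time >= IDLE_TIMEOUT
```

The "complete_code" and "idle" strategies trade a little latency for fewer cloud requests, matching the server-load motivation above.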
Step 806, the server obtains an input sequence from the long sentence prediction request, inputs the input sequence into a sentence prediction model, obtains a sentence candidate output by the sentence prediction model, and returns the sentence candidate to the client.
After receiving the long sentence prediction request, the server may obtain an input sequence from the long sentence prediction request, input the input sequence into a sentence prediction model, obtain a candidate sentence output by the sentence prediction model, and return the candidate sentence to the client. This is similar to the above-described manner of determining sentence candidates, and is not described in detail here.
In summary, in the embodiment of the present invention, after acquiring an input sequence input by a user, the client may generate a long sentence prediction request according to the input sequence and send it to the server; the server then acquires the input sequence from the long sentence prediction request, inputs it into a sentence prediction model, obtains a sentence candidate output by the sentence prediction model, and returns the sentence candidate to the client, which displays it. In this way, the computing load on the client can be reduced and the client's storage space saved.
In the embodiment of the present invention, the long sentence prediction may be performed based on word candidates (which may include single characters, words, English phrases, and the like) corresponding to the input sequence entered by the user in the current input period. When that input sequence is long, the accuracy of its word candidates decreases, and correspondingly the accuracy of the long sentence prediction decreases as well. Therefore, when the input sequence entered in the current input period is long, no cloud request is sent; when it is short, the client decides whether to send one. The number of cloud requests can thereby be reduced, lightening the server load.
Referring to fig. 9, a flow chart of steps of yet another alternative embodiment of an input method of the present invention is shown.
Step 902, the client obtains an input sequence input by the user in the input method.
In the embodiment of the present invention, after acquiring the input sequence, the client may determine the total length of the input sequence entered by the user in the current input period, and then judge whether that total length reaches a preset length. If the total length does not reach the preset length, step 908 may be executed; if it reaches the preset length, no cloud request is sent, and step 902 may be executed again.
Wherein, the total length of the input sequence may be the total number of codes corresponding to the input sequence; the preset length may be M1; the M1 is a positive integer and can be set as required; the embodiments of the present invention are not limited in this regard.
When the input sequence is a pinyin sequence, the total length of the input sequence can also be the total number of syllables corresponding to the input sequence; the preset length may be M2; the M2 is a positive integer and can be set as required, and the M2 can be smaller than M1; the embodiments of the present invention are not limited in this regard.
When the input sequence is a foreign language character string, the total length of the input sequence can also be the total number of words corresponding to the input sequence; the preset length may be M3; the M3 is a positive integer, and M3 may be set as required, which is not limited in this embodiment of the present invention.
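The length gate described above, with its three sequence-type-specific thresholds M1, M2, and M3, might be sketched as follows; the threshold values and the crude syllable counter are assumptions for illustration, not values from the patent.

```python
# Illustrative thresholds; the patent leaves M1, M2, M3 configurable.
M1 = 20  # max total number of codes (assumed)
M2 = 10  # max total number of syllables for a pinyin sequence (assumed)
M3 = 6   # max total number of words for a foreign-language string (assumed)

def count_syllables(pinyin):
    # Crude placeholder: treat apostrophe-separated chunks as syllables.
    return len([s for s in pinyin.split("'") if s])

def should_send_to_cloud(input_sequence, kind="code"):
    """Return True when the sequence is short enough to warrant a cloud request."""
    if kind == "pinyin":
        total, limit = count_syllables(input_sequence), M2
    elif kind == "foreign":
        total, limit = len(input_sequence.split()), M3
    else:
        total, limit = len(input_sequence), M1   # raw number of codes
    return total < limit
```

A long sequence is thus handled locally, reducing the number of cloud requests as described above.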
Step 910, the server obtains an input sequence from the long sentence prediction request, inputs the input sequence into a sentence prediction model, obtains a sentence candidate output by the sentence prediction model, and returns the sentence candidate to the client.
In the existing cloud-request process of the client, a request sent to the server is processed on the same timeline as the keystrokes. For example, with the consecutive keys A, B, C, a request is sent when key A is pressed; if no result has been returned by the time key B is pressed, the request is counted as timed out and discarded. The server still performs the prediction, but the client never uses the prediction result, which wastes the request. Therefore, the present invention proposes an asynchronous cache mechanism: the request is decoupled from the keystroke process, and results of requests that would previously have been discarded as timed out are cached for subsequent reuse. This both saves request cost and improves the efficiency of long sentence prediction display. For example, with the consecutive keys A, B, C, the request sent at key A is not discarded if its result returns only after key B; instead, the result is cached locally and can still be used at key C.
Correspondingly, in an optional embodiment of the present invention, the client caches the sentence candidates returned by the server in units of input periods, so that after a subsequent input sequence is received, sentence candidates matching it can be looked up in the cache. The cached sentence candidates may include candidates the server returned after the timeout as well as candidates returned within it. In one example of the present invention, whether a received sentence candidate was returned after the timeout may be determined as follows: judge whether the sentence candidate was received by the client after the client received a target input sequence input by the user, where the target input sequence is the input the client received after sending the long sentence prediction request corresponding to that sentence candidate. If the sentence candidate is received after the target input sequence, it is determined to be a candidate returned after the timeout; if it is received before the target input sequence, it is not.
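The asynchronous cache mechanism above might be sketched as follows; the class and method names are illustrative assumptions, and the cache is keyed by the input sequence that produced each response.

```python
# Illustrative sketch of the asynchronous cache: late (timed-out) server
# responses are cached per input period instead of being discarded.
class PredictionCache:
    def __init__(self):
        self.cache = {}            # input sequence -> sentence candidates
        self.latest_sequence = ""  # most recent sequence the user has typed

    def on_key(self, sequence):
        """A key was pressed; reuse a cached (possibly late) result if any."""
        self.latest_sequence = sequence
        return self.cache.get(sequence)

    def on_response(self, sequence, candidates):
        """A server response arrived; cache it even if it is late.
        Returns True when the response counts as returned after timeout,
        i.e. newer user input already exists."""
        self.cache.setdefault(sequence, candidates)
        return sequence != self.latest_sequence

    def new_input_period(self):
        self.cache.clear()         # the cache lives for one input period
```

In the A-B-C example above, the response to the request sent at key A may arrive after key B; it is cached and can still be served at key C.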
In one embodiment of the present invention, continuous prediction may be performed based on the input sequence. When the input sequence entered by the user matches the currently displayed sentence candidate, that candidate can continue to be displayed without sending a cloud request, which improves the user experience and reduces the server load.
Referring to fig. 10a, a flow chart of steps of yet another alternative embodiment of an input method of the present invention is shown.
If an input sequence input by the user this time does not match with the currently displayed sentence candidate, step 1006 may be executed; if an input sequence inputted by the user this time matches the currently presented sentence candidate, step 1012 can be executed.
Step 1008, the server obtains an input sequence from the long sentence prediction request, inputs the input sequence into a sentence prediction model, obtains a sentence candidate output by the sentence prediction model, and returns the sentence candidate to the client.
For example, as shown in figs. 10b-10c, the input sequence may be a pinyin sequence. The sentence candidate shown in fig. 10b is "wish you a happy birthday"; after the user continues by inputting the pinyin "s", the same sentence candidate "wish you a happy birthday" is still shown in fig. 10 c.
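The continuation check can be sketched as a prefix match between the typed pinyin and the displayed candidate's full pinyin; the candidate-to-pinyin mapping used here is an assumed simplification of whatever matching the input method actually performs.

```python
# Illustrative continuation check: if the extended pinyin sequence is still
# a prefix of the displayed candidate's full pinyin, keep showing the
# candidate without a new cloud request.
def matches_displayed_candidate(typed_pinyin, candidate_full_pinyin):
    """True when the typed pinyin is a prefix of the candidate's pinyin."""
    return candidate_full_pinyin.startswith(typed_pinyin)
```

Mirroring figs. 10b-10c: if the shown candidate's assumed full pinyin is "zhunishengrikuaile", typing one more letter "s" (giving "zhunis") still matches, so the candidate stays on screen; a non-matching letter would trigger a new request.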
In an optional embodiment of the present invention, one way to present the sentence candidates may be: splicing the content in the edit box with the sentence candidate to obtain a corresponding splicing result, and displaying the splicing result. This way the user does not need to switch visual focus back and forth between the long sentence prediction candidate and the preceding text in order to understand the candidate and judge whether it is correct. For example, referring to fig. 10d, the content in the edit box is "one person"; after the user inputs the pinyin sequence "zhaoy", the determined sentence candidate is "hi as usual", and the splicing result "one person hi as usual" is displayed in the upper right area of the input method keyboard.
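The spliced preview described above reduces to a concatenation of the committed edit-box text with the candidate; the function name below is an assumption for illustration.

```python
# Illustrative sketch of the spliced preview: the edit-box content is
# concatenated with the sentence candidate so the full resulting sentence
# can be read in one place.
def splice_preview(editbox_content, sentence_candidate):
    """Concatenate the committed text with the candidate for display."""
    return editbox_content + sentence_candidate
```

In the fig. 10d example, the edit-box text and the candidate would be joined and shown together above the keyboard, so the user reads one continuous sentence.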
In the embodiment of the present invention, during the user's input, on the one hand, long sentence prediction may be performed based on the input sequence to determine corresponding sentence candidates; the user then only needs to input the input sequence corresponding to part of the text of a sentence to obtain the whole sentence, which improves input efficiency. On the other hand, other types of prediction may be performed according to the input sequence to obtain other types of prediction results, where the other types of prediction results include at least one of: a name prediction result, a word-by-word proofreading prediction result, a local cloud substitution prediction result, a cloud input prediction result, an expression prediction result, and the like; the embodiment of the present invention is not limited in this regard. The input method may display the sentence candidates and the other types of prediction results in the same display area, or present them separately in different areas. When the input method displays them in the same display area, the candidates to be displayed in that area may be determined and displayed after both the long sentence prediction candidates and the other types of prediction results have been determined.
In one example of the present invention, the priorities of the sentence candidates and of the other types of prediction results may be determined first, and the candidates then presented according to the determined priority information. If the priority of the sentence candidates is higher than that of every other type of prediction result, the sentence candidates are displayed; if it is lower than that of any other type of prediction result, the other-type prediction result with the highest priority is displayed in the designated area of the input method.
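The priority rule above might be sketched as follows; representing each prediction result as a dict with "priority" and "items" keys is an assumption for illustration only.

```python
# Illustrative sketch of priority-based display selection: show the
# sentence candidates when their priority beats every other type of
# prediction result, otherwise show the highest-priority other type.
def select_for_display(sentence_result, other_results):
    best_other = max(other_results, key=lambda r: r["priority"], default=None)
    if best_other is None or sentence_result["priority"] > best_other["priority"]:
        return sentence_result["items"]
    return best_other["items"]
```

With no other-type results at all, the sentence candidates are shown by default.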
In the embodiment of the present invention, a user's manner of expression may differ across situations. For example, one may say "a strong wind tomorrow" while chatting, but write "a strong wind of force three to four tomorrow" in an official news report. Likewise, expression when chatting with elders tends to be more serious and formal than when chatting with friends: with elders one may say "thank you", while chatting with friends is more casual and one may say "thanks, lovely", and so on. Therefore, in the embodiment of the present invention, after the sentence candidates output by the sentence prediction model are obtained, they may be screened based on the user's manner of expression, and the candidates conforming to that manner of expression screened out and displayed. Sentence candidates matching the user's personalized expression can thus be selected from the sentence candidates and displayed, improving the user experience.
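The expression-style screening above might be sketched as follows; the style classifier is a placeholder assumption, and a real system might derive the user's style from historical input behavior rather than a fixed label.

```python
# Illustrative sketch of screening candidates by the user's manner of
# expression. classify_style is a caller-supplied placeholder classifier.
def screen_by_style(candidates, user_style, classify_style):
    """Keep candidates whose predicted style matches the user's style;
    fall back to all candidates when none match."""
    kept = [c for c in candidates if classify_style(c) == user_style]
    return kept if kept else candidates
```

The fallback keeps the input method usable when no candidate matches the inferred style.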
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 11, a block diagram of an embodiment of an input device according to the present invention is shown, which may specifically include the following modules:
an obtaining module 1102, configured to obtain an input sequence input by a user in an input method;
a determining module 1104, configured to input the input sequence into a sentence prediction model, so as to obtain a sentence candidate output by the sentence prediction model;
a presentation module 1106 configured to present the sentence candidates.
Referring to fig. 12, a block diagram of an alternative embodiment of an input device of the present invention is shown.
In an optional embodiment of the present invention, the input information of the sentence prediction model further includes at least one of the following: the above information, the application environment information, and the candidate words of the input sequence.
In an alternative embodiment of the present invention, when the input information of the sentence prediction model further includes the above information, the determining module 1104 includes:
a first candidate output sub-module 11042, configured to input the input sequence and the above information into a sentence prediction model, so as to obtain a sentence candidate output by the sentence prediction model.
In an alternative embodiment of the present invention, the first candidate output sub-module 11042 includes:
a first prediction unit 110422, configured to perform prediction based on the input sequence and the above information by using the sentence prediction model, so as to obtain a candidate sentence output by the sentence prediction model.
In an alternative embodiment of the present invention, the first candidate output sub-module 11042 includes:
a second prediction unit 110424, configured to perform prediction based on the input sequence by using the sentence prediction model, so as to obtain a plurality of sentence candidates;
a first output unit 110426, configured to filter the sentence candidates according to the above information by employing the sentence prediction model, and output the filtered sentence candidates.
In an alternative embodiment of the present invention, the first candidate output sub-module 11042 includes:
a third prediction unit 110428, configured to perform prediction based on the above information by using the sentence prediction model, so as to obtain a plurality of sentence candidates;
a second output unit 1104210, configured to filter the sentence candidates according to the input sequence by using the sentence prediction model, and output the filtered sentence candidates.
In an alternative embodiment of the present invention, the input sequence includes: a first input sequence obtained after the current input in the current input period and a second input sequence before the current input in the current input period;
the determining module 1104 includes:
a first candidate prediction sub-module 11044, configured to use the sentence prediction model to predict the second input sequence, so as to obtain a plurality of sentence candidates output by the sentence prediction model;
a second candidate output sub-module 11046, configured to filter the sentence candidates according to the first input sequence by using the sentence prediction model, and output the filtered sentence candidates.
In an alternative embodiment of the present invention, when the input information of the sentence prediction model further includes application environment information, the determining module 1104 includes:
a second candidate prediction sub-module 11048, configured to perform prediction based on the input sequence by using the sentence prediction model, so as to obtain a plurality of sentence candidates;
a third candidate output sub-module 110410, configured to filter the multiple sentence candidates according to the application environment information by using the sentence prediction model, and output the filtered sentence candidates.
In an optional embodiment of the present invention, the apparatus further comprises:
an associated information obtaining module 1108, configured to obtain input associated information when the sentence prediction model outputs a plurality of sentence candidates, where the input associated information includes: context information, input environment information, candidate words of the input sequence, application scene information, opposite-end user information, home terminal user information, and home terminal user historical behavior information;
a sorting module 1110, configured to reorder the multiple sentence candidates by using the input associated information.
In an optional embodiment of the present invention, the input method includes a client and a server, and the determining module includes:
the sending submodule 110412 is configured to invoke the client to generate a long sentence prediction request according to the input sequence, and send the long sentence prediction request to the server;
and a third candidate prediction submodule 110414, configured to invoke the server to obtain an input sequence from the long sentence prediction request, input the input sequence into a sentence prediction model, obtain a candidate sentence output by the sentence prediction model, and return the candidate sentence to the client.
In an alternative embodiment of the present invention, the input sequence includes an input sequence input in a current input cycle, and the apparatus further includes:
a length determining module 1112, configured to invoke the client to determine a total length of an input sequence input by the user in the current input period;
the determining module 1114 is configured to determine whether a total length of an input sequence input by a user in a current input period reaches a preset length;
the sending sub-module 110412 is configured to execute the step of invoking the client to generate a long sentence prediction request according to the input sequence and sending the long sentence prediction request to the server if the total length of the input sequence input by the user in the current input period does not reach a preset length.
In an optional embodiment of the present invention, the apparatus further comprises:
a caching module 1116, configured to invoke the client to cache the sentence candidates returned by the server in units of input cycles; wherein the sentence candidates comprise sentence candidates returned by the server over time.
In an alternative embodiment of the present invention, the input sequence includes: an input sequence input in a current input cycle, the input sequence input in the current input cycle comprising: an input sequence input this time in the current period;
the device further comprises:
a matching module 1118, configured to invoke the client to determine whether an input sequence input by the user this time matches a currently displayed sentence candidate;
the sending submodule 110412 is configured to execute the step of invoking the client to generate a long sentence prediction request according to an input sequence input by the user this time and sending the long sentence prediction request to the server when the input sequence input by the user this time is not matched with a currently displayed sentence candidate;
the display module 1106 includes:
the first candidate displaying sub-module 11062 is configured to continue displaying the currently displayed sentence candidate when an input sequence input by the user this time matches the currently displayed sentence candidate.
In an optional embodiment of the present invention, the apparatus further comprises:
the other-type prediction module 1120 is configured to perform other-type prediction according to the input sequence to obtain other-type prediction results, where the other-type prediction results include at least one of: the method comprises the following steps of (1) predicting a name, proofreading a prediction result word by word, replacing a local cloud prediction result, inputting a prediction result cloud and predicting an expression prediction result;
the display module 1106 includes:
and a second candidate displaying sub-module 11064, configured to display the sentence candidates according to the other types of prediction results and the priority of the sentence candidates.
In an alternative embodiment of the present invention, the display module 1106 comprises:
the splicing submodule 11066 is configured to splice the content in the edit box with the candidate sentences to obtain a corresponding splicing result;
and the result display submodule 11068 is used for displaying the splicing result.
In summary, in the embodiment of the present invention, after the input method obtains the input sequence input by the user, the input sequence may be input into the sentence prediction model to obtain the sentence candidates output by the sentence prediction model, and the sentence candidates are displayed. Thus, when only the input sequence corresponding to part of the text of a sentence has been input, the user can directly commit the whole sentence to the screen, greatly improving input efficiency.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
FIG. 13 is a block diagram illustrating a structure of an electronic device 1300 for input according to an example embodiment. For example, the electronic device 1300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and so forth.
Referring to fig. 13, electronic device 1300 may include one or more of the following components: a processing component 1302, a memory 1304, a power component 1306, a multimedia component 1308, an audio component 1310, an interface for input/output (I/O) 1312, a sensor component 1314, and a communications component 1316.
The processing component 1302 generally controls overall operation of the electronic device 1300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing element 1302 may include one or more processors 1320 to execute instructions to perform all or part of the steps of the method described above. Further, the processing component 1302 can include one or more modules that facilitate interaction between the processing component 1302 and other components. For example, the processing component 1302 may include a multimedia module to facilitate interaction between the multimedia component 1308 and the processing component 1302.
The memory 1304 is configured to store various types of data to support operation at the device 1300. Examples of such data include instructions for any application or method operating on the electronic device 1300, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1304 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power components 1306 provide power to the various components of the electronic device 1300. Power components 1306 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for electronic device 1300.
The multimedia component 1308 includes a screen providing an output interface between the electronic device 1300 and a user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 1308 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the electronic device 1300 is in an operating mode, such as a shooting mode or a video mode. Each of the front and rear cameras may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 1310 is configured to output and/or input audio signals. For example, the audio component 1310 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 1300 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 1304 or transmitted via the communication component 1316. In some embodiments, the audio component 1310 also includes a speaker for outputting audio signals.
The I/O interface 1312 provides an interface between the processing component 1302 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1314 includes one or more sensors for providing various aspects of state assessment for the electronic device 1300. For example, the sensor assembly 1314 may detect the open/closed state of the device 1300 and the relative positioning of components, such as the display and keypad of the electronic device 1300; it may also detect a change in the position of the electronic device 1300 or of one of its components, the presence or absence of user contact with the electronic device 1300, the orientation or acceleration/deceleration of the electronic device 1300, and a change in its temperature. The sensor assembly 1314 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 1314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1316 is configured to facilitate wired or wireless communication between the electronic device 1300 and other devices. The electronic device 1300 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1316 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1316 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 1300 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 1304 comprising instructions, executable by the processor 1320 of the electronic device 1300 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform an input method, the method comprising: acquiring an input sequence input by a user in an input method; inputting the input sequence into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model; and displaying the sentence candidates.
Optionally, the input information of the sentence prediction model further comprises at least one of: the above information, the application environment information, and the candidate words of the input sequence.
Optionally, when the input information of the sentence prediction model further includes the above information, the inputting the input sequence into the sentence prediction model to obtain the sentence candidates output by the sentence prediction model includes: and inputting the input sequence and the above information into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model.
Optionally, the inputting the input sequence and the above information into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: and predicting by adopting the sentence prediction model based on the input sequence and the above information to obtain sentence candidates output by the sentence prediction model.
Optionally, the inputting the input sequence and the above information into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: predicting based on the input sequence by adopting the sentence prediction model to obtain a plurality of sentence candidates; and screening the sentence candidates according to the above information by adopting the sentence prediction model, and outputting the screened sentence candidates.
Optionally, the inputting the input sequence and the above information into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: predicting by adopting the sentence prediction model based on the above information to obtain a plurality of sentence candidates; and screening the sentence candidates according to the input sequence by adopting the sentence prediction model, and outputting the screened sentence candidates.
Optionally, the input sequence comprises: a first input sequence obtained after the current input in the current input period and a second input sequence before the current input in the current input period; the inputting the input sequence into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: predicting the second input sequence by adopting the sentence prediction model to obtain a plurality of sentence candidates output by the sentence prediction model; and screening the sentence candidates according to the first input sequence by adopting the sentence prediction model, and outputting the screened sentence candidates.
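A minimal sketch of the step above: candidates predicted from the earlier (second) input sequence are screened against the extended (first) sequence. It assumes each cached candidate carries the full input key (e.g. complete pinyin) it was predicted from — an assumption, since the text does not fix the matching rule.

```python
from typing import List, Tuple


def screen_by_new_input(predicted: List[Tuple[str, str]],
                        first_sequence: str) -> List[str]:
    # `predicted` pairs the full input key with a candidate sentence,
    # as produced from the second (earlier) input sequence; keep only
    # candidates whose key is still consistent with the sequence as
    # extended by the newest keystrokes.
    return [sentence for key, sentence in predicted
            if key.startswith(first_sequence)]
```
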
Optionally, when the input information of the sentence prediction model further includes application environment information, the inputting the input sequence into the sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: predicting based on the input sequence by adopting the sentence prediction model to obtain a plurality of sentence candidates; and screening the sentence candidates by adopting the sentence prediction model according to the application environment information, and outputting the screened sentence candidates.
Optionally, the method further comprises: when the sentence candidates output by the sentence prediction model include a plurality of candidates, acquiring input-associated information, the input-associated information including: context information, input environment information, candidate words of the input sequence, application scene information, peer-user information, local-user information, and local-user historical behavior information; and reordering the plurality of sentence candidates by adopting the input-associated information.
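One plausible way to reorder candidates with such associated information is a weighted feature score. The feature names and the linear weighting scheme below are invented for illustration only; the patent does not specify how the signals are combined.

```python
from typing import Callable, Dict, List


def rerank(candidates: List[str],
           feature_scores: Dict[str, Callable[[str], float]],
           weights: Dict[str, float]) -> List[str]:
    # Each feature (context match, app scene, user history, ...) maps a
    # candidate to a score; combine them as a weighted sum and sort the
    # candidates by descending total score. Python's sort is stable, so
    # ties keep the model's original ordering.
    def score(candidate: str) -> float:
        return sum(weights[name] * fn(candidate)
                   for name, fn in feature_scores.items())
    return sorted(candidates, key=score, reverse=True)
```
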
Optionally, the input method includes a client and a server, and the inputting the input sequence into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: the client generates a long sentence prediction request according to the input sequence and sends the long sentence prediction request to the server; and the server acquires an input sequence from the long sentence prediction request, inputs the input sequence into a sentence prediction model, obtains sentence candidates output by the sentence prediction model and returns the sentence candidates to the client.
Optionally, the input sequence includes an input sequence input in a current input cycle, and the method further includes: the client determines the total length of an input sequence input by a user in a current input period; judging whether the total length of an input sequence input by a user in the current input period reaches a preset length or not; and if the total length of the input sequence input by the user in the current input period does not reach the preset length, executing the step of generating the long sentence prediction request according to the input sequence.
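The client-side length gate described above might look like the following sketch. `PRESET_LENGTH` and the request dictionary shape are hypothetical placeholders — the text fixes neither the threshold value nor the wire format.

```python
from typing import Optional

PRESET_LENGTH = 16  # hypothetical threshold, not specified by the text


def maybe_build_request(cycle_sequence: str,
                        preset_length: int = PRESET_LENGTH) -> Optional[dict]:
    # Only build a long-sentence prediction request while the total
    # sequence typed in the current input cycle is below the preset
    # length; once the sequence is long enough, skip the round trip.
    if len(cycle_sequence) >= preset_length:
        return None
    return {"type": "long_sentence_prediction",
            "input_sequence": cycle_sequence}
```
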
Optionally, the method further comprises: the client caches sentence candidates returned by the server by taking an input cycle as a unit; wherein the sentence candidates comprise sentence candidates returned by the server over time.
Optionally, the input sequence comprises: an input sequence input in a current input cycle, the input sequence input in the current input cycle comprising: an input sequence input this time in the current period;
the method further comprises the following steps: the client judges whether an input sequence input by the user at this time is matched with the currently displayed sentence candidate; when an input sequence input by a user at this time is not matched with a currently displayed sentence candidate, executing the step of generating a long sentence prediction request according to the input sequence; the presenting the sentence candidates includes: and when an input sequence input by the user at this time is matched with the currently displayed sentence candidate, continuously displaying the currently displayed sentence candidate.
Optionally, the method further comprises: performing other types of prediction according to the input sequence to obtain other types of prediction results, wherein the other types of prediction results comprise at least one of the following: a name prediction result, a word-by-word proofreading prediction result, a local replacing-cloud prediction result, a cloud input prediction result, and an emoji prediction result;
the presenting the sentence candidates includes: and displaying the sentence candidates according to the other types of prediction results and the priority of the sentence candidates.
Optionally, the presenting the sentence candidates comprises: splicing the content in the edit box with the sentence candidates to obtain a corresponding splicing result; and displaying the splicing result.
Fig. 14 is a schematic structural diagram of an electronic device 1400 for input according to another exemplary embodiment of the present invention. The electronic device 1400 may be a server, which may vary widely in configuration or performance and may include one or more central processing units (CPUs) 1422 (e.g., one or more processors), memory 1432, and one or more storage media 1430 (e.g., one or more mass storage devices) storing applications 1442 or data 1444. The memory 1432 and the storage media 1430 may provide transient or persistent storage. The program stored on a storage medium 1430 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Further, the central processor 1422 may be configured to communicate with the storage medium 1430 to execute, on the server, the series of instruction operations in the storage medium 1430.
The server may also include one or more power supplies 1426, one or more wired or wireless network interfaces 1450, one or more input/output interfaces 1458, one or more keyboards 1456, and/or one or more operating systems 1441 such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
An electronic device comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for: acquiring an input sequence input by a user in an input method; inputting the input sequence into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model; and displaying the sentence candidates.
Optionally, the input information of the sentence prediction model further comprises at least one of: the above information, the application environment information, and the candidate words of the input sequence.
Optionally, when the input information of the sentence prediction model further includes the above information, the inputting the input sequence into the sentence prediction model to obtain the sentence candidates output by the sentence prediction model includes: and inputting the input sequence and the above information into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model.
Optionally, the inputting the input sequence and the above information into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: and predicting by adopting the sentence prediction model based on the input sequence and the above information to obtain sentence candidates output by the sentence prediction model.
Optionally, the inputting the input sequence and the above information into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: predicting based on the input sequence by adopting the sentence prediction model to obtain a plurality of sentence candidates; and screening the sentence candidates according to the above information by adopting the sentence prediction model, and outputting the screened sentence candidates.
Optionally, the inputting the input sequence and the above information into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: predicting by adopting the sentence prediction model based on the above information to obtain a plurality of sentence candidates; and screening the sentence candidates according to the input sequence by adopting the sentence prediction model, and outputting the screened sentence candidates.
Optionally, the input sequence comprises: a first input sequence obtained after the current input in the current input period and a second input sequence before the current input in the current input period; the inputting the input sequence into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: predicting the second input sequence by adopting the sentence prediction model to obtain a plurality of sentence candidates output by the sentence prediction model; and screening the sentence candidates according to the first input sequence by adopting the sentence prediction model, and outputting the screened sentence candidates.
Optionally, when the input information of the sentence prediction model further includes application environment information, the inputting the input sequence into the sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: predicting based on the input sequence by adopting the sentence prediction model to obtain a plurality of sentence candidates; and screening the sentence candidates by adopting the sentence prediction model according to the application environment information, and outputting the screened sentence candidates.
Optionally, further comprising instructions for: when the sentence candidates output by the sentence prediction model include a plurality of candidates, acquiring input-associated information, the input-associated information including: context information, input environment information, candidate words of the input sequence, application scene information, peer-user information, local-user information, and local-user historical behavior information; and reordering the plurality of sentence candidates by adopting the input-associated information.
Optionally, the inputting the input sequence into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes: generating a long sentence prediction request according to the input sequence, and sending the long sentence prediction request to a server; and receiving sentence candidates returned by the server, wherein the server acquires the input sequence from the long sentence prediction request, inputs the input sequence into a sentence prediction model, obtains the sentence candidates output by the sentence prediction model, and returns the sentence candidates.
Optionally, the input sequence includes an input sequence input in a current input cycle, and further includes instructions for: determining the total length of an input sequence input by a user in the current input period; judging whether the total length of an input sequence input by a user in the current input period reaches a preset length or not; and if the total length of the input sequence input by the user in the current input period does not reach the preset length, executing the step of generating the long sentence prediction request according to the input sequence.
Optionally, further comprising instructions for: caching sentence candidates returned by the server by taking an input cycle as a unit; wherein the sentence candidates comprise sentence candidates historically returned by the server.
Optionally, the input sequence comprises: an input sequence input in a current input cycle, the input sequence input in the current input cycle comprising: an input sequence input this time in the current period; further comprising instructions for: judging whether an input sequence input by a user this time is matched with a currently displayed sentence candidate; when an input sequence input by a user at this time is not matched with a currently displayed sentence candidate, executing the step of generating a long sentence prediction request according to the input sequence; the presenting the sentence candidates includes: and when an input sequence input by the user at this time is matched with the currently displayed sentence candidate, continuously displaying the currently displayed sentence candidate.
Optionally, further comprising instructions for: performing other types of prediction according to the input sequence to obtain other types of prediction results, wherein the other types of prediction results comprise at least one of the following: a name prediction result, a word-by-word proofreading prediction result, a local replacing-cloud prediction result, a cloud input prediction result, and an emoji prediction result; the presenting the sentence candidates includes: and displaying the sentence candidates according to the other types of prediction results and the priority of the sentence candidates.
Optionally, the presenting the sentence candidates comprises: splicing the content in the edit box with the sentence candidates to obtain a corresponding splicing result; and displaying the splicing result.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The input method, input device, and electronic device provided by the present invention have been described in detail above. Specific examples have been used herein to explain the principle and implementation of the invention, and the description of the embodiments is intended only to help understand the method and its core idea. Meanwhile, those skilled in the art may, according to the idea of the present invention, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.
Claims (10)
1. An input method, comprising:
acquiring an input sequence input by a user in an input method;
inputting the input sequence into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model;
and displaying the sentence candidates.
2. The method of claim 1, wherein the input information for the sentence prediction model further comprises at least one of: the above information, the application environment information, and the candidate words of the input sequence.
3. The method according to claim 2, wherein when the input information of the sentence prediction model further includes the above information, the inputting the input sequence into the sentence prediction model to obtain the sentence candidates output by the sentence prediction model comprises:
and inputting the input sequence and the above information into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model.
4. The method of claim 3, wherein inputting the input sequence and the above information into a sentence prediction model to obtain the sentence candidates output by the sentence prediction model comprises:
and predicting by adopting the sentence prediction model based on the input sequence and the above information to obtain sentence candidates output by the sentence prediction model.
5. The method of claim 3, wherein inputting the input sequence and the above information into a sentence prediction model to obtain the sentence candidates output by the sentence prediction model comprises:
predicting based on the input sequence by adopting the sentence prediction model to obtain a plurality of sentence candidates;
and screening the sentence candidates by adopting the sentence prediction model according to the above information, and outputting the screened sentence candidates.
6. The method of claim 3, wherein inputting the input sequence and the above information into a sentence prediction model to obtain the sentence candidates output by the sentence prediction model comprises:
predicting by adopting the sentence prediction model based on the above information to obtain a plurality of sentence candidates;
and screening the sentence candidates according to the input sequence by adopting the sentence prediction model, and outputting the screened sentence candidates.
7. The method of claim 2, wherein the input sequence comprises: a first input sequence obtained after the current input in the current input period and a second input sequence before the current input in the current input period;
the inputting the input sequence into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model includes:
predicting the second input sequence by adopting the sentence prediction model to obtain a plurality of sentence candidates output by the sentence prediction model;
and screening the sentence candidates according to the first input sequence by adopting the sentence prediction model, and outputting the screened sentence candidates.
8. An input device, comprising:
the acquisition module is used for acquiring an input sequence input by a user in an input method;
the determining module is used for inputting the input sequence into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model;
and the display module is used for displaying the sentence candidates.
9. An electronic device comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for:
acquiring an input sequence input by a user in an input method;
inputting the input sequence into a sentence prediction model to obtain a sentence candidate output by the sentence prediction model;
and displaying the sentence candidates.
10. A readable storage medium, characterized in that instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the input method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010366780.7A CN113589949A (en) | 2020-04-30 | 2020-04-30 | Input method and device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010366780.7A CN113589949A (en) | 2020-04-30 | 2020-04-30 | Input method and device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113589949A true CN113589949A (en) | 2021-11-02 |
Family
ID=78237637
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010366780.7A Pending CN113589949A (en) | 2020-04-30 | 2020-04-30 | Input method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113589949A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114791769A (en) * | 2022-06-24 | 2022-07-26 | 湖北云享客数字智能科技有限公司 | Big database establishment method for user behavior prediction result |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140136970A1 (en) * | 2011-07-14 | 2014-05-15 | Tencent Technology (Shenzhen) Company Limited | Text inputting method, apparatus and system |
WO2016107344A1 (en) * | 2014-12-30 | 2016-07-07 | 北京奇虎科技有限公司 | Method and device for screening on-screen candidate items of input method |
US20170220129A1 (en) * | 2014-07-18 | 2017-08-03 | Shanghai Chule (Coo Tek) Information Technology Co., Ltd. | Predictive Text Input Method and Device |
WO2018005395A1 (en) * | 2016-06-30 | 2018-01-04 | Microsoft Technology Licensing, Llc | Artificial neural network with side input for language modelling and prediction |
CN108132717A (en) * | 2017-12-21 | 2018-06-08 | 广东欧珀移动通信有限公司 | Recommendation method, apparatus, storage medium and the mobile terminal of candidate word |
CN108628911A (en) * | 2017-03-24 | 2018-10-09 | 微软技术许可有限责任公司 | It is predicted for expression input by user |
US20180302350A1 (en) * | 2016-08-03 | 2018-10-18 | Tencent Technology (Shenzhen) Company Limited | Method for determining candidate input, input prompting method and electronic device |
CN109597496A (en) * | 2017-09-30 | 2019-04-09 | 北京金山安全软件有限公司 | Information prediction method, device and equipment |
EP3523710A1 (en) * | 2016-11-29 | 2019-08-14 | Samsung Electronics Co., Ltd. | Apparatus and method for providing sentence based on user input |
CN110187780A (en) * | 2019-06-10 | 2019-08-30 | 北京百度网讯科技有限公司 | Long text prediction technique, device, equipment and storage medium |
CN110673748A (en) * | 2019-09-27 | 2020-01-10 | 北京百度网讯科技有限公司 | Method and device for providing candidate long sentences in input method |
CN110874145A (en) * | 2018-08-30 | 2020-03-10 | 北京搜狗科技发展有限公司 | Input method and device and electronic equipment |
CN110908523A (en) * | 2018-09-14 | 2020-03-24 | 北京搜狗科技发展有限公司 | Input method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110874145A (en) | Input method and device and electronic equipment | |
CN107291260B (en) | Information input method and device for inputting information | |
CN111984749B (en) | Interest point ordering method and device | |
CN111198620B (en) | Method, device and equipment for presenting input candidate items | |
CN107564526B (en) | Processing method, apparatus and machine-readable medium | |
CN108628813B (en) | Processing method and device for processing | |
CN109582768B (en) | Text input method and device | |
CN110069624B (en) | Text processing method and device | |
CN107422872B (en) | Input method, input device and input device | |
CN112631435A (en) | Input method, device, equipment and storage medium | |
CN111381685B (en) | Sentence association method and sentence association device | |
CN111240497A (en) | Method and device for inputting through input method and electronic equipment | |
CN113589949A (en) | Input method and device and electronic equipment | |
CN109979435B (en) | Data processing method and device for data processing | |
CN116484828A (en) | Similar case determining method, device, apparatus, medium and program product | |
CN110908523A (en) | Input method and device | |
CN113589954B (en) | Data processing method and device and electronic equipment | |
CN112214114A (en) | Input method and device and electronic equipment | |
CN111198619A (en) | Association candidate generation method and device | |
CN113589946B (en) | Data processing method and device and electronic equipment | |
CN113589955B (en) | Data processing method and device and electronic equipment | |
CN111382566A (en) | Site theme determination method and device and electronic equipment | |
CN111722726B (en) | Method and device for determining pigment and text | |
CN113589947B (en) | Data processing method and device and electronic equipment | |
CN110765338A (en) | Data processing method and device and data processing device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||