CN114510154A - Input method, input device and device for inputting - Google Patents

Input method, input device and device for inputting

Info

Publication number
CN114510154A
Authority
CN
China
Prior art keywords
sentence
sample
target
original
statement
Prior art date
Legal status
Pending
Application number
CN202011291518.7A
Other languages
Chinese (zh)
Inventor
崔欣
Current Assignee
Beijing Sogou Technology Development Co Ltd
Original Assignee
Beijing Sogou Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sogou Technology Development Co Ltd
Priority to CN202011291518.7A
Publication of CN114510154A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/02 - Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F 3/023 - Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F 3/0233 - Character input methods
    • G06F 3/0237 - Character input methods using prediction or retrieval techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/02 - Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F 3/023 - Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F 3/0233 - Character input methods
    • G06F 3/0236 - Character input methods using selection techniques to select from displayed items

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Machine Translation (AREA)

Abstract

The embodiments of the present application disclose an input method, an input device and a device for inputting. An embodiment of the method comprises: acquiring an original sentence input by a user; in a case where the original sentence satisfies a preset condition, acquiring a target sentence that has the same semantics as the original sentence and a correct sentence pattern; and displaying the target sentence. This implementation enables sentence recombination of the original sentence input by the user and provides a better-formed sentence without changing its semantics, thereby improving the efficiency with which the user inputs sentences.

Description

Input method, input device and device for inputting
Technical Field
The embodiments of the present application relate to the field of computer technology, and in particular to an input method, an input device and a device for inputting.
Background
When a user inputs a sentence through an input method client, the input content often has expression problems. For example, when a user whose native language is Korean or Japanese wants to express "he went to Harbin", the user, following the word order habits of the native language, usually inputs a mis-ordered sentence such as "he Harbin went to". For another example, when some users want to express the complete sentence "We will gather at XX tomorrow; please reply upon receipt", they input only the individual keywords "tomorrow, XX, reply".
Existing input method applications only support error correction of the input content; they cannot solve sentence-pattern problems in the sentences input by the user, so the user has to modify such sentences manually, which makes sentence input inefficient.
Disclosure of Invention
The embodiments of the present application provide an input method, an input device and a device for inputting, so as to solve the technical problem in the prior art that the efficiency of inputting sentences by a user is low.
In a first aspect, an embodiment of the present application provides an input method, where the method includes: acquiring an original sentence input by a user; under the condition that the original sentence meets a preset condition, acquiring a target sentence which has the same semantic meaning as the original sentence and has a correct sentence pattern; and displaying the target statement.
In some embodiments, the obtaining a target sentence having the same semantics as the original sentence and having a correct sentence pattern comprises: determining a conversion requirement of the original sentence; and acquiring the target sentence with the same semantic meaning as the original sentence and the correct sentence pattern by adopting a target sentence acquisition mode matched with the conversion requirement.
In some embodiments, the conversion requirement includes at least one of the following: a scattered word sentence making requirement and a word order adjusting requirement; and the target sentence acquisition mode includes at least one of the following: a target sentence acquisition mode based on a sentence library and a target sentence acquisition mode based on a word order adjustment model.
In some embodiments, the obtaining, by using a target sentence obtaining manner matched with the conversion requirement, a target sentence having the same semantics as the original sentence and having a correct sentence pattern includes: extracting keywords from the original sentence under the condition that the conversion requirement is a scattered word sentence making requirement, and determining the types of the keywords; retrieving candidate sentences from a preset sentence library based on the keywords and the types of the keywords to obtain a candidate sentence set; based on the related information of the original sentences, sorting the candidate sentences in the candidate sentence set to obtain a sorting result; and selecting a target sentence from the candidate sentence set based on the sorting result.
In some embodiments, the obtaining, by using a target sentence obtaining manner matched with the conversion requirement, a target sentence having the same semantics as the original sentence and having a correct sentence pattern includes: in a case where the conversion requirement is a word order adjusting requirement, inputting the original sentence into a pre-trained word order adjustment model to obtain a target sentence in which the word order of the original sentence has been adjusted.
In some embodiments, the word order adjustment model is trained based on the following steps: obtaining a sample set, wherein samples in the sample set are sentence pairs, each sentence pair comprises a first sample sentence and a second sample sentence, and the first sample sentence and the second sample sentence have different word orders; and taking a first sample sentence in the sample set as an input of an end-to-end generation model, taking a second sample sentence corresponding to the input first sample sentence as an output target of the end-to-end generation model, and training the end-to-end generation model by using a machine learning algorithm to obtain the word order adjustment model.
In some embodiments, the samples in the sample set are generated by: acquiring a correct sentence without grammatical errors; randomly exchanging the positions of words in the correct sentence to obtain an out-of-order sentence; and taking the out-of-order sentence as a first sample sentence, taking the correct sentence as a second sample sentence, and combining the first sample sentence and the second sample sentence to obtain a sample.
In some embodiments, the samples in the sample set are generated by: acquiring a log of an input method application, wherein the log comprises historical behavior data of a user; searching a sentence before modification and a sentence after modification corresponding to the backspace modification behavior from the historical behavior data; if the sentence before modification and the sentence after modification have the same semantic meaning and the sentence after modification has the correct sentence pattern, taking the sentence before modification as a first sample sentence, taking the sentence after modification as a second sample sentence, and summarizing the first sample sentence and the second sample sentence to obtain a sample.
In some embodiments, after said presenting said target statement, said method further comprises: and when the target sentence is detected to be selected by the user, replacing the original sentence with the target sentence.
In a second aspect, an embodiment of the present application provides an input device, including: a first acquisition unit configured to acquire an original sentence input by a user; a second obtaining unit configured to obtain a target sentence having the same semantic meaning as the original sentence and having a correct sentence pattern, in a case where the original sentence satisfies a preset condition; a presentation unit configured to present the target sentence.
In some embodiments, the second obtaining unit is further configured to: determining a conversion requirement of the original sentence; and acquiring the target sentence with the same semantic meaning as the original sentence and the correct sentence pattern by adopting a target sentence acquisition mode matched with the conversion requirement.
In some embodiments, the conversion requirement includes at least one of the following: a scattered word sentence making requirement and a word order adjusting requirement; and the target sentence acquisition mode includes at least one of the following: a target sentence acquisition mode based on a sentence library and a target sentence acquisition mode based on a word order adjustment model.
In some embodiments, the second obtaining unit is further configured to: extracting keywords from the original sentence under the condition that the conversion requirement is a scattered word sentence making requirement, and determining the types of the keywords; retrieving candidate sentences from a preset sentence library based on the keywords and the types of the keywords to obtain a candidate sentence set; based on the related information of the original sentences, sorting the candidate sentences in the candidate sentence set to obtain a sorting result; and selecting a target sentence from the candidate sentence set based on the sorting result.
In some embodiments, the second obtaining unit is further configured to: in a case where the conversion requirement is a word order adjusting requirement, input the original sentence into a pre-trained word order adjustment model to obtain a target sentence in which the word order of the original sentence has been adjusted.
In some embodiments, the word order adjustment model is trained based on the following steps: obtaining a sample set, wherein samples in the sample set are sentence pairs, each sentence pair comprises a first sample sentence and a second sample sentence, and the first sample sentence and the second sample sentence have different word orders; and taking a first sample sentence in the sample set as an input of an end-to-end generation model, taking a second sample sentence corresponding to the input first sample sentence as an output target of the end-to-end generation model, and training the end-to-end generation model by using a machine learning algorithm to obtain the word order adjustment model.
In some embodiments, the samples in the sample set are generated by: acquiring a correct sentence without grammatical errors; randomly exchanging the positions of words in the correct sentence to obtain an out-of-order sentence; and taking the out-of-order sentence as a first sample sentence, taking the correct sentence as a second sample sentence, and combining the first sample sentence and the second sample sentence to obtain a sample.
In some embodiments, the samples in the sample set are generated by: acquiring a log of an input method application, wherein the log comprises historical behavior data of a user; searching a sentence before modification and a sentence after modification corresponding to the backspace modification behavior from the historical behavior data; if the sentence before modification and the sentence after modification have the same semantic meaning and the sentence after modification has the correct sentence pattern, taking the sentence before modification as a first sample sentence, taking the sentence after modification as a second sample sentence, and summarizing the first sample sentence and the second sample sentence to obtain a sample.
In some embodiments, the apparatus further comprises: a replacing unit configured to replace the original sentence with the target sentence when detecting that the target sentence is selected by a user.
In a third aspect, an embodiment of the present application provides an apparatus for input, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, and include instructions for: acquiring an original sentence input by a user; in a case where the original sentence satisfies a preset condition, acquiring a target sentence that has the same semantics as the original sentence and a correct sentence pattern; and displaying the target sentence.
In some embodiments, the obtaining a target sentence having the same semantics as the original sentence and having a correct sentence pattern comprises: determining a conversion requirement of the original sentence; and acquiring the target sentence with the same semantic meaning as the original sentence and the correct sentence pattern by adopting a target sentence acquisition mode matched with the conversion requirement.
In some embodiments, the conversion requirement includes at least one of the following: a scattered word sentence making requirement and a word order adjusting requirement; and the target sentence acquisition mode includes at least one of the following: a target sentence acquisition mode based on a sentence library and a target sentence acquisition mode based on a word order adjustment model.
In some embodiments, the obtaining, by using a target sentence obtaining manner matched with the conversion requirement, a target sentence having the same semantics as the original sentence and having a correct sentence pattern includes: under the condition that the conversion requirement is a scattered word sentence making requirement, extracting keywords from the original sentence, and determining the types of the keywords; retrieving candidate sentences from a preset sentence library based on the keywords and the types of the keywords to obtain a candidate sentence set; based on the related information of the original sentences, sorting the candidate sentences in the candidate sentence set to obtain a sorting result; and selecting a target sentence from the candidate sentence set based on the sorting result.
In some embodiments, the obtaining, by using a target sentence obtaining manner matched with the conversion requirement, a target sentence having the same semantics as the original sentence and having a correct sentence pattern includes: in a case where the conversion requirement is a word order adjusting requirement, inputting the original sentence into a pre-trained word order adjustment model to obtain a target sentence in which the word order of the original sentence has been adjusted.
In some embodiments, the word order adjustment model is trained based on the following steps: obtaining a sample set, wherein samples in the sample set are sentence pairs, each sentence pair comprises a first sample sentence and a second sample sentence, and the first sample sentence and the second sample sentence have different word orders; and taking a first sample sentence in the sample set as an input of an end-to-end generation model, taking a second sample sentence corresponding to the input first sample sentence as an output target of the end-to-end generation model, and training the end-to-end generation model by using a machine learning algorithm to obtain the word order adjustment model.
In some embodiments, the samples in the sample set are generated by: acquiring a correct sentence without grammatical errors; randomly exchanging the positions of words in the correct sentence to obtain an out-of-order sentence; and taking the out-of-order sentence as a first sample sentence, taking the correct sentence as a second sample sentence, and combining the first sample sentence and the second sample sentence to obtain a sample.
In some embodiments, the samples of the set of samples are generated by: acquiring a log of an input method application, wherein the log comprises historical behavior data of a user; searching a sentence before modification and a sentence after modification corresponding to the backspace modification behavior from the historical behavior data; if the sentence before modification and the sentence after modification have the same semantic meaning and the sentence after modification has the correct sentence pattern, taking the sentence before modification as a first sample sentence, taking the sentence after modification as a second sample sentence, and summarizing the first sample sentence and the second sample sentence to obtain a sample.
In some embodiments, the one or more programs further include instructions for: when detecting that the target sentence is selected by the user, replacing the original sentence with the target sentence.
In a fourth aspect, embodiments of the present application provide a computer-readable medium on which a computer program is stored, which when executed by a processor, implements the method as described in the first aspect above.
According to the input method, the input device and the device for inputting provided by the embodiments of the present application, an original sentence input by a user is acquired, and in a case where the original sentence satisfies a preset condition, a target sentence having the same semantics as the original sentence and a correct sentence pattern is acquired and displayed. Therefore, sentence recombination can be performed on the original sentence input by the user, and a better-formed sentence can be provided without changing its semantics, so that the user does not need to modify the sentence manually and the efficiency with which the user inputs sentences is improved.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a flow diagram of one embodiment of an input method according to the present application;
FIG. 2 is a flow chart of a target sentence acquisition mode according to the input method of the present application;
FIG. 3 is a schematic diagram of an embodiment of an input device according to the present application;
FIG. 4 is a schematic diagram of a structure of an apparatus for input according to the present application;
FIG. 5 is a schematic diagram of a server in some embodiments according to the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Referring to FIG. 1, a flow 100 of one embodiment of an input method according to the present application is shown. The input method can run on various electronic devices, including but not limited to: a server, a smart phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, a car computer, a desktop computer, a set-top box, a smart TV, a wearable device, and so on.
The input method application mentioned in the embodiments of the present application can support various input methods. An input method is an encoding method used for inputting characters into electronic devices such as computers and mobile phones, and a user can conveniently input a desired character or character string into an electronic device using the input method application. It should be noted that, in the embodiments of the present application, in addition to common Chinese input methods (such as the Pinyin input method, the Wubi input method, the Zhuyin input method, the phonetic input method, the handwriting input method, etc.), the input method application may also support input methods for other languages (such as an English input method, a Japanese hiragana input method, a Korean input method, etc.); neither the input method nor its language category is limited here.
The input method in this embodiment may include the following steps:
Step 101: obtaining an original sentence input by a user.
In this embodiment, an execution subject of the input method (such as the electronic device described above) may acquire a sentence input by a user and take the sentence as the original sentence. The original sentence can be input through an input method application, and the user can input it in any input mode. For example, the input mode may be an encoding input mode such as pinyin, wubi or strokes, or may be voice input, which is not limited here.
Step 102: in a case where the original sentence satisfies a preset condition, acquiring a target sentence that has the same semantics as the original sentence and a correct sentence pattern.
In this embodiment, the execution subject may obtain the target sentence having the same semantic meaning as the original sentence and the correct sentence pattern when the original sentence satisfies the preset condition. The preset condition may be used to trigger an obtaining operation of the target statement, and the preset condition may be preset as needed. For example, it may be set to obtain a target sentence having the same semantic meaning as the original sentence and the correct sentence pattern in the case that the original sentence has a language disorder (e.g., the sentence pattern is incorrect, the complete sentence cannot be constructed, the word order is disordered, etc.).
The target sentence has the same semantics as the original sentence and has a correct sentence pattern; that is, the target sentence is a sentence without grammatical errors, with a complete structure and a correct word order. As an example, if the original sentence is "he Harbin went to", the target sentence may be "he went to Harbin", obtained by adjusting the word order. As yet another example, if the original sentence is "tomorrow, XX family restaurant, reply", the target sentence may be the complete sentence "We will gather for dinner at XX family restaurant tomorrow; please reply upon receipt".
In some optional implementations of this embodiment, in a case where the original sentence satisfies the preset condition, the execution subject may obtain the target sentence having the same semantics as the original sentence and a correct sentence pattern according to the following steps:
first, the conversion requirements of the original sentence are determined.
The conversion requirement can be determined according to the problems existing in the original sentence. For example, if the original sentence contains only scattered words that do not form a complete sentence, the conversion requirement may be a scattered word sentence making requirement. If the word order of the original sentence is wrong, the conversion requirement may be a word order adjusting requirement. It should be noted that the conversion requirement of the original sentence is not limited to the scattered word sentence making requirement and the word order adjusting requirement; other conversion requirements can be determined according to other types of grammatical errors in the original sentence.
Here, the problem of the original sentence can be detected in various ways, and the following two examples are described:
as an example, when the original sentence has a plurality of (e.g., at least two) separators (e.g., pause signs ", commas", "etc.), and the content between each separator is mostly nouns (e.g., includes at least two nouns), it can be considered that there is a requirement for scattered word sentence making. Such as "today, clothes, how.
As yet another example, the original sentence may be part-of-speech tagged to determine its syntax. If the syntax of the original sentence matches a predetermined wrong syntax (e.g., noun-verb), the original sentence may be considered to have a word order adjusting requirement, for example "he Harbin went to".
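The following is a minimal, hypothetical sketch of how the two detection heuristics above could be written; the separator set, the noun-count threshold and the noun-verb pattern are illustrative assumptions, and `pos_tag` stands for any off-the-shelf part-of-speech tagger rather than a specific library.

```python
import re

SEPARATORS = "，,、"  # assumed delimiter set

def detect_conversion_requirement(sentence, pos_tag):
    """Return 'scattered_words', 'word_order' or None.

    `pos_tag` is a caller-supplied callable mapping text to a list of
    (token, part_of_speech) pairs; any off-the-shelf tagger could back it.
    """
    segments = [s.strip() for s in re.split(f"[{SEPARATORS}]", sentence) if s.strip()]
    if len(segments) >= 3:  # several separators present
        noun_segments = sum(
            1 for seg in segments
            if all(tag.startswith("n") for _, tag in pos_tag(seg))
        )
        if noun_segments >= 2:  # the separated contents are mostly nouns
            return "scattered_words"

    tags = [tag for _, tag in pos_tag(sentence)]
    # assumed "wrong syntax" pattern: a noun immediately followed by a verb,
    # as in "he Harbin went to"
    if len(tags) >= 2 and tags[-2].startswith("n") and tags[-1].startswith("v"):
        return "word_order"
    return None
```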
And secondly, acquiring a target sentence which has the same semantic meaning as the original sentence and has a correct sentence pattern by adopting a target sentence acquisition mode matched with the conversion requirement.
Here, the target sentence acquisition manner may include, but is not limited to, at least one of the following: a target sentence acquisition mode based on a sentence library and a target sentence acquisition mode based on a word order adjusting model. Different target statement acquisition modes can be preset according to different conversion requirements.
Optionally, in a case where the conversion requirement is a scattered word sentence making requirement, the execution subject may adopt the target sentence acquisition mode based on a sentence library. The sentence library may contain a large number of sentences, which can be extracted in advance from corpora such as users' input logs and Internet texts. The sentence library also records the keywords of each sentence and the type label of each keyword. The types of keywords may include, but are not limited to, named entities (e.g., places, people, organizations), verbs, times, and the like.
In a case where the conversion requirement is a scattered word sentence making requirement, referring to the flowchart shown in FIG. 2, the execution subject may obtain the target sentence having the same semantics as the original sentence and a correct sentence pattern according to the following sub-steps:
and a substep S11 of extracting keywords from the original sentence and determining the type of the keywords.
For example, if the original sentence is "Xiaoming, taxi taking, Beijing", the keywords may be obtained by splitting on the separator "," or by word segmentation, specifically "Xiaoming", "taxi taking" and "Beijing", whose types are, in order, person, verb and place.
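A hedged sketch of sub-step S11 follows; splitting on the separator and typing each keyword with a named entity recognizer is only one possible realization, and the `ner` callable is an assumed interface, not an API defined by this application.

```python
import re

def extract_typed_keywords(sentence, ner):
    """Return a list such as
    [("Xiaoming", "person"), ("taxi taking", "verb"), ("Beijing", "place")].

    `ner` is an assumed callable that labels a keyword with its type
    (person, verb, place, time, organization, ...).
    """
    keywords = [k.strip() for k in re.split("[，,、]", sentence) if k.strip()]
    return [(keyword, ner(keyword)) for keyword in keywords]
```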
And a substep S12, retrieving candidate sentences from a preset sentence library based on the keywords and the types of the keywords to obtain a candidate sentence set.
Specifically, sentences matching the types of the keywords may first be retrieved from the sentence library; then, from the retrieved sentences, those containing the keywords are selected as candidate sentences and collected into a candidate sentence set. Continuing with the above example, the candidate sentence set may include "Xiaoming takes a taxi to Beijing", "Xiaoming takes a taxi back from Beijing", and so on.
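Sub-step S12 could look like the sketch below, assuming the sentence library is a simple in-memory list of records of the form {"text": ..., "keywords": {keyword: type}}; a real system would more likely use an inverted index, so this linear scan only illustrates the matching rule.

```python
def retrieve_candidates(typed_keywords, sentence_library):
    """typed_keywords: [(keyword, type), ...] from sub-step S11.
    sentence_library: iterable of {"text": str, "keywords": {str: str}} records."""
    candidates = []
    for record in sentence_library:
        library_words = set(record["keywords"])
        library_types = set(record["keywords"].values())
        # first require that every keyword type is covered,
        # then require that the keywords themselves appear in the sentence
        if all(k_type in library_types for _, k_type in typed_keywords) and \
           all(keyword in library_words for keyword, _ in typed_keywords):
            candidates.append(record["text"])
    return candidates
```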
And a substep S13, sorting the candidate sentences in the candidate sentence set based on the related information of the original sentences to obtain a sorting result.
Here, the related information of the original sentence may include, but is not limited to, the preceding context, the following context, the user's historical input information, the original sentence itself, and the like. The execution subject may extract features from the related information of the original sentence, and input each candidate sentence together with the extracted features into a pre-trained ranking model to obtain a score for each candidate sentence. The candidate sentences in the candidate sentence set are then sorted by score to obtain the sorting result. The ranking model here may extract features from a candidate sentence and match them against the features extracted from the related information; the output score characterizes the degree of match.
In practice, the ranking model can be obtained by training an existing click-through-rate estimation model, a model based on a Deep Interest Network (DIN), or the like, in advance through a machine learning method (such as a supervised learning method).
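Sub-step S13 then reduces to scoring and sorting, as in the sketch below; `extract_features` and `ranking_model` are placeholders for the context feature extractor and the pre-trained (click-through-rate or DIN-style) ranking model mentioned above, and their interfaces are assumptions.

```python
def rank_candidates(candidate_sentences, related_info, extract_features, ranking_model):
    """Return [(candidate, score), ...] sorted from best to worst match."""
    context_features = extract_features(related_info)  # preceding/following text, history, ...
    scored = [
        (candidate, ranking_model(candidate, context_features))
        for candidate in candidate_sentences
    ]
    # a higher score means a better match with the user's context
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```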
And a substep S14 of selecting a target sentence from the candidate sentence set based on the sorting result.
Here, the number of target sentences is not limited. For example, the first candidate sentence in the sequence may be selected as the target sentence, so that the sentence most needed by the user is selected as the target sentence. The first N (N is a positive integer) candidate sentences can also be used as target sentences, so that more choices can be provided for the user.
In some optional implementations of this embodiment, in a case where the conversion requirement is a word order adjusting requirement, the execution subject may input the original sentence into a pre-trained word order adjustment model to obtain a target sentence in which the word order of the original sentence has been adjusted. The word order adjustment model can recombine an input sentence into a new sentence with a correct word order, and may be pre-trained based on a machine learning method (e.g., a supervised learning method).
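Putting the two paths together, step 102 can be read as a small dispatch on the conversion requirement; the sketch below is illustrative only, and `build_from_sentence_library` and `word_order_model` stand for the sub-steps S11 to S14 and the pre-trained adjustment model respectively.

```python
def acquire_target_sentence(original_sentence, requirement,
                            build_from_sentence_library, word_order_model):
    """requirement is the value produced by the detection step, e.g.
    'scattered_words' or 'word_order'; None means no target sentence."""
    if requirement == "scattered_words":
        return build_from_sentence_library(original_sentence)
    if requirement == "word_order":
        return word_order_model(original_sentence)
    return None
```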
In some optional implementations of this embodiment, the word order adjustment model is obtained by training based on the following steps:
in a first step, a sample set is obtained. Wherein, the samples in the sample set are statement duplets. The statement duplet may include a first sample statement and a second sample statement. The first sample statement and the second sample statement may have different word orders. Here, the first sample sentence and the second sample sentence may be considered to have the same semantic meaning because they contain the same word and only the sentences are different.
Optionally, the samples in the sample set may be generated by: first, correct sentences without language sickness are obtained. And then, randomly exchanging the positions of the words in the correct sentence to obtain the out-of-order sentence. And then, taking the out-of-order statement as a first sample statement, taking the correct statement as a second sample statement, and summarizing the first sample statement and the second sample statement to obtain a sample.
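As a sketch of this construction (treating the correct sentence as an already segmented token list, which is an assumption; in practice a word segmenter would produce it):

```python
import random

def make_shuffled_sample(correct_tokens):
    """Build one (first sample sentence, second sample sentence) pair."""
    shuffled = list(correct_tokens)
    for _ in range(10):  # a few attempts are enough for ordinary sentences
        random.shuffle(shuffled)
        if shuffled != list(correct_tokens):  # keep going until the order changes
            break
    first_sample = " ".join(shuffled)          # out-of-order sentence
    second_sample = " ".join(correct_tokens)   # correct, well-formed sentence
    return first_sample, second_sample
```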
Optionally, the samples in the sample set may also be generated as follows: first, a log of the input method application is obtained, where the log includes historical behavior data of users. Then, the sentence before modification and the sentence after modification corresponding to a backspace modification behavior are found from the historical behavior data. If the sentence before modification and the sentence after modification have the same semantics and the sentence after modification has a correct sentence pattern, the sentence before modification is taken as the first sample sentence, the sentence after modification is taken as the second sample sentence, and the two are combined to obtain a sample.
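A hedged sketch of this log-mining recipe; the log record layout and the `have_same_semantics` / `has_correct_pattern` checks are assumptions made only to keep the example self-contained.

```python
def mine_samples_from_logs(log_records, have_same_semantics, has_correct_pattern):
    """log_records: iterable of dicts like
    {"action": "backspace_edit", "before": "...", "after": "..."}."""
    samples = []
    for record in log_records:
        if record.get("action") != "backspace_edit":
            continue  # only backspace-modification behaviors are of interest
        before, after = record["before"], record["after"]
        if have_same_semantics(before, after) and has_correct_pattern(after):
            samples.append((before, after))  # (first sample, second sample)
    return samples
```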
In a second step, a first sample sentence in the sample set is taken as the input of an end-to-end generation model (sequence-to-sequence, Seq2Seq), the second sample sentence corresponding to the input first sample sentence is taken as the output target of the end-to-end generation model, and the end-to-end generation model is trained with a machine learning algorithm to obtain the word order adjustment model.
The end-to-end generation model has an encoder-decoder structure: its input is a sequence and its output is also a sequence. Therefore, the model can be trained with the samples in the sample set to obtain the word order adjustment model.
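To make this recipe concrete, below is a deliberately tiny encoder-decoder training sketch. PyTorch is just a convenient choice here, not a framework named by this application; vocabulary handling, padding and batching are omitted, and the hyper-parameters are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    """Minimal encoder-decoder: encode the first sample sentence,
    decode the second sample sentence with teacher forcing."""
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src_ids, tgt_in_ids):
        _, state = self.encoder(self.embed(src_ids))               # encode out-of-order sentence
        dec_out, _ = self.decoder(self.embed(tgt_in_ids), state)   # teacher-forced decoding
        return self.out(dec_out)                                   # per-position vocabulary logits

def train_step(model, optimizer, src_ids, tgt_in_ids, tgt_out_ids):
    """src_ids: first sample sentence; tgt_in_ids/tgt_out_ids: second sample
    sentence shifted by one position (decoder input vs. prediction target)."""
    logits = model(src_ids, tgt_in_ids)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), tgt_out_ids.reshape(-1)
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```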
Step 103: displaying the target sentence.
In this embodiment, the execution subject may present the target sentence in various ways. For example, the target sentence may be displayed in the input method panel as a candidate, or may be displayed in a pop-up window; the display position and display style of the target sentence are not particularly limited here.
In some optional implementation manners of this embodiment, after the target sentence is presented, if it is detected that the target sentence is selected by the user, the original sentence may be replaced with the target sentence, so that the target sentence is displayed on the screen.
In the method provided by the above embodiment of the present application, an original sentence input by the user is acquired, and in a case where the original sentence satisfies the preset condition, a target sentence having the same semantics as the original sentence and a correct sentence pattern is acquired and displayed. Therefore, sentence recombination can be performed on the original sentence input by the user, and a better-formed sentence can be provided without changing its semantics, so that the user does not need to modify the sentence manually and the efficiency of sentence input is improved.
With further reference to fig. 3, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an input device, which corresponds to the embodiment of the method shown in fig. 1, and which is particularly applicable to various electronic devices.
As shown in fig. 3, the input device 300 of the present embodiment includes: a first acquisition unit 301 configured to acquire an original sentence input by a user; a second obtaining unit 302, configured to obtain a target sentence having the same semantic meaning as the original sentence and having a correct sentence pattern, if the original sentence satisfies a preset condition; a presentation unit 303 configured to present the target sentence.
In some optional implementations of the present embodiment, the second obtaining unit 302 is further configured to: determining the conversion requirement of the original statement; and acquiring the target sentence which has the same semantic meaning as the original sentence and has a correct sentence pattern by adopting a target sentence acquisition mode matched with the conversion requirement.
In some optional implementations of this embodiment, the conversion requirement includes at least one of the following: a scattered word sentence making requirement and a word order adjusting requirement; and the target sentence acquisition mode includes at least one of the following: a target sentence acquisition mode based on a sentence library and a target sentence acquisition mode based on a word order adjustment model.
In some optional implementations of the present embodiment, the second obtaining unit 302 is further configured to: extracting keywords from the original sentence under the condition that the conversion requirement is a scattered word sentence making requirement, and determining the types of the keywords; retrieving candidate sentences from a preset sentence library based on the keywords and the types of the keywords to obtain a candidate sentence set; sorting the candidate sentences in the candidate sentence set based on the related information of the original sentences to obtain a sorting result; and selecting a target sentence from the candidate sentence set based on the sorting result.
In some optional implementations of this embodiment, the second obtaining unit 302 is further configured to: in a case where the conversion requirement is a word order adjusting requirement, input the original sentence into a pre-trained word order adjustment model to obtain a target sentence in which the word order of the original sentence has been adjusted.
In some optional implementations of this embodiment, the word order adjustment model is trained based on the following steps: obtaining a sample set, wherein samples in the sample set are sentence pairs, each sentence pair comprising a first sample sentence and a second sample sentence with a word order different from that of the first sample sentence; and taking a first sample sentence in the sample set as an input of an end-to-end generation model, taking a second sample sentence corresponding to the input first sample sentence as an output target of the end-to-end generation model, and training the end-to-end generation model by using a machine learning algorithm to obtain the word order adjustment model.
In some optional implementations of this embodiment, the samples in the sample set are generated by: acquiring a correct sentence without grammatical errors; randomly exchanging the positions of words in the correct sentence to obtain an out-of-order sentence; and taking the out-of-order sentence as a first sample sentence, taking the correct sentence as a second sample sentence, and combining the first sample sentence and the second sample sentence to obtain a sample.
In some optional implementations of this embodiment, the samples in the sample set are generated by: acquiring a log of input method application, wherein the log comprises historical behavior data of a user; searching a sentence before modification and a sentence after modification corresponding to the backspace modification behavior from the historical behavior data; and if the sentence before modification and the sentence after modification have the same semantics and the sentence after modification has a correct sentence pattern, taking the sentence before modification as a first sample sentence, taking the sentence after modification as a second sample sentence, and summarizing the first sample sentence and the second sample sentence to obtain a sample.
In some optional implementations of this embodiment, the apparatus further includes: and the replacing unit is configured to replace the original sentence with the target sentence when detecting that the target sentence is selected by the user.
The device provided by the above embodiment of the present application acquires an original sentence input by the user, acquires a target sentence having the same semantics as the original sentence and a correct sentence pattern in a case where the original sentence satisfies the preset condition, and displays the target sentence. Therefore, sentence recombination can be performed on the original sentence input by the user, and a better-formed sentence can be provided without changing its semantics, so that the user does not need to modify the sentence manually and the efficiency of sentence input is improved.
Fig. 4 is a block diagram illustrating an apparatus 400 for input according to an example embodiment, where the apparatus 400 may be an intelligent terminal or a server. For example, the apparatus 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 4, the apparatus 400 may include one or more of the following components: processing components 402, memory 404, power components 406, multimedia components 408, audio components 410, input/output (I/O) interfaces 412, sensor components 414, and communication components 416.
The processing component 402 generally controls overall operation of the apparatus 400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing element 402 may include one or more processors 420 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 402 can include one or more modules that facilitate interaction between the processing component 402 and other components. For example, the processing component 402 can include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support operations at the apparatus 400. Examples of such data include instructions for any application or method operating on the device 400, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 404 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power supply components 406 provide power to the various components of device 400. The power components 406 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 400.
The multimedia component 408 includes a screen that provides an output interface between the device 400 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of the touch or slide action but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 408 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 400 is in an operational mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, audio component 410 includes a Microphone (MIC) configured to receive external audio signals when apparatus 400 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 404 or transmitted via the communication component 416. In some embodiments, audio component 410 also includes a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 414 includes one or more sensors for providing various aspects of status assessment for the apparatus 400. For example, the sensor assembly 414 may detect an open/closed state of the apparatus 400 and the relative positioning of components, such as the display and keypad of the apparatus 400; the sensor assembly 414 may also detect a change in the position of the apparatus 400 or of a component of the apparatus 400, the presence or absence of user contact with the apparatus 400, the orientation or acceleration/deceleration of the apparatus 400, and a change in the temperature of the apparatus 400. The sensor assembly 414 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the apparatus 400 and other devices. The apparatus 400 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 416 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 404 comprising instructions, executable by the processor 420 of the apparatus 400 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Fig. 5 is a schematic diagram of a server in some embodiments of the present application. The server 500 may vary widely in configuration or performance and may include one or more Central Processing Units (CPUs) 522 (e.g., one or more processors) and memory 532, one or more storage media 530 (e.g., one or more mass storage devices) storing applications 542 or data 544. Memory 532 and storage media 530 may be, among other things, transient storage or persistent storage. The program stored on the storage medium 530 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Still further, the central processor 522 may be configured to communicate with the storage medium 530, and execute a series of instruction operations in the storage medium 530 on the server 500.
The server 500 may also include one or more power supplies 526, one or more wired or wireless network interfaces 550, one or more input-output interfaces 558, one or more keyboards 556, and/or one or more operating systems 541, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
A non-transitory computer readable storage medium having instructions therein which, when executed by a processor of an apparatus (smart terminal or server), enable the apparatus to perform an input method, the method comprising: acquiring an original sentence input by a user; under the condition that the original sentence meets a preset condition, acquiring a target sentence which has the same semantic meaning as the original sentence and has a correct sentence pattern; and displaying the target statement.
Optionally, the obtaining a target sentence having the same semantic as the original sentence and having a correct sentence pattern includes: determining the conversion requirement of the original statement; and acquiring the target sentence with the same semantic meaning as the original sentence and the correct sentence pattern by adopting a target sentence acquisition mode matched with the conversion requirement.
Optionally, the conversion requirement includes at least one of the following: a scattered word sentence making requirement and a word order adjusting requirement; and the target sentence acquisition mode includes at least one of the following: a target sentence acquisition mode based on a sentence library and a target sentence acquisition mode based on a word order adjustment model.
Optionally, the obtaining, by using a target statement obtaining manner matched with the conversion requirement, a target statement having the same semantics as the original statement and having a correct statement pattern includes: extracting keywords from the original sentence under the condition that the conversion requirement is a scattered word sentence making requirement, and determining the types of the keywords; retrieving candidate sentences from a preset sentence library based on the keywords and the types of the keywords to obtain a candidate sentence set; based on the related information of the original sentences, sorting the candidate sentences in the candidate sentence set to obtain a sorting result; and selecting a target sentence from the candidate sentence set based on the sorting result.
Optionally, the obtaining, by using a target sentence obtaining manner matched with the conversion requirement, a target sentence having the same semantics as the original sentence and having a correct sentence pattern includes: in a case where the conversion requirement is a word order adjusting requirement, inputting the original sentence into a pre-trained word order adjustment model to obtain a target sentence in which the word order of the original sentence has been adjusted.
Optionally, the word order adjustment model is trained based on the following steps: obtaining a sample set, wherein samples in the sample set are sentence pairs, each sentence pair comprises a first sample sentence and a second sample sentence, and the first sample sentence and the second sample sentence have different word orders; and taking a first sample sentence in the sample set as an input of an end-to-end generation model, taking a second sample sentence corresponding to the input first sample sentence as an output target of the end-to-end generation model, and training the end-to-end generation model by using a machine learning algorithm to obtain the word order adjustment model.
Optionally, the samples in the sample set are generated by: acquiring a correct sentence without grammatical errors; randomly exchanging the positions of words in the correct sentence to obtain an out-of-order sentence; and taking the out-of-order sentence as a first sample sentence, taking the correct sentence as a second sample sentence, and combining the first sample sentence and the second sample sentence to obtain a sample.
Optionally, the samples in the sample set are generated by: acquiring a log of an input method application, wherein the log comprises historical behavior data of a user; searching a sentence before modification and a sentence after modification corresponding to the backspace modification behavior from the historical behavior data; if the sentence before modification and the sentence after modification have the same semantic meaning and the sentence after modification has the correct sentence pattern, taking the sentence before modification as a first sample sentence, taking the sentence after modification as a second sample sentence, and summarizing the first sample sentence and the second sample sentence to obtain a sample.
Optionally, the one or more programs further include instructions for: when detecting that the target sentence is selected by the user, replacing the original sentence with the target sentence.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
The present application provides an input method, an input device and a device for inputting. The principles and embodiments of the present application are described herein using specific examples; the descriptions of the above embodiments are only intended to help understand the method and the core ideas of the present application. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the ideas of the present application. In summary, the content of this specification should not be construed as a limitation on the present application.

Claims (10)

1. An input method, characterized in that the method comprises:
acquiring an original sentence input by a user;
under the condition that the original sentence meets a preset condition, acquiring a target sentence which has the same semantic meaning as the original sentence and has a correct sentence pattern;
and displaying the target sentence.
2. The method of claim 1, wherein obtaining the target sentence having the same semantic meaning as the original sentence and having the correct sentence pattern comprises:
determining a conversion requirement of the original sentence;
and acquiring the target sentence having the same semantic meaning as the original sentence and a correct sentence pattern by using a target sentence acquisition manner matched with the conversion requirement.
3. The method of claim 2, wherein the conversion requirement comprises at least one of: a scattered-word sentence-forming requirement and a word order adjustment requirement; and the target sentence acquisition manner comprises at least one of: a sentence-library-based target sentence acquisition manner and a target sentence acquisition manner based on a word order adjustment model.
4. The method according to claim 3, wherein said acquiring the target sentence having the same semantic meaning as the original sentence and a correct sentence pattern by using the target sentence acquisition manner matched with the conversion requirement comprises:
extracting keywords from the original sentence and determining the types of the keywords, in a case that the conversion requirement is the scattered-word sentence-forming requirement;
retrieving candidate sentences from a preset sentence library based on the keywords and the types of the keywords, to obtain a candidate sentence set;
ranking the candidate sentences in the candidate sentence set based on information related to the original sentence, to obtain a ranking result;
and selecting the target sentence from the candidate sentence set based on the ranking result.
5. The method according to claim 3, wherein said acquiring the target sentence having the same semantic meaning as the original sentence and a correct sentence pattern by using the target sentence acquisition manner matched with the conversion requirement comprises:
in a case that the conversion requirement is the word order adjustment requirement, inputting the original sentence into a pre-trained word order adjustment model to obtain a target sentence in which the word order of the original sentence has been adjusted.
6. The method according to claim 5, wherein the word order adjustment model is trained based on the following steps:
obtaining a sample set, wherein each sample in the sample set is a sentence pair comprising a first sample sentence and a second sample sentence, and the first sample sentence and the second sample sentence have different word orders;
and taking a first sample sentence in the sample set as an input of an end-to-end generation model, taking the second sample sentence corresponding to the input first sample sentence as an output target of the end-to-end generation model, and training the end-to-end generation model by using a machine learning algorithm to obtain the word order adjustment model.
7. The method of claim 6, wherein the samples in the sample set are generated by:
acquiring correct sentences that are free of grammatical errors;
randomly exchanging the positions of words in the correct sentences to obtain out-of-order sentences;
and taking an out-of-order sentence as the first sample sentence, taking the corresponding correct sentence as the second sample sentence, and combining the first sample sentence and the second sample sentence to obtain a sample.
8. An input device, the device comprising:
a first acquisition unit configured to acquire an original sentence input by a user;
a second obtaining unit configured to obtain a target sentence having the same semantic meaning as the original sentence and having a correct sentence pattern, in a case where the original sentence satisfies a preset condition;
a presentation unit configured to present the target sentence.
9. An apparatus for input, comprising a memory, one or more processors, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs comprise instructions for:
acquiring an original sentence input by a user;
under the condition that the original sentence meets a preset condition, acquiring a target sentence which has the same semantic meaning as the original sentence and has a correct sentence pattern;
and displaying the target sentence.
10. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN202011291518.7A 2020-11-17 2020-11-17 Input method, input device and input device Pending CN114510154A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011291518.7A CN114510154A (en) 2020-11-17 2020-11-17 Input method, input device and input device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011291518.7A CN114510154A (en) 2020-11-17 2020-11-17 Input method, input device and input device

Publications (1)

Publication Number Publication Date
CN114510154A true CN114510154A (en) 2022-05-17

Family

ID=81546369

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011291518.7A Pending CN114510154A (en) 2020-11-17 2020-11-17 Input method, input device and input device

Country Status (1)

Country Link
CN (1) CN114510154A (en)

Similar Documents

Publication Publication Date Title
CN107608532B (en) Association input method and device and electronic equipment
CN108304412B (en) Cross-language search method and device for cross-language search
CN107092424B (en) Display method and device of error correction items and device for displaying error correction items
CN107424612B (en) Processing method, apparatus and machine-readable medium
CN107797676B (en) Single character input method and device
KR102327790B1 (en) Information processing methods, devices and storage media
CN107784037B (en) Information processing method and device, and device for information processing
CN113033163A (en) Data processing method and device and electronic equipment
CN111752436A (en) Recommendation method and device and recommendation device
CN108614830B (en) Search result display method and device
CN114610163A (en) Recommendation method, apparatus and medium
CN114510154A (en) Input method, input device and input device
CN114115550A (en) Method and device for processing association candidate
CN112306251A (en) Input method, input device and input device
CN113515618A (en) Voice processing method, apparatus and medium
CN112306252A (en) Data processing method and device and data processing device
US20230196001A1 (en) Sentence conversion techniques
CN110716653B (en) Method and device for determining association source
CN112528129B (en) Language searching method and device for multilingual translation system
CN111381685B (en) Sentence association method and sentence association device
CN113495656A (en) Input method, input device and input device
CN114253404A (en) Input method, input device and input device
CN113342183A (en) Input method, input device and input device
CN114330305A (en) Entry recalling method and device and entry recalling device
CN113534972A (en) Entry prompting method and device and entry prompting device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination