CN110673748A - Method and device for providing candidate long sentences in input method

Method and device for providing candidate long sentences in input method

Info

Publication number
CN110673748A
Authority
CN
China
Prior art keywords
candidate
words
prediction model
long sentence
long
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910927584.XA
Other languages
Chinese (zh)
Other versions
CN110673748B (en)
Inventor
龚建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910927584.XA
Publication of CN110673748A
Application granted
Publication of CN110673748B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0237Character input methods using prediction or retrieval techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Abstract

The method obtains a current input sequence entered by a user in an input method application, obtains candidate words matching the current input sequence, obtains corresponding candidate long sentences by combining a pre-trained long sentence prediction model with the candidate words, and displays the candidate long sentences alongside the candidate words in the input method application. Matching candidate long sentences are thus obtained quickly through the pre-trained long sentence prediction model and provided to the user, so the user can conveniently and quickly complete the input of a long sentence, which reduces the user's input cost and improves the user experience.

Description

Method and device for providing candidate long sentences in input method
Technical Field
The present application relates to the technical field of artificial intelligence, and in particular to a method and a device for providing candidate long sentences in an input method.
Background
At present, an input method application can provide, according to the pinyin sequence input by a user, candidate words corresponding to the pinyin sequence as well as shorter texts such as the next character, word, or phrase following a candidate word. In practice, however, when a user needs to input a complete sentence through the input method application, the user must enter the corresponding pinyin sequences many times to complete the sentence. The input cost of entering a complete sentence is therefore high, and the user's input method experience is not ideal.
Disclosure of Invention
The present application is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, the first objective of the present application is to provide a method for providing candidate long sentences in an input method.
A second object of the present application is to provide an apparatus for providing candidate long sentences in an input method.
A third object of the present application is to provide an electronic device.
A fourth object of the present application is to propose a computer readable storage medium.
A fifth object of the present application is to propose a computer program product.
In order to achieve the above object, an embodiment of a first aspect of the present application provides a method for providing a candidate long sentence in an input method, including: acquiring a current input sequence input by a user in an input method application; acquiring candidate words matched with the current input sequence; acquiring a candidate long sentence matched with the candidate word according to a long sentence prediction model trained in advance; and displaying the candidate words and the candidate long sentences on the input method application.
According to the method for providing candidate long sentences in an input method of the embodiment of the present application, the current input sequence entered by the user in the input method application is obtained, candidate words matching the current input sequence are obtained, corresponding candidate long sentences are obtained by combining the pre-trained long sentence prediction model with the candidate words, and the candidate long sentences are displayed alongside the candidate words in the input method application.
In order to achieve the above object, an embodiment of a second aspect of the present application provides an apparatus for providing candidate long sentences in an input method, including: a first acquisition module for acquiring a current input sequence input by a user in an input method application; a second acquisition module for acquiring candidate words matched with the current input sequence; a third acquisition module for acquiring candidate long sentences matched with the candidate words according to a long sentence prediction model trained in advance; and a display module for displaying the candidate words and the candidate long sentences on the input method application.
The device for providing candidate long sentences in an input method of the embodiment of the present application obtains the current input sequence entered by the user in the input method application, obtains candidate words matching the current input sequence, obtains corresponding candidate long sentences by combining the pre-trained long sentence prediction model with the candidate words, and displays the candidate long sentences alongside the candidate words in the input method application.
To achieve the above object, an embodiment of a third aspect of the present application provides an electronic device including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method for providing candidate long sentences in an input method as described above.
In order to achieve the above object, a fourth aspect of the present application provides a computer-readable storage medium, where instructions of the storage medium, when executed by a processor, implement a method for providing a candidate long sentence in an input method as described above.
In order to achieve the above object, an embodiment of a fifth aspect of the present application provides a computer program product which, when its instructions are executed by a processor, implements the method for providing candidate long sentences in an input method as described above.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flow chart of a method for providing candidate long sentences in an input method according to an embodiment of the present application;
FIG. 2 is an exemplary diagram of a user interface containing candidate long sentences;
FIG. 3 is a first detailed flowchart of step 103 in the embodiment shown in FIG. 1;
FIG. 4 is a second detailed flowchart of step 103 in the embodiment shown in FIG. 1;
fig. 5 is a flowchart illustrating a method for providing candidate long sentences in an input method according to another embodiment of the present application;
fig. 6 is a schematic structural diagram of a device for providing candidate long sentences in an input method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a device for providing candidate long sentences in an input method according to another embodiment of the present application;
fig. 8 is a schematic structural diagram of a device for providing candidate long sentences in an input method according to another embodiment of the present application;
FIG. 9 is a schematic structural diagram of an apparatus for providing candidate long sentences in an input method according to another embodiment of the present application;
FIG. 10 is a schematic structural diagram of an electronic device according to one embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
A method, an apparatus, and an electronic device for providing candidate long sentences in an input method according to an embodiment of the present application are described below with reference to the drawings.
Fig. 1 is a flowchart illustrating a method for providing candidate long sentences in an input method according to an embodiment of the present application. It should be noted that the execution subject of the method for providing a candidate long sentence in an input method provided in this embodiment is a device for providing a candidate long sentence in an input method, and the device may be configured in an electronic device or a cloud server, which is not limited in this embodiment.
As shown in fig. 1, the method for providing candidate long sentences in the input method may include:
Step 101, acquiring a current input sequence input by a user in an input method application.
Step 102, obtaining candidate words matched with the current input sequence.
As an exemplary implementation manner, when a user needs to input information through an input method application, the terminal device may obtain a current input sequence input by the user in the input method application, and upload the current input sequence to the cloud server, so that the cloud server may convert the current input sequence to obtain a candidate word matching the current input sequence.
It should be understood that, in addition to being executed by the cloud server, obtaining the candidate words matching the current input sequence may also be executed by the terminal device. For example, the terminal device may convert the current input sequence entered by the user in the input method application, in combination with the key model, to obtain the candidate words matching the current input sequence, and upload the corresponding candidate words to the cloud server; this embodiment is not limited in this respect.
The terminal device may include, but is not limited to, a hardware device having an input method application, such as a personal computer, a tablet computer, a mobile phone, and a smart phone, and this embodiment is not particularly limited thereto.
For example, if the current input sequence input by the user in the input method application is "nihaozai", the sequence is converted and the first candidate word corresponding to it may be obtained as "are you there".
For another example, if the current input sequence input by the user in the input method application is "guinianhao", the sequence is converted and the first candidate word corresponding to it may be obtained as "good year round".
Step 103, acquiring a candidate long sentence matched with the candidate word according to a long sentence prediction model trained in advance.
Specifically, after the candidate words matched with the current input sequence are obtained, in order to facilitate a user to quickly input a complete long sentence through an input method application, a long sentence prediction model trained in advance can be combined to obtain the candidate long sentences matched with the candidate words.
It should be noted that, in different application scenarios, the manner of obtaining the candidate long sentences matched with the candidate words is different according to the long sentence prediction model, for example, the candidate words may be directly input into the long sentence prediction model, and the long sentence prediction model may directly output the candidate long sentences matched with the candidate words, where the long sentence prediction model has learned the correspondence between the candidate words and the candidate long sentences.
Other ways of obtaining candidate long sentences matching the candidate words according to the long sentence prediction model will be described in the following embodiments.
It should be understood that the candidate long sentence includes the candidate word and the suffix words following the candidate word.
In practical applications, different users have different sentence-usage habits. To make the provided candidate long sentences better meet user needs, as an exemplary implementation, after the candidate long sentence matching the candidate word is obtained, the sentence-preference characteristics of the user may be obtained, the candidate long sentence adjusted in combination with these characteristics, and the adjusted candidate long sentence fed back to the terminal device.
Step 104, displaying the candidate words and the candidate long sentences on the input method application.
In this embodiment, there may be one or more candidate long sentences matching the candidate word.
In this embodiment, in order to avoid sensitive words appearing in the candidate long sentences, after the candidate long sentences are obtained, it may be determined whether each candidate long sentence includes a word in a blacklist word list; if it does, the candidate long sentence is filtered out, and if it does not, the candidate long sentence is retained.
The blacklist word list stores preset uncivil words, illegal words, and the like.
As an exemplary embodiment, when a plurality of candidate long sentences matching the candidate word are determined, in order to provide candidate long sentences to the user accurately, the score of each candidate long sentence may be obtained, and the candidate long sentence with the highest score may be displayed on the input method application.
For example, assume there are three candidate long sentences with scores of 8, 7, and 6. If it is determined that none of the candidate long sentences includes a word in the blacklist word list, the candidate long sentence with the highest score may be obtained and fed back to the terminal device for display.
As another exemplary implementation, when a plurality of candidate long sentences matching the candidate word are determined, a score corresponding to each candidate long sentence may be obtained, the candidate long sentences may be ranked in an order from high to low according to the score, and the ranked candidate long sentences may be presented on the input method application.
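To make the filtering and ranking concrete, here is a minimal Python sketch of the two presentation strategies above; the function name, the substring-based blacklist test, and the numeric score format are assumptions, since the embodiment does not specify how scores are computed or how blacklist matching is implemented.

```python
def select_candidate_long_sentences(candidates, scores, blacklist, top_only=True):
    """Drop candidate long sentences containing any blacklisted word, then
    rank the remainder by score from high to low. With top_only=True only
    the highest-scoring sentence is returned (first strategy); otherwise
    the full ranked list is returned (second strategy)."""
    kept = [(score, sentence) for score, sentence in zip(scores, candidates)
            if not any(word in sentence for word in blacklist)]
    kept.sort(key=lambda pair: pair[0], reverse=True)
    ranked = [sentence for _, sentence in kept]
    return ranked[:1] if top_only else ranked

# With the scores 8, 7, and 6 from the example above:
print(select_candidate_long_sentences(
    ["sentence a", "sentence b", "sentence c"], [8, 7, 6], ["badword"]))
```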
In this embodiment, in order not to affect the user to input information through the input method, the candidate long sentence may be displayed at a position such as an upper left corner or an upper right corner in the application of the input method, and the display position of the candidate long sentence is not specifically limited in this embodiment.
For example, the user enters the sequence "nihaozai" in the input method application; the input method application then displays the candidate words together with the matching candidate long sentence "are you there, I'm looking for you about something". An example diagram of the corresponding user interface is shown in FIG. 2; note that FIG. 2 illustrates the case where the candidate long sentence is displayed in the upper right corner.
According to the method for providing candidate long sentences in an input method of this embodiment, the current input sequence entered by the user in the input method application is obtained, candidate words matching the current input sequence are obtained, and the corresponding candidate long sentences are obtained by combining the pre-trained long sentence prediction model with the candidate words; the candidate long sentences are then displayed alongside the candidate words in the input method application. Matching candidate long sentences are thus obtained quickly and provided to the user, so the user can complete the input of a long sentence conveniently and quickly, which reduces the user's input cost and improves the user experience.
As shown in fig. 3, in an embodiment, the specific implementation process of the step 103 may include:
step 301, using the candidate words as the current input of the long sentence prediction model.
In this embodiment, in order that the long sentence prediction model can accurately predict the next word appearing after each word, the long sentence prediction model may be trained in advance on training corpus data, before the current input is fed into it to obtain the current output.
The specific process of training the long sentence prediction model is as follows:
step a, obtaining training corpus data, wherein the training corpus data comprises prefix sample words and suffix sample words corresponding to the prefix sample words.
Wherein suffix sample words are words that occur after prefix sample words.
In this embodiment, the training corpus data can be constructed from a large number of chat corpora in instant-messaging chat scenarios.
As an exemplary embodiment, in order to ensure that enough preceding information is available, when selecting chat sentences, sentences whose word count is greater than or equal to a preset word-count threshold may be selected as the chat corpus.
The preset word-count threshold is a preset critical value: if the number of words in a chat sentence is greater than or equal to this value, the chat sentence can be used as a sentence for constructing the training corpus. For example, if the preset word-count threshold is 7 and the chat sentence is "we this evening where to go for dinner", it can be determined that the number of words in the chat sentence reaches 7, and the chat sentence can be used as a sentence for constructing the corpus.
The general process of constructing the corpus data according to the chat corpus is as follows: separating the chat sentences in the chat corpus by preset separators, and constructing training corpus data according to separation processing results, wherein words before each preset separator in the chat sentences are prefix sample words, and words after the corresponding preset separator are suffix sample words.
Wherein the preset delimiter is preset, for example, the preset delimiter may be "|".
For example, take the chat sentence "we this evening where to go for dinner" (a literal rendering of the Chinese word order). After the sentence is divided with the separator "|", the divided sentence is "we | this evening | where to go | for dinner": for the first separator, the prefix sample word is "we" and the suffix sample word is "this evening"; for the second separator, the prefix sample word is "we this evening" and the suffix sample word is "where to go"; and for the third separator, the prefix sample word is "we this evening where to go" and the suffix sample word is "for dinner".
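The separator-based construction of training pairs can be sketched in Python as follows; the tokenize argument is a hypothetical word segmenter, since the embodiment does not say how chat sentences are split into words.

```python
MIN_WORDS = 7  # preset word-count threshold, following the example above

def build_training_pairs(chat_sentences, tokenize):
    """For each sufficiently long chat sentence, emit one (prefix sample
    words, suffix sample word) pair per separator position: the prefix is
    every word before the separator, the suffix is the single word after it."""
    pairs = []
    for sentence in chat_sentences:
        words = tokenize(sentence)      # hypothetical segmenter: str -> list of words
        if len(words) < MIN_WORDS:      # keep only sentences with enough context
            continue
        for i in range(1, len(words)):
            prefix = "".join(words[:i])  # Chinese words concatenate without spaces
            pairs.append((prefix, words[i]))
    return pairs
```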
step b, training the long sentence prediction model according to the prefix sample words and the suffix sample words.
Specifically, the long sentence prediction model is trained by taking prefix sample words as input features of the long sentence prediction model and taking suffix sample words as output features of the long sentence prediction model.
For example, the long sentence prediction model may be trained by combining a Recurrent Neural Network (RNN) with the prefix sample words and suffix sample words.
The RNN may use structures such as LSTM or GRU; the input feature is a Chinese character and the output feature is the next Chinese character. The input first passes through an Embedding layer and is then modeled by an RNN layer, after which the output passes through a hierarchical Softmax network structure and the corresponding Chinese character is selected.
It should be noted that, by adopting a hierarchical Softmax output network structure, the classification calculation amount can be reduced, and the efficiency of training the model can be further improved.
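As a rough PyTorch sketch of the Embedding layer, RNN layer, and hierarchical-Softmax output described above: the vocabulary size, dimensions, and choice of LSTM are assumptions, and PyTorch's AdaptiveLogSoftmaxWithLoss is used as a readily available stand-in for hierarchical Softmax (both reduce the output-layer classification cost).

```python
import torch.nn as nn

class LongSentencePredictionModel(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)         # Embedding layer
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)  # RNN layer
        # Adaptive softmax as a stand-in for the hierarchical Softmax output
        self.out = nn.AdaptiveLogSoftmaxWithLoss(
            hidden_dim, vocab_size, cutoffs=[2000, 10000])

    def forward(self, char_ids, next_char_ids):
        # char_ids, next_char_ids: (batch, seq_len) integer ids of Chinese
        # characters; the target at each position is the next character.
        hidden, _ = self.rnn(self.embedding(char_ids))
        flat = hidden.reshape(-1, hidden.size(-1))
        return self.out(flat, next_char_ids.reshape(-1)).loss  # minimized by BP
```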
In this embodiment, the long sentence prediction model used has the following advantages: the efficiency of outputting the candidate long sentences is high, the storage space required by the long sentence prediction model is small, the requirement on the storage space is not high, and the storage resources occupied by the model are reduced.
In the process of training the model, the parameters of the model can be optimized using the back-propagation (BP) algorithm to obtain the final long sentence prediction model.
It should be understood that the trained long sentence prediction model can accurately predict the next word appearing after each input word.
Step 302, inputting the current input into the long sentence prediction model to obtain the current output of the long sentence prediction model, wherein the current output includes the next word after the current input.
Step 303, when the next word is determined not to be matched with the preset sentence termination information, updating the current input of the long sentence prediction model according to the current output and the current input, and acquiring the current output corresponding to the current input through the long sentence prediction model until the current output of the long sentence prediction model is matched with the preset sentence termination information.
Step 304, when the current output of the long sentence prediction model is matched with the preset sentence termination information, generating a candidate long sentence matched with the candidate word according to the current input of the long sentence prediction model.
That is, in this embodiment, the candidate word is combined with the long sentence prediction model to predict the next word appearing after the candidate word; the candidate word together with that next word is then used as the model's input for the next prediction, to predict the following word, and the long sentence prediction model is used repeatedly in this way until it outputs the sentence terminator.
The sentence termination information is information for indicating the termination of the sentence. The sentence termination information is preset. For example, the statement termination information may be a statement terminator, and the statement terminator may be NULL.
For example, suppose the sentence termination information is NULL and that, from the current input sequence entered by the user in the input method application, the candidate word obtained is "we this evening". The candidate word "we this evening" may be used as the current input of the long sentence prediction model. After "we this evening" is input into the long sentence prediction model, if the current output of the model is "where to go", i.e., the next word appearing after "we this evening" is "where to go", it can be determined that the current output is not sentence termination information, and the current output is spliced after the current input to obtain the updated current input "we this evening where to go". Suppose the current output of the long sentence prediction model is then "for dinner"; the current output is again spliced after the current input, giving the updated current input "we this evening where to go for dinner". At this point the current output of the long sentence prediction model is "NULL", so the current input of the long sentence prediction model, "we this evening where to go for dinner", is the candidate long sentence matching the candidate word.
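The repeated-prediction procedure of steps 301 to 304 amounts to the loop below; predict_next stands in for the trained long sentence prediction model, and the max_words cap is an added safety bound not mentioned in the embodiment.

```python
def generate_long_sentence(candidate_word, predict_next, terminator="NULL",
                           max_words=20):
    """Take the candidate word as the current input, repeatedly predict the
    next word, and splice it after the current input until the model's
    output matches the preset sentence termination information."""
    current_input = candidate_word
    for _ in range(max_words):                    # safety bound on the loop
        current_output = predict_next(current_input)
        if current_output == terminator:          # sentence termination info
            break
        current_input = current_input + current_output
    return current_input                          # the candidate long sentence
```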
As shown in fig. 4, in another embodiment, the specific implementation process of step 103 may include:
Step 401, determining a suffix word matched with the candidate word by using a long sentence prediction model, wherein the long sentence prediction model has learned the correspondence between candidate words and suffix words.
In this embodiment, in order that the long sentence prediction model can accurately predict suffix words matching the candidate words, the long sentence prediction model may be trained in advance on training corpus data.
The specific process of training the long sentence prediction model is as follows:
step a, obtaining training corpus data, wherein the training corpus data comprises prefix sample words and suffix sample words corresponding to the prefix sample words, and the prefix sample words and the suffix sample words can form a long sentence.
In this embodiment, the training corpus data can be constructed from a large number of chat corpora in instant-messaging chat scenarios.
As an exemplary embodiment, in order to ensure that enough preamble information is available, when selecting a chat sentence, a sentence with a word number greater than or equal to a preset word number threshold may be selected as the chat corpus.
The preset word-count threshold is a preset critical value: if the number of words in a chat sentence is greater than or equal to this value, the chat sentence can be used as a sentence for constructing the training corpus. For example, if the preset word-count threshold is 7 and the chat sentence is "we this evening where to go for dinner", it can be determined that the number of words in the chat sentence reaches 7, and the chat sentence can be used as a sentence for constructing the corpus.
The general process of constructing the training corpus data from the chat corpus is as follows: chat sentences in the chat corpus are separated by preset separators, and the training corpus data is determined according to the separation result. The words before a preset separator in a chat sentence are the prefix sample words, and the words after the corresponding preset separator are the suffix sample words.
Wherein the preset delimiter is preset, for example, the preset delimiter may be "|".
For example, take the chat sentence "we this evening where to go for dinner". After the sentence is divided with the separator "|", the divided sentence is "we this evening | where to go for dinner": the prefix sample word is "we this evening" and the suffix sample word is "where to go for dinner".
For another example, for the same chat sentence, dividing it as "we | this evening where to go for dinner" gives the prefix sample word "we" and the suffix sample word "this evening where to go for dinner".
step b, training the long sentence prediction model according to the prefix sample words and the suffix sample words.
In this embodiment, the long sentence prediction model may be trained by combining a typical sequence-to-sequence neural network translation model with the prefix sample words and suffix sample words.
Step 402, generating a candidate long sentence according to the candidate words and the suffix words.
In this implementation, after the candidate word and the suffix word are obtained, the suffix word may be concatenated after the candidate word to generate the candidate long sentence.
For example, assume the candidate word is "we this evening". If the suffix word after the candidate word is predicted by the long sentence prediction model to be "where to go for dinner", then the candidate long sentence generated from the candidate word and the suffix word is "we this evening where to go for dinner".
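In this one-shot variant the whole suffix is predicted at once rather than word by word; a minimal sketch follows, where predict_suffix is a placeholder for inference with the trained sequence-to-sequence model.

```python
def build_candidate_long_sentence(candidate_word, predict_suffix):
    """Step 401: predict the suffix word matching the candidate word.
    Step 402: splice the suffix after the candidate word."""
    return candidate_word + predict_suffix(candidate_word)

# Stand-in predictor reproducing the worked example above:
print(build_candidate_long_sentence(
    "we this evening", lambda prefix: " where to go for dinner"))
# -> "we this evening where to go for dinner"
```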
It should be understood that, in practical applications, before the user inputs the current input sequence through the input method application, the user may already have committed words to the text edit box in the corresponding user interface through the input method. In order to provide candidate long sentences to the user accurately, on the basis of any of the above embodiments, the candidate long sentence may be determined by combining these on-screen words with the current input sequence entered by the user in the input method application.
The method for providing candidate long sentences in the input method of the embodiment is further described below with reference to fig. 5.
Fig. 5 is a flowchart illustrating a method for providing candidate long sentences in an input method according to another embodiment of the present application.
As shown in fig. 5, the method for providing candidate long sentences in the input method may include:
Step 501, acquiring a current input sequence input by a user in an input method application.
Step 502, obtaining candidate words matched with the current input sequence.
It should be noted that the foregoing explanation on step 101 to step 102 also applies to step 501 to step 502 in this embodiment, and is not repeated here.
Step 503, acquiring the words on the screen before the current input sequence.
It should be noted that, in this embodiment, step 502 and step 503 are not required to be executed in any particular order.
As an exemplary implementation, the text information that the user has already entered into the text edit box in the user interface may be obtained; this text constitutes the on-screen words.
Step 504, acquiring candidate long sentences matched with the candidate words and the on-screen words by adopting a long sentence prediction model trained in advance.
It should be noted that, in different application scenarios, the manner of obtaining the candidate long sentences matched with the candidate words and the on-screen words by using the long sentence prediction model trained in advance is different, which is exemplified as follows:
in a first implementation scenario, the on-screen words and candidate words are used as the current input of the long sentence prediction model; inputting the current input into the long sentence prediction model to obtain the current output of the long sentence prediction model, wherein the current output comprises the next word after the current input; when the next word is determined not to be matched with the preset sentence termination information, updating the current input of the long sentence prediction model according to the current output and the current input, and acquiring the current output corresponding to the current input through the long sentence prediction model until the current output of the long sentence prediction model is matched with the preset sentence termination information; and when the current output of the long sentence prediction model is matched with the preset sentence termination information, generating a candidate long sentence matched with the candidate word according to the current input of the long sentence prediction model.
As an example, the candidate words may be spliced after the on-screen words, and the spliced result may be used as the current input of the long sentence prediction model.
For example, suppose the on-screen word is "we" and the candidate word corresponding to the current input sequence is "this evening". The on-screen word and the candidate word may be spliced to obtain the current input of the long sentence prediction model, "we this evening". After "we this evening" is input into the long sentence prediction model, if the current output of the model is "where to go", i.e., the next word appearing after "we this evening" is "where to go", it can be determined that the current output is not a sentence terminator, and the current output is spliced after the current input to obtain the updated current input "we this evening where to go". Suppose the current output of the long sentence prediction model is then "for dinner"; splicing again gives the updated current input "we this evening where to go for dinner". At this point the current output of the long sentence prediction model is "NULL", so the current input of the model, "we this evening where to go for dinner", is the candidate long sentence matching the candidate word.
In a second implementation scenario, suffix words matched with the candidate words and the displayed words are determined through a long-sentence prediction model, wherein the long-sentence prediction model learns to obtain the corresponding relation between the candidate words and the suffix words; and generating a candidate long sentence according to the candidate word and the suffix word.
Specifically, the candidate word may be spliced after the on-screen words to obtain a spliced word, the spliced word may be input into the long sentence prediction model to obtain the suffix word corresponding to the spliced word, and the suffix word may then be spliced after the spliced word to obtain the candidate long sentence.
For example, suppose the on-screen word is "we" and the candidate word corresponding to the current input sequence is "this evening". The on-screen word and the candidate word may be spliced to obtain the spliced word "we this evening". If the suffix word predicted after the spliced word by the long sentence prediction model is "where to go for dinner", i.e., "where to go for dinner" is the suffix word matching the on-screen word and the candidate word, then the candidate long sentence generated from them is "we this evening where to go for dinner".
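Reusing generate_long_sentence from the earlier sketch, splicing in the on-screen words only changes the initial input; the canned predictor below simply replays the outputs from the worked example and is not a real model.

```python
# Hypothetical stand-in for the trained model, replaying the example outputs.
outputs = iter([" where to go", " for dinner", "NULL"])
predict_next = lambda current_input: next(outputs)

spliced = "we" + " this evening"   # on-screen words + candidate word
print(generate_long_sentence(spliced, predict_next))
# -> "we this evening where to go for dinner"
```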
In practical applications, different users have different sentence-usage habits. To make the provided candidate long sentences better meet user needs, as an exemplary implementation, after the candidate long sentence matching the candidate word is obtained, the sentence-preference characteristics of the user may be obtained, the candidate long sentence adjusted in combination with these characteristics, and the adjusted candidate long sentence fed back to the terminal device.
Step 505, displaying the candidate words and the candidate long sentences on the input method application.
According to the method for providing candidate long sentences in an input method of this embodiment, the current input sequence entered by the user in the input method application is obtained, candidate words matching the current input sequence are obtained, the on-screen words preceding the current input sequence are obtained, and the candidate long sentences corresponding to the candidate words and the on-screen words are obtained by combining them with the pre-trained long sentence prediction model; the candidate long sentences are then displayed alongside the candidate words in the input method application. Matching candidate long sentences are thus obtained quickly and provided to the user, so the user can complete the input of a long sentence conveniently and quickly, which reduces the user's input cost and improves the user experience.
Fig. 6 is a schematic structural diagram of a device for providing candidate long sentences in an input method according to an embodiment of the present application.
As shown in fig. 6, the apparatus for providing candidate long sentences in the input method includes a first obtaining module 110, a second obtaining module 120, a third obtaining module 130, and a presenting module 140, where:
the first obtaining module 110 is configured to obtain a current input sequence input by a user in an input method application.
And a second obtaining module 120, configured to obtain a candidate word matching the current input sequence.
And a third obtaining module 130, configured to obtain, according to a long sentence prediction model trained in advance, a candidate long sentence matched with the candidate word.
And the display module 140 is used for displaying the candidate words and the candidate long sentences on the input method application.
In an embodiment of the present application, the third obtaining module 130 is specifically configured to: take the candidate words as the current input of the long sentence prediction model; input the current input into the long sentence prediction model to obtain the current output of the long sentence prediction model, wherein the current output includes the next word after the current input; when the next word is determined not to match the preset sentence termination information, update the current input of the long sentence prediction model according to the current output and the current input, and obtain the current output corresponding to the current input through the long sentence prediction model until the current output of the long sentence prediction model matches the preset sentence termination information; and when the current output of the long sentence prediction model matches the preset sentence termination information, generate a candidate long sentence matching the candidate word according to the current input of the long sentence prediction model.
In an embodiment of the present application, on the basis of the embodiment of the apparatus shown in fig. 6, as shown in fig. 7, the apparatus may include:
the fourth obtaining module 150 is configured to obtain corpus data, where the corpus data includes prefix sample words and suffix sample words corresponding to the prefix sample words, where the suffix sample words are words appearing after the prefix sample words.
The first training module 160 is configured to train the long sentence prediction model according to the prefix sample words and the suffix sample words.
In an embodiment of the present application, the third obtaining module 130 is specifically configured to: determine suffix words matching the candidate words through the long sentence prediction model, wherein the long sentence prediction model has learned the correspondence between candidate words and suffix words; and generate the candidate long sentence according to the candidate word and the suffix word.
In an embodiment of the present application, based on the embodiment of the apparatus shown in fig. 6, as shown in fig. 8, the apparatus further includes:
the fifth obtaining module 170 is configured to obtain corpus data, where the corpus data includes prefix sample words and suffix sample words corresponding to the prefix sample words, where the prefix sample words and the suffix sample words may form a long sentence.
And the second training module 180 is configured to train the long sentence prediction model according to the prefix sample words and the suffix sample words.
In an embodiment of the present application, on the basis of the embodiment of the apparatus shown in fig. 6, as shown in fig. 9, the apparatus may further include:
a sixth obtaining module 190, configured to obtain the on-screen word before the current input sequence.
The third obtaining module 130 is specifically configured to: and acquiring candidate long sentences matched with the candidate words and the words on the screen by adopting a long sentence prediction model trained in advance.
It should be understood that the structure of the sixth obtaining module 190 of the embodiment of the apparatus shown in fig. 9 may also be included in the embodiment of the apparatus shown in fig. 7 or fig. 8, and the implementation is not limited thereto.
It should be noted that the foregoing explanation of the embodiment of the method for providing a candidate long sentence in an input method is also applicable to the apparatus for providing a candidate long sentence in an input method of the embodiment, and the implementation principle is similar, and is not described herein again.
The device for providing candidate long sentences in an input method provided by the embodiment of the present application obtains the current input sequence entered by the user in the input method application, obtains candidate words matching the current input sequence, and obtains the corresponding candidate long sentences by combining the pre-trained long sentence prediction model with the candidate words; the candidate long sentences are then displayed alongside the candidate words in the input method application. The user can thus complete the input of a long sentence conveniently and quickly, which reduces the user's input cost and improves the user experience.
FIG. 10 is a schematic structural diagram of an electronic device according to one embodiment of the present application. The electronic device includes:
memory 1001, processor 1002, and computer programs stored on memory 1001 and executable on processor 1002.
The processor 1002 executes the program to implement the method for providing candidate long sentences in the input method provided in the above-described embodiment.
Further, the electronic device further includes:
a communication interface 1003 for communicating between the memory 1001 and the processor 1002.
A memory 1001 for storing computer programs that may be run on the processor 1002.
Memory 1001 may include high-speed RAM memory and may also include non-volatile memory (e.g., at least one disk memory).
The processor 1002 is configured to implement the method for providing the candidate long sentence in the input method according to the above embodiment when executing the program.
If the memory 1001, the processor 1002, and the communication interface 1003 are implemented independently, the communication interface 1003, the memory 1001, and the processor 1002 may be connected to each other through a bus and perform communication with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (enhanced Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 10, but this is not intended to represent only one bus or type of bus.
Optionally, in a specific implementation, if the memory 1001, the processor 1002, and the communication interface 1003 are integrated on one chip, the memory 1001, the processor 1002, and the communication interface 1003 may complete communication with each other through an internal interface.
The processor 1002 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits configured to implement embodiments of the present Application.
The present embodiment also provides a computer-readable storage medium having stored thereon a computer program, characterized in that the program, when executed by a processor, implements the method for providing a candidate long sentence in an input method as described above.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.

Although embodiments of the present application have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present application; variations, modifications, substitutions, and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (14)

1. A method for providing candidate long sentences in an input method is characterized by comprising the following steps:
acquiring a current input sequence input by a user in an input method application;
acquiring candidate words matched with the current input sequence;
acquiring a candidate long sentence matched with the candidate word according to a long sentence prediction model trained in advance;
and displaying the candidate words and the candidate long sentences on the input method application.
2. The method of claim 1, wherein obtaining the candidate long sentences matching the candidate words according to a long sentence prediction model trained in advance comprises:
taking the candidate words as the current input of the long sentence prediction model;
inputting the current input into the long sentence prediction model to obtain a current output of the long sentence prediction model, wherein the current output comprises a next word after the current input;
when the next word is determined not to be matched with preset statement termination information, updating the current input of the long sentence prediction model according to the current output and the current input, and acquiring the current output corresponding to the current input through the long sentence prediction model until the current output of the long sentence prediction model is matched with the preset statement termination information;
and when the current output of the long sentence prediction model is matched with preset sentence termination information, generating a candidate long sentence matched with the candidate word according to the current input of the long sentence prediction model.
3. The method of claim 2, further comprising, prior to said inputting the current input into said long sentence prediction model to obtain a current output of said long sentence prediction model:
acquiring training corpus data, wherein the training corpus data comprises prefix sample words and suffix sample words corresponding to the prefix sample words, and the suffix sample words are words appearing after the prefix sample words;
training the long sentence prediction model according to the prefix sample words and the suffix sample words.
4. The method of claim 1, wherein obtaining the candidate long sentences matching the candidate words according to a long sentence prediction model trained in advance comprises:
determining suffix words matched with the candidate words through the long sentence prediction model, wherein the long sentence prediction model is learned to obtain the corresponding relation between the candidate words and the suffix words;
and generating the candidate long sentence according to the candidate word and the suffix word.
5. The method of claim 4, further comprising, prior to said determining, by said long sentence prediction model, a suffix word that matches said candidate word:
acquiring training corpus data, wherein the training corpus data comprises prefix sample words and suffix sample words corresponding to the prefix sample words, and the prefix sample words and the suffix sample words can form long sentences;
training the long sentence prediction model according to the prefix sample words and the suffix sample words.
6. The method of any one of claims 1-5, further comprising:
acquiring the words on the screen before the current input sequence;
the obtaining of the candidate long sentence matched with the candidate word according to the long sentence prediction model trained in advance comprises:
and acquiring the candidate long sentences matched with the candidate words and the on-screen words by adopting a long sentence prediction model trained in advance.
7. An apparatus for providing candidate long sentences in an input method, the apparatus comprising:
the first acquisition module is used for acquiring a current input sequence input by a user in the input method application;
the second acquisition module is used for acquiring candidate words matched with the current input sequence;
the third acquisition module is used for acquiring a candidate long sentence matched with the candidate word according to a long sentence prediction model trained in advance;
and the display module is used for displaying the candidate words and the candidate long sentences on the input method application.
8. The apparatus of claim 7, wherein the third obtaining module is specifically configured to:
take the candidate words as the current input of the long sentence prediction model;
input the current input into the long sentence prediction model to obtain a current output of the long sentence prediction model, wherein the current output comprises a next word following the current input;
when it is determined that the next word does not match preset sentence termination information, update the current input of the long sentence prediction model according to the current output and the current input, and obtain, through the long sentence prediction model, the current output corresponding to the updated current input, until the current output of the long sentence prediction model matches the preset sentence termination information;
and when the current output of the long sentence prediction model matches the preset sentence termination information, generate the candidate long sentence matching the candidate words according to the current input of the long sentence prediction model.
9. The apparatus of claim 8, further comprising:
a fourth obtaining module, configured to obtain training corpus data, wherein the training corpus data comprises prefix sample words and suffix sample words corresponding to the prefix sample words, and the suffix sample words are words appearing after the prefix sample words;
and a first training module, configured to train the long sentence prediction model according to the prefix sample words and the suffix sample words.
10. The apparatus of claim 7, wherein the third obtaining module is specifically configured to:
determine, through the long sentence prediction model, suffix words that match the candidate words, wherein the long sentence prediction model has learned a correspondence between candidate words and suffix words;
and generate the candidate long sentence according to the candidate words and the suffix words.
11. The apparatus of claim 10, further comprising:
a fifth obtaining module, configured to obtain training corpus data, wherein the training corpus data comprises prefix sample words and suffix sample words corresponding to the prefix sample words, and the prefix sample words and the suffix sample words can be combined to form long sentences;
and a second training module, configured to train the long sentence prediction model according to the prefix sample words and the suffix sample words.
12. The apparatus of any one of claims 7-11, further comprising:
a sixth obtaining module, configured to obtain words already on the screen before the current input sequence;
the third obtaining module is specifically configured to:
and obtain, by using the long sentence prediction model trained in advance, the candidate long sentence matching both the candidate words and the on-screen words.
13. An electronic device, comprising:
a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the method for providing candidate long sentences in an input method according to any one of claims 1 to 6.
14. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method for providing candidate long sentences in an input method according to any one of claims 1 to 6.
CN201910927584.XA 2019-09-27 2019-09-27 Method and device for providing candidate long sentences in input method Active CN110673748B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910927584.XA CN110673748B (en) 2019-09-27 2019-09-27 Method and device for providing candidate long sentences in input method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910927584.XA CN110673748B (en) 2019-09-27 2019-09-27 Method and device for providing candidate long sentences in input method

Publications (2)

Publication Number Publication Date
CN110673748A true CN110673748A (en) 2020-01-10
CN110673748B CN110673748B (en) 2023-04-28

Family

ID=69079711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910927584.XA Active CN110673748B (en) 2019-09-27 2019-09-27 Method and device for providing candidate long sentences in input method

Country Status (1)

Country Link
CN (1) CN110673748B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030234821A1 (en) * 2002-03-25 2003-12-25 Agere Systems Inc. Method and apparatus for the prediction of a text message input
JP2007034871A (en) * 2005-07-29 2007-02-08 Sanyo Electric Co Ltd Character input apparatus and character input apparatus program
JP2011128958A (en) * 2009-12-18 2011-06-30 Chiteki Mirai:Kk Device, method and program for inputting sentence
CN102866782A (en) * 2011-07-06 2013-01-09 哈尔滨工业大学 Input method and input method system for improving sentence generating efficiency
CN105718070A (en) * 2016-01-16 2016-06-29 上海高欣计算机系统有限公司 Pinyin long sentence continuous type-in input method and Pinyin long sentence continuous type-in input system
US20190121533A1 (en) * 2016-02-06 2019-04-25 Shanghai Chule (Coo Tek) Information Technology Co., Ltd. Method and device for secondary input of text
CN105929979A (en) * 2016-06-29 2016-09-07 百度在线网络技术(北京)有限公司 Long-sentence input method and device
US20180302350A1 (en) * 2016-08-03 2018-10-18 Tencent Technology (Shenzhen) Company Limited Method for determining candidate input, input prompting method and electronic device
CN110187780A (en) * 2019-06-10 2019-08-30 北京百度网讯科技有限公司 Long text prediction technique, device, equipment and storage medium
CN110286778A (en) * 2019-06-27 2019-09-27 北京金山安全软件有限公司 Chinese deep learning input method and device and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
IUY: "A full test of Pinyin input method lexicon breadth and word-selection accuracy" (拼音输入法词库广度及选词精度全测试) *
袁哲: "Application of artificial intelligence in Pinyin input methods" (人工智能在拼音输入法中的应用) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113589954A (en) * 2020-04-30 2021-11-02 北京搜狗科技发展有限公司 Data processing method and device and electronic equipment
CN112052649A (en) * 2020-10-12 2020-12-08 腾讯科技(深圳)有限公司 Text generation method and device, electronic equipment and storage medium
CN112506359A (en) * 2020-12-21 2021-03-16 北京百度网讯科技有限公司 Method and device for providing candidate long sentences in input method and electronic equipment
CN112506359B (en) * 2020-12-21 2023-07-21 北京百度网讯科技有限公司 Method and device for providing candidate long sentences in input method and electronic equipment
CN112527127A (en) * 2020-12-23 2021-03-19 北京百度网讯科技有限公司 Training method and device for input method long sentence prediction model, electronic equipment and medium
CN112527127B (en) * 2020-12-23 2022-01-28 北京百度网讯科技有限公司 Training method and device for input method long sentence prediction model, electronic equipment and medium
CN113449515A (en) * 2021-01-27 2021-09-28 心医国际数字医疗系统(大连)有限公司 Medical text prediction method and device and electronic equipment
CN113655893A (en) * 2021-07-08 2021-11-16 华为技术有限公司 Word and sentence generation method, model training method and related equipment

Also Published As

Publication number Publication date
CN110673748B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
CN110673748B (en) Method and device for providing candidate long sentences in input method
CN110377716B (en) Interaction method and device for conversation and computer readable storage medium
US11386271B2 (en) Mathematical processing method, apparatus and device for text problem, and storage medium
CN107731228B (en) Text conversion method and device for English voice information
CN111061874B (en) Sensitive information detection method and device
CN107679032A (en) Voice changes error correction method and device
JP2013232193A (en) Technique for assisting user in textual input of names of entities to user device in multiple different languages
CN108304376B (en) Text vector determination method and device, storage medium and electronic device
CN110737774A (en) Book knowledge graph construction method, book recommendation method, device, equipment and medium
CN114757176A (en) Method for obtaining target intention recognition model and intention recognition method
CN112347787A (en) Method, device and equipment for classifying aspect level emotion and readable storage medium
CN110187780B (en) Long text prediction method, long text prediction device, long text prediction equipment and storage medium
CN113836303A (en) Text type identification method and device, computer equipment and medium
CN116797695A (en) Interaction method, system and storage medium of digital person and virtual whiteboard
CN110909768A (en) Method and device for acquiring marked data
CN114048288A (en) Fine-grained emotion analysis method and system, computer equipment and storage medium
EP3617907A1 (en) Translation device
CN113255331A (en) Text error correction method, device and storage medium
CN117349402A (en) Emotion cause pair identification method and system based on machine reading understanding
JP2021039727A (en) Text processing method, device, electronic apparatus, and computer-readable storage medium
CN114490967B (en) Training method of dialogue model, dialogue method and device of dialogue robot and electronic equipment
CN113569581B (en) Intention recognition method, device, equipment and storage medium
CN105068992B (en) A kind of search result display methods and device
CN111506715B (en) Query method and device, electronic equipment and storage medium
CN111045836B (en) Search method, search device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant