CN106649605B - Method and device for triggering promotion keywords


Info

Publication number
CN106649605B
Authority
CN
China
Prior art keywords
query
sequence
translation
promotion
word
Prior art date
Legal status
Active
Application number
CN201611065432.6A
Other languages
Chinese (zh)
Other versions
CN106649605A (en)
Inventor
童牧晨玄
陈志杰
官瀚举
曹莹
严春伟
张克丰
黄威
周鑫昌
王晨晨
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201611065432.6A
Publication of CN106649605A
Application granted
Publication of CN106649605B

Classifications

    • G06F16/313 Information retrieval: selection or weighting of terms for indexing
    • G06F40/216 Natural language analysis: parsing using statistical methods
    • G06F40/289 Natural language analysis: phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/30 Natural language analysis: semantic analysis
    • G06N3/04 Neural networks: architecture, e.g. interconnection topology

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Probability & Statistics with Applications (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Machine Translation (AREA)

Abstract

The invention provides a method and a device for triggering promotion keywords. The method comprises: acquiring a query input by a user; inputting the input sequence corresponding to the acquired query into a translation model to obtain an output sequence; and determining, from the output sequence, the promotion keyword triggered by the query. The translation model is obtained by pre-training as follows: queries and the clicked titles corresponding to them are obtained from a user click-behavior log as training data; an input sequence is built from each query and a target sequence from its corresponding clicked title, and a neural network model is trained on these pairs to obtain the translation model. The method and the device can determine promotion keywords that match the query semantically, improving the accuracy and recall of promotion-keyword triggering.

Description

Method and device for triggering promotion keywords
[ technical field ]
The invention relates to the technical field of search, in particular to a method and a device for triggering promotion keywords.
[ background of the invention ]
Search engines are the main way netizens obtain information, and their revenue comes mainly from search promotion, so search-engine companies must build complex search promotion delivery systems. Such a system displays promotions roughly as follows: 1) find promotion keywords relevant to the user's search terms; 2) find the advertisements that purchased those keywords and estimate their click-through rate; 3) select promotions to show the user according to a certain mechanism. How suitable bid keywords are triggered from the user's search terms determines, to a great extent, whether the displayed advertisements match the user's search intention. The triggering of promotion keywords is therefore one of the core technologies in a search promotion delivery system.
Most existing triggering techniques for promotion keywords are based on literal transformations of the search terms, for example synonym substitution. Such approaches cannot determine promotion keywords that match the query semantically, so both the accuracy and the recall of keyword triggering are low. A triggering method that matches promotion keywords to the query at the semantic level, thereby improving triggering accuracy and recall, is urgently needed.
[ summary of the invention ]
In view of the above, the invention provides a method and a device for triggering promotion keywords, so that promotion keywords matching a query are determined semantically and the accuracy and recall of keyword triggering are improved.
To solve this technical problem, the invention provides a method for triggering promotion keywords, comprising: acquiring a query input by a user; inputting the input sequence corresponding to the acquired query into a translation model to obtain an output sequence; and determining, from the output sequence, the promotion keyword triggered by the query. The translation model is obtained by pre-training as follows: queries and their corresponding clicked titles are obtained from a user click-behavior log as training data; an input sequence is built from each query, a target sequence is built from its clicked title, and a neural network model is trained to obtain the translation model.
According to a preferred embodiment of the present invention, inputting the obtained input sequence corresponding to the query into a translation model includes: performing word segmentation on the obtained query, and inputting a word sequence obtained after word segmentation into a translation model as an input sequence; the obtaining of the input sequence by using the query in the training data includes: and performing word segmentation processing on the query in the training data, and taking a word sequence obtained after the word segmentation processing as an input sequence.
According to a preferred embodiment of the present invention, the translation model translates the acquired input sequence corresponding to the query by using a beam-search technique and a preset keyword library to obtain an output sequence.
According to a preferred embodiment of the present invention, each output sequence obtained by translating the input sequence corresponding to the obtained query by the translation model satisfies the following conditions: the number of the output sequences is within beam-size, the probability of each output sequence is greater than or equal to a preset probability threshold value Q, and a promotion keyword consistent with the output sequences exists in the promotion keyword library or a promotion keyword with the output sequences as prefixes exists.
According to a preferred embodiment of the present invention, while translating the input sequence, the translation model performs the following processing at each layer: look up the candidate words produced by the layer in the promotion keyword library; if the library contains no keyword prefixed by the sequence formed from the already-determined words and a candidate word, prune that candidate word; sort the remaining candidate words by probability value and prune those ranked below the top beam-size; and, among the top beam-size words, prune those whose probability value is smaller than Q.
According to a preferred embodiment of the present invention, the neural network model is: the recurrent neural network model RNN.
According to a preferred embodiment of the present invention, determining the keyword triggered by the query using the output sequence includes: and judging whether a promotion keyword consistent with the output sequence exists in a promotion keyword library, if so, taking the promotion keyword consistent with the output sequence as a promotion keyword triggered by the query.
According to a preferred embodiment of the invention, the method further comprises: and the search result page corresponding to the query comprises a promotion result corresponding to the promotion keyword triggered by the query.
The invention provides a trigger device for popularizing keywords for solving the technical problem, which comprises: the acquisition unit is used for acquiring the query input by the user; the translation unit is used for inputting the acquired input sequence corresponding to the query into a translation model to obtain an output sequence; the determining unit is used for determining the promotion keyword triggered by the query by using the output sequence; the training unit is used for pre-training in the following modes to obtain the translation model: obtaining query and a clicked title corresponding to the query from a user click behavior log as training data; and obtaining an input sequence by utilizing the query in the training data, obtaining a target sequence by utilizing a clicked title corresponding to the query, and training a neural network model to obtain a translation model.
According to a preferred embodiment of the present invention, when inputting the acquired input sequence corresponding to the query into a translation model, the translation unit specifically: performs word segmentation on the acquired query and feeds the resulting word sequence into the translation model as the input sequence. When obtaining an input sequence from a query in the training data, the training unit specifically: performs word segmentation on the query in the training data and takes the resulting word sequence as the input sequence.
According to a preferred embodiment of the present invention, the translation model of the translation unit translates the acquired input sequence corresponding to the query by using a beam-search technique and a preset keyword library to obtain an output sequence.
According to a preferred embodiment of the present invention, each output sequence obtained by the translation model in the translation unit by translating the input sequence corresponding to the acquired query satisfies the following conditions: the number of output sequences is within beam-size; the probability of each output sequence is greater than or equal to a preset probability threshold Q; and the promotion keyword library contains either a promotion keyword identical to the output sequence or a promotion keyword of which the output sequence is a prefix.
According to a preferred embodiment of the present invention, while translating the input sequence, the translation model in the translation unit performs the following processing at each layer: look up the candidate words produced by the layer in the promotion keyword library; if the library contains no keyword prefixed by the sequence formed from the already-determined words and a candidate word, prune that candidate word; sort the remaining candidate words by probability value and prune those ranked below the top beam-size; and, among the top beam-size words, prune those whose probability value is smaller than Q.
According to a preferred embodiment of the present invention, the neural network model used by the translation model in the translation unit is: the recurrent neural network model RNN.
According to a preferred embodiment of the present invention, the determining unit, when determining the keyword triggered by the query by using the output sequence, includes: and judging whether a promotion keyword consistent with the output sequence exists in a promotion keyword library, if so, taking the promotion keyword consistent with the output sequence as a promotion keyword triggered by the query.
According to a preferred embodiment of the present invention, the apparatus further comprises: and the triggering unit is used for including the promotion result corresponding to the promotion keyword triggered by the query in the search result page corresponding to the query.
According to the technical scheme above, queries from the user click-behavior log and their corresponding clicked titles are selected as training data, a neural network model is trained to obtain a translation model, and once a user-input query is acquired, the promotion keyword it triggers is obtained through the pre-trained translation model. Promotion keywords semantically matched to the query can thus be acquired, and compared with literal-transformation methods the accuracy and recall are greatly improved.
[ description of the drawings ]
Fig. 1 is a flowchart of a method according to an embodiment of the present invention.
Fig. 2 is a diagram of an RNN network structure according to an embodiment of the present invention.
Fig. 3 is a diagram illustrating an apparatus according to an embodiment of the present invention.
Fig. 4 is a block diagram of an apparatus according to an embodiment of the present invention.
[ detailed description ]
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association between objects, and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the objects before and after it are in an "or" relationship.
The word "if" as used herein may be interpreted as "upon", "when", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined", "in response to determining", "when detecting (the stated condition or event)", or "in response to detecting (the stated condition or event)", depending on the context.
Fig. 1 is a flowchart of a method according to an embodiment of the present invention, and as shown in fig. 1, the method may mainly include the following steps:
in 101, a query input by a user is obtained.
In this step, the query input by the user may take any form: it may be a single word or a whole sentence.
In 102, the obtained input sequence corresponding to the query is input to a translation model, and an output sequence is obtained.
In this step, the obtained query is subjected to word segmentation, and a word sequence obtained after word segmentation is input into a translation model as an input sequence. The translation model is obtained by pre-training, and the neural network model is trained through the obtained training data, so that the translation model is obtained.
When the neural network model is trained, the training data used are pairs of a query and the clicked title corresponding to that query, obtained from the user click-behavior log. Preferably, the clicked title is the title of a clicked promotion result.
The user click-behavior log comes from the search engine's log system, which records user click behavior reflecting the user's search intention and the relevance between search results and promotions. The log system uses mature algorithms to identify valid users and valid click behavior, so the training data used in this step to train the translation model is valid click data generated by valid users; it may also be called high-quality user click data. A translation model trained on such data translates input queries more accurately and effectively.
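The mining of training pairs described above can be sketched in a few lines of Python. The record field names (`valid_user`, `valid_click`, `query`, `clicked_title`) and the filtering logic are illustrative assumptions, not names taken from the patent:

```python
def build_training_pairs(log_records):
    """Keep only valid-user, valid-click records and emit (query, title) pairs."""
    pairs = []
    for rec in log_records:
        # discard low-quality click behavior, keeping only effective clicks
        if not rec.get("valid_user") or not rec.get("valid_click"):
            continue
        pairs.append((rec["query"], rec["clicked_title"]))
    return pairs

# toy log with one valid and one invalid record
log = [
    {"query": "how to cultivate good habits of children",
     "clicked_title": "good habit cultivation of children",
     "valid_user": True, "valid_click": True},
    {"query": "spam query", "clicked_title": "spam title",
     "valid_user": False, "valid_click": True},
]
pairs = build_training_pairs(log)
```

Only the first record survives the filter, yielding a single (query, clicked title) pair suitable as a training example.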
When the training data is used for training the neural network model to obtain the translation model, word segmentation is carried out on query in the training data, a word sequence obtained after word segmentation is used as an input sequence, and a clicked title corresponding to the query in the training data is used as a target sequence for training.
In this step, the neural network model used is preferably a Recurrent Neural Network (RNN) model: the connections between hidden nodes in an RNN form a cycle, and its internal state is determined by the input sequence, which makes the architecture well suited to processing long text sequences. RNN is used to denote the recurrent neural network model in the following description.
The RNN network structure, shown in fig. 2, is divided into five parts from bottom to top that together complete the training from input sequence to target sequence; this training is in effect a translation from the query to its corresponding clicked title.
The first layer of the RNN structure is the input-side word-vector layer, which represents each word of the query as a word vector. For example, for the training query "how to cultivate good habits of children", the words "how", "cultivate", "children", "good", and "habit" obtained after segmentation are each represented as word vectors (the numbers in the boxes denote the word vectors), and the semantic similarity between words is measured by the cosine similarity between their word vectors. The second layer is a bidirectional RNN layer, which reads the input word-vector sequence in both the forward and backward directions and iteratively encodes the sequence information into an internal state; since the query carries rich semantic information, and different word vectors carry different pieces of it after the first layer, the bidirectional RNN layer encodes the semantic information of the input sequence into the internal state of the RNN. The third layer is an alignment layer, which weights the internal states of the bidirectional RNN with different weight values. The fourth layer is an RNN layer that, at each iteration, computes the probability of each word from the output of the alignment layer, the internal state of the previous iteration, and the word sequence translated so far. The fifth layer is the output word-vector layer, which represents the word vector of each word in the target sequence translated from the input sequence.
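Two pieces of this stack can be illustrated with a toy sketch, under stated assumptions: the cosine similarity used to compare word vectors in the first layer, and a scalar Elman-style recurrence that folds a sequence into an internal state in both directions, as the bidirectional layer does. Real models use vector states and learned weight matrices; `w_in` and `w_rec` here are arbitrary illustrative weights, not values from the patent:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors, the similarity measure of the first layer."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rnn_encode(word_vectors, w_in, w_rec):
    """Scalar Elman-style recurrence: fold a sequence into one internal state."""
    state = 0.0
    for x in word_vectors:
        state = math.tanh(w_in * x + w_rec * state)  # state depends on the whole prefix
    return state

def bidirectional_encode(word_vectors, w_in=0.5, w_rec=0.3):
    """Read the sequence forward and backward, as the bidirectional RNN layer does."""
    fwd = rnn_encode(word_vectors, w_in, w_rec)
    bwd = rnn_encode(list(reversed(word_vectors)), w_in, w_rec)
    return fwd, bwd

fwd, bwd = bidirectional_encode([0.2, 0.9, -0.4])
```

Because tanh is bounded, both states stay in (-1, 1), and reading the same sequence in opposite directions generally produces different states, which is why the two directions carry complementary information.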
For example, a query in the training data is "how to cultivate good habits of children", and its corresponding clicked title is "good habit cultivation of children". The training data are trained with the RNN structure, the goal being that, when the query is "how to cultivate good habits of children", the output target sequence is, as far as possible, "good habit cultivation of children". That is, the purpose of training with the RNN network structure is to translate each query in the training data into its corresponding clicked title as closely as possible.
As the above description of the RNN training structure shows, within the translation model the RNN structure mainly encodes and decodes sequence data: the bidirectional RNN layer encodes the input sequence, storing its semantic information in the internal state of the RNN, and the RNN layer decodes that internal state into the target sequence, completing the translation from input sequence to target sequence.
Therefore, in this step, after the obtained input sequence corresponding to the query is input to the translation model, the obtained output sequence includes the semantic information of the query.
In 103, the output sequence is used to determine the promotion keyword triggered by the query.
In this step, note that when the translation model was trained on the training data, word segmentation was performed on all queries and titles in the acquired training data, and the words obtained from all the training data form the candidate translation character set.
Having obtained the semantic information of the input query in step 102, the translation model assigns each character in the candidate translation character set a probability value according to that semantic information; that is, different input queries yield different probability values for the characters in the set. The input sequence is then translated into an output sequence according to these probability values, combined with the beam-search technique and the promotion keyword library.
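The per-character probability assignment can be pictured as a softmax over the candidate translation character set. The raw scores and candidate set below are invented for the sketch; a real decoder derives its scores from the encoded query semantics:

```python
import math

def softmax(scores):
    """Turn raw per-character scores into a probability distribution."""
    m = max(scores.values())  # subtract the max for numerical stability
    exps = {c: math.exp(s - m) for c, s in scores.items()}
    z = sum(exps.values())
    return {c: e / z for c, e in exps.items()}

# illustrative scores the decoder might assign given one query's semantics
probs = softmax({"Beijing": 2.0, "how": 1.0, "flower": 0.5, "EOF": -1.0})
```

A different input query would produce different scores and hence a different distribution over the same candidate set, which is exactly the behaviour described above.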
Triggering a promotion keyword can be regarded as translating the query into the promotion keyword, so the translation model can be applied to complete the triggering from query to keyword. There is a problem, however: unconstrained translation is free-form and its output space is open, so it cannot be guaranteed that an input query, after passing through the translation model, triggers a promotion keyword actually purchased by an advertiser; moreover, standard machine translation returns only a single optimal solution, which lowers the recall of keyword triggering.
To solve these problems, the invention adopts an improved beam-search technique when translating an input query with the translation model: a beam-size and a probability threshold Q are preset before translation. The probability threshold Q determines the lowest probability at which a candidate sequence is kept; its value is obtained through repeated experimental observation. The beam-size determines both the number of candidate sequences kept at each translation layer and the size of the final candidate-sequence set; its value must balance translation performance against the relevance of the translation results. In addition, translation with the translation model is combined with a promotion keyword library containing all promotion keywords purchased by advertisers.
Each output sequence produced by the translation model therefore satisfies the following conditions: the number of output sequences is within beam-size; the probability of each output sequence is greater than or equal to the preset probability threshold Q; and the promotion keyword library contains either a promotion keyword identical to the output sequence or a promotion keyword of which the output sequence is a prefix.
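A minimal check of the two per-sequence conditions might look like this (the beam-size cap applies to the candidate set as a whole, not to a single sequence); the keyword-library contents in the usage lines are hypothetical:

```python
def is_valid_output(seq, prob, keyword_library, q_threshold):
    """A sequence is kept only if its probability reaches Q and the library
    contains a keyword equal to it or prefixed by it."""
    if prob < q_threshold:
        return False
    return any(kw == seq or kw.startswith(seq) for kw in keyword_library)

library = {"beijing fresh flower express"}
ok = is_valid_output("beijing fresh", 0.5, library, 0.3)      # prefix of a keyword
rejected = is_valid_output("shanghai", 0.9, library, 0.3)     # no matching keyword
too_unlikely = is_valid_output("beijing fresh", 0.1, library, 0.3)  # below Q
```

Prefix matching is what lets a partially translated sequence survive pruning: it may still grow into a complete purchased keyword.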
While translating an input query, the translation model assigns each character in the candidate translation character set a probability value according to the query's semantic information, and selects beam-size candidate characters against the promotion keyword library according to those probabilities, so words that are not in the promotion keyword library cannot appear in the result produced by the translation model.
When the translation model is trained on the training data, a special marker character is automatically appended after each query and its corresponding clicked title to denote the end of translation; by training on such data, the translation model learns when the output words form a complete sentence and the translation process should end.
Accordingly, when a candidate sequence ends with the special marker character, the translation process for it is finished. At the end of translation, the beam-size candidate sequences whose probability exceeds the threshold Q are selected as the final candidates; these final candidates are the promotion keywords obtained by translating the input query.
For example, assume the translation model uses the candidate translation character set {a, b, c, d, EOF}; that is, during translation every candidate character of the target sequence is selected from this set. EOF denotes the special end-of-translation marker character. Set beam-size = 2 and probability threshold Q = 0.3, and denote the final candidate-sequence set by R.
When translating the first character of an input query, the translation model assigns each character in the candidate translation character set a probability of being the first character, according to the query's semantic information: P(a) is the probability that character a starts the translation result, P(b) the probability that character b does, and so on. The probabilities are sorted in descending order. If P(a) is largest, the promotion keyword library is first searched for a keyword prefixed by a; if one exists, a becomes a candidate first character, otherwise a is pruned. If P(b) ranks next, the library is searched for a keyword beginning with b, and b is kept or pruned likewise. Following this selection rule, beam-size candidate first characters are finally selected and the remaining characters are pruned; each of the beam-size candidates must also have a probability of at least the preset threshold Q, and any candidate first character with probability below Q is pruned. Since beam-size is set to 2, two characters are selected as candidate first characters; suppose they are a and b.
For the second translated character, the translation model computes the probabilities of all sequences beginning with a or b: {aa, ab, ac, ad, aEOF, ba, bb, bc, bd, bEOF}. As with the first character, the sequences are sorted in descending order of probability, and the promotion keyword library is searched for keywords prefixed by each sequence; a sequence is kept if such a keyword exists and pruned otherwise. Finally, the beam-size candidate sequences with probability at least Q are selected for further translation.
This translation process is repeated. When a candidate sequence ends with EOF, its translation is considered finished, and if the probability of the candidate sequence exceeds the threshold Q, the sequence is placed into the final candidate-sequence set R. When the size of R equals beam-size, the translation ends.
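The toy walk-through above ({a, b, c, d, EOF}, beam-size 2, Q = 0.3) can be sketched as a library-constrained beam search. Here `step_probs` stands in for the translation model's per-step distribution, and both the probability table and the keyword library are invented so the example terminates:

```python
def constrained_beam_search(step_probs, keyword_library, beam_size, q):
    """Beam search pruned by a promotion-keyword library, as in the text's
    toy example: keep at most beam_size candidates per layer, drop any
    candidate below probability q or not a prefix of a library keyword."""
    beams = [("", 1.0)]
    finished = []
    while beams and len(finished) < beam_size:
        expanded = []
        for prefix, p in beams:
            for ch, cp in step_probs(prefix).items():
                seq, prob = prefix + ch, p * cp
                if ch == "EOF":
                    # translation of this candidate ends; keep it only if it
                    # clears Q and is itself a purchased keyword
                    if prob >= q and prefix in keyword_library:
                        finished.append((prefix, prob))
                    continue
                # prune sequences that no library keyword starts with
                if not any(kw.startswith(seq) for kw in keyword_library):
                    continue
                expanded.append((seq, prob))
        expanded.sort(key=lambda t: -t[1])  # descending by probability
        beams = [(s, p) for s, p in expanded[:beam_size] if p >= q]
    return finished

library = {"ab", "ba"}

def probs(prefix):
    """Hand-built stand-in for the model's per-step character distribution."""
    table = {
        "":   {"a": 0.5,  "b": 0.4,  "c": 0.05, "d": 0.04, "EOF": 0.01},
        "a":  {"b": 0.9,  "a": 0.04, "c": 0.03, "d": 0.02, "EOF": 0.01},
        "b":  {"a": 0.9,  "b": 0.04, "c": 0.03, "d": 0.02, "EOF": 0.01},
        "ab": {"EOF": 0.95, "a": 0.02, "b": 0.01, "c": 0.01, "d": 0.01},
        "ba": {"EOF": 0.95, "a": 0.02, "b": 0.01, "c": 0.01, "d": 0.01},
    }
    return table[prefix]

results = constrained_beam_search(probs, library, beam_size=2, q=0.3)
```

With this table, c and d are pruned at the first layer because no library keyword starts with them, and both surviving sequences finish with EOF above Q, so R ends up holding the two purchased keywords "ab" and "ba".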
For another example, the user inputs the query "how to select Beijing fresh flower express delivery". Word segmentation yields "how to", "select", "Beijing", "fresh flower", and "express delivery"; these words are input into the translation model, which learns the semantics of the input query and then begins translating.
The translation model first determines the first output word by assigning each character in the candidate translation character set a probability of being the first character, according to the query's semantics. Suppose "how", "where", and "Beijing" receive the highest probabilities, but the promotion keyword library contains no keyword beginning with "how" or "where" while it does contain keywords beginning with "Beijing"; then the first two words are abandoned and "Beijing" is selected as the first character. The model then determines the second output word, i.e. which word suitably follows "Beijing", again assigning each candidate character a probability of being output after "Beijing". Suppose "fresh flower" and "flower" receive the highest probabilities, and the library contains keywords beginning with "Beijing fresh flower" or "Beijing flower"; then both are selected as second-word candidates, and so on. After the translation model has selected the three words "Beijing", "fresh flower", and "express delivery", it finds when selecting a fourth word that EOF has the largest probability, so the translation process ends. The model then checks whether the promotion keyword "Beijing fresh flower express delivery" exists in the promotion keyword library; if it does, and its probability exceeds the set threshold Q, the keyword is output as a candidate promotion keyword.
After the promotion keyword triggered by the query has been determined from the output sequence, the promotion corresponding to that keyword can be obtained, and the search result page returned for the user's query includes the promotion result corresponding to the triggered promotion keyword.
With the technical solution provided by the invention, queries in the user click behavior log and the clicked titles corresponding to them are selected as training data, a neural network model is trained to obtain the translation model, and once a user's query is obtained, the promotion keyword it triggers is produced by the pre-trained translation model. In this way promotion keywords that semantically match the query can be obtained; compared with literal transformation, the accuracy and recall of promotion keyword triggering are greatly improved.
The following is a detailed description of the structure of the apparatus according to the embodiment of the present invention. As shown in fig. 3, the apparatus mainly comprises: an acquisition unit 31, a translation unit 32, a determination unit 33, a training unit 34, and a triggering unit 35.
The acquiring unit 31 is configured to acquire a query input by a user.
In this step, the query input by the user may take any form: it may be a single word or a complete sentence.
And the translation unit 32 is configured to input the acquired input sequence corresponding to the query into a translation model to obtain an output sequence.
The translation unit 32 performs word segmentation on the obtained query, takes the word sequence obtained after segmentation as the input sequence, and then inputs that sequence into the translation model for translation.
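As an illustration of this segmentation step, the sketch below uses a toy forward-maximum-match segmenter; the actual segmenter and dictionary used by the translation unit are not specified in the text, and the dictionary here is invented for the example query.

```python
def segment(text, vocab, max_len=4):
    """Greedy forward maximum matching: at each position take the longest
    dictionary word, falling back to a single character."""
    tokens, i = [], 0
    while i < len(text):
        for length in range(min(max_len, len(text) - i), 0, -1):
            piece = text[i:i + length]
            if length == 1 or piece in vocab:
                tokens.append(piece)
                i += length
                break
    return tokens

# Toy dictionary covering part of the example query:
# "北京鲜花快递" ("Beijing fresh-flower express delivery").
vocab = {"北京", "鲜花", "快递"}
tokens = segment("北京鲜花快递", vocab)
```

The resulting word sequence (here `["北京", "鲜花", "快递"]`) is what would be fed to the translation model as the input sequence.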
The translation model used in the translation unit 32 is trained in advance by the training unit 34, and the training unit 34 trains the neural network model by using the acquired training data, thereby obtaining the translation model.
The training data used by the training unit 34 to train the neural network model are tuples, each consisting of a query and the clicked title corresponding to that query, obtained from the user click behavior log. The clicked title is preferably the title of a clicked promotion result.
The user click behavior log comes from the log system of a search engine, which records user click behavior reflecting the user's search intention and the correlation between search results and promotions. Because the log system applies mature algorithms to verify the validity of users and of their click behavior, the training data used to train the translation model consist of valid clicks generated by valid users, and may also be called high-quality user click behavior data. A translation model trained on such data translates input queries more accurately and effectively.
When the training unit 34 trains the neural network model with the training data to obtain the translation model, it performs word segmentation on each query in the training data, takes the resulting word sequence as the input sequence, and takes the clicked title corresponding to the query as the target sequence for training.
The neural network model used here is preferably a Recurrent Neural Network (RNN) model: the connections between hidden nodes in its network structure form a cycle, and its internal state is determined by the input sequence, which makes the RNN network structure well suited to processing long text sequences.
The RNN network structure is shown in fig. 2. It is divided into 5 parts from bottom to top and carries out the training process from input sequence to target sequence, which is in effect a translation process from a query to the clicked title corresponding to it.
The first layer of the RNN structure is the word vector layer at the input end, which represents each word of the query as a word vector. For example, for the query "how to cultivate good habits of children" in the training data, the words "how", "cultivate", "children", "good", and "habit" obtained after segmentation are each represented as a word vector; the numbers in the boxes denote the word vectors, and the semantic similarity between words is measured by computing the cosine similarity between their word vectors.
The second layer is a bidirectional RNN layer, which reads the input word vector sequence in both the forward and backward directions and iteratively encodes the information it reads into an internal state. Since the query in the training data carries rich semantic information, after the first layer has represented it as word vectors, different word vectors carry different semantic information; the bidirectional RNN layer encodes this semantic information of the input sequence into the internal state of the RNN.
The third layer is an alignment layer, which weights the internal states of the bidirectional RNN with different weight values.
The fourth layer is an RNN layer that iteratively calculates the probability of each word at the current iteration from the output of the alignment layer, the internal state of the previous iteration, and the word sequence translated so far.
The fifth layer is the output word vector layer, which represents the word vector of each word in the target sequence translated from the input sequence.
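The cosine similarity used by the word vector layer can be sketched as follows. The 3-dimensional vectors are invented for illustration; real word vectors typically have tens or hundreds of dimensions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: math.sqrt(sum(x * x for x in w))
    return dot / (norm(u) * norm(v))

# Invented 3-d vectors: "children" and "kids" should score higher
# than "children" and "express", reflecting semantic closeness.
children = [0.9, 0.1, 0.2]
kids = [0.85, 0.15, 0.25]
express = [0.05, 0.9, 0.1]
```

Two vectors pointing in nearly the same direction score close to 1, so `cosine(children, kids)` exceeds `cosine(children, express)`.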
For example, a query in the training data is "how to cultivate good habits of children", and the clicked title corresponding to it is "good habit cultivation of children". When this pair is used to train the RNN structure, the goal is that, as far as possible, when the input query is "how to cultivate good habits of children", the output target sequence is "good habit cultivation of children". That is, the purpose of training with the RNN network structure is to translate each query in the training data into, as nearly as possible, the clicked title corresponding to it.
As the above description of the RNN training structure shows, the RNN network structure serves mainly to encode and decode sequence data in the translation model: the bidirectional RNN layer encodes the input sequence, folding its semantic information into the internal state of the RNN, and the decoder RNN layer decodes that internal state, which contains the semantic information, into the target sequence, thereby completing the translation from input sequence to target sequence.
Therefore, in this step, when the translation unit 32 inputs the input sequence corresponding to the obtained query into the translation model, the resulting output sequence carries the semantic information of the query.
A determining unit 33, configured to determine, by using the output sequence, a promotion keyword triggered by the query.
When the translation model is trained with the training data, word segmentation is applied to every query and every title in the training data, and the words obtained from all of the training data together form the candidate translation character set.
After the translation unit 32 has obtained the semantic information of the input query, the translation model in the translation unit 32 assigns a probability value to each character in the candidate translation character set according to that semantic information; that is, different input queries lead to different probability assignments over the candidate translation character set. The input sequence is then translated into an output sequence according to these probability values, combining the beam-search technique with the promotion keyword library.
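The text does not specify how these probability values are produced; in sequence-to-sequence models they are conventionally obtained by applying a softmax over the decoder's raw scores for the candidate translation character set. A minimal sketch under that assumption, with invented scores:

```python
import math

def softmax(scores):
    """Normalize raw decoder scores over the candidate translation
    character set into a probability distribution (max-subtracted
    for numerical stability)."""
    m = max(scores.values())
    exps = {ch: math.exp(s - m) for ch, s in scores.items()}
    z = sum(exps.values())
    return {ch: e / z for ch, e in exps.items()}

# Illustrative scores for the candidate set {a, b, c, d, EOF}.
probs = softmax({"a": 2.0, "b": 1.5, "c": 0.2, "d": -1.0, "EOF": 0.0})
```

The resulting values sum to 1 and preserve the ordering of the scores, so a higher-scoring character receives a higher probability of being output next.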
Triggering a promotion keyword can be regarded as a translation from the query to the promotion keyword, so the translation model can be applied to complete this triggering. There is, however, a problem: translation with such a model is unconstrained, and the output sequence lives in an open space, so there is no guarantee that an input query, after passing through the translation model, triggers a promotion keyword actually purchased by an advertiser. Moreover, standard machine translation returns only a single best solution, which reduces the recall of promotion keyword triggering.
To solve these problems, the invention adopts an improved beam-search technique when translating the input query with the translation model: a beam-size and a probability threshold Q are preset before translation. The probability threshold Q fixes the lowest probability a candidate sequence may have and is obtained from repeated experimental observation; the beam-size fixes the number of candidate sequences selected at each layer of translation and the size of the final candidate sequence set, and must be chosen by balancing translation performance against the relevance of the translation results. In addition, when translating the input query, the translation model works together with the promotion keyword library, which contains all promotion keywords purchased by advertisers.
Every output sequence produced by the translation model therefore satisfies the following conditions: the number of output sequences is within beam-size, the probability of each output sequence is greater than or equal to the preset probability threshold Q, and the promotion keyword library contains either a promotion keyword identical to the output sequence or a promotion keyword having the output sequence as a prefix.
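These three conditions can be sketched as a filtering step over scored candidate sequences; the promotion keyword library and candidates below are toy stand-ins.

```python
def filter_outputs(candidates, keyword_lib, beam_size, q):
    """Keep at most beam_size output sequences whose probability is >= q
    and that match a library keyword exactly or are a prefix of one."""
    def allowed(seq):
        return any(kw == seq or kw.startswith(seq) for kw in keyword_lib)
    kept = [(seq, p) for seq, p in candidates if p >= q and allowed(seq)]
    kept.sort(key=lambda sp: sp[1], reverse=True)
    return kept[:beam_size]

keyword_lib = {"beijing flower express", "beijing hotel"}
candidates = [("beijing flower express", 0.6),  # exact keyword: kept
              ("beijing", 0.5),                 # prefix of a keyword: kept
              ("shanghai flower", 0.7),         # not in library: pruned
              ("beijing hotel", 0.2)]           # below Q: pruned
result = filter_outputs(candidates, keyword_lib, beam_size=2, q=0.3)
```

Only sequences that can still lead to a purchased promotion keyword, and that clear the probability threshold, survive into the beam.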
While translating the input query, the translation model assigns a probability value to each character in the candidate translation character set according to the semantic information of the query, and beam-size candidate characters are selected against the promotion keyword library according to these probability values, so that no word outside the promotion keyword library can appear in the result produced by the translation model.
When the translation model is trained with the training data, a special marker character denoting the end of translation is automatically appended to each query and its corresponding clicked title; by training on such data, the translation model learns when the output words form a complete sentence and the translation process should end.
Accordingly, when a candidate sequence ends with the special marker character, the translation process for that sequence is finished. At the end of translation, the beam-size candidate sequences whose probability is greater than the threshold Q are selected as the final candidates; these are the promotion keywords obtained by translating the input query.
For example, assume the translation model uses the candidate translation character set {a, b, c, d, EOF}; that is, during translation every candidate character of the target sequence is selected from this set. EOF is the special marker character for the end of translation. Set beam-size = 2 and the probability threshold Q = 0.3, and denote the final candidate sequence set by R.
When translating the initial character of the input query, the translation model assigns, according to the semantic information of the query, a probability value to each character in the candidate translation character set as a possible initial character: p(a) is the probability that character a is the initial character of the translation result, p(b) the probability for character b, and so on. The probability values are sorted in descending order. If p(a) is largest, the promotion keyword library is first searched for a promotion keyword with a as a prefix; if one exists, a is taken as a candidate first character, otherwise a is pruned. If p(b) is next largest, the library is likewise searched for b; if a matching keyword exists, b is taken as a candidate first character, otherwise b is pruned. Following this rule, beam-size candidate first characters are finally selected and the remaining characters are pruned, with the guarantee that each candidate's probability value is greater than or equal to the preset threshold Q; candidates with probability below Q are pruned. Since beam-size is set to 2, two characters are selected as candidate first characters; assume they are a and b.
For the second character, the translation model computes the probabilities of all sequences beginning with a or b: {aa, ab, ac, ad, aEOF, ba, bb, bc, bd, bEOF}. As with the first character, the sequences are sorted in descending order of probability, and the promotion keyword library is searched for a keyword having each sequence as a prefix; if one exists, the sequence is kept, otherwise it is pruned. Finally, the beam-size candidate sequences with probability values greater than or equal to Q are selected for continued translation.
This translation process is repeated. When a candidate sequence ends with EOF, translation of that sequence is considered finished, and if its probability value is greater than the probability threshold Q, the sequence is placed into the final candidate sequence set R. When the size of R equals beam-size, the translation ends.
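The worked example above can be sketched end to end. The per-prefix character distributions below are a fixed lookup table standing in for the translation model, and the keyword library is a toy; only the control flow (threshold pruning, library prefix pruning, EOF handling, beam-size cap) reflects the described technique.

```python
EOF = "EOF"

def beam_search(step_probs, keyword_lib, beam_size=2, q=0.3):
    """Improved beam search: grow prefixes character by character, prune
    candidates below Q or not extendable to a library keyword, and move
    sequences that emit EOF into the final candidate set R."""
    beams = [("", 1.0)]
    final_r = []
    while beams and len(final_r) < beam_size:
        expansions = []
        for seq, p in beams:
            for ch, cp in step_probs(seq).items():
                expansions.append((seq, ch, p * cp))
        expansions.sort(key=lambda e: e[2], reverse=True)
        beams = []
        for seq, ch, p in expansions:
            if p < q:
                continue                      # below threshold Q: prune
            if ch == EOF:
                if seq in keyword_lib:        # complete promotion keyword
                    final_r.append((seq, p))
                continue
            grown = seq + ch
            if any(kw.startswith(grown) for kw in keyword_lib):
                if len(beams) < beam_size:    # keep beam-size candidates
                    beams.append((grown, p))
    return final_r[:beam_size]

# Toy per-prefix distributions over the candidate set {a, b, c, EOF}.
TABLE = {
    "":   {"a": 0.5, "b": 0.4, "c": 0.1},
    "a":  {"a": 0.1, "b": 0.8, EOF: 0.1},
    "b":  {"a": 0.6, EOF: 0.4},
    "ab": {EOF: 0.9, "c": 0.1},
}
keyword_lib = {"ab", "ba"}
result = beam_search(lambda prefix: TABLE.get(prefix, {EOF: 1.0}),
                     keyword_lib)
```

With these numbers, "ab" (probability 0.5 × 0.8 × 0.9 = 0.36) is the only sequence that both ends with EOF and clears Q, so R contains just that keyword.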
For another example, suppose the query input by the user is "how to select Beijing fresh flower express delivery". Word segmentation of the query yields "how to", "select", "Beijing", "fresh flower", and "express delivery". These words are input into the translation model, which captures the semantics of the input query and then begins to translate.
The translation model first determines the first word to output: according to the semantics of the input query, it assigns a probability value to each character in the candidate translation character set, where the probability value represents the likelihood of that character being the first character. Suppose the three words "how", "where", and "Beijing" receive the highest probability values, but no promotion keyword beginning with "how" or "where" is found in the promotion keyword library, while promotion keywords beginning with "Beijing" are found; then the first two words are discarded and "Beijing" is selected as the first character. The translation model next determines the second word to output, i.e., which word should follow "Beijing". It again assigns a probability value to each character in the candidate translation character set, this time representing the likelihood of that character being output after "Beijing". Suppose "fresh flower" and "flower" receive the highest probability values, and promotion keywords beginning with "Beijing fresh flower" or "Beijing flower" are found in the promotion keyword library; then both are kept as candidate second words, and so on. After the translation model has selected the three words "Beijing", "fresh flower", and "express", it finds, when selecting a fourth word, that the probability of "EOF" is the largest, so the translation process ends. At this time, the model checks whether the promotion keyword "Beijing fresh flower express" exists in the promotion keyword library; if it does, and the probability of the sequence is greater than the set probability threshold Q, the keyword is output as a candidate promotion keyword.
After the determining unit 33 has determined the promotion keyword triggered by the query from the output sequence, the promotion corresponding to that keyword can be obtained, and the triggering unit 35 includes the promotion result corresponding to the triggered promotion keyword in the search result page returned for the user's query.
The methods and apparatus described above and provided by embodiments of the present invention may be embodied in a computer program configured to be executed by a device. As shown in fig. 4, the device may include one or more processors, together with memory and one or more programs, where the one or more programs are stored in the memory and executed by the one or more processors to implement the method flows and/or device operations illustrated in the above embodiments of the invention. For example, the method flow executed by the one or more processors may include:
acquiring a query input by a user;
inputting the obtained input sequence corresponding to the query into a translation model to obtain an output sequence;
determining a promotion keyword triggered by the query by using the output sequence;
the translation model is obtained by adopting the following pre-training mode:
obtaining query and a clicked title corresponding to the query from a user click behavior log as training data;
and obtaining an input sequence by utilizing the query in the training data, obtaining a target sequence by utilizing a clicked title corresponding to the query, and training a neural network model to obtain a translation model.
With the technical solution provided by the invention, queries in the user click behavior log and the clicked titles corresponding to them are selected as training data, a neural network model is trained to obtain the translation model, and once a user's query is obtained, the promotion keyword it triggers is produced by the pre-trained translation model. In this way promotion keywords that semantically match the query can be obtained; compared with literal transformation, the accuracy and recall of promotion keyword triggering are greatly improved.
The method for triggering promotion keywords is applied in a search promotion delivery system: a query input by a user can trigger the promotion keywords in the promotion keyword library that semantically match it, and the search promotion delivery system then displays the promotions corresponding to those keywords, so that the displayed promotions better match the user's search intention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute some steps of the methods according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (16)

1. A method for triggering promotion keywords is characterized by comprising the following steps:
acquiring a query input by a user;
inputting the obtained input sequence corresponding to the query into a translation model, and translating the obtained input sequence corresponding to the query by the translation model by using a preset popularization keyword library to obtain an output sequence;
determining a promotion keyword triggered by the query by using the output sequence;
the translation model is obtained by adopting the following pre-training mode:
obtaining query and a clicked title corresponding to the query from a user click behavior log as training data;
and obtaining an input sequence by utilizing the query in the training data, obtaining a target sequence by utilizing a clicked title corresponding to the query, and training a neural network model to obtain a translation model.
2. The method of claim 1, wherein inputting the input sequence corresponding to the obtained query into a translation model comprises:
performing word segmentation on the obtained query, and inputting a word sequence obtained after word segmentation into a translation model as an input sequence;
the obtaining of the input sequence by using the query in the training data includes: and performing word segmentation processing on the query in the training data, and taking a word sequence obtained after the word segmentation processing as an input sequence.
3. The method of claim 1, wherein the translation model further translates the obtained input sequence corresponding to the query by using a beam-search technique to obtain an output sequence.
4. The method according to claim 3, wherein the translation model satisfies the following condition for each output sequence translated from the input sequence corresponding to the obtained query:
the number of the output sequences is within beam-size, the probability of each output sequence is greater than or equal to a preset probability threshold value Q, and a promotion keyword consistent with the output sequences exists in the promotion keyword library or a promotion keyword with the output sequences as prefixes exists.
5. The method of claim 4, wherein the translation model performs the following at each layer during the translation of the input sequence:
searching candidate words obtained by translation of each layer in a promotion keyword library;
if the word with the sequence formed by the determined word and the candidate word in each layer as the prefix does not exist in the promotion keyword library, pruning the candidate word;
sorting the rest candidate words according to the probability value, and pruning words after the beam-size words are ranked;
and pruning words with probability values smaller than Q from the words with the probability values within beam-size.
6. The method of claim 1, wherein the neural network model is: the recurrent neural network model RNN.
7. The method of claim 1, wherein determining the query-triggered promotional keyword using the output sequence comprises:
and judging whether a promotion keyword consistent with the output sequence exists in a promotion keyword library, if so, taking the promotion keyword consistent with the output sequence as a promotion keyword triggered by the query.
8. The method of any one of claims 1 to 7, further comprising:
and the search result page corresponding to the query comprises a promotion result corresponding to the promotion keyword triggered by the query.
9. A trigger device for promoting a keyword, the device comprising:
the acquisition unit is used for acquiring the query input by the user;
the translation unit is used for inputting the acquired input sequence corresponding to the query into a translation model, and the translation model of the translation unit translates the acquired input sequence corresponding to the query by using a preset popularization keyword library to obtain an output sequence;
the determining unit is used for determining the promotion keyword triggered by the query by using the output sequence;
the training unit is used for pre-training in the following modes to obtain the translation model:
obtaining query and a clicked title corresponding to the query from a user click behavior log as training data;
and obtaining an input sequence by utilizing the query in the training data, obtaining a target sequence by utilizing a clicked title corresponding to the query, and training a neural network model to obtain a translation model.
10. The apparatus according to claim 9, wherein when the translation unit inputs the acquired input sequence corresponding to the query into a translation model, the translation unit specifically executes:
performing word segmentation on the obtained query, and inputting a word sequence obtained after word segmentation into a translation model as an input sequence;
when the training unit obtains an input sequence by using a query in training data, the method specifically comprises the following steps: and the translation unit carries out word segmentation processing on the query in the training data, and a word sequence obtained after the word segmentation processing is used as an input sequence.
11. The apparatus according to claim 9, wherein the translation model of the translation unit further translates the obtained input sequence corresponding to the query by using a beam-search technique to obtain an output sequence.
12. The apparatus according to claim 11, wherein the translation model in the translation unit satisfies the following condition for each output sequence translated from the input sequence corresponding to the obtained query:
the number of the output sequences is within beam-size, the probability of each output sequence is greater than or equal to a preset probability threshold value Q, and a promotion keyword consistent with the output sequences exists in the promotion keyword library or a promotion keyword with the output sequences as prefixes exists.
13. The apparatus according to claim 12, wherein the translation model in the translation unit performs the following processing in each layer during the translation of the input sequence:
searching candidate words obtained by translation of each layer in a promotion keyword library;
if the word with the sequence formed by the determined word and the candidate word in each layer as the prefix does not exist in the promotion keyword library, pruning the candidate word;
sorting the rest candidate words according to the probability value, and pruning words after the beam-size words are ranked;
and pruning words with probability values smaller than Q from the words with the probability values within beam-size.
14. The apparatus of claim 9, wherein the neural network model used by the translation model in the translation unit is: the recurrent neural network model RNN.
15. The apparatus according to any one of claims 9 to 14, wherein the determining unit, in determining the query-triggered promotion keyword using the output sequence, comprises:
and judging whether a promotion keyword consistent with the output sequence exists in a promotion keyword library, if so, taking the promotion keyword consistent with the output sequence as a promotion keyword triggered by the query.
16. The apparatus of any of claims 9 to 14, further comprising:
and the triggering unit is used for including the promotion result corresponding to the promotion keyword triggered by the query in the search result page corresponding to the query.
CN201611065432.6A 2016-11-28 2016-11-28 Method and device for triggering promotion keywords Active CN106649605B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611065432.6A CN106649605B (en) 2016-11-28 2016-11-28 Method and device for triggering promotion keywords

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611065432.6A CN106649605B (en) 2016-11-28 2016-11-28 Method and device for triggering promotion keywords

Publications (2)

Publication Number Publication Date
CN106649605A CN106649605A (en) 2017-05-10
CN106649605B true CN106649605B (en) 2020-09-29

Family

ID=58811867

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611065432.6A Active CN106649605B (en) 2016-11-28 2016-11-28 Method and device for triggering promotion keywords

Country Status (1)

Country Link
CN (1) CN106649605B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109815474B (en) * 2017-11-20 2022-09-23 深圳市腾讯计算机系统有限公司 Word sequence vector determination method, device, server and storage medium
CN108319585B (en) * 2018-01-29 2021-03-02 北京三快在线科技有限公司 Data processing method and device, electronic equipment and computer readable medium
CN111597800B (en) * 2019-02-19 2023-12-12 百度在线网络技术(北京)有限公司 Method, device, equipment and storage medium for obtaining synonyms
CN111177551B (en) 2019-12-27 2021-04-16 百度在线网络技术(北京)有限公司 Method, device, equipment and computer storage medium for determining search result
CN111221952B (en) * 2020-01-06 2021-05-14 百度在线网络技术(北京)有限公司 Method for establishing sequencing model, method for automatically completing query and corresponding device

Citations (5)

Publication number Priority date Publication date Assignee Title
CA2816460A1 (en) * 2010-10-28 2012-05-03 Adon Network Methods and apparatus for dynamic content
CN105159890A (en) * 2014-06-06 2015-12-16 Google Inc. Generating representations of input sequences using neural networks
CN105808685A (en) * 2016-03-02 2016-07-27 Tencent Technology (Shenzhen) Co., Ltd. Promotion information pushing method and device
CN105808541A (en) * 2014-12-29 2016-07-27 Alibaba Group Holding Ltd. Information matching processing method and apparatus
CN105975558A (en) * 2016-04-29 2016-09-28 Baidu Online Network Technology (Beijing) Co., Ltd. Method and device for establishing statement editing model as well as method and device for automatically editing statement

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080256069A1 (en) * 2002-09-09 2008-10-16 Jeffrey Scott Eder Complete Context(tm) Query System
CN102760142A (en) * 2011-04-29 2012-10-31 Beijing Baidu Netcom Science and Technology Co., Ltd. Method and device for extracting subject labels in search results for a search query
US9305050B2 (en) * 2012-03-06 2016-04-05 Sergey F. Tolkachev Aggregator, filter and delivery system for online context dependent interaction, systems and methods
CN103870505B (en) * 2012-12-17 2017-10-27 Alibaba Group Holding Ltd. Query word recommendation method and query word recommendation system
CN105912686A (en) * 2016-04-18 2016-08-31 Shanghai Zhendao Information Technology Co., Ltd. Search engine marketing bid method and system based on machine learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Character-level incremental speech recognition with recurrent neural networks; Kyuyeon Hwang, et al.; 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 2016-03-25; pp. 5335-5338 *

Also Published As

Publication number Publication date
CN106649605A (en) 2017-05-10

Similar Documents

Publication Publication Date Title
CN106649818B (en) Application search intention identification method and device, application search method and server
CN109815308B (en) Method and device for determining intention recognition model and method and device for searching intention recognition
CN106649605B (en) Method and device for triggering promotion keywords
CN106709040B (en) Application search method and server
CN106663104B (en) Learning and using contextual content retrieval rules for query disambiguation
CN108280114B (en) Deep learning-based user literature reading interest analysis method
CN102479191B (en) Method and device for providing multi-granularity word segmentation result
CN106202294B (en) Related news computing method and device based on keyword and topic model fusion
CN110349568A (en) Speech retrieval method, apparatus, computer equipment and storage medium
CN110188197B (en) Active learning method and device for labeling platform
CN104199965A (en) Semantic information retrieval method
WO2019114430A1 (en) Natural language question understanding method and apparatus, and electronic device
CN108038099B (en) Low-frequency keyword identification method based on word clustering
CN110096572B (en) Sample generation method, device and computer readable medium
JP6056610B2 (en) Text information processing apparatus, text information processing method, and text information processing program
US20210173874A1 (en) Feature and context based search result generation
CN112256845A (en) Intention recognition method, device, electronic equipment and computer readable storage medium
CN113821646A (en) Intelligent patent similarity searching method and device based on semantic retrieval
Hillard et al. Learning weighted entity lists from web click logs for spoken language understanding
US20190294705A1 (en) Image annotation
CN115438674A (en) Entity data processing method, entity linking method, entity data processing device, entity linking device and computer equipment
CN105808737B (en) Information retrieval method and server
CN111274366A (en) Search recommendation method and device, equipment and storage medium
CN110705285B (en) Government affair text subject word library construction method, device, server and readable storage medium
JP6260678B2 (en) Information processing apparatus, information processing method, and information processing program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant