CN114238798A - Search ranking method, system, device and storage medium based on neural network - Google Patents
Search ranking method, system, device and storage medium based on neural network
- Publication number
- CN114238798A (application number CN202111524650.2A)
- Authority
- CN
- China
- Prior art keywords
- document
- recalled
- click rate
- target
- predicted click
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9538—Presentation of query results
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Computing Systems (AREA)
- Molecular Biology (AREA)
- Evolutionary Computation (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Biophysics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention provides a search ranking method, system, device and storage medium based on a neural network, wherein the method comprises the following steps: acquiring a plurality of recalled documents of different types according to a target search statement and a target search engine; for each recalled document, extracting comprehensive features between the target search statement and that recalled document; acquiring a final predicted click rate corresponding to each recalled document according to the comprehensive features and a target ranking model corresponding to each recalled document, wherein the target ranking model is obtained by training with training samples and training labels; and ranking each recalled document according to the final predicted click rate corresponding to each recalled document. The invention makes full use of the category relationship between the search statement and the recalled documents to weaken the problem of overly large click-rate differences between categories, so that the final recommendation result is not limited to a single category but is evenly distributed among the different categories, thereby improving the internet experience of users.
Description
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a search ranking method, system, device and storage medium based on a neural network.
Background
The search scenario is an important component of internet service scenarios. A general search engine needs to recall a number of documents according to the search keywords and then rank these recalled documents, and the ranking result directly influences the user's internet experience. When the recall results are displayed, whether they directly meet the user's needs, or even bring an unexpected surprise, is an important criterion for ranking the recall results.
In the existing recall ranking approach, taking a search on the APP of a certain vehicle insurance recommendation service as an example, when "insurance" is searched on the APP, the search results in the prior art are often concentrated in the policy category, while search results of other insurance-related categories, such as service, question-and-answer and video, hardly ever appear on the search page. The ranking in the prior art therefore cannot balance the differences among recall results of different categories: many recall results of the same category are ranked at the top, and recall results of other categories are never seen, which degrades the user's internet experience.
Disclosure of Invention
The invention provides a search ranking method, system, device and storage medium based on a neural network, and mainly aims to rank recalled documents of different types in a comprehensive manner and effectively improve the user experience.
In a first aspect, an embodiment of the present invention provides a search ranking method based on a neural network, including:
acquiring a plurality of different types of recall documents according to a target search statement and a target search engine;
for each recall document, extracting comprehensive features between the target search statement and each recall document;
acquiring a final predicted click rate corresponding to each recalled document according to the comprehensive features and a target ranking model corresponding to each recalled document, wherein the target ranking model is obtained by training with training samples and training labels;
and ranking each recalled document according to the final predicted click rate corresponding to each recalled document.
Preferably, the target ranking model comprises a wide module and an improved deep module, and the improved deep module is obtained by improving the deep module with the idea of conditional layer normalization.
Preferably, the obtaining a final predicted click rate corresponding to each recalled document according to the comprehensive characteristics and the target ranking model corresponding to each recalled document includes:
inputting the comprehensive characteristics corresponding to each recalled document into the wide module, and acquiring a first predicted click rate of each recalled document;
inputting the target search statement and each recall document into the improved deep module to obtain a second predicted click rate corresponding to each recall document;
and obtaining a final predicted click rate corresponding to each recalled document according to the first predicted click rate corresponding to each recalled document and the second predicted click rate corresponding to each recalled document.
Preferably, the improved deep module includes a first encoding unit, a second encoding unit, a first multi-head attention unit, a second multi-head attention unit, a first conditional layer normalization unit, a second conditional layer normalization unit, a first average pooling unit, a second average pooling unit, a fusion unit and an output unit, and the inputting the target search statement and each recalled document into the improved deep module to obtain a second predicted click rate corresponding to each recalled document includes:
coding the target search statement through the first coding unit to obtain statement codes;
coding each recalled document through the second coding unit to obtain a document code;
focusing on the document coding by using the statement coding through the first multi-head attention unit to obtain a first expression vector;
focusing on the statement code by using the document code through the second multi-head attention unit to obtain a second expression vector;
acquiring a first expression matrix according to the first expression vector through the first conditional layer normalization unit;
acquiring a second expression matrix according to the second expression vector through the second conditional layer normalization unit;
carrying out average pooling on the first expression matrix through the first average pooling unit to obtain a first numerical value;
carrying out average pooling on the second expression matrix through the second average pooling unit to obtain a second numerical value;
fusing the first numerical value and the second numerical value through the fusion unit to obtain the second predicted click rate;
outputting, by the output unit, the second predicted click rate.
Preferably, the difference between the proportion of the document categories in the training sample and the proportion of the clicked document categories in the training labels is within a preset range.
Preferably, the obtaining a final predicted click rate corresponding to each recalled document according to the first predicted click rate corresponding to each recalled document and the second predicted click rate corresponding to each recalled document includes:
adding the first predicted click rate corresponding to each recalled document and the second predicted click rate corresponding to that recalled document, adding a preset bias term to the sum, and outputting the result through a sigmoid function to obtain the final predicted click rate corresponding to that recalled document.
Preferably, the comprehensive features include similarity information, type information, context information and cross information between the target search statement and each recalled document. For any recalled document, the similarity information represents an edit distance ratio between the target search statement and that recalled document; the type information represents the type of that recalled document; the context information represents a preset time, whether the preset time appears in the target search statement, whether the preset time appears in that recalled document, and whether the preset time appears in both the target search statement and that recalled document; and the cross information represents a target location, whether the target location appears in the target search statement, whether the target location appears in that recalled document, and whether the target location appears in both the target search statement and that recalled document.
In a second aspect, an embodiment of the present invention provides a search ranking system based on a neural network, including:
the recall module is used for acquiring a plurality of different types of recall documents according to the target search sentences and the target search engine;
the feature module is used for extracting, for each recalled document, comprehensive features between the target search statement and that recalled document;
the prediction module is used for acquiring a final predicted click rate corresponding to each recalled document according to the comprehensive features and a target ranking model corresponding to each recalled document, wherein the target ranking model is obtained by training with training samples and training labels;
and the ranking module is used for ranking each recalled document according to the final predicted click rate corresponding to each recalled document.
In a third aspect, an embodiment of the present invention provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the above search ranking method based on a neural network when executing the computer program.
In a fourth aspect, an embodiment of the present invention provides a computer storage medium, where a computer program is stored, and the computer program, when executed by a processor, implements the steps of the above search ranking method based on a neural network.
The invention provides a search ranking method, system, device and storage medium based on a neural network. Recalled documents of various types for a target search statement are found through a target search engine; the similarity information, type information, context information and cross information between the target search statement and each recalled document are calculated and the comprehensive features are finally extracted, making full use of the correlation between the target search statement and each recalled document so as to weaken the influence of the document type on the click rate. A target ranking model is then used to obtain the final predicted click rate from the comprehensive features; because the target ranking model is a neural network model, the most appropriate predicted click rate for each type of recalled document can be obtained from historical experience, balancing the click rates across categories. The recalled documents are ranked according to the final predicted click rate, so that the final recommendation result is not limited to a single category but is evenly distributed among different categories, thereby improving the internet experience of users.
Drawings
Fig. 1 is a schematic view of an application scenario of a search ranking method based on a neural network according to an embodiment of the present invention;
fig. 2 is a flowchart of a search ranking method based on a neural network according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an improved deep module according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a search ranking system based on a neural network according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a computer device provided in an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Fig. 1 is a schematic view of an application scenario of the neural-network-based search ranking method according to an embodiment of the present invention. As shown in fig. 1, a user inputs a target search statement at a client; after receiving the target search statement, the client sends it to a server, and the server, upon receiving the target search statement, executes the neural-network-based search ranking method to rank each recalled document.
It should be noted that the server may be implemented by an independent server or a server cluster composed of a plurality of servers. The client may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, and the like. The client and the server may be connected through bluetooth, USB (Universal Serial Bus), or other communication connection manners, which is not limited in this embodiment of the present invention.
The embodiment of the invention can acquire and process related data based on an artificial intelligence technology. Artificial Intelligence (AI) is a theory, method, technique and application system that simulates, extends and expands human Intelligence using a digital computer or a machine controlled by a digital computer, senses the environment, acquires knowledge and uses the knowledge to obtain the best result.
The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning, deep learning and the like.
Fig. 2 is a flowchart of a search ranking method based on a neural network according to an embodiment of the present invention, and as shown in fig. 2, the method includes:
the search ranking method provided by the embodiment of the invention is directed to a target search ranking engine, and the target search ranking engine can comprise a search server arranged in a cloud, such as a search server based on an elastic search distribution cluster, or some APP containing the search server, and the like.
S210, acquiring a plurality of different types of recall documents according to the target search statement and the target search engine;
In the embodiment of the invention, a certain car insurance APP is taken as the target search engine for explanation. The keyword "insurance" is searched in the APP, so the target search statement here is "insurance". First, the recalled documents that need to be ranked are obtained according to the target search statement "insurance". The types of the recalled documents include an insurance type, a service type, a store type, a video type, a question-and-answer type, a live-broadcast type and the like: the insurance type contains policies suitable for the user's car insurance; the service type contains policies the user has already taken out for car insurance; the store type contains service stores around the user's location; the video type contains videos explaining insurance; the question-and-answer type contains recalled documents in which users ask and answer questions; and the live-broadcast type contains hosts who explain insurance in live broadcasts.
The recalled documents in the embodiment of the invention include various types such as insurance, policy, store, video and document, and each type has its corresponding recalled documents. The following operation is performed on each recalled document to extract the comprehensive features corresponding to it.
S220, for each recall document, extracting comprehensive characteristics between the target search statement and each recall document;
Specifically, the comprehensive features include similarity information, type information, context information and cross information between the target search statement and each recalled document. Taking one recalled document as an example, the comprehensive features between that recalled document and the target search statement are extracted. The similarity information represents the similarity between the target search statement and the recalled document; common similarity measures include the edit distance, cosine similarity, Euclidean distance, Manhattan distance, log-likelihood similarity and the like, which may be chosen according to the actual situation and are not specifically limited in the embodiment of the invention. The type information indicates the type of the recalled document. The context information represents some or all of the information that can be perceived and used to influence objects in scenes and images; common examples include semantic context, spatial context and scale context, which may be determined according to the actual situation and are not specifically limited here. The cross information represents multi-angle, multi-level, mutually related interactions in the information space dimension. The specific comprehensive features are interactive combinations of the above features and may be determined according to the actual situation.
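As a small illustration of one of the similarity measures listed above, the following sketch computes cosine similarity over simple bag-of-words term counts; the whitespace tokenization and the choice of this particular measure are illustrative assumptions, not requirements of the embodiment.

import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    # Cosine similarity between bag-of-words term-count vectors.
    a, b = Counter(text_a.split()), Counter(text_b.split())
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine_similarity("car insurance policy", "insurance policy for new cars"))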
The embodiment of the invention finds recalled documents of various types for the target search statement through the target search engine, calculates the similarity information, type information, context information and cross information between the target search statement and each recalled document, and finally extracts the comprehensive features, making full use of the association between the target search statement and each recalled document so as to weaken the influence of the document type on the click rate.
S230, acquiring a final predicted click rate corresponding to each recalled document according to the comprehensive features and a target ranking model corresponding to each recalled document, wherein the target ranking model is obtained by training with training samples and training labels;
and inputting the comprehensive characteristics corresponding to the document into a target sorting model to obtain the final predicted click rate of the target user on the recalled document. The target ranking model in the embodiment of the invention belongs to one of neural networks, and needs to be trained before the target ranking model is used, and the target ranking model is trained through samples and labels obtained in advance. The training process of the target ranking model can be divided into three steps: defining the structure of a target sequencing model and an output result of forward propagation; defining a loss function and a back propagation optimization algorithm; finally, a session is generated and a back propagation optimization algorithm is run repeatedly on the training data.
A neuron is the smallest unit of a neural network. A neuron can have multiple inputs and one output, and each input of a neuron can be the output of another neuron or an input of the whole network. The output of a neuron is a weighted sum of its inputs; the weights of the different inputs are the neuron's parameters, and optimizing a neural network is the process of optimizing the values of these neuron parameters.
The quality and the optimization objective of the neural network are defined by a loss function, which gives a formula for the gap between the network's output and the real label. Supervised learning is one way of training the neural network; the idea is that, on a labelled data set with known answers, the result given by the network should be as close as possible to the real answer (namely, the label). The training data are fitted by adjusting the parameters of the neural network, so that the network gains predictive power on unknown samples.
The back-propagation algorithm is an iterative process: at the start of each iteration, a part of the training data is taken, and the prediction of the neural network is obtained through the forward-propagation algorithm. Because all the training data have correct answers, the gap between the prediction and the correct answer can be calculated, and based on this gap the back-propagation algorithm updates the values of the network parameters accordingly, so that the predictions move closer to the real answers.
After the training process is completed by the method, the trained target ranking model can be used for application.
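As an illustrative, non-binding sketch of this three-step training procedure (the PyTorch framework, the stand-in two-layer network, the binary cross-entropy loss, the Adam optimizer and the synthetic data below are all assumptions; the patent does not prescribe any of them):

import torch
import torch.nn as nn

# Step 1: define the model structure and the output of forward propagation.
# A tiny stand-in network is used here; the actual target ranking model
# (wide module + improved deep module) is described elsewhere in this document.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

# Step 2: define the loss function and the back-propagation optimizer.
loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic training samples (features) and training labels (clicked / not clicked).
features = torch.randn(256, 16)
labels = torch.randint(0, 2, (256, 1)).float()

# Step 3: repeatedly run back-propagation on the training data.
for epoch in range(10):
    predicted_ctr = model(features)          # forward propagation
    loss = loss_fn(predicted_ctr, labels)    # gap between prediction and real labels
    optimizer.zero_grad()
    loss.backward()                          # back-propagation computes the gradients
    optimizer.step()                         # update the neuron parameters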
S240, ranking each recalled document according to the final predicted click rate corresponding to each recalled document.
The recalled documents are then ranked according to their final predicted click rates: recalled documents with higher final predicted click rates are placed toward the front, and recalled documents with lower final predicted click rates toward the back.
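A minimal illustration of this ranking step (the document names and click-rate values are made up for the example; only the descending sort by final predicted click rate comes from the description above):

# Each recalled document paired with its final predicted click rate (illustrative values).
recalled = [("policy_doc", 0.31), ("video_doc", 0.48), ("qa_doc", 0.22), ("store_doc", 0.40)]

# Rank the recalled documents by final predicted click rate, highest first.
ranked = sorted(recalled, key=lambda item: item[1], reverse=True)
print([doc for doc, _ in ranked])   # ['video_doc', 'store_doc', 'policy_doc', 'qa_doc']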
The invention provides a search ranking method based on a neural network. Recalled documents of various types for a target search statement are found through a target search engine; the similarity information, type information, context information and cross information between the target search statement and each recalled document are calculated and the comprehensive features are finally extracted, making full use of the correlation between the target search statement and each recalled document so as to weaken the influence of the document type on the click rate. A target ranking model is then used to obtain the final predicted click rate from the comprehensive features; because the target ranking model is a neural network model, the most appropriate predicted click rate for each type of recalled document can be obtained from historical experience, balancing the click rates across categories. The recalled documents are ranked according to the final predicted click rate, so that the final recommendation result is not limited to a single category but is evenly distributed among different categories, thereby improving the internet experience of users.
On the basis of the above embodiment, preferably, the target ranking model includes a wide module and an improved deep module, and the improved deep module is obtained by improving the deep module with the idea of conditional layer normalization.
In the embodiment of the invention, the target ranking model consists of a wide module and an improved deep module. The two modules run in parallel, and the final predicted click rate is obtained by jointly considering the predicted click rates produced by the two modules. The wide module is the wide part of the conventional wide & deep model, also called the shallow model: the target search statement is input into the shallow model, and a predicted click rate is obtained by using its memorization capability. The target search statement and each recalled document are input into the improved deep module, which is obtained by improving the deep part of the wide & deep model, also called the deep model. The deep model contributes generalization capability, so that a single model takes both the accuracy and the extensibility of the recommendation system into account. The improvement borrows the idea of conditional layer normalization: features such as the type of the recalled document are taken as explicit inputs, so that different representations are produced on the semantic side under different features, alleviating the behavioral differences among different types.
In the embodiment of the invention, the idea of conditional layer normalization is integrated into the classical deep model, and the type feature of the recalled document is taken as an explicit input, so that different representations are produced on the semantic side under different features. This alleviates the behavioral differences among types, so that the final recommendation ranking can balance the recommendation results of all categories instead of concentrating on the results of a single category.
On the basis of the foregoing embodiment, preferably, the obtaining a final predicted click rate corresponding to each recalled document according to the comprehensive feature and the target ranking model corresponding to each recalled document includes:
inputting the comprehensive characteristics corresponding to each recalled document into the wide module, and acquiring a first predicted click rate of each recalled document;
inputting the target search statement and each recall document into the improved deep module to obtain a second predicted click rate corresponding to each recall document;
and obtaining a final predicted click rate corresponding to each recalled document according to the first predicted click rate corresponding to each recalled document and the second predicted click rate corresponding to each recalled document.
The comprehensive features extracted for each recalled document are input into the wide module of the target ranking model. The wide module efficiently provides memorization capability by exploiting cross features, achieving accurate recommendation, and gains a certain generalization capability by adding some broad class features. However, limited by the training data, the wide module cannot generalize to feature combinations that never occur in the training data. Through the wide module, the first predicted click rate of the target ranking model for each recalled document is obtained.
The wide module is a basic linear model expressed as y = Wx + b, where the feature vector x contains both basic features and cross features. The cross features are important in the wide module: they capture interactions between features and add non-linearity to the model. A cross feature can be expressed as follows.
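The exact expression does not survive in this text. A commonly used form, the cross-product transformation from the wide & deep literature, is given here as an assumption rather than as the patent's own formula:

\phi_k(\mathbf{x}) = \prod_{i=1}^{d} x_i^{c_{ki}}, \qquad c_{ki} \in \{0, 1\},

where c_{ki} = 1 if the i-th feature participates in the k-th cross feature and 0 otherwise, so that \phi_k(\mathbf{x}) = 1 only when all of its constituent binary features occur at the same time.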
Then, the target search statement and each recalled document are input into the improved deep module to obtain the second predicted click rate of the target user for that recalled document.
Finally, the final predicted click rate of each recalled document is obtained from its first predicted click rate and second predicted click rate.
The embodiment of the invention provides a search ranking method in which comprehensive features such as similarity measurement features, document type features, context features and cross features are extracted from the target search statement, and the category relationship between the target search statement and the recalled documents is fully exploited, so that the problem of overly large click-rate differences between categories is weakened. The final recommendation result is therefore not limited to a single category but is evenly distributed among different categories, improving the internet experience of users.
On the basis of the foregoing embodiment, preferably, the improved deep module includes a first encoding unit, a second encoding unit, a first multi-head attention unit, a second multi-head attention unit, a first conditional layer normalization unit, a second conditional layer normalization unit, a first average pooling unit, a second average pooling unit, a fusion unit, and an output unit, where the inputting the target search statement and each recalled document into the improved deep module to obtain a second predicted click rate corresponding to each recalled document includes:
coding the target search statement through the first coding unit to obtain statement codes;
coding each recalled document through the second coding unit to obtain a document code;
focusing on the document coding by using the statement coding through the first multi-head attention unit to obtain a first expression vector;
focusing on the statement code by using the document code through the second multi-head attention unit to obtain a second expression vector;
acquiring a first expression matrix according to the first expression vector through the first conditional layer normalization unit;
acquiring a second expression matrix according to the second expression vector through the second conditional layer normalization unit;
carrying out average pooling on the first expression matrix through the first average pooling unit to obtain a first numerical value;
carrying out average pooling on the second expression matrix through the second average pooling unit to obtain a second numerical value;
fusing the first numerical value and the second numerical value through the fusion unit to obtain the second predicted click rate;
outputting, by the output unit, the second predicted click rate.
Fig. 3 is a schematic structural diagram of the improved deep module according to an embodiment of the present invention. As shown in fig. 3, the improved deep module in the embodiment of the invention includes a first encoding unit 310, a second encoding unit 320, a first multi-head attention unit 330, a second multi-head attention unit 340, a first conditional layer normalization unit 350, a second conditional layer normalization unit 360, a first average pooling unit 370, a second average pooling unit 380, a fusion unit 390 and an output unit 311, where:
the first coding unit, the first multi-head attention unit, the first condition standardization unit and the first average pooling unit are sequentially connected; the second coding unit, the second multi-head attention unit, the second condition standardization unit and the second average pooling unit are sequentially connected; additionally, the output end of the first coding unit is also connected with the second multi-head attention unit, and the output end of the second coding unit is also connected with the first multi-head attention unit.
The target search statement is input into the first encoding unit and encoded to obtain a statement code, and the recalled document is input into the second encoding unit and encoded to obtain a document code. The statement code and the document code are both input into the first multi-head attention unit, which uses the statement code to attend to the document code and obtains a first expression vector; they are also both input into the second multi-head attention unit, which uses the document code to attend to the statement code and obtains a second expression vector. It should be noted that the multi-head attention units, namely the first multi-head attention unit and the second multi-head attention unit, split the model into multiple heads and thus form multiple subspaces, so that the model can attend to several different kinds of information.
The first expression vector is input into the first conditional layer normalization unit to obtain the first expression matrix, and the second expression vector is input into the second conditional layer normalization unit to obtain the second expression matrix. Both units perform conditional layer normalization, in which the direction of the model output can be controlled by controlling the conditional input.
Then, the first expression matrix is average-pooled by the first average pooling unit to obtain a first numerical value, and the second expression matrix is average-pooled by the second average pooling unit to obtain a second numerical value. Both units perform average pooling. Average pooling is widely used as a global average pooling operation; for example, it is used in the last layer of the ResNet and Inception structures, and using global average pooling near the end of a model's classifier can also replace the flatten operation and turn the input data into a one-dimensional vector. Average pooling emphasizes down-sampling the feature information as a whole, contributes more to reducing the parameter dimension, and better preserves the complete transfer of information; in highly representative models, for example, the connections between modules in DenseNet mostly use average pooling, which reduces the dimension and makes it easier to pass information to the next module for feature extraction.
Then, the first numerical value and the second numerical value are fused by the fusion unit and output by the output unit to obtain the second predicted click rate.
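To make the data flow of fig. 3 concrete, the following is a minimal sketch under several stated assumptions: a PyTorch implementation, plain token embeddings standing in for the two encoding units, an embedding of the recalled document's type used as the condition of the conditional layer normalization, and fusion realized as simple addition followed by a sigmoid. None of these choices is prescribed by the patent; they only illustrate one way the described units can fit together.

import torch
import torch.nn as nn

class ConditionalLayerNorm(nn.Module):
    # Layer normalization whose gain and bias are generated from a condition
    # vector (here: an embedding of the recalled document's type).
    def __init__(self, hidden, cond_dim):
        super().__init__()
        self.ln = nn.LayerNorm(hidden, elementwise_affine=False)
        self.to_gain = nn.Linear(cond_dim, hidden)
        self.to_bias = nn.Linear(cond_dim, hidden)

    def forward(self, x, cond):
        # cond: (batch, cond_dim), broadcast over the sequence dimension of x
        gain = 1.0 + self.to_gain(cond).unsqueeze(1)
        bias = self.to_bias(cond).unsqueeze(1)
        return self.ln(x) * gain + bias

class ImprovedDeepModule(nn.Module):
    def __init__(self, vocab=30000, hidden=128, n_types=8, heads=4):
        super().__init__()
        self.query_encoder = nn.Embedding(vocab, hidden)     # first encoding unit
        self.doc_encoder = nn.Embedding(vocab, hidden)       # second encoding unit
        self.type_embedding = nn.Embedding(n_types, hidden)  # document-type condition
        self.attn_q2d = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.attn_d2q = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.cln_1 = ConditionalLayerNorm(hidden, hidden)
        self.cln_2 = ConditionalLayerNorm(hidden, hidden)

    def forward(self, query_ids, doc_ids, doc_type):
        q = self.query_encoder(query_ids)    # statement code
        d = self.doc_encoder(doc_ids)        # document code
        cond = self.type_embedding(doc_type)

        # First multi-head attention: the statement code attends to the document code.
        expr_1, _ = self.attn_q2d(q, d, d)
        # Second multi-head attention: the document code attends to the statement code.
        expr_2, _ = self.attn_d2q(d, q, q)

        # Conditional layer normalization conditioned on the document type.
        m1 = self.cln_1(expr_1, cond)         # first expression matrix
        m2 = self.cln_2(expr_2, cond)         # second expression matrix

        # Average pooling of each expression matrix down to a single value.
        v1 = m1.mean(dim=(1, 2))              # first numerical value
        v2 = m2.mean(dim=(1, 2))              # second numerical value

        # Fusion of the two values; addition followed by a sigmoid is an assumption,
        # the text above only states "fused" and "output".
        return torch.sigmoid(v1 + v2)         # second predicted click rate

# Illustrative usage: a batch of two (query, document, document-type) triples.
module = ImprovedDeepModule()
ctr = module(torch.randint(0, 30000, (2, 12)),
             torch.randint(0, 30000, (2, 40)),
             torch.tensor([1, 3]))
print(ctr.shape)   # torch.Size([2])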
On the basis of the above embodiment, preferably, the difference between the proportion of the document category in the training sample and the proportion of the clicked document category in the training label is within a preset range.
The target ranking model is specifically a neural network model. Before the target ranking model is used, it needs to be trained with training samples and training labels. The composition of the samples is adjusted according to the click rate and exposure of each type of recalled document, and categories that originally have very high exposure but a low click rate are undersampled to a greater extent, so that the difference between each category's proportion of the training samples and its proportion of the clicked samples is within a preset range.
Because the click behavior of users differs greatly across document types, and the content distribution of different types also differs markedly, these differences can prevent the model from converging well during training or make it overfit easily: the model merely memorizes the noise-like differences between categories instead of genuinely fitting the semantic relevance, so it is difficult for the model to converge during training. By adjusting the proportion of samples of different types, the convergence of the ranking model during training can be accelerated and the training efficiency improved.
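One way to carry out this per-category undersampling is sketched below; the specific rule of shrinking each category to a size proportional to its share of the clicked samples is an illustrative assumption, not the rule fixed by the patent.

import random
from collections import Counter, defaultdict

def undersample_by_category(samples, seed=0):
    # samples: list of (category, clicked) pairs, clicked being 0 or 1.
    # Undersample each category so that its share of the training samples roughly
    # matches its share of the clicked samples (an approximate, illustrative rule).
    random.seed(seed)
    clicks = Counter(cat for cat, clicked in samples if clicked == 1)
    total_clicks = sum(clicks.values()) or 1
    total = len(samples)

    by_cat = defaultdict(list)
    for row in samples:
        by_cat[row[0]].append(row)

    balanced = []
    for cat, rows in by_cat.items():
        # Target size proportional to this category's share of the clicked documents.
        target = max(1, int(total * clicks[cat] / total_clicks))
        if len(rows) > target:
            rows = random.sample(rows, target)   # undersample over-exposed categories
        balanced.extend(rows)
    return balanced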
On the basis of the foregoing embodiment, preferably, the obtaining a final predicted click rate corresponding to each recalled document according to the first predicted click rate corresponding to each recalled document and the second predicted click rate corresponding to each recalled document includes:
adding the first predicted click rate corresponding to each recalled document and the second predicted click rate corresponding to that recalled document, adding a preset bias term to the sum, and outputting the result through a sigmoid function to obtain the final predicted click rate corresponding to that recalled document.
In this way, the final predicted click rate of each recalled document is obtained from its first predicted click rate and second predicted click rate.
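Expressed as a one-line computation, a sketch of this combination step looks as follows; the function and variable names and the numeric values are illustrative only.

import math

def final_predicted_ctr(first_ctr, second_ctr, bias=0.0):
    # Final predicted click rate: sigmoid of (first predicted click rate +
    # second predicted click rate + preset bias term).
    return 1.0 / (1.0 + math.exp(-(first_ctr + second_ctr + bias)))

# Illustrative values for one recalled document.
print(final_predicted_ctr(0.35, 0.55, bias=-0.5))   # ~0.599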
On the basis of the foregoing embodiment, preferably, the comprehensive features include similarity information, type information, context information and cross information between the target search statement and each recalled document. For any recalled document, the similarity information represents an edit distance ratio between the target search statement and that recalled document; the type information represents the type of that recalled document; the context information represents a preset time, whether the preset time appears in the target search statement, whether the preset time appears in that recalled document, and whether the preset time appears in both the target search statement and that recalled document; and the cross information represents a target location, whether the target location appears in the target search statement, whether the target location appears in that recalled document, and whether the target location appears in both the target search statement and that recalled document.
For each recalled document, the comprehensive features between the recalled document and the target search statement are extracted. The similarity information represents an edit distance ratio between the target search statement and the recalled document. The edit distance ratio is computed from two quantities, sum and ldist, where sum represents the sum of the lengths of the recalled document and the search statement, and ldist represents a class edit distance. The class edit distance is not the common edit distance: it is accumulated over the edit operations, with each deletion or insertion adding 1 to the class edit distance and each substitution adding 2.
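The ratio formula itself does not survive in this text. A form consistent with the definitions above, and with the widely used Levenshtein ratio, is ratio = (sum - ldist) / sum; it is stated here as an assumption and sketched below.

def class_edit_distance(a, b):
    # Levenshtein-style distance where insertion/deletion cost 1 and substitution costs 2.
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 2       # substitution counted as 2
            dp[i][j] = min(dp[i - 1][j] + 1,              # deletion counted as 1
                           dp[i][j - 1] + 1,              # insertion counted as 1
                           dp[i - 1][j - 1] + sub)
    return dp[m][n]

def edit_distance_ratio(query, doc):
    total = len(query) + len(doc)                         # "sum" in the description
    ldist = class_edit_distance(query, doc)
    return (total - ldist) / total if total else 0.0

print(edit_distance_ratio("insurance", "car insurance policy"))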
The type information indicates the type of the recalled document.
The context information indicates a preset time, whether the preset time appears in the target search statement, whether the preset time appears in the recalled document, and whether the preset time appears in both the target search statement and the recalled document.
The cross feature indicates a target location, whether the target location appears in the target search statement, whether the target location appears in the recalled document, and whether the target location appears in both the target search statement and the recalled document. Concretely, the cross features include four items: the city where the user is located, whether that city appears in the recalled document, whether that city appears in the search statement, and whether that city appears in both the search statement and the recalled document at the same time.
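A small sketch of how such presence flags can be built for one term, be it the preset time of the context features or the user's city of the cross features; the term, query and document strings are made-up examples.

def presence_features(term, query, doc):
    # Binary context/cross features for one term (e.g., a preset time or the user's
    # city): whether it appears in the query, in the recalled document, and in both.
    in_query = term in query
    in_doc = term in doc
    return {
        "in_query": int(in_query),
        "in_doc": int(in_doc),
        "in_both": int(in_query and in_doc),
    }

# Cross features built from the user's city (illustrative strings).
print(presence_features("Shenzhen", "insurance Shenzhen", "Shenzhen store car insurance"))
# {'in_query': 1, 'in_doc': 1, 'in_both': 1}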
In the embodiment of the invention, by extracting multiple features between the search statement and the recalled documents, such as similarity measurement features, document type features, context features and cross features, the feature differences between recalled documents of different types are fully captured, which overcomes the large gap in subsequent click prediction between samples of different types.
Fig. 4 is a schematic structural diagram of a search ranking system based on a neural network according to an embodiment of the present invention, as shown in fig. 4, the system includes a recall module 410, a feature module 420, a prediction module 430, and a ranking module 440, where:
the recall module 410 is used for acquiring a plurality of different types of recall documents according to the target search statement and the target search engine;
the feature module 420 is configured to, for each recalled document, extract a composite feature between the target search statement and each recalled document;
the prediction module 430 is configured to obtain a final predicted click rate corresponding to each recalled document according to the comprehensive features and a target ranking model corresponding to each recalled document, where the target ranking model is obtained by training through a training sample and a training label;
the ranking module 440 is configured to rank each recalled document according to a final predicted click rate corresponding to the recalled document.
The present embodiment is a system embodiment corresponding to the above method embodiment, and the specific implementation process is the same as the above method embodiment, and please refer to the above method embodiment for details, which is not described herein again.
On the basis of the above embodiment, preferably, the target ranking model includes a wide module and an improved deep module, and the improved deep module is obtained by improving the deep module with the idea of conditional layer normalization.
On the basis of the above embodiment, preferably, the prediction module includes a shallow unit, a deep unit, and a click unit, wherein:
the shallow layer unit is used for inputting the comprehensive characteristics corresponding to each recalled document into the wide module and acquiring a first predicted click rate of each recalled document;
the deep unit is used for inputting the target search statement and each recall document into the improved deep module to obtain a second predicted click rate corresponding to each recall document;
the click unit is used for obtaining a final predicted click rate corresponding to each recalled document according to a first predicted click rate corresponding to each recalled document and a second predicted click rate corresponding to each recalled document.
On the basis of the above embodiment, preferably, the deep unit includes a first encoding unit, a second encoding unit, a first multi-head attention unit, a second multi-head attention unit, a first conditional layer normalization unit, a second conditional layer normalization unit, a first average pooling unit, a second average pooling unit, a fusion unit, and an output unit, wherein:
the first coding unit is used for coding the target search statement to acquire statement codes;
the second coding unit is used for coding each recall document to obtain a document code;
the first multi-head attention unit is used for focusing on the document code by using the statement code to obtain a first expression vector;
the second multi-head attention unit is used for focusing on the statement code by utilizing the document code to obtain a second expression vector;
the first conditional layer normalization unit is used for acquiring a first expression matrix according to the first expression vector;
the second conditional layer normalization unit is used for acquiring a second expression matrix according to the second expression vector;
the first average pooling unit is used for carrying out average pooling on the first expression matrix to obtain a first numerical value;
the second average pooling unit is used for carrying out average pooling on the second expression matrix to obtain a second numerical value;
the fusion unit is used for fusing the first numerical value and the second numerical value to obtain the second predicted click rate;
the output unit is used for outputting the second predicted click rate.
On the basis of the above embodiment, preferably, the difference between the proportion of the document category in the training sample and the proportion of the clicked document category in the training label is within a preset range.
On the basis of the foregoing embodiment, preferably, the prediction module is specifically configured to: add the first predicted click rate corresponding to each recalled document and the second predicted click rate corresponding to that recalled document, add a preset bias term to the sum, and output the result through a sigmoid function to obtain the final predicted click rate corresponding to that recalled document.
On the basis of the foregoing embodiment, preferably, the comprehensive features include similarity information, type information, context information and cross information between the target search statement and each recalled document. For any recalled document, the similarity information represents an edit distance ratio between the target search statement and that recalled document; the type information represents the type of that recalled document; the context information represents a preset time, whether the preset time appears in the target search statement, whether the preset time appears in that recalled document, and whether the preset time appears in both the target search statement and that recalled document; and the cross information represents a target location, whether the target location appears in the target search statement, whether the target location appears in that recalled document, and whether the target location appears in both the target search statement and that recalled document.
The various modules in the neural network based search ranking system described above may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
Fig. 5 is a schematic structural diagram of a computer device provided in an embodiment of the present invention, where the computer device may be a server, and an internal structural diagram of the computer device may be as shown in fig. 5. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a computer storage medium and an internal memory. The computer storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the computer storage media. The database of the computer device is used to store data, such as target search statements, generated or obtained during execution of the neural network-based search ranking method. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a neural network-based search ranking method.
In one embodiment, a computer device is provided, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and when the processor executes the computer program, the steps of the neural network based search ranking method in the above embodiments are implemented. Alternatively, the processor, when executing the computer program, implements the functions of the modules/units in this embodiment of the neural network-based search ranking system.
In an embodiment, a computer storage medium is provided, on which a computer program is stored, which, when being executed by a processor, implements the steps of the neural network based search ranking method in the above embodiments. Alternatively, the computer program realizes the functions of the modules/units in the embodiment of the neural network-based search ranking system described above when executed by the processor.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus DRAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.
Claims (10)
1. A search ranking method based on a neural network is characterized by comprising the following steps:
acquiring a plurality of recalled documents of different types according to a target search statement and a target search engine;
for each recalled document, extracting comprehensive features between the target search statement and the recalled document;
acquiring a final predicted click rate corresponding to each recalled document according to the comprehensive features corresponding to each recalled document and a target ranking model, wherein the target ranking model is obtained by training with training samples and training labels;
and ranking each recalled document according to the final predicted click rate corresponding to each recalled document.
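By way of illustration and not limitation, the four steps of claim 1 can be sketched as a plain ranking pipeline. The Python examples in this description are non-limiting sketches, and the helper names (`search_engine.recall`, `extract_features`, `ranking_model.predict`) are hypothetical stand-ins rather than interfaces defined by this disclosure.

```python
# Non-limiting sketch of the method of claim 1 (all helper names are hypothetical).
def rank_search_results(query, search_engine, extract_features, ranking_model):
    # Step 1: acquire recalled documents of different types for the target query.
    recalled_docs = search_engine.recall(query)
    scored = []
    for doc in recalled_docs:
        # Step 2: comprehensive features between the query and this recalled document.
        features = extract_features(query, doc)
        # Step 3: final predicted click rate from the trained target ranking model.
        click_rate = ranking_model.predict(query, doc, features)
        scored.append((doc, click_rate))
    # Step 4: rank the recalled documents by descending predicted click rate.
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [doc for doc, _ in scored]
```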
2. The neural network-based search ranking method of claim 1, wherein the target ranking model comprises a wide module and an improved deep module, and the improved deep module is obtained by improving a deep module based on the idea of conditional layer normalization.
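The claim does not spell out the conditional layer normalization itself. One common formulation, shown below as an assumption rather than the patented design, generates the layer-normalization scale and shift from a condition vector.

```python
import torch
import torch.nn as nn

class ConditionalLayerNorm(nn.Module):
    """Layer normalization whose scale (gamma) and shift (beta) are produced
    from a condition vector. This is one common reading of 'conditional layer
    normalization'; the exact formulation used by the improved deep module is
    not prescribed here."""

    def __init__(self, hidden_size, cond_size, eps=1e-6):
        super().__init__()
        self.eps = eps
        self.to_gamma = nn.Linear(cond_size, hidden_size)  # condition -> scale
        self.to_beta = nn.Linear(cond_size, hidden_size)   # condition -> shift

    def forward(self, x, cond):
        # x: (batch, seq_len, hidden_size); cond: (batch, cond_size)
        mean = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, keepdim=True, unbiased=False)
        x_norm = (x - mean) / torch.sqrt(var + self.eps)
        gamma = self.to_gamma(cond).unsqueeze(1)  # broadcast over the sequence
        beta = self.to_beta(cond).unsqueeze(1)
        return gamma * x_norm + beta
```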
3. The method according to claim 2, wherein the acquiring a final predicted click rate corresponding to each recalled document according to the comprehensive features corresponding to each recalled document and the target ranking model comprises:
inputting the comprehensive features corresponding to each recalled document into the wide module to acquire a first predicted click rate corresponding to each recalled document;
inputting the target search statement and each recalled document into the improved deep module to acquire a second predicted click rate corresponding to each recalled document;
and acquiring the final predicted click rate corresponding to each recalled document according to the first predicted click rate corresponding to each recalled document and the second predicted click rate corresponding to each recalled document.
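For the wide side of claim 3, a minimal sketch is a single linear scorer over the hand-crafted comprehensive features. The feature dimension and the use of an unsquashed score (later combined as in claim 6) are assumptions.

```python
import torch.nn as nn

class WideModule(nn.Module):
    """Minimal sketch of the wide module: a linear layer over the comprehensive
    features. The output is treated as an unnormalized first predicted
    click-rate score (an assumption consistent with the combination in claim 6)."""

    def __init__(self, feature_dim):
        super().__init__()
        self.linear = nn.Linear(feature_dim, 1)

    def forward(self, features):                  # features: (batch, feature_dim)
        return self.linear(features).squeeze(-1)  # (batch,) first predicted score
```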
4. The method according to claim 3, wherein the improved deep module includes a first encoding unit, a second encoding unit, a first multi-head attention unit, a second multi-head attention unit, a first conditional layer normalization unit, a second conditional layer normalization unit, a first average pooling unit, a second average pooling unit, a fusion unit, and an output unit, and the inputting the target search statement and each recalled document into the improved deep module to acquire a second predicted click rate corresponding to each recalled document comprises:
encoding the target search statement through the first encoding unit to obtain a statement encoding;
encoding each recalled document through the second encoding unit to obtain a document encoding;
attending to the document encoding with the statement encoding through the first multi-head attention unit to obtain a first expression vector;
attending to the statement encoding with the document encoding through the second multi-head attention unit to obtain a second expression vector;
acquiring a first expression matrix according to the first expression vector through the first conditional layer normalization unit;
acquiring a second expression matrix according to the second expression vector through the second conditional layer normalization unit;
carrying out average pooling on the first expression matrix through the first average pooling unit to obtain a first numerical value;
carrying out average pooling on the second expression matrix through the second average pooling unit to obtain a second numerical value;
fusing the first numerical value and the second numerical value through the fusion unit to obtain the second predicted click rate;
and outputting the second predicted click rate through the output unit.
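Putting the units of claim 4 together, one possible wiring is sketched below. The encoders are plain embedding layers, the condition fed to each conditional layer normalization unit is the mean encoding of the other side, and the fusion unit is a linear layer; these choices, along with the hidden sizes, are assumptions, and ConditionalLayerNorm is the sketch given after claim 2.

```python
import torch
import torch.nn as nn

class ImprovedDeepModule(nn.Module):
    """Sketch of the improved deep module of claim 4 (wiring only; sizes,
    encoders, conditioning signal and fusion operator are assumptions)."""

    def __init__(self, vocab_size, hidden=128, heads=4):
        super().__init__()
        self.query_encoder = nn.Embedding(vocab_size, hidden)   # first encoding unit
        self.doc_encoder = nn.Embedding(vocab_size, hidden)     # second encoding unit
        self.attn_q_over_d = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.attn_d_over_q = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.cln_1 = ConditionalLayerNorm(hidden, hidden)       # first CLN unit
        self.cln_2 = ConditionalLayerNorm(hidden, hidden)       # second CLN unit
        self.fuse = nn.Linear(2, 1)                             # fusion unit

    def forward(self, query_ids, doc_ids):
        q = self.query_encoder(query_ids)        # statement encoding
        d = self.doc_encoder(doc_ids)            # document encoding
        rep_1, _ = self.attn_q_over_d(q, d, d)   # statement attends to document
        rep_2, _ = self.attn_d_over_q(d, q, q)   # document attends to statement
        m_1 = self.cln_1(rep_1, d.mean(dim=1))   # first expression matrix
        m_2 = self.cln_2(rep_2, q.mean(dim=1))   # second expression matrix
        v_1 = m_1.mean(dim=(1, 2))               # first value (average pooling)
        v_2 = m_2.mean(dim=(1, 2))               # second value (average pooling)
        fused = self.fuse(torch.stack([v_1, v_2], dim=-1))
        return fused.squeeze(-1)                 # second predicted click-rate score
```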
5. The neural network-based search ranking method of claim 1, wherein the difference between the proportion of each document category in the training samples and the proportion of the corresponding clicked document category in the training labels is within a preset range.
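A simple way to check the constraint of claim 5 is to compare, per document category, its share among the training samples with its share among the clicked documents in the labels; the tolerance value below is an assumption.

```python
from collections import Counter

def proportions_within_range(sample_categories, clicked_categories, tolerance=0.05):
    """Return True if, for every document category, the share among training
    samples and the share among clicked documents differ by at most `tolerance`
    (the preset range of claim 5; the concrete value here is an assumption)."""
    sample_counts = Counter(sample_categories)
    clicked_counts = Counter(clicked_categories)
    n_samples = len(sample_categories)
    n_clicked = len(clicked_categories)
    for category in set(sample_counts) | set(clicked_counts):
        p_sample = sample_counts[category] / n_samples
        p_clicked = clicked_counts[category] / n_clicked
        if abs(p_sample - p_clicked) > tolerance:
            return False
    return True
```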
6. The method according to claim 3, wherein the acquiring the final predicted click rate corresponding to each recalled document according to the first predicted click rate corresponding to each recalled document and the second predicted click rate corresponding to each recalled document comprises:
and adding the first predicted click rate corresponding to each recalled document to the second predicted click rate corresponding to each recalled document, adding a preset bias term to the sum, and passing the result through a sigmoid function to obtain the final predicted click rate corresponding to each recalled document.
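The combination of claim 6 is a one-line computation; treating the two predicted click rates as unsquashed scores is an assumption.

```python
import torch

def final_click_rate(first_score, second_score, bias=0.0):
    # Claim 6: sum the two predicted scores, add a preset bias term,
    # and pass the result through a sigmoid to obtain the final predicted click rate.
    return torch.sigmoid(first_score + second_score + bias)
```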
7. The neural network-based search ranking method according to any one of claims 1 to 5, wherein the comprehensive features include similarity information, type information, context information, and cross information between the target search statement and each recalled document; for any recalled document, the similarity information indicates an edit distance ratio between the target search statement and the recalled document, the type information indicates a type of the recalled document, the context information indicates, for a preset time, whether the preset time appears in the target search statement, whether the preset time appears in the recalled document, and whether the preset time appears in both the target search statement and the recalled document, and the cross information indicates, for a target location, whether the target location appears in the target search statement, whether the target location appears in the recalled document, and whether the target location appears in both the target search statement and the recalled document.
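As an illustration of the four feature groups in claim 7, the sketch below computes an edit distance ratio (Levenshtein distance normalized by the longer string, one common definition) and the time/location presence indicators; the flat dictionary layout and the normalization are assumptions.

```python
def edit_distance_ratio(a, b):
    """1 minus the Levenshtein distance normalized by the longer string
    (one common definition of an edit distance ratio)."""
    if not a and not b:
        return 1.0
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            current.append(min(previous[j] + 1,                # deletion
                               current[j - 1] + 1,             # insertion
                               previous[j - 1] + (ca != cb)))  # substitution
        previous = current
    return 1.0 - previous[-1] / max(len(a), len(b))

def comprehensive_features(query, doc_text, doc_type, preset_time, target_location):
    """Assemble the similarity, type, context and cross features of claim 7
    (the flat dictionary layout is an assumption)."""
    return {
        "similarity": edit_distance_ratio(query, doc_text),
        "type": doc_type,
        "time_in_query": preset_time in query,
        "time_in_doc": preset_time in doc_text,
        "time_in_both": preset_time in query and preset_time in doc_text,
        "location_in_query": target_location in query,
        "location_in_doc": target_location in doc_text,
        "location_in_both": target_location in query and target_location in doc_text,
    }
```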
8. A neural network-based search ranking system, comprising:
a recall module, used for acquiring a plurality of recalled documents of different types according to a target search statement and a target search engine;
a feature module, used for extracting, for each recalled document, comprehensive features between the target search statement and the recalled document;
a prediction module, used for acquiring a final predicted click rate corresponding to each recalled document according to the comprehensive features corresponding to each recalled document and a target ranking model, wherein the target ranking model is obtained by training with training samples and training labels;
and a ranking module, used for ranking each recalled document according to the final predicted click rate corresponding to each recalled document.
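The system of claim 8 mirrors the method steps as four cooperating modules; the sketch below only shows the wiring, with constructor arguments and call conventions as assumptions.

```python
class SearchRankingSystem:
    """Sketch of the system of claim 8: recall, feature, prediction and
    ranking modules wired in sequence (interfaces are assumptions)."""

    def __init__(self, recall_module, feature_module, prediction_module, ranking_module):
        self.recall_module = recall_module
        self.feature_module = feature_module
        self.prediction_module = prediction_module
        self.ranking_module = ranking_module

    def search(self, query):
        docs = self.recall_module(query)
        scored = [(doc, self.prediction_module(query, doc, self.feature_module(query, doc)))
                  for doc in docs]
        return self.ranking_module(scored)
```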
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the neural network based search ranking method according to any one of claims 1 to 7 when executing the computer program.
10. A computer storage medium storing a computer program, wherein the computer program when executed by a processor implements the steps of the neural network based search ranking method of any one of claims 1 to 7.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111524650.2A (CN114238798A) | 2021-12-14 | 2021-12-14 | Search ranking method, system, device and storage medium based on neural network |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN114238798A (en) | 2022-03-25 |
Family
ID=80755868
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202111524650.2A (CN114238798A, pending) | Search ranking method, system, device and storage medium based on neural network | 2021-12-14 | 2021-12-14 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114238798A (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114579870A (en) * | 2022-05-05 | 2022-06-03 | 深圳格隆汇信息科技有限公司 | Financial anchor recommendation method and system based on keywords |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106339383A (en) * | 2015-07-07 | 2017-01-18 | 阿里巴巴集团控股有限公司 | Method and system for sorting search |
| CN111538908A (en) * | 2020-06-22 | 2020-08-14 | 腾讯科技(深圳)有限公司 | Search ranking method and device, computer equipment and storage medium |
| CN111563158A (en) * | 2020-04-26 | 2020-08-21 | 腾讯科技(深圳)有限公司 | Text sorting method, sorting device, server and computer-readable storage medium |
| CN111581545A (en) * | 2020-05-12 | 2020-08-25 | 腾讯科技(深圳)有限公司 | Method for sorting recalled documents and related equipment |
| CN111859138A (en) * | 2020-07-27 | 2020-10-30 | 小红书科技有限公司 | Searching method and device |
| CN113626713A (en) * | 2021-08-19 | 2021-11-09 | 北京齐尔布莱特科技有限公司 | Search method, device, equipment and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |