CN112989843B - Intention recognition method, device, computing equipment and storage medium - Google Patents


Info

Publication number
CN112989843B
CN112989843B (application CN202110285921.7A)
Authority
CN
China
Prior art keywords
layer
target
intention
classification
user answer
Prior art date
Legal status
Active
Application number
CN202110285921.7A
Other languages
Chinese (zh)
Other versions
CN112989843A (en)
Inventor
陆凯
赵知纬
高维国
Current Assignee
Ping An Life Insurance Company of China Ltd
Original Assignee
Ping An Life Insurance Company of China Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Life Insurance Company of China Ltd filed Critical Ping An Life Insurance Company of China Ltd
Priority to CN202110285921.7A
Publication of CN112989843A
Application granted
Publication of CN112989843B

Classifications

    • G06F40/30 Semantic analysis (handling natural language data)
    • G06F16/3329 Natural language query formulation or dialogue systems
    • G06F16/3334 Selection or weighting of terms from queries, including natural language queries
    • G06F16/3344 Query execution using natural language analysis
    • G06F16/35 Clustering; Classification (of unstructured textual data)
    • G06N3/045 Combinations of networks
    • G06N3/084 Backpropagation, e.g. using gradient descent


Abstract

The application discloses an intention recognition method, which comprises the following steps: acquiring a user answer text, wherein the user answer text corresponds to a target intention question; inputting the user answer text into a target hierarchical weighting layer of an intention recognition model to obtain a weighted feature vector, wherein the intention recognition model comprises a plurality of hierarchical weighting layers and a plurality of classification layers, and the target hierarchical weighting layer corresponds to the target intention question of the user answer text; and inputting the weighted feature vector into a target classification layer of the intention recognition model to obtain a classification result of the user answer text, wherein the target classification layer also corresponds to the target intention question. By weighting and summing the M-dimensional feature vectors output by the feature extraction layer and feeding the resulting weighted feature vector into the classification layer corresponding to the target intention question, the method improves the accuracy of intention recognition.

Description

Intention recognition method, device, computing equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence (AI) technology, and in particular, to an intent recognition method, apparatus, computing device, and storage medium.
Background
Intent recognition is a subtask of natural language processing (NLP): in short, it classifies a sentence into one of a set of candidate intents. Intent recognition is common in task-oriented dialogue systems, AI interviews, and similar scenarios. For example, a dialogue system poses a question and the user answers with a sentence; the system must recognize the user's intent from that answer in order to reply or open the next round of dialogue. In a dialogue or AI interview scenario, the system often has many intention questions, and each intention question has a different candidate intent set, so intent recognition is performed separately on the user answer to each intention question.
In scenarios with multiple intention questions, each intention question corresponds to its own BERT model. This results in a large number of models in the system, heavy hardware resource consumption, and insufficient accuracy of intention recognition.
Disclosure of Invention
The embodiments of the present application provide an intention recognition method, apparatus, computing device, and storage medium that realize intention recognition in scenarios with multiple intention questions. By weighting and summing the M-dimensional feature vectors output by the feature extraction layer, the information extracted by the different network layers of the feature extraction layer is fully utilized, which improves the accuracy of intention recognition while requiring few models and reducing resource consumption.
In a first aspect, the present application provides an intent recognition method, the method comprising: acquiring a user answer text, wherein the user answer text corresponds to a target intention question; inputting the user answer text into a target hierarchical weighting layer of an intention recognition model to obtain a weighted feature vector, wherein the intention recognition model comprises a plurality of hierarchical weighting layers and a plurality of classification layers, and the target hierarchical weighting layer has a corresponding relation with a target intention question of the user answer text; and inputting the weighted feature vector into a target classification layer of the intention recognition model to obtain a classification result of the user answer text, wherein the target classification layer has a corresponding relation with a target intention question of the user answer text.
In a second aspect, the present application provides an intent recognition device, comprising: an acquisition module for acquiring a user answer text, wherein the user answer text corresponds to a target intention question; and a processing module for inputting the user answer text into a target hierarchical weighting layer of the intention recognition model to obtain a weighted feature vector, wherein the intention recognition model comprises a plurality of hierarchical weighting layers and a plurality of classification layers, and the target hierarchical weighting layer has a corresponding relation with the target intention question of the user answer text; the processing module is further configured to input the weighted feature vector into a target classification layer of the intention recognition model to obtain a classification result of the user answer text, wherein the target classification layer has a corresponding relation with the target intention question of the user answer text.
In a third aspect, the present application provides a computing device comprising a processor and a memory, which may be interconnected by a bus or may be integrated together. The processor executes code stored in memory to carry out the method as described in the first aspect.
In a fourth aspect, the present application provides a computer readable storage medium comprising a program or instructions which, when run on a computer device, cause the computer device to perform a method as described in the first aspect.
It can be seen that in the embodiments of the present application, the M-dimensional feature vectors of the feature extraction layer are input into the target hierarchical weighting layer for weighted summation, and the resulting weighted feature vector is input into the target classification layer to obtain the final classification result. Because different network layers of the feature extraction layer have different emphases and extract different information, the weighted summation makes full use of the feature characterization capability of the different layers and synthesizes the information they extract, improving the accuracy of intention recognition. Moreover, only one feature extraction layer is needed even in a scenario with multiple intention questions: user answers to different intention questions can be feature-extracted by the same feature extraction layer. This reduces the model scale, and since only one intention recognition model is needed, the number of models and the difficulty of model management are reduced and hardware resource consumption is low.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an intent recognition scenario provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of an intent recognition model provided by an embodiment of the present application;
FIG. 3 is a schematic flow chart of an intent recognition method according to an embodiment of the present application;
FIG. 4 is a schematic structural view of an intent recognition device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
It is noted that the terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
Natural language processing is a sub-domain of AI, and intent recognition is a subtask of natural language processing. Intent recognition is common in various human-machine interaction scenarios. Fig. 1 is a schematic diagram of an intent recognition scenario provided by an embodiment of the present application, in which a user is conducting an AI interview via an electronic device 100. The electronic device 100 displays an AI interview interface 101, which includes a robot interviewer 102, a text box 103, and a recording control 104. The robot interviewer 102 represents the character image of a virtual interviewer, which may change according to the current question or the user's answer; the text box 103 displays the text of the current interview question, reply content, and the like; the recording control 104 starts recording so that the user's answer speech can be captured. For example, in response to the user clicking the recording control 104, the electronic device begins recording and then obtains a segment of user answer speech for the current interview question. It will be appreciated that, in addition to the voice input described above, the user may also directly input a piece of answer text for the current interview question; the present application is not particularly limited. It should be noted that fig. 1 is provided only to aid understanding: the electronic device 100 may be, besides a mobile phone, an electronic device such as a notebook computer or tablet computer, and the AI interview interface 101 may include more or fewer controls; neither is limited by this application.
In the AI interview scenario shown in fig. 1, the robot interviewer 102 asks the user a number of questions one by one and displays the current interview question in the text box 103; various user answers may be received, each corresponding to a particular question. To give an accurate reply to different user answers, the system needs to recognize the user's intention. Questions requiring intent recognition are referred to as intention questions, and each intention question corresponds to a candidate intent set that includes a plurality of candidate intents. Briefly, intent recognition classifies the user answer text into one of the candidate intents of an intention question so that subsequent operations can be performed, such as giving a corresponding reply to the user's answer, with the reply content displayed in the text box 103. In an intent recognition scenario with multiple intention questions, one intent recognition model is usually used per intention question; if there are many intention questions, there are correspondingly many models, which are difficult to manage. An intent recognition model typically needs to run on a graphics processing unit (GPU), but a GPU has limited video memory and can run only a limited number of models, so a large number of models requires a large number of GPUs, resulting in excessive hardware resource consumption and excessive overall cost for the intent recognition system. In addition, conventional intent recognition methods are often not accurate enough, which degrades the user experience.
In order to solve the above problems, the embodiments of the present application provide an intent recognition method: the obtained user answer text is input into an intention recognition model, which finally outputs a classification result for that text. The method performs intention recognition in scenarios with multiple intention questions, requires few models, and improves the accuracy of intention recognition. Fig. 2 is a schematic diagram of an intent recognition model, which includes a feature extraction layer, hierarchical weighting layers, and classification layers, in accordance with an embodiment of the present application.
The feature extraction layer mainly extracts grammatical and semantic information from the user answer text. It comprises a plurality of network layers, each of which outputs a feature vector of one dimension. It will be appreciated that, since different network layers of the feature extraction layer have different emphases, the information they extract (represented by feature vectors) also varies. In a possible embodiment, the feature extraction layer may be a BERT model. BERT, a language model released by the Google AI team, stands for bidirectional encoder representations from Transformers. Its main ideas are: a Transformer network is adopted as the basic model structure, and pre-training is performed on a large-scale unsupervised corpus through two pre-training tasks, a masked language model (MLM) and next sentence prediction (NSP), to obtain a pre-trained BERT model; the pre-trained model is then fine-tuned on downstream NLP tasks. For ease of understanding, the feature extraction layer is described below taking the BERT model as an example.
As shown in fig. 2, only one BERT model is used in the whole intention recognition model; that is, the user answer texts of multiple intention questions share one BERT model for feature extraction, which reduces the number of BERT models as well as hardware consumption and cost. BERT comes in two sizes: BERT-Base with 12 Transformer encoder layers and BERT-Large with 24 Transformer encoder layers. The BERT model extracts semantic information by stacking multiple Transformer encoder layers, and each stacked layer generates a feature vector containing different information, giving the model strong feature characterization capability. However, the conventional way of using a BERT model takes only the feature vector output by the last layer, resulting in lower accuracy of intention recognition. The embodiments of the present application perform a weighted summation over the feature vectors of the different Transformer encoder layers of the BERT model, making full use of the characterization capability of its different layers and synthesizing the information extracted by the different network layers, so that intention recognition accuracy is higher.
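The layer-by-layer behavior described above can be sketched with a toy stand-in for the encoder stack. This is a minimal illustration, not BERT itself: each "layer" below is a random linear map with a tanh nonlinearity, standing in for a Transformer encoder layer, to show that every stacked layer emits its own feature vector rather than only the last one. (With a real BERT model, the HuggingFace Transformers library exposes the same per-layer outputs via `output_hidden_states=True`.)

```python
import numpy as np

rng = np.random.default_rng(0)

M, d = 12, 8  # 12 stacked encoder layers, hidden size 8 (toy values)

# Hypothetical stand-in for a Transformer encoder layer: a fixed random
# linear map with a tanh nonlinearity. A real BERT layer is far richer;
# this only illustrates that each stacked layer emits its own vector.
layer_mats = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(M)]

def encode_all_layers(x):
    """Return the feature vector produced by every layer, not just the last."""
    feats = []
    h = x
    for W in layer_mats:
        h = np.tanh(W @ h)
        feats.append(h)
    return np.stack(feats)          # shape (M, d): one row per layer

x = rng.normal(size=d)              # toy embedding of the answer text
R = encode_all_layers(x)            # all M per-layer feature vectors
last_only = R[-1]                   # conventional use: last layer only
```

The patent's approach keeps the full `R` rather than `last_only`, so the later weighting layer can combine information from every depth.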
Referring to the intent recognition model in FIG. 2, in a scenario with multiple intent questions, the intent recognition model includes multiple hierarchical weighting layers and multiple classification layers, with each intent question having a corresponding hierarchical weighting layer and a corresponding classification layer. The hierarchical weighting layer is used for carrying out weighted summation on the feature vectors output by different network layers in the feature extraction layer to obtain weighted feature vectors. It can be appreciated that the obtained weighted feature vectors integrate the information extracted by the different layers, so that the accuracy of intent recognition can be improved. The classification layer takes the weighted feature vector output by the hierarchical weighting layer as input, and then outputs the classification result of the user answer text. Since a classification layer corresponds to an intent question and an intent question corresponds to a candidate intent set, the classification layer is responsible for classifying the user answer text corresponding to a certain intent question into a certain candidate intent of the candidate intent set.
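The per-question routing just described can be sketched as follows. This is a hedged toy sketch, not the patent's trained model: the question IDs and candidate sets are hypothetical, the parameters are random and untrained, and the shared feature extractor appears only as its output R.

```python
import numpy as np

rng = np.random.default_rng(1)
M, d = 12, 8  # number of extractor layers and hidden size (toy values)

# Hypothetical candidate intent sets, loosely following the AI-interview example.
candidate_sets = {
    "q1": ["high income", "free time", "development opportunity"],
    "q2": ["know", "not know"],
}

# One (layer-weight vector, linear classifier) pair per intention question;
# the feature extractor itself is shared and is not duplicated per question.
heads = {
    q: {"s": np.full(M, 1.0 / M),                 # layer weights S_j
        "W": rng.normal(size=(len(c), d)) * 0.1,  # classifier weights
        "b": np.zeros(len(c))}
    for q, c in candidate_sets.items()
}

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def recognize(question_id, R):
    """Route the shared M-layer features R (M x d) through the head
    corresponding to the target intention question."""
    h = heads[question_id]
    e = h["s"] @ R                       # weighted sum over layers -> (d,)
    probs = softmax(h["W"] @ e + h["b"])
    return candidate_sets[question_id][int(np.argmax(probs))], probs

R = rng.normal(size=(M, d))              # stand-in for extractor output
intent, probs = recognize("q2", R)
```

Each head is small compared with the shared extractor, which is why adding intention questions is cheap under this design.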
Assuming that m intention questions require intent recognition in the AI interview scenario of fig. 1, three example intention questions and their corresponding candidate intent sets are given below:
Intention question 1: What do you want to seek in a job?
Candidate intent set 1: high income, free time, development opportunities, an easy job, personal development.
Intention question 2: Do you know about our company?
Candidate intent set 2: know, not know.
Intention question 3: I understand that you have done sales work before; what kind of sales was it, specifically?
Candidate intent set 3: clothing, finance, automobiles, real estate, and daily necessities.
……
It can be appreciated that the number, content, and division of candidate intents in the candidate intent set of each intention question can be reasonably designed for the specific application scenario; the application does not limit this. Assume that the intent recognition model of fig. 2 performs intent recognition for the m intention questions, so there are m hierarchical weighting layers and m classification layers in total. The intention questions, hierarchical weighting layers, and classification layers are in one-to-one correspondence: after the user answer text of a given intention question passes through the feature extraction layer to obtain the feature vectors of multiple layers, those feature vectors are input only into the hierarchical weighting layer corresponding to that intention question, and the output of that hierarchical weighting layer is input only into the classification layer corresponding to that intention question, which then produces the classification result of the user answer text. For example, as shown in fig. 2, the dialogue system currently presents intention question 1 to the user and obtains a section of user answer text A1 for intention question 1. A1 is input into the BERT model for feature extraction, which outputs A1', where A1' represents the feature vectors output by multiple layers (all Transformer encoder layers, or a subset of them) of the BERT model. Since A1 corresponds to intention question 1, A1' is input only into the hierarchical weighting layer corresponding to intention question 1. That hierarchical weighting layer outputs the weighted feature vector, which is input into the classification layer of intention question 1, and the classification result corresponding to user answer text A1 is output.
The processing of the other user answer texts A2, B1, Y1, and so on is the same: after the BERT model extracts their feature vectors, the vectors are input only into the corresponding hierarchical weighting layer and classification layer. It can be understood that the obtained user answer text may carry a label identifying its target intention question; this label determines which hierarchical weighting layer and classification layer in the intention recognition model are used. Specifically, the user answer text is input into the BERT model to obtain the feature vectors of multiple layers; the label selects the hierarchical weighting layer corresponding to the target intention question, which produces the weighted feature vector; and the label likewise selects the classification layer corresponding to the target intention question, which outputs the classification result.
The specific process of inputting the obtained user answer text into the intention recognition model and finally obtaining the classification result of the user answer text is described below.
First, a user answer text is acquired, where the user answer text corresponds to a target intention question. The user may enter the answer text via the electronic device of fig. 1 or the computing device 500 of fig. 5 (described in more detail below). In a possible embodiment, user answer speech is obtained first and then converted into the user answer text. The acquired user answer text is input into the BERT model, which outputs the feature vectors of multiple layers; each Transformer encoder layer of the BERT model outputs a feature vector of one dimension. The BERT model may use 12 Transformer encoder layers or 24, and the feature vectors of multiple layers may be those of several Transformer encoder layers or of all of them. For convenience of description, the following uses the feature vectors output by 12 Transformer encoder layers. It should be noted that the user answer text must be segmented before being input into the BERT model. For example, suppose a piece of user answer text for intention question 2, "我了解你们公司。" ("I know your company."), is obtained. This answer text is segmented character by character into 我/了/解/你/们/公/司/。; that is, the user answer text is divided into individual Chinese characters. In a possible embodiment, the user answer text is then converted into an ID text, in which each character of the user answer text is represented by its corresponding number in a character table. The answer is thus converted into the ID text [x1, x2, …, xk, …, xn], where xk is the number of the k-th character of the user answer text in the character table and k is a positive integer less than or equal to n; the ID text obtained here is [2769, 749, 6237, 872, 812, 1062, 1385, 511].
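The character-level segmentation and ID conversion can be sketched as follows, assuming the example answer is the Chinese sentence 我了解你们公司。 ("I know your company."). Note the character table here is a toy one built from the sentence itself; a real BERT vocabulary file assigns fixed IDs (such as the [2769, …, 511] sequence above).

```python
# Hypothetical character table: real BERT ships a vocabulary file mapping
# each character to a fixed ID; here we build a toy table just to show the
# text -> ID conversion step.
answer = "我了解你们公司。"          # "I know your company."

chars = list(answer)                 # character-level segmentation
vocab = {ch: i for i, ch in enumerate(sorted(set(chars)))}  # toy IDs

ids = [vocab[ch] for ch in chars]    # the ID text [x_1, ..., x_n]
```

Swapping the toy `vocab` for the real character table yields exactly the ID text described in the example.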
Through the above operations, the feature vectors Rj of the 12 layers are obtained, where j is a positive integer in the interval [1, 12], j denotes the j-th layer of the BERT model, and Rj is the feature vector corresponding to the j-th layer. Then, the feature vectors of the 12 layers are input into the target hierarchical weighting layer of the intention recognition model to obtain the weighted feature vector, where the intention recognition model comprises a plurality of hierarchical weighting layers and a plurality of classification layers, and the target hierarchical weighting layer is the hierarchical weighting layer corresponding to the target intention question. Specifically, the weighted feature vector E is obtained by weighted summation of the feature vectors of the 12 layers through formula (1):

E = S1·R1 + S2·R2 + … + S12·R12    (1)

where Sj is the weight of the j-th layer; these weights are adjusted during training.
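The weighted summation just described is simply E = Σj Sj·Rj over the 12 layers. A small numpy sketch with random stand-in features follows; normalizing the weights to sum to 1 is an assumption here, not stated in the text.

```python
import numpy as np

M, d = 12, 8                  # 12 layers, toy hidden size
rng = np.random.default_rng(2)
R = rng.normal(size=(M, d))   # R_j: feature vector of encoder layer j
S = rng.random(M)             # S_j: learned weight of layer j
S = S / S.sum()               # normalize so the weights sum to 1 (assumption)

# Weighted summation over layers: E = sum_j S_j * R_j
E = sum(S[j] * R[j] for j in range(M))
```

The explicit sum is equivalent to the matrix product `S @ R`, which is how the weighting layer would typically be implemented.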
Then, the weighted feature vector output by the hierarchical weighting layer is input into the target classification layer of the intention recognition model, which outputs the classification result of the user answer text; the target classification layer is the classification layer corresponding to the target intention question. In a possible embodiment, the classification layer may be a convolutional neural network (CNN). It can be understood that a CNN is common in practical applications because of its strong classification capability and comparatively light structure, but other neural networks can also serve as the classification layers of the multi-question intention recognition model and output the corresponding classification results; for example, the classification layer may directly adopt a structurally simpler linear softmax classifier. Continuing the previous example, after the above operations are performed on the user answer text for intention question 2, "I know your company.", a classification result is obtained; this result may be the probabilities of the different candidate intents in candidate intent set 2. Suppose the output classification result is 0.9 for "know" and 0.1 for "not know"; the user answer text is then finally classified into the candidate intent "know". It should be appreciated that the subsequent operations differ between candidate intents, and different branches may exist in the dialogue flow. After the user answer text is recognized as "know", a reply may be given, for example one thanking the user for their recognition of the company and asking a follow-up question.
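In other words, the classification result in the example is a probability distribution over candidate intent set 2, and the final intent is its argmax:

```python
import numpy as np

candidates = ["know", "not know"]   # candidate intent set 2
probs = np.array([0.9, 0.1])        # example classification-layer output

# The answer text is classified into the highest-probability candidate intent.
intent = candidates[int(np.argmax(probs))]
```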
For other user answer texts to intention question 2, if the answer is recognized as "not know", the intent recognition system may reply "Let me introduce our company to you" and play a company introduction video; the application does not limit the content or form of the reply.
The intent recognition model must be trained before it is used for intent recognition. The embodiments of the present application provide a model training method: first, a first sample set is obtained, where the first sample set comprises known M-dimensional feature vectors and corresponding known text classification results; an intent recognition model is then obtained by training a first neural network with the first sample set. In a possible embodiment, a known user answer text is input into a BERT model to obtain the known M-dimensional feature vectors, where the BERT model comprises M network layers and each of the M network layers outputs a feature vector of one dimension, M being a positive integer. Because the BERT model is widely used in engineering, a pre-trained BERT model can be used directly as the feature extractor, and its parameters are not adjusted during the whole model training process. In a possible embodiment, training the first neural network with the first sample set to obtain the intent recognition model comprises: inputting the known M-dimensional feature vectors into the hierarchical weighting layer of the first neural network for weighted summation to obtain a weighted feature vector; inputting the weighted feature vector into the classification layer of the first neural network to obtain a predicted value; computing the loss from the difference between the predicted value and the known text classification result; and back-propagating the loss through the first neural network, adjusting the parameters of the hierarchical weighting layer and the classification layer via the computed gradients to obtain the hierarchical weighting layer and the classification layer of the intent recognition model.
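The training step just described, a forward pass through the weighting and classification layers, a loss against the known result, then backpropagation into those two layers only while the frozen extractor features are left untouched, can be sketched in numpy. This is a toy illustration under stated assumptions: a linear-softmax classifier stands in for the CNN, all sizes are toy values, and only one sample is used.

```python
import numpy as np

rng = np.random.default_rng(3)
M, d, c = 12, 8, 2                 # layers, hidden size, candidate intents (toy)

R = rng.normal(size=(M, d))        # frozen BERT features for one known sample
y = np.array([1.0, 0.0])           # one-hot known classification result

s = np.full(M, 1.0 / M)            # hierarchical weighting layer parameters
W = rng.normal(size=(c, d)) * 0.1  # classification layer (linear stand-in
b = np.zeros(c)                    # for the CNN described in the text)

def forward(s, W, b):
    e = s @ R                                  # weighted feature vector
    z = W @ e + b
    p = np.exp(z - z.max()); p = p / p.sum()   # softmax prediction
    return e, p

def loss_of(p):
    return -float(np.sum(y * np.log(p + 1e-12)))  # cross-entropy loss

lr = 0.1
e, p = forward(s, W, b)
before = loss_of(p)

# Backpropagation: gradients flow only into the weighting and
# classification layers; the extractor features R are never updated.
dz = p - y                  # dL/dz for softmax + cross-entropy
dW = np.outer(dz, e)        # classifier weight gradient
db = dz
de = W.T @ dz               # gradient w.r.t. the weighted feature vector
ds = R @ de                 # dL/ds_j = de . R_j

s = s - lr * ds
W = W - lr * dW
b = b - lr * db
after = loss_of(forward(s, W, b)[1])
```

One gradient step reduces the loss on this sample, while `R` (the frozen extractor output) is untouched, mirroring the patent's use of a fixed pre-trained BERT.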
It should be noted that, in the training method described above, training is performed for a certain target intention question. For example, when the sample set corresponding to intention question 1 is used, the parameters of the hierarchical weighting layer and the classification layer corresponding to intention question 1 in the first neural network are adjusted during training, and the hierarchical weighting layers and classification layers corresponding to other intention questions are not involved. The training process for the hierarchical weighting layers and classification layers corresponding to other intention questions is the same; that is, the hierarchical weighting layers and classification layers corresponding to different intention questions are trained separately. For example, the hierarchical weighting layer and classification layer corresponding to intention question 1 are trained first, and then the sample set of intention question 2 is used to perform the training operation on the hierarchical weighting layer and classification layer corresponding to intention question 2, updating the corresponding network parameters. It should be understood that, in order to perform intention recognition on a plurality of different intention questions, the intention recognition model provided in the embodiment of the present application includes a plurality of different hierarchical weighting layers and a plurality of different classification layers, and the parameters of the hierarchical weighting layers and classification layers corresponding to different intention questions are generally different.
The embodiment of the application also provides another model training method, which comprises the steps of firstly obtaining a second sample set, wherein the second sample set comprises known user answer texts and corresponding known text classification results, and then training a second neural network by using the second sample set to obtain an intention recognition model. It should be noted that each intention question has its own sample set, which includes the known user answer texts for that intention question and the corresponding classification results, and the second sample set includes the sample sets respectively corresponding to a plurality of intention questions.
In one possible embodiment, a subset of the second sample set is obtained, wherein the subset corresponds to a target intention question; the known user answer text in the subset is input into a feature extraction layer of the second neural network, and an M-dimensional feature vector is output; the M-dimensional feature vector is input into a target hierarchical weighting layer of the second neural network to obtain a weighted feature vector, wherein the second neural network comprises a feature extraction layer (a BERT model), a plurality of hierarchical weighting layers and a plurality of classification layers, and the target hierarchical weighting layer is the hierarchical weighting layer corresponding to the target intention question; the weighted feature vector is input into a target classification layer of the second neural network to obtain a predicted value, wherein the target classification layer is the classification layer corresponding to the target intention question among the plurality of classification layers of the second neural network; a loss is obtained from the gap between the predicted value and the known text classification result, the loss is back-propagated through the second neural network, and the parameters of the feature extraction layer, the target hierarchical weighting layer and the target classification layer of the second neural network are adjusted. The above operations are performed repeatedly until a model training target is reached, for example, the maximum number of training iterations is reached or the loss falls below a set value. It can be appreciated that this method fine-tunes the BERT model on specific intention recognition tasks; in other words, the parameters of the BERT model are adjusted through the sample sets of a plurality of intention questions, so that the obtained intention recognition model tends to perform better and achieve higher intention recognition accuracy.
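The routing structure described above — one shared feature extraction layer feeding a separate (hierarchical weighting layer, classification layer) pair per intention question — can be sketched as follows. All names and dimensions are illustrative assumptions; the `extract_features` stub stands in for the shared BERT model:

```python
import numpy as np

rng = np.random.default_rng(1)
M, H, C = 12, 768, 2  # assumed: 12 layers, hidden size 768, 2 classes per question

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def extract_features(text):
    # Stand-in for the shared BERT feature extraction layer:
    # one H-dimensional vector per network layer.
    return rng.standard_normal((M, H))

# One (hierarchical weighting layer, classification layer) head per intention
# question; the feature extraction layer is shared across all of them.
heads = {
    q: {"layer_logits": np.zeros(M),
        "W": rng.standard_normal((H, C)) * 0.01,
        "b": np.zeros(C)}
    for q in ["intent_question_1", "intent_question_2"]
}

def predict(text, target_question):
    head = heads[target_question]          # route to the target head
    feats = extract_features(text)         # shared extractor
    alpha = softmax(head["layer_logits"])  # target hierarchical weighting layer
    weighted = alpha @ feats
    return softmax(weighted @ head["W"] + head["b"])  # target classification layer

probs = predict("I already have insurance", "intent_question_2")
```

During training, a batch from the subset of the target intention question would update the shared extractor plus only that question's head, which is why only one extractor is needed no matter how many intention questions there are.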
Because the BERT model learns from the sample sets of a plurality of intention questions, and similar intention questions can share useful information, the BERT model obtained after fine-tuning is better suited to specific intention recognition tasks, and the feature vectors it extracts are more accurate and better meet the needs of downstream classification tasks, thereby improving intention recognition accuracy. It should be noted that the structures of the first neural network and the second neural network may be the same or different.
Referring to fig. 3, fig. 3 is a flowchart of an intent recognition method according to an embodiment of the present application, the method includes the following steps:
s301: and acquiring a user answer text.
Wherein the user answer text corresponds to a target intent question.
In one possible embodiment, before obtaining the user answer text, the method further comprises: acquiring user answer speech, and converting the user answer speech into the user answer text. Since the answer input by the user may be voice data or text data, when the answer is voice data, it needs to be converted into text data before being input into the intention recognition model provided by the embodiment of the application for processing. In a possible embodiment, the user answer text is converted into an ID (identification) sequence, wherein each character in the user answer text is represented by its corresponding number in a character table. The BERT model performs this conversion using its dedicated character table, and the resulting ID sequence is convenient for the BERT model to process.
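A toy sketch of this text-to-ID conversion (the vocabulary, token numbers, and whitespace tokenization are all assumptions; a real BERT model ships its own character table with special tokens and, for Chinese, tokenizes per character):

```python
# Assumed toy vocabulary; IDs 0-3 mimic BERT's special tokens.
vocab = {"[PAD]": 0, "[UNK]": 1, "[CLS]": 2, "[SEP]": 3,
         "i": 10, "have": 11, "insurance": 12}

def to_ids(text, vocab):
    # Map each token to its number in the character table; unknown tokens -> [UNK].
    tokens = ["[CLS]"] + text.lower().split() + ["[SEP]"]
    return [vocab.get(t, vocab["[UNK]"]) for t in tokens]

ids = to_ids("I have insurance", vocab)  # [2, 10, 11, 12, 3]
```

The resulting integer sequence is what the feature extraction layer actually consumes.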
In one possible embodiment, before obtaining the user answer text, the method further comprises: acquiring a first sample set, wherein the first sample set comprises known M-dimensional feature vectors and corresponding known text classification results; training the first neural network by using the first sample set to obtain an intention recognition model.
In one possible embodiment, training the first neural network using the first set of samples to obtain the intent recognition model includes: inputting the known M-dimensional feature vector into a hierarchical weighting layer of the first neural network for weighted summation to obtain a weighted feature vector; inputting the weighted feature vector into a classification layer of the first neural network to obtain a predicted value; obtaining loss through the difference between the predicted value and the known text classification result, carrying out back propagation on the first neural network through the loss, and adjusting parameters of a hierarchical weighting layer and a classification layer of the first neural network to obtain the hierarchical weighting layer and the classification layer of the intention recognition model. Wherein the parameters include weights and biases.
In one possible embodiment, before obtaining the user answer text, the method further comprises: acquiring a second sample set, wherein the second sample set comprises known user answer texts and corresponding known text classification results; an intent recognition model is obtained after training the second neural network using the second sample set.
In one possible embodiment, the intent recognition model includes a feature extraction layer, a hierarchical weighting layer, and a classification layer; training the second neural network by using the second sample set to obtain an intention recognition model, including: obtaining a subset of the second sample set, wherein the subset corresponds to a target intent problem; inputting the known user answer text in the subset into a feature extraction layer of the second neural network, and outputting M-dimensional feature vectors; inputting the M-dimensional feature vector into a target hierarchical weighting layer of a second neural network to obtain a weighted feature vector, wherein the second neural network comprises a plurality of hierarchical weighting layers and a plurality of classification layers, and the target hierarchical weighting layers have a corresponding relation with a target intention problem; inputting the weighted feature vector into a target classification layer of the second neural network to obtain a predicted value, wherein the target classification layer has a corresponding relation with a target intention problem; obtaining loss from the gap between the predicted value and the known text classification result, back-propagating the second neural network through the loss, and adjusting parameters of a feature extraction layer, a target hierarchical weighting layer and a target classification layer of the second neural network.
S302: and inputting the text answered by the user into a target hierarchical weighting layer of the intention recognition model to obtain a weighted feature vector.
The intention recognition model comprises a plurality of layering weighting layers and a plurality of classification layers, and the target layering weighting layers and target intention questions of the user answer text have corresponding relations.
In one possible embodiment, the user answer text is input into a feature extraction layer and an M-dimensional feature vector is output, wherein the feature extraction layer comprises M network layers, each of the M network layers outputting a feature vector of one dimension, and M is a positive integer. For how the M-dimensional feature vectors are weighted summed in the hierarchical weighting layer, please refer to the related description above.
In one possible embodiment, the feature extraction layer is a BERT model.
S303: and inputting the weighted feature vectors into a target classification layer of the intention recognition model to obtain a classification result of the user answer text.
The target classification layer has a corresponding relation with the target intention questions of the user answer text.
In one possible embodiment, the classification layer is any one of a convolutional neural network, a softmax classifier.
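As an illustrative sketch of the softmax-classifier variant (the class labels, dimensions, and parameter values are assumptions for demonstration), the target classification layer maps the weighted feature vector to class probabilities and the classification result is the highest-probability class:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

labels = ["aware", "unaware"]  # assumed classes for an awareness-type intention question
H = 768                        # assumed hidden size
rng = np.random.default_rng(2)

weighted_vec = rng.standard_normal(H)  # output of the target hierarchical weighting layer
W = rng.standard_normal((H, len(labels))) * 0.01
b = np.zeros(len(labels))

probs = softmax(weighted_vec @ W + b)            # class probabilities
result = labels[int(np.argmax(probs))]           # classification result of the answer text
```

A convolutional classification layer would replace the single linear map with convolution plus pooling, but the final softmax-and-argmax step is the same.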
It can be seen that in the embodiment of the present application, the M-dimensional feature vector of the feature extraction layer is input into the target hierarchical weighting layer for weighted summation, and the resulting weighted feature vector is then input into the target classification layer to obtain the classification result of the user answer text. Because different network layers of the feature extraction layer emphasize different aspects and extract different information, the weighted summation makes full use of the feature characterization capability of the different layers of the feature extraction layer, improving the accuracy of intention recognition. In addition, in this method, only one feature extraction layer is needed in a scenario with a plurality of intention questions: the same feature extraction layer can extract features from user answers to different intention questions, which reduces the scale of the intention recognition model, lowers the difficulty of model management, and consumes fewer hardware resources. Furthermore, only one intention recognition model is needed for the plurality of intention questions, so the number of models is small.
Fig. 4 is a schematic structural diagram of an intent recognition device 400 according to an embodiment of the present application, and as shown in fig. 4, the intent recognition device 400 includes an acquisition module 401 and a processing module 402.
An obtaining module 401, configured to obtain a user answer text, where the user answer text corresponds to a target intention question;
the processing module 402 is configured to input the user answer text into a target hierarchical weighting layer of an intent recognition model, and obtain a weighted feature vector, where the intent recognition model includes a plurality of hierarchical weighting layers and a plurality of classification layers, and the target hierarchical weighting layer has a corresponding relationship with a target intent question of the user answer text;
the processing module 402 is further configured to input the weighted feature vector into a target classification layer of the intent recognition model, to obtain a classification result of the user answer text, where the target classification layer has a corresponding relationship with a target intent question of the user answer text.
In one possible embodiment, the processing module 402 is specifically configured to: inputting a user answer text into a feature extraction layer and outputting M-dimensional feature vectors, wherein the feature extraction layer comprises M network layers, each network layer in the M network layers outputs a feature vector with one dimension, and M is an integer; and inputting the M-dimensional feature vector into a target hierarchical weighting layer of the intention recognition model to obtain a weighted feature vector.
In a possible embodiment, the processing module 402 is further configured to obtain a first sample set, where the first sample set includes a known M-dimensional feature vector and a corresponding known text classification result; training the first neural network by using the first sample set to obtain an intention recognition model.
In a possible embodiment, the processing module 402 is further configured to input the known M-dimensional feature vector into a hierarchical weighting layer of the first neural network to perform weighted summation, so as to obtain a weighted feature vector; inputting the weighted feature vector into a classification layer of the first neural network to obtain a predicted value; obtaining loss through the difference between the predicted value and the known text classification result, carrying out back propagation on the first neural network through the loss, and adjusting parameters of a hierarchical weighting layer and a classification layer of the first neural network to obtain the hierarchical weighting layer and the classification layer of the intention recognition model.
In a possible embodiment, the processing module 402 is further configured to obtain a second sample set, where the second sample set includes known user answer text and corresponding known text classification results; training the second neural network by using the second sample set to obtain an intention recognition model.
In a possible embodiment, the processing module 402 is further configured to obtain a subset of the second sample set, where the subset corresponds to a target intent problem; inputting the known user answer text in the subset into a feature extraction layer of the second neural network, and outputting M-dimensional feature vectors; inputting the M-dimensional feature vector into a target hierarchical weighting layer of a second neural network to obtain a weighted feature vector, wherein the second neural network comprises a plurality of hierarchical weighting layers and a plurality of classification layers, and the target hierarchical weighting layers have a corresponding relation with a target intention problem; inputting the weighted feature vector into a target classification layer of the second neural network to obtain a predicted value, wherein the target classification layer has a corresponding relation with a target intention problem; obtaining loss from the gap between the predicted value and the known text classification result, back-propagating the second neural network through the loss, and adjusting parameters of a feature extraction layer, a target hierarchical weighting layer and a target classification layer of the second neural network.
In a possible embodiment, the obtaining module 401 is further configured to obtain a user answer speech, and the processing module 402 is further configured to convert the user answer speech into a user answer text.
The modules of the intention recognition apparatus 400 are specifically used to implement steps S301 to S303 of the embodiment of the intention recognition method of fig. 3, and for details, please refer to the above related description.
Fig. 5 is a schematic structural diagram of a computing device 500 according to an embodiment of the present application. The computing device 500 may be a computing device such as a notebook computer, a tablet computer, and a cloud server, which is not limited in this application.
The computing device 500 includes: a processor 501, a communication interface 502 and a memory 503, the computing device being specifically adapted to carry out steps S301 to S303 in the above-described embodiment of the intention recognition method. The processor 501, the communication interface 502, and the memory 503 may be connected to each other via an internal bus 504, or may communicate via other means such as wireless transmission. In this embodiment, connection via the bus 504 is taken as an example. The bus 504 may be a peripheral component interconnect standard (peripheral component interconnect, PCI) bus, an extended industry standard architecture (extended industry standard architecture, EISA) bus, or the like. The bus 504 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 5, but this does not mean that there is only one bus or only one type of bus.
The processor 501 may consist of at least one general-purpose processor, such as a central processing unit (central processing unit, CPU), or a combination of a CPU and hardware chips. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (programmable logic device, PLD), or a combination thereof. The PLD may be a complex programmable logic device (complex programmable logic device, CPLD), a field-programmable gate array (field-programmable gate array, FPGA), generic array logic (generic array logic, GAL), or any combination thereof. The processor 501 executes various types of digitally stored instructions, such as software or firmware programs stored in the memory 503, that enable the computing device 500 to provide a variety of services.
The memory 503 is used for storing program code, whose execution is controlled by the processor 501 to perform the processing steps in the above-described embodiment of the intention recognition method. The program code may include one or more software modules, which may be the modules provided in the embodiment of fig. 4, such as the obtaining module 401 and the processing module 402; for specific implementation details, reference is made to the foregoing description.
It should be noted that, the present embodiment may be implemented by a general physical server, for example, an ARM server or an X86 server, or may be implemented by a virtual machine implemented by combining an NFV technology with a general physical server, where the virtual machine refers to a complete computer system that is simulated by software and has a complete hardware system function and operates in a completely isolated environment, and the application is not limited specifically. It should be understood that the computing device shown in fig. 5 may also be a computer cluster of at least one server, which is not specifically limited in this application.
The memory 503 may include volatile memory (volatile memory), such as random access memory (random access memory, RAM); the memory 503 may also include non-volatile memory (non-volatile memory), such as read-only memory (read-only memory, ROM), flash memory (flash memory), a hard disk drive (hard disk drive, HDD), or a solid state drive (solid state drive, SSD); the memory 503 may also include a combination of the above. The memory 503 may store program code, specifically including program code for performing the steps described in the embodiment of fig. 3, which will not be repeated here.
The communication interface 502 may be an internal interface (e.g., a high-speed serial computer expansion bus (peripheral component interconnect express, PCIe) bus interface), a wired interface (e.g., an Ethernet interface), or a wireless interface (e.g., a cellular network interface or a wireless local area network interface) for communicating with other devices or modules.
Optionally, the computing device 500 may further include an input/output interface 505, where the input/output interface 505 is connected to an input/output device, for receiving input information, outputting operation results, such as inputting user answer text, and outputting classification results corresponding to the user answer text.
It should be noted that fig. 5 is merely one possible implementation of the embodiments of the present application, and in practical applications, the computing device 500 may further include more or fewer components, which is not limited herein. For details not shown or described in the embodiment of the present application, reference may be made to the foregoing description of the embodiment of fig. 3, which is not repeated here.
Embodiments of the present application also provide a computer-readable storage medium having instructions stored therein that, when executed on a processor, implement the method flow shown in fig. 3.
Embodiments of the present application also provide a computer program product, which when run on a processor, implements the method flow shown in fig. 3.
Those skilled in the art will appreciate that implementing all or part of the above-described methods in accordance with the embodiments may be accomplished by way of a computer program stored on a computer-readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (read-only memory, ROM), a random access memory (random access memory, RAM), or the like.
The above disclosure is illustrative of a preferred embodiment of the present application and, of course, should not be taken as limiting the scope of the invention, and those skilled in the art will recognize that all or part of the above embodiments can be practiced with modification within the spirit and scope of the appended claims.

Claims (7)

1. A method of intent recognition, the method comprising:
acquiring a first sample set, wherein the first sample set comprises a known M-dimensional feature vector and a corresponding known text classification result, and corresponds to a first intention problem;
The known M-dimensional feature vector is input into a hierarchical weighting layer of a first neural network for weighted summation to obtain a weighted feature vector, and the weighted feature vector is input into a classification layer of the first neural network to obtain a predicted value;
obtaining a loss from a gap between the predicted value and the known text classification result, back-propagating the first neural network through the loss, adjusting parameters of the hierarchical weighting layer and the classification layer of the first neural network, and obtaining a first hierarchical weighting layer and a first classification layer of an intention recognition model, wherein the intention recognition model comprises a plurality of hierarchical weighting layers and a plurality of classification layers, the hierarchical weighting layers and the classification layers corresponding to different intention problems in the intention recognition model are trained separately, and the first hierarchical weighting layer and the first classification layer correspond to the first intention problem;
acquiring a user answer text, wherein the user answer text corresponds to a target intention question;
inputting the user answer text into a feature extraction layer of the intention recognition model, and outputting M-dimensional feature vectors, wherein the feature extraction layer comprises M network layers, each network layer in the M network layers outputs a feature vector of one dimension, and M is an integer;
Inputting the M-dimensional feature vector into a target hierarchical weighting layer of the intention recognition model to obtain the weighted feature vector, wherein the target hierarchical weighting layer has a corresponding relation with a target intention question of the user answer text;
and inputting the weighted feature vector into a target classification layer of the intention recognition model to obtain a classification result of the user answer text, wherein the target classification layer has a corresponding relation with a target intention question of the user answer text.
2. The method of claim 1, wherein prior to the obtaining user answer text, the method further comprises:
obtaining a second sample set, wherein the second sample set comprises known user answer texts and corresponding known text classification results;
training a second neural network by using the second sample set to obtain the intention recognition model.
3. The method of claim 2, wherein the intent recognition model includes a feature extraction layer, a hierarchical weighting layer, and a classification layer;
the training the second neural network using the second sample set to obtain the intention recognition model includes:
Obtaining a subset of the second sample set, wherein the subset corresponds to a target intent problem;
inputting the known user answer text in the subset into a feature extraction layer of the second neural network, and outputting M-dimensional feature vectors;
inputting the M-dimensional feature vector into a target hierarchical weighting layer of the second neural network to obtain a weighted feature vector, wherein the second neural network comprises a plurality of hierarchical weighting layers and a plurality of classification layers, and the target hierarchical weighting layer has a corresponding relation with the target intention problem;
inputting the weighted feature vector into a target classification layer of the second neural network to obtain a predicted value, wherein the target classification layer has a corresponding relation with the target intention problem;
obtaining a loss from a gap between the predicted value and the known text classification result, back-propagating the second neural network through the loss, and adjusting parameters of the feature extraction layer, the target hierarchical weighting layer and the target classification layer of the second neural network.
4. A method according to any one of claims 1 to 3, wherein prior to said obtaining user answer text, the method further comprises:
And acquiring user answer voice, and converting the user answer voice into the user answer text.
5. An intent recognition device, the device comprising:
the system comprises an acquisition module, a first analysis module and a second analysis module, wherein the acquisition module is used for acquiring a first sample set, the first sample set comprises a known M-dimensional feature vector and a corresponding known text classification result, and the first sample set corresponds to a first intention problem;
the processing module is used for inputting the known M-dimensional feature vector into a hierarchical weighting layer of a first neural network for weighted summation to obtain a weighted feature vector, and inputting the weighted feature vector into a classification layer of the first neural network to obtain a predicted value;
the processing module is further configured to obtain a loss from a gap between the predicted value and the known text classification result, back propagate the first neural network through the loss, adjust parameters of the hierarchical weighting layer and the classification layer of the first neural network, and obtain a first hierarchical weighting layer and a first classification layer of an intent recognition model, where the intent recognition model includes a plurality of hierarchical weighting layers and a plurality of classification layers, the hierarchical weighting layers and the classification layers corresponding to different intent problems in the intent recognition model are trained separately, and the first hierarchical weighting layer and the first classification layer correspond to the first intent problem;
The acquisition module is further used for acquiring a user answer text, wherein the user answer text corresponds to a target intention question;
the processing module is further configured to input the user answer text into a feature extraction layer of the intent recognition model, and output an M-dimensional feature vector, where the feature extraction layer includes M network layers, each of the M network layers outputs a feature vector of one dimension, and M is an integer;
the processing module is further configured to input the M-dimensional feature vector into a target hierarchical weighting layer of the intent recognition model, and obtain the weighted feature vector, where the target hierarchical weighting layer has a corresponding relationship with a target intent question of the user answer text;
the processing module is further configured to input the weighted feature vector into a target classification layer of the intent recognition model, and obtain a classification result of the user answer text, where the target classification layer has a corresponding relationship with a target intent question of the user answer text.
6. A computing device comprising a memory and a processor:
the memory is used for storing a computer program;
The processor configured to execute a computer program stored in the memory to cause the computing device to perform the method of any one of claims 1-4.
7. A computer readable storage medium comprising a program or instructions which, when executed on a computer device, performs the method of any of claims 1-4.
CN202110285921.7A 2021-03-17 2021-03-17 Intention recognition method, device, computing equipment and storage medium Active CN112989843B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110285921.7A CN112989843B (en) 2021-03-17 2021-03-17 Intention recognition method, device, computing equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110285921.7A CN112989843B (en) 2021-03-17 2021-03-17 Intention recognition method, device, computing equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112989843A CN112989843A (en) 2021-06-18
CN112989843B true CN112989843B (en) 2023-07-25

Family

ID=76334092

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110285921.7A Active CN112989843B (en) 2021-03-17 2021-03-17 Intention recognition method, device, computing equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112989843B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408278B (en) * 2021-06-22 2023-01-20 平安科技(深圳)有限公司 Intention recognition method, device, equipment and storage medium
CN113905135B (en) * 2021-10-14 2023-10-20 天津车之家软件有限公司 User intention recognition method and device of intelligent outbound robot

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004177998A (en) * 2002-11-22 2004-06-24 Fujitsu Ltd Classification evaluation device
CN111125335A (en) * 2019-12-27 2020-05-08 北京百度网讯科技有限公司 Question and answer processing method and device, electronic equipment and storage medium
CN111209384A (en) * 2020-01-08 2020-05-29 腾讯科技(深圳)有限公司 Question and answer data processing method and device based on artificial intelligence and electronic equipment
CN111597320A (en) * 2020-05-26 2020-08-28 成都晓多科技有限公司 Intention recognition device, method, equipment and storage medium based on hierarchical classification
CN111708873A (en) * 2020-06-15 2020-09-25 腾讯科技(深圳)有限公司 Intelligent question answering method and device, computer equipment and storage medium
CN112131366A (en) * 2020-09-23 2020-12-25 腾讯科技(深圳)有限公司 Method, device and storage medium for training text classification model and text classification

Also Published As

Publication number Publication date
CN112989843A (en) 2021-06-18

Similar Documents

Publication Publication Date Title
CN110263324B (en) Text processing method, model training method and device
WO2021042828A1 (en) Neural network model compression method and apparatus, and storage medium and chip
WO2020177282A1 (en) Machine dialogue method and apparatus, computer device, and storage medium
CN110795552B (en) Training sample generation method and device, electronic equipment and storage medium
CN110853626B (en) Bidirectional attention neural network-based dialogue understanding method, device and equipment
CN110990543A (en) Intelligent conversation generation method and device, computer equipment and computer storage medium
CN111144124B (en) Training method of machine learning model, intention recognition method, and related device and equipment
CN112818861A (en) Emotion classification method and system based on multi-mode context semantic features
CN113239169A (en) Artificial intelligence-based answer generation method, device, equipment and storage medium
CN116861995A (en) Training of multi-mode pre-training model and multi-mode data processing method and device
CN112989843B (en) Intention recognition method, device, computing equipment and storage medium
CN113240510B (en) Abnormal user prediction method, device, equipment and storage medium
CN113987147A (en) Sample processing method and device
CN115424013A (en) Model training method, image processing apparatus, and medium
CN117634459A (en) Target content generation and model training method, device, system, equipment and medium
CN111783688B (en) Remote sensing image scene classification method based on convolutional neural network
CN113672714A (en) Multi-turn dialogue device and method
CN117033961A (en) Multi-mode image-text classification method for context awareness
CN113538079A (en) Recommendation model training method and device, and recommendation method and device
CN116521832A (en) Dialogue interaction method, device and system, electronic equipment and storage medium
CN113312445A (en) Data processing method, model construction method, classification method and computing equipment
CN113869068A (en) Scene service recommendation method, device, equipment and storage medium
CN113010687A (en) Exercise label prediction method and device, storage medium and computer equipment
CN114328797B (en) Content search method, device, electronic apparatus, storage medium, and program product
CN114818644B (en) Text template generation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant