CN111931512A - Statement intention determining method and device and storage medium - Google Patents
- Publication number
- CN111931512A (application number CN202010624894.7A)
- Authority
- CN
- China
- Prior art keywords
- intention
- similarity
- vector
- model
- input
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F40/30 — Handling natural language data; semantic analysis
- G06F40/126 — Text processing; use of codes for handling textual entities; character encoding
- G06N20/00 — Machine learning
- G06N3/045 — Neural networks; architecture; combinations of networks
- G06N3/08 — Neural networks; learning methods
Abstract
The embodiments of the present application disclose a sentence intention determining method, apparatus, device, and storage medium, applied to an intention analysis model comprising a machine learning encoder and a first intention determination model. The method comprises: the machine learning encoder obtains an input vector corresponding to an input sentence to be classified; the first intention determination model obtains a specific vector set and determines the similarity between the input vector and each semantic vector in the vector set, yielding a similarity set; when it is determined that no similarity in the similarity set is greater than or equal to a specific similarity value, the first intention determination model acquires a target intention corresponding to the input sentence from outside the intention analysis model; the first intention determination model then adds the input vector to the specific vector set and adds the target intention to the specific intention set.
Description
Technical Field
The embodiments of the present application relate to, but are not limited to, electronic technologies, and in particular to a method and an apparatus for determining a sentence intention, and a storage medium.
Background
Intention understanding in existing chat robots is obtained from a model trained on a predefined intention classification system. When the predefined classification system changes, training data must be collected for the newly added intention categories, merged into the training data of the existing system, and the classifier retrained, so that the deployed model is updated.
However, collecting training data for the newly added intention categories takes a long time and yields little data. Moreover, the retrained intention classification system may identify the original intention categories less accurately. Updating a predefined intention classification system by retraining therefore increases time and labor costs, cannot guarantee classification quality, reduces the maintainability and extensibility of the classification system, and is poorly suited to real business scenarios.
Disclosure of Invention
In view of this, embodiments of the present application provide a method and an apparatus for determining a sentence intent, and a storage medium.
The technical scheme of the embodiment of the application is realized as follows:
in a first aspect, an embodiment of the present application provides a method for determining a sentence intent, which is applied to an intent analysis model, where the intent analysis model includes a machine learning encoder and a first intent determination model, and the method includes:
the machine learning encoder obtains an input vector corresponding to an input sentence to be classified;
the first intention determination model obtains a specific vector set, where the vector set is a set formed by the semantic vectors obtained by encoding each intention in the specific intention set with the machine learning encoder;
the first intention determining model determines the similarity between the input vector and each semantic vector in the vector set to obtain a similarity set;
when it is determined that no similarity in the similarity set is greater than or equal to a specific similarity value, the first intention determination model acquires a target intention corresponding to the input sentence from outside the intention analysis model;
the first intention determination model adds the input vector to the specific vector set to update the vector set, and adds the target intention to the specific intention set to update the intention set, thereby completing the update from the first intention determination model to a second intention determination model.
In a second aspect, the present application provides an apparatus for determining a sentence intent, which is applied to an intent analysis model including a machine learning encoder and a first intent determination model, the apparatus including:
the first obtaining module is used for obtaining an input vector corresponding to an input statement to be classified;
a second obtaining module, configured to obtain a specific vector set, where the vector set is a set formed by the semantic vectors obtained by encoding each intention in the specific intention set with the machine learning encoder;
the determining module is used for determining the similarity between the input vector and each semantic vector in the vector set to obtain a similarity set;
a third obtaining module, configured to obtain, from outside the intention analysis model, a target intention corresponding to the input sentence when it is determined that no similarity in the similarity set is greater than or equal to a specific similarity value;
an adding module, configured to add the input vector to the specific vector set to update the vector set, and to add the target intention to the specific intention set to update the intention set, thereby completing the update from the first intention determination model to the second intention determination model.
In a third aspect, an embodiment of the present application provides a device for determining a sentence intent, including a memory and a processor, where the memory stores a computer program executable on the processor, and the processor implements the steps in the method when executing the program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having computer-executable instructions stored therein, the computer-executable instructions being configured to perform the method for determining a sentence intent provided above.
In the embodiments of the present application, both the input sentences to be classified and the intentions are vectorized, which makes it convenient to compute the similarity between the intentions and the input sentence and thus to determine the sentence intention. When no intention satisfies the specific similarity, the target intention corresponding to the input sentence is added, so that after a target intention is added the model need not be retrained and the online model is updated directly. Furthermore, the requirement on training data for new intentions added to the specific intention set is low, the prediction quality for the original intentions is unaffected, and good intention classification can be achieved online even when training data for the new intentions is scarce.
Drawings
Fig. 1 is a schematic flow chart illustrating an implementation of a method for determining a sentence intent according to an embodiment of the present application;
fig. 2 is a schematic flow chart illustrating an implementation of a method for determining a sentence intent according to an embodiment of the present application;
fig. 3 is a schematic flow chart illustrating an implementation of a method for determining a sentence intent according to an embodiment of the present application;
fig. 4 is a schematic flow chart illustrating an implementation of a method for determining a sentence intent according to an embodiment of the present application;
fig. 5 is a schematic flow chart illustrating an implementation of a method for determining a sentence intent according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a sentence intent determination apparatus provided in an embodiment of the present application;
fig. 7 is a hardware entity diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solution of the present application is further elaborated below with reference to the drawings and the embodiments.
Fig. 1 is a schematic flow chart of an implementation of a method for determining a sentence intent provided in an embodiment of the present application, as shown in fig. 1, the method includes:
step S110, a machine learning encoder obtains an input vector corresponding to an input sentence to be classified; wherein the method of determining the sentence intent is applied to an intent analysis model comprising the machine learning encoder and a first intent determination model;
Here, the machine learning encoder may be a deep learning network, for example a Convolutional Neural Network (CNN) or a Recurrent Neural Network (RNN). For instance, it may be a Long Short-Term Memory network (LSTM), an RNN variant for sequential data. The machine learning encoder may be used to convert a user sentence into a high-dimensional semantic vector.
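As a concrete illustration only — not the encoder the application actually trains — the sketch below maps a sentence to a fixed-dimension vector by averaging deterministic pseudo-random token vectors. The hashing trick stands in for a learned embedding table, and `DIM` is a toy size; the function names are assumptions for illustration.

```python
import hashlib
import math
import random

DIM = 8  # toy size; the description suggests a high dimension (e.g. 128) in practice

def token_vector(token):
    # Deterministic pseudo-random unit vector per token, standing in for a
    # learned embedding table (illustrative, not the application's encoder).
    rng = random.Random(hashlib.md5(token.encode()).hexdigest())
    v = [rng.gauss(0.0, 1.0) for _ in range(DIM)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def encode(sentence):
    # Average the token vectors to get one fixed-dimension sentence vector,
    # a crude proxy for the CNN/RNN/LSTM encoder mentioned above.
    vecs = [token_vector(t) for t in sentence.lower().split()]
    return [sum(col) / len(vecs) for col in zip(*vecs)]
```

The same `encode` stand-in can vectorize both user sentences and intention descriptions, which is all the later steps require.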
Here, the input sentence to be classified may be an utterance sent by a user, for example a message sent to an online chatbot. The input vector may then be the high-dimensional semantic vector obtained by passing the user's utterance through the machine learning encoder.
Here, the intention analysis model may be a model having an automatic update function for automatically and quickly implementing update of the model without retraining the model in the case of adding a new intention. Here, the first intention determining model may be a model having the functions of similarity calculation, similarity comparison, and normalization.
Step S120, the first intention determination model obtains a specific vector set, where the vector set is a set formed by the semantic vectors obtained by encoding each intention in the specific intention set with the machine learning encoder;
Here, the specific intention set may be the possible categories corresponding to the input sentence to be classified. For example, if the input sentence to be classified is "mouse repair", the possible categories may be "repair", "mouse", "after-sale", and so on; the set of these possible categories is the specific intention set.
Step S130, the first intention determining model determines the similarity between the input vector and each semantic vector in the vector set to obtain a similarity set;
here, the similarity calculation is performed on the input vector and each semantic vector in the vector set, and each obtained similarity may represent a degree of similarity between the semantic vector and the input vector.
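Step S130's similarity computation can be sketched with plain cosine similarity (the measure the later embodiments name); the function names here are illustrative, not from the application.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def similarity_set(input_vec, semantic_vecs):
    # Step S130: one similarity per semantic vector in the vector set.
    return [cosine(input_vec, s) for s in semantic_vecs]
```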
Step S140, when it is determined that no similarity value greater than or equal to a specific similarity value exists in the similarity set, the first intention determining model obtains a target intention corresponding to the input sentence from the outside of the intention analyzing model;
Here, the specific similarity value may be a preset threshold; when no similarity in the similarity set reaches this value, it indicates that no intention category in the current intention set can represent the input sentence to be classified.
Here, the acquisition from outside the intention analysis model may proceed by web search. The target intention is an intention corresponding to the input sentence to be classified, for example one whose similarity to the input sentence exceeds 0.95.
In some embodiments, the target intent corresponding to the input sentence to be classified is obtained by means of a web search.
Step S150, the first intention determination model adds the input vector to the specific vector set to update the vector set, and adds the target intention to the specific intention set to update the intention set, completing the update from the first intention determination model to the second intention determination model.
Here, the target intent may also be converted into a high-dimensional semantic vector by using a machine learning encoder, that is, the target vector is obtained, and then the first intent determination model adds the target vector to the vector set to update the vector set.
The second intention determination model may be regarded as the model obtained by updating the first intention determination model; that is, the current intention determination model may be an intermediate model in the training process or in the process of classifying sentences with the model.
In some embodiments, after the first intention determining model is updated to the second intention determining model, the model does not need to be retrained, and in the case that the input sentence to be classified is updated, the sentence intention is determined for the new input sentence by taking the updated second intention determining model as the current intention analyzing model.
In the embodiments of the present application, both the input sentences to be classified and the intentions are vectorized, which makes it convenient to compute the similarity between the intentions and the input sentence and thus to determine the sentence intention. When no intention satisfies the specific similarity, the target intention corresponding to the input sentence is added, so that after a target intention is added the model need not be retrained and the online model is updated directly. Furthermore, the requirement on training data for new intentions added to the specific intention set is low, the prediction quality for the original intentions is unaffected, and good intention classification can be achieved online even when training data for the new intentions is scarce.
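The Fig. 1 flow — compare the best similarity against a threshold, and on a miss fetch the target intention externally and grow both sets — can be sketched as follows. `THRESHOLD` and the `fetch_external` callback are assumptions, since the application leaves the specific similarity value and the external source (e.g. a web search) unspecified.

```python
import math

THRESHOLD = 0.8  # assumed value for the "specific similarity value"

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def classify_or_update(input_vec, vector_set, intent_set, fetch_external):
    # Steps S130-S150: score every intention; on a confident hit return it,
    # otherwise fetch the target intention externally and grow both sets.
    sims = [cosine(input_vec, v) for v in vector_set]
    if sims and max(sims) >= THRESHOLD:
        return intent_set[max(range(len(sims)), key=sims.__getitem__)]
    target = fetch_external()  # e.g. a web search, per the description
    vector_set.append(input_vec)
    intent_set.append(target)
    return target
```

After the miss branch runs, the updated sets constitute the second intention determination model: no retraining occurred, only set growth.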
Fig. 2 is a schematic flow chart of an implementation of a method for determining a sentence intent provided in an embodiment of the present application, as shown in fig. 2, the method includes:
step S210, a machine learning encoder obtains an input vector corresponding to an input sentence to be classified; wherein the method of determining the sentence intent is applied to an intent analysis model comprising the machine learning encoder and a first intent determination model;
step S220, the first intention determination model obtains a specific vector set, where the vector set is a set formed by the semantic vectors obtained by encoding each intention in the specific intention set with the machine learning encoder;
step S230, the first intention determining model determines a similarity between the input vector and each semantic vector in the vector set, so as to obtain a similarity set;
step S240, when it is determined that a similarity in the similarity set is greater than or equal to the specific similarity value, the first intention determination model determines the intention whose similarity satisfies the specific similarity value as the target intention corresponding to the input sentence.
In some embodiments, step S240, in which the first intention determination model determines the intention whose similarity is greater than or equal to the specific similarity value as the target intention corresponding to the input sentence, includes:
step S241, the first intention determination model normalizes each similarity to obtain the probability that the input sentence belongs to the corresponding intention;
here, the normalization method may be Softmax.
In step S242, the first intention determining model determines the intention with the probability satisfying the second condition as the target intention corresponding to the input sentence.
For example, let the input sentence to be classified be a user sentence U. The similarity between U and each intention description I_ij, where I_ij denotes the j-th descriptive utterance of the i-th intention, can be expressed as CosSim(U, I_ij). Applying Softmax normalization to the CosSim(U, I_ij) values yields the probability P(I_i | U) that the user sentence U belongs to each intention; the intention with the largest probability value is taken as the intention of the user sentence.
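A minimal sketch of the Softmax normalization and most-probable-intention selection described in the example; `pick_intent` is an illustrative name, not from the application.

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def pick_intent(similarities, intents):
    # Normalize the CosSim values into P(I_i | U) and take the most
    # probable intention, as the example above describes.
    probs = softmax(similarities)
    best = max(range(len(probs)), key=probs.__getitem__)
    return intents[best], probs[best]
```

Since Softmax is monotonic, the selected intention is the one with the largest raw similarity; the normalization matters when the probability itself is reported or thresholded.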
In the embodiment of the application, each similarity is normalized to obtain the probability that the input sentence belongs to the corresponding intention. In this way, the intention with the highest probability value can be determined as the target intention corresponding to the input sentence, which is beneficial to improving the accuracy of the input sentence intention determination.
Fig. 3 is a schematic flow chart of an implementation of the method for determining a sentence intent provided in the embodiment of the present application, and as shown in fig. 3, the method includes:
step S310, a machine learning encoder randomly generates a first initial vector for each intention in the specific intention set according to uniform distribution or normal distribution;
Here, the first initial vector is generated by randomly sampling a high-dimensional vector from a uniform or normal distribution; for example, a 128-dimensional initial vector may be generated in this way.
Step S320, the machine learning encoder performs linear transformation on each first initial vector to obtain a semantic vector corresponding to the intention;
step S330, the machine learning encoder randomly generates an initial input vector for the input sentences to be classified according to uniform distribution or normal distribution;
step S340, the machine learning encoder performs linear transformation on the initial input vector to obtain an input vector; wherein the method of determining the sentence intent is applied to an intent analysis model comprising the machine learning encoder and a first intent determination model;
step S350, the first intention determination model obtains a specific vector set, where the vector set is a set formed by the semantic vectors obtained by encoding each intention in the specific intention set with the machine learning encoder;
step S360, the first intention determining model determines the similarity between the input vector and each semantic vector in the vector set to obtain a similarity set;
step S370, when it is determined that there is no similarity value in the similarity set that is greater than or equal to a specific similarity value, the first intention determining model obtains a target intention corresponding to the input sentence from outside the intention analyzing model;
step S380, the first intention determination model adds the input vector to the specific vector set to update the vector set, and adds the target intention to the specific intention set to update the intention set, completing the update from the first intention determination model to the second intention determination model.
In the embodiment of the application, the input sentence to be classified and the specific intention set are vectorized through the machine learning encoder, so that the similarity calculation of each intention in the input sentence to be classified and the specific intention set is facilitated, and the accuracy for input sentence intention determination is improved.
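Steps S310–S340 (random initialization from a uniform or normal distribution, followed by a linear transformation) can be sketched as below; the weight and bias shapes are assumptions, since the application does not fix the transformation's dimensions.

```python
import random

def initial_vector(dim=128, dist="uniform", seed=None):
    # Steps S310/S330: sample an initial vector from a uniform or
    # normal distribution.
    rng = random.Random(seed)
    if dist == "uniform":
        return [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    return [rng.gauss(0.0, 1.0) for _ in range(dim)]

def linear_transform(vec, weight, bias):
    # Steps S320/S340: y = W x + b, with weight given row by row
    # (out_dim rows of in_dim entries each).
    return [sum(w * x for w, x in zip(row, vec)) + b
            for row, b in zip(weight, bias)]
```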
Fig. 4 is a schematic flow chart of an implementation of a method for determining a sentence intent provided in an embodiment of the present application, and as shown in fig. 4, the method includes:
step S410, a machine learning encoder obtains an input vector corresponding to an input sentence to be classified; wherein the method of determining the sentence intent is applied to an intent analysis model comprising the machine learning encoder and a first intent determination model;
step S420, the first intention determination model obtains a specific vector set, where the vector set is a set formed by the semantic vectors obtained by encoding each intention in the specific intention set with the machine learning encoder;
step S430, each intention in the intention set comprises at least one definition description and at least one answer description, and semantic vectors corresponding to each intention comprise at least one definition description vector and at least one answer description vector; the first intention determining model calculates similarity between the input vector and the definition description vector corresponding to each definition description by using cosine similarity to obtain corresponding first similarity;
Here, a definition description defines the intention, specifying what content the intention category covers; an answer description is a reply associated with the intention.
For example, when the intention is mouse servicing, its definition description is "mouse-related", and its answer descriptions include "after-sales repair", "check the mouse jack", and "check whether the indicator lamp is on". As another example, for the intention "order", the definition descriptions include "order refund" and "order query".
Step S440, the first intention determining model performs similarity calculation on the input vector and an answer description vector corresponding to each answer description by using cosine similarity to obtain a corresponding second similarity;
step S450, the first intention determining model determines the similarity according to the first similarity of each definition description and the second similarity of each answer description to obtain a similarity set;
Here, the similarity set includes at least one first similarity and at least one second similarity.
Here, the similarity is determined as a similarity between each input vector and each intention's semantic vector. Here, the semantic vector of each intention includes at least one definition description vector and at least one answer description vector.
In some embodiments, first, similarity is computed between the input vector and each intention's N1 definition description vectors and N2 answer description vectors, yielding N1 first similarities and N2 second similarities; next, the similarity between the input vector and that intention is determined from these (N1 + N2) similarities; finally, the similarities of the N3 intentions to be evaluated constitute the similarity set.
Step S460, in a case that it is determined that there is no similarity value in the similarity set that is greater than or equal to a specific similarity value, the first intention determining model obtains a target intention corresponding to the input sentence from outside the intention analyzing model;
step S470, the first intention determination model adds the input vector to the specific vector set to update the vector set, and adds the target intention to the specific intention set to update the intention set, completing the update from the first intention determination model to the second intention determination model.
In the embodiments of the present application, similarity is first computed between the input vector and the definition description vector of each definition description, giving the corresponding first similarities; similarity is then computed between the input vector and the answer description vector of each answer description, giving the corresponding second similarities. In this way, richer intention content is available, which improves the accuracy of intention determination for the input sentence.
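A sketch of combining the first similarities (against definition descriptions) and second similarities (against answer descriptions) into one per-intention score; the `reduce` switch covers both the maximum variant and the arithmetic-mean variant that the later embodiment names, and all function names are illustrative.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def intent_similarity(input_vec, definition_vecs, answer_vecs, reduce="max"):
    # First similarities vs. the N1 definition descriptions, second
    # similarities vs. the N2 answer descriptions, reduced to one score.
    sims = ([cosine(input_vec, d) for d in definition_vecs]
            + [cosine(input_vec, a) for a in answer_vecs])
    if reduce == "max":
        return max(sims)
    return sum(sims) / len(sims)  # arithmetic-mean variant
```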
The method for determining the statement intention provided by the embodiment of the application comprises the following steps:
step S510, a machine learning encoder obtains an input vector corresponding to an input sentence to be classified; wherein the method of determining the sentence intent is applied to an intent analysis model comprising the machine learning encoder and a first intent determination model;
step S520, the first intention determination model obtains a specific vector set, where the vector set is a set formed by the semantic vectors obtained by encoding each intention in the specific intention set with the machine learning encoder;
step S530, each intention in the intention set comprises at least one definition description and at least one answer description, and the semantic vector corresponding to each intention comprises at least one definition description vector and at least one answer description vector; the first intention determining model calculates similarity between the input vector and the definition description vector corresponding to each definition description by using cosine similarity to obtain corresponding first similarity;
step S540, the first intention determining model calculates similarity between the input vector and the answer description vector corresponding to each answer description by using cosine similarity to obtain a corresponding second similarity;
step S550, the first intention determination model takes, among the first similarity of each definition description and the second similarity of each answer description, the maximum as the similarity, obtaining a similarity set; alternatively, the first intention determination model takes the arithmetic mean of the first similarities of the definition descriptions and the second similarities of the answer descriptions as the similarity, obtaining a similarity set;
step S560, in a case that it is determined that there is no similarity value in the similarity set that is greater than or equal to a specific similarity value, the first intention determining model obtains a target intention corresponding to the input sentence from outside the intention analyzing model;
step S570, the first intention determination model adds the input vector to the specific vector set to update the vector set, and adds the target intention to the specific intention set to update the intention set, completing the update from the first intention determination model to the second intention determination model.
In the embodiments of the present application, either the maximum similarity or the arithmetic mean of the similarities is taken as the similarity. Thus the similarities can be screened by taking the maximum, or the spread of the similarity values narrowed by averaging, which improves model accuracy and the performance of the intention analysis model in determining sentence intention.
The method for determining the statement intention provided by the embodiment of the application comprises the following steps:
step S610, a machine learning encoder obtains an input vector corresponding to an input sentence to be classified; wherein the method of determining the sentence intent is applied to an intent analysis model comprising the machine learning encoder and a first intent determination model;
step S620, the first intention determination model obtains a specific vector set, where the vector set is a set formed by the semantic vectors obtained by encoding each intention in the specific intention set with the machine learning encoder;
step S630, the first intention determining model determines a similarity between the input vector and each semantic vector in the vector set, to obtain a similarity set;
step S640, in a case that it is determined that no similarity value in the similarity set is greater than or equal to a specific similarity value, the first intention determining model obtains a target intention corresponding to the input sentence from outside the intention analysis model;
step S650, the first intention determining model adds the input vector to the specific vector set to update the vector set, and adds the target intention to the specific intention set to update the intention set, thereby completing the update from the first intention determining model to the second intention determining model.
Step S660, the first intention determining model adds the newly added intention into the intention set;
here, it should be noted that the newly added intention is different from the aforementioned target intention: the newly added intention is an intention introduced according to the service demand. The newly added intention includes a definition description I_new-definition of the new intention and a corresponding answer description I_new-answer.
Step S670, the first intention determining model randomly generates a second initial vector according to the newly added intention and uniform distribution or normal distribution;
step S680, the first intention determining model carries out linear transformation on the second initial vector to obtain a newly added semantic vector;
in step S690, the first intention determining model adds the newly added semantic vector to the specific vector set to complete the updating from the first intention determining model to the third intention determining model.
Here, the third intention determining model is a model to which a new semantic vector is added.
For example, assume that the service adds a new intention I_new. Without retraining the previous model, the definition description I_new-definition and the corresponding answer description I_new-answer of the new intention are input into the model to obtain the corresponding semantic vector, and the semantic vector is then added into the original intention set to complete the update of the model.
In the embodiment of the application, the first intention determining model adds the newly added semantic vector into the specific vector set to update the first intention determining model to the third intention determining model. Therefore, when an intention is added, the model does not need to be retrained and published online again; the model can be updated online, which improves the timeliness of model updates.
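The update flow of steps S670 to S690 (random initial vector, linear transformation, append) might be sketched as below; the dimension, weight matrix, bias values, and function names are illustrative assumptions:

```python
import random

def new_intent_vector(dim, weight, bias, seed=None):
    """Steps S670-S680: draw an initial vector from a normal
    distribution, then apply a linear transformation W x + b."""
    rng = random.Random(seed)
    init = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    return [sum(w * x for w, x in zip(row, init)) + b_i
            for row, b_i in zip(weight, bias)]

def add_intent(vector_set, intention_set, intent_name, semantic_vec):
    """Step S690: append the new semantic vector and intention,
    updating the model without retraining."""
    vector_set.append(semantic_vec)
    intention_set.append(intent_name)
```

A uniform distribution could be substituted for `rng.gauss` with `rng.uniform`, matching the alternative named in step S670.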
In the method for determining the sentence intention provided by the embodiment of the application, the input sentence to be classified is taken as a user sentence, and the machine learning encoder is taken as an intention vector encoder trained by a deep learning model. The method comprises the following steps:
a first part: model training, comprising steps S11 to S15:
step S11, using the description of n user sentences and intentions as training data, and training an intention vector encoder through a deep learning model;
here, the description of the intention may include a definition description of the intention and an answer description of the intention. The deep learning model can be a CNN or LSTM deep learning model. The intention vector encoder is used for converting an input user statement into an N-dimensional vector and representing the semantics of the user statement.
For example, the user statements are Text1, Text2, …, TextN. The definition of the intent Intent1 is described as Intent1-definition, and the answers of Intent1 are described as Intent1-answer1, Intent1-answer2, and so on. The definition description corresponding to each intention of the n user sentences and the n answer descriptions corresponding to the definition description are taken as training data, and the intention vector encoder is trained through a CNN deep learning model. Here, the training situation of the model is judged by a loss function that characterizes the difference between the predicted result and the actual result. When the loss value calculated by the loss function is minimal, the intention analysis model corresponding to the network parameters is the optimal intention analysis model.
Step S12, converting the user statement U and the description of each intention into a semantic vector by using an intention vector encoder;
step S13, calculating the similarity between the user sentence U and each description I_ij of each intention;
here, the similarity represents the degree of semantic closeness. The similarity may be calculated as a cosine similarity, and the similarity between the user sentence U and each description of each intention can be expressed as CosSim(U, I_ij), where I_ij represents the jth descriptive utterance of the ith intent.
Here, the similarity CosSim(U, I_i) of the user sentence U to each intention may be determined by taking the maximum value of the similarities, or the average value of the similarities, and may be expressed as formula (1):
CosSim(U, I_i) = max_j { CosSim(U, I_ij) };  (1)
or,
CosSim(U, I_i) = average_j { CosSim(U, I_ij) };  (1)
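Formula (1) can be sketched as follows; the function and parameter names are illustrative, not part of the original disclosure:

```python
import math

def cos_sim(u, v):
    """CosSim(u, v): cosine similarity of two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def intent_similarity(u_vec, description_vecs, mode="max"):
    """Aggregate CosSim(U, I_ij) over the j descriptions of one
    intention by maximum or arithmetic average, as in formula (1)."""
    sims = [cos_sim(u_vec, d) for d in description_vecs]
    if mode == "max":
        return max(sims)
    return sum(sims) / len(sims)
```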
step S14, normalizing the similarities to obtain the probability P(I_i|U) that the user statement U belongs to each intention;
here, the normalization method may be softmax, and the intention with the maximum probability value is taken as the intention of the user statement.
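A minimal sketch of the softmax normalization and maximum-probability selection in step S14, with illustrative names:

```python
import math

def softmax(scores):
    """Normalize similarity scores into probabilities P(I_i|U)."""
    shifted = [s - max(scores) for s in scores]   # shift for numerical stability
    exps = [math.exp(s) for s in shifted]
    total = sum(exps)
    return [e / total for e in exps]

def predict(intentions, scores):
    """Take the intention with the maximum probability value."""
    probs = softmax(scores)
    return intentions[probs.index(max(probs))]
```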
And step S15, determining a loss value according to the loss function, and continuously updating network parameters by utilizing back propagation to finish the training of the intention analysis model.
Here, the loss function may be a cross entropy loss function, and the network parameters may be the weights w and biases b of the network.
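For a single statement, the cross entropy loss can be sketched as the negative log of the probability assigned to the true intention; the function name and signature are illustrative:

```python
import math

def cross_entropy(probs, target_index):
    """Cross entropy loss for one statement: the negative log of the
    probability P(I_target|U) assigned to the true intention."""
    return -math.log(probs[target_index])
```

The loss value so obtained would then drive back propagation to update the network parameters, as step S15 describes.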
A second part: model prediction, comprising steps S21 to S23:
step S21, adding the statements of the user and the description of each intention as input into the intention analysis model to obtain a corresponding semantic vector set;
step S22, calculating the similarity between the user statement vector and each semantic vector, and normalizing the obtained similarities;
here, normalization maps the data into the interval (0, 1) or (-1, 1). Normalization can simplify subsequent data processing.
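As an illustrative sketch only: the patent itself names softmax, but min-max scaling, shown below, is one common way to map raw values into the [0, 1] range mentioned here. All names are assumptions:

```python
def min_max_normalize(values):
    """Linearly map raw similarity values into the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]
```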
Step S23, selecting the intention whose probability P(I_i|U) is the largest, and outputting it as the prediction result.
And a third part: updating the intention classification system, comprising steps S31 and S32:
step S31, in the case of adding an intention I_new, the definition description I_new-definition of the new intention and the corresponding answer description I_new-answer are added into the model as input, so as to obtain a semantic vector corresponding to the intention;
and step S32, adding the semantic vector into the semantic vector set to complete the updating of the model.
Fig. 5 is a schematic flow chart of an implementation of the method for determining a sentence intent provided in the embodiment of the present application, and as shown in fig. 5, the method includes:
(1) Model prediction:
step S51, calculating the cosine similarity;
here, the cosine similarity is calculated between the semantic vector corresponding to the user sentence and each semantic vector corresponding to the initial intention set, where the initial intention set is the semantic vector set corresponding to the intentions. Each semantic vector in the initial intention set and the user statement yield one similarity, and these similarities form a similarity set.
Step S52, determining the similarity corresponding to the user sentence;
here, the maximum value of the similarity set or the average value of the similarity set is determined as the similarity Similarity(U, I_i) corresponding to the user sentence.
Step S53, performing Softmax normalization;
in some embodiments, Softmax normalization is performed on Similarity(U, I_i) to obtain the probability P(I_i|U) that the user statement U belongs to each intention.
In step S54A, the intention corresponding to the maximum probability is directly output.
Here, the intention with the largest probability P(I_i|U) is determined as the intention corresponding to the user sentence.
(2) Model training:
step S51, calculating the cosine similarity;
step S52, determining the similarity corresponding to the user sentence;
step S53, performing Softmax normalization;
step S54B, calculating the cross entropy loss;
here, the cross entropy loss may be a Softmax cross entropy loss. The intention vector encoder is updated through the obtained loss value, so that the semantic vector corresponding to the user statement is updated; the first intention determining model is thereby updated, and thus the intention analysis model is updated.
Here, when a new intention is added to the intention analysis model, the new intention may be vectorized by the intention vector encoder to obtain a corresponding semantic vector, and the corresponding semantic vector may be added to the initial intention set to obtain a new intention set, and the intention analysis model may be updated.
In the embodiment of the application, the intention classification system is updated without retraining the model, and the online model can be updated automatically and quickly. Moreover, the requirement on training data for newly added intentions is low, the prediction effect of the original intentions is not affected, and a good intention classification effect can be achieved online even with little training data for new intentions.
Based on the foregoing embodiments, the present application provides a device for determining a sentence intent, where the device includes units and modules included in the units, and may be implemented by a processor in a computer device; of course, the implementation can also be realized through a specific logic circuit; in implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 6 is a schematic structural diagram of a determining apparatus of a sentence intent provided in an embodiment of the present application, and as shown in fig. 6, the apparatus 600 includes a first obtaining module 601, a second obtaining module 602, a first determining module 603, a third obtaining module 604, and a first adding module 605, where:
the first obtaining module 601 is configured to obtain an input vector corresponding to an input sentence to be classified;
the second obtaining module 602 is configured to obtain a specific vector set, where the vector set is a set formed by semantic vectors obtained by decoding each intention in the specific intention set by the machine learning encoder;
the first determining module 603 is configured to determine a similarity between the input vector and each semantic vector in the vector set, so as to obtain a similarity set;
the third obtaining module 604 is configured to, when it is determined that no similarity in the similarity set is greater than or equal to a specific similarity value, obtain a target intent corresponding to the input sentence from outside the intent analysis model;
the first adding module 605 is configured to add the input vector to the specific vector set to update the vector set, and add the target intent to the specific intent set to update the intent set, so as to complete the update from the first intent determination model to the second intent determination model.
In some embodiments, the apparatus 600 further comprises a second determining module, wherein: the second determining module is configured to, when it is determined that a similarity in the similarity set is greater than or equal to a specific similarity value, determine, by the first intention determining model, an intention whose similarity is greater than or equal to the specific similarity value as the target intention corresponding to the input sentence.
In some embodiments, the second determination module comprises a normalization sub-module and a first determination sub-module, wherein: the normalization submodule is used for normalizing each similarity to obtain the probability that the input statement belongs to the corresponding intention; the first determining submodule is used for determining the intention whose probability satisfies the second condition as the target intention corresponding to the input statement.
In some embodiments, the first obtaining module 601 includes a generating sub-module and a linear transformation sub-module, wherein: the generation submodule is used for randomly generating an initial input vector for the input sentences to be classified according to uniform distribution or normal distribution; and the linear transformation submodule is used for performing linear transformation on the initial input vector to obtain the input vector.
In some embodiments, the apparatus 600 further comprises a first generation module and a first linear transformation module, wherein: the first generation module is used for randomly generating a first initial vector for each intention in the specific intention set according to uniform distribution or normal distribution; the first linear transformation module is used for performing linear transformation on each first initial vector to obtain a semantic vector corresponding to the intention.
In some embodiments, each intention in the set of intentions comprises at least one definition description and at least one answer description, and each of the intentions corresponds to a semantic vector comprising at least one definition description vector and at least one answer description vector; the first determining module 603 includes a first calculating sub-module, a second calculating sub-module, and a second determining sub-module, wherein: the first calculation submodule is used for calculating the similarity of the input vector and the definition description vector corresponding to each definition description by utilizing cosine similarity to obtain corresponding first similarity; the second calculation submodule is used for calculating the similarity of the input vector and the answer description vector corresponding to each answer description by utilizing cosine similarity to obtain corresponding second similarity; and the second determining submodule is used for determining the similarity according to the first similarity of each definition description and the second similarity of each answer description to obtain a similarity set.
In some embodiments, the second determination submodule comprises a first determination unit and a second determination unit, wherein: the first determining unit is configured to determine, from the first similarity of each definition description and the second similarity of each answer description, a maximum similarity as the similarity to obtain a similarity set; or, the second determining unit is configured to determine an arithmetic average of the first similarity of each definition description and the second similarity of each answer description, and determine the arithmetic average as the similarity to obtain a similarity set.
In some embodiments, the apparatus 600 further comprises a fourth obtaining module, a second generating module, a second linear transformation module, and a second adding module, wherein: a fourth obtaining module, configured to add the newly added intention into the intention set; the second generation module is used for randomly generating a second initial vector according to the newly added intention and uniform distribution or normal distribution; the second linear transformation module is used for carrying out linear transformation on the second initial vector to obtain a newly added semantic vector; and the second adding module is used for adding the newly added semantic vector into the specific vector set so as to finish updating from the first intention determining model to a third intention determining model.
Here, it should be noted that: the above description of the apparatus embodiments, similar to the above description of the method embodiments, has similar beneficial effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be noted that, in the embodiment of the present application, if the method for determining the sentence intent is implemented in the form of a software functional module and sold or used as a standalone product, the method may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the related art may be embodied in the form of a software product stored in a storage medium, and including instructions for causing a computer device (which may be a personal computer or the like) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, the present application provides a device for determining a sentence intent (which may be implemented as a computer device), including a memory and a processor, where the memory stores a computer program executable on the processor, and the processor executes the program to implement the steps in the methods provided in the foregoing embodiments.
Correspondingly, an embodiment of the present application provides a computer-readable storage medium in which computer-executable instructions are stored, the computer-executable instructions being configured to implement the method for determining a sentence intent provided in the above embodiments when executed.
Here, it should be noted that: the above description of the storage medium and device embodiments is similar to the description of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be noted that fig. 7 is a schematic hardware entity diagram of a computer device in an embodiment of the present application, and as shown in fig. 7, the hardware entity of the computer device 700 includes: a processor 701, a communication interface 702, and a memory 703, wherein
The processor 701 generally controls the overall operation of the computer device 700.
The communication interface 702 may enable the computer device to communicate with other computer devices over a network.
The Memory 703 is configured to store instructions and applications executable by the processor 701, and may also buffer data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or already processed by the processor 701 and modules in the computer device 700, and may be implemented by a FLASH Memory (FLASH) or a Random Access Memory (RAM).
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the related art may be embodied in the form of a software product stored in a storage medium, and including instructions for causing a computer device (which may be a personal computer or the like) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The above description is only for the embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A sentence intent determination method applied to an intent analysis model including a machine learning encoder and a first intent determination model, the method comprising:
the machine learning encoder obtains an input vector corresponding to an input sentence to be classified;
the first intention determining model obtains a specific vector set, wherein the vector set is a set formed by semantic vectors obtained by decoding each intention in the specific intention set by a machine learning encoder;
the first intention determining model determines the similarity between the input vector and each semantic vector in the vector set to obtain a similarity set;
under the condition that it is determined that no similarity in the similarity set is greater than or equal to a specific similarity value, the first intention determining model acquires a target intention corresponding to the input statement from outside the intention analysis model;
the first intent determination model adds the input vector to the particular set of vectors to update the set of vectors, and adds the target intent to the particular set of intentions to update the intent vector to complete the update from the first intent determination model to a second intent determination model.
2. The method of claim 1, wherein the method further comprises:
and under the condition that it is determined that a similarity in the similarity set is greater than or equal to a specific similarity value, the first intention determination model determines an intention whose similarity is greater than or equal to the specific similarity value as the target intention corresponding to the input sentence.
3. The method of claim 2, wherein the first intent determination model determining an intention whose similarity is greater than or equal to a specific similarity value as the target intent corresponding to the input sentence comprises:
the first intention determining model normalizes each similarity to obtain the probability that the input statement belongs to the corresponding intention;
and the first intention determining model determines the intention of which the probability meets the second condition as the target intention corresponding to the input statement.
4. The method of claim 1, wherein the machine learning encoder obtains an input vector corresponding to an input sentence to be classified, comprising:
the machine learning encoder randomly generates an initial input vector for the input sentences to be classified according to uniform distribution or normal distribution;
and the machine learning encoder performs linear transformation on the initial input vector to obtain the input vector.
5. The method of claim 1, wherein prior to the machine learning encoder obtaining an input vector corresponding to an input sentence to be classified, the method further comprises:
a machine learning encoder randomly generates a first initial vector according to a uniform distribution or a normal distribution for each intention in the specific intention set;
and the machine learning encoder performs linear transformation on each first initial vector to obtain a semantic vector corresponding to the intention.
6. The method according to claim 1, wherein each intention in the set of intentions comprises at least one definition description and at least one answer description, and each of the intentions corresponds to a semantic vector comprising at least one definition description vector and at least one answer description vector;
the first intention determining model determines similarity between the input vector and each semantic vector in the vector set to obtain a similarity set, including:
the first intention determining model calculates similarity between the input vector and the definition description vector corresponding to each definition description by using cosine similarity to obtain corresponding first similarity;
the first intention determining model calculates the similarity of the input vector and the answer description vector corresponding to each answer description by using cosine similarity to obtain corresponding second similarity;
and the first intention determining model determines the similarity according to the first similarity of each definition description and the second similarity of each answer description to obtain a similarity set.
7. The method of claim 6, wherein the first intent determination model determines the similarity according to a first similarity of each of the definition descriptions and a second similarity of each of the answer descriptions, resulting in a set of similarities, comprising:
the first intention determining model determines the maximum similarity as the similarity from the first similarity of each definition description and the second similarity of each answer description to obtain a similarity set; or,
and the first intention determining model determines an arithmetic mean of the first similarity of each definition description and the second similarity of each answer description, and determines the arithmetic mean as the similarity to obtain a similarity set.
8. The method of any of claims 1 to 7, wherein the method further comprises:
adding the newly added intention into the intention set by the first intention determining model;
the first intention determining model randomly generates a second initial vector according to uniform distribution or normal distribution of the newly added intention;
the first intention determining model carries out linear transformation on the second initial vector to obtain a newly added semantic vector;
the first intention determination model adds the newly added semantic vector to the specific vector set to complete the updating from the first intention determination model to a third intention determination model.
9. An apparatus for determining an intention of a sentence, applied to an intention analysis model including a machine learning encoder and a first intention determination model, the apparatus comprising:
the first obtaining module is used for obtaining an input vector corresponding to an input statement to be classified;
a second obtaining module, configured to obtain a specific vector set, where the vector set is a set formed by semantic vectors obtained by decoding each intention in the specific intention set by a machine learning encoder;
the determining module is used for determining the similarity between the input vector and each semantic vector in the vector set to obtain a similarity set;
a third obtaining module, configured to obtain, from outside the intention analysis model, a target intention corresponding to the input sentence when it is determined that no similarity in the similarity set is greater than or equal to a specific similarity value;
an adding module to add the input vector to the particular set of vectors to update the set of vectors, to add the target intent to the particular set of intents to update the intent vector to complete the update from the first intent determination model to the second intent determination model.
10. A computer-readable storage medium having computer-executable instructions stored therein, the computer-executable instructions being configured to perform the method of determining a sentence intent as provided in any of claims 1 to 8 above.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010624894.7A CN111931512B (en) | 2020-07-01 | 2020-07-01 | Statement intention determining method and device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111931512A (en) | 2020-11-13 |
CN111931512B (en) | 2024-07-26 |
Family
ID=73317352
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010624894.7A Active CN111931512B (en) | 2020-07-01 | 2020-07-01 | Statement intention determining method and device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111931512B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108334891A (en) * | 2017-12-15 | 2018-07-27 | Beijing QIYI Century Science & Technology Co., Ltd. | Task-based intent classification method and device |
CN109597993A (en) * | 2018-11-30 | 2019-04-09 | Shenzhen Qianhai WeBank Co., Ltd. | Sentence analysis processing method, device, equipment and computer readable storage medium |
CN109800306A (en) * | 2019-01-10 | 2019-05-24 | Shenzhen TCL New Technology Co., Ltd. | Intention analysis method, device, display terminal and computer readable storage medium |
US20200097563A1 (en) * | 2018-09-21 | 2020-03-26 | Salesforce.Com, Inc. | Intent classification system |
CN111046667A (en) * | 2019-11-14 | 2020-04-21 | Shenzhen UBTECH Robotics Co., Ltd. | Sentence recognition method, sentence recognition device and intelligent equipment |
CN111325037A (en) * | 2020-03-05 | 2020-06-23 | Suning Cloud Computing Co., Ltd. | Text intention recognition method and device, computer equipment and storage medium |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
CN109033068B (en) | Method and device for reading and understanding based on attention mechanism and electronic equipment | |
CN109101537B (en) | Multi-turn dialogue data classification method and device based on deep learning and electronic equipment | |
CN107908803B (en) | Question-answer interaction response method and device, storage medium and terminal | |
KR102288249B1 (en) | Information processing method, terminal, and computer storage medium | |
CN110377916B (en) | Word prediction method, word prediction device, computer equipment and storage medium | |
CN110275939B (en) | Method and device for determining conversation generation model, storage medium and electronic equipment | |
CN111105029B (en) | Neural network generation method, generation device and electronic equipment | |
CN109299245B (en) | Method and device for recalling knowledge points | |
CN108304890B (en) | Generation method and device of classification model | |
CN111639247B (en) | Method, apparatus, device and computer readable storage medium for evaluating quality of comments | |
CN112633010A (en) | Multi-head attention and graph convolution network-based aspect-level emotion analysis method and system | |
KR102688236B1 (en) | Voice synthesizer using artificial intelligence, operating method of voice synthesizer and computer readable recording medium | |
CN111161726B (en) | Intelligent voice interaction method, device, medium and system | |
CN109977292B (en) | Search method, search device, computing equipment and computer-readable storage medium | |
CN113392640B (en) | Title determination method, device, equipment and storage medium | |
KR20210020656A (en) | Apparatus for voice recognition using artificial intelligence and apparatus for the same | |
US12079280B2 (en) | Retrieval-based dialogue system with relevant responses | |
CN108287848B (en) | Method and system for semantic parsing | |
CN113157876A (en) | Information feedback method, device, terminal and storage medium | |
CN114676689A (en) | Sentence text recognition method and device, storage medium and electronic device | |
CN111026840A (en) | Text processing method, device, server and storage medium | |
CN117851444A (en) | Advanced searching method based on semantic understanding | |
CN115274086A (en) | Intelligent diagnosis guiding method and system | |
CN115270807A (en) | Method, device and equipment for judging emotional tendency of network user and storage medium | |
CN114299920A (en) | Method and device for training language model for speech recognition and speech recognition method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||