CN115146064A - Intention recognition model optimization method, device, equipment and storage medium - Google Patents


Info

Publication number
CN115146064A
CN115146064A (application CN202210883771.4A)
Authority
CN
China
Prior art keywords
intention
clustering
text
corpus
recognition model
Prior art date
Legal status
Pending
Application number
CN202210883771.4A
Other languages
Chinese (zh)
Inventor
吴青松
陈范曙
王燕蒙
王少军
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202210883771.4A priority Critical patent/CN115146064A/en
Publication of CN115146064A publication Critical patent/CN115146064A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30: Information retrieval of unstructured textual data
    • G06F 16/35: Clustering; Classification
    • G06F 40/00: Handling natural language data
    • G06F 40/20: Natural language analysis
    • G06F 40/205: Parsing
    • G06F 40/279: Recognition of textual entities
    • G06F 40/284: Lexical analysis, e.g. tokenisation or collocates

Abstract

The invention relates to artificial intelligence technology and discloses an intention recognition model optimization method, which comprises the following steps: obtaining a rejected corpus set from an intention recognition task; performing text optimization on the rejected corpus set to obtain an optimized corpus set; performing intention clustering on the corpus texts in the optimized corpus set to obtain an intention clustering result; performing similarity confusion analysis on the intention clustering result and constructing an optimized training set according to the confusion result; training an intention recognition model to be optimized with the optimized training set to obtain an optimization intention recognition model; performing a model test on the optimization intention recognition model; and optimizing the intention recognition model in the intention recognition task based on the test result. In addition, the invention also relates to blockchain technology, and the to-be-optimized intention recognition model can be obtained from a node of the blockchain. The invention also provides an intention recognition model optimization device, an electronic device and a readable storage medium. The invention can improve the efficiency of intention recognition model optimization.

Description

Intention recognition model optimization method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an intention recognition model optimization method and device, electronic equipment and a readable storage medium.
Background
Semantic understanding is an important area of artificial intelligence, and intention recognition capability is the core of semantic understanding. Generally, in the initial stage after a service goes online, the recognition rate of the intention recognition model is often low due to the shortage of historical corpora and limited labeling time. As the business develops, the actual business corpus content may gradually deviate from the original corpus. These conditions make it difficult to continuously improve the accuracy of the intention recognition model; the accuracy may even gradually decrease until the business requirements cannot be met and the model becomes unusable.
In the prior art, although an operator can add newly labeled corpora in time according to the business situation, the overlap and difference between the new corpora and the historical corpora cannot be identified, so the recognition capability the model lacks cannot be improved in a targeted manner. Operators must therefore spend great cost and effort on labeling and trial-and-error to optimize the model, so the optimization efficiency of the intention recognition model is low.
Disclosure of Invention
The invention provides an intention recognition model optimization method, an intention recognition model optimization device, an electronic device and a readable storage medium, and mainly aims to improve the efficiency of intention recognition model optimization.
In order to achieve the above object, the present invention provides an intention recognition model optimization method, including:
when an intention recognition task is received, acquiring a rejected corpus set from the intention recognition task, and performing text optimization on the rejected corpus set to obtain an optimized corpus set;
performing intention clustering on the corpus texts in the optimized corpus set to obtain an intention clustering result;
carrying out similarity confusion analysis on the intention clustering result, and constructing an optimized training set according to the confusion result;
and training a preset intention recognition model to be optimized by using the optimization training set to obtain an optimization intention recognition model, performing model test on the optimization intention recognition model, and optimizing the intention recognition model in the intention recognition task based on a test result.
Optionally, the obtaining a rejected corpus set from the intention recognition task, and performing text optimization on the rejected corpus set to obtain an optimized corpus set includes:
determining a rejection threshold of the intention recognition model in the intention recognition task, outputting recognition results for the corpora in the intention recognition task by using the intention recognition model, and taking the corpora whose recognition results are lower than the rejection threshold as the rejected corpus set;
and performing text sentence breaking and text error correction processing on the corpus text in the rejection corpus set to obtain the optimized corpus set.
Optionally, the performing intent clustering on the corpus text in the optimized corpus set to obtain an intent clustering result includes:
performing text cleaning and text word segmentation processing on the corpus text in the optimized corpus set to obtain a text word segmentation set;
and performing intention clustering on the text word segmentation set by using a K-Means clustering algorithm to obtain an intention clustering result.
Optionally, the performing intent clustering on the text word segmentation set by using a K-Means clustering algorithm to obtain an intent clustering result includes:
vectorizing the text in the text word segmentation set to obtain a text vector set;
randomly selecting a preset number of text vectors from the text vector set as a clustering center;
sequentially calculating the distance from each text vector in the text vector set to the clustering center, and dividing each text vector into clustering clusters corresponding to the clustering center with the minimum distance to obtain a plurality of intention clustering clusters;
recalculating the clustering center of each intention clustering cluster, returning to the step of sequentially calculating the distance from each text vector in the text vector set to the clustering center until the clustering centers of all the intention clustering clusters are converged, and acquiring a clustering label corresponding to each converged intention clustering cluster;
and determining the text and the clustering label corresponding to the converged intention clustering cluster as the intention clustering result.
Optionally, the cluster center of each intention cluster is calculated by the following formula:
E_i = \frac{1}{|C_i|} \sum_{x \in C_i} x
wherein E_i is the i-th cluster center, C_i is the i-th intention cluster, and x is a text vector in the intention cluster.
Optionally, the performing a similarity confusion analysis on the intention clustering result, and constructing an optimized training set according to a confusion result includes:
calculating the similarity of the target text in the intention clustering result and the corpus text in the intention corpus corresponding to the intention recognition model one by one;
if the similarity is smaller than a preset similarity threshold, determining that the confusion result of the target text is a confusion text and deleting the confusion text; if the similarity is larger than or equal to the similarity threshold, determining that the confusion result of the target text is an optimized text, and adding all the optimized texts into the intention corpus;
and determining the intention corpus to which all the optimized texts are added as the optimized training set.
Optionally, the performing model test on the optimized intention recognition model, and optimizing the intention recognition model in the intention recognition task based on a test result includes:
acquiring an intention test set, and outputting a test result of the intention test set by using the optimization intention recognition model;
and if the test result does not meet the preset optimization condition, returning to the step of performing similarity confusion analysis on the intention clustering result until the test result meets the preset optimization condition, and replacing the intention recognition model with the optimization intention recognition model.
In order to solve the above problems, the present invention also provides an intention recognition model optimization apparatus, including:
the text optimization module is used for acquiring a rejected corpus set from the intention recognition task when the intention recognition task is received, and performing text optimization on the rejected corpus set to obtain an optimized corpus set;
the training set construction module is used for carrying out intention clustering on the corpus texts in the optimized corpus set to obtain intention clustering results, carrying out similarity confusion analysis on the intention clustering results and constructing an optimized training set according to the confusion results;
and the model optimization module is used for training a preset intention recognition model to be optimized by utilizing the optimization training set to obtain an optimization intention recognition model, carrying out model test on the optimization intention recognition model, and optimizing the intention recognition model in the intention recognition task based on a test result.
In order to solve the above problem, the present invention also provides an electronic device, including:
a memory storing at least one computer program; and
a processor executing a computer program stored in the memory to implement the intent recognition model optimization method described above.
In order to solve the above problem, the present invention also provides a computer-readable storage medium, in which at least one computer program is stored, and the at least one computer program is executed by a processor in an electronic device to implement the method for optimizing an intention recognition model described above.
The method obtains the rejected corpus set from the intention recognition task and performs text optimization on it to obtain the optimized corpus set; because the optimized corpora come from the actual intention recognition task, the accuracy of model training can be improved. Meanwhile, intention clustering is performed on the corpus texts in the optimized corpus set to obtain the intention clustering result, similarity confusion analysis is performed on the intention clustering result, and the optimized training set is constructed according to the confusion result, so a large amount of manual annotation is not needed, which greatly improves the efficiency of intention recognition model optimization. Therefore, the intention recognition model optimization method, the intention recognition model optimization device, the electronic equipment and the computer-readable storage medium can improve the efficiency of intention recognition model optimization.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating an intent recognition model optimization method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart showing a detailed implementation of one of the steps in FIG. 1;
FIG. 3 is a schematic flow chart showing a detailed implementation of another step in FIG. 1;
FIG. 4 is a schematic flow chart showing a detailed implementation of another step in FIG. 1;
FIG. 5 is a functional block diagram of an apparatus for optimizing an intent recognition model according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device implementing the method for optimizing an intention recognition model according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention provides an intention recognition model optimization method. The execution subject of the intention recognition model optimization method includes, but is not limited to, at least one of a server, a terminal and other electronic devices that can be configured to execute the method provided by the embodiment of the present invention. In other words, the intention recognition model optimization method may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like. The server may be an independent server, or may be a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms.
Referring to fig. 1, a schematic flow chart of an intention recognition model optimization method according to an embodiment of the present invention is shown. In this embodiment, the intention recognition model optimization method includes the following steps S1 to S3:
s1, when an intention recognition task is received, acquiring a rejected corpus set from the intention recognition task, and performing text optimization on the rejected corpus set to obtain an optimized corpus set.
In the embodiment of the invention, the intention recognition task is to recognize the user's intention through an intention recognition model; for example, in an intelligent question-answering customer service system, the user's intention in asking a question is recognized, and a corresponding answer is matched for the user to view.
In detail, referring to fig. 2, the obtaining of the rejected corpus set from the intention recognition task and the text optimization of the rejected corpus set to obtain the optimized corpus set include the following steps S10 to S11:
s10, determining a rejection threshold value of a meaning graph recognition model in the intention recognition task, outputting a recognition result of the linguistic data in the intention recognition task by using the intention recognition model, and taking the linguistic data lower than the rejection threshold value in the recognition result of the intention recognition model as a rejection linguistic data set;
s11, text sentence breaking and text error correction processing are carried out on the corpus texts in the rejection corpus set, and the optimized corpus set is obtained.
In an optional embodiment of the invention, different rejection thresholds can be set for different intention recognition models, and all corpora whose model recognition results are lower than the threshold are collected as rejections while the actual service runs. Because all the rejected corpora come from the actual service, the authenticity of the corpora can be ensured, improving the model optimization training result.
In an optional embodiment of the invention, the rejection corpus set can be obtained in real time from the actual service running log of the intention recognition task.
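The rejection-collection step can be sketched as a simple confidence filter. The function name, the data shape (text, confidence) and the threshold value below are illustrative assumptions, not from the patent:

```python
def collect_rejected_corpus(recognition_results, rejection_threshold=0.6):
    """Collect every corpus text whose top recognition confidence falls below
    the model's rejection threshold (threshold value here is illustrative)."""
    return [text for text, confidence in recognition_results
            if confidence < rejection_threshold]
```

In practice the (text, confidence) pairs would be read from the actual service running log of the intention recognition task.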
The rejected corpora originate from a real service. For example, in speech intention recognition, speech needs to be converted into text, and the converted text may contain useless information or be too long; therefore, text sentence breaking, error correction and other processing improve the correctness of the optimized corpora.
In an alternative embodiment of the present invention, text sentence breaking can be performed according to sentence-ending symbols (e.g., "。", "！", "？"). Meanwhile, text error correction can be performed by using a Chinese error correction model such as Soft-Masked BERT.
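A minimal sketch of the sentence-breaking step, splitting at sentence-ending symbols; the helper name is invented for illustration, and the error-correction step (e.g., with a Soft-Masked BERT model) is omitted here:

```python
import re

def split_sentences(text: str) -> list[str]:
    """Break a long recognized text into sentences at Chinese and ASCII
    sentence-ending symbols, dropping empty fragments."""
    # Zero-width lookbehind keeps the ending symbol attached to its sentence.
    parts = re.split(r"(?<=[。！？.!?])", text)
    return [p.strip() for p in parts if p.strip()]
```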
And S2, performing intention clustering on the corpus texts in the optimized corpus set to obtain an intention clustering result.
In the embodiment of the invention, because the optimized corpus set is difficult to use directly for model optimization training, it first needs to be refined through intention clustering, which improves model optimization training efficiency.
In detail, referring to fig. 3, the performing intent clustering on the corpus text in the optimized corpus set to obtain an intent clustering result includes the following steps S20 to S21:
s20, performing text cleaning and text word segmentation processing on the corpus text in the optimized corpus set to obtain a text word segmentation set;
and S21, carrying out intention clustering on the text word segmentation set by utilizing a K-Means clustering algorithm to obtain an intention clustering result.
In the embodiment of the invention, text cleaning removes punctuation and other characters from the text, keeping only Chinese characters, digits and English letters; meanwhile, the cleaned text can be segmented with the jieba word segmentation tool to obtain the text word segmentation set.
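The cleaning rule stated above (keep only Chinese characters, digits and English letters) can be sketched with a single regular expression; the function name is illustrative, and the subsequent jieba word segmentation (a third-party tool) is not shown:

```python
import re

def clean_text(text: str) -> str:
    """Keep only Chinese characters (CJK Unified Ideographs), digits and
    English letters; everything else, including punctuation, is removed."""
    return re.sub(r"[^0-9A-Za-z\u4e00-\u9fff]", "", text)
```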
Specifically, the performing intent clustering on the text word segmentation set by using a K-Means clustering algorithm to obtain an intent clustering result includes:
vectorizing the text in the text word segmentation set to obtain a text vector set;
randomly selecting a preset number of text vectors from the text vector set as a clustering center;
sequentially calculating the distance from each text vector in the text vector set to the clustering center, and dividing each text vector into clustering clusters corresponding to the clustering center with the smallest distance to obtain a plurality of intention clustering clusters;
recalculating the clustering center of each intention clustering cluster, returning to the step of sequentially calculating the distance from each text vector in the text vector set to the clustering center until the clustering centers of all the intention clustering clusters are converged, and acquiring a clustering label corresponding to each converged intention clustering cluster;
and determining texts and clustering labels corresponding to the converged intention clustering cluster as the intention clustering result.
In an optional embodiment of the present invention, the Term Frequency-Inverse Document Frequency (TF-IDF) algorithm may be used to convert the texts in the input text word segmentation set into text vectors. After the number of clusters K is set, K text vectors are randomly selected from the text vector set as clustering centers, and the input texts are divided into different intention clusters according to their semantics. An intention label is then assigned to each intention cluster based on the corpus semantics within it, yielding the clustering labels; this enables fast and convenient batch labeling. For example, in a loan collection service in the finance field, a certain intention cluster A is labeled according to the semantics of its corpus texts, and its cluster label is determined to be "loan pending payment"; that is, the "name" of intention cluster A indicates that the corpora in it are texts about "loan pending payment".
In the embodiment of the present invention, the cluster center of each intention cluster can be calculated by the following formula:
E_i = \frac{1}{|C_i|} \sum_{x \in C_i} x
wherein E_i is the i-th cluster center, C_i is the i-th intention cluster, and x is a text vector in the intention cluster.
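The clustering loop described above (random initial centers, nearest-center assignment, mean-based center recomputation until convergence) can be sketched in pure Python. The function name and toy data are invented for illustration, and TF-IDF vectorization is assumed to have already produced the tuple vectors:

```python
import random

def kmeans(vectors, k, iters=100, seed=0):
    """Minimal K-Means sketch over tuples of floats: pick k random centers,
    assign each vector to its nearest center, recompute each center as the
    mean of its cluster, and stop when the centers no longer change."""
    rng = random.Random(seed)
    centers = rng.sample(vectors, k)  # random initial clustering centers
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            # squared Euclidean distance to each center; join the nearest cluster
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2 for a, b in zip(v, centers[j])))
            clusters[nearest].append(v)
        # recompute each cluster center as the mean of its member vectors
        new_centers = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
        if new_centers == centers:  # converged
            break
        centers = new_centers
    return centers, clusters
```

A production system would instead use an optimized implementation (e.g., scikit-learn's KMeans) over sparse TF-IDF vectors.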
And S3, carrying out similarity confusion analysis on the intention clustering result, and constructing an optimized training set according to the confusion result.
In the embodiment of the invention, the intention clustering result is added to the existing intention corpus of the intention recognition model to form a new intention corpus for model training. Meanwhile, to ensure that the existing corpora and the newly added corpora are not confused with one another, confusion analysis is performed using tools such as similarity cross-analysis, and corpora that cause confusion are removed or moved to the correct intention.
In detail, referring to fig. 4, performing similarity confusion analysis on the intention clustering results, and constructing an optimized training set according to the confusion results includes the following steps S30 to S32:
s30, calculating the similarity between the target text in the intention clustering result and the corpus text in the intention corpus corresponding to the intention recognition model one by one;
s31, if the similarity is smaller than a preset similarity threshold, determining that a confusion result of the target text is a confusion text and deleting the confusion text, if the similarity is larger than or equal to the similarity threshold, determining that the confusion result of the target text is an optimized text, and adding all the optimized texts into the intention corpus;
and S32, determining the intention corpus to which all the optimized texts are added as the optimized training set.
In the embodiment of the invention, the accuracy of the training data can be further improved and the model training effect can be improved through the similarity confusion analysis.
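One possible reading of steps S30 to S31 is sketched below, using cosine similarity over text vectors and taking a target's best match against the intention corpus as its similarity score; the function names, the aggregation choice (maximum) and the threshold are assumptions for illustration:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def confusion_filter(target_vectors, corpus_vectors, threshold=0.5):
    """Targets whose best similarity to the existing intention corpus is below
    the threshold are treated as confusion texts (deleted); the rest are
    optimized texts (kept for the optimized training set)."""
    optimized, confusion = [], []
    for t in target_vectors:
        best = max((cosine_similarity(t, c) for c in corpus_vectors), default=0.0)
        (optimized if best >= threshold else confusion).append(t)
    return optimized, confusion
```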
And S4, training a preset intention recognition model to be optimized by using the optimization training set to obtain an optimization intention recognition model, performing model test on the optimization intention recognition model, and optimizing the intention recognition model in the intention recognition task based on a test result.
In the embodiment of the invention, the optimization training set is used to train the to-be-optimized intention recognition model; because the training data of the optimization training set come from the actual business, the adaptability and the accuracy of model training can be greatly improved. The to-be-optimized intention recognition model can be a model based on a traditional machine learning algorithm, such as DBN or SVM, or a model based on a deep learning algorithm, such as LSTM, Bi-RNN or Bi-LSTM-CRF.
In detail, the model testing the optimized intention recognition model, and optimizing the intention recognition model in the intention recognition task based on the test result comprises:
acquiring an intention test set, and outputting a test result of the intention test set by using the optimization intention recognition model;
and if the test result does not meet the preset optimization condition, returning to the step of performing similarity confusion analysis on the intention clustering result until the test result meets the preset optimization condition, and replacing the intention recognition model with the optimization intention recognition model.
In an optional embodiment of the present invention, the test result may be the accuracy, precision, recall rate and the like output by the optimization intention recognition model. If the test result satisfies a preset optimization condition (for example, the accuracy, precision and recall rate are all greater than or equal to a preset test threshold), the optimization intention recognition model is deployed online to replace the old model in the intention recognition task, completing the model optimization.
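The optimization-condition check in the example above reduces to comparing each metric against its preset test threshold; the function name, metric names and threshold values below are illustrative assumptions:

```python
def meets_optimization_condition(test_result: dict, thresholds: dict) -> bool:
    """True when every tested metric (e.g., accuracy, precision, recall)
    reaches its preset test threshold, meaning the optimized model may
    replace the old one; missing metrics count as failing."""
    return all(test_result.get(metric, 0.0) >= t for metric, t in thresholds.items())
```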
The method obtains the rejected corpus set from the intention recognition task and performs text optimization on it to obtain the optimized corpus set; because the optimized corpora come from the actual intention recognition task, the accuracy of model training can be improved.
Meanwhile, intention clustering is performed on the corpus texts in the optimized corpus set to obtain the intention clustering result, similarity confusion analysis is performed on the intention clustering result, and the optimized training set is constructed according to the confusion result, so a large amount of manual annotation is not needed, which greatly improves the efficiency of intention recognition model optimization. Therefore, the intention recognition model optimization method provided by the invention can improve the efficiency of intention recognition model optimization.
Fig. 5 is a functional block diagram of an intention recognition model optimization apparatus according to an embodiment of the present invention.
The intention recognition model optimizing device 100 of the present invention may be installed in an electronic device. According to the implemented functions, the intention recognition model optimization device 100 may include a text optimization module 101, a training set construction module 102, and a model optimization module 103. The module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the text optimization module 101 is configured to, when an intention recognition task is received, obtain a rejected corpus set from the intention recognition task, and perform text optimization on the rejected corpus set to obtain an optimized corpus set;
the training set construction module 102 is configured to perform intent clustering on the corpus texts in the optimized corpus set to obtain an intent clustering result, perform similarity confusion analysis on the intent clustering result, and construct an optimized training set according to the confusion result;
the model optimization module 103 is configured to train a preset to-be-optimized intention recognition model by using the optimization training set to obtain an optimized intention recognition model, perform model testing on the optimized intention recognition model, and optimize the intention recognition model in the intention recognition task based on a test result.
In detail, the specific implementation of each module of the intention recognition model optimization device 100 is as follows:
step one, when an intention recognition task is received, acquiring a rejected corpus set from the intention recognition task, and performing text optimization on the rejected corpus set to obtain an optimized corpus set.
In the embodiment of the invention, the intention recognition task is to recognize the user's intention through an intention recognition model; for example, in an intelligent question-answering customer service system, the user's intention in asking a question is recognized, and a corresponding answer is matched for the user to view.
In detail, the obtaining of the rejected corpus set from the intention recognition task and the text optimization of the rejected corpus set to obtain an optimized corpus set include:
determining a rejection threshold of the intention recognition model in the intention recognition task, outputting recognition results for the corpora in the intention recognition task by using the intention recognition model, and taking the corpora whose recognition results are lower than the rejection threshold as the rejected corpus set;
and performing text sentence breaking and text error correction processing on the corpus text in the rejection corpus set to obtain the optimized corpus set.
In an optional embodiment of the invention, different rejection thresholds can be set for different intention recognition models, and all corpora whose model recognition results are lower than the threshold are collected as rejections while the actual service runs. Because all the rejected corpora come from the actual service, the authenticity of the corpora can be ensured, improving the model optimization training result.
In an optional embodiment of the invention, the rejected corpus set can be obtained in real time from the actual service running log of the intention recognition task.
The rejected corpora originate from a real service. For example, in speech intention recognition, speech needs to be converted into text, and the converted text may contain useless information or be too long; therefore, text sentence breaking, error correction and other processing improve the correctness of the optimized corpora.
In an alternative embodiment of the present invention, text sentence breaking can be performed according to sentence-ending symbols (e.g., "。", "！", "？"). Meanwhile, text error correction can be performed by using a Chinese error correction model such as Soft-Masked BERT.
And secondly, performing intention clustering on the corpus texts in the optimized corpus set to obtain an intention clustering result.
In the embodiment of the invention, because the optimized corpus set is difficult to use directly for model optimization training, it first needs to be refined through intention clustering, which improves model optimization training efficiency.
In detail, the performing intent clustering on the corpus text in the optimized corpus set to obtain an intent clustering result includes:
performing text cleaning and text word segmentation processing on the corpus text in the optimized corpus set to obtain a text word segmentation set;
and performing intention clustering on the text word segmentation set by using a K-Means clustering algorithm to obtain an intention clustering result.
In the embodiment of the invention, the text cleaning refers to removing punctuation marks and other characters in the text, keeping only the Chinese characters, numbers and English letters in the text; meanwhile, the jieba tokenizer is used to perform word segmentation on the cleaned text to obtain a text word segmentation set.
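The cleaning rule described above (keep only Chinese characters, digits and English letters) can be sketched as follows; the subsequent word segmentation would use jieba, which is only indicated in a comment since it is a third-party library:

```python
import re

def clean_text(text):
    """Keep only Chinese characters (CJK Unified Ideographs), digits and
    English letters, removing punctuation and all other characters."""
    return re.sub(r"[^\u4e00-\u9fa5A-Za-z0-9]", "", text)

# Word segmentation would then be applied to the cleaned text, e.g.:
# tokens = jieba.lcut(clean_text(corpus_text))
```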
Specifically, the performing intent clustering on the text word segmentation set by using a K-Means clustering algorithm to obtain an intent clustering result includes:
vectorizing the text in the text word segmentation set to obtain a text vector set;
randomly selecting a preset number of text vectors from the text vector set as a clustering center;
sequentially calculating the distance from each text vector in the text vector set to the clustering center, and dividing each text vector into clustering clusters corresponding to the clustering center with the smallest distance to obtain a plurality of intention clustering clusters;
recalculating the clustering center of each intention clustering cluster, returning to the step of sequentially calculating the distance from each text vector in the text vector set to the clustering center until the clustering centers of all the intention clustering clusters are converged, and acquiring a clustering label corresponding to each converged intention clustering cluster;
and determining the text and the clustering label corresponding to the converged intention clustering cluster as the intention clustering result.
In an optional embodiment of the invention, a Term Frequency-Inverse Document Frequency (TF-IDF) algorithm can be used to convert the texts in the input text word segmentation set into text vectors. After the cluster number K is set, K text vectors are randomly selected from the text vector set as clustering centers, and the input texts are divided into different intention clusters according to their semantics. An intention label is then set for each intention cluster based on the semantics of the corpora in that cluster, so that batch labeling can be realized quickly and conveniently. For example, in a loan collection service in the finance field, a certain intention cluster A is labeled according to the semantics of the corpus texts it contains, and its cluster label is determined to be "loan pending repayment"; that is, the "name" of intention cluster A indicates that the corpora in cluster A are texts about "loan pending repayment".
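A self-contained TF-IDF sketch over tokenized documents (the smoothing constant in the IDF term is an assumption; a production system would more likely use an existing library implementation):

```python
import math
from collections import Counter

def tfidf_vectors(token_docs):
    """Turn tokenized documents (lists of tokens, e.g. jieba output) into
    TF-IDF vectors over a shared, sorted vocabulary."""
    vocab = sorted({t for doc in token_docs for t in doc})
    df = Counter(t for doc in token_docs for t in set(doc))  # document frequency
    n = len(token_docs)
    vectors = []
    for doc in token_docs:
        tf = Counter(doc)
        vectors.append(tuple(
            (tf[t] / len(doc)) * math.log((1 + n) / (1 + df[t]))  # smoothed IDF
            for t in vocab))
    return vocab, vectors
```

A term appearing in every document gets weight 0, so cluster separation is driven by the distinctive tokens of each intent.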
In the embodiment of the present invention, the cluster center of each intention cluster can be calculated by the following formula:
E_i = (1/|C_i|) · Σ_{x ∈ C_i} x

wherein E_i is the i-th clustering center, C_i is the i-th intention cluster, and x is a text vector in the intention cluster.
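The iterative procedure above (assign each vector to its nearest center, recompute each center as the cluster mean E_i, repeat until the centers converge) can be sketched in pure Python; the random initialization and Euclidean distance are assumptions for illustration:

```python
import math
import random

def kmeans(vectors, k, max_iters=100, seed=0):
    """Minimal K-Means sketch: `vectors` are tuples of floats
    (e.g. TF-IDF text vectors)."""
    rng = random.Random(seed)
    centers = rng.sample(vectors, k)  # random initial clustering centers
    for _ in range(max_iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:  # assign each vector to its nearest center
            nearest = min(range(k), key=lambda i: math.dist(v, centers[i]))
            clusters[nearest].append(v)
        # recompute each center as the mean of its cluster (the E_i formula)
        new_centers = [
            tuple(sum(dim) / len(cluster) for dim in zip(*cluster)) if cluster
            else centers[i]
            for i, cluster in enumerate(clusters)]
        if new_centers == centers:  # converged
            break
        centers = new_centers
    return centers, clusters
```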
And thirdly, performing similarity confusion analysis on the intention clustering results, and constructing an optimized training set according to the confusion results.
In the embodiment of the invention, the intention clustering result is added into the existing intention corpus of the intention recognition model to form a new intention corpus for model training. Meanwhile, in order to ensure that the existing corpora and the newly added corpora are not confused with each other, confusion analysis is carried out by means such as similarity cross analysis, and the corpora that cause confusion are removed or moved to the correct intention.
In detail, the performing a similarity confusion analysis on the intention clustering result, and constructing an optimized training set according to a confusion result includes:
calculating the similarity of the target text in the intention clustering result and the corpus text in the intention corpus corresponding to the intention recognition model one by one;
if the similarity is smaller than a preset similarity threshold, determining that the confusion result of the target text is a confusion text and deleting the confusion text; if the similarity is larger than or equal to the similarity threshold, determining that the confusion result of the target text is an optimized text, and adding all the optimized texts into the intention corpus;
and determining the intention corpus to which all the optimized texts are added as the optimized training set.
In the embodiment of the invention, the accuracy of the training data can be further improved and the model training effect can be improved through the similarity confusion analysis.
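A sketch of the confusion filter described above. Representing texts as vectors and aggregating the "one by one" comparisons with a maximum cosine similarity is an assumption about how the per-corpus similarities are combined:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def filter_confusion(candidates, corpus_vectors, threshold=0.5):
    """Keep candidate texts similar enough to the existing intention corpus;
    below-threshold candidates are treated as confusion texts and dropped."""
    optimized = []
    for text, vec in candidates:
        if max(cosine(vec, cv) for cv in corpus_vectors) >= threshold:
            optimized.append(text)
    return optimized
```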
And fourthly, training a preset intention recognition model to be optimized by using the optimization training set to obtain an optimization intention recognition model, performing model test on the optimization intention recognition model, and optimizing the intention recognition model in the intention recognition task based on a test result.
In the embodiment of the invention, the optimization training set is used for training the intention recognition model to be optimized; because the training data of the optimization training set comes from actual services, the adaptability and the accuracy of model training can be greatly improved. The intention recognition model to be optimized can be a model based on a traditional machine learning algorithm, such as DBN or SVM, or a model based on a deep learning algorithm, such as LSTM, Bi-RNN or Bi-LSTM-CRF.
In detail, the model testing the optimized intention recognition model, and optimizing the intention recognition model in the intention recognition task based on the test result comprises:
acquiring an intention test set, and outputting a test result of the intention test set by using the optimization intention recognition model;
and if the test result does not meet the preset optimization condition, returning to the step of performing similarity confusion analysis on the intention clustering result until the test result meets the preset optimization condition, and replacing the intention identification model with the optimization intention identification model.
In an optional embodiment of the present invention, the test result may be the accuracy, precision, recall rate, and the like output by the optimization intention recognition model. If the test result satisfies a preset optimization condition (for example, the accuracy, precision and recall rate are all greater than or equal to a preset test threshold), the optimization intention recognition model is deployed online to replace the old model in the intention recognition task, thereby completing the model optimization.
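The preset optimization condition described above (every tracked metric at or above its test threshold) might be checked as follows; the metric names and the all-metrics-must-pass form of the condition are illustrative assumptions:

```python
def meets_optimization_condition(test_results, thresholds):
    """Return True when every tracked metric (e.g. accuracy, precision,
    recall) reaches its preset test threshold."""
    return all(test_results[metric] >= t for metric, t in thresholds.items())
```

When this returns False, the flow returns to the similarity confusion analysis step and retrains; when it returns True, the optimized model replaces the old one.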
The method obtains the rejected corpus set from the intention recognition task and performs text optimization on it to obtain the optimized corpus set; because the optimized corpora come from the actual intention recognition task, the accuracy of model training can be improved. Meanwhile, intention clustering is performed on the corpus texts in the optimized corpus set to obtain the intention clustering result, similarity confusion analysis is performed on the intention clustering result, and the optimized training set is constructed according to the confusion result, so that a large amount of manual annotation is not needed and the efficiency of intention recognition model optimization is greatly improved. Therefore, the intention recognition model optimization method provided by the invention can improve the efficiency of intention recognition model optimization.
Fig. 6 is a schematic structural diagram of an electronic device implementing the method for optimizing an intention recognition model according to an embodiment of the present invention.
The electronic device may include a processor 10, a memory 11, a communication interface 12, and a bus 13, and may further include a computer program, such as an intention recognition model optimization program, stored in the memory 11 and operable on the processor 10.
The memory 11 includes at least one type of readable storage medium, which includes flash memory, removable hard disk, multimedia card, card type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device, for example a removable hard disk of the electronic device. The memory 11 may also be an external storage device of the electronic device in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used not only to store application software installed in the electronic device and various types of data, such as codes of an intention recognition model optimization program, etc., but also to temporarily store data that has been output or is to be output.
The processor 10 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same function or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital processing chips, graphics processors, and combinations of various control chips. The processor 10 is the control unit of the electronic device, connects various components of the whole electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device by running or executing programs or modules (e.g., the intention recognition model optimization program, etc.) stored in the memory 11 and calling data stored in the memory 11.
The communication interface 12 is used for communication between the electronic device and other devices, and includes a network interface and a user interface. Optionally, the network interface may include a wired interface and/or a wireless interface (e.g., WI-FI interface, bluetooth interface, etc.), which are typically used to establish a communication connection between the electronic device and other electronic devices. The user interface may be a Display (Display), an input unit such as a Keyboard (Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable, among other things, for displaying information processed in the electronic device and for displaying a visualized user interface.
The bus 13 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 13 may be divided into an address bus, a data bus, a control bus, etc. The bus 13 is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
Fig. 6 shows only an electronic device having components, and those skilled in the art will appreciate that the structure shown in fig. 6 is not limiting of the electronic device, and may include fewer or more components than shown, or some components may be combined, or a different arrangement of components.
For example, although not shown, the electronic device may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management and the like are realized through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the electronic device may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used to establish a communication connection between the electronic device and other electronic devices.
Optionally, the electronic device may further comprise a user interface, which may be a Display (Display), an input unit, such as a Keyboard (Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable, among other things, for displaying information processed in the electronic device and for displaying a visualized user interface.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The intention-recognition-model optimization program stored in the memory 11 of the electronic device is a combination of instructions that, when executed in the processor 10, may implement:
when an intention recognition task is received, acquiring a rejected corpus set from the intention recognition task, and performing text optimization on the rejected corpus set to obtain an optimized corpus set;
performing intention clustering on the corpus texts in the optimized corpus set to obtain an intention clustering result;
carrying out similarity confusion analysis on the intention clustering result, and constructing an optimized training set according to the confusion result;
and training a preset intention recognition model to be optimized by using the optimization training set to obtain an optimization intention recognition model, performing model test on the optimization intention recognition model, and optimizing the intention recognition model in the intention recognition task based on a test result.
Specifically, the specific implementation method of the instruction by the processor 10 may refer to the description of the relevant steps in the embodiment corresponding to the drawings, which is not described herein again.
Further, if the integrated module/unit of the electronic device is implemented in the form of a software functional unit and sold or used as a separate product, it may be stored in a computer-readable storage medium. The computer-readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying said computer program code, a recording medium, a U-disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), etc.
The present invention also provides a computer-readable storage medium, storing a computer program which, when executed by a processor of an electronic device, may implement:
when an intention recognition task is received, acquiring a rejected corpus set from the intention recognition task, and performing text optimization on the rejected corpus set to obtain an optimized corpus set;
performing intention clustering on the corpus texts in the optimized corpus set to obtain an intention clustering result;
carrying out similarity confusion analysis on the intention clustering result, and constructing an optimized training set according to the confusion result;
and training a preset intention recognition model to be optimized by using the optimization training set to obtain an optimization intention recognition model, performing model test on the optimization intention recognition model, and optimizing the intention recognition model in the intention recognition task based on a test result.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The embodiment of the invention can acquire and process related data based on artificial intelligence technology. Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, sense the environment, acquire knowledge, and use the knowledge to obtain the best result.
The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
The block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A method for intent recognition model optimization, the method comprising:
when an intention recognition task is received, acquiring a rejected corpus set from the intention recognition task, and performing text optimization on the rejected corpus set to obtain an optimized corpus set;
performing intention clustering on the corpus texts in the optimized corpus set to obtain an intention clustering result;
carrying out similarity confusion analysis on the intention clustering result, and constructing an optimized training set according to the confusion result;
and training a preset intention recognition model to be optimized by using the optimization training set to obtain an optimization intention recognition model, performing model test on the optimization intention recognition model, and optimizing the intention recognition model in the intention recognition task based on a test result.
2. The method for optimizing an intention recognition model according to claim 1, wherein the obtaining of the corpus of rejected words from the intention recognition task and the text optimization of the corpus of rejected words to obtain an optimized corpus comprises:
determining a rejection threshold of the intention recognition model in the intention recognition task, outputting recognition results for the corpora in the intention recognition task by using the intention recognition model, and taking the corpora whose recognition results are lower than the rejection threshold as a rejected corpus set;
and performing text sentence breaking and text error correction processing on the corpus text in the rejection corpus set to obtain the optimized corpus set.
3. The method for optimizing an intention recognition model according to claim 1, wherein the performing intention clustering on corpus texts in the optimized corpus collection to obtain an intention clustering result comprises:
performing text cleaning and text word segmentation processing on the corpus texts in the optimized corpus set to obtain a text word segmentation set;
and performing intention clustering on the text word segmentation set by using a K-Means clustering algorithm to obtain an intention clustering result.
4. The method for optimizing the intention recognition model according to claim 3, wherein the intention clustering of the text participle set by using the K-Means clustering algorithm to obtain an intention clustering result comprises:
vectorizing texts in the text word segmentation set to obtain a text vector set;
randomly selecting a preset number of text vectors from the text vector set as a clustering center;
sequentially calculating the distance from each text vector in the text vector set to the clustering center, and dividing each text vector into clustering clusters corresponding to the clustering center with the smallest distance to obtain a plurality of intention clustering clusters;
recalculating the clustering center of each intention clustering cluster, returning to the step of sequentially calculating the distance from each text vector in the text vector set to the clustering center until the clustering centers of all the intention clustering clusters are converged, and acquiring a clustering label corresponding to each converged intention clustering cluster;
and determining the text and the clustering label corresponding to the converged intention clustering cluster as the intention clustering result.
5. The intent recognition model optimization method of claim 4, wherein the cluster center for each intent cluster is calculated by the formula:
E_i = (1/|C_i|) · Σ_{x ∈ C_i} x

wherein E_i is the i-th clustering center, C_i is the i-th intention cluster, and x is a text vector in the intention cluster.
6. The method for optimizing the intention recognition model according to claim 2, wherein the performing a similarity confusion analysis on the intention clustering results and constructing an optimized training set according to the confusion results comprises:
calculating the similarity of the target text in the intention clustering result and the corpus text in the intention corpus corresponding to the intention recognition model one by one;
if the similarity is smaller than a preset similarity threshold, determining that the confusion result of the target text is a confusion text and deleting the confusion text, if the similarity is larger than or equal to the similarity threshold, determining that the confusion result of the target text is an optimized text, and adding all the optimized texts into the intention corpus;
and determining the intention corpus to which all the optimized texts are added as the optimized training set.
7. The method for optimizing an intention recognition model according to claim 1, wherein the model testing the optimized intention recognition model, and optimizing the intention recognition model in the intention recognition task based on the test result comprises:
acquiring an intention test set, and outputting a test result of the intention test set by using the optimization intention recognition model;
and if the test result does not meet the preset optimization condition, returning to the step of performing similarity confusion analysis on the intention clustering result until the test result meets the preset optimization condition, and replacing the intention recognition model with the optimized intention recognition model.
8. An intent recognition model optimization apparatus, the apparatus comprising:
the text optimization module is used for acquiring a rejected corpus set from the intention recognition task when the intention recognition task is received, and performing text optimization on the rejected corpus set to obtain an optimized corpus set;
the training set construction module is used for carrying out intention clustering on the corpus texts in the optimized corpus set to obtain intention clustering results, carrying out similarity confusion analysis on the intention clustering results and constructing an optimized training set according to the confusion results;
and the model optimization module is used for training a preset intention recognition model to be optimized by utilizing the optimization training set to obtain an optimization intention recognition model, performing model test on the optimization intention recognition model, and optimizing the intention recognition model in the intention recognition task based on a test result.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the method of intent recognition model optimization of any of claims 1-7.
10. A computer-readable storage medium, storing a computer program, wherein the computer program, when executed by a processor, implements the intent recognition model optimization method of any of claims 1-7.
CN202210883771.4A 2022-07-26 2022-07-26 Intention recognition model optimization method, device, equipment and storage medium Pending CN115146064A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210883771.4A CN115146064A (en) 2022-07-26 2022-07-26 Intention recognition model optimization method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210883771.4A CN115146064A (en) 2022-07-26 2022-07-26 Intention recognition model optimization method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115146064A true CN115146064A (en) 2022-10-04

Family

ID=83414590

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210883771.4A Pending CN115146064A (en) 2022-07-26 2022-07-26 Intention recognition model optimization method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115146064A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115408509A (en) * 2022-11-01 2022-11-29 杭州一知智能科技有限公司 Intention identification method, system, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
CN112016304A (en) Text error correction method and device, electronic equipment and storage medium
CN112597312A (en) Text classification method and device, electronic equipment and readable storage medium
CN112883190A (en) Text classification method and device, electronic equipment and storage medium
CN113704429A (en) Semi-supervised learning-based intention identification method, device, equipment and medium
CN113033198B (en) Similar text pushing method and device, electronic equipment and computer storage medium
CN112883730B (en) Similar text matching method and device, electronic equipment and storage medium
CN113051356A (en) Open relationship extraction method and device, electronic equipment and storage medium
CN112988963A (en) User intention prediction method, device, equipment and medium based on multi-process node
CN113157927A (en) Text classification method and device, electronic equipment and readable storage medium
CN113378970A (en) Sentence similarity detection method and device, electronic equipment and storage medium
CN114511038A (en) False news detection method and device, electronic equipment and readable storage medium
CN112528013A (en) Text abstract extraction method and device, electronic equipment and storage medium
CN113821622A (en) Answer retrieval method and device based on artificial intelligence, electronic equipment and medium
CN113344125B (en) Long text matching recognition method and device, electronic equipment and storage medium
CN113887941A (en) Business process generation method and device, electronic equipment and medium
CN113658002A (en) Decision tree-based transaction result generation method and device, electronic equipment and medium
CN115146064A (en) Intention recognition model optimization method, device, equipment and storage medium
CN113205814A (en) Voice data labeling method and device, electronic equipment and storage medium
CN113254814A (en) Network course video labeling method and device, electronic equipment and medium
CN112801222A (en) Multi-classification method and device based on two-classification model, electronic equipment and medium
CN114757154B (en) Job generation method, device and equipment based on deep learning and storage medium
CN115346095A (en) Visual question answering method, device, equipment and storage medium
CN115510188A (en) Text keyword association method, device, equipment and storage medium
CN114186028A (en) Consult complaint work order processing method, device, equipment and storage medium
CN112632264A (en) Intelligent question and answer method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination