CN110399609B - Intention recognition method, device, equipment and computer readable storage medium - Google Patents


Info

Publication number
CN110399609B
CN110399609B (application CN201910554487.0A)
Authority
CN
China
Prior art keywords
entity
intention
recognition
recognition result
corpus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910554487.0A
Other languages
Chinese (zh)
Other versions
CN110399609A (en)
Inventor
王恒
孙谷飞
周建华
陈学适
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongan Information Technology Service Co Ltd
Original Assignee
Zhongan Information Technology Service Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongan Information Technology Service Co Ltd
Priority to CN201910554487.0A
Publication of CN110399609A
Application granted
Publication of CN110399609B
Legal status: Active
Anticipated expiration


Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Machine Translation (AREA)

Abstract

The application discloses an intention recognition method, device, equipment and computer-readable storage medium, belonging to the technical field of deep learning. The method comprises the following steps: matching data to be recognized against a generalized intention classification table to obtain a preliminary intention recognition result; performing entity recognition on the data to be recognized to obtain an entity recognition result; and performing intention recognition by combining the entity recognition result with the preliminary intention recognition result to obtain a final intention recognition result. The method achieves finer-grained intention classification and higher intention recognition accuracy, and is particularly suitable for intention recognition applications in vertical domains.

Description

Intention recognition method, device, equipment and computer readable storage medium
Technical Field
The present application relates to the field of deep learning technologies, and in particular, to an intent recognition method, apparatus, device, and computer readable storage medium.
Background
Intent recognition (Intention Recognition) and named entity recognition (Named Entity Recognition) are important research areas of natural language processing and play an important role in natural language understanding and in building intelligent customer service. Intent recognition identifies and classifies a user's intention from the text or history the user enters while interacting with intelligent customer service; it is essentially a text classification problem. Intent recognition is central to dialogue management in intelligent customer service, since the service robot must trigger and control subsequent operations according to the recognized intent. Current intent recognition has two main application scenarios: (1) open-text classification, for example classifying text such as news into scenes like "finance" or "weather"; (2) intelligent service robots with relatively simple, single functions, assisting with simple actions such as "turn on the light" or "turn on the speaker" in smart vehicle or smart home services.
Typically, a vertical domain focuses on a particular field and aims to solve the problems specific to that field. Compared with the open domain, intelligent customer service in a vertical domain requires a more professional and authoritative understanding of the industry: it must clearly understand user requirements and genuinely identify the problem the user wants solved in that field. Compared with single-function intelligent service robots, intelligent customer service in a vertical domain faces a deeper and more complex knowledge system and more varied user input, so neither of the two conventional intent recognition scenarios applies well to vertical domains.
In current deep learning practice, text classification is mainly performed by deep neural networks. After the text is segmented, each sentence becomes a sequence of words; each word is then vectorized, so a sentence of text is converted into a two-dimensional matrix from which features are extracted and classified. In natural language processing, judging features and intents has a subjective component, so how the classes are defined matters: it determines how balanced the data are across classes and how much different classes overlap, and both affect the accuracy of the final text classification, i.e. of intent recognition. Current intent recognition models provide no way to define intents and entities in combination with the corpus characteristics of a vertical domain. Faced with many fine-grained intent classes, it is difficult to obtain good results on real corpora without such organization and configuration.
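Purely as an illustration of the pipeline just described (segment the sentence, vectorize each word, stack the vectors into a two-dimensional matrix for a convolutional classifier), the following Python sketch uses a toy vocabulary and a randomly initialized embedding table; none of the names or sizes come from the patent.

```python
import numpy as np

# Toy vocabulary and randomly initialized embedding table (stand-ins for a
# trained tokenizer and word-embedding lookup).
vocab = {"<pad>": 0, "<unk>": 1, "60": 2, "岁": 3, "以上": 4, "能": 5, "投保": 6, "吗": 7}
emb_dim = 8
embedding = np.random.randn(len(vocab), emb_dim).astype("float32")

def sentence_to_matrix(tokens, max_len=10):
    """Map a segmented sentence to a (max_len, emb_dim) matrix."""
    ids = [vocab.get(tok, vocab["<unk>"]) for tok in tokens][:max_len]
    ids += [vocab["<pad>"]] * (max_len - len(ids))   # pad to a fixed length
    return embedding[ids]                            # 2-D feature matrix

matrix = sentence_to_matrix(["60", "岁", "以上", "能", "投保", "吗"])
print(matrix.shape)  # (10, 8) -> input to a CNN text classifier
```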
Disclosure of Invention
In order to solve the problems in the prior art, embodiments of the present application provide an intention recognition method, apparatus, device and computer-readable storage medium, which achieve finer-grained intention classification and higher intention recognition accuracy and are particularly suitable for intention recognition in vertical domains. The technical scheme is as follows:
in a first aspect, an intention recognition method is provided, including: matching the data to be identified with the generalized intention classification table to obtain a preliminary intention identification result; performing entity recognition on the data to be recognized to obtain an entity recognition result; and carrying out intention recognition by combining the entity recognition result and the preliminary intention recognition result to obtain a final intention recognition result.
Further, the method further comprises: and carrying out intention integration operation on the historical corpus to obtain the generalized intention classification table.
Further, performing intent integration operation on the historical corpus, including: according to the business logic and grammar structure of the historical corpus, generalizing at least two intentions with granularity meeting a preset range into the same type of granularity intentions.
Further, the method further comprises: and performing word segmentation and intention labeling on the data to be recognized and the historical corpus, wherein the historical corpus comprises training corpus.
Further, performing entity recognition on the data to be recognized to obtain an entity recognition result, including: and inputting the data to be identified into an entity identification model, and outputting an entity identification result.
Further, the entity recognition model is obtained through training in the following way: extracting and matching entities in the training corpus generalized according to the intention classification table, in combination with an entity dictionary and/or synonym expansion, to obtain entity keywords; and labeling the categories of the entity keywords and training to obtain the entity recognition model.
Further, performing intention recognition by combining the entity recognition result and the preliminary intention recognition result to obtain a final intention recognition result, including: and inputting the entity recognition result and the preliminary intention recognition result into an intention recognition model, and outputting a final intention recognition result.
Further, the intention recognition model is trained by: constructing a convolutional neural network; taking as input the entities and entity positions of the training corpus that has been entity-recognized according to the generalized intention classification table; mapping each word of the segmented historical corpus into a word embedding; and concatenating the word embeddings with the entity embeddings and entity position embeddings as the input of the convolutional neural network, so as to train and obtain the intention recognition model.
Further, the method further comprises: and carrying out subsequent dialogue flow configuration according to the intention recognition result so as to carry out corresponding operation of the dialogue flow configuration.
Further, the method further comprises: triggering an information completion operation when an entity in the intention recognition result does not meet a preset configuration condition.
In a second aspect, there is provided an intention recognition apparatus, including: a preliminary intention matching module, configured to match the data to be recognized with the generalized intention classification table to obtain a preliminary intention recognition result; an entity recognition module, configured to perform entity recognition on the data to be recognized to obtain an entity recognition result; and an intention recognition module, configured to perform intention recognition by combining the entity recognition result and the preliminary intention recognition result to obtain a final intention recognition result.
Further, the apparatus further comprises: the intent generalization module is used for carrying out intent integration operation on the historical corpus and obtaining the generalized intent classification table.
Further, the intent generalization module is used for generalizing at least two intentions with granularity meeting a preset range into the same type of granularity intentions according to the business logic and the grammar structure of the historical corpus.
Further, the apparatus further comprises: the data preprocessing module is used for word segmentation and intention labeling of the data to be recognized and the historical corpus, and the historical corpus comprises training corpus.
Further, the entity recognition module is configured to: input the data to be recognized into an entity recognition model and output an entity recognition result, wherein the entity recognition model is obtained through training in the following way: extracting and matching entities in the training corpus generalized according to the intention classification table, in combination with an entity dictionary and/or synonym expansion, to obtain entity keywords; and labeling the categories of the entity keywords and training to obtain the entity recognition model.
Further, the intention recognition module is configured to: input the entity recognition result and the preliminary intention recognition result into an intention recognition model and output a final intention recognition result, wherein the intention recognition model is trained by: constructing a convolutional neural network; taking as input the entities and entity positions of the training corpus that has been entity-recognized according to the generalized intention classification table; mapping each word of the segmented historical corpus into a word embedding; and concatenating the word embeddings with the entity embeddings and entity position embeddings as the input of the convolutional neural network, so as to train and obtain the intention recognition model.
Further, the apparatus further comprises: and the conversation process configuration module is used for carrying out subsequent conversation process configuration according to the intention recognition result so as to carry out corresponding operation of the conversation process configuration.
Further, the apparatus further comprises: an information completion operation module, configured to trigger an information completion operation when an entity in the intention recognition result does not meet a preset configuration condition.
In a third aspect, there is provided an intention recognition apparatus including: a processor; a memory for storing executable instructions of the processor; wherein the processor is configured to perform the steps of the intent recognition method of any one of the above schemes via the executable instructions.
In a fourth aspect, a computer-readable storage medium is provided, the computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the intent recognition method of any one of the above aspects.
The technical scheme provided by the embodiment of the application has the beneficial effects that:
1. In preparing the corpus, to ensure classification accuracy, intention categories are defined and corpora labeled so that the difference between categories is as large as possible and corpora with similar business logic or similar expression are kept from appearing in different categories; intent generalization and merging of fine-grained intents increase, to a certain extent, the distinction and the margin between different intent categories and reduce confusion;
2. Attention is paid to the balance of intent classes on the real corpus: when defining intent classes, whether the amount of corpus in different classes is balanced must be considered together with the other criteria, since a balanced class design better summarizes and organizes user demands and keeps the corpus volume across classes as balanced as possible on real natural corpora;
3. The generalization ability of intents is improved, so that one intent can cover questions of as many forms and structures as possible;
4. The returned intention recognition result has higher accuracy, which facilitates the subsequent dialogue flow configuration;
5. The method is particularly suitable for configuring intents and entities in a vertical domain: intents and entities can be defined according to business requirements, this configuration improves the accuracy of intention recognition in the vertical domain, and subsequent multi-round dialogue management becomes more convenient.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of an intention recognition method provided in embodiment 1 of the present application;
FIG. 2 is a flowchart of an intention recognition method provided in embodiment 2 of the present application;
FIG. 3 is a schematic diagram of an intent recognition device according to embodiment 3 of the present application;
FIG. 4 is a schematic diagram of the structure of the intent recognition device according to embodiment 4 of the present application;
fig. 5 is a schematic diagram of the structure of the intention recognition apparatus provided in embodiment 5 of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application. In the description of the present application, the meaning of "plurality" is two or more unless specifically defined otherwise.
The intention recognition method, apparatus, device and computer-readable storage medium according to embodiments of the present application use an intention classification table obtained by generalizing user intention data: the data to be recognized is first matched against the table to obtain a preliminary intention recognition result, which is then combined with the entity recognition result to obtain the final intention recognition result. This increases the distinction between different intent categories, reduces confusion between similar intents, improves intent generalization ability, and pays attention to the balance of intent classes in the real corpus; by pairing intents with entities, user intentions are recognized at a finer granularity, supporting subsequent multi-round dialogue flows such as intelligent customer service. Because the scheme classifies and processes intents more finely and achieves higher recognition accuracy by combining intents with entities, it is particularly suitable for intelligent customer service in vertical domains, and is also applicable to other artificial intelligence applications involving intention recognition or text classification.
The method, apparatus, device and computer readable storage medium for identifying intent provided by the embodiments of the present application are described in detail below with reference to specific embodiments and accompanying drawings.
Example 1
Fig. 1 is a flowchart of an intention recognition method provided in embodiment 1 of the present application. As shown in fig. 1, the intention recognition method provided by the embodiment of the application includes the following steps:
101. and matching the data to be identified with the generalized intention classification table to obtain a preliminary intention identification result.
Specifically, before step 101, the intention recognition method according to the embodiment of the present application further includes performing an intent integration operation on the historical corpus to obtain a generalized intention classification table. Preferably, at least two intents whose granularity meets a preset range are generalized into intents of the same granularity class according to the business logic and grammatical structure of the historical corpus. In addition, the data to be recognized and the historical corpus need to be segmented into words and labeled with intents, where the historical corpus includes the training corpus. The historical corpus refers to corpora accumulated by past intent recognition services (including related business data obtainable from network platforms); it can be taken from an existing corpus or compiled temporarily from accumulated business data as needed. The training corpus refers to the corpus data used for subsequent deep learning model training; all of the historical data or only a part of it may be used, that is, the scale or amount of training data is not particularly limited and can be set and selected according to actual needs.
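A minimal preprocessing sketch of the word segmentation and intent labeling described above, assuming the jieba segmenter and a simple (text, intent) pair format; both the segmenter choice and the example labels are illustrative assumptions, not the patent's configuration.

```python
import jieba  # common Chinese word segmenter; any segmenter could be substituted

# Illustrative labeled corpus rows: (raw text, annotated generalized intent).
raw_corpus = [
    ("60岁以上能投保吗", "核保"),       # underwriting
    ("有高血压能不能保", "核保"),
    ("给父母买A产品怎么投保", "投保"),  # application
]

# Segment each labeled sentence into a word sequence for later training.
segmented_corpus = [(jieba.lcut(text), intent) for text, intent in raw_corpus]
for tokens, intent in segmented_corpus:
    print(tokens, "->", intent)
```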
For example, in a customer service scenario, user demands are varied and complex, and only by reasonably summarizing and integrating them can answers and subsequent services be provided well. Therefore, the industry corpus is first organized, and the questions users may ask, or the question corpus recorded in real customer service scenes, are classified, summarized and sorted. There are two main generalization criteria: 1) whether the questions share similar business logic and sentence structure; 2) whether they would trigger a similar subsequent dialogue flow.
The granularity at which user intents are summarized directly affects the accuracy of intent classification and recognition. When the intent granularity is finer, each intent is more specific, but there are more intent categories and the corpora of different categories become too similar, so classification has a higher error rate. To address this, finer-grained intents are integrated according to industry knowledge, business logic and sentence structure: intents with similar sentence structure and processing logic are merged, the required fine-grained intents are integrated and generalized, and the corpora are labeled with intents. In this way the first layer of intent classification is generalized to a certain extent and processed in a hierarchical, progressive manner, with intents refined step by step.
Taking the insurance industry as an example, the corpus can be integrated into generalized intents such as application, claim settlement, renewal, transfer to a human agent, inquiry and refund, with each intent representing a corresponding scene and subsequent question-answer flow. Each intent may cover multiple finer-grained intents: for example, underwriting based on age or on physical condition can both be generalized into the underwriting intent; insuring product A for parents and product B for children can both be generalized into the application intent.
As another example, consider the two fine-grained intents "can someone over 60 be insured" and "can someone with hypertension be insured". The specific question information differs and so do the answers: one asks about an age-related term, the other about a health-related term. But their business logic is similar, namely obtaining the user's current condition to judge eligibility for insurance, so the two intents can be integrated into one "underwriting" intent.
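The following sketch illustrates one possible shape for the generalized intention classification table and a crude preliminary matching step; the table contents and the character-overlap score are illustrative assumptions, since the patent does not fix the matching metric.

```python
# Illustrative generalized intent classification table: fine-grained intents
# sharing business logic and sentence structure are merged into one entry.
intent_table = {
    "核保": ["60岁以上能投保吗", "有高血压能不能保"],   # underwriting questions
    "投保": ["给父母买A产品", "给小孩买B产品"],          # application questions
    "理赔": ["住院了怎么报销", "理赔需要什么材料"],      # claim questions
}

def preliminary_match(query, table):
    """Very rough preliminary matching against the generalized table
    (a stand-in for the matching step; the patent does not fix the metric)."""
    best_intent, best_overlap = None, 0
    for intent, examples in table.items():
        for ex in examples:
            overlap = len(set(query) & set(ex))   # character overlap as a toy score
            if overlap > best_overlap:
                best_intent, best_overlap = intent, overlap
    return best_intent

print(preliminary_match("70岁还能投保吗", intent_table))  # -> 核保 (expected)
```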
102. And carrying out entity recognition on the data to be recognized to obtain an entity recognition result.
Specifically, the data to be recognized is input into the entity recognition model, and the entity recognition result is output. The entity recognition model is obtained through training in the following way: extracting and matching entities in the training corpus generalized according to the intention classification table, in combination with an entity dictionary and/or synonym expansion, to obtain entity keywords; and labeling the categories of the entity keywords and training to obtain the entity recognition model. It should be noted that any other possible entity recognition model in the prior art may be used, as long as it can implement the functions above; the embodiment of the present application is not limited thereto.
Illustratively, an entity dictionary is first built: words carrying key information that may be needed are sorted and categorized, in combination with business needs, for entity extraction and matching. The entity dictionary here contains proper nouns of the vertical domain together with their synonyms and near-synonyms. Taking the insurance industry as an example, keywords such as disease names, drug names, treatment means and insurance product names are collected and expanded with synonyms, and their categories are labeled to form the entity dictionary, whose keys are entities and whose values are the entity categories. Meanwhile, a text entity recognition model is trained for general categories such as time, person name, geographic location and age.
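A minimal sketch of such an entity dictionary and a dictionary-matching pass, with keys as entity surface forms (including synonym expansions) and values as categories; the example entries and the longest-match strategy are assumptions for illustration.

```python
# Illustrative entity dictionary for an insurance vertical: keys are entity
# surface forms (including synonym expansions), values are entity categories.
entity_dict = {
    "高血压": "疾病",            # disease
    "血压高": "疾病",            # synonym expansion of 高血压
    "重疾险": "保险产品",        # insurance product
    "重大疾病保险": "保险产品",  # synonym expansion of 重疾险
    "住院":   "治疗手段",        # treatment
}

def dict_match(text, dictionary):
    """Longest-match lookup of dictionary entities in the user text
    (a simple stand-in for the dictionary matching step)."""
    hits = []
    for surface, category in sorted(dictionary.items(), key=lambda kv: -len(kv[0])):
        start = text.find(surface)
        if start != -1:
            hits.append({"entity": surface, "category": category, "position": start})
    return hits

print(dict_match("有高血压能买重疾险吗", entity_dict))
```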
Next, entities are extracted and replaced: the trained entity recognition model extracts entities from the user's question, the entity dictionary matches and extracts entities from the user text, and the results are stored by their corresponding categories. The entities and their positions are each mapped into feature vectors and concatenated with the original text. While intention recognition is performed on the user question, the entities in it, i.e. the words carrying key information, are extracted and recognized.
Still taking the two insurance underwriting questions above as an example: after the generalized "underwriting" intent is identified, further subdivision is needed, because the answers to the two questions and the knowledge involved differ, one judging eligibility based on age and the other based on physical condition, and this subdivision is accomplished by entity extraction. For example, in "can someone over 60 be insured", "60 years old" is an entity characterizing age, while in "can someone with hypertension be insured", "hypertension" is an entity characterizing physical condition. The intent is thus verified, and by combining the different recognized entities the user's intention can be recognized at a finer granularity. This recognition mode preserves the accuracy of generalized intents with similar structures, and enriching the entity dictionary and synonym matching improves the hit rate of entity extraction, so the user's intention is recognized more accurately overall.
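To connect the extracted entities with the classifier input described in step 103 below, one simple (assumed) encoding maps each token to an entity-category id and an inside/outside flag aligned with the token sequence; the category inventory and matching rule here are illustrative.

```python
# Illustrative encoding of extracted entities into per-token id sequences that
# can be fed to the classifier together with the word ids (see step 103).
ENTITY_CATEGORIES = {"O": 0, "年龄": 1, "疾病": 2, "保险产品": 3}

def encode_entities(tokens, entities):
    """entities: list of dicts like {"entity": "高血压", "category": "疾病"}."""
    ent_ids = [ENTITY_CATEGORIES["O"]] * len(tokens)
    pos_ids = [0] * len(tokens)                   # 0 = outside, 1 = inside an entity
    for ent in entities:
        for i, tok in enumerate(tokens):
            if tok in ent["entity"] or ent["entity"] in tok:
                ent_ids[i] = ENTITY_CATEGORIES.get(ent["category"], 0)
                pos_ids[i] = 1
    return ent_ids, pos_ids

tokens = ["有", "高血压", "能", "买", "重疾险", "吗"]
entities = [{"entity": "高血压", "category": "疾病"},
            {"entity": "重疾险", "category": "保险产品"}]
print(encode_entities(tokens, entities))  # ([0, 2, 0, 0, 3, 0], [0, 1, 0, 0, 1, 0])
```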
103. And combining the entity recognition result and the preliminary intention recognition result, carrying out intention recognition, and obtaining a final intention recognition result.
Specifically, the entity recognition result and the preliminary intention recognition result are input into an intention recognition model, and the final intention recognition result is output. The intention recognition model is trained by: constructing a convolutional neural network; taking as input the entities and entity positions of the training corpus that has been entity-recognized according to the generalized intention classification table; mapping each word of the segmented historical corpus into a word embedding; and concatenating the word embeddings with the entity embeddings and entity position embeddings as the input of the convolutional neural network, so as to train and obtain the intention recognition model. It should be noted that any other possible intention recognition model in the prior art may be used, as long as it can implement the functions above; the embodiment of the present application is not particularly limited.
Further preferably, after step 103, the intention recognition method further includes: performing subsequent dialogue flow configuration according to the intention recognition result, so as to perform the operations corresponding to the dialogue flow configuration; and triggering an information completion operation when an entity in the intention recognition result does not meet a preset configuration condition.
First, the user-question classifier is trained: a convolutional neural network is constructed, the user-question corpus after entity replacement is taken as input, each word is mapped into a word embedding and concatenated with the entity embedding and entity position embedding as the input of the network, the labeled intent is used as the classification label, and the classification problem is trained.
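A sketch, in PyTorch, of an intent classifier of the kind described: per-token word, entity-category and entity-position embeddings are concatenated and fed to a one-dimensional convolution with max pooling. The layer sizes are arbitrary, and the entity-position embedding is reduced here to a binary inside/outside flag, which is an assumption; the patent does not specify its exact form.

```python
import torch
import torch.nn as nn

class IntentCNN(nn.Module):
    """Sketch of the described classifier: word, entity-category and
    entity-position embeddings are concatenated per token and fed to a 1-D CNN.
    All sizes are illustrative, not values from the patent."""
    def __init__(self, vocab_size, n_entity_cats, n_intents,
                 word_dim=128, ent_dim=16, pos_dim=16, n_filters=100, kernel=3):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim, padding_idx=0)
        self.ent_emb = nn.Embedding(n_entity_cats, ent_dim, padding_idx=0)
        self.pos_emb = nn.Embedding(2, pos_dim)   # 0/1: token inside an entity or not
        self.conv = nn.Conv1d(word_dim + ent_dim + pos_dim, n_filters, kernel, padding=1)
        self.fc = nn.Linear(n_filters, n_intents)

    def forward(self, word_ids, ent_ids, pos_ids):
        # (batch, seq_len, dim) after concatenating the three embeddings
        x = torch.cat([self.word_emb(word_ids),
                       self.ent_emb(ent_ids),
                       self.pos_emb(pos_ids)], dim=-1)
        x = torch.relu(self.conv(x.transpose(1, 2)))  # Conv1d over the sequence
        x = torch.max(x, dim=-1).values               # global max pooling
        return self.fc(x)                             # intent logits

model = IntentCNN(vocab_size=5000, n_entity_cats=10, n_intents=6)
logits = model(torch.randint(0, 5000, (2, 20)),
               torch.randint(0, 10, (2, 20)),
               torch.randint(0, 2, (2, 20)))
print(logits.shape)  # torch.Size([2, 6])
```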
Next, the subsequent multi-round dialogue flow is configured according to combinations of intent and entity, and the flow is configured for the values obtained after the intent and entities are recognized. Here the intent is treated as a function and the entities as its parameters: the subsequent operation for an intent is regarded as a function, the entities to be extracted are regarded as the function's parameters, and the function's return result is adjusted according to the different parameters.
Intention recognition and entity extraction are performed on a new sentence, and the results are passed to the flow configuration engine for subsequent multi-round dialogue configuration. For the recognized intent, its corresponding entity parameters are checked, and the extracted entities are filled into the corresponding parameter list. The subsequent dialogue flow is triggered according to the recognized intent and the filled parameters. If all the necessary entity parameters are satisfied, the corresponding action can be performed directly; if a necessary entity is missing, multi-round guided question answering is triggered to guide the user to complete the information. That is, when a necessary parameter is missing, i.e. a necessary entity has not been recognized, the data acquisition flow is triggered and the information is completed in question form during the interaction with the user.
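The "intent as function, entity as parameter" treatment can be sketched as follows; the slot names, prompts and handler are hypothetical, and the missing-slot branch corresponds to the information completion (guided question) step described above.

```python
# Sketch of treating an intent as a function and its entities as parameters:
# required slot names, prompts and handlers are illustrative assumptions.
FLOW_CONFIG = {
    "核保": {                                    # underwriting intent
        "required": ["年龄", "健康状况"],         # required entity parameters (slots)
        "prompts": {"年龄": "请问被保人的年龄是多少？",
                    "健康状况": "请问被保人的健康状况如何？"},
        "handler": lambda slots: f"根据年龄{slots['年龄']}和健康状况{slots['健康状况']}给出核保结论",
    },
}

def run_dialog_step(intent, extracted_entities):
    """Fill recognized entities into the parameter list; if a required entity
    is missing, return a guiding question instead of executing the action."""
    config = FLOW_CONFIG[intent]
    slots = dict(extracted_entities)
    for slot in config["required"]:
        if slot not in slots:
            return config["prompts"][slot]        # trigger information completion
    return config["handler"](slots)               # all parameters present: execute

print(run_dialog_step("核保", {"年龄": "60岁"}))                     # asks for 健康状况
print(run_dialog_step("核保", {"年龄": "60岁", "健康状况": "高血压"}))  # executes the action
```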
Example 2
Fig. 2 is a flowchart of an intention recognition method provided in embodiment 2 of the present application. As shown in fig. 2, the intention recognition method provided by the embodiment of the present application includes the following steps:
201. and carrying out intention integration operation on the historical corpus to obtain a generalized intention classification table.
Illustratively, at least two intentions with granularity meeting a preset range are generalized into the same type of granularity intentions according to the business logic and grammar structure of the historical corpus.
It should be noted that, the process of step 201 may be implemented in other manners besides those described in the foregoing steps, and the embodiment of the present application is not limited to the specific manner.
202. Word segmentation and intention labeling are performed on the data to be recognized and the historical corpus, wherein the historical corpus includes the training corpus.
It should be noted that, the process of step 202 may be implemented in other manners besides those described in the foregoing steps, and the embodiment of the present application is not limited to the specific manner.
203. And matching the data to be identified with the generalized intention classification table to obtain a preliminary intention identification result.
It should be noted that, the process of step 203 may be implemented in other manners besides those described in the foregoing steps, and the embodiment of the present application is not limited to the specific manner.
204. And inputting the data to be identified into the entity identification model, and outputting an entity identification result.
Illustratively, the entity recognition model is trained by: extracting and matching entities in the training corpus generalized according to the intention classification table, in combination with an entity dictionary and/or synonym expansion, to obtain entity keywords; and labeling the categories of the entity keywords and training to obtain the entity recognition model.
It should be noted that, the process of step 204 may be implemented in other manners besides those described in the foregoing steps, and the embodiment of the present application is not limited to the specific manner.
205. The entity recognition result and the preliminary intention recognition result are input into an intention recognition model, and the final intention recognition result is output. The intention recognition model is trained by: constructing a convolutional neural network; taking as input the entities and entity positions of the training corpus that has been entity-recognized according to the generalized intention classification table; mapping each word of the segmented historical corpus into a word embedding; and concatenating the word embeddings with the entity embeddings and entity position embeddings as the input of the convolutional neural network, so as to train and obtain the intention recognition model.
It should be noted that, the process of step 205 may be implemented in other manners besides those described in the foregoing steps, and the embodiment of the present application is not limited to the specific manner.
206. And carrying out subsequent dialogue flow configuration according to the intention recognition result so as to carry out corresponding operation of the dialogue flow configuration.
It should be noted that the process of step 206 may be implemented in other manners besides those described in the foregoing steps, and the embodiment of the present application is not limited to the specific manner.
207. An information completion operation is triggered when an entity in the intention recognition result does not meet a preset configuration condition.
It should be noted that, the process of step 207 may be implemented in other manners besides those described in the foregoing steps, and the embodiment of the present application is not limited to the specific manner.
Example 3
Fig. 3 is a schematic diagram of an intention recognition device according to embodiment 3 of the present application. As shown in fig. 3, an intention recognition device provided in an embodiment of the present application includes:
the preliminary intention matching module 31 is configured to match the data to be recognized with the generalized intention classification table to obtain a preliminary intention recognition result;
the entity recognition module 32 is configured to perform entity recognition on the data to be recognized to obtain an entity recognition result;
the intention recognition module 33 is configured to combine the entity recognition result and the preliminary intention recognition result, perform intention recognition, and obtain a final intention recognition result.
Specifically, the entity recognition module 32 is configured to: input the data to be recognized into an entity recognition model and output an entity recognition result, wherein the entity recognition model is obtained through training in the following way: extracting and matching entities in the training corpus generalized according to the intention classification table, in combination with an entity dictionary and/or synonym expansion, to obtain entity keywords; and labeling the categories of the entity keywords and training to obtain the entity recognition model.
Specifically, the intention recognition module 33 is configured to: input the entity recognition result and the preliminary intention recognition result into an intention recognition model and output a final intention recognition result, wherein the intention recognition model is trained by: constructing a convolutional neural network; taking as input the entities and entity positions of the training corpus that has been entity-recognized according to the generalized intention classification table; mapping each word of the segmented historical corpus into a word embedding; and concatenating the word embeddings with the entity embeddings and entity position embeddings as the input of the convolutional neural network, so as to train and obtain the intention recognition model.
Example 4
Fig. 4 is a schematic structural diagram of an intention recognition device according to embodiment 4 of the present application. As shown in fig. 4, an intention recognition device provided in an embodiment of the present application includes:
the data preprocessing module 41 is used for performing word segmentation and intention labeling on the data to be recognized and the historical corpus, wherein the historical corpus comprises training corpus;
the intent generalization module 42 is configured to perform an intent integration operation on the historical corpus to obtain the generalized intent classification table; specifically, the intent generalization module 42 is configured to generalize at least two intents whose granularity meets a preset range into intents of the same granularity class according to the business logic and grammatical structure of the historical corpus;
the preliminary intention matching module 43 is configured to match the data to be recognized with the generalized intention classification table to obtain a preliminary intention recognition result;
the entity recognition module 44 is configured to input the data to be recognized into an entity recognition model and output an entity recognition result, wherein the entity recognition model is obtained through training in the following way: extracting and matching entities in the training corpus generalized according to the intention classification table, in combination with an entity dictionary and/or synonym expansion, to obtain entity keywords; labeling the categories of the entity keywords and training to obtain the entity recognition model;
the intention recognition module 45 is configured to input the entity recognition result and the preliminary intention recognition result into an intention recognition model and output a final intention recognition result, wherein the intention recognition model is trained by: constructing a convolutional neural network; taking as input the entities and entity positions of the training corpus that has been entity-recognized according to the generalized intention classification table; mapping each word of the segmented historical corpus into a word embedding; and concatenating the word embeddings with the entity embeddings and entity position embeddings as the input of the convolutional neural network, so as to train and obtain the intention recognition model;
the dialog flow configuration module 46 is configured to perform subsequent dialog flow configuration according to the intention recognition result, so as to perform corresponding operation of the dialog flow configuration.
The information completion operation module 47 is configured to trigger an information completion operation when an entity in the intention recognition result does not meet a preset configuration condition.
Example 5
Fig. 5 is a schematic structural diagram of an intention recognition device provided in embodiment 5 of the present application, and as shown in fig. 5, the intention recognition device 5 provided in the embodiment of the present application includes: a processor 51 and a memory 52, the memory 52 for storing executable instructions of the processor 51; wherein the processor 51 is configured to perform the steps of the intention recognition method according to any one of embodiments 1, 2 via the executable instructions.
Example 6
Embodiment 6 of the present application further provides a computer-readable storage medium storing a computer program, which when executed by a processor, implements the steps of the intention recognition method described in any one of embodiments 1 and 2.
It should be noted that, when the intention recognition apparatus and device provided in the above embodiments trigger the intention recognition service, the division into the above functional modules is only used as an illustration; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus or device may be divided into different functional modules to perform all or part of the functions described above. In addition, the intention recognition apparatus, the intention recognition device and the intention recognition method embodiments provided above belong to the same concept; their detailed implementation is shown in the method embodiments and is not repeated here.
Any combination of the above optional solutions may be adopted to form an optional embodiment of the present application, which is not described herein.
In summary, the intention recognition method, device, equipment and computer readable storage medium provided by the embodiments of the present application have the following beneficial effects compared with the prior art:
1. In preparing the corpus, to ensure classification accuracy, intention categories are defined and corpora labeled so that the difference between categories is as large as possible and corpora with similar business logic or similar expression are kept from appearing in different categories; intent generalization and merging of fine-grained intents increase, to a certain extent, the distinction and the margin between different intent categories and reduce confusion;
2. Attention is paid to the balance of intent classes on the real corpus: when defining intent classes, whether the amount of corpus in different classes is balanced must be considered together with the other criteria, since a balanced class design better summarizes and organizes user demands and keeps the corpus volume across classes as balanced as possible on real natural corpora;
3. The generalization ability of intents is improved, so that one intent can cover questions of as many forms and structures as possible;
4. The returned intention recognition result has higher accuracy, which facilitates the subsequent dialogue flow configuration;
5. The method is particularly suitable for configuring intents and entities in a vertical domain: intents and entities can be defined according to business requirements, this configuration improves the accuracy of intention recognition in the vertical domain, and subsequent multi-round dialogue management becomes more convenient.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
The foregoing description of the preferred embodiments of the application is not intended to limit the application to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and scope of the application are intended to be included within the scope of the application.

Claims (16)

1. An intent recognition method, comprising:
according to the business logic and grammar structure of the historical corpus, generalizing at least two intentions with granularity meeting a preset range into the same class of granularity intentions, and obtaining a generalized intent classification table;
matching the data to be identified with the generalized intention classification table to obtain a preliminary intention identification result;
performing entity recognition on the data to be recognized to obtain an entity recognition result;
combining the entity identification result and the preliminary intention identification result, carrying out intention identification, and obtaining a final intention identification result;
wherein performing entity recognition on the data to be recognized to obtain the entity recognition result includes:
constructing an entity dictionary, and sorting and classifying words carrying key information;
and extracting entities from the user question by using the trained entity recognition model, matching and extracting entities from the user text by using the entity dictionary, and storing them according to their corresponding categories.
2. The method according to claim 1, wherein the method further comprises: and performing word segmentation and intention labeling on the data to be recognized and the historical corpus, wherein the historical corpus comprises training corpus.
3. The method according to claim 1, wherein performing entity recognition on the data to be recognized to obtain an entity recognition result includes:
and inputting the data to be identified into an entity identification model, and outputting an entity identification result.
4. A method according to claim 3, wherein the entity recognition model is trained by:
extracting and matching entities in the training corpus generalized according to the intention classification table, in combination with an entity dictionary and/or synonym expansion, to obtain entity keywords;
and labeling the category of the entity keyword, and training to obtain the entity recognition model.
5. The method of claim 1, wherein performing intent recognition in combination with the entity recognition result and the preliminary intent recognition result to obtain a final intent recognition result comprises:
and inputting the entity recognition result and the preliminary intention recognition result into an intention recognition model, and outputting a final intention recognition result.
6. The method of claim 5, wherein the intent recognition model is trained by:
constructing a convolutional neural network; taking as input the entities and entity positions of the training corpus that has been entity-recognized according to the generalized intention classification table; mapping each word of the segmented historical corpus into a word embedding; and concatenating the word embeddings with the entity embeddings and entity position embeddings as the input of the convolutional neural network, so as to train and obtain the intention recognition model.
7. The method according to claim 1, wherein the method further comprises:
and carrying out subsequent dialogue flow configuration according to the intention recognition result so as to carry out corresponding operation of the dialogue flow configuration.
8. The method according to claim 1, wherein the method further comprises:
and triggering an information completion operation when an entity in the intention recognition result does not meet a preset configuration condition.
9. An intent recognition device for implementing the method of claim 1, comprising:
the intention generalization module is used for generalizing at least two intentions with granularity meeting a preset range into the same type of granularity intentions according to the business logic and grammar structure of the historical corpus, and obtaining a generalized intention classification table;
the preliminary intention matching module is used for matching the data to be recognized with the generalized intention classification table to obtain a preliminary intention recognition result;
the entity identification module is used for carrying out entity identification on the data to be identified and obtaining an entity identification result;
the intention recognition module is used for carrying out intention recognition by combining the entity recognition result and the preliminary intention recognition result to obtain a final intention recognition result.
10. The apparatus of claim 9, wherein the apparatus further comprises:
the data preprocessing module is used for word segmentation and intention labeling of the data to be recognized and the historical corpus, and the historical corpus comprises training corpus.
11. The apparatus of claim 9, wherein the entity recognition module is configured to: input the data to be recognized into an entity recognition model and output an entity recognition result, wherein the entity recognition model is obtained through training in the following way: extracting and matching entities in the training corpus generalized according to the intention classification table, in combination with an entity dictionary and/or synonym expansion, to obtain entity keywords; and labeling the categories of the entity keywords and training to obtain the entity recognition model.
12. The apparatus of claim 9, wherein the intention recognition module is configured to: input the entity recognition result and the preliminary intention recognition result into an intention recognition model and output a final intention recognition result, wherein the intention recognition model is trained by: constructing a convolutional neural network; taking as input the entities and entity positions of the training corpus that has been entity-recognized according to the generalized intention classification table; mapping each word of the segmented historical corpus into a word embedding; and concatenating the word embeddings with the entity embeddings and entity position embeddings as the input of the convolutional neural network, so as to train and obtain the intention recognition model.
13. The apparatus of claim 9, wherein the apparatus further comprises:
and the conversation process configuration module is used for carrying out subsequent conversation process configuration according to the intention recognition result so as to carry out corresponding operation of the conversation process configuration.
14. The apparatus of claim 9, wherein the apparatus further comprises:
and the information completion operation module is used for triggering an information completion operation when an entity in the intention recognition result does not meet a preset configuration condition.
15. A computer device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the steps of the intent recognition method as claimed in any one of claims 1 to 8 via the executable instructions.
16. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the intention recognition method of any one of claims 1 to 8.
CN201910554487.0A 2019-06-25 2019-06-25 Intention recognition method, device, equipment and computer readable storage medium Active CN110399609B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910554487.0A CN110399609B (en) 2019-06-25 2019-06-25 Intention recognition method, device, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910554487.0A CN110399609B (en) 2019-06-25 2019-06-25 Intention recognition method, device, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110399609A CN110399609A (en) 2019-11-01
CN110399609B (en) 2023-12-01

Family

ID=68323544

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910554487.0A Active CN110399609B (en) 2019-06-25 2019-06-25 Intention recognition method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110399609B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111046667B (en) * 2019-11-14 2024-02-06 深圳市优必选科技股份有限公司 Statement identification method, statement identification device and intelligent equipment
CN111046153B (en) * 2019-11-14 2023-12-29 深圳市优必选科技股份有限公司 Voice assistant customization method, voice assistant customization device and intelligent equipment
CN112906370B (en) * 2019-12-04 2022-12-20 马上消费金融股份有限公司 Intention recognition model training method, intention recognition method and related device
CN111078855A (en) * 2019-12-19 2020-04-28 联想(北京)有限公司 Information processing method, information processing device, electronic equipment and storage medium
CN111128161A (en) * 2019-12-23 2020-05-08 上海优扬新媒信息技术有限公司 Data processing method and device and electronic equipment
CN111368045B (en) * 2020-02-21 2024-05-07 平安科技(深圳)有限公司 User intention recognition method, device, equipment and computer readable storage medium
CN113221034B (en) * 2021-05-06 2024-08-06 北京百度网讯科技有限公司 Data generalization method, device, electronic equipment and storage medium
CN113268593A (en) * 2021-05-18 2021-08-17 Oppo广东移动通信有限公司 Intention classification and model training method and device, terminal and storage medium
CN114154495A (en) * 2021-12-03 2022-03-08 海南港航控股有限公司 Entity extraction method and system based on keyword matching
CN114661910A (en) * 2022-03-25 2022-06-24 平安科技(深圳)有限公司 Intention identification method and device, electronic equipment and storage medium
CN115356939A (en) * 2022-08-18 2022-11-18 青岛海尔科技有限公司 Control command transmission method, control device, storage medium, and electronic device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108121721A (en) * 2016-11-28 2018-06-05 渡鸦科技(北京)有限责任公司 Intension recognizing method and device
CN109146610A (en) * 2018-07-16 2019-01-04 众安在线财产保险股份有限公司 It is a kind of intelligently to insure recommended method, device and intelligence insurance robot device
CN109461039A (en) * 2018-08-28 2019-03-12 厦门快商通信息技术有限公司 A kind of text handling method and intelligent customer service method
CN109492079A (en) * 2018-10-09 2019-03-19 北京奔影网络科技有限公司 Intension recognizing method and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108121721A (en) * 2016-11-28 2018-06-05 渡鸦科技(北京)有限责任公司 Intension recognizing method and device
CN109146610A (en) * 2018-07-16 2019-01-04 众安在线财产保险股份有限公司 It is a kind of intelligently to insure recommended method, device and intelligence insurance robot device
CN109461039A (en) * 2018-08-28 2019-03-12 厦门快商通信息技术有限公司 A kind of text handling method and intelligent customer service method
CN109492079A (en) * 2018-10-09 2019-03-19 北京奔影网络科技有限公司 Intension recognizing method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Hu Jiming, Research on Information Recommendation Services Based on User Relationships in a Social Network Environment (《社会网络环境下基于用户关系的信息推荐服务研究》), Wuhan University Press, 2015 (1st edition, March 2015), p. 133. *

Also Published As

Publication number Publication date
CN110399609A (en) 2019-11-01

Similar Documents

Publication Publication Date Title
CN110399609B (en) Intention recognition method, device, equipment and computer readable storage medium
Zadeh et al. Memory fusion network for multi-view sequential learning
US7685082B1 (en) System and method for identifying, prioritizing and encapsulating errors in accounting data
CN111145052A (en) Structured analysis method and system of judicial documents
CN110032623B (en) Method and device for matching question of user with title of knowledge point
KR20190109614A (en) Method and apprartus for chatbots in customer service analyzing hierarchical user expression and generating responses
CN109325040B (en) FAQ question-answer library generalization method, device and equipment
CN109299245B (en) Method and device for recalling knowledge points
CN111182162B (en) Telephone quality inspection method, device, equipment and storage medium based on artificial intelligence
CN113268610B (en) Intent jump method, device, equipment and storage medium based on knowledge graph
CN111159375A (en) Text processing method and device
CN107145514A (en) Chinese sentence pattern sorting technique based on decision tree and SVM mixed models
CN112287090A (en) Financial question asking back method and system based on knowledge graph
CN113821605A (en) Event extraction method
CN110008308A (en) For the method and apparatus of user's question sentence supplemental information
CN105677636A (en) Information processing method and device for intelligent question-answering system
CN117520503A (en) Financial customer service dialogue generation method, device, equipment and medium based on LLM model
CN115730058A (en) Reasoning question-answering method based on knowledge fusion
CN116341519A (en) Event causal relation extraction method, device and storage medium based on background knowledge
CN113988195A (en) Private domain traffic clue mining method and device, vehicle and readable medium
CN110362828B (en) Network information risk identification method and system
CN117332054A (en) Form question-answering processing method, device and equipment
CN117216214A (en) Question and answer extraction generation method, device, equipment and medium
KR102452814B1 (en) Methods for analyzing and extracting issues in documents
CN114756679A (en) Chinese medical text entity relation combined extraction method based on conversation attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant