CN114398903B - Intention recognition method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114398903B
CN114398903B (application CN202210074053.2A)
Authority
CN
China
Prior art keywords
intention
data
node
target
preset
Prior art date
Legal status: Active (assumed, not a legal conclusion)
Application number
CN202210074053.2A
Other languages
Chinese (zh)
Other versions
CN114398903A (en)
Inventor
李平
马骏
王少军
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202210074053.2A priority Critical patent/CN114398903B/en
Publication of CN114398903A publication Critical patent/CN114398903A/en
Application granted granted Critical
Publication of CN114398903B publication Critical patent/CN114398903B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Machine Translation (AREA)

Abstract

The application provides an intention recognition method and apparatus, an electronic device, and a storage medium, belonging to the technical field of artificial intelligence. The method includes the following steps: acquiring intention data to be identified; traversing the flow nodes of an outbound robot system to obtain node information; extracting original intention data and node attribute data from the node information, where the original intention data includes an original intention field; performing data supplementation on identical original intention fields according to the node attribute data to obtain first node data; performing semantic analysis on the first node data according to an intention category label to obtain target intention characteristics; performing fine-tuning on the first node data according to the target intention characteristics to obtain second node data; performing intention prediction on the intention data to be identified through a preset target intention prediction model to obtain predicted intention data; and performing intention recognition through an intention intersection algorithm, the predicted intention data, and the second node data to obtain target intention data. The method and apparatus can improve intention recognition efficiency.

Description

Intention recognition method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to an intent recognition method, apparatus, electronic device, and storage medium.
Background
Currently, in the process of performing intention recognition, each question of a predetermined flow often needs to correspond to one flow node, and a plurality of predefined intention data are set at each flow node, where the different intention data at each flow node usually need to be recognized by a separate intention recognition model. Therefore, when recognizing intention data across multiple flow nodes, multiple corresponding intention recognition models must be trained, and recognition efficiency is low. How to provide a solution capable of improving intention recognition efficiency is therefore a technical problem to be solved.
Disclosure of Invention
The embodiment of the application aims to provide an intention recognition method, an intention recognition device, electronic equipment and a storage medium, and aims to improve the intention recognition efficiency.
To achieve the above object, a first aspect of an embodiment of the present application proposes an intent recognition method, including:
acquiring intention data to be identified from an outbound robot system;
traversing a plurality of flow nodes of the outbound robot system, and acquiring node information of each flow node;
extracting original intention data and node attribute data in the node information, wherein the original intention data comprises an original intention field;
performing data supplementation on the same original intention field according to the node attribute data to obtain first node data;
carrying out semantic analysis processing on the first node data according to a preset intention category label to obtain target intention characteristics;
performing fine-tuning processing on the first node data according to the target intention characteristics to obtain second node data;
carrying out intention prediction processing on the intention data to be identified through a preset target intention prediction model to obtain predicted intention data;
and carrying out intention recognition through a preset intention intersection algorithm, the predicted intention data and the second node data to obtain target intention data.
In some embodiments, the step of performing semantic analysis processing on the first node data according to a preset intention category label to obtain a target intention feature includes:
performing tag intention classification on the first node data according to the intention category tag to obtain tag intention data;
performing semantic analysis processing on the tag intention data to obtain a node intention corpus;
and performing feature extraction on the node intention corpus to obtain the target intention characteristics.
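The three steps above (tag intention classification, semantic analysis, feature extraction) can be loosely illustrated as follows; the keyword-matching rule and bag-of-words features are simplifying assumptions for illustration, not the patent's actual semantic analysis:

```python
from collections import Counter

def tag_intent(first_node_data, intent_labels):
    """Step 1 (sketch): group utterances under preset intent category labels
    using a naive keyword match (an assumption, not the patent's classifier)."""
    tagged = {label: [] for label in intent_labels}
    for utterance in first_node_data:
        for label in intent_labels:
            if label in utterance:
                tagged[label].append(utterance)
    return tagged

def extract_features(corpus):
    """Step 3 (sketch): bag-of-words counts as a stand-in for the
    target intention characteristics."""
    counts = Counter()
    for utterance in corpus:
        counts.update(utterance.split())
    return counts

tagged = tag_intent(
    ["I am willing to repay", "please verify my identity", "repay next week"],
    ["repay", "identity"],
)
features = extract_features(tagged["repay"])
```

Here the "node intention corpus" for a label is simply the list of utterances matched to it, and the features are word counts over that corpus.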
In some embodiments, the step of performing fine-tuning processing on the first node data according to the target intention characteristics to obtain second node data includes:
mapping the target intention feature to a preset first vector space to obtain a target intention feature vector;
and performing data complementation on the first node data according to the target intention feature vector to obtain the second node data.
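A minimal sketch of the two steps above, assuming the mapping to the first vector space is a linear projection and the data complementation is vector concatenation; both are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def fine_tune_node_data(first_node_vec, target_intent_feature, projection):
    """Map the target intention feature into the first vector space (linear
    projection, assumed) and append it to the first node data, yielding
    the second node data."""
    mapped = target_intent_feature @ projection       # target intention feature vector
    return np.concatenate([first_node_vec, mapped])   # complemented node data

rng = np.random.default_rng(0)
first_node_vec = rng.normal(size=4)     # first node data as a toy vector
feature = rng.normal(size=3)            # target intention feature
projection = rng.normal(size=(3, 4))    # mapping into the preset first vector space
second_node_vec = fine_tune_node_data(first_node_vec, feature, projection)
```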
In some embodiments, the target intention prediction model includes an MLP network, a pooling layer, and a preset function, and the step of performing intention prediction processing on the intention data to be identified through the preset target intention prediction model to obtain predicted intention data includes:
mapping the intention data to be identified to a preset second vector space through the MLP network to obtain an intention vector to be identified;
performing pooling processing on the intention vector to be identified through the pooling layer to obtain pooled intention characteristics;
and performing intention prediction processing on the pooled intention characteristics through the preset function to obtain the predicted intention data.
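The model described above (an MLP network, a pooling layer, and a preset function) can be sketched as follows. All dimensions, weights, and the choice of mean pooling and softmax as the "preset function" are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; all sizes are illustrative assumptions.
vocab, d_in, d_out, n_intents = 50, 8, 6, 3
embed = rng.normal(size=(vocab, d_in))       # token embedding table
w = rng.normal(size=(d_in, d_out))           # MLP weights into the second vector space
b = np.zeros(d_out)
w_cls = rng.normal(size=(d_out, n_intents))  # classification head

def mlp_encode(token_ids):
    """MLP network: map token ids into the preset second vector space."""
    return np.tanh(embed[token_ids] @ w + b)   # (seq_len, d_out)

def mean_pool(vectors):
    """Pooling layer: pool token vectors into one pooled intention feature."""
    return vectors.mean(axis=0)                # (d_out,)

def softmax(z):
    """Assumed 'preset function': pooled features -> intention probabilities."""
    e = np.exp(z - z.max())
    return e / e.sum()

token_ids = np.array([3, 17, 42])              # a toy utterance
pooled = mean_pool(mlp_encode(token_ids))
probs = softmax(pooled @ w_cls)
predicted_intent = int(np.argmax(probs))       # index of the predicted intention
```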
In some embodiments, the step of obtaining target intention data by performing intention recognition through a preset intention intersection algorithm, the predicted intention data and the second node data includes:
analyzing the second node data to obtain node intention data;
and performing intersection operation on the node intention data and the predicted intention data through the intention intersection algorithm to obtain the target intention data.
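The intersection step above can be illustrated with a minimal set-intersection sketch; the intent label names are hypothetical:

```python
def intent_intersection(node_intents, predicted_intents):
    """Take the intersection of the node intention data and the model's
    predicted intention data to obtain the target intention data."""
    return sorted(set(node_intents) & set(predicted_intents))

target = intent_intersection(
    ["willing_to_repay", "verify_identity", "request_callback"],  # node intents
    ["willing_to_repay", "complaint"],                            # predicted intents
)
```

Only intents supported by both the flow node and the prediction model survive, which is what allows one shared model to serve many nodes.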
In some embodiments, before the step of performing intention prediction processing on the intention data to be identified through a preset target intention prediction model to obtain predicted intention data, the method further includes pre-training the target intention prediction model, which specifically includes:
acquiring sample intention data;
inputting the sample intention data into an initial intention prediction model;
identifying the sample intention data through the initial intention prediction model to obtain a sample intention sentence vector;
calculating the similarity between pairs of sample intention sentence vectors through a loss function of the initial intention prediction model;
generating entangled corpus pairs according to the similarity and the sample intention sentence vectors;
and optimizing the loss function of the initial intention prediction model according to the entangled corpus pairs so as to update the initial intention prediction model and obtain the target intention prediction model.
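As a hedged illustration of how entangled (hard-to-distinguish) corpus pairs might be selected from sentence vectors: the cosine similarity measure and the threshold below are assumptions for illustration, not the patent's loss function:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def entangled_pairs(sentence_vecs, labels, threshold=0.9):
    """Collect pairs of sentences with DIFFERENT intention labels whose
    sentence vectors are suspiciously similar; such hard pairs could then
    be fed back into the loss to push their representations apart."""
    pairs = []
    for i in range(len(sentence_vecs)):
        for j in range(i + 1, len(sentence_vecs)):
            if labels[i] != labels[j] and cosine(sentence_vecs[i], sentence_vecs[j]) >= threshold:
                pairs.append((i, j))
    return pairs

vecs = [np.array([1.0, 0.0]), np.array([0.99, 0.1]), np.array([0.0, 1.0])]
labels = ["repay", "identity", "identity"]
pairs = entangled_pairs(vecs, labels)
```

In this toy example only sentences 0 and 1 are both highly similar and differently labeled, so only that pair is flagged as entangled.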
In some embodiments, before the step of pre-training the target intention prediction model, the method further includes pre-constructing the initial intention prediction model, which specifically includes:
acquiring an initial model, where the initial model is a Transformer encoder model;
and carrying out parameter fine adjustment on the initial model according to the acquired sample intention data to obtain the initial intention prediction model.
To achieve the above object, a second aspect of the embodiments of the present application proposes an intention recognition apparatus, the apparatus comprising:
the to-be-identified intention data acquisition module is used for acquiring the intention data to be identified from the outbound robot system;
the node information acquisition module is used for traversing a plurality of flow nodes of the outbound robot system and acquiring node information of each flow node;
the data extraction module is used for extracting original intention data and node attribute data in the node information, wherein the original intention data comprises an original intention field;
the data supplementing module is used for supplementing data to the same original intention field according to the node attribute data to obtain first node data;
the semantic analysis module is used for performing semantic analysis processing on the first node data according to a preset intention category label to obtain target intention characteristics;
the fine-tuning module is used for performing fine-tuning processing on the first node data according to the target intention characteristics to obtain second node data;
the intention prediction module is used for performing intention prediction processing on the intention data to be identified through a preset target intention prediction model to obtain predicted intention data;
the intention recognition module is used for carrying out intention recognition through a preset intention intersection algorithm, the predicted intention data and the second node data to obtain target intention data.
To achieve the above object, a third aspect of the embodiments of the present application proposes an electronic device, including a memory, a processor, a computer program stored in the memory and executable on the processor, and a data bus for enabling connection and communication between the processor and the memory, where the computer program, when executed by the processor, implements the method according to the first aspect.
To achieve the above object, a fourth aspect of the embodiments of the present application proposes a storage medium, which is a computer-readable storage medium storing one or more computer programs executable by one or more processors to implement the method according to the first aspect.
The intention recognition method and apparatus, electronic device, and storage medium provided by the embodiments of the present application acquire intention data to be identified from an outbound robot system; traverse a plurality of flow nodes of the outbound robot system and acquire node information of each flow node, making it convenient to obtain the node information of the flow nodes; and extract original intention data and node attribute data from the node information, where the original intention data includes original intention fields, and perform data supplementation on identical original intention fields according to the node attribute data to obtain first node data. Further, semantic analysis processing is performed on the first node data according to the preset intention category label to obtain target intention characteristics, and fine-tuning processing is performed on the first node data according to the target intention characteristics to obtain second node data, so that the data of the flow nodes can be fine-tuned according to the target intention characteristics, the intention characteristics contained in the flow nodes become more accurate and comprehensive, and the requirement of multi-intention recognition is met. Finally, intention prediction processing is performed on the intention data to be identified through a preset target intention prediction model to obtain predicted intention data, and intention recognition is performed through a preset intention intersection algorithm, the predicted intention data, and the second node data to obtain target intention data.
Drawings
FIG. 1 is a flow chart of an intent recognition method provided by an embodiment of the present application;
fig. 2 is a flowchart of step S105 in fig. 1;
fig. 3 is a flowchart of step S106 in fig. 1;
FIG. 4 is another flow chart of an intent recognition method provided by an embodiment of the present application;
FIG. 5 is another flow chart of an intent recognition method provided by an embodiment of the present application;
fig. 6 is a flowchart of step S107 in fig. 1;
fig. 7 is a flowchart of step S108 in fig. 1;
FIG. 8 is a schematic diagram of the structure of the intent recognition device provided in the embodiment of the present application;
fig. 9 is a schematic hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
It should be noted that although functional block division is performed in a device diagram and a logic sequence is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the block division in the device, or in the flowchart. The terms first, second and the like in the description and in the claims and in the above-described figures, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
First, several nouns referred to in this application are parsed:
Artificial intelligence (AI): a technical science that researches and develops theories, methods, techniques, and application systems for simulating, extending, and expanding human intelligence. Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce intelligent machines that can react in a manner similar to human intelligence; research in this field includes robotics, speech recognition, image recognition, natural language processing, and expert systems. Artificial intelligence can simulate the information processes of human consciousness and thinking. It is also a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
Natural language processing (NLP): a branch of artificial intelligence that is an interdisciplinary field of computer science and linguistics, often referred to as computational linguistics, concerned with processing, understanding, and applying human languages (e.g., Chinese, English). Natural language processing includes syntactic analysis, semantic analysis, discourse understanding, and the like. It is commonly used in machine translation, recognition of handwritten and printed characters, speech recognition and text-to-speech conversion, information intention recognition, information extraction and filtering, text classification and clustering, public opinion analysis and opinion mining, and other fields, and involves data mining, machine learning, knowledge acquisition, knowledge engineering, artificial intelligence research, and linguistic research related to language computation.
Information extraction (IE): a text processing technique that extracts specified types of factual information, such as entities, relations, and events, from natural language text and outputs structured data. Text data is made up of specific units such as sentences, paragraphs, and chapters, and text information is made up of smaller specific units such as words, phrases, sentences, and paragraphs, or combinations of these units. Extracting noun phrases, person names, place names, and the like from text data is text information extraction; of course, the information extracted by text information extraction techniques can be of various types.
Entity: something that is distinguishable and exists independently, such as a person, a city, a plant, or a commodity. Everything in the world is composed of concrete things, and these are entities. Entities are the most basic elements in a knowledge graph, and different relationships exist between different entities.
Self-supervised learning: self-supervised learning mainly uses auxiliary (pretext) tasks to mine supervision information from large-scale unsupervised data, and trains the network with this constructed supervision information, so that representations valuable for downstream tasks can be learned. That is, the supervision information in self-supervised learning is not manually annotated; instead, the algorithm automatically constructs supervision information from large-scale unsupervised data for supervised learning or training.
Text classification (text categorization): given a classification system, each text in a text set is classified into one or more categories; this process is called text classification. Text classification is a supervised learning process.
Backpropagation: the general principle of backpropagation is as follows: training set data is input into the input layer of a neural network, passes through the hidden layers, and finally reaches the output layer, which outputs a result; because the output of the neural network differs from the actual result, the error between the estimated value and the actual value is calculated and propagated backwards from the output layer through the hidden layers to the input layer; during backpropagation, the values of the various parameters are adjusted according to the error; this process is iterated until convergence.
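The backpropagation procedure described above can be sketched with a minimal two-layer network trained by gradient descent; the network size, learning rate, and toy data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 2))        # toy training inputs
y = X[:, :1] - 0.5 * X[:, 1:]       # toy targets

w1 = rng.normal(size=(2, 8)) * 0.5  # input -> hidden weights
w2 = rng.normal(size=(8, 1)) * 0.5  # hidden -> output weights
lr = 0.05

def forward(X):
    h = np.tanh(X @ w1)             # hidden layer
    return h, h @ w2                # output layer

_, out = forward(X)
loss_before = float(((out - y) ** 2).mean())

for _ in range(300):
    h, out = forward(X)
    err = out - y                             # error at the output layer
    grad_w2 = h.T @ (2 * err) / len(X)        # error propagated to w2
    grad_h = (2 * err) @ w2.T * (1 - h ** 2)  # ...then back through tanh
    grad_w1 = X.T @ grad_h / len(X)
    w2 -= lr * grad_w2                        # adjust parameters by the error
    w1 -= lr * grad_w1

_, out = forward(X)
loss_after = float(((out - y) ** 2).mean())   # smaller than loss_before
```

The loop is exactly the iterate-until-convergence process described: forward pass, error at the output, gradients pushed back layer by layer, and parameter updates.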
At present, in the process of performing intention recognition, each question of a predetermined flow often needs to correspond to one flow node, and a plurality of predefined intentions are set at each flow node, so each node needs a separate intention recognition model to recognize different intention questions; such a scheme often cannot accurately recognize multiple intention questions, which affects the efficiency of intention recognition. Therefore, how to provide an intention recognition method that can improve intention recognition efficiency is a technical problem to be urgently solved.
Based on the above, the embodiment of the application provides an intention recognition method, an intention recognition device, electronic equipment and a storage medium, which aim to improve the intention recognition efficiency.
The method, the device, the electronic equipment and the storage medium for identifying intention provided by the embodiment of the application are specifically described through the following embodiments, and the method for identifying intention in the embodiment of the application is described first.
The embodiments of the present application can acquire and process related data based on artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technique, and application system that uses a digital computer or a digital computer-controlled machine to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
The embodiments of the present application provide an intention recognition method, relating to the technical field of artificial intelligence. The intention recognition method provided by the embodiments of the present application can be applied to a terminal, to a server side, or to software running in a terminal or server side. In some embodiments, the terminal may be a smart phone, a tablet, a notebook computer, a desktop computer, or the like; the server side may be configured as an independent physical server, as a server cluster or distributed system composed of multiple physical servers, or as a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, big data, and artificial intelligence platforms; the software may be an application that implements the intention recognition method, but is not limited to the above forms.
The subject application is operational with numerous general purpose or special purpose computer system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
Fig. 1 is an optional flowchart of an intent recognition method provided in an embodiment of the present application, where the method in fig. 1 may include, but is not limited to, steps S101 to S108.
Step S101, obtaining intention data to be identified from an outbound robot system;
step S102, traversing a plurality of flow nodes of the outbound robot system, and acquiring node information of each flow node;
step S103, extracting original intention data and node attribute data in the node information, wherein the original intention data comprises an original intention field;
step S104, carrying out data supplementation on the same original intention field according to the node attribute data to obtain first node data;
step S105, performing semantic analysis processing on the first node data according to a preset intention category label to obtain target intention characteristics;
step S106, performing fine-tuning processing on the first node data according to the target intention characteristics to obtain second node data;
step S107, carrying out intention prediction processing on intention data to be identified through a preset target intention prediction model to obtain predicted intention data;
step S108, performing intention recognition through a preset intention intersection algorithm, the predicted intention data, and the second node data to obtain target intention data.
In steps S101 to S108 illustrated in the embodiments of the present application, the original intention data and the node attribute data are extracted from the node information, where the original intention data includes an original intention field, and data supplementation is performed on identical original intention fields according to the node attribute data to obtain the first node data, so that the node information can be completed more reasonably and its integrity improved. Semantic analysis processing is performed on the first node data through a preset intention category label to obtain target intention characteristics, and fine-tuning processing is performed on the first node data according to the target intention characteristics to obtain second node data; the data of the flow nodes can thus be fine-tuned according to the target intention characteristics, making the intention characteristics contained in the flow nodes more accurate and comprehensive and meeting the requirement of multi-intention recognition. Finally, intention prediction processing is performed on the intention data to be identified through a preset target intention prediction model to obtain predicted intention data, and intention recognition is performed on the predicted intention data and the second node data through a preset intention intersection algorithm to obtain target intention data. In this way, multiple intention questions can be recognized accurately and conveniently without setting up multiple intention recognition models, which improves intention recognition efficiency while reducing the occupation of server resources and improving resource utilization.
The outbound robot system, also called a robot outbound call system, is an intelligent system integrating advanced technologies such as NLP, various speech technologies, big data, and deep learning algorithms. The outbound robot system is mainly integrated in an outbound call robot and can be used to identify user intentions. In a credit card collection scenario, for example, "verify customer identity" is one user intention, and "customer willing to return arrears" is another user intention.
In step S101 of some embodiments, the intention data to be identified of the outbound robot system may be obtained by writing a web crawler, setting a data source, and then crawling data in a targeted manner. The intention data to be identified can also be obtained directly from the outbound robot system through data transmission. The intention data to be identified may include specific user intentions such as "verify customer identity" and "customer willing to return arrears" as described above.
In step S102 of some embodiments, the plurality of flow nodes of the outbound robot system may be traversed in any of several orders, such as the preset numbers of the flow nodes or the first letter of each node's name, so as to obtain the node information of each flow node, where the node information includes the node question, the node intention, and the like. For example, the flow nodes numbered 2, 5, and 6 may be traversed in sequence according to their preset numbers.
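A minimal sketch of traversal by preset number, as in the example above; the node field names are hypothetical:

```python
# Hypothetical flow-node records; "number" and "question" are assumed field names.
flow_nodes = [
    {"number": 5, "question": "willing to return arrears?"},
    {"number": 2, "question": "verify client identity"},
    {"number": 6, "question": "confirm repayment date"},
]

def traverse_by_number(nodes):
    """Visit flow nodes sorted by their preset number and collect node info."""
    return [n["question"] for n in sorted(nodes, key=lambda n: n["number"])]

order = traverse_by_number(flow_nodes)
```

Sorting by the "number" key yields the node questions in the order 2, 5, 6, matching the traversal order described in the text.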
In step S103 of some embodiments, the original intention data and the node attribute data in the node information may be extracted according to a preset category anchor field, for example, the preset category anchor field includes "basic attribute", "number of characters", "intention", and the like. By inputting fields such as 'basic attribute', 'number of characters', the content related to the node attribute data can be retrieved, extraction of the node attribute data is realized, and by inputting fields such as 'intention', the original intention data of the flow node can be retrieved and extracted, wherein the original intention data comprises a plurality of original intention fields, and the original intention fields can be divided according to part-of-speech category, sentence length and the like.
In step S104 of some embodiments, in order to improve the integrity of the node information, identical original intention fields need to be supplemented with data according to the node attribute data to obtain the first node data. Specifically, identical original intention fields are found by comparing the original intention fields of each flow node. An identical original intention field is one whose field content and field meaning are the same but which corresponds to different node questions in different flow nodes; such a field takes on a different meaning depending on the node question of the flow node in which it is located. In order to improve the accuracy of intention recognition and avoid ambiguity, the flow nodes containing identical original intention fields are marked to obtain labeled flow nodes, and the node attribute data of the labeled flow nodes is analyzed to obtain the key fields of their node questions, so that the true intention of each flow node is clarified. The identical original intention fields are then completed according to the key fields; for example, sentence completion, entity rewriting, case rewriting, synonym transformation, and the like are applied to form the target intention field corresponding to the node question of each flow node. The target intention fields are added to the corresponding flow nodes, and the original intention data is expanded by the target intention fields to obtain the first node data. In this way, the node information can be completed more reasonably and its integrity improved.
For example, in the credit card collection scenario, the original intention field named "yes" means "the called party is the arrears client himself" at the flow node corresponding to the node question "verify client identity", but means "the client is willing to repay the arrears" at the flow node corresponding to the node question "whether the client is willing to repay the arrears". The two identical original intention fields thus have completely different meanings. To avoid ambiguity, the original intention field needs to be completed: at the node question "verify client identity" it may be completed to "principal", and at the node question "whether the client is willing to repay the arrears" it may be completed to "willing to repay".
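The completion described above can be sketched as a lookup keyed on both the node question and the raw field, so that the same raw field resolves to a question-specific target field (the table contents and function name here are illustrative assumptions):

```python
# Hypothetical completion table: the same raw field "yes" is rewritten to a
# question-specific target intention field to remove cross-node ambiguity.
COMPLETION = {
    ("verify client identity", "yes"): "principal",
    ("willing to repay arrears", "yes"): "willing to repay",
}

def complete_field(node_question, raw_field):
    """Return the disambiguated target intention field for this node question,
    or the raw field unchanged if no completion rule applies."""
    return COMPLETION.get((node_question, raw_field), raw_field)
```

With this table, `complete_field("verify client identity", "yes")` yields "principal", while the same raw field at the repayment question yields "willing to repay".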
Referring to fig. 2, in some embodiments, step S105 may include, but is not limited to, steps S201 to S203:
step S201, carrying out label intention classification on first node data according to the intention category label to obtain label intention data;
step S202, semantic analysis processing is carried out on the label intention data to obtain node intention corpus;
Step S203, extracting features of the node intention corpus to obtain target intention features.
In step S201 of some embodiments, label intention classification is performed on the first node data according to the intention category labels and a preset label classification model. In a credit card service scenario, the intention category labels may include borrowing, repayment, loan consultation, and the like; in other application scenarios they can be preset according to actual service requirements without limitation. The label classification model is a TextCNN model comprising an embedding layer, a convolution layer, a pooling layer, and an output layer. The embedding layer commonly uses an algorithm such as ELMo, GloVe, Word2Vec, or BERT to generate a dense vector from the input first node data. The convolution layer and the pooling layer then perform convolution processing and pooling processing on the dense vector to obtain a target feature vector, which is input to the output layer; there, label intention classification is performed on the target feature vector through a preset function (such as the softmax function) and the intention category labels to obtain the label intention data.
In step S202 and step S203 of some embodiments, the label intention data of each flow node is traversed and compared through a preset dictionary tree, and the node intention corpora of different flow nodes are identified and extracted. A node intention corpus is a corpus shared by different flow nodes, namely the overlapping part of the label intention data of two flow nodes; it is extracted independently as a shared intention feature to obtain the target intention feature.
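The dictionary-tree comparison above can be sketched with a minimal trie; a plain set intersection would give the same result, the trie simply mirrors the "preset dictionary tree" named in the embodiment (class and function names are assumptions):

```python
# Minimal dictionary tree (trie) used to find corpora shared by the label
# intention data of two flow nodes.
class Trie:
    def __init__(self):
        self.root = {}

    def insert(self, phrase):
        node = self.root
        for ch in phrase:
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-phrase marker

    def contains(self, phrase):
        node = self.root
        for ch in phrase:
            if ch not in node:
                return False
            node = node[ch]
        return "$" in node

def shared_corpus(corpora_a, corpora_b):
    """Extract the corpora present in both flow nodes' label intention data."""
    trie = Trie()
    for phrase in corpora_a:
        trie.insert(phrase)
    return [p for p in corpora_b if trie.contains(p)]
```

For example, `shared_corpus(["yes", "call back later"], ["yes", "no"])` returns only the overlapping corpus `["yes"]`, which is then treated as the shared intention feature.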
Referring to fig. 3, in some embodiments, step S106 may include, but is not limited to, steps S301 to S302:
step S301, mapping target intention characteristics to a preset first vector space to obtain target intention characteristic vectors;
and step S302, carrying out data complementation on the first node data according to the target intention feature vector to obtain second node data.
In step S301 of some embodiments, the target intention feature is mapped to a preset first vector space by using the MLP network in an intention mapping manner, so as to obtain a target intention feature vector. It should be noted that, by mapping the target intention feature to the first vector space, the target intention feature vector can be made to satisfy a preset feature dimension requirement, for example, the feature dimension of the first vector space is 512×512.
In step S302 of some embodiments, data completion is performed on the first node data according to the target intention feature vector; for example, sentence completion, entity rewriting, case rewriting, synonym transformation, and the like are applied to form the second node data corresponding to the node question of each flow node. In this way, the flow node can be fine-tuned according to the target intention features, so that the intention features contained in the flow node are more accurate and comprehensive, meeting the requirements of multi-intention recognition.
For example, the label intention data of the same corpus X is intention A at flow node No. 1 and intention B at flow node No. 2. During multi-intention recognition, both intention A and intention B of corpus X are recognized; that is, the intention corpus in the overlapping part of the label intention data of the two flow nodes is recognized as intention A and intention B at the same time. To improve the accuracy of intention recognition, the overlapping intention corpus can be extracted independently as a new intention C, that is, intention A and intention B are split. After the split, corpus X is identified only as intention C. Intention C is then mapped to intention A or intention B through intention mapping: when corpus X is identified as intention C at flow node No. 1, intention C is mapped to intention A, so that the outbound robot performs the same operation or reply as after recognizing intention A. Flow node No. 2 is processed similarly, thereby improving the accuracy of intention recognition.
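The split-and-map step in this example can be sketched as two small lookup tables (the table contents and names are illustrative assumptions):

```python
# Hypothetical sketch of intent splitting and mapping: the overlapping
# corpus X is first re-labelled as the new intent C, then C is mapped back
# to the node-specific intent (A at flow node 1, B at flow node 2).
SPLIT = {"corpus X": "intent C"}   # overlapping corpus extracted as a new intent

NODE_MAP = {                        # per-node mapping of the split intent
    1: {"intent C": "intent A"},
    2: {"intent C": "intent B"},
}

def recognize(node_id, corpus):
    """Resolve a corpus to the node-specific intent via split + mapping."""
    intent = SPLIT.get(corpus, corpus)
    return NODE_MAP.get(node_id, {}).get(intent, intent)
```

At flow node 1 the corpus resolves to intent A, at flow node 2 to intent B, so each node triggers the same operation or reply it would after recognizing its own intent.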
Referring to fig. 4, in some embodiments, before step S107, the method further includes pre-constructing an initial intent prediction model, specifically including:
Step S401, an initial model is obtained, wherein the initial model is a Transformer encoder model;
step S402, performing parameter fine adjustment on the initial model according to the acquired sample intention data to obtain an initial intention prediction model.
In step S401 of some embodiments, the preset initial model may be a Transformer encoder model; parameter fine-tuning is performed with the Transformer encoder model as the base model to update it, thereby obtaining the initial intention prediction model. The Transformer encoder model includes two Transformer layers. Intention prediction performance can be improved by the Transformer encoder model.
In step S402 of some embodiments, a loss function is constructed according to the sample intention data, and the loss function is calculated according to the sample intention data, for example, a similarity value between the sample intention data and the reference intention data is calculated through the loss function, and a loss parameter of the loss function is trimmed according to the similarity value, so that the loss parameter after trimming can meet a requirement that the similarity value is greater than or equal to a preset similarity threshold. And taking the finely tuned loss function as a model parameter of the initial model to update the initial model so as to obtain an initial intention prediction model.
It will be appreciated that other ways of training the Transformer encoder base model to obtain the initial intention prediction model may be used; for example, training may be performed by knowledge distillation. A conventional knowledge distillation method may be applied, and embodiments of the present application are not limited in this respect.
Referring to fig. 5, in some embodiments, before step S107, the method further includes pre-training a target intention prediction model, specifically including:
step S501, obtaining sample intention data;
step S502, inputting sample intention data into an initial intention prediction model;
step S503, carrying out recognition processing on sample intention data through an initial intention prediction model to obtain a sample intention sentence vector;
step S504, calculating the similarity between two sample intention sentence vectors through a loss function of the initial intention prediction model;
step S505, generating entangled corpus pairs according to the similarity and the sample intent sentence vector;
step S506, optimizing a loss function of the initial intention prediction model according to the entangled corpus so as to update the initial intention prediction model and obtain a target intention prediction model.
In step S501 and step S502 of some embodiments, the sample intent data may be obtained by writing a web crawler, setting up a data source, and then performing targeted crawling data. And then the sample intent data is input into the initial intent prediction model.
In step S503 of some embodiments, the sample intention data is subjected to pooling processing and activation processing by the initial intention prediction model to obtain the sample intention sentence vectors.
In step S504 of some embodiments, the similarity between the two sample intention sentence vectors may be calculated through the loss function of the initial intention prediction model by a collaborative filtering algorithm such as the cosine similarity algorithm. For example, assuming one sample intention sentence vector is u and the other is v, the similarity between the two is calculated according to the cosine similarity algorithm shown in formula (1), where uᵀ is the transpose of u:

cos(u, v) = uᵀv / (‖u‖ · ‖v‖)    (1)
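The cosine similarity of formula (1) can be sketched directly in plain Python (function name is an assumption):

```python
import math

def cosine_similarity(u, v):
    """cos(u, v) = u.v / (|u| * |v|), as in formula (1)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```

Identical vectors score 1.0 and orthogonal vectors score 0.0, so the result naturally lies in the [0, 1] range the similarity threshold is compared against for non-negative sentence vectors.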
In step S505 of some embodiments, the magnitude relation between the similarity and the preset similarity threshold is compared, and if the similarity is greater than or equal to the similarity threshold, entangled corpus pairs are generated according to the sample intent sentence vector.
It should be noted that an entangled corpus pair consists of two corpora, query1 and query2, with very high similarity but different labeled intentions: the labeled intention of query1 is label1 and that of query2 is label2. The similarity score is a real number, score ∈ [0,1]; the larger the score, the higher the semantic similarity of query1 and query2. The preset similarity threshold may be 0.9. When the similarity is greater than the preset similarity threshold while the labeled intentions of the two corpora differ, the two corpora can be considered entangled with each other, namely an entangled corpus pair, and at least one of them is considered to be labeled incorrectly. All entangled corpus pairs are output to a preset review form to instruct a data annotator to re-label the corpora.
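Detecting entangled corpus pairs as just described can be sketched as follows (the record layout and the pluggable `similarity` callable are assumptions; any similarity in [0, 1], such as the cosine of formula (1), could be passed in):

```python
# Sketch: flag a pair of corpora as entangled when their sentence-vector
# similarity reaches the threshold (0.9 in the embodiment) while their
# annotated labels differ.
SIM_THRESHOLD = 0.9

def find_entangled_pairs(records, similarity):
    """records: list of (query, label, vector) tuples;
    similarity: callable mapping two vectors to a score in [0, 1]."""
    pairs = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            q1, l1, v1 = records[i]
            q2, l2, v2 = records[j]
            if l1 != l2 and similarity(v1, v2) >= SIM_THRESHOLD:
                pairs.append((q1, q2))  # candidate labeling error for review
    return pairs
```

The resulting pairs would then be written to the review form for re-labeling.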
In step S506 of some embodiments, the loss function is back-propagated according to the entangled corpus pairs to optimize the loss function and update the internal parameters (i.e., loss parameters) of the initial intention prediction model, thereby obtaining the target intention prediction model. It will be appreciated that a conventional back-propagation method may be used; embodiments of the present application are not limited in this respect.
It should be noted that a single round of entangled-corpus correction may not find all of the incorrectly labeled and unlabeled corpora in the training set. Therefore, after one round of correction, the initial intention prediction model needs to be fine-tuned again with the corrected business data; this process is repeated to recalculate the entanglement of the training set, and newly found entangled corpora are sent back to the annotators for re-labeling. Optimization of the initial intention prediction model stops when the data in the training set is no longer entangled.
Referring to fig. 6, in some embodiments, the target intent prediction model includes an MLP network, a pooling layer, and a preset function, and step S107 may further include, but is not limited to, steps S601 to S603:
Step S601, mapping intention data to be identified to a preset second vector space through an MLP network to obtain an intention vector to be identified;
step S602, carrying out pooling treatment on the intention vector to be identified through a pooling layer to obtain pooled intention characteristics;
step S603, carrying out intention prediction processing on the pooled intention characteristics through a preset function to obtain predicted intention data.
In step S601 of some embodiments, the MLP network may perform multiple mappings from the semantic space to the vector space on the intention data to be identified, mapping it into the preset second vector space to obtain the intention vector to be identified. It should be noted that mapping the intention data to be identified into the second vector space makes the intention vector to be identified satisfy a preset feature dimension requirement; for example, the feature dimension of the second vector space is 256×256.
In step S602 of some embodiments, the intent vector to be identified is subjected to a maximum pooling process and an average pooling process through the pooling layer, and the result of the maximum pooling process and the result of the average pooling process are spliced to obtain pooled intent characteristics.
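The max-pooling and average-pooling splice of step S602 can be sketched over plain Python lists standing in for the rows of the intention vector (the data layout is an assumption):

```python
def pool_and_concat(vectors):
    """Element-wise max pooling and average pooling over a list of equal-length
    vectors, spliced into a single pooled intention feature."""
    max_pooled = [max(col) for col in zip(*vectors)]
    avg_pooled = [sum(col) / len(col) for col in zip(*vectors)]
    return max_pooled + avg_pooled  # splice of the two pooling results
```

For two rows `[1, 4]` and `[3, 2]`, the max pool is `[3, 4]`, the average pool is `[2.0, 3.0]`, and the spliced pooled intention feature is `[3, 4, 2.0, 3.0]`.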
In step S603 of some embodiments, the preset function may be a softmax function, a tanh function, or the like. Taking the softmax function as an example, it creates a probability distribution over the preset intention category labels, so that the feature vectors are labeled and classified according to the probability distribution to obtain the predicted intention data. The predicted intention data mainly consists of word-embedding vectors containing the intention category labels and an intention probability value corresponding to each intention category.
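As a sketch, the softmax function named above can be written in plain Python (a numerically stable variant that subtracts the maximum logit before exponentiating):

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution over the preset
    intention category labels."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

The outputs sum to 1, and the intention category with the largest logit receives the largest intention probability value.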
Referring to fig. 7, in some embodiments, step S108 may further include, but is not limited to, steps S701 to S702:
step S701, analyzing the second node data to obtain node intention data;
step S702, performing intersection operation on the node intention data and the predicted intention data through an intention intersection algorithm to obtain target intention data.
In step S701 of some embodiments, the parsing function and the intention category labels are used to parse the second node data to obtain a plurality of node intention fields, and a node intention list is generated from the plurality of node intention fields, thereby obtaining the node intention data.
For example, first, word segmentation is performed on the second node data through the parsing function and the intention category labels: a directed acyclic graph corresponding to the second node data is generated by consulting the dictionary of a preset Jieba tokenizer, the shortest path on the directed acyclic graph is found through the parsing function, a preset selection mode, and the dictionary, and the second node data is segmented according to this shortest path (or segmented directly), yielding a plurality of node intention fields.
Further, for node intention fields not in the dictionary, new word discovery may be performed using an HMM (hidden Markov model). Specifically, the position B, M, E, or S of each character in the node intention field is taken as the hidden state and the character itself as the observed state, where B, M, E, and S denote the beginning of a word, the middle of a word, the end of a word, and a single-character word, respectively. Dictionary files store the emission probability matrix, the initial probability vector, and the transition probability matrix between characters. The most probable hidden state sequence is then solved with the Viterbi algorithm, thereby obtaining the node intention field.
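A minimal Viterbi decoder over the B/M/E/S states can be sketched as follows; the probability tables passed in are toy illustrative assumptions, not the Jieba dictionary files mentioned above:

```python
# Toy BMES Viterbi sketch for new-word discovery: hidden states are the
# character positions B (begin), M (middle), E (end), S (single), and the
# observed states are the characters themselves.
STATES = "BMES"

def viterbi(obs, start_p, trans_p, emit_p):
    """Return the most probable BMES state sequence for the observed characters."""
    # Initial column: start probability times emission probability.
    V = [{s: start_p.get(s, 1e-12) * emit_p[s].get(obs[0], 1e-12) for s in STATES}]
    path = {s: [s] for s in STATES}
    for ch in obs[1:]:
        V.append({})
        new_path = {}
        for s in STATES:
            # Best predecessor state for s at this position.
            prob, prev = max(
                (V[-2][p] * trans_p[p].get(s, 1e-12) * emit_p[s].get(ch, 1e-12), p)
                for p in STATES
            )
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(STATES, key=lambda s: V[-1][s])
    return path[best]
```

With toy tables favoring a word start followed by a word end, a two-character observation decodes to `["B", "E"]`, i.e. one two-character word.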
Finally, the plurality of node intention fields are filled into a preset Excel table to generate the node intention list, thereby obtaining the node intention data.
In step S702 of some embodiments, an intersection operation is performed on the node intention data and the predicted intention data by the intention intersection algorithm, and the intention features present in both are extracted to obtain the target intention data. For example, if the predicted intention data includes intention A and intention B, but the node intention list of the current flow node contains intention A, intention X, and intention Y, then according to the intention intersection algorithm the user intention under the current flow node can be determined to be intention A. However, if the node intention list of the current flow node contains intention X, intention Y, and intention Z, the intersection is empty, the user's corpus cannot be processed at the current flow node, and a "refuse to identify" prompt can be output by the outbound robot system.
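The intersection step in this example can be sketched as follows (the function name and the empty-intersection return value are assumptions mirroring the "refuse to identify" prompt above):

```python
# Sketch of the intention intersection step: keep only intents present in
# both the predicted intention data and the current node's intention list;
# an empty intersection maps to the "refuse to identify" prompt.
def intersect_intents(predicted, node_intents):
    node_set = set(node_intents)
    result = [intent for intent in predicted if intent in node_set]
    return result if result else "refuse to identify"
```

This reproduces both branches of the example: a shared intention A is returned as the target intention data, while disjoint lists yield the refusal prompt.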
It should be noted that the above intention intersection algorithm may be a set intersection algorithm. Specifically, when the intersection operation is performed with a set intersection algorithm, a sym() function is first created; it accepts two or more arrays and returns the symmetric difference of the given arrays. For example, given two sets A = {1,2,3} and B = {2,3,4}, the mathematical term "symmetric difference" refers to the set of all elements that belong to exactly one of the two sets (A Δ B = C = {1,4}).
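A sym() helper of this kind can be sketched in Python (the multi-array behavior, a pairwise reduction with the `^` symmetric-difference operator, is an assumption):

```python
from functools import reduce

def sym(*arrays):
    """Accept two or more arrays and return the sorted symmetric difference,
    computed by pairwise reduction over set symmetric difference (^)."""
    return sorted(reduce(lambda a, b: a ^ b, (set(arr) for arr in arrays)))
```

For the example in the text, `sym([1, 2, 3], [2, 3, 4])` returns `[1, 4]`, matching A Δ B = {1,4}.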
According to the embodiments of the present application, the intention data to be identified of the outbound robot system is obtained; the plurality of flow nodes of the outbound robot system are traversed, and the node information of each flow node is obtained, so that the node information of the flow nodes can be acquired conveniently. The original intention data and node attribute data in the node information are extracted, where the original intention data includes original intention fields, and identical original intention fields are supplemented with data according to the node attribute data to obtain the first node data. Further, semantic analysis is performed on the first node data according to the preset intention category labels to obtain the target intention features, and the first node data is fine-tuned according to the target intention features to obtain the second node data, so that the intention features contained in the flow nodes are more accurate and comprehensive, meeting the requirements of multi-intention recognition. Finally, intention prediction is performed on the intention data to be identified through the preset target intention prediction model to obtain the predicted intention data, and intention recognition is performed through the preset intention intersection algorithm, the predicted intention data, and the second node data to obtain the target intention data.
Referring to fig. 8, an embodiment of the present application further provides an intention recognition device, which may implement the above-mentioned intention recognition method, where the device includes:
the intention data to be identified acquisition module 801 is used for acquiring the intention data to be identified of the outbound robot system;
a node information obtaining module 802, configured to traverse a plurality of flow nodes of the outbound robot system, and obtain node information of each flow node;
a data extraction module 803 for extracting original intention data and node attribute data in the node information, wherein the original intention data includes an original intention field;
the data supplementing module 804 is configured to supplement data to the same original intention field according to the node attribute data, so as to obtain first node data;
the semantic analysis module 805 is configured to perform semantic analysis processing on the first node data according to a preset intention type label, so as to obtain a target intention feature;
the fine tuning module 806 is configured to perform fine tuning processing on the first node data according to the target intention feature to obtain second node data;
the intention prediction module 807 is configured to perform intention prediction processing on intention data to be identified through a preset target intention prediction model, so as to obtain predicted intention data;
The intention recognition module 808 is configured to perform intention recognition through a preset intention intersection algorithm, predicted intention data and second node data, so as to obtain target intention data.
The specific implementation of the intention recognition device is basically the same as the specific embodiment of the intention recognition method, and will not be repeated here.
The embodiment of the application also provides electronic equipment, which comprises: the device comprises a memory, a processor, a computer program stored on the memory and executable on the processor, and a data bus for realizing connection communication between the processor and the memory, wherein the computer program realizes the intention recognition method when being executed by the processor. The electronic equipment can be any intelligent terminal including a tablet personal computer, a vehicle-mounted computer and the like.
Referring to fig. 9, fig. 9 illustrates a hardware structure of an electronic device according to another embodiment, the electronic device includes:
the processor 901 may be implemented by a general purpose CPU (central processing unit), a microprocessor, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, etc., for executing related programs to implement the technical solutions provided by the embodiments of the present application;
The memory 902 may be implemented in the form of read-only memory (Read Only Memory, ROM), static storage, dynamic storage, or random access memory (Random Access Memory, RAM). The memory 902 may store an operating system and other application programs. When the technical solutions provided in the embodiments of the present application are implemented by software or firmware, the relevant program code is stored in the memory 902 and invoked by the processor 901 to execute the intention recognition method of the embodiments of the present application;
an input/output interface 903 for inputting and outputting information;
the communication interface 904 is configured to implement communication interaction between the device and other devices, and may implement communication in a wired manner (e.g. USB, network cable, etc.), or may implement communication in a wireless manner (e.g. mobile network, WIFI, bluetooth, etc.);
a bus 905 that transfers information between the various components of the device (e.g., the processor 901, the memory 902, the input/output interface 903, and the communication interface 904);
wherein the processor 901, the memory 902, the input/output interface 903 and the communication interface 904 are communicatively coupled to each other within the device via a bus 905.
The embodiment of the application also provides a storage medium, which is a computer readable storage medium and is used for computer readable storage, the storage medium stores one or more computer programs, and the one or more computer programs can be executed by one or more processors to realize the intention recognition method.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
According to the intention recognition method and device, the electronic equipment, and the storage medium provided by the embodiments of the present application, the intention data to be identified of the outbound robot system is obtained; the plurality of flow nodes of the outbound robot system are traversed, and the node information of each flow node is obtained, so that the node information of the flow nodes can be acquired conveniently. The original intention data and node attribute data in the node information are extracted, where the original intention data includes original intention fields, and identical original intention fields are supplemented with data according to the node attribute data to obtain the first node data. Further, semantic analysis is performed on the first node data according to the preset intention category labels to obtain the target intention features, and the first node data is fine-tuned according to the target intention features to obtain the second node data, so that the intention features contained in the flow nodes are more accurate and comprehensive, meeting the requirements of multi-intention recognition. Finally, intention prediction is performed on the intention data to be identified through the preset target intention prediction model to obtain the predicted intention data, and intention recognition is performed through the preset intention intersection algorithm, the predicted intention data, and the second node data to obtain the target intention data, so that a plurality of intention problems can be accurately recognized and the accuracy of intention recognition is improved.
According to the embodiments of the present application, a plurality of intention problems can be recognized conveniently and accurately without setting up multiple intention recognition models, improving intention recognition efficiency; at the same time, the intention recognition method does not require separate model training for different intention problems, effectively reducing the occupation of server resources and the cost of model training.
The embodiments described in the embodiments of the present application are for more clearly describing the technical solutions of the embodiments of the present application, and do not constitute a limitation on the technical solutions provided by the embodiments of the present application, and as those skilled in the art can know that, with the evolution of technology and the appearance of new application scenarios, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
It will be appreciated by those skilled in the art that the solutions shown in fig. 1-7 are not limiting to embodiments of the present application and may include more or fewer steps than shown, or certain steps may be combined, or different steps.
The above described apparatus embodiments are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the present application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in this application, "at least one" means one or more, and "a plurality" means two or more. "and/or" for describing the association relationship of the association object, the representation may have three relationships, for example, "a and/or B" may represent: only a, only B and both a and B are present, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of units described above is merely a logical function division, and there may be other division manners in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes multiple instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing a program.
The preferred embodiments of the present application are described above with reference to the accompanying drawings, which does not thereby limit the scope of the claims of the embodiments of the present application. Any modifications, equivalent substitutions, and improvements made by those skilled in the art without departing from the scope and spirit of the embodiments of the present application shall fall within the scope of the claims of the embodiments of the present application.

Claims (7)

1. An intention recognition method, the method comprising:
acquiring to-be-identified intention data of an outbound robot system;
traversing a plurality of flow nodes of the outbound robot system, and acquiring node information of each flow node;
extracting original intention data and node attribute data in the node information, wherein the original intention data comprises an original intention field;
performing data supplementation on the same original intention field according to the node attribute data to obtain first node data;
performing semantic analysis processing on the first node data according to a preset intention category label to obtain a target intention feature;
performing fine-tuning processing on the first node data according to the target intention feature to obtain second node data;
performing intention prediction processing on the intention data to be identified through a preset target intention prediction model to obtain predicted intention data;
performing intention recognition through a preset intention intersection algorithm, the predicted intention data, and the second node data to obtain target intention data;
wherein the step of performing fine-tuning processing on the first node data according to the target intention feature to obtain the second node data comprises:
mapping the target intention feature to a preset first vector space to obtain a target intention feature vector; and
performing data completion on the first node data according to the target intention feature vector to obtain the second node data;
wherein the target intention prediction model comprises an MLP network, a pooling layer, and a preset function, and the step of performing intention prediction processing on the intention data to be identified through the preset target intention prediction model to obtain the predicted intention data comprises:
mapping the intention data to be identified to a preset second vector space through the MLP network to obtain an intention vector to be identified;
performing pooling processing on the intention vector to be identified through the pooling layer to obtain a pooled intention feature; and
performing intention prediction processing on the pooled intention feature through the preset function to obtain the predicted intention data;
wherein the step of performing intention recognition through the preset intention intersection algorithm, the predicted intention data, and the second node data to obtain the target intention data comprises:
parsing the second node data to obtain node intention data; and
performing an intersection operation on the node intention data and the predicted intention data through the intention intersection algorithm to obtain the target intention data.
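The claim specifies no concrete implementation, but the prediction and intersection steps can be illustrated with a minimal sketch. Everything below is an assumption rather than the patented method: a single ReLU layer stands in for the MLP network, mean pooling for the pooling layer, a softmax for the "preset function", and the intention intersection algorithm is read as a ranked set intersection.

```python
import math

def predict_intents(token_vectors, weights, bias, labels, top_k=3):
    """Sketch of the claimed pipeline: MLP layer -> pooling -> preset function."""
    # MLP network: one ReLU layer mapping each token vector into the
    # "second vector space"; weights[c] is the column for label c.
    hidden = []
    for vec in token_vectors:
        row = [max(sum(v * w for v, w in zip(vec, col)) + b, 0.0)
               for col, b in zip(weights, bias)]
        hidden.append(row)
    # Pooling layer: average the per-token rows into one pooled feature.
    pooled = [sum(col) / len(hidden) for col in zip(*hidden)]
    # "Preset function": assumed here to be a softmax over the intent labels.
    peak = max(pooled)
    exps = [math.exp(p - peak) for p in pooled]
    total = sum(exps)
    probs = [e / total for e in exps]
    ranked = sorted(range(len(labels)), key=lambda i: -probs[i])
    return [labels[i] for i in ranked[:top_k]]

def intersect_intents(predicted, node_intents):
    # Intention intersection algorithm, read as a set intersection that
    # keeps the model's ranking of the predicted intents.
    allowed = set(node_intents)
    return [p for p in predicted if p in allowed]
```

Restricting the model's ranked predictions to the intents actually reachable from the current flow node is one plausible motivation for taking an intersection rather than trusting the classifier alone.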
2. The intention recognition method according to claim 1, wherein the step of performing semantic analysis processing on the first node data according to the preset intention category label to obtain the target intention feature comprises:
performing tag-based intention classification on the first node data according to the intention category label to obtain tag intention data;
performing semantic analysis processing on the tag intention data to obtain a node intention corpus; and
performing feature extraction on the node intention corpus to obtain the target intention feature.
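Claim 2 leaves the tag classification and feature extraction unspecified. The following is one hypothetical, deliberately simplified reading: node texts are bucketed by whichever preset intent-category tag they mention, and each bucket's most frequent tokens serve as its target intention features. The function name and this tag-matching scheme are illustrative assumptions, not details from the patent.

```python
from collections import Counter, defaultdict

def classify_by_tag(first_node_data, category_tags):
    """Bucket node texts by intent-category tag, then extract top tokens."""
    # Tag-based intention classification: a text belongs to every
    # category tag it mentions (a crude stand-in for a real classifier).
    buckets = defaultdict(list)
    for text in first_node_data:
        for tag in category_tags:
            if tag in text:
                buckets[tag].append(text)
    # Feature extraction: the most frequent tokens of each bucket act
    # as that category's "target intention features".
    features = {}
    for tag, texts in buckets.items():
        tokens = Counter(tok for t in texts for tok in t.split())
        features[tag] = [tok for tok, _ in tokens.most_common(3)]
    return features
```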
3. The intention recognition method according to claim 1 or 2, wherein before the step of performing intention prediction processing on the intention data to be identified through the preset target intention prediction model to obtain the predicted intention data, the method further comprises pre-training the target intention prediction model, specifically comprising:
acquiring sample intention data;
inputting the sample intention data into an initial intention prediction model;
identifying the sample intention data through the initial intention prediction model to obtain sample intention sentence vectors;
calculating a similarity between pairs of the sample intention sentence vectors through a loss function of the initial intention prediction model;
generating entangled corpus pairs according to the similarity and the sample intention sentence vectors; and
optimizing the loss function of the initial intention prediction model according to the entangled corpus pairs so as to update the initial intention prediction model and obtain the target intention prediction model.
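Claim 3's "entangled corpus pairs" can plausibly be read as hard negatives: pairs of sample intention sentence vectors that are highly similar yet carry different intent labels. A minimal mining routine under that assumed reading follows; the cosine metric and the 0.8 threshold are illustrative choices, not values from the patent.

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense sentence vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def mine_entangled_pairs(sentence_vectors, labels, threshold=0.8):
    """Collect (i, j) index pairs that are similar but differently labeled."""
    pairs = []
    for i in range(len(sentence_vectors)):
        for j in range(i + 1, len(sentence_vectors)):
            close = cosine(sentence_vectors[i], sentence_vectors[j]) >= threshold
            if close and labels[i] != labels[j]:
                pairs.append((i, j))
    return pairs
```

Such pairs would then weight or populate the loss so that the updated model pushes confusable intents apart, which matches the claim's "optimizing a loss function ... according to the entangled corpus pairs".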
4. The intention recognition method according to claim 3, wherein before the step of pre-training the target intention prediction model, the method further comprises pre-constructing the initial intention prediction model, comprising:
acquiring an initial model, wherein the initial model is a Transformer encoder model; and
performing parameter fine-tuning on the initial model according to the acquired sample intention data to obtain the initial intention prediction model.
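Claim 4's "parameter fine adjustment" of a Transformer encoder might, in a deliberately simplified form, be illustrated by treating the encoder body as frozen (its sentence features precomputed) and training only a linear classification head on the sample intention data. The shapes, learning rate, and softmax cross-entropy update below are all illustrative assumptions, not details from the patent.

```python
import math

def finetune_head(features, y, num_labels, lr=0.5, steps=200):
    """Train a linear head over frozen encoder features by gradient descent."""
    dim = len(features[0])
    W = [[0.0] * num_labels for _ in range(dim)]
    for _ in range(steps):
        for x, label in zip(features, y):
            logits = [sum(x[d] * W[d][c] for d in range(dim))
                      for c in range(num_labels)]
            peak = max(logits)
            exps = [math.exp(l - peak) for l in logits]
            total = sum(exps)
            probs = [e / total for e in exps]
            probs[label] -= 1.0  # softmax cross-entropy gradient w.r.t. logits
            for d in range(dim):
                for c in range(num_labels):
                    W[d][c] -= lr * x[d] * probs[c]
    return W

def predict(W, x):
    # Argmax over the head's scores for one feature vector.
    scores = [sum(x[d] * W[d][c] for d in range(len(x)))
              for c in range(len(W[0]))]
    return scores.index(max(scores))
```

Tuning only the head is one common low-cost form of fine-tuning; full fine-tuning would instead backpropagate through the encoder's own parameters.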
5. An intention recognition device, the device comprising:
a to-be-identified intention data acquisition module, used for acquiring to-be-identified intention data of an outbound robot system;
a node information acquisition module, used for traversing a plurality of flow nodes of the outbound robot system and acquiring node information of each flow node;
a data extraction module, used for extracting original intention data and node attribute data in the node information, wherein the original intention data comprises an original intention field;
a data supplementing module, used for performing data supplementation on the same original intention field according to the node attribute data to obtain first node data;
a semantic analysis module, used for performing semantic analysis processing on the first node data according to a preset intention category label to obtain a target intention feature;
a fine-tuning module, used for performing fine-tuning processing on the first node data according to the target intention feature to obtain second node data;
an intention prediction module, used for performing intention prediction processing on the intention data to be identified through a preset target intention prediction model to obtain predicted intention data; and
an intention recognition module, used for performing intention recognition through a preset intention intersection algorithm, the predicted intention data, and the second node data to obtain target intention data;
wherein the step of performing fine-tuning processing on the first node data according to the target intention feature to obtain the second node data comprises:
mapping the target intention feature to a preset first vector space to obtain a target intention feature vector; and
performing data completion on the first node data according to the target intention feature vector to obtain the second node data;
wherein the target intention prediction model comprises an MLP network, a pooling layer, and a preset function, and the step of performing intention prediction processing on the intention data to be identified through the preset target intention prediction model to obtain the predicted intention data comprises:
mapping the intention data to be identified to a preset second vector space through the MLP network to obtain an intention vector to be identified;
performing pooling processing on the intention vector to be identified through the pooling layer to obtain a pooled intention feature; and
performing intention prediction processing on the pooled intention feature through the preset function to obtain the predicted intention data;
wherein the step of performing intention recognition through the preset intention intersection algorithm, the predicted intention data, and the second node data to obtain the target intention data comprises:
parsing the second node data to obtain node intention data; and
performing an intersection operation on the node intention data and the predicted intention data through the intention intersection algorithm to obtain the target intention data.
6. An electronic device, comprising a memory, a processor, a computer program stored on the memory and executable on the processor, and a data bus enabling connection and communication between the processor and the memory, wherein the computer program, when executed by the processor, implements the steps of the intention recognition method according to any one of claims 1 to 4.
7. A computer-readable storage medium for computer-readable storage, wherein the storage medium stores one or more computer programs executable by one or more processors to implement the steps of the intention recognition method according to any one of claims 1 to 4.
CN202210074053.2A 2022-01-21 2022-01-21 Intention recognition method, device, electronic equipment and storage medium Active CN114398903B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210074053.2A CN114398903B (en) 2022-01-21 2022-01-21 Intention recognition method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210074053.2A CN114398903B (en) 2022-01-21 2022-01-21 Intention recognition method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114398903A CN114398903A (en) 2022-04-26
CN114398903B true CN114398903B (en) 2023-06-20

Family

ID=81232023

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210074053.2A Active CN114398903B (en) 2022-01-21 2022-01-21 Intention recognition method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114398903B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117196035A (en) * 2023-08-31 2023-12-08 摩尔线程智能科技(北京)有限责任公司 Reply content processing method and device, electronic equipment and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019024162A1 (en) * 2017-08-04 2019-02-07 平安科技(深圳)有限公司 Intention obtaining method, electronic device, and computer-readable storage medium
CN109815489A (en) * 2019-01-02 2019-05-28 深圳壹账通智能科技有限公司 Collection information generating method, device, computer equipment and storage medium
CN111274797A (en) * 2020-01-13 2020-06-12 平安国际智慧城市科技股份有限公司 Intention recognition method, device and equipment for terminal and storage medium
CN111563144A (en) * 2020-02-25 2020-08-21 升智信息科技(南京)有限公司 Statement context prediction-based user intention identification method and device
CN111931513A (en) * 2020-07-08 2020-11-13 泰康保险集团股份有限公司 Text intention identification method and device
CN112380870A (en) * 2020-11-19 2021-02-19 平安科技(深圳)有限公司 User intention analysis method and device, electronic equipment and computer storage medium
CN112988963A (en) * 2021-02-19 2021-06-18 平安科技(深圳)有限公司 User intention prediction method, device, equipment and medium based on multi-process node
CN113220828A (en) * 2021-04-28 2021-08-06 平安科技(深圳)有限公司 Intention recognition model processing method and device, computer equipment and storage medium
CN113515594A (en) * 2021-04-28 2021-10-19 京东数字科技控股股份有限公司 Intention recognition method, intention recognition model training method, device and equipment
CA3123387A1 (en) * 2021-06-28 2021-11-10 Ada Support Inc. Method and system for generating an intent classifier

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111563209B (en) * 2019-01-29 2023-06-30 株式会社理光 Method and device for identifying intention and computer readable storage medium
CN111563208B (en) * 2019-01-29 2023-06-30 株式会社理光 Method and device for identifying intention and computer readable storage medium

Also Published As

Publication number Publication date
CN114398903A (en) 2022-04-26

Similar Documents

Publication Publication Date Title
EP4113357A1 (en) Method and apparatus for recognizing entity, electronic device and storage medium
CN111666399A (en) Intelligent question and answer method and device based on knowledge graph and computer equipment
CN114519356B (en) Target word detection method and device, electronic equipment and storage medium
CN114722069A (en) Language conversion method and device, electronic equipment and storage medium
CN116561538A (en) Question-answer scoring method, question-answer scoring device, electronic equipment and storage medium
CN111259113A (en) Text matching method and device, computer readable storage medium and computer equipment
CN114841146B (en) Text abstract generation method and device, electronic equipment and storage medium
CN113705315A (en) Video processing method, device, equipment and storage medium
CN116258137A (en) Text error correction method, device, equipment and storage medium
CN116541493A (en) Interactive response method, device, equipment and storage medium based on intention recognition
CN116680386A (en) Answer prediction method and device based on multi-round dialogue, equipment and storage medium
CN113342944B (en) Corpus generalization method, apparatus, device and storage medium
CN114398903B (en) Intention recognition method, device, electronic equipment and storage medium
CN112749556B (en) Multi-language model training method and device, storage medium and electronic equipment
CN116719999A (en) Text similarity detection method and device, electronic equipment and storage medium
CN116956925A (en) Electronic medical record named entity identification method and device, electronic equipment and storage medium
CN116701604A (en) Question and answer corpus construction method and device, question and answer method, equipment and medium
CN116775875A (en) Question corpus construction method and device, question answering method and device and storage medium
CN114611529B (en) Intention recognition method and device, electronic equipment and storage medium
CN116595023A (en) Address information updating method and device, electronic equipment and storage medium
CN116432705A (en) Text generation model construction method, text generation device, equipment and medium
CN114492437B (en) Keyword recognition method and device, electronic equipment and storage medium
CN115795007A (en) Intelligent question-answering method, intelligent question-answering device, electronic equipment and storage medium
CN114090778A (en) Retrieval method and device based on knowledge anchor point, electronic equipment and storage medium
CN114998041A (en) Method and device for training claim settlement prediction model, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant