CN113515594A - Intention recognition method, intention recognition model training method, device and equipment - Google Patents

Intention recognition method, intention recognition model training method, device and equipment

Info

Publication number
CN113515594A
CN113515594A
Authority
CN
China
Prior art keywords
data
training
intention
node
identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110465878.2A
Other languages
Chinese (zh)
Inventor
杨久东
冯明超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
JD Digital Technology Holdings Co Ltd
Original Assignee
JD Digital Technology Holdings Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by JD Digital Technology Holdings Co Ltd filed Critical JD Digital Technology Holdings Co Ltd
Priority to CN202110465878.2A priority Critical patent/CN113515594A/en
Publication of CN113515594A publication Critical patent/CN113515594A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval of unstructured textual data
    • G06F16/33 Querying
    • G06F16/332 Query formulation
    • G06F16/3329 Natural language query formulation or dialogue systems
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3343 Query execution using phonetics
    • G06F16/3344 Query execution using natural language analysis
    • G06F16/35 Clustering; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Machine Translation (AREA)

Abstract

The application discloses an intention recognition method and apparatus, an intention recognition model training method and apparatus, an electronic device, and a readable storage medium. The intention recognition method comprises the following steps: acquiring data to be recognized and node data indicating the node at which the current business process is located; inputting the data to be recognized and the node data into an intention recognition model to obtain an intention recognition result corresponding to the node data; and executing a target operation corresponding to the intention recognition result. Because the acquired node data delimits the intention recognition range of each node, performing intention recognition based on the node data makes it possible to accurately judge whether the data to be recognized expresses an intention corresponding to the node data, and hence which operation needs to be executed at the current node. Even if the data to be recognized corresponds to an intention of another node, it can be determined that the data does not express any intention corresponding to the node data, so the process corresponding to that data is not executed and derailment of the business process is prevented.

Description

Intention recognition method, intention recognition model training method, device and equipment
Technical Field
The present application relates to the field of intent recognition technology, and in particular, to an intent recognition method, an intent recognition apparatus, an intent recognition model training method, an intent recognition model training apparatus, an electronic device, and a computer-readable storage medium.
Background
Intent recognition, which may also be referred to as intent detection, is used to determine which operation of which domain the input information is intended to perform, and is essentially a multi-class classification problem. In actual use, a business process typically has multiple nodes, each node corresponding to multiple intents. In the related art, when intent recognition is performed, the intent recognition model can recognize all intents corresponding to all nodes. Therefore, when the business process runs to a certain node, the input information may correspond to an intent of another node; in this case, the business process may be derailed, the process logic may conflict, and normal execution cannot be completed.
Disclosure of Invention
In view of the above, an object of the present application is to provide an intention recognition method, an intention recognition apparatus, an intention recognition model training method, an intention recognition model training apparatus, an electronic device, and a readable storage medium. Since the acquired node data delimits the intention recognition range of each node, performing intention recognition based on the node data makes it possible to accurately judge whether the data to be recognized expresses an intention corresponding to the node data, and hence which operation needs to be executed at the current node.
In order to solve the above technical problem, in a first aspect, the present application provides an intention identifying method, which specifically includes:
acquiring data to be identified and node data of a current business process;
inputting the data to be recognized and the node data into an intention recognition model to obtain an intention recognition result corresponding to the node data;
and executing the target operation corresponding to the intention recognition result.
In a possible implementation manner, the acquiring data to be identified and node data where a current business process is located includes:
acquiring initial data to be identified and initial node data;
and respectively carrying out feature coding on the initial data to be identified and the initial node data to obtain the data to be identified and the node data.
In a possible implementation manner, the acquiring data to be identified and node data where a current business process is located includes:
acquiring speech to be detected, and performing speech text recognition processing on the speech to be detected to obtain the data to be recognized;
and querying the progress of the current business process to obtain the node data.
In a possible implementation manner, the performing speech text recognition processing on the speech to be detected to obtain the data to be recognized includes:
performing speech-to-text conversion on the speech to be detected to obtain initial data;
and performing keyword extraction or invalid-information filtering processing on the initial data to obtain the data to be recognized.
In a second aspect, the present application further provides an intention recognition model training method for generating the above intention recognition model, the intention recognition model training method including:
acquiring training data, wherein the training data comprises training intention data and training node data;
and training the initial model by using the training data to obtain an intention recognition model.
In one possible embodiment, the acquiring training data includes:
acquiring a plurality of training intention data and a plurality of training node data;
respectively utilizing each training intention data and each training node data to form a plurality of initial training data;
setting a category label corresponding to the training intention data for positive initial training data, and setting a negative label for negative initial training data to obtain the training data;
wherein the positive initial training data is initial training data in which the training intention data matches the training node data, and the negative initial training data is initial training data in which the training intention data does not match the training node data.
In one possible embodiment, the initial model is a tree model.
In a third aspect, the present application further provides an intention identifying apparatus, including:
the data to be identified acquisition module is used for acquiring the data to be identified and the node data of the current business process;
the identification module is used for inputting the data to be identified and the node data into an intention identification model to obtain an intention identification result corresponding to the node data;
and the execution module is used for executing the target operation corresponding to the intention recognition result.
In a fourth aspect, the present application further provides an intention recognition model training apparatus, including:
the training data acquisition module is used for acquiring training data, and the training data comprises training intention data and training node data;
and the training module is used for training the initial model by using the training data to obtain an intention recognition model.
In a fifth aspect, the present application further provides an electronic device comprising a memory and a processor, wherein:
the memory is used for storing a computer program;
the processor is configured to execute the computer program to implement the above-mentioned intention recognition method and/or the above-mentioned intention recognition model training method.
In a sixth aspect, the present application further provides a computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements the above-mentioned intent recognition method and/or the above-mentioned intent recognition model training method.
The intention identification method provided by the application obtains data to be identified and node data where a current business process is located; inputting data to be identified and node data into an intention identification model to obtain an intention identification result corresponding to the node data; and executing the target operation corresponding to the intention recognition result.
Therefore, when identifying the intention, the method acquires both the data to be recognized and the node data. The node data represents where the current business process is running and also characterizes the intentions corresponding to that node. By inputting the data to be recognized and the node data into the intention recognition model, an intention recognition result corresponding to the node data can be obtained: the model determines not merely whether the data to be recognized expresses some intention, but whether it expresses an intention corresponding to the node data. After the intention recognition result is obtained, the corresponding specific operation, i.e., the target operation, can be performed according to its content. Because the acquired node data delimits the intention recognition range of each node, performing intention recognition based on the node data makes it possible to accurately judge whether the data to be recognized expresses an intention corresponding to the node data, obtain an accurate intention recognition result, and thereby accurately determine the operation to be executed at the current node. Even if the data to be recognized corresponds to an intention of another node, it can be determined that it does not express any intention corresponding to the node data, so the process corresponding to it is not executed and derailment of the business process is prevented.
Meanwhile, by limiting the intention recognition range, the situation in which the data to be recognized simultaneously corresponds to multiple intentions of multiple nodes can be avoided, improving the recognition accuracy of the model.
In addition, the present application also provides an intention recognition apparatus, an intention recognition model training method, an intention recognition model training apparatus, an electronic device, and a computer-readable storage medium, which have the same beneficial effects.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an intention identification method according to an embodiment of the present application;
FIG. 2 is a diagram illustrating a specific intent recognition process provided by an embodiment of the present application;
fig. 3 is a schematic structural diagram of an intention identifying apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an intention recognition model training apparatus according to an embodiment of the present application;
fig. 5 is a schematic diagram of a hardware composition framework to which an intention identification method according to an embodiment of the present disclosure is applied;
fig. 6 is a hardware composition framework diagram for another intention identification method according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. It is obvious that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
In the related art, when intention recognition is performed, an intention recognition model can recognize all intentions corresponding to all nodes in a business process. Thus, when the business process runs to a certain node, the input information may correspond to an intention of another node, in which case the business process may be derailed. For example, suppose the business process is confirm identity - confirm time - confirm item - confirm address, comprising four nodes, each of which may correspond to multiple intentions. When the process is at the confirm-time node and the input information is "the address needs to be modified", the input belongs to an intention of the confirm-address node. The intention recognition model can accurately recognize this intention, but the intention does not belong to the confirm-time node, so the business process is derailed: for example, the process may jump directly to the confirm-address node and modify the address, skipping the confirm-item node. Meanwhile, the input information may correspond to multiple intentions of multiple nodes at the same time. For example, when the input is "modification needed", it may correspond both to the modify-time intention of the confirm-time node and to the modify-address intention of the confirm-address node; in this case the recognized intention might be modify-address, resulting in poor recognition accuracy.
In order to solve the above problems, the present application provides an intention identification method. Referring to fig. 1, fig. 1 is a flowchart of an intention identification method according to an embodiment of the present disclosure. The method comprises the following steps:
s101: and acquiring data to be identified and node data of the current business process.
The data to be recognized is the data on which intention recognition needs to be performed, and its data format is not limited. For example, when the intention recognition model has no feature-encoding part, the data to be recognized may be data that has already been feature-encoded; in another embodiment, it may be text data obtained by speech-to-text conversion, or audio data. The specific manner of acquisition is likewise not limited and may vary with the data format: for example, the audio to be recognized may be captured by an audio receiving device, or the data to be recognized may be extracted from data sent by another electronic device.
The node data is data representing the node at which the business process is currently located. Different nodes have different intention recognition ranges, so by acquiring the node data, subsequent intention recognition performed on the basis of the node data can take that range into account when producing the intention recognition result. Similar to the data to be recognized, the data format of the node data is not limited, and it may be the same as or different from that of the data to be recognized.
S102: and inputting the data to be identified and the node data into the intention identification model to obtain an intention identification result corresponding to the node data.
After the data to be recognized and the node data are obtained, they are input into an intention recognition model, so that an intention recognition result is generated by the model. The intention recognition model is trained in advance, and its type and architecture are not limited. When generating the intention recognition result, the model is based not only on the data to be recognized but also on the node data; the resulting intention recognition result therefore corresponds to the node data and indicates whether the data to be recognized expresses any of the intentions corresponding to the node data. For example, taking the business process confirm identity - confirm time - confirm item - confirm address, suppose the node data corresponds to the confirm-time node, whose intentions are modify time, do not modify time, and other. If the data to be recognized is "the address needs to be modified", its underlying intention is modify address; but since the node data delimits the intention recognition range, the data to be recognized matches neither modify time nor do not modify time, and instead matches other. The intention recognition result is therefore other, not modify address.
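The way node data restricts the intention recognition range can be sketched as follows. This is a hypothetical illustration only: the node names and intent labels are invented for the example and are not taken from the patent.

```python
# Hypothetical intent ranges per node; labels are illustrative only.
NODE_INTENTS = {
    "confirm_time": {"modify_time", "do_not_modify_time"},
    "confirm_address": {"modify_address", "do_not_modify_address"},
}

def restrict_to_node(raw_intent: str, node: str) -> str:
    """Map a raw intent onto the current node's intent range.

    An intent belonging to a different node falls back to "other",
    so out-of-scope input cannot derail the business process.
    """
    return raw_intent if raw_intent in NODE_INTENTS[node] else "other"
```

With this restriction, "the address needs to be modified" received at the confirm-time node resolves to "other" rather than triggering an address change.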
S103: and executing the target operation corresponding to the intention recognition result.
And after the intention identification result is obtained, executing corresponding target operation according to the intention identification result so as to continue the business process. The specific content of the target operation may also be different according to different intention recognition results, which is not limited in this embodiment and may be set as needed.
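The mapping from intention recognition result to target operation can be sketched as a simple dispatch table. The mapping below is invented for illustration; the patent leaves the concrete operations to be set as needed.

```python
# Hypothetical mapping from intention recognition result to target operation.
TARGET_OPERATIONS = {
    "modify_time": "prompt_for_new_time",
    "do_not_modify_time": "advance_to_next_node",
    "other": "repeat_current_node_prompt",
}

def execute_target(intent_result: str) -> str:
    """Return the operation to run for a given intention recognition result."""
    # Unknown results are treated like "other": stay at the current node.
    return TARGET_OPERATIONS.get(intent_result, "repeat_current_node_prompt")
```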
By applying the intention recognition method provided by the embodiment of the present application, when the intention is recognized, not only the data to be recognized but also the node data is acquired. The node data represents where the current business process is running and also characterizes the intentions corresponding to that node. By inputting the data to be recognized and the node data into the intention recognition model, an intention recognition result corresponding to the node data can be obtained: the model determines not merely whether the data to be recognized expresses some intention, but whether it expresses an intention corresponding to the node data. After the intention recognition result is obtained, the corresponding specific operation, i.e., the target operation, can be performed according to its content. Because the acquired node data delimits the intention recognition range of each node, performing intention recognition based on the node data makes it possible to accurately judge whether the data to be recognized expresses an intention corresponding to the node data, obtain an accurate intention recognition result, and thereby accurately determine the operation to be executed at the current node. Even if the data to be recognized corresponds to an intention of another node, it can be determined that it does not express any intention corresponding to the node data, so the process corresponding to it is not executed and derailment of the business process is prevented.
Meanwhile, by limiting the intention recognition range, the situation in which the data to be recognized simultaneously corresponds to multiple intentions of multiple nodes can be avoided, improving the recognition accuracy of the model.
Based on the above embodiments, the present embodiment will specifically describe several steps in the above embodiments. In one embodiment, the intention recognition model has no feature coding part, and is a classification model. In this case, the process of acquiring the data to be identified and the node data may include:
step 11: and acquiring initial data to be identified and initial node data.
In this embodiment, the initial data to be recognized and the initial node data are raw data that is not encoded, which may be audio data or may be text data. The initial data to be identified and the initial node data may be obtained in the same manner, for example, both are input from the outside, or may be obtained in different manners, which is not limited in this embodiment.
Step 12: and respectively carrying out feature coding on the initial data to be identified and the initial node data to obtain the data to be identified and the node data.
The initial data to be recognized and the initial node data may be encoded in the same or different manners. In one embodiment, the initial data to be recognized may be encoded with a BERT (Bidirectional Encoder Representations from Transformers) model, and the resulting data to be recognized may be obtained via bert-as-service, i.e., as a feature vector produced by the BERT model. BERT is a pre-trained language model. For the initial node data, because the nodes in the business process are discrete, one-hot encoding may be used for feature encoding. One-hot encoding, also known as one-bit-effective encoding, uses an N-bit state register to encode N states; each state has its own register bit, and only one bit is active at any time. The data to be recognized and the node data are obtained through this feature encoding.
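The one-hot node encoding described above can be sketched as follows. The BERT sentence vector is assumed to come from a separate encoder and is not shown; only the node encoding is illustrated.

```python
import numpy as np

def one_hot_node(node_index: int, num_nodes: int) -> np.ndarray:
    """One-hot encode the current node: N nodes yield an N-dimensional
    vector with a single active bit at the node's position."""
    vec = np.zeros(num_nodes, dtype=np.float32)
    vec[node_index] = 1.0
    return vec
```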
Based on the above embodiment, in application scenarios such as an intelligent outbound scenario or a customer service scenario, the initial data is usually data in an audio format. In this case, in order to recognize the meaning corresponding to the audio, it may be subjected to a speech text recognition process to obtain corresponding text data, and the text data is used as the data to be recognized. Specifically, the process of acquiring the data to be identified and the node data may include the following steps:
step 21: and acquiring the voice to be recognized, and performing voice text recognition processing on the voice to be recognized to obtain data to be recognized.
Through the speech text recognition processing, the speech to be detected can be converted into the data to be recognized in a text form so as to recognize the intention in the data in the text form. The embodiment does not limit the specific process of the speech text recognition processing, for example, the text recognition of the speech to be detected can be directly performed to obtain the corresponding data to be recognized; in another embodiment, the speech to be recognized may be recognized first to obtain a recognition result, and the recognition result may be subjected to data filtering, sentence structure adjustment, and the like, and the processed result may be determined as the data to be recognized.
Step 22: and inquiring the current business process progress to obtain node data.
In this embodiment, in order to ensure the accuracy of the node data, the node data does not need to be obtained from the outside, but the progress of the current business process is directly queried, and the corresponding node data is obtained based on the progress. The embodiment does not limit the specific process and manner of the query progress, and reference may be made to related technologies.
Further, to improve the accuracy of the intention recognition result, in one embodiment, invalid data may be filtered during the speech text recognition processing. Specifically, performing speech text recognition processing on the speech to be detected to obtain the data to be recognized includes:
step 31: and performing voice text conversion on the voice to be detected to obtain initial data.
The speech-to-text conversion converts the speech to be detected into initial data in text format.
Step 32: and performing keyword extraction or invalid information filtering processing on the initial data to obtain data to be identified.
In this embodiment, the keywords in the initial data may be extracted, and the data to be recognized is formed by using the keywords, or the invalid information in the initial data may be filtered, and the data to be recognized is formed by using the remaining data. The keywords and the invalid information may be set as needed, and the specific content is not limited in this embodiment. Through keyword extraction or invalid information filtering, the content which is not beneficial to intention identification in the initial data can be deleted, and the data to be identified with valid information can be obtained.
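The invalid-information filtering step can be sketched as follows. The filler-word list is invented for illustration; as the text notes, the keywords and invalid information are set as needed.

```python
# Hypothetical filler tokens carrying no intent information.
INVALID_TOKENS = {"um", "uh", "well", "you know"}

def filter_invalid(tokens: list) -> list:
    """Drop tokens listed as invalid, keeping the intent-bearing content."""
    return [t for t in tokens if t.lower() not in INVALID_TOKENS]
```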
Based on the above embodiments, it can be understood that before intent recognition is performed using an intent recognition model, a corresponding intent recognition model needs to be generated. The generation process of the intention recognition model may specifically include the following steps:
step 41: acquiring training data;
it should be noted that the training data in the present embodiment includes training intention data and training node data. The embodiment does not limit the specific obtaining manner of the training data, for example, the externally input training data may be obtained, for example, the training data is obtained from a cloud, an external storage medium, or other electronic devices.
In particular, in one possible implementation, the training data may be constructed locally. In this case, the process of acquiring the training data may specifically include the following steps:
step 51: a plurality of training intent data and a plurality of training node data are obtained.
It can be understood that, in order to train the intention recognition model comprehensively, the training node data needs to cover each node of the business process, and the training intention data covers all intents corresponding to all nodes.
Step 52: and forming a plurality of initial training data by using each training intention data and each training node data respectively.
By combining the training intention data and the training node data, a plurality of initial training data can be obtained, wherein the training intention data and the training node data are matched, and the training intention data and the training node data are not matched.
Step 53: and setting a category label corresponding to the training intention data for the positive initial training data, and setting a negative label for the negative initial training data to obtain the training data.
The positive initial training data is initial training data whose training intention data matches its training node data, and the negative initial training data is initial training data whose training intention data does not match its training node data. By setting the negative label for the negative initial training data, the model can learn that training intention data not matching the training node data does not correspond to any intention of that node, so that after training it can recognize the intention of the data to be recognized based on the node data.
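The construction of positive and negative training data described in steps 51 through 53 can be sketched as follows. The node names, intent labels, and the "negative" label string are illustrative assumptions, not taken from the patent.

```python
from itertools import product

def build_training_data(intent_samples, node_intents):
    """Pair every training intent datum with every node and label the pair.

    intent_samples: list of (text, intent_label) tuples
    node_intents:   dict mapping node name -> set of intents valid there
    A pair whose intent matches the node keeps its category label
    (positive sample); otherwise it is labeled "negative".
    """
    data = []
    for (text, label), node in product(intent_samples, node_intents):
        target = label if label in node_intents[node] else "negative"
        data.append((text, node, target))
    return data
```

Each training intent datum thus appears once per node, once as a positive sample at its own node and as a negative sample everywhere else.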
Step 42: and training the initial model by using the training data to obtain an intention recognition model.
The initial model is a model that has not yet been trained. It should be noted that the training node data is a strong feature whose function is to limit the scope of intention recognition. In a common deep learning model, the node feature may be overwhelmed, resulting in a poorly trained intention recognition model. To solve this problem, a tree model may be used as the initial model, for example, an XGBoost tree model.
Referring to fig. 2, fig. 2 is a schematic diagram of a specific intent recognition process according to an embodiment of the present disclosure. The XGBoost model is a classification model, which may be used as the intention recognition model alone or may be combined with a BERT model to form the intention recognition model. The BERT model is used for extracting text features: a text sentence vector, namely a sentence embedding, corresponding to the data to be identified is obtained, for example via bert-as-service, and the dimensionality of the text sentence vector is 768 by default. Meanwhile, node information is extracted; since the nodes are discrete, they are encoded in a one-hot manner, and the feature dimension is kept consistent with the number of nodes. For example, if there are 5 nodes in total, the dimension of the node feature is 5; if the current node is the first node, the node feature can be represented as (1, 0, 0, 0, 0), and if the current node is the second node, the node feature can be represented as (0, 1, 0, 0, 0), and so on for the other nodes. The node feature is thus obtained by one-hot encoding the node information. Finally, the text sentence vector and the node feature are concatenated; after concatenation, the feature dimensionality is 768 + the number of nodes, which completes the feature construction. The complete feature is input into the XGBoost model to obtain an intention recognition result, and the corresponding target operation is executed according to the intention recognition result.
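A minimal sketch of the feature construction described above, assuming a 5-node business process and treating the BERT sentence vector as given (obtaining it via bert-as-service would require a running encoder, so a random vector stands in here):

```python
import numpy as np

NUM_NODES = 5   # example: 5 business-process nodes
EMB_DIM = 768   # default sentence-vector dimension from bert-as-service

def node_one_hot(node_index: int, num_nodes: int = NUM_NODES) -> np.ndarray:
    """One-hot encode a 0-based node index; dimension equals the node count."""
    vec = np.zeros(num_nodes, dtype=np.float32)
    vec[node_index] = 1.0
    return vec

def build_feature(sentence_vec: np.ndarray, node_index: int) -> np.ndarray:
    """Concatenate the text sentence vector with the node one-hot feature."""
    return np.concatenate([sentence_vec, node_one_hot(node_index)])

# In the full pipeline the sentence vector would come from BERT; a random
# vector keeps the sketch self-contained.
sentence_vec = np.random.rand(EMB_DIM).astype(np.float32)
feature = build_feature(sentence_vec, node_index=0)
# feature dimension = 768 + number of nodes = 773; this complete feature
# is what would be fed to the XGBoost classifier
```

Because the node feature occupies only a handful of dimensions next to the 768-dim sentence vector, a tree model, which can split directly on those few strong dimensions, is a natural fit for the concatenated feature.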
In the following, the intention identification device provided by the embodiment of the present application is introduced, and the intention identification device described below and the intention identification method described above may be referred to correspondingly.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an intention identifying apparatus according to an embodiment of the present application, including:
a to-be-identified data obtaining module 110, configured to obtain to-be-identified data and node data where a current business process is located;
the identification module 120 is configured to input the data to be identified and the node data into the intention identification model, and obtain an intention identification result corresponding to the node data;
and the execution module 130 is configured to execute the target operation corresponding to the intention recognition result.
Optionally, the data to be identified obtaining module 110 includes:
an acquisition unit, configured to acquire initial data to be identified and initial node data;
and the coding unit is used for respectively carrying out characteristic coding on the initial data to be identified and the initial node data to obtain the data to be identified and the node data.
Optionally, the data to be identified obtaining module 110 includes:
the voice recognition unit is used for acquiring the voice to be detected and carrying out voice text recognition processing on the voice to be detected to obtain data to be recognized;
and the query unit is used for querying the current business process progress to obtain the node data.
Optionally, the speech recognition unit comprises:
the conversion subunit is used for performing voice text conversion on the voice to be detected to obtain initial data;
and the information extraction subunit is used for extracting keywords or filtering invalid information from the initial data to obtain the data to be identified.
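The acquire-recognize-execute flow of the modules above can be sketched as follows (a hedged illustration only: the class, the stub model, and the action table are assumptions for demonstration, not the apparatus itself):

```python
class IntentRecognitionDevice:
    """Minimal sketch of the apparatus of fig. 3."""

    def __init__(self, model, actions):
        self.model = model      # trained intention recognition model
        self.actions = actions  # maps each intent to its target operation

    def acquire(self, raw_text, process_state):
        # data-to-be-identified acquisition module 110: trivial text
        # normalization stands in for the feature-encoding step here
        return raw_text.strip().lower(), process_state

    def recognize(self, data, node):
        # recognition module 120: the model's output is restricted by
        # the node feature of the current business process
        return self.model(data, node)

    def execute(self, intent):
        # execution module 130: run the target operation for the intent
        return self.actions.get(intent, lambda: "no-op")()

# usage with a stub model that only allows node-appropriate intents
stub_model = lambda text, node: "cancel_order" if node == "n2" else "other"
device = IntentRecognitionDevice(stub_model, {"cancel_order": lambda: "cancelled"})
data, node = device.acquire("  Cancel my order ", "n2")
result = device.execute(device.recognize(data, node))
# result == "cancelled"
```

The same utterance at a different node would yield a different recognition result, which is the node-scoping behavior the embodiment describes.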
In the following, the intention recognition model training apparatus provided by the embodiment of the present application is introduced; the training apparatus described below and the intention recognition model training method described above may be referred to correspondingly.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an intention recognition model training apparatus according to an embodiment of the present application, including:
a training data obtaining module 210, configured to obtain training data, where the training data includes training intention data and training node data;
and the training module 220 is configured to train the initial model by using the training data to obtain an intention recognition model.
Optionally, the training data obtaining module 210 includes:
an acquisition unit configured to acquire a plurality of training intention data and a plurality of training node data;
the combination unit is used for forming a plurality of initial training data by utilizing each training intention data and each training node data respectively;
the marking unit is used for setting a category label corresponding to the training intention data for positive initial training data and setting a negative label for negative initial training data to obtain training data;
the positive initial training data is initial training data in which the training intention data matches the training node data; the negative initial training data is initial training data in which the training intention data does not match the training node data.
In the following, the electronic device provided by the embodiment of the present application is introduced, and the electronic device described below and the intention identification method described above may be referred to correspondingly.
Referring to fig. 5, fig. 5 is a hardware composition framework diagram for an intention recognition method according to an embodiment of the present disclosure. Wherein the electronic device 100 may include a processor 101 and a memory 102, and may further include one or more of a multimedia component 103, an information input/information output (I/O) interface 104, and a communication component 105.
The processor 101 is configured to control the overall operation of the electronic device 100 to complete all or part of the steps in the above intention recognition method; the memory 102 is used to store various types of data to support operation at the electronic device 100. Such data may include, for example, instructions for any application or method operating on the electronic device 100, as well as application-related data. The memory 102 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as one or more of static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk. In the present embodiment, the memory 102 stores at least programs and/or data for realizing the following functions:
acquiring data to be identified and node data of a current business process;
inputting the data to be recognized and the node data into an intention recognition model to obtain an intention recognition result corresponding to the node data;
and executing the target operation corresponding to the intention recognition result.
And/or:
acquiring training data; the training data comprises training intent data and training node data;
and training the initial model by using the training data to obtain an intention recognition model.
The multimedia component 103 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; the received audio signal may further be stored in the memory 102 or transmitted through the communication component 105. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 104 provides an interface between the processor 101 and other interface modules, such as a keyboard, a mouse, or buttons; these buttons may be virtual buttons or physical buttons. The communication component 105 is used for wired or wireless communication between the electronic device 100 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, near field communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them, so the corresponding communication component 105 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
The electronic Device 100 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, and is configured to perform the method for identifying intent as set forth in the above embodiments.
Of course, the structure of the electronic device 100 shown in fig. 5 does not constitute a limitation on the electronic device in the embodiment of the present application; in practical applications, the electronic device 100 may include more or fewer components than those shown in fig. 5, or some components may be combined.
It is to be understood that, in the embodiment of the present application, the number of the electronic devices is not limited, and it may be that a plurality of electronic devices cooperate together to complete the intention identification method. In a possible implementation manner, please refer to fig. 6, and fig. 6 is a schematic diagram of a hardware composition framework to which another intent recognition method provided in the embodiments of the present application is applied. As can be seen from fig. 6, the hardware composition framework may include: the first electronic device 11 and the second electronic device 12 are connected to each other through a network 13.
In the embodiment of the present application, the hardware structures of the first electronic device 11 and the second electronic device 12 may refer to the electronic device 100 in fig. 5; that is, it can be understood that there are two electronic devices 100 in the present embodiment, and the two devices perform data interaction. Further, the form of the network 13 is not limited in this embodiment of the application: the network 13 may be a wireless network (e.g., Wi-Fi, Bluetooth, etc.) or a wired network.
The first electronic device 11 and the second electronic device 12 may be the same type of electronic device, for example, both servers; or they may be different types of electronic devices, for example, the first electronic device 11 may be a computer and the second electronic device 12 may be a server. The interaction between them may be as follows: the first electronic device 11 acquires the data to be identified and the node data and sends them to the second electronic device 12; the second electronic device 12 inputs the data to be identified and the node data into the intention recognition model for recognition to obtain an intention recognition result, and then transmits the intention recognition result to the first electronic device 11, so that the first electronic device 11 performs the corresponding target operation according to the intention recognition result.
The following describes a readable storage medium provided in an embodiment of the present application, and the readable storage medium described below and the intention identification method described above may be referred to correspondingly.
The present application also provides a readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the above-described intent recognition method.
The readable storage medium may include: various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Finally, it should also be noted that, herein, relational terms such as first and second are used only to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprise", "include", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that includes a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The principle and the implementation of the present application are explained herein by applying specific examples, and the above description of the embodiments is only used to help understand the method and the core idea of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (11)

1. An intent recognition method, comprising:
acquiring data to be identified and node data of a current business process;
inputting the data to be recognized and the node data into an intention recognition model to obtain an intention recognition result corresponding to the node data;
and executing the target operation corresponding to the intention recognition result.
2. The intention identification method according to claim 1, wherein the acquiring data to be identified and node data where the current business process is located comprises:
acquiring initial data to be identified and initial node data;
and respectively carrying out feature coding on the initial data to be identified and the initial node data to obtain the data to be identified and the node data.
3. The intention identification method according to claim 1, wherein the acquiring data to be identified and node data where the current business process is located comprises:
acquiring voice to be detected, and performing voice text recognition processing on the voice to be detected to obtain data to be recognized;
and inquiring the current business process progress to obtain the node data.
4. The intention recognition method according to claim 3, wherein the performing voice text recognition processing on the voice to be detected to obtain the data to be recognized comprises:
performing voice text conversion on the voice to be detected to obtain initial data;
and performing keyword extraction or invalid information filtering processing on the initial data to obtain the data to be identified.
5. An intention recognition model training method for generating an intention recognition model recited in any one of claims 1 to 4, comprising:
acquiring training data; the training data comprises training intent data and training node data;
and training the initial model by using the training data to obtain an intention recognition model.
6. The intent recognition model training method of claim 5, wherein said obtaining training data comprises:
acquiring a plurality of training intention data and a plurality of training node data;
respectively utilizing each training intention data and each training node data to form a plurality of initial training data;
setting a category label corresponding to the training intention data for positive initial training data, and setting a negative label for negative initial training data to obtain the training data;
wherein the positive initial training data is the initial training data for which the training intent data matches the training node data; the negative initial training data is the initial training data for which the training intent data does not match the training node data.
7. The method of claim 5, wherein the initial model is a tree model.
8. An intention recognition apparatus, comprising:
the data to be identified acquisition module is used for acquiring the data to be identified and the node data of the current business process;
the identification module is used for inputting the data to be identified and the node data into an intention identification model to obtain an intention identification result corresponding to the node data;
and the execution module is used for executing the target operation corresponding to the intention recognition result.
9. An intention recognition model training apparatus, comprising:
the training data acquisition module is used for acquiring training data, and the training data comprises training intention data and training node data;
and the training module is used for training the initial model by using the training data to obtain an intention recognition model.
10. An electronic device comprising a memory and a processor, wherein:
the memory is used for storing a computer program;
the processor is configured to execute the computer program to implement the intention recognition method according to any one of claims 1 to 4 and/or the intention recognition model training method according to any one of claims 5 to 7.
11. A computer-readable storage medium for storing a computer program, wherein the computer program when executed by a processor implements the intent recognition method of any of claims 1 to 4 and/or the intent recognition model training method of any of claims 5 to 7.
CN202110465878.2A 2021-04-28 2021-04-28 Intention recognition method, intention recognition model training method, device and equipment Pending CN113515594A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110465878.2A CN113515594A (en) 2021-04-28 2021-04-28 Intention recognition method, intention recognition model training method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110465878.2A CN113515594A (en) 2021-04-28 2021-04-28 Intention recognition method, intention recognition model training method, device and equipment

Publications (1)

Publication Number Publication Date
CN113515594A true CN113515594A (en) 2021-10-19

Family

ID=78064023

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110465878.2A Pending CN113515594A (en) 2021-04-28 2021-04-28 Intention recognition method, intention recognition model training method, device and equipment

Country Status (1)

Country Link
CN (1) CN113515594A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109359296A (en) * 2018-09-18 2019-02-19 深圳前海微众银行股份有限公司 Public sentiment emotion identification method, device and computer readable storage medium
CN109800306A (en) * 2019-01-10 2019-05-24 深圳Tcl新技术有限公司 It is intended to analysis method, device, display terminal and computer readable storage medium
CN111131889A (en) * 2019-12-31 2020-05-08 深圳创维-Rgb电子有限公司 Method and system for adaptively adjusting images and sounds in scene and readable storage medium
CN111309915A (en) * 2020-03-03 2020-06-19 爱驰汽车有限公司 Method, system, device and storage medium for training natural language of joint learning
US20200251091A1 (en) * 2017-08-29 2020-08-06 Tiancheng Zhao System and method for defining dialog intents and building zero-shot intent recognition models
CN111883115A (en) * 2020-06-17 2020-11-03 马上消费金融股份有限公司 Voice flow quality inspection method and device
CN112154465A (en) * 2018-09-19 2020-12-29 华为技术有限公司 Method, device and equipment for learning intention recognition model
CN112185358A (en) * 2020-08-24 2021-01-05 维知科技张家口有限责任公司 Intention recognition method, model training method, device, equipment and medium
CN112202978A (en) * 2020-08-24 2021-01-08 维知科技张家口有限责任公司 Intelligent outbound call system, method, computer system and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Xianglong et al.: "PaddlePaddle Deep Learning in Practice [M]", China Machine Press, 31 August 2020, pages: 268 - 272 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114357994A (en) * 2022-01-06 2022-04-15 京东科技信息技术有限公司 Intention recognition processing and confidence degree judgment model generation method and device
CN114398903A (en) * 2022-01-21 2022-04-26 平安科技(深圳)有限公司 Intention recognition method and device, electronic equipment and storage medium
CN114398903B (en) * 2022-01-21 2023-06-20 平安科技(深圳)有限公司 Intention recognition method, device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110377716B (en) Interaction method and device for conversation and computer readable storage medium
CN108509619B (en) Voice interaction method and device
WO2020186778A1 (en) Error word correction method and device, computer device, and storage medium
US9767092B2 (en) Information extraction in a natural language understanding system
CN107591155B (en) Voice recognition method and device, terminal and computer readable storage medium
CN109710727B (en) System and method for natural language processing
US10319379B2 (en) Methods and systems for voice dialogue with tags in a position of text for determining an intention of a user utterance
KR20180064504A (en) Personalized entity pronunciation learning
CN110910903B (en) Speech emotion recognition method, device, equipment and computer readable storage medium
CN111261151B (en) Voice processing method and device, electronic equipment and storage medium
US20170076716A1 (en) Voice recognition server and control method thereof
JP2017058483A (en) Voice processing apparatus, voice processing method, and voice processing program
CN111274797A (en) Intention recognition method, device and equipment for terminal and storage medium
US20180122369A1 (en) Information processing system, information processing apparatus, and information processing method
CN113515594A (en) Intention recognition method, intention recognition model training method, device and equipment
CN110827803A (en) Method, device and equipment for constructing dialect pronunciation dictionary and readable storage medium
US20210249001A1 (en) Dialog System Capable of Semantic-Understanding Mapping Between User Intents and Machine Services
KR20180012639A (en) Voice recognition method, voice recognition device, apparatus comprising Voice recognition device, storage medium storing a program for performing the Voice recognition method, and method for making transformation model
KR102312993B1 (en) Method and apparatus for implementing interactive message using artificial neural network
KR20200080400A (en) Method for providing sententce based on persona and electronic device for supporting the same
CN111326154A (en) Voice interaction method and device, storage medium and electronic equipment
KR102536944B1 (en) Method and apparatus for speech signal processing
US11615787B2 (en) Dialogue system and method of controlling the same
CN116778967B (en) Multi-mode emotion recognition method and device based on pre-training model
US10600405B2 (en) Speech signal processing method and speech signal processing apparatus

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: Room 221, 2 / F, block C, 18 Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Technology Holding Co.,Ltd.

Address before: Room 221, 2 / F, block C, 18 Kechuang 11th Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Applicant before: Jingdong Digital Technology Holding Co., Ltd