WO2022105119A1 - Method for generating a training corpus for an intent recognition model, and related device - Google Patents

Info

Publication number
WO2022105119A1
WO2022105119A1 · PCT/CN2021/090462 (CN2021090462W)
Authority
WO
WIPO (PCT)
Prior art keywords
corpus
inquiry
query
target
related corpus
Prior art date
Application number
PCT/CN2021/090462
Other languages
English (en)
Chinese (zh)
Inventor
孙向欣 (SUN Xiangxin)
Original Assignee
平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2022105119A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/332 Query formulation
    • G06F16/3329 Natural language query formulation or dialogue systems
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3344 Query execution using natural language analysis
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present application relates to the field of big data technology, and in particular, to a training corpus generation method for an intent recognition model and related devices.
  • human-machine dialogues mostly use intent recognition models to identify customer intentions.
  • in some scenarios, customer intentions rely on AI (Artificial Intelligence) inquiries, while in other scenarios they do not. Therefore, during training of the intent recognition model, whether to fill the corresponding AI inquiry into a training sample is usually determined according to this dependency.
  • the inventor realized that, since the dependence of customer intentions on AI inquiries cannot be judged during actual production, the prediction inputs to the model always contain AI inquiries. As a result, the model's training mode and prediction mode are inconsistent, and the accuracy of the intent recognition model in production is lower than in the training environment.
  • the purpose of the embodiments of the present application is to propose a training corpus generation method for an intent recognition model and related equipment, so as to improve the quality of the training corpus of the intent recognition model.
  • the embodiment of the present application provides a training corpus generation method for an intent recognition model, which adopts the following technical solutions:
  • a training corpus generation method for an intent recognition model comprising the following steps:
  • acquiring the target inquiry-related corpus from the target inquiry-related corpus set, determining the inquiry category corresponding to the target inquiry-related corpus based on the AI inquiry corpus, and generating a first training sample based on that inquiry category and the target inquiry-related corpus;
  • the first training sample and the second training sample are used as training corpus and output, wherein the training corpus is used for training an intention recognition model.
  • the embodiment of the present application also provides a training corpus generation device for an intent recognition model, which adopts the following technical solutions:
  • a training corpus generation device for an intent recognition model comprising:
  • the matching module is used to receive the AI query corpus pre-labeled with the query category and the customer response corpus pre-labeled with the intent label, and perform a screening operation on the customer response corpus based on a preset regular expression to obtain query-related corpus and non-inquiry-related corpus, wherein the customer answer corpus and the AI inquiry corpus have a one-to-one mapping relationship;
  • an establishing module, used to establish an inquiry-related corpus set and a non-inquiry-related corpus set based on the inquiry-related corpus and the non-inquiry-related corpus respectively;
  • a calculation module, configured to calculate the similarity between each non-inquiry-related corpus entry in the non-inquiry-related corpus set and the inquiry-related corpus set, and to adjust the inquiry-related corpus set and the non-inquiry-related corpus set based on the similarity, obtaining a target inquiry-related corpus and a target non-inquiry-related corpus;
  • a generating module, configured to acquire the target inquiry-related corpus from the target inquiry-related corpus set, determine the inquiry category corresponding to the target inquiry-related corpus based on the AI inquiry corpus, and generate a first training sample based on that inquiry category and the target inquiry-related corpus;
  • an association module, used to acquire the target non-inquiry-related corpus from the target non-inquiry-related corpus set and, based on the intent label, associate the target non-inquiry-related corpus with a preset inquiry category to obtain a second training sample;
  • the output module is used for outputting the first training sample and the second training sample as training corpus, wherein the training corpus is used for training the intention recognition model.
  • the embodiment of the present application also provides a computer device, which adopts the following technical solutions:
  • a computer device comprising a memory and a processor, wherein computer-readable instructions are stored in the memory, and when the processor executes the computer-readable instructions, the following method for generating a training corpus of an intent recognition model is implemented:
  • acquiring the target inquiry-related corpus from the target inquiry-related corpus set, determining the inquiry category corresponding to the target inquiry-related corpus based on the AI inquiry corpus, and generating a first training sample based on that inquiry category and the target inquiry-related corpus;
  • the first training sample and the second training sample are used as training corpus and output, wherein the training corpus is used for training an intention recognition model.
  • the embodiments of the present application also provide a computer-readable storage medium, which adopts the following technical solutions:
  • a computer-readable storage medium where computer-readable instructions are stored on the computer-readable storage medium, and when the computer-readable instructions are executed by a processor, the following method for generating a training corpus of an intent recognition model is implemented:
  • acquiring the target inquiry-related corpus from the target inquiry-related corpus set, determining the inquiry category corresponding to the target inquiry-related corpus based on the AI inquiry corpus, and generating a first training sample based on that inquiry category and the target inquiry-related corpus;
  • the first training sample and the second training sample are used as training corpus and output, wherein the training corpus is used for training an intention recognition model.
  • This application calculates the similarity between each non-inquiry-related corpus entry and the inquiry-related corpus set, and adjusts the inquiry-related and non-inquiry-related corpus sets based on that similarity, so that the determined target inquiry-related corpus and target non-inquiry-related corpus are more accurate.
  • By associating the target non-inquiry-related corpus with preset inquiry categories based on intent labels, the problem of not filling inquiry categories for training corpus that does not depend on the AI inquiry corpus is solved, without causing an explosion of the training corpus, thus ensuring the efficiency of model training.
  • The training corpus generated in this way keeps the accuracy of the intent recognition model at a high level.
  • FIG. 1 is an exemplary system architecture diagram to which the present application can be applied;
  • FIG. 2 is a flowchart of an embodiment of a training corpus generation method for an intent recognition model according to the present application
  • FIG. 3 is a schematic structural diagram of an embodiment of an apparatus for generating training corpus of an intent recognition model according to the present application
  • FIG. 4 is a schematic structural diagram of an embodiment of a computer device according to the present application.
  • the system architecture 100 may include terminal devices 101 , 102 , and 103 , a network 104 and a server 105 .
  • the network 104 is a medium used to provide a communication link between the terminal devices 101 , 102 , 103 and the server 105 .
  • the network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
  • the user can use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages and the like.
  • Various communication client applications can be installed on the terminal devices 101, 102, and 103, such as web browser applications, shopping applications, search applications, instant messaging tools, email clients, social platform software, and the like.
  • the terminal devices 101, 102, and 103 may be various electronic devices that have a display screen and support web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptops, desktops, and the like.
  • the server 105 may be a server that provides various services, such as a background server that provides support for the pages displayed on the terminal devices 101 , 102 , and 103 .
  • the method for generating training corpus for the intent recognition model is generally performed by a server/terminal device, and accordingly, the apparatus for generating training corpus for the intent recognition model is generally set in the server/terminal device.
  • terminal devices, networks and servers in FIG. 1 are merely illustrative. There can be any number of terminal devices, networks and servers according to implementation needs.
  • the training corpus generation method for the intent recognition model includes the following steps:
  • S1 Receive the AI inquiry corpus pre-labeled with inquiry categories and the customer answer corpus pre-labeled with intent labels, and perform a screening operation on the customer answer corpus based on a preset regular expression to obtain inquiry-related corpus and non-inquiry-related corpus, wherein the customer answer corpus and the AI inquiry corpus have a one-to-one mapping relationship.
  • an annotator pre-marks an intent label on the customer answer corpus under each inquiry category, where the inquiry category may include six categories from Q1 to Q6.
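  • As an illustrative sketch (the field names and example texts below are hypothetical, not from the patent), the one-to-one mapping between the pre-labeled AI inquiry corpus (categories Q1 to Q6) and the customer answer corpus (intent labels) can be represented as:

```python
# Hypothetical layout of the pre-labeled inputs: each AI inquiry corpus entry,
# labeled with an inquiry category Q1-Q6, maps one-to-one to a customer answer
# corpus entry labeled with an intent label.
ai_inquiry_corpus = [
    {"id": 1, "text": "Have you received the reminder?", "category": "Q1"},
    {"id": 2, "text": "Can you confirm your address?", "category": "Q2"},
]
customer_answer_corpus = [
    {"id": 1, "text": "Yes, I saw the message.", "intent": "confirmed"},
    {"id": 2, "text": "I have saved it.", "intent": "noted"},
]

def inquiry_category_for_answer(answer_id):
    """Resolve the inquiry category of an answer via the one-to-one mapping."""
    category_by_id = {entry["id"]: entry["category"] for entry in ai_inquiry_corpus}
    return category_by_id[answer_id]
```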
  • the successfully matched customer answer corpus is used as the inquiry-related corpus, and the remaining customer answer corpus is used as the non-inquiry-related corpus, which facilitates further processing of the customer answer corpus.
  • examples of inquiry-related corpora are as follows:
  • the electronic device (for example, the server/terminal device shown in FIG. 1) on which the method for generating training corpus of the intent recognition model runs can receive the AI inquiry corpus and the customer answer corpus through a wired or wireless connection.
  • the above wireless connection methods may include, but are not limited to, 3G/4G, WiFi, Bluetooth, WiMAX, Zigbee, UWB (ultra wideband), and other wireless connection methods currently known or developed in the future.
  • matching the customer answer corpus based on a preset regular expression, using the successfully matched customer answer corpus as inquiry-related corpus and the customer answer corpus that fails to match as non-inquiry-related corpus, includes:
  • the suspected inquiry-related corpus is marked as inquiry-related or non-inquiry-related based on the confirmation of the suspected inquiry-related corpus by the designated person;
  • the method of regular matching is adopted to extract suspected query-related corpus from the customer's answer corpus, and the remaining corpus, that is, the corpus that fails to match, is suspected non-inquiry-related corpus.
  • the suspected inquiry-related corpora are handed over to designated personnel for confirmation.
  • the suspected inquiry-related corpus marked as inquiry-related is regarded as inquiry-related corpus; the suspected non-inquiry-related corpus and the suspected inquiry-related corpus marked as non-inquiry-related are regarded as non-inquiry-related corpus.
  • in this way, the inquiry-related corpus can be further confirmed, and the accuracy with which the customer answer corpus is divided can be improved.
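  • A minimal sketch of the regular-expression screening step in Python (the patterns below are illustrative assumptions; the patent does not disclose concrete expressions):

```python
import re

# Answers matching any pattern become suspected inquiry-related corpus and go
# to manual confirmation; the rest are suspected non-inquiry-related corpus.
SUSPECTED_INQUIRY_PATTERNS = [
    re.compile(r"(yes|no|received|already|I will)", re.IGNORECASE),
]

def screen_answers(answers):
    """Split customer answers by regex match, per step S1."""
    suspected_related, suspected_unrelated = [], []
    for text in answers:
        if any(p.search(text) for p in SUSPECTED_INQUIRY_PATTERNS):
            suspected_related.append(text)
        else:
            suspected_unrelated.append(text)
    return suspected_related, suspected_unrelated
```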
  • S2 Establish an inquiry-related corpus and a non-inquiry-related corpus based on the inquiry-related corpus and the non-inquiry-related corpus, respectively.
  • S3 Calculate the similarity between each non-inquiry-related corpus entry in the non-inquiry-related corpus set and the inquiry-related corpus set, and adjust the inquiry-related corpus set and the non-inquiry-related corpus set based on the similarity, obtaining the target inquiry-related corpus and the target non-inquiry-related corpus.
  • the similarity between each non-inquiry-related corpus entry and the inquiry-related corpus set is calculated, and the entries in the inquiry-related and non-inquiry-related corpus sets are adjusted according to this similarity, achieving more rigorous target inquiry-related and target non-inquiry-related corpora.
  • the calculating of the similarity between each non-inquiry-related corpus entry in the non-inquiry-related corpus set and the inquiry-related corpus set includes:
  • the cosine similarity with the largest numerical value is taken as the similarity between the current non-question-related corpus and the query-related corpus.
  • the language representation model is called to embed the inquiry-related corpus, converting the inquiry-related corpus into 768-dimensional inquiry-related word vectors, where each inquiry-related word vector is the Embedding of one corpus entry; Embedding refers to representing a corpus entry with a low-dimensional vector.
  • the language representation model can be the BERT (Bidirectional Encoder Representations from Transformers) model.
  • the BERT model has wide versatility and can capture longer-distance dependencies.
  • the language representation model is called to convert the non-inquiry-related corpus into a 768-dimensional non-inquiry-related word vector, which can represent information in both directions.
  • the cosine similarity between each non-query-related word vector and each of the query-related word vectors is traversed and calculated. After the traversal, the maximum value of the cosine similarity is taken as the similarity between the current non-query-related word vector and the query-related corpus.
  • an example of inquiry-related word vectors is as follows:
  • a non-inquiry-related corpus entry, for example: "I have saved it."
  • the non-inquiry-related word vector corresponding to the non-inquiry-related corpus is a 768-dimensional vector: [0.07, 0.002, 0.04,..., 0.009], and the non-inquiry-related word vector is calculated with each inquiry-related word vector respectively.
  • Cosine similarity: for two vectors A and B of the same dimension, the cosine similarity is calculated as cos(A, B) = (A · B) / (‖A‖ × ‖B‖), that is, the dot product of A and B divided by the product of their Euclidean norms.
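  • The traversal and the max-cosine rule above can be sketched as follows (toy low-dimensional vectors stand in for the 768-dimensional BERT embeddings):

```python
import math

def cosine_similarity(a, b):
    """cos(A, B) = (A . B) / (||A|| * ||B||) for two same-dimension vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def similarity_to_corpus(answer_vec, inquiry_vecs):
    """Per the method: traverse all inquiry-related word vectors and keep the
    maximum cosine similarity as the answer entry's similarity score."""
    return max(cosine_similarity(answer_vec, v) for v in inquiry_vecs)
```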
  • adjusting the inquiry-related corpus and the non-inquiry-related corpus based on the similarity to obtain the target inquiry-related corpus and the target non-inquiry-related corpus includes:
  • when the similarity is greater than a preset first similarity threshold, the corresponding non-inquiry-related corpus entry is taken as to-be-confirmed corpus; the to-be-confirmed corpus is then allocated to the non-inquiry-related corpus set or the inquiry-related corpus set according to the classification of the designated person, obtaining the target inquiry-related corpus and the target non-inquiry-related corpus.
  • otherwise, the non-inquiry-related corpus entry remains in the non-inquiry-related corpus set.
  • for example, if the similarity is 0.3, which is less than the first similarity threshold of 0.6, the entry is non-inquiry-related corpus.
  • if the similarity is 0.9, which is greater than the first similarity threshold of 0.6, the entry becomes to-be-confirmed corpus.
  • when a non-inquiry-related corpus entry is used as to-be-confirmed corpus, it is extracted from the non-inquiry-related corpus set for redistribution, achieving more rigorous target inquiry-related and target non-inquiry-related corpora.
  • the adjusting the query-related corpus and the non-query-related corpus based on the similarity, and obtaining the target query-related corpus and the target non-query-related corpus include:
  • when the similarity is greater than the preset second threshold, the corresponding non-inquiry-related corpus entry is deleted directly, which can effectively improve the processing speed of the computer.
  • the first to-be-confirmed corpus is allocated to the non-inquiry-related corpus set or the inquiry-related corpus set according to the classification of the designated person, obtaining a first inquiry-related corpus and a first non-inquiry-related corpus;
  • the second to-be-confirmed corpus is allocated to the first non-inquiry-related corpus set or the first inquiry-related corpus set according to the classification of the designated person, obtaining a second inquiry-related corpus and a second non-inquiry-related corpus;
  • the designated person in this application may be an annotator. If the similarity is greater than the first similarity threshold, the corpus is included in the corpus to be confirmed by the business, that is, the corresponding non-inquiry-related corpus is regarded as the corpus to be confirmed, and the part of the corpus is returned to the annotator.
  • the annotator confirms whether the corpus is related to the AI inquiry. According to the annotator's marking, the to-be-confirmed corpus related to AI inquiries is added to the inquiry-related corpus set, and the to-be-confirmed corpus not related to AI inquiries is added to the non-inquiry-related corpus set.
  • if the maximum similarity is greater than the preset second similarity threshold, the entry is deleted directly; if the similarity is less than the second similarity threshold, the entry simply remains non-inquiry-related corpus and stays in the non-inquiry-related corpus set.
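  • A sketch of this two-threshold routing (the first threshold of 0.6 follows the example above; the second threshold value of 0.95 is an illustrative assumption, as the patent does not fix it):

```python
FIRST_THRESHOLD = 0.6    # above this: route to the annotator for confirmation
SECOND_THRESHOLD = 0.95  # above this: near-duplicate of inquiry corpus, delete

def route_non_inquiry_corpus(entries):
    """entries: (text, max_similarity) pairs from the non-inquiry corpus set."""
    keep, to_confirm, deleted = [], [], []
    for text, sim in entries:
        if sim > SECOND_THRESHOLD:
            deleted.append(text)      # deleted directly, speeds up processing
        elif sim > FIRST_THRESHOLD:
            to_confirm.append(text)   # returned to the designated person
        else:
            keep.append(text)         # stays non-inquiry-related
    return keep, to_confirm, deleted
```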
  • S4 Acquire the target inquiry-related corpus from the target inquiry-related corpus set, determine the inquiry category corresponding to the target inquiry-related corpus based on the AI inquiry corpus, and generate a first training sample based on that inquiry category and the target inquiry-related corpus.
  • the generated first training sample belongs to the training sample that the customer intends to rely on the AI query corpus.
  • the first training sample is generated through the target query related corpus and the corresponding query category, so as to ensure the dependency between the first training sample and the customer's intention.
  • S5 Acquire the target non-inquiry-related corpus in the target non-inquiry-related corpus, associate the target non-inquiry-related corpus with a preset inquiry category based on the intent label, and obtain a second training sample .
  • the target non-inquiry-related corpus is associated with a preset inquiry category to obtain a second training sample.
  • the second training sample belongs to the training sample in which the customer's intention does not depend on the AI query corpus.
  • associating the target non-inquiry-related corpus with a preset inquiry category based on the intent tag, and obtaining the second training sample includes:
  • sample equalization processing is performed on the target non-inquiry-related corpus corresponding to each of the intent tags, to obtain balanced corpus;
  • the balanced corpus corresponding to each of the intent labels is associated with a preset query category to obtain the second training sample.
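  • A minimal sketch of building the second training samples, assuming a placeholder category value "NONE" and the field names shown (the patent does not name the preset inquiry category):

```python
# Non-inquiry-related corpus gets a preset (placeholder) inquiry category so
# that the training-time input format matches production, where an AI inquiry
# field is always present.
PRESET_CATEGORY = "NONE"

def build_second_samples(target_non_inquiry):
    """target_non_inquiry: (text, intent_label) pairs after sample balancing."""
    return [
        {"inquiry_category": PRESET_CATEGORY, "text": text, "intent": intent}
        for text, intent in target_non_inquiry
    ]
```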
  • sample balance is performed on the non-inquiry-related corpus under each intent label, so as to prevent the samples under different intent labels from being too different and affecting the subsequent training effect of the model.
  • the quantity threshold includes a first quantity threshold and a second quantity threshold, wherein the first quantity threshold is greater than the second quantity threshold; performing sample equalization on the target non-inquiry-related corpus corresponding to each intent label according to the preset quantity thresholds, to obtain balanced corpus, includes:
  • if the quantity of target non-inquiry-related corpus corresponding to the current intent label is greater than the first quantity threshold, randomly filtering the target non-inquiry-related corpus corresponding to the current intent label until its quantity is less than or equal to the first quantity threshold;
  • if the quantity of target non-inquiry-related corpus corresponding to the current intent label is less than the second quantity threshold, performing corpus expansion on the target non-inquiry-related corpus corresponding to the current intent label until its quantity is greater than or equal to the second quantity threshold.
  • the first quantity threshold may be set to 2500, and the second quantity threshold may be set to 1000.
  • the specific values of the first quantity threshold and/or the second quantity threshold can be adjusted according to actual needs, as long as they are applicable.
  • for intent labels with more than 2,500 corpus entries, 2,500 non-inquiry-related entries are randomly selected and retained.
  • for intent labels with fewer than 1,000 entries, the corpus is expanded to 1,000.
  • the corpus of each intent label is kept to no more than 2,500 and no fewer than 1,000 entries, because the intent labels used for model training would otherwise be seriously imbalanced.
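  • The balancing rule can be sketched as follows, using the 2500/1000 thresholds from the example above (per-label, pure Python):

```python
import random

# Cap each intent label at the first quantity threshold (2500) by random
# filtering, and expand labels below the second threshold (1000) by randomly
# duplicating existing entries (simple oversampling).
FIRST_QTY, SECOND_QTY = 2500, 1000

def balance_label(corpus, rng=random.Random(0)):
    """Return a balanced copy of one intent label's corpus entries."""
    if len(corpus) > FIRST_QTY:
        return rng.sample(corpus, FIRST_QTY)  # random down-filter
    if corpus and len(corpus) < SECOND_QTY:
        extra = [rng.choice(corpus) for _ in range(SECOND_QTY - len(corpus))]
        return corpus + extra                 # random duplication
    return corpus
```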
  • the corpus expansion on the target non-inquiry related corpus corresponding to the current intent tag includes:
  • a preset random oversampling package is called, and the target non-query related corpus corresponding to the current intent tag is randomly copied through the random oversampling package.
  • the method of corpus expansion is to use Python to call the RandomOverSample (random oversampling) package.
  • with the RandomOverSample package, some entries in the corpus can be randomly copied to expand the corpus to a predetermined size.
  • the RandomOverSample package is often used to randomly replicate minority-class samples; the goal is to make the number of minority-class samples equal to the majority class, obtaining a new balanced dataset.
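  • The patent names a "RandomOverSample" package; in the Python ecosystem the commonly used implementation is `RandomOverSampler` from the third-party imbalanced-learn library. A dependency-free sketch of the same behavior, duplicating minority-class samples until every class matches the majority count:

```python
import random
from collections import Counter

def random_oversample(samples, labels, rng=random.Random(0)):
    """Randomly duplicate minority-class samples until all classes are equal."""
    counts = Counter(labels)
    target = max(counts.values())
    out_samples, out_labels = list(samples), list(labels)
    for label, n in counts.items():
        pool = [s for s, l in zip(samples, labels) if l == label]
        for _ in range(target - n):
            out_samples.append(rng.choice(pool))
            out_labels.append(label)
    return out_samples, out_labels
```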
  • S6 Use the first training sample and the second training sample as training corpora and output, wherein the training corpus is used for training an intent recognition model.
  • a better training corpus is obtained, and the consistency between the accuracy in the training environment and the accuracy in the production environment is improved.
  • an intent recognition model trained on this corpus can identify customer intent more accurately.
  • the preset intent recognition model is trained by the training corpus, and the trained intent recognition model is obtained.
  • the above training corpus can also be stored in a node of a blockchain.
  • the blockchain referred to in this application is a new application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanism, and encryption algorithm.
  • Blockchain, essentially a decentralized database, is a chain of data blocks linked by cryptographic methods. Each data block contains a batch of network transaction information, which is used to verify the validity of the information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.
  • the present application can be applied in the field of smart medical care, thereby promoting the construction of smart cities.
  • the aforementioned storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM) or the like.
  • the present application provides an embodiment of a training corpus generation device for an intent recognition model, which corresponds to the method embodiment shown in FIG. 2 .
  • the device can be specifically applied to various electronic devices.
  • the training corpus generation device 300 for the intent recognition model described in this embodiment includes: a matching module 301, an establishing module 302, a computing module 303, a generating module 304, an association module 305 and an output module 306.
  • the matching module 301 is configured to receive the AI inquiry corpus pre-labeled with inquiry categories and the customer answer corpus pre-labeled with intent labels, and to perform a screening operation on the customer answer corpus based on a preset regular expression to obtain inquiry-related corpus and non-inquiry-related corpus, wherein the customer answer corpus and the AI inquiry corpus have a one-to-one mapping relationship; the establishing module 302 is used to establish an inquiry-related corpus set and a non-inquiry-related corpus set based on the inquiry-related corpus and the non-inquiry-related corpus respectively; the computing module 303 is configured to calculate the similarity between each non-inquiry-related corpus entry in the non-inquiry-related corpus set and the inquiry-related corpus set, and to adjust the inquiry-related corpus set and the non-inquiry-related corpus set based on the similarity, obtaining a target inquiry-related corpus and a target non-inquiry-related corpus;
  • the generating module 304 is configured to acquire the target inquiry-related corpus from the target inquiry-related corpus set, determine the inquiry category corresponding to it based on the AI inquiry corpus, and generate a first training sample based on that inquiry category and the target inquiry-related corpus;
  • the association module 305 is configured to acquire the target non-inquiry-related corpus from the target non-inquiry-related corpus set and, based on the intent label, associate the target non-inquiry-related corpus with a preset inquiry category to obtain a second training sample; and the output module 306 is configured to output the first training sample and the second training sample as training corpus, wherein the training corpus is used for training the intent recognition model.
  • the present application calculates the similarity between each non-inquiry-related corpus entry and the inquiry-related corpus set, and adjusts the inquiry-related and non-inquiry-related corpus sets based on the similarity, so that the determined target inquiry-related corpus and target non-inquiry-related corpus are more accurate.
  • by associating the target non-inquiry-related corpus with preset inquiry categories based on intent labels, the problem of not filling inquiry categories for training corpus that does not depend on the AI inquiry corpus is solved, without causing an explosion of the training corpus, thus ensuring the efficiency of model training.
  • the training corpus generated in this way keeps the accuracy of the intent recognition model at a high level.
  • the matching module 301 includes a matching sub-module, a display sub-module, a marking sub-module and a generating sub-module.
  • the matching sub-module is used to match the customer answer corpus based on a preset regular expression, using the successfully matched customer answer corpus as suspected inquiry-related corpus and the customer answer corpus that fails to match as suspected non-inquiry-related corpus;
  • the display sub-module is used to display the suspected inquiries-related corpus on the preset front-end page, and notify the designated personnel to confirm the suspected inquiries-related corpus;
  • the marking sub-module is used to, upon identifying that the designated person has completed the confirmation, mark the suspected inquiry-related corpus as inquiry-related or non-inquiry-related based on the designated person's confirmation;
  • the generating sub-module is used to take the suspected inquiry-related corpus marked as inquiry-related as the inquiry-related corpus, and to take the suspected non-inquiry-related corpus together with the suspected inquiry-related corpus marked as non-inquiry-related as the non-inquiry-related corpus.
  • the calculation module 303 includes a first vector submodule, a second vector submodule, a similarity calculation submodule and a similarity confirmation submodule.
  • the first vector sub-module is used to input the current inquiry-related corpus into a pre-trained language representation model to obtain inquiry-related word vectors; the second vector sub-module is used to input the non-inquiry-related corpus into the pre-trained language representation model to obtain non-inquiry-related word vectors; the similarity calculation sub-module is used to traverse and calculate the cosine similarity between the current non-inquiry-related word vector and each of the inquiry-related word vectors.
  • the similarity confirmation sub-module is used for taking the cosine similarity with the largest numerical value as the similarity between the current non-inquiry-related corpus and the inquiry-related corpus.
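The similarity computation described above, taking the maximum cosine similarity against every inquiry-related word vector, can be sketched like this (the toy vectors stand in for outputs of a pre-trained language representation model, which the patent does not name):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def corpus_similarity(non_inquiry_vec, inquiry_vecs):
    """Similarity of one non-inquiry-related corpus entry to the
    inquiry-related corpus: the largest cosine similarity over all
    inquiry-related word vectors."""
    return max(cosine_similarity(non_inquiry_vec, v) for v in inquiry_vecs)

# Toy 2-D vectors for illustration only.
inquiry_vecs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
sim = corpus_similarity(np.array([1.0, 1.0]), inquiry_vecs)  # 1/sqrt(2)
```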
  • the calculation module 303 also includes a first identification sub-module and a first allocation sub-module.
  • the first identification sub-module is used to identify whether the similarity is greater than a preset first similarity threshold and, when it is, to take the corresponding non-inquiry-related corpus as the to-be-confirmed corpus and notify the designated person to classify the to-be-confirmed corpus;
  • the first allocation sub-module is configured to, when it is recognized that the designated person has completed the classification of the to-be-confirmed corpus, allocate the to-be-confirmed corpus to the non-inquiry-related corpus or the inquiry-related corpus according to the designated person's classification, to obtain a target inquiry-related corpus and a target non-inquiry-related corpus.
  • the above calculation module 303 is further configured to: identify whether the similarity is greater than a preset second similarity threshold and, when it is, delete the corresponding non-inquiry-related corpus from the non-inquiry-related corpus to obtain a target inquiry-related corpus and a target non-inquiry-related corpus.
  • the calculation module 303 further includes a first calculation sub-module, a second identification sub-module, a second allocation sub-module, a second calculation sub-module, a third identification sub-module, a third allocation sub-module, a third calculation sub-module and a deletion sub-module.
  • the first calculation sub-module is used to calculate the similarity between each non-inquiry-related corpus in the non-inquiry-related corpus and the inquiry-related corpus;
  • the second identification sub-module is used to identify whether the similarity is greater than the preset first similarity threshold and, when it is, to take the corresponding non-inquiry-related corpus as the first to-be-confirmed corpus and notify the designated person to classify the first to-be-confirmed corpus;
  • the second allocation sub-module is configured to, when it is recognized that the designated person has completed the classification of the first to-be-confirmed corpus, allocate the first to-be-confirmed corpus to the non-inquiry-related corpus or the inquiry-related corpus according to the designated person's classification, to obtain a first inquiry-related corpus and a first non-inquiry-related corpus;
  • the second calculation sub-module is used to calculate the first similarity between each first non-inquiry-related corpus in the first non-inquiry-related corpus and the first inquiry-related corpus; the third identification sub-module and the third allocation sub-module then allocate the corpus confirmed by the designated person to the first non-inquiry-related corpus or the first inquiry-related corpus, to obtain a second inquiry-related corpus and a second non-inquiry-related corpus;
  • the third calculation sub-module is used to calculate the second similarity between each second non-inquiry-related corpus in the second non-inquiry-related corpus and the second inquiry-related corpus;
  • the deletion sub-module is used to identify whether the second similarity is greater than a preset second similarity threshold and, when it is, to delete the corresponding second non-inquiry-related corpus from the second non-inquiry-related corpus to obtain the target inquiry-related corpus and the target non-inquiry-related corpus.
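Collapsed into a single pass, the two-threshold logic above routes each non-inquiry-related entry to one of three outcomes: delete it when it is extremely similar to the inquiry-related corpus, send it for human confirmation when it is borderline, and keep it otherwise. The sketch below assumes the first threshold is lower than the second; the concrete threshold values are illustrative, not from the patent.

```python
def route_corpus(similarity, first_threshold=0.8, second_threshold=0.95):
    """Route one non-inquiry-related corpus entry by its similarity to the
    inquiry-related corpus (threshold values are illustrative only)."""
    if similarity > second_threshold:
        return "delete"   # too close to an inquiry: drop it from the corpus
    if similarity > first_threshold:
        return "confirm"  # borderline: send to the designated person
    return "keep"         # clearly non-inquiry-related

routes = [route_corpus(s) for s in (0.99, 0.85, 0.3)]
```

In the iterative embodiment described above, the confirmation step runs first to produce the first and second corpora, and the deletion step is applied afterwards to the residual second non-inquiry-related corpus.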
  • the association module 305 includes a determination sub-module, an equalization sub-module and an association sub-module.
  • the determination sub-module is used to determine the target non-inquiry-related corpus corresponding to each of the intent tags;
  • the equalization sub-module is used to perform sample equalization processing on the target non-inquiry-related corpus corresponding to each of the intent tags based on a preset quantity threshold, to obtain a balanced corpus;
  • the association sub-module is used to associate the balanced corpus corresponding to each of the intent tags with the preset query categories with equal preset probability, to obtain the second training samples.
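The equal-probability association performed by the association sub-module can be sketched as follows; the category names and utterances are hypothetical, and the fixed seed is only for reproducibility of the example.

```python
import random

def associate_with_categories(balanced_corpus, query_categories, seed=0):
    """Pair each balanced non-inquiry-related utterance with a preset query
    category drawn uniformly at random (equal probability per category)."""
    rng = random.Random(seed)
    return [(text, rng.choice(query_categories)) for text in balanced_corpus]

samples = associate_with_categories(
    ["I feel fine today", "no further questions"],
    ["symptom_check", "medication", "follow_up"],
)
```

Drawing categories uniformly keeps any single query category from being over-represented among the non-inquiry-related training samples.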
  • the quantity threshold includes a first quantity threshold and a second quantity threshold, wherein the first quantity threshold is greater than the second quantity threshold, and the equalization sub-module includes an identification unit, a screening unit and an expansion unit.
  • the identification unit is used to identify whether the quantity of the target non-inquiry-related corpus corresponding to the current intent tag is greater than the first quantity threshold or less than the second quantity threshold;
  • the screening unit is configured to, when the quantity of the target non-inquiry-related corpus corresponding to the current intent tag is greater than the first quantity threshold, randomly screen the target non-inquiry-related corpus corresponding to the current intent tag until its quantity is less than or equal to the first quantity threshold;
  • the expansion unit is configured to, when the quantity of the target non-inquiry-related corpus corresponding to the current intent tag is less than the second quantity threshold, expand the target non-inquiry-related corpus corresponding to the current intent tag until its quantity is greater than or equal to the second quantity threshold.
  • the expansion unit is further configured to: call a preset random oversampling package, and use the random oversampling package to randomly replicate the target non-inquiry-related corpus corresponding to the current intent tag.
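The sample equalization described above, randomly screening down when a tag has too many samples and randomly replicating when it has too few, can be sketched like this (the patent names a "random oversampling package" without specifying one, so plain random duplication is used here for illustration):

```python
import random

def equalize(corpus, first_threshold, second_threshold, seed=0):
    """Sample-equalize one intent tag's target non-inquiry-related corpus:
    randomly screen down when above the first (upper) quantity threshold,
    randomly replicate entries when below the second (lower) threshold."""
    rng = random.Random(seed)
    corpus = list(corpus)
    if len(corpus) > first_threshold:
        corpus = rng.sample(corpus, first_threshold)   # undersample without replacement
    elif len(corpus) < second_threshold:
        while len(corpus) < second_threshold:          # oversample with replacement
            corpus.append(rng.choice(corpus))
    return corpus

balanced = equalize(["a", "b"], first_threshold=10, second_threshold=5)
```

Applying this per intent tag keeps the number of samples per tag inside the [second_threshold, first_threshold] band before the category association step.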
  • This application calculates the similarity between each non-inquiry-related corpus in the non-inquiry-related corpus and the inquiry-related corpus, and adjusts the inquiry-related corpus and the non-inquiry-related corpus based on that similarity, so that the determined target inquiry-related corpus and target non-inquiry-related corpus are more accurate.
  • By associating the target non-inquiry-related corpus with the preset query categories based on the intent tags, the problem of query categories not being filled in for training corpus that does not depend on the AI inquiry corpus is solved, without causing an explosion of the training corpus, which ensures the efficiency of model training.
  • the training corpus generated in this way keeps the accuracy of the intent recognition model at a high level.
  • FIG. 4 is a block diagram of a basic structure of a computer device according to this embodiment.
  • the computer device 200 includes a memory 201, a processor 202 and a network interface 203 that communicate with each other through a system bus. It should be noted that only the computer device 200 with components 201-203 is shown in the figure, but it should be understood that not all of the shown components need to be implemented, and more or fewer components may be implemented instead. Those skilled in the art can understand that the computer device here is a device that can automatically perform numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA), a digital signal processor (Digital Signal Processor, DSP), an embedded device, and the like.
  • the computer equipment may be a desktop computer, a notebook computer, a palmtop computer, a cloud server and other computing equipment.
  • the computer device can perform human-computer interaction with the user through a keyboard, a mouse, a remote control, a touch pad or a voice control device.
  • the memory 201 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, and the like.
  • the computer-readable storage medium may be non-volatile or volatile.
  • the memory 201 may be an internal storage unit of the computer device 200 , such as a hard disk or a memory of the computer device 200 .
  • the memory 201 may also be an external storage device of the computer device 200, such as a plug-in hard disk, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, flash memory card (Flash Card), etc.
  • the memory 201 may also include both an internal storage unit of the computer device 200 and an external storage device thereof.
  • the memory 201 is generally used to store the operating system and various application software installed on the computer device 200 , such as computer-readable instructions for a method for generating training corpus of an intent recognition model.
  • the memory 201 can also be used to temporarily store various types of data that have been output or will be output.
  • the processor 202 may be a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, a microprocessor, or other data processing chips.
  • the processor 202 is typically used to control the overall operation of the computer device 200 .
  • the processor 202 is configured to execute computer-readable instructions stored in the memory 201 or process data, for example, computer-readable instructions for executing a method for generating training corpus of the intent recognition model.
  • the network interface 203 may include a wireless network interface or a wired network interface, and the network interface 203 is generally used to establish a communication connection between the computer device 200 and other electronic devices.
  • the problem of the query category not being filled in for training corpus that does not depend on the AI inquiry corpus is solved; at the same time, better training corpus is obtained without causing an explosion of the training corpus, and the training corpus effectively improves the accuracy of the intent recognition model in identifying customer intent.
  • the present application also provides another embodiment, that is, to provide a computer-readable storage medium, where the computer-readable storage medium stores computer-readable instructions, and the computer-readable instructions can be executed by at least one processor to The at least one processor is caused to perform the above-described method for generating training corpus of an intent recognition model.
  • the problem of the query category not being filled in for training corpus that does not depend on the AI inquiry corpus is solved; at the same time, better training corpus is obtained without causing an explosion of the training corpus, and the training corpus effectively improves the accuracy of the intent recognition model in identifying customer intent.
  • the method of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course can also be implemented by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present application, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as ROM/RAM, magnetic disk or CD-ROM) and includes several instructions to cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to execute the methods described in the various embodiments of this application.

Abstract

The present invention relates to the field of big data and is applicable to the field of intelligent medical treatment. Disclosed are a method for generating training corpus for an intent recognition model, and a related device. The method comprises: receiving an AI inquiry corpus pre-annotated with a query category and a customer answer corpus pre-annotated with an intent tag, the customer answer corpus comprising inquiry-related corpus and non-inquiry-related corpus; establishing an inquiry-related corpus library and a non-inquiry-related corpus library; adjusting the inquiry-related corpus library and the non-inquiry-related corpus library based on the similarity between the non-inquiry-related corpus and the inquiry-related corpus library, to obtain a target inquiry-related corpus library and a target non-inquiry-related corpus library; establishing a first training sample based on the target inquiry-related corpus library; establishing a second training sample based on the intent tag and the non-inquiry-related corpus library; and taking the first training sample and the second training sample as training corpus and generating same. The training corpus can be stored in a blockchain. By means of the method, the quality of training corpus is improved.
PCT/CN2021/090462 2020-11-17 2021-04-28 Procédé de génération de corpus d'apprentissage pour un modèle de reconnaissance d'intention, et dispositif associé WO2022105119A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011288871.X 2020-11-17
CN202011288871.XA CN112395390B (zh) 2020-11-17 2020-11-17 意图识别模型的训练语料生成方法及其相关设备

Publications (1)

Publication Number Publication Date
WO2022105119A1 true WO2022105119A1 (fr) 2022-05-27

Family

ID=74606272

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/090462 WO2022105119A1 (fr) 2020-11-17 2021-04-28 Procédé de génération de corpus d'apprentissage pour un modèle de reconnaissance d'intention, et dispositif associé

Country Status (2)

Country Link
CN (1) CN112395390B (fr)
WO (1) WO2022105119A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112395390B (zh) * 2020-11-17 2023-07-25 平安科技(深圳)有限公司 意图识别模型的训练语料生成方法及其相关设备
CN113158680B (zh) * 2021-03-23 2024-05-07 北京新方通信技术有限公司 一种语料处理及意图识别的方法和装置
CN114281968B (zh) * 2021-12-20 2023-02-28 北京百度网讯科技有限公司 一种模型训练及语料生成方法、装置、设备和存储介质
CN115408509B (zh) * 2022-11-01 2023-02-14 杭州一知智能科技有限公司 一种意图识别方法、系统、电子设备和存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170161363A1 (en) * 2015-12-04 2017-06-08 International Business Machines Corporation Automatic Corpus Expansion using Question Answering Techniques
CN108153780A (zh) * 2016-12-05 2018-06-12 阿里巴巴集团控股有限公司 一种人机对话装置及其实现人机对话的方法
WO2018157700A1 (fr) * 2017-03-02 2018-09-07 腾讯科技(深圳)有限公司 Procédé et dispositif permettant de générer un dialogue, et support d'informations
CN111368043A (zh) * 2020-02-19 2020-07-03 中国平安人寿保险股份有限公司 基于人工智能的事件问答方法、装置、设备及存储介质
CN111428010A (zh) * 2019-01-10 2020-07-17 北京京东尚科信息技术有限公司 人机智能问答的方法和装置
CN112395390A (zh) * 2020-11-17 2021-02-23 平安科技(深圳)有限公司 意图识别模型的训练语料生成方法及其相关设备

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110619050B (zh) * 2018-06-20 2023-05-09 华为技术有限公司 意图识别方法及设备
CN109508376A (zh) * 2018-11-23 2019-03-22 四川长虹电器股份有限公司 可在线纠错更新的意图识别方法及装置
CN110032724B (zh) * 2018-12-19 2022-11-25 阿里巴巴集团控股有限公司 用于识别用户意图的方法及装置
CN110135551B (zh) * 2019-05-15 2020-07-21 西南交通大学 一种基于词向量和循环神经网络的机器人聊天方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170161363A1 (en) * 2015-12-04 2017-06-08 International Business Machines Corporation Automatic Corpus Expansion using Question Answering Techniques
CN108153780A (zh) * 2016-12-05 2018-06-12 阿里巴巴集团控股有限公司 一种人机对话装置及其实现人机对话的方法
WO2018157700A1 (fr) * 2017-03-02 2018-09-07 腾讯科技(深圳)有限公司 Procédé et dispositif permettant de générer un dialogue, et support d'informations
CN111428010A (zh) * 2019-01-10 2020-07-17 北京京东尚科信息技术有限公司 人机智能问答的方法和装置
CN111368043A (zh) * 2020-02-19 2020-07-03 中国平安人寿保险股份有限公司 基于人工智能的事件问答方法、装置、设备及存储介质
CN112395390A (zh) * 2020-11-17 2021-02-23 平安科技(深圳)有限公司 意图识别模型的训练语料生成方法及其相关设备

Also Published As

Publication number Publication date
CN112395390B (zh) 2023-07-25
CN112395390A (zh) 2021-02-23

Similar Documents

Publication Publication Date Title
WO2022105119A1 (fr) Procédé de génération de corpus d'apprentissage pour un modèle de reconnaissance d'intention, et dispositif associé
WO2022142014A1 (fr) Procédé de classification de texte sur la base d'une fusion d'informations multimodales et dispositif associé correspondant
WO2022126971A1 (fr) Procédé et appareil de groupement de textes selon la densité, dispositif et support de stockage
US9460117B2 (en) Image searching
US11727053B2 (en) Entity recognition from an image
US10713306B2 (en) Content pattern based automatic document classification
WO2022126970A1 (fr) Procédé et dispositif d'identification de risques de fraude financière, dispositif informatique et support de stockage
US20190392258A1 (en) Method and apparatus for generating information
WO2022134584A1 (fr) Procédé et appareil de vérification d'image de bien immobilier, dispositif informatique et support de stockage
US11977567B2 (en) Method of retrieving query, electronic device and medium
CN112632278A (zh) 一种基于多标签分类的标注方法、装置、设备及存储介质
WO2021103594A1 (fr) Procédé et dispositif de détection de degré de tacitivité, serveur et support de stockage lisible
CN116956326A (zh) 权限数据的处理方法、装置、计算机设备及存储介质
CN116661936A (zh) 页面数据的处理方法、装置、计算机设备及存储介质
CN114547257B (zh) 类案匹配方法、装置、计算机设备及存储介质
WO2022126962A1 (fr) Procédé sur la base d'un graphique de connaissances destiné à détecter un corpus de guidage et de soutien et dispositif associé
WO2022142032A1 (fr) Procédé et appareil de vérification de signature manuscrite, dispositif informatique et support de stockage
CN113065354B (zh) 语料中地理位置的识别方法及其相关设备
CN113989618A (zh) 可回收物品分类识别方法
CN112036501A (zh) 基于卷积神经网络的图片的相似度检测方法及其相关设备
CN111597453A (zh) 用户画像方法、装置、计算机设备及计算机可读存储介质
CN115250200B (zh) 服务授权认证方法及其相关设备
CN117076775A (zh) 资讯数据的处理方法、装置、计算机设备及存储介质
CN117113400A (zh) 一种数据泄露溯源方法、装置、设备及其存储介质
CN116796730A (zh) 基于人工智能的文本纠错方法、装置、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21893275

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21893275

Country of ref document: EP

Kind code of ref document: A1