CN114117037A - Intention recognition method, device, equipment and storage medium - Google Patents

Intention recognition method, device, equipment and storage medium

Info

Publication number
CN114117037A
CN114117037A (application CN202111303756.XA)
Authority
CN
China
Prior art keywords
intention
label
candidate
confidence
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111303756.XA
Other languages
Chinese (zh)
Inventor
张云云
夏海兵
佘丽丽
毛宇
王福海
纳颖泉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Merchants Union Consumer Finance Co Ltd
Original Assignee
Merchants Union Consumer Finance Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Merchants Union Consumer Finance Co Ltd filed Critical Merchants Union Consumer Finance Co Ltd
Priority to CN202111303756.XA priority Critical patent/CN114117037A/en
Publication of CN114117037A publication Critical patent/CN114117037A/en
Pending legal-status Critical Current

Classifications

    • G06F 16/35 — Information retrieval; clustering or classification of unstructured textual data
    • G06F 40/279 — Natural language analysis; recognition of textual entities
    • G06F 40/289 — Phrasal analysis, e.g. finite state techniques or chunking
    • G06F 40/30 — Semantic analysis
    • G06N 3/045 — Neural networks; combinations of networks
    • G06N 3/048 — Neural networks; activation functions
    • G06N 3/084 — Learning methods; backpropagation, e.g. using gradient descent

Abstract

The application relates to an intention recognition method, apparatus, device, and storage medium. The method comprises: acquiring a text to be detected and preprocessing it to obtain a corresponding target numerical sequence; inputting the target numerical sequence into a multi-intention recognition model evolved from a single-intention recognition model to obtain a candidate intention label set, the set comprising at least one candidate intention label and the confidence corresponding to each candidate intention label; if the confidence of at least one candidate intention label is greater than a first threshold, performing adaptive threshold calculation on the candidate intention label set to obtain an adaptive threshold; and selecting, as target intention labels, the intention labels in the candidate intention label set whose confidences are not less than the adaptive threshold. Different numbers of intention labels can thus be output adaptively for different texts, and text intentions can be accurately recognized.

Description

Intention recognition method, device, equipment and storage medium
Technical Field
The present application relates to the field of natural language processing technologies, and in particular, to an intention recognition method, apparatus, device, and storage medium.
Background
Natural Language Processing (NLP) is an important direction in the fields of computer science and artificial intelligence; it studies theories and methods that enable effective communication between humans and computers in natural language. With the rapid development and wide application of artificial intelligence technology, more and more industries involve man-machine dialogue systems, so the language requirements of users need to be effectively recognized as intentions in order to provide users with accurate corresponding services.
User intention processing is a science integrating linguistics, computer science, and mathematics. It is mainly applied to recommendation and search, computational advertising, man-machine dialogue, machine translation, public opinion monitoring, automatic summarization, viewpoint extraction, text classification, question answering, text semantic comparison, speech recognition, Chinese OCR (Optical Character Recognition), and the like. Most existing intention recognition models can only recognize a single intention, and the accuracy of multi-intention recognition results is low.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide an intention recognition method, apparatus, device, and storage medium capable of accurately recognizing text intentions.
In a first aspect, the present application provides an intent recognition method, the method comprising:
acquiring a text to be detected, and preprocessing the text to be detected to obtain a target numerical sequence corresponding to the text to be detected;
inputting the target numerical sequence into a multi-intention recognition model evolved from a single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and the confidence corresponding to each candidate intention label;
if the confidence of at least one candidate intention label is greater than a first threshold, performing adaptive threshold calculation on the candidate intention label set to obtain an adaptive threshold;
and selecting, as the target intention label, the label corresponding to a confidence that is not less than the adaptive threshold in the candidate intention label set.
In one embodiment, the way to evolve the single-intent recognition model into the multi-intent recognition model includes:
preprocessing a multi-intention sample set to obtain a multi-intention sample numerical sequence;
inputting the multi-intention sample numerical sequence into the single-intention recognition model to obtain an initial intention label;
calculating a cross entropy loss between the initial intent label and an actual intent label of a multi-intent sample set;
and adjusting the model parameters of the single-intention recognition model through backpropagation according to the cross entropy loss until a preset training termination condition is met, so as to obtain the multi-intention recognition model.
In one embodiment, if the confidence level of at least one candidate intention tag is greater than the first threshold, performing adaptive threshold computation on the candidate intention tag set to obtain an adaptive threshold, includes:
if the confidence of at least one candidate intention label is greater than the first threshold, sorting the confidences in the candidate intention label set in descending order to obtain a confidence sequence;
starting from the first confidence in the confidence sequence, sequentially taking each confidence as a target element, calculating a first mean and a first variance of the target element together with all confidences ranked before it, and calculating a second mean and a second variance of all confidences ranked after it;
and determining a difference value from the first mean, the second mean, the first variance, and the second variance, and taking the target element corresponding to the maximum difference value as the adaptive threshold.
In one embodiment, the determining the difference value through the first mean, the second mean, the first variance and the second variance includes:
calculating the difference between the first mean and the second mean to obtain a first difference value;
calculating the difference between the first variance and the second variance to obtain a second difference value;
and obtaining the difference value from the first difference value and the second difference value, wherein the difference value is directly proportional to the square of the first difference value and inversely proportional to the second difference value.
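The two embodiments above — the split-point search and the difference value — can be sketched together in one short routine. This is a hedged reconstruction, not the patented implementation: the function name is illustrative, and since the text only states that the difference value is proportional to the square of the first difference and inversely proportional to the second difference, the exact form below (with an absolute value and a small epsilon to avoid division by zero) is an assumption.

```python
import statistics

def find_adaptive_threshold(confidences):
    """Sketch of the adaptive-threshold search: sort the confidences in
    descending order, then for each split point compute the means and
    variances of the head (target element plus everything before it)
    and the tail (everything after it), and keep the target element
    whose difference value is largest."""
    seq = sorted(confidences, reverse=True)
    best_score, best_threshold = float("-inf"), seq[0]
    for i in range(len(seq) - 1):          # keep the tail non-empty
        head, tail = seq[:i + 1], seq[i + 1:]
        mean1, mean2 = statistics.mean(head), statistics.mean(tail)
        var1, var2 = statistics.pvariance(head), statistics.pvariance(tail)
        # Assumed form of the difference value: proportional to the squared
        # mean gap, inversely proportional to the variance gap (epsilon
        # guards against a zero second difference).
        score = (mean1 - mean2) ** 2 / (abs(var1 - var2) + 1e-9)
        if score > best_score:
            best_score, best_threshold = score, seq[i]
    return best_threshold
```

For a confidence sequence with a clear gap, such as [0.9, 0.85, 0.1, 0.05], the maximum difference value falls at the split after 0.85, so 0.85 becomes the adaptive threshold and both high-confidence labels would be kept.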
In one embodiment, after selecting, as the target intention labels, the labels corresponding to confidences greater than the adaptive threshold in the candidate intention label set, the method further includes:
and removing the conflict label in the target intention label to obtain a final intention label identification result.
In one embodiment, the removing of the conflicting labels from the target intention labels to obtain a final intention label recognition result includes:
removing, from the conflicting labels, those with lower confidence, to obtain the final intention label recognition result.
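As a minimal illustration of this conflict-removal step — assuming conflicts are supplied as an explicit list of mutually exclusive label pairs, a representation the text does not specify — the lower-confidence member of each conflicting pair could be dropped like this:

```python
def remove_conflicts(labels, conflict_pairs):
    """Among each pair of mutually exclusive labels, keep the one with
    the higher confidence. `labels` maps label -> confidence;
    `conflict_pairs` is an assumed representation of the conflicts."""
    result = dict(labels)
    for a, b in conflict_pairs:
        if a in result and b in result:
            drop = a if result[a] < result[b] else b
            del result[drop]
    return result
```

The label names used below are hypothetical examples, not labels defined by the application.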
In a second aspect, the present application also provides an intent recognition apparatus, the apparatus comprising:
the preprocessing module is used for acquiring a text to be detected, and preprocessing the text to be detected to obtain a target numerical sequence corresponding to the text to be detected;
the model processing module is used for inputting the target numerical sequence into a multi-intention recognition model evolved from a single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and the confidence corresponding to each candidate intention label;
the self-adaptive threshold module is used for carrying out self-adaptive threshold calculation on the candidate intention label set to obtain a self-adaptive threshold if the confidence coefficient of at least one candidate intention label is greater than a first threshold;
and the intention identification module is used for selecting the label corresponding to the confidence coefficient which is not less than the self-adaptive threshold in the candidate intention label set as the target intention label.
In one embodiment, the intention recognition apparatus further includes a model training module for evolving the single intention recognition model into a multiple intention recognition model, including:
the preprocessing unit is used for preprocessing the multi-intention sample set to obtain a multi-intention sample numerical sequence;
the single-intention model unit is used for inputting the multi-intention sample numerical sequence into the single-intention recognition model to obtain an initial intention label;
a cross entropy calculation unit to calculate a cross entropy loss between the initial intent label and an actual intent label of a multi-intent sample set;
and the parameter training unit is used for adjusting the model parameters of the single-intention recognition model through backpropagation according to the cross entropy loss until a preset training termination condition is met, so as to obtain the multi-intention recognition model.
In one embodiment, the adaptive threshold module comprises:
the confidence sequence unit is used for sorting the confidences in the candidate intention label set in descending order to obtain a confidence sequence if the confidence of at least one candidate intention label is greater than the first threshold;
the target element unit is used for, starting from the first confidence in the confidence sequence, sequentially taking each confidence as a target element, calculating a first mean and a first variance of the target element together with all confidences ranked before it, and calculating a second mean and a second variance of all confidences ranked after it;
and the difference calculating unit is used for determining a difference value from the first mean, the second mean, the first variance, and the second variance, and taking the target element corresponding to the maximum difference value as the adaptive threshold.
In one embodiment, the difference calculating unit is further configured to calculate the difference between the first mean and the second mean to obtain a first difference value; calculate the difference between the first variance and the second variance to obtain a second difference value; and obtain the difference value from the first difference value and the second difference value, wherein the difference value is directly proportional to the square of the first difference value and inversely proportional to the second difference value.
In one embodiment, the intention recognition apparatus further includes a conflict processing module, configured to remove conflicting labels from the target intention labels after the labels corresponding to confidences greater than the adaptive threshold in the candidate intention label set are selected as target intention labels, so as to obtain a final intention label recognition result.
In one embodiment, the conflict processing module is further configured to remove, from the conflicting labels, those with lower confidence, so as to obtain the final intention label recognition result.
In a third aspect, the present application further provides an electronic device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring a text to be detected, and preprocessing the text to be detected to obtain a target numerical sequence corresponding to the text to be detected;
inputting the target numerical sequence into a multi-intention recognition model evolved from a single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and the confidence corresponding to each candidate intention label;
if the confidence of at least one candidate intention label is greater than a first threshold, performing adaptive threshold calculation on the candidate intention label set to obtain an adaptive threshold;
and selecting, as the target intention label, the label corresponding to a confidence greater than the adaptive threshold in the candidate intention label set.
In a fourth aspect, the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the following steps:
acquiring a text to be detected, and preprocessing the text to be detected to obtain a target numerical sequence corresponding to the text to be detected;
inputting the target numerical sequence into a multi-intention recognition model evolved from a single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and the confidence corresponding to each candidate intention label;
if the confidence of at least one candidate intention label is greater than a first threshold, performing adaptive threshold calculation on the candidate intention label set to obtain an adaptive threshold;
and selecting, as the target intention label, the label corresponding to a confidence greater than the adaptive threshold in the candidate intention label set.
In a fifth aspect, the present application further provides a computer program product comprising a computer program which, when executed by a processor, performs the following steps:
acquiring a text to be detected, and preprocessing the text to be detected to obtain a target numerical sequence corresponding to the text to be detected;
inputting the target numerical sequence into a multi-intention recognition model evolved from a single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and the confidence corresponding to each candidate intention label;
if the confidence of at least one candidate intention label is greater than a first threshold, performing adaptive threshold calculation on the candidate intention label set to obtain an adaptive threshold;
and selecting, as the target intention label, the label corresponding to a confidence greater than the adaptive threshold in the candidate intention label set.
According to the intention recognition method, apparatus, device, and storage medium described above, a text to be detected is obtained and preprocessed to obtain a corresponding target numerical sequence; the target numerical sequence is input into a multi-intention recognition model evolved from a single-intention recognition model to obtain a candidate intention label set comprising at least one candidate intention label and the confidence corresponding to each candidate intention label; if the confidence of at least one candidate intention label is greater than a first threshold, adaptive threshold calculation is performed on the candidate intention label set to obtain an adaptive threshold; and the intention labels whose confidences are greater than the adaptive threshold are selected from the candidate intention label set as target intention labels. By evolving the single-intention recognition model into a multi-intention recognition model and computing an adaptive threshold, different numbers of intention labels can be output adaptively for different texts, and text intentions can be accurately recognized.
Drawings
FIG. 1 is a diagram of an application environment of the intent recognition method in one embodiment;
FIG. 2 is a flow diagram illustrating an intent recognition method, according to one embodiment;
FIG. 3 is a schematic flow diagram of evolving a multi-intent recognition model in one embodiment;
FIG. 4 is a schematic flow chart illustrating step S208 in the embodiment shown in FIG. 2;
FIG. 5 is a schematic flowchart of step S406 in the embodiment shown in FIG. 4;
FIG. 6 is a flow diagram illustrating an intent recognition method, in accordance with one embodiment;
FIG. 7 is a flow diagram illustrating an intent recognition method, in accordance with one embodiment;
FIG. 8 is a block diagram of an intent recognition apparatus in one embodiment;
FIG. 9 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The intention identification method provided by the embodiment of the application can be applied to the application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104, or may be located on the cloud or other network server. After receiving a text to be detected sent by the terminal 102, the server 104 preprocesses the text to be detected to obtain a target numerical sequence corresponding to the text to be detected; inputting the target numerical value sequence into a multi-intention recognition model evolved from a single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and a confidence degree corresponding to the candidate intention label; if the confidence of at least one candidate intention label is larger than a first threshold value, performing adaptive threshold value calculation on the candidate intention label set to obtain an adaptive threshold value; and selecting the label corresponding to the confidence coefficient which is greater than the self-adaptive threshold value in the candidate intention label set as the target intention label. The terminal 102 may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices and portable wearable devices, and the internet of things devices may be smart speakers, smart televisions, smart air conditioners, smart car-mounted devices, and the like. The portable wearable device can be a smart watch, a smart bracelet, a head-mounted device, and the like. The server 104 may be implemented as a stand-alone server or as a server cluster comprised of multiple servers.
In one embodiment, as shown in fig. 2, an intention recognition method is provided, which is described by taking the method as an example applied to the server in fig. 1, and includes the following steps:
S202, acquiring a text to be detected, and preprocessing the text to be detected to obtain a target numerical sequence corresponding to the text to be detected.
The text to be detected may be directly input as text, or input as speech and then converted to text through speech-to-text technology. After the text to be detected is obtained, it is preprocessed: for example, special symbols, sensitive information, punctuation marks, and stop words are removed, the text is segmented into words, and marker symbols are added. Word segmentation means splitting the text into words, for example using the jieba library; the marker symbols indicate the start and end positions of the text to be detected. The preprocessed text is then numerically encoded to a preset sequence length, for example using any of the Keras library, TensorFlow, PyTorch, MXNet, Caffe, PaddlePaddle, and the like, or using an independently designed programming method, to obtain the target numerical sequence corresponding to the text to be detected. Among these, Caffe is a deep learning framework developed by contributors to the Berkeley Vision and Learning Center (BVLC) community.
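A minimal sketch of this preprocessing step, under stated assumptions: character-level tokenization stands in for jieba word segmentation so the example stays self-contained, and the vocabulary, marker token IDs, and sequence length are all illustrative.

```python
import re

def preprocess(text, vocab, max_len=16, cls_id=1, sep_id=2, pad_id=0, unk_id=3):
    """Sketch of the preprocessing step: strip punctuation and special
    symbols, tokenize (here character-level; jieba word segmentation
    could be substituted), add start/end marker tokens, and encode to a
    fixed-length numerical sequence. The vocab and IDs are illustrative."""
    cleaned = re.sub(r"[^\w]", "", text)           # drop punctuation etc.
    tokens = list(cleaned)                          # char-level fallback
    ids = [cls_id] + [vocab.get(t, unk_id) for t in tokens] + [sep_id]
    ids = ids[:max_len]                             # truncate to preset length
    return ids + [pad_id] * (max_len - len(ids))    # pad to preset length
```

For example, `preprocess("hi!", {"h": 10, "i": 11}, max_len=6)` yields `[1, 10, 11, 2, 0, 0]`: start marker, two token IDs, end marker, and padding to the preset length.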
S204, inputting the target numerical sequence into a multi-intention recognition model evolved from the single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and the confidence corresponding to each candidate intention label.
The single-intention recognition model can only recognize a single intention label for single-intention text, whereas the multi-intention recognition model can recognize multiple intention labels for multi-intention text. The single-intention recognition model is trained on a text data set of multi-intention samples to obtain a multi-intention recognition model capable of recognizing multiple intention labels. The target numerical sequence is input into the multi-intention recognition model evolved from the single-intention recognition model to obtain a candidate intention label set comprising at least one candidate intention label and the corresponding confidences: if the text to be detected contains only one intention, the obtained candidate intention label set contains one candidate intention label and its confidence; if the text contains multiple intentions, the set contains multiple candidate intention labels and the confidence corresponding to each. The confidences in the candidate intention label set can be understood as predicted probabilities of the actual intention labels, arranged in the fixed order of the intention labels, one confidence per intention label. The single-intention recognition model may be a text classification model such as a convolutional neural network text classifier (TextCNN), a fast text classifier (FastText), or a deep pyramid convolutional neural network (DPCNN).
S206, if the confidence coefficient of at least one candidate intention label is larger than the first threshold value, the candidate intention label set is subjected to self-adaptive threshold value calculation to obtain a self-adaptive threshold value.
In the candidate intention label set, if the confidence of at least one candidate intention label is greater than the first threshold, adaptive threshold calculation is performed on the candidate intention label set to obtain the corresponding adaptive threshold. The first threshold is a hyperparameter and can be set according to the specific situation, for example to a value not greater than 0.4. The adaptive threshold is a threshold that better matches the overall distribution of the confidences in the candidate intention label set; that is, different candidate intention label sets, with different confidences for their candidate intention labels, yield different adaptive thresholds.
In an alternative embodiment, the confidence level of each candidate intention label in the set of candidate intention labels is compared with a first threshold, and if the confidence level of at least one candidate intention label is greater than the first threshold, the set of candidate intention labels is subjected to adaptive threshold calculation to obtain an adaptive threshold.
In another optional embodiment, the confidence corresponding to the largest candidate intention label in the candidate intention label set is obtained first, the largest confidence is compared with the first threshold, and if the confidence is greater than the first threshold, the candidate intention label set is subjected to adaptive threshold calculation to obtain an adaptive threshold.
S208, selecting, as the target intention label, the intention label corresponding to a confidence not less than the adaptive threshold in the candidate intention label set.
After the adaptive threshold is determined, the intention labels whose confidences are not less than the adaptive threshold are selected from the candidate intention label set as target intention labels. That is, the intention labels with confidences greater than or equal to the adaptive threshold may be selected sequentially, in the order of the candidate intention labels in the set, as the target intention labels of the intention recognition.
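The selection step itself reduces to a one-line filter; assuming the candidates are (label, confidence) pairs in their fixed label order, with hypothetical label names:

```python
def select_labels(candidates, adaptive_threshold):
    """Keep, in their original order, the candidate intention labels
    whose confidence is not less than the adaptive threshold."""
    return [label for label, conf in candidates if conf >= adaptive_threshold]
```

Note the `>=` comparison, matching the "not less than" condition of step S208, so a label whose confidence equals the adaptive threshold is kept.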
In the intention recognition method of this embodiment, a text to be detected is obtained and preprocessed to obtain a corresponding target numerical sequence; the target numerical sequence is input into a multi-intention recognition model evolved from a single-intention recognition model to obtain a candidate intention label set comprising at least one candidate intention label and the confidence corresponding to each candidate intention label; if the confidence of at least one candidate intention label is greater than a first threshold, adaptive threshold calculation is performed on the candidate intention label set to obtain an adaptive threshold; and the labels corresponding to confidences greater than the adaptive threshold are selected from the candidate intention label set as target intention labels. By evolving the single-intention recognition model into a multi-intention recognition model and computing an adaptive threshold, different numbers of intention labels can be output adaptively for different texts, and text intentions can be accurately recognized.
In one embodiment, as shown in FIG. 3, the way in which a single intent recognition model is evolved into a multiple intent recognition model includes the following steps:
and S302, preprocessing the multi-intention sample set to obtain a multi-intention sample numerical sequence.
S304, inputting the multi-intention sample numerical sequence into the single-intention recognition model to obtain an initial intention label.
S306, calculating the cross entropy loss between the initial intention label and the actual intention labels of the multi-intention sample set. In one implementation, the actual intention labels of the multi-intention sample set are encoded as follows: assuming the multi-intention sample set involves n intention labels arranged in a fixed order, for a given multi-intention sample the n labels are traversed in that fixed order; if the sample carries the label at the current position, that position is encoded as 1, otherwise as 0. The intention labels of the sample are thus encoded as a sequence of 0s and 1s: if the sample carries m labels, its encoding contains m ones and n - m zeros. The other multi-intention samples in the set are encoded in the same way.
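The 0/1 encoding described in S306 is a standard multi-hot encoding. A sketch, with illustrative label names:

```python
def encode_multi_hot(sample_labels, all_labels):
    """Encode a multi-intent sample's actual labels as the 0/1 sequence
    described in S306: traverse the n labels in their fixed order and
    write 1 where the sample carries that label, 0 elsewhere."""
    label_set = set(sample_labels)
    return [1 if label in label_set else 0 for label in all_labels]
```

A sample carrying m of the n labels thus yields a vector with m ones and n - m zeros, in the fixed label order.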
S308, reversely adjusting the model parameters of the single-intention recognition model according to the cross entropy loss until a preset training termination condition is met, obtaining the multi-intention recognition model.
In this embodiment, for the specific implementation of preprocessing the multi-intention sample set to obtain the multi-intention sample numerical sequence, reference may be made to the description of step S202 in the above embodiments, which is not repeated here. After the multi-intention sample numerical sequence is obtained, it is input into the single-intention recognition model to obtain an initial intention label; the cross entropy loss between the initial intention label and the actual intention label corresponding to the multi-intention sample set is calculated; and the model parameters of the single-intention recognition model are reversely adjusted according to the cross entropy loss until a preset training termination condition is met, obtaining target model parameters. The corresponding model parameters in the original single-intention recognition model are then replaced with the target model parameters to obtain the multi-intention recognition model. The preset termination condition may be a preset number of training iterations, that is, when the number of training iterations reaches the preset number, training is terminated and the target model parameters are obtained; or it may be a preset cross entropy loss, in which case, when the cross entropy loss reaches the preset value, the model parameters obtained in the last training iteration are the target model parameters.
In an optional embodiment, the single-intention recognition model is a convolutional neural network text classification (TextCNN) model, in which the activation function is a Sigmoid function and the loss function is a binary cross entropy (BCE) loss function. The multi-intention sample numerical sequence is input into the TextCNN model to obtain an initial intention label; the BCE loss function is used to calculate the cross entropy loss between the initial intention label and the actual intention label corresponding to the multi-intention sample set; and the model parameters of the single-intention recognition model are reversely adjusted according to the cross entropy loss until the cross entropy loss approaches zero, that is, until the output of the TextCNN model matches the actual intention label encoding. At this point the target model parameters are obtained, and the model with the target model parameters is the multi-intention recognition model.
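With the Sigmoid activation, each output position is scored independently as the probability that the text carries that label, and the loss averages a binary cross entropy over positions. A minimal sketch of that per-position BCE in plain Python (the function name and values are illustrative):

```python
import math

def bce_loss(predictions, targets):
    """Mean binary cross entropy between predicted confidences and a 0/1 label encoding."""
    eps = 1e-12  # clip probabilities to guard against log(0)
    total = 0.0
    for p, t in zip(predictions, targets):
        p = min(max(p, eps), 1 - eps)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(predictions)

# A perfectly confident, correct prediction gives (near) zero loss, matching
# the training termination condition described above.
print(bce_loss([1.0, 0.0, 1.0], [1, 0, 1]))
```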
The multi-intention recognition model obtained after training the TextCNN model outputs a numerical sequence of the same length as the actual intention label encoding sequence, where each value is the confidence of the intention label at the corresponding position; the closer a confidence is to 1, the more likely the text is considered to contain the corresponding intention label.
In one embodiment, as shown in fig. 4, if the confidence of at least one candidate intention tag is greater than the first threshold, the step S206 of performing adaptive threshold calculation on the candidate intention tag set to obtain an adaptive threshold includes:
S402, if the confidence of at least one candidate intention label is greater than the first threshold, sorting the confidences in the candidate intention label set in descending order to obtain a confidence sequence.
S404, starting from the first confidence in the confidence sequence, sequentially selecting each confidence as a target element, calculating a first mean and a first variance of the target element together with all confidences ranked before it, and calculating a second mean and a second variance of all confidences ranked after it.
In one specific example, assume the confidence sequence is 0.98, 0.96, 0.95, 0.92. First, the first confidence 0.98 is selected as the target element; the first mean and first variance of the target element and all confidences ranked before it, {0.98}, are 0.98 and 0, and the second mean and second variance of all confidences ranked after it, {0.96, 0.95, 0.92}, are 0.94 and 0.00030. Next, the second confidence 0.96 is selected as the target element; the first mean and first variance of {0.98, 0.96} are 0.97 and 0.00010, and the second mean and second variance of {0.95, 0.92} are 0.94 and 0.00025. Then, the third confidence 0.95 is selected as the target element; the first mean and first variance of {0.98, 0.96, 0.95} are 0.96 and 0.00017, and the second mean and second variance of {0.92} are 0.92 and 0. Finally, the fourth confidence 0.92 is selected as the target element; the first mean and first variance of {0.98, 0.96, 0.95, 0.92} are 0.95 and 0.00048, and since no confidence is ranked after the target element 0.92, the second mean and second variance are taken as 0 and 0, respectively. The details are shown in Table 1 below:
TABLE 1
Target element | First mean | First variance | Second mean | Second variance
0.98           | 0.98       | 0              | 0.94        | 0.00030
0.96           | 0.97       | 0.00010        | 0.94        | 0.00025
0.95           | 0.96       | 0.00017        | 0.92        | 0
0.92           | 0.95       | 0.00048        | 0           | 0
S406, determining a difference value from the first mean, the second mean, the first variance and the second variance, and taking the target element corresponding to the maximum difference value as the adaptive threshold.
In this embodiment, if the confidence of at least one candidate intention label is greater than the first threshold, the confidences in the candidate intention label set are sorted in descending order to obtain a confidence sequence. Starting from the first confidence in the confidence sequence, each confidence is selected in turn as a target element; the mean and variance of the target element together with all confidences ranked before it are calculated as the first mean and first variance, and the mean and variance of all confidences ranked after it as the second mean and second variance. A difference value is then determined from the first mean, the second mean, the first variance and the second variance, and the target element corresponding to the maximum difference value is taken as the adaptive threshold. The mean may be an arithmetic mean or a weighted mean.
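The per-split statistics described above can be sketched as follows; population variance is assumed here, and the embodiment's Table 1 reports these values rounded:

```python
def split_statistics(confidences, index):
    """Mean and population variance of the head (up to and including index) and the tail."""
    head = confidences[: index + 1]
    tail = confidences[index + 1 :]

    def mean_var(values):
        if not values:  # no confidences ranked after the target element
            return 0.0, 0.0
        m = sum(values) / len(values)
        v = sum((x - m) ** 2 for x in values) / len(values)
        return m, v

    return mean_var(head), mean_var(tail)

seq = [0.98, 0.96, 0.95, 0.92]
(m1, v1), (m2, v2) = split_statistics(seq, 1)  # target element 0.96
print(m1, v1)  # first mean and variance of {0.98, 0.96}
print(m2, v2)  # second mean and variance of {0.95, 0.92}
```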
In one embodiment, as shown in fig. 5, the step S406 of determining the difference value by the first mean, the second mean, the first variance and the second variance includes:
S502, calculating the difference between the first mean and the second mean to obtain a first difference value;
S504, calculating the difference between the first variance and the second variance to obtain a second difference value;
S506, obtaining the difference value from the first difference value and the second difference value, wherein the difference value is proportional to the square of the first difference value and inversely proportional to the second difference value.
In an alternative embodiment, the difference value diff is:
diff = (μ1 − μ2)² / (var1 − var2)
where μ1 is the first mean, μ2 is the second mean, var1 is the first variance, and var2 is the second variance.
In another alternative embodiment, the difference value diff is:
diff = (μ1 − μ2)² / (var1 − var2 + k)
where k is a constant greater than zero. That is, in a specific application scenario, the above formula may be appropriately deformed, on the basis of retaining the first mean, the second mean, the first variance and the second variance, to make the result more accurate.
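The whole adaptive-threshold search can be sketched as below. The patent's diff formula is defined only through the four statistics above; this sketch substitutes a common Fisher-style separation criterion, diff = (m1 − m2)² / (v1 + v2 + k), which uses the sum of the two variances plus the smoothing constant k in the denominator, so the chosen threshold is illustrative rather than the patent's exact result:

```python
def adaptive_threshold(confidences, k=1e-4):
    """Pick the confidence that best splits the sorted sequence into two groups."""
    seq = sorted(confidences, reverse=True)

    def mean_var(values):
        if not values:  # empty tail: second mean and variance taken as 0
            return 0.0, 0.0
        m = sum(values) / len(values)
        return m, sum((x - m) ** 2 for x in values) / len(values)

    best_diff, best_threshold = float("-inf"), seq[0]
    for i, target in enumerate(seq):
        m1, v1 = mean_var(seq[: i + 1])
        m2, v2 = mean_var(seq[i + 1 :])
        diff = (m1 - m2) ** 2 / (v1 + v2 + k)  # Fisher-style criterion (assumed form)
        if diff > best_diff:
            best_diff, best_threshold = diff, target
    return best_threshold

# Two well-separated clusters: the threshold lands at the edge of the top cluster.
print(adaptive_threshold([0.98, 0.96, 0.12, 0.08]))  # 0.96
```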
In one embodiment, as shown in fig. 6, there is provided an intention recognition method including the steps of:
s602, acquiring a text to be detected, and preprocessing the text to be detected to obtain a target numerical sequence corresponding to the text to be detected.
S604, inputting the target numerical value sequence into a multi-intention recognition model developed from the single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and confidence degrees corresponding to the candidate intention labels.
S606, if the confidence of at least one candidate intention label is greater than the first threshold, performing adaptive threshold calculation on the candidate intention label set to obtain an adaptive threshold.
S608, selecting the intention labels corresponding to confidences not less than the adaptive threshold in the candidate intention label set as the target intention labels.
S610, removing the conflict label in the target intention label to obtain a final intention label identification result.
In this embodiment, a text to be detected is first obtained and preprocessed to obtain a target numerical sequence corresponding to the text to be detected; the target numerical sequence is input into a multi-intention recognition model evolved from a single-intention recognition model to obtain a candidate intention label set, where the candidate intention label set includes at least one candidate intention label and a confidence corresponding to each candidate intention label; if the confidence of at least one candidate intention label is greater than a first threshold, adaptive threshold calculation is performed on the candidate intention label set to obtain an adaptive threshold; the labels corresponding to confidences greater than the adaptive threshold are selected as target intention labels, and the conflicting labels that do not meet a preset condition are removed from the target intention labels to obtain the final intention label recognition result. Conflicting labels are labels expressing mutually opposed intentions, for example "confirm transaction" and "deny transaction"; conflicting labels cannot exist simultaneously in the same text. Steps S602 to S608 correspond to steps S202 to S208 and are not repeated here.
In one embodiment, the step S610 of removing conflicting labels from the target intention labels to obtain the final intention label recognition result includes: removing, from each pair of conflicting labels, the label with the lower confidence. The final intention label recognition result includes the intention labels and the corresponding intention contents.
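A sketch of this conflict post-processing: among a pair of mutually exclusive labels, keep the one with the higher confidence. The conflict pairs and confidences below are illustrative assumptions:

```python
# Pairs of labels that cannot coexist in the same text (illustrative).
CONFLICT_PAIRS = [("confirm transaction", "deny transaction")]

def remove_conflicts(labels):
    """labels: dict mapping intention label -> confidence; returns a filtered dict."""
    result = dict(labels)
    for a, b in CONFLICT_PAIRS:
        if a in result and b in result:
            # Drop whichever side of the conflicting pair is less credible.
            loser = a if result[a] < result[b] else b
            del result[loser]
    return result

final = remove_conflicts({"confirm transaction": 0.55, "deny transaction": 0.91, "query balance": 0.88})
print(sorted(final))  # ['deny transaction', 'query balance']
```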
In one specific example, as shown in fig. 7, the intention identification method includes the steps of:
S702, acquiring the text to be detected. The server receives the text to be detected sent by the terminal.
S704, preprocessing the text to be detected to obtain a target numerical sequence. The server preprocesses the text to be detected, numerically encodes the preprocessed text to be detected, and obtains a target numerical sequence corresponding to the text to be detected.
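The preprocessing in S704 can be sketched as a tokenize-and-index step; the vocabulary, whitespace tokenization, and padding scheme below are illustrative assumptions rather than the patent's exact encoding:

```python
def preprocess(text, vocab, max_len=8, unk=1, pad=0):
    """Convert a text into a fixed-length target numerical sequence."""
    ids = [vocab.get(token, unk) for token in text.split()]  # unknown tokens -> unk id
    ids = ids[:max_len]                                      # truncate long texts
    return ids + [pad] * (max_len - len(ids))                # pad short texts

# Toy vocabulary; ids 0 and 1 are reserved for padding and unknown tokens.
vocab = {"i": 2, "want": 3, "to": 4, "check": 5, "my": 6, "balance": 7}
print(preprocess("i want to check my balance", vocab))  # [2, 3, 4, 5, 6, 7, 0, 0]
```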
S706, inputting the target numerical sequence into the multi-intention recognition model trained from the TextCNN model to obtain candidate intention labels and corresponding confidences. The TextCNN model is trained with the multi-intention sample set to obtain the corresponding multi-intention recognition model; the target numerical sequence obtained in step S704 is input into this model to obtain a candidate intention label set, which includes at least one candidate intention label and the confidence corresponding to each candidate intention label.
S708, judging whether the maximum confidence is greater than a first threshold K. The maximum confidence in the candidate intention label set is selected and compared with the first threshold K to judge whether it is greater than K; in this example, the first threshold K is 0.3.
S710, outputting the result as the "other" label. If the maximum confidence is less than or equal to the first threshold K, all confidences in the candidate intention label set are small, and the text to be detected is regarded as irrelevant text; that is, the model's judgment on the text is invalid, and the result is output as the "other" label.
S712, sorting the confidences in descending order, calculating the segmentation difference values of the confidences, and determining the adaptive threshold t. If the maximum confidence in the candidate intention label set is greater than the first threshold K, the confidences in the candidate intention label set are sorted in descending order to obtain a confidence sequence. Starting from the first confidence in the confidence sequence, each confidence is selected in turn as a target element; the first mean and first variance of the target element together with all confidences ranked before it are calculated, as are the second mean and second variance of all confidences ranked after it; a difference value is determined from the first mean, the second mean, the first variance and the second variance; and the target element corresponding to the maximum difference value is taken as the adaptive threshold t.
S714, outputting the intention labels corresponding to confidences greater than the adaptive threshold t. The intention labels corresponding to confidences greater than the adaptive threshold t in the candidate intention label set are selected as the target intention label list, which includes the intention contents, the intention labels and the corresponding confidences.
S716, post-processing. The conflicting labels in the target intention label list are processed: among conflicting labels, the label with the lower confidence is removed.
S718, outputting the intention recognition result. The post-processed target intention label list is the final intention recognition result.
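Steps S702 to S718 can be strung together as below. The trained model is stubbed out with fixed confidences, and the adaptive threshold is replaced by a simple largest-gap stand-in (the patent's diff criterion differs), so every name and number here is illustrative:

```python
LABELS = ["confirm transaction", "deny transaction", "query balance", "other"]
FIRST_THRESHOLD_K = 0.3

def fake_model(text):
    """Stand-in for the trained multi-intention TextCNN; one confidence per label."""
    return [0.05, 0.92, 0.88, 0.02]

def recognize(text):
    confidences = fake_model(text)             # S706: per-label confidences
    if max(confidences) <= FIRST_THRESHOLD_K:  # S708/S710: irrelevant text
        return ["other"]
    # S712 stand-in: take the confidence just above the largest gap in the
    # sorted sequence as the threshold t.
    s = sorted(confidences, reverse=True)
    gaps = [(s[i] - s[i + 1], i) for i in range(len(s) - 1)]
    _, cut = max(gaps)
    t = s[cut]
    # S714: keep labels whose confidence is not less than t.
    return [lbl for lbl, c in zip(LABELS, confidences) if c >= t]

print(recognize("i deny that transaction and want my balance"))  # ['deny transaction', 'query balance']
```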
It should be understood that, although the steps in the flowcharts of the above embodiments are displayed sequentially as indicated by the arrows, they are not necessarily performed in that order; unless explicitly stated otherwise, they may be performed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and whose execution order is not necessarily sequential but may be interleaved or alternated with other steps or with sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the present application further provides an intention identification device for implementing the intention identification method mentioned above. The solution to the problem provided by the apparatus is similar to the solution described in the above method, so the specific limitations in one or more embodiments of the intention identifying apparatus provided below can refer to the limitations on the intention identifying method in the above, and are not described herein again.
In one embodiment, as shown in fig. 8, there is provided an intention identifying apparatus including: a pre-processing module 802, a model processing module 804, an adaptive threshold module 806, an intent recognition module 808, wherein:
the preprocessing module 802 is configured to obtain a text to be detected, and preprocess the text to be detected to obtain a target numerical sequence corresponding to the text to be detected;
a model processing module 804, configured to input the target numerical sequence into a multi-intention recognition model evolved from a single-intention recognition model to obtain a candidate intention label set, where the candidate intention label set includes at least one candidate intention label and a confidence corresponding to each candidate intention label;
an adaptive threshold module 806, configured to perform adaptive threshold computation on the candidate intention tag set to obtain an adaptive threshold if the confidence of at least one candidate intention tag is greater than a first threshold;
the intention identifying module 808 is configured to select a label corresponding to the confidence coefficient greater than the adaptive threshold in the candidate intention label set as a target intention label.
In one embodiment, the intention recognition apparatus further comprises a model training module for evolving the single intention recognition model into a multiple intention recognition model, comprising:
the preprocessing unit is used for preprocessing the multi-intention sample set to obtain a multi-intention sample numerical sequence;
the single-intention model unit is used for inputting the multi-intention sample numerical sequence into the single-intention recognition model to obtain an initial intention label;
a cross entropy calculation unit to calculate a cross entropy loss between the initial intent label and an actual intent label of a multi-intent sample set;
and the parameter training unit is used for reversely adjusting the model parameters of the single-intention recognition model according to the cross entropy loss until a preset training termination condition is met, so as to obtain the multi-intention recognition model.
In one embodiment, the adaptive threshold module 806 includes:
the confidence sequence unit is used for sorting the confidences in the candidate intention label set in descending order to obtain a confidence sequence if the confidence of at least one candidate intention label is greater than the first threshold;
the target element unit is used for sequentially selecting each confidence in the confidence sequence as a target element, starting from the first confidence, calculating a first mean and a first variance of the target element together with all confidences ranked before it, and calculating a second mean and a second variance of all confidences ranked after it;
and the difference calculating unit is used for determining a difference value from the first mean, the second mean, the first variance and the second variance, and taking the target element corresponding to the maximum difference value as the adaptive threshold.
In one embodiment, the difference calculating unit is further configured to calculate a difference between the first average value and the second average value to obtain a first difference value; calculating a difference value between the first variance and the second variance to obtain a second difference value; and obtaining a difference value according to the first difference value and the second difference value, wherein the difference value is in direct proportion to the square of the first difference value, and the difference value is in inverse proportion to the second difference value.
In an embodiment, the intention identification apparatus further includes a conflict processing module, configured to remove a conflict tag in the target intention tags after selecting the tags corresponding to the confidence degrees that are greater than the adaptive threshold in the candidate intention tag set as the target intention tags, so as to obtain a final intention tag identification result.
In one embodiment, the conflict processing module is further configured to remove, from each pair of conflicting labels, the label with the lower confidence, so as to obtain the final intention label recognition result.
The various modules in the above-described intent recognition apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an intent recognition method.
Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, an electronic device is provided, comprising a memory storing a computer program and a processor implementing the following steps when the processor executes the computer program:
acquiring a text to be detected, and preprocessing the text to be detected to obtain a target numerical sequence corresponding to the text to be detected;
inputting the target numerical value sequence into a multi-intention recognition model evolved from a single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and a confidence degree corresponding to the candidate intention label;
if the confidence of at least one candidate intention label is larger than a first threshold value, performing adaptive threshold value calculation on the candidate intention label set to obtain an adaptive threshold value;
and selecting the label corresponding to the confidence coefficient which is greater than the self-adaptive threshold value in the candidate intention label set as the target intention label.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a text to be detected, and preprocessing the text to be detected to obtain a target numerical sequence corresponding to the text to be detected;
inputting the target numerical value sequence into a multi-intention recognition model evolved from a single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and a confidence degree corresponding to the candidate intention label;
if the confidence of at least one candidate intention label is larger than a first threshold value, performing adaptive threshold value calculation on the candidate intention label set to obtain an adaptive threshold value;
and selecting the label corresponding to the confidence coefficient which is greater than the self-adaptive threshold value in the candidate intention label set as the target intention label.
In one embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the steps of:
acquiring a text to be detected, and preprocessing the text to be detected to obtain a target numerical sequence corresponding to the text to be detected;
inputting the target numerical value sequence into a multi-intention recognition model evolved from a single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and a confidence degree corresponding to the candidate intention label;
if the confidence of at least one candidate intention label is larger than a first threshold value, performing adaptive threshold value calculation on the candidate intention label set to obtain an adaptive threshold value;
and selecting the label corresponding to the confidence coefficient which is greater than the self-adaptive threshold value in the candidate intention label set as the target intention label.
It should be noted that, the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high-density embedded nonvolatile Memory, resistive Random Access Memory (ReRAM), Magnetic Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene Memory, and the like. Volatile Memory can include Random Access Memory (RAM), external cache Memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others. The databases referred to in various embodiments provided herein may include at least one of relational and non-relational databases. The non-relational database may include, but is not limited to, a block chain based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum computing based data processing logic devices, etc., without limitation.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. An intent recognition method, the method comprising:
acquiring a text to be detected, and preprocessing the text to be detected to obtain a target numerical sequence corresponding to the text to be detected;
inputting the target numerical value sequence into a multi-intention recognition model evolved from a single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and a confidence degree corresponding to the candidate intention label;
if the confidence of at least one candidate intention label is larger than a first threshold value, performing adaptive threshold value calculation on the candidate intention label set to obtain an adaptive threshold value;
and selecting the label corresponding to the confidence coefficient which is not less than the self-adaptive threshold in the candidate intention label set as the target intention label.
2. The method of claim 1, wherein the manner in which the single-intention recognition model is evolved into the multi-intention recognition model comprises:
preprocessing a multi-intention sample set to obtain a multi-intention sample numerical sequence;
inputting the multi-intention sample numerical sequence into the single-intention recognition model to obtain an initial intention label;
calculating a cross entropy loss between the initial intent label and an actual intent label of a multi-intent sample set;
and reversely adjusting the model parameters of the single-intention recognition model according to the cross entropy loss until a preset training termination condition is met, to obtain the multi-intention recognition model.
3. The method of claim 1, wherein if the confidence level that there is at least one candidate intention tag is greater than a first threshold, then performing an adaptive threshold computation on the set of candidate intention tags to obtain an adaptive threshold, comprising:
if the confidence of at least one candidate intention label is greater than a first threshold, sorting the confidences in the candidate intention label set in descending order to obtain a confidence sequence;
starting from the first confidence in the confidence sequence, sequentially selecting each confidence as a target element, calculating a first mean and a first variance of the target element together with all confidences ranked before it, and calculating a second mean and a second variance of all confidences ranked after it;
and determining a difference value from the first mean, the second mean, the first variance and the second variance, and taking the target element corresponding to the maximum difference value as the adaptive threshold.
4. The method of claim 3, wherein determining the difference value from the first mean, the second mean, the first variance, and the second variance comprises:
calculating the difference between the first mean and the second mean to obtain a first difference;
calculating the difference between the first variance and the second variance to obtain a second difference;
and obtaining the difference value from the first difference and the second difference, wherein the difference value is directly proportional to the square of the first difference and inversely proportional to the second difference.
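Claims 3 and 4 together describe an Otsu-style search over the sorted confidence sequence: each prefix/suffix split is scored, and the target element at the best split becomes the adaptive threshold. A sketch follows, assuming the difference value is (m1 − m2)² / (v1 − v2); the absolute value and small epsilon in the denominator, and skipping the final split (which would leave an empty suffix), are handling choices made here and are not stated in the claims:

```python
def mean_var(xs):
    """Population mean and variance of a non-empty list."""
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

def adaptive_threshold(confidences):
    """Sort confidences in descending order, score every split by
    (m1 - m2)^2 / (v1 - v2), and return the target element at the
    split with the largest difference value."""
    seq = sorted(confidences, reverse=True)
    eps = 1e-9  # guards against a zero denominator (assumption)
    best_diff, best_elem = float("-inf"), seq[0]
    for i in range(len(seq) - 1):       # last split would leave an empty tail
        m1, v1 = mean_var(seq[:i + 1])  # target element and all before it
        m2, v2 = mean_var(seq[i + 1:])  # all confidences ranked after it
        diff = (m1 - m2) ** 2 / (abs(v1 - v2) + eps)
        if diff > best_diff:
            best_diff, best_elem = diff, seq[i]
    return best_elem

print(adaptive_threshold([0.91, 0.88, 0.35, 0.12]))
# → 0.88
```

In this example the split falls between the two high confidences (0.91, 0.88) and the two low ones (0.35, 0.12), so with the "not less than" rule of claim 1 both high-confidence labels would be selected.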
5. The method of claim 1, wherein after selecting, as the target intention label, the label in the candidate intention label set whose confidence is not less than the adaptive threshold, the method further comprises:
removing conflicting labels from the target intention labels to obtain a final intention label recognition result.
6. The method of claim 5, wherein removing conflicting labels from the target intention labels to obtain the final intention label recognition result comprises:
among the conflicting labels, removing the conflicting label with the lower confidence to obtain the final intention label recognition result.
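The conflict-removal step of claim 6 can be sketched as follows. The claims do not say how conflicts are defined, so a hypothetical table of mutually exclusive intention pairs is assumed here, and within each fully present pair the lower-confidence member is dropped:

```python
# Hypothetical conflict pairs: two intentions that cannot both hold.
CONFLICTS = [("confirm_repayment", "deny_repayment")]

def remove_conflicts(labels):
    """labels: dict mapping target intention label -> confidence.
    For each conflicting pair that is fully present, drop the member
    with the lower confidence."""
    result = dict(labels)
    for a, b in CONFLICTS:
        if a in result and b in result:
            loser = a if result[a] < result[b] else b
            del result[loser]
    return result

print(remove_conflicts({"confirm_repayment": 0.81,
                        "deny_repayment": 0.64,
                        "ask_balance": 0.77}))
# → {'confirm_repayment': 0.81, 'ask_balance': 0.77}
```

Non-conflicting labels pass through untouched, which matches claim 5's description of pruning only the conflicting labels from the target set.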
7. An intention recognition apparatus, characterized in that the apparatus comprises:
a preprocessing module, configured to acquire a text to be detected and preprocess it to obtain a target numerical sequence corresponding to the text to be detected;
a model processing module, configured to input the target numerical sequence into a multi-intention recognition model developed from a single-intention recognition model to obtain a candidate intention label set, wherein the candidate intention label set comprises at least one candidate intention label and the confidence corresponding to each candidate intention label;
an adaptive threshold module, configured to perform adaptive threshold calculation on the candidate intention label set to obtain an adaptive threshold if the confidence of at least one candidate intention label is greater than a first threshold;
and an intention recognition module, configured to select, as the target intention label, the label in the candidate intention label set whose confidence is not less than the adaptive threshold.
8. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 6 when executing the computer program.
9. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 6.
CN202111303756.XA 2021-11-05 2021-11-05 Intention recognition method, device, equipment and storage medium Pending CN114117037A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111303756.XA CN114117037A (en) 2021-11-05 2021-11-05 Intention recognition method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114117037A true CN114117037A (en) 2022-03-01

Family

ID=80380701

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111303756.XA Pending CN114117037A (en) 2021-11-05 2021-11-05 Intention recognition method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114117037A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114818703A (en) * 2022-06-28 2022-07-29 珠海金智维信息科技有限公司 Multi-intention recognition method and system based on BERT language model and TextCNN model



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant after: Zhaolian Consumer Finance Co.,Ltd.

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant before: MERCHANTS UNION CONSUMER FINANCE Co.,Ltd.

Country or region before: China
