CN112860870B - Noise data identification method and equipment - Google Patents

Noise data identification method and equipment

Info

Publication number
CN112860870B
CN112860870B (application CN202110283194.0A)
Authority
CN
China
Prior art keywords
data
result
sample
loss
training data
Prior art date
Legal status
Active
Application number
CN202110283194.0A
Other languages
Chinese (zh)
Other versions
CN112860870A (en)
Inventor
张勇
刘升平
梁家恩
Current Assignee
Unisound Intelligent Technology Co Ltd
Xiamen Yunzhixin Intelligent Technology Co Ltd
Original Assignee
Unisound Intelligent Technology Co Ltd
Xiamen Yunzhixin Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Unisound Intelligent Technology Co Ltd, Xiamen Yunzhixin Intelligent Technology Co Ltd filed Critical Unisound Intelligent Technology Co Ltd
Priority to CN202110283194.0A
Publication of CN112860870A
Application granted
Publication of CN112860870B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30: Information retrieval of unstructured textual data
    • G06F16/33: Querying
    • G06F16/332: Query formulation
    • G06F16/3329: Natural language query formulation or dialogue systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00: Handling natural language data
    • G06F40/30: Semantic analysis
    • G06F40/35: Discourse or dialogue representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00: Computing arrangements using knowledge-based models
    • G06N5/04: Inference or reasoning models

Abstract

The invention provides a noise data identification method and equipment, comprising the following steps: acquiring original training data; forward reasoning is carried out on the original training data, and a prediction result is obtained; calculating based on the original training data and the prediction result to obtain a loss result; deriving the original training data based on the loss result to obtain gradient data; converting the sample characteristic data based on the gradient data to obtain new sample characteristic data; forming new training data based on the new sample feature data and the sample result data; performing union processing on the new training data and the training data to obtain a first data set; processing the first data set to obtain a second data set; training through the first data set and the second data set to obtain a final model; noise data in the input intention data is identified by the final model. According to the scheme, the training data are specially processed in the training stage, and the robustness of the model is enhanced by means of adversarial training and sample fusion.

Description

Noise data identification method and equipment
Technical Field
The invention relates to the technical field of noise data identification, in particular to a noise data identification method and equipment.
Background
In the prior art, noise data is typically not given any special processing in the scenario of dialogue systems customized for particular customers; instead, the noise data is trained together with the user intent data, as a generic noise intent.
In such a scenario, the amount of user intent data is relatively small, and in the training data for an intent recognition task, positive intent data and negative noise data generally need to be kept in a certain proportion, e.g., 1:3 or 1:5. The noise data therefore cannot be too plentiful when the training data is collated. At the same time, the language space of noise data is relatively large, so a small amount of training data cannot cover it sufficiently. The prior art, however, applies no additional special treatment to negative noise data. As a result, current intent recognition techniques recognize such unintelligible or noisy data poorly, and noise data may be recognized as positive data on a large scale.
Thus, there is a need for a solution to the problems of the prior art.
Disclosure of Invention
The invention provides a method and equipment for identifying noise data, which can solve the technical problem of poor identification performance in the prior art.
The technical scheme for solving the technical problems is as follows:
the embodiment of the invention provides a noise data identification method, which comprises the following steps:
acquiring original training data comprising intention data and noise data of a user;
forward reasoning is carried out on the original training data, and a prediction result is obtained;
calculating based on the original training data and the prediction result to obtain a loss result;
deriving the original training data based on the loss result to obtain gradient data;
converting the sample characteristic data based on the gradient data to obtain new sample characteristic data; the original training data consists of the sample characteristic data and sample result data corresponding to the sample characteristic data;
forming new training data based on the new sample feature data and the sample result data;
performing union processing on the new training data and the training data to obtain a first data set;
processing any two pieces of data in the first data set in a preset mode to obtain a second data set;
training a selected intention classification algorithm through the first data set and the second data set to obtain a final model;
noise data in the input intention data is identified by the final model.
In a specific embodiment, the forward processing is performed by the following formula:
ŷ_i = f(θ, x_i)
wherein (x_i, y_i) is the input original training data; θ is a model parameter; f(θ, x_i) is the function by which the model performs forward processing on the input; ŷ_i is the prediction result.
In a specific embodiment, the loss result is obtained by the following formula:
loss_i = L(ŷ_i, y_i)
wherein ŷ_i is the prediction result; (x_i, y_i) is the input original training data; L(·) denotes the loss function; loss_i is the loss result.
In a specific embodiment, the gradient data is obtained by the following formula:
grad_i = ∂loss_i / ∂x_i
wherein grad_i is the gradient data; loss_i is the loss result; ∂/∂x_i denotes derivation with respect to the input.
In a specific embodiment, the new sample feature data is obtained by the following formula:
x̂_i = x_i + ε·sign(grad_i)
wherein ε is a parameter between 0 and 1; sign(·) is the sign function: sign(grad_i) = 1 when grad_i is greater than 0, and sign(grad_i) = −1 when grad_i is less than 0; x̂_i is the new sample feature data; x_i is the sample feature data; y_i is the sample result data.
In a specific embodiment, the processing of the preset mode is performed by the following formula:
x_mix = λ·x_i + (1−λ)·x_j
wherein x_i and x_j are any two pieces of data in the first data set; λ is a weight parameter; X_MIX, the set of all x_mix, is the second data set.
In a specific embodiment, the selected intent classification algorithm includes: convolutional neural networks or recurrent neural networks.
In a specific embodiment, the loss function of the final model comprises:
a cross entropy loss function for the first data set and a KL divergence loss function for the second data set.
In a specific embodiment, the method further comprises:
and if the final model is tested, inputting the original training data for forward reasoning to obtain a predicted result of the final model, and comparing the predicted result of the final model with sample result data to determine a test result.
The embodiment of the invention also provides a device for identifying noise data, which comprises:
an acquisition module for acquiring raw training data including intention data and noise data of a user;
the forward reasoning module is used for carrying out forward reasoning on the original training data to obtain a prediction result;
the loss module is used for calculating based on the original training data and the prediction result to obtain a loss result;
the deriving module is used for deriving the original training data based on the loss result to obtain gradient data;
the conversion module is used for converting the sample characteristic data based on the gradient data to obtain new sample characteristic data; the original training data consists of the sample characteristic data and sample result data corresponding to the sample characteristic data;
a forming module for forming new training data based on the new sample feature data and the sample result data;
the union module is used for performing union processing on the new training data and the training data to obtain a first data set;
the processing module is used for processing any two pieces of data in the first data set in a preset mode to obtain a second data set;
the training module is used for training the selected intention classification algorithm through the first data set and the second data set to obtain a final model;
and the recognition module is used for recognizing noise data in the input intention data through the final model.
The beneficial effects of the invention are as follows:
the embodiment of the invention provides a noise data identification method and equipment, wherein the method comprises the following steps: acquiring original training data comprising intention data and noise data of a user; forward reasoning is carried out on the original training data, and a prediction result is obtained; calculating based on the original training data and the prediction result to obtain a loss result; deriving the original training data based on the loss result to obtain gradient data; converting the sample characteristic data based on the gradient data to obtain new sample characteristic data; the original training data consists of the sample characteristic data and sample result data corresponding to the sample characteristic data; forming new training data based on the new sample feature data and the sample result data; performing union processing on the new training data and the training data to obtain a first data set; processing any two pieces of data in the first data set in a preset mode to obtain a second data set; training a selected intention classification algorithm through the first data set and the second data set to obtain a final model; noise data in the input intention data is identified by the final model. According to the scheme, the training data are specially processed in the training stage, the robustness of the model is enhanced in a mode of resisting training and fusing samples, meanwhile, the defect that noise data are largely identified as front data is avoided, and the identification capacity of user intention is not influenced. The algorithm improves the intention recognition capability in the scene and improves the actual experience of the user.
Drawings
Fig. 1 is a flow chart of a method for identifying noise data according to an embodiment of the present invention;
fig. 2 is a schematic frame structure of an apparatus for recognizing noise data according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a frame structure of a noise data recognition device according to an embodiment of the present invention;
fig. 4 is a schematic frame structure diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The principles and features of the present invention are described below with reference to the drawings; the examples are provided only to illustrate the invention and are not to be construed as limiting its scope.
Example 1
The embodiment 1 of the invention discloses a noise data identification method, which is shown in fig. 1 and comprises the following steps:
step 101, acquiring original training data comprising intention data and noise data of a user;
specifically, training data is preparedThe training data includes user intent data and noise data.
Step 102, forward reasoning is carried out on the original training data to obtain a prediction result;
The forward processing is performed by the following formula:
ŷ_i = f(θ, x_i)
wherein (x_i, y_i) is the input original training data; θ is a model parameter; f(θ, x_i) is the function by which the model performs forward processing on the input; ŷ_i is the prediction result.
Step 103, calculating based on the original training data and the prediction result to obtain a loss result;
the loss result is obtained by the following formula:
wherein (1)>Is the prediction result; x is x i ,y i Both are the input original training data; />Representing a loss function; loss of loss i As a result of losses.
Step 104, deriving the original training data based on the loss result to obtain gradient data;
the gradient data is obtained by the following formula:
wherein grad i Is gradient data; loss of loss i As a loss result; />For a derivative function.
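Steps 102 to 104 can be illustrated end to end. The following is a minimal numpy sketch under an assumed linear softmax scorer (the patent does not fix the model f, so this choice is an illustration only); for softmax plus cross entropy, the gradient of the loss with respect to the input has the closed form θ·(ŷ − y):

```python
import numpy as np

def forward(theta, x):
    """Step 102: forward inference, softmax over a linear scoring of the input.
    Stands in for f(theta, x_i) producing the prediction y_hat_i."""
    logits = x @ theta
    exp = np.exp(logits - logits.max())   # shift for numerical stability
    return exp / exp.sum()

def cross_entropy(y_hat, y):
    """Step 103: loss_i = L(y_hat_i, y_i), with y a one-hot label vector."""
    return -float(np.sum(y * np.log(y_hat + 1e-12)))

def input_gradient(theta, x, y):
    """Step 104: d loss_i / d x_i. For softmax plus cross entropy the
    analytic gradient with respect to the input is theta @ (y_hat - y)."""
    y_hat = forward(theta, x)
    return theta @ (y_hat - y)

theta = np.array([[1.0, -1.0], [0.5, 0.5]])   # 2 features, 2 classes
x = np.array([1.0, 2.0])                       # one sample x_i
y = np.array([1.0, 0.0])                       # true class 0 (one-hot y_i)
grad = input_gradient(theta, x, y)             # gradient data grad_i
```

Here `grad` plays the role of grad_i in step 104; a real intent classifier would obtain the same quantity by automatic differentiation.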
Step 105, converting the sample characteristic data based on the gradient data to obtain new sample characteristic data; the original training data consists of the sample characteristic data and sample result data corresponding to the sample characteristic data;
the new sample characteristic data is obtained by the following formula:
wherein, E is a parameter between 0 and 1; sign (grad) i ) For a sign function; when grad is greater than 0, sign (grad i ) =1; when grad is less than 0, sign (grad i )=1;/>Characteristic data for the new sample; x is x i Sample characteristic data; y is i Is sample result data.
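The transformation in step 105 is a fast-gradient-sign-style perturbation and can be sketched directly (the value ε = 0.1 below is an illustrative choice, not one fixed by the patent):

```python
import numpy as np

def fgsm_transform(x, grad, eps=0.1):
    """x_hat_i = x_i + eps * sign(grad_i): move every feature one small
    step in the direction that increases the loss."""
    return x + eps * np.sign(grad)

x = np.array([0.5, -0.3, 0.0])        # sample feature data x_i
grad = np.array([2.0, -1.5, 0.0])     # gradient data grad_i
x_adv = fgsm_transform(x, grad, eps=0.1)
# features with positive gradient move up, negative move down,
# zero gradient leaves the feature unchanged (np.sign(0) == 0)
```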
Step 106, forming new training data based on the new sample characteristic data and the sample result data;
step 107, performing union processing on the new training data and the training data to obtain a first data set;
step 108, processing any two pieces of data in the first data set in a preset mode to obtain a second data set;
the processing of the preset mode is performed by the following formula:
wherein,and->Any two pieces of data in the first data set; lambda is a weight parameter; x is X MIX Is the second data set. Specifically, lambda is in the range of 0-1 for adjusting x i And x j Generally chosen empirically and with respect to the final effect, for example 0.8 may be chosen.
Step 109, training a selected intention classification algorithm through the first data set and the second data set to obtain a final model;
specifically, the selected intent classification algorithm includes: convolutional neural networks (CNN, convolutional Neural Networks) or recurrent neural networks (RNN, recurrent Neural Network).
Step 110, noise data in the input intention data is identified by the final model.
In a specific embodiment, the loss function of the final model comprises:
a cross entropy loss function for the first data set and a KL divergence loss function for the second data set.
In a specific embodiment, the method further comprises:
and if the final model is tested, inputting the original training data for forward reasoning to obtain a predicted result of the final model, and comparing the predicted result of the final model with sample result data to determine a test result.
Specifically, when the subsequent model test or model online forward reasoning is performed, data X is input, and then the model prediction result is obtained through the model forward reasoning.
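That test procedure can be sketched as follows; the toy model below is purely hypothetical and stands in for the trained final model:

```python
import numpy as np

def evaluate(model_forward, X_test, y_test):
    """Test step: run forward reasoning on each input sample and compare
    the predicted class against the sample result data."""
    correct = 0
    for x, y in zip(X_test, y_test):
        y_hat = model_forward(x)              # model prediction distribution
        if int(np.argmax(y_hat)) == int(y):   # compare with the true label
            correct += 1
    return correct / len(X_test)

# hypothetical toy model: predicts class 0 iff the first feature dominates
model = lambda x: np.array([1.0, 0.0]) if x[0] >= x[1] else np.array([0.0, 1.0])
acc = evaluate(model, [np.array([2.0, 1.0]), np.array([0.0, 3.0])], [0, 1])
```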
Here, a specific application scenario is described, which specifically includes the following steps:
step 1: preparing training dataThe training data includes user intent data and noise data.
Step 2: an intent classification algorithm is selected. Such as Convolutional Neural Networks (CNNs) or Recurrent Neural Networks (RNNs).
Step 3: input to the model (x i ,y i ) Forward calculation is carried out; according toWhere θ represents a parameter of the model, f (θ, x i ) Representing model pairsInputting x for forward processing to obtain result +.>
The formula represents a set of data values for the input data (x i ,y i ) Corresponding prediction result +.>The loss obtained.
The formula represents the loss versus input data (x i ,y i ) The resulting gradient is derived.Where E is a parameter between 0 and 1. sign (grad) i ) The function is a signed function. When grad is greater than 0, sign (grad i ) =1; when grad is less than 0, sign (grad i ) =1. Obtaining transformed +.>
Will beAfter the data set is subjected to the above operation, the +.>
Step 4: obtaining a new data set X ADA =X∪X adv
Step 5: for data set X ADA Any two pieces of data in (a)The following operations are performed
Thereby obtaining a new data set
Step 6: using data set X ADA And X MIX As training data, a model is trained. The loss of the model is:
wherein for X ADA Using a cross entropy loss function for samples of X MIX The data in (a) uses a KL divergence loss function. Obtaining a final model
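The combined training loss of step 6, cross entropy on X_ADA plus KL divergence on X_MIX, can be sketched with illustrative sample values (the distributions below are assumptions for the sketch, not data from the patent):

```python
import numpy as np

def cross_entropy(y_hat, y):
    """Cross-entropy term, applied to samples of X_ADA with hard labels y."""
    return -float(np.sum(y * np.log(y_hat + 1e-12)))

def kl_divergence(p, q):
    """KL-divergence term, applied to samples of X_MIX: compares the mixed
    target distribution p against the model's prediction q."""
    return float(np.sum(p * np.log((p + 1e-12) / (q + 1e-12))))

# illustrative values for one sample of each kind
y_hard = np.array([1.0, 0.0])       # hard label from X_ADA
pred_ada = np.array([0.9, 0.1])     # model prediction on the X_ADA sample
target_mix = np.array([0.8, 0.2])   # mixed soft target from X_MIX
pred_mix = np.array([0.7, 0.3])     # model prediction on the X_MIX sample

total_loss = cross_entropy(pred_ada, y_hard) + kl_divergence(target_mix, pred_mix)
```

In actual training the two terms would be averaged over their respective batches; here a single sample of each kind illustrates the composition.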
Step 7: and when the subsequent model test or model online forward reasoning is carried out, inputting data X, and then obtaining a model prediction result through the forward reasoning of the model.
According to the scheme, the training data are specially processed in the training stage, and the robustness of the model is enhanced by means of adversarial training and sample fusion, without affecting the recognition of user intention. The algorithm improves intention recognition in this scenario and improves the actual user experience. It also improves the ability of small-data dialogue systems to recognize noise data and avoids the defect of noise data being recognized as positive data on a large scale. Moreover, the scheme can be nested in many deep learning classification algorithms of any type, so its application range is wide.
Example 2
The embodiment 2 of the invention also discloses a noise data identification device, as shown in fig. 2, comprising:
an acquisition module 201 for acquiring raw training data including intention data and noise data of a user;
the forward reasoning module 202 is configured to forward reason the original training data to obtain a prediction result;
the loss module 203 is configured to calculate, based on the original training data and the prediction result, to obtain a loss result;
a deriving module 204, configured to derive the original training data based on the loss result to obtain gradient data;
the conversion module 205 is configured to convert the sample feature data based on the gradient data to obtain new sample feature data; the original training data consists of the sample characteristic data and sample result data corresponding to the sample characteristic data;
a forming module 206, configured to form new training data based on the new sample feature data and the sample result data;
a union module 207, configured to perform union processing on the new training data and the training data, so as to obtain a first data set;
a processing module 208, configured to perform a preset manner of processing on any two pieces of data in the first data set, so as to obtain a second data set;
a training module 209, configured to train the selected intent classification algorithm through the first data set and the second data set to obtain a final model;
the recognition module 210 is configured to recognize noise data in the input intention data through the final model.
In a specific embodiment, the forward processing is performed by the following formula:
ŷ_i = f(θ, x_i)
wherein (x_i, y_i) is the input original training data; θ is a model parameter; f(θ, x_i) is the function by which the model performs forward processing on the input; ŷ_i is the prediction result.
In a specific embodiment, the loss result is obtained by the following formula:
loss_i = L(ŷ_i, y_i)
wherein ŷ_i is the prediction result; (x_i, y_i) is the input original training data; L(·) denotes the loss function; loss_i is the loss result.
In a specific embodiment, the gradient data is obtained by the following formula:
grad_i = ∂loss_i / ∂x_i
wherein grad_i is the gradient data; loss_i is the loss result; ∂/∂x_i denotes derivation with respect to the input.
In a specific embodiment, the new sample feature data is obtained by the following formula:
x̂_i = x_i + ε·sign(grad_i)
wherein ε is a parameter between 0 and 1; sign(·) is the sign function: sign(grad_i) = 1 when grad_i is greater than 0, and sign(grad_i) = −1 when grad_i is less than 0; x̂_i is the new sample feature data; x_i is the sample feature data; y_i is the sample result data.
In a specific embodiment, the processing of the preset mode is performed by the following formula:
x_mix = λ·x_i + (1−λ)·x_j
wherein x_i and x_j are any two pieces of data in the first data set; λ is a weight parameter; X_MIX is the second data set.
In a specific embodiment, the selected intent classification algorithm includes: convolutional neural networks or recurrent neural networks.
In a specific embodiment, the loss function of the final model comprises:
a cross entropy loss function for the first data set and a KL divergence loss function for the second data set.
In a specific embodiment, as shown in fig. 3, the apparatus further includes:
a test module 211, configured to, when the final model is tested, input the original training data for forward reasoning to obtain a predicted result of the final model, and compare that predicted result with the sample result data to determine a test result.
Example 3
The embodiment 3 of the invention also discloses a terminal, as shown in fig. 4, which comprises a memory and a processor, wherein the processor executes the method in the embodiment 1 when running the application program in the memory.
The present invention is not limited to the above embodiments, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the present invention, and these modifications and substitutions are intended to be included in the scope of the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (7)

1. A method of identifying noise data, comprising:
acquiring original training data comprising intention data and noise data of a user;
forward reasoning is carried out on the original training data, and a prediction result is obtained;
calculating based on the original training data and the prediction result to obtain a loss result;
deriving the original training data based on the loss result to obtain gradient data;
converting the sample characteristic data based on the gradient data to obtain new sample characteristic data; the original training data consists of the sample characteristic data and sample result data corresponding to the sample characteristic data;
forming new training data based on the new sample feature data and the sample result data;
performing union processing on the new training data and the training data to obtain a first data set;
processing any two pieces of data in the first data set in a preset mode to obtain a second data set;
training a selected intention classification algorithm through the first data set and the second data set to obtain a final model;
identifying noise data in the input intention data through the final model;
the new sample feature data is obtained by the following formula:
x̂_i = x_i + ε·sign(grad_i)
wherein ε is a parameter between 0 and 1; sign(·) is the sign function: sign(grad_i) = 1 when grad_i is greater than 0, and sign(grad_i) = −1 when grad_i is less than 0; x̂_i is the new sample feature data; x_i is the sample feature data; y_i is the sample result data;
the processing of the preset mode is performed by the following formula:
x_mix = λ·x_i + (1−λ)·x_j
wherein x_i and x_j are any two pieces of data in the first data set; λ is a weight parameter; X_MIX is the second data set;
the loss function of the final model includes:
a cross entropy loss function for the first data set and a KL divergence loss function for the second data set.
2. The method of claim 1, wherein the forward processing is performed by the following formula:
ŷ_i = f(θ, x_i)
wherein (x_i, y_i) is the input original training data; θ is a model parameter; f(θ, x_i) is the function by which the model performs forward processing on the input; ŷ_i is the prediction result.
3. The method of claim 1 or 2, wherein the loss result is obtained by the following formula:
loss_i = L(ŷ_i, y_i)
wherein ŷ_i is the prediction result; (x_i, y_i) is the input original training data; L(·) denotes the loss function; loss_i is the loss result.
4. A method according to claim 1 or 3, wherein the gradient data is obtained by the following formula:
grad_i = ∂loss_i / ∂x_i
wherein grad_i is the gradient data; loss_i is the loss result; ∂/∂x_i denotes derivation with respect to the input.
5. The method of claim 1, wherein the selected intent classification algorithm comprises: convolutional neural networks or recurrent neural networks.
6. The method as recited in claim 1, further comprising:
and if the final model is tested, inputting the original training data for forward reasoning to obtain a predicted result of the final model, and comparing the predicted result of the final model with sample result data to determine a test result.
7. An apparatus for recognizing noise data, comprising:
an acquisition module for acquiring raw training data including intention data and noise data of a user;
the forward reasoning module is used for carrying out forward reasoning on the original training data to obtain a prediction result;
the loss module is used for calculating based on the original training data and the prediction result to obtain a loss result;
the deriving module is used for deriving the original training data based on the loss result to obtain gradient data;
the conversion module is used for converting the sample characteristic data based on the gradient data to obtain new sample characteristic data; the original training data consists of the sample characteristic data and sample result data corresponding to the sample characteristic data;
a forming module for forming new training data based on the new sample feature data and the sample result data;
the union module is used for performing union processing on the new training data and the training data to obtain a first data set;
the processing module is used for processing any two pieces of data in the first data set in a preset mode to obtain a second data set;
the training module is used for training the selected intention classification algorithm through the first data set and the second data set to obtain a final model;
the recognition module is used for recognizing noise data in the input intention data through the final model;
the new sample featureThe data is obtained by the following formula:wherein, E is a parameter between 0 and 1; sign (grad) i ) For a sign function; when grad is greater than 0, sign (grad i ) =1; when grad is less than 0, sign (grad i )=-1;/>Characteristic data for the new sample; x is x i Sample characteristic data; y is i Sample result data;
the processing of the preset mode is performed by the following formula:
wherein,and->Any two pieces of data in the first data set; lambda is a weight parameter; x is X MIX Is the second data set;
the loss function of the final model includes:
a cross entropy loss function for the first data set and a KL divergence loss function for the second data set.
CN202110283194.0A 2021-03-16 2021-03-16 Noise data identification method and equipment Active CN112860870B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110283194.0A CN112860870B (en) 2021-03-16 2021-03-16 Noise data identification method and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110283194.0A CN112860870B (en) 2021-03-16 2021-03-16 Noise data identification method and equipment

Publications (2)

Publication Number Publication Date
CN112860870A 2021-05-28
CN112860870B 2024-03-12

Family

ID=75994903

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110283194.0A Active CN112860870B (en) 2021-03-16 2021-03-16 Noise data identification method and equipment

Country Status (1)

Country Link
CN (1) CN112860870B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113345426B (en) * 2021-06-02 2023-02-28 云知声智能科技股份有限公司 Voice intention recognition method and device and readable storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
CN106548210A (en) * 2016-10-31 2017-03-29 腾讯科技(深圳)有限公司 Machine learning model training method and device
CN111931637A (en) * 2020-08-07 2020-11-13 华南理工大学 Cross-modal pedestrian re-identification method and system based on double-current convolutional neural network
CN112183631A (en) * 2020-09-28 2021-01-05 云知声智能科技股份有限公司 Method and terminal for establishing intention classification model

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
EP3739516A1 (en) * 2019-05-17 2020-11-18 Robert Bosch GmbH Classification robust against multiple perturbation types


Non-Patent Citations (1)

Title
Research Progress of Intent Recognition for Transfer Learning; Zhao Pengfei; Li Yanling; Lin Min; Journal of Frontiers of Computer Science and Technology (Issue 08); full text *

Also Published As

Publication number Publication date
CN112860870A (en) 2021-05-28

Similar Documents

Publication Publication Date Title
CN109214360B (en) Construction method and application of face recognition model based on ParaSoftMax loss function
CN107341463B (en) Face feature recognition method combining image quality analysis and metric learning
CN109271958B (en) Face age identification method and device
US20040190760A1 (en) Face detection method and apparatus
CN110619319A (en) Improved MTCNN model-based face detection method and system
CN104485103B (en) A kind of multi-environment model isolated word recognition method based on vector Taylor series
CN109934300B (en) Model compression method, device, computer equipment and storage medium
JP2008533606A (en) How to perform face recognition
CN111476200A (en) Face de-identification generation method based on generation of confrontation network
CN112509583B (en) Auxiliary supervision method and system based on scheduling operation ticket system
CN113744262B (en) Target segmentation detection method based on GAN and YOLO-v5
CN110930976A (en) Voice generation method and device
CN113591978B (en) Confidence penalty regularization-based self-knowledge distillation image classification method, device and storage medium
CN111970400B (en) Crank call identification method and device
CN110659573A (en) Face recognition method and device, electronic equipment and storage medium
CN109325472B (en) Face living body detection method based on depth information
CN112860870B (en) Noise data identification method and equipment
CN111241924A (en) Face detection and alignment method and device based on scale estimation and storage medium
CN111414943B (en) Anomaly detection method based on mixed hidden naive Bayes model
CN112766351A (en) Image quality evaluation method, system, computer equipment and storage medium
CN112966429A (en) Non-linear industrial process modeling method based on WGANs data enhancement
CN114821174B (en) Content perception-based transmission line aerial image data cleaning method
CN116257816A (en) Accompanying robot emotion recognition method, device, storage medium and equipment
CN112183631B (en) Method and terminal for establishing intention classification model
CN115273202A (en) Face comparison method, system, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant