CN112860870A - Noise data identification method and equipment - Google Patents
Noise data identification method and equipment
- Publication number
- CN112860870A (application CN202110283194.0A)
- Authority
- CN
- China
- Prior art keywords
- data
- result
- loss
- training data
- sample
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F16/3329—Natural language query formulation or dialogue systems (G06F16/00—Information retrieval; G06F16/33—Querying)
- G06F40/35—Discourse or dialogue representation (G06F40/00—Handling natural language data; G06F40/30—Semantic analysis)
- G06N3/045—Combinations of networks (G06N3/02—Neural networks; G06N3/04—Architecture)
- G06N3/08—Learning methods (G06N3/02—Neural networks)
- G06N5/04—Inference or reasoning models (G06N5/00—Computing arrangements using knowledge-based models)
Abstract
The invention provides a method and equipment for identifying noise data, comprising the following steps: acquiring original training data; carrying out forward inference on the original training data to obtain a prediction result; calculating based on the original training data and the prediction result to obtain a loss result; deriving the original training data based on the loss result to obtain gradient data; converting the sample feature data based on the gradient data to obtain new sample feature data; forming new training data based on the new sample feature data and the sample result data; performing union processing on the new training data and the original training data to obtain a first data set; processing the first data set to obtain a second data set; training through the first data set and the second data set to obtain a final model; and identifying the noise data in the input intention data through the final model. The scheme specially processes the training data in the training stage and enhances the robustness of the model by means of adversarial training and sample fusion.
Description
Technical Field
The invention relates to the technical field of noise data identification, in particular to a noise data identification method and equipment.
Background
In the prior art, noise data is generally not specially processed in dialog systems customized for particular customers. Instead, the noise data is treated as a noise intention and trained together with the user intention data in the normal scenario.
In such scenarios, the amount of user intention data is relatively small, whereas in the training data for an intent recognition task, positive intention data and negative noise data need to be kept in a certain ratio, e.g., 1:3 or 1:5. Therefore, when training data is collated, there cannot be too much noise data. Yet the utterance space of noise data is relatively large, and a small amount of training data cannot cover it sufficiently. The prior art, however, provides no additional special processing for negative noise data. As a result, current intention recognition techniques are less effective at recognizing such out-of-scope or noisy data, and a large amount of noise data may be recognized as positive data.
Thus, there is a need for a solution to the problems of the prior art.
Disclosure of Invention
The invention provides a noise data identification method and noise data identification equipment, which can solve the technical problem of poor identification performance in the prior art.
The technical scheme for solving the technical problems is as follows:
the embodiment of the invention provides a method for identifying noise data, which comprises the following steps:
acquiring original training data including intention data and noise data of a user;
carrying out forward reasoning on the original training data to obtain a prediction result;
calculating based on the original training data and the prediction result to obtain a loss result;
deriving the original training data based on the loss result to obtain gradient data;
converting the sample characteristic data based on the gradient data to obtain new sample characteristic data; the original training data consists of the sample characteristic data and sample result data corresponding to the sample characteristic data;
forming new training data based on the new sample feature data and the sample result data;
performing union processing on the new training data and the training data to obtain a first data set;
processing any two pieces of data in the first data set in a preset mode to obtain a second data set;
training a selected intention classification algorithm through the first data set and the second data set to obtain a final model;
and identifying the noise data in the input intention data through a final model.
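The steps above can be strung together as a sketch, with a toy linear softmax model standing in for the intention classifier; every name, shape, and constant below (the functions `forward`, `fgsm`, and the values ε = 0.1, λ = 0.8) is an illustrative assumption, not the patent's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(theta, X):
    # forward inference: prediction y_hat = f(theta, x)
    return softmax(X @ theta)

def input_gradient(theta, X, Y):
    # gradient of cross-entropy loss w.r.t. the inputs:
    # for softmax + cross-entropy this is (y_hat - y) @ theta^T
    return (forward(theta, X) - Y) @ theta.T

def fgsm(theta, X, Y, eps):
    # convert sample features along the sign of the input gradient
    return X + eps * np.sign(input_gradient(theta, X, Y))

# toy data: 8 samples, 4 features, 2 one-hot classes (illustrative)
X = rng.normal(size=(8, 4))
Y = np.eye(2)[rng.integers(0, 2, size=8)]
theta = rng.normal(size=(4, 2))

X_adv = fgsm(theta, X, Y, eps=0.1)   # new sample feature data
X_ada = np.concatenate([X, X_adv])   # first data set: union with original
Y_ada = np.concatenate([Y, Y])       # adversarial samples keep their labels
lam = 0.8
perm = rng.permutation(len(X_ada))
X_mix = lam * X_ada + (1 - lam) * X_ada[perm]  # second data set (mixed pairs)
```

Training on `X_ada` and `X_mix` with the two loss terms described below would then yield the final model.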
In a specific embodiment, the forward processing is performed by the following formula:
$$\hat{y}_i = f(\theta, x_i)$$
wherein $(x_i, y_i)$ is the input original training data; $\theta$ is a model parameter; $f(\theta, x_i)$ represents the function by which the model performs forward processing on the input original training data; $\hat{y}_i$ is the prediction result.
In a specific embodiment, the loss result is obtained by the following formula:
$$loss_i = L(\hat{y}_i, y_i)$$
wherein $\hat{y}_i$ is the prediction result; $(x_i, y_i)$ is the input original training data; $L$ represents a loss function; $loss_i$ is the loss result.
In a specific embodiment, the gradient data is obtained by the following formula:
$$grad_i = \frac{\partial\, loss_i}{\partial x_i}$$
In a specific embodiment, the new sample feature data is obtained by the following formula:
$$x_i^{adv} = x_i + \epsilon \cdot \mathrm{sign}(grad_i)$$
wherein $\epsilon$ is a parameter between 0 and 1; $\mathrm{sign}(\cdot)$ is the sign function: when $grad_i$ is greater than 0, $\mathrm{sign}(grad_i) = 1$; when $grad_i$ is less than 0, $\mathrm{sign}(grad_i) = -1$; $x_i^{adv}$ is the new sample feature data; $x_i$ is the sample feature data; $y_i$ is the sample result data.
In a specific embodiment, the preset mode is processed by the following formula:
$$x_{mix} = \lambda x_i + (1 - \lambda) x_j$$
wherein $x_i, x_j$ are any two pieces of data in the first data set; $\lambda$ is a weight parameter; $X_{MIX}$ is the second data set formed by the mixed samples $x_{mix}$.
In a specific embodiment, the selected intent classification algorithm comprises: a convolutional neural network or a recurrent neural network.
In a specific embodiment, the loss function of the final model includes:
a cross entropy loss function for the first data set and a KL divergence loss function for the second data set.
In a specific embodiment, the method further comprises the following steps:
when the final model is tested, the original training data is input for forward inference to obtain a prediction result of the final model, and the prediction result of the final model is compared with the sample result data to determine a test result.
The embodiment of the invention also provides a device for identifying noise data, which comprises:
an acquisition module for acquiring original training data including intention data and noise data of a user;
the forward reasoning module is used for carrying out forward reasoning on the original training data to obtain a prediction result;
the loss module is used for calculating based on the original training data and the prediction result to obtain a loss result;
a derivation module, configured to derive the original training data based on the loss result to obtain gradient data;
the conversion module is used for converting the sample characteristic data based on the gradient data to obtain new sample characteristic data; the original training data consists of the sample characteristic data and sample result data corresponding to the sample characteristic data;
a forming module for forming new training data based on the new sample feature data and the sample result data;
the union set module is used for carrying out union set processing on the new training data and the training data to obtain a first data set;
the processing module is used for processing any two data in the first data set in a preset mode to obtain a second data set;
the training module is used for training the selected intention classification algorithm through the first data set and the second data set to obtain a final model;
and the identification module is used for identifying the noise data in the input intention data through the final model.
The invention has the beneficial effects that:
the embodiment of the invention provides a method and equipment for identifying noise data, wherein the method comprises the following steps: acquiring original training data including intention data and noise data of a user; carrying out forward reasoning on the original training data to obtain a prediction result; calculating based on the original training data and the prediction result to obtain a loss result; deriving the original training data based on the loss result to obtain gradient data; converting the sample characteristic data based on the gradient data to obtain new sample characteristic data; the original training data consists of the sample characteristic data and sample result data corresponding to the sample characteristic data; forming new training data based on the new sample feature data and the sample result data; performing union processing on the new training data and the training data to obtain a first data set; processing any two pieces of data in the first data set in a preset mode to obtain a second data set; training a selected intention classification algorithm through the first data set and the second data set to obtain a final model; and identifying the noise data in the input intention data through a final model. The scheme performs special processing on training data in a training stage, enhances the robustness of the model by means of countertraining and sample fusion, avoids the defect that a large amount of noise data is recognized as positive data, and has no influence on the recognition capability of the intention of a user. The algorithm improves the intention recognition capability in a scene and improves the actual experience of a user.
Drawings
Fig. 1 is a schematic flow chart illustrating a method for identifying noise data according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a frame structure of a noise data identification device according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a frame structure of a noise data identification device according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a framework structure of a terminal according to an embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Example 1
The embodiment 1 of the invention discloses a method for identifying noise data, which comprises the following steps as shown in figure 1:
101, acquiring original training data; specifically, training data is prepared, and the training data includes the intention data and the noise data of the user.
102, carrying out forward reasoning on the original training data to obtain a prediction result;
the forward processing is performed by the following formula:
wherein (x)i,yi) Inputting the original training data; theta is a model parameter; f (theta, x)i,yi) Representing a function of a model for performing forward processing on the input original training data;and the prediction result is obtained.
103, calculating based on the original training data and the prediction result to obtain a loss result;
the loss results are obtained by the following formula:
wherein the content of the first and second substances,is the prediction result; x is the number ofi,yiBoth are the input original training data;representing a loss function; lossiAs a result of loss.
104, deriving the original training data based on the loss result to obtain gradient data;
the gradient data is obtained by the following formula:
$$grad_i = \frac{\partial\, loss_i}{\partial x_i}$$
105, converting the sample characteristic data based on the gradient data to obtain new sample characteristic data; the original training data consists of the sample characteristic data and sample result data corresponding to the sample characteristic data;
the new sample feature data is obtained by the following formula:
$$x_i^{adv} = x_i + \epsilon \cdot \mathrm{sign}(grad_i)$$
wherein $\epsilon$ is a parameter between 0 and 1; $\mathrm{sign}(\cdot)$ is the sign function: when $grad_i$ is greater than 0, $\mathrm{sign}(grad_i) = 1$; when $grad_i$ is less than 0, $\mathrm{sign}(grad_i) = -1$; $x_i^{adv}$ is the new sample feature data; $x_i$ is the sample feature data; $y_i$ is the sample result data.
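A small numerical check of steps 102 to 105 can be sketched as follows; the linear softmax model, shapes, and ε = 0.05 are illustrative assumptions. Because cross-entropy of a softmax over a linear map is convex in the input, moving along the sign of the input gradient cannot decrease the loss, which is the adversarial property the perturbation relies on.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
theta = rng.normal(size=(5, 3))   # fixed model parameters (illustrative)
x = rng.normal(size=(1, 5))       # sample feature data x_i
y = np.eye(3)[[0]]                # sample result data y_i, one-hot

def loss(X):
    # steps 102-103: forward inference, then cross-entropy loss
    y_hat = softmax(X @ theta)
    return float(-np.sum(y * np.log(y_hat + 1e-12)))

# step 104: gradient of the loss w.r.t. the input features
grad = (softmax(x @ theta) - y) @ theta.T
# step 105: new sample feature data via the signed perturbation
x_adv = x + 0.05 * np.sign(grad)

# moving along the ascent direction cannot decrease this convex loss
assert loss(x_adv) >= loss(x)
```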
106, forming new training data based on the new sample characteristic data and the sample result data;
107, performing union processing on the new training data and the original training data to obtain a first data set;
108, processing any two pieces of data in the first data set in a preset mode to obtain a second data set;
the preset mode processing is performed by the following formula:
$$x_{mix} = \lambda x_i + (1 - \lambda) x_j$$
wherein $x_i, x_j$ are any two pieces of data in the first data set; $\lambda$ is a weight parameter; $X_{MIX}$ is the second data set. Specifically, the value range of $\lambda$ is 0 to 1, and it adjusts the weights of $x_i$ and $x_j$; it is generally chosen based on experience and the final effect, for example, 0.8 may be chosen.
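As a hedged illustration of the preset-mode processing, the mixup-style blend below also mixes the labels with the same weight λ; the patent's formula shows only the features, so the label mixing (and the helper name `mixup_pair`) is an assumption in line with common practice.

```python
import numpy as np

def mixup_pair(x_i, y_i, x_j, y_j, lam=0.8):
    # x_mix = lam * x_i + (1 - lam) * x_j; labels blended the same way
    x_mix = lam * x_i + (1 - lam) * x_j
    y_mix = lam * y_i + (1 - lam) * y_j
    return x_mix, y_mix

x_i, y_i = np.array([1.0, 0.0]), np.array([1.0, 0.0])   # class-0 sample
x_j, y_j = np.array([0.0, 1.0]), np.array([0.0, 1.0])   # class-1 sample
x_mix, y_mix = mixup_pair(x_i, y_i, x_j, y_j, lam=0.8)
# x_mix == [0.8, 0.2], y_mix == [0.8, 0.2]
```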
109, training a selected intention classification algorithm through the first data set and the second data set to obtain a final model; specifically, the selected intention classification algorithm includes: a Convolutional Neural Network (CNN) or a Recurrent Neural Network (RNN).
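The patent leaves the classifier open (CNN or RNN). A minimal forward pass of a CNN-style text classifier can be sketched in plain NumPy; every shape, name, and parameter here is an illustrative assumption, not the patent's architecture.

```python
import numpy as np

def conv1d_text_classifier(E, W_conv, W_out, token_ids):
    """Forward pass of a minimal CNN intent classifier (illustrative).

    E: (vocab, d) embedding table; W_conv: (k, d, f) filters of width k;
    W_out: (f, classes) output projection."""
    x = E[token_ids]                              # (T, d) embedded utterance
    k, d, f = W_conv.shape
    T = len(token_ids)
    # valid 1D convolution over time, then ReLU and max-pool over positions
    conv = np.stack([np.tensordot(x[t:t + k], W_conv, axes=([0, 1], [0, 1]))
                     for t in range(T - k + 1)])  # (T-k+1, f)
    pooled = np.maximum(conv, 0).max(axis=0)      # (f,)
    logits = pooled @ W_out                       # (classes,)
    e = np.exp(logits - logits.max())
    return e / e.sum()                            # class probabilities

rng = np.random.default_rng(2)
probs = conv1d_text_classifier(
    E=rng.normal(size=(50, 8)),
    W_conv=rng.normal(size=(3, 8, 16)),
    W_out=rng.normal(size=(16, 4)),
    token_ids=[5, 1, 42, 7, 3],
)
```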
In a specific embodiment, the loss function of the final model includes:
a cross entropy loss function for the first data set and a KL divergence loss function for the second data set.
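The two loss terms can be sketched as below; the function names and the sum reduction are illustrative assumptions. Cross entropy scores predictions against the hard labels of the first data set, while KL divergence scores them against the soft mixed labels of the second.

```python
import numpy as np

def cross_entropy(p_pred, y_onehot):
    # loss term for samples of the first data set X_ADA
    return float(-np.sum(y_onehot * np.log(p_pred + 1e-12)))

def kl_divergence(p_target, p_pred):
    # loss term for the mixed data set X_MIX: KL(target || prediction)
    return float(np.sum(p_target * np.log((p_target + 1e-12) / (p_pred + 1e-12))))

p_pred = np.array([0.7, 0.3])   # model output (illustrative)
hard = np.array([1.0, 0.0])     # label from X_ADA
soft = np.array([0.8, 0.2])     # mixed label from X_MIX
total_loss = cross_entropy(p_pred, hard) + kl_divergence(soft, p_pred)
```

KL divergence of a distribution with itself is zero, so the second term vanishes when the model matches the mixed label exactly.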
In a specific embodiment, the method further comprises the following steps:
when the final model is tested, the original training data is input for forward inference to obtain a prediction result of the final model, and the prediction result of the final model is compared with the sample result data to determine a test result.
Specifically, when a subsequent model test is performed, or when the model performs forward inference online, data X is input, and the prediction result of the model is then obtained through the model's forward inference.
Here, a specific application scenario is described, which includes the following steps:
Step 1: prepare the training data X, which includes the intention data and the noise data of the user.
Step 2: select an intent classification algorithm, such as a Convolutional Neural Network (CNN) or a Recurrent Neural Network (RNN).
Step 3: input $(x_i, y_i)$ to the model and perform the forward computation $\hat{y}_i = f(\theta, x_i)$, where $\theta$ represents the parameters of the model and $f(\theta, x_i)$ denotes the model performing forward processing on the input $x_i$ to obtain the result $\hat{y}_i$.
The loss is then computed as $loss_i = L(\hat{y}_i, y_i)$, i.e., the loss obtained for the input data $(x_i, y_i)$ and the corresponding prediction result $\hat{y}_i$.
The gradient of the loss with respect to the input data is $grad_i = \partial\, loss_i / \partial x_i$, and the adversarial sample is $x_i^{adv} = x_i + \epsilon \cdot \mathrm{sign}(grad_i)$, wherein $\epsilon$ is a parameter between 0 and 1 and $\mathrm{sign}(\cdot)$ is the sign function: when $grad_i$ is greater than 0, $\mathrm{sign}(grad_i) = 1$; when $grad_i$ is less than 0, $\mathrm{sign}(grad_i) = -1$.
Step 4: obtain a new data set $X_{ADA} = X \cup X_{adv}$.
Step 5: process any two pieces of data in $X_{ADA}$ in the preset mode to obtain the mixed data set $X_{MIX}$.
Step 6: train the model, wherein a cross entropy loss function is used for the samples of $X_{ADA}$ and a KL divergence loss function is used for the data in $X_{MIX}$; this yields the final model.
Step 7: when a subsequent model test is performed, or when the model performs forward inference online, input data X and obtain the prediction result of the model through the model's forward inference.
In this embodiment, the training data is specially processed in the training stage, and the robustness of the model is enhanced through adversarial training and sample fusion, while the recognition of user intentions is not affected. The algorithm improves intention recognition in the scenario and improves the actual user experience. Meanwhile, the recognition capability of a small-data-volume dialogue system for noise data is improved, avoiding the defect that a large amount of noise data is recognized as positive data; moreover, the scheme can be embedded in deep learning classification algorithms of any type, so the application range is wide.
Example 2
Embodiment 2 of the present invention also discloses a device for identifying noise data, as shown in fig. 2, including:
an obtaining module 201, configured to obtain original training data including intention data and noise data of a user;
a forward reasoning module 202, configured to perform forward reasoning on the original training data to obtain a prediction result;
a loss module 203, configured to perform calculation based on the original training data and the prediction result to obtain a loss result;
a derivation module 204, configured to derive the original training data based on the loss result to obtain gradient data;
a conversion module 205, configured to convert the sample feature data based on the gradient data to obtain new sample feature data; the original training data consists of the sample characteristic data and sample result data corresponding to the sample characteristic data;
a forming module 206 for forming new training data based on the new sample feature data and the sample result data;
a union module 207, configured to perform union processing on the new training data and the training data to obtain a first data set;
the processing module 208 is configured to perform processing in a preset manner on any two pieces of data in the first data set to obtain a second data set;
a training module 209, configured to train the selected intention classification algorithm through the first data set and the second data set to obtain a final model;
and the identification module 210 is used for identifying the noise data in the input intention data through the final model.
In a specific embodiment, the forward processing is performed by the following formula: $\hat{y}_i = f(\theta, x_i)$, wherein $(x_i, y_i)$ is the input original training data; $\theta$ is a model parameter; $f(\theta, x_i)$ represents the function by which the model performs forward processing on the input original training data; $\hat{y}_i$ is the prediction result.
In a specific embodiment, the loss result is obtained by the following formula: $loss_i = L(\hat{y}_i, y_i)$, wherein $\hat{y}_i$ is the prediction result; $(x_i, y_i)$ is the input original training data; $L$ represents a loss function; $loss_i$ is the loss result.
In a specific embodiment, the gradient data is obtained by the following formula: $grad_i = \partial\, loss_i / \partial x_i$.
In a specific embodiment, the new sample feature data is obtained by the following formula: $x_i^{adv} = x_i + \epsilon \cdot \mathrm{sign}(grad_i)$, wherein $\epsilon$ is a parameter between 0 and 1; $\mathrm{sign}(\cdot)$ is the sign function: when $grad_i$ is greater than 0, $\mathrm{sign}(grad_i) = 1$; when $grad_i$ is less than 0, $\mathrm{sign}(grad_i) = -1$; $x_i^{adv}$ is the new sample feature data; $x_i$ is the sample feature data; $y_i$ is the sample result data.
In a specific embodiment, the preset mode is processed by the following formula: $x_{mix} = \lambda x_i + (1 - \lambda) x_j$, wherein $x_i, x_j$ are any two pieces of data in the first data set; $\lambda$ is a weight parameter; $X_{MIX}$ is the second data set.
In a specific embodiment, the selected intent classification algorithm comprises: a convolutional neural network or a recurrent neural network.
In a specific embodiment, the loss function of the final model includes:
a cross entropy loss function for the first data set and a KL divergence loss function for the second data set.
In a specific embodiment, as shown in fig. 3, the device further includes:
the testing module 211, configured to, when the final model is tested, input the original training data for forward inference to obtain a prediction result of the final model, and compare the prediction result of the final model with the sample result data to determine a test result.
Example 3
Embodiment 3 of the present invention further discloses a terminal, as shown in fig. 4, the terminal includes a memory and a processor, and the processor executes the method in embodiment 1 when running an application program in the memory.
The embodiment of the invention provides a method and equipment for identifying noise data, wherein the method comprises the following steps: acquiring original training data including intention data and noise data of a user; carrying out forward inference on the original training data to obtain a prediction result; calculating based on the original training data and the prediction result to obtain a loss result; deriving the original training data based on the loss result to obtain gradient data; converting the sample feature data based on the gradient data to obtain new sample feature data, wherein the original training data consists of the sample feature data and sample result data corresponding to the sample feature data; forming new training data based on the new sample feature data and the sample result data; performing union processing on the new training data and the original training data to obtain a first data set; processing any two pieces of data in the first data set in a preset mode to obtain a second data set; training a selected intention classification algorithm through the first data set and the second data set to obtain a final model; and identifying the noise data in the input intention data through the final model. The scheme specially processes the training data in the training stage, enhances the robustness of the model through adversarial training and sample fusion, avoids the defect that a large amount of noise data is recognized as positive data, and does not affect the recognition of user intentions. The algorithm improves intention recognition in the scenario and improves the actual user experience.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A method for identifying noisy data, comprising:
acquiring original training data including intention data and noise data of a user;
carrying out forward reasoning on the original training data to obtain a prediction result;
calculating based on the original training data and the prediction result to obtain a loss result;
deriving the original training data based on the loss result to obtain gradient data;
converting the sample characteristic data based on the gradient data to obtain new sample characteristic data; the original training data consists of the sample characteristic data and sample result data corresponding to the sample characteristic data;
forming new training data based on the new sample feature data and the sample result data;
performing union processing on the new training data and the training data to obtain a first data set;
processing any two pieces of data in the first data set in a preset mode to obtain a second data set;
training a selected intention classification algorithm through the first data set and the second data set to obtain a final model;
and identifying the noise data in the input intention data through a final model.
2. The method of claim 1, wherein the forward processing is performed by the following formula: $\hat{y}_i = f(\theta, x_i)$, wherein $(x_i, y_i)$ is the input original training data, $\theta$ is a model parameter, and $\hat{y}_i$ is the prediction result.
5. The method of claim 1 or 4, wherein the new sample feature data is obtained by the following formula: $x_i^{adv} = x_i + \epsilon \cdot \mathrm{sign}(grad_i)$,
wherein $\epsilon$ is a parameter between 0 and 1; $\mathrm{sign}(\cdot)$ is the sign function: when $grad_i$ is greater than 0, $\mathrm{sign}(grad_i) = 1$; when $grad_i$ is less than 0, $\mathrm{sign}(grad_i) = -1$; $x_i^{adv}$ is the new sample feature data; $x_i$ is the sample feature data; $y_i$ is the sample result data.
7. The method of claim 1, wherein the selected intent classification algorithm comprises: a convolutional neural network or a recurrent neural network.
8. The method of claim 1, wherein the loss function of the final model comprises:
a cross entropy loss function for the first data set and a KL divergence loss function for the second data set.
9. The method of claim 1, further comprising:
and if the final model is tested, inputting the original training data to carry out forward reasoning to obtain a prediction result of the final model, and comparing the prediction result of the final model with sample result data to determine a test result.
10. An apparatus for recognizing noise data, comprising:
an acquisition module for acquiring original training data including intention data and noise data of a user;
the forward reasoning module is used for carrying out forward reasoning on the original training data to obtain a prediction result;
the loss module is used for calculating based on the original training data and the prediction result to obtain a loss result;
a derivation module, configured to derive the original training data based on the loss result to obtain gradient data;
the conversion module is used for converting the sample characteristic data based on the gradient data to obtain new sample characteristic data; the original training data consists of the sample characteristic data and sample result data corresponding to the sample characteristic data;
a forming module for forming new training data based on the new sample feature data and the sample result data;
the union set module is used for carrying out union set processing on the new training data and the training data to obtain a first data set;
the processing module is used for processing any two data in the first data set in a preset mode to obtain a second data set;
the training module is used for training the selected intention classification algorithm through the first data set and the second data set to obtain a final model;
and the identification module is used for identifying the noise data in the input intention data through the final model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110283194.0A CN112860870B (en) | 2021-03-16 | 2021-03-16 | Noise data identification method and equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112860870A true CN112860870A (en) | 2021-05-28 |
CN112860870B CN112860870B (en) | 2024-03-12 |
Family
ID=75994903
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110283194.0A Active CN112860870B (en) | 2021-03-16 | 2021-03-16 | Noise data identification method and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112860870B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113345426A (en) * | 2021-06-02 | 2021-09-03 | 云知声智能科技股份有限公司 | Voice intention recognition method and device and readable storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106548210A (en) * | 2016-10-31 | 2017-03-29 | 腾讯科技(深圳)有限公司 | Machine learning model training method and device |
CN111931637A (en) * | 2020-08-07 | 2020-11-13 | 华南理工大学 | Cross-modal pedestrian re-identification method and system based on double-current convolutional neural network |
US20200364616A1 (en) * | 2019-05-17 | 2020-11-19 | Robert Bosch Gmbh | Classification robust against multiple perturbation types |
CN112183631A (en) * | 2020-09-28 | 2021-01-05 | 云知声智能科技股份有限公司 | Method and terminal for establishing intention classification model |
Non-Patent Citations (1)
Title |
---|
ZHAO Pengfei; LI Yanling; LIN Min: "Research progress of intent recognition oriented to transfer learning", Journal of Frontiers of Computer Science and Technology, no. 08 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113345426A (en) * | 2021-06-02 | 2021-09-03 | 云知声智能科技股份有限公司 | Voice intention recognition method and device and readable storage medium |
CN113345426B (en) * | 2021-06-02 | 2023-02-28 | 云知声智能科技股份有限公司 | Voice intention recognition method and device and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN112860870B (en) | 2024-03-12 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |