CN108304890B - Generation method and device of classification model - Google Patents
Generation method and device of classification model
- Publication number
- CN108304890B (application CN201810218705.9A)
- Authority
- CN
- China
- Prior art keywords
- data
- classifier
- feature
- training data
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/254—Fusion techniques of classification results, e.g. of results related to same input data
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The application discloses a method and a device for generating a classification model. The method comprises: obtaining target training data, where the target training data comprise data of different divided domains under the same data type; and training a feature generator, a main classifier and an auxiliary classifier with the target training data, where the feature generator transforms the original feature data of the target training data into the same feature space to obtain transformed feature data, the main classifier performs classification prediction on the target training data according to the transformed feature data, and the auxiliary classifier distinguishes the divided domain to which the target training data belong according to the transformed feature data. Because the feature generator is constructed to reduce the domain-distinguishing capability of the auxiliary classifier, training can end once the auxiliary classifier can no longer distinguish the domains; at that point the main classifier is no longer limited by the domain division, so its classification results are more accurate.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for generating a classification model.
Background
Generally, a model A trained with training data from domain A classifies to-be-classified data from domain A with high accuracy, but its accuracy drops when it is used to classify to-be-classified data from domain B. Making model A perform well on domain B as well is the problem addressed by domain adaptation techniques.
With the continuous development of artificial intelligence, domain adaptation plays an important role in more and more fields. In the existing domain adaptation method, model A (a model trained with domain-A training data) is retrained with both the domain-A training data and domain-B training data, where the domain-A training data are supervised data, the domain-B training data are supervised or unsupervised data, supervised data are data labeled in advance with actual classification labels, and unsupervised data are data without such labels. Specifically, when model A is retrained: if the domain-B training data are supervised, model A is retrained with the domain-B and domain-A training data; if the domain-B training data are unsupervised, they are first labeled by model A, that is, classification labels are assigned to them, and model A is then retrained with the labeled domain-B training data and the domain-A training data.
However, when the domain-B training data are unsupervised and are labeled by model A, this cross-domain labeling tends to produce many labeling errors, and retraining model A with erroneous labels usually yields a poor model.
Disclosure of Invention
The embodiment of the present application mainly aims to provide a method and an apparatus for generating a classification model, which can enable the generated classification model to accurately classify data in different fields.
The embodiment of the application provides a method for generating a classification model, which comprises the following steps:
acquiring target training data, wherein the target training data comprises different field data divided under the same data type;
training a feature generator, a main classifier and an auxiliary classifier by using the target training data, wherein the feature generator is used for transforming original feature data of the target training data into the same feature space to obtain transformed feature data, the main classifier is used for carrying out classification prediction on the target training data according to the transformed feature data, and the auxiliary classifier is used for distinguishing a division field to which the target training data belongs according to the transformed feature data;
and if the auxiliary classifier is determined not to be capable of carrying out the field discrimination on the target training data, finishing the training to obtain a classification model comprising the feature generator and the main classifier.
Optionally, some or all of the target training data have actual classification labels; training a feature generator, a main classifier, and an auxiliary classifier using the target training data comprises:
training a feature generator and a main classifier by using data with actual classification labels in the target training data;
training an auxiliary classifier using the target training data.
Optionally, the training the feature generator and the main classifier by using the data with the actual classification label in the target training data includes:
extracting data from the target training data in batches;
for the current extracted data, if the extracted data contains data with actual classification labels, taking the data with the actual classification labels as the current training data;
taking the current training data as input data of the feature generator so that the feature generator transforms the original feature data of the current training data to the same feature space to obtain first transformation data;
taking the first transformation data as input data of the main classifier so that the main classifier can classify and predict the current training data according to the first transformation data and the current training data has a prediction classification label;
and updating the model parameters of the feature generator and the main classifier according to the actual classification label and the prediction classification label corresponding to the current training data to complete the parameter updating of the current round.
Optionally, the training of the auxiliary classifier using the target training data includes:
if the extracted data does not contain data with actual classification labels, updating parameters of the auxiliary classifier by using the extracted data; if the extracted data contains data with actual classification labels, updating parameters of the auxiliary classifier by using the extracted data after the current round of parameter updating is finished;
wherein, the updating the parameters of the auxiliary classifier by using the extracted data specifically includes:
the extracted data is used as input data of the feature generator, so that the feature generator transforms original feature data of the extracted data to the same feature space to obtain second transformation data;
taking the second transformation data as input data of the auxiliary classifier so that the auxiliary classifier can predict the field to which the current training data belongs according to the second transformation data;
and updating the model parameters of the auxiliary classifier according to the actual field and the prediction field corresponding to the extracted data.
Optionally, the determining that the auxiliary classifier cannot perform domain differentiation on the target training data includes:
and if the parameter variation of the feature generator, the main classifier and the auxiliary classifier is smaller than the corresponding set threshold, determining that the auxiliary classifier can not perform the field discrimination on the target training data.
Optionally, the feature generator, the main classifier, and the auxiliary classifier are neural network models.
Optionally, the method further includes:
after finishing training, acquiring data to be classified;
taking the data to be classified as input data of the feature generator, so that the feature generator transforms original feature data of the data to be classified into the same feature space to obtain third transformation data;
and taking the third transformation data as input data of the main classifier, so that the main classifier performs classification prediction on the data to be classified according to the third transformation data, and the data to be classified is provided with a prediction classification label.
The embodiment of the present application further provides a device for generating a classification model, including:
the training data acquisition unit is used for acquiring target training data, wherein the target training data comprises different field data divided under the same data type;
the classification model training unit is used for training a feature generator, a main classifier and an auxiliary classifier by using the target training data, wherein the feature generator is used for transforming original feature data of the target training data into the same feature space to obtain transformed feature data, the main classifier is used for performing classification prediction on the target training data according to the transformed feature data, and the auxiliary classifier is used for distinguishing a division field to which the target training data belongs according to the transformed feature data;
and the classification model generation unit is used for finishing training to obtain a classification model comprising the feature generator and the main classifier if the auxiliary classifier is determined not to be capable of carrying out the field discrimination on the target training data.
The embodiment of the present application further provides a device for generating a classification model, including: a processor, a memory, a system bus;
the processor and the memory are connected through the system bus;
the memory is for storing one or more programs, the one or more programs including instructions, which when executed by the processor, cause the processor to perform the method of any of the above.
Embodiments of the present application also provide a computer-readable storage medium, which includes instructions that, when executed on a computer, cause the computer to perform any one of the methods described above.
The embodiment of the application provides a method and a device for generating a classification model. Target training data are obtained, where the target training data comprise data of different divided domains under the same data type, and a feature generator, a main classifier and an auxiliary classifier are trained with the target training data, so that the feature generator can accurately transform the original feature data of the target training data into the same feature space, the main classifier can accurately predict the classification of each domain's data from the transformed feature data, and the auxiliary classifier can accurately distinguish which domain each sample belongs to from the transformed feature data. In this embodiment, the feature generator is constructed to reduce the domain-distinguishing capability of the auxiliary classifier, i.e., the training objectives of the feature generator and the auxiliary classifier are opposite, so training can end when the auxiliary classifier can no longer distinguish the domains. At that point the main classifier is no longer limited by the domain division when making predictions, so training data from different divided domains receive the same classification treatment and the classification results are more accurate.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a method for generating a classification model according to an embodiment of the present disclosure;
fig. 2 is one of schematic diagrams of a feature generator, a main classifier, and an auxiliary classifier provided in an embodiment of the present application;
fig. 3 is a second schematic flowchart of a method for generating a classification model according to an embodiment of the present application;
fig. 4 is a schematic flow chart of a parameter updating method provided in an embodiment of the present application;
fig. 5 is a second schematic diagram of the feature generator, the main classifier and the auxiliary classifier provided in the embodiment of the present application;
fig. 6 is a schematic flowchart of a data classification method according to an embodiment of the present application;
fig. 7 is a schematic composition diagram of a classification model generation apparatus according to an embodiment of the present application;
fig. 8 is a schematic hardware structure diagram of a classification model generation apparatus according to an embodiment of the present application.
Detailed Description
In the existing domain adaptive method, a model a (which is a model trained by using a domain training data) is retrained by using the a domain training data and the B domain training data, wherein the a domain training data is supervised data, the B domain training data is supervised data or unsupervised data, the supervised data is data labeled with an actual classification label in advance, and the unsupervised data is data not labeled with the actual classification label.
When model A is retrained, if the domain-B training data are supervised data, model A is retrained with the domain-B and domain-A training data. For example, in a speech recognition task, assume that the domain-A training data are Mandarin speech and model A is a Mandarin speech recognition model trained on that speech. When the domain-B training data are Northeastern-dialect speech that has been pre-labeled with actual classification labels (for example, phoneme labels), model A is retrained with the Northeastern-dialect speech and the Mandarin speech to obtain a new speech recognition model.
When model A is retrained, if the domain-B training data are unsupervised data, the domain-B training data are first labeled with model A, that is, classification labels are assigned to them, and model A is then retrained with the labeled domain-B training data and the domain-A training data. For example, in a speech recognition task, assume again that the domain-A training data are Mandarin speech and model A is a Mandarin speech recognition model trained on that speech. When the domain-B training data are Northeastern-dialect speech that has not been pre-labeled with actual classification labels (for example, no phoneme labels), the Northeastern-dialect speech is first labeled with the Mandarin speech recognition model, and model A is then retrained with the labeled Northeastern-dialect speech and the Mandarin speech to obtain a new speech recognition model.
However, when the domain-B training data are unsupervised and are labeled by model A, this cross-domain labeling tends to produce many labeling errors, and retraining model A with erroneous labels usually yields a poor model.
To address the above defects, an embodiment of the present application provides a method and an apparatus for generating a classification model. A batch of training data is collected in advance, where the training data comprise data of different domains divided under the same data type, and a feature generator, a main classifier and an auxiliary classifier are trained with this training data: the feature generator transforms the training data into the same feature space, the main classifier performs classification prediction on the training data according to the transformed feature data, and the auxiliary classifier distinguishes the domain of the training data according to the transformed feature data. The feature generator is constructed to reduce the domain-distinguishing capability of the auxiliary classifier, i.e., the training objectives of the feature generator and the auxiliary classifier are opposite, so training can end when the auxiliary classifier can no longer distinguish the domains. At that point the main classifier is not limited by the domain division when making predictions, so training data from different divided domains receive the same classification treatment and the classification results are more accurate.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
First embodiment
Referring to fig. 1, a schematic flow chart of a method for generating a classification model provided in this embodiment is shown, where the method includes the following steps:
s101: target training data are obtained, wherein the target training data comprise different field data divided under the same data type.
A large amount of data for training the classification model may be collected in advance, and the present embodiment defines the training data as target training data.
In this embodiment, the target training data are of the same data type; for example, the target training data may be speech data, text data, image data or other multimedia data. Each data type may be divided into domains in advance, and the manner of domain division is not limited in this embodiment; for example:
1) Speech data may be divided into domains by dialect type, each dialect type being one divided domain of the speech data. For example, the speech data may be divided into Mandarin speech data, Sichuan-dialect speech data, Northeastern-dialect speech data and so on, with the corresponding divided domains being "Mandarin", "Sichuan dialect", "Northeastern dialect" and so on.
2) Text data may be divided into domains by text source, each text source being one divided domain of the text data. For example, news text data may be divided into mainland news text data and Hong Kong/Macau/Taiwan news text data, with the corresponding divided domains being "mainland news text" and "Hong Kong/Macau/Taiwan news text"; of course, the "Hong Kong/Macau/Taiwan news text" domain may be further divided, for example into "Hong Kong news text", "Macau news text" and "Taiwan news text".
3) Image data may be divided into domains by shooting scene, each shooting scene being one divided domain of the image data. For example, animal picture data may be divided into indoor animal picture data and outdoor animal picture data, with the corresponding divided domains being "indoor animal picture" and "outdoor animal picture".
It will be appreciated that when a certain data type (such as speech data) is used as training data, data from all divided domains of that type should be collected in advance, so that the classification model can classify all data of that type; the collected data are then used as the target training data.
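Purely as an illustration of the domain-division examples above, the sketch below shows one possible way to tag collected samples with a data type, a divided domain, and an optional class label before training; all field names and values are assumptions, not part of the patent.

```python
# Each sample carries its data type, a divided domain (used only by the
# auxiliary classifier as a domain label), and an optional actual
# classification label (None for data without actual labels).
samples = [
    {"type": "speech", "domain": "Mandarin",             "label": "phonemes_001"},
    {"type": "speech", "domain": "Sichuan dialect",      "label": None},
    {"type": "speech", "domain": "Northeastern dialect", "label": None},
]

# Integer domain ids that the auxiliary classifier will later try to predict.
domain_ids = {d: i for i, d in enumerate(sorted({s["domain"] for s in samples}))}
```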
S102: training a feature generator, a primary classifier, and an auxiliary classifier using the target training data.
As shown in fig. 2, the present embodiment includes 3 training objects, which are a feature generator, a main classifier, and an auxiliary classifier. The feature generator is used for transforming original feature data of the target training data into the same feature space to obtain transformed feature data, the main classifier is used for classifying and predicting the target training data according to the transformed feature data, and the auxiliary classifier is used for distinguishing division fields to which the target training data belong according to the transformed feature data.
In this embodiment, the feature generator and the main classifier form a classification model for achieving a desired classification goal, and therefore, the stronger the classification capability of the main classifier, the higher the accuracy of the classification result.
It should be noted that the auxiliary classifier is constructed so that it can distinguish, from the feature data output by the feature generator, the domain categories of the multi-domain data within the target training data, whereas the feature generator is constructed to reduce this domain-distinguishing capability of the auxiliary classifier. Their training objectives are therefore opposite, forming adversarial training. On this basis, once the auxiliary classifier can no longer distinguish the domains, it indicates that the feature generator can accurately transform the original feature data of the target training data into the same feature space. The main classifier is then no longer limited by the domain division when making predictions, so it classifies the training data of every divided domain equally well and produces accurate classification predictions for each of them.
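For illustration only, one common way to realize these opposed training objectives in a neural-network setting is a gradient-reversal layer. The patent does not name this technique, so the PyTorch-style sketch below (with illustrative layer sizes and module names) should be read as an assumption about how the adversarial relation between the feature generator and the auxiliary classifier could be wired up, not as the claimed implementation.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward
    pass, so training the auxiliary (domain) classifier simultaneously pushes
    the generator to *reduce* its domain-distinguishing ability."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class DomainAdversarialModel(nn.Module):
    def __init__(self, in_dim, feat_dim, num_classes, num_domains):
        super().__init__()
        # Feature generator: maps raw features of every domain into one shared space.
        self.generator = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU(),
                                       nn.Linear(feat_dim, feat_dim), nn.ReLU())
        # Main classifier: predicts the task label from the shared features.
        self.main_clf = nn.Linear(feat_dim, num_classes)
        # Auxiliary classifier: tries to tell the divided domains apart.
        self.aux_clf = nn.Linear(feat_dim, num_domains)

    def forward(self, x):
        feat = self.generator(x)
        class_logits = self.main_clf(feat)
        # Gradient reversal gives the generator and the auxiliary classifier
        # opposite objectives, as described above.
        domain_logits = self.aux_clf(grad_reverse(feat))
        return class_logits, domain_logits
```

With such a formulation the two opposed objectives can be optimized through a single combined loss; the embodiments below instead describe alternating updates of the main and auxiliary paths, which pursue the same goal.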
For ease of understanding this step, the following description takes target training data comprising domain-A training data and domain-B training data as an example:
In the model training process, feature data can be extracted from the domain-A training data according to the feature extraction scheme of domain A, and from the domain-B training data according to the feature extraction scheme of domain B. The feature generator then applies a feature transformation to the original feature data of each domain, mapping them into a preset feature space: since the feature dimensions and feature types of the domain-A feature space and the domain-B feature space may not be exactly the same as those of the transformed feature space, the feature transformation is what gives the transformed feature data of both domains the feature dimensions and feature types of that shared space. The main classifier can then perform classification prediction on the domain-A and domain-B training data according to the transformed feature data, and the auxiliary classifier can distinguish the domains of the domain-A and domain-B training data according to the transformed feature data.
Specifically, taking the domain-A training data as Mandarin speech data and the domain-B training data as Sichuan-dialect speech data as an example, acoustic features can be extracted from both, and the feature generator transforms those acoustic features into the same feature space. The main classifier can then perform classification prediction on the Mandarin and Sichuan-dialect speech data according to the transformed acoustic features, where the classification prediction result may be Chinese phoneme classes; the auxiliary classifier can distinguish the domain of the Mandarin and Sichuan-dialect speech data according to the transformed acoustic features, where the domain result may be Mandarin or Sichuan dialect.
Next, the model parameters of the feature generator, the main classifier and the auxiliary classifier can be adjusted according to the classification prediction result of the main classifier and the domain-distinguishing result of the auxiliary classifier, so that after multiple successive adjustments the auxiliary classifier no longer has the domain-distinguishing capability.
S103: and if the auxiliary classifier is determined not to be capable of carrying out the field discrimination on the target training data, finishing the training to obtain a classification model comprising the feature generator and the main classifier.
As described above, the model parameters of the feature generator, the main classifier and the auxiliary classifier are adjusted continuously until the auxiliary classifier can no longer distinguish the domains of the target training data (for example, can no longer tell the domain-A training data from the domain-B training data); training then ends, and the trained feature generator and main classifier form the classification model.
In summary, the method for generating a classification model according to this embodiment obtains target training data comprising data of different domains divided under the same data type, and trains the feature generator, the main classifier and the auxiliary classifier with the target training data, so that the feature generator can accurately transform the original feature data of the target training data into the same feature space, the main classifier can accurately predict the classification of each domain's data from the transformed feature data, and the auxiliary classifier can accurately distinguish which domain each sample belongs to from the transformed feature data. In this embodiment the feature generator is constructed to reduce the domain-distinguishing capability of the auxiliary classifier, i.e., the training objectives of the feature generator and the auxiliary classifier are opposite, so training can end when the auxiliary classifier can no longer distinguish the domains. At that point the main classifier is no longer limited by the domain division when making predictions, so training data from different divided domains receive the same classification treatment and the classification results are more accurate.
Second embodiment
The present embodiment will describe a specific implementation manner of step S102 in the first embodiment by following step S302.
Referring to fig. 3, a schematic flow chart of a method for generating a classification model provided in this embodiment is shown, where the method includes the following steps:
s301: target training data are obtained, wherein the target training data comprise different field data divided under the same data type.
It should be noted that step S301 is the same as step S101 in the first embodiment, and for related description, reference is made to the first embodiment, which is not repeated herein.
S302: training a feature generator and a main classifier by using data with actual classification labels in the target training data; training an auxiliary classifier using the target training data.
The feature generator is used for transforming original feature data of the target training data into the same feature space to obtain transformed feature data, the main classifier is used for carrying out classification prediction on the target training data according to the transformed feature data, and the auxiliary classifier is used for distinguishing a division field to which the target training data belongs according to the transformed feature data.
In this embodiment, some or all of the target training data carry actual classification labels. Specifically, when the target training data are collected, the data of at least one divided domain should carry classification labels, and these should be accurate, i.e., actual, classification labels. Taking speech data as an example, when the target training data comprise Mandarin speech data, Sichuan-dialect speech data and Northeastern-dialect speech data, the recognition text of the Mandarin speech data can be annotated with the corresponding actual Chinese phonemes as its actual classification labels, while the Sichuan-dialect and Northeastern-dialect speech data carry no actual classification labels.
During model training, the data that carry actual classification labels are used to train the feature generator and the main classifier, while the training data for the auxiliary classifier are not restricted by whether they carry actual classification labels, i.e., all of the target training data can be used to train the auxiliary classifier.
Specifically, when the feature generator, the main classifier and the auxiliary classifier are trained, their model parameters can be optimized over multiple rounds of training. To this end, the target training data may be divided into multiple batches, for example 100 batches. One batch is selected first and used to perform one round of updates on the model parameters of the feature generator, the main classifier and the auxiliary classifier; after that round, the next batch is selected and, starting from the previous round's result, used to perform another round of updates, and so on. When the parameter changes of the feature generator, the main classifier and the auxiliary classifier all fall below their corresponding set thresholds, or when all batches of training data have been used, the auxiliary classifier is considered unable to distinguish the domains of the target training data, and the parameter updating ends.
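A minimal control-flow sketch of the batched update schedule and stopping criterion described above. The batch format (a tuple of features, optional actual labels, and domain ids), the helper callables `update_main_path` and `update_aux_path`, and the threshold value are illustrative assumptions, not taken from the patent.

```python
import copy

def max_param_change(old_state, new_state):
    """Largest absolute parameter change between two state_dicts."""
    return max((new_state[k] - old_state[k]).abs().max().item()
               for k in new_state)

def train(generator, main_clf, aux_clf, batches,
          update_main_path, update_aux_path, threshold=1e-4):
    """Alternate the two update paths batch by batch, stopping when all batches
    are used up or every component's parameters stop changing noticeably."""
    for x, labels, domains in batches:     # labels is None for unsupervised batches
        old = [copy.deepcopy(m.state_dict())
               for m in (generator, main_clf, aux_clf)]

        if labels is not None:
            # Batch carries actual classification labels: update the feature
            # generator and main classifier first (S402-S405).
            update_main_path(generator, main_clf, (x, labels))
        # Every batch, labeled or not, updates the auxiliary classifier (S406-S408).
        update_aux_path(generator, aux_clf, (x, domains))

        new = [m.state_dict() for m in (generator, main_clf, aux_clf)]
        if all(max_param_change(o, n) < threshold for o, n in zip(old, new)):
            break   # treated as: the auxiliary classifier can no longer tell domains apart
    return generator, main_clf
```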
For example, assume that the target training data comprise Mandarin speech data, Sichuan-dialect speech data and Northeastern-dialect speech data, where only the Mandarin speech data carry actual classification labels and neither the Sichuan-dialect nor the Northeastern-dialect speech data do. If the current batch of training data contains Mandarin speech data, the model parameters of the feature generator and the main classifier are updated first, and then the model parameters of the auxiliary classifier are updated; if the current batch contains no Mandarin speech data, i.e., only Sichuan-dialect and/or Northeastern-dialect speech data, the model parameters of the feature generator and the main classifier are not updated and only those of the auxiliary classifier are updated.
It should be noted that, for how to update the model parameters by using the training data of the current batch, please refer to the third embodiment below.
S303: and if the auxiliary classifier is determined not to be capable of carrying out the field discrimination on the target training data, finishing the training to obtain a classification model comprising the feature generator and the main classifier.
It should be noted that step S303 is the same as step S103 in the first embodiment, and for related description, reference is made to the first embodiment, which is not repeated herein.
In summary, the method for generating a classification model provided in this embodiment obtains target training data comprising data of different domains divided under the same data type, where some or all of the target training data carry actual classification labels. The data with actual classification labels are used to train the feature generator and the main classifier, so that the feature generator can accurately transform the original feature data of the target training data into the same feature space and the main classifier can produce accurate classification predictions for each domain's data from the transformed feature data; the whole of the target training data is used to train the auxiliary classifier, so that it can accurately distinguish which domain each sample belongs to from the transformed feature data. In this embodiment the feature generator is constructed to reduce the domain-distinguishing capability of the auxiliary classifier, i.e., their training objectives are opposite, so training can end when the auxiliary classifier can no longer distinguish the domains. At that point the main classifier is no longer limited by the domain division when making predictions, so training data from different divided domains receive the same classification treatment and the classification results are more accurate.
Third embodiment
The present embodiment will describe a specific implementation manner of step S302 in the second embodiment.
Referring to fig. 4, a schematic flow chart of a parameter updating method provided in this embodiment is shown, where the parameter updating method includes the following steps:
s401: judging whether the current extracted data has data with actual classification labels or not; if yes, go to S402; if not, go to step S406.
S402: and taking the data with the actual classification label as the current training data.
Since all or part of the extracted data is provided with the actual classification label, the data with the actual classification label can be used as the training data of the current round of the feature generator and the main classifier.
S403: and taking the current training data as input data of the feature generator so that the feature generator can transform the original feature data of the current training data to the same feature space to obtain first transformation data.
In this embodiment, as shown in fig. 5, the model structure of the feature generator may be a Neural Network model, and specifically may be a one-layer or multi-layer Neural Network model, such as a multi-layer Deep Neural Network (DNN) model, a multi-layer Recurrent Neural Network (RNN), a multi-layer Convolutional Neural Network (CNN), and the like.
As described in the above embodiments, the feature generator is configured to perform feature transformation on the multi-domain data, and transform the multi-domain data into the same feature space. Therefore, when the current training data is input into the feature generator (the feature generator is the latest feature generator), the feature generator outputs the transformed feature data, the specific transformation method may be different according to the specific network structure, for example, when the network structure is a multi-layer DNN, the feature transformation is performed on the input data according to the feature transformation method between layers of the DNN, and this step defines the transformed feature data as the first transformed data.
For example, assuming that the current training data is multi-domain speech data, the input data of the feature generator is acoustic features extracted from the multi-domain speech data, and the output is feature data obtained by transforming the acoustic features.
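A minimal sketch of such a feature generator as a multi-layer DNN, assuming illustrative layer sizes and a 39-dimensional acoustic input; an RNN or CNN variant as mentioned above would only change the layers inside `net`.

```python
import torch
import torch.nn as nn

class FeatureGenerator(nn.Module):
    """Multi-layer DNN mapping raw, domain-specific feature vectors into one
    shared feature space (step S403); layer sizes here are illustrative."""
    def __init__(self, in_dim: int, shared_dim: int = 256, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, shared_dim), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Example: a batch of 8 frames of 39-dimensional acoustic features (assumed sizes)
gen = FeatureGenerator(in_dim=39)
first_transform = gen(torch.randn(8, 39))   # "first transformation data", shape (8, 256)
```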
S404: and taking the first transformation data as input data of the main classifier, so that the main classifier performs classification prediction on the current training data according to the first transformation data, and the current training data has a prediction classification label.
In this embodiment, as shown in fig. 5, the model structure of the main classifier may be a neural network model, and specifically may be a one-layer or multi-layer neural network model, such as a multi-layer DNN model, a multi-layer RNN model, a multi-layer CNN model, and the like.
As described in the above embodiments, the main classifier is used for performing classification prediction on the multi-domain data according to the transformed feature data output by the feature generator, where the main classifier has at least two classification labels. Therefore, when the first transformation data are input into the main classifier (the latest main classifier), the main classifier outputs the probability of each classification label for the current training data, and the classification label with the maximum probability is selected as the predicted classification label of the current training data.
For example, assume that the current training data are still multi-domain speech data comprising one or more of Mandarin speech data, Sichuan-dialect speech data and Northeastern-dialect speech data. The input of the feature generator is the acoustic features extracted from the multi-domain speech data, and its output is the transformed feature data. The input of the main classifier is that transformed feature data, and its output, for the acoustic features of each frame of speech in the current training data, is the posterior probability of each Chinese phoneme; the Chinese phoneme with the maximum probability is selected as the predicted classification label of the corresponding acoustic features.
S405: and updating the model parameters of the feature generator and the main classifier according to the actual classification label and the prediction classification label corresponding to the current training data to complete the parameter updating of the current round.
In this embodiment, the initial model parameters of the feature generator and the main classifier may be preset, and when updating the parameters, the model parameters of the feature generator and the main classifier are updated by using the actual classification label carried by the current training data and the predicted classification label predicted by the main classifier. For example, a Cross Entropy (CE) criterion may be used for parameter updating to obtain the feature generator and the main classifier after parameter updating.
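A hedged sketch of this round of parameter updating (S403-S405), assuming PyTorch-style modules like those sketched earlier. The cross-entropy criterion is named in the text; the SGD optimizer and learning rate are assumptions, and in practice the optimizer would persist across rounds rather than be re-created on every call.

```python
import torch
import torch.nn as nn

def update_main_path(generator, main_clf, labeled_batch, lr=1e-3):
    """One round of S403-S405: transform the labeled data, predict task labels,
    and update the feature generator and main classifier with cross entropy."""
    x, y = labeled_batch                       # features and actual class labels
    opt = torch.optim.SGD(list(generator.parameters()) +
                          list(main_clf.parameters()), lr=lr)
    criterion = nn.CrossEntropyLoss()          # the CE criterion mentioned above

    first_transform = generator(x)             # "first transformation data"
    logits = main_clf(first_transform)
    predicted = logits.argmax(dim=1)           # predicted classification labels

    loss = criterion(logits, y)                # actual vs predicted labels
    opt.zero_grad()
    loss.backward()
    opt.step()
    return predicted, loss.item()
```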
S406: and taking the extracted data as input data of the feature generator so that the feature generator transforms the original feature data of the extracted data to the same feature space to obtain second transformation data.
Note that, when the feature generator performs parameter update in S405, the feature generator in this step is a feature generator after the parameter update.
Step S406 is similar to step S403, so please refer to step S403 for the related parts, which are not repeated here. The difference between step S406 and step S403 is that, because the model parameters of the feature generator and the input data differ, the transformed feature data output by the feature generator may differ; therefore, step S406 defines the transformed feature data as the second transformation data.
S407: and taking the second transformation data as input data of the auxiliary classifier so that the auxiliary classifier can predict the field to which the current training data belongs according to the second transformation data.
In this embodiment, as shown in fig. 5, the model structure of the auxiliary classifier may be a neural network model, and specifically may be a one-layer or multi-layer neural network model, such as a multi-layer DNN model, a multi-layer RNN model, a multi-layer CNN model, and the like.
As described in the above embodiments, the auxiliary classifier is configured to perform domain classification on the multi-domain data according to the transformed feature data output by the feature generator. Therefore, when the second conversion data is input to the auxiliary classifier (which is the latest auxiliary classifier), the auxiliary classifier outputs the probability that the extracted data belongs to each divided region, and the divided region corresponding to the maximum value of the probability is selected as the region to which the extracted data belongs.
For example, assume that the extracted data are multi-domain speech data comprising one or more of Mandarin speech data, Sichuan-dialect speech data and Northeastern-dialect speech data. The auxiliary classifier outputs, for each utterance in the extracted data, the probabilities that it belongs to Mandarin, Sichuan dialect and Northeastern dialect, and the dialect type with the maximum probability is selected as the domain to which that utterance belongs.
S408: and updating the model parameters of the auxiliary classifier according to the actual field and the prediction field corresponding to the extracted data.
In this embodiment, the initial model parameters of the auxiliary classifier may be preset. When the parameters are updated, the current auxiliary classifier is updated using the actual domain and the predicted domain corresponding to the extracted data; for example, a Cross Entropy (CE) criterion may be used for the parameter update, yielding the updated auxiliary classifier.
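A hedged sketch of the auxiliary-path update (S406-S408) under the same assumptions as the previous sketch: only the auxiliary classifier's parameters are updated here, and cross entropy over the actual versus predicted domains is the criterion named in the text.

```python
import torch
import torch.nn as nn

def update_aux_path(generator, aux_clf, batch, lr=1e-3):
    """One round of S406-S408: transform the extracted data, predict the domain
    of each sample, and update the auxiliary classifier with cross entropy."""
    x, domain_ids = batch                       # features and actual domain ids
    opt = torch.optim.SGD(aux_clf.parameters(), lr=lr)   # only the aux classifier
    criterion = nn.CrossEntropyLoss()

    with torch.no_grad():                       # the generator is not updated in S408
        second_transform = generator(x)         # "second transformation data"
    domain_logits = aux_clf(second_transform)
    predicted_domain = domain_logits.argmax(dim=1)

    loss = criterion(domain_logits, domain_ids)  # actual vs predicted domains
    opt.zero_grad()
    loss.backward()
    opt.step()
    return predicted_domain, loss.item()
```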
In summary, in the method for generating a classification model provided in this embodiment, when the parameters of the feature generator, the main classifier and the auxiliary classifier are updated, data are extracted from the target training data in batches. For each batch of extracted data, if all or part of it carries actual classification labels, the extracted data are used to update the model parameters of the feature generator, the main classifier and the auxiliary classifier; if none of it carries actual classification labels, the extracted data are used to update only the model parameters of the auxiliary classifier, following the updating procedure described above. This continues until the parameter changes of the feature generator, the main classifier and the auxiliary classifier are smaller than their corresponding set thresholds, or until all batches of training data have been used.
Fourth embodiment
Further, after the classification model including the feature generator and the main classifier is obtained through training, the classification model may be used to classify the data to be classified.
Referring to fig. 6, a schematic flow chart of a data classification method provided in this embodiment is shown, where the data classification method includes the following steps:
s601: and after finishing training, acquiring data to be classified.
S602: and taking the data to be classified as input data of the feature generator, so that the feature generator converts the original feature data of the data to be classified into the same feature space to obtain third conversion data.
In this embodiment, after the data to be classified is input into the trained classification model, the classification model transforms the original feature data of the data to be classified into the same feature space by using the feature generator, and this step defines the transformed feature data as third transformed data.
S603: and taking the third transformation data as input data of the main classifier, so that the main classifier performs classification prediction on the data to be classified according to the third transformation data, and the data to be classified is provided with a prediction classification label.
In this embodiment, after the feature generator of the classification model outputs the third transformation data, the classification model performs classification prediction on the data to be classified by using the main classifier, and outputs a prediction classification result of the data to be classified.
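A minimal inference sketch of steps S601-S603, reusing a generator and main classifier of the kind sketched earlier; names and shapes are illustrative assumptions.

```python
import torch

@torch.no_grad()
def classify(generator, main_clf, x):
    """S601-S603: transform the data to be classified into the shared feature
    space ("third transformation data") and let the main classifier predict."""
    third_transform = generator(x)
    logits = main_clf(third_transform)
    return logits.argmax(dim=1)          # predicted classification labels
```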
It should be noted that, when the classification model is used to classify the data to be classified, the parameters of the feature generator, the main classifier and the auxiliary classifier can be adjusted according to the actual classification result and the predicted classification result of the data to be classified, so that the classification prediction result of the main classifier is more accurate.
In summary, in the data classification method provided in this embodiment, for the classification model including the feature generator and the main classifier, since the classification model is no longer limited by the classification field when performing classification prediction, the classification model has the same classification effect on the data to be classified belonging to different classification fields, and therefore, compared with the prior art, when performing prediction classification on the data to be classified by using the classification model, the classification result is more accurate.
Fifth embodiment
Referring to fig. 7, a schematic composition diagram of a generation apparatus of a classification model provided in this embodiment is shown, where the generation apparatus 700 includes:
a training data obtaining unit 701, configured to obtain target training data, where the target training data includes different field data divided under the same data type;
a classification model training unit 702, configured to train a feature generator, a main classifier, and an auxiliary classifier by using the target training data, where the feature generator is configured to transform original feature data of the target training data into the same feature space to obtain transformed feature data, the main classifier is configured to perform classification prediction on the target training data according to the transformed feature data, and the auxiliary classifier is configured to distinguish a partition domain to which the target training data belongs according to the transformed feature data;
a classification model generating unit 703, configured to end training to obtain a classification model including the feature generator and the main classifier if it is determined that the auxiliary classifier cannot perform domain classification on the target training data.
In one implementation manner of this embodiment, some or all of the target training data have actual classification labels; the classification model training unit 702 may include:
a main classification training subunit, configured to train a feature generator and a main classifier by using data with actual classification labels in the target training data;
and the auxiliary classification training subunit is used for training an auxiliary classifier by using the target training data.
In an implementation manner of this embodiment, the main classification training subunit may include:
the training data extraction subunit is used for extracting data from the target training data in batches;
the training data determining subunit is used for determining the current extracted data, and if the extracted data contains data with actual classification labels, using the data with the actual classification labels as the current training data;
the first feature transformation subunit is used for taking the current training data as the input data of the feature generator so that the feature generator transforms the original feature data of the current training data to the same feature space to obtain first transformation data;
the classification label prediction subunit is used for taking the first transformation data as input data of the main classifier so that the main classifier can carry out classification prediction on the current training data according to the first transformation data and the current training data is provided with a prediction classification label;
and the first parameter updating subunit is used for updating the model parameters of the feature generator and the main classifier according to the actual classification label and the predicted classification label corresponding to the current training data to complete the current parameter updating.
In an implementation manner of this embodiment, the auxiliary classification training subunit is specifically configured to, if there is no data with an actual classification label in the extracted data, perform parameter update on the auxiliary classifier by using the extracted data; if the extracted data contains data with actual classification labels, updating parameters of the auxiliary classifier by using the extracted data after the current round of parameter updating is finished;
wherein the auxiliary classification training subunit comprises:
the second feature transformation subunit is used for taking the extracted data as input data of the feature generator, so that the feature generator transforms the original feature data of the extracted data to the same feature space to obtain second transformation data;
a domain division subunit, configured to use the second transformation data as input data of the auxiliary classifier, so that the auxiliary classifier predicts a domain to which current training data belongs according to the second transformation data;
and the second parameter updating subunit is used for updating the model parameters of the auxiliary classifier according to the actual field and the prediction field corresponding to the extracted data.
In an implementation manner of this embodiment, the classification model generating unit 703 may be specifically configured to determine that the auxiliary classifier cannot perform domain classification on the target training data if parameter variation amounts of the feature generator, the main classifier, and the auxiliary classifier are smaller than corresponding set thresholds.
In one implementation of this embodiment, the feature generator, the main classifier, and the auxiliary classifier may be neural network models.
In an implementation manner of this embodiment, the apparatus 700 may further include:
the data acquisition unit to be classified is used for acquiring data to be classified after training is finished;
the data feature transformation unit is used for taking the data to be classified as input data of the feature generator so that the feature generator transforms original feature data of the data to be classified into the same feature space to obtain third transformation data;
and the classification result acquisition unit is used for taking the third transformation data as input data of the main classifier so that the main classifier can classify and predict the data to be classified according to the third transformation data and the data to be classified has a prediction classification label.
Sixth embodiment
Referring to fig. 8, a schematic diagram of a hardware structure of a classification model generation apparatus provided for this embodiment, the generation apparatus 800 includes a memory 801 and a receiver 802, and a processor 803 connected to the memory 801 and the receiver 802 respectively, the memory 801 is used to store a set of program instructions, and the processor 803 is used to call the program instructions stored in the memory 801 to perform the following operations:
acquiring target training data, wherein the target training data comprises different field data divided under the same data type;
training a feature generator, a main classifier and an auxiliary classifier by using the target training data, wherein the feature generator is used for transforming original feature data of the target training data into the same feature space to obtain transformed feature data, the main classifier is used for carrying out classification prediction on the target training data according to the transformed feature data, and the auxiliary classifier is used for distinguishing a division field to which the target training data belongs according to the transformed feature data;
and if the auxiliary classifier is determined not to be capable of carrying out the field discrimination on the target training data, finishing the training to obtain a classification model comprising the feature generator and the main classifier.
In an implementation manner of this embodiment, some or all of the target training data carry actual classification labels, and the processor 803 is further configured to call the program instructions stored in the memory 801 to perform the following operations:
training a feature generator and a main classifier by using data with actual classification labels in the target training data;
training an auxiliary classifier using the target training data.
In one implementation manner of this embodiment, the processor 803 is further configured to call the program instructions stored in the memory 801 to perform the following operations:
extracting data from the target training data in batches;
for the current extracted data, if the extracted data contains data with actual classification labels, taking the data with the actual classification labels as the current training data;
taking the current training data as input data of the feature generator so that the feature generator transforms the original feature data of the current training data to the same feature space to obtain first transformation data;
taking the first transformation data as input data of the main classifier so that the main classifier performs classification prediction on the current training data according to the first transformation data and the current training data is given a prediction classification label;
and updating the model parameters of the feature generator and the main classifier according to the actual classification label and the prediction classification label corresponding to the current training data to complete the parameter updating of the current round; an illustrative form of this update step is sketched below.
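The sketch below shows one such round of parameter updating, continuing the names introduced earlier; `batch_x`, `batch_y`, and `optimizer_gc` are illustrative assumptions. The labeled samples pass through the feature generator to obtain the first transformation data, the main classifier produces prediction classification labels, and both modules are updated from the cross-entropy between actual and predicted labels.

```python
import torch.nn.functional as F

def update_generator_and_main(batch_x, batch_y, generator, main_clf, optimizer_gc):
    """One round of parameter updating for the feature generator and main classifier,
    using only the extracted samples that carry actual classification labels."""
    first_transformation = generator(batch_x)    # original features -> same feature space
    logits = main_clf(first_transformation)      # prediction classification labels (as logits)
    loss = F.cross_entropy(logits, batch_y)      # actual vs. predicted classification labels
    optimizer_gc.zero_grad()
    loss.backward()
    optimizer_gc.step()                          # updates the parameters of both modules
    return loss.item()
```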
In one implementation manner of this embodiment, the processor 803 is further configured to call the program instructions stored in the memory 801 to perform the following operations:
if the extracted data does not contain data with actual classification labels, updating parameters of the auxiliary classifier by using the extracted data; if the extracted data contains data with actual classification labels, updating parameters of the auxiliary classifier by using the extracted data after the current round of parameter updating is finished;
when the extracted data is used for updating the parameters of the auxiliary classifier, the extracted data is specifically used as input data of the feature generator, so that the feature generator can transform original feature data of the extracted data to the same feature space to obtain second transformation data;
taking the second transformation data as input data of the auxiliary classifier so that the auxiliary classifier predicts the domain to which the extracted data belongs according to the second transformation data;
and updating the model parameters of the auxiliary classifier according to the actual domain and the predicted domain corresponding to the extracted data; this update is sketched below.
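A corresponding sketch for the auxiliary classifier update, again with assumed names (`batch_x`, `batch_domain`, `optimizer_aux`). Only the auxiliary classifier's parameters are updated here; how the feature generator is driven to weaken the auxiliary classifier's domain discrimination (for example through a gradient reversal layer or an adversarial loss term) is not spelled out in this passage, so it is deliberately left out of the sketch.

```python
import torch.nn.functional as F

def update_auxiliary(batch_x, batch_domain, generator, aux_clf, optimizer_aux):
    """Updates the auxiliary classifier from the actual vs. predicted domain of the extracted data."""
    second_transformation = generator(batch_x).detach()  # same feature space; generator is not updated here
    domain_logits = aux_clf(second_transformation)       # predicted domain for each sample
    loss = F.cross_entropy(domain_logits, batch_domain)  # actual domain vs. predicted domain
    optimizer_aux.zero_grad()
    loss.backward()
    optimizer_aux.step()
    return loss.item()
```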
In one implementation manner of this embodiment, the processor 803 is further configured to call the program instructions stored in the memory 801 to perform the following operations:
and if the parameter variations of the feature generator, the main classifier and the auxiliary classifier are all smaller than the corresponding set thresholds, determining that the auxiliary classifier cannot perform domain discrimination on the target training data. An illustrative convergence check is sketched below.
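The passage does not fix how "parameter variation" is measured; the sketch below assumes the L2 norm of the difference between consecutive flattened parameter vectors, compared against a per-module threshold, which is one plausible reading rather than the patented criterion. Module names and threshold values are hypothetical.

```python
import torch

def parameters_converged(modules: dict, prev_params: dict, thresholds: dict) -> bool:
    """Returns True when every module's parameter change since the previous round
    is below its set threshold, i.e. training can be considered finished."""
    converged = True
    for name, module in modules.items():
        current = torch.cat([p.detach().flatten() for p in module.parameters()])
        if torch.norm(current - prev_params[name]) >= thresholds[name]:
            converged = False
        prev_params[name] = current              # remember for the next round
    return converged

# Illustrative usage (assumed names and values):
# modules = {"generator": generator, "main": main_clf, "aux": aux_clf}
# prev_params = {k: torch.cat([p.detach().flatten() for p in m.parameters()])
#                for k, m in modules.items()}
# thresholds = {"generator": 1e-4, "main": 1e-4, "aux": 1e-4}
```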
In one implementation of this embodiment, the feature generator, the main classifier, and the auxiliary classifier are neural network models.
In one implementation manner of this embodiment, the processor 803 is further configured to call the program instructions stored in the memory 801 to perform the following operations:
after finishing training, acquiring data to be classified;
taking the data to be classified as input data of the feature generator, so that the feature generator transforms original feature data of the data to be classified into the same feature space to obtain third transformation data;
and taking the third transformation data as input data of the main classifier, so that the main classifier performs classification prediction on the data to be classified according to the third transformation data and the data to be classified is given a prediction classification label. A sketch of this inference path is given below.
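After training ends, only the feature generator and the main classifier are needed; the following is a sketch of the resulting classification model applied to data to be classified (the function and variable names are assumptions).

```python
import torch

@torch.no_grad()
def classify(sample_x, generator, main_clf):
    """Applies the trained classification model: feature generator followed by main classifier."""
    third_transformation = generator(sample_x)             # data to be classified -> same feature space
    return main_clf(third_transformation).argmax(dim=-1)   # prediction classification label
```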
In some embodiments, the processor 803 may be a Central Processing Unit (CPU), the memory 801 may be a Random Access Memory (RAM) type of internal memory, and the receiver 802 may include a common physical interface, such as an Ethernet interface or an Asynchronous Transfer Mode (ATM) interface. The processor 803, the receiver 802, and the memory 801 may be integrated into one or more separate circuits or hardware units, such as an Application Specific Integrated Circuit (ASIC).
Further, the present embodiment also provides a computer-readable storage medium, which includes instructions that, when executed on a computer, cause the computer to execute any implementation of the above method for generating a classification model.
As can be seen from the above description of the embodiments, those skilled in the art can clearly understand that all or part of the steps in the above embodiment methods can be implemented by software plus a necessary general hardware platform. Based on such understanding, the technical solution of the present application may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network communication device such as a media gateway, etc.) to execute the method according to the embodiments or some parts of the embodiments of the present application.
It should be noted that, in the present specification, the embodiments are described in a progressive manner, each embodiment focuses on its differences from the other embodiments, and the same or similar parts among the embodiments may be referred to each other. Since the device disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is kept brief, and the relevant points can be found in the description of the method.
It is further noted that, herein, relational terms such as first and second, and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. A method for generating a classification model, comprising:
acquiring target training data, wherein the target training data is voice data, text data, image data or data in another multimedia form, and the target training data comprises voice data in different domains, text data in different domains, image data in different domains or other multimedia data in different domains, divided under the same data type;
training a feature generator, a main classifier and an auxiliary classifier by using the target training data, wherein the feature generator is used for transforming original feature data of the target training data into the same feature space to obtain transformed feature data, the main classifier is used for carrying out classification prediction on the target training data according to the transformed feature data, and the auxiliary classifier is used for discriminating the divided domain to which the target training data belongs according to the transformed feature data;
and if it is determined that the auxiliary classifier cannot discriminate the domain of the target training data, finishing the training to obtain a classification model which comprises the feature generator and the main classifier and which can classify the voice data to be classified, the text data to be classified, the image data to be classified or the data to be classified in another multimedia form.
2. The method of claim 1, wherein some or all of the target training data is labeled with an actual classification; training a feature generator, a main classifier, and an auxiliary classifier using the target training data comprises:
training a feature generator and a main classifier by using data with actual classification labels in the target training data;
training an auxiliary classifier using the target training data.
3. The method of claim 2, wherein training a feature generator and a main classifier using data with actual classification labels in the target training data comprises:
extracting data from the target training data in batches;
for the currently extracted data, if the extracted data contains data with actual classification labels, taking the data with the actual classification labels as the current training data;
taking the current training data as input data of the feature generator so that the feature generator transforms the original feature data of the current training data to the same feature space to obtain first transformation data;
taking the first transformation data as input data of the main classifier so that the main classifier performs classification prediction on the current training data according to the first transformation data and the current training data is given a prediction classification label;
and updating the model parameters of the feature generator and the main classifier according to the actual classification label and the prediction classification label corresponding to the current training data to complete the parameter updating of the current round.
4. The method of claim 3, wherein training an auxiliary classifier using the target training data comprises:
if the extracted data does not contain data with actual classification labels, updating parameters of the auxiliary classifier by using the extracted data; if the extracted data contains data with actual classification labels, updating parameters of the auxiliary classifier by using the extracted data after the current round of parameter updating is finished;
wherein, when the extracted data does not contain data with actual classification labels, updating the parameters of the auxiliary classifier by using the extracted data specifically comprises:
the extracted data is used as input data of the feature generator, so that the feature generator transforms original feature data of the extracted data to the same feature space to obtain second transformation data;
taking the second transformation data as input data of the auxiliary classifier so that the auxiliary classifier predicts the domain to which the extracted data belongs according to the second transformation data;
and updating the model parameters of the auxiliary classifier according to the actual domain and the predicted domain corresponding to the extracted data.
5. The method of claim 4, wherein the determining that the auxiliary classifier is unable to perform domain discrimination on the target training data comprises:
and if the parameter variations of the feature generator, the main classifier and the auxiliary classifier are all smaller than the corresponding set thresholds, determining that the auxiliary classifier cannot perform domain discrimination on the target training data.
6. The method of any of claims 1 to 5, wherein the feature generator, the main classifier, and the auxiliary classifier are neural network models.
7. The method according to any one of claims 1 to 5, further comprising:
after finishing training, acquiring data to be classified;
taking the data to be classified as input data of the feature generator, so that the feature generator transforms original feature data of the data to be classified into the same feature space to obtain third transformation data;
and taking the third transformation data as input data of the main classifier, so that the main classifier performs classification prediction on the data to be classified according to the third transformation data, and the data to be classified is provided with a prediction classification label.
8. An apparatus for generating a classification model, comprising:
the training data acquisition unit is used for acquiring target training data in a target scene, wherein the target training data is voice data, text data, image data or data in another multimedia form, and the target training data comprises voice data in different domains, text data in different domains, image data in different domains or other multimedia data in different domains, divided under the same data type;
the classification model training unit is used for training a feature generator, a main classifier and an auxiliary classifier by using the target training data, wherein the feature generator is used for transforming original feature data of the target training data into the same feature space to obtain transformed feature data, the main classifier is used for performing classification prediction on the target training data according to the transformed feature data, and the auxiliary classifier is used for discriminating the divided domain to which the target training data belongs according to the transformed feature data;
and the classification model generation unit is used for finishing training, if it is determined that the auxiliary classifier cannot discriminate the domain of the target training data, to obtain a classification model which comprises the feature generator and the main classifier and which can classify the voice data to be classified, the text data to be classified, the image data to be classified or the data to be classified in another multimedia form.
9. An apparatus for generating a classification model, comprising: a processor, a memory, a system bus;
the processor and the memory are connected through the system bus;
the memory is to store one or more programs, the one or more programs comprising instructions, which when executed by the processor, cause the processor to perform the method of any of claims 1-7.
10. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the method of any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810218705.9A CN108304890B (en) | 2018-03-16 | 2018-03-16 | Generation method and device of classification model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810218705.9A CN108304890B (en) | 2018-03-16 | 2018-03-16 | Generation method and device of classification model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108304890A CN108304890A (en) | 2018-07-20 |
CN108304890B true CN108304890B (en) | 2021-06-08 |
Family
ID=62850188
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810218705.9A Active CN108304890B (en) | 2018-03-16 | 2018-03-16 | Generation method and device of classification model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108304890B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109543031A (en) * | 2018-10-16 | 2019-03-29 | 华南理工大学 | A kind of file classification method based on multitask confrontation study |
CN109065021B (en) * | 2018-10-18 | 2023-04-18 | 江苏师范大学 | End-to-end dialect identification method for generating countermeasure network based on conditional deep convolution |
CN109359689B (en) * | 2018-10-19 | 2021-06-04 | 科大讯飞股份有限公司 | Data identification method and device |
CN109740682B (en) * | 2019-01-08 | 2020-07-28 | 南京大学 | Image identification method based on domain transformation and generation model |
CN109947931B (en) * | 2019-03-20 | 2021-05-14 | 华南理工大学 | Method, system, device and medium for automatically abstracting text based on unsupervised learning |
CN110288976B (en) * | 2019-06-21 | 2021-09-07 | 北京声智科技有限公司 | Data screening method and device and intelligent sound box |
CN113505797B (en) * | 2021-09-09 | 2021-12-14 | 深圳思谋信息科技有限公司 | Model training method and device, computer equipment and storage medium |
CN114385890B (en) * | 2022-03-22 | 2022-05-20 | 深圳市世纪联想广告有限公司 | Internet public opinion monitoring system |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103544210B (en) * | 2013-09-02 | 2017-01-18 | 烟台中科网络技术研究所 | System and method for identifying webpage types |
CN105045913B (en) * | 2015-08-14 | 2018-08-28 | 北京工业大学 | File classification method based on WordNet and latent semantic analysis |
CN105574538B (en) * | 2015-12-10 | 2020-03-17 | 小米科技有限责任公司 | Classification model training method and device |
CN105868790A (en) * | 2016-04-08 | 2016-08-17 | 湖南工业大学 | Electrical load type recognizer |
CN106531190B (en) * | 2016-10-12 | 2020-05-05 | 科大讯飞股份有限公司 | Voice quality evaluation method and device |
CN106789888B (en) * | 2016-11-18 | 2020-08-04 | 重庆邮电大学 | Multi-feature fusion phishing webpage detection method |
CN106874478A (en) * | 2017-02-17 | 2017-06-20 | 重庆邮电大学 | Parallelization random tags subset multi-tag file classification method based on Spark |
- 2018-03-16: application CN201810218705.9A filed in CN; now patent CN108304890B (legal status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN108304890A (en) | 2018-07-20 |
Similar Documents
Publication | Title
---|---
CN108304890B (en) | Generation method and device of classification model
CN111292764B (en) | Identification system and identification method
CN107978311B (en) | Voice data processing method and device and voice interaction equipment
CN106649561B (en) | Intelligent question-answering system for tax consultation service
CN108062954B (en) | Speech recognition method and device
CN112100349A (en) | Multi-turn dialogue method and device, electronic equipment and storage medium
JP6235082B1 (en) | Data classification apparatus, data classification method, and program
CN113272894A (en) | Fully supervised speaker logging
CN110147806B (en) | Training method and device of image description model and storage medium
CN111402894B (en) | Speech recognition method and electronic equipment
CN110069612B (en) | Reply generation method and device
CN111583911B (en) | Speech recognition method, device, terminal and medium based on label smoothing
CN109948160B (en) | Short text classification method and device
CN111161726B (en) | Intelligent voice interaction method, device, medium and system
CN113035311A (en) | Medical image report automatic generation method based on multi-mode attention mechanism
CN111294812A (en) | Method and system for resource capacity expansion planning
JP6199461B1 (en) | Information processing apparatus, information processing method, and program
CN111460149A (en) | Text classification method, related equipment and readable storage medium
CN114550718A (en) | Hot word speech recognition method, device, equipment and computer readable storage medium
CN114267345A (en) | Model training method, voice processing method and device
CN110298046B (en) | Translation model training method, text translation method and related device
CN112329470B (en) | Intelligent address identification method and device based on end-to-end model training
CN114611625A (en) | Language model training method, language model training device, language model data processing method, language model data processing device, language model data processing equipment, language model data processing medium and language model data processing product
CN113012687B (en) | Information interaction method and device and electronic equipment
WO2023083176A1 (en) | Sample processing method and device and computer readable storage medium
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant