CN114139655A - Distillation type competitive learning target classification system and method - Google Patents

Distillation type competitive learning target classification system and method

Info

Publication number
CN114139655A
CN114139655A
Authority
CN
China
Prior art keywords
neural network
module
training
cifar
weighted average
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111590972.7A
Other languages
Chinese (zh)
Inventor
郭杰
谢聪
庄龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 14 Research Institute
Original Assignee
CETC 14 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 14 Research Institute
Priority to CN202111590972.7A
Publication of CN114139655A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a distillation-type competitive learning target classification system and method that address the training problems caused by mislabeled samples, ambiguous classes, and artificial knowledge intervention in existing target classification methods. The system and method significantly improve the accuracy of deep-learning-based target recognition and provide core technical support for the field of deep-learning-based target classification.

Description

Distillation type competitive learning target classification system and method
Technical Field
The invention belongs to the technical field of signal and information processing, and particularly relates to a target classification system and a target classification method.
Background
Object classification is one of the most basic and important research directions in many areas, such as computer vision and natural language processing. In civilian applications it is widely used in autonomous driving, security, medicine, and other fields; in military applications it is widely used in target detection, precision guidance, remote monitoring, and other fields. In recent years, with the rapid development of high-speed computing hardware, big data, and deep learning technology, target classification technology has taken a major step forward. The accuracy of deep-learning-based target classification greatly exceeds that of traditional methods and has surpassed human-level recognition performance on many datasets.
However, the deep learning based object classification still faces the following problems:
1) Mislabeling: deep learning depends on large amounts of high-quality labeled data, yet samples in many fields are collected from search engines or open-source datasets, and such data suffer from low labeling quality and many incorrect labels.
2) Class ambiguity: in image recognition, for example, there is class ambiguity between a lion and a cat, and learning with traditional one-hot labels may cause the model to overfit, reducing generalization performance.
3) Artificial knowledge: the one-hot labels of the samples are annotated manually, and there is no guarantee that these manually designed labels are the most suitable ones for neural network learning.
Disclosure of Invention
To address these problems, the invention provides a distillation-type competitive learning target classification system and method. The method uses two neural networks: the prediction of each network is combined with the one-hot label by weighted averaging to form new label knowledge, which is distilled into the other network, so that the two networks learn from each other while competing to achieve a better result than the other. The manually annotated one-hot labels do not participate directly in neural network learning, which leaves the networks enough room for self-adjustment and mitigates, to a certain extent, the training problems caused by mislabeling and class ambiguity. A block diagram of distillation-type competitive learning is shown in Fig. 1.
The aim of the invention is to realize a distillation-type competitive learning target classification system and method. Aimed at the training problems caused by mislabeled samples, ambiguous classes, and artificial knowledge intervention in existing target classification methods, the proposed system and method significantly improve the accuracy of deep-learning-based target recognition and provide core technical support for the field of deep-learning-based target classification.
A distillation-type competitive learning target classification system comprises an input module, a neural network 1, a neural network 2, a first softmax module, a first weighted average module, a loss1 module, a second softmax module, a second weighted average module, a loss2 module, and a one-hot coding module. Samples enter neural network 1 and neural network 2 through the input module; the output of neural network 1 is sent to the first softmax module; the output of the first softmax module is sent to the first weighted average module and the loss1 module; the output of neural network 2 is sent to the second softmax module; the output of the second softmax module is sent to the second weighted average module and the loss2 module; the output of the one-hot coding module is sent to both the first and the second weighted average modules; the output of the first weighted average module is sent to the loss2 module; and the output of the second weighted average module is sent to the loss1 module.
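The module wiring described above can be sketched in PyTorch roughly as follows. This is a non-authoritative illustration of the data flow only; the ResNet50 backbones and the weighting factor follow the embodiment described later, while the function and variable names (such as `system_forward`, `t_onehot`) are assumptions made for the sketch.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

num_classes = 10                                 # 10 for cifar-10, 100 for cifar-100
net1 = resnet50(num_classes=num_classes)         # neural network 1
net2 = resnet50(num_classes=num_classes)         # neural network 2

def system_forward(x, t_onehot, alpha=0.7):
    """One pass through the distillation-type competitive learning system.

    x         -- batch from the input module
    t_onehot  -- output of the one-hot coding module, shape (batch, num_classes)
    alpha     -- weighting factor used by the two weighted-average modules
    """
    y_net1 = F.softmax(net1(x), dim=1)           # first softmax module
    y_net2 = F.softmax(net2(x), dim=1)           # second softmax module

    # First weighted-average module: combines y_net1 with the one-hot label;
    # its output is the target used by the loss2 module.
    t_net2 = alpha * t_onehot + (1 - alpha) * y_net1.detach()
    # Second weighted-average module: combines y_net2 with the one-hot label;
    # its output is the target used by the loss1 module.
    t_net1 = alpha * t_onehot + (1 - alpha) * y_net2.detach()

    # loss1 / loss2 modules: soft-label cross entropy (Eqs. (1) and (2) below)
    loss1 = -(t_net1 * torch.log(y_net1 + 1e-12)).sum(dim=1).mean()
    loss2 = -(t_net2 * torch.log(y_net2 + 1e-12)).sum(dim=1).mean()
    return loss1, loss2
```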
A method for classifying targets for distillation type competitive learning, comprising the steps of:
step 1, designing a network model: designing or selecting two neural network models;
the distillation type competitive learning block diagram is shown in fig. 1, two neural networks compete with each other for learning, and after each iterative training, the prediction result of the neural network is tried to be closer to a sample label than the other neural network. Both neural network 1 and neural network 2 employ ResNet 50. The method effect was verified on the cifar-10 and cifar-100 datasets, respectively. Since the cifar-10 and cifar-100 datasets contained 10 and 100 class targets, respectively, the output dimensions of ResNet50 were 10 and 100 for the two sets of experiments, respectively.
Step 2, loss function design: designing a distillation type competition loss function;
the distillation type competition loss function adopts a cross entropy loss function, and is different from the label of a neural network
Figure 870509DEST_PATH_IMAGE001
And
Figure 966641DEST_PATH_IMAGE002
and (4) constructing. The loss functions of the neural network 1 and the neural network 2 are respectively:
Figure 412666DEST_PATH_IMAGE003
(1)
Figure 277853DEST_PATH_IMAGE004
(2)
wherein the content of the first and second substances,
Figure 275765DEST_PATH_IMAGE005
the number of the total categories is,
Figure 859193DEST_PATH_IMAGE006
for neural network 1, label
Figure 843330DEST_PATH_IMAGE007
The probability of the annotation of a class,
Figure 828603DEST_PATH_IMAGE008
as the first in the prediction of the neural network 1
Figure 607204DEST_PATH_IMAGE007
The probability of the prediction of a class,
Figure 536982DEST_PATH_IMAGE009
for neural network 2 in the label
Figure 590389DEST_PATH_IMAGE007
The probability of the annotation of a class,
Figure 164590DEST_PATH_IMAGE010
is the first in the prediction result of the neural network 2
Figure 645250DEST_PATH_IMAGE007
The prediction probability of a class.
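A minimal sketch of Eqs. (1) and (2) as a batched soft-label cross entropy follows; the function name, the `eps` term, and batch averaging are assumptions made for the illustration.

```python
import torch

def competitive_cross_entropy(t, y, eps=1e-12):
    """Soft-label cross entropy of Eqs. (1) and (2): -sum_i t_i * log(y_i), averaged over the batch.

    t -- labels of the current network, shape (batch, C), rows summing to 1
    y -- softmax predictions of the current network, shape (batch, C)
    """
    return -(t * torch.log(y + eps)).sum(dim=1).mean()
```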
Step 3, pre-training: in the first several training epochs, train the neural networks with one-hot labels so that they converge quickly;
front sidenOne training period (epoch), a neural network is trained using one-hot encoding (one-hot), i.e.,
Figure 406532DEST_PATH_IMAGE011
and
Figure 263630DEST_PATH_IMAGE012
the neural network is converged quickly.
Step 4, acquiring new labels: first, the current batch of samples is fed to the two neural networks to obtain two prediction results; then each prediction is weighted-averaged with the one-hot label to obtain the new labels of the two neural networks;
taking a batch of samples from the cifar-10 or cifar-100 training set, wherein the samples pass through the neural network 1 and are calculated by using a softmax function to obtain a prediction resulty net1The prediction result of the sample after passing through the neural network 2 and being operated by the softmax function isy net2The one-hot label of the sample ist one-hot,. The labels of neural network 1 and neural network 2 are:
$t^{net1} = \alpha\, t^{one\text{-}hot} + (1-\alpha)\, y^{net2}$   (3)

$t^{net2} = \alpha\, t^{one\text{-}hot} + (1-\alpha)\, y^{net1}$   (4)

where $\alpha$ is the weighting factor.
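The two weighted averages of Eqs. (3) and (4) can be written directly as below; detaching the peer prediction is an implementation assumption so that each network is optimized only through its own loss, and the function name is illustrative.

```python
def mixed_labels(t_onehot, y_net1, y_net2, alpha):
    """New labels of the two networks per Eqs. (3) and (4)."""
    t_net1 = alpha * t_onehot + (1 - alpha) * y_net2.detach()   # Eq. (3)
    t_net2 = alpha * t_onehot + (1 - alpha) * y_net1.detach()   # Eq. (4)
    return t_net1, t_net2
```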
Step 5, training: training a neural network model by using a new label and a loss function obtained from the current batch;
and training the neural network model by using the obtained new label and the loss function. In training, random horizontal inversion is adopted for data augmentation, and mean reduction, variance removal and standardization are carried out on the images. The neural network was trained using the gradient descent (SGD) method with the batch size set to 64. The model is trained for a total of 100 cycles (epoch). The initial learning rate is set to 0.1, and a learning rate cosine annealing attenuation strategy is adopted. The weight decay (weight decay) was set to 0.0005 and the momentum (momentum) was set to 0.9 for all training sessions.
Step 6, iterative training: steps 4 and 5 are repeated until training is finished.
Step 7, testing: the test samples are tested with the trained neural networks.
The trained neural networks are tested on the cifar-10 and cifar-100 test sets. For comparison, a single original ResNet50 is trained with the same parameter settings and then tested on the cifar-10 and cifar-100 test sets. The test results are shown in Table 1. Both ResNet50 networks trained with distillation-type competitive learning recognize targets better than the original ResNet50; the recognition rate of the better of the two networks is 0.82% higher than the original ResNet50 on cifar-10 and 2.01% higher on cifar-100.
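A minimal evaluation sketch for this testing step is given below; the loader and function names are illustrative assumptions.

```python
import torch

@torch.no_grad()
def accuracy(net, loader):
    """Top-1 accuracy of a trained network over a test data loader."""
    net.eval()
    correct = total = 0
    for x, labels in loader:
        pred = net(x).argmax(dim=1)
        correct += (pred == labels).sum().item()
        total += labels.numel()
    return correct / total
```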
The distillation-type competitive learning framework can be extended to more than two neural networks. When the number of networks is greater than 2, the loss function of the current network in each training iteration is the average of the distillation-type competitive losses between the current network and every other network.
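For more than two networks, the per-network loss described here could be computed roughly as in the sketch below; the helper name, the `eps` term, and the use of `detach` on the peer predictions are assumptions.

```python
import torch

def loss_for_network(k, y_all, t_onehot, alpha, eps=1e-12):
    """Loss of network k when N > 2 networks compete.

    y_all    -- list of softmax outputs of all N networks, each of shape (batch, C)
    t_onehot -- one-hot labels, shape (batch, C)
    The result is the average of the distillation-type competitive losses formed
    with every other network's prediction.
    """
    losses = []
    for j, y_peer in enumerate(y_all):
        if j == k:
            continue
        t_k = alpha * t_onehot + (1 - alpha) * y_peer.detach()   # label distilled from network j
        losses.append(-(t_k * torch.log(y_all[k] + eps)).sum(dim=1).mean())
    return torch.stack(losses).mean()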
The invention has the beneficial effects that:
the invention realizes a distillation type competitive learning target classification system and method, and provides a core technical support for the field of deep learning-based target classification. Compared with the existing target classification system and method, the method has the remarkable advantages that: the distillation type competitive learning system and the distillation type competitive learning method can enable the two neural networks to learn each other and compete with each other to achieve a result better than the other side; one-hot (one-hot) labels marked by people are not allowed to directly participate in neural network learning, so that the network has enough space to carry out self-adjustment, and the problems of wrong labeling and fuzzy category are solved to a certain extent; the target identification accuracy rate based on deep learning is obviously improved.
Drawings
FIG. 1 is a block diagram of distillation type competitive learning.
Detailed Description
The technical solutions provided by the present invention will be described in detail below with reference to specific examples, and it should be understood that the following specific embodiments are only illustrative of the present invention and are not intended to limit the scope of the present invention.
Training networks with manually annotated one-hot labels has two disadvantages:
1. If a one-hot label is wrong, the neural network produces very large gradient values during training, which damages the learning process (a brief illustration follows this list);
2. If the characteristics of two classes are similar and the labels are ambiguous, one-hot encoding forces the neural network to separate the two classes completely, which reduces the network's ability to represent the targets.
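To see point 1 concretely (a standard property of softmax combined with cross entropy): for softmax output $y$ over logits $z$ trained with cross entropy against label $t$, the gradient with respect to the logits is

$$\frac{\partial L}{\partial z_i} = y_i - t_i,$$

so for a sample that the network already predicts confidently but whose one-hot label is wrong, the gradients on the two affected logits approach $+1$ and $-1$, the largest magnitudes this loss can produce, repeatedly pushing the network away from what may in fact be the correct class.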
In order to solve the problems and simultaneously enable the neural network to continuously improve the recognition capability in the training process, a distillation type competitive learning target classification system and method are provided.
A distillation-type competitive learning target classification system comprises an input module, a neural network 1, a neural network 2, a first softmax module, a first weighted average module, a loss1 module, a second softmax module, a second weighted average module, a loss2 module, and a one-hot coding module. Samples enter neural network 1 and neural network 2 through the input module; the output of neural network 1 is sent to the first softmax module; the output of the first softmax module is sent to the first weighted average module and the loss1 module; the output of neural network 2 is sent to the second softmax module; the output of the second softmax module is sent to the second weighted average module and the loss2 module; the output of the one-hot coding module is sent to both the first and the second weighted average modules; the output of the first weighted average module is sent to the loss2 module; and the output of the second weighted average module is sent to the loss1 module.
A distillation type competitive learning target classification method comprises the following specific implementation steps:
step 1, designing a network model: the distillation type competitive learning block diagram is shown in fig. 1, two neural networks compete with each other for learning, and after each iterative training, the prediction result of the neural network is tried to be closer to a sample label than the other neural network. In this example, both neural network 1 and neural network 2 employ ResNet 50. The method effect was verified on the cifar-10 and cifar-100 datasets, respectively. Since the cifar-10 and cifar-100 datasets contained 10 and 100 class targets, respectively, the output dimensions of ResNet50 were 10 and 100 for the two sets of experiments, respectively.
Step 2, loss function design: the distillation-type competitive loss adopts the cross-entropy form; the two networks differ only in the labels $t^{net1}$ and $t^{net2}$ from which the loss is constructed. The loss functions of neural network 1 and neural network 2 are, respectively:

$L_1 = -\sum_{i=1}^{C} t_i^{net1} \log(y_i^{net1})$   (1)

$L_2 = -\sum_{i=1}^{C} t_i^{net2} \log(y_i^{net2})$   (2)

where $C$ is the total number of classes, $t_i^{net1}$ is the annotated probability of class $i$ in the label of neural network 1, $y_i^{net1}$ is the predicted probability of class $i$ in the prediction of neural network 1, $t_i^{net2}$ is the annotated probability of class $i$ in the label of neural network 2, and $y_i^{net2}$ is the predicted probability of class $i$ in the prediction of neural network 2.
Step 3, pre-training: for the first $n$ training epochs ($n = 10$ in this example), the neural networks are trained using the one-hot labels, i.e. $t^{net1} = t^{one\text{-}hot}$ and $t^{net2} = t^{one\text{-}hot}$, so that the networks converge quickly.
Step 4, acquiring new labels: a batch of samples is taken from the cifar-10 or cifar-100 training set; the prediction obtained after the samples pass through neural network 1 and the softmax function is $y^{net1}$, the prediction obtained after they pass through neural network 2 and the softmax function is $y^{net2}$, and the one-hot label of the samples is $t^{one\text{-}hot}$. The labels of neural network 1 and neural network 2 are:
$t^{net1} = \alpha\, t^{one\text{-}hot} + (1-\alpha)\, y^{net2}$   (3)

$t^{net2} = \alpha\, t^{one\text{-}hot} + (1-\alpha)\, y^{net1}$   (4)

where $\alpha$ is the weighting factor, set to 0.7 in this example.
Step 5, training: the neural network models are trained with the obtained new labels and the loss functions. During training, random horizontal flipping is used for data augmentation, and the images are standardized by subtracting the mean and dividing by the standard deviation. The networks are trained with stochastic gradient descent (SGD) with the batch size set to 64. Each model is trained for 100 epochs in total. The initial learning rate is set to 0.1 with a cosine-annealing learning-rate decay schedule. The weight decay is set to 0.0005 and the momentum to 0.9 for all training.
Step 6, iterative training: steps 4 and 5 are repeated until training is finished.
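Putting steps 3 to 6 of this embodiment together, one possible (non-authoritative) training loop is sketched below; n = 10 and alpha = 0.7 follow the example above, while the loader construction, normalization statistics, and all names are assumptions made for the sketch.

```python
import torch
import torch.nn.functional as F
import torchvision
import torchvision.transforms as T

def train_distillation_competitive(epochs=100, n_pretrain=10, alpha=0.7,
                                   num_classes=10, eps=1e-12):
    # Step 5 data pipeline: random horizontal flip + standardization (statistics assumed)
    transform = T.Compose([
        T.RandomHorizontalFlip(),
        T.ToTensor(),
        T.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
    ])
    loader = torch.utils.data.DataLoader(
        torchvision.datasets.CIFAR10("./data", train=True, download=True, transform=transform),
        batch_size=64, shuffle=True)

    nets = [torchvision.models.resnet50(num_classes=num_classes) for _ in range(2)]
    opts = [torch.optim.SGD(n.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
            for n in nets]
    scheds = [torch.optim.lr_scheduler.CosineAnnealingLR(o, T_max=epochs) for o in opts]

    for epoch in range(epochs):
        for x, labels in loader:
            t_onehot = F.one_hot(labels, num_classes).float()
            y1 = F.softmax(nets[0](x), dim=1)
            y2 = F.softmax(nets[1](x), dim=1)

            if epoch < n_pretrain:                      # step 3: pre-training on one-hot labels
                t1, t2 = t_onehot, t_onehot
            else:                                       # step 4: new labels, Eqs. (3) and (4)
                t1 = alpha * t_onehot + (1 - alpha) * y2.detach()
                t2 = alpha * t_onehot + (1 - alpha) * y1.detach()

            loss1 = -(t1 * torch.log(y1 + eps)).sum(dim=1).mean()   # Eq. (1)
            loss2 = -(t2 * torch.log(y2 + eps)).sum(dim=1).mean()   # Eq. (2)

            for opt, loss in zip(opts, (loss1, loss2)): # step 5: update each network on its own loss
                opt.zero_grad()
                loss.backward()
                opt.step()
            # step 6: steps 4-5 repeat over all batches and epochs
        for s in scheds:
            s.step()
    return nets
```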
Step 7, testing: the trained neural networks are tested on the cifar-10 and cifar-100 test sets. For comparison, a single original ResNet50 is trained with the same parameter settings and then tested on the cifar-10 and cifar-100 test sets. The test results are shown in Table 1. Both ResNet50 networks trained with distillation-type competitive learning recognize targets better than the original ResNet50; the recognition rate of the better of the two networks is 0.82% higher than the original ResNet50 on cifar-10 and 2.01% higher on cifar-100.
Table 1: Performance of distillation-type competitive learning versus the original ResNet50

Data set     Original ResNet50    Neural network 1    Neural network 2
cifar-10     93.87%               94.69% (+0.82%)     94.63% (+0.76%)
cifar-100    77.29%               78.53% (+1.24%)     79.30% (+2.01%)
The above description is only for the best mode of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention.
Those skilled in the art will appreciate that the invention may be practiced without these specific details.

Claims (9)

1. An object classification system for distillation-type competitive learning, characterized in that: the system comprises an input module, a neural network 1, a neural network 2, a first softmax module, a first weighted average module, a loss1 module, a second softmax module, a second weighted average module, a loss2 module, and a one-hot coding module; samples enter neural network 1 and neural network 2 through the input module; the output of neural network 1 is sent to the first softmax module; the output of the first softmax module is sent to the first weighted average module and the loss1 module; the output of neural network 2 is sent to the second softmax module; the output of the second softmax module is sent to the second weighted average module and the loss2 module; the output of the one-hot coding module is sent to both the first and the second weighted average modules; the output of the first weighted average module is sent to the loss2 module; and the output of the second weighted average module is sent to the loss1 module.
2. A method for classifying targets for distillation type competitive learning, which is characterized by comprising the following steps:
step 1, designing a network model: designing or selecting two neural network models;
step 2, loss function design: designing a distillation type competition loss function;
step 3, pre-training: in the first several training epochs, the neural networks are trained with the one-hot labels so that they converge quickly;
step 4, acquiring a new label: firstly, sending samples of a current batch into two neural networks to obtain two prediction results; then, respectively carrying out weighted average on the two prediction results and the one-hot coded labels to obtain new labels of two neural networks;
step 5, training: training a neural network model by using a new label and a loss function obtained from the current batch;
step 6, iterative training: repeating the step 4 to the step 5 until the training is finished;
and 7, testing: and testing the test sample by using the trained neural network.
3. The method according to claim 2, wherein step 1 is specifically: the two neural networks compete with each other while learning, and after each training iteration each network tries to bring its prediction closer to the sample label than the other network; both neural network 1 and neural network 2 use ResNet50; the effectiveness of the method is verified on the cifar-10 and cifar-100 datasets, respectively.
4. The method according to claim 2, wherein step 2 is specifically: the distillation-type competitive loss adopts the cross-entropy form and differs between the two networks only in the labels $t^{net1}$ and $t^{net2}$ from which it is constructed; the loss functions of neural network 1 and neural network 2 are, respectively:

$L_1 = -\sum_{i=1}^{C} t_i^{net1} \log(y_i^{net1})$   (1)

$L_2 = -\sum_{i=1}^{C} t_i^{net2} \log(y_i^{net2})$   (2)

where $C$ is the total number of classes, $t_i^{net1}$ is the annotated probability of class $i$ in the label of neural network 1, $y_i^{net1}$ is the predicted probability of class $i$ in the prediction of neural network 1, $t_i^{net2}$ is the annotated probability of class $i$ in the label of neural network 2, and $y_i^{net2}$ is the predicted probability of class $i$ in the prediction of neural network 2.
5. The method according to claim 2, wherein step 3 is specifically: in the first $n$ training epochs, the neural networks are trained using the one-hot labels, i.e. $t^{net1} = t^{one\text{-}hot}$ and $t^{net2} = t^{one\text{-}hot}$, so that the networks converge quickly.
6. The method according to claim 2, wherein step 4 is specifically: a batch of samples is taken from the cifar-10 or cifar-100 training set; the prediction obtained after the samples pass through neural network 1 and the softmax function is $y^{net1}$, the prediction obtained after they pass through neural network 2 and the softmax function is $y^{net2}$, and the one-hot label of the samples is $t^{one\text{-}hot}$; the labels of neural network 1 and neural network 2 are:

$t^{net1} = \alpha\, t^{one\text{-}hot} + (1-\alpha)\, y^{net2}$   (3)

$t^{net2} = \alpha\, t^{one\text{-}hot} + (1-\alpha)\, y^{net1}$   (4)

where $\alpha$ is the weighting factor.
7. The method according to claim 2, wherein step 5 is specifically: the neural network models are trained with the obtained new labels and the loss functions; during training, random horizontal flipping is used for data augmentation, and the images are standardized by subtracting the mean and dividing by the standard deviation; the networks are trained by stochastic gradient descent with the batch size set to 64; each model is trained for 100 epochs in total; the initial learning rate is set to 0.1 with a cosine-annealing learning-rate decay schedule; the weight decay is set to 0.0005 and the momentum to 0.9 for all training.
8. The method according to claim 2, wherein step 7 is specifically: testing on the cifar-10 and cifar-100 test sets by using the trained neural network; at the same time, the original single ResNet50 was trained using the same parameter settings and then tested on the cifar-10 and cifar-100 test sets.
9. The method of claim 2, wherein: the method can be extended to more than two neural networks, and when the number of networks is greater than 2, the loss function of the current network in each training iteration is the average of the distillation-type competitive losses between the current network and all other networks.
CN202111590972.7A 2021-12-23 2021-12-23 Distillation type competitive learning target classification system and method Pending CN114139655A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111590972.7A CN114139655A (en) 2021-12-23 2021-12-23 Distillation type competitive learning target classification system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111590972.7A CN114139655A (en) 2021-12-23 2021-12-23 Distillation type competitive learning target classification system and method

Publications (1)

Publication Number Publication Date
CN114139655A true CN114139655A (en) 2022-03-04

Family

ID=80383407

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111590972.7A Pending CN114139655A (en) 2021-12-23 2021-12-23 Distillation type competitive learning target classification system and method

Country Status (1)

Country Link
CN (1) CN114139655A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937617A (en) * 2023-03-06 2023-04-07 支付宝(杭州)信息技术有限公司 Risk identification model training and risk control method, device and equipment

Similar Documents

Publication Publication Date Title
CN111368886B (en) Sample screening-based label-free vehicle picture classification method
CN112116030B (en) Image classification method based on vector standardization and knowledge distillation
CN113378632B (en) Pseudo-label optimization-based unsupervised domain adaptive pedestrian re-identification method
CN108256482B (en) Face age estimation method for distributed learning based on convolutional neural network
CN109961089A (en) Small sample and zero sample image classification method based on metric learning and meta learning
CN111160474A (en) Image identification method based on deep course learning
CN112446423B (en) Fast hybrid high-order attention domain confrontation network method based on transfer learning
CN113011357B (en) Depth fake face video positioning method based on space-time fusion
CN113076994B (en) Open-set domain self-adaptive image classification method and system
CN110991549A (en) Countermeasure sample generation method and system for image data
CN112733533A (en) Multi-mode named entity recognition method based on BERT model and text-image relation propagation
CN108537119A (en) A kind of small sample video frequency identifying method
CN113657491A (en) Neural network design method for signal modulation type recognition
CN113221655B (en) Face spoofing detection method based on feature space constraint
CN111832650A (en) Image classification method based on generation of confrontation network local aggregation coding semi-supervision
CN112528777A (en) Student facial expression recognition method and system used in classroom environment
CN114241564A (en) Facial expression recognition method based on inter-class difference strengthening network
CN115546196A (en) Knowledge distillation-based lightweight remote sensing image change detection method
CN114675249A (en) Attention mechanism-based radar signal modulation mode identification method
CN106203373A (en) A kind of human face in-vivo detection method based on deep vision word bag model
CN115761408A (en) Knowledge distillation-based federal domain adaptation method and system
CN113109782B (en) Classification method directly applied to radar radiation source amplitude sequence
CN114139655A (en) Distillation type competitive learning target classification system and method
Zhao et al. A contrastive knowledge transfer framework for model compression and transfer learning
CN116433909A (en) Similarity weighted multi-teacher network model-based semi-supervised image semantic segmentation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination