CN113869464A - Training method of image classification model and image classification method

Training method of image classification model and image classification method

Info

Publication number
CN113869464A
CN113869464A
Authority
CN
China
Prior art keywords
image
student
result
network
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111459523.9A
Other languages
Chinese (zh)
Other versions
CN113869464B (en)
Inventor
刘国清
杨广
王启程
郑伟
张孟华
杨国武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Youxiang Netlink Intelligent Technology Co.,Ltd.
Original Assignee
Shenzhen Minieye Innovation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Minieye Innovation Technology Co Ltd
Priority to CN202111459523.9A
Publication of CN113869464A
Application granted
Publication of CN113869464B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155 Generating training patterns characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G06F18/24 Classification techniques
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning


Abstract

The invention provides a training method for an image classification model, comprising the following steps: dividing first images into a plurality of subsets, wherein the first images are labeled images; constructing a plurality of sets, wherein each set comprises a validation set and a training set in one-to-one correspondence, the validation set is one of the plurality of subsets, the training set comprises a second image set and the remaining subsets, and the second images in the second image set are unlabeled images; inputting each training set into a respective one of a plurality of pairs of student networks and teacher networks, and obtaining corresponding output results; updating a first parameter of the student network and a second parameter of the teacher network according to the output results; selecting, according to the validation set, an optimal student network formed by the corresponding student network during the updating of the first parameter; and taking the plurality of optimal student networks as the image classification model. The technical scheme of the invention effectively alleviates the low accuracy of image classification models caused by the small amount of labeled image data.

Description

Training method of image classification model and image classification method
Technical Field
The present invention relates to the field of image classification technologies, and in particular, to a training method for an image classification model, an image classification method, and a computer-readable storage medium.
Background
In recent years, with the rapid development of big data and deep learning, deep neural networks have greatly advanced the field of image classification and detection. Because deep neural networks can learn image features from large numbers of samples more effectively, they avoid the complex hand-crafted feature extraction of traditional image classification algorithms and achieve end-to-end classification and detection. Existing deep-neural-network-based image classification algorithms include supervised algorithms and unsupervised algorithms. Supervised algorithms are trained on labeled data, but labeling data consumes a large amount of labor and time; unsupervised algorithms are trained on unlabeled data, but the accuracy of the resulting models is not high.
Disclosure of Invention
The invention provides a training method for an image classification model, an image classification method, and a computer-readable storage medium, to address the low accuracy of image classification models caused by the small amount of labeled image data.
In a first aspect, an embodiment of the present invention provides a training method for an image classification model, where the training method for the image classification model includes:
dividing first images into a plurality of subsets, wherein the first images are labeled images;
constructing a plurality of sets, wherein each set comprises a validation set and a training set in one-to-one correspondence; in each set, the validation set is one of the plurality of subsets, the training set comprises a second image set and the remaining subsets of the plurality of subsets, and the second images in the second image set are unlabeled images;
inputting each training set into a respective one of a plurality of pairs of student networks and teacher networks, and obtaining corresponding output results;
updating a first parameter of the student network and a second parameter of the teacher network according to the output results;
selecting, according to the validation set, an optimal student network formed by the corresponding student network during the updating of the first parameter; and
taking a plurality of the optimal student networks as an image classification model.
In a second aspect, an embodiment of the present invention provides an image classification method, where the image classification method includes:
inputting a target image into an image classification model and obtaining a classification result, wherein the image classification model is trained by the training method described above and comprises a plurality of sub-models, and inputting the target image into the image classification model and obtaining the classification result specifically comprises the following steps:
inputting the target image into each sub-model and obtaining corresponding sub-results; and
obtaining the classification result according to the sub-results.
In a third aspect, an embodiment of the present invention provides a computer-readable storage medium for storing program instructions executable by a processor to implement the training method of an image classification model as described above.
In the training method of the image classification model, the image classification method, and the computer-readable storage medium, the first images, i.e., the labeled images, are divided into a plurality of subsets, and the second images, i.e., the unlabeled images, are added to the subsets to form a plurality of training sets, which are used to train a plurality of pairs of student and teacher networks. An optimal student network is then selected for each pair using the validation set corresponding to the training set. Through this semi-supervised deep learning method, an effective image classification model can be trained with a large number of unlabeled images even when labeled images are scarce. This effectively alleviates the low accuracy of the image classification model caused by the small number of labeled images, improves the performance of the image classification model, and increases the accuracy of classification prediction. Moreover, the resulting image classification model is robust and suitable for a variety of image classification tasks.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; for those skilled in the art, other drawings can be derived from these drawings without creative effort.
Fig. 1 is a flowchart of a training method of an image classification model according to an embodiment of the present invention.
Fig. 2 is a sub-flowchart of a training method of an image classification model according to an embodiment of the present invention.
Fig. 3 is a flowchart of an image classification method according to an embodiment of the present invention.
Fig. 4 is a sub-flowchart of an image classification method according to an embodiment of the present invention.
Fig. 5 is a schematic internal structure diagram of a terminal according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the above-described drawings (if any) are used for distinguishing between similar items and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances, in other words that the embodiments described are to be practiced in sequences other than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and any other variation thereof, may also include other things, such as processes, methods, systems, articles, or apparatus that comprise a list of steps or elements is not necessarily limited to only those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such processes, methods, articles, or apparatus.
It should be noted that the description relating to "first", "second", etc. in the present invention is for descriptive purposes only and is not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In addition, technical solutions between various embodiments may be combined with each other, but must be realized by a person skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination should not be considered to exist, and is not within the protection scope of the present invention.
Please refer to fig. 1, which is a flowchart of a training method of an image classification model according to an embodiment of the present invention. The training method is used to train an image classification model, and the trained image classification model can classify unlabeled images. The training method of the image classification model specifically comprises the following steps.
Step S102, dividing the first images into a plurality of subsets. In this embodiment, all the first images are randomly and uniformly divided into a plurality of subsets, and each first image belongs to exactly one subset. Uniform division means that the numbers of first images in the subsets are equal or approximately equal, for example differing by 1-5. In this embodiment, the first images are labeled images, and each first image carries a label. Each first image has a first original label, which is a label vector. All the first images belong to a plurality of categories, the number of categories is a preset value, and the label of each first image corresponds to one category. For example, suppose all the first images belong to 5 categories A, B, C, D, E; the label of each first image then corresponds to one of these five categories. If the label of a first image corresponds to category A, its first original label is (1,0,0,0,0); if the label corresponds to category D, its first original label is (0,0,0,1,0). It will be appreciated that the number of values in the first original label equals the number of categories, i.e., the preset value, and the values in the first original label correspond one to one to the categories. When the label of a first image corresponds to a certain category, the value at that category's position in the first original label is 1 and the remaining values are 0. That is, the first original label is a one-hot vector.
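As a minimal illustration of this one-hot encoding (the helper name `one_hot` and the use of NumPy are assumptions for illustration, not part of the patent):

```python
import numpy as np

def one_hot(category_index: int, num_categories: int) -> np.ndarray:
    """Build a first-original-label vector: 1 at the category position, 0 elsewhere."""
    label = np.zeros(num_categories, dtype=np.float32)
    label[category_index] = 1.0
    return label

# Category D out of A..E (index 3 of 5) -> (0, 0, 0, 1, 0), as in the example above.
print(one_hot(3, 5))
```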
Step S104, constructing a plurality of sets. Each set comprises a validation set and a training set in one-to-one correspondence. In each set, the validation set is one of the subsets, and the training set comprises the second image set together with the remaining subsets. The second images in the second image set are unlabeled images. In this embodiment, the number of subsets is 5-10; accordingly, the number of sets is the same as the number of subsets, also 5-10. For example, suppose there are 5 subsets X1, X2, X3, X4, X5, and the second image set is Y. If the validation set of a set is X1, its training set comprises X2, X3, X4, X5, and Y; if the validation set is X2, the training set comprises X1, X3, X4, X5, and Y; if the validation set is X3, the training set comprises X1, X2, X4, X5, and Y; if the validation set is X4, the training set comprises X1, X2, X3, X5, and Y; and if the validation set is X5, the training set comprises X1, X2, X3, X4, and Y. That is, the number of sets is also 5, and every training set includes the second image set; a sketch of this construction follows.
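A minimal sketch of this cross-validation-style construction, assuming the labeled and unlabeled images are given as Python lists (`labeled_images`, `unlabeled_images`, and `num_subsets` are assumed names):

```python
import random

def build_sets(labeled_images, unlabeled_images, num_subsets=5, seed=0):
    """Randomly and uniformly split labeled images into subsets, then build
    (validation set, training set) pairs; every training set also includes all
    unlabeled images, mirroring step S104."""
    rng = random.Random(seed)
    shuffled = labeled_images[:]
    rng.shuffle(shuffled)
    subsets = [shuffled[i::num_subsets] for i in range(num_subsets)]  # near-equal sizes

    pairs = []
    for i, validation_set in enumerate(subsets):
        remaining = [img for j, s in enumerate(subsets) if j != i for img in s]
        training_set = remaining + list(unlabeled_images)
        pairs.append((validation_set, training_set))
    return pairs
```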
After the sets are constructed, enhancement processing is performed on both the first images and the second images in each training set. Specifically, each first image and each second image is sequentially subjected to padding, random cropping, and random horizontal flipping to obtain the corresponding first enhanced image or second enhanced image. During random cropping, the padded first and second images are cropped to a preset size; that is, the first enhanced images and the second enhanced images all have the same size. Random horizontal flipping is probabilistic: a randomly cropped first or second image may or may not be flipped horizontally. In some possible embodiments, random horizontal flipping may be replaced by random vertical flipping or other weak transformations.
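One possible realization of this pad / random-crop / random-horizontal-flip pipeline, sketched with torchvision (the 4-pixel padding and 32x32 preset size are illustrative assumptions, not values from the patent):

```python
import torchvision.transforms as T

# Sequential enhancement: pad, randomly crop to the preset size, then flip
# horizontally with probability 0.5 (probabilistic, as described above).
enhance = T.Compose([
    T.Pad(4),
    T.RandomCrop(32),               # preset size; every enhanced image ends up the same size
    T.RandomHorizontalFlip(p=0.5),
    T.ToTensor(),
])
```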
Step S106, inputting each training set into a respective pair of student and teacher networks and obtaining corresponding output results. In this embodiment, each first image of a training set is input into the student network to obtain a corresponding first student result, and each second image of the training set is input into both the student network and the teacher network to obtain a corresponding second student result and second teacher result. It can be understood that the student networks and the teacher networks correspond one to one, the number of student-teacher pairs is the same as the number of training sets, and each pair of student and teacher networks corresponds to one training set. Preferably, the first enhanced images of a training set are input into the student network to obtain the first student results, the second enhanced images are input into the corresponding student network to obtain the second student results, and the second enhanced images are input into the corresponding teacher network to obtain the second teacher results. The first student result, the second student result, and the second teacher result are label vectors; the number of values in each label vector equals the number of categories, i.e., the preset value, and each value corresponds to one category. The category corresponding to the largest value in a label vector is the most confident category, and the label corresponding to that category is the predicted label.
In some possible embodiments, each second image in the training set may be enhanced twice to form two second enhanced images, which are input into the corresponding student network and teacher network respectively, as in the sketch below.
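A hedged sketch of this forward pass, assuming PyTorch modules (`student` and `teacher` are assumed classifier networks returning per-category scores; the patent does not fix an architecture):

```python
import torch

@torch.no_grad()
def teacher_forward(teacher, x):
    # The teacher is not trained by gradient descent, so no gradients are needed.
    return teacher(x)

def forward_pass(student, teacher, labeled_batch, unlabeled_view_a, unlabeled_view_b):
    """First images go through the student only; each second image goes through
    both networks, optionally as two differently enhanced views (step S106)."""
    first_student_result = student(labeled_batch)           # label vectors for labeled images
    second_student_result = student(unlabeled_view_a)       # student sees one enhanced view
    second_teacher_result = teacher_forward(teacher, unlabeled_view_b)  # teacher sees the other
    return first_student_result, second_student_result, second_teacher_result
```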
Step S108, updating the first parameter of the student network and the second parameter of the teacher network according to the output results. It will be appreciated that obtaining the output results and updating the first parameter of the student network and the second parameter of the teacher network according to them is an iterative training process: each time all the first images and second images in the training set have been input into the student network and the teacher network and the corresponding output results obtained, the first parameter of the student network and the second parameter of the teacher network are updated according to those results. The specific updating process is described in detail below.
Step S110, selecting, according to the validation set, the optimal student network formed by the corresponding student network during the updating of the first parameter. In this embodiment, each time the first parameter of the student network is updated, the first images in the corresponding validation set are input into the student network to obtain corresponding verification results. A verification result is a label vector whose number of values equals the number of categories, i.e., the preset value, each value corresponding to one category. The accuracy of the corresponding student network is then computed from the verification results and the first original labels. Specifically, after the first images in the validation set are resized to the preset size, each first image is input into the student network to obtain a corresponding verification result, and whether the verification result is correct is judged according to the first original label of that image: the verification result is correct when the label corresponding to the largest value in the verification result matches the label corresponding to the first original label. The number of correct verification results over the same validation set is counted, and the accuracy of the student network is computed from the number of correct results and the number of first images. The student network with the highest accuracy is selected as the optimal student network. It will be appreciated that after each update of the first parameter, the accuracy of the student network is computed and compared with the previously computed accuracy, and the more accurate student network is retained. In this embodiment, updating ends when the number of updates of the first parameter reaches a preset number; accordingly, the retained student network is the most accurate one seen during updating and is selected as the optimal student network.
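A minimal sketch of this accuracy check and best-network retention (assumed names throughout; `val_loader` is assumed to yield resized validation images with integer category labels):

```python
import copy
import torch

def validate(student, val_loader, device="cpu"):
    """Accuracy = correct predictions / number of validation images (step S110)."""
    student.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in val_loader:
            results = student(images.to(device))   # verification results (label vectors)
            predicted = results.argmax(dim=1)      # category with the largest value
            correct += (predicted == labels.to(device)).sum().item()
            total += labels.numel()
    return correct / total

best_accuracy, best_student = 0.0, None
# Inside the training loop, after every update of the student's first parameter:
# accuracy = validate(student, val_loader)
# if accuracy > best_accuracy:
#     best_accuracy, best_student = accuracy, copy.deepcopy(student)
```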
Step S112, taking the plurality of optimal student networks as the image classification model. It will be appreciated that each set trains one pair of student and teacher networks; accordingly, the number of optimal student networks is the same as the number of sets.
In some possible embodiments, before step S102 is performed, all the first images are divided into a first set and a second set, and the first images in the second set are then divided into the subsets. The first images in the first set are used to evaluate the accuracy of the image classification model. Evaluating the accuracy specifically comprises: resizing the first images to the preset size, inputting them into the image classification model to obtain corresponding evaluation results, and computing the accuracy of the image classification model from the evaluation results and the first original labels of the first images. In this embodiment, the first images are scaled so that their size equals the preset size. The first set contains 10-20% of all the first images, and the second set contains 80-90%. Preferably, the first set contains 20% of all the first images and the second set contains 80%.
In the above embodiment, the first images, i.e., the labeled images, are divided into a plurality of subsets, and the second images, i.e., the unlabeled images, are added to the subsets to form a plurality of training sets, which are used to train a plurality of pairs of student and teacher networks. An optimal student network is then selected for each pair using the validation set corresponding to the training set. Through this semi-supervised deep learning method, an effective image classification model can be trained with a large number of unlabeled images even when labeled images are scarce. This effectively alleviates the low accuracy of the image classification model caused by the small number of labeled images, improves the performance of the image classification model, and increases the accuracy of classification prediction. Moreover, the resulting image classification model is robust and suitable for a variety of image classification tasks.
In addition, uniformly dividing the first images into a plurality of subsets effectively ensures that the plurality of trained optimal student networks are of equal importance. Since the number of subsets is 5-10, the ratio of the size of the validation set to that of the training set is about 1/10 to 1/5, which effectively ensures the training effect. Enhancing the same second image twice and inputting the two resulting second enhanced images into the student network and the teacher network respectively further improves the generalization ability and robustness of the image classification model through the second student result and the second teacher result.
It can be understood that the training method can be used to train not only image classification models but also classification models for speech, text, and the like. When the training method is used to train a speech or text classification model, the first images are correspondingly replaced with labeled speech or labeled text and the second images with unlabeled speech or unlabeled text during training; details are not repeated here.
Please refer to fig. 2, which is a sub-flowchart of a training method of an image classification model according to an embodiment of the present invention. Step S108 specifically includes the following steps.
Step S202, constructing a first loss from the first student result and the first original label. In this embodiment, the cross entropy of each first student result and its corresponding first original label is computed, and the average of all cross entropies over the same training set is taken as the first loss. Specifically, the first loss is constructed using a first formula:

$$L_1 = \frac{1}{N_1} \sum_{i=1}^{N_1} \mathrm{CE}\big(f_s(x_i),\, y_i\big)$$

where $L_1$ denotes the first loss, $x_i$ denotes a first image, $N_1$ denotes the number of first images in the training set, $y_i$ denotes the first original label, $f_s(x_i)$ denotes the first student result, and $\mathrm{CE}(f_s(x_i), y_i)$ denotes the cross entropy of the first student result and the first original label. It is understood that each cross entropy is taken between the first student result and the first original label of the same first image, and each training set corresponds to one first loss.
Step S204, constructing a second loss from the second student result and the second teacher result. In this embodiment, the mean square error of each second student result and its corresponding second teacher result is computed, and the average of all mean square errors over the same training set is taken as the second loss. Specifically, the second loss is constructed using a second formula:

$$L_2 = \frac{1}{N_2} \sum_{j=1}^{N_2} \big\| f_s(u_j) - f_t(u_j) \big\|^2$$

where $L_2$ denotes the second loss, $u_j$ denotes a second image, $N_2$ denotes the number of second images in the training set, $f_s(u_j)$ denotes the second student result, $f_t(u_j)$ denotes the second teacher result, and $\| f_s(u_j) - f_t(u_j) \|^2$ denotes the mean square error of the second student result and the second teacher result. It will be appreciated that each mean square error is taken between the second student result and the second teacher result of the same second image, and each training set corresponds to one second loss.
Step S206, updating the first parameter of the student network and the second parameter of the teacher network according to the first loss and the second loss. In this embodiment, the total loss is computed from the first loss and the second loss. Specifically, the total loss is computed using a third formula:

$$L = L_1 + \lambda L_2$$

where $L$ denotes the total loss and $\lambda$ denotes the first coefficient. After the total loss is obtained, the first parameter of the student network is updated according to the total loss by gradient descent, and the second parameter of the teacher network is updated according to the first parameter of the student network by a moving-average method. Specifically, the second parameter of the teacher network is updated using a fourth formula:

$$\theta'_t = \tfrac{1}{2}\big(\theta'_{t-1} + \theta_t\big)$$

where $\theta'_t$ denotes the current second parameter of the teacher network, $\theta'_{t-1}$ denotes the previous second parameter of the teacher network, and $\theta_t$ denotes the current first parameter of the student network. Preferably, the first parameter of the student network is updated by stochastic gradient descent, and the second parameter of the teacher network is updated by an exponential moving average. Specifically, the second parameter of the teacher network is then updated using a fifth formula:

$$\theta'_t = \beta\, \theta'_{t-1} + (1 - \beta)\, \theta_t$$

where $\beta$ denotes the second coefficient. It will be appreciated that after the first parameter of the student network is updated with the total loss, the second parameter of the teacher network is updated with the updated first parameter; the second parameter of the teacher network is thus an exponential moving average of the first parameter of the student network.
In this embodiment, the first loss constructed from the first student result and the first original label is a supervised loss term, which effectively ensures the fitting of the first images; the second loss constructed from the second student result and the second teacher result is an unsupervised loss term, which drives the second student result of the student network and the second teacher result of the teacher network closer together. In addition, updating the second parameter of the teacher network from the first parameter of the student network by a moving average is simple and fast, and using an exponential moving average makes the second parameter more accurate and reliable.
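Putting steps S202-S206 together, a hedged sketch of one update iteration in PyTorch (assumed names throughout; `lambda_1` and `beta` stand in for the first and second coefficients, whose values the patent does not fix; `labels` are integer category indices, i.e., the argmax of the one-hot first original labels, and applying softmax before the MSE is an implementation choice, not stated in the patent):

```python
import torch
import torch.nn.functional as F

def train_step(student, teacher, optimizer,
               labeled_images, labels,
               unlabeled_view_a, unlabeled_view_b,
               lambda_1=1.0, beta=0.99):
    """One iteration of step S108: supervised cross entropy plus unsupervised
    MSE, then SGD on the student and an EMA update of the teacher."""
    first_student = student(labeled_images)
    second_student = student(unlabeled_view_a)
    with torch.no_grad():
        second_teacher = teacher(unlabeled_view_b)

    first_loss = F.cross_entropy(first_student, labels)               # first formula
    second_loss = F.mse_loss(second_student.softmax(dim=1),
                             second_teacher.softmax(dim=1))           # second formula
    total_loss = first_loss + lambda_1 * second_loss                  # third formula

    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()                                                  # gradient descent on the student

    # Fifth formula: teacher parameters as an exponential moving average
    # of the (just updated) student parameters.
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(beta).add_(p_s, alpha=1.0 - beta)
    return total_loss.item()
```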
Referring to fig. 3 and fig. 4 in combination, fig. 3 is a flowchart of an image classification method according to an embodiment of the present invention, and fig. 4 is a sub-flowchart of the image classification method according to the embodiment of the present invention. The image classification method specifically comprises the following steps.
Step S302, inputting the target image into the image classification model and obtaining a classification result. The image classification model is trained by the training method of the image classification model in the above embodiment and can classify the target image. The image classification model comprises a plurality of sub-models. It can be understood that the target image is an unlabeled image, and the sub-models correspond one to one to the optimal student networks formed during training; accordingly, the number of sub-models is the same as the number of optimal student networks. Inputting the target image into the image classification model and obtaining the classification result specifically comprises the following steps.
Step S402, inputting the target image into each sub-model and obtaining corresponding sub-results. In this embodiment, the target image is resized to the preset size and then input into each sub-model to obtain a corresponding sub-result. The sub-result is a label vector whose number of values equals the number of categories, i.e., the preset value, each value corresponding to one category. It is understood that during training the image classification model resizes the first and second images to the preset size; when the target image is input into the image classification model, it must likewise be resized to the corresponding preset size.
Step S404, obtaining the classification result from the sub-results. In this embodiment, the numbers of identical sub-results are counted, and the sub-result that appears most often is selected as the classification result; a sketch follows below. For example, suppose there are 5 categories A, B, C, D, E and 5 sub-models. The sub-results obtained after the same target image is input into each sub-model are (0.1,0.8,0.6,0.3,0.4), (0.7,0.2,0.6,0.2,0.1), (0.2,0.9,0.5,0.4,0.1), (0.4,0.7,0.5,0.2,0.1), and (0.6,0.5,0.4,0.1,0.1). The category corresponding to the largest value in each sub-result is the most confident category, so the categories corresponding to the sub-results are B, A, B, B, A. Accordingly, category A receives 2 votes and category B receives 3 votes; since B has more votes than A, category B is selected as the classification result.
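A minimal sketch of this majority vote (`sub_results` reuses the label vectors from the example above; the names are assumptions for illustration):

```python
from collections import Counter

def majority_vote(sub_results):
    """Pick each sub-model's most confident category, then return the
    category with the most votes (step S404)."""
    predictions = [max(range(len(r)), key=lambda i: r[i]) for r in sub_results]
    return Counter(predictions).most_common(1)[0][0]

sub_results = [
    (0.1, 0.8, 0.6, 0.3, 0.4),  # -> B
    (0.7, 0.2, 0.6, 0.2, 0.1),  # -> A
    (0.2, 0.9, 0.5, 0.4, 0.1),  # -> B
    (0.4, 0.7, 0.5, 0.2, 0.1),  # -> B
    (0.6, 0.5, 0.4, 0.1, 0.1),  # -> A
]
print("ABCDE"[majority_vote(sub_results)])  # B, matching the example above
```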
In the above embodiment, the classification result is obtained by inputting the target image into each sub-model, obtaining the corresponding sub-results, and then counting the sub-results in a voting manner, i.e., counting the number of identical predictions among the sub-results. Since each sub-model is trained on the same numbers of first and second images, the sub-models are of equal importance, and so are their votes. The classification result can therefore be obtained by directly counting the votes, which effectively increases the accuracy of the image classification model.
In some possible embodiments, if all the first images were divided into subsets according to a certain ratio during training of the image classification model, the votes must be weighted by the corresponding ratios when counting identical sub-results. For example, suppose all the first images are divided into 5 subsets whose sizes have the ratio 1:1.2:2:1:3, and the categories corresponding to the sub-results are B, A, B, B, A. Then the weight of category A is 1.2+3 = 4.2 and the weight of category B is 1+2+1 = 4. Since the weight of A exceeds that of B, category A is selected as the classification result. That is, when counting identical sub-results, the base weight of each sub-result is set to 1 and multiplied by the corresponding ratio before the totals are counted, as in the sketch below.
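A hedged sketch of this weighted vote, assuming the subset-size ratios from training are known (`predictions` and `weights` are assumed names):

```python
from collections import defaultdict

def weighted_vote(predictions, weights):
    """Each sub-model's vote counts with base weight 1 multiplied by its ratio;
    the category with the largest weighted total wins."""
    totals = defaultdict(float)
    for category, weight in zip(predictions, weights):
        totals[category] += 1.0 * weight
    return max(totals, key=totals.get)

# B, A, B, B, A with ratios 1 : 1.2 : 2 : 1 : 3 -> A (4.2) beats B (4.0).
print(weighted_vote(["B", "A", "B", "B", "A"], [1, 1.2, 2, 1, 3]))
```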
Please refer to fig. 5, which is a schematic diagram of the internal structure of a terminal according to an embodiment of the present invention. The terminal 10 includes a computer-readable storage medium 11, a processor 12, and a bus 13. The computer-readable storage medium 11 includes at least one type of readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a magnetic memory, a magnetic disk, or an optical disk. In some embodiments, the computer-readable storage medium 11 may be an internal storage unit of the terminal 10, such as a hard disk of the terminal 10. In other embodiments, the computer-readable storage medium 11 may be an external storage device of the terminal 10, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the terminal 10. Further, the computer-readable storage medium 11 may include both an internal storage unit and an external storage device of the terminal 10. The computer-readable storage medium 11 may be used not only to store application software installed in the terminal 10 and various types of data, but also to temporarily store data that has been output or will be output.
The bus 13 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 5, but this is not intended to represent only one bus or type of bus.
Further, the terminal 10 may also include a display assembly 14. The display component 14 may be a Light Emitting Diode (LED) display, a liquid crystal display, a touch-sensitive liquid crystal display, an Organic Light-Emitting Diode (OLED) touch panel, or the like. The display component 14 may also be referred to as a display device or display unit, as appropriate, for displaying information processed in the terminal 10 and for displaying a visual user interface, among other things.
Further, the terminal 10 may also include a communication component 15. The communication component 15 may optionally include a wired communication component and/or a wireless communication component, such as a WI-FI communication component, a bluetooth communication component, etc., typically used to establish a communication connection between the terminal 10 and other intelligent control devices.
The processor 12 may be, in some embodiments, a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data processing chip, and is used to execute program code stored in the computer-readable storage medium 11 or to process data. Specifically, the processor 12 executes the stored program instructions to control the terminal 10 to implement the training method of the image classification model.
It is to be understood that fig. 5 only shows the terminal 10 with the components 11-15 for implementing the training method of the image classification model. Those skilled in the art will appreciate that the structure shown in fig. 5 does not limit the terminal 10, which may include fewer or more components than shown, combine certain components, or arrange components differently.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, insofar as these modifications and variations of the invention fall within the scope of the claims of the invention and their equivalents, the invention is intended to include these modifications and variations.
The above-mentioned embodiments are only examples of the present invention and do not limit the scope of the present patent; any equivalent modification made within the scope of the claims shall fall within the protection scope of the present invention.

Claims (10)

1. A training method of an image classification model is characterized by comprising the following steps:
dividing first images into a plurality of subsets, wherein the first images are labeled images;
constructing a plurality of sets, wherein each set comprises a validation set and a training set in one-to-one correspondence; in each set, the validation set is one of the plurality of subsets, the training set comprises a second image set and the remaining subsets of the plurality of subsets, and the second images in the second image set are unlabeled images;
inputting each training set into a respective one of a plurality of pairs of student networks and teacher networks, and obtaining corresponding output results;
updating a first parameter of the student network and a second parameter of the teacher network according to the output results;
selecting, according to the validation set, an optimal student network formed by the corresponding student network during the updating of the first parameter; and
taking a plurality of the optimal student networks as an image classification model.
2. The method of claim 1, wherein the step of inputting each training set into a plurality of pairs of student networks and teacher networks and obtaining corresponding output results comprises:
inputting each first image of the same training set into the student network and acquiring a corresponding first student result; and
inputting each second image of the same training set into the student network and the teacher network respectively, and obtaining a corresponding second student result and a corresponding second teacher result.
3. The method of claim 2, wherein the first image comprises a first original label, and updating the first parameter of the student network and the second parameter of the teacher network according to the output result specifically comprises:
constructing a first loss from the first student result and the first original label;
constructing a second loss according to the second student result and the second teacher result; and
updating a first parameter of the student network and a second parameter of the teacher network based on the first loss and the second loss.
4. The method of claim 3, wherein constructing a first loss from the first student result and the first original label specifically comprises:
calculating the cross entropy of the corresponding first student result and the first original label; and
taking the average value of all cross entropies corresponding to the same training set as the first loss.
5. The method of claim 3, wherein constructing a second loss from the second student results and second teacher results comprises:
calculating the mean square error of the corresponding second student result and the second teacher result; and
taking the average value of all mean square errors corresponding to the same training set as the second loss.
6. The method of claim 3, wherein updating the first parameters of the student network and the second parameters of the teacher network based on the first loss and the second loss specifically comprises:
calculating a total loss using the first loss and the second loss;
updating a first parameter of the student network according to the total loss by using a gradient descent method; and
updating the second parameter of the teacher network according to the first parameter of the student network by using a moving average method.
7. The method for training the image classification model according to claim 1, wherein the step of selecting the optimal student network formed by the corresponding student network in the process of updating the first parameter according to the validation set specifically comprises the steps of:
each time the first parameter of the student network is updated, inputting the first images in the corresponding validation set into the student network and obtaining corresponding verification results, wherein each first image comprises a first original label;
calculating the accuracy of the corresponding student network according to the verification result and the first original label; and
selecting the student network with the highest accuracy as the optimal student network.
8. An image classification method, characterized in that the image classification method comprises:
inputting a target image into an image classification model and obtaining a classification result, wherein the image classification model is obtained by training the training method of the image classification model according to any one of claims 1 to 7, the image classification model comprises a plurality of sub-models, and inputting the target image into the image classification model and obtaining the classification result specifically comprises:
respectively inputting the target image into each sub-model and obtaining corresponding sub-results; and
obtaining the classification result according to the sub-results.
9. The image classification method according to claim 8, wherein obtaining the classification result according to the sub-result specifically comprises:
counting the number of the same sub-results in the sub-results; and
selecting the sub-result that appears most often among the identical sub-results as the classification result.
10. A computer-readable storage medium for storing program instructions executable by a processor to implement a method of training an image classification model according to any one of claims 1 to 7.
CN202111459523.9A 2021-12-02 2021-12-02 Training method of image classification model and image classification method Active CN113869464B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111459523.9A CN113869464B (en) 2021-12-02 2021-12-02 Training method of image classification model and image classification method


Publications (2)

Publication Number Publication Date
CN113869464A 2021-12-31
CN113869464B (en) 2022-03-18

Family

ID=78985663

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111459523.9A Active CN113869464B (en) 2021-12-02 2021-12-02 Training method of image classification model and image classification method

Country Status (1)

Country Link
CN (1) CN113869464B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180307894A1 (en) * 2017-04-21 2018-10-25 General Electric Company Neural network systems
CN110674880A (en) * 2019-09-27 2020-01-10 北京迈格威科技有限公司 Network training method, device, medium and electronic equipment for knowledge distillation
CN110826458A (en) * 2019-10-31 2020-02-21 河海大学 Multispectral remote sensing image change detection method and system based on deep learning
WO2021140426A1 (en) * 2020-01-09 2021-07-15 International Business Machines Corporation Uncertainty guided semi-supervised neural network training for image classification
CN112545452A (en) * 2020-12-07 2021-03-26 南京医科大学眼科医院 High myopia fundus lesion risk prediction method
CN113326764A (en) * 2021-05-27 2021-08-31 北京百度网讯科技有限公司 Method and device for training image recognition model and image recognition

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HEINRICH DINKEL et al.: "Voice Activity Detection in the Wild: A Data-Driven Approach Using Teacher-Student Training", IEEE/ACM Transactions on Audio, Speech, and Language Processing *
张萌 (Zhang Meng): "Research on Deep-Learning-Based Classification Methods for Thyroid Cancer Pathological Images", China Master's Theses Full-text Database *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114862819A (en) * 2022-05-24 2022-08-05 深圳大学 Image quality evaluation method, device, equipment and medium based on asymmetric network
CN117009883A (en) * 2023-09-28 2023-11-07 腾讯科技(深圳)有限公司 Object classification model construction method, object classification method, device and equipment
CN117009883B (en) * 2023-09-28 2024-04-02 腾讯科技(深圳)有限公司 Object classification model construction method, object classification method, device and equipment

Also Published As

Publication number Publication date
CN113869464B (en) 2022-03-18

Similar Documents

Publication Publication Date Title
CN109978893B (en) Training method, device, equipment and storage medium of image semantic segmentation network
CN113869464B (en) Training method of image classification model and image classification method
CN107944450B (en) License plate recognition method and device
CN109214002A (en) A kind of transcription comparison method, device and its computer storage medium
CN108334805B (en) Method and device for detecting document reading sequence
US20200364216A1 (en) Method, apparatus and storage medium for updating model parameter
CN113626607B (en) Abnormal work order identification method and device, electronic equipment and readable storage medium
CN114241499A (en) Table picture identification method, device and equipment and readable storage medium
CN111325237A (en) Image identification method based on attention interaction mechanism
CN112749737A (en) Image classification method and device, electronic equipment and storage medium
CN114091594A (en) Model training method and device, equipment and storage medium
CN113220883B (en) Text classification method, device and storage medium
CN111611388A (en) Account classification method, device and equipment
CN112269875A (en) Text classification method and device, electronic equipment and storage medium
CN114139658A (en) Method for training classification model and computer readable storage medium
CN113591881B (en) Intention recognition method and device based on model fusion, electronic equipment and medium
CN113221662B (en) Training method and device of face recognition model, storage medium and terminal
CN113989596B (en) Training method of image classification model and computer readable storage medium
CN113850326A (en) Image identification method, device, equipment and storage medium
CN115937875A (en) Text recognition method and device, storage medium and terminal
CN114373088A (en) Training method of image detection model and related product
CN114117037A (en) Intention recognition method, device, equipment and storage medium
CN109583512B (en) Image processing method, device and system
CN112507912A (en) Method and device for identifying illegal picture
CN112070060A (en) Method for identifying age, and training method and device of age identification model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221130

Address after: 410,012 1st to 4th floors in the north of Building 6, Science and Technology Creative Park, Yuelu Street, Yuelushan University Science and Technology City, Changsha, Hunan

Patentee after: Hunan Youxiang Netlink Intelligent Technology Co.,Ltd.

Address before: 518049 401, building 1, Shenzhen new generation industrial park, No. 136, Zhongkang Road, Meidu community, Meilin street, Futian District, Shenzhen, Guangdong Province

Patentee before: SHENZHEN MINIEYE INNOVATION TECHNOLOGY Co.,Ltd.