CN111950643B - Image classification model training method, image classification method and corresponding device


Info

Publication number
CN111950643B
Authority
CN
China
Prior art keywords
image
sampling
classification
neural network
training
Prior art date
Legal status
Active
Application number
CN202010834837.1A
Other languages
Chinese (zh)
Other versions
CN111950643A (en)
Inventor
秦永强
李素莹
宋亮
高达辉
Current Assignee
Innovation Wisdom Shanghai Technology Co ltd
Original Assignee
Innovation Wisdom Shanghai Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Innovation Wisdom Shanghai Technology Co ltd
Priority to CN202010834837.1A
Publication of CN111950643A
Application granted
Publication of CN111950643B

Classifications

    • G06F18/2415: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches, based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/25: Fusion techniques
    • G06N3/045: Combinations of networks
    • G06N3/084: Backpropagation, e.g. using gradient descent

Abstract

The application relates to the technical field of artificial intelligence, and provides a model training method, an image classification method and corresponding devices. The model training method comprises the following steps: inputting a training image into a first neural network for processing to obtain a first feature map; obtaining a first attention map based on the first feature map; non-uniformly sampling the training image according to the information of all channels and the information of a single channel in the first attention map, respectively, to obtain a first sampled image and a second sampled image; inputting the first sampled image into a second neural network for processing to obtain a first classification probability, and inputting the second sampled image into a third neural network for processing to obtain a second classification probability; and calculating a classification prediction loss according to the first classification probability and the second classification probability, and updating the parameters of each neural network according to the classification prediction loss. The first attention map in the method learns to automatically locate the key details required for classification, without depending on extra labeling information, which saves training cost.

Description

Image classification model training method, image classification method and corresponding device
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a model training method, an image classification method and a corresponding device.
Background
Fine-grained image classification refers to subdividing a coarse image category into finer sub-categories; because the differences among the sub-categories are subtle, different sub-categories can often be distinguished only by means of small local differences.
Currently, the vast majority of fine-grained image classification methods follow the same pipeline: first find the foreground object and its local regions, then extract features from each local region, and finally complete the training and prediction of a classifier based on the extracted features. When training models of this kind, extra manual annotation information beyond the image class labels, such as the positions of local regions, is often required, and such extra annotation is costly, time-consuming and labor-intensive to obtain.
Disclosure of Invention
An object of the embodiments of the present application is to provide a model training method, an image classification method and corresponding apparatuses, so as to address the above technical problems.
In order to achieve the above purpose, the present application provides the following technical solutions:
in a first aspect, an embodiment of the present application provides a model training method, including: inputting a training image into a first neural network for processing to obtain a first feature map output by the first neural network; obtaining a first attention map based on the first feature map, wherein the value of a pixel in the first attention map is positively correlated with the probability that the corresponding pixel in the training image is sampled; non-uniformly sampling the training image according to the information of all channels in the first attention map to obtain a first sampled image, and non-uniformly sampling the training image according to the information of a single channel in the first attention map to obtain a second sampled image; inputting the first sampled image into a second neural network for processing to obtain a first classification probability output by the second neural network, and inputting the second sampled image into a third neural network for processing to obtain a second classification probability output by the third neural network; and calculating a classification prediction loss according to the first classification probability and the second classification probability, and updating parameters of the first neural network, the second neural network and the third neural network by using a back propagation algorithm according to the classification prediction loss.
In the method, since the value of a pixel in the first attention map is positively correlated with the probability that the corresponding pixel in the training image is sampled, non-uniformly sampling the training image according to the first attention map allocates more sampling points to regions with larger pixel values in the first attention map (i.e., regions where the attention distribution is concentrated), so that those regions influence the classification prediction result more significantly.
Further, the first attention map is not preset but is computed by the first neural network, and the first neural network continuously adjusts its parameters according to the classification prediction result during training, so that regions with large pixel values in the first attention map gradually converge onto the key regions of the training image that are beneficial to classification. That is to say, as training progresses, the first attention map gradually learns to locate the details that are critical for correctly classifying the training image, and this detail-locating capability is acquired automatically through learning without relying on additional annotation information, which saves training cost, improves training efficiency and makes the method more practical.
In addition, the image classification network used in the above method can be regarded as comprising two branch networks. The global branch network predicts the first classification probability from the first sampled image; since the first sampled image is obtained by non-uniformly sampling the training image according to the information of all channels in the first attention map, it retains the global contour information of the training image. The local branch network predicts the second classification probability from the second sampled image; since the second sampled image is obtained by non-uniformly sampling the training image according to the information of a single channel in the first attention map, it retains the local detail information of the training image. When the prediction loss is finally calculated, the method considers the first classification probability and the second classification probability simultaneously, which is equivalent to blending the classification-relevant local detail information extracted by the local branch network into the global branch network through knowledge distillation; in other words, the information in the image is fully and comprehensively utilized for classification, so that the trained image classification network performs better.
It should be noted that the trained image classification network can be used to perform both fine-grained image classification tasks and general image classification tasks.
In an implementation manner of the first aspect, the non-uniformly sampling the training image according to information of all channels in the first attention map to obtain a first sampled image includes: performing average pooling on all channels in the first attention map to obtain an average attention map; sampling the training image with a first non-uniform sampling function according to the average attention map to obtain the first sampled image.
In the above implementation, average pooling averages the pixel values of all channels of the first attention map at each position, so the resulting average attention map fuses the information of every channel and reflects the attention distribution in the training image as a whole; sampling the training image according to the average attention map therefore yields a first sampled image that retains the global contour information of the training image.
In an implementation manner of the first aspect, the sampling the training image with a first non-uniform sampling function according to the average attention map to obtain the first sampled image includes: calculating the first sampled image according to the following formula:

$$I_s(i, j) = S(I, A(M))(i, j) = I\big(F_w^{-1}(i),\, F_h^{-1}(j)\big), \quad 1 \le i \le w,\ 1 \le j \le h$$

wherein $I_s$ denotes the first sampled image, $S$ denotes the first non-uniform sampling function, $M$ denotes the first attention map, $A(M)$ denotes the average attention map, $I$ denotes the training image, $w$ denotes the width of the training image, $h$ denotes the height of the training image, $i$ denotes a pixel index in the w direction, $j$ denotes a pixel index in the h direction, and $F_w^{-1}$ and $F_h^{-1}$ are respectively the inverse functions of the following two functions:

$$F_w(x) = w \cdot \frac{\int_0^x m_w(t)\,\mathrm{d}t}{\int_0^w m_w(t)\,\mathrm{d}t}, \qquad F_h(y) = h \cdot \frac{\int_0^y m_h(t)\,\mathrm{d}t}{\int_0^h m_h(t)\,\mathrm{d}t}$$

wherein $m_w(t) = \int_0^h A(M)(t, s)\,\mathrm{d}s$ denotes the integral (marginal) of $A(M)$ in the w direction, and $m_h(t) = \int_0^w A(M)(s, t)\,\mathrm{d}s$ denotes the integral (marginal) of $A(M)$ in the h direction.
in one implementation manner of the first aspect, the non-uniformly sampling the training image according to information of a single channel in the first attention map to obtain a second sampled image includes: randomly selecting one channel from all channels of the first attention diagram; sampling the training image by using a second non-uniform sampling function according to the selected channel to obtain a second sampling image; and performing channel random selection once again at intervals of a preset training period.
In one implementation manner of the first aspect, the non-uniformly sampling the training image according to information of a single channel in the first attention map to obtain a second sampled image includes: selecting one channel from all channels of the first attention diagram according to a preset sequence; sampling the training image by using a second non-uniform sampling function according to the selected channel to obtain a second sampling image; and performing channel selection again according to the preset sequence every preset training period, wherein the preset sequence is an arrangement sequence of all channels in the first attention diagram.
In both implementations, a single channel of the first attention map is selected to sample the training image; since each channel represents one visual pattern, the resulting second sampled image retains the local detail information of the training image for that attention channel.
Two ways of selecting a channel from the first attention map are provided above: random selection and selection in a predetermined order. In either way, when the training time is long enough, all channels in the first attention map are traversed, i.e., local detail information of all levels is eventually extracted and used to train the image classification network. In addition, only one channel of the first attention map is selected for sampling in each training step, which reduces the amount of computation and improves training efficiency.
In one implementation manner of the first aspect, the obtaining a first attention map based on the first feature map includes: calculating the first attention map according to the relations among the channels in the first feature map.
In the implementation manner, since the information of each channel in the first feature map is fused when the first attention map is calculated, the calculated first attention map can more effectively reflect the attention distribution in the training image.
In an implementation manner of the first aspect, the calculating a classification prediction loss according to the first classification probability and the second classification probability includes: calculating a first loss according to the first classification probability and a label of the training image, and calculating a second loss according to the first classification probability and the second classification probability; wherein the first loss characterizes a difference between the predicted classification result of the second neural network and a true classification result, and the second loss characterizes a difference between the predicted classification result of the second neural network and a predicted classification result of the third neural network; and carrying out weighted summation on the first loss and the second loss to obtain the classified prediction loss.
In the above implementation manner, the total classification prediction loss is obtained by a weighted sum of a first loss and a second loss. The first loss is the traditional classification prediction loss; training based on it makes the classification result predicted by the second neural network approach the true classification result. The second loss is newly proposed by this application; it fuses the local detail information extracted by the local branch network into the global branch network, and training based on it makes the classification result predicted by the third neural network approach the classification result predicted by the second neural network.
In a second aspect, an embodiment of the present application provides an image classification method, including: inputting an image to be classified into a first neural network for processing to obtain a second feature map output by the first neural network; obtaining a second attention map based on the second feature map, wherein the value of a pixel in the second attention map is positively correlated with the probability that the corresponding pixel in the image to be classified is sampled; non-uniformly sampling the image to be classified according to the information of all channels in the second attention map to obtain a third sampled image, and non-uniformly sampling the image to be classified according to the information of a single channel in the second attention map to obtain a fourth sampled image; inputting the third sampled image into a second neural network for processing to obtain a third classification probability output by the second neural network, and inputting the fourth sampled image into a third neural network for processing to obtain a fourth classification probability output by the third neural network; and determining the final classification result of the image to be classified according to the third classification probability and the fourth classification probability.
In the method, the image classification network provided by the first aspect is used for classifying the image to be classified, so that the second attention map acquired based on the trained first neural network can automatically locate the key details related to classification in the image to be classified, and the non-uniform sampling of the image to be classified based on the second attention map can strengthen the key details in the obtained sampled image.
Since the third sampled image is obtained by non-uniformly sampling the image to be classified according to the information of all channels in the second attention map, it retains the global contour information of the image to be classified; since the fourth sampled image is obtained by non-uniformly sampling the image to be classified according to the information of a single channel in the second attention map, it retains the local detail information of the image to be classified.
When determining the final classification result of the image to be classified, the method considers both the third classification probability predicted from the third sampled image and the fourth classification probability predicted from the fourth sampled image, i.e., both the key global contour information and the key local detail information, so the classification accuracy is high and the method is well suited to fine-grained image classification tasks.
In a third aspect, an embodiment of the present application provides a model training apparatus, including: a first feature map acquisition module, configured to input a training image into a first neural network for processing to obtain a first feature map output by the first neural network; a first attention map obtaining module, configured to obtain a first attention map based on the first feature map, where the value of a pixel in the first attention map is positively correlated with the probability that the corresponding pixel in the training image is sampled; a first sampling module, configured to non-uniformly sample the training image according to the information of all channels in the first attention map to obtain a first sampled image, and non-uniformly sample the training image according to the information of a single channel in the first attention map to obtain a second sampled image; a first classification prediction module, configured to input the first sampled image into a second neural network for processing to obtain a first classification probability output by the second neural network, and input the second sampled image into a third neural network for processing to obtain a second classification probability output by the third neural network; and a parameter updating module, configured to calculate a classification prediction loss according to the first classification probability and the second classification probability, and update parameters of the first neural network, the second neural network and the third neural network by using a back propagation algorithm according to the classification prediction loss.
In a fourth aspect, an embodiment of the present application provides an image classification apparatus, including: the second feature map acquisition module is used for inputting the image to be classified into the first neural network for processing to obtain a second feature map output by the first neural network; a second attention map obtaining module, configured to obtain a second attention map based on the second feature map, where a value of a pixel in the second attention map is positively correlated with a probability that a corresponding pixel in the image to be classified is sampled; the second sampling module is used for carrying out non-uniform sampling on the image to be classified according to the information of all channels in the second attention map to obtain a third sampling image, and carrying out non-uniform sampling on the image to be classified according to the information of a single channel in the second attention map to obtain a fourth sampling image; the second classification prediction module is used for inputting the third sampling image into a second neural network for processing to obtain a third classification probability output by the second neural network, and inputting the fourth sampling image into the third neural network for processing to obtain a fourth classification probability output by the third neural network; and the classification result acquisition module is used for determining a final classification result of the image to be classified according to the third classification probability and the fourth classification probability.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores computer program instructions, and when the computer program instructions are read and executed by a processor, the computer program instructions perform the method provided by the first aspect or any one of the possible implementation manners of the first aspect.
In a sixth aspect, an embodiment of the present application provides an electronic device, including: a memory in which computer program instructions are stored, and a processor, where the computer program instructions are read and executed by the processor to perform the method provided by the first aspect or any one of the possible implementation manners of the first aspect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
FIG. 1 is a flow chart illustrating a model training method provided by an embodiment of the present application;
FIG. 2 is a block diagram illustrating an image classification network used in a model training method according to an embodiment of the present disclosure;
FIG. 3 is a flow chart of an image classification method provided by an embodiment of the present application;
FIG. 4 is a block diagram of a model training apparatus provided in an embodiment of the present application;
FIG. 5 is a block diagram of an image classification apparatus according to an embodiment of the present application;
fig. 6 shows a schematic diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. It should be noted that like reference numbers and letters refer to like items in the following figures; thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. The terms "first," "second," "third," and the like are used solely to distinguish one item from another and are not to be construed as indicating or implying relative importance.
Fig. 1 is a flowchart illustrating a model training method provided in an embodiment of the present application, and fig. 2 is a block diagram illustrating an image classification network that can be used in the model training method of fig. 1, and will be described with reference to fig. 2 when describing steps of the method of fig. 1. The model training method in fig. 1 may be, but is not limited to, performed by an electronic device, and fig. 6 shows a possible structure of the electronic device, which is described in detail with reference to fig. 6 later. Referring to fig. 1, the method includes:
step S110: and inputting the training image into the first neural network for processing to obtain a first characteristic diagram output by the first neural network.
In the solution proposed in the present application, the types of the neural networks (including the first neural network, the second neural network, and the third neural network) are not limited; they may be, for example, a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), a Deep Neural Network (DNN), or the like. Convolutional neural networks are the most common choice in the field of image processing; a convolutional neural network includes at least a convolutional layer, and may further include a pooling layer, a fully-connected layer, and the like.
The training images may refer to images in a training set, and when training the image classification network, a batch (batch) training method may be adopted, that is, a batch of images in the training set are input into the image classification network for training each time, but for simplicity, the case where each batch includes only one training image is taken as an example when the solution of the present application is introduced, and the case where each batch includes multiple training images is similar.
After the training image is input into the first neural network, feature extraction is carried out through the first neural network to obtain a first feature map. For example, if the training image is I and the first feature map is X, the dimension of I may be h × w (h is the height of I, and w is the width of I), and the dimension of X may be c × h × w (c is the number of channels of X).
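To make the shape conventions concrete, here is a minimal sketch in which the first neural network is a convolutional feature extractor. The PyTorch ResNet-18 backbone is our assumption, not something prescribed by this application; note also that a real backbone usually reduces the spatial size, so the h and w of X need not equal those of I.

```python
import torch
import torchvision.models as models

# Assumed first neural network: ResNet-18 with its pooling/classifier removed.
backbone = models.resnet18(weights=None)
first_nn = torch.nn.Sequential(*list(backbone.children())[:-2])

I = torch.randn(1, 3, 224, 224)  # a training image I (batch of one)
X = first_nn(I)                  # first feature map X of dimension c x h x w
print(X.shape)                   # torch.Size([1, 512, 7, 7]), i.e. c = 512
```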
Step S120: a first attention map is obtained based on the first signature map.
The first attention map reflects the attention distribution in the training image: the more concentrated the attention distribution in a region, the more likely that region contains the key information needed to classify the image (e.g., the small details for distinguishing two similar products), and the more the image classification model should focus on it. In the first attention map, the degree of concentration of the attention distribution is characterized by the pixel value: larger pixel values indicate a more concentrated attention distribution, smaller values a less concentrated one.
On the other hand, the value of a pixel in the first attention map is positively correlated with the probability that the corresponding pixel in the training image is sampled, so when the training image is non-uniformly sampled according to the first attention map (the specific sampling process is described in step S130), regions with larger pixel values in the first attention map (i.e., regions with a concentrated attention distribution) are allocated more sampling points and thus influence the classification prediction result more significantly. Such a sampling method suits image classification: as mentioned above, regions with a concentrated attention distribution are more likely to contain the key information required for classification, and giving them a larger share of the sampling result helps strengthen that key information and thereby improve the classification result; conversely, regions where attention is not concentrated contribute little to classification and should occupy a smaller share of the sampling result, or need not be sampled at all.
Of course, the first attention map's ability to locate the key information required for classification is not innate; it is acquired through continual training of the first neural network. The first neural network keeps adjusting its parameters according to the classification prediction result during training (see step S150), so that regions with larger pixel values in the first attention map gradually converge onto key regions that are beneficial to classifying the training image. That is, as training progresses, the first attention map becomes increasingly able to locate the details in the training image that are critical to its correct classification. It should be noted that this detail-locating capability is generated automatically through learning, without any additional annotation information, which saves training cost, improves training efficiency and improves the practicability of the training method.
The first attention map has the same dimensions as the first feature map; denote the first attention map as M, so the dimension of M is also c × h × w, while the dimension of the training image is h × w, and thus each pixel in M corresponds to the pixel at the same position in the training image. In some implementations, the first feature map may be taken directly as the first attention map; in other implementations, the first attention map may be computed from the first feature map: for example, it may be calculated according to the relations among the channels in the first feature map. The advantage of this implementation is that the information of the channels of the first feature map is fused when computing the first attention map, so the calculated first attention map reflects the attention distribution in the training image more effectively. A specific example is given below:
$$M = \mathrm{softmax}\big(X X^{\mathsf{T}}\big)\, X$$

where X is the first feature map reshaped into a c × (h·w) matrix, T represents the transpose of the matrix, and the result is reshaped back to c × h × w. The term $X X^{\mathsf{T}}$ embodies the relations among the channels of the first feature map, and since only X itself is used in the calculation formula, the matrix operation is based on a self-attention mechanism.
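As an illustration only, the sketch below implements one plausible reading of the formula above; the softmax normalization and the exact matrix arrangement are assumptions, and all names are ours:

```python
import torch
import torch.nn.functional as F

def channel_attention_map(X: torch.Tensor) -> torch.Tensor:
    """Compute a first attention map M of shape (c, h, w) from a first
    feature map X of the same shape, using only the relations among the
    channels of X itself (a channel-wise self-attention)."""
    c, h, w = X.shape
    Xf = X.reshape(c, h * w)             # flatten the spatial dimensions
    rel = Xf @ Xf.T                      # (c, c): relations among the channels
    attn = F.softmax(rel, dim=-1)        # assumed normalization of the relations
    return (attn @ Xf).reshape(c, h, w)  # re-weight channels, restore shape
```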
Step S130: non-uniformly sampling the training image according to the information of all channels in the first attention map to obtain a first sampled image; and non-uniformly sampling the training image according to the information of a single channel in the first attention map to obtain a second sampled image.
Step S130 is divided into two sub-steps, the first sub-step is to sample and obtain a first sampled image, and the second sub-step is to sample and obtain a second sampled image. The two sub-steps may be executed in parallel, without limitation to the order of execution, and the first sub-step of step S130 is described below:
in some implementations, all channels in the first attention map are averaged and pooled, where the averaging pooling means that pixel values of each channel in the first attention map at the same position are averaged, and after the averaging pooling, a plurality of channels in the first attention map are merged into one channel, which is called an average attention map, and a pixel value in the average attention map is also positively correlated to a probability that a corresponding pixel in the training image is sampled.
Then, the training image is sampled with a first non-uniform sampling function according to the average attention map to obtain the first sampled image. Non-uniform sampling means that more sampling points are assigned to regions with larger pixel values in the average attention map (i.e., regions of the training image where attention is concentrated), while fewer or even no sampling points are assigned to regions with smaller pixel values (i.e., regions where attention is not concentrated).
The first non-uniform sampling function is any function capable of implementing such non-uniform sampling. Denoting the first non-uniform sampling function as S and the first sampled image as Is, we have:
Is=S(I, A(M))
where I represents the training image, M represents the first attention map, a (·) represents the average pooling (a (M) represents the average attention map). The present application does not limit the specific form of the first non-uniform sampling function, and a specific example is given below.
In one implementation, the first sampled image may be obtained by calculation using the following formula:

$$I_s(i, j) = I\big(F_w^{-1}(i),\, F_h^{-1}(j)\big), \quad 1 \le i \le w,\ 1 \le j \le h$$

where w denotes the width of the training image, h denotes the height of the training image, i denotes the pixel index in the w direction, j denotes the pixel index in the h direction, and $F_w^{-1}$ and $F_h^{-1}$ are respectively the inverse functions of the following two functions:

$$F_w(x) = w \cdot \frac{\int_0^x m_w(t)\,\mathrm{d}t}{\int_0^w m_w(t)\,\mathrm{d}t}, \qquad F_h(y) = h \cdot \frac{\int_0^y m_h(t)\,\mathrm{d}t}{\int_0^h m_h(t)\,\mathrm{d}t}$$

where $m_w(t) = \int_0^h A(M)(t, s)\,\mathrm{d}s$ denotes the integral (marginal) of A(M) in the w direction, and $m_h(t) = \int_0^w A(M)(s, t)\,\mathrm{d}s$ denotes the integral (marginal) of A(M) in the h direction.
in the above implementation, S may be regarded as a mapping from I to Is, and it should be noted that although the dimension of Is the same as I, and Is h × w, pixels at certain positions in I are repeatedly sampled for a plurality of times, where the pixel values in a (m) are larger, or where the attention Is concentrated.
Since the average attention map fuses information of each channel in the first attention map, attention distribution in the training image can be reflected on the whole, so that the training image is sampled according to the average attention map, the obtained first sampling image effectively retains global contour information of the training image, and the global contour information is key information for image classification according to the analysis of the first attention map.
It is to be understood that the way of fusing the multi-channel information in the first attention map is not limited to average pooling, but may be, for example, summing the channels directly, maximal pooling, etc.
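For illustration, the sketch below implements this first sub-step under the stated assumptions: average pooling over the channels of the first attention map, then inverse-CDF non-uniform sampling, so that rows and columns carrying more attention mass receive more sample points. Function and variable names are ours, and nearest-neighbour gathering stands in for whatever interpolation an actual implementation would use:

```python
import numpy as np

def nonuniform_sample(I: np.ndarray, A: np.ndarray) -> np.ndarray:
    """Sample image I of shape (h, w) or (h, w, ch) according to attention
    map A of shape (h, w): the larger A is in a region, the more sample
    points that region receives."""
    h, w = A.shape
    # Marginals of the attention map along each axis (epsilon keeps
    # zero-attention rows/columns from collapsing the CDF).
    m_w = A.sum(axis=0) + 1e-8       # attention mass per w index
    m_h = A.sum(axis=1) + 1e-8       # attention mass per h index
    # Normalized cumulative distributions F_w, F_h.
    F_w = np.cumsum(m_w) / m_w.sum()
    F_h = np.cumsum(m_h) / m_h.sum()
    # Invert the CDFs on a uniform grid: where a CDF rises steeply (high
    # attention), many grid points map into the same input neighbourhood,
    # so it is sampled densely; flat (low-attention) spans are skipped.
    u = (np.arange(w) + 0.5) / w
    v = (np.arange(h) + 0.5) / h
    src_x = np.searchsorted(F_w, u).clip(0, w - 1)
    src_y = np.searchsorted(F_h, v).clip(0, h - 1)
    return I[src_y][:, src_x]        # gather rows, then columns

# First sub-step of S130 (assumed names): M is the (c, h, w) first attention
# map and img the (h, w, 3) training image.
# Is = nonuniform_sample(img, M.mean(axis=0))   # A(M): average attention map
```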
The second sub-step of step S130 is described below:
firstly, one channel is selected from the plurality of channels of the first attention map; the value of a pixel in this channel is likewise positively correlated with the probability that the corresponding pixel in the training image is sampled. Then, the training image is sampled with a second non-uniform sampling function according to the selected channel to obtain the second sampled image. Since each channel represents one visual pattern, the resulting second sampled image retains the local detail information of the training image for that attention channel, which, according to the foregoing analysis of the first attention map, is key information for image classification.
The second non-uniform sampling function may be the same as or different from the first non-uniform sampling function, and is not described in detail. The following illustrates how to select a channel for sampling from the plurality of channels of the first attention map, with at least the following two approaches:
(1) one channel is randomly selected from all the channels of the first attention map for obtaining a second sampling image, and the channel random selection is performed again at intervals of a preset training period.
If the second non-uniform sampling function is also denoted as S and the second sampled image is denoted as Id, then:
Id=S(I, R(M))
where I represents a training image, M represents a first attention map, and R (·) represents a random selection of a channel from among a plurality of channels of the image (R (M) represents a random selection of a channel from among a plurality of channels of the first attention map).
The preset training period may be a preset duration, a preset number of steps, or a preset number of rounds (epochs) in the training process; for example, in one implementation the channel used for sampling is randomly re-selected from the first attention map once per training round (one round meaning that all images in the training set have participated in training once). The reason for re-selecting the channel is that sampling based on different channels extracts local detail information of different levels from the training image; when the training time is long enough, all channels in the first attention map are traversed, i.e., the local detail information corresponding to every channel is eventually extracted and used to train the image classification network.
(2) One channel is selected from all the channels of the first attention map in a predetermined order for obtaining the second sample image, and the channel selection is performed again in the predetermined order every preset training period.
Wherein the predetermined order is an arrangement order of all channels in the first attention map, such that all channels in the first attention map are traversed if the training time is long enough. For example, in one implementation, the channels used for sampling in the first attention diagram are re-selected once per training round, and the selection is performed sequentially, i.e., the first channel is selected for the first training round, the second channel is selected for the second training round, and so on.
In the method (1) or the method (2), only one channel in the first attention diagram is selected for sampling during each training (which may refer to training with a batch of training images, or one step in the training process), and the channel is selected again at every preset training period, which is beneficial to reducing the amount of computation required for extracting local detail information and improving the training efficiency.
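A minimal sketch of the two channel-selection approaches; the function name and the use of an epoch counter as the "preset training period" are our assumptions:

```python
import random

def select_channel(num_channels: int, epoch: int, strategy: str = "random") -> int:
    """Pick the single attention-map channel used for the second sampled
    image in this training period; call once per preset training period."""
    if strategy == "random":
        return random.randrange(num_channels)  # way (1): random re-selection
    return epoch % num_channels                # way (2): fixed order, cycling
```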
Referring to fig. 2, the image classification network may be divided into two branch networks: a global branch network and a local branch network. The global branch network non-uniformly samples the training image with the first sampling function according to the information of all channels in the first attention map to obtain a first sampled image containing global contour information, and then performs classification prediction on it; the local branch network non-uniformly samples the training image with the second sampling function according to the information of a single channel in the first attention map to obtain a second sampled image containing local detail information, and then performs classification prediction on it.
Step S140: inputting the first sampling image into a second neural network for processing to obtain a first classification probability output by the second neural network; and inputting the second sampling image into a third neural network for processing to obtain a second classification probability output by the third neural network.
The application does not limit the architecture of the second neural network; it may be, for example, VGG, ResNet, GoogLeNet, or the like. The end of the second neural network may include a fully-connected layer and a softmax classifier, so that after the first sampled image is processed by the second neural network, a first classification probability is output. The first classification probability is a vector, each element of which is the probability value of one class, and it also represents the classification result predicted by the second neural network (the class corresponding to the element with the largest value in the vector can be taken as the class predicted by the second neural network). The third neural network is similar to the second neural network and will not be described again.
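By way of illustration, a branch classifier (the second or the third neural network) could be sketched as follows; the backbone is left abstract and all names are ours, since the application does not prescribe an architecture:

```python
import torch
import torch.nn as nn

class BranchClassifier(nn.Module):
    """An arbitrary backbone followed by a fully-connected layer and a
    softmax, outputting a classification probability vector with one
    probability value per class."""
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x).flatten(1)          # (batch, feat_dim)
        return torch.softmax(self.fc(feats), dim=1)  # classification probability
```

The predicted class is then the arg-max of this probability vector, as described above.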
Step S150: calculating the classification prediction loss according to the first classification probability and the second classification probability, and updating parameters of the first neural network, the second neural network and the third neural network by using a back propagation algorithm according to the classification prediction loss.
The present application does not specifically limit how the classification prediction loss is calculated from the first classification probability and the second classification probability, only that both factors must be considered simultaneously. As mentioned before, the first sampled image retains the global contour information of the training image, so the first classification probability predicted from it by the second neural network also represents this global information; likewise, the second sampled image retains the local detail information of the training image, so the second classification probability predicted from it by the third neural network represents this local information. Considering both probabilities when calculating the classification prediction loss is therefore equivalent to fusing, through knowledge distillation, the local detail information extracted by the local branch network with the global contour information extracted by the global branch network; that is, the key classification-relevant information in the image is fully and comprehensively utilized, and the trained image classification network performs better.
In some implementations, the total classification prediction loss can be obtained by a weighted sum of a first loss and a second loss (illustrated in fig. 2 by the arrows pointing to the classification prediction loss). The first loss is calculated according to the first classification probability and the label of the training image (shown by the arrows pointing to the first loss in fig. 2); this is the traditional classification prediction loss, it characterizes the difference between the classification result predicted by the second neural network and the true classification result, and training based on it makes the classification result predicted by the second neural network approach the true classification result. For example, if the first classification probability is denoted $q_s$ and the label of the training image is denoted $y$ (one-hot coding can be used), the first loss can be written as $L_0 = L_0(q_s, y)$; as for the specific expression of $L_0$, reference may be made to the prior art, and it is not described here.
The second loss is calculated according to the first classification probability and the second classification probability (shown by the arrows pointing to the second loss in fig. 2). This loss is newly proposed in the present application; it characterizes the difference between the classification result predicted by the second neural network and that predicted by the third neural network, and training based on it makes the classification result predicted by the third neural network approach the classification result predicted by the second neural network. For example, if the second classification probability is denoted $q_d$, the second loss can be written as $L_1 = L_1(q_s, q_d)$.
If $L_1$ adopts the cross-entropy loss, the specific calculation formula of the second loss is:

$$L_1 = -\sum_{k=1}^{N} q_{d,k} \log q_{s,k}$$

where N represents the total number of categories and $q_d$ and $q_s$ are both N-dimensional vectors. It will be understood that $L_1$ may also use other loss functions; it is not necessary to use the cross-entropy loss. With the above notation, the total classification prediction loss can be calculated as:

$$L = L_0 + \alpha L_1$$
where α is a weighting coefficient; α = 1 means that $L_0$ and $L_1$ are directly summed, which may be regarded as a special case of weighted summation. Through the weighted summation, the local detail information extracted by the local branch network is fused into the global branch network, and this process can also be regarded as knowledge distillation. It will be appreciated that in other implementations the classification prediction loss may include further loss terms beyond the first loss and the second loss, for example a third loss calculated from the second classification probability and the label of the training image.
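Under the notation above, the following is a minimal sketch of the total loss, assuming both $L_0$ and $L_1$ are cross-entropies between probability vectors; the distillation direction (which of $q_s$ and $q_d$ serves as the target) is an assumption here, as is the choice to let gradients flow through both branches:

```python
import torch

def classification_prediction_loss(qs, qd, y_onehot, alpha=1.0):
    """L = L0 + alpha * L1. qs, qd: (batch, N) probability vectors from the
    global and local branches; y_onehot: (batch, N) one-hot labels.
    Gradients flow through both qs and qd, so backpropagating this loss
    updates the first, second and third neural networks together."""
    eps = 1e-8
    L0 = -(y_onehot * torch.log(qs + eps)).sum(dim=1).mean()  # first loss
    L1 = -(qd * torch.log(qs + eps)).sum(dim=1).mean()        # second loss
    return L0 + alpha * L1
```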
As for the updating of the parameters of the neural network by using the back propagation algorithm in step S150, reference is made to the prior art, and no description is made herein. It should be noted that the image classification network includes not only the first neural network, the second neural network and the third neural network, but also the sampling structure is a part of the network (the blocks shown by the solid lines in fig. 2 can be regarded as the components of the network), but since the sampling structure does not have parameters to be updated, only the parameter updating of the first neural network, the second neural network and the third neural network is mentioned here. It is understood that if the image classification network further includes other parts requiring parameter updating, the parameter updating may be performed when step S150 is performed.
Steps S110 to S150 are repeatedly performed in the training process until the training end condition is satisfied. The training end condition may be one or more of convergence of the image classification network, training for a preset time, training for a preset turn, and the like. The trained image classification network may be used to perform a fine image classification task, or may be used to perform a general image classification task, which is not limited in the present application.
Fig. 3 shows a flowchart of an image classification method provided by an embodiment of the present application; the image classification network used in the method is obtained by training with the model training method provided by the embodiments of the present application. The image classification method in fig. 3 may be, but is not limited to being, performed by an electronic device; fig. 6 shows a possible structure of the electronic device, described later with reference to fig. 6. Referring to fig. 3, the method includes:
step S210: and inputting the image to be classified into the first neural network for processing to obtain a second characteristic diagram output by the first neural network.
Step S220: obtaining a second attention map based on the second feature map; the value of a pixel in the second attention map is positively correlated with the probability that the corresponding pixel in the image to be classified is sampled.
The above two steps are similar to steps S110 and S120, and reference may be made to the corresponding contents, which are not repeated.
Step S230: non-uniformly sampling the image to be classified according to the information of all channels in the second attention map to obtain a third sampled image, and non-uniformly sampling the image to be classified according to the information of a single channel in the second attention map to obtain a fourth sampled image.
Step S230 is similar to step S130, and reference may be made to the corresponding contents in the foregoing, but it should be noted that, in the trained image classification network, no matter which channel in the second attention map is selected for sampling to obtain the fourth sampled image, the final classification result is not greatly affected. Therefore, in step S230, when selecting the channel for sampling in the second attention map, the channel may be selected randomly or one channel (e.g., the first channel) may be selected fixedly.
Step S240: inputting the third sampled image into the second neural network for processing to obtain a third classification probability output by the second neural network, and inputting the fourth sampled image into the third neural network for processing to obtain a fourth classification probability output by the third neural network.
Step S240 is similar to step S140, and reference may be made to the corresponding contents in the foregoing.
Step S250: determining the final classification result of the image to be classified according to the third classification probability and the fourth classification probability.
In some implementation manners, a mean value of the third classification probability and the fourth classification probability (a mean value of the elements at the corresponding positions of the two vectors) may be calculated, and a category corresponding to an element with the largest value in the obtained mean value vectors is used as a final classification result of the image to be classified. Of course, in other implementation manners, the third classification probability and the fourth classification probability may also be summed or weighted summed (summing or weighted summing of elements at positions corresponding to two vectors), and the category corresponding to the element with the largest value in the obtained sum vector is used as the final classification result of the image to be classified, and so on.
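For illustration, a minimal sketch of the mean-combination variant described above (names are ours):

```python
import torch

def final_class(q3: torch.Tensor, q4: torch.Tensor) -> int:
    """Element-wise mean of the third and fourth classification probability
    vectors; the class with the largest mean probability is the result."""
    q = (q3 + q4) / 2
    return int(q.argmax())
```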
According to the image classification method, the image classification network trained by the model training method is adopted to classify the images to be classified, so that the second attention map acquired based on the trained first neural network can automatically position the key details related to classification in the images to be classified, and the non-uniform sampling of the images to be classified based on the second attention map can strengthen the key details in the obtained sampling images.
Since the third sampled image is obtained by non-uniformly sampling the image to be classified according to the information of all channels in the second attention map, it retains the global contour information of the image to be classified; since the fourth sampled image is obtained by non-uniformly sampling the image to be classified according to the information of a single channel in the second attention map, it retains the local detail information of the image to be classified. When determining the final classification result, the method considers both the third classification probability predicted from the third sampled image and the fourth classification probability predicted from the fourth sampled image, i.e., both the key global contour information and the key local detail information of the image to be classified, so the classification accuracy is high and the method is well suited to fine-grained image classification tasks.
Fig. 4 is a functional block diagram of a model training apparatus 300 according to an embodiment of the present application. Referring to fig. 4, the model training apparatus 300 includes:
a first feature map obtaining module 310, configured to input a training image to a first neural network for processing, and obtain a first feature map output by the first neural network;
a first attention map obtaining module 320, configured to obtain a first attention map based on the first feature map, where a value of a pixel in the first attention map is positively correlated with a probability that a corresponding pixel in the training image is sampled;
a first sampling module 330, configured to perform non-uniform sampling on the training image according to information of all channels in the first attention map to obtain a first sampling image, and perform non-uniform sampling on the training image according to information of a single channel in the first attention map to obtain a second sampling image;
the first classification prediction module 340 is configured to input the first sampled image to a second neural network for processing to obtain a first classification probability output by the second neural network, and input the second sampled image to a third neural network for processing to obtain a second classification probability output by the third neural network;
A parameter updating module 350, configured to calculate a classification prediction loss according to the first classification probability and the second classification probability, and update parameters of the first neural network, the second neural network, and the third neural network by using a back propagation algorithm according to the classification prediction loss.
In one implementation of the model training apparatus 300, the first sampling module 330 non-uniformly samples the training image according to the information of all channels in the first attention map to obtain the first sampled image by: performing average pooling over all channels in the first attention map to obtain an average attention map; and sampling the training image with a first non-uniform sampling function according to the average attention map to obtain the first sampled image.
In one implementation of the model training apparatus 300, the first sampling module 330 samples the training image according to the average attention map by using the first non-uniform sampling function to obtain the first sampled image, including: calculating the first sampled image according to the following formula (reconstructed from the surrounding definitions, the original expressions surviving only as unrendered figure references):

$$ I_s(i, j) = S\big(I, A(M)\big)(i, j) = I\big(F_w^{-1}(i),\, F_h^{-1}(j)\big) $$

wherein $I_s$ denotes the first sampled image, $S$ denotes the first non-uniform sampling function, $M$ denotes the first attention map, $A(M)$ denotes the average attention map, $I$ denotes the training image, $w$ denotes the width of the training image, $h$ denotes the height of the training image, $i$ denotes a pixel index in the w-direction, $j$ denotes a pixel index in the h-direction, and $F_w^{-1}$ and $F_h^{-1}$ are the inverses of the two functions

$$ F_w(i) = w \cdot \frac{\int_0^{i} a_w(t)\,\mathrm{d}t}{\int_0^{w} a_w(t)\,\mathrm{d}t}, \qquad F_h(j) = h \cdot \frac{\int_0^{j} a_h(t)\,\mathrm{d}t}{\int_0^{h} a_h(t)\,\mathrm{d}t}, $$

wherein $a_w$ denotes the integral of $A(M)$ in the w direction (the attention mass as a function of the w-coordinate), $a_w(i) = \int_0^{h} A(M)(i, t)\,\mathrm{d}t$, and $a_h$ denotes the integral of $A(M)$ in the h direction, $a_h(j) = \int_0^{w} A(M)(t, j)\,\mathrm{d}t$.
in one implementation of the model training apparatus 300, the non-uniform sampling of the training image by the first sampling module 330 according to the information of the single channel in the first attention map to obtain the second sampling image includes: randomly selecting one channel from all channels of the first attention diagram; sampling the training image by using a second non-uniform sampling function according to the selected channel to obtain a second sampling image; and performing channel random selection once again at intervals of a preset training period.
In another implementation of the model training apparatus 300, the first sampling module 330 non-uniformly samples the training image according to the information of a single channel in the first attention map to obtain the second sampled image by: selecting one channel from all channels of the first attention map according to a preset sequence; sampling the training image with a second non-uniform sampling function according to the selected channel to obtain the second sampled image; and performing the channel selection again according to the preset sequence every preset training period, where the preset sequence is an arrangement order of all channels in the first attention map.
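Both single-channel selection strategies fit in one small helper. The sketch below is illustrative only; the class name, the period handling, and the once-per-epoch calling convention are all assumptions:

```python
import random

class ChannelSelector:
    def __init__(self, num_channels: int, period: int, mode: str = "random"):
        self.num_channels = num_channels
        self.period = period   # the preset training period, in epochs
        self.mode = mode       # "random" or "sequential" (preset order)
        self.current = random.randrange(num_channels) if mode == "random" else 0
        self._last_switch = 0

    def channel_for_epoch(self, epoch: int) -> int:
        # Re-select once every `period` epochs; otherwise keep the channel.
        if epoch - self._last_switch >= self.period:
            self._last_switch = epoch
            if self.mode == "random":
                self.current = random.randrange(self.num_channels)
            else:  # step through the channels in their stored order
                self.current = (self.current + 1) % self.num_channels
        return self.current

# Example: with 8 channels, re-select a random channel every 5 epochs.
# selector = ChannelSelector(num_channels=8, period=5, mode="random")
# ch = selector.channel_for_epoch(epoch)  # index into the first attention map
```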
In one implementation of the model training apparatus 300, the first attention map obtaining module 320 obtains the first attention map based on the first feature map by calculating the first attention map according to the relations among the channels in the first feature map.
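The patent leaves this channel-relation computation open. As one illustrative possibility (an assumption, not the claimed construction), a DANet-style channel attention can be derived from the feature map's channel-similarity matrix:

```python
import torch

def attention_from(feat: torch.Tensor) -> torch.Tensor:
    # feat: (c, h, w) first feature map -> (c, h, w) first attention map.
    c, h, w = feat.shape
    x = feat.reshape(c, -1)                     # flatten each channel to a row
    relation = torch.softmax(x @ x.t(), dim=1)  # (c, c) inter-channel relations
    attn = (relation @ x).reshape(c, h, w)      # re-mix channels by relation
    return torch.relu(attn)                     # keep non-negative sampling weights
```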
In one implementation of the model training apparatus 300, the parameter updating module 350 calculates the classification prediction loss according to the first classification probability and the second classification probability by: calculating a first loss according to the first classification probability and the label of the training image, and calculating a second loss according to the first classification probability and the second classification probability, where the first loss characterizes the difference between the second neural network's predicted classification result and the true classification result, and the second loss characterizes the difference between the predicted classification results of the second and third neural networks; and performing a weighted summation of the first loss and the second loss to obtain the classification prediction loss.
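A concrete reading of this loss is sketched below. Cross-entropy for the first loss, a KL-divergence for the second, the detached target, and the `alpha`/`beta` weights are all assumptions, since the patent fixes neither the loss forms nor the weighting:

```python
import torch
import torch.nn.functional as F

def classification_loss(logits1, logits2, label, alpha=1.0, beta=0.5):
    # First loss: second network's prediction vs. the ground-truth label.
    loss1 = F.cross_entropy(logits1, label)
    # Second loss: gap between the second and third networks' predictions;
    # detaching treats the second network as the teacher (a design assumption).
    loss2 = F.kl_div(F.log_softmax(logits2, dim=1),
                     F.softmax(logits1, dim=1).detach(),
                     reduction="batchmean")
    return alpha * loss1 + beta * loss2        # weighted summation
```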
The implementation principle of the model training apparatus 300 provided in the embodiment of the present application has been described in the foregoing method embodiments; for brevity, portions of the apparatus embodiment not mentioned here may be found in the corresponding contents of the method embodiments.
Fig. 5 shows a functional block diagram of an image classification apparatus 400 provided in an embodiment of the present application. Referring to fig. 5, the image classification apparatus 400 includes:
a second feature map obtaining module 410, configured to input an image to be classified into a first neural network for processing, and obtain a second feature map output by the first neural network;
a second attention map obtaining module 420, configured to obtain a second attention map based on the second feature map, where a value of a pixel in the second attention map is positively correlated to a probability that a corresponding pixel in the image to be classified is sampled;
the second sampling module 430 is configured to perform non-uniform sampling on the image to be classified according to information of all channels in the second attention map to obtain a third sampled image, and perform non-uniform sampling on the image to be classified according to information of a single channel in the second attention map to obtain a fourth sampled image;
the second classification prediction module 440 is configured to input the third sampled image to a second neural network for processing to obtain a third classification probability output by the second neural network, and input the fourth sampled image to the third neural network for processing to obtain a fourth classification probability output by the third neural network;
a classification result obtaining module 450, configured to determine the final classification result of the image to be classified according to the third classification probability and the fourth classification probability.
The implementation principle of the image classification apparatus 400 provided in the embodiment of the present application has likewise been described in the foregoing method embodiments; for brevity, refer to the corresponding contents there.
Fig. 6 shows a possible structure of an electronic device 500 provided in an embodiment of the present application. Referring to fig. 6, the electronic device 500 includes: a processor 510, a memory 520, and a communication interface 530, which are interconnected and in communication with each other via a communication bus 540 and/or other form of connection mechanism (not shown).
The memory 520 includes one or more memories (only one is shown in the figure), which may be, but are not limited to, a Random Access Memory (RAM), a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The processor 510, and possibly other components, may access, read, and/or write data in the memory 520.
The processor 510 includes one or more processors (only one is shown), which may be integrated circuit chips with signal processing capability. The processor 510 may be a general-purpose processor, including a Central Processing Unit (CPU), a Micro Control Unit (MCU), a Network Processor (NP), or another conventional processor; it may also be a special-purpose processor, including a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. Moreover, when there are multiple processors 510, some may be general-purpose processors and others special-purpose processors.
The communication interface 530 includes one or more interfaces (only one is shown) that can be used to communicate, directly or indirectly, with other devices for data interaction. The communication interface 530 may include interfaces for wired and/or wireless communication.
One or more computer program instructions may be stored in memory 520 and read and executed by processor 510 to implement the model training method and/or the image classification method provided by the embodiments of the present application.
It will be appreciated that the configuration shown in FIG. 6 is merely illustrative and that electronic device 500 may include more or fewer components than shown in FIG. 6 or have a different configuration than shown in FIG. 6. The components shown in fig. 6 may be implemented in hardware, software, or a combination thereof. The electronic device 500 may be a physical device, such as a PC, a laptop, a tablet, a cell phone, a server, an embedded device, etc., or may be a virtual device, such as a virtual machine, a virtualized container, etc. The electronic device 500 is not limited to a single device, and may be a combination of a plurality of devices or a cluster including a large number of devices.
An embodiment of the present application further provides a computer-readable storage medium storing computer program instructions which, when read and executed by a processor of a computer, perform the model training method and/or the image classification method provided in the embodiments of the present application. The computer-readable storage medium may be implemented as, for example, the memory 520 in the electronic device 500 of FIG. 6.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made to the present application by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. An image classification model training method is characterized by comprising the following steps:
inputting a training image into a first neural network for processing to obtain a first feature map output by the first neural network;
obtaining a first attention map based on the first feature map, wherein the value of a pixel in the first attention map is positively correlated with the probability of sampling the corresponding pixel in the training image;
non-uniformly sampling the training image according to the information of all channels in the first attention map to obtain a first sampling image, and non-uniformly sampling the training image according to the information of a single channel in the first attention map to obtain a second sampling image;
inputting the first sampling image into a second neural network for processing to obtain a first classification probability output by the second neural network, and inputting the second sampling image into a third neural network for processing to obtain a second classification probability output by the third neural network;
And calculating a classification prediction loss according to the first classification probability and the second classification probability, and updating parameters of the first neural network, the second neural network and the third neural network by using a back propagation algorithm according to the classification prediction loss.
2. The method for training the image classification model according to claim 1, wherein the non-uniformly sampling the training image according to the information of all channels in the first attention map to obtain a first sampled image, comprises:
performing average pooling on all channels in the first attention map to obtain an average attention map;
and sampling the training image by utilizing a first non-uniform sampling function according to the average attention diagram to obtain the first sampling image.
3. The method for training the image classification model according to claim 2, wherein the sampling the training image according to the average attention map by using a first non-uniform sampling function to obtain the first sampled image comprises:
the first sampling image is obtained by calculation according to the following formula (reconstructed from the definitions below, the original expressions surviving only as unrendered figure references):

$$ I_s(i, j) = S\big(I, A(M)\big)(i, j) = I\big(F_w^{-1}(i),\, F_h^{-1}(j)\big) $$

wherein $I_s$ denotes the first sampled image, $S$ denotes the first non-uniform sampling function, $M$ denotes the first attention map, $A(M)$ denotes the average attention map, $I$ denotes the training image, $w$ denotes the width of the training image, $h$ denotes the height of the training image, $i$ denotes a pixel index in the w-direction, $j$ denotes a pixel index in the h-direction, and $F_w^{-1}$ and $F_h^{-1}$ are the inverses of the two functions

$$ F_w(i) = w \cdot \frac{\int_0^{i} a_w(t)\,\mathrm{d}t}{\int_0^{w} a_w(t)\,\mathrm{d}t}, \qquad F_h(j) = h \cdot \frac{\int_0^{j} a_h(t)\,\mathrm{d}t}{\int_0^{h} a_h(t)\,\mathrm{d}t}, $$

wherein $a_w$ denotes the integral of $A(M)$ in the w direction (the attention mass as a function of the w-coordinate), $a_w(i) = \int_0^{h} A(M)(i, t)\,\mathrm{d}t$, and $a_h$ denotes the integral of $A(M)$ in the h direction, $a_h(j) = \int_0^{w} A(M)(t, j)\,\mathrm{d}t$.
4. the method for training an image classification model according to claim 1, wherein the non-uniform sampling of the training image according to the information of the single channel in the first attention map to obtain a second sampled image comprises:
randomly selecting one channel from all channels of the first attention map;
sampling the training image by using a second non-uniform sampling function according to the selected channel to obtain a second sampling image;
and performing the random channel selection once again every preset training period.
5. The method for training an image classification model according to claim 1, wherein the non-uniform sampling of the training image according to the information of the single channel in the first attention map to obtain a second sampled image comprises:
selecting one channel from all channels of the first attention map according to a preset sequence;
sampling the training image by using a second non-uniform sampling function according to the selected channel to obtain a second sampling image;
and performing the channel selection again according to the preset sequence every preset training period, wherein the preset sequence is an arrangement order of all channels in the first attention map.
6. The method for training the image classification model according to claim 1, wherein the obtaining a first attention map based on the first feature map comprises:
and calculating the first attention map according to the relations among the channels in the first feature map.
7. The method for training an image classification model according to claim 1, wherein the calculating a classification prediction loss according to the first classification probability and the second classification probability comprises:
calculating a first loss according to the first classification probability and a label of the training image, and calculating a second loss according to the first classification probability and the second classification probability; wherein the first loss characterizes a difference between the predicted classification result of the second neural network and a true classification result, and the second loss characterizes a difference between the predicted classification result of the second neural network and a predicted classification result of the third neural network;
and carrying out weighted summation on the first loss and the second loss to obtain the classified prediction loss.
8. An image classification method, comprising:
inputting an image to be classified into a first neural network for processing to obtain a second feature map output by the first neural network;
obtaining a second attention map based on the second feature map, wherein the value of a pixel in the second attention map is positively correlated with the probability that the corresponding pixel in the image to be classified is sampled;
non-uniformly sampling the image to be classified according to the information of all channels in the second attention map to obtain a third sampling image, and non-uniformly sampling the image to be classified according to the information of a single channel in the second attention map to obtain a fourth sampling image;
inputting the third sampling image into a second neural network for processing to obtain a third classification probability output by the second neural network, and inputting the fourth sampling image into a third neural network for processing to obtain a fourth classification probability output by the third neural network;
and determining the final classification result of the image to be classified according to the third classification probability and the fourth classification probability.
9. An image classification model training device, comprising:
a first feature map acquisition module used for inputting a training image into a first neural network for processing to obtain a first feature map output by the first neural network;
a first attention map obtaining module, configured to obtain a first attention map based on the first feature map, where a value of a pixel in the first attention map is positively correlated with a probability that a corresponding pixel in the training image is sampled;
the first sampling module is used for carrying out non-uniform sampling on the training image according to the information of all channels in the first attention map to obtain a first sampling image, and carrying out non-uniform sampling on the training image according to the information of a single channel in the first attention map to obtain a second sampling image;
the first classification prediction module is used for inputting the first sampling image into a second neural network for processing to obtain a first classification probability output by the second neural network, and inputting the second sampling image into a third neural network for processing to obtain a second classification probability output by the third neural network;
and the parameter updating module is used for calculating the classification prediction loss according to the first classification probability and the second classification probability and updating the parameters of the first neural network, the second neural network and the third neural network by utilizing a back propagation algorithm according to the classification prediction loss.
10. An image classification apparatus, comprising:
the second feature map acquisition module is used for inputting the image to be classified into the first neural network for processing to obtain a second feature map output by the first neural network;
a second attention map obtaining module, configured to obtain a second attention map based on the second feature map, where a value of a pixel in the second attention map is positively correlated with a probability that a corresponding pixel in the image to be classified is sampled;
the second sampling module is used for carrying out non-uniform sampling on the image to be classified according to the information of all channels in the second attention map to obtain a third sampling image, and carrying out non-uniform sampling on the image to be classified according to the information of a single channel in the second attention map to obtain a fourth sampling image;
the second classification prediction module is used for inputting the third sampling image into a second neural network for processing to obtain a third classification probability output by the second neural network, and inputting the fourth sampling image into the third neural network for processing to obtain a fourth classification probability output by the third neural network;
and the classification result acquisition module is used for determining a final classification result of the image to be classified according to the third classification probability and the fourth classification probability.
CN202010834837.1A 2020-08-18 2020-08-18 Image classification model training method, image classification method and corresponding device Active CN111950643B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010834837.1A CN111950643B (en) 2020-08-18 2020-08-18 Image classification model training method, image classification method and corresponding device


Publications (2)

Publication Number Publication Date
CN111950643A CN111950643A (en) 2020-11-17
CN111950643B true CN111950643B (en) 2022-06-28

Family

ID=73342115

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010834837.1A Active CN111950643B (en) 2020-08-18 2020-08-18 Image classification model training method, image classification method and corresponding device

Country Status (1)

Country Link
CN (1) CN111950643B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112634231A (en) * 2020-12-23 2021-04-09 香港中文大学深圳研究院 Image classification method and device, terminal equipment and storage medium
CN112365385B (en) * 2021-01-18 2021-06-01 深圳市友杰智新科技有限公司 Knowledge distillation method and device based on self attention and computer equipment
CN112819044A (en) * 2021-01-20 2021-05-18 江苏天幕无人机科技有限公司 Method for training neural network for target operation task compensation of target object
CN113139627B (en) * 2021-06-22 2021-11-05 北京小白世纪网络科技有限公司 Mediastinal lump identification method, system and device
CN113888430B (en) * 2021-09-30 2023-03-24 北京达佳互联信息技术有限公司 Image processing method and device and model training method and device
CN113780478B (en) * 2021-10-26 2024-05-28 平安科技(深圳)有限公司 Activity classification model training method, classification method, device, equipment and medium
CN115861684B (en) * 2022-11-18 2024-04-09 百度在线网络技术(北京)有限公司 Training method of image classification model, image classification method and device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109426858B (en) * 2017-08-29 2021-04-06 京东方科技集团股份有限公司 Neural network, training method, image processing method, and image processing apparatus
EP3671531A1 (en) * 2018-12-17 2020-06-24 Promaton Holding B.V. Semantic segmentation of non-euclidean 3d data sets using deep learning
CN109800737B (en) * 2019-02-02 2021-06-25 深圳市商汤科技有限公司 Face recognition method and device, electronic equipment and storage medium
CN110059744B (en) * 2019-04-16 2022-10-25 腾讯科技(深圳)有限公司 Method for training neural network, method and equipment for processing image and storage medium
CN110110808B (en) * 2019-05-16 2022-04-15 京东方科技集团股份有限公司 Method and device for performing target labeling on image and computer recording medium
CN111126488B (en) * 2019-12-24 2023-08-18 威创集团股份有限公司 Dual-attention-based image recognition method
CN111192200A (en) * 2020-01-02 2020-05-22 南京邮电大学 Image super-resolution reconstruction method based on fusion attention mechanism residual error network
CN111198964B (en) * 2020-01-10 2023-04-25 中国科学院自动化研究所 Image retrieval method and system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant