CN112613574A - Training method of image classification model, image classification method and device - Google Patents
Training method of image classification model, image classification method and device
- Publication number
- CN112613574A (application CN202011609917.3A)
- Authority
- CN
- China
- Prior art keywords
- image classification
- classification model
- loss
- cam
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The embodiment of the application discloses a training method for an image classification model, an image classification method, and a device, wherein the image classification model is implemented based on a convolutional neural network. The training method includes performing the following operations for each picture input into the image classification model during training: acquiring a class activation map (CAM) corresponding to a preset class of the picture, and a class-agnostic activation map (CAAM), a heat map independent of the preset class; calculating a loss term between the CAM map and the CAAM map using a preset algorithm; determining a final loss function using the loss term; and adjusting parameters of the image classification model using the determined final loss function. According to the disclosed scheme, during training of the image classification model the CAAM map is constrained by the CAM map while the parameters of the image classification model are adjusted.
Description
Technical Field
The embodiments of the application relate to, but are not limited to, the field of image classification, and in particular to a training method for an image classification model, an image classification method, and a device.
Background
Image classification is currently among the most active research areas in machine learning, pattern recognition, and computer vision. With the development of deep learning, handling the image classification problem with deep learning models has become mainstream. In recent years, deep neural network structures have grown larger and deeper, and their accuracy on visual classification tasks has improved continuously. However, while gaining strong learning capacity, deep networks readily face the key problem of overfitting. Many researchers have proposed effective regularization methods such as Dropout, Weight decay, Stochastic depth, and Mixup.
Under the deep learning framework, the loss function is the indispensable link that measures the difference between the predicted and true results, thereby guiding the network to adjust its parameters toward more accurate predictions. An effective way to tackle overfitting is to design loss functions that yield a more discriminative feature distribution, i.e., to increase intra-class compactness and inter-class separability. Inspired by this, researchers proposed Center-loss and Triplet-loss, which introduce additional constraints on top of the traditional Softmax-loss: Center-loss requires the sum of squared distances between the features of the samples in each batch and their class centers to be as small as possible, while Triplet-loss requires the distance between similar samples to be as small as possible and the distance between dissimilar samples to be as large as possible. However, both loss functions are computationally expensive, which limits their application to large-scale datasets such as ImageNet. Researchers also proposed L-Softmax and SM-Softmax, which mathematically modify the original Softmax function so that image feature representations gain larger angular separability. However, such loss functions are not visually intuitive. Moreover, with the above loss functions an image is always represented by a one-dimensional feature vector that contains no visual spatial information. Therefore, realizing a loss function that incorporates visual spatial information without introducing a large amount of computation is of strong practical significance for alleviating overfitting in deep learning networks and improving their overall performance (including classification ability, localization ability, and interpretability).
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
The embodiments of the disclosure provide a training method and a training device for an image classification model, and an image classification method, in which the class-agnostic activation map (CAAM) is constrained by the class activation map (CAM) and the parameters of the image classification model are adjusted accordingly.
The present disclosure provides a training method of an image classification model, wherein the image classification model is implemented based on a convolutional neural network, and the training method comprises:
for each picture input into the image classification model during training, performing the following operations respectively:
acquiring a class activation map (CAM) corresponding to a preset class of the picture, and a class-agnostic activation map (CAAM), a heat map independent of the preset class;
calculating a loss term between the CAM graph and the CAAM graph by adopting a preset algorithm;
determining a final loss function using the loss term;
adjusting parameters of the image classification model using the determined final loss function.
In an exemplary embodiment, the obtaining the CAM and CAAM of the preset category corresponding to the picture includes:
obtaining the multiple feature maps output for the picture by the last convolutional layer of the image classification model;
for each pixel position, performing a weighted summation of the values at that position across all the feature maps, using the weighting coefficients corresponding to the preset class, to obtain the CAM map for the preset class;
and, for each pixel position, summing the values at that position across all the feature maps to obtain the CAAM map.
In an exemplary embodiment, the calculating the loss term between the CAM map and the CAAM map by using a predetermined algorithm includes:
respectively normalizing the CAM graph and the CAAM graph;
and calculating the loss term between the normalized CAM map and CAAM map using a distance metric.
In an exemplary embodiment, the distance metric includes: any distance metric in pixel space.
In an exemplary embodiment, the determining the final loss function using the loss term includes:
and carrying out weighted summation on the determined loss terms and preset classification cross entropy loss terms to obtain a final loss function.
In an exemplary embodiment, the determining the final loss function using the loss term includes:
the final loss function includes: CAM-loss = α·Loss_cam + Loss_cross;
where α denotes a preset hyper-parameter for adjusting the weight of the loss term, Loss_cam denotes the loss term between the CAM map and the CAAM map, and Loss_cross denotes the classification cross-entropy loss term.
In an exemplary embodiment, the convolutional neural network employs a linear classifier or a cosine classifier.
The present disclosure also provides an image classification method, including:
training the image classification model according to the training method of the image classification model in any embodiment;
and classifying the input pictures by adopting a trained image classification model.
The present disclosure also provides an apparatus comprising a memory and a processor; the memory is configured to store a training or image classification program for an image classification model, and the processor is configured to read and execute the training or image classification program for an image classification model, and execute the training method for an image classification model or the image classification method according to any one of the above embodiments.
The present disclosure also provides a storage medium having stored therein a training or image classification program for an image classification model, the program being arranged to perform, when running, the training method of the image classification model according to any one of the above embodiments or the image classification method according to the above embodiments.
The embodiments of the disclosure provide a training method for an image classification model, an image classification method, and a device, wherein the image classification model is implemented based on a convolutional neural network. The training method includes performing the following operations for each picture input into the image classification model during training: acquiring a class activation map (CAM) corresponding to a preset class of the picture, and a class-agnostic activation map (CAAM) independent of the preset class; calculating a loss term between the CAM map and the CAAM map using a preset algorithm; determining a final loss function using the loss term; and adjusting parameters of the image classification model using the determined final loss function. According to the disclosed scheme, during training the CAAM map is constrained by the CAM map, and the parameters of the image classification model are adjusted.
Other aspects will be apparent upon reading and understanding the attached drawings and detailed description.
Drawings
FIG. 1 is a flowchart of a training method of an image classification model according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the CAM, CAAM, CAM-loss calculations in some exemplary embodiments;
FIG. 3 is a schematic view of an apparatus according to an embodiment of the present application;
FIG. 4 is a graphical depiction of a visual comparison of the CAM-loss method to the Softmax-loss baseline method in some exemplary embodiments.
Detailed Description
Hereinafter, embodiments of the present application will be described in detail with reference to the accompanying drawings. It should be noted that the features of the embodiments and examples of the present application may be arbitrarily combined with each other without conflict.
The steps illustrated in the flow charts of the figures may be performed in a computer system such as a set of computer-executable instructions. Also, while a logical order is shown in the flow diagrams, in some cases, the steps shown or described may be performed in an order different than here.
The embodiment of the present disclosure provides a training method for an image classification model, where the image classification model is implemented based on a convolutional neural network. As shown in fig. 1, the method includes:
for each picture input into the image classification model during training, performing the following operations respectively:
S100, acquiring a class activation map (CAM) corresponding to a preset class of the picture, and a class-agnostic activation map (CAAM) independent of the preset class;
S200, calculating a loss term between the CAM map and the CAAM map using a preset algorithm;
S300, determining a final loss function using the loss term;
S400, adjusting parameters of the image classification model using the determined final loss function.
In this embodiment, the Class Activation Map (CAM) is a heat map that contains rich spatial information. The CAM exhibits the most discriminative core regions for identifying a particular class. Directly summing the output feature maps of the last convolutional layer yields a Class-Agnostic Activation Map (CAAM), which shows the spatial distribution of all the features of the picture input into the image classification model.
In this embodiment, the obtaining the CAM and the CAAM corresponding to the preset class of the picture includes: obtaining the multiple feature maps output for the picture by the last convolutional layer of the image classification model; for each pixel position, performing a weighted summation of the values at that position across all the feature maps, using the weighting coefficients corresponding to the preset class, to obtain the CAM map for the preset class; and, for each pixel position, summing the values at that position across all the feature maps to obtain the CAAM map.
In this embodiment, the calculating the loss term between the CAM map and the CAAM map using a preset algorithm includes: normalizing the CAM map and the CAAM map respectively; and calculating the loss term between the normalized CAM map and CAAM map using a distance metric.
In an exemplary embodiment, the distance metric may include, but is not limited to: any distance metric in pixel space.
In this embodiment, the determining a final loss function using the loss term includes:
performing a weighted summation of the determined loss term and a preset classification cross-entropy loss term to obtain the final loss function.
In an exemplary embodiment, the final loss function determined from the loss term may adopt a weighted fusion strategy; the additional term is not limited to the cross-entropy loss term, and the fusion strategy is not limited to a weighted sum.
In this embodiment, the determining a final loss function using the loss term includes: the final loss function includes: CAM-loss = α·Loss_cam + Loss_cross; where α denotes a preset hyper-parameter for adjusting the weight of the loss term, Loss_cam denotes the loss term between the CAM map and the CAAM map, and Loss_cross denotes the classification cross-entropy loss term.
In this embodiment, the convolutional neural network employs a linear classifier or a cosine classifier; the neural network may adopt a deep structure such as ResNet, DenseNet, or ResNeXt.
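As an illustration of the cosine-classifier option, the following is a minimal PyTorch sketch; the class name, the random weight initialization, and the scale parameter are assumptions for illustration and are not specified by the disclosure. A linear classifier would simply be nn.Linear applied to the pooled features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    """Outputs scaled cosine similarities between the pooled feature
    vector and one learned weight vector per class."""
    def __init__(self, in_features, num_classes, scale=30.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, in_features))
        self.scale = scale  # assumed temperature-like scaling factor

    def forward(self, x):
        # Normalize both features and class weights so their dot
        # product equals the cosine of the angle between them
        x = F.normalize(x, dim=1)
        w = F.normalize(self.weight, dim=1)
        return self.scale * x @ w.t()
```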
In this embodiment, during training of the image classification model the CAM is used to constrain the CAAM, so that the network highlights the feature expression of the preset class, i.e., the target class, and suppresses the feature expression of non-target classes, thereby enhancing intra-class compactness and inter-class separability.
The above embodiment is described below by way of an example.
This example presents a method of training an image classification model. In each iteration of training, the following operations are performed for each picture input into the image classification model:
in this step, the last convolutional layer of the deep neural network is set to output n feature maps (feature maps), fk(x, y) represents the value corresponding to the coordinates (x, y) in the kth feature map, the size of each feature map is fixed as height H and width W, after global average pooling (global average pooling),the characteristic map has a corresponding value ofAfter training and learning, aiming at the target class c, the weight corresponding to the value isThe weight corresponding to the target category is obtained through deep learning network training.
Step 2, calculating a CAM graph and a CAAM graph under the category c;
In this step, the CAM map of the input picture under category c may be denoted CAM_c, and its value at each pixel coordinate can be expressed as: CAM_c(x, y) = Σ_k w_k^c·f_k(x, y).
The CAAM map of the input picture has, at each pixel coordinate, the value CAAM(x, y) = Σ_k f_k(x, y); a schematic diagram of the CAM map and the CAAM map calculated under category c is shown in fig. 2.
Step 3, calculating a loss term between the CAM map and the CAAM map using a preset algorithm;
in this step, in order to utilize CAMcTo constrain CAAM, define LosscamTo measure CAMcThe gap from CAAM. First, CAMcAnd CAAM to CAM'cAnd CAAM'; second, calculating Loss by using distance measurement method of arbitrary pixel spacecamSuch as manhattan distance (L1):
the euclidean distance (L2) equidistance measurement method may be used, and is not particularly limited.
Step 4, calculating the loss function.
In this step, the final loss function CAM-loss is determined as the combination of Loss_cam and the cross-entropy loss term Loss_cross:
CAM-loss = α·Loss_cam + Loss_cross;
where α denotes a preset hyper-parameter for adjusting the weight of the loss term, Loss_cam denotes the loss term between the CAM map and the CAAM map, and Loss_cross denotes the classification cross-entropy loss term; a schematic of this calculation is shown in fig. 2.
Step 5, adjusting the parameters of the image classification model using the determined final loss function, with a parameter-update method such as gradient descent.
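Putting steps 1 to 5 together, one training step might look like the following sketch, reusing the compute_cam_caam and loss_cam_term helpers above; the assumption that the model returns both logits and the last conv layer's feature maps, and the model.fc.weight access, are illustrative interface choices, not the disclosure's prescribed API.

```python
import torch.nn.functional as F

alpha = 1.0  # hyper-parameter weighting Loss_cam (set to 1 in the experiments below)

def training_step(model, optimizer, images, labels):
    # Assumed interface: forward pass returns (logits, feature_maps),
    # with feature_maps of shape (batch, n, H, W)
    logits, feature_maps = model(images)

    # Loss_cam averaged over the pictures in the batch
    total = 0.0
    for i in range(images.size(0)):
        cam, caam = compute_cam_caam(feature_maps[i], model.fc.weight, labels[i])
        total = total + loss_cam_term(cam, caam)

    # CAM-loss = alpha * Loss_cam + Loss_cross
    loss = alpha * total / images.size(0) + F.cross_entropy(logits, labels)

    optimizer.zero_grad()
    loss.backward()   # gradients flow through both loss terms
    optimizer.step()  # e.g., SGD, i.e., gradient descent
    return loss.item()
```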
Steps 1 to 5 are repeated iteratively to train the image classification model and obtain the final image classification model.
In this example, during training of the convolutional-neural-network-based image classification model, each training cycle obtains, for each picture, a heat map corresponding to a specific class (CAM) and a class-agnostic heat map (CAAM). The CAM is used to constrain the CAAM, generating a new loss term that guides the network to express the features of the target class more prominently while suppressing the features of non-target classes. This new loss term is fused with the classification cross-entropy loss term by weighted summation to obtain a new loss function, CAM-loss. Training with this loss function, combined with a conventional convolutional neural network model, yields an efficient deep learning image classification model. In this way, visual spatial information is fused into the loss function, which effectively improves the accuracy of the image classification task while saving computational resources.
The disclosed embodiment also provides an apparatus, as shown in fig. 3, including a memory and a processor; the memory is configured to store a training or image classification program for an image classification model, and the processor is configured to read and execute the training or image classification program for an image classification model, and execute the training method for an image classification model or the image classification method according to any one of the above embodiments.
The embodiment of the present disclosure further provides a storage medium, in which a training or image classification program for an image classification model is stored, where the program is configured to execute the training method of the image classification model according to any one of the above embodiments or the image classification method according to the above embodiments when the program runs.
The embodiment of the present disclosure further provides an image classification method, including:
training the image classification model according to the training method of the image classification model in the embodiment;
and classifying the input pictures by adopting a trained image classification model.
The effect of the above image classification is explained below with an example.
The present example uses an actual image classification model training process as an example to illustrate the effect of the image classification model training method.
Taking the image classification benchmark datasets CIFAR-100 and ImageNet as examples, Softmax-loss serves as the comparison baseline, and CAM-loss replaces Softmax-loss as the method under evaluation. The hyper-parameter α is fixed to 1, and a linear classifier is employed.
(1) On ImageNet, training runs for 120 epochs with a batch size of 1024 and an initial learning rate lr of 0.4 (for ResNeXt-101, due to computational limitations, the batch size is 512 and the initial lr is 0.2), using a cosine learning-rate decay strategy.
(2) On CIFAR-100, training runs for 300 epochs with a batch size of 128 and an initial learning rate lr of 0.4, also using cosine learning-rate decay.
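A sketch of how the ImageNet schedule might be set up in PyTorch; the stand-in model and the momentum value are assumptions, as the text only states the epoch count, batch size, initial learning rate, and the cosine decay.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in for the actual classification network

# Stated settings: 120 epochs, initial lr 0.4; momentum is an assumed value
optimizer = torch.optim.SGD(model.parameters(), lr=0.4, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=120)

for epoch in range(120):
    # ... one epoch of CAM-loss training would run here ...
    scheduler.step()  # cosine learning-rate decay per epoch
```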
The comparative experiment results of the CAM-loss method under different network architectures are shown in Table 1:
In Table 1, the Top-1 error rate is reported, and the results of the CAM-loss method are shown in bold. It can be seen that CAM-loss applies broadly across network structures and clearly improves on the baseline on both CIFAR-100 and ImageNet: specifically, by 0.69% to 1.46% on CIFAR-100 and by 0.51% to 0.70% on ImageNet, a significant gain by the standards of deep network architectures.
FIG. 4 shows the visualization results of the CAM-loss and Softmax-loss methods, from which it can be seen that:
(1) Comparing column (d) with column (b), the CAAM map obtained with CAM-loss is more accurate than that obtained with Softmax-loss: the feature representation of the target class is more prominent, and the feature representation of non-target classes is suppressed to some extent. For example, the correct category of the third image is a pan; the CAAM map from Softmax-loss (b) focuses more on the food in the pan, while the CAAM map from CAM-loss (d) focuses more on the surrounding pan and at the same time suppresses the expression of the food region (position shown by the black box).
(2) Comparing column (e) with column (c), the CAM map obtained with CAM-loss is more accurate than that obtained with Softmax-loss: the feature representation of the target class is more prominent, and the feature representation of non-target classes is suppressed to some extent. For example, the correct category of the second image is a bookshelf; the CAM map from Softmax-loss (c) focuses more on the desk, while the CAM map from CAM-loss (e) pays more attention to the bookshelf region in the upper right corner and at the same time suppresses the expression of the desk region (positions shown by the white boxes).
In this example, the CAM is used to constrain the CAAM so that the network highlights the feature expression of the target class and suppresses that of non-target classes, which helps enhance intra-class compactness and inter-class separability simultaneously. Visual spatial information is thereby fused into the loss function, effectively improving the accuracy of the image classification task.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as is well known to those of ordinary skill in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as known to those skilled in the art.
Claims (10)
1. A training method of an image classification model, wherein the image classification model is realized based on a convolutional neural network, and the training method comprises the following steps:
aiming at each picture input into the image classification model in the training process of the image classification model, the following operations are respectively executed:
acquiring a class activation map CAM corresponding to a preset class of the picture and a thermodynamic map CAAM irrelevant to the preset class;
calculating a loss term between the CAM graph and the CAAM graph by adopting a preset algorithm;
determining a final loss function using the loss term;
adjusting parameters of the image classification model using the determined final loss function.
2. The method for training the image classification model according to claim 1, wherein the obtaining the CAM and CAAM of the preset category corresponding to the picture comprises:
obtaining a plurality of feature maps output for the picture by the last convolutional layer of the image classification model;
for each pixel position, performing a weighted summation of the values at that position across all the feature maps, using the weighting coefficients corresponding to the preset class, to obtain the CAM map for the preset class;
and, for each pixel position, summing the values at that position across all the feature maps to obtain the CAAM map.
3. The method for training an image classification model according to claim 2, wherein the calculating the loss term between the CAM map and the CAAM map by using a predetermined algorithm comprises:
respectively normalizing the CAM graph and the CAAM graph;
and calculating the loss term between the normalized CAM map and CAAM map using a distance metric.
4. The method of claim 3, wherein the distance metric comprises: any distance metric in pixel space.
5. The method for training an image classification model according to claim 4, wherein the determining a final loss function by using the loss term comprises:
and carrying out weighted summation on the determined loss terms and preset classification cross entropy loss terms to obtain a final loss function.
6. The method for training an image classification model according to claim 5, wherein the determining a final loss function by using the loss term comprises:
the final loss function includes: CAM-loss = α·Loss_cam + Loss_cross;
where α denotes a preset hyper-parameter for adjusting the weight of the loss term, Loss_cam denotes the loss term between the CAM map and the CAAM map, and Loss_cross denotes the classification cross-entropy loss term.
7. The method for training an image classification model according to claim 1, wherein the convolutional neural network employs a linear classifier or a cosine classifier.
8. An image classification method, comprising:
training an image classification model according to the method for training an image classification model as claimed in any one of claims 1 to 7;
and classifying the input pictures by adopting a trained image classification model.
9. An apparatus comprising a memory and a processor; wherein the memory is configured to store a training or image classification program for an image classification model, and the processor is configured to read and execute the training or image classification program and to perform the training method for an image classification model according to any one of claims 1 to 7 or the image classification method according to claim 8.
10. A storage medium in which a training or image classification program for an image classification model is stored, the program being arranged to perform, when running, the method of training an image classification model according to any one of claims 1 to 7 or the method of image classification according to claim 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011609917.3A CN112613574B (en) | 2020-12-30 | 2020-12-30 | Training method of image classification model, image classification method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011609917.3A CN112613574B (en) | 2020-12-30 | 2020-12-30 | Training method of image classification model, image classification method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112613574A (en) | 2021-04-06
CN112613574B CN112613574B (en) | 2022-07-19 |
Family
ID=75249447
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011609917.3A Active CN112613574B (en) | 2020-12-30 | 2020-12-30 | Training method of image classification model, image classification method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112613574B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113077466A (en) * | 2021-05-11 | 2021-07-06 | 清华大学深圳国际研究生院 | Medical image classification method and device based on multi-scale perception loss |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019231104A1 (en) * | 2018-05-31 | 2019-12-05 | 주식회사 뷰노 | Method for classifying images by using deep neural network and apparatus using same |
CN111310800A (en) * | 2020-01-20 | 2020-06-19 | 世纪龙信息网络有限责任公司 | Image classification model generation method and device, computer equipment and storage medium |
CN112130200A (en) * | 2020-09-23 | 2020-12-25 | 电子科技大学 | Fault identification method based on grad-CAM attention guidance |
Non-Patent Citations (2)
Title |
---|
CHAOFEI WANG等: "Towards Learning Spatially Discriminative Feature Representations", 《ICCV 2021》 * |
RAMPRASAATH R. SELVARAJU 等: "Grad-CAM:Visual Explanations from Deep Networks via Gradient-based Localization", 《ICCV 2017》 * |
Also Published As
Publication number | Publication date |
---|---|
CN112613574B (en) | 2022-07-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111860494B (en) | Optimization method and device for image target detection, electronic equipment and storage medium | |
CN112800964B (en) | Remote sensing image target detection method and system based on multi-module fusion | |
US11636306B2 (en) | Implementing traditional computer vision algorithms as neural networks | |
JP2010500677A (en) | Image processing method | |
CN110120065B (en) | Target tracking method and system based on hierarchical convolution characteristics and scale self-adaptive kernel correlation filtering | |
US20130343619A1 (en) | Density estimation and/or manifold learning | |
CN110532920A (en) | Smallest number data set face identification method based on FaceNet method | |
CN103996052B (en) | Three-dimensional face gender classification method based on three-dimensional point cloud | |
CN113591763B (en) | Classification recognition method and device for face shapes, storage medium and computer equipment | |
CN107301643A (en) | Well-marked target detection method based on robust rarefaction representation Yu Laplce's regular terms | |
US11821986B1 (en) | Target tracking method, system, device and storage medium | |
CN110796250A (en) | Convolution processing method and system applied to convolutional neural network and related components | |
Zuobin et al. | Feature regrouping for cca-based feature fusion and extraction through normalized cut | |
CN112613574B (en) | Training method of image classification model, image classification method and device | |
CN116091823A (en) | Single-feature anchor-frame-free target detection method based on fast grouping residual error module | |
CN115239760A (en) | Target tracking method, system, equipment and storage medium | |
CN112749576B (en) | Image recognition method and device, computing equipment and computer storage medium | |
CN114565953A (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
Lin et al. | Matching cost filtering for dense stereo correspondence | |
CN111667495A (en) | Image scene analysis method and device | |
CN116597275A (en) | High-speed moving target recognition method based on data enhancement | |
CN115719414A (en) | Target detection and accurate positioning method based on arbitrary quadrilateral regression | |
CN113515661B (en) | Image retrieval method based on filtering depth convolution characteristics | |
CN115731415A (en) | Small sample fine-grained target recognition model and method based on bimodal fusion | |
Dalara et al. | Entity Recognition in Indian Sculpture using CLAHE and machine learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||