CN113283524A - Anti-attack based deep neural network approximate model analysis method

Anti-attack based deep neural network approximate model analysis method

Info

Publication number
CN113283524A
Authority
CN
China
Prior art keywords
sample
neural network
deep neural
layer
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110628619.7A
Other languages
Chinese (zh)
Inventor
蒋雯
李祥
邓鑫洋
耿杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University
Priority to CN202110628619.7A
Publication of CN113283524A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep neural network approximate model analysis method based on adversarial attack, comprising the following steps: constructing the deep neural network to be analyzed; performing an adversarial attack against the deep neural network and acquiring data sample pairs consisting of dataset samples and adversarial samples; inputting the data sample pairs of dataset samples and adversarial samples and acquiring the corresponding feature map pairs of the deep neural network; measuring the change of the activation value of each node in the deep neural network from the obtained feature maps and calculating the contribution degree of each node; and deleting convolution kernels according to the contribution degrees of the nodes in each layer of the deep neural network to obtain an approximate model of the deep neural network. The method can measure the contribution degree of deep neural network nodes and obtain an approximate model whose performance is close to that of the original network.

Description

Anti-attack based deep neural network approximate model analysis method
Technical Field
The invention belongs to the field of interpretability research on deep neural networks, and particularly relates to a deep neural network approximate model analysis method based on adversarial attack.
Background
With the continuous improvement of computing power and the advent of the big data era, deep learning has developed rapidly in recent years and has been successfully applied in fields such as natural language processing, multimedia, computer vision, speech, and cross-media analysis.
However, deep learning networks often require a large amount of labeled data for model optimization and exhibit black-box characteristics. Deep learning models lack transparency, interpretability, and credibility, and cannot provide users with trustworthy results in safety-critical fields such as intelligent decision making and autonomous driving, so trustworthy deep learning techniques are urgently needed.
Researchers have made some progress on the transparency, understandability, and interpretability of deep learning; different researchers approach these problems from different angles, assign different meanings to interpretability, and emphasize different aspects in the interpretation methods they propose. Many scientific problems in this field nonetheless remain unsolved, one of which is that the large number of nodes in a deep neural network makes the network difficult to analyze.
Disclosure of Invention
The technical problem to be solved by the invention is to provide, in view of the deficiencies of the prior art, a deep neural network approximate model analysis method based on adversarial attack that addresses the difficulty of analyzing networks with large numbers of nodes, enhances the interpretability of deep neural networks, helps users understand deep neural network models, and improves the credibility of deep learning algorithms.
In order to solve the above technical problem, the invention adopts the following technical solution: a deep neural network approximate model analysis method based on adversarial attack, characterized by comprising the following steps:
Step one, constructing the deep neural network to be analyzed:
Step 101, constructing the model architecture of the deep neural network to be analyzed;
Step 102, training the constructed deep neural network model with a chosen dataset to obtain the trained network weights;
Step two, performing an adversarial attack against the deep neural network and acquiring data sample pairs of dataset samples and adversarial samples:
Step 201, sequentially inputting the dataset samples into the deep neural network to be analyzed, and obtaining the gradient of each input sample by back-propagating the loss value;
Step 202, adding the gradient to the input sample to obtain an intermediate sample, and limiting the pixel values of the intermediate sample to between 0 and 1;
Step 203, inputting the intermediate sample into the network for classification; if the classification result is still correct, repeating steps 201 and 202 until the sample is misclassified; the misclassified sample is the adversarial sample;
Step 204, forming a data sample pair from the dataset sample and its corresponding adversarial sample; a code sketch of steps 201 to 204 follows;
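For concreteness, the following is a minimal PyTorch sketch of steps 201 to 204, assuming a trained classifier `model`, a single input sample `x` (a tensor with pixel values in [0, 1] and a batch dimension of 1), and its integer label tensor `label`; the iteration cap `max_iters` is an added safeguard, not part of the method as described.

```python
import torch
import torch.nn.functional as F

def make_data_sample_pair(model, x, label, max_iters=100):
    """Steps 201-204: iteratively add the input gradient to the sample
    until the network misclassifies it, then return (sample, adversarial)."""
    model.eval()
    x_adv = x.clone()
    for _ in range(max_iters):  # cap is an assumption; the patent loops until misclassification
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), label)  # step 201: loss on the current sample
        loss.backward()                              # back-propagate to get the input gradient
        with torch.no_grad():
            # step 202: add the gradient and limit pixel values to [0, 1]
            x_adv = (x_adv + x_adv.grad).clamp(0.0, 1.0)
            if model(x_adv).argmax(dim=1).item() != label.item():
                break  # step 203: classification is now wrong, so this is the adversarial sample
    return x, x_adv.detach()  # step 204: the data sample pair
```

Running this over every dataset sample yields the data sample pairs used in step three.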
Step three, inputting the data sample pairs of dataset samples and adversarial samples and acquiring the feature map pairs of the deep neural network:
Step 301, sequentially inputting the dataset sample and the adversarial sample of each data sample pair into the deep neural network to be analyzed, obtaining the feature map corresponding to the sample at each intermediate hidden layer of the network to be analyzed;
Step 302, forming feature map sample pairs from the feature maps obtained on the network to be analyzed for the dataset samples and the adversarial samples; if there are $m$ groups of data sample pairs and the network to be analyzed has $k$ convolutional layers, the feature map sample pairs $(X_r^{i,j}, X_a^{i,j})$, $j = 1,2,\dots,k$, $i = 1,2,\dots,m$, are obtained, where $(X_r, X_a)$ denote the dataset sample feature map and the adversarial sample feature map, respectively;
Step four, measuring the change of the activation value of each node in the deep neural network with the obtained feature map sample pairs, and then calculating the contribution degree of each node:
Step 401, taking the maximum value of each feature map in the sample pairs corresponding to each convolution kernel of the deep neural network, obtaining the activation value sample pairs of the convolution kernels $(A_r^{i,j}, A_a^{i,j})$, where $j = 1,2,\dots,k$ is the index of the convolutional layer in the deep neural network, $i = 1,2,\dots,m$ is the index of the data sample pair, and $(A_r, A_a)$ denote the activation value of the dataset sample and the activation value of the adversarial sample, respectively; $A_r$ and $A_a$ are both one-dimensional vectors of length $c_j$, where $c_j$ is the number of convolution kernels in the $j$-th convolutional layer of the deep neural network;
Step 402, averaging all activation value sample pairs to obtain the average activation value sample pair $(\bar{A}_r^j, \bar{A}_a^j)$:
$\bar{A}_r^j = \frac{1}{m}\sum_{i=1}^{m} A_r^{i,j}$, $\bar{A}_a^j = \frac{1}{m}\sum_{i=1}^{m} A_a^{i,j}$,
where $m$ is the total number of dataset samples, $j = 1,2,\dots,k$ is the index of the convolutional layer in the deep neural network, and $i = 1,2,\dots,m$ is the index of the data sample pair;
Step 403, using the average activation value sample pair $(\bar{A}_r^j, \bar{A}_a^j)$ to measure the importance degree $R$ of each node in the deep neural network:
$R_j = \left| \bar{A}_a^j - \bar{A}_r^j \right|$,
where $j = 1,2,\dots,k$ is the index of the convolutional layer in the deep neural network and $R_j$ is a one-dimensional vector of length $c_j$ representing the importance degrees of the $c_j$ nodes in the $j$-th convolutional layer;
Step 404, normalizing the importance degrees of the nodes in each layer of the deep neural network to obtain the contribution degrees of the nodes in each layer, the contribution degree $C_j$ of the $j$-th layer being
$C_j = \dfrac{R_j}{\sum_{n=1}^{c_j} R_j(n)}$,
where $j = 1,2,\dots,k$ is the index of the convolutional layer in the deep neural network; a code sketch of steps 401 to 404 follows;
Step five, deleting convolution kernels according to the contribution degrees of the nodes in each layer of the deep neural network to obtain an approximate model of the deep neural network:
Step 501, extracting the weights and bias terms $w_j$, $b_j$, $j = 1,2,\dots,k$, of the convolutional layers of the deep neural network from the network; the dimension of $w_j$ is $(c_j, c_{j-1}, \text{kernel\_size}_{w,j}, \text{kernel\_size}_{h,j})$ and the dimension of $b_j$ is $(c_j,)$; when $j = 1$, $c_{j-1}$ is the number of input sample channels;
Step 502, setting the threshold $t_j$, where $t_j$ is the maximum allowed drop in network performance after deleting nodes in the $j$-th layer;
Step 503, selecting the convolution kernels of the last layer (the $k$-th layer) for node deletion;
Step 504, deleting the convolution kernel with the lowest contribution degree in the selected convolutional layer;
Step 505, inputting the dataset into the network with the node deleted and computing the network's performance; if the performance drop is below the set threshold $t_j$, repeating step 504; if the performance drop exceeds the set threshold $t_j$, restoring the last deleted node;
Step 506, once the threshold requirement is met, letting $n_j$ be the number of convolution kernels deleted in the current layer; the dimension of the weight $w_j'$ after kernel deletion is $(c_j - n_j, c_{j-1}, \text{kernel\_size}_{w,j}, \text{kernel\_size}_{h,j})$ and the dimension of the bias term $b_j'$ after kernel deletion is $(c_j - n_j,)$;
Step 507, selecting the convolution kernels of the previous layer for node deletion and repeating steps 504, 505, and 506 until node deletion has been performed on every layer of the network, thereby obtaining the approximate model of the deep neural network; a code sketch of the per-layer deletion loop follows.
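The per-layer deletion loop of steps 503 to 506 can be sketched as follows, masking kernels to zero instead of physically reshaping $w_j$ and $b_j$ (a simplification of step 506); `evaluate` is an assumed callback that returns the network's performance on the dataset, and the layer is assumed to be a `torch.nn.Conv2d` with a bias term.

```python
import torch

def delete_layer_nodes(conv, contrib, evaluate, t_j):
    """Steps 503-506 for one Conv2d layer: repeatedly delete the kernel
    with the lowest remaining contribution until the performance drop
    exceeds t_j, then restore the last deleted kernel."""
    baseline = evaluate()                        # performance before deletion
    deleted = []
    for idx in torch.argsort(contrib).tolist():  # lowest contribution first (step 504)
        saved_w = conv.weight.data[idx].clone()
        saved_b = conv.bias.data[idx].clone()
        conv.weight.data[idx].zero_()            # mask the kernel's weights
        conv.bias.data[idx] = 0.0                # and its bias term
        if baseline - evaluate() > t_j:          # step 505: drop exceeds the threshold
            conv.weight.data[idx] = saved_w      # restore the last deleted node
            conv.bias.data[idx] = saved_b
            break
        deleted.append(idx)
    return deleted                               # the n_j kernels deleted in this layer
```

Applying this routine from the last convolutional layer backwards, per step 507, yields the approximate model.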
Compared with the prior art, the invention has the following advantages:
first, the invention provides a deep neural network approximate model analysis method based on adversarial attack that measures the contribution degree of nodes in the deep neural network by computing the change in the network's activation values between adversarial samples and the original samples, solving the problem that the large number of deep neural network nodes makes the network difficult to analyze;
second, deleting the nodes with low contribution degree from the deep neural network yields a network approximate model with little impact on network performance.
the technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The method of the present invention will be described in further detail below with reference to the accompanying drawings and embodiments of the invention.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, components, and/or combinations thereof, unless the context clearly indicates otherwise.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein.
As shown in FIG. 1, taking the public small-scale dataset Cifar10 and the VGG16 network for general object recognition as an example, the invention discloses a deep neural network approximate model analysis method based on adversarial attack, comprising the following specific steps:
step one, constructing a deep neural network to be analyzed:
Step 101, constructing the VGG16 network model, wherein the VGG network architecture consists of an input layer, convolutional layers, and fully connected layers, and the VGG16 network contains 13 convolutional layers and 3 fully connected layers;
Step 102, loading weights pre-trained on the Cifar10 dataset into the constructed VGG16 network model; a setup sketch follows;
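A minimal setup sketch for steps 101 and 102, using torchvision's standard VGG16 (13 convolutional and 3 fully connected layers, matching the architecture described above); the weight file name is a hypothetical placeholder, since no file is named in the source.

```python
import torch
from torchvision.models import vgg16

# Step 101: standard VGG16 architecture with a 10-class head for Cifar10
model = vgg16(num_classes=10)

# Step 102: load weights pre-trained on Cifar10
# ("vgg16_cifar10.pth" is a hypothetical path)
state = torch.load("vgg16_cifar10.pth", map_location="cpu")
model.load_state_dict(state)
model.eval()
```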
Step two, performing an adversarial attack against the deep neural network and acquiring data sample pairs of dataset samples and adversarial samples:
Step 201, sequentially inputting Cifar10 dataset samples into the deep neural network to be analyzed, the loss function being the cross-entropy classification loss, where class denotes the sample class label; the Cifar10 dataset contains 10 classes, so class = 0, 1, ..., 9; the gradient grad of the input sample is obtained by back-propagating the loss value, and the dimensions of grad are consistent with those of the input sample;
Step 202, adding the gradient grad to the Cifar10 dataset sample to obtain an intermediate sample, and limiting the pixel values of the intermediate sample to between 0 and 1;
Step 203, inputting the intermediate sample into the network for classification; if the classification result is still correct, repeating steps 201 and 202 until the sample is misclassified; the misclassified sample is the adversarial sample;
Step 204, forming a data sample pair from each Cifar10 dataset sample and its corresponding adversarial sample; the Cifar10 training set contains 50000 pictures, so 50000 groups of data sample pairs are finally obtained;
Step three, inputting the data sample pairs of dataset samples and adversarial samples and acquiring the feature map pairs of the deep neural network:
Step 301, sequentially inputting the Cifar10 dataset sample and the adversarial sample of each data sample pair into the deep neural network to be analyzed, obtaining the feature map corresponding to the sample at each intermediate hidden layer of the network to be analyzed; the VGG16 network has 13 convolutional layers in total, giving the feature maps $X_r^{i,j}$ of the dataset samples and the feature maps $X_a^{i,j}$ of the adversarial samples, where $i = 1,2,\dots,50000$ denotes the data sample pair index and $(X_r, X_a)$ denote the dataset sample feature map and the adversarial sample feature map, respectively;
Step 302, forming the feature map sample pairs $(X_r^{i,j}, X_a^{i,j})$, $j = 1,2,\dots,13$, $i = 1,2,\dots,50000$, from the feature maps obtained on the network to be analyzed for the Cifar10 dataset samples and the adversarial samples of the 50000 groups of data sample pairs;
Step four, measuring the change of the activation value of each node in the deep neural network with the obtained feature maps, and then calculating the contribution degree of each node:
Step 401, taking the maximum value of each feature map in the sample pairs corresponding to each convolution kernel of the deep neural network, obtaining the activation value sample pairs of the convolution kernels; for the $j$-th layer feature map sample pair $(X_r^{i,j}, X_a^{i,j})$ of the deep neural network, the dimensions of the dataset sample feature map and of the adversarial sample feature map are both $c_j \times h_j \times w_j$; taking the maximum over the $h_j \times w_j$ spatial positions of each of the $c_j$ channels yields the activation value sample pair $(A_r^{i,j}, A_a^{i,j})$, where the activation value of the dataset sample and the activation value of the adversarial sample are both one-dimensional vectors of length $c_j$;
Step 402, averaging all activation value sample pairs to obtain the average activation value sample pair $(\bar{A}_r^j, \bar{A}_a^j)$:
$\bar{A}_r^j = \frac{1}{m}\sum_{i=1}^{m} A_r^{i,j}$, $\bar{A}_a^j = \frac{1}{m}\sum_{i=1}^{m} A_a^{i,j}$,
where $m = 50000$ is the total number of dataset samples, $j = 1,2,\dots,13$ is the index of the convolutional layer in the deep neural network, and $i = 1,2,\dots,50000$ is the index of the data sample pair;
Step 403, using the average activation value sample pair $(\bar{A}_r^j, \bar{A}_a^j)$ to measure the importance degree $R$ of each node in the deep neural network:
$R_j = \left| \bar{A}_a^j - \bar{A}_r^j \right|$,
where $j = 1,2,\dots,13$ is the index of the convolutional layer in the deep neural network and $R_j$ is a one-dimensional vector of length $c_j$ representing the importance degrees of the $c_j$ nodes in the $j$-th convolutional layer;
Step 404, normalizing the importance degrees of the nodes in each layer of the deep neural network to obtain the contribution degrees of the nodes in each layer, the contribution degree of the $j$-th layer being
$C_j = \dfrac{R_j}{\sum_{n=1}^{c_j} R_j(n)}$,
where $j = 1,2,\dots,13$ is the index of the convolutional layer in the deep neural network;
Step five, deleting convolution kernels according to the contribution degrees of the nodes in each layer of the deep neural network to obtain an approximate model of the deep neural network:
Step 501, extracting the weights and bias terms $w_j$, $b_j$, $j = 1,2,\dots,13$, of the convolutional layers of the deep neural network from the network; the dimension of $w_j$ is $(c_j, c_{j-1}, \text{kernel\_size}_{w,j}, \text{kernel\_size}_{h,j})$ and the dimension of $b_j$ is $(c_j,)$; when $j = 1$, $c_{j-1}$ is the number of input sample channels;
Step 502, setting the threshold $t_j = 0.3\%$, $j = 1,2,\dots,13$, where $t_j$ is the maximum allowed drop in network performance after deleting nodes in the $j$-th layer;
Step 503, selecting the convolution kernels of the last layer (the 13th layer) for node deletion;
Step 504, deleting the convolution kernel with the lowest contribution degree in the corresponding convolutional layer, together with the weight and bias term corresponding to that kernel;
Step 505, inputting the dataset into the network with the node deleted and computing the network's performance; if the performance drop is below the set threshold $t_j$, repeating step 504; if the performance drop exceeds the set threshold $t_j$, restoring the last deleted node;
Step 506, once the threshold requirement is met, letting $n_j$ be the number of convolution kernels deleted in the current layer; the dimension of the weight $w_j'$ after kernel deletion is $(c_j - n_j, c_{j-1}, \text{kernel\_size}_{w,j}, \text{kernel\_size}_{h,j})$ and the dimension of the bias term $b_j'$ after kernel deletion is $(c_j - n_j,)$;
Step 507, selecting the convolution kernels of the previous layer for node deletion and repeating steps 504, 505, and 506 until node deletion has been performed on every layer of the network, thereby obtaining the approximate model of the deep neural network.
The above embodiment is only an example of the present invention and is not intended to limit it; all simple modifications, changes, and equivalent structural changes made to the above embodiment according to the technical essence of the present invention still fall within the protection scope of the technical solution of the present invention.

Claims (1)

1. A deep neural network approximate model analysis method based on adversarial attack, characterized by comprising the following steps:
Step one, constructing the deep neural network to be analyzed:
Step 101, constructing the model architecture of the deep neural network to be analyzed;
Step 102, training the constructed deep neural network model with a chosen dataset to obtain the trained network weights;
Step two, performing an adversarial attack against the deep neural network and acquiring data sample pairs of dataset samples and adversarial samples:
Step 201, sequentially inputting the dataset samples into the deep neural network to be analyzed, and obtaining the gradient of each input sample by back-propagating the loss value;
Step 202, adding the gradient to the input sample to obtain an intermediate sample, and limiting the pixel values of the intermediate sample to between 0 and 1;
Step 203, inputting the intermediate sample into the network for classification; if the classification result is still correct, repeating steps 201 and 202 until the sample is misclassified; the misclassified sample is the adversarial sample;
Step 204, forming a data sample pair from the dataset sample and its corresponding adversarial sample;
Step three, inputting the data sample pairs of dataset samples and adversarial samples and acquiring the feature map pairs of the deep neural network:
Step 301, sequentially inputting the dataset sample and the adversarial sample of each data sample pair into the deep neural network to be analyzed, obtaining the feature map corresponding to the sample at each intermediate hidden layer of the network to be analyzed;
Step 302, forming feature map sample pairs from the feature maps obtained on the network to be analyzed for the dataset samples and the adversarial samples; if there are $m$ groups of data sample pairs and the network to be analyzed has $k$ convolutional layers, the feature map sample pairs $(X_r^{i,j}, X_a^{i,j})$, $j = 1,2,\dots,k$, $i = 1,2,\dots,m$, are obtained, where $(X_r, X_a)$ denote the dataset sample feature map and the adversarial sample feature map, respectively;
Step four, measuring the change of the activation value of each node in the deep neural network with the obtained feature map sample pairs, and then calculating the contribution degree of each node:
Step 401, taking the maximum value of each feature map in the sample pairs corresponding to each convolution kernel of the deep neural network, obtaining the activation value sample pairs of the convolution kernels $(A_r^{i,j}, A_a^{i,j})$, where $j = 1,2,\dots,k$ is the index of the convolutional layer in the deep neural network, $i = 1,2,\dots,m$ is the index of the data sample pair, and $(A_r, A_a)$ denote the activation value of the dataset sample and the activation value of the adversarial sample, respectively; $A_r$ and $A_a$ are both one-dimensional vectors of length $c_j$, where $c_j$ is the number of convolution kernels in the $j$-th convolutional layer of the deep neural network;
Step 402, averaging all activation value sample pairs to obtain the average activation value sample pair $(\bar{A}_r^j, \bar{A}_a^j)$:
$\bar{A}_r^j = \frac{1}{m}\sum_{i=1}^{m} A_r^{i,j}$, $\bar{A}_a^j = \frac{1}{m}\sum_{i=1}^{m} A_a^{i,j}$,
where $m$ is the total number of dataset samples, $j = 1,2,\dots,k$ is the index of the convolutional layer in the deep neural network, and $i = 1,2,\dots,m$ is the index of the data sample pair;
Step 403, using the average activation value sample pair $(\bar{A}_r^j, \bar{A}_a^j)$ to measure the importance degree $R$ of each node in the deep neural network:
$R_j = \left| \bar{A}_a^j - \bar{A}_r^j \right|$,
where $j = 1,2,\dots,k$ is the index of the convolutional layer in the deep neural network and $R_j$ is a one-dimensional vector of length $c_j$ representing the importance degrees of the $c_j$ nodes in the $j$-th convolutional layer;
Step 404, normalizing the importance degrees of the nodes in each layer of the deep neural network to obtain the contribution degrees of the nodes in each layer, the contribution degree $C_j$ of the $j$-th layer being
$C_j = \dfrac{R_j}{\sum_{n=1}^{c_j} R_j(n)}$,
where $j = 1,2,\dots,k$ is the index of the convolutional layer in the deep neural network;
Step five, deleting convolution kernels according to the contribution degrees of the nodes in each layer of the deep neural network to obtain an approximate model of the deep neural network:
Step 501, extracting the weights and bias terms $w_j$, $b_j$, $j = 1,2,\dots,k$, of the convolutional layers of the deep neural network from the network; the dimension of $w_j$ is $(c_j, c_{j-1}, \text{kernel\_size}_{w,j}, \text{kernel\_size}_{h,j})$ and the dimension of $b_j$ is $(c_j,)$; when $j = 1$, $c_{j-1}$ is the number of input sample channels;
Step 502, setting the threshold $t_j$, where $t_j$ is the maximum allowed drop in network performance after deleting nodes in the $j$-th layer;
Step 503, selecting the convolution kernels of the last layer (the $k$-th layer) for node deletion;
Step 504, deleting the convolution kernel with the lowest contribution degree in the selected convolutional layer;
Step 505, inputting the dataset into the network with the node deleted and computing the network's performance; if the performance drop is below the set threshold $t_j$, repeating step 504; if the performance drop exceeds the set threshold $t_j$, restoring the last deleted node;
Step 506, once the threshold requirement is met, letting $n_j$ be the number of convolution kernels deleted in the current layer; the dimension of the weight $w_j'$ after kernel deletion is $(c_j - n_j, c_{j-1}, \text{kernel\_size}_{w,j}, \text{kernel\_size}_{h,j})$ and the dimension of the bias term $b_j'$ after kernel deletion is $(c_j - n_j,)$;
Step 507, selecting the convolution kernels of the previous layer for node deletion and repeating steps 504, 505, and 506 until node deletion has been performed on every layer of the network, thereby obtaining the approximate model of the deep neural network.
CN202110628619.7A 2021-06-01 2021-06-01 Anti-attack based deep neural network approximate model analysis method Pending CN113283524A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110628619.7A CN113283524A (en) 2021-06-01 2021-06-01 Anti-attack based deep neural network approximate model analysis method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110628619.7A CN113283524A (en) 2021-06-01 2021-06-01 Anti-attack based deep neural network approximate model analysis method

Publications (1)

Publication Number Publication Date
CN113283524A true CN113283524A (en) 2021-08-20

Family

ID=77283586

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110628619.7A Pending CN113283524A (en) 2021-06-01 2021-06-01 Anti-attack based deep neural network approximate model analysis method

Country Status (1)

Country Link
CN (1) CN113283524A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113837244A (en) * 2021-09-02 2021-12-24 Harbin Institute of Technology Confrontation sample detection method and device based on multilayer significance characteristics
CN113676491A (en) * 2021-09-17 2021-11-19 Northwestern Polytechnical University Network topology confusion method based on common neighbor number and graph convolution neural network
US11775806B2 (en) 2022-02-10 2023-10-03 Nota, Inc. Method of compressing neural network model and electronic apparatus for performing the same
JP7316566B1 (en) 2022-05-11 2023-07-28 Nota, Inc. Neural network model weight reduction method and electronic device for performing the same
JP2023168261A (en) * 2022-05-11 2023-11-24 Nota, Inc. Method for compressing neural network model and electronic apparatus for performing the same

Similar Documents

Publication Publication Date Title
CN108764292B (en) Deep learning image target mapping and positioning method based on weak supervision information
CN108170736B (en) Document rapid scanning qualitative method based on cyclic attention mechanism
CN110084296B (en) Graph representation learning framework based on specific semantics and multi-label classification method thereof
CN109934261B (en) Knowledge-driven parameter propagation model and few-sample learning method thereof
CN113283524A (en) Anti-attack based deep neural network approximate model analysis method
US20190228268A1 (en) Method and system for cell image segmentation using multi-stage convolutional neural networks
CN107220506A (en) Breast cancer risk assessment analysis system based on deep convolutional neural network
CN107562784A (en) Short text classification method based on ResLCNN models
CN110941734B (en) Depth unsupervised image retrieval method based on sparse graph structure
CN107945210B (en) Target tracking method based on deep learning and environment self-adaption
JP6738769B2 (en) Sentence pair classification device, sentence pair classification learning device, method, and program
CN106339753A (en) Method for effectively enhancing robustness of convolutional neural network
CN111400494B (en) Emotion analysis method based on GCN-Attention
CN112231477A (en) Text classification method based on improved capsule network
CN107563430A (en) A kind of convolutional neural networks algorithm optimization method based on sparse autocoder and gray scale correlation fractal dimension
CN114511710A (en) Image target detection method based on convolutional neural network
CN113627550A (en) Image-text emotion analysis method based on multi-mode fusion
CN113283519A (en) Deep neural network approximate model analysis method based on discrete coefficients
CN114511785A (en) Remote sensing image cloud detection method and system based on bottleneck attention module
CN114329474A (en) Malicious software detection method integrating machine learning and deep learning
CN105809200A (en) Biologically-inspired image meaning information autonomous extraction method and device
CN110288002B (en) Image classification method based on sparse orthogonal neural network
CN104573728A (en) Texture classification method based on extreme learning machine
CN113449751B (en) Object-attribute combined image identification method based on symmetry and group theory
CN115131646A (en) Deep network model compression method based on discrete coefficient

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20210820)