CN113283519A - Deep neural network approximate model analysis method based on discrete coefficients - Google Patents

Deep neural network approximate model analysis method based on discrete coefficients

Info

Publication number
CN113283519A
Authority
CN
China
Prior art keywords
neural network
deep neural
layer
data set
category
Prior art date
Legal status
Pending
Application number
CN202110617865.2A
Other languages
Chinese (zh)
Inventor
蒋雯
李祥
邓鑫洋
耿杰
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
2021-06-01
Filing date
2021-06-01
Publication date
2021-08-20
Application filed by Northwestern Polytechnical University


Classifications

    • G06F18/214: Pattern recognition; analysing; design or setup of recognition systems or techniques; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N3/045: Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology; combinations of networks
    • G06N3/08: Computing arrangements based on biological models; neural networks; learning methods


Abstract

The invention discloses a deep neural network approximate model analysis method based on discrete coefficients, comprising the following steps: constructing the deep neural network to be analyzed; inputting data set samples by category to obtain the network's feature maps for each category; calculating the contribution degree of every node from the variation of the node activation values over the per-category feature maps; and deleting convolution kernels according to the contribution degrees of the nodes in each layer to obtain an approximate model of the deep neural network. The method can measure the contribution degree of deep neural network nodes and produces an approximate model whose performance is close to that of the original network.

Description

Deep neural network approximate model analysis method based on discrete coefficients
Technical Field
The invention belongs to the field of deep neural network interpretability research, and particularly relates to a deep neural network approximate model analysis method based on discrete coefficients.
Background
With the rapid growth of computing performance in recent years, deep learning has become the key technology leading the current wave of artificial intelligence and has attracted wide discussion and attention across society. Deep learning algorithms have achieved remarkable success in fields such as computer vision, natural language processing, and audio recognition.
However, although deep learning techniques have produced many good results, some limitations and disadvantages remain to be overcome, and among them the lack of interpretability is the most important shortcoming of current deep learning technology. At present, a deep neural network model is a black box to its user: an input is given to the model and a decision result is obtained after the network's computation, but the decision process and the decision basis inside the model cannot be known, so whether the decision result is reliable cannot be judged, and people lack a clear understanding of the intermediate process by which the model operates.
As the performance of deep learning technology keeps improving and its application scenarios broaden, how to understand a deep neural network model and its decision-making process has become a problem urgently awaiting solution in many application fields. It is therefore of great significance to study how to analyze the decision process of a deep neural network and to understand how signals propagate through the network.
Disclosure of Invention
The technical problem to be solved by the invention is to provide, in view of the defects of the prior art, a deep neural network approximate model analysis method based on discrete coefficients, so as to solve the problem that the network is difficult to analyze owing to the large number of deep neural network nodes, enhance the interpretability of the deep neural network, help in understanding the deep neural network model, and improve the credibility of deep learning algorithms.
In order to solve the technical problems, the invention adopts the technical scheme that: a deep neural network approximate model analysis method based on discrete coefficients is characterized by comprising the following steps:
1. a deep neural network approximate model analysis method based on discrete coefficients is characterized by comprising the following steps:
step one, constructing a deep neural network to be analyzed:
step 101, constructing the model architecture of the deep neural network to be analyzed;
step 102, training the constructed deep neural network model on a chosen data set to obtain the trained network weights;
step two, inputting data set samples by category to obtain the deep neural network's feature maps for each category:
step 201, classifying the data set according to its labels; if the data set comprises $n$ categories, a data set sample $D_i$ is obtained for each category, where $i = 1, 2, \dots, n$ is the sample category serial number;
step 202, inputting each category's data set sample $D_i$ into the deep neural network to be analyzed in turn to obtain the feature maps of each category's samples at every intermediate hidden layer of the network; if each category's data set sample contains $m_i$ samples and the network to be analyzed has $k$ convolutional layers, the obtained feature maps of the $i$-th category can be expressed as

$$F_i = \left\{ f_{i,j}^{[d]} \;\middle|\; j = 1, 2, \dots, m_i;\ d = 1, 2, \dots, k \right\}$$

where $j = 1, 2, \dots, m_i$ is the data set sample serial number;
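The feature-map collection of step 202 can be sketched as follows; this is a minimal illustration assuming a PyTorch implementation (the invention does not name a framework), and collect_class_feature_maps is a hypothetical helper name, not one used by the invention:

```python
import torch
from collections import defaultdict

def collect_class_feature_maps(model, dataset, device="cpu"):
    """Group the feature maps of every convolutional layer by class label."""
    feature_maps = []  # filled by the forward hooks, one entry per conv layer
    hooks = [m.register_forward_hook(
                 lambda _m, _in, out: feature_maps.append(out.detach()))
             for m in model.modules() if isinstance(m, torch.nn.Conv2d)]

    per_class = defaultdict(list)  # category serial number i -> list of per-layer feature maps
    model.eval().to(device)
    with torch.no_grad():
        for x, y in dataset:  # one sample at a time, for clarity only
            feature_maps.clear()
            model(x.unsqueeze(0).to(device))
            per_class[int(y)].append([f.squeeze(0) for f in feature_maps])

    for h in hooks:
        h.remove()
    return per_class  # per_class[i][j][d] has shape (c_d, h_d, w_d)
```

Storing every feature map of every sample is memory-hungry; a practical run would reduce each map to its activation value (step 301 below) inside the loop instead of keeping the full maps.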
step three, calculating the contribution degree of every node from the variation of the node activation values in the deep neural network over the per-category feature maps:
step 301, taking the maximum value of the feature map corresponding to each convolution kernel in the deep neural network to obtain the activation value of the convolution kernel:

$$a_{i,j}^{[d]} = \max_{h,w} f_{i,j}^{[d]}$$

where $d = 1, 2, \dots, k$ is the serial number of the convolutional layer in the deep neural network, $j = 1, 2, \dots, m_i$ is the data set sample serial number, $i = 1, 2, \dots, n$ is the sample category serial number, and $a^{[d]}$ is a one-dimensional vector of length $c_d$, with $c_d$ denoting the number of convolution kernels in the $d$-th convolutional layer of the deep neural network;
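Under the same PyTorch assumption, step 301 reduces to a channel-wise spatial maximum:

```python
def activation_vector(feature_map):
    """Step 301: spatial max per kernel. feature_map has shape (c_d, h_d, w_d)."""
    return feature_map.amax(dim=(1, 2))  # a^[d]: one-dimensional vector of length c_d
```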
step 302, averaging all the activation value samples of the same category to obtain the average activation value sample of each category:

$$\bar{a}_i^{[d]} = \frac{1}{m_i} \sum_{j=1}^{m_i} a_{i,j}^{[d]}$$

where $m_i$ is the total number of samples in the $i$-th category's data set, $d = 1, 2, \dots, k$ is the serial number of the convolutional layer in the deep neural network, and $i = 1, 2, \dots, n$ is the sample category serial number;
step 303, using the average activation value sample $\bar{a}_i^{[d]}$ of each category to calculate the discrete coefficient $V_s$ of each node in the deep neural network:

$$\mu^{[d]} = \frac{1}{n} \sum_{i=1}^{n} \bar{a}_i^{[d]}, \qquad \sigma^{[d]} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( \bar{a}_i^{[d]} - \mu^{[d]} \right)^2}, \qquad V_s^{[d]} = \frac{\sigma^{[d]}}{\mu^{[d]}}$$

where $\mu^{[d]}$ denotes the mean of the average activation values of each category's samples on the $d$-th convolutional layer of the deep neural network, and $\sigma^{[d]}$ denotes the standard deviation of the average activation values of each category's samples on the $d$-th convolutional layer of the deep neural network;
step 304, using the discrete coefficient $V_s$ of each node to measure the importance degree $R$ of each node in the deep neural network:

$$R^{[d]} = V_s^{[d]}$$

where $d = 1, 2, \dots, k$ is the serial number of the convolutional layer in the deep neural network, and $R^{[d]}$ is a one-dimensional vector of length $c_d$ representing the importance degrees of the $c_d$ nodes in the $d$-th convolutional layer;
step 305, normalizing the importance degrees of the nodes in each layer of the deep neural network to obtain the contribution degree of each layer's nodes, the contribution degree of the $d$-th layer's nodes being $C^{[d]}$:

$$C^{[d]} = \frac{R^{[d]}}{\sum_{p=1}^{c_d} R_p^{[d]}}$$

where $d = 1, 2, \dots, k$ is the serial number of the convolutional layer in the deep neural network, and $C^{[d]}$ is a one-dimensional vector of length $c_d$ representing the contribution degree of each node in the $d$-th convolutional layer;
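Steps 302 through 305 condense into a few tensor operations; the sketch below assumes PyTorch, takes the per-category activation vectors of one layer as input, and adds a small epsilon (our addition, not part of the invention) to guard against division by zero in the coefficient of variation:

```python
import torch

def layer_contribution(acts_per_class):
    """acts_per_class: list of n tensors, each of shape (m_i, c_d)."""
    a_bar = torch.stack([a.mean(dim=0) for a in acts_per_class])  # (n, c_d), step 302
    mu = a_bar.mean(dim=0)                    # mu^[d], length c_d
    sigma = a_bar.std(dim=0, unbiased=False)  # sigma^[d], population standard deviation
    v_s = sigma / (mu + 1e-12)                # discrete coefficient, step 303
    r = v_s                                   # importance R^[d] = V_s^[d], step 304
    return r / r.sum()                        # contribution C^[d], step 305
```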
step four, deleting convolution kernels according to the contribution degrees of each layer's nodes in the deep neural network to obtain an approximate model of the deep neural network:
step 401, extracting the weights and bias terms $w_d$, $b_d$, $d = 1, 2, \dots, k$, of the deep neural network's convolutional layers from the network, where $w_d$ has dimension $(c_d, c_{d-1}, \mathrm{kernel\_size}_{w,d}, \mathrm{kernel\_size}_{h,d})$ and $b_d$ has dimension $(c_d,)$; when $d = 1$, $c_{d-1}$ is the number of input sample channels;
step 402, setting a threshold $t_d$, where $t_d$ denotes the maximum allowed drop in network performance after deleting nodes of the $d$-th layer;
step 403, selecting the convolution kernels of the last layer (the $k$-th layer) for node deletion;
step 404, deleting the convolution kernel with the lowest contribution degree in the corresponding convolutional layer;
step 405, inputting the data set into the network after node deletion and computing the network's performance; if the performance drop is less than the set threshold $t_d$, repeating step 404; if the performance drop is greater than the set threshold $t_d$, restoring the last deleted node;
step 406, once the threshold requirement is met, letting $n_d$ be the number of convolution kernels deleted from the current layer; the weight $w_d'$ after kernel deletion then has dimension $(c_d - n_d, c_{d-1}, \mathrm{kernel\_size}_{w,d}, \mathrm{kernel\_size}_{h,d})$ and the bias term $b_d'$ after kernel deletion has dimension $(c_d - n_d,)$;
step 407, selecting the previous layer's convolution kernels for node deletion and repeating steps 404, 405 and 406 until node deletion over the whole network is finished, thereby obtaining the approximate model of the deep neural network.
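A hedged sketch of the greedy deletion loop of steps 403 through 407 for a single layer; zero_out_kernel, restore_kernel and evaluate are hypothetical helpers (the invention does not name them), and kernels are masked rather than physically removed for simplicity:

```python
def prune_layer(model, layer_idx, contributions, data, evaluate,
                zero_out_kernel, restore_kernel, t_d):
    """Delete kernels of one layer in ascending order of contribution (step 404),
    stopping once the performance drop exceeds t_d (step 405)."""
    baseline = evaluate(model, data)  # performance before any deletion
    deleted = []
    for kernel_idx in contributions.argsort().tolist():  # lowest contribution first
        zero_out_kernel(model, layer_idx, kernel_idx)
        if baseline - evaluate(model, data) <= t_d:       # drop within threshold
            deleted.append(kernel_idx)
        else:
            restore_kernel(model, layer_idx, kernel_idx)  # undo the last deletion
            break
    return deleted  # the n_d kernels removed from layer d
```

Here the drop is measured cumulatively against the original performance, one deletion at a time; physically shrinking $w_d$ and $b_d$ to the $(c_d - n_d, \dots)$ dimensions of step 406 can then be done once per layer after the loop.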
Compared with the prior art, the invention has the following advantages:
first, the invention provides a deep neural network approximate model analysis method based on discrete coefficients that measures the contribution degree of the nodes in a deep neural network by calculating the discrete coefficients of the network's activation values over the samples of each category, solving the problem that the network is difficult to analyze owing to the large number of deep neural network nodes;
second, deleting the nodes with low contribution degree in the deep neural network yields a network approximate model while having little influence on network performance.
the technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The method of the present invention will be described in further detail below with reference to the accompanying drawings and embodiments of the invention.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, components, and/or combinations thereof, unless the context clearly indicates otherwise.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein.
As shown in FIG. 1, taking the public small-scale data set Cifar10 and the VGG16 network for recognizing common objects as an example, the invention discloses a deep neural network approximate model analysis method based on discrete coefficients, comprising the following specific steps:
step one, constructing a deep neural network to be analyzed:
step 101, constructing a VGG16 network model; the VGG network architecture consists of an input layer, convolutional layers and fully connected layers, and the VGG16 network comprises 13 convolutional layers and 3 fully connected layers;
step 102, loading weights pre-trained on the Cifar10 data set into the constructed VGG16 network model;
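Steps 101 and 102 might look as follows; torchvision's vgg16 constructor, the 10-way classifier head, and the weight-file path are illustrative assumptions rather than details fixed by the embodiment:

```python
import torch
from torchvision.models import vgg16

model = vgg16()                                  # 13 conv layers + 3 fully connected layers
model.classifier[6] = torch.nn.Linear(4096, 10)  # replace the 1000-way head with 10 Cifar10 classes
state = torch.load("vgg16_cifar10.pth")          # hypothetical pre-trained weight file
model.load_state_dict(state)
```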
step two, inputting data set samples by category to obtain the deep neural network's feature maps for each category:
step 201, the Cifar10 data set comprises 10 categories with 5000 training-set samples per category; classifying the data set according to its labels yields a data set sample $D_i$ for each category, where $i = 1, 2, \dots, 10$ is the sample category serial number and each $D_i$ contains 5000 samples;
step 202, inputting each category's data set sample $D_i$ into the deep neural network to be analyzed in turn to obtain the feature maps of each category's samples at every intermediate hidden layer of the network; since each category's data set sample in the Cifar10 data set contains 5000 samples and the VGG16 network comprises 13 convolutional layers, the obtained feature maps of the $i$-th category can be expressed as

$$F_i = \left\{ f_{i,j}^{[d]} \;\middle|\; j = 1, 2, \dots, 5000;\ d = 1, 2, \dots, 13 \right\}$$

where $j = 1, 2, \dots, 5000$ is the data set sample serial number;
step three, calculating the contribution degree of every node from the variation of the node activation values in the deep neural network over the per-category feature maps:
step 301, taking the maximum value of the feature map corresponding to each convolution kernel in the deep neural network to obtain the activation value of the convolution kernel; the feature maps $f_{i,j}^{[d]}$ of the $i$-th category's data set sample $D_i$ at the $d$-th layer of the deep neural network all have dimension $c_d \times h_d \times w_d$, and taking the maximum over the $h_d \times w_d$ spatial dimensions yields the activation value samples

$$a_{i,j}^{[d]} = \max_{h,w} f_{i,j}^{[d]}$$

where $j = 1, 2, \dots, 5000$ is the data set sample serial number, $i = 1, 2, \dots, 10$ is the sample category serial number, and $a^{[d]}$ is a one-dimensional vector of length $c_d$, with $c_d$ denoting the number of convolution kernels in the $d$-th convolutional layer of the deep neural network;
step 302, averaging all the activation value samples of the same category to obtain the average activation value sample of each category:

$$\bar{a}_i^{[d]} = \frac{1}{m_i} \sum_{j=1}^{m_i} a_{i,j}^{[d]}$$

where $m_i = 5000$ is the total number of samples in the $i$-th category's data set, $d = 1, 2, \dots, 13$ is the serial number of the convolutional layer in the deep neural network, and $i = 1, 2, \dots, 10$ is the sample category serial number;
step 303, using the average activation value sample $\bar{a}_i^{[d]}$ of each category to calculate the discrete coefficient $V_s$ of each node in the deep neural network:

$$\mu^{[d]} = \frac{1}{10} \sum_{i=1}^{10} \bar{a}_i^{[d]}, \qquad \sigma^{[d]} = \sqrt{\frac{1}{10} \sum_{i=1}^{10} \left( \bar{a}_i^{[d]} - \mu^{[d]} \right)^2}, \qquad V_s^{[d]} = \frac{\sigma^{[d]}}{\mu^{[d]}}$$

where $\mu^{[d]}$ denotes the mean of the average activation values of each category's samples on the $d$-th convolutional layer of the deep neural network, and $\sigma^{[d]}$ denotes the standard deviation of the average activation values of each category's samples on the $d$-th convolutional layer of the deep neural network;
step 304, using the discrete coefficient $V_s$ of each node to measure the importance degree $R$ of each node in the deep neural network:

$$R^{[d]} = V_s^{[d]}$$

where $d = 1, 2, \dots, 13$ is the serial number of the convolutional layer in the deep neural network, and $R^{[d]}$ is a one-dimensional vector of length $c_d$ representing the importance degrees of the $c_d$ nodes in the $d$-th convolutional layer;
step 305, normalizing the importance degrees of the nodes in each layer of the deep neural network to obtain the contribution degree of each layer's nodes, the contribution degree of the $d$-th layer's nodes being $C^{[d]}$:

$$C^{[d]} = \frac{R^{[d]}}{\sum_{p=1}^{c_d} R_p^{[d]}}$$

where $d = 1, 2, \dots, 13$ is the serial number of the convolutional layer in the deep neural network, and $C^{[d]}$ is a one-dimensional vector of length $c_d$ representing the contribution degree of each node in the $d$-th convolutional layer;
step four, deleting convolution kernels according to the contribution degrees of each layer's nodes in the deep neural network to obtain an approximate model of the deep neural network:
step 401, extracting the weights and bias terms $w_d$, $b_d$, $d = 1, 2, \dots, 13$, of the deep neural network's convolutional layers from the network, where $w_d$ has dimension $(c_d, c_{d-1}, \mathrm{kernel\_size}_{w,d}, \mathrm{kernel\_size}_{h,d})$ and $b_d$ has dimension $(c_d,)$; when $d = 1$, $c_{d-1}$ is the number of input sample channels;
step 402, setting the threshold $t_d = 0.3\%$, $d = 1, 2, \dots, 13$, where $t_d$ denotes the maximum allowed drop in network performance after deleting nodes of the $d$-th layer;
step 403, selecting the convolution kernels of the last layer (the 13th layer) for node deletion;
step 404, deleting the convolution kernel with the lowest contribution degree in the corresponding convolutional layer, together with the weights and bias term corresponding to that convolution kernel;
step 405, inputting the data set into the network after node deletion and computing the network's performance; if the performance drop is less than the set threshold $t_d$, repeating step 404; if the performance drop is greater than the set threshold $t_d$, restoring the last deleted node;
step 406, once the threshold requirement is met, letting $n_d$ be the number of convolution kernels deleted from the current layer; the weight $w_d'$ after kernel deletion then has dimension $(c_d - n_d, c_{d-1}, \mathrm{kernel\_size}_{w,d}, \mathrm{kernel\_size}_{h,d})$ and the bias term $b_d'$ after kernel deletion has dimension $(c_d - n_d,)$;
step 407, selecting the previous layer's convolution kernels for node deletion and repeating steps 404, 405 and 406 until node deletion over the whole network is finished, thereby obtaining the approximate model of the deep neural network.
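Putting the embodiment together, a hedged end-to-end run might look as follows; every helper here (collect_class_feature_maps, activation_vector, layer_contribution, prune_layer, evaluate, zero_out_kernel, restore_kernel) is one of the illustrative functions sketched earlier or a hypothetical stand-in, and val_loader is an assumed held-out evaluation set:

```python
import torch
from torchvision import datasets, transforms

train_set = datasets.CIFAR10(root="data", train=True, download=True,
                             transform=transforms.ToTensor())
per_class = collect_class_feature_maps(model, train_set)  # memory permitting; see earlier note

# Layer d -> list of 10 tensors of shape (5000, c_d), one tensor per category.
acts = {d: [torch.stack([activation_vector(maps[d]) for maps in per_class[i]])
            for i in range(10)]
        for d in range(13)}
contrib = {d: layer_contribution(acts[d]) for d in range(13)}

# Steps 403-407: prune from the 13th convolutional layer backwards, t_d = 0.3%.
for d in reversed(range(13)):
    prune_layer(model, d, contrib[d], val_loader, evaluate,
                zero_out_kernel, restore_kernel, t_d=0.003)
```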
The above embodiments are only examples of the present invention and are not intended to limit it; all simple modifications, changes and equivalent structural variations made to the above embodiments according to the technical essence of the invention still fall within the protection scope of the technical solution of the invention.

Claims (1)

1. A deep neural network approximate model analysis method based on discrete coefficients is characterized by comprising the following steps:
step one, constructing a deep neural network to be analyzed:
step 101, constructing the model architecture of the deep neural network to be analyzed;
step 102, training the constructed deep neural network model on a chosen data set to obtain the trained network weights;
step two, inputting data set samples by category to obtain the deep neural network's feature maps for each category:
step 201, classifying the data set according to its labels; if the data set comprises $n$ categories, a data set sample $D_i$ is obtained for each category, where $i = 1, 2, \dots, n$ is the sample category serial number;
step 202, inputting each category's data set sample $D_i$ into the deep neural network to be analyzed in turn to obtain the feature maps of each category's samples at every intermediate hidden layer of the network; if each category's data set sample contains $m_i$ samples and the network to be analyzed has $k$ convolutional layers, the obtained feature maps of the $i$-th category can be expressed as

$$F_i = \left\{ f_{i,j}^{[d]} \;\middle|\; j = 1, 2, \dots, m_i;\ d = 1, 2, \dots, k \right\}$$

where $j = 1, 2, \dots, m_i$ is the data set sample serial number;
step three, calculating the contribution degree of every node from the variation of the node activation values in the deep neural network over the per-category feature maps:
step 301, taking the maximum value of the feature map corresponding to each convolution kernel in the deep neural network to obtain the activation value of the convolution kernel:

$$a_{i,j}^{[d]} = \max_{h,w} f_{i,j}^{[d]}$$

where $d = 1, 2, \dots, k$ is the serial number of the convolutional layer in the deep neural network, $j = 1, 2, \dots, m_i$ is the data set sample serial number, $i = 1, 2, \dots, n$ is the sample category serial number, and $a^{[d]}$ is a one-dimensional vector of length $c_d$, with $c_d$ denoting the number of convolution kernels in the $d$-th convolutional layer of the deep neural network;
step 302, averaging all the activation value samples of the same category to obtain the average activation value sample of each category:

$$\bar{a}_i^{[d]} = \frac{1}{m_i} \sum_{j=1}^{m_i} a_{i,j}^{[d]}$$

where $m_i$ is the total number of samples in the $i$-th category's data set, $d = 1, 2, \dots, k$ is the serial number of the convolutional layer in the deep neural network, and $i = 1, 2, \dots, n$ is the sample category serial number;
step 303, using the average activation value sample $\bar{a}_i^{[d]}$ of each category to calculate the discrete coefficient $V_s$ of each node in the deep neural network:

$$\mu^{[d]} = \frac{1}{n} \sum_{i=1}^{n} \bar{a}_i^{[d]}, \qquad \sigma^{[d]} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( \bar{a}_i^{[d]} - \mu^{[d]} \right)^2}, \qquad V_s^{[d]} = \frac{\sigma^{[d]}}{\mu^{[d]}}$$

where $\mu^{[d]}$ denotes the mean of the average activation values of each category's samples on the $d$-th convolutional layer of the deep neural network, and $\sigma^{[d]}$ denotes the standard deviation of the average activation values of each category's samples on the $d$-th convolutional layer of the deep neural network;
step 304, using the discrete coefficient $V_s$ of each node to measure the importance degree $R$ of each node in the deep neural network:

$$R^{[d]} = V_s^{[d]}$$

where $d = 1, 2, \dots, k$ is the serial number of the convolutional layer in the deep neural network, and $R^{[d]}$ is a one-dimensional vector of length $c_d$ representing the importance degrees of the $c_d$ nodes in the $d$-th convolutional layer;
step 305, normalizing the importance degrees of the nodes in each layer of the deep neural network to obtain the contribution degree of each layer's nodes, the contribution degree of the $d$-th layer's nodes being $C^{[d]}$:

$$C^{[d]} = \frac{R^{[d]}}{\sum_{p=1}^{c_d} R_p^{[d]}}$$

where $d = 1, 2, \dots, k$ is the serial number of the convolutional layer in the deep neural network, and $C^{[d]}$ is a one-dimensional vector of length $c_d$ representing the contribution degree of each node in the $d$-th convolutional layer;
step four, deleting convolution kernels according to the contribution degrees of each layer's nodes in the deep neural network to obtain an approximate model of the deep neural network:
step 401, extracting the weights and bias terms $w_d$, $b_d$, $d = 1, 2, \dots, k$, of the deep neural network's convolutional layers from the network, where $w_d$ has dimension $(c_d, c_{d-1}, \mathrm{kernel\_size}_{w,d}, \mathrm{kernel\_size}_{h,d})$ and $b_d$ has dimension $(c_d,)$; when $d = 1$, $c_{d-1}$ is the number of input sample channels;
step 402, setting a threshold $t_d$, where $t_d$ denotes the maximum allowed drop in network performance after deleting nodes of the $d$-th layer;
step 403, selecting the convolution kernels of the last layer (the $k$-th layer) for node deletion;
step 404, deleting the convolution kernel with the lowest contribution degree in the corresponding convolutional layer;
step 405, inputting the data set into the network after node deletion and computing the network's performance; if the performance drop is less than the set threshold $t_d$, repeating step 404; if the performance drop is greater than the set threshold $t_d$, restoring the last deleted node;
step 406, once the threshold requirement is met, letting $n_d$ be the number of convolution kernels deleted from the current layer; the weight $w_d'$ after kernel deletion then has dimension $(c_d - n_d, c_{d-1}, \mathrm{kernel\_size}_{w,d}, \mathrm{kernel\_size}_{h,d})$ and the bias term $b_d'$ after kernel deletion has dimension $(c_d - n_d,)$;
step 407, selecting the previous layer's convolution kernels for node deletion and repeating steps 404, 405 and 406 until node deletion over the whole network is finished, thereby obtaining the approximate model of the deep neural network.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110617865.2A 2021-06-01 2021-06-01 Deep neural network approximate model analysis method based on discrete coefficients

Publications (1)

Publication Number Publication Date
CN113283519A (en) 2021-08-20

Family

ID=77283110

Country Status (1)

Country Link
CN (1) CN113283519A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114051218A (en) * 2021-11-09 2022-02-15 华中师范大学 Environment-aware network optimization method and system
CN114051218B (en) * 2021-11-09 2024-05-14 华中师范大学 Environment-aware network optimization method and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 2021-08-20)