CN114295967A - Analog circuit fault diagnosis method based on migration neural network - Google Patents
- Publication number
- CN114295967A (application number CN202110842458.1A)
- Authority
- CN
- China
- Prior art keywords
- network
- layer
- training
- model
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses an analog circuit fault diagnosis method based on a transfer neural network and a means of implementing it. The output voltage of the circuit under test is collected and subjected to a three-layer wavelet packet transform; the energy data of the third wavelet packet layer are converted into grayscale images, producing a fault-feature grayscale image dataset. A convolutional neural network is constructed and pre-trained on the mnist dataset. The trained network-layer parameters are then transferred to a new network, which is trained on the fault grayscale dataset to obtain the final analog circuit fault diagnosis model. The invention maintains high diagnostic accuracy even with few fault samples and reduces the data acquisition and processing workload for large-scale circuits.
Description
Technical Field
The invention belongs to the field of analog circuit fault diagnosis, and relates to an analog circuit fault diagnosis method based on a transfer neural network and a means of implementing it.
Background
Analog circuit fault diagnosis methods can be roughly divided into model-based methods and data-driven methods. Model-based approaches compare measured signal responses under various conditions against the output of a circuit model, and are therefore also called signal-model approaches. Common models include matrix models, fuzzy models, parity-space models, and hidden Markov models, which are combined with various signal processing methods for analog circuit fault diagnosis. However, these methods require manual analysis and a large amount of prior knowledge, and accurate modeling remains a significant challenge. With the rapid development of computer technology, data-driven methods have become one of the mainstream approaches in fault diagnosis. Among data-driven methods, machine learning algorithms are applied to fault diagnosis in many fields because of their strength in modeling nonlinear mappings in data. Deep learning is a particular branch of machine learning: whereas traditional machine learning algorithms suit small sample sizes, deep learning performs better on large datasets.
The convolutional neural network (CNN) is one of the representative deep learning algorithms. It is a feedforward neural network with convolution operations and a deep structure, mainly composed of convolutional layers, pooling layers, and fully-connected layers. The convolutional layer extracts features via convolution kernels and is the core of the network. The pooling layer sparsifies feature maps by downsampling, reducing the amount of computation and, to some extent, overfitting. The fully-connected layers at the end of the network re-fit the abstract features extracted earlier, classify them, and produce the output. At the start of training, the values in the convolution kernels are randomly initialized; the network then updates them through backpropagation until optimal values are found. These kernel parameters, also called network weights, are evaluated by a loss function to determine whether they are optimal. By the dimensionality of the convolution, CNNs are divided into one-dimensional (1D-CNN), two-dimensional (2D-CNN), and three-dimensional (3D-CNN) networks: 1D-CNNs are typically applied to natural language processing, 2D-CNNs to computer vision and image processing, and 3D-CNNs to video processing and medical detection.
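The convolution and pooling operations described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the patent's TensorFlow implementation; the averaging kernel stands in for a kernel that a CNN would learn by backpropagation.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation, the operation computed inside a convolutional layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2, stride=2):
    """Max pooling: down-samples the feature map, keeping the strongest response per window."""
    h, w = feature_map.shape
    out = np.empty((h // stride, w // stride))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = feature_map[i * stride:i * stride + size,
                                    j * stride:j * stride + size].max()
    return out

img = np.arange(36, dtype=float).reshape(6, 6)
k = np.ones((3, 3)) / 9.0    # averaging kernel as a stand-in for a learned one
fm = conv2d(img, k)          # 4 x 4 feature map
pooled = max_pool(fm)        # 2 x 2 after 2x2/stride-2 max pooling
```

Note how each stage shrinks the map: a 3 × 3 valid convolution turns 6 × 6 into 4 × 4, and the pooling layer halves that to 2 × 2, which is the downsampling the description refers to.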
Transfer learning is a machine learning method that reuses a pre-trained model for another task. In transfer learning, the existing knowledge is called the source domain and the new knowledge to be learned is called the target domain. A precondition for transfer learning is that the source and target domains share some similarity; otherwise negative transfer occurs. By what is transferred, transfer learning is divided into instance-based, relation-based, feature-based, and model-parameter-based transfer learning. Model-parameter-based transfer learning is currently a research hotspot in the diagnosis of mechanical faults such as bearing and gear failures, but it has not yet been widely studied or applied in analog circuit fault diagnosis. Deep learning requires a large amount of data from the same feature space and distribution to train a model, at considerable time cost. In model-parameter transfer, a network is first pre-trained on source-domain data, part of the pre-trained parameters are reused to build a new model, and the new layers of that model are then trained on target-domain data to obtain the target-domain network. This approach retains good network performance even when trained on a small sample set, and model training is faster because only part of the network layers need to be trained.
In analog circuit fault diagnosis, collecting data, preprocessing it, and extracting fault features demands substantial manual effort; as circuit scale grows, the workload increases, feature extraction becomes harder, and model training takes longer. It is therefore necessary to provide a 2D-CNN-based transfer learning method for analog circuit fault diagnosis that reduces the data preprocessing workload and model training time while maintaining good feature extraction and a high fault diagnosis rate.
Disclosure of Invention
To address the problems of heavy data acquisition workload, difficult fault feature extraction, and long network training times in analog circuit fault diagnosis for large-scale circuits, the invention provides an analog circuit fault diagnosis method based on a transfer neural network. Transfer learning is performed on top of a two-dimensional convolutional neural network, so the network can be trained on a small sample set while still maintaining a high fault diagnosis rate and a short training time. One-dimensional fault data are converted into a two-dimensional fault-feature image dataset through wavelet packet decomposition and grayscale conversion; the two-dimensional transfer learning network is then trained to extract and classify fault features automatically, avoiding the many fuzzy sets and tedious steps required by manual feature extraction.
In order to achieve the purpose, the invention adopts the following technical scheme:
The analog circuit fault diagnosis method based on the transfer neural network comprises three stages: data acquisition and preprocessing, network model construction and training, and transfer network design.
Data acquisition and preprocessing: the circuit under test is simulated in PSpice with an excitation source applied; the circuit's sensitive components are identified by sensitivity analysis, and a fault set is constructed. Multiple Monte Carlo analyses are run for each fault condition in the set, each run yielding one sample, so the total number of samples equals the number of fault classes multiplied by the number of Monte Carlo runs. Each sample retains 1024 sampling points (size 1024 × 1). Three-layer wavelet packet decomposition is applied to each sample, and the energy features of all third-layer frequency bands are reshaped into a matrix and converted to grayscale, generating a feature grayscale dataset in which each image is 32 × 32. This dataset is used to train and test the transfer learning model, with 75% as the training set and the remaining 25% as the test set.
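A minimal sketch of this preprocessing chain, assuming a Haar wavelet basis (the patent does not name the wavelet): three packet splits turn the 1024 samples into 8 third-level bands of 128 coefficients each, whose 1024 per-coefficient energies are reshaped to 32 × 32 and scaled to gray levels.

```python
import numpy as np

def haar_split(x):
    # One wavelet packet split: low-pass (sum) and high-pass (difference) halves.
    s = np.sqrt(2.0)
    return (x[0::2] + x[1::2]) / s, (x[0::2] - x[1::2]) / s

def wpd_level3(signal):
    # Three-level wavelet packet decomposition: 1 node -> 2 -> 4 -> 8 frequency bands.
    nodes = [np.asarray(signal, dtype=float)]
    for _ in range(3):
        nodes = [half for node in nodes for half in haar_split(node)]
    return nodes  # 8 bands, each len(signal) // 8 coefficients long

def to_gray_map(signal):
    # 1024 points -> 8 bands x 128 coefficients = 1024 energies -> 32 x 32 gray image.
    bands = wpd_level3(signal)
    energy = np.concatenate([b ** 2 for b in bands])   # per-coefficient energy
    img = energy.reshape(32, 32)
    scaled = 255.0 * (img - img.min()) / (np.ptp(img) + 1e-12)
    return scaled.astype(np.uint8)

gray = to_gray_map(np.sin(np.linspace(0.0, 8.0 * np.pi, 1024)))
```

In practice each Monte Carlo run's 1024-point output voltage would be passed through `to_gray_map` to produce one 32 × 32 feature grayscale image of the dataset.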
Network model construction and training: the network is built in the Python language with the TensorFlow framework and consists mainly of four convolutional layers, two pooling layers, and three fully-connected layers. The convolutional layers use the relu activation function; all kernels are 3 × 3, numbering 64, 32, 64, and 128 respectively. The pooling layers use 2 × 2 max pooling with stride 2. The learning rate is adjusted with the ReduceLROnPlateau callback. The first two fully-connected layers have 256 and 1024 neurons with relu activation; the last fully-connected layer has 15 neurons with softmax activation. The constructed model is pre-trained on the public standard dataset mnist to obtain its weight parameters.
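As a consistency check, the tensor shapes implied by this structure can be traced in plain Python. The placement of the two pooling layers and the use of 'same' padding in the 3 × 3 convolutions are assumptions (the patent does not specify them); under those assumptions the spatial size halves only at the pools.

```python
# Output size of a 2x2, stride-2 max-pooling layer (no padding).
def pool(hw, size=2, stride=2):
    return (hw - size) // stride + 1

hw = 32            # 32 x 32 feature grayscale input
hw = pool(hw)      # after the first max-pool
hw = pool(hw)      # after the second max-pool

flat = hw * hw * 128              # 128 kernels in the last convolutional layer
fc_sizes = [flat, 256, 1024, 15]  # flatten -> the three fully-connected layers

# Trainable parameters of the four convolutional layers (3x3 kernels plus one
# bias per kernel), assuming a single-channel input as for mnist-style images.
channels = [1, 64, 32, 64, 128]
conv_params = sum(3 * 3 * cin * cout + cout
                  for cin, cout in zip(channels, channels[1:]))
```

The final 8 × 8 × 128 feature map flattens to 8192 values feeding the 256-neuron layer, and the four convolutional layers together hold 111,456 parameters — the parameters that the transfer step below reuses.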
Transfer network design: in the same deep learning framework and development environment, the parameters of the four pre-trained convolutional layers are reused, and the network input layer and fully-connected layers are redefined to form a new network model. The input layer expects 32 × 32 inputs. The three fully-connected layers are redesigned: the first two have 512 neurons each with relu activation, and, since the circuit under test has 13 fault classes, the last has 13 neurons with softmax activation. The three new fully-connected layers are trained on the feature grayscale training set, the whole transfer learning model is tested on the test set, and the fault diagnosis model of the circuit under test is finally obtained. Loss (loss) and accuracy (acc) curves are computed and plotted; precision, recall, F1, and related metrics are calculated; and a confusion matrix of the fault classification is drawn.
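The parameter transfer itself can be sketched framework-agnostically. Dictionary-held NumPy arrays stand in for real tensors, and the layer names and shapes are illustrative, not the patent's code: the four pre-trained convolutional layers are copied and frozen, while only the newly attached fully-connected layers remain trainable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for weights learned during mnist pre-training (shapes illustrative;
# the real kernels are 3x3 with 64/32/64/128 output channels).
pretrained = {f"conv{i}": rng.standard_normal((3, 3)) for i in range(1, 5)}

def build_transfer_model(pretrained, n_classes):
    """Copy the pre-trained convolutional layers, mark them frozen, and attach
    freshly initialised fully-connected layers sized for the new task."""
    model = {name: {"weights": w.copy(), "trainable": False}
             for name, w in pretrained.items()}
    for name, units in [("fc1", 512), ("fc2", 512), ("fc3", n_classes)]:
        model[name] = {"weights": rng.standard_normal(units), "trainable": True}
    return model

model = build_transfer_model(pretrained, n_classes=13)  # 13 fault classes
trainable = [n for n, layer in model.items() if layer["trainable"]]
```

Because only the three fully-connected layers are marked trainable, gradient updates during target-domain training would touch just those layers, which is what makes retraining on the small fault dataset fast.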
The beneficial effects of the invention are as follows:
The proposed analog circuit fault diagnosis method reduces data acquisition and preprocessing work and shortens model training time while maintaining good feature extraction and a high fault diagnosis rate.
Drawings
Fig. 1 is a system block diagram depicting the operational steps of the invention as applied to a circuit under test.
Fig. 2 shows the fourth-order dual-op-amp high-pass filter circuit used as the experimental subject of the invention.
Fig. 3 lists the fault values and corresponding labels for the fault set of the fourth-order dual-op-amp high-pass filter circuit.
Fig. 4 shows the network model structure built for pre-training; pooling layers are grouped with the convolutional layers and are therefore not drawn separately in the structure diagram.
Fig. 5 is a schematic diagram of the network structure and construction process of the transfer learning model.
Detailed Description
The invention is described in detail below with reference to the drawings and an example, without limiting the invention thereto. All other embodiments obtained by a person skilled in the art without inventive effort based on these embodiments fall within the scope of the invention.
Example:
A fourth-order dual-op-amp high-pass filter circuit is selected as the circuit under test and simulated in PSpice, as shown in Fig. 2. Resistor and capacitor tolerances are set to 5% and 10%, respectively. Sensitivity analysis identifies R1, R2, R3, R4, C1, and C2 as the circuit's sensitive components, giving the fault set {R1↑, R1↓, R2↑, R2↓, R3↑, R3↓, R4↑, R4↓, C1↑, C1↓, C2↑, C2↓, normal}, 13 classes in total. The fault values of the sensitive components and the label for each fault are shown in Fig. 3, where ↑ and ↓ denote a component parameter 50% above and 50% below its nominal value, respectively. 400 Monte Carlo analyses are run for each fault class, taking 1024 sampling points from one period of the output voltage as raw data, so the dataset contains 13 × 400 = 5200 samples, each of size 1024 × 1. Three-layer wavelet packet decomposition is applied to the raw data, and the energies of all third-layer frequency bands are reshaped into matrices and converted to grayscale, generating a fault-feature grayscale dataset in which each image is 32 × 32. 75% of the grayscale dataset is used as the training set and 25% as the test set.
A two-dimensional convolutional neural network model is built with the Python language (Python 3.7) and the TensorFlow framework (TensorFlow 2.1); the development environment is PyCharm. The network consists mainly of four convolutional layers, two pooling layers, and three fully-connected layers, as shown in Fig. 4. All four convolutional layers use 3 × 3 kernels, numbering 64, 32, 64, and 128, with relu activation. The pooling layers use 2 × 2 max pooling with stride 2. The learning rate is adjusted with the ReduceLROnPlateau callback. The first two of the three fully-connected layers have 256 and 1024 neurons with relu activation; the last has 15 neurons, i.e. 15 output classes, with softmax activation normalizing the classification output. The model is pre-trained on the mnist dataset to obtain its weight parameters.
The transfer learning network model is designed as shown in Fig. 5: the parameters of the four pre-trained convolutional layers are reused; the input layer is redefined to expect 32 × 32 inputs; the three fully-connected layers are redesigned, the first two with 512 neurons each and relu activation, and, since the circuit under test has 13 fault classes, the last with 13 neurons and softmax activation. The three redesigned fully-connected layers are trained on the feature grayscale training set, the whole transfer learning model is tested on the test set, and the fault diagnosis model of the circuit under test is finally obtained. Loss (loss) and accuracy (acc) curves are computed and plotted to visualize the network's learning behavior; precision, recall, F1, and related metrics are calculated to evaluate the transfer learning network; and a confusion matrix is drawn to analyze the classification error for each fault.
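The evaluation metrics mentioned here can all be read directly off the confusion matrix. A self-contained NumPy sketch follows; the labels are toy values for illustration, not the patent's experimental results.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    # cm[t, p] counts samples of true class t predicted as class p.
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def precision_recall_f1(cm):
    # Per-class metrics: diagonal = true positives,
    # column sums = predicted counts, row sums = true counts.
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)
    recall = tp / np.maximum(cm.sum(axis=1), 1)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return precision, recall, f1

y_true = [0, 0, 1, 1, 2, 2]   # toy ground-truth fault labels
y_pred = [0, 1, 1, 1, 2, 0]   # toy model predictions
cm = confusion_matrix(y_true, y_pred, 3)
p, r, f1 = precision_recall_f1(cm)
```

Off-diagonal entries of `cm` show exactly which fault classes are confused with which, which is why the patent plots the confusion matrix to analyze per-fault classification error.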
Claims (4)
1. An analog circuit fault diagnosis method based on a transfer neural network, characterized by comprising data acquisition and preprocessing, network model construction and training, and transfer network design; wavelet packet decomposition is applied to the circuit's output voltage data to extract its fault features, which are then converted into a grayscale image called a feature grayscale map; a two-dimensional convolutional neural network is designed and trained on the mnist dataset; on the basis of the trained model, a transfer learning network is designed by transferring network-layer parameters, and is trained and tested with the feature grayscale dataset.
2. The analog circuit fault diagnosis method based on the transfer neural network according to claim 1, characterized in that, in the data acquisition and preprocessing, the circuit under test is simulated, a single-fault set is constructed according to sensitivity analysis, output voltages under the different faults are then acquired through multiple Monte Carlo analyses, and 1024 sampling points are retained for each sample; three-layer wavelet packet decomposition is applied to the acquired raw data, and the energies of the third-layer wavelet packets undergo matrix reshaping and grayscale conversion to generate a feature grayscale dataset in which each image is 32 × 32; 75% of the dataset serves as the training set of the transfer learning network and the remaining 25% as the test set.
3. The analog circuit fault diagnosis method based on the transfer neural network according to claim 1, characterized in that a two-dimensional convolutional neural network comprising four convolutional layers, two pooling layers, and three fully-connected layers is built with the TensorFlow framework, wherein the convolutional layers use the relu activation function, all kernels are 3 × 3, numbering 64, 32, 64, and 128 respectively; the pooling layers use 2 × 2 max pooling with stride 2; the learning rate is adjusted with the ReduceLROnPlateau callback; the first two fully-connected layers have 256 and 1024 neurons respectively with relu activation, and the last fully-connected layer has 15 neurons with softmax activation so that the classification result is normalized; the model is pre-trained on the public handwritten digit dataset mnist to obtain its network-layer weight parameters.
4. The analog circuit fault diagnosis method based on the transfer neural network according to claim 1, characterized in that the parameters of the four pre-trained convolutional layers are reused, and the network input layer and three fully-connected layers are redesigned to form a new network model, namely the transfer learning network; the input layer expects 32 × 32 inputs, the first two fully-connected layers have 512 neurons each with relu activation, and the neuron count of the last fully-connected layer is determined by the number of faults of the circuit under test, with softmax activation; the last three fully-connected layers of the transfer learning network are trained on the feature grayscale training set and the network is tested on the feature grayscale test set, finally yielding the fault diagnosis model of the circuit under test; loss and acc curves are computed and plotted to observe the network's learning behavior, and precision, recall, F1, and related metrics are calculated to jointly evaluate the network's performance.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110842458.1A CN114295967A (en) | 2021-07-26 | 2021-07-26 | Analog circuit fault diagnosis method based on migration neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114295967A true CN114295967A (en) | 2022-04-08 |
Family
ID=80963919
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110842458.1A Pending CN114295967A (en) | 2021-07-26 | 2021-07-26 | Analog circuit fault diagnosis method based on migration neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114295967A (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106547962A (en) * | 2016-10-21 | 2017-03-29 | 天津大学 | Based on the integrated circuit interconnection analysis method for reliability that neural network parameter is modeled |
CN107316061A (en) * | 2017-06-22 | 2017-11-03 | 华南理工大学 | A kind of uneven classification ensemble method of depth migration study |
CN108805206A (en) * | 2018-06-13 | 2018-11-13 | 南京工业大学 | Improved L SSVM establishing method for analog circuit fault classification |
CN110780146A (en) * | 2019-12-10 | 2020-02-11 | 武汉大学 | Transformer fault identification and positioning diagnosis method based on multi-stage transfer learning |
US20200151572A1 (en) * | 2018-11-14 | 2020-05-14 | Advanced Micro Devices, Inc. | Using Multiple Functional Blocks for Training Neural Networks |
CN111242063A (en) * | 2020-01-17 | 2020-06-05 | 江苏大学 | Small sample classification model construction method based on transfer learning and iris classification application |
CN111898095A (en) * | 2020-07-10 | 2020-11-06 | 佛山科学技术学院 | Deep migration learning intelligent fault diagnosis method and device, storage medium and equipment |
CN112101116A (en) * | 2020-08-17 | 2020-12-18 | 北京无线电计量测试研究所 | Analog circuit fault diagnosis method based on deep learning |
CN112379779A (en) * | 2020-11-30 | 2021-02-19 | 华南理工大学 | Dynamic gesture recognition virtual interaction system based on transfer learning |
CN113077017A (en) * | 2021-05-24 | 2021-07-06 | 河南大学 | Synthetic aperture image classification method based on impulse neural network |
CN113095475A (en) * | 2021-03-02 | 2021-07-09 | 华为技术有限公司 | Neural network training method, image processing method and related equipment |
Non-Patent Citations (1)
Title |
---|
KUANG Qiqi: "Research on Fault Diagnosis Methods for Analog Circuits Based on Deep Learning", China Master's Theses Full-text Database, Information Science and Technology, no. 01, pp. 7-16 *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114926303A (en) * | 2022-04-26 | 2022-08-19 | 广东工业大学 | Electric larceny detection method based on transfer learning |
CN117390482A (en) * | 2023-07-11 | 2024-01-12 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Multi-model interactive fault diagnosis method, equipment and medium based on SL frame |
CN117390482B (en) * | 2023-07-11 | 2024-08-02 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Multi-model interactive fault diagnosis method, equipment and medium based on SL frame |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20220408 |