CN106682730A - Network performance assessment method based on VGG16 image deconvolution
- Publication number
- CN106682730A (application number CN201710014706.7A)
- Authority
- CN
- China
- Prior art keywords
- network
- vgg16
- image
- deconvolution
- files
- Prior art date
- Legal status: Granted (an assumption based on the listed status, not a legal conclusion)
Classifications
- G06N3/045 - Computing arrangements based on biological models; Neural networks; Architecture, e.g. interconnection topology; Combinations of networks
- G06N3/08 - Computing arrangements based on biological models; Neural networks; Learning methods
- H04L41/145 - Network analysis or design involving simulating, designing, planning or modelling of a network
Abstract
The invention discloses a network performance assessment method based on VGG16 image deconvolution, aimed mainly at the problem that the prior art cannot assess the performance of a deep convolutional neural network with intuitive visual images. In the proposed scheme, the original VGG16 network model is downloaded and fine-tuned to obtain a fine-tuned network, and the two networks serve as the networks to be assessed. A preliminary assessment is completed by comparing the sparsity of the "conv5_3" feature visualization images of the two networks and the strength with which the corresponding deconvolved images extract the salient features of an object. A final assessment is completed by comparing the coefficient attenuation curves of the two networks' "conv5_3" feature visualization images. The method makes the internal working mechanism of a network directly visible, assesses the quality of network performance accurately, and can be used to evaluate convolutional neural networks with visual images so that they can be improved.
Description
Technical field
The invention belongs to the technical field of image processing and relates to a network performance evaluation method that can be used to assess convolutional neural networks with visual images.
Background technology
Deep learning is one of the research hotspots of artificial intelligence in recent years. Built on the idea of neural networks, deep learning seeks to imitate the distributed feature representation of the human brain, extracting information layer by layer from pixel-level raw data up to abstract semantic concepts so as to achieve more effective feature representations.
In recent years, the availability of large-scale public image databases such as ImageNet and of high-performance computing systems such as GPUs and large distributed clusters has allowed convolutional neural networks, an important component of deep learning, to achieve major breakthroughs in many fields, including object detection, image classification and face recognition. In image classification, VGG convolutional neural networks use smaller convolution kernels and a deeper network structure, which lets them extract the salient features of an image efficiently. In the 2014 ImageNet Large-Scale Visual Recognition Challenge (ILSVRC), VGG networks achieved excellent results, completing the classification and localization tasks with very high accuracy, and they also transfer well to other data sets, for example the VOC-2007 and VOC-2012 data sets and the Caltech-101 and Caltech-256 data sets. The 16-layer VGG16 network in particular shows very high classification ability and very strong portability.
At present, people assess the performance of a network mainly by comparing its cost-function loss values and its test accuracy on a given data set. For a deep convolutional neural network, however, the input is an image while the output is merely an abstract numerical value, so the quality of the system cannot be assessed with intuitive visual images. In this situation the network is effectively a black box: its internal structure cannot be analyzed, and therefore no suitable modifications can be made to improve its performance.
Summary of the invention
The purpose of the present invention is to address the above deficiencies of the prior art by providing a network performance evaluation method based on VGG16 image deconvolution, which assesses network performance with more intuitive visual images so that the assessment results correlate well, and agree accurately, with human subjective perception.
The technical idea of the present invention is as follows: compare the deconvolved images of the original VGG16 network and of the network after fine-tuning, and analyze their performance; then compare the coefficient attenuation curves of the "conv5_3" convolutional layer of the two networks to analyze their performance further. The implementation steps are as follows:
1. A network performance evaluation method based on VGG16 image deconvolution, comprising the following steps:
1) Prepare the two network models to be assessed and their associated files:
1a) download the original VGG16 network model and its associated files from the official website;
1b) build the caffe platform under a Linux system and, using the trained weight parameters of the original VGG16 model as initial values, fine-tune the network to obtain the fine-tuned network model and its associated files;
1c) take the original VGG16 network and the fine-tuned network as the two networks to be assessed.
2) Produce the "conv5_3" feature visualization images of the two networks to be assessed and the corresponding deconvolved images:
2a) run the forward-propagation process through the original VGG16 network model and the fine-tuned network model respectively and extract the "conv5_3" feature visualization map of each;
2b) following the frameworks of the original VGG16 network and the fine-tuned network in reverse, apply unpooling, un-rectification and deconvolution to the feature visualization images layer by layer, obtaining the deconvolved images of the two networks;
2c) compare the deconvolved image of the original VGG16 network with that of the fine-tuned network to determine how strongly each network extracts the salient features of the car, completing the preliminary assessment of the networks to be assessed.
3) Produce the coefficient attenuation curves of the feature visualization images:
3a) save the data of the "conv5_3" feature visualization images obtained in 2a) as files in .mat format;
3b) in MATLAB, plot the two-dimensional coefficient attenuation curves of the "conv5_3" feature visualization images of the original VGG16 network and of the fine-tuned network;
3c) compare the steepness of the two coefficient attenuation curves from 3b): the steeper the curve, the stronger the ability of the network under assessment to extract the salient features of the car and the better it can recognize and classify the car, i.e. the better its performance; this completes the secondary assessment of the networks to be assessed.
Compared with the prior art, the present invention has the following advantages:
1. By comparing the feature visualization images and the corresponding deconvolved images when assessing a network, the invention makes the assessment results correlate better, and agree more accurately, with human subjective perception; compared with earlier black-box assessment methods, it confirms the quality of network performance with more intuitive visual images.
2. By comparing how steeply the curves fall in the "conv5_3" coefficient attenuation plots of the original VGG16 network and the fine-tuned network, the invention judges each network's grasp of the salient features of the image and improves the accuracy of network evaluation.
3. By observing the strength of the object's salient features in the deconvolved visualization images, the internal working mechanism of a deep convolutional neural network can be understood more deeply, so that the network can be improved according to requirements and objectives.
Description of the drawings
Fig. 1 is the implementation flowchart of the present invention;
Fig. 2 shows the test pictures used when testing network characteristics in the present invention;
Fig. 3 is the VGG16 network structure used in the deconvolution process of the present invention;
Fig. 4 shows the "conv5_3" feature visualization images of the original VGG16 network and the fine-tuned network and the corresponding deconvolved images;
Fig. 5 shows the coefficient attenuation curves of the "conv5_3" layer of the original VGG16 network and the fine-tuned network.
Specific embodiments
The present invention is described in detail below with reference to the accompanying drawings and an example.
Referring to Fig. 1, the implementation steps of the present invention are as follows.
Step 1: prepare the models and associated files of the original VGG16 network and the fine-tuned network.
1a) Download the following files from the caffe official website:
the original VGG16 network model VGG_ILSVRC_16_layers.caffemodel,
the imagenet data set,
the mean file ilsvrc_2012_mean.npy,
create_imagenet.sh, needed to make the lmdb files,
make_imagenet_mean.sh, needed to make the mean file,
the training parameter files solver.prototxt and train_val.prototxt,
and the file deploy.prototxt needed for testing.
1b) Build the caffe platform under a Linux system and fine-tune the network with the downloaded files:
1b1) select 1300 cars and 1300 motorcycles from the imagenet data set and divide each class into a training part and a test part in the ratio 11:2; create the folders train_car and train_motor and put 1100 cars and 1100 motorcycles into them respectively; create the folders val_car and val_motor and put the remaining 200 cars and 200 motorcycles into them respectively;
1b2) create a folder train, merge the image data from the train_car and train_motor folders and put the result into the train folder as the training data set;
1b3) create a folder val, merge the image data from the val_car and val_motor folders and put the result into the val folder as the test data set;
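The folder preparation of steps 1b1) to 1b3) can be sketched in Python. The source and destination paths below are illustrative assumptions; only the 11:2 split into 1100 training and 200 test images per class comes from the text.

```python
import random
import shutil
from pathlib import Path

def split_class(src_dir, train_dir, val_dir, n_train=1100, n_val=200, seed=0):
    """Split one class's images into training and validation folders (11:2 ratio)."""
    images = sorted(Path(src_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)  # fixed seed so the split is reproducible
    Path(train_dir).mkdir(parents=True, exist_ok=True)
    Path(val_dir).mkdir(parents=True, exist_ok=True)
    for img in images[:n_train]:
        shutil.copy(img, train_dir)
    for img in images[n_train:n_train + n_val]:
        shutil.copy(img, val_dir)

# Hypothetical source folders extracted from the imagenet data set:
# split_class("imagenet/cars", "train_car", "val_car")
# split_class("imagenet/motorcycles", "train_motor", "val_motor")
```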
1b4) convert all the picture information in the train_car folder to .txt form, generating the file train_car.txt, and use search-and-replace to add the label 1 to each entry; likewise convert all the picture information in the train_motor folder to .txt form, generating train_motor.txt, and use search-and-replace to add the label 0; create the file train.txt and merge the entries of train_car.txt and train_motor.txt into it;
1b5) convert all the picture information in the val_car folder to .txt form, generating the file val_car.txt, and add the label 1 with search-and-replace; likewise convert all the picture information in the val_motor folder to .txt form, generating val_motor.txt, and add the label 0; create the file val.txt and merge the entries of val_car.txt and val_motor.txt into it;
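Steps 1b4) and 1b5) amount to writing an image-list file with one "filename label" line per picture (label 1 for cars, 0 for motorcycles), the format caffe's convert_imageset tool expects. A hedged Python sketch, with illustrative folder names:

```python
from pathlib import Path

def write_list(folder, label):
    """One 'filename label' line per image, in sorted order."""
    return [f"{p.name} {label}" for p in sorted(Path(folder).glob("*.jpg"))]

def make_label_file(car_dir, motor_dir, out_path):
    """Merge car entries (label 1) and motorcycle entries (label 0) into one list file."""
    lines = write_list(car_dir, 1) + write_list(motor_dir, 0)
    Path(out_path).write_text("\n".join(lines) + "\n")

# Hypothetical usage matching the folders of step 1b1):
# make_label_file("train_car", "train_motor", "train.txt")
# make_label_file("val_car", "val_motor", "val.txt")
```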
1b6) modify the paths of the four items train, train.txt, val and val.txt called in the downloaded create_imagenet.sh, then open a terminal in the caffe directory and execute create_imagenet.sh to generate the train_lmdb and val_lmdb folders;
1b7) modify the paths of the two folders train_lmdb and val_lmdb called in the downloaded make_imagenet_mean.sh, then execute make_imagenet_mean.sh to generate the binary mean file my_mean.binaryproto;
1b8) modify the paths of the lmdb files and the binary mean file called in the downloaded training file train_val.prototxt, rename the last fully connected layer to fc8t and change the number of output classes to 2;
1b9) modify the path of the train_val.prototxt called in the downloaded solver.prototxt, then modify the training parameters in that file: set the learning rate base_lr to 0.0001 and the maximum number of iterations max_iter to 2200;
1b10) open a terminal in the caffe directory and, using the parameters of the original VGG16 model as the initial values of the network to be fine-tuned, call the parameter file solver.prototxt as modified in 1b9) and retrain the VGG16 network, obtaining the fine-tuned network model and its mean file;
1b11) rename the last fully connected layer in the downloaded test file deploy.prototxt to fc8t and change the number of output classes to 2.
Step 2: prepare the pictures to be tested.
Pick a picture of an animal from the imagenet data set downloaded in 1a), as shown in Fig. 2(a), and pick a picture of a car from the test set prepared in 1b1), as shown in Fig. 2(b); use Fig. 2(a) and Fig. 2(b) as the pictures to be tested.
Step 3: produce the "conv5_3" feature visualization images of the original VGG16 network and the fine-tuned network and the corresponding deconvolved images.
3a) Open jupyter notebook in the caffe directory, find and open the file 00_classification.ipynb in the examples directory, change the test image in it to the picture to be tested shown in Fig. 2(a), call the downloaded original VGG16 model together with its deploy.prototxt file and mean file, run the forward-propagation process, and then execute the command feat = net.blobs['conv5_3'].data[0, :] to obtain the "conv5_3" feature visualization image, as shown in Fig. 4(a);
3b) Following the VGG16 network structure shown in Fig. 3, take the feature visualization image obtained in 3a) and, starting from the "conv5_3" layer, apply unpooling, un-rectification and deconvolution layer by layer in the reverse order of the network structure, obtaining the deconvolved image shown in Fig. 4(b), where:
the unpooling operation inserts each value back at the position where the pooling layer took its maximum during forward propagation and sets all other positions to 0;
the un-rectification operation sets the negative values of the input to 0 and keeps the positive values;
the deconvolution operation is computed with the deconvolution formula

X_c = Σ_k F̄_{k,c} * L_k,

where l is the index of the convolutional layer in the forward-propagation process, k indexes the convolution kernels of layer l, c indexes the convolution kernels of layer l-1, F_{k,c} is a convolution kernel of layer l, F̄_{k,c} is F_{k,c} flipped up-down and left-right, L_k is the image produced by the k-th convolution kernel of that layer, and X_c is the output result;
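The three reverse operations of step 3b) can be sketched in NumPy. This is a minimal, stride-1 illustration of the formula X_c = Σ_k F̄_{k,c} * L_k, not the caffe implementation; the 'same'-size output and the padding convention are simplifying assumptions.

```python
import numpy as np

def unpool(pooled, switches, out_shape):
    """Insert each pooled value at the forward-pass max position ('switches'); zeros elsewhere."""
    out = np.zeros(out_shape)
    rows, cols = switches  # max positions recorded during forward max-pooling
    out[rows, cols] = np.asarray(pooled).ravel()
    return out

def unrectify(x):
    """Set negative values to 0, keep positive values (a ReLU applied on the backward path)."""
    return np.maximum(x, 0)

def correlate_same(img, ker):
    """Plain cross-correlation with 'same'-size output (assumed padding convention)."""
    kh, kw = ker.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, kh - 1 - ph), (pw, kw - 1 - pw)))
    out = np.empty(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * ker)
    return out

def deconv(feature_maps, kernels):
    """X_c = sum over k of correlate(L_k, flipped F_{k,c}).
    feature_maps: (K, H, W) activations L_k; kernels: (K, C, kh, kw) forward kernels F_{k,c}."""
    K, C = kernels.shape[0], kernels.shape[1]
    _, H, W = feature_maps.shape
    recon = np.zeros((C, H, W))
    for c in range(C):
        for k in range(K):
            flipped = kernels[k, c, ::-1, ::-1]  # flip up-down and left-right
            recon[c] += correlate_same(feature_maps[k], flipped)
    return recon
```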
3c) In 00_classification.ipynb, change the test image to the picture to be tested of Fig. 2(b), call the downloaded original VGG16 model together with its deploy.prototxt file and mean file, run the forward-propagation process, then execute feat = net.blobs['conv5_3'].data[0, :] to obtain the feature visualization image of Fig. 4(c); starting from the "conv5_3" layer, apply unpooling, un-rectification and deconvolution layer by layer in reverse, following the network structure of Fig. 3, to obtain the deconvolved image of Fig. 4(d);
3d) In 00_classification.ipynb, change the test image to the picture to be tested of Fig. 2(b), call the fine-tuned network model and mean file obtained in step 1b10) together with the deploy.prototxt file modified in step 1b11), run the forward-propagation process, then execute feat = net.blobs['conv5_3'].data[0, :] to obtain the "conv5_3" feature visualization image shown in Fig. 4(e); starting from the "conv5_3" layer, apply unpooling, un-rectification and deconvolution layer by layer in reverse, following the network structure of Fig. 3, to obtain the deconvolved image shown in Fig. 4(f);
3e) Compare the sparsity of the feature visualization images in Fig. 4 and the strength with which the corresponding deconvolved images extract the salient features of the object, completing the preliminary assessment of the networks:
3e1) Compare the "conv5_3" visualization images 4(a) and 4(c) with their corresponding deconvolved images 4(b) and 4(d). Fig. 4(a) is sparser than Fig. 4(c), i.e. it has fewer non-zero values, while the salient features of the object are clearer in Fig. 4(b) than in Fig. 4(d), showing that the original VGG16 network has the stronger feature-extraction ability for animals. Analyzing ImageNet, the training set of the original VGG16 network, shows that it contains more than ten times as much animal data as car data, and the number of images per class in the training set has an important influence on what the network learns about that class; this further confirms the accuracy of the assessment.
3e2) Compare the "conv5_3" feature visualization images 4(c) and 4(e) with their corresponding deconvolved images 4(d) and 4(f). The feature visualization image of Fig. 4(e) is sparser than Fig. 4(c), while the salient features of the object are clearer, and the redundant features fewer, in Fig. 4(f) than in Fig. 4(d). This shows that the fine-tuned network has the stronger feature-extraction and classification ability for cars, so its performance is assessed as the better of the two, completing the preliminary assessment of the networks to be assessed.
Step 4: produce the coefficient attenuation curves of the "conv5_3" feature visualization images of the two networks to be assessed, and carry out the secondary assessment.
4a) Save the data of the "conv5_3" feature visualization images obtained in 3c) and 3d) as files in .mat format;
4b) In MATLAB, first reshape the data of each saved feature visualization map into a column vector, then sort the vector in descending order and call the plot function to draw the two-dimensional coefficient attenuation curve, as shown in Fig. 5;
4c) Compare the steepness of the curves in the attenuation plots of 4b): the steeper the curve, the stronger the ability of the network under assessment to extract the salient features of the car. The attenuation curve produced by the fine-tuned network for the car is the steeper, showing that the fine-tuned network can better recognize and classify the car, i.e. its performance is the better; this completes the secondary assessment of the networks to be assessed.
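Step 4b) is carried out in MATLAB in the text (reshape to a column vector, sort in descending order, plot). The same curve can be built in NumPy as below; topk_energy is added as one illustrative way to quantify the steepness that step 4c) compares by eye, and is not from the patent.

```python
import numpy as np

def attenuation_curve(feature_map):
    """Flatten the feature map and sort its coefficient magnitudes in descending order
    (the MATLAB step: reshape to a column vector, sort, then plot)."""
    return np.sort(np.abs(np.asarray(feature_map, dtype=float)).ravel())[::-1]

def topk_energy(curve, k):
    """Illustrative steepness proxy: share of total magnitude carried by the k largest
    coefficients; a steeper attenuation curve concentrates more of it here."""
    total = curve.sum()
    return float(curve[:k].sum() / total) if total > 0 else 0.0

# steep = attenuation_curve([[9.0, 0.1], [0.1, 0.1]])
# flat = attenuation_curve([[3.0, 2.5], [2.4, 1.4]])
# topk_energy(steep, 1) > topk_energy(flat, 1)  # the steeper curve wins
```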
The results of Fig. 4 and Fig. 5 show that the network assessment method proposed by the present invention makes the internal working mechanism of a network directly visible and recognizes the quality of network performance with intuitive visual images, so that the network can be improved according to requirements and objectives; the method therefore has strong practical value.
Claims (3)
1. A network performance evaluation method based on VGG16 image deconvolution, comprising the following steps:
1) prepare the two network models to be assessed and their associated files:
1a) download the original VGG16 network model and its associated files from the official website;
1b) build the caffe platform under a Linux system and fine-tune the network with the downloaded files, obtaining the fine-tuned network model and its associated files;
1c) take the original VGG16 network and the fine-tuned network as the two networks to be assessed;
2) produce the "conv5_3" feature visualization images of the two networks to be assessed and the corresponding deconvolved images:
2a) run the forward-propagation process through the original VGG16 network model and the fine-tuned network model respectively and extract the "conv5_3" feature visualization map of each;
2b) following the frameworks of the original VGG16 network and the fine-tuned network in reverse, apply unpooling, un-rectification and deconvolution to the feature visualization images layer by layer, obtaining the deconvolved images of the two networks;
2c) compare the deconvolved image of the original VGG16 network with that of the fine-tuned network to determine how strongly each network extracts the salient features of the car, completing the preliminary assessment of the networks to be assessed;
3) produce the coefficient attenuation curves of the feature visualization images:
3a) save the data of the "conv5_3" feature visualization images obtained in 2a) as files in .mat format;
3b) in MATLAB, plot the two-dimensional coefficient attenuation curves of the "conv5_3" feature visualization images of the original VGG16 network and of the fine-tuned network;
3c) compare the steepness of the curves in the attenuation plots of 3b): the steeper the curve of a coefficient attenuation plot, the stronger the ability of the network under assessment to extract the salient features of the car and the better it can recognize and classify the car, i.e. the better its performance; this completes the secondary assessment of the networks to be assessed.
2. The method according to claim 1, wherein the fine-tuning of the original VGG16 network in step 1b) is carried out as follows:
1b1) pick 1300 cars and 1300 motorcycles from the imagenet data set and divide each class into a training part and a test part in the ratio 11:2, so that the training data set contains 1100 cars and 1100 motorcycles and the test data set contains 200 cars and 200 motorcycles;
1b3) make the .txt files of the training data set and the test data set respectively and add labels in the .txt files;
1b4) convert the training data set and its corresponding .txt file into the lmdb-format files needed for network training; likewise convert the test data set and its corresponding .txt file into lmdb-format files; then generate the binary mean file;
1b5) in the training file train_val.prototxt, modify the paths of the lmdb files and of the binary mean file that it calls, and rename the last fully connected layer to fc8t; in solver.prototxt, modify the path of the train_val.prototxt that it calls, then modify the training parameters in that file: set the learning rate base_lr to 0.0001 and the maximum number of iterations max_iter to 2200;
1b6) call solver.prototxt with the parameters of the original VGG16 model as the initial values of the network to be fine-tuned, and retrain the network with the modified training parameters, obtaining the fine-tuned network model and its associated files.
3. The method according to claim 1, wherein the preliminary assessment of the networks in step 2c) is realized by comparing, for the same input picture of a car, the feature visualization images of the original VGG16 network and of the fine-tuned network and their corresponding deconvolved images: if the feature visualization image of one of the networks is sparser, while the salient features in its deconvolved image are clearer and its redundant features fewer, the information that this network ultimately uses for classification is more accurate, showing that the network has the stronger feature-extraction and classification ability for the car; that network's performance is then assessed as the better.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710014706.7A CN106682730B (en) | 2017-01-10 | 2017-01-10 | network performance evaluation method based on VGG16 image deconvolution |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106682730A true CN106682730A (en) | 2017-05-17 |
CN106682730B CN106682730B (en) | 2019-01-08 |
Family
ID=58850480
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710014706.7A Active CN106682730B (en) | 2017-01-10 | 2017-01-10 | network performance evaluation method based on VGG16 image deconvolution |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106682730B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104866900A (en) * | 2015-01-29 | 2015-08-26 | 北京工业大学 | Deconvolution neural network training method |
CN105488534A (en) * | 2015-12-04 | 2016-04-13 | 中国科学院深圳先进技术研究院 | Method, device and system for deeply analyzing traffic scene |
Non-Patent Citations (2)
Title |
---|
Li Yandong, "A survey of convolutional neural network research", Journal of Computer Applications (《计算机应用》) * |
Wang Qian, "Vehicle model recognition based on deep neural networks", Modern Computer (Professional Edition) (《现代计算机(专业版)》) * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107392085A (en) * | 2017-05-26 | 2017-11-24 | 上海精密计量测试研究所 | The method for visualizing convolutional neural networks |
CN107392085B (en) * | 2017-05-26 | 2021-07-02 | 上海精密计量测试研究所 | Method for visualizing a convolutional neural network |
CN110892414A (en) * | 2017-07-27 | 2020-03-17 | 罗伯特·博世有限公司 | Visual analysis system for classifier-based convolutional neural network |
CN110892414B (en) * | 2017-07-27 | 2023-08-08 | 罗伯特·博世有限公司 | Visual analysis system for classifier-based convolutional neural network |
CN108710906A (en) * | 2018-05-11 | 2018-10-26 | 北方民族大学 | Real-time point cloud model sorting technique based on lightweight network LightPointNet |
CN108710906B (en) * | 2018-05-11 | 2022-02-11 | 北方民族大学 | Real-time point cloud model classification method based on lightweight network LightPointNet |
CN110059772A (en) * | 2019-05-14 | 2019-07-26 | 温州大学 | Remote sensing images semantic segmentation method based on migration VGG network |
CN110059772B (en) * | 2019-05-14 | 2021-04-30 | 温州大学 | Remote sensing image semantic segmentation method based on multi-scale decoding network |
Also Published As
Publication number | Publication date |
---|---|
CN106682730B (en) | 2019-01-08 |
Legal Events
Date | Code | Title |
---|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |