CN110263863A - Fine-grained mushroom phenotype recognition method based on transfer learning and bilinear InceptionResNetV2 - Google Patents

Fine-grained mushroom phenotype recognition method based on transfer learning and bilinear InceptionResNetV2 Download PDF

Info

Publication number
CN110263863A
CN110263863A CN201910547744.8A CN110263863B
Authority
CN
China
Prior art keywords
network
bilinearity
mushroom
feature
feature extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910547744.8A
Other languages
Chinese (zh)
Other versions
CN110263863B (en)
Inventor
袁培森
申成吉
任守纲
顾兴健
车建华
徐焕良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Agricultural University
Original Assignee
Nanjing Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Agricultural University filed Critical Nanjing Agricultural University
Priority to CN201910547744.8A priority Critical patent/CN110263863B/en
Publication of CN110263863A publication Critical patent/CN110263863A/en
Application granted granted Critical
Publication of CN110263863B publication Critical patent/CN110263863B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a fine-grained mushroom phenotype recognition method based on transfer learning and bilinear InceptionResNetV2. The main steps are: (1) establish a fine-grained mushroom phenotype recognition model based on transfer learning and bilinear pooling; (2) perform transfer learning and training on the recognition model; (3) preprocess images after they are input to the recognition model; (4) perform feature extraction on the preprocessed image data. The invention combines the features extracted by two symmetric InceptionResNetV2 feature extraction networks to obtain finer-grained features and better recognition performance. It also adopts a model-based transfer learning strategy, transferring feature-extraction network weights pre-trained on the ImageNet dataset to the fine-grained mushroom phenotype dataset, which achieves better convergence within a shorter training time and improves recognition results.

Description

Fine-grained mushroom phenotype recognition method based on transfer learning and bilinear InceptionResNetV2
Technical field
The invention belongs to the fields of computer science, artificial intelligence and image processing, and in particular relates to a fine-grained mushroom phenotype recognition method based on transfer learning and bilinear InceptionResNetV2.
Background technique
Fine-grained image recognition (Fine-grained Image Recognition) is currently applied in fields such as vehicle recognition and bird recognition. However, because mushrooms have many categories with high similarity between subcategories, recognition is difficult, and there is currently no product specifically usable for mushroom phenotype recognition.
Although some fine-grained image recognition technologies exist, they cannot perform fine-grained phenotype recognition on mushrooms well. The main problems to be solved are:
(1) How to use a model-based transfer learning method to transfer model weights pre-trained on the ImageNet dataset into the fine-grained mushroom phenotype recognition model, reducing the required data volume and training time while obtaining better initial performance and convergence.
(2) How to use the bilinear pooling operation to combine the image features extracted by two feature extraction networks, obtaining finer-grained features for image recognition.
(3) How to use the InceptionResNetV2 network, with its stronger feature extraction capability, to extract image features and obtain better features for the bilinear pooling operation.
Summary of the invention
The object of the invention is to overcome the above shortcomings of the prior art and provide a fine-grained mushroom phenotype recognition method based on transfer learning and bilinear InceptionResNetV2. The method can train a model on a fine-grained mushroom phenotype dataset and recognize different types of fine-grained mushroom phenotype images.
To achieve the above object, the technical solution adopted by the invention is a fine-grained mushroom phenotype recognition method based on transfer learning and bilinear InceptionResNetV2, comprising the following steps:
Step 1: establish a fine-grained mushroom phenotype recognition model based on transfer learning and bilinear pooling;
Step 2: perform transfer learning and training on the recognition model;
Step 3: preprocess images after they are input to the recognition model;
Step 4: perform feature extraction on the preprocessed image data. Feature vectors are extracted from the image using InceptionResNetV2 feature extraction networks in a symmetric structure; a bilinear pooling operation is then applied to each extracted feature vector and its own transpose to obtain the bilinear feature matrix at each image location; the bilinear feature matrices are converted into a bilinear feature vector; finally, a fully connected layer followed by a softmax layer performs multi-class classification on the bilinear feature vector, yielding the probability of each category.
Further, the preprocessing in step 3 includes centering, normalization, scaling, random cropping and random horizontal flipping.
Further, after an image of arbitrary size is input to the recognition model, the mean of the entire dataset is first subtracted and the result divided by the standard deviation of the entire dataset, performing centering and normalization. The image is then scaled so that its short side is 448 pixels, a 448*448 square region is cut out by random cropping, and finally the image is randomly flipped horizontally.
Further, in step 4 the InceptionResNetV2 network from the Inception family of network models is used for feature extraction, and residual blocks are added in the InceptionResNetV2 feature extraction network.
Further, the first 7 layers of the InceptionResNetV2 network consist of three convolutional layers, one max-pooling layer, two convolutional layers, and one max-pooling layer. These are followed by 10 repetitions of a residual Inception module with three branches, then a simpler Inception module, then 20 residual Inception modules with two branches, then an Inception module with 4 branches, then 10 residual Inception modules with two branches, and finally a convolutional layer that produces the output.
Further, the bilinear model B is a four-tuple, as shown in formula (1),
B = (fA, fB, P, C) (1)
where fA and fB are feature functions, P is the pooling function of the model, and C is the classification function for mushrooms;
The output features at each location are combined using the matrix outer product, as shown in formula (2),
bilinear(L, I, fA, fB) = fA(L, I)^T fB(L, I) (2)
where L denotes location and scale and I denotes the image; if the dimensions of the features extracted by the two feature functions are (K, M) and (K, N) respectively, the dimension becomes (M, N) after the bilinear pooling operation; if sum pooling is used to aggregate the features at all locations, then as shown in formula (3),
Φ(I) = Σ_L bilinear(L, I, fA, fB) (3)
where Φ(I) denotes the global image feature representation;
Finally, the bilinear feature vector x = Φ(I) is passed through a signed square-root transform y = sign(x)·sqrt(|x|), followed by L2 normalization z = y / ||y||2, and then input to the classifier to obtain the final classification result.
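The signed square-root and L2 normalization step above can be sketched in numpy. This is a minimal illustration under our own function name, not code from the patent:

```python
import numpy as np

def normalize_bilinear_feature(x):
    """Signed square-root transform followed by L2 normalization,
    as applied to the bilinear feature vector x = phi(I)."""
    y = np.sign(x) * np.sqrt(np.abs(x))   # y = sign(x) * sqrt(|x|)
    return y / np.linalg.norm(y)          # z = y / ||y||_2

# Small worked example: x = [4, -9, 0] gives y = [2, -3, 0],
# which is then scaled to unit L2 norm.
z = normalize_bilinear_feature(np.array([4.0, -9.0, 0.0]))
```

The signed square root compresses large bilinear responses while preserving their sign; the L2 step makes features from different images comparable in scale before classification.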
Further, the training process is divided into two steps:
(1) First, the InceptionResNetV2 feature extraction network is frozen with the pre-trained parameters loaded from the ImageNet dataset, and only the randomly initialized parameters of the final fully connected layer are trained;
(2) After the network converges, the parameters of the InceptionResNetV2 feature extraction network are unfrozen and the whole model is fine-tuned with a smaller learning rate.
Further, the overall training process is as follows:
(1) Build the fine-grained mushroom phenotype recognition model based on transfer learning and bilinear pooling, including InceptionResNetV2 as the feature extraction network;
(2) Initialize the InceptionResNetV2 feature extraction network with the ImageNet pre-trained model, and initialize the fully connected layer parameters with the Glorot normal initializer;
(3) Freeze the parameters of the InceptionResNetV2 feature extraction network so that the subsequent training process cannot update this part through backpropagation;
(4) Obtain preprocessed training samples from the input pipeline, with batch size 8 and image size 448*448;
(5) Feed the batch of training samples from (4) into the network model; after feature extraction, the bilinear pooling operation and the fully connected layer, the probability of each category is computed by softmax;
(6) Compute the loss of the network model with the categorical cross-entropy loss function;
(7) Compute gradients and use the SGD optimizer with initial learning rate 1.0, learning rate decay 1e-8 and momentum 0.9; backpropagate the error through the whole network and update the parameters of the fully connected layer;
(8) Check whether the given iteration count of 100 is reached, or the early stopping condition that the validation loss varies by no more than 0.001 over 10 iterations is met; if so, the network is considered converged, go to step (9); otherwise return to step (4);
(9) Change the learning rate of the SGD optimizer to 0.001;
(10) Unfreeze the pre-trained parameters of the InceptionResNetV2 feature extraction network so that the network can update this part through backpropagation;
(11) Obtain preprocessed training samples from the input pipeline, with batch size 8 and image size 448*448;
(12) Feed the batch of training samples from (11) into the network model; after feature extraction, the bilinear pooling operation and the fully connected layer, the probability of each category is computed by softmax;
(13) Compute the loss of the network model with the categorical cross-entropy loss function;
(14) Compute gradients and use the SGD optimizer with initial learning rate 0.001, learning rate decay 1e-8 and momentum 0.9; backpropagate the error through the whole network and update the parameters of every layer;
(15) Check whether the given iteration count of 70 is reached, or the early stopping condition that the validation loss varies by no more than 0.001 over 10 iterations is met; if so, the network is considered converged, go to step (16); otherwise return to step (11);
(16) Compute the accuracy, precision, recall and F1 value of the network model on the test set.
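The four test-set metrics of the final step can be sketched as follows. Macro averaging over the mushroom classes is an assumption on our part; the patent only names the metrics, not the averaging scheme:

```python
import numpy as np

def evaluate(y_true, y_pred, n_classes):
    """Accuracy plus macro-averaged precision, recall and F1.
    Macro averaging is an assumption; the patent only names the metrics."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    accuracy = np.mean(y_true == y_pred)
    precisions, recalls = [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))  # true positives
        fp = np.sum((y_pred == c) & (y_true != c))  # false positives
        fn = np.sum((y_pred != c) & (y_true == c))  # false negatives
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    p, r = np.mean(precisions), np.mean(recalls)
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return accuracy, p, r, f1
```

For example, with true labels [0, 0, 1, 1] and predictions [0, 1, 1, 1], accuracy is 0.75 and the macro precision and recall differ because class 0 has a missed sample.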
The invention has the following benefits: (1) Bilinear pooling combines the features extracted by two symmetric InceptionResNetV2 feature extraction networks, obtaining finer-grained features and better recognition performance. (2) A model-based transfer learning strategy transfers feature-extraction network weights pre-trained on the ImageNet dataset to the fine-grained mushroom phenotype dataset, achieving better convergence within a shorter training time and improving recognition results.
On the fine-grained mushroom phenotype dataset, the invention is compared with symmetric VGG16 and symmetric VGG19 models on four metrics: accuracy, precision, recall and F1 value, as shown in Table 1.
Table 1 Results
As can be seen from the table, the proposed fine-grained mushroom phenotype recognition model based on transfer learning and bilinear pooling with a symmetric InceptionResNetV2 network performs best, reaching an accuracy of 0.90, a precision of 0.91, a recall of 0.90 and an F1 value of 0.90, about 2%~6% higher than the other methods on all metrics.
Detailed description of the invention
Fig. 1 is the framework of the bilinear InceptionResNetV2 fine-grained mushroom phenotype recognition model.
Fig. 2 is the preprocessing flow chart.
Fig. 3 is the network structure of the Inception module.
Fig. 4 is the overall network structure of InceptionResNetV2.
Fig. 5 is a schematic diagram of transfer learning.
Fig. 6 is the training flow chart.
Specific embodiment
The invention is described in detail below with reference to the accompanying drawings and specific embodiments.
I. Network model
The invention chooses the InceptionResNetV2 network as the feature extraction network in the Bilinear CNN network, hoping to improve the performance of the overall network model through the stronger feature extraction capability brought by a deeper network.
This yields the fine-grained mushroom phenotype recognition model based on transfer learning and bilinear pooling; the overall network structure is shown in Fig. 1. After an image is input to the network model, it first passes through the preprocessing steps of centering, normalization, random cropping and random horizontal flipping. Feature vectors are then extracted from the image by the InceptionResNetV2 feature extraction networks in a symmetric structure; a bilinear pooling operation is applied to each extracted feature vector and its own transpose to obtain the bilinear feature matrix at each image location, and the bilinear feature matrices are converted into a bilinear feature vector. Finally, a fully connected layer followed by a softmax layer performs multi-class classification on the bilinear feature vector, yielding the probability of each category. The mushroom categories involved in the invention are: Amanita vaginata var. vaginata, Xerocomus subtomentosus, Conocybe albipes, Cortinarius rubellus, Helvella crispa, Cuphophyllus flavipes, Hygrocybe reidii, Inocybe erubescens, Lyophyllum fumosum, Russula pectinatoides, Tricholoma fulvum, Tricholoma sciodes, Lycoperdon utriforme, Rhodocollybia butyracea f. asema.
1. Image input and preprocessing
After an image of arbitrary size is input to the network model, the mean of the entire dataset is first subtracted and the result divided by the standard deviation of the entire dataset, performing centering and normalization. The purpose is to scale the data to near zero without changing its distribution, reducing differences between samples when computing gradients and accelerating network convergence.
The image is then scaled so that its short side is 448 pixels, a 448*448 square region is cut out by random cropping, and finally the image is randomly flipped horizontally. Preprocessing steps such as random cropping and random horizontal flipping increase the diversity of the dataset and give the network model better generalization. Because of the characteristics of the mushroom dataset — mushrooms grow from bottom to top — only horizontal flipping is used, not vertical flipping.
Image color is represented with the three RGB channels, so the preprocessed image data size is 448*448*3. This data is then fed into the feature extraction network for processing; the overall preprocessing flow is shown in Fig. 2.
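The preprocessing pipeline above can be sketched in plain numpy. This is an illustrative sketch, not the patent's implementation: it assumes the image has already been resized so its short side is 448 pixels, and the resize itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def preprocess(img, mean, std, crop=448):
    """Center/normalize with dataset statistics, random-crop a
    crop x crop square, then randomly flip horizontally.
    Assumes `img` was already resized so its short side is `crop` pixels."""
    x = (img - mean) / std                   # centering + normalization
    h, w, _ = x.shape
    top = rng.integers(0, h - crop + 1)      # random crop offsets
    left = rng.integers(0, w - crop + 1)
    x = x[top:top + crop, left:left + crop]
    if rng.random() < 0.5:                   # horizontal flip only;
        x = x[:, ::-1]                       # mushrooms grow upward
    return x

# A 448 x 600 RGB image yields a 448 x 448 x 3 tensor for the network.
out = preprocess(rng.random((448, 600, 3)), mean=0.5, std=0.25)
```

The mean and std would in practice be the per-channel statistics of the mushroom dataset; scalars are used here only to keep the sketch short.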
2. Feature extraction network
The feature extraction network of the fine-grained mushroom phenotype recognition model based on transfer learning and bilinear pooling is built from the InceptionResNetV2 network. As shown in Fig. 3, a structure that processes multiple convolution kernels in 4 parallel branches and then merges them (Bottleneck Layer) increases the width of the network and its tolerance to different size scales, alleviating problems of deep neural networks such as too many parameters, excessive computational complexity and gradient dispersion. Factorized convolutions provide a reasonable dimension decomposition; such a decomposition saves many parameters while largely preserving fine details, reducing computation to accelerate network convergence, while further deepening the network and increasing its non-linearity.
The overall structure of the InceptionResNetV2 feature extraction network is shown in Fig. 4. Borrowing from Microsoft's residual network design, residual blocks are added so that activations can skip some layers and be propagated through shortcuts. This design alleviates gradient vanishing in deeper network structures, making it possible to train ultra-deep networks and obtain better training results at greater depth. The first 7 layers of the InceptionResNetV2 network consist of three convolutional layers, one max-pooling layer, two convolutional layers and one max-pooling layer, followed by 10 residual Inception modules with three branches, a simpler Inception module, 20 residual Inception modules with two branches, an Inception module with 4 branches, 10 residual Inception modules with two branches, and a final convolutional layer that produces the output.
The parameters of the main layers of the InceptionResNetV2 feature extraction network are shown in Table 1, which lists only the first 7 convolution and max-pooling layers, the merge, convolution and residual layers of each residual Inception module, and the last convolutional layer; each convolutional layer is followed by a Batch Normalization layer and a ReLU layer. Starting from an input of dimension 448*448*3, successive convolutions increase the depth of the image, max-pooling layers halve its spatial dimensions, and each residual Inception module keeps the dimensions unchanged; with each group of residual Inception modules the spatial size shrinks and the depth grows. The final output has dimension 12*12*1536, and the total parameter count is 54,336,736.
Table 1 Parameters of the main layers of the InceptionResNetV2 feature extraction network
3. Bilinear pooling and classification
Bilinear means that for a function f(x, y), when one parameter, e.g. x, is fixed, f(x, y) is linear in the other parameter y. In the invention, the bilinear model B is a four-tuple, as shown in formula (1),
B = (fA, fB, P, C) (1)
where fA and fB are feature functions, P is the pooling function of the model, and C is the classification function for mushrooms.
A feature function f — the role of the feature extraction network in the invention — maps the input image and a location to a feature of size c × D, where D is the depth. The output features at each location are combined using the matrix outer product, as shown in formula (2),
bilinear(L, I, fA, fB) = fA(L, I)^T fB(L, I) (2)
where L denotes location and scale and I denotes the image. If the dimensions of the features extracted by the two feature functions are (K, M) and (K, N) respectively, the dimension becomes (M, N) after the bilinear pooling operation. If sum pooling is used to aggregate the features at all locations, then as shown in formula (3),
Φ(I) = Σ_L bilinear(L, I, fA, fB) (3)
where Φ(I) denotes the global image feature representation.
Finally, the bilinear feature vector x = Φ(I) is passed through a signed square-root transform y = sign(x)·sqrt(|x|), followed by L2 normalization z = y / ||y||2, and then input to the classifier to obtain the final classification result.
In the invention, the features extracted by the InceptionResNetV2 feature extraction network have length and width 12 and depth 1536. To apply the bilinear pooling operation, the three-dimensional feature tensor is first reshaped into a two-dimensional feature matrix of size 144*1536. This matrix is then transposed to obtain a 1536*144 matrix, and the matrix product of the transposed matrix with the original matrix — the bilinear pooling operation — yields a bilinear feature of size 1536*1536. The bilinear feature is flattened into a one-dimensional bilinear feature vector of size 2,359,296, followed by the signed square-root transform and the L2 normalization layer, and then a fully connected layer with softmax performs multi-class classification; the parameter count of the fully connected layer is 33,030,158.
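The dimensions above can be traced with a small numpy sketch. Random values stand in for the real InceptionResNetV2 activations, and the sketch assumes the symmetric case fA = fB, where the matrix product X^T X sums the per-location outer products:

```python
import numpy as np

# Feature map from the backbone: 12 x 12 spatial locations, depth 1536.
features = np.random.rand(12, 12, 1536)

X = features.reshape(144, 1536)       # flatten locations: 144 x 1536
bilinear = X.T @ X                    # X^T X = sum over the 144 locations of
                                      # the outer products: 1536 x 1536
x = bilinear.reshape(-1)              # flatten: 2,359,296-dim feature vector
y = np.sign(x) * np.sqrt(np.abs(x))   # signed square root
z = y / np.linalg.norm(y)             # L2 normalization
# z would then be fed to the fully connected layer + softmax
```

Note that `X.T @ X` computes all 144 location-wise outer products and their sum in one matrix multiplication, which is why the implementation never materializes per-location 1536 x 1536 matrices.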
II. Transfer learning
In the invention, model-based transfer learning is used with the ImageNet dataset of about 14.19 million images as the source domain. ImageNet contains many categories, including plants, mushrooms and other categories similar to the mushroom target task of the invention. Transferring model weights pre-trained on the ImageNet dataset to the mushroom dataset of the invention, as shown in Fig. 5, not only reduces the required data volume but also yields higher initial performance, faster training and better convergence.
The pre-trained model is obtained from the Keras pre-trained model library and loaded into the fine-grained mushroom phenotype recognition model based on transfer learning and bilinear pooling. The training process is divided into two steps:
(1) First, the InceptionResNetV2 feature extraction network is frozen with the pre-trained parameters loaded from the ImageNet dataset, and only the randomly initialized parameters of the final fully connected layer are trained.
(2) After the network converges, the parameters of the InceptionResNetV2 feature extraction network are unfrozen and the whole model is fine-tuned with a smaller learning rate.
The reason for freezing the pre-trained InceptionResNetV2 parameters in the first step is that the added fully connected layer is randomly initialized; at the beginning it would produce large loss values and therefore large gradients, which could easily destroy the pre-trained parameters. The whole model is therefore fine-tuned with a smaller learning rate only after the fully connected layer has converged.
The pre-trained model optimizer for transfer learning in the invention uses the stochastic gradient descent (Stochastic Gradient Descent, SGD) algorithm.
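The two-phase freeze/unfreeze schedule can be sketched without any deep-learning framework. The `Layer` class and function names here are illustrative stand-ins, not the Keras API:

```python
# Minimal, framework-free sketch of the two-phase schedule described above.
class Layer:
    def __init__(self, name, pretrained):
        self.name, self.pretrained = name, pretrained
        self.trainable = True

model = [Layer("inception_resnet_v2", pretrained=True),   # ImageNet weights
         Layer("fc_softmax", pretrained=False)]           # random init

def phase1(model):
    """Freeze the pre-trained backbone; train only the new head at lr 1.0."""
    for layer in model:
        layer.trainable = not layer.pretrained
    return 1.0  # initial learning rate of the first phase

def phase2(model):
    """After the head converges, unfreeze everything; fine-tune at lr 0.001."""
    for layer in model:
        layer.trainable = True
    return 0.001  # smaller learning rate protects the pre-trained weights
```

In a real Keras model the same effect is obtained by setting each layer's `trainable` attribute and recompiling; the sketch only shows the schedule's logic.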
The overall training process is shown in Fig. 6; the specific steps are as follows:
(1) Build the fine-grained mushroom phenotype recognition model based on transfer learning and bilinear pooling, including InceptionResNetV2 as the feature extraction network;
(2) Initialize the InceptionResNetV2 feature extraction network with the ImageNet pre-trained model, and initialize the fully connected layer parameters with the Glorot normal initializer;
(3) Freeze the parameters of the InceptionResNetV2 feature extraction network so that the subsequent training process cannot update this part through backpropagation;
(4) Obtain preprocessed training samples from the input pipeline, with batch size 8 and image size 448*448;
(5) Feed the batch of training samples from (4) into the network model; after feature extraction, the bilinear pooling operation and the fully connected layer, the probability of each category is computed by softmax;
(6) Compute the loss of the network model with the categorical cross-entropy loss function;
(7) Compute gradients and use the SGD optimizer with initial learning rate 1.0, learning rate decay 1e-8 and momentum 0.9; backpropagate the error through the whole network and update the parameters of the fully connected layer;
(8) Check whether the given iteration count of 100 is reached, or the early stopping condition that the validation loss varies by no more than 0.001 over 10 iterations is met; if so, the network is considered converged, go to step (9); otherwise return to step (4);
(9) Change the learning rate of the SGD optimizer to 0.001;
(10) Unfreeze the pre-trained parameters of the InceptionResNetV2 feature extraction network so that the network can update this part through backpropagation;
(11) Obtain preprocessed training samples from the input pipeline, with batch size 8 and image size 448*448;
(12) Feed the batch of training samples from (11) into the network model; after feature extraction, the bilinear pooling operation and the fully connected layer, the probability of each category is computed by softmax;
(13) Compute the loss of the network model with the categorical cross-entropy loss function;
(14) Compute gradients and use the SGD optimizer with initial learning rate 0.001, learning rate decay 1e-8 and momentum 0.9; backpropagate the error through the whole network and update the parameters of every layer;
(15) Check whether the given iteration count of 70 is reached, or the early stopping condition that the validation loss varies by no more than 0.001 over 10 iterations is met; if so, the network is considered converged, go to step (16); otherwise return to step (11);
(16) Compute the accuracy, precision, recall and F1 value of the network model on the test set.
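Two pieces of the loop above can be sketched concretely: the SGD-with-momentum parameter update and the early-stopping check. The classic momentum formulation used here (v ← momentum·v − lr·grad; w ← w + v) is the one Keras's SGD implements, but treat the sketch as illustrative:

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr, momentum=0.9):
    """One SGD update with momentum 0.9, the optimizer setting used above."""
    velocity = momentum * velocity - lr * grad  # accumulate a decayed velocity
    return w + velocity, velocity               # move weights along it

def early_stop(val_losses, window=10, tol=0.001):
    """Stop when the last `window` validation losses vary by no more than tol,
    matching the 10-iteration / 0.001 condition in steps (8) and (15)."""
    if len(val_losses) < window:
        return False
    recent = val_losses[-window:]
    return max(recent) - min(recent) <= tol
```

The per-step learning-rate decay of 1e-8 mentioned in the text would simply shrink `lr` slightly before each call; it is omitted here to keep the update rule readable.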
The basic principles, main features and advantages of the invention have been shown and described above. Those skilled in the art should appreciate that the above embodiments do not limit the invention in any form; all technical solutions obtained by equivalent replacement or similar means fall within the protection scope of the invention.
Parts not covered by the invention are the same as, or can be realized with, the prior art.

Claims (8)

1. A fine-grained mushroom phenotype recognition method based on transfer learning and bilinear InceptionResNetV2, characterized by comprising the following steps:
Step 1: establish a fine-grained mushroom phenotype recognition model based on transfer learning and bilinear pooling;
Step 2: perform transfer learning and training on the recognition model;
Step 3: preprocess images after they are input to the recognition model;
Step 4: perform feature extraction on the preprocessed image data; feature vectors are extracted from the image using InceptionResNetV2 feature extraction networks in a symmetric structure; a bilinear pooling operation is then applied to each extracted feature vector and its own transpose to obtain the bilinear feature matrix at each image location; the bilinear feature matrices are converted into a bilinear feature vector; finally, a fully connected layer followed by a softmax layer performs multi-class classification on the bilinear feature vector, yielding the probability of each category.
2. The fine-grained mushroom phenotype recognition method based on transfer learning and bilinear InceptionResNetV2 according to claim 1, characterized in that the preprocessing in step 3 includes centering, normalization, scaling, random cropping and random horizontal flipping.
3. The fine-grained mushroom phenotype recognition method based on transfer learning and bilinear InceptionResNetV2 according to claim 2, characterized in that after an image of arbitrary size is input to the recognition model, the mean of the entire dataset is first subtracted and the result divided by the standard deviation of the entire dataset, performing centering and normalization; the image is then scaled so that its short side is 448 pixels, a 448*448 square region is cut out by random cropping, and finally the image is randomly flipped horizontally.
4. The fine-grained mushroom phenotype recognition method based on transfer learning and bilinear InceptionResNetV2 according to claim 1, characterized in that in step 4 the InceptionResNetV2 network from the Inception family of network models is used for feature extraction, and residual blocks are added in the InceptionResNetV2 feature extraction network.
5. The fine-grained mushroom phenotype recognition method based on transfer learning and bilinear InceptionResNetV2 according to claim 4, characterized in that the first 7 layers of the InceptionResNetV2 network consist of three convolutional layers, one max-pooling layer, two convolutional layers, and one max-pooling layer, followed by 10 residual Inception modules with three branches, a simpler Inception module, 20 residual Inception modules with two branches, an Inception module with 4 branches, 10 residual Inception modules with two branches, and finally a convolutional layer that produces the output.
6. The fine-grained mushroom phenotype recognition method based on transfer learning and bilinear InceptionResNetV2 according to claim 1, characterized in that the bilinear model B consists of a four-tuple, as shown in formula (1),
B = (f_A, f_B, P, C) (1)
wherein f_A and f_B are feature functions, P is the pooling function of the model, and C is the mushroom classification function;
the output features of the two feature functions at each position are combined by the matrix outer product, as shown in formula (2),
bilinear(L, I, f_A, f_B) = f_A(L, I)^T f_B(L, I) (2)
wherein L denotes position and scale and I denotes the image; if the dimensions of the features extracted by the two feature functions are (K, M) and (K, N) respectively, then after the bilinear pooling operation the dimension becomes (M, N); if sum pooling is used to aggregate the features at all positions, formula (3) is obtained,
Φ(I) = Σ_L bilinear(L, I, f_A, f_B) (3)
wherein Φ(I) denotes the global image feature representation;
the bilinear feature vector x = Φ(I) is finally passed through the signed square-root transformation y = sign(x)·sqrt(|x|) and L2 normalization z = y / ||y||_2, and then input to the classifier to obtain the final classification result.
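Formulas (2)–(3) and the normalization of this claim can be sketched in a few lines of NumPy; the shapes and toy feature maps below are illustrative assumptions.

```python
import numpy as np

def bilinear_pool(feat_a, feat_b):
    """Bilinear pooling of claim 6: feat_a has shape (K, M) and feat_b
    shape (K, N), where K is the number of spatial positions.  The outer
    product at each position is aggregated by sum pooling (formula (3)),
    giving the (M, N) global descriptor Phi(I), which is then flattened,
    signed-square-rooted, and L2-normalized."""
    phi = feat_a.T @ feat_b                  # sum over L of f_A^T f_B
    x = phi.reshape(-1)                      # flatten to a vector
    y = np.sign(x) * np.sqrt(np.abs(x))      # signed square root
    z = y / (np.linalg.norm(y) + 1e-12)      # L2 normalization
    return z
```

When the same InceptionResNetV2 feature map is used for both streams (M = N), the descriptor has length M*N, which is what the final fully connected layer consumes.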
7. The fine-grained mushroom phenotype recognition method based on transfer learning and bilinear InceptionResNetV2 according to claim 1, characterized in that the training process is divided into two steps:
(1) first, the InceptionResNetV2 feature extraction network is frozen with the pre-trained parameters obtained on the ImageNet dataset, and only the randomly initialized final fully connected layer is allowed to train;
(2) after the network converges, the parameters of the InceptionResNetV2 feature extraction network are unfrozen and fine-tuned with a smaller learning rate.
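The two-step schedule can be illustrated with a toy optimizer loop. The parameter-group names and the toy gradients are hypothetical; only the freeze-then-fine-tune control flow mirrors the claim.

```python
import numpy as np

# Hypothetical parameter groups: a "backbone" standing in for the
# pre-trained feature extractor, and a randomly initialized final
# fully connected layer "fc".
params = {
    "backbone": np.ones(4),
    "fc": np.random.default_rng(0).normal(size=4),
}
grads = {"backbone": np.full(4, 0.1), "fc": np.full(4, 0.1)}

def sgd_step(params, grads, lr, trainable):
    """Update only the parameter groups listed in `trainable`;
    frozen groups are simply excluded from the update."""
    for name in trainable:
        params[name] = params[name] - lr * grads[name]

# Step (1): backbone frozen, only the fully connected layer trains.
sgd_step(params, grads, lr=1.0, trainable=["fc"])
assert np.allclose(params["backbone"], 1.0)   # backbone untouched

# Step (2): after convergence, unfreeze everything and fine-tune
# with a much smaller learning rate.
sgd_step(params, grads, lr=0.001, trainable=["backbone", "fc"])
```

In a deep-learning framework the same effect is achieved by marking the backbone layers as non-trainable for phase one and re-enabling them, with a reduced learning rate, for phase two.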
8. The fine-grained mushroom phenotype recognition method based on transfer learning and bilinear InceptionResNetV2 according to claim 1 or claim 7, characterized in that the overall training process is as follows:
(1) build the fine-grained mushroom phenotype recognition model based on transfer learning and bilinear pooling, which includes InceptionResNetV2 as the feature extraction network;
(2) initialize the InceptionResNetV2 feature extraction network with the ImageNet pre-trained model, and initialize the fully connected layer parameters with the Glorot normal initializer;
(3) freeze the parameters of the InceptionResNetV2 feature extraction network so that the subsequent training process cannot update this part of the parameters through backpropagation;
(4) obtain preprocessed training samples from the input pipeline, with a batch size of 8 and an image size of 448*448;
(5) feed the batch of training samples obtained in (4) into the network model; after feature extraction, the bilinear pooling operation, and the fully connected layer, compute the probability of each class with softmax;
(6) compute the loss value of the network model with the categorical cross-entropy loss function;
(7) compute the gradient values and use the SGD optimizer, with an initial learning rate of 1.0, learning-rate decay to 1e-8, and momentum set to 0.9; backpropagate the error through the whole network and update the parameters of the fully connected layer;
(8) judge whether the given number of iterations, 100, has been reached, or whether the early-stopping condition is met, namely that the validation loss changes by no more than 0.001 over 10 iterations; if so, the network is considered to have converged and the process proceeds to step (9); otherwise return to step (4);
(9) change the learning rate of the SGD optimizer to 0.001;
(10) unfreeze the pre-trained parameters of the InceptionResNetV2 feature extraction network so that this part of the parameters can be updated through network backpropagation;
(11) obtain preprocessed training samples from the input pipeline, with a batch size of 8 and an image size of 448*448;
(12) feed the batch of training samples obtained in (11) into the network model; after feature extraction, the bilinear pooling operation, and the fully connected layer, compute the probability of each class with softmax;
(13) compute the loss value of the network model with the categorical cross-entropy loss function;
(14) compute the gradient values and use the SGD optimizer, with an initial learning rate of 0.001, learning-rate decay to 1e-8, and momentum set to 0.9; backpropagate the error through the whole network and update the parameters of every layer of the network;
(15) judge whether the given number of iterations, 70, has been reached, or whether the early-stopping condition is met, namely that the validation loss changes by no more than 0.001 over 10 iterations; if so, the network is considered to have converged and the process proceeds to step (16); otherwise return to step (11);
(16) compute the accuracy, precision, recall, and F1 score of the network model on the test set.
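Steps (8), (15), and (16) above rely on an early-stopping test and on standard classification metrics; a compact sketch follows. Macro averaging over classes is an assumption, since the claim does not specify the averaging mode.

```python
def early_stop(val_losses, patience=10, min_delta=0.001):
    """True when the validation loss has changed by no more than
    min_delta over the last `patience` iterations (steps (8)/(15))."""
    if len(val_losses) <= patience:
        return False
    window = val_losses[-(patience + 1):]
    return max(window) - min(window) <= min_delta

def metrics(y_true, y_pred, num_classes):
    """Accuracy and macro-averaged precision, recall, and F1 (step (16))."""
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precs, recs, f1s = [], [], []
    for c in range(num_classes):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precs.append(prec); recs.append(rec); f1s.append(f1)
    n = num_classes
    return acc, sum(precs) / n, sum(recs) / n, sum(f1s) / n
```

The `patience` and `min_delta` defaults match the values stated in steps (8) and (15).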
CN201910547744.8A 2019-06-24 2019-06-24 Fine-grained fungus phenotype identification method based on transfer learning and bilinear InceptionResNet V2 Active CN110263863B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910547744.8A CN110263863B (en) 2019-06-24 2019-06-24 Fine-grained fungus phenotype identification method based on transfer learning and bilinear InceptionResNet V2

Publications (2)

Publication Number Publication Date
CN110263863A true CN110263863A (en) 2019-09-20
CN110263863B CN110263863B (en) 2021-09-10

Family

ID=67920752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910547744.8A Active CN110263863B (en) 2019-06-24 2019-06-24 Fine-grained fungus phenotype identification method based on transfer learning and bilinear InceptionResNet V2

Country Status (1)

Country Link
CN (1) CN110263863B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509976A (en) * 2018-02-12 2018-09-07 北京佳格天地科技有限公司 The identification device and method of animal
US20180260655A1 (en) * 2014-11-07 2018-09-13 Adobe Systems Incorporated Local feature representation for image recognition
CN109086836A (en) * 2018-09-03 2018-12-25 淮阴工学院 A kind of automatic screening device of cancer of the esophagus pathological image and its discriminating method based on convolutional neural networks
CN109389245A (en) * 2018-09-06 2019-02-26 浙江鸿程计算机系统有限公司 A kind of multifactor fusion school district school age population prediction technique based on deep neural network
CN109508655A (en) * 2018-10-28 2019-03-22 北京化工大学 The SAR target identification method of incomplete training set based on twin network
CN109508650A (en) * 2018-10-23 2019-03-22 浙江农林大学 A kind of wood recognition method based on transfer learning
CN109635712A (en) * 2018-12-07 2019-04-16 杭州电子科技大学 Spontaneous micro- expression type method of discrimination based on homogeneous network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chen Hesen: "Research on Fine-Grained Image Recognition Based on Deep Learning", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110781921A (en) * 2019-09-25 2020-02-11 浙江农林大学 Depth residual error network and transfer learning-based muscarinic image identification method and device
CN111027626B (en) * 2019-12-11 2023-04-07 西安电子科技大学 Flow field identification method based on deformable convolution network
WO2021115123A1 (en) * 2019-12-12 2021-06-17 苏州科技大学 Method for footprint image retrieval
US11809485B2 (en) 2019-12-12 2023-11-07 Suzhou University of Science and Technology Method for retrieving footprint images
CN111401122B (en) * 2019-12-27 2023-09-26 航天信息股份有限公司 Knowledge classification-based complex target asymptotic identification method and device
CN111401122A (en) * 2019-12-27 2020-07-10 航天信息股份有限公司 Knowledge classification-based complex target asymptotic identification method and device
CN111462068A (en) * 2020-03-30 2020-07-28 华南理工大学 Bolt and nut detection method based on transfer learning
CN111462068B (en) * 2020-03-30 2023-03-21 华南理工大学 Bolt and nut detection method based on transfer learning
CN111613287A (en) * 2020-03-31 2020-09-01 武汉金域医学检验所有限公司 Report coding model generation method, system and equipment based on Glow network
CN111613287B (en) * 2020-03-31 2023-08-04 武汉金域医学检验所有限公司 Report coding model generation method, system and equipment based on Glow network
CN111666984B (en) * 2020-05-20 2023-08-25 海南电网有限责任公司电力科学研究院 Overvoltage intelligent identification method based on transfer learning
CN111666984A (en) * 2020-05-20 2020-09-15 海南电网有限责任公司电力科学研究院 Intelligent overvoltage identification method based on transfer learning
CN111709326A (en) * 2020-05-29 2020-09-25 安徽艾睿思智能科技有限公司 Deep learning-based cascaded granular crop quality analysis method and system
CN111709326B (en) * 2020-05-29 2023-05-12 安徽省科亿信息科技有限公司 Cascade type particle crop quality analysis method and system based on deep learning
CN111860601A (en) * 2020-06-22 2020-10-30 北京林业大学 Method and device for predicting large fungus species
CN111860601B (en) * 2020-06-22 2023-10-17 北京林业大学 Method and device for predicting type of large fungi
CN111914802B (en) * 2020-08-17 2023-02-07 中国电波传播研究所(中国电子科技集团公司第二十二研究所) Ionosphere return scattering propagation pattern identification method based on transfer learning
CN111914802A (en) * 2020-08-17 2020-11-10 中国电波传播研究所(中国电子科技集团公司第二十二研究所) Ionosphere return scattering propagation pattern identification method based on transfer learning
CN112052904A (en) * 2020-09-09 2020-12-08 陕西理工大学 Method for identifying plant diseases and insect pests based on transfer learning and convolutional neural network
CN112364899A (en) * 2020-10-27 2021-02-12 西安科技大学 Abrasive grain ferrographic image intelligent identification method based on virtual image and transfer learning
CN112633370A (en) * 2020-12-22 2021-04-09 中国医学科学院北京协和医院 Detection method, device, equipment and medium for filamentous fungus morphology
CN113128593A (en) * 2021-04-20 2021-07-16 南京林业大学 Plant fine-grained identification method based on bilinear convolutional neural network
CN113642518A (en) * 2021-08-31 2021-11-12 山东省计算中心(国家超级计算济南中心) Cell membrane coloring integrity judging method for her2 pathological image based on transfer learning
CN113642518B (en) * 2021-08-31 2023-08-22 山东省计算中心(国家超级计算济南中心) Transfer learning-based her2 pathological image cell membrane coloring integrity judging method
CN114722928A (en) * 2022-03-29 2022-07-08 河海大学 Blue-green algae image identification method based on deep learning
CN114722928B (en) * 2022-03-29 2024-04-16 河海大学 Blue algae image recognition method based on deep learning

Also Published As

Publication number Publication date
CN110263863B (en) 2021-09-10

Similar Documents

Publication Publication Date Title
CN110263863A (en) Fine granularity mushroom phenotype recognition methods based on transfer learning Yu bilinearity InceptionResNetV2
CN103927531B (en) It is a kind of based on local binary and the face identification method of particle group optimizing BP neural network
CN104462494B (en) A kind of remote sensing image retrieval method and system based on unsupervised feature learning
CN109934761A (en) Jpeg image steganalysis method based on convolutional neural networks
CN110084221A (en) A kind of serializing face critical point detection method of the tape relay supervision based on deep learning
CN111696101A (en) Light-weight solanaceae disease identification method based on SE-Inception
CN106778918A (en) A kind of deep learning image identification system and implementation method for being applied to mobile phone terminal
CN111339818B (en) Face multi-attribute recognition system
CN110363253A (en) A kind of Surfaces of Hot Rolled Strip defect classification method based on convolutional neural networks
Ding et al. Research on daily objects detection based on deep neural network
Hara et al. Towards good practice for action recognition with spatiotemporal 3d convolutions
CN106339753A (en) Method for effectively enhancing robustness of convolutional neural network
CN109783887A (en) A kind of intelligent recognition and search method towards Three-dimension process feature
CN107679572A (en) A kind of image discriminating method, storage device and mobile terminal
CN110490298A (en) Lightweight depth convolutional neural networks model based on expansion convolution
CN108205703A (en) Multi-input multi-output matrix average value pooling vectorization implementation method
CN109165733A (en) Multi-input multi-output matrix maximum pooling vectorization implementation method
CN112906747A (en) Knowledge distillation-based image classification method
CN110674326A (en) Neural network structure retrieval method based on polynomial distribution learning
CN108182316A (en) A kind of Electromagnetic Simulation method and its electromagnetism brain based on artificial intelligence
CN111282281B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110046568A (en) A kind of video actions recognition methods based on Time Perception structure
Zhang et al. Summary of convolutional neural network compression technology
Ma et al. A survey of sparse-learning methods for deep neural networks
CN110188621A (en) A kind of three-dimensional face expression recognition methods based on SSF-IL-CNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant