CN112487909A - Fruit variety identification method based on parallel convolutional neural network - Google Patents
Fruit variety identification method based on parallel convolutional neural network
- Publication number
- CN112487909A (application CN202011327435.9A)
- Authority
- CN
- China
- Prior art keywords
- neural network
- fruit
- convolutional neural
- parallel
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a fruit variety identification method based on a parallel convolutional neural network, comprising the following steps. Step 1: convert the fruit image carrying a category label into a 3-channel picture of 128 × 128 pixels. Step 2: apply translation, rotation and mirror-flip operations to the result of step 1 to generate fruit image data sets at multiple scales. Step 3: input the result of step 2 into a generative adversarial network model for data enhancement. Step 4: construct a parallel convolutional neural network model and perform multi-scale feature extraction on the result of step 3. Step 5: using the features extracted in step 4, the parallel convolutional neural network model predicts the category of the fruit image; the prediction is compared with the category label, and the model is trained according to the comparison result. The method improves the accuracy of image-based fruit variety identification and is of practical significance for mechanized and intelligent applications in the fruit industry.
Description
Technical Field
The invention relates to image recognition, and in particular to a fruit variety identification method based on a parallel convolutional neural network.
Background
At present, the degree of mechanization in China's fruit industry is low, and most production links, especially fruit picking, rely on manual labor that is time-consuming and labor-intensive. Fruit production comprises picking, storage, transport, processing, sale and other links, so developing agricultural robots for fruit production is a necessary trend for improving production efficiency and saving labor cost. In picking and sorting robots and in fruit quality and variety detection systems, normal operation depends on correct identification of the fruit by the image processing module: for example, a picking robot can provide motion parameters to its mechanical arm, and thereby complete the picking operation, only after it has identified the fruit on the tree and obtained its accurate position.
In recent years, deep learning has developed rapidly; it performs many computer vision tasks excellently and is gradually being applied in agriculture. A deep learning model can automatically learn the characteristic feature information of different objects from large amounts of training data and capture the differences between categories, converting raw data into more abstract, higher-level representations and then completing tasks such as image classification and detection. However, the accuracy of existing image-based deep learning methods for fruit variety identification is low and cannot fully meet the requirements of practical application.
Disclosure of Invention
The purpose of the invention is as follows: the invention aims to provide a fruit variety identification method based on a parallel convolutional neural network that addresses the scarcity of existing data sets and the low recognition rate of conventional parallel convolutional neural networks, achieving rapid and accurate identification of similar fruit varieties.
The technical scheme is as follows: the invention provides a fruit variety identification method based on a parallel convolutional neural network, comprising the following steps:
step 1: convert the fruit image with its category label into a 3-channel picture of 128 × 128 pixels;
step 2: apply translation, rotation and mirror-flip operations to the result of step 1 to generate fruit image data sets at multiple scales;
step 3: input the result of step 2 into a generative adversarial network model for data enhancement;
step 4: construct a parallel convolutional neural network model and perform multi-scale feature extraction on the result of step 3;
step 5: using the features extracted in step 4, the parallel convolutional neural network model predicts the category of the fruit image; the prediction is compared with the category label, and the model is trained according to the comparison result so that the training accuracy is maximized and the loss is minimized.
Preferably, step 2 comprises rotating the picture by 30°, 60° and 90°, translating it by 10%, 20% and 30%, and mirror-flipping it at 30°, 60° and 90°.
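The traditional augmentations above can be sketched as follows. This is a minimal illustration, not the patent's implementation: only the 90° rotation, wrap-around percentage translations and a horizontal mirror are shown exactly, since arbitrary-angle rotation would need an interpolating rotate (e.g. scipy.ndimage.rotate). The function name and the wrap-around translation are illustrative assumptions.

```python
import numpy as np

def augment_variants(img: np.ndarray) -> list:
    """Return augmented copies of an (H, W, C) image array."""
    h, w = img.shape[:2]
    variants = [np.rot90(img)]                     # 90-degree rotation
    for frac in (0.10, 0.20, 0.30):                # 10/20/30% translations
        variants.append(np.roll(img, int(w * frac), axis=1))
    variants.append(img[:, ::-1])                  # horizontal mirror flip
    return variants
```

Each source picture thus yields several additional training samples before the adversarial enhancement of step 3 is applied.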
The generative adversarial network of step 3 comprises a generator and a discriminator. Noise is input into the generator, sampled randomly, and passed through a 5-layer network to generate 3-channel data samples of 128 × 128. The discriminator compares generated data samples with real samples and judges whether a sample produced by the generator is genuine; with the discriminator's parameters fixed, the generator is updated so that it produces pictures whose authenticity the discriminator finds harder to distinguish.
The parallel convolutional neural network used in step 4 comprises 8 convolutional layers, 6 max-pooling layers and a final fully connected layer, which integrates the local information of the convolutional and max-pooling layers with the classification information.
Step 5 further comprises optimizing the convolutional neural network model using a combination of the maximum class spacing loss function and the SoftmaxWithLoss loss function.
The fruit image to be classified is taken as the target image, the operations of steps 1 to 3 are performed, and the result is input into the trained parallel convolutional neural network model for fruit identification.
Data enhancement is performed both with the traditional methods and with the generative adversarial network; the fruit images to be classified are then fed into the trained model.
Beneficial effects: compared with the prior art, the invention has the following notable advantages. A large number of high-quality data sets are generated by combining the generative adversarial network with traditional data enhancement methods. The parallel convolutional neural network extracts features at different scales synchronously, so the feature expression is richer and the network captures more feature information. Combining the maximum class spacing loss function with SoftmaxWithLoss increases the distance between similar varieties, improving identification accuracy among them. The accuracy of the method reaches 98.85% on the public dataset Fruits-360.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of the generation of a countermeasure network of the present invention;
FIG. 3 is a schematic diagram of a parallel convolutional neural network of the present invention.
Detailed Description
The technical scheme of the invention is further explained with reference to the accompanying drawings. The invention provides a method for training a fruit variety classification model, comprising the following steps:
Step 1: perform conventional data enhancement on the fruit images with category labels to form an enlarged fruit image data set.
Step 2: construct a generative adversarial network model and use it to generate additional high-quality data. The network comprises a discriminator and a generator, which train the adversarial model against each other in a game-like manner: the discriminator adjusts its own parameters against the samples passed in by the generator so as to judge whether incoming data is genuine, while the generator, with the discriminator's parameters fixed, updates its own parameters so as to generate data that the discriminator finds increasingly difficult to judge; finally the two reach an equilibrium point and training ends.
When the generator is trained, the parameters of the discriminator are fixed, and the generator updates its own parameters against the fixed discriminator, so that the discriminator can hardly tell whether a generated sample is real data or a simulated sample.
When the discriminator is trained, the parameters of the generator are fixed, and the data generated by the generator are used as negative samples.
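The adversarial enhancement above can be sketched as follows. Channel widths, the noise dimension (100) and the discriminator depth are assumptions; the text fixes only the 5 generator layers and the 128 × 128 × 3 output. The key detail shown is the generator update with the discriminator's parameters held fixed.

```python
import torch
import torch.nn as nn

Z_DIM = 100  # noise dimension (an assumption; not specified in the text)

# 5-layer transposed-convolution generator: noise -> 128x128x3 sample
generator = nn.Sequential(
    nn.ConvTranspose2d(Z_DIM, 256, 8, 1, 0), nn.ReLU(),  # 1x1   -> 8x8
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),    # 8x8   -> 16x16
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),     # 16x16 -> 32x32
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),      # 32x32 -> 64x64
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),       # 64x64 -> 128x128
)

# small discriminator: image -> real/fake score in [0, 1]
discriminator = nn.Sequential(
    nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),        # 128 -> 64
    nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),       # 64  -> 32
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),      # 32  -> 16
    nn.Flatten(),
    nn.Linear(128 * 16 * 16, 1), nn.Sigmoid(),
)

def generator_step(opt: torch.optim.Optimizer, batch: int = 4) -> torch.Tensor:
    """One generator update with the discriminator's parameters fixed."""
    for p in discriminator.parameters():
        p.requires_grad_(False)              # fix discriminator parameters
    z = torch.randn(batch, Z_DIM, 1, 1)      # random noise samples
    fake = generator(z)
    # generator wants the discriminator to score fakes as real (close to 1)
    loss = -torch.log(discriminator(fake) + 1e-8).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    for p in discriminator.parameters():
        p.requires_grad_(True)               # unfreeze for the next D step
    return fake.detach()
```

A symmetric discriminator step would freeze the generator and use its outputs as negative samples, exactly as described in the paragraph above.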
Step 3: the fruit images with category labels are used as training data and fed into the parallel convolutional neural network model for feature extraction. C1 and C2 each comprise 16 convolution kernels of 3 × 3 with all-zero padding and stride 1; their output is 128 × 128 × 16. S1 is a 2 × 2 max-pooling layer with stride 2; its output is 64 × 64 × 16. C3 comprises 16 convolution kernels of size 3 × 3 with all-zero padding and stride 1; the output is 64 × 64 × 16. C4 comprises 16 convolution kernels of size 3 × 3 with all-zero padding and stride 1, with output 64 × 64 × 16. S2 is a 2 × 2 max-pooling layer with stride 2, with output 32 × 32 × 16. The resulting feature maps are fed into two parallel channels a and b. Channel a comprises 32 convolution kernels of 3 × 3 and one 2 × 2 max-pooling layer; the convolutions use all-zero padding with stride 1, and the pooling stride is 4. Channel b comprises 32 convolution kernels of 5 × 5 and two 2 × 2 max-pooling layers; the convolutions use all-zero padding with stride 1, and each pooling stride is 2. Channels a and b each produce 32 feature maps of 8 × 8, and the 64 resulting 8 × 8 feature maps are taken as the input of C5. C5 and C6 each comprise 64 convolution kernels of size 3 × 3 with all-zero padding and stride 1. S3 is a 2 × 2 max-pooling layer with stride 2. Finally, a fully connected layer integrates the local information of the convolutional and pooling layers with the classification information.
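The layer-by-layer description above can be sketched as a PyTorch module. Layer names C1..C6, S1..S3 and channels a/b follow the text; the ReLU activations and the class count (131, as in one public release of the Fruits-360 dataset) are assumptions not fixed by the text.

```python
import torch
import torch.nn as nn

class ParallelCNN(nn.Module):
    """Sketch of the parallel network traced in the description."""

    def __init__(self, num_classes: int = 131):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),   # C1 -> 128x128x16
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),  # C2 -> 128x128x16
            nn.MaxPool2d(2, 2),                          # S1 -> 64x64x16
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),  # C3 -> 64x64x16
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),  # C4 -> 64x64x16
            nn.MaxPool2d(2, 2),                          # S2 -> 32x32x16
        )
        self.branch_a = nn.Sequential(                   # channel a
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2, stride=4),                   # -> 8x8x32
        )
        self.branch_b = nn.Sequential(                   # channel b
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2, 2),                          # -> 16x16x32
            nn.MaxPool2d(2, 2),                          # -> 8x8x32
        )
        self.head = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),  # C5 -> 8x8x64
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),  # C6 -> 8x8x64
            nn.MaxPool2d(2, 2),                          # S3 -> 4x4x64
        )
        self.fc = nn.Linear(64 * 4 * 4, num_classes)     # fully connected

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.stem(x)
        # concatenate the two parallel channels -> 8x8x64
        x = torch.cat([self.branch_a(x), self.branch_b(x)], dim=1)
        return self.fc(self.head(x).flatten(1))
```

Counting the layers confirms the structure stated earlier: 8 convolutional layers (C1-C4, one per branch, C5, C6) and 6 max-pooling layers (S1, S2, one in channel a, two in channel b, S3).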
Step 4: predict the variety of the fruit image from the extracted feature information, compare the prediction with the corresponding label, and train the parallel convolutional neural network model based on the comparison result.
In step 4, a new loss function consisting of the maximum class spacing loss function and the SoftmaxWithLoss function, used together to discriminate varieties of the same class, can be used to optimize the convolutional neural network. The maximum class spacing loss function formula is as follows:
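In the published text the formula itself appears only as an image. From the symbol definitions in the next paragraph, one plausible reconstruction (offered purely as an assumption, not the verbatim patent formula) treats L as the mean squared distance between the per-class means of the mapped features:

```latex
L = \frac{1}{M(M-1)} \sum_{i=1}^{M} \sum_{\substack{j=1 \\ j \ne i}}^{M}
      \bigl\lVert M^{(i)} - M^{(j)} \bigr\rVert_2^2 ,
\qquad
M^{(i)} = \frac{1}{n} \sum_{e=1}^{n} h_{w,b}\!\bigl(x^{(i,e)}\bigr)
```

Under this reading, L grows as the class means move apart, which matches the stated behavior that the spacing between similar varieties increases with training.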
where i denotes the i-th fruit class, j the j-th fruit class, M the total number of fruit categories, M(i) the mean of the i-th class, M(j) the mean of the j-th class, n the number of samples of the i-th class, x(i,e) the value of the e-th sample of the i-th class, and hw,b(x(i,e)) = w·x(i,e-1) + b, with w the weight of the e-th sample and b the bias term. An identified variety does not need to be compared with every other fruit one by one, only with similar varieties, and the maximum class spacing increases the distance between similar varieties as training proceeds. Combining the maximum class spacing with SoftmaxWithLoss gives the formula:
J = S - λL (3)
where S represents SoftmaxWithLoss, L represents the maximum class spacing function, and λ is a hyper-parameter; J makes classification between similar classes increasingly accurate. Taking the derivative of L yields the formula:
where Z represents f(hw,b(x(i,e))), and f, l and n represent the activation function, the number of convolutional layers and the number of samples, respectively.
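The combined objective J = S - λL of equation (3) can be sketched as follows. S is the softmax cross-entropy (SoftmaxWithLoss); the class-spacing term L is implemented here as the mean squared distance between per-class feature means, which is an assumption since the exact formula appears only as an image in the source. Subtracting λL rewards larger spacing between class means.

```python
import torch
import torch.nn.functional as F

def combined_loss(features: torch.Tensor,
                  logits: torch.Tensor,
                  labels: torch.Tensor,
                  lam: float = 0.1) -> torch.Tensor:
    """J = S - lam * L for one mini-batch (hedged sketch)."""
    s = F.cross_entropy(logits, labels)              # SoftmaxWithLoss term
    # one feature mean per class present in the batch
    means = torch.stack([features[labels == c].mean(dim=0)
                         for c in labels.unique()])
    dists = torch.cdist(means, means).pow(2)         # pairwise squared dists
    m = means.shape[0]
    l = dists.sum() / max(m * (m - 1), 1)            # mean over ordered pairs
    return s - lam * l                               # J = S - lambda * L
```

The hyper-parameter lam plays the role of λ; larger values push the class means further apart at the cost of the plain cross-entropy term.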
In step 3, the last pooling-layer feature and the last fully-connected-layer feature of each picture can be extracted; alternatively, a regularization operation can be performed on the original image, max pooling applied to the cropped image, and regularization performed again.
In step 4, a Softmax classifier can also be used for variety prediction.
The method of the invention is equally effective for fruit images without internal labels in the image, where internal labels mean bounding-box labels, outline labels and the like.
Claims (6)
1. A fruit variety identification method based on a parallel convolutional neural network, characterized by comprising the following steps:
step 1: converting the fruit image with its category label into a 3-channel picture of 128 × 128 pixels;
step 2: applying translation, rotation and mirror-flip operations to the result of step 1 to generate fruit image data sets at multiple scales;
step 3: inputting the result of step 2 into a generative adversarial network model for data enhancement;
step 4: constructing a parallel convolutional neural network model and performing multi-scale feature extraction on the result of step 3;
step 5: using the features extracted in step 4, predicting the category of the fruit image with the parallel convolutional neural network model, comparing the prediction with the category label, and training the parallel convolutional neural network model according to the comparison result to improve the identification accuracy.
2. The parallel convolutional neural network-based fruit variety identification method of claim 1, wherein said step 2 comprises rotating the picture by 30°, 60° and 90°, translating it by 10%, 20% and 30%, and mirror-flipping it at 30°, 60° and 90°.
3. The fruit variety identification method based on the parallel convolutional neural network as claimed in claim 1, wherein the generative adversarial network of step 3 comprises a generator and a discriminator; noise is input into the generator, sampled randomly, and passed through a 5-layer network to generate 3-channel data samples of 128 × 128; the discriminator compares the generated data samples with real samples and judges whether a sample produced by the generator is genuine, and with the discriminator's parameters fixed, the generator is updated to generate pictures whose authenticity the discriminator finds harder to distinguish.
4. The fruit variety identification method based on the parallel convolutional neural network as claimed in claim 1, wherein the parallel convolutional neural network used in step 4 comprises 8 convolutional layers, 6 max-pooling layers and a fully connected layer, the fully connected layer integrating the local information of the convolutional and max-pooling layers with the classification information.
5. The parallel convolutional neural network-based fruit variety identification method as claimed in claim 1, wherein step 5 further comprises optimizing the convolutional neural network model using a combination of the maximum class spacing loss function and the SoftmaxWithLoss loss function.
6. The fruit variety identification method based on the parallel convolutional neural network as claimed in any one of claims 1 to 5, further comprising taking the fruit image to be classified as a target image, performing the operations of steps 1 to 3, and inputting the result into the trained parallel convolutional neural network model for fruit variety identification.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011327435.9A CN112487909A (en) | 2020-11-24 | 2020-11-24 | Fruit variety identification method based on parallel convolutional neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112487909A true CN112487909A (en) | 2021-03-12 |
Family
ID=74933434
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011327435.9A Pending CN112487909A (en) | 2020-11-24 | 2020-11-24 | Fruit variety identification method based on parallel convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112487909A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7488218B2 (en) | 2021-03-29 | 2024-05-21 | Kddi株式会社 | Information processing device, information processing method, and program |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106651830A (en) * | 2016-09-28 | 2017-05-10 | 华南理工大学 | Image quality test method based on parallel convolutional neural network |
CN109740627A (en) * | 2018-11-27 | 2019-05-10 | 南京邮电大学 | A kind of insect image identification identifying system and its method based on parallel-convolution neural network |
CN110414371A (en) * | 2019-07-08 | 2019-11-05 | 西南科技大学 | A kind of real-time face expression recognition method based on multiple dimensioned nuclear convolution neural network |
CN110516561A (en) * | 2019-08-05 | 2019-11-29 | 西安电子科技大学 | SAR image target recognition method based on DCGAN and CNN |
CN111401156A (en) * | 2020-03-02 | 2020-07-10 | 东南大学 | Image identification method based on Gabor convolution neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20190228268A1 (en) | Method and system for cell image segmentation using multi-stage convolutional neural networks | |
CN112446388A (en) | Multi-category vegetable seedling identification method and system based on lightweight two-stage detection model | |
CN107016405A (en) | A kind of insect image classification method based on classification prediction convolutional neural networks | |
CN106528826A (en) | Deep learning-based multi-view appearance patent image retrieval method | |
CN111696101A (en) | Light-weight solanaceae disease identification method based on SE-Inception | |
CN113705580B (en) | Hyperspectral image classification method based on deep migration learning | |
CN108021947A (en) | A kind of layering extreme learning machine target identification method of view-based access control model | |
CN111161207A (en) | Integrated convolutional neural network fabric defect classification method | |
CN113487576B (en) | Insect pest image detection method based on channel attention mechanism | |
CN109871892A (en) | A kind of robot vision cognitive system based on small sample metric learning | |
CN109800795A (en) | A kind of fruit and vegetable recognition method and system | |
CN111178177A (en) | Cucumber disease identification method based on convolutional neural network | |
WO2023019698A1 (en) | Hyperspectral image classification method based on rich context network | |
CN113011386B (en) | Expression recognition method and system based on equally divided characteristic graphs | |
CN110348503A (en) | A kind of apple quality detection method based on convolutional neural networks | |
CN112749675A (en) | Potato disease identification method based on convolutional neural network | |
CN114140665A (en) | Dense small target detection method based on improved YOLOv5 | |
CN115359353A (en) | Flower identification and classification method and device | |
CN108363962B (en) | Face detection method and system based on multi-level feature deep learning | |
CN112507896A (en) | Method for detecting cherry fruits by adopting improved YOLO-V4 model | |
CN113435254A (en) | Sentinel second image-based farmland deep learning extraction method | |
CN114676769A (en) | Visual transform-based small sample insect image identification method | |
CN113298817A (en) | High-accuracy semantic segmentation method for remote sensing image | |
CN116740516A (en) | Target detection method and system based on multi-scale fusion feature extraction | |
CN116258990A (en) | Cross-modal affinity-based small sample reference video target segmentation method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||