CN109684967A - A kind of soybean plant strain stem pod recognition methods based on SSD convolutional network
- Publication number
- CN109684967A (application number CN201811540821.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- ssd
- soybean plant
- layer
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/35—Categorising the entire scene, e.g. birthday party or wedding scene
Abstract
The present invention provides a soybean plant stem pod identification method based on an SSD convolutional network, comprising the following steps: acquiring single-plant soybean sample images to obtain a soybean plant image library; manually labeling stems and pods, where a pod label marks the unoccluded tip of each pod and a stem label marks the exposed part of each stem, and dividing the image library, without overlap, into a training set, a validation set and a test set; performing random image enhancement and data amplification on the labeled training-set images, and automatically re-labeling the newly generated images; constructing an SSD convolutional network that performs multi-scale detection on feature maps of different levels; randomly drawing from the training set to train the SSD convolutional neural network and determining the learning parameters of the network; and feeding the test set into the trained SSD convolutional neural network for recognition testing, and drawing the recognition results on the original images of the test samples. The method identifies soybean plant stems and pods intelligently through network training, has a high degree of automation, and effectively improves the efficiency of soybean plant stem pod detection.
Description
Technical Field
The invention relates to the technical field of computer image processing and identification methods, and in particular to a soybean plant stem pod identification method based on an SSD (Single Shot MultiBox Detector) convolutional network.
Background
Soybean is an important grain and oil crop worldwide and a main source of high-quality protein for human beings. It is one of the main crops in China and among the most economically significant. Soybean seed testing collects, sorts and counts data on whole-plant soybean traits and is an important link in the genetic breeding of soybean crops. At present, soybean seed testing mainly relies on manual operation; however, manual operation not only consumes a large amount of manpower and material resources but also introduces human error, reducing the accuracy of later data analysis.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a soybean plant stem pod identification method based on an SSD (Single Shot MultiBox Detector) convolutional network, which applies machine vision to soybean seed testing. It can improve the precision of soybean plant trait data, reduce personal error, shorten the seed testing period, reduce labor intensity, and move the work toward intelligent, rapid and accurate operation.
In order to solve the technical problems, the invention adopts the technical scheme that:
provided is a soybean plant stem pod identification method based on a convolution network of an SSD, comprising the following steps:
S1, fixing a Canon 5D Mark II camera at a position 120 cm from a blue background cloth to acquire single-plant soybean sample images, obtaining a soybean plant image library;
S2, traversing all sample images in the image library of step S1 and manually labeling pods and stems on each sample image, labeling the unoccluded tips of pods as pods and the exposed parts of stems as stems, to obtain an original image set;
S3, carrying out random image enhancement and data amplification on the labeled training-set images of step S2: performing image enhancement with adaptive histogram equalization; amplifying the data by randomly adjusting the RGB color channels within a certain threshold, flipping horizontal and vertical mirror images, and randomly rotating and translating; cropping the rotated and translated images about their centers; and discarding a label if its target exceeds the image boundary after processing, to obtain an enhanced, amplified training set;
S4, constructing an SSD convolutional network and carrying out multi-scale detection with feature maps of different levels;
S5, feeding the training samples of steps S2 and S3 to the SSD convolutional neural network for pre-training and iterative training to obtain a pre-trained model, while determining the learning parameters of the SSD convolutional neural network;
S6, feeding the test set to the trained SSD convolutional neural network for recognition testing, and taking classification results with a confidence above 40% as the output recognition result of each test sample.
According to the soybean plant stem pod identification method based on the SSD convolutional network, a fusion layer is added so that a residual-style structure passes lower-layer detail directly to a higher layer. This makes full use of the SSD network's selection of feature maps at different levels for multi-scale detection and overcomes the poor detection accuracy of existing methods on deformed, occluded and continuously overlapping objects. The method retains the advantages of a convolutional neural network, reduces interference from the image background and ambient brightness, is more robust to occlusion and overlap, and improves the accuracy of soybean plant stem pod detection.
Preferably, in step S2, the pods and stems of each sample image are manually labeled: the unoccluded pod tips are labeled as pods and the exposed stem parts are labeled as stems. Labeling only part of each target's features improves identification accuracy under occlusion and overlap.
Preferably, random image enhancement and data amplification are performed on the labeled training-set images of step S2: image enhancement uses adaptive histogram equalization; data amplification uses random adjustment of the RGB color channels within a certain threshold, horizontal and vertical mirror flipping, and random rotation and translation; the rotated and translated images are cropped about their centers, and a label is discarded if its target exceeds the boundary after processing.
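The preferred augmentation steps above can be sketched in NumPy. The function names, the ±10% channel threshold, and the box handling are illustrative assumptions rather than the patent's implementation (adaptive histogram equalization and rotation are omitted here for brevity):

```python
import numpy as np

def flip_horizontal(image, boxes):
    """Mirror the image left-right and remap (xmin, ymin, xmax, ymax) boxes."""
    h, w, _ = image.shape
    flipped = image[:, ::-1, :]
    boxes = boxes.copy()
    boxes[:, [0, 2]] = w - boxes[:, [2, 0]]  # swap and mirror the x coordinates
    return flipped, boxes

def jitter_channels(image, rng, max_scale=0.1):
    """Randomly scale each RGB channel within +/- max_scale (assumed threshold)."""
    scale = 1.0 + rng.uniform(-max_scale, max_scale, size=3)
    return np.clip(image * scale, 0, 255).astype(image.dtype)

def drop_out_of_bounds(boxes, w, h):
    """Discard labels whose target box falls outside the image after processing."""
    keep = (boxes[:, 0] >= 0) & (boxes[:, 1] >= 0) & \
           (boxes[:, 2] <= w) & (boxes[:, 3] <= h)
    return boxes[keep]
```

Re-labeling the amplified images then reduces to applying the same geometric transform to the box corners, as `flip_horizontal` does.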
Preferably, the SSD model of step S4 is built by adding one fusion layer and four convolutional layers to the VGG-16 network, and the training model of step S4 is built as follows:
s41, taking the soybean plant sample image as input, and carrying out convolution operation on the image in a convolution layer to obtain a characteristic diagram;
S42, adding an Add4_3 layer to the VGG-16 network, wherein Add4_3 is formed by fusing (element-wise adding) the two feature maps Maxpool3 and Conv4_2, activating with ReLU and normalizing with Batch Normalization (BN); Add4_3 serves as the input of the Conv4_3 layer; the feature maps of the Conv4_3 layer, the Fc7 layer and the Conv8_2 to Conv11_2 layers are each convolved with a 3 × 3 kernel to output, respectively, classification confidences and regression localization information;
and S43, combining all the output structures, and obtaining a detection result through non-maximum suppression processing.
By selecting six feature maps of different levels for multi-scale detection, and adding fusion of lower-layer feature maps while keeping detection on higher-layer feature maps, the rich image detail in the lower-layer feature maps is fully exploited. This yields robust, interference-resistant target detection and addresses detection and localization under deformation, occlusion, continuous overlap and similar conditions.
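The non-maximum suppression of step S43 can be illustrated with a plain NumPy sketch; the 0.5 IoU threshold here is an assumed value, not taken from the patent:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over (xmin, ymin, xmax, ymax) boxes."""
    order = scores.argsort()[::-1]          # highest confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the current top box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_thresh]  # drop heavily overlapping boxes
    return keep
```

Boxes predicted for the same pod by several default boxes overlap heavily, so only the most confident survives.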
Preferably, when the confidence for classification is output, each frame generates the confidence of two classes; when the regression positioning information is output, four coordinate values (x, y, w, h) are generated for each frame.
Preferably, the characteristic map in step S41 is calculated as follows:
Step 1: divide the feature map output by the Conv4_3 layer into 76 × 38 cells, each cell using four default bounding boxes; perform a convolution with a 3 × 3 kernel for each default bounding box and output the four elements of a frame, namely the horizontal coordinate x and vertical coordinate y of the frame's upper-left corner and the width w and height h output by the box-regression layer, together with the confidences that the object in the frame is a pod or a stem;
Step 2: compute the feature maps output by the Fc7 layer and the Conv8_2 to Conv11_2 layers in sequence by the same method as Step 1; the feature maps of these layers are divided into 38 × 19, 20 × 10, 10 × 5, 6 × 3 and 1 × 1 cells, and the default bounding boxes used per cell are 6, 4 and 4 respectively.
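As a sanity check on the grid sizes above, the total number of default bounding boxes across all six feature maps can be counted. The per-cell counts used here (4 on Conv4_3, then 6, 6, 6, 4, 4) are one plausible reading of the abbreviated "6, 4 and 4" and are an assumption:

```python
# (width, height, default boxes per cell) for the six detection feature maps;
# the per-cell counts after Conv4_3 are assumed, not stated exhaustively above.
grids = [(76, 38, 4), (38, 19, 6), (20, 10, 6), (10, 5, 6), (6, 3, 4), (1, 1, 4)]
total_default_boxes = sum(w * h * k for w, h, k in grids)
print(total_default_boxes)  # 17460
```

Most of the boxes come from the two largest maps, which is what gives the lower layers their sensitivity to small pods.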
Preferably, the training error of the pre-trained model in step S4 is less than 15%, and the average value of the test error is less than 20%.
Compared with the prior art, the invention has the beneficial effects that:
(1) The invention can detect and locate soybean plant stems and pods with high accuracy, good stability, strong interference resistance and high generality; it maintains high detection precision on deformed, occluded and continuously overlapping targets and can be applied in a soybean plant trait detection system.
(2) The method retains the advantages of a convolutional neural network, reduces interference from the image background and ambient brightness, is more robust to occlusion and mutual overlap, and improves the accuracy of soybean plant stem pod detection.
Drawings
FIG. 1 is a flow chart of a method of soybean plant stem pod identification based on a convolution network of SSD;
fig. 2 is a detailed flowchart of step S4;
FIG. 3 is a sample image of soybean plants identified in the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent; for a better understanding of the embodiments, parts of the drawings may be omitted, enlarged or reduced and do not represent actual dimensions; it will be understood by those skilled in the art that certain well-known structures in the drawings, and their descriptions, may be omitted.
The invention is further described with reference to the following figures and detailed description.
Referring to fig. 1, the present embodiment is a first embodiment of a method for identifying soybean plant stem pods based on SSD convolutional network of the present invention, and comprises the following steps:
S1, fixing a Canon 5D Mark II camera at a position 120 cm from a blue background cloth to acquire single-plant soybean sample images, obtaining a soybean plant image library;
S2, traversing all sample images in the image set of step S1 and manually labeling pods and stems on each sample image, labeling the unoccluded tips of pods as pods and the exposed parts of stems as stems, to obtain an original training set;
S3, carrying out random image enhancement and data amplification on the labeled training-set images of step S2: performing image enhancement with adaptive histogram equalization; amplifying the data by randomly adjusting the RGB color channels within a certain threshold, flipping horizontal and vertical mirror images, and randomly rotating and translating; cropping the rotated and translated images about their centers; and discarding a label if its target exceeds the image boundary after processing, to obtain an enhanced, amplified training set;
Referring to fig. 2, in step S4 of this embodiment, an SSD convolutional network is constructed and multi-scale detection is performed with feature maps of different levels;
S5, feeding the training samples of steps S2 and S3 to the SSD convolutional neural network for pre-training and iterative training to obtain a pre-trained model, while determining the learning parameters of the SSD convolutional neural network;
S6, feeding the test set to the trained SSD convolutional neural network for recognition testing, and taking classification results with a confidence above 40% as the output recognition result of each test sample.
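The 40% confidence cut of step S6 amounts to a simple filter over detection records; the dictionary layout used here is illustrative, not the patent's data format:

```python
def filter_detections(detections, min_conf=0.40):
    """Keep only detections whose class confidence exceeds the step-S6 threshold."""
    return [d for d in detections if d["score"] > min_conf]

# Hypothetical detections: one confident pod, one weak stalk candidate.
dets = [{"label": "pod", "score": 0.85, "box": (10, 20, 60, 90)},
        {"label": "stalk", "score": 0.32, "box": (5, 5, 30, 200)}]
print(filter_detections(dets))  # only the pod detection survives
```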
In step S1, a Canon 5D Mark II camera is fixed at a position 120cm away from the blue background cloth to acquire a single soybean sample image, so as to obtain a soybean plant image library. Specifically, the plant sample image set stores sample data in the following form:
{image_name,x,y}
wherein, image _ name represents the name of the soybean plant image, x represents the horizontal pixel value of the image, and y represents the vertical pixel value of the image.
In step S2, traversing all sample images in the image library in step S1, performing manual labeling of pods and stems on each sample image, labeling the tips of the pods which are not shielded as pods, and labeling the exposed parts of the stems as stems, thereby obtaining an original image set. Specifically, the plant sample image is used for labeling each real frame to form an image labeling set, and the image labeling set stores labeling data in the following form:
{label,xmin,ymin,xmax,ymax}
wherein label represents the labeled category, xmin represents the abscissa of the labeled minimum pixel point, ymin represents the ordinate of the labeled minimum pixel point, xmax represents the abscissa of the labeled maximum pixel point, and ymax represents the ordinate of the labeled maximum pixel point.
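The jaccard (IoU) overlap later used to match default boxes to these real frames is computed directly on the (xmin, ymin, xmax, ymax) corners stored above; a minimal sketch:

```python
def jaccard(a, b):
    """Jaccard coefficient (IoU) of two (xmin, ymin, xmax, ymax) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # intersection width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # intersection height
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0
```

During training, a default box whose jaccard overlap with a labeled pod or stem frame exceeds 0.5 is treated as a positive match.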
In step S4, the pre-trained model is established as follows:
s41, taking the soybean plant sample image as input, and carrying out convolution operation on the image in the convolution layer to obtain a characteristic diagram;
S42, adding an Add4_3 layer to the VGG-16 network, wherein Add4_3 is formed by fusing (element-wise adding) the two feature maps Maxpool3 and Conv4_2, activating with ReLU and normalizing with Batch Normalization (BN); Add4_3 serves as the input of the Conv4_3 layer; the feature maps of the Conv4_3 layer, the Fc7 layer and the Conv8_2 to Conv11_2 layers are each convolved with a 3 × 3 kernel to output, respectively, classification confidences and regression localization information;
S43, combining all outputs and obtaining the detection result through non-maximum suppression, wherein the confidences output for classification are the per-class confidences of each prediction box, and the localization information output for regression is the four coordinate values (x, y, w, h) of each prediction box.
The feature map in step S41 is calculated as follows:
Step 1: divide the feature map output by the Conv4_3 layer into 76 × 38 cells, each cell using four default bounding boxes; perform a convolution with a 3 × 3 kernel for each default bounding box and output the four elements of a frame, namely the horizontal coordinate x and vertical coordinate y of the frame's upper-left corner and the width w and height h output by the box-regression layer, together with the confidences that the object in the frame is a pod or a stem;
Step 2: compute the feature maps output by the Fc7 layer and the Conv8_2 to Conv11_2 layers in sequence by the same method as Step 1; the feature maps of these layers are divided into 38 × 19, 20 × 10, 10 × 5, 6 × 3 and 1 × 1 cells, and the default bounding boxes used per cell are 6, 4 and 4 respectively.
By selecting six feature maps of different levels for multi-scale detection, and adding fusion of lower-layer feature maps while keeping detection on higher-layer feature maps, the rich image detail in the lower-layer feature maps is fully exploited. This yields robust, interference-resistant target detection and addresses detection and localization under deformation, occlusion, continuous overlap and similar conditions.
The VGG-16 partial network structure with the added residual structure in this embodiment is as follows:
First layer: apply 64 convolution filters of size 3 × 3 twice in succession, with stride 1 and padding 1, obtaining two 600 × 300 × 64 convolutional layers (Conv1_1, Conv1_2); after the convolutional output is obtained, normalize it with a BN (batch normalization) layer, activate it with the ReLU (Rectified Linear Units) function as the nonlinear activation, and finally pool it with a max pooling layer of window size 2 × 2 and sampling stride 2.
Second layer: apply 128 convolution filters of size 3 × 3 twice in succession, with stride 1 and padding 1, obtaining two 300 × 150 × 128 convolutional layers (Conv2_1, Conv2_2); normalize with a BN layer, activate with ReLU, and pool with a 2 × 2 max pooling layer of sampling stride 2.
Third layer: apply 256 convolution filters of size 3 × 3 three times in succession, with stride 1 and padding 1, obtaining three 150 × 75 × 256 convolutional layers (Conv3_1, Conv3_2, Conv3_3); normalize with a BN layer, activate with ReLU, and pool with a 2 × 2 max pooling layer of sampling stride 2.
Fourth layer: apply 512 convolution filters of size 3 × 3 twice, 512 filters of size 1 × 1 once, and 512 filters of size 3 × 3 once, with stride 1 and padding 1, obtaining four 76 × 38 × 512 layers (Conv4_1, Conv4_2, Add4_3 and Conv4_3); normalize with a BN layer, activate with ReLU, and pool with a 2 × 2 max pooling layer of sampling stride 2.
Fifth layer: apply 512 convolution filters of size 3 × 3 three times in succession, with stride 1 and padding 1, obtaining three 38 × 19 × 512 convolutional layers (Conv5_1, Conv5_2, Conv5_3); normalize with a BN layer and activate with ReLU.
Next, apply 1024 convolution filters of size 3 × 3 to the output of Conv5_3, with stride 1 and padding 1, obtaining the 38 × 19 × 1024 Fc6 layer; then apply 1024 convolution filters of size 1 × 1 to the Fc6 layer, with stride 1 and padding 1, obtaining the 38 × 19 × 1024 Fc7 layer.
Finally, add four convolutional layers behind the Fc7 layer: Conv8 of 20 × 10 × 512, Conv9 of 10 × 5 × 256, Conv10 of 6 × 3 × 256 and Conv11 of 1 × 1 × 256.
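The Add4_3 fusion described above (element-wise addition of the Maxpool3 and Conv4_2 feature maps, then ReLU activation and batch normalization) can be sketched in NumPy. The inference-style per-channel normalization without learned scale and shift parameters is a simplifying assumption:

```python
import numpy as np

def add4_3_fusion(maxpool3, conv4_2, eps=1e-5):
    """Residual-style fusion of two same-shaped (H, W, C) feature maps:
    element-wise add, ReLU, then a batch-norm-style per-channel normalization."""
    assert maxpool3.shape == conv4_2.shape
    fused = maxpool3 + conv4_2            # element-wise addition (the fusion)
    fused = np.maximum(fused, 0.0)        # ReLU activation
    mean = fused.mean(axis=(0, 1), keepdims=True)   # per-channel statistics
    var = fused.var(axis=(0, 1), keepdims=True)
    return (fused - mean) / np.sqrt(var + eps)      # normalized feature map
```

Because the addition is element-wise, both inputs must already share the 76 × 38 × 512 shape, which is why the 1 × 1 convolution in the fourth layer precedes the fusion.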
In step S4, the training error of the pre-trained model is less than 15%, and the average value of the test error is less than 20%. The calculation method of the model training error is as follows:
Step 1: match each real frame to the default bounding box with which it has the maximum jaccard-coefficient overlap; in addition, match any default bounding box to a real frame whenever their jaccard overlap exceeds 0.5.
Step 2: let $i$ denote the default box number, $j$ the real box number, and $p$ the category number (0 for background, 1 for pod, 2 for stem). The match indicator $x_{ij}^{p}$ is 1 if default box $i$ is judged matched to real frame $j$ of category $p$, that is, the maximum jaccard overlap with the real frame exceeds the threshold; otherwise it is 0.
Step 3: the total target loss function $L(x, c, l, g)$ is the weighted sum of the localization loss $L_{loc}$ and the confidence loss $L_{conf}$:

$$L(x, c, l, g) = \frac{1}{N}\left(L_{conf}(x, c) + \alpha\, L_{loc}(x, l, g)\right)$$

where $N$ is the number of default bounding boxes matched to real frames (the loss is set to 0 when $N = 0$), $x$ denotes the match indicators over the training sample, $c$ the confidence of each class of object, $l$ the prediction boxes, $g$ the real frames, and $\alpha$ the weight, set to 0.8 in this embodiment.

The localization loss $L_{loc}$ uses a piecewise smoothing function $f$ controlled by $\sigma$ (a Smooth L1 loss), where $d$ denotes a default box, $w$ and $h$ the width and height of a real frame or default bounding box, $i$ the $i$th default box, $j$ the $j$th real frame, and $m$ ranges over the position elements $\{cx, cy, w, h\}$ (center-point x coordinate, center-point y coordinate, box width, box height):

$$L_{loc}(x, l, g) = \sum_{i \in Pos}^{N} \sum_{m \in \{cx, cy, w, h\}} x_{ij}^{p}\, f\!\left(l_i^{m} - \hat{g}_j^{m}\right)$$

In the formula, $\hat{g}_j^{cx} = (g_j^{cx} - d_i^{cx})/d_i^{w}$, $\hat{g}_j^{cy} = (g_j^{cy} - d_i^{cy})/d_i^{h}$, $\hat{g}_j^{w} = \log(g_j^{w}/d_i^{w})$, and $\hat{g}_j^{h} = \log(g_j^{h}/d_i^{h})$.

The confidence loss $L_{conf}$ is a multi-class softmax loss, where $\hat{c}_i^{p}$ denotes the predicted probability that the $i$th default box belongs to the category $p$ of its matched $j$th real frame:

$$L_{conf}(x, c) = -\sum_{i \in Pos} x_{ij}^{p} \log \hat{c}_i^{p} - \sum_{i \in Neg} \log \hat{c}_i^{0}, \qquad \hat{c}_i^{p} = \frac{\exp(c_i^{p})}{\sum_{p} \exp(c_i^{p})}$$
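Under a standard SSD reading of the loss terms above, a minimal NumPy sketch follows. The flattened per-box offset layout and the exact placement of the weight are assumptions; α = 0.8 follows the text:

```python
import numpy as np

def smooth_l1(z, sigma=1.0):
    """Piecewise smoothing function controlled by sigma (Smooth L1)."""
    z = np.abs(z)
    cut = 1.0 / sigma ** 2
    return np.where(z < cut, 0.5 * (sigma * z) ** 2, z - 0.5 * cut)

def softmax_ce(logits, label):
    """Multi-class softmax loss for one default box and its matched category."""
    shifted = logits - logits.max()                 # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[label]

def total_loss(loc_pred, loc_target, logits, labels, alpha=0.8):
    """L = (1/N) * (L_conf + alpha * L_loc) over N matched default boxes."""
    n = len(labels)
    if n == 0:
        return 0.0                                  # no matches: loss is zero
    l_loc = smooth_l1(loc_pred - loc_target).sum()  # encoded-offset regression
    l_conf = sum(softmax_ce(lg, lb) for lg, lb in zip(logits, labels))
    return (l_conf + alpha * l_loc) / n
```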
the method can accurately detect and position the pods and the stems in the soybean plant image, has stronger anti-interference capability on interference such as shielding, overlapping and the like, and improves the accuracy of pod detection of the soybean plant. Wherein, the identification result of the sample image is shown in fig. 3.
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; the embodiments given are neither required nor exhaustive. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the claims.
Claims (7)
1. A soybean plant stem pod identification method based on an SSD convolutional network, characterized by comprising the following steps:
s1, fixing a Canon 5D Mark II camera at a position 120cm away from blue background cloth to acquire a single soybean sample image to obtain a soybean plant image library;
S2, traversing all sample images in the image library of step S1 and manually labeling pods and stems on each sample image, labeling the unoccluded tips of pods as pods and the exposed parts of stems as stems to obtain an original image library, and dividing the image library, without overlap, into a training set, a verification set and a test set;
S3, carrying out random image enhancement and data amplification on the labeled training-set images of step S2: performing image enhancement with adaptive histogram equalization; amplifying the data by randomly adjusting the RGB color channels within a certain threshold, flipping horizontal and vertical mirror images, and randomly rotating and translating; cropping the rotated and translated images about their centers; and discarding a label if its target exceeds the image boundary after processing, to obtain an enhanced, amplified training set;
S4, constructing an SSD convolutional network and carrying out multi-scale detection with feature maps of different levels;
S5, feeding the training samples of steps S2 and S3 to the SSD convolutional neural network for pre-training and iterative training to obtain a pre-trained model, while determining the learning parameters of the SSD convolutional neural network;
S6, feeding the test set to the trained SSD convolutional neural network for recognition testing, and taking classification results with a confidence above 40% as the output recognition result of each test sample.
2. The method of claim 1, wherein in step S2, the pods and stems of each sample image are manually labeled: the unoccluded pod tips are labeled as pods and the exposed stem parts are labeled as stems.
3. The SSD-convolutional-network-based soybean plant stem pod identification method of claim 1, wherein the labeled training-set images of step S3 undergo random image enhancement and data amplification: image enhancement uses adaptive histogram equalization; data amplification uses random adjustment of the RGB color channels within a certain threshold, horizontal and vertical mirror flipping, and random rotation and translation; the rotated and translated images are cropped about their centers, and a label is discarded if its target exceeds the boundary after processing.
4. The SSD-convolutional-network-based soybean plant stem pod identification method of claim 1, wherein the SSD model of step S4 is built by adding one fusion layer and four convolutional layers to the VGG-16 network, and the training model of step S4 is built as follows:
s41, taking the soybean plant sample image as input, and carrying out convolution operation on the image in a convolution layer to obtain a characteristic diagram;
S42, adding an Add4_3 layer to the VGG-16 network, wherein Add4_3 is formed by fusing (element-wise adding) the two feature maps Maxpool3 and Conv4_2, activating with ReLU and normalizing with Batch Normalization (BN); Add4_3 serves as the input of the Conv4_3 layer; the feature maps of the Conv4_3 layer, the Fc7 layer and the Conv8_2 to Conv11_2 layers are each convolved with a 3 × 3 kernel to output, respectively, classification confidences and regression localization information;
and S43, combining all the output structures, and obtaining a detection result through non-maximum suppression processing.
5. The SSD-based convolutional network soybean plant stem pod identification method of claim 4, wherein upon outputting confidence for classification, each frame generates two categories of confidence; when the regression positioning information is output, four coordinate values (x, y, w, h) are generated for each frame.
6. The SSD-based convolutional network soybean plant stem pod identification method of claim 4, wherein the profile in step S41 is calculated as follows:
step 1: dividing a characteristic diagram output by a Conv4_3 layer into 76 × 38 units, wherein each unit uses four default boundary boxes, each default boundary box uses a convolution kernel with the size of 3 × 3 to perform convolution operation, and outputs four elements of a frame, namely, a horizontal coordinate x and a vertical coordinate y at the upper left corner of the output frame, the width w and the height h of the frame output by a frame regression layer, and the confidence degrees that objects in the frame belong to pods and stalks respectively;
Step 2: calculating the feature maps output by the Fc7 layer and the Conv8_2 to Conv11_2 layers in turn by the same method as in Step 1; the feature maps of these layers are divided into 38 × 19, 20 × 10, 10 × 5, 6 × 3 and 1 × 1 cells, and each cell uses 6, 6, 6, 4 and 4 default bounding boxes, respectively.
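Claims 4 and 6 together fix the prediction grid, so the total number of default boxes per image can be tallied. The sketch below takes the grid sizes from the claims; the per-layer box counts 6, 6, 6, 4 and 4 for Fc7 through Conv11_2 follow the standard SSD configuration and are an assumed reading, since the claim text as published lists only "6, 4 and 4" for five layers.

```python
# (grid height, grid width, default boxes per cell) for each prediction
# layer; grid sizes from claims 4 and 6, box counts for the five upper
# layers assumed from the standard SSD configuration.
layers = {
    "Conv4_3":  (76, 38, 4),
    "Fc7":      (38, 19, 6),
    "Conv8_2":  (20, 10, 6),
    "Conv9_2":  (10, 5, 6),
    "Conv10_2": (6, 3, 4),
    "Conv11_2": (1, 1, 4),
}
total_boxes = sum(h * w * k for h, w, k in layers.values())
print(total_boxes)  # total default boxes scored per image
```

The Conv4_3 grid alone contributes 76 × 38 × 4 = 11 552 boxes, which is why the classification and regression heads dominate the per-image workload.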
7. The SSD-convolutional-network-based soybean plant stem and pod identification method of claim 1, wherein the pre-trained model in step S4 has a training error of less than 15% and an average test error of less than 20%.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811540821.9A CN109684967A (en) | 2018-12-17 | 2018-12-17 | A kind of soybean plant strain stem pod recognition methods based on SSD convolutional network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811540821.9A CN109684967A (en) | 2018-12-17 | 2018-12-17 | A kind of soybean plant strain stem pod recognition methods based on SSD convolutional network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109684967A true CN109684967A (en) | 2019-04-26 |
Family
ID=66187871
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811540821.9A Pending CN109684967A (en) | 2018-12-17 | 2018-12-17 | A kind of soybean plant strain stem pod recognition methods based on SSD convolutional network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109684967A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012123626A (en) * | 2010-12-08 | 2012-06-28 | Toyota Central R&D Labs Inc | Object detector and program |
US20170084067A1 (en) * | 2015-09-23 | 2017-03-23 | Samsung Electronics Co., Ltd. | Electronic device for processing image and method for controlling thereof |
CN107315999A (en) * | 2017-06-01 | 2017-11-03 | 范衠 | A kind of tobacco plant recognition methods based on depth convolutional neural networks |
CN107578050A (en) * | 2017-09-13 | 2018-01-12 | 浙江理工大学 | The automatic classifying identification method of rice basal part of stem On Planthopperss and its worm state |
CN108133186A (en) * | 2017-12-21 | 2018-06-08 | 东北林业大学 | A kind of plant leaf identification method based on deep learning |
CN108288075A (en) * | 2018-02-02 | 2018-07-17 | 沈阳工业大学 | A kind of lightweight small target detecting method improving SSD |
CN108564065A (en) * | 2018-04-28 | 2018-09-21 | 广东电网有限责任公司 | A kind of cable tunnel open fire recognition methods based on SSD |
CN108592799A (en) * | 2018-05-02 | 2018-09-28 | 东北农业大学 | A kind of soybean kernel and beanpod image collecting device |
CN108647652A (en) * | 2018-05-14 | 2018-10-12 | 北京工业大学 | A kind of cotton development stage automatic identifying method based on image classification and target detection |
Non-Patent Citations (4)
Title |
---|
W. D. HANSON: "Modified Seed Maturation Rates and Seed Yield Potentials in Soybean", Crop Physiology & Metabolism * |
ZHANG, HAN: "Research on Remote Measurement Methods and Technologies for Apple Fruit Growth Information", China Masters' Theses Full-text Database (Information Science and Technology) * |
ZHAO, QINGBEI: "Research on Object Detection with Improved SSD", China Masters' Theses Full-text Database (Information Science and Technology) * |
GAO, YANXIA et al.: "Research on Soybean Recognition Based on Particle Swarm Optimization and Neural Network", Information Automation * |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110100774A (en) * | 2019-05-08 | 2019-08-09 | 安徽大学 | River crab male and female recognition methods based on convolutional neural networks |
CN110110702A (en) * | 2019-05-20 | 2019-08-09 | 哈尔滨理工大学 | UAV obstacle avoidance algorithm based on an improved SSD object detection network |
CN110443778B (en) * | 2019-06-25 | 2021-10-15 | 浙江工业大学 | Method for detecting irregular defects of industrial products |
CN110443778A (en) * | 2019-06-25 | 2019-11-12 | 浙江工业大学 | A method of detection industrial goods random defect |
CN110602411A (en) * | 2019-08-07 | 2019-12-20 | 深圳市华付信息技术有限公司 | Method for improving quality of face image in backlight environment |
CN110839366A (en) * | 2019-10-21 | 2020-02-28 | 中国科学院东北地理与农业生态研究所 | Soybean plant seed tester and phenotype data acquisition and identification method |
CN110839366B (en) * | 2019-10-21 | 2024-07-09 | 中国科学院东北地理与农业生态研究所 | Soybean plant seed tester and phenotype data acquisition and identification method |
CN111126402A (en) * | 2019-11-04 | 2020-05-08 | 北京海益同展信息科技有限公司 | Image processing method and device, electronic equipment and storage medium |
CN111126402B (en) * | 2019-11-04 | 2023-11-03 | 京东科技信息技术有限公司 | Image processing method and device, electronic equipment and storage medium |
CN110781870A (en) * | 2019-11-29 | 2020-02-11 | 东北农业大学 | Milk cow rumination behavior identification method based on SSD convolutional neural network |
CN111597868A (en) * | 2020-01-08 | 2020-08-28 | 浙江大学 | SSD-based substation disconnecting switch state analysis method |
CN111652012A (en) * | 2020-05-11 | 2020-09-11 | 中山大学 | Curved surface QR code positioning method based on SSD network model |
CN111833310B (en) * | 2020-06-17 | 2022-05-06 | 桂林理工大学 | Surface defect classification method based on neural network architecture search |
CN111833310A (en) * | 2020-06-17 | 2020-10-27 | 桂林理工大学 | Surface defect classification method based on neural network architecture search |
CN112232263A (en) * | 2020-10-28 | 2021-01-15 | 中国计量大学 | Tomato identification method based on deep learning |
CN112232263B (en) * | 2020-10-28 | 2024-03-19 | 中国计量大学 | Tomato identification method based on deep learning |
CN117975172A (en) * | 2024-03-29 | 2024-05-03 | 安徽农业大学 | Method and system for constructing and training whole pod recognition model |
CN117975172B (en) * | 2024-03-29 | 2024-07-09 | 安徽农业大学 | Method and system for constructing and training whole pod recognition model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109684967A (en) | A kind of soybean plant strain stem pod recognition methods based on SSD convolutional network | |
Amara et al. | A deep learning-based approach for banana leaf diseases classification | |
CN114897816B (en) | Mask R-CNN mineral particle identification and particle size detection method based on improved Mask | |
CN111340141A (en) | Crop seedling and weed detection method and system based on deep learning | |
CN110070008A (en) | Bridge disease identification method adopting unmanned aerial vehicle image | |
CN110349145A (en) | Defect inspection method, device, electronic equipment and storage medium | |
CN112102229A (en) | Intelligent industrial CT detection defect identification method based on deep learning | |
Bukhari et al. | Assessing the impact of segmentation on wheat stripe rust disease classification using computer vision and deep learning | |
CN110929944A (en) | Wheat scab disease severity prediction method based on hyperspectral image and spectral feature fusion technology | |
CN110307903B (en) | Method for dynamically measuring non-contact temperature of specific part of poultry | |
CN111797760A (en) | Improved crop pest and disease identification method based on Retianet | |
CN115861170A (en) | Surface defect detection method based on improved YOLO V4 algorithm | |
CN111340019A (en) | Grain bin pest detection method based on Faster R-CNN | |
CN111882555B (en) | Deep learning-based netting detection method, device, equipment and storage medium | |
CN113420614A (en) | Method for identifying mildewed peanuts by using near-infrared hyperspectral images based on deep learning algorithm | |
CN116977960A (en) | Rice seedling row detection method based on example segmentation | |
CN117636314A (en) | Seedling missing identification method, device, equipment and medium | |
CN116434066B (en) | Deep learning-based soybean pod seed test method, system and device | |
CN117253192A (en) | Intelligent system and method for silkworm breeding | |
CN114596244A (en) | Infrared image identification method and system based on visual processing and multi-feature fusion | |
CN107016401B (en) | Digital camera image-based rice canopy recognition method | |
Deemyad et al. | HSL Color Space for Potato Plant Detection in the Field | |
CN109472771A (en) | Detection method, device and the detection device of maize male ears | |
Sundaram et al. | Weedspedia: Deep Learning-Based Approach for Weed Detection using R-CNN, YoloV3 and Centernet | |
CN118172676B (en) | Farmland pest detection method based on quantum deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20190426 |