CN109684967A - Soybean plant stem and pod recognition method based on an SSD convolutional network - Google Patents

Soybean plant stem and pod recognition method based on an SSD convolutional network

Info

Publication number
CN109684967A
Authority
CN
China
Prior art keywords
ssd
image
layer
soybean plant
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811540821.9A
Other languages
Chinese (zh)
Inventor
宁姗
陈海涛
王业成
史乃煜
王星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeast Agricultural University
Original Assignee
Northeast Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeast Agricultural University
Priority to CN201811540821.9A
Publication of CN109684967A
Current legal status: Pending

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06V: Image or Video Recognition or Understanding
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/35: Categorising the entire scene, e.g. birthday party or wedding scene

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for recognizing the stems and pods of soybean plants based on an SSD convolutional network, comprising the following steps: collecting images of individual soybean plants to build a soybean plant image library; manually annotating stems and pods, labeling the unoccluded pod tips as pods and the exposed stem segments as stems, and dividing the image library without repetition into a training set, a validation set, and a test set; applying random image enhancement and data augmentation to the annotated training-set images and automatically re-annotating the newly generated images; constructing an SSD convolutional network that uses feature maps from different levels for multi-scale detection; randomly sampling the training set to train the SSD convolutional neural network and determining the learning parameters of the network; and feeding the test set into the trained SSD convolutional neural network for recognition testing, marking the recognition results on the original test images. The method recognizes soybean stems and pods intelligently through network training, offers a high degree of automation, and effectively improves the efficiency of stem and pod detection in soybean plants.

Description

A method for recognizing the stems and pods of soybean plants based on an SSD convolutional network

Technical Field

The invention relates to the technical field of computer image processing and recognition methods, and more particularly to a method for recognizing the stems and pods of soybean plants based on an SSD convolutional network.

Background Art

Soybean is an important grain and oil crop worldwide and a major source of high-quality protein for humans. It is one of China's principal crops and among its most economically valuable. Soybean seed testing, which collects, organizes, and tabulates trait data for whole soybean plants, is an important step in analyzing the genetics and breeding of soybean crops. At present, soybean seed testing is performed mainly by hand; manual operation not only consumes considerable manpower and material resources but also introduces human error, making later data analysis inaccurate.

Summary of the Invention

The purpose of the present invention is to overcome the deficiencies of the prior art by providing a method for recognizing the stems and pods of soybean plants based on an SSD convolutional network. Applying machine vision to soybean seed testing improves the accuracy of plant trait data, reduces human error, shortens the testing cycle, lowers labor intensity, and moves the work toward intelligent, rapid, and accurate operation.

To solve the above technical problems, the present invention adopts the following technical scheme:

A method for recognizing the stems and pods of soybean plants based on an SSD convolutional network is provided, comprising the following steps:

S1. Fix a Canon 5D Mark II camera 120 cm from a blue background cloth and capture images of individual soybean plants to obtain a soybean plant image library;

S2. Traverse all sample images in the image library of step S1 and manually annotate pods and stems in each sample image, labeling unoccluded pod tips as the pod class and exposed stem segments as the stem class, obtaining the original image set;

S3. Apply random image enhancement and data augmentation to the annotated training-set images from step S2. Image enhancement uses adaptive histogram equalization; data augmentation uses random adjustment of the RGB color channels within a threshold, horizontal and vertical mirror flips, and random rotation and translation, with rotated and translated images cropped about their centers. Annotations whose targets fall outside the image boundary after processing are discarded, yielding an enhanced, augmented training set;

S4. Construct an SSD convolutional network and perform multi-scale detection with feature maps of different levels;

S5. Feed the training samples from steps S2 and S3 into the SSD convolutional neural network for iterative pre-training to obtain a pre-trained model, while determining the learning parameters of the SSD convolutional neural network;

S6. Feed the test set into the trained SSD convolutional neural network for recognition testing, and take classification results with confidence greater than 40% as the output recognition results for the test samples.

In the soybean stem and pod recognition method based on an SSD convolutional network of the present invention, a fusion layer is added as a residual structure that passes low-level detail directly to higher layers, making full use of the SSD network's selection of feature maps at different levels for multi-scale detection and compensating for the poor accuracy of existing methods on deformed, occluded, and continuously overlapping objects. The invention retains the advantages of convolutional neural networks while reducing interference from the image background and ambient brightness; it is strongly resistant to occlusion and overlap and improves the accuracy of soybean stem and pod detection.

Preferably, in step S2 pods and stems are manually annotated in each sample image, with unoccluded pod tips labeled as the pod class and exposed stem segments as the stem class. Annotating only partial features of each target improves recognition accuracy under occlusion and overlap.

Preferably, random image enhancement and data augmentation are applied to the annotated training-set images from step S2: adaptive histogram equalization for image enhancement; random adjustment of the RGB color channels within a threshold, horizontal and vertical mirror flips, and random rotation and translation for data augmentation, with rotated and translated images cropped about their centers; and discarding of annotations whose targets fall outside the image boundary after processing.
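A minimal sketch of such an augmentation pipeline follows, assuming OpenCV and NumPy; the function names, parameter ranges, and the (xmin, ymin, xmax, ymax) box format are illustrative choices rather than the patent's reference implementation.

```python
import cv2
import numpy as np

def enhance(img):
    """Adaptive histogram equalization (CLAHE) on the luminance channel."""
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab[:, :, 0] = clahe.apply(lab[:, :, 0])
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

def augment(img, boxes, rng=np.random):
    """Random channel shift, mirror flips, rotation/translation with center crop.
    boxes: float array of shape (N, 4) holding (xmin, ymin, xmax, ymax)."""
    h, w = img.shape[:2]
    # Random per-channel intensity shift within a threshold (here +/-20 levels).
    shift = rng.randint(-20, 21, size=3)
    img = np.clip(img.astype(np.int16) + shift, 0, 255).astype(np.uint8)
    boxes = boxes.copy()
    # Horizontal / vertical mirror flips, updating box coordinates to match.
    if rng.rand() < 0.5:
        img = img[:, ::-1]
        boxes[:, [0, 2]] = w - boxes[:, [2, 0]]
    if rng.rand() < 0.5:
        img = img[::-1]
        boxes[:, [1, 3]] = h - boxes[:, [3, 1]]
    # Random rotation about the image center plus random translation,
    # warped back into the original (w, h) canvas (the center crop).
    M = cv2.getRotationMatrix2D((w / 2, h / 2), rng.uniform(-15, 15), 1.0)
    M[0, 2] += rng.uniform(-0.1, 0.1) * w
    M[1, 2] += rng.uniform(-0.1, 0.1) * h
    img = cv2.warpAffine(np.ascontiguousarray(img), M, (w, h))
    # Transform the four corners of every box and re-enclose them.
    corners = np.stack([boxes[:, [0, 1]], boxes[:, [2, 1]],
                        boxes[:, [0, 3]], boxes[:, [2, 3]]], axis=1)   # (N,4,2)
    ones = np.ones((*corners.shape[:2], 1))
    rot = np.concatenate([corners, ones], axis=2) @ M.T                # (N,4,2)
    boxes = np.concatenate([rot.min(axis=1), rot.max(axis=1)], axis=1)
    # Discard annotations whose targets now exceed the image boundary.
    keep = (boxes[:, 0] >= 0) & (boxes[:, 1] >= 0) & \
           (boxes[:, 2] <= w) & (boxes[:, 3] <= h)
    return img, boxes[keep]
```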

Preferably, the SSD model of step S4 is built by adding one fusion layer and four convolutional layers to the VGG-16 network; the training model of step S4 is established by the following steps:

S41. Take a soybean plant sample image as input and obtain feature maps by convolving the image in the convolutional layers;

S42. Add an Add4_3 layer to the VGG-16 network. Add4_3 is formed by fusing (Add) the two feature maps Maxpool3 and Conv4_2, applying ReLU activation, and normalizing with Batch Normalization (BN); Add4_3 then serves as the input of the Conv4_3 layer. The feature maps of the Conv4_3, Fc7, and Conv8_2 through Conv11_2 layers are convolved with 3×3 kernels to output, respectively, the confidence scores for classification and the localization information for regression;

S43. Merge all outputs and obtain the detection results through non-maximum suppression.

Selecting feature maps at six different levels for multi-scale detection retains detection on high-level feature maps while adding fusion of low-level feature maps. This makes full use of the rich image detail in the low-level feature maps, achieves robust, interference-resistant target detection, and solves the detection and localization problems posed by deformation, occlusion, and continuous overlap.

Preferably, when the confidence scores for classification are output, each bounding box generates confidences for two classes; when the localization information for regression is output, each bounding box generates four coordinate values (x, y, w, h).
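These per-box outputs fix the channel count of each 3×3 detection head: a feature-map location with k default boxes needs k×(2+4) output channels. This arithmetic is inferred from the counts above rather than stated in the patent:

```python
def head_channels(k_default_boxes, num_classes=2, box_coords=4):
    """Output channels of one 3x3 detection head location: k * (classes + coords)."""
    return k_default_boxes * (num_classes + box_coords)

print(head_channels(4))  # Conv4_3 head: 4 default boxes -> 24 output channels
print(head_channels(6))  # Fc7 head: 6 default boxes -> 36 output channels
```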

Preferably, the feature maps in step S41 are computed as follows:

Step 1: Divide the feature map output by the Conv4_3 layer into 76×38 cells, with four default bounding boxes per cell. Each default bounding box is convolved with a 3×3 kernel, outputting four box elements, namely the horizontal and vertical coordinates x and y of the box's upper-left corner and the width w and height h of the box produced by the box-regression layer, together with the confidences that the object in the box is a pod or a stem;

Step 2: Apply the same method as Step 1 in turn to the feature maps output by the Fc7 layer and the Conv8_2 through Conv11_2 layers, which are divided into 38×19, 20×10, 10×5, 6×3, and 1×1 cells respectively, with 6, 6, 6, 4, and 4 default bounding boxes per cell.

Preferably, the pre-trained model of step S4 has a training error below 15% and an average test error below 20%.

Compared with the prior art, the beneficial effects of the present invention are as follows:

(1) The invention detects and localizes soybean stems and pods with high accuracy, good stability, strong resistance to interference, and broad applicability; its high detection accuracy on deformed, occluded, and continuously overlapping targets makes it suitable for application in soybean plant trait detection systems.

(2) The invention retains the advantages of convolutional neural networks, reduces interference from the image background and ambient brightness, resists occlusion and mutual overlap, and improves the accuracy of soybean stem and pod detection.

Brief Description of the Drawings

Figure 1 is a flow chart of the soybean plant stem and pod recognition method based on an SSD convolutional network;

Figure 2 is a detailed flow chart of step S4;

Figure 3 is a sample image of a soybean plant after recognition by the present invention.

Detailed Description of Embodiments

The accompanying drawings are for illustration only and shall not be construed as limiting this patent. To better illustrate the embodiment, parts of the drawings may be omitted, enlarged, or reduced and do not represent actual sizes; those skilled in the art will understand that certain well-known structures and their descriptions may be omitted from the drawings.

The present invention is further described below with reference to the accompanying drawings and specific embodiments.

Referring to Figure 1, this embodiment is a first embodiment of the soybean plant stem and pod recognition method based on an SSD convolutional network of the present invention, comprising the following steps:

S1. Fix a Canon 5D Mark II camera 120 cm from a blue background cloth and capture images of individual soybean plants to obtain a soybean plant image library;

S2. Traverse all sample images in the image set of step S1 and manually annotate pods and stems in each sample image, labeling unoccluded pod tips as the pod class and exposed stem segments as the stem class, obtaining the original training set;

S3. Apply random image enhancement and data augmentation to the annotated training-set images from step S2. Image enhancement uses adaptive histogram equalization; data augmentation uses random adjustment of the RGB color channels within a threshold, horizontal and vertical mirror flips, and random rotation and translation, with rotated and translated images cropped about their centers. Annotations whose targets fall outside the image boundary after processing are discarded, yielding an enhanced, augmented training set;

Referring to Figure 2, step S4 of this embodiment constructs an SSD convolutional network and performs multi-scale detection with feature maps of different levels;

S5. Feed the training samples from steps S2 and S3 into the SSD convolutional neural network for iterative pre-training to obtain a pre-trained model, while determining the learning parameters of the SSD convolutional neural network;

S6. Feed the test set into the trained SSD convolutional neural network for recognition testing, and take classification results with confidence greater than 40% as the output recognition results for the test samples.
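A minimal sketch of this confidence filter follows, assuming detections arrive as (label, confidence, xmin, ymin, xmax, ymax) tuples; the names and tuple layout are illustrative assumptions:

```python
CONF_THRESHOLD = 0.40  # only classification results above 40% confidence are kept

def filter_detections(detections):
    """Keep detections whose confidence exceeds the 40% threshold of step S6."""
    return [d for d in detections if d[1] > CONF_THRESHOLD]

# Example: [("pod", 0.83, ...), ("stem", 0.35, ...)] -> only the pod box survives.
```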

In step S1, a Canon 5D Mark II camera fixed 120 cm from a blue background cloth captures images of individual soybean plants, yielding the soybean plant image library. Specifically, the plant sample image set stores sample data in the following form:

{image_name, x, y}

where image_name is the soybean plant image name, x is the horizontal pixel dimension of the image, and y is the vertical pixel dimension.

In step S2, all sample images in the image library of step S1 are traversed and pods and stems are manually annotated in each image, with unoccluded pod tips labeled as the pod class and exposed stem segments as the stem class, yielding the original image set. Specifically, every ground-truth box in each plant sample image is annotated to form an image annotation set, which stores the annotation data in the following form:

{label, xmin, ymin, xmax, ymax}

where label is the annotated class, xmin and ymin are the horizontal and vertical coordinates of the annotation's minimum pixel, and xmax and ymax are the horizontal and vertical coordinates of the annotation's maximum pixel.
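For concreteness, the two storage forms might be populated as follows; the file name and box coordinates are hypothetical, and the 5616×3744 frame size is an assumption based on the Canon 5D Mark II's full-resolution output:

```python
# One image record and its annotation records (hypothetical example values).
image_record = {"image_name": "soybean_0001.jpg", "x": 5616, "y": 3744}
annotations = [
    {"label": "pod",  "xmin": 1024, "ymin": 860, "xmax": 1110, "ymax": 915},
    {"label": "stem", "xmin": 2730, "ymin": 400, "xmax": 2795, "ymax": 1980},
]
```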

In step S4, the pre-trained model is established by the following steps:

S41. Take a soybean plant sample image as input and obtain feature maps by convolving the image in the convolutional layers;

S42. Add an Add4_3 layer to the VGG-16 network. Add4_3 is formed by fusing (Add) the two feature maps Maxpool3 and Conv4_2, applying ReLU activation, and normalizing with Batch Normalization (BN); Add4_3 then serves as the input of the Conv4_3 layer. The feature maps of the Conv4_3, Fc7, and Conv8_2 through Conv11_2 layers are convolved with 3×3 kernels to output, respectively, the confidence scores for classification and the localization information for regression (a structural sketch of the fusion is given after step S43);

S43. Merge all outputs and obtain the detection results through non-maximum suppression, where the confidence output for classification is the per-class confidence of each predicted box and the localization output for regression is the four coordinate values (x, y, w, h) of each predicted box.
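The following is a minimal PyTorch sketch of the Add4_3 fusion of step S42: element-wise addition of the Maxpool3 and Conv4_2 feature maps, then ReLU, then Batch Normalization. The 1×1 projection is an assumption for the case where the two inputs carry different channel counts, which the patent does not address:

```python
import torch
import torch.nn as nn

class Add4_3(nn.Module):
    def __init__(self, pool_channels=256, conv_channels=512):
        super().__init__()
        # Project Maxpool3 (256 channels in the description) up to Conv4_2's
        # 512 channels so the element-wise Add is well defined.
        self.project = nn.Conv2d(pool_channels, conv_channels, kernel_size=1)
        self.bn = nn.BatchNorm2d(conv_channels)

    def forward(self, maxpool3, conv4_2):
        fused = self.project(maxpool3) + conv4_2  # Add fusion of the two maps
        return self.bn(torch.relu(fused))         # ReLU activation, then BN

# Usage: feed the result to the Conv4_3 layer as its input feature map.
```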

The feature maps in step S41 are computed as follows:

Step 1: Divide the feature map output by the Conv4_3 layer into 76×38 cells, with four default bounding boxes per cell. Each default bounding box is convolved with a 3×3 kernel, outputting four box elements, namely the horizontal and vertical coordinates x and y of the box's upper-left corner and the width w and height h of the box produced by the box-regression layer, together with the confidences that the object in the box is a pod or a stem;

Step 2: Apply the same method as Step 1 in turn to the feature maps output by the Fc7 layer and the Conv8_2 through Conv11_2 layers, which are divided into 38×19, 20×10, 10×5, 6×3, and 1×1 cells respectively, with 6, 6, 6, 4, and 4 default bounding boxes per cell.
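Summing the grid sizes and per-cell box counts of Steps 1 and 2 gives the total number of default boxes the detector evaluates per image; the check below is an illustrative calculation, not part of the patent:

```python
# (cells wide, cells high, default boxes per cell) for the six detection layers
# Conv4_3, Fc7, Conv8_2, Conv9_2, Conv10_2, Conv11_2.
layers = [(76, 38, 4), (38, 19, 6), (20, 10, 6), (10, 5, 6), (6, 3, 4), (1, 1, 4)]
total = sum(w * h * k for w, h, k in layers)
print(total)  # 17460 default boxes, each scored for pod/stem and regressed to (x, y, w, h)
```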

Selecting feature maps at six different levels for multi-scale detection retains detection on high-level feature maps while adding fusion of low-level feature maps. This makes full use of the rich image detail in the low-level feature maps, achieves robust, interference-resistant target detection, and solves the detection and localization problems posed by deformation, occlusion, and continuous overlap.

The partial VGG-16 network structure with the added residual structure in this embodiment is as follows:

First layer: 64 convolution filters of size 3×3 are applied twice in succession with stride 1 and padding 1, giving two 600×300×64 convolutional layers (Conv1_1, Conv1_2). The convolutional output is normalized with a BN layer (batch normalization), activated with the ReLU function (Rectified Linear Units) as the nonlinear activation, and finally pooled with a max-pooling layer (Maxpooling) of window size 2×2 and sampling stride 2.

Second layer: 128 convolution filters of size 3×3 are applied twice in succession with stride 1 and padding 1, giving two 300×150×128 convolutional layers (Conv2_1, Conv2_2). The output is normalized with BN, activated with ReLU, and pooled with a 2×2 max-pooling layer with sampling stride 2.

Third layer: 256 convolution filters of size 3×3 are applied three times in succession with stride 1 and padding 1, giving three 150×75×256 convolutional layers (Conv3_1, Conv3_2, Conv3_3). The output is normalized with BN, activated with ReLU, and pooled with a 2×2 max-pooling layer with sampling stride 2.

Fourth layer: 512 convolution filters are applied twice at size 3×3, once at size 1×1, and once more at size 3×3, with stride 1 and padding 1, giving the 76×38×512 layers Conv4_1, Conv4_2, Add4_3, and Conv4_3 (three convolutional layers plus the Add4_3 fusion layer). The output is normalized with BN, activated with ReLU, and pooled with a 2×2 max-pooling layer with sampling stride 2.

Fifth layer: 512 convolution filters of size 3×3 are applied three times in succession with stride 1 and padding 1, giving three 38×19×512 convolutional layers (Conv5_1, Conv5_2, Conv5_3). The output is normalized with BN and activated with ReLU.

Next, 1024 convolution filters of size 3×3 with stride 1 and padding 1 are applied to the output of Conv5_3 to obtain the Fc6 layer of size 38×19×1024, and then 1024 convolution filters of size 1×1 with stride 1 and padding 1 are applied to Fc6 to obtain the Fc7 layer of size 38×19×1024.

Finally, four convolutional layers are appended after the Fc7 layer: a Conv8 layer of size 20×10×512, a Conv9 layer of 10×5×256, a Conv10 layer of 6×3×256, and a Conv11 layer of 1×1×256.
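As a compact cross-check of the walk-through above, the six feature maps feeding the detection heads and their sizes (width, height, channels) can be summarized as follows; the description names the appended layers Conv8 through Conv11, which the detection steps refer to as Conv8_2 through Conv11_2:

```python
# Detection source layers and their feature-map sizes, read from the text above.
detection_sources = {
    "Conv4_3":  (76, 38, 512),
    "Fc7":      (38, 19, 1024),
    "Conv8_2":  (20, 10, 512),
    "Conv9_2":  (10, 5, 256),
    "Conv10_2": (6, 3, 256),
    "Conv11_2": (1, 1, 256),
}
```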

The pre-trained model of step S4 has a training error below 15% and an average test error below 20%. The model training error is calculated as follows:

Step 1: Match each ground-truth box to the default bounding box with which its jaccard-coefficient overlap is greatest, and additionally match a default bounding box to any ground-truth box with which its jaccard overlap exceeds 0.5;

Step 2: Let i index the default boxes, j the ground-truth boxes, and p the classes, with 0 denoting the background, 1 the pod, and 2 the stem. The indicator $x_{ij}^{p} \in \{0, 1\}$ records whether default box i is matched to ground-truth box j: it equals 1 when the maximum jaccard overlap with the ground-truth box exceeds the threshold, and 0 otherwise.

Step 3: The total objective loss function L(x, c, l, g) is the weighted sum of the localization loss $L_{loc}$ and the confidence loss $L_{conf}$:

$$L(x, c, l, g) = \frac{1}{N}\bigl(L_{conf}(x, c) + \alpha\, L_{loc}(x, l, g)\bigr)$$

where N is the number of default bounding boxes matched to ground-truth boxes, $L_{loc}$ is the localization loss, $L_{conf}$ is the confidence loss, x denotes the training samples, c the per-class confidences, l the predicted boxes, g the ground-truth boxes, and α the weight, set to 0.8 in this embodiment;

In the localization loss $L_{loc}$, f(x) is a piecewise smooth function controlled by σ; d denotes a default box; w and h are the width and height of a ground-truth or default bounding box; i indexes the i-th default box and j the j-th ground-truth box; m ranges over the location components of a box (cx and cy, the x- and y-coordinates of the center point; w, the box width; h, the box height); and p denotes the p-th class:

$$L_{loc}(x, l, g) = \sum_{i \in Pos}^{N} \sum_{m \in \{cx, cy, w, h\}} x_{ij}^{p}\, f\bigl(l_i^{m} - \hat{g}_j^{m}\bigr)$$

where

$$\hat{g}_j^{cx} = \frac{g_j^{cx} - d_i^{cx}}{d_i^{w}}, \quad \hat{g}_j^{cy} = \frac{g_j^{cy} - d_i^{cy}}{d_i^{h}}, \quad \hat{g}_j^{w} = \log\frac{g_j^{w}}{d_i^{w}}, \quad \hat{g}_j^{h} = \log\frac{g_j^{h}}{d_i^{h}},$$

$$f(x) = \begin{cases} \dfrac{(\sigma x)^2}{2}, & |x| < \dfrac{1}{\sigma^2} \\[4pt] |x| - \dfrac{1}{2\sigma^2}, & \text{otherwise.} \end{cases}$$

The confidence loss $L_{conf}$ is a multi-class softmax loss, where $\hat{c}_i^{p}$ denotes the predicted probability that the i-th default box, matched to the j-th ground-truth box, belongs to class p:

$$L_{conf}(x, c) = -\sum_{i \in Pos}^{N} x_{ij}^{p} \log \hat{c}_i^{p} - \sum_{i \in Neg} \log \hat{c}_i^{0}, \qquad \hat{c}_i^{p} = \frac{\exp(c_i^{p})}{\sum_{p}\exp(c_i^{p})}$$
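The matching and loss computation above can be sketched in NumPy as follows; the box format, the σ default, and the simplified negative handling are assumptions for illustration, not the patent's reference implementation:

```python
import numpy as np

def jaccard(a, b):
    """IoU (jaccard coefficient) between one box `a` and an (N, 4) array `b`,
    both in (xmin, ymin, xmax, ymax) form."""
    ix = np.maximum(0.0, np.minimum(a[2], b[:, 2]) - np.maximum(a[0], b[:, 0]))
    iy = np.maximum(0.0, np.minimum(a[3], b[:, 3]) - np.maximum(a[1], b[:, 1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + \
            (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1]) - inter
    return inter / union

def match(ground_truth, defaults, threshold=0.5):
    """Steps 1-2: best default box per ground truth, plus any default box
    whose overlap with a ground truth exceeds the threshold."""
    matches = {}
    for j, gt in enumerate(ground_truth):
        overlaps = jaccard(gt, defaults)
        for i in np.flatnonzero(overlaps > threshold):
            matches[int(i)] = j
        matches[int(overlaps.argmax())] = j  # best match takes precedence
    return matches

def smooth_l1(x, sigma=1.0):
    """Piecewise smooth localization penalty f(x) controlled by sigma."""
    x = np.abs(x)
    return np.where(x < 1.0 / sigma**2, 0.5 * (sigma * x)**2, x - 0.5 / sigma**2)

def total_loss(loc_err, class_logits, class_targets, alpha=0.8):
    """Step 3: L = (L_conf + alpha * L_loc) / N over the N matched boxes.
    loc_err: (N, 4) offsets l - g_hat; class_logits: (N, 3) scores for
    background/pod/stem; class_targets: (N,) integer labels."""
    n = max(len(loc_err), 1)
    l_loc = smooth_l1(loc_err).sum()
    z = class_logits - class_logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)  # softmax c_hat
    l_conf = -np.log(probs[np.arange(len(probs)), class_targets]).sum()
    return (l_conf + alpha * l_loc) / n
```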

The method of the invention accurately detects and localizes the pods and stems in soybean plant images, resists interference such as occlusion and overlap, and improves the accuracy of soybean stem and pod detection. The recognition result for a sample image is shown in Figure 3.

Obviously, the above embodiments of the present invention are merely examples given to illustrate the invention clearly and do not limit its embodiments. Those of ordinary skill in the art may make changes or variations of other forms on the basis of the above description; it is neither necessary nor possible to enumerate all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (7)

1. A method for recognizing the stems and pods of soybean plants based on an SSD convolutional network, characterized by comprising the following steps:
S1, fixing a Canon 5D Mark II camera 120 cm from a blue background cloth and capturing images of individual soybean plants to obtain a soybean plant image library;
S2, traversing all sample images in the image library of step S1, manually annotating pods and stems in each sample image, labeling unoccluded pod tips as the pod class and exposed stem segments as the stem class to obtain the original image library, and dividing the image library without repetition into a training set, a validation set, and a test set;
S3, applying random image enhancement and data augmentation to the annotated training-set images of step S2, performing image enhancement by adaptive histogram equalization, performing data augmentation by random adjustment of the RGB color channels within a threshold, horizontal and vertical mirror flips, and random rotation and translation, cropping the rotated and translated images about their centers, and discarding annotations whose targets exceed the image boundary after processing, to obtain an enhanced, augmented training set;
S4, constructing an SSD convolutional network and performing multi-scale detection with feature maps of different levels;
S5, feeding the training samples of steps S2 and S3 into the SSD convolutional neural network for iterative pre-training to obtain a pre-trained model, while determining the learning parameters of the SSD convolutional neural network;
S6, feeding the test set into the trained SSD convolutional neural network for recognition testing, and taking classification results with a confidence greater than 40% as the output recognition results of the test samples.
2. The method according to claim 1, wherein in step S2 each sample image is manually annotated for pods and stems, unoccluded pod tips being labeled as the pod class and exposed stem segments as the stem class.
3. The method according to claim 1, wherein the annotated training-set images of step S3 are subjected to random image enhancement and data augmentation: image enhancement by adaptive histogram equalization; data augmentation by random adjustment of the RGB color channels within a threshold, horizontal and vertical mirror flips, and random rotation and translation; center cropping of the rotated and translated images; and discarding of annotations whose targets exceed the image boundary after processing.
4. The method according to claim 1, wherein the SSD model of step S4 is constructed by adding one fusion layer and four convolutional layers to the VGG-16 network, and the training model of step S4 is established by the following steps:
S41, taking a soybean plant sample image as input and obtaining feature maps by convolution of the image in the convolutional layers;
S42, adding an Add4_3 layer to the VGG-16 network, the Add4_3 layer being formed by fusing (Add) the two feature maps Maxpool3 and Conv4_2, applying ReLU activation, and normalizing by Batch Normalization (BN), the Add4_3 layer serving as the input of the Conv4_3 layer, and convolving the feature maps of the Conv4_3, Fc7, and Conv8_2 through Conv11_2 layers with 3×3 convolution kernels to output, respectively, the confidence scores for classification and the localization information for regression;
S43, merging all outputs and obtaining the detection results through non-maximum suppression.
5. The method according to claim 4, wherein, when the confidence scores for classification are output, each box generates confidences for two classes, and when the localization information for regression is output, each box generates four coordinate values (x, y, w, h).
6. The method according to claim 4, wherein the feature maps in step S41 are processed as follows:
Step 1: dividing the feature map output by the Conv4_3 layer into 76×38 cells, each cell using four default bounding boxes, convolving each default bounding box with a 3×3 kernel, and outputting four box elements, namely the horizontal coordinate x and vertical coordinate y of the box's upper-left corner and the width w and height h of the box output by the box-regression layer, together with the confidences that the object in the box is a pod or a stem;
Step 2: applying the same method as Step 1 in turn to the feature maps output by the Fc7 layer and the Conv8_2 through Conv11_2 layers, the feature maps of those layers being divided into 38×19, 20×10, 10×5, 6×3, and 1×1 cells respectively, with 6, 6, 6, 4, and 4 default bounding boxes per cell.
7. The method according to claim 1, wherein the pre-trained model of step S4 has a training error of less than 15% and an average test error of less than 20%.
CN201811540821.9A 2018-12-17 2018-12-17 Soybean plant stem and pod recognition method based on an SSD convolutional network Pending CN109684967A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811540821.9A CN109684967A (en) Soybean plant stem and pod recognition method based on an SSD convolutional network

Publications (1)

Publication Number Publication Date
CN109684967A 2019-04-26

Family

ID=66187871

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811540821.9A Pending CN109684967A (en) Soybean plant stem and pod recognition method based on an SSD convolutional network

Country Status (1)

Country Link
CN (1) CN109684967A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012123626A (en) * 2010-12-08 2012-06-28 Toyota Central R&D Labs Inc Object detector and program
US20170084067A1 (en) * 2015-09-23 2017-03-23 Samsung Electronics Co., Ltd. Electronic device for processing image and method for controlling thereof
CN107315999A (en) * 2017-06-01 2017-11-03 范衠 A kind of tobacco plant recognition methods based on depth convolutional neural networks
CN107578050A (en) * 2017-09-13 2018-01-12 浙江理工大学 Automatic Classification and Recognition Method of Planthopper Species and Insect State at the Base of Rice Stem
CN108133186A (en) * 2017-12-21 2018-06-08 东北林业大学 A kind of plant leaf identification method based on deep learning
CN108288075A (en) * 2018-02-02 2018-07-17 沈阳工业大学 A kind of lightweight small target detecting method improving SSD
CN108564065A (en) * 2018-04-28 2018-09-21 广东电网有限责任公司 A kind of cable tunnel open fire recognition methods based on SSD
CN108592799A (en) * 2018-05-02 2018-09-28 东北农业大学 A kind of soybean kernel and beanpod image collecting device
CN108647652A (en) * 2018-05-14 2018-10-12 北京工业大学 A kind of cotton development stage automatic identifying method based on image classification and target detection

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
W. D. Hanson: "Modified Seed Maturation Rates and Seed Yield Potentials in Soybean", Crop Physiology & Metabolism *
张晗: "Research on Remote Measurement Methods and Technology for Apple Fruit Growth Information", China Masters' Theses Full-text Database (Information Science and Technology) *
赵庆北: "Research on Object Detection Based on an Improved SSD", China Masters' Theses Full-text Database (Information Science and Technology) *
高艳霞 et al.: "Research on Soybean Recognition Based on Particle Swarm Optimization and Neural Networks", Information & Automation *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110100774A (en) * 2019-05-08 2019-08-09 安徽大学 River crab male and female recognition methods based on convolutional neural networks
CN110110702A (en) * 2019-05-20 2019-08-09 哈尔滨理工大学 It is a kind of that algorithm is evaded based on the unmanned plane for improving ssd target detection network
CN110443778A (en) * 2019-06-25 2019-11-12 浙江工业大学 A method of detection industrial goods random defect
CN110443778B (en) * 2019-06-25 2021-10-15 浙江工业大学 A method for detecting irregular defects in industrial products
CN110602411A (en) * 2019-08-07 2019-12-20 深圳市华付信息技术有限公司 Method for improving quality of face image in backlight environment
CN110839366B (en) * 2019-10-21 2024-07-09 中国科学院东北地理与农业生态研究所 Soybean plant seed tester and phenotype data acquisition and identification method
CN110839366A (en) * 2019-10-21 2020-02-28 中国科学院东北地理与农业生态研究所 Soybean plant testing instrument and phenotypic data collection and identification method
CN111126402A (en) * 2019-11-04 2020-05-08 北京海益同展信息科技有限公司 Image processing method and device, electronic equipment and storage medium
CN111126402B (en) * 2019-11-04 2023-11-03 京东科技信息技术有限公司 Image processing method and device, electronic equipment and storage medium
CN110781870A (en) * 2019-11-29 2020-02-11 东北农业大学 Milk cow rumination behavior identification method based on SSD convolutional neural network
CN111597868A (en) * 2020-01-08 2020-08-28 浙江大学 A state analysis method of substation isolation switch based on SSD
CN111652012A (en) * 2020-05-11 2020-09-11 中山大学 A Surface QR Code Localization Method Based on SSD Network Model
CN111833310A (en) * 2020-06-17 2020-10-27 桂林理工大学 A Surface Defect Classification Method Based on Neural Network Architecture Search
CN111833310B (en) * 2020-06-17 2022-05-06 桂林理工大学 Surface defect classification method based on neural network architecture search
CN112232263A (en) * 2020-10-28 2021-01-15 中国计量大学 A method for tomato recognition based on deep learning
CN112232263B (en) * 2020-10-28 2024-03-19 中国计量大学 Tomato identification method based on deep learning
CN114724141A (en) * 2022-04-06 2022-07-08 东北农业大学 Machine vision-based soybean pod number statistical method
CN114724141B (en) * 2022-04-06 2024-11-01 东北农业大学 Soybean pod number statistical method based on machine vision
CN117975172A (en) * 2024-03-29 2024-05-03 安徽农业大学 Method and system for constructing and training whole pod recognition model
CN117975172B (en) * 2024-03-29 2024-07-09 安徽农业大学 Method and system for constructing and training whole pod recognition model

Similar Documents

Publication Publication Date Title
CN109684967A (en) Soybean plant stem and pod recognition method based on an SSD convolutional network
Amara et al. A deep learning-based approach for banana leaf diseases classification
CN107016405B (en) A Pest Image Classification Method Based on Hierarchical Prediction Convolutional Neural Network
CN109766856B (en) A dual-stream RGB-D Faster R-CNN method for recognizing the posture of lactating sows
WO2020177432A1 (en) Multi-tag object detection method and system based on target detection network, and apparatuses
CN113065558A (en) Lightweight small target detection method combined with attention mechanism
CN111179216B (en) Crop disease identification method based on image processing and convolutional neural network
CN111178197A (en) Mass R-CNN and Soft-NMS fusion based group-fed adherent pig example segmentation method
CN108805064A (en) A kind of fish detection and localization and recognition methods and system based on deep learning
CN111915704A (en) Apple hierarchical identification method based on deep learning
CN110472575A (en) A kind of string tomato maturation detection method based on deep learning and computer vision
CN114841961B (en) Wheat scab detection method based on image enhancement and improved YOLOv5
CN114972208B (en) YOLOv 4-based lightweight wheat scab detection method
CN110163798A (en) Fishing ground purse seine damage testing method and system
CN108898156A (en) A kind of green green pepper recognition methods based on high spectrum image
Singhi et al. Integrated YOLOv4 deep learning pretrained model for accurate estimation of wheat rust disease severity
CN108664979A (en) The construction method of Maize Leaf pest and disease damage detection model based on image recognition and application
CN117593252A (en) Crop disease identification method, computer equipment and storage medium
CN117636314A (en) Seedling missing identification method, device, equipment and medium
CN110378953B (en) A Method for Intelligently Identifying the Spatial Distribution Behavior in Pig Pens
Nayak et al. Improved Detection of Fusarium Head Blight in Wheat Ears Through SOLO Instance Segmentation
Khanal et al. Paddy Disease Detection and Classification Using Computer Vision Techniques: A Mobile Application to Detect Paddy Disease
Shetty et al. Corn Care: Plant Disease Defender.
Jadhaw et al. AUTOMATED DETECTION OF BANANA LEAF DISEASES USING DEEP LEARNING TECHNIQUES: A COMPARATIVE STUDY OF CNN ARCHITECTURES
Kim et al. HAFREE: A heatmap-based anchor-free detector for apple defects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20190426)