CN115512232A - Crop seed germination condition identification model, construction method and application thereof - Google Patents


Info

Publication number: CN115512232A (application CN202211479034.4A)
Authority: CN (China)
Prior art keywords: seed, image, seeds, germination, hyperspectral
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN115512232B (en)
Inventors: 陈渝阳, 朱旭华, 周希杰, 刘荣利, 王闯, 谢朝明
Original and current assignee: Zhejiang Top Cloud Agri Technology Co ltd
Application filed by Zhejiang Top Cloud Agri Technology Co ltd
Priority to CN202211479034.4A; published as CN115512232A; application granted and published as CN115512232B

Classifications

    • G06V20/194 — Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
    • G06N3/02, G06N3/08 — Neural networks; learning methods
    • G06V10/26 — Segmentation of patterns in the image field; clustering-based techniques; detection of occlusion
    • G06V10/40 — Extraction of image or video features
    • G06V10/764, G06V10/765 — Recognition using classification, e.g. using rules for classification or partitioning the feature space
    • G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/82 — Recognition using neural networks
    • G06V20/188 — Vegetation
    • Y02A40/10 — Adaptation technologies in agriculture


Abstract

The application provides a crop seed germination condition identification model, a construction method, and applications thereof. A hyperspectral image containing seeds is acquired, with a germination label marked for each seed; the seeds are placed on gridded paper when the hyperspectral image is shot. The hyperspectral image is segmented to obtain the position information of each seed, comprising the pixel coordinates belonging to the seed and a binary image of the seed position. The position information is matched with the germination labels, and a seed feature image is extracted for each seed from the hyperspectral image. Each seed feature image is adaptively padded to obtain a corresponding filled feature image. The filled feature images form the training set and are input into an improved deep convolutional neural network for training, yielding a crop seed germination condition identification model whose output is the seed germination condition. Combining hyperspectral segmented images with a deep neural network in this way improves the recognition rate of crop seed germination.

Description

Crop seed germination condition identification model, construction method and application thereof
Technical Field
The application relates to the field of image recognition, in particular to a crop seed germination condition recognition model, a construction method and application thereof.
Background
The quality of crop seed affects crop yield. At present, the germination rate of crop seeds is the main index of seed quality in China; the germination rate is the proportion of seeds able to germinate, under given conditions, to the total number of seeds tested. If a suitable method can effectively improve the seed germination recognition rate, inactive seeds can be removed as far as possible before sowing, bringing substantial economic benefit. Improving the seed germination recognition rate is therefore of great significance.
At present, methods for rapidly measuring the germination rate of seeds include conductivity measurement, the cold-soaking method, the hot-dipping method, the organoleptic method, the imbibition method, and others. However, these methods take a long time, require special equipment or reagents, and are cumbersome. In recent years, hyperspectral technology has also been widely applied to germination rate identification, and researchers have applied hyperspectral imaging to germination rate identification methods. For example, Chinese patent CN201010514132.8 discloses a grain moisture content detection method based on hyperspectral image technology; Chinese patent CN201210090171.9 discloses a hyperspectral reflectance image acquisition system and a corn seed purity nondestructive detection method based on it; and the journal Anhui Agricultural Science describes the application of hyperspectral technology to conventional crop seed vigor detection. These works adopt traditional recognition algorithms, which place high demands on feature selection, have certain limitations, cannot guarantee generalization, and cannot automatically learn accurate feature representations.
With the maturing of deep convolutional neural network research, deep learning has been widely applied to scene recognition and object detection, and recognition with deep convolutional networks has become both faster and more accurate. Researchers have also combined shallow neural networks with multispectral or hyperspectral imaging to improve the seed germination recognition rate; for example, CN201910247893.2 describes a method for detecting the germination capacity of crop seeds based on hyperspectral imaging and artificial neural networks. There, the hyperspectral image input to the network is not segmented in detail, and a shallow network is used, so representative semantic information cannot be learned.
Therefore, how to effectively combine the hyperspectral segmentation image and the deep convolutional neural network to improve the seed germination recognition rate becomes an urgent problem to be solved.
Disclosure of Invention
The embodiments of the application provide a crop seed germination condition identification model, a construction method, and applications thereof, which effectively combine hyperspectral segmented images and a deep convolutional neural network to improve the seed germination recognition rate.
In a first aspect, an embodiment of the present application provides a method for constructing a model for identifying a germination condition of crop seeds, including:
s1: acquiring hyperspectral images containing seeds, in which the germination label of each seed is marked; the seeds are placed on gridded paper when the hyperspectral images are shot;
s2: segmenting the hyperspectral images to obtain position information of each seed, wherein the position information comprises pixel coordinates contained in the seeds and binary images of seed positions;
s3: matching the position information with the germination labels, and extracting a seed characteristic image of each seed in the hyperspectral image;
s4: adaptively filling each seed characteristic image to obtain a corresponding filling characteristic image;
s5: and the filling characteristic image is used as a training set, the training set is input into an improved deep convolutional neural network for training to obtain a crop seed germination condition recognition model, and the seed germination condition is used as an output result.
In a second aspect, an embodiment of the present application provides a crop seed germination condition identification model, which is constructed according to the construction method of the crop seed germination condition identification model.
In a third aspect, an embodiment of the present application provides an identification method for increasing the identification rate of the germination condition of crop seeds, including the following steps: acquiring a hyperspectral image of the seed to be detected, inputting the hyperspectral image into the crop seed germination condition identification model to obtain an output result, and judging the germination condition of the seed to be detected based on the output result.
In a fourth aspect, an embodiment of the present application provides an identification apparatus for increasing the identification rate of the germination condition of crop seeds, including: the acquisition unit is used for acquiring a hyperspectral image of the seed to be detected; and the detection unit is used for inputting the hyperspectral image into the constructed crop seed germination condition identification model to obtain an output result, and judging the germination condition of the seed to be detected based on the output result.
In a fifth aspect, an embodiment of the present invention provides a readable storage medium storing a computer program; the computer program includes program code for controlling a process to execute a method, the method being either the construction method of the crop seed germination condition identification model or the identification method for improving the recognition rate of the crop seed germination condition.
The main contributions and innovation points of the invention are as follows:
the segmentation algorithm adopted by the embodiments of the application fully filters noise regions and fully segments the target region information, laying a solid foundation for the subsequent label matching and model training; the deep convolutional network of this scheme has the following characteristics: 1. Variable input sizes: the network accepts inputs of variable size, which reduces the computation of the network as well as training time and hardware cost; 2. Feature fusion: one-dimensional feature fusion is performed after the optimal features are selected in the deep convolutional network, without increasing the computation or parameter count of the network; 3. Adaptive padding: without changing the size of the original picture, an image conforming to the minimum input size of the network is obtained, reducing training time and increasing recognition speed.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flow chart of a method for constructing a model for identifying germination conditions of crop seeds according to the present invention.
FIG. 2 is a hyperspectral gray scale map of a seed of the invention.
FIG. 3 is a fusion graph of thresholding and adaptive thresholding in accordance with the invention.
FIG. 4 is a preliminary segmentation binary image according to the present invention.
FIG. 5 is a binary image of the grid area within the paper of the present invention.
FIG. 6 is a binary image of the seed position segmentation in each sheet of paper according to the present invention.
FIG. 7 is a graph corresponding to the results of each seed and germination of the present invention.
FIG. 8 is a seed adaptive scaling pad graph for an input network of the present invention.
FIG. 9 is a schematic diagram of the seed adaptive scaling process of the present invention.
Fig. 10 is a schematic structural diagram of a crop seed germination recognition model according to the present invention.
Fig. 11 is a schematic view of an electronic device according to the present embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of one or more embodiments of the specification, as detailed in the claims which follow.
It should be noted that: in other embodiments, the steps of the corresponding methods are not necessarily performed in the order shown and described in this specification. In some other embodiments, the method may include more or fewer steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps for description in other embodiments; multiple steps described in this specification may be combined into a single step in other embodiments.
Example one
The application aims to provide a method for improving the recognition rate of crop seed germination by combining hyperspectral segmented images and a deep neural network; the crops referred to in this scheme may be rice, barley, and the like. It is worth mentioning that, after the deep convolutional neural network is improved, the adaptively filled feature images are used to train it, yielding a crop seed germination condition recognition model that improves the recognition rate; the model is then used on test crop images to accurately judge whether seeds will germinate.
Referring to fig. 1, the method for constructing the crop seed germination condition identification model provided by the present disclosure includes:
s1: acquiring hyperspectral images containing seeds, in which the germination label of each seed is marked; the seeds are placed on gridded paper when the hyperspectral images are shot;
s2: segmenting the hyperspectral images to obtain position information of each seed, wherein the position information comprises pixel coordinates contained in the seeds and binary images of seed positions;
s3: matching the position information with the germination labels, and extracting a seed characteristic image of each seed in the hyperspectral image;
s4: adaptively filling each seed characteristic image to obtain a corresponding filling characteristic image;
s5: and the filling characteristic image is used as a training set, the training set is input into an improved deep convolutional neural network for training to obtain a crop seed germination condition recognition model, and the seed germination condition is used as an output result.
In step S1, the specific steps of acquiring the hyperspectral image containing seeds are: a batch of seeds is selected and placed on gridded paper, and the paper is placed on the shooting platform of a hyperspectral device. In an environment with stable lighting conditions, a hyperspectral image containing the seeds is shot by a hyperspectral camera after black-and-white correction; the camera covers the wavelength range 400-1000 nm across 462 bands.
In a specific example, 1500 rice seeds are selected and handled in batches of 96; a batch is placed on paper with a 24 × 4 grid, one seed per cell, and the paper is then placed on the shooting platform of the hyperspectral device. In an environment with stable illumination, a RESONON PIKA XC2 hyperspectral camera is used for shooting after black-and-white correction; the camera covers 400-1000 nm across 462 bands, and a hyperspectral image of the seeds is obtained.
According to the scheme, the hyperspectral images are selected, and the hyperspectral images can obtain the characteristic information of a plurality of wave bands, so that the subsequent deep convolutional neural network is facilitated to learn more representative characteristics, and the seed germination recognition rate is improved.
It should be noted that the seeds for capturing the hyperspectral images are in an ungerminated state, and a standard germination test needs to be performed on the seeds to obtain the germination condition of each seed, and the germination condition is used as a germination label of the seeds. Wherein the germination label of the germinated seeds is marked as 1, and the germination label of the ungerminated seeds is marked as 0.
Because the germination condition of each seed needs to be marked, the seeds that were photographed are then soaked in 55 °C warm water for 10 minutes, after which the water temperature is lowered to 30 °C and the seeds soak for 3 hours. The seeds are then taken out and placed in a constant-temperature illuminated incubator for a germination test to obtain the germination result of each seed: a seed that grows a shoot of 3 mm or more has germinated; otherwise it has not.
In step S2, the originally shot hyperspectral image may contain irrelevant information such as paper edge lines, grid lines, and impurities (the grid lines here being the grid of the console), and cannot be input directly into the deep convolutional neural network for training. The hyperspectral image therefore needs to be segmented to extract the position information of each seed and remove the irrelevant information, which speeds up the training of the deep convolutional neural network and improves the recognition performance of the model. However, since the hyperspectral image cannot be segmented directly, it is split into a grayscale image per band; splitting into seed grayscale images does not affect the position information of the seeds in the image.
Specifically, the step S2 further includes the steps of:
s21: the hyperspectral Image is split to obtain a gray level Image of each wave band, threshold segmentation and self-adaptive segmentation operations are carried out on the gray level Image to obtain a segmentation binary Image, and the segmentation binary Image is fused to obtain a first Image1.
In this step, threshold segmentation performs poorly on unevenly illuminated images, so adaptive segmentation is used to reduce the influence of illumination, and the two segmentation results are fused to obtain a better coarse segmentation, improving the efficiency of the subsequent seed segmentation.
As shown in fig. 2 and 3, fig. 2 is a schematic diagram of a grayscale image of the hyperspectral image, and fig. 3 is a schematic diagram of the fused first image. In a specific example, the threshold for threshold segmentation is 40, and the adaptive segmentation uses a local-neighborhood block-mean algorithm with a binary threshold of 150.
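The two coarse segmentations and their fusion can be sketched in plain NumPy. This is an illustration only: the patent does not give its exact operations, so the 3 × 3 neighbourhood, the mean offset, and the AND fusion operator below are assumptions.

```python
import numpy as np

def global_threshold(gray, t=40):
    """Global binary threshold: pixel > t -> 255, else 0 (threshold 40 as in the text)."""
    return np.where(gray > t, 255, 0).astype(np.uint8)

def adaptive_threshold(gray, block=3, c=0):
    """Local-neighbourhood block-mean adaptive threshold: compare each pixel
    against the mean of its block x block neighbourhood (a simplified sketch)."""
    pad = block // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    h, w = gray.shape
    means = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            means[i, j] = padded[i:i + block, j:j + block].mean()
    return np.where(gray > means - c, 255, 0).astype(np.uint8)

def fuse(b1, b2):
    """Fuse the two segmentation results; a logical AND keeps only pixels both
    methods call foreground (the fusion operator is not specified in the text)."""
    return np.where((b1 > 0) & (b2 > 0), 255, 0).astype(np.uint8)
```

An OR would instead union the two masks; which combination works better depends on how conservative each segmentation is.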
S22: carrying out corrosion and expansion operations on the first Image1 to remove all grid lines in the first Image, and preliminarily segmenting a first binary Image2 which displays a seed position and a paper edge contour position; the schematic diagram of the first binary image is shown in fig. 4, where the grid lines refer to the console grid portions.
S23: performing hole filling and morphological operation on the first Image Image1 after threshold segmentation to obtain all pixel coordinates contained in a grid area, and further obtaining the grid area, wherein the grid area refers to a grid part on paper;
it is worth mentioning that, as shown in fig. 2, two grid portions, namely, a console grid portion and a paper grid portion, exist in the first image, useless information of the console grid portion is removed in step S22, and an area surrounded by the paper grid portion is acquired in step S23.
S24: drawing the grid area on a black image with the same size as the hyperspectral image according to the outline information of the grid area to obtain a second binary image mask with a white grid area and black other areas; the second binary image mask is shown in fig. 5.
S25: carrying out binary AND operation on each pixel in the first binary Image2 and the second binary Image mask to obtain a divided binary Image3 of the seeds in the grid area;
s26: and filtering the connected domain area of the obtained divided binary Image3 to remove impurities, so as to obtain well-divided position information, wherein the position information comprises pixel coordinates and seed position binary images contained in each seed, and the seed binary images are shown in fig. 6.
The segmentation scheme provided here reduces the segmentation difficulty. In particular, the correction idea accurately and effectively handles the label-association problem caused by adhesion when seeds are placed: even when seeds stick together or are otherwise placed non-standardly, they can still be extracted one by one. On the one hand, this improves segmentation accuracy and underpins the subsequent training of the classification model; on the other hand, it reduces the operational burden on users.
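The connected-domain area filtering of step S26 can be illustrated with a plain breadth-first labelling sketch. In practice a library routine would likely be used; the 4-connectivity and the `min_area` parameter here are illustrative assumptions.

```python
import numpy as np
from collections import deque

def filter_small_components(binary, min_area):
    """Remove connected foreground regions (4-connectivity) whose pixel count is
    below min_area, mimicking the impurity-filtering step on the binary image."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    out = np.zeros_like(binary)
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] == 0 or seen[sy, sx]:
                continue
            queue, component = deque([(sy, sx)]), []
            seen[sy, sx] = True
            while queue:                      # BFS over one connected component
                y, x = queue.popleft()
                component.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        queue.append((ny, nx))
            if len(component) >= min_area:    # keep seeds, drop small impurities
                for y, x in component:
                    out[y, x] = 255
    return out
```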
Furthermore, besides the filled feature image of each seed, the data input into the deep convolutional neural network also includes a germination label. To associate the germination test results with the segmented seeds in the hyperspectral image, the segmented seeds are ordered and the ordered seeds are then matched to the germination results. This automatic matching of seed data and labels reduces manual involvement and improves the accuracy of the process.
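The ordering used for this matching — row band by distance to the grid's right edge, then top to bottom within a band, as step S33 details — can be sketched as follows. `band_width` (the width of one row band) is a hypothetical parameter not named in the text.

```python
def sort_seeds(centers, right_x, top_y, band_width):
    """Order seed centres (x, y): first by which row band they fall in, judged
    from the distance to the grid's right edge, then by distance to the top edge."""
    def key(center):
        x, y = center
        band = int((right_x - x) // band_width)   # row band index from right edge
        return (band, y - top_y)                  # then top-to-bottom in the band
    return sorted(centers, key=key)
```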
Specifically, step S3 further includes the steps of:
s31: correcting the minimum matrix frame of the grid corresponding to the position information to obtain a grid matrix frame:
the grid herein refers to the grid in the paper, i.e. the grid area where 96 seeds are located. Since the grid part with seeds placed in the paper is skewed compared with the horizontal axis, the part of the operation is related to the label corresponding to the sequence of the seeds in the subsequent judgment according to the horizontal angle correction.
Calculating a minimum rectangular frame of the grid corresponding to the position information to obtain a rotation angle between the minimum rectangular frame and a horizontal axis, and correcting four vertex coordinates of the minimum rectangular frame through affine transformation to obtain vertex coordinates of the corrected minimum rectangular frame;
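The affine correction of the four vertices amounts to rotating points about a centre by the measured skew angle. A minimal NumPy sketch; the sign convention of the rotation is an assumption.

```python
import numpy as np

def rotate_points(points, angle_deg, center):
    """Rotate 2-D points about `center` by `angle_deg` (counter-clockwise),
    the affine map used to bring the minimum rectangle back to the horizontal."""
    t = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    pts = np.asarray(points, dtype=float) - center
    return pts @ rot.T + center
```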
s32: acquiring the distance from the central point of each seed to the boundary line of the grid matrix frame, and obtaining a positive external rectangular frame corresponding to each seed:
according to the vertex coordinates of the grid rectangular frame, taking the upper edge and the right edge of the grid rectangular frame as boundary lines, calculating the distance from the central point of each seed to the boundary line of the grid rectangular frame, and calculating the external rectangular frame of each seed;
s33: and sequencing the seeds according to the distance between the central point of each seed and the boundary line of the grid matrix frame:
dividing the obtained grid rectangular frame into several rows, judging which row the seeds belong to according to the distance from the center point of the seeds to the right edge of the grid, and then sequencing the seeds from small to large according to the distance from the center point of the seeds to the upper edge of the grid to obtain the sequenced seeds;
s34: judging whether the seeds are overlapped or not based on the sorted seeds, removing the overlapped seeds to obtain the sorted available seed information, wherein the seed information comprises the sequence number of each seed and the pixel coordinate contained in a rectangle just circumscribed to each seed;
s35: reading the germination labels of the seeds, and numbering the germination labels and the seeds in sequence one to one correspondence;
s36: according to the pixel coordinates and the rotation angle of a rectangle just externally connected to each seed, extracting features on each wave band of the hyperspectral image to obtain the hyperspectral features of each seed and a corresponding germination label, and obtaining a seed feature image formed by the hyperspectral features. The obtained seed feature image is shown in fig. 7. The seed characteristic graph is composed of hyperspectral characteristics of each pixel point of 96 seeds.
Further, since the image of each seed is small, adaptive scaling is used to generate a minimal padded image that does not change the aspect ratio of the seed image while satisfying the input requirement of the deep neural network. In step S4, each seed feature image is adaptively padded in this way to obtain the corresponding filled feature image.
As shown in fig. 9, the adaptive scaling formula is:

padding = (S − (N · min(H, W) / max(H, W)) mod S) / 2

where padding is the filling length applied to each of the upper and lower sides of the short edge of the seed feature image, the gray value of the filled part being 0; N is the input size required by the network; H and W are the height and width of the seed feature image; max(H, W) and min(H, W) are the maximum and minimum of H and W; S is the downsampling multiple of the deep convolutional neural network; and mod S denotes the remainder after division by S.
According to the filling mode, under the condition that the network output result is the same, the input area is greatly reduced compared with that of a common filling image, and therefore the calculation amount of the convolution operation of the part is reduced.
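From the variable definitions above (padding, N, H, W, S), the padding computation can be written as a small function. The exact formula in the source appears only as an image placeholder, so this "scale the long side to N, then pad the short side up to a multiple of S" interpretation is an assumption.

```python
def adaptive_padding(H, W, N, S):
    """Per-side padding of the short edge after scaling the long side to N,
    so the padded short side is a multiple of the downsampling factor S."""
    short_scaled = N * min(H, W) / max(H, W)  # short side after aspect-preserving scale
    remainder = short_scaled % S              # how far past the last multiple of S
    pad_total = (S - remainder) % S           # fill up to the next multiple of S
    return pad_total / 2                      # split equally between the two sides
```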
Through the improved deep convolutional neural network, the network can accept filled images of different sizes, the computation of the deep convolutional neural network is reduced, and the training time of the model is shortened. In addition, to balance the positive and negative training samples, the number of ungerminated seeds among the 1500 seeds is counted, an equal number of germinated seeds is randomly selected as the data sample, and the data sample is randomly divided into a training set, a validation set and a test set according to the proportion of (6).
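The class balancing and three-way split can be sketched as below. The 6:2:2 ratio is an assumption for illustration, since the ratio in the source text is garbled.

```python
import random

def balanced_split(pos, neg, ratios=(0.6, 0.2, 0.2), seed=0):
    """Downsample the larger class to the size of the smaller one, shuffle, and
    split into train/validation/test sets by the given ratios (assumed 6:2:2)."""
    rng = random.Random(seed)
    n = min(len(pos), len(neg))
    data = rng.sample(pos, n) + rng.sample(neg, n)  # balanced positives/negatives
    rng.shuffle(data)
    a = int(len(data) * ratios[0])
    b = a + int(len(data) * ratios[1])
    return data[:a], data[a:b], data[b:]
```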
In step S5, the deep convolutional neural network may be ResNet34: the ResNet34 architecture deepens the network without suffering from the vanishing-gradient problem and has achieved excellent classification results in many tasks, so ResNet34 is selected as the seed germination recognition network. The network is further improved on the basis of ResNet34: because the original network contains fully connected layers, the input image size must be fixed, whereas the seed images are small and of varying sizes. To allow the network to accept inputs of different sizes, a spatial pyramid pooling layer is added after the last residual structure of ResNet34, the original average pooling layer is removed, and the structure of the other layers is left unchanged, thereby constructing the improved deep convolutional neural network. That is, the scheme replaces the average pooling layer in the network with a spatial pyramid pooling layer.
The spatial pyramid pooling layer operates as follows: the number of multi-scale output features is 14, i.e. the feature map from the previous layer is divided into 1×1, 2×2 and 3×3 grids, a maximum pooling operation is applied to the region of the original feature map mapped by each grid cell to obtain that cell's feature value, and the resulting values are concatenated along the channel direction to obtain a 1×14 feature.
The structure of the improved deep learning convolutional network model is shown in fig. 10. The filled feature image is input into the ResNet34 residual modules to extract features; each feature map produced by the final residual module Conv5_x of the ResNet34 network is divided into 1×1, 2×2 and 4×4 grids, a maximum pooling operation is applied to each grid cell to obtain a feature value, and the feature values of all cells are concatenated along the channel direction to obtain a one-dimensional feature of length 21 × c. This feature is input into a fully connected layer to obtain two output values, and the larger of the two determines the final recognition result.
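The grid pooling itself can be sketched for a single channel as follows (illustrative, not the patent's implementation); for grids of 1×1, 2×2 and 4×4 it yields the 1 + 4 + 16 = 21 values per channel described above:

```python
def spp_features(fmap, grids=(1, 2, 4)):
    """Single-channel spatial pyramid pooling sketch: max-pool an
    H x W feature map (nested lists) over each g x g grid and
    concatenate the cell maxima. Assumes H, W >= max(grids)."""
    H, W = len(fmap), len(fmap[0])
    out = []
    for g in grids:
        for gy in range(g):
            for gx in range(g):
                y0, y1 = gy * H // g, (gy + 1) * H // g  # cell row range
                x0, x1 = gx * W // g, (gx + 1) * W // g  # cell column range
                out.append(max(fmap[y][x]
                               for y in range(y0, y1)
                               for x in range(x0, x1)))
    return out
```

Because the output length depends only on the grid sizes (and channel count), the network no longer needs a fixed input image size before the fully connected layer.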
The filled feature images obtained by adaptive scaling are taken as the input of the improved deep learning convolutional network, and the germination result corresponding to each seed as the output. Initialization parameters are set, Adam is selected as the optimizer, cross entropy as the loss function, and the network is trained. The number of iterations, together with the loss, training accuracy and validation accuracy obtained at each iteration, is plotted as a chart, and over-fitting or under-fitting of the training is prevented according to the chart, yielding a well-trained crop seed germination condition recognition model.
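As a minimal programmatic counterpart to reading such a chart, the check below flags iterations where training accuracy runs ahead of validation accuracy (the gap threshold is an assumption; the patent only describes inspecting the chart visually):

```python
def overfit_gap(train_acc, val_acc, threshold=0.1):
    """Return the iteration indices where training accuracy exceeds
    validation accuracy by more than `threshold` -- a simple signal
    of over-fitting. The 0.1 default threshold is an assumption."""
    return [i for i, (t, v) in enumerate(zip(train_acc, val_acc))
            if t - v > threshold]
```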
To verify that the trained models generalize well to other samples, each trained model must predict on a test set whose samples have the same form of content as the training set samples. The prediction results are compared with the actual germination results of the test set, counting the seeds for which germinated and ungerminated seeds are correctly predicted (TP and TN), seeds that actually germinated but were predicted ungerminated (FN), and seeds that were actually ungerminated but predicted germinated (FP). The model precision is then calculated from the TP and FP counts, and the model with the highest precision on the test set is selected as the optimal recognition model. The precision is calculated as follows, where P represents the precision rate:
P = TP / (TP + FP)
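The precision computation itself is a one-liner; a sketch treating germinated seeds (label 1) as the positive class:

```python
def precision(y_true, y_pred):
    """P = TP / (TP + FP), with 'germinated' (label 1) as the positive
    class; returns 0.0 when nothing is predicted positive."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fp) if tp + fp else 0.0
```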
The optimal recognition model obtained above performs germination recognition on the hyperspectral image of the rice seeds to be tested, quickly and accurately screening out germinated and ungerminated seeds: a returned value of 1 indicates the seed is predicted to germinate, and 0 indicates it is predicted not to germinate.
Example two
Based on the same concept, the present application also provides a crop seed germination condition identification model constructed according to the construction method described in the first embodiment.
EXAMPLE III
This scheme provides an identification method for improving the crop seed germination condition recognition rate, comprising the following steps:
acquiring a hyperspectral image of the seed to be detected, inputting the hyperspectral image into the crop seed germination condition identification model constructed in the second embodiment to obtain an output result, and judging the germination condition of the seed to be detected based on the output result.
Example four
Based on the same concept, the present application also provides an identification apparatus for improving the crop seed germination condition recognition rate, comprising:
the acquisition unit is used for acquiring a hyperspectral image of the seed to be detected;
the detection unit, used for inputting the hyperspectral image into the crop seed germination condition identification model constructed in the second embodiment to obtain an output result, and judging the germination condition of the seed to be detected based on the output result.
EXAMPLE five
The present embodiment further provides an electronic device, referring to fig. 11, comprising a memory 404 and a processor 402, wherein the memory 404 stores a computer program, and the processor 402 is configured to execute the computer program to perform the steps of any one of the above embodiments of the method for constructing a model for identifying a germination condition of crop seeds.
Specifically, the processor 402 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
Memory 404 may include mass storage for data or instructions. By way of example and not limitation, memory 404 may include a hard disk drive (HDD), a floppy disk drive, a solid state drive (SSD), flash memory, an optical disk, a magneto-optical disk, magnetic tape, a universal serial bus (USB) drive, or a combination of two or more of these. Memory 404 may include removable or non-removable (or fixed) media, where appropriate, and may be internal or external to the data processing apparatus. In a particular embodiment, the memory 404 is non-volatile memory. In particular embodiments, memory 404 includes read-only memory (ROM) and random access memory (RAM). The ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory (FLASH), or a combination of two or more of these, where appropriate. The RAM may be static random-access memory (SRAM) or dynamic random-access memory (DRAM), where the DRAM may be fast page mode DRAM (FPMDRAM), extended data output DRAM (EDODRAM), synchronous DRAM (SDRAM), or the like.
Memory 404 may be used to store or cache various data files for processing and/or communication use, as well as possibly computer program instructions for execution by processor 402.
The processor 402 reads and executes the computer program instructions stored in the memory 404 to implement the method for constructing the crop seed germination recognition model in any one of the above embodiments.
Optionally, the electronic apparatus may further include a transmission device 406 and an input/output device 408, where the transmission device 406 is connected to the processor 402, and the input/output device 408 is connected to the processor 402.
The transmitting device 406 may be used to receive or transmit data via a network. Specific examples of the network described above may include wired or wireless networks provided by communication providers of the electronic devices. In one example, the transmission device includes a Network adapter (NIC) that can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmitting device 406 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The input and output devices 408 are used to input or output information. In this embodiment, the input information may be a hyperspectral image or the like of the crop seed, and the output information may be the germination condition or the like.
Optionally, in this embodiment, the processor 402 may be configured to execute the following steps by a computer program:
S1: acquiring a hyperspectral image containing seeds, wherein a germination label of each seed is annotated in the hyperspectral image;
S2: segmenting the hyperspectral image to obtain position information of each seed, wherein the position information comprises the pixel coordinates contained in each seed and a binary image of the seed positions;
S3: matching the position information with the germination labels, and extracting a seed feature image for each seed in the hyperspectral image;
S4: adaptively filling each seed feature image to obtain a corresponding filled feature image;
S5: using the filled feature images as a training set, inputting the training set into an improved deep convolutional neural network for training to obtain a crop seed germination condition recognition model, with the seed germination condition as the output result.
It should be noted that, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiment and optional implementation manners, and details of this embodiment are not described herein again.
In general, the various embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects of the invention may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
Embodiments of the invention may be implemented by computer software executable by a data processor of the mobile device, such as in a processor entity, or by hardware, or by a combination of software and hardware. Computer software or programs (also referred to as program products) including software routines, applets and/or macros can be stored in any device-readable data storage medium and they include program instructions for performing particular tasks. The computer program product may comprise one or more computer-executable components configured to perform embodiments when the program is run. The one or more computer-executable components may be at least one software code or a portion thereof. Further in this regard it should be noted that any block of the logic flow as in the figures may represent a program step, or an interconnected logic circuit, block and function, or a combination of a program step and a logic circuit, block and function. The software may be stored on physical media such as memory chips or memory blocks implemented within the processor, magnetic media such as hard or floppy disks, and optical media such as, for example, DVDs and data variants thereof, CDs. The physical medium is a non-transitory medium.
It should be understood by those skilled in the art that various features of the above embodiments can be combined arbitrarily, and for the sake of brevity, all possible combinations of the features in the above embodiments are not described, but should be considered as within the scope of the present disclosure as long as there is no contradiction between the combinations of the features.
The above examples only express several embodiments of the present application, and the description thereof is more specific and detailed, but not to be construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A method for constructing a crop seed germination condition identification model is characterized by comprising the following steps:
S1: acquiring a hyperspectral image containing seeds, wherein the hyperspectral image is annotated with a germination label for each seed, the seeds being placed on paper bearing a grid when the hyperspectral image is captured;
S2: segmenting the hyperspectral image to obtain position information of each seed, wherein the position information comprises the pixel coordinates contained in each seed and a binary image of the seed positions;
S3: matching the position information with the germination labels, and extracting the seed features of each seed in the hyperspectral image to form a seed feature image;
S4: adaptively filling each seed feature image to obtain a corresponding filled feature image;
S5: using the filled feature images as a training set, inputting the training set into an improved deep convolutional neural network for training to obtain the crop seed germination condition recognition model, with the seed germination condition as the output result.
2. The method for constructing the crop seed germination condition recognition model according to claim 1, wherein the step S2 further includes the steps of:
S21: splitting the hyperspectral image to obtain a gray-scale image for each band, performing threshold segmentation and adaptive segmentation on the gray-scale images to obtain segmented binary images, and fusing the segmented binary images to obtain a first image;
S22: removing all grid lines in the first image to preliminarily segment a first binary image showing the seed positions and the paper edge contour positions;
S23: performing hole filling and morphological operations on the first image to obtain all pixel coordinates contained in the grid area, thereby obtaining the grid area;
S24: drawing the grid area on a black image of the same size as the hyperspectral image according to the contour information of the grid area, obtaining a second binary image in which the grid area is white and the other areas are black;
S25: performing a binary AND operation between each pixel of the first binary image and the second binary image mask to obtain a segmented binary image of the seeds within the grid area;
S26: filtering the connected-domain areas of the obtained segmented binary image to remove impurities, obtaining well-segmented position information comprising the pixel coordinates contained in each seed and the seed-position binary image.
3. The method for constructing the crop seed germination condition recognition model according to claim 1, wherein the step S3 further includes the steps of:
S31: correcting the minimum rectangular frame of the grid corresponding to the position information to obtain a grid rectangular frame;
S32: obtaining the distance from the center point of each seed to the boundary lines of the grid rectangular frame, and obtaining the upright circumscribed rectangular frame corresponding to each seed;
S33: sorting the seeds according to the distance from the center point of each seed to the boundary lines of the grid rectangular frame;
S34: judging, based on the sorted seeds, whether seeds overlap, and removing overlapping seeds to obtain sorted usable seed information, the seed information comprising the serial number of each seed and the pixel coordinates contained in the upright circumscribed rectangle of each seed;
S35: reading the germination labels of the seeds, and numbering the germination labels in one-to-one correspondence with the seeds in sequence;
S36: extracting features on each band of the hyperspectral image according to the pixel coordinates and the rotation angle of the upright circumscribed rectangle of each seed to obtain the hyperspectral features of each seed and the corresponding germination label.
4. The method of claim 3, wherein in step S31, the minimum rectangular frame of the grid corresponding to the position information is calculated to obtain a rotation angle between the minimum rectangular frame and a horizontal axis, and four vertex coordinates of the minimum rectangular frame are corrected by affine transformation to obtain vertex coordinates of the corrected minimum rectangular frame.
5. The method for constructing the crop seed germination condition recognition model according to claim 1, wherein each seed feature image is adaptively filled in an adaptive scaling manner to obtain a corresponding filled feature image.
6. The method for constructing the crop seed germination condition recognition model according to claim 1, wherein the deep learning convolutional neural network is ResNet34, a spatial pyramid pooling layer is added after the last residual structure of ResNet34, the original average pooling layer is removed, and the structure of other layers of the network is unchanged.
7. A model for identifying the germination of crop seeds, which is constructed by the method for constructing a model for identifying the germination of crop seeds according to any one of claims 1 to 6.
8. An identification method for improving the identification rate of the germination condition of crop seeds is characterized by comprising the following steps: acquiring a hyperspectral image of a seed to be detected, inputting the hyperspectral image into the crop seed germination condition identification model according to claim 7 to obtain an output result, and judging the germination condition of the seed to be detected based on the output result.
9. An identification device for improving the identification rate of the germination condition of crop seeds is characterized by comprising:
the acquisition unit is used for acquiring a hyperspectral image of the seed to be detected;
the detection unit, used for inputting the hyperspectral image into the crop seed germination condition identification model according to claim 7 to obtain an output result, and judging the germination condition of the seed to be detected based on the output result.
10. A readable storage medium, wherein a computer program is stored in the readable storage medium, the computer program comprising program code for controlling a process to execute the process, the process comprising the method for constructing a crop seed germination recognition model according to any one of claims 1 to 6 or the method for improving crop seed germination recognition rate according to claim 8.
CN202211479034.4A 2022-11-24 2022-11-24 Crop seed germination condition identification model, construction method and application thereof Active CN115512232B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211479034.4A CN115512232B (en) 2022-11-24 2022-11-24 Crop seed germination condition identification model, construction method and application thereof

Publications (2)

Publication Number Publication Date
CN115512232A true CN115512232A (en) 2022-12-23
CN115512232B CN115512232B (en) 2023-04-07

Family

ID=84514150

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211479034.4A Active CN115512232B (en) 2022-11-24 2022-11-24 Crop seed germination condition identification model, construction method and application thereof

Country Status (1)

Country Link
CN (1) CN115512232B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103636315A (en) * 2013-11-20 2014-03-19 华南理工大学 Hyperspectrum-based seed germination rate online-detection apparatus and method thereof
CN103745478A (en) * 2014-01-24 2014-04-23 山东农业大学 Machine vision determination method for wheat germination rate
WO2018084612A1 (en) * 2016-11-02 2018-05-11 한국식품연구원 System for measuring quality of rice, method for evaluating palatability of rice, system for predicting germination rate of grain and method for predicting germination rate
CN113129247A (en) * 2021-04-21 2021-07-16 重庆邮电大学 Remote sensing image fusion method and medium based on self-adaptive multi-scale residual convolution
CN113610101A (en) * 2021-06-17 2021-11-05 上海师范大学 Method for measuring germination rate of grains
WO2022160771A1 (en) * 2021-01-26 2022-08-04 武汉大学 Method for classifying hyperspectral images on basis of adaptive multi-scale feature extraction model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YU Shimiao et al.: "Research on prediction of rice seed germination rate based on hyperspectral images and bag-of-visual-words model", Laser & Optoelectronics Progress *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116612139A (en) * 2023-07-20 2023-08-18 浙江托普云农科技股份有限公司 High-precision seed germination rate determination method, system and device based on deep learning
CN116612139B (en) * 2023-07-20 2023-09-29 浙江托普云农科技股份有限公司 High-precision seed germination rate determination method, system and device based on deep learning
CN116994064A (en) * 2023-08-25 2023-11-03 河北地质大学 Seed lesion particle identification method and seed intelligent screening system
CN116994064B (en) * 2023-08-25 2024-02-27 河北地质大学 Seed lesion particle identification method and seed intelligent screening system

Also Published As

Publication number Publication date
CN115512232B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN115512232B (en) Crop seed germination condition identification model, construction method and application thereof
CN114120037B (en) Germinated potato image recognition method based on improved yolov5 model
CN109086826B (en) Wheat drought identification method based on image deep learning
CN109035289B (en) Purple soil image segmentation and extraction method based on Chebyshev inequality H threshold
CN109522899B (en) Detection method and device for ripe coffee fruits and electronic equipment
CN110427933A (en) A kind of water gauge recognition methods based on deep learning
CN111369605A (en) Infrared and visible light image registration method and system based on edge features
CN108073947B (en) Method for identifying blueberry varieties
CN116363520B (en) Landscape ecological detection system for urban green land planning
CN107871133A (en) The recognition methods of the optimization method, pavement disease of rim detection network and system
CN114078213A (en) Farmland contour detection method and device based on generation of confrontation network
CN114677525B (en) Edge detection method based on binary image processing
CN112700488A (en) Living body long blade area analysis method, system and device based on image splicing
CN115861823A (en) Remote sensing change detection method and device based on self-supervision deep learning
CN111626335A (en) Improved hard case mining training method and system of pixel-enhanced neural network
CN111724354A (en) Image processing-based method for measuring spike length and small spike number of multiple wheat
CN109166127B (en) Wearable plant phenotype sensing system
CN117635615A (en) Defect detection method and system for realizing punching die based on deep learning
CN115345880B (en) Corn ear character estimation method and device based on corn ear unilateral scanning map
CN115953352A (en) Peanut seed selection evaluation and classification method based on network model
CN113379620B (en) Optical remote sensing satellite image cloud detection method
CN112950479B (en) Image gray level region stretching algorithm
CN114359748A (en) Target classification extraction method and device
CN114049390A (en) Wheat seedling planting density measuring device and method based on machine vision
CN116597318B (en) Irrigation area cultivated land precise extraction method, equipment and storage medium based on remote sensing image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant