CN116310548A - Method for detecting invasive plant seeds in imported seed products

Method for detecting invasive plant seeds in imported seed products

Info

Publication number: CN116310548A
Authority: CN (China)
Prior art keywords: seed, image, seeds, model, semantic segmentation
Prior art date: 2023-03-17
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202310256931.7A
Other languages: Chinese (zh)
Inventor
乔曦
张子照
钱万强
黄亦其
刘博
刘聪辉
黄聪
万方浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Agricultural Genomics Institute at Shenzhen of CAAS
Original Assignee
Agricultural Genomics Institute at Shenzhen of CAAS
Priority date: 2023-03-17 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Filing date: 2023-03-17
Publication date: 2023-06-23
Application filed by Agricultural Genomics Institute at Shenzhen of CAAS
Priority to CN202310256931.7A
Publication of CN116310548A
Legal status: Pending


Classifications

    • G06V 10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning; classification, e.g. of video objects
    • G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06V 10/26: Image preprocessing; segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/28: Image preprocessing; quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V 10/44: Extraction of image or video features; local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/82: Arrangements for image or video recognition or understanding using neural networks
    • Y02A 90/40: Technologies having an indirect contribution to adaptation to climate change; monitoring or fighting invasive species


Abstract

The invention discloses a method for detecting invasive plant seeds in imported seed products, which mainly comprises the following steps: photographing all the seeds to obtain images; labeling the obtained seed images to establish a semantic segmentation data set and training a DeepLabV3 semantic segmentation model; performing mask processing on the segmented images; extracting seed boundary contours from the masked images, drawing circumscribed rectangular frames, and cropping to obtain single-seed images; labeling the cropped single-seed images to establish a seed data set and training a MobileNet image classification model; and detecting the seeds to be tested with the trained DeepLabV3 semantic segmentation model and MobileNet image classification model. Compared with the prior art, the invention has the advantage that combining semantic segmentation with image classification removes the background efficiently and extracts single seeds, thereby improving the accuracy of multi-species seed classification.

Description

Method for detecting invasive plant seeds in imported seed products
Technical Field
The invention relates to the field of invasive plant seed detection, and in particular to a method for detecting invasive plant seeds in imported seed products.
Background
With the continuing deepening of economic globalization, the volume of goods imported and exported between countries keeps growing; in the first half of 2022, China's imports and exports grew by 9.4%. Seed products account for a large proportion of the goods passing through customs. China imports large quantities of crop seed products from abroad every year, and among these huge numbers of seeds, other seeds are inevitably mixed in. Such seeds contribute nothing to agricultural production, and some even belong to invasive alien species. If they are not identified and managed, they can reproduce rapidly, rob native plants of nutrients and space, seriously threaten the survival of native organisms, even cause extinctions, and ultimately lead to ecological imbalance.
Rapid seed detection is a problem of wide concern, whether in production, import and export at home and abroad, or in experimental research, and it is important for seed producers and farmers in maintaining variety purity and crop yield. At home and abroad, seed detection mainly targets several similar seed species or different varieties of the same seed, and it is mainly performed by the naked eye or by chemical methods. Detection by the naked eye, however, has low accuracy and consumes time and labor; detection by chemical means easily causes irreversible damage to the seeds, and is costly and inefficient. Nondestructive, accurate seed detection using image recognition methods is therefore critical.
In order to extract morphological and colorimetric features, the image containing the samples to be detected must first be segmented. This is one of the most challenging steps in image processing, as it is difficult to separate the objects of interest from the background. The common approach is threshold segmentation, a traditional method with the advantages of simple implementation and low computational cost; however, for images with some complexity and interference, its segmentation results are unsatisfactory, which degrades the final classification.
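For reference, the traditional threshold segmentation mentioned above is only a few lines in OpenCV. A minimal sketch using Otsu's automatically chosen global threshold (the file name is a placeholder); a single global threshold of this kind is exactly what struggles on complex, cluttered backgrounds, which motivates the semantic segmentation approach below:

```python
import cv2

gray = cv2.imread("seeds.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder input image
# Otsu picks one global threshold for the whole image; pixels above it become
# 255 (foreground) and the rest 0 (background).
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("threshold_result.png", binary)
```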
Disclosure of Invention
The technical problem to be solved by the invention is to overcome the above technical defects and provide a method for detecting invasive plant seeds in imported seed products that enables nondestructive, accurate discrimination and improves efficiency.
In order to solve the above technical problems, the technical scheme provided by the invention is as follows: a method for detecting invasive plant seeds in imported seed products, the method comprising the following steps:
s1) randomly scattering imported seed products mixed with various invasive plant seeds on a laboratory bench, and photographing to obtain images of the seeds;
s2) labeling the original images containing all seeds obtained in S1 and establishing a semantic segmentation data set;
s3) training a DeepLabV3 semantic segmentation model with the semantic segmentation data set established in S2;
s4) inputting the original images containing all seeds obtained in S1 into the trained DeepLabV3 semantic segmentation model to obtain images segmented into background and seeds;
s5) performing mask processing on the segmented images obtained in S4;
s6) extracting seed boundary contours from the masked segmented images of S5, drawing the circumscribed rectangular frame of each seed, and cropping to obtain single-seed images;
s7) labeling the single-seed images obtained in S6 and constructing a seed data set;
s8) training a MobileNet image classification model with the seed data set constructed in S7;
s9) applying the trained DeepLabV3 semantic segmentation model and MobileNet image classification model to detect original images, containing all seeds, that did not participate in training.
Compared with the prior art, the invention has the following advantages. The method uses semantic segmentation to segment the image, eliminating the step of manually selecting a threshold; it is more universally applicable where many seed types are present, is less susceptible to interference from environmental conditions, is more robust, and achieves more automatic and accurate segmentation. The method applies several semantic segmentation networks and image classification networks to the seed classification task and selects the optimal combination, judged by segmentation quality and classification accuracy, for the final application, so that the best-performing technique is chosen and classification accuracy is maximized. The method relies on image processing, so the whole procedure cannot damage the interior of the seeds; it greatly saves labor and material resources and improves classification efficiency. With a deep learning model as its framework, the method automatically learns the internal regularities of the sample data and extracts features automatically, eliminating the subjective influence of human factors and achieving objective, fair, rigorous, and accurate classification.
Further, obtaining the original images containing all seeds in S1 specifically comprises: photographing the seeds to be detected with image acquisition equipment to obtain original images, wherein each original image contains seeds of a single species or a mixture of species, and the species and numbers of imported seed products and invasive plant seeds contained in each original image differ.
Further, labeling the original images acquired in S1 and establishing the semantic segmentation data set in S2 comprises the following steps: labeling the seed regions in each original image with the EISeg labeling software, dividing targets into two classes, the seeds being class one and the background defaulting to class zero; and organizing the generated annotation files into the VOC data set format, with the annotation information arranged in json format for input into the semantic segmentation network.
Further, training the DeepLabV3 semantic segmentation model with the semantic segmentation data set established in S2 comprises the following steps: constructing a DeepLabV3 semantic segmentation model; inputting the semantic segmentation data set into the model in the VOC input format, setting suitable BatchSize, Epoch, and LearnRate parameters, and training the model; saving the model weights at different iteration counts; and testing each trained, saved model on the test set and selecting the model with the best segmentation performance as the final DeepLabV3 segmentation model.
Further, constructing the DeepLabV3 semantic segmentation model in S3 comprises the following steps: building a DeepLabV3 network with a ResNet backbone and introducing an ASPP (Atrous Spatial Pyramid Pooling) module, which resamples a given feature layer effectively with atrous convolutions at different rates, so that convolution kernels with different receptive fields are constructed and multi-scale object information is obtained.
Further, masking the segmented image in S5 comprises the following steps: the segmented image obtained is a binary image in which the seed regions are white with pixel value 255 and the background is black with pixel value 0; performing an AND operation between the segmented image and the original image, so that the seeds are retained and the background is masked, giving a seed mask image on a black background; performing an inversion operation on the segmented image, turning the seed regions black and the background white, giving an inverted binary image; and performing an OR operation between the black-background seed mask image and the inverted binary image, so that the seeds are retained and the background becomes white, finally giving a seed mask image on a white background.
Further, extracting the seed boundary contours from the masked segmented image, drawing the circumscribed rectangular frame of each seed, and cropping to obtain single-seed images in S6 comprises the following steps: extracting the seed contours from the obtained white seed mask image with OpenCV, visualizing the contours on the image, and saving the contour information of each seed separately; drawing and saving the circumscribed rectangle of each saved seed contour with OpenCV; and cropping the seeds according to the saved circumscribed rectangles and saving them as single-seed images for the subsequent seed image classification.
Further, labeling the single-seed images and constructing the seed data set in S7 comprises the following step: sorting the cropped seed pictures into corresponding folders by seed category.
Further, training the MobileNet image classification model with the seed data set in S8 comprises the following steps: dividing the seed data set into a training set, a validation set, and a test set in the ratio 8:1:1; importing the divided training and validation data into a MobileNet image classification model, with an initial learning rate of 0.0001, BatchSize set to 32, and epoch set to 200, training several times and saving the training results; and testing each trained, saved model on the test set and selecting the model with the highest classification accuracy as the final image classification model.
Further, applying the trained DeepLabV3 semantic segmentation model and MobileNet image classification model in S9 to detect original images, containing all seeds, that did not participate in training comprises the following steps: inputting an original image that did not participate in training and contains all seeds into the final DeepLabV3 segmentation model to obtain a segmented image; performing mask processing on the segmented image; extracting the seed boundary contours from the masked segmented image, drawing the circumscribed rectangular frame of each seed, cropping to obtain a number of single-seed images, and counting them; inputting the single-seed images one by one into the final MobileNet image classification model to obtain a classification result for each, and counting the results per class; and comparing against the manual detection results to evaluate the detection method: if the number of single-seed images, their classification results, or the per-class counts do not meet expectations, the data set is adjusted and the corresponding model retrained; if expectations are met, the models are used for actual detection.
Drawings
FIG. 1 is a flow chart of a method for detecting invasive plant seeds in imported seed products according to the present invention.
FIG. 2 is an exemplary diagram of an image of an invasive plant seed.
FIG. 3 is a structural diagram of the DeepLabV3 semantic segmentation network.
FIG. 4 is an exemplary graph of the result of semantic segmentation of invasive plant seeds.
FIG. 5 is an exemplary graph of mask processing results after semantic segmentation of invasive plant seeds.
FIG. 6 is an exemplary graph of the result of invasive plant seed segmentation and cropping.
FIG. 7 is an exemplary confusion matrix for seed classification after cropping.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Referring to FIGS. 1-7, a method for detecting invasive plant seeds in imported seed products, as shown in FIG. 1, comprises the following steps:
s1) Photographing to obtain original images containing all seeds:
Photographs were taken of the collected seeds of 16 alien invasive plant species. The seeds of each invasive plant were scattered on a laboratory bench so as to be distributed completely at random and were photographed with a high-definition camera at a resolution of 8256×5504; about 1600 images were taken in total. The original images contain seeds of a single species or of mixed species, and the imported seed products and invasive plant seed species contained in each original image differ in kind and number. An example of invasive plant seed image data is shown in FIG. 2.
S2) Labeling the obtained original images and establishing a semantic segmentation data set:
The seed regions in each original image are labeled with the EISeg labeling software: a category is added in the category box and used as the seed-type label, while the background defaults to class zero. Seeds are clicked and selected manually on each picture; EISeg automatically selects the pixels likely to be labeled according to its model and visualizes them on the picture; the labeling result is fine-tuned according to the visualization and then saved. Saving generates the label image, a preview image of the labeling effect, and a coco-format file holding the annotation information of all images. A directory tree is created following the VOC data set format, and the generated label images and the original seed images are placed under the corresponding folders; a script then splits the coco-format annotation file into a separate annotation information file for each picture, and the files required by the VOC data set are stored under the specified directories so that they can be input into the semantic segmentation network in a uniform way.
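The split from one coco-format file into per-image label files can be scripted. A minimal sketch, assuming the EISeg export is a standard COCO JSON (the file and directory names are placeholders), using pycocotools to rasterize each annotation into a VOC-style class-label PNG:

```python
import os

import numpy as np
from PIL import Image
from pycocotools.coco import COCO

coco = COCO("eiseg_annotations.json")      # coco-format file saved by EISeg
os.makedirs("VOCdevkit/VOC2012/SegmentationClass", exist_ok=True)

for img_id in coco.getImgIds():
    info = coco.loadImgs(img_id)[0]
    # Start from an all-background (class 0) mask and paint seed pixels as class 1.
    mask = np.zeros((info["height"], info["width"]), dtype=np.uint8)
    for ann in coco.loadAnns(coco.getAnnIds(imgIds=img_id)):
        mask[coco.annToMask(ann) == 1] = 1
    name = os.path.splitext(info["file_name"])[0]
    Image.fromarray(mask).save(f"VOCdevkit/VOC2012/SegmentationClass/{name}.png")
```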
S3) Training the DeepLabV3 semantic segmentation model with the semantic segmentation data set, as shown in FIG. 3:
s31: Building the DeepLabV3 semantic segmentation model: a DeepLabV3 network is constructed with a ResNet backbone, and an ASPP (Atrous Spatial Pyramid Pooling) module is introduced, which resamples a given feature layer effectively with atrous convolutions at different rates, so that convolution kernels with different receptive fields are constructed and multi-scale object information is obtained. As shown in FIG. 3, the DeepLabV3 semantic segmentation network of the invention is structured as follows. Input layer: an RGB three-channel image of size 224×224. Initial convolution and pooling: the input image is convolved and pooled to give a feature map of 56×56 resolution. Blocks: the layer structure in the ResNet backbone; each Block contains 3×3 convolutions, each followed by an activation function. In Block4, however, the stride of the 3×3 convolution and of the 1×1 convolution on the shortcut branch is changed from 2 to 1, no downsampling is performed, and the 3×3 convolution becomes a dilated convolution. ASPP module: one ordinary 1×1 convolution, three 3×3 atrous convolutions with dilation rates 6, 12, and 18 respectively, and a global average pooling layer; the convolutions are applied and their outputs fused. Output layer: a 1×1 convolution kernel and a BN layer.
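The patent builds this network itself per FIG. 3; as a hedged stand-in, torchvision ships a prebuilt DeepLabV3 with a ResNet backbone and ASPP head of the same overall structure (note its default ASPP dilation rates differ slightly from the 6/12/18 described above). A minimal two-class sketch:

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Two classes: background (0) and seed (1).
model = deeplabv3_resnet50(num_classes=2)
model.eval()

x = torch.randn(1, 3, 224, 224)    # RGB three-channel input, as in the text
with torch.no_grad():
    out = model(x)["out"]          # logits of shape (1, 2, 224, 224)
pred = out.argmax(dim=1)           # per-pixel class labels
```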
s32: The semantic segmentation data set is input into the semantic segmentation model in the VOC input format, suitable BatchSize, Epoch, and LearnRate parameters are set, and the model is trained. The previously prepared VOC-format label images and annotation information are input into the model; the BatchSize is set to 4, the initial learning rate to 0.01, an SGD optimizer is used throughout, a learning-rate schedule that updates once per step is created, the epoch count is set to 100, and the model is trained.
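A hedged training-loop sketch with the stated settings (BatchSize 4, initial learning rate 0.01, SGD, one learning-rate update per step, 100 epochs). The random tensors stand in for the VOC-format data loader, which the patent does not spell out, and the momentum value and "poly" decay shape are assumptions (the text only says the rate is updated every step):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models.segmentation import deeplabv3_resnet50

# Placeholder data: 8 random 224x224 images with {0,1} per-pixel labels.
images = torch.randn(8, 3, 224, 224)
masks = torch.randint(0, 2, (8, 224, 224))
train_loader = DataLoader(TensorDataset(images, masks), batch_size=4)  # BatchSize 4

model = deeplabv3_resnet50(num_classes=2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # momentum assumed
criterion = nn.CrossEntropyLoss()          # per-pixel two-class loss

epochs = 100
total_steps = epochs * len(train_loader)
# "Poly" decay applied once per step, a common choice for DeepLab training.
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda step: (1 - step / total_steps) ** 0.9)

model.train()
for epoch in range(epochs):
    for imgs, targets in train_loader:
        optimizer.zero_grad()
        logits = model(imgs)["out"]        # (B, 2, H, W)
        loss = criterion(logits, targets)
        loss.backward()
        optimizer.step()
        scheduler.step()                   # one LR update per step
```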
s33: The model weights at different iteration counts are saved: during training, the evaluation index of the model is computed on the validation set every round, with mIoU used as the index of model quality. The model with the highest mIoU and its evaluation index are saved; whenever the mIoU of the current round exceeds the saved value, the saved model is replaced with the current one as the best model, and the best model is kept after all epochs have been iterated.
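A sketch of the mIoU bookkeeping just described: accumulate a pixel-level confusion matrix over the validation set, compute the mean intersection-over-union across classes, and keep the best checkpoint (the checkpoint path is a placeholder):

```python
import numpy as np

def update_confusion(conf, pred, target, num_classes=2):
    """Accumulate a pixel-level confusion matrix (rows: target, cols: prediction)."""
    valid = (target >= 0) & (target < num_classes)
    idx = num_classes * target[valid].astype(int) + pred[valid].astype(int)
    conf += np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)
    return conf

def mean_iou(conf):
    """Mean IoU across classes, computed from the confusion matrix."""
    intersection = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - intersection
    return float(np.mean(intersection / np.maximum(union, 1)))

# Per-epoch bookkeeping as described in s33 (best_miou starts at 0):
# if miou > best_miou:
#     best_miou = miou
#     torch.save(model.state_dict(), "best_deeplabv3.pth")
```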
S34: Each trained, saved model is tested on the test set, and the model with the best segmentation performance is selected as the final DeepLabV3 segmentation model. Iterating the DeepLabV3 semantic segmentation network gives its optimal model, which is then put through a segmentation test on the test set; the semantic segmentation network is evaluated by the size of the evaluation index and by the segmentation quality on actual pictures, and is used as the semantic segmentation network model for subsequent actual segmentation.
s4) The acquired original images containing all seeds are input into the trained DeepLabV3 semantic segmentation model to obtain images segmented into background and seeds; an example seed segmentation result is shown in FIG. 4.
S5) Masking the segmented image: the segmented binary image has white seed regions with pixel value 255 and a black background with pixel value 0. An AND operation is performed between the segmented binary image and the original image: ANDing the 255-valued seed pixels with the original seed pixel values yields the original seed pixel values, retaining the seeds; ANDing the 0-valued background pixels with the original background pixel values yields 0, masking the background; the result is a seed mask image on a black background. An inversion operation is then performed on the segmented binary image: the seed regions become black with pixel value 0 and the background becomes white with pixel value 255, giving an inverted binary image. Finally, an OR operation is performed between the black-background seed mask image and the inverted binary image: in the seed regions, the original seed pixel values are ORed with 0, which returns the original seed pixel values, so the seeds are retained; in the background, 0 is ORed with 255, which gives 255, so the background turns white. The final result is a seed mask image on a white background. An example of the seed masking result is shown in FIG. 5.
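The three bitwise operations map directly onto OpenCV calls. A minimal sketch, assuming the original photo and the binary segmentation are already on disk (file names are placeholders):

```python
import cv2

original = cv2.imread("seeds.jpg")                    # BGR original image
binary = cv2.imread("segmentation.png", cv2.IMREAD_GRAYSCALE)  # seed=255, bg=0

mask3 = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)      # 3-channel copy of the mask

# AND keeps original pixels where the mask is 255 and zeroes the rest:
# seeds retained, background suppressed -> seeds on a black background.
black_bg = cv2.bitwise_and(original, mask3)

# Inversion turns the seed region black and the background white.
inverted = cv2.bitwise_not(mask3)

# OR leaves seed pixels unchanged (x | 0 = x) and forces the background to
# 255 (0 | 255 = 255): the final seed mask image on a white background.
white_bg = cv2.bitwise_or(black_bg, inverted)
cv2.imwrite("seeds_white_bg.png", white_bg)
```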
S6) The seed boundary contours are extracted from the masked segmented image, the circumscribed rectangular frame of each seed is drawn, and cropping gives single-seed images. On the obtained white seed mask image, the seed contours are extracted with the findContours function of OpenCV according to the gradient changes of the pixel values, the contours are drawn on the original image through the contour-drawing API so that the contour extraction can be inspected visually, and the contour information of each seed is stored separately in a list. The rectangle-fitting function boundingRect of OpenCV receives the coordinate information of the saved contour points, and the circumscribed rectangle of each saved seed contour is drawn by iterating over the contour list and saved. The seeds are then cropped according to the saved circumscribed rectangles and saved as single-seed pictures for the subsequent seed image classification. An example result after seed segmentation and cropping is shown in FIG. 6.
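A sketch of this step with OpenCV. findContours expects a binary image, so the contours are taken from the segmentation mask while the crops are cut from the white-background image produced in S5 (file names are placeholders):

```python
import cv2

binary = cv2.imread("segmentation.png", cv2.IMREAD_GRAYSCALE)  # seed=255, bg=0
white_bg = cv2.imread("seeds_white_bg.png")                    # from the masking step

# RETR_EXTERNAL keeps only the outer contour of each seed;
# CHAIN_APPROX_SIMPLE compresses the contour points.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for i, contour in enumerate(contours):
    x, y, w, h = cv2.boundingRect(contour)          # circumscribed rectangle
    crop = white_bg[y:y + h, x:x + w].copy()        # crop before drawing the box
    cv2.imwrite(f"seed_{i:04d}.png", crop)          # one picture per seed
    cv2.rectangle(white_bg, (x, y), (x + w, y + h), (0, 0, 255), 2)  # visualize

cv2.imwrite("seeds_boxed.png", white_bg)
print(f"{len(contours)} single-seed images saved")
```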
S7) The single-seed images are labeled and the seed data set is constructed: the cropped seed pictures are sorted into corresponding folders by seed category, and the folders are named in a uniform format.
S8) The MobileNet image classification model is trained with the seed data set: the seed data set is divided into a training set, a validation set, and a test set in the ratio 8:1:1; the divided training and validation data are imported into a MobileNet image classification model, with the initial learning rate chosen as 0.0001, BatchSize set to 32, and epoch set to 200; training is run several times and the results saved. Each trained, saved model is tested on the test set to generate a confusion matrix, and the model with the highest classification accuracy is selected as the final image classification model. An example confusion matrix after seed classification is shown in FIG. 7.
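A hedged sketch of this step with the stated settings (8:1:1 split, learning rate 1e-4, batch size 32, 200 epochs). The one-folder-per-class layout from S7 is exactly what torchvision's ImageFolder expects; the patent does not fix the MobileNet version or the optimizer, so MobileNetV2 and Adam below are assumptions, as is the dataset path:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms, models

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
full = datasets.ImageFolder("seed_dataset/", transform=tf)  # one folder per class

n = len(full)
n_train, n_val = int(0.8 * n), int(0.1 * n)
train_set, val_set, test_set = random_split(
    full, [n_train, n_val, n - n_train - n_val])            # 8:1:1 split

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.mobilenet_v2(num_classes=len(full.classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(200):
    for imgs, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(imgs), labels)
        loss.backward()
        optimizer.step()
torch.save(model.state_dict(), "mobilenet_seeds.pth")
```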
S9) The trained DeepLabV3 semantic segmentation model and MobileNet image classification model are applied to detect original images, containing all seeds, that did not participate in training. The trained segmentation model and classification model are connected in series into an integrated classification system whose input is a seed picture to be classified and whose output is the cropped single-seed pictures together with their detection results. An original image that did not participate in training and contains all seeds is input into the final DeepLabV3 segmentation model to obtain a segmented image; mask processing is performed on the segmented image; the seed boundary contours are extracted from the masked segmented image, the circumscribed rectangular frame of each seed is drawn, cropping gives a number of single-seed images, and the single-seed images are counted; the single-seed images are input one by one into the final MobileNet image classification model to obtain a classification result for each, and the results per class are counted. These results are compared against the manual detection results, and the applicability of the models is evaluated from the segmentation and classification outcomes. If the number of segmented single-seed images differs from the number of seeds in the original image before segmentation by less than 10% and every seed is segmented completely, the segmentation model is considered accurate enough to be applied to actual detection; if the difference exceeds 10%, the error is considered large and the segmentation model must be retrained. If, after classification, the classification accuracy of every seed class computed from the confusion matrix is above 85%, the classification model is considered effective and can be used for actual detection; if the classification accuracy of most seed classes is below 85%, the classification model is considered invalid and must be retrained with readjusted parameters; if only a few seed classes fall below 85%, the model is considered insufficiently sensitive to those classes, the seed image data sets of those classes are augmented and cleaned accordingly (for example, single-seed images are flipped, scaled, and translated, and incomplete seed images are removed), and the classification model is retrained with the modified data set.
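To make the series connection concrete, a hedged end-to-end sketch of the S9 pipeline (segment, find contours, crop, classify, count), reusing the pieces above. The weight paths, file names, 16-class count (per S1), and resize-only preprocessing are assumptions:

```python
import collections

import cv2
import numpy as np
import torch
from torchvision import models, transforms
from torchvision.models.segmentation import deeplabv3_resnet50

# Load the final models selected above (paths are placeholders).
seg = deeplabv3_resnet50(num_classes=2)
seg.load_state_dict(torch.load("best_deeplabv3.pth"))
seg.eval()
clf = models.mobilenet_v2(num_classes=16)   # 16 invasive species, per S1
clf.load_state_dict(torch.load("mobilenet_seeds.pth"))
clf.eval()

img = cv2.imread("unseen_seeds.jpg")        # image not used in training
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

to_tensor = transforms.ToTensor()
with torch.no_grad():
    x = to_tensor(rgb).unsqueeze(0)
    # Binary seed mask from the segmentation logits (argmax over 2 classes).
    pred = seg(x)["out"].argmax(1)[0].numpy().astype(np.uint8) * 255

contours, _ = cv2.findContours(pred, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
prep = transforms.Compose([transforms.ToPILImage(),
                           transforms.Resize((224, 224)),
                           transforms.ToTensor()])
counts = collections.Counter()
with torch.no_grad():
    for c in contours:
        x0, y0, w, h = cv2.boundingRect(c)
        crop = rgb[y0:y0 + h, x0:x0 + w]    # single-seed crop
        counts[int(clf(prep(crop).unsqueeze(0)).argmax())] += 1

print(len(contours), "seeds detected; per-class counts:", dict(counts))
```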
If the evaluation results of the segmentation model and the classification model meet the requirements, the method can be used for actual detection.
The invention and its embodiments have been described above without limitation, and the actual construction is not limited to the embodiments shown in the drawings. In summary, if one of ordinary skill in the art, informed by this disclosure, devises structural arrangements and embodiments similar to the technical solution without departing from the gist of the invention, they should fall within the scope of protection of the invention.

Claims (10)

1. A method for detecting invasive plant seeds in imported seed products, the method comprising the steps of:
s1) randomly scattering imported seed products mixed with various invasive plant seeds on a laboratory bench, and photographing to obtain images of the seeds;
s2) labeling the original images containing all seeds obtained in S1 and establishing a semantic segmentation data set;
s3) training a DeepLabV3 semantic segmentation model with the semantic segmentation data set established in S2;
s4) inputting the original images containing all seeds obtained in S1 into the trained DeepLabV3 semantic segmentation model to obtain images segmented into background and seeds;
s5) performing mask processing on the segmented images obtained in S4;
s6) extracting seed boundary contours from the masked segmented images of S5, drawing the circumscribed rectangular frame of each seed, and cropping to obtain single-seed images;
s7) labeling the single-seed images obtained in S6 and constructing a seed data set;
s8) training a MobileNet image classification model with the seed data set constructed in S7;
s9) applying the trained DeepLabV3 semantic segmentation model and MobileNet image classification model to detect original images, containing all seeds, that did not participate in training.
2. The method for detecting invasive plant seeds in imported seed products according to claim 1, wherein obtaining the original images containing all seeds in S1 specifically comprises: photographing the seeds to be detected with image acquisition equipment to obtain original images, wherein each original image contains seeds of a single species or a mixture of species, and the species and numbers of imported seed products and invasive plant seeds contained in each original image differ.
3. The method for detecting invasive plant seeds in imported seed products according to claim 1, wherein labeling the original images acquired in S1 and establishing the semantic segmentation data set in S2 comprises the following steps: labeling the seed regions in each original image with the EISeg labeling software, dividing targets into two classes, the seeds being class one and the background defaulting to class zero; and organizing the generated annotation files into the VOC data set format, with the annotation information arranged in json format for input into the semantic segmentation network.
4. The method for detecting invasive plant seeds in imported seed products according to claim 1, wherein training the DeepLabV3 semantic segmentation model with the semantic segmentation data set established in S2 comprises the following steps: constructing a DeepLabV3 semantic segmentation model; inputting the semantic segmentation data set into the model in the VOC input format, setting suitable BatchSize, Epoch, and LearnRate parameters, and training the model; saving the model weights at different iteration counts; and testing each trained, saved model on the test set and selecting the model with the best segmentation performance as the final DeepLabV3 segmentation model.
5. The method for detecting invasive plant seeds in imported seed products according to claim 4, wherein constructing the DeepLabV3 semantic segmentation model in S3 comprises the following steps: building a DeepLabV3 network with a ResNet backbone and introducing an ASPP (Atrous Spatial Pyramid Pooling) module, which resamples a given feature layer effectively with atrous convolutions at different rates, so that convolution kernels with different receptive fields are constructed and multi-scale object information is obtained.
6. The method for detecting invasive plant seeds in imported seed products according to claim 1, wherein masking the segmented image in S5 comprises the following steps: the segmented image obtained is a binary image in which the seed regions are white with pixel value 255 and the background is black with pixel value 0; performing an AND operation between the segmented image and the original image, so that the seeds are retained and the background is masked, giving a seed mask image on a black background; performing an inversion operation on the segmented image, turning the seed regions black and the background white, giving an inverted binary image; and performing an OR operation between the black-background seed mask image and the inverted binary image, so that the seeds are retained and the background becomes white, finally giving a seed mask image on a white background.
7. The method for detecting invasive plant seeds in imported seed products according to claim 1, wherein extracting the seed boundary contours from the masked segmented image, drawing the circumscribed rectangular frame of each seed, and cropping to obtain single-seed images in S6 comprises the following steps: extracting the seed contours from the obtained white seed mask image with OpenCV, visualizing the contours on the image, and saving the contour information of each seed separately; drawing and saving the circumscribed rectangle of each saved seed contour with OpenCV; and cropping the seeds according to the saved circumscribed rectangles and saving them as single-seed images for the subsequent seed image classification.
8. The method for detecting invasive plant seeds in imported seed products according to claim 1, wherein labeling the single-seed images and constructing the seed data set in S7 comprises the following step: sorting the cropped seed pictures into corresponding folders by seed category.
9. The method for detecting invasive plant seeds in imported seed products according to claim 1, wherein training the MobileNet image classification model with the seed data set in S8 comprises the following steps: dividing the seed data set into a training set, a validation set, and a test set in the ratio 8:1:1; importing the divided training and validation data into a MobileNet image classification model, with an initial learning rate of 0.0001, BatchSize set to 32, and epoch set to 200, training several times and saving the training results; and testing each trained, saved model on the test set and selecting the model with the highest classification accuracy as the final image classification model.
10. The method for detecting invasive plant seeds in imported seed products according to claim 1, wherein detecting original images, containing all seeds, that did not participate in training with the trained DeepLabV3 semantic segmentation model and MobileNet image classification model in S9 comprises the following steps: inputting an original image that did not participate in training and contains all seeds into the final DeepLabV3 segmentation model to obtain a segmented image; performing mask processing on the segmented image; extracting the seed boundary contours from the masked segmented image, drawing the circumscribed rectangular frame of each seed, cropping to obtain a number of single-seed images, and counting them; inputting the single-seed images one by one into the final MobileNet image classification model to obtain a classification result for each, and counting the results per class; and comparing against the manual detection results to evaluate the detection method: if the number of single-seed images, their classification results, or the per-class counts do not meet expectations, the data set is adjusted and the corresponding model retrained; if expectations are met, the models are used for actual detection.
CN202310256931.7A, filed 2023-03-17 (priority date 2023-03-17): Method for detecting invasive plant seeds in imported seed products. Status: Pending. Published as CN116310548A.

Priority Applications (1)

Application number: CN202310256931.7A; priority date: 2023-03-17; filing date: 2023-03-17; title: Method for detecting invasive plant seeds in imported seed products

Applications Claiming Priority (1)

Application number: CN202310256931.7A; priority date: 2023-03-17; filing date: 2023-03-17; title: Method for detecting invasive plant seeds in imported seed products

Publications (1)

Publication number: CN116310548A; publication date: 2023-06-23

Family

ID=86837564

Family Applications (1)

CN202310256931.7A (pending): Method for detecting invasive plant seeds in imported seed products; priority date 2023-03-17; filing date 2023-03-17

Country Status (1)

CN: CN116310548A

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117333400A (en) * 2023-11-06 2024-01-02 华中农业大学 Root box cultivated crop root system image broken root restoration and phenotype extraction method
CN117333400B (en) * 2023-11-06 2024-04-30 华中农业大学 Root box cultivated crop root system image broken root restoration and phenotype extraction method
CN117788829A (en) * 2024-02-27 2024-03-29 长春师范大学 Image recognition system for invasive plant seed detection
CN117788829B (en) * 2024-02-27 2024-05-07 长春师范大学 Image recognition system for invasive plant seed detection


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination