CN116452526A - Rice seed identification and counting method based on image detection - Google Patents

Rice seed identification and counting method based on image detection

Info

Publication number
CN116452526A
CN116452526A (application CN202310330769.9A)
Authority
CN
China
Prior art keywords
image
rice seed
rice
color space
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310330769.9A
Other languages
Chinese (zh)
Inventor
刘晓洋
谭良晨
宁建峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaiyin Institute of Technology
Original Assignee
Huaiyin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaiyin Institute of Technology filed Critical Huaiyin Institute of Technology
Priority to CN202310330769.9A priority Critical patent/CN116452526A/en
Publication of CN116452526A publication Critical patent/CN116452526A/en
Pending legal-status Critical Current

Classifications

    • G06T7/0002 Image analysis: inspection of images, e.g. flaw detection
    • G06N3/0464 Neural networks: convolutional networks [CNN, ConvNet]
    • G06N3/08 Neural networks: learning methods
    • G06T7/10 Image analysis: segmentation; edge detection
    • G06T7/90 Image analysis: determination of colour characteristics
    • G06V10/764 Image or video recognition: classification using pattern recognition or machine learning
    • G06T2207/10024 Image acquisition modality: color image
    • G06T2207/20081 Special algorithmic details: training; learning
    • G06T2207/20084 Special algorithmic details: artificial neural networks [ANN]
    • G06T2207/20132 Image segmentation details: image cropping
    • G06T2207/30188 Subject of image: vegetation; agriculture
    • G06T2207/30242 Subject of image: counting objects in image
    • Y02A40/10 Adaptation technologies in agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a rice seed identification and counting method based on image detection, which adopts grid paper as the background for rice seed image acquisition, thereby enabling positioning and correction of the images; color components with obvious differences between the rice seeds and the background are then selected to form an SbCbCr color space, and the rice seed image is segmented into a binary image in combination with a BPNN classification model; after opening and closing operations, the connected regions of the binary image are used as masks to divide the original image into sub-images of a single rice seed or a plurality of adhered rice seeds, and size normalization is carried out; a rice seed number image dataset is established from the sub-images, the acquired rice seed sub-images are classified with a deep convolutional neural network classification model, and the number of rice seeds corresponding to each sub-image classification result is then counted to obtain the total number of rice seeds in the original rice seed image. The invention is simple to operate and accurate in counting, can save labor cost and equipment cost, and greatly improves working efficiency.

Description

Rice seed identification and counting method based on image detection
Technical Field
The invention belongs to the field of image processing, and particularly relates to a rice seed identification and counting method based on image detection.
Background
Accurate counting of seed grains is an indispensable step in modern agricultural scientific research and an important link in breeding and seed inspection. Crop seed research mainly concerns quality detection and counting, and the identification and segmentation of seeds are the most important links in counting crop seeds. Most research focuses on the segmentation of adhered crop seeds, where segmentation algorithms based on morphology, pit detection and elliptic curve fitting are the basic methods. Other algorithms exist as well, such as segmentation based on seed region growing, on clustering algorithms and on the watershed algorithm. However, current counting methods place high demands on the environment and the images, and their speed is relatively low, so their convenience, accuracy and rapidity all need to be improved.
At present, manual counting is still quite common; it offers high immediacy and low tool requirements, but it is inefficient, error-prone, labor-intensive and causes visual fatigue. The few electromechanical counting devices that can replace manual counting suffer from large errors, complex manufacture and high prices, and are difficult to popularize widely. To address the above shortcomings of the prior art, the present invention provides a rice seed identification and counting method based on image detection, built on image processing technology.
Disclosure of Invention
The invention aims to: aiming at the defects of the prior art, the invention provides a rice seed identification and counting method based on image detection, which realizes the counting of adhered rice seeds by using a deep convolutional neural network classification model, achieving high speed, high accuracy and high stability.
The technical scheme is as follows: the invention provides a rice seed identification and counting method based on image detection, which specifically comprises the following steps:
(1) Uniformly scattering rice seeds on white grid paper, capturing a top-down image with the camera lens held parallel above the grid paper, and supplementing the lighting with a parallel light source;
(2) Rotating and cropping the image according to the positions of the four corner positioning blocks of the grid paper, and calibrating the image distortion according to the positions of grid intersection points in the grid paper;
(3) Extracting the S component of the rice seed image in the HSV color space, the b component in the Lab color space and the Cb and Cr components in the YCbCr color space to form a new color space SbCbCr;
(4) Constructing a BPNN pixel classification model, then extracting the four-channel pixel values of seed pixels and background pixels of a sample image in the newly built SbCbCr color space, training the BPNN model, and selecting the optimal model;
(5) Taking the color value of each pixel of the rice seed image in the SbCbCr color space as the input of the optimal BPNN model, and segmenting the rice seed image into a binary image, wherein the rice seeds are segmented as foreground and the grid paper as background;
(6) Breaking small adhesions between some connected rice seed regions and filtering tiny noise points with a morphological opening operation, and then smoothing the rice seed edges with a closing operation;
(7) Dividing the original rice seed image into rice seed sub-images consisting of a single rice seed or a plurality of adhered rice seeds by taking each connected rice seed region as a mask, and then carrying out size normalization on the rice seed sub-images;
(8) Labeling each sub-image according to the number of rice seeds it contains, thereby establishing a rice seed number image dataset, building an image classification model with a deep convolutional neural network, and inputting the image dataset into the classification model for training;
(9) Dividing a collected rice seed image into a plurality of sub-images and feeding them into the trained classification model, and then summing the number of rice seeds corresponding to each sub-image's classification result to obtain the total number of rice seeds in the original rice seed image.
Further, the implementation process of the step (2) is as follows:
(21) Segmenting the positioning blocks from the image according to the color of the grid paper positioning blocks;
(22) Determining the vertex position of each positioning block by adopting a corner detection method;
(23) Connecting the vertexes of the four positioning blocks, and calculating the deflection angle of the placement of the grid paper;
(24) According to the deflection angle, carrying out reverse rotation and cutting operation on the acquired image, so as to straighten the image;
(25) Determining the position of each intersection point in the grid by corner detection, calculating the distances between adjacent intersection points at different positions in the image, and comparing them with the actual grid edge length in the image to obtain the distortion rates of the image in the transverse and longitudinal directions;
(26) Carrying out image calibration according to the distortion rates.
Further, the implementation process of the step (3) is as follows:
The different color components are calculated as follows: the S component of the HSV color space is given by formula (1), the b component of the Lab color space by formula (2), and the Cb and Cr components of the YCbCr color space by formula (4);
b = 200(h(Y/Y_w) - h(Z/Z_w)) (2)
wherein MAX and MIN are respectively the maximum and minimum of the 3 color components of the RGB color space; Y and Z are the corresponding components of the XYZ color space, and the reference values Y_w and Z_w are 1.0000 and 1.0888 respectively; the color stimulus value calibration function h(t) is given by formula (3); R, G and B are the 3 components of the RGB color space.
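Formulas (1), (3) and (4) do not survive in this text (the equation images were dropped in extraction). Read against the surrounding definitions, the standard forms of the referenced components are as follows; these are the conventional textbook definitions, given as an assumption rather than a transcription of the patent's own equations:

```latex
% (1) HSV saturation from the RGB extrema MAX and MIN
S = \frac{MAX - MIN}{MAX}
% (2) Lab b component, as in the text
b = 200\,\bigl(h(Y/Y_w) - h(Z/Z_w)\bigr)
% (3) the color stimulus value calibration function
h(t) = \begin{cases} t^{1/3}, & t > (6/29)^3 \\[2pt]
\tfrac{1}{3}(29/6)^2\, t + \tfrac{4}{29}, & \text{otherwise} \end{cases}
% (4) YCbCr chroma components (full-range JFIF convention)
Cb = -0.1687R - 0.3313G + 0.5B + 128, \qquad
Cr = 0.5R - 0.4187G - 0.0813B + 128
```

The Cb and Cr coefficients above follow the full-range JFIF convention; if the patent used a studio-range BT.601 variant, the constants would differ slightly.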
Further, the implementation process of the step (4) is as follows:
(41) The number of nodes of the BPNN model input layer is 4, with the four color channel components of each pixel in the SbCbCr color space used as input data;
(42) The number of nodes of the BPNN model output layer is 1; rice seed pixel samples are labeled as positive samples with value 1, and background pixel samples as negative samples with value 0;
(43) The number of hidden layer nodes of the BPNN model is set to 10, the activation function of the hidden layer to Sigmoid and that of the output layer to Softmax, and the prediction error is measured with a cross-entropy loss function;
(44) Training is carried out multiple times with the scaled conjugate gradient algorithm; the loss, error, accuracy, true positive rate, false positive rate and other parameters of the different trained models are calculated and analyzed, and the optimal BPNN model is selected.
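The structure in steps (41) to (44) can be sketched as a minimal 4-10-1 network. This is an illustrative stand-in, not the patent's implementation: it trains with plain gradient descent rather than scaled conjugate gradient, uses a sigmoid output node as the practical single-output counterpart of the Softmax named in step (43), and the "SbCbCr pixel samples" are synthetic clusters, not real measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic 4-channel samples standing in for SbCbCr pixel values:
# "seed" pixels cluster high, "background" pixels cluster low.
X = np.vstack([rng.normal(0.7, 0.1, (200, 4)),   # seed pixels, label 1
               rng.normal(0.3, 0.1, (200, 4))])  # background, label 0
y = np.concatenate([np.ones(200), np.zeros(200)])

W1 = rng.normal(0, 0.5, (4, 10)); b1 = np.zeros(10)   # 4 inputs -> 10 hidden
W2 = rng.normal(0, 0.5, (10, 1)); b2 = np.zeros(1)    # 10 hidden -> 1 output

lr = 1.0
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)             # hidden layer, Sigmoid activation
    p = sigmoid(h @ W2 + b2).ravel()     # seed-pixel probability
    d_out = (p - y)[:, None] / len(y)    # cross-entropy gradient at output
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_hid; b1 -= lr * d_hid.sum(0)

accuracy = float(((p >= 0.5) == y).mean())
```

Because the two synthetic clusters are well separated, even this simplified trainer reaches near-perfect pixel accuracy; selecting among several such runs by loss and error rates corresponds to step (44).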
Further, the implementation process of the step (5) is as follows:
(51) Converting the M x N rice seed image into an (M x N, 4) matrix, wherein each row of the matrix corresponds to one pixel of the image and each column corresponds to one color component of the SbCbCr color space;
(52) Normalizing the matrix data and inputting it into the optimal BPNN model;
(53) Quantizing the M x N-dimensional column vector output by the model: values greater than or equal to 0.5 are set to 1, and values less than 0.5 are set to 0;
(54) Converting the column vector into a binary image of size M x N, wherein pixels with value 1 form the foreground representing rice seeds, and pixels with value 0 form the background representing the grid paper.
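Steps (51) to (54) can be sketched as a single reshape-score-threshold pass. The `score` callable below is a stand-in for the trained BPNN's forward pass (any function mapping an (M*N, 4) matrix to one probability per pixel), and the toy image and thresholding "model" are illustrative assumptions.

```python
import numpy as np

def segment(image, score):
    """Flatten an M x N four-channel image to (M*N, 4), score each pixel,
    threshold at 0.5, and reshape into an M x N binary mask."""
    M, N, C = image.shape                 # expects C == 4 (S, b, Cb, Cr)
    flat = image.reshape(M * N, C).astype(float)
    # Per-channel min-max normalization before the model input, step (52).
    lo, hi = flat.min(axis=0), flat.max(axis=0)
    flat = (flat - lo) / np.where(hi > lo, hi - lo, 1)
    probs = score(flat)                   # one probability per pixel
    return (probs >= 0.5).astype(np.uint8).reshape(M, N)

# Toy usage: a stand-in "model" that fires when the mean channel value is high.
img = np.zeros((4, 5, 4))
img[1:3, 1:4, :] = 200                    # a bright 2 x 3 "seed" block
mask = segment(img, lambda f: f.mean(axis=1))
```

The returned `mask` marks the six bright pixels as foreground (value 1) and the rest as background (value 0), matching the convention of step (54).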
Further, the implementation process of the step (7) is as follows:
(71) Calculating the coordinates of the uppermost, lowermost, leftmost and rightmost pixels of each connected region;
(72) Drawing the circumscribed rectangle according to these four extremal pixel coordinates;
(73) Mapping the circumscribed rectangle in the binary image into the original image, and cropping out the sub-image of the corresponding region, containing a single rice seed or a plurality of adhered rice seeds;
(74) Carrying out size normalization on the cropped sub-images.
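Steps (71) to (74) amount to connected-region labeling, bounding-box extraction and a fixed-size resize. The sketch below uses a plain breadth-first flood fill and nearest-neighbour sampling so it stays self-contained; a production version would typically use a library's connected-components and resize routines instead, and the 8 x 8 target size is an arbitrary illustrative choice.

```python
import numpy as np
from collections import deque

def connected_regions(mask):
    """Yield (top, bottom, left, right) bounds of each 4-connected
    foreground region, i.e. its circumscribed rectangle."""
    seen = np.zeros_like(mask, dtype=bool)
    H, W = mask.shape
    for r in range(H):
        for c in range(W):
            if mask[r, c] and not seen[r, c]:
                top, bot, left, right = r, r, c, c
                q = deque([(r, c)]); seen[r, c] = True
                while q:                      # breadth-first flood fill
                    y, x = q.popleft()
                    top, bot = min(top, y), max(bot, y)
                    left, right = min(left, x), max(right, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True; q.append((ny, nx))
                yield top, bot, left, right

def crop_normalized(image, bounds, size=8):
    """Crop the bounding rectangle and resize to size x size
    by nearest-neighbour sampling (the size normalization of step (74))."""
    t, b, l, r = bounds
    patch = image[t:b + 1, l:r + 1]
    ys = np.arange(size) * patch.shape[0] // size
    xs = np.arange(size) * patch.shape[1] // size
    return patch[np.ix_(ys, xs)]

mask = np.zeros((6, 8), dtype=np.uint8)
mask[1:3, 1:3] = 1      # one "seed"
mask[4:6, 5:8] = 1      # a second, larger cluster
boxes = list(connected_regions(mask))
subs = [crop_normalized(mask, bx) for bx in boxes]
```

In the full method the rectangles found in the binary mask are applied to the original color image (step (73)); here they are applied to the mask itself only to keep the example self-contained.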
The beneficial effects are that: compared with the prior art, the invention applies digital image processing technology to seed counting in the breeding field, replacing manual and mechanical counting, saving labor cost and equipment cost, and greatly improving working efficiency; the invention uses grid paper with positioning blocks to accurately identify and segment rice seeds; the invention realizes counting of adhered seeds with a deep convolutional neural network classification model, achieving high speed, high accuracy, strong stability, batch processing and other advantages; the method is simple and convenient to operate, low in cost, and can accurately predict the number of rice seeds.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a flow chart for building a BPNN pixel classification model;
FIG. 3 is a rice seed binarized image;
FIG. 4 is a morphological image of rice seeds.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings.
The invention provides a rice seed identification and counting method based on image detection. Rice seeds are uniformly scattered on white grid paper, a top-down image is captured with the camera lens held parallel above the grid paper, and a parallel light source supplements the lighting. The acquired image is imported into a computer, rotated and cropped according to the positions of the four corner positioning blocks of the grid paper, and the image distortion is then calibrated according to the positions of the grid intersection points. The S component of the rice seed image in the HSV color space, the b component of the Lab color space and the Cb and Cr components of the YCbCr color space are extracted to form a new color space SbCbCr. A BPNN pixel classification model is established, the four-channel pixel values of seed pixels and background pixels of a sample image in the newly built SbCbCr color space are extracted, the parameters of the BPNN model are trained, and the optimal model is selected. The color value of each pixel of the rice seed image in the SbCbCr color space is taken as the input of the optimal BPNN model, and the rice seed image is segmented into a binary image, with the rice seeds as foreground and the grid paper as background. A morphological opening operation breaks tiny adhesions between some connected rice seed regions and filters tiny noise points, and a closing operation then smooths the rice seed edges.
The original rice seed image is divided into rice seed sub-images consisting of a single rice seed or a plurality of adhered rice seeds by taking each connected rice seed region as a mask, and the sub-images are then size-normalized. Each sub-image is labeled with the number of rice seeds it contains, establishing a rice seed number image dataset; an image classification model is built with a deep convolutional neural network, and the image dataset is input into the classification model for training. A collected rice seed image is divided into a plurality of sub-images and fed into the trained classification model, and the number of rice seeds corresponding to each sub-image's classification result is then summed to obtain the total number of rice seeds in the original image. As shown in fig. 1, the method specifically comprises the following steps:
Step 1: Rice seeds are evenly scattered on the white grid paper, a top-down image is captured with the camera lens held parallel above the grid paper, and a parallel light source supplements the lighting, so that the collected images are clear and bright and the influence of shadows is reduced. Grid paper with positioning blocks serves as the shooting background, and the parallel light source reduces shadow formation.
Step 2: The image is rotated and cropped according to the positions of the four corner positioning blocks of the grid paper, and the image distortion is then corrected according to the positions of grid intersection points in the grid paper.
The positioning blocks are segmented from the image according to their color; the vertex position of each positioning block is determined by corner detection; the vertices of the four positioning blocks are connected and the deflection angle of the grid paper is calculated; the acquired image is rotated in reverse by the deflection angle and cropped, thereby straightening the image. Corner detection then determines the position of each intersection point in the grid, the distances between transversely and longitudinally adjacent intersection points are calculated and compared with the actual grid edge length in the image to obtain the transverse and longitudinal distortion rates, and finally image calibration is carried out according to the distortion rates.
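The deskew part of step 2 reduces to one angle computation: the line joining two corner-block vertices should be horizontal, so its angle against the horizontal axis is the deflection angle, and the image is rotated by its negative. The sketch below shows only that computation; the vertex coordinates are illustrative stand-ins for the output of corner detection.

```python
import math

def deflection_angle(p_left, p_right):
    """Angle (degrees) of the line through two detected positioning-block
    vertices relative to the horizontal image axis."""
    dx = p_right[0] - p_left[0]
    dy = p_right[1] - p_left[1]
    return math.degrees(math.atan2(dy, dx))

# Two detected top-corner vertices (x, y) of the grid paper, in image
# coordinates; a small positive dy means the paper is slightly tilted.
angle = deflection_angle((100, 200), (900, 214))
```

Rotating the acquired image by `-angle` (with any image library's rotate routine) and cropping the border then straightens it as described above.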
Step 3: The S component of the rice seed image in the HSV color space, the b component of the Lab color space and the Cb and Cr components of the YCbCr color space are extracted to form a new color space SbCbCr.
The different color components are calculated as follows: the S component of the HSV color space is given by formula (1), the b component of the Lab color space by formula (2), and the Cb and Cr components of the YCbCr color space by formula (4).
b = 200(h(Y/Y_w) - h(Z/Z_w)) (2)
Wherein MAX and MIN are respectively the maximum and minimum of the 3 color components of the RGB color space; Y and Z are the corresponding components of the XYZ color space, and the reference values Y_w and Z_w are 1.0000 and 1.0888 respectively; the color stimulus value calibration function h(t) is given by formula (3); R, G and B are the 3 components of the RGB color space.
Step 4: A BPNN pixel classification model is established, the four-channel pixel values of seed pixels and background pixels of the sample image in the newly built SbCbCr color space are extracted, the parameters of the BPNN model are trained, and the optimal model is selected.
The training BPNN pixel classification model is specifically shown in fig. 2, and includes:
s1: the number of nodes of the BPNN model input layer is 4, and four color channel components of each pixel in the SbCbCr color space are used as input data.
S2: The number of nodes of the BPNN model output layer is 1; rice seed pixel samples are labeled as positive samples with value 1, and background pixel samples as negative samples with value 0.
S3: and setting the number of hidden layer nodes of the BPNN model as 10, setting an activation function of a hidden layer as Sigmoid, setting an activation function of an output layer as Softmax, and measuring a prediction error by adopting a cross entropy loss function.
S4: Training is carried out multiple times with the scaled conjugate gradient algorithm; the loss, error, accuracy, true positive rate, false positive rate and other parameters of the different trained models are calculated and analyzed, and the optimal BPNN model is selected.
Step 5: The color value of each pixel of the rice seed image in the four-channel SbCbCr color space is taken as the input of the optimal BPNN model, and the rice seed image is segmented into a binary image, as shown in fig. 3, with the rice seeds as foreground and the grid paper as background.
The M x N rice seed image is converted into an (M x N, 4) matrix, wherein each row of the matrix corresponds to one pixel of the image and each column to one color component of the SbCbCr color space. The matrix data is normalized and input into the optimal BPNN model. The M x N-dimensional column vector output by the model is quantized: values greater than or equal to 0.5 are set to 1 and values less than 0.5 to 0. The column vector is converted into a binary image of size M x N; in fig. 3 the pixels with value 1 form the foreground representing rice seeds, and the pixels with value 0 form the background representing the grid paper.
Step 6: A morphological opening operation breaks tiny adhesions between some connected rice seed regions and filters tiny noise points, and a closing operation then smooths the rice seed edges. A square structuring element with side length 2 is selected for the opening operation, and a square structuring element with side length 2 for the closing operation; the image after the opening and closing operations is shown in fig. 4.
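The opening of step 6 is erosion followed by dilation with the 2 x 2 square structuring element. The sketch below implements both primitives directly (border handling is deliberately simplified, and a real pipeline would use a library's morphology routines); it shows the key effect named above: a solid blob survives while an isolated speck is removed.

```python
import numpy as np

def erode(img, k=2):
    """Binary erosion with a k x k square structuring element."""
    H, W = img.shape
    out = np.zeros_like(img)
    for y in range(H - k + 1):
        for x in range(W - k + 1):
            out[y, x] = img[y:y + k, x:x + k].min()
    return out

def dilate(img, k=2):
    """Binary dilation with a k x k square structuring element."""
    H, W = img.shape
    out = np.zeros_like(img)
    for y in range(H - k + 1):
        for x in range(W - k + 1):
            if img[y, x]:
                out[y:y + k, x:x + k] = 1
    return out

def opening(img, k=2):
    return dilate(erode(img, k), k)       # opening = erosion then dilation

mask = np.zeros((6, 6), dtype=np.uint8)
mask[1:4, 1:4] = 1      # a solid 3 x 3 blob: survives opening intact
mask[5, 5] = 1          # a single-pixel noise speck: removed by opening
cleaned = opening(mask)
```

A closing, as in the second half of step 6, is the same two primitives in the opposite order: `erode(dilate(mask, k), k)`.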
Step 7: The original rice seed image is divided into rice seed sub-images composed of a single rice seed or a plurality of adhered rice seeds by using each connected rice seed region as a mask, and the sub-images are then size-normalized.
The coordinates of the uppermost, lowermost, leftmost and rightmost pixels of each connected region are calculated, and the circumscribed rectangle is drawn from these four extremal pixel coordinates. The circumscribed rectangle in the binary image is mapped into the original image, and the sub-image of the corresponding region, containing a single rice seed or a plurality of adhered rice seeds, is cropped out and size-normalized.
Step 8: Each sub-image is labeled with the number of rice seeds it contains, establishing a rice seed number image dataset; an image classification model is built with a deep convolutional neural network, and the image dataset is input into the classification model for training.
Each sub-image is labeled according to its rice seed number and the rice seed number image dataset is established; a deep convolutional neural network classification model is built, whose input matches the image size of the dataset and whose output is the number of rice seeds; the dataset images are fed into the deep convolutional neural network model for multiple rounds of training, and the model is tested to obtain the trained convolutional neural network model.
Step 9: A collected rice seed image is divided into a plurality of sub-images and fed into the trained classification model, and the number of rice seeds corresponding to each sub-image's classification result is then summed to obtain the total number of rice seeds in the original rice seed image.
The collection of rice seed images is completed, and each collected image is divided into size-normalized sub-images containing a single rice seed or multiple adhered rice seeds; the normalized sub-images are fed into the trained image classification model for classification; and the classification results of all sub-images are counted and summed to obtain the accurate number of rice seeds in the acquired image.
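The tally in step 9 is a weighted sum: each sub-image's predicted class is the number of seeds it contains (1 for a single seed, k for a cluster of k adhered seeds), so the per-class counts multiplied by their class values give the total. The `predictions` list below is an illustrative stand-in for the CNN's output over one acquired image.

```python
from collections import Counter

# Hypothetical classification results for five sub-images:
# three single seeds, one two-seed cluster, one three-seed cluster.
predictions = [1, 1, 2, 1, 3]

per_class = Counter(predictions)                      # sub-images per class
total_seeds = sum(k * n for k, n in per_class.items())  # weighted tally
```

Here `per_class` records that three sub-images were classified as single seeds, and `total_seeds` is the final count for the acquired image.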
The method is accurate in counting, simple and convenient to operate, can effectively and accurately count rice seeds, saves labor cost and equipment cost, and greatly improves working efficiency.
It is to be understood that the above-described embodiments merely illustrate or explain the principles of the present invention and in no way limit it. Accordingly, any modification, equivalent replacement, improvement, etc. made without departing from the spirit and scope of the present invention should be included in the scope of the present invention. Furthermore, the appended claims are intended to cover all such changes and modifications that fall within the scope and boundary of the appended claims, or equivalents of such scope and boundary.

Claims (6)

1. The rice seed identification and counting method based on image detection is characterized by comprising the following steps:
(1) Uniformly scattering rice seeds on white grid paper, capturing a top-down image with the camera lens held parallel above the grid paper, and supplementing the lighting with a parallel light source;
(2) Rotating and cropping the image according to the positions of the four corner positioning blocks of the grid paper, and calibrating the image distortion according to the positions of grid intersection points in the grid paper;
(3) Extracting the S component of the rice seed image in the HSV color space, the b component in the Lab color space and the Cb and Cr components in the YCbCr color space to form a new color space SbCbCr;
(4) Constructing a BPNN pixel classification model, then extracting the four-channel pixel values of seed pixels and background pixels of a sample image in the newly built SbCbCr color space, training the BPNN model, and selecting the optimal model;
(5) Taking the color value of each pixel of the rice seed image in the SbCbCr color space as the input of the optimal BPNN model, and segmenting the rice seed image into a binary image, wherein the rice seeds are segmented as foreground and the grid paper as background;
(6) Breaking small adhesions between some connected rice seed regions and filtering tiny noise points with a morphological opening operation, and then smoothing the rice seed edges with a closing operation;
(7) Dividing the original rice seed image into rice seed sub-images consisting of a single rice seed or a plurality of adhered rice seeds by taking each connected rice seed region as a mask, and then carrying out size normalization on the rice seed sub-images;
(8) Labeling each sub-image according to the number of rice seeds it contains, thereby establishing a rice seed number image dataset, building an image classification model with a deep convolutional neural network, and inputting the image dataset into the classification model for training;
(9) Dividing a collected rice seed image into a plurality of sub-images and feeding them into the trained classification model, and then summing the number of rice seeds corresponding to each sub-image's classification result to obtain the total number of rice seeds in the original rice seed image.
2. The method for identifying and counting rice seeds based on image detection as recited in claim 1, wherein said step (2) is implemented as follows:
(21) Segmenting the positioning blocks from the image according to the color of the grid paper positioning blocks;
(22) Determining the vertex position of each positioning block by adopting a corner detection method;
(23) Connecting the vertexes of the four positioning blocks, and calculating the deflection angle of the placement of the grid paper;
(24) According to the deflection angle, carrying out reverse rotation and cutting operation on the acquired image, so as to straighten the image;
(25) Determining the position of each intersection point in the grid by adopting angular point detection, and calculating the distortion rate of the image in the transverse direction and the longitudinal direction by calculating the distance between adjacent intersection points in different positions in the image and subtracting the actual edge length of the grid in the image;
(26) Image calibration was performed according to the distortion rate.
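Steps (23)-(26) reduce to two small computations once corner detection has supplied the vertex and intersection coordinates. This sketch assumes the two top vertices of the positioning blocks are already known; function names and the sample coordinates are illustrative, not from the patent.

```python
# Sketch of steps (23)-(26): deflection angle from two positioning-block
# vertices, and distortion rate from measured vs. actual grid spacing.
import math

def deflection_angle_deg(top_left, top_right):
    """Angle (degrees) of the grid paper's top edge w.r.t. the image x-axis."""
    dx = top_right[0] - top_left[0]
    dy = top_right[1] - top_left[1]
    return math.degrees(math.atan2(dy, dx))

def distortion_rate(measured_spacing_px, actual_edge_px):
    """Relative deviation of measured intersection spacing from the true edge length."""
    return (measured_spacing_px - actual_edge_px) / actual_edge_px

# Grid paper tilted so the top edge rises 10 px over 1000 px:
angle = deflection_angle_deg((0, 0), (1000, 10))
print(round(angle, 2))  # 0.57 degrees; rotate the image by -angle to straighten

# Adjacent intersections measured 51 px apart where the edge should be 50 px:
print(distortion_rate(51, 50))  # 0.02, i.e. a 2% stretch in that direction
```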
3. The method for identifying and counting rice seeds based on image detection as recited in claim 1, wherein said step (3) is implemented as follows:
the different color components are computed as follows: the S component of the HSV color space is given by formula (1), the b component of the Lab color space by formula (2), and the Cb and Cr components of the YCbCr color space by formula (4);
b = 200(h(Y/Yw) - h(Z/Zw)) (2)
wherein MAX and MIN are respectively the maximum and minimum of the 3 color components of the RGB color space; Y and Z are the corresponding components of the XYZ color space, with reference white values Yw = 1.0000 and Zw = 1.0888 respectively; the color stimulus calibration function h(t) is given by formula (3); and R, G, B are the 3 components of the RGB color space.
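Formulas (1), (3) and (4) are not reproduced in this text. The sketch below therefore uses the standard textbook definitions that match the symbols described (MAX/MIN for HSV saturation, the calibration function h(t) with reference white Yw = 1.0000, Zw = 1.0888 for Lab b, and JPEG-style Cb/Cr), so treat it as an interpretation rather than the patented formulas themselves.

```python
# Standard color-component definitions matching the symbols in claim 3.
def s_component(r, g, b):
    """HSV saturation: S = (MAX - MIN) / MAX, defined as 0 when MAX = 0."""
    mx, mn = max(r, g, b), min(r, g, b)
    return 0.0 if mx == 0 else (mx - mn) / mx

def h(t):
    """CIE color-stimulus calibration function used in the Lab conversion."""
    return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

def lab_b(y, z, yw=1.0000, zw=1.0888):
    """Lab b component: b = 200 * (h(Y/Yw) - h(Z/Zw))."""
    return 200 * (h(y / yw) - h(z / zw))

def cb_cr(r, g, b):
    """JPEG-style YCbCr chroma components for 8-bit RGB."""
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

print(s_component(200, 100, 50))     # 0.75: strongly saturated orange
print(round(lab_b(1.0, 1.0888), 4))  # 0.0: b vanishes at the reference white
print(cb_cr(255, 0, 0))              # (84.97232, 255.5): red pushes Cr to its ceiling
```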
4. The method for identifying and counting rice seeds based on image detection as recited in claim 1, wherein said step (4) is implemented as follows:
(41) The BPNN model input layer has 4 nodes, taking the four color channel components of each pixel in the SbCbCr color space as input data;
(42) The BPNN model output layer has 1 node; rice seed pixel samples are labeled as positive samples with value 1, and background pixel samples as negative samples with value 0;
(43) Setting the number of hidden layer nodes of the BPNN model to 10, the hidden layer activation function to Sigmoid and the output layer activation function to Softmax, and measuring prediction error with a cross-entropy loss function;
(44) Training multiple times with a scaled conjugate gradient routine, computing and analyzing the loss, error, accuracy, true positive rate, false positive rate and other metrics of the different trained models, and selecting the optimal BPNN model.
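The classifier of steps (41)-(43) is small enough to sketch numerically: 4 inputs, 10 sigmoid hidden nodes, 1 output, scored with cross-entropy. Training (the claim describes a conjugate-gradient routine) is omitted, the weights below are random stand-ins for trained parameters, and a sigmoid output is used here in place of the single-node softmax the claim names.

```python
# Forward pass of a 4-10-1 BPNN pixel classifier, per claim 4 (sketch only).
import math, random

random.seed(0)
N_IN, N_HID = 4, 10

# Random weights stand in for the trained parameters.
w_hid = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)]
b_hid = [0.0] * N_HID
w_out = [random.uniform(-1, 1) for _ in range(N_HID)]
b_out = 0.0

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def forward(pixel):
    """pixel: the 4 SbCbCr channel values of one pixel, normalized to [0, 1]."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, pixel)) + b)
              for row, b in zip(w_hid, b_hid)]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

def cross_entropy(p, label):
    """Loss for one pixel; label is 1 (seed) or 0 (background)."""
    eps = 1e-12
    return -(label * math.log(p + eps) + (1 - label) * math.log(1 - p + eps))

p = forward([0.8, 0.4, 0.5, 0.6])
print(0.0 < p < 1.0)            # True: the output is a valid probability
print(cross_entropy(p, 1) > 0)  # True: loss is strictly positive for p < 1
```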
5. The method for identifying and counting rice seeds based on image detection as recited in claim 1, wherein said step (5) is implemented as follows:
(51) Converting an M×N rice seed image into an (M×N, 4) matrix, wherein each row of the matrix corresponds to one pixel of the image, and each column corresponds to one color component of the SbCbCr color space;
(52) Normalizing the matrix data and inputting an optimal BPNN model;
(53) Binarizing the (M×N)-dimensional column vector output by the model, setting values at or above a threshold to 1 and the rest to 0;
(54) And reshaping the column vector into a binary image of size M×N, wherein pixels with value 1 form the foreground representing rice seeds, and pixels with value 0 form the background representing grid paper.
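Steps (51)-(54) are a flatten / score / threshold / reshape round trip. In this sketch `model` is a stand-in for the trained BPNN (it scores each pixel by its first channel), and the tiny 2x3 image is illustrative only.

```python
# Sketch of steps (51)-(54): M x N four-channel image -> (M*N, 4) matrix ->
# per-pixel scores -> thresholded bits -> M x N binary mask.
def flatten_image(img):
    """img[r][c] is a 4-tuple of SbCbCr components; returns an (M*N, 4) matrix."""
    return [list(px) for row in img for px in row]

def to_binary_mask(scores, m, n, threshold=0.5):
    """Threshold the M*N score vector and reshape it into an M x N image."""
    bits = [1 if s >= threshold else 0 for s in scores]
    return [bits[r * n:(r + 1) * n] for r in range(m)]

def model(matrix):
    """Stub classifier: scores each pixel by its first (S) channel."""
    return [row[0] for row in matrix]

M, N = 2, 3
image = [[(0.9, 0.1, 0.2, 0.3), (0.1, 0.5, 0.5, 0.5), (0.8, 0.2, 0.2, 0.2)],
         [(0.2, 0.6, 0.4, 0.4), (0.7, 0.3, 0.3, 0.3), (0.1, 0.1, 0.1, 0.1)]]

matrix = flatten_image(image)  # 6 rows, 4 columns
mask = to_binary_mask(model(matrix), M, N)
print(mask)  # [[1, 0, 1], [0, 1, 0]]
```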
6. The method for identifying and counting rice seeds based on image detection as recited in claim 1, wherein said step (7) is implemented as follows:
(71) Calculating the coordinates of the topmost, bottommost, leftmost and rightmost pixels of each connected region;
(72) Drawing a bounding rectangle from the pixel coordinates in these four directions;
(73) Mapping the bounding rectangle from the binary image onto the original image, and cropping from the corresponding area a sub-image containing a single rice seed or several adhered rice seeds;
(74) And normalizing the size of the cropped sub-image.
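Steps (71)-(73) can be sketched directly: take the extreme pixels of a connected region, form the bounding box, and slice that box out of the original image. Here a grayscale list-of-lists stands in for the color photograph, and the sample region is illustrative.

```python
# Sketch of steps (71)-(73): bounding box of a connected region, then crop.
def bounding_box(region_pixels):
    """region_pixels: iterable of (row, col) belonging to one connected region."""
    rows = [r for r, _ in region_pixels]
    cols = [c for _, c in region_pixels]
    return min(rows), min(cols), max(rows), max(cols)  # top, left, bottom, right

def crop(image, box):
    top, left, bottom, right = box
    return [row[left:right + 1] for row in image[top:bottom + 1]]

image = [[10 * r + c for c in range(6)] for r in range(5)]
region = [(1, 2), (1, 3), (2, 2), (2, 3), (3, 3)]  # one seed's pixels

box = bounding_box(region)
print(box)   # (1, 2, 3, 3)
sub = crop(image, box)
print(sub)   # [[12, 13], [22, 23], [32, 33]]
```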
CN202310330769.9A 2023-03-30 2023-03-30 Rice seed identification and counting method based on image detection Pending CN116452526A (en)

Publications (1)

Publication Number Publication Date
CN116452526A true CN116452526A (en) 2023-07-18

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117496353A * 2023-11-13 2024-02-02 Anhui Agricultural University Rice seedling weed stem center distinguishing and positioning method based on two-stage segmentation model
CN117496353B * 2023-11-13 2024-09-27 Anhui Agricultural University Rice seedling weed stem center distinguishing and positioning method based on two-stage segmentation model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination