CN114677325A - Construction method of rice stem section segmentation model and detection method based on model - Google Patents
- Publication number
- CN114677325A CN114677325A CN202210089760.9A CN202210089760A CN114677325A CN 114677325 A CN114677325 A CN 114677325A CN 202210089760 A CN202210089760 A CN 202210089760A CN 114677325 A CN114677325 A CN 114677325A
- Authority
- CN
- China
- Prior art keywords
- rice stem
- section
- image
- cross
- segmentation model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0002: Image analysis; inspection of images, e.g. flaw detection
- G06F18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/253: Pattern recognition; fusion techniques of extracted features
- G06N3/045: Neural networks; combinations of networks
- G06N3/08: Neural networks; learning methods
- G06T5/70: Image enhancement or restoration; denoising, smoothing
- G06T2207/10081: Image acquisition modality; computed X-ray tomography [CT]
- G06T2207/20081: Special algorithmic details; training, learning
- G06T2207/20084: Special algorithmic details; artificial neural networks [ANN]
- G06T2207/30188: Subject of image; vegetation, agriculture
Abstract
The invention discloses a method for constructing a rice stem cross-section segmentation model, comprising the following steps: acquiring a rice stem cross-section CT image and preprocessing it to obtain a preprocessed rice stem cross-section CT image; constructing, from the preprocessed images, a rice stem cross-section CT image dataset comprising a training set, a verification set and a test set; constructing a rice stem cross-section segmentation model that fuses a lightweight U-Net with a spatial-channel attention mechanism, wherein the lightweight U-Net replaces the standard convolution layers in the original U-Net encoder with depthwise separable convolutions; and training the segmentation model on the CT image dataset, so that the tissue structure of a rice stem cross-section CT image can be segmented by the model and the corresponding microstructure parameters calculated.
Description
Technical Field
The invention relates to the technical field of rice stem section detection, in particular to a construction method of a rice stem section segmentation model and a detection method based on the model.
Background
Rice is one of the main grain crops in China, and improving its yield and quality is closely tied to social and economic development and stability. How to breed high-yield, high-quality rice varieties has therefore long been a research focus of rice cultivation and breeding experts. Rice stalks transport nutrients and support the plant; tall plant types can increase yield but are prone to lodging. The internal structural phenotypic traits of rice stalks therefore need to be measured so that lodging-resistant rice varieties can be screened and rice yield and quality improved.
At present, acquisition and screening of rice stem phenotypic traits are mainly done manually, which is labor-intensive, inefficient and poorly repeatable. The manual approach is therefore unsuitable for large-scale screening and identification of rice varieties and severely constrains rice functional genomics and breeding improvement. According to material mechanics, the lodging resistance of a rice plant depends on the strength, rigidity and stability of its stalk, and these mechanical indexes are closely related to structural parameters such as the number of parenchyma cell layers, stalk wall thickness, the number and area of vascular bundles, and the size of the medullary cavity. With the development of optical imaging, computer science and automatic integrated control technologies, automatic detection of the microstructure parameters of the rice stem cross section has become feasible, laying a technical foundation for crop lodging-resistance research and being of great significance for achieving high, stable and high-quality rice yields.
Disclosure of Invention
The embodiment of the invention aims to provide a construction method of a rice stem section segmentation model and a detection method based on the model, aiming at carrying out parameter measurement on a microstructure of the rice stem section and establishing a technical basis for crop lodging resistance research. The specific technical scheme is as follows:
in order to achieve the above object, an embodiment of the present invention provides a method for constructing a rice straw cross-section segmentation model, including:
acquiring a rice stem cross-section CT image, and preprocessing the rice stem cross-section CT image to obtain a preprocessed rice stem cross-section CT image;
based on the preprocessed rice stem cross section CT image, a rice stem cross section CT image data set is manufactured, wherein the rice stem cross section CT image data set comprises: a training set, a verification set and a test set;
constructing a rice stem section segmentation model integrating a lightweight U-Net and a space-channel attention mechanism, wherein a standard convolution layer in an original U-Net network encoder is replaced by adopting depth separable convolution;
and training the rice stem section segmentation model by adopting the rice stem section CT image data set so as to segment the organization structure of the rice stem section CT image based on the rice stem section segmentation model and calculate corresponding microstructure parameters.
In one implementation, the step of preprocessing the CT image of the cross section of the rice stem includes:
cropping the rice stem cross-section CT image to reduce the image size;
performing Gaussian blur denoising on the cropped rice stem cross-section CT image to obtain a denoised image, wherein each pixel of the denoised image is transformed by the two-dimensional Gaussian kernel:
G(x, y) = (1/(2πσ²)) · e^(−(x² + y²)/(2σ²))
wherein G(x, y) represents the value of the Gaussian kernel at (x, y), x and y respectively represent the image space coordinates, x² + y² is the squared blur radius, and σ is the standard deviation of the normal distribution.
In one implementation, the step of constructing the rice stem cross-section CT image dataset includes:
marking the preprocessed rice stem cross-section CT images by using a marking tool Labelme, and marking the preprocessed rice stem cross-section CT images into a plurality of types according to the microstructure of the rice stem cross-section, wherein the plurality of types include but are not limited to: one or more of the outermost sclerenchyma tissue, mature parenchyma tissue, immature parenchyma tissue, medullary cavity, leaf sheath and air cavity;
and dividing the labeled CT image data set into a training set, a verification set and a test set according to a preset proportion.
In one implementation, the step of constructing a rice stem cross-section segmentation model fusing a lightweight U-Net and a spatial-channel attention mechanism includes:
constructing a rice stem cross-section segmentation model based on U-Net, wherein the U-Net network comprises an encoder and a decoder; the encoder consists of four sequentially connected stages, each stage comprising two 3×3 convolutions, a ReLU activation function and a max-pooling layer with stride 2; the decoder consists of four stages, each comprising an upsampling layer, a feature-fusion layer and two 3×3 convolution layers;
the lightweight U-Net replaces the standard convolution layers in the U-Net encoder with depthwise separable convolutions, decomposing the standard convolution operation into a depthwise convolution and a pointwise 1×1 convolution; the depthwise convolution applies a single convolution kernel to each input channel, keeping the numbers of channels of the input and output feature maps consistent, and the 1×1 pointwise convolution combines the outputs of the different depthwise convolutions to change the number of channels;
wherein the ratio of the computation of the depthwise separable convolution to that of the standard convolution is specifically expressed as:
(D_k² · M · D_F² + M · O · D_F²) / (D_k² · M · O · D_F²) = 1/O + 1/D_k²
wherein D_k represents the convolution kernel size of the depthwise separable convolution, M represents the number of input feature channels, O represents the number of output feature channels, and D_F represents the spatial size of the output feature map.
In one implementation, a spatial-channel attention module CBAM is incorporated in the encoder, decoder and feature-fusion sections of the lightweight U-Net. The CBAM module comprises a channel attention mechanism and a spatial attention mechanism, wherein the channel attention mechanism is defined by:
M_c(F) = σ(MLP(AvgPool(F)) + MLP(MaxPool(F))) = σ(W₁(W₀(F_avg^c)) + W₁(W₀(F_max^c)))
wherein F_avg^c and F_max^c respectively represent the average-pooled and max-pooled features of the input image; the two vectors pass through a shared network consisting of a multilayer perceptron (MLP) with one hidden layer, the output feature vectors are fused by element-wise addition, and the channel attention map M_c ∈ R^(C×1×1) is generated; σ denotes the sigmoid function, and W₀ and W₁ represent the weights of the MLP;
the spatial attention mechanism is defined by the formula:
M_s(F) = σ(f^(7×7)([F_avg^s; F_max^s]))
wherein F_avg^s and F_max^s respectively represent the average-pooling and max-pooling, along the channel axis, of the feature map refined by channel attention; the two resulting 2D feature maps are concatenated along the channel dimension, and a 7×7 convolution layer f^(7×7) generates the spatial attention map M_s(F); σ denotes the sigmoid function.
In one implementation, the step of training the rice stem cross-section segmentation model with the rice stem cross-section CT image dataset includes:
dividing the training set and the verification set into a plurality of batches to train the rice stem cross-section segmentation model, wherein one epoch is completed when all training-set images have passed through the segmentation model;
the method comprises the following steps of initializing a rice stem section segmentation model by adopting a loading pre-training weight initialization trunk network, setting an initial learning rate to be le-3, reducing the learning rate once per 3000 iterations, reducing the learning rate to be 0.9 each time, calculating the self-adaptive learning rate of each weight parameter by adopting an Adam algorithm, and calculating a loss function which is cross entropy loss, wherein the concrete expression is as follows:
Wherein N represents the number of categories, ycRepresenting a one-hot vector, wherein the element has two values of 0 and 1, if the class is the same as the sample class, the element takes 1, otherwise, the element takes 0 and PcRepresenting the probability that the prediction sample belongs to c.
In addition, the invention also discloses a detection method based on the rice stem section segmentation model, which comprises the following steps:
obtaining a CT image of the cross section of the rice stem to be detected;
preprocessing the to-be-detected rice stem cross-section CT image to obtain a preprocessed CT image, and inputting the preprocessed image into the trained rice stem cross-section segmentation model to segment the binary image regions of the multiple categories in the CT image, the categories comprising: first, the outermost sclerenchyma tissue; second, mature parenchyma tissue; third, immature parenchyma tissue; fourth, the medullary cavity; fifth, the leaf sheath; and sixth, the air cavity;
calculating the area of each category region based on pixel statistical analysis;
and performing minimum circumscribed-rectangle fitting on the binary image of each category, wherein the fitting formula is:
y(t) = min_{θ ∈ [0°, 90°)} Area(Rect_θ(t))
wherein t represents the connected domain shown by each category's binary image and Rect_θ(t) is the axis-aligned circumscribed rectangle of the connected-domain contour after rotation by angle θ; the connected domain is rotated at equal angular intervals within a 90° range, the circumscribed-rectangle parameters and area are recorded at each step, and the minimum-area rectangle is taken as the minimum circumscribed rectangle of that category's binary image;
and calculating the length and width of the fitted rectangle to describe the length and width of each category region.
The method for constructing a rice stem cross-section segmentation model provided by the embodiment of the invention first crops and denoises the acquired rice stem cross-section CT image, then segments the cross-section image with a lightweight U-Net combined with a convolutional attention (CBAM) module, classifies the obtained segmentation results, and measures the area of each structure and the minimum rectangle of each region. The method can be used to measure the microstructure parameters of the rice stem cross section, lays a technical foundation for crop lodging-resistance research, and is of great significance for achieving high, stable and high-quality rice yields.
Drawings
Fig. 1 is a schematic flow chart of a method for constructing a rice straw cross-section segmentation model according to an embodiment of the present invention.
FIG. 2 is a cross-sectional CT view of a rice straw after pretreatment according to the present invention.
FIG. 2a is an illustration of a sclerenchyma tissue labeling diagram according to the present invention.
FIG. 2b is an illustration of a mature parenchyma tissue labeling diagram according to the present invention.
FIG. 2c is an illustration of an immature parenchyma tissue labeling diagram according to the present invention.
FIG. 2d is an illustration of a medullary cavity label according to the present invention.
FIG. 2e is an illustration of a leaf sheath label according to the present invention.
FIG. 2f is an example of a labeled diagram of an air cavity according to the present invention.
FIG. 3 is a block diagram of a standard convolution and depth separable convolution.
FIG. 4 is a schematic structural diagram of the spatial-channel attention module CBAM.
FIG. 5 is a schematic diagram of a channel attention mechanism.
Fig. 6 is a schematic diagram of a spatial attention mechanism.
FIG. 7 is a view showing a structure of a rice stem section segmentation model.
Fig. 8 is an example of a minimum rectangular frame of a sclerenchymal tissue region after segmentation in accordance with the present invention.
FIG. 9 is an example of a minimum rectangular box of a region of mature parenchymal tissue after segmentation in accordance with the present invention.
FIG. 10 is an example of a minimal rectangular box of an immature parenchymal tissue region after segmentation in accordance with the present invention.
FIG. 11 is an example of a minimum rectangular box in the medullary cavity region after segmentation according to the present invention.
FIG. 12 is an example of a minimal rectangular box of a segmented leaf sheath region according to the present invention.
Fig. 13 is an example of a minimum rectangular box of an air cavity region after segmentation according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, there is provided a method for constructing a rice stem cross-section segmentation model,
s1: and acquiring a rice stem cross-section CT image, and preprocessing the rice stem cross-section CT image to obtain a preprocessed rice stem cross-section CT image.
(1) Cropping the rice stem cross-section CT image to reduce the image size;
(2) performing Gaussian blur denoising on the cropped rice stem cross-section CT image, Gaussian blur being an image blur filter in which the transformation of each pixel is computed from the normal distribution shown in formula (1):
G(x, y) = (1/(2πσ²)) · e^(−(x² + y²)/(2σ²))    (1)
where G(x, y) is the value of the Gaussian kernel at (x, y), x and y respectively are the image space coordinates, x² + y² is the squared blur radius, and σ is the standard deviation of the normal distribution.
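As an illustration (not part of the patent), formula (1) can be realized in a few lines of NumPy; the function name `gaussian_kernel` and the 5×5 kernel size are assumptions made for this sketch:

```python
import numpy as np

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    """Build a normalized 2-D Gaussian kernel per formula (1):
    G(x, y) = 1/(2*pi*sigma^2) * exp(-(x^2 + y^2) / (2*sigma^2))."""
    half = size // 2
    x, y = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return g / g.sum()  # normalize so convolving preserves image brightness

kernel = gaussian_kernel(5, sigma=1.0)
```

Convolving the cropped CT image with such a kernel (e.g. via `scipy.ndimage.convolve` or OpenCV's `GaussianBlur`) produces the denoised image described above.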
Fig. 2 shows a CT image of the rice stem cross section after preprocessing according to an embodiment of the present invention.
S2: based on the preprocessed rice stem cross section CT image, a rice stem cross section CT image data set is manufactured, wherein the rice stem cross section CT image data set comprises: training set, validation set and test set.
(1) The labeling tool Labelme is used to manually annotate the preprocessed rice stem cross-section CT images, which are labeled into six categories according to the composition of the rice stem cross-section microstructure: the outermost sclerenchyma tissue (FIG. 2a), mature parenchyma tissue (FIG. 2b), immature parenchyma tissue (FIG. 2c), the medullary cavity (FIG. 2d), the leaf sheath (FIG. 2e), and the air cavity (FIG. 2f).
(2) Dividing the rice stem cross-section CT image dataset into a training set, a verification set and a test set in the ratio 7:2:1.
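A minimal sketch of the 7:2:1 split described above; the file names, seed, and the `split_dataset` helper are hypothetical, not from the patent:

```python
import random

def split_dataset(paths, ratios=(0.7, 0.2, 0.1), seed=42):
    """Shuffle annotated image paths and split them train/val/test at 7:2:1."""
    paths = list(paths)
    random.Random(seed).shuffle(paths)  # fixed seed for a reproducible split
    n = len(paths)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])

train, val, test = split_dataset([f"img_{i:03d}.png" for i in range(100)])
```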
s3: and constructing a rice stem section segmentation model integrating a lightweight U-Net and a space-channel attention mechanism, wherein the lightweight U-Net adopts a depth separable convolution to replace a standard convolution layer in an original U-Net network encoder.
(1) A rice stem cross-section segmentation model is constructed based on U-Net. The U-Net network comprises an encoder and a decoder. The encoder, which mainly extracts image features, consists of four sequentially connected stages; each stage comprises two 3×3 convolutions, a ReLU activation function, and a max-pooling layer with stride 2 for downsampling, and each downsampling doubles the number of feature channels. The decoder likewise consists of four stages, each comprising an upsampling layer, a feature-fusion layer and two 3×3 convolution layers, and each upsampling halves the number of feature channels. To reduce the number of U-Net parameters, lower the training difficulty and improve learning efficiency, the standard convolution layers in the original U-Net encoder are replaced by depthwise separable convolutions, as shown in FIG. 3. The depthwise separable convolution decomposes the standard convolution into a depthwise convolution and a pointwise 1×1 convolution: the depthwise convolution applies a single convolution kernel to each input channel, keeping the numbers of channels of the input and output feature maps consistent, and the 1×1 pointwise convolution combines the outputs of the different depthwise convolutions to change the number of channels. The ratio of the computation of the depthwise separable convolution to that of the standard convolution is shown in formula (2):
(D_k² · M · D_F² + M · O · D_F²) / (D_k² · M · O · D_F²) = 1/O + 1/D_k²    (2)
where D_k is the convolution kernel size of the depthwise separable convolution, M is the number of input feature channels, O is the number of output feature channels, and D_F is the spatial size of the output feature map. Compared with a standard convolution layer, in which every convolution kernel is computed against every input feature channel, the depthwise separable convolution effectively reduces the computation and the number of trainable parameters, making the U-Net lightweight.
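The saving stated by formula (2) can be checked numerically. The sketch below counts multiplications for both convolution types and compares the ratio with the closed form 1/O + 1/D_k²; the example sizes (3×3 kernel, 64 input and 128 output channels, 56×56 output map) are arbitrary assumptions, not values from the patent:

```python
def conv_mults(k: int, m: int, o: int, f: int) -> int:
    """Multiplications of a standard k x k convolution producing an f x f map."""
    return k * k * m * o * f * f

def dsconv_mults(k: int, m: int, o: int, f: int) -> int:
    """Depthwise (k x k per input channel) plus pointwise (1 x 1) multiplications."""
    return k * k * m * f * f + m * o * f * f

k, m, o, f = 3, 64, 128, 56
ratio = dsconv_mults(k, m, o, f) / conv_mults(k, m, o, f)
closed = 1 / o + 1 / k**2  # closed form from formula (2): 1/O + 1/Dk^2
```

For these sizes the depthwise separable variant needs only about 12% of the multiplications of the standard convolution.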
(2) To improve the accuracy of the rice stem cross-section segmentation model, a spatial-channel attention module CBAM is added to the encoder, decoder and feature-fusion parts of the lightweight U-Net, as shown in FIG. 4. The CBAM module multiplies attention maps, computed along two independent dimensions (channel and spatial), into the input feature map for adaptive feature refinement, which strengthens the segmentation model's attention to the fine structures of the rice stem cross-section image at different stages. The channel attention mechanism is defined in formula (3):
M_c(F) = σ(MLP(AvgPool(F)) + MLP(MaxPool(F))) = σ(W₁(W₀(F_avg^c)) + W₁(W₀(F_max^c)))    (3)
where F_avg^c and F_max^c respectively denote the average-pooled and max-pooled features of the input image; the two vectors pass through a shared network consisting of a multilayer perceptron (MLP) with one hidden layer, the output feature vectors are fused by element-wise addition, and the channel attention map M_c ∈ R^(C×1×1) is generated, as shown in FIG. 5. σ denotes the sigmoid function, and W₀ and W₁ represent the weights of the MLP.
The spatial attention mechanism is defined in formula (4):
M_s(F) = σ(f^(7×7)([F_avg^s; F_max^s]))    (4)
where F_avg^s and F_max^s respectively denote the average-pooling and max-pooling, along the channel axis, of the feature map refined by channel attention; the two resulting 2D feature maps are concatenated along the channel dimension, and a 7×7 convolution layer f^(7×7) generates the spatial attention map, as shown in FIG. 6.
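A toy NumPy sketch of the CBAM refinement order (channel attention first, then spatial attention). This is not the patent's implementation: the shared MLP uses random weights purely for illustration, and the 7×7 convolution of formula (4) is replaced by a simple element-wise stand-in, an assumption made to keep the sketch short:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w0, w1):
    """Formula (3): Mc = sigmoid(MLP(AvgPool(F)) + MLP(MaxPool(F))).
    feat: (C, H, W); w0: (C//r, C) and w1: (C, C//r) are shared-MLP weights."""
    avg = feat.mean(axis=(1, 2))                 # (C,) global average pooling
    mx = feat.max(axis=(1, 2))                   # (C,) global max pooling
    mlp = lambda v: w1 @ np.maximum(w0 @ v, 0)   # two-layer MLP with ReLU
    return sigmoid(mlp(avg) + mlp(mx))           # (C,) channel weights

def spatial_attention(feat):
    """Formula (4), with the 7x7 conv omitted for brevity: fuse the
    channel-wise mean and max maps and squash with a sigmoid."""
    avg = feat.mean(axis=0)                      # (H, W)
    mx = feat.max(axis=0)                        # (H, W)
    return sigmoid((avg + mx) / 2.0)             # stand-in for f7x7([avg; mx])

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))            # toy (C=8, H=4, W=4) feature map
mc = channel_attention(feat,
                       rng.standard_normal((2, 8)) * 0.1,
                       rng.standard_normal((8, 2)) * 0.1)
refined = feat * mc[:, None, None]               # channel refinement
ms = spatial_attention(refined)
out = refined * ms[None, :, :]                   # spatial refinement
```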
As shown in FIG. 7, stages 1 to 4 each comprise, from top to bottom, a sequentially connected 3×3 convolution layer, 1×1 convolution layer, max-pooling layer and CBAM module. After the four encoder stages are connected in sequence, the output passes through a bottleneck stage composed of three sequentially connected groups of 3×3 and 1×1 convolution layers and is then fed in turn into stages 6, 7, 8 and 9. Stages 6 to 9 each comprise, from top to bottom: an upsampling layer, a 3×3 convolution layer, a feature-fusion layer, a 3×3 convolution layer and a CBAM module; the segmentation output is obtained after stage 9.
S4: and training the rice stem section segmentation model by adopting the rice stem section CT image data set so as to segment the organization structure of the rice stem section CT image based on the rice stem section segmentation model and calculate corresponding microstructure parameters.
During training of the rice stem cross-section segmentation model, the network is trained in batches: the training and verification sets are divided into several batches, and one pass of all training-set images through the network model constitutes one epoch. The backbone network is initialized by loading pre-trained weights. The initial learning rate is 1e-3; the learning rate is multiplied by 0.9 every 3000 iterations; the adaptive learning rate of each weight parameter is computed with the Adam algorithm; and the loss function is the cross-entropy loss defined in formula (5):
L = −Σ_{c=1}^{N} y_c · log(P_c)    (5)
where N is the number of categories; y_c is a one-hot vector whose element is 1 when the category matches the sample category and 0 otherwise; and P_c is the predicted probability that the sample belongs to category c.
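Formula (5) can be sketched directly in NumPy; the helper name and the two-pixel, three-class example are illustrative assumptions:

```python
import numpy as np

def cross_entropy(probs: np.ndarray, one_hot: np.ndarray) -> float:
    """Formula (5): L = -sum_c y_c * log(P_c), averaged over samples.
    probs, one_hot: (num_samples, N) arrays with N classes."""
    eps = 1e-12  # avoid log(0)
    return float(-(one_hot * np.log(probs + eps)).sum(axis=1).mean())

# Two pixels, three classes: the first predicted well, the second poorly.
probs = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.3, 0.5]])
one_hot = np.array([[1, 0, 0],
                    [0, 1, 0]])
loss = cross_entropy(probs, one_hot)  # (-ln 0.8 - ln 0.3) / 2
```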
After model training is complete, the preprocessed to-be-detected rice stem cross-section CT image is input into the trained rice stem cross-section segmentation model, which segments the six categories of binary image regions in the CT image (first, the outermost sclerenchyma tissue; second, mature parenchyma tissue; third, immature parenchyma tissue; fourth, the medullary cavity; fifth, the leaf sheath; and sixth, the air cavity), with segmentation results as shown in FIGS. 10-13. The area of each category region in FIGS. 10-13 is computed by pixel statistical analysis: the sclerenchyma area is 0.425 mm², the mature parenchyma area 0.211 mm², the immature parenchyma area 0.096 mm², the medullary cavity area 0.044 mm², the leaf sheath area 0.052 mm², and the total air cavity area 0.252 mm². Minimum circumscribed-rectangle fitting is then performed on each category's binary image, as shown in FIGS. 8-13; the rectangle-fitting formula is shown in formula (6), and the length and width of the fitted rectangle describe the length and width of each region.
Wherein t represents the connected domain shown in each category's binary image and y(t) represents rotating the connected domain at equal angular intervals over a 90° range; at each rotation the circumscribed-rectangle parameters of the connected-domain contour along the coordinate axes are recorded and the area of each circumscribed rectangle is calculated, and the rectangle with the smallest area is taken as the minimum circumscribed rectangle fit of that category's binary image.
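The rotation-search described for formula (6) can be sketched directly in NumPy: rotate the contour points over [0°, 90°), take the axis-aligned bounding box at each angle, and keep the smallest. This is an illustrative sketch of the stated procedure (the patent's step size and exact parameterization are not given; 1° steps are assumed here).

```python
import numpy as np

def min_area_rect(points, step_deg=1.0):
    """Rotate the connected-domain contour over a 90-degree range at equal
    intervals, record the axis-aligned bounding rectangle at each angle,
    and keep the rotation whose rectangle area is smallest."""
    pts = np.asarray(points, dtype=float)
    best = None
    for deg in np.arange(0.0, 90.0, step_deg):
        th = np.deg2rad(deg)
        rot = np.array([[np.cos(th), -np.sin(th)],
                        [np.sin(th),  np.cos(th)]])
        r = pts @ rot.T
        w = r[:, 0].max() - r[:, 0].min()
        h = r[:, 1].max() - r[:, 1].min()
        if best is None or w * h < best[0]:
            best = (w * h, deg, max(w, h), min(w, h))
    area, angle, length, width = best
    return area, angle, length, width

# An axis-aligned 4 x 2 rectangle: the minimum is found at 0 degrees
area, angle, length, width = min_area_rect([(0, 0), (4, 0), (4, 2), (0, 2)])
```

In practice `cv2.minAreaRect` performs the same fit; the explicit loop above mirrors the equal-interval rotation the claim describes.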
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (7)
1. A construction method of a rice stem section segmentation model is characterized by comprising the following steps:
acquiring a rice stem cross-section CT image, and preprocessing the rice stem cross-section CT image to obtain a preprocessed rice stem cross-section CT image;
Based on the preprocessed rice stem cross section CT images, a rice stem cross section CT image data set is manufactured, and the rice stem cross section CT image data set comprises: a training set, a verification set and a test set;
constructing a rice stem section segmentation model integrating a lightweight U-Net and a spatial-channel attention mechanism, wherein the lightweight U-Net adopts depthwise separable convolution to replace the standard convolution layers in the original U-Net network encoder;
and training the rice stem section segmentation model by adopting the rice stem section CT image data set so as to segment the organization structure of the rice stem section CT image based on the rice stem section segmentation model and calculate corresponding microstructure parameters.
2. The method for constructing the rice stem cross-section segmentation model according to claim 1, wherein the step of preprocessing the CT image of the rice stem cross-section comprises:
cropping the CT image of the cross section of the rice stem to reduce the image size;
performing Gaussian blur denoising on the cropped rice stem cross-section CT image to obtain a denoised image, wherein each denoised pixel is transformed by a Gaussian weighting of its neighborhood, G(x, y) = exp(-(x² + y²)/(2σ²)) / (2πσ²).
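The Gaussian blur step of claim 2 can be sketched as a normalized Gaussian kernel convolved over the image. The kernel size (5) and σ (1.0) below are assumed for illustration; the patent does not state them.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """2D Gaussian G(x, y) ∝ exp(-(x^2 + y^2) / (2*sigma^2)),
    normalized so the weights sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, size=5, sigma=1.0):
    """Denoise by replacing each pixel with the Gaussian-weighted average
    of its neighborhood (zero padding at the borders)."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    p = np.pad(img.astype(float), pad)
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + size, j:j + size] * k)
    return out

flat = gaussian_blur(np.ones((8, 8)))  # interior of a constant image is unchanged
```

A production pipeline would call `cv2.GaussianBlur` or `scipy.ndimage.gaussian_filter` instead of the explicit loop.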
3. The method for constructing a rice stem cross-section segmentation model according to claim 1, wherein the step of preparing a rice stem cross-section CT image dataset includes:
marking the preprocessed rice stem cross-section CT images by using a marking tool Labelme, and marking the preprocessed rice stem cross-section CT images into a plurality of types according to the microstructure of the rice stem cross-section, wherein the plurality of types include but are not limited to: one or more of the outermost sclerenchyma tissue, mature parenchyma tissue, immature parenchyma tissue, medullary cavity, leaf sheath and air cavity;
and dividing the labeled CT image data set into a training set, a verification set and a test set according to a preset proportion.
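The split in claim 3 can be sketched as a shuffled partition by ratio. The 7:2:1 ratio, seed, and file names below are assumptions for illustration; the patent only says "a preset proportion".

```python
import random

def split_dataset(image_paths, ratios=(0.7, 0.2, 0.1), seed=42):
    """Shuffle labeled images and split them into training, verification
    and test sets by a preset proportion (7:2:1 assumed here)."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)   # deterministic shuffle for reproducibility
    n = len(paths)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return paths[:n_train], paths[n_train:n_train + n_val], paths[n_train + n_val:]

train, val, test = split_dataset([f"ct_{i:03d}.png" for i in range(100)])
```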
4. The method for constructing a rice stem cross-section segmentation model according to any one of claims 1 to 3, wherein the step of constructing a rice stem cross-section segmentation model that combines a lightweight U-Net and a space-channel attention mechanism comprises:
constructing a rice stem section segmentation model based on U-Net, wherein the U-Net network comprises an encoder and a decoder; the encoder consists of four sequentially connected stages, each comprising two 3×3 convolutions, a ReLU activation function and a max pooling layer with a stride of 2; the decoder likewise consists of four stages, each comprising an upsampling layer, a feature fusion layer and two 3×3 convolutions;
The lightweight U-Net uses depthwise separable convolution to replace the standard convolution layers in the U-Net network encoder, decomposing the standard convolution operation into a depthwise convolution and a pointwise 1×1 convolution: the depthwise convolution applies a single convolution kernel to each input channel, so that the input and output feature maps have the same number of channels, and the 1×1 pointwise convolution combines the outputs of the different depthwise convolutions to change the number of channels;
wherein the ratio of the computational cost of the depthwise separable convolution to that of the standard convolution is specifically expressed as 1/O + 1/D_k², where D_k represents the kernel size of the depthwise convolution, M represents the number of input feature channels, and O represents the number of output feature channels.
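The cost ratio in claim 4 follows from counting multiply–accumulate operations for the two convolution types. A small sketch (the feature-map size D_f and the example channel counts are illustrative; they cancel out of the ratio):

```python
def conv_cost_ratio(dk, m, o, df):
    """Multiply-accumulate counts over a df x df feature map:
      standard conv:        dk*dk * m * o * df*df
      depthwise separable:  dk*dk * m * df*df  (depthwise)
                          +        m * o * df*df  (1x1 pointwise)
    Their ratio reduces to 1/o + 1/dk**2."""
    standard = dk * dk * m * o * df * df
    separable = dk * dk * m * df * df + m * o * df * df
    return separable / standard

ratio = conv_cost_ratio(dk=3, m=64, o=128, df=56)  # equals 1/128 + 1/9
```

For a 3×3 kernel the separable form costs roughly 1/9 of the standard convolution, which is what makes the encoder "lightweight".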
5. The method for constructing a rice stem section segmentation model according to claim 4, wherein a spatial-channel attention mechanism module CBAM is added to the encoder, the decoder and the feature fusion part of the lightweight U-Net; the CBAM module comprises a channel attention mechanism and a spatial attention mechanism, wherein the channel attention mechanism is defined by the formula M_c(F) = σ(MLP(AvgPool(F)) + MLP(MaxPool(F))) = σ(W_1(W_0(F_avg^c)) + W_1(W_0(F_max^c)));
wherein F_avg^c and F_max^c respectively represent the features of the input image after average pooling and max pooling; they pass through a shared network consisting of a multilayer perceptron (MLP) with one hidden layer, and the output feature vectors are fused by element-wise addition to generate the channel attention map M_c ∈ R^(C×1×1); σ denotes the sigmoid function, and W_0 and W_1 represent the weights of the MLP;
the spatial attention mechanism is defined by the formula M_s(F) = σ(f^(7×7)([AvgPool(F); MaxPool(F)])) = σ(f^(7×7)([F_avg^s; F_max^s]));
wherein F_avg^s and F_max^s respectively represent average pooling and max pooling along the channel axis of the feature map refined by channel attention; the two resulting 2D feature maps are concatenated along the channel dimension, and a 7×7 convolutional layer f^(7×7) is applied to generate the spatial attention map M_s(F); σ denotes the sigmoid function and f^(7×7) represents the 7×7 convolution operation.
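The two CBAM branches of claim 5 can be sketched in NumPy for a single feature map of shape (C, H, W). The random weights stand in for learned parameters; the reduction ratio (C/r = 4) is an assumption, and a real model would implement this with convolution layers in a deep learning framework.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w0, w1):
    """M_c = sigmoid(MLP(AvgPool(F)) + MLP(MaxPool(F))): global average and
    max pooling, a shared one-hidden-layer MLP (w0, w1), element-wise sum."""
    avg = feat.mean(axis=(1, 2))                 # (C,) average-pooled descriptor
    mx = feat.max(axis=(1, 2))                   # (C,) max-pooled descriptor
    mlp = lambda v: np.maximum(v @ w0, 0) @ w1   # ReLU hidden layer
    return sigmoid(mlp(avg) + mlp(mx))           # (C,) channel attention map

def spatial_attention(feat, k7):
    """M_s = sigmoid(f7x7([AvgPool(F); MaxPool(F)])): pool along channels,
    concatenate the two 2D maps, convolve with a (2, 7, 7) kernel k7."""
    stack = np.stack([feat.mean(axis=0), feat.max(axis=0)])  # (2, H, W)
    pad = np.pad(stack, ((0, 0), (3, 3), (3, 3)))            # keep H x W
    h, w = feat.shape[1:]
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(pad[:, i:i + 7, j:j + 7] * k7)
    return sigmoid(out)

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 16, 16))
mc = channel_attention(f, rng.standard_normal((8, 4)), rng.standard_normal((4, 8)))
ms = spatial_attention(f * mc[:, None, None], rng.standard_normal((2, 7, 7)))
```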
6. The method for constructing a rice stem cross-section segmentation model according to any one of claims 1 to 3 or 5, wherein the step of training the rice stem cross-section segmentation model using the rice stem cross-section CT image dataset comprises:
dividing the training set and the verification set into a plurality of batches to train the rice stem section segmentation model, wherein one iteration is obtained when all training set images complete traversal calculation in the rice stem section segmentation model;
initializing the rice stem section segmentation model by loading pre-training weights into the backbone network; setting the initial learning rate to 1e-3 and multiplying the learning rate by 0.9 every 3000 iterations; calculating the adaptive learning rate of each weight parameter with the Adam algorithm; and using the cross-entropy loss as the loss function, specifically expressed as: Loss = -∑(c=1..N) y_c·log(P_c);
wherein N represents the number of categories; y_c represents a one-hot vector whose elements take the values 0 and 1, taking 1 when category c is the same as the sample's class and 0 otherwise; and P_c represents the probability that the prediction sample belongs to category c.
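The step-decay schedule of claim 6 is a one-liner; the sketch below reads "reduced to 0.9" as multiplying the learning rate by a factor of 0.9 each decay, which is the usual interpretation of this phrasing.

```python
def learning_rate(iteration, base_lr=1e-3, decay=0.9, step=3000):
    """Step decay: start at 1e-3, multiply by 0.9 every 3000 iterations."""
    return base_lr * decay ** (iteration // step)

lr0 = learning_rate(0)        # 1e-3
lr1 = learning_rate(3000)     # 1e-3 * 0.9
lr2 = learning_rate(6001)     # 1e-3 * 0.81
```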
7. A detection method based on a rice stem section segmentation detection model is characterized by comprising the following steps:
obtaining a CT image of the cross section of the rice stem to be detected;
preprocessing the CT image of the cross section of the rice stem to be detected to obtain a preprocessed image, and inputting the preprocessed image into the trained rice stem section segmentation model to segment binary image regions of a plurality of categories in the CT image, wherein the binary image regions comprise: the first outermost sclerenchyma, the second mature parenchyma, the third immature parenchyma, the fourth medullary cavity, the fifth leaf sheath and the sixth air cavity;
Calculating the area of each category region based on pixel statistical analysis;
and performing minimum circumscribed rectangle fitting on the binary images of each category, wherein a fitting formula comprises the following steps:
wherein t represents the connected domain shown in each category's binary image and y(t) represents rotating the connected domain at equal angular intervals over a 90° range; at each rotation the circumscribed-rectangle parameters of the connected-domain contour along the coordinate axes are recorded and the area of each circumscribed rectangle is calculated, and the rectangle with the smallest area is taken as the minimum circumscribed rectangle fit of that category's binary image;
and calculating the length and width of the rectangle based on the fitted rectangle, so as to describe the length and width of each category region.
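The pixel-statistics area step of claim 7 can be sketched as counting foreground pixels in a category's binary mask and scaling by the physical area of one pixel. The pixel size (0.01 mm) and mask below are assumptions for illustration; the patent does not state the CT resolution.

```python
import numpy as np

def region_area_mm2(binary_mask, pixel_size_mm):
    """Pixel statistical analysis: foreground pixel count times the
    physical area of a single pixel (pixel_size_mm squared)."""
    return int(binary_mask.sum()) * pixel_size_mm ** 2

mask = np.zeros((100, 100), dtype=np.uint8)
mask[20:60, 30:80] = 1                             # 40 x 50 = 2000 pixels
area = region_area_mm2(mask, pixel_size_mm=0.01)   # 2000 * (0.01 mm)^2
```

The same mask would then be passed to the minimum-circumscribed-rectangle fit of formula (6) to obtain the region's length and width.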
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210089760.9A CN114677325A (en) | 2022-01-25 | 2022-01-25 | Construction method of rice stem section segmentation model and detection method based on model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210089760.9A CN114677325A (en) | 2022-01-25 | 2022-01-25 | Construction method of rice stem section segmentation model and detection method based on model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114677325A true CN114677325A (en) | 2022-06-28 |
Family
ID=82072497
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210089760.9A Pending CN114677325A (en) | 2022-01-25 | 2022-01-25 | Construction method of rice stem section segmentation model and detection method based on model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114677325A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116844658A (en) * | 2023-07-13 | 2023-10-03 | 中国矿业大学 | Method and system for rapidly measuring water content of coal based on convolutional neural network |
CN116844658B (en) * | 2023-07-13 | 2024-01-23 | 中国矿业大学 | Method and system for rapidly measuring water content of coal based on convolutional neural network |
CN117011607A (en) * | 2023-08-08 | 2023-11-07 | 安徽农业大学 | Rice seed classification method based on attention residual error network |
CN117011316A (en) * | 2023-10-07 | 2023-11-07 | 之江实验室 | Method and system for identifying internal structure of soybean stalk based on CT image |
CN117011316B (en) * | 2023-10-07 | 2024-02-06 | 之江实验室 | Method and system for identifying internal structure of soybean stalk based on CT image |
CN117496353A (en) * | 2023-11-13 | 2024-02-02 | 安徽农业大学 | Rice seedling weed stem center distinguishing and positioning method based on two-stage segmentation model |
CN117522950A (en) * | 2023-12-28 | 2024-02-06 | 江西农业大学 | Geometric parameter measurement method for plant stem growth based on machine vision |
CN117522950B (en) * | 2023-12-28 | 2024-03-12 | 江西农业大学 | Geometric parameter measurement method for plant stem growth based on machine vision |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114677325A (en) | Construction method of rice stem section segmentation model and detection method based on model | |
CN107247971B (en) | Intelligent analysis method and system for ultrasonic thyroid nodule risk index | |
CN108416353B (en) | Method for quickly segmenting rice ears in field based on deep full convolution neural network | |
Nasiri et al. | Automated grapevine cultivar identification via leaf imaging and deep convolutional neural networks: a proof-of-concept study employing primary iranian varieties | |
Bi et al. | Development of deep learning methodology for maize seed variety recognition based on improved swin transformer | |
CN110728666B (en) | Typing method and system for chronic nasosinusitis based on digital pathological slide | |
CN115909006B (en) | Mammary tissue image classification method and system based on convolution transducer | |
CN115115830A (en) | Improved Transformer-based livestock image instance segmentation method | |
Sun et al. | Lightweight apple detection in complex orchards using YOLOV5-PRE | |
CN116091937A (en) | High-resolution remote sensing image ground object recognition model calculation method based on deep learning | |
CN108537342A (en) | A kind of network representation learning method and system based on neighbor information | |
CN113077438B (en) | Cell nucleus region extraction method and imaging method for multi-cell nucleus color image | |
CN114399108A (en) | Tea garden yield prediction method based on multi-mode information | |
CN110992309B (en) | Fundus image segmentation method based on deep information transfer network | |
CN116245855B (en) | Crop variety identification method, device, equipment and storage medium | |
CN111784676A (en) | Novel feature extraction and segmentation method for liver CT image | |
Dang et al. | Vpbr: An automatic and low-cost vision-based biophysical properties recognition pipeline for pumpkin | |
CN113344008B (en) | High-throughput extraction method of stalk tissue anatomical characteristic parameters based on deep learning | |
Chen et al. | MCC-Net: A class attention-enhanced multi-scale model for internal structure segmentation of rice seedling stem | |
CN116206210A (en) | NAS-Swin-based remote sensing image agricultural greenhouse extraction method | |
CN116091940A (en) | Crop classification and identification method based on high-resolution satellite remote sensing image | |
CN114693600A (en) | Semi-supervised learning method for carrying out nucleus segmentation on tissue pathology image | |
CN114663791A (en) | Branch recognition method for pruning robot in unstructured environment | |
CN115170987A (en) | Method for detecting diseases of grapes based on image segmentation and registration fusion | |
CN112465821A (en) | Multi-scale pest image detection method based on boundary key point perception |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||