CN113506263A - Plant leaf stomatal pore anatomical parameter measurement method based on a convolutional neural network - Google Patents

Plant leaf stomatal pore anatomical parameter measurement method based on a convolutional neural network

Info

Publication number
CN113506263A
CN113506263A (application CN202110769069.0A)
Authority
CN
China
Prior art keywords
pore
microscopic image
plant leaf
convolutional neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110769069.0A
Other languages
Chinese (zh)
Inventor
黄建平
李君禹
宋文龙
李飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeast Forestry University
Original Assignee
Northeast Forestry University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeast Forestry University filed Critical Northeast Forestry University
Priority to CN202110769069.0A
Publication of CN113506263A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10056 Microscopic image
    • G06T2207/10061 Microscopic image from scanning electron microscope
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30181 Earth observation
    • G06T2207/30188 Vegetation; Agriculture

Abstract

A convolutional-neural-network-based method for measuring plant leaf stomatal pore anatomical parameters, in the field of digital image processing. The invention addresses the shortcomings of existing methods for measuring stomatal pore anatomical parameters: they are time-consuming, labor-intensive, highly subjective, and inefficient, and therefore cannot meet the practical need for rapid, high-throughput stomatal analysis of plant leaf microscopic images. The method comprises the following steps: acquiring a stomata sample data set; constructing and training a convolutional-neural-network-based stomatal segmentation model for plant leaf microscopic images using the sample data set to obtain a final segmentation model; and obtaining the stomatal pore anatomical parameters using the final segmentation model. The method is used to obtain plant leaf stomatal pore anatomical parameters.

Description

Plant leaf stomatal pore anatomical parameter measurement method based on a convolutional neural network
Technical Field
The invention relates to the field of digital image processing, and in particular to a method for measuring plant leaf stomatal pore anatomical parameters based on a convolutional neural network.
Background
Stomata are distributed mainly on the surfaces of plant leaves and serve as channels for gas exchange between the plant and the external environment. The gases passing through the stomata are mainly carbon dioxide, oxygen, and water vapor. A single stoma cannot be distinguished by the naked eye; its shape and structure must be observed under an electron microscope. In regions where stomata are densely distributed, many small stomata can be seen, but to observe their shape and structure, the upper and lower epidermis of the leaf must first be separated with a dissecting needle and then examined under an optical or electron microscope. How to obtain the anatomical parameters of stomatal pores has therefore become a research focus in this field.
At present, most methods for measuring stomatal pore anatomical parameters rely on manual measurement with image-processing software such as ImageJ: researchers manually mark stomatal feature points (such as boundaries, lengths, and widths) on microscopic images. Such manual intervention is time-consuming, labor-intensive, and highly subjective, and cannot meet the demands of large data volumes. Measuring pore anatomical parameters first requires segmenting the stomata in the image, and stomatal segmentation methods fall roughly into three classes: first, classical image-segmentation algorithms, such as threshold-based, region-based, and energy-functional-based methods; second, approximate measurement of stomatal parameters with hand-made templates or ellipse-fitting techniques; and third, machine-learning-based methods. In 1984, Omasa and Onoe proposed a stomatal measurement method based on image thresholding. It first removes television scan lines (noise) from the original image and obtains a blurred image using a Hanning filter and an inverse Fourier transform, then uses these two results to sharpen the original image, enhancing contrast and highlighting edges. Finally, the sharpened image is thresholded to extract the stomatal region, followed by principal component analysis and a coordinate-axis transformation. However, this method is computationally expensive and is suitable only for the simple case of a single stoma per image.
In 2017, Jayakody et al. proposed an automatic method for measuring grape leaf stomata: stomata are first located in the leaf image using machine learning; stomata with complete edges are then segmented by thresholding to obtain their structural information, while stomata with incomplete contours are first completed using a skeletonization technique; finally, stomatal anatomical parameters are approximated by ellipse fitting. However, this method places certain requirements on image quality and requires the analyzed microscopic image to contain rich background features. In 2018, Toda et al. proposed a stomatal segmentation method based on threshold segmentation. It first detects stomatal positions using machine learning with histogram-of-oriented-gradients (HOG) features, crops the detections into a data set of single-stoma images, classifies each image (open or closed) with a convolutional neural network, and finally segments the open stomata by thresholding to complete the pore measurement. Because the segmentation stage uses thresholding, a series of parameters such as area, stability, major-axis length, and centroid coordinates must be defined manually, and recognition degrades when stomatal size or shape falls outside the predefined parameter ranges. In 2019, Li Kexin et al. proposed a pore segmentation method based on the C-V (Chan-Vese) model: stomata are first located with Faster R-CNN, the detections are cropped from the microscopic image to generate a single-stoma data set, and the images are then segmented with the C-V model.
This method still requires researchers to manually adjust the parameters of the C-V model according to the characteristics of the processed images. In 2008, Sanyal et al. proposed a watershed-based stomatal segmentation method. In 2017, Duarte et al., inspired by principles of human perception, converted RGB images into CIELab images, located stomata using a scale-wavelet blob method, and then predicted stomatal structural information with the watershed algorithm. Both methods require high-quality stomatal images. In 2014, Laga et al. proposed a template-based method for locating stomata: a number of segmented stomata are extracted from a small amount of training data to serve as templates, the templates are expanded by rotation, scale changes, and similar transformations, and each template is convolved with the stomatal image to produce a response map; the pixel with the maximum value in each map is the center of a stoma. Finally, the detected stomata are extracted, and the pore opening and guard-cell size are measured manually. The drawback is that a corresponding template must be prepared for each type of plant. In 2016, Liu et al. proposed a mobile-device-based method for measuring stomatal anatomical parameters: stomata are located with the maximally stable extremal regions (MSER) technique, a suitable ellipse template is manually fitted to each stoma on the mobile device's interactive interface, and the stomatal anatomical parameters are then derived from the mathematical parameters of the ellipse. This method, however, requires the user to select the correct ellipse template for each stoma. In 2018, Bhugra et al. proposed a deep-learning-based stomatal analysis method that first locates stomata with an SSD (Single Shot MultiBox Detector) network.
A super-resolution convolutional neural network (SRCNN) is then used to increase the resolution of the cropped local images, and finally a fully convolutional network (FCN) segments the stomata to obtain the pore anatomical parameters. Although this method achieves fully automatic measurement, it requires training three convolutional neural networks, training is time-consuming, and the performance of the super-resolution model affects the result of the whole pipeline. In summary: classical image-segmentation algorithms suffer from heavy computation, high image-quality requirements, and manual parameter tuning; methods that approximate stomatal parameters with hand-made templates or ellipse fitting require manually designed templates, are difficult to automate, and generalize poorly because stomatal morphology differs between plant species while the templates are fixed; and existing machine-learning approaches suffer from time-consuming training and dependence of the final result on the performance of the super-resolution model.
Disclosure of Invention
The invention aims to solve the problems that existing methods for measuring plant leaf stomatal pore anatomical parameters are time-consuming, labor-intensive, highly subjective, and inefficient, and therefore cannot meet the practical need for rapid, high-throughput stomatal analysis of plant leaf microscopic images.
The convolutional-neural-network-based method for measuring plant leaf stomatal pore anatomical parameters comprises the following specific process:
step one, acquiring a stomata sample data set;
step two, constructing and training a convolutional-neural-network-based stomatal segmentation model for plant leaf microscopic images using the stomata sample data set to obtain a final plant leaf microscopic image stomatal segmentation model;
step three, obtaining the stomatal pore anatomical parameters using the final segmentation model, comprising:
step 3.1, inputting the plant leaf image to be measured into the stomatal segmentation model and outputting a segmentation result image of the plant leaf microscopic image to be measured;
step 3.2, adding a least-squares ellipse-fitting branch to the final segmentation model obtained in step two, and inputting the edge coordinates of each stomatal mask in the segmentation result image into the branch to obtain an ellipse fitted to each stomatal segmentation result by least-squares ellipse fitting;
step 3.3, obtaining the parameters of the ellipses fitted in step 3.2, thereby obtaining the stomatal pore anatomical parameters.
The invention has the following beneficial effects:
The invention combines a convolutional neural network with least-squares ellipse fitting to measure stomatal pore anatomical parameters in plant leaf microscopic images, achieving automatic, high-throughput, and accurate measurement and providing a technical means for analyzing stomatal behavior under environmental change. Based on a convolutional-neural-network instance segmentation network, the invention accurately segments stomatal pores; from the segmentation results, least-squares ellipse fitting simultaneously locates the stomata in the microscopic image and computes their anatomical parameters, yielding quantities such as the major axis, minor axis, openness, and area of each pore. This effectively solves the technical problems of existing manual or semi-automatic analysis methods, which are time-consuming, labor-intensive, highly subjective, and inefficient, and meets the practical need for rapid, high-throughput stomatal analysis of plant leaf microscopic images.
Drawings
FIG. 1 shows the stomatal segmentation result;
FIG. 2 is a flow chart of stomatal pore anatomical parameter measurement;
FIG. 3 shows the segmentation and ellipse-fitting results;
FIG. 4 is an overall flow chart of the invention;
FIG. 5 is a flow chart of acquiring the stomata sample data set;
FIG. 6 is a flow chart of obtaining the final plant leaf microscopic image stomatal segmentation model;
FIG. 7 is a flow chart of obtaining the stomatal pore anatomical parameters.
Detailed Description
Embodiment 1: the specific process of the convolutional-neural-network-based plant leaf stomatal pore anatomical parameter measurement method of this embodiment is as follows (as shown in FIG. 4):
the method comprises the following steps of firstly, acquiring a gas pore sample data set (as shown in figure 5), and comprising the following steps:
step one, obtaining a leaf microscopic image
Firstly, placing plant leaves to be measured on an optical microscope object stage, adjusting the focal length and the magnification of a microscope, and obtaining leaf microscopic images under the focal length;
then, under the magnification of 1000 times, a plurality of images with different focal lengths are fused by adopting the depth synthesis function of a microscope to obtain a clear leaf microscopic image;
and step two, carrying out pixel-level marking on the pore space areas of the pores of the blade microscopic images acquired one by one in the step by utilizing a manual marking mode to obtain a sample set of pores.
Step two, construct and train a convolutional-neural-network-based stomatal segmentation model for plant leaf microscopic images using the stomata sample data set to obtain the final plant leaf microscopic image stomatal segmentation model (as shown in FIG. 6), as follows:
Step 2.1, divide the stomata sample set into a training set D1, a validation set D2, and a test set D3 in the ratio A1:A2:A3, where A1:A2:A3 = 3:1:1.
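The 3:1:1 split above can be sketched in a few lines of Python; the helper below is illustrative only (the sample list, seed, and function name are assumptions, not from the patent):

```python
# Illustrative sketch: partition an annotated stomata sample list into
# training, validation, and test sets at the 3:1:1 ratio named in the text.
import random

def split_dataset(samples, ratios=(3, 1, 1), seed=0):
    """Shuffle and partition samples into train/val/test by the given ratio."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    total = sum(ratios)
    n_train = len(shuffled) * ratios[0] // total
    n_val = len(shuffled) * ratios[1] // total
    d1 = shuffled[:n_train]                 # training set D1
    d2 = shuffled[n_train:n_train + n_val]  # validation set D2
    d3 = shuffled[n_train + n_val:]         # test set D3
    return d1, d2, d3

# 100 hypothetical samples at 3:1:1 give 60 / 20 / 20.
d1, d2, d3 = split_dataset(list(range(100)))
```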
Step 2.2, construct a stomatal segmentation model for plant leaf microscopic images based on a convolutional-neural-network instance segmentation network.
The instance segmentation algorithm used is YOLACT (You Only Look At CoefficienTs); the feature extraction network in YOLACT is ResNet-50 with a Feature Pyramid Network (FPN).
Step 2.3, apply data augmentation to the training set D1, then train the stomatal segmentation model on the augmented D1, computing the loss of the plant leaf microscopic image stomatal segmentation model at each iteration, to obtain the trained model.
the data enhancement processing comprises random illumination, cutting, amplifying and zooming;
the loss functions are divided into classification loss, boundary box regression loss and mask loss, wherein the classification loss adopts SoftMax to convert the result into probability, then the probability difference between the mark value and the predicted value is calculated by utilizing a cross entropy loss function, the boundary box regression loss adopts a Smooth L1 loss function, the mask loss adopts Sigmoid to convert the result into probability, and then the probability difference between the mark value and the predicted value is calculated by utilizing a binary cross entropy loss function;
Step 2.4, normalize the validation set D2; after each training epoch, validate the network model on the normalized validation set, judge how well the current network fits the data from the validation result, and adjust the model's hyperparameters accordingly; iterate until the loss falls below a preset validation threshold, then save the current model as the final plant leaf microscopic image stomatal segmentation model.
verification set D2The network hyper-parameters are adjusted by Early stopping technique (Early stopping). In the training process of the network, each time a certain number of iterations of training is carried out, the current network is tested by using a verification set, and the network effect is quantitatively checked according to the segmentation parameters such as loss function values. If the network segmentation effect obtained by comparing the current network segmentation effect with the previous training is poor, the network training needs to be manually stopped in advance, and the overall iteration round number, the learning rate and other hyper-parameters of the network are adjusted.
Step 2.5, test the final plant leaf microscopic image stomatal segmentation model saved in step 2.4 on the test set D3. If its accuracy exceeds a preset test threshold, proceed to step three; if it is below the preset test threshold, adjust the network hyperparameters until the accuracy exceeds the preset test threshold.
Step three, obtain the stomatal pore anatomical parameters using the final plant leaf microscopic image stomatal segmentation model (as shown in FIG. 7), as follows:
Step 3.1, input the plant leaf image to be measured into the final segmentation model and output a segmentation result image of the plant leaf microscopic image to be measured (as shown in FIG. 1).
Step 3.2, add a least-squares ellipse-fitting branch at the output of the final segmentation model obtained in step two, and input the edge coordinates of each stomatal mask in the segmentation result image into the branch to obtain an ellipse fitted to each stomatal segmentation result by least-squares ellipse fitting (as shown in FIG. 3).
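One standard construction of such a least-squares ellipse fit is sketched below: fit the general conic A x^2 + B xy + C y^2 + D x + E y = 1 to the mask edge coordinates, then recover the centre and semi-axes. The patent does not publish its exact formulation, so this is an assumed implementation, verified here on synthetic edge points:

```python
# Hedged sketch of a least-squares ellipse fit over mask edge coordinates.
import numpy as np

def fit_ellipse(x, y):
    """Least-squares conic fit; returns (centre, semi-axes sorted a >= b)."""
    # Solve [x^2, xy, y^2, x, y] . theta = 1 in the least-squares sense.
    design = np.column_stack([x * x, x * y, y * y, x, y])
    A, B, C, D, E = np.linalg.lstsq(design, np.ones_like(x), rcond=None)[0]
    F = -1.0
    # Centre: where the conic's gradient vanishes.
    centre = np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])
    # Constant term after translating the conic to its centre.
    Fc = F + 0.5 * (D * centre[0] + E * centre[1])
    # Semi-axes from the eigenvalues of the quadratic form.
    lam = np.linalg.eigvalsh([[A, B / 2], [B / 2, C]])
    axes = np.sqrt(-Fc / lam)
    return centre, np.sort(axes)[::-1]

# Synthetic "mask edge": an ellipse with semi-axes 5 and 2, centred at (10, 20).
t = np.linspace(0, 2 * np.pi, 100)
centre, (a, b) = fit_ellipse(10 + 5 * np.cos(t), 20 + 2 * np.sin(t))
```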
Step 3.3, obtain the parameters of each ellipse and take them as approximations of the stomatal pore anatomical parameters (as shown in FIG. 2).
the parameters of the ellipse include:
a long axis corresponding to the length of the pore space;
a minor axis corresponding to the width of the pore space;
area, corresponding to the area of the pore;
eccentricity, which corresponds to the similarity of pore porosity to ellipse;
and the opening degree is used for describing the conductance of the air hole, and the opening degree is numerically equal to the ratio of the short axis to the long axis.
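As a sketch, the mapping above from the fitted ellipse's semi-axes a and b to the anatomical parameters can be written as a small helper (the function name is an assumption, and units follow the image calibration, which is assumed known):

```python
# Derive the anatomical parameters from the fitted semi-major axis a
# and semi-minor axis b of the ellipse.
import math

def stomatal_parameters(a, b):
    assert a >= b > 0
    return {
        "length": 2 * a,                          # major axis ~ pore length
        "width": 2 * b,                           # minor axis ~ pore width
        "area": math.pi * a * b,                  # ellipse area ~ pore area
        "eccentricity": math.sqrt(1 - (b / a) ** 2),
        "openness": b / a,                        # minor/major ratio
    }

params = stomatal_parameters(a=5.0, b=2.0)
```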

Claims (10)

1. A convolutional-neural-network-based method for measuring plant leaf stomatal pore anatomical parameters, characterized in that the method comprises the following steps:
step one, acquiring a stomata sample data set;
step two, constructing and training a convolutional-neural-network-based stomatal segmentation model for plant leaf microscopic images using the stomata sample data set to obtain a final plant leaf microscopic image stomatal segmentation model;
step three, obtaining the stomatal pore anatomical parameters using the final plant leaf microscopic image stomatal segmentation model, comprising:
step 3.1, inputting the plant leaf image to be measured into the final stomatal segmentation model and outputting a segmentation result image of the plant leaf microscopic image to be measured;
step 3.2, adding a least-squares ellipse-fitting branch at the output of the final stomatal segmentation model obtained in step two, and inputting the edge coordinates of each stomatal mask in the segmentation result image into the ellipse-fitting branch to obtain an ellipse fitted to each stomatal segmentation result by least-squares ellipse fitting;
step 3.3, obtaining the parameters of the ellipses fitted in step 3.2, thereby obtaining the stomatal pore anatomical parameters.
2. The convolutional-neural-network-based method for measuring plant leaf stomatal pore anatomical parameters according to claim 1, characterized in that acquiring the stomata sample data set in step one comprises:
step 1.1, acquiring leaf microscopic images;
step 1.2, marking the stomatal pore regions of each leaf microscopic image acquired in step 1.1 at the pixel level to obtain the stomata sample data set.
3. The convolutional-neural-network-based method for measuring plant leaf stomatal pore anatomical parameters according to claim 1 or 2, characterized in that acquiring the leaf microscopic images in step 1.1 comprises:
first, placing the plant leaf to be measured on the stage of an optical microscope and adjusting the microscope's focal length and magnification to obtain leaf microscopic images at each focal length;
then, at 1000x magnification, fusing several images taken at different focal lengths using the microscope's depth-composition function to obtain a sharp leaf microscopic image.
4. The convolutional-neural-network-based method for measuring plant leaf stomatal pore anatomical parameters according to claim 3, characterized in that constructing and training the convolutional-neural-network-based stomatal segmentation model in step two comprises:
step 2.1, dividing the stomata sample set into a training set D1, a validation set D2, and a test set D3 in the ratio A1:A2:A3;
step 2.2, constructing a stomatal segmentation model for plant leaf microscopic images based on a convolutional-neural-network instance segmentation network;
step 2.3, applying data augmentation to the training set D1, then training the stomatal segmentation model on the augmented D1, and computing the loss of the plant leaf microscopic image stomatal segmentation model at each iteration to obtain the trained model;
step 2.4, normalizing the validation set D2, validating the network model on the normalized validation set after each training epoch, judging how well the current network fits the data from the validation result, adjusting the model's hyperparameters accordingly, and iterating until the loss falls below a preset validation threshold, at which point the current model is saved as the final plant leaf microscopic image stomatal segmentation model;
wherein judging the fit and adjusting the hyperparameters specifically comprises: if the current segmentation quality is worse than that obtained in earlier training, stopping training early and adjusting hyperparameters such as the total number of iterations and the learning rate;
step 2.5, testing the final model saved in step 2.4 on the test set D3; if its accuracy exceeds a preset test threshold, executing step three; if it is below the preset test threshold, adjusting the network hyperparameters until the accuracy exceeds the preset test threshold, then executing step three.
5. The convolutional-neural-network-based method for measuring plant leaf stomatal pore anatomical parameters according to claim 4, characterized in that the ratio in step 2.1 is A1:A2:A3 = 3:1:1.
6. The convolutional-neural-network-based method for measuring plant leaf stomatal pore anatomical parameters according to claim 5, characterized in that the instance segmentation algorithm of the instance segmentation network in step 2.2 is YOLACT, and the feature extraction network in YOLACT is ResNet-50 with an FPN.
7. The convolutional-neural-network-based method for measuring plant leaf stomatal pore anatomical parameters according to claim 6, characterized in that the data augmentation applied to the training set D1 in step 2.3 comprises random illumination, cropping, magnification, and scaling.
8. The convolutional-neural-network-based method for measuring plant leaf stomatal pore anatomical parameters according to claim 7, characterized in that the loss of the network model at each iteration in step 2.3 comprises a classification loss, a bounding-box regression loss, and a mask loss; the classification loss uses a SoftMax function; the bounding-box regression loss uses the Smooth L1 loss function; and the mask loss uses a Sigmoid function.
9. The convolutional-neural-network-based method for measuring plant leaf stomatal pore anatomical parameters according to claim 8, characterized in that adjusting the model's hyperparameters according to the fitting condition in step 2.4 specifically comprises: adjusting the network hyperparameters on the validation set D2 by an early-stopping technique.
10. The convolutional-neural-network-based method for measuring plant leaf stomatal pore anatomical parameters according to claim 9, characterized in that the parameters of the ellipse in step 3.3 comprise:
the major axis, corresponding to the pore length;
the minor axis, corresponding to the pore width;
the area, corresponding to the pore area;
the eccentricity, describing how closely the pore resembles an ellipse;
and the openness, used to describe stomatal conductance, numerically equal to the ratio of the minor axis to the major axis.
CN202110769069.0A 2021-07-07 2021-07-07 Plant leaf pore aperture anatomical parameter measuring method based on convolutional neural network Pending CN113506263A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110769069.0A CN113506263A (en) 2021-07-07 2021-07-07 Plant leaf pore aperture anatomical parameter measuring method based on convolutional neural network

Publications (1)

Publication Number Publication Date
CN113506263A true CN113506263A (en) 2021-10-15

Family

ID=78011848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110769069.0A Pending CN113506263A (en) 2021-07-07 2021-07-07 Plant leaf pore aperture anatomical parameter measuring method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN113506263A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115078430A (en) * 2022-06-10 2022-09-20 水木未来(北京)科技有限公司 Method and device for determining quality of support film of grid of cryoelectron microscope

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180355369A1 (en) * 2015-07-02 2018-12-13 The Australian National University Method for Increasing Plant Stress Tolerance and Seed Dormancy
CN111540006A (en) * 2020-04-26 2020-08-14 河南大学 Plant stomata intelligent detection and identification method and system based on deep migration learning
CN112861693A (en) * 2021-02-02 2021-05-28 东北林业大学 Plant leaf microscopic image pore segmentation method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WENLONG SONG et al.: "An Automatic Method for Stomatal Pore Detection and Measurement in Microscope Images of Plant Leaf Based on a Convolutional Neural Network Model", FORESTS *
WANG Jingtao et al.: "Stomata Detection Method for Leaves of Living Plants Based on Faster R-CNN", Journal of Northeast Forestry University *

Similar Documents

Publication Publication Date Title
CN109977808B (en) Wafer surface defect mode detection and analysis method
CN108960245B (en) Tire mold character detection and recognition method, device, equipment and storage medium
CN111462076B (en) Full-slice digital pathological image fuzzy region detection method and system
CN110543878A (en) pointer instrument reading identification method based on neural network
CN110321844B (en) Fast iris detection method based on convolutional neural network
CN109584286B (en) Asphalt pavement structure depth calculation method based on generalized regression neural network
CN115063409B (en) Method and system for detecting surface material of mechanical cutter
CN108537751B (en) Thyroid ultrasound image automatic segmentation method based on radial basis function neural network
CN101866427A (en) Method for detecting and classifying fabric defects
CN116664559B (en) Machine vision-based memory bank damage rapid detection method
CN112613097A (en) BIM rapid modeling method based on computer vision
CN113239930A (en) Method, system and device for identifying defects of cellophane and storage medium
CN113269791B (en) Point cloud segmentation method based on edge judgment and region growth
CN114372955A (en) Casting defect X-ray diagram automatic identification method based on improved neural network
CN108830856B (en) GA automatic segmentation method based on time series SD-OCT retina image
CN113393426A (en) Method for detecting surface defects of rolled steel plate
CN114549446A (en) Cylinder sleeve defect mark detection method based on deep learning
CN111008649B (en) Defect detection data set preprocessing method based on three decisions
CN116128873A (en) Bearing retainer detection method, device and medium based on image recognition
CN113506263A (en) Plant leaf pore aperture anatomical parameter measuring method based on convolutional neural network
CN111738931A (en) Shadow removal algorithm for aerial image of photovoltaic array unmanned aerial vehicle
CN105389820A (en) Infrared image definition evaluating method based on cepstrum
CN113435460A (en) Method for identifying brilliant particle limestone image
CN109670408A (en) A kind of object-based remote sensing images Clean water withdraw method
CN116597275A (en) High-speed moving target recognition method based on data enhancement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20211015