CN112581483A - Self-learning-based plant leaf vein segmentation method and device - Google Patents


Info

Publication number
CN112581483A
CN112581483A (application CN202011528023.1A; granted as CN112581483B)
Authority
CN
China
Prior art keywords
vein
plant leaf
extraction module
image
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011528023.1A
Other languages
Chinese (zh)
Other versions
CN112581483B (en)
Inventor
张长水 (Zhang Changshui)
李磊 (Li Lei)
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN202011528023.1A priority Critical patent/CN112581483B/en
Publication of CN112581483A publication Critical patent/CN112581483A/en
Application granted granted Critical
Publication of CN112581483B publication Critical patent/CN112581483B/en
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation
    • G06T2207/30188Vegetation; Agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a self-learning-based plant leaf vein segmentation method and device, relating to the technical field of data processing. The method comprises the following steps: training a deep neural network model on labeled plant leaf samples to obtain a feature extraction module, a rough vein extraction module and a fine vein extraction module; processing an unlabeled plant leaf picture through these modules to obtain a rough vein picture and a fine vein picture; fusing the rough vein picture and the fine vein picture to obtain a vein segmentation map, which serves as the labeling information of the unlabeled plant leaf picture; and training the deep neural network model according to a preset loss function, so that the trained model processes a plant leaf picture to be processed and obtains a plant leaf segmentation result. In this way, the model can automatically learn information from a large number of unlabeled pictures using only a small number of labeled pictures, which improves generalization and increases the efficiency and accuracy of plant leaf vein segmentation.

Description

Self-learning-based plant leaf vein segmentation method and device
Technical Field
The application relates to the technical field of data processing, in particular to a self-learning-based plant leaf vein segmentation method and device.
Background
Plant leaves are important organs of the plant, and their contours and veins are key components of leaf morphology. Leaf veins are regarded as the "fingerprints" of leaves: they serve as important parameters for measuring biochemical processes such as plant growth and development, growth condition and genetic characteristics, and as an important basis for plant classification and identification, and they are widely applied in agricultural production and scientific research. Given the enormous variety of plants on Earth, extracting plant veins is of great significance in botany, agricultural production, horticulture and related fields. Traditional plant vein segmentation methods are performed manually, for example using chemical reagents, high-resolution scanners and X-rays; these require specialized personnel and complex equipment and are inefficient. With the development of artificial intelligence and computer vision, deep learning methods show great promise for this problem; however, existing methods usually need a large number of finely labeled images for training, and the labeling process is tedious and labor-intensive.
Disclosure of Invention
The present application is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, the first objective of the present application is to provide a self-learning-based plant leaf vein segmentation method. The method uses a deep learning framework to extract leaf contours and veins clearly and accurately from a picture containing plant leaves, can directly process pictures taken against a simple background, and requires no special picture preprocessing. It performs iterative self-learning training of a neural network with as few labeled picture samples as possible (for example, 10 pictures) together with a large number of unlabeled picture samples (for example, hundreds of pictures), and through continuous iteration obtains increasingly clear and complete contour and vein segmentation maps. The algorithm greatly reduces the training process's dependence on large amounts of labeled data and makes full use of the information in unlabeled pictures, so that the model learns contour- and vein-related features over repeated iterations and extracts contours and veins more clearly and accurately. It finally achieves end-to-end segmentation of the contours and veins of an input picture, which saves the complex picture preprocessing of previous methods, improves generalization performance, and makes full use of unlabeled data to help the model extract leaf feature information.
The second purpose of the application is to provide a self-learning-based plant leaf vein segmentation device.
In order to achieve the above object, a first aspect of the present application provides a self-learning based plant leaf vein segmentation method, including:
obtaining a marked plant leaf picture sample, training the marked plant leaf sample through a deep neural network model, and obtaining a feature extraction module, a rough vein extraction module and a fine vein extraction module;
acquiring a non-labeled plant leaf picture, inputting the non-labeled plant leaf picture into the feature extraction module, the rough vein extraction module and the fine vein extraction module for processing, and acquiring a rough vein picture and a fine vein picture of the non-labeled plant leaf picture;
and fusing the rough leaf vein image and the fine leaf vein image to obtain a leaf vein segmentation image of the plant leaf image without the label, and taking the leaf vein segmentation image as the label information of the plant leaf image without the label and training the deep neural network model according to a preset loss function so as to enable the trained deep neural network model to process the plant leaf image to be processed and obtain a plant leaf segmentation result.
According to the self-learning-based plant leaf vein segmentation method, a labeled plant leaf sample is used to train a deep neural network model, and an unlabeled plant leaf picture is processed through the obtained feature extraction module, rough vein extraction module and fine vein extraction module to obtain a rough vein picture and a fine vein picture. The rough vein picture and the fine vein picture are fused to obtain a vein segmentation map, which serves as the labeling information of the unlabeled plant leaf picture, and the deep neural network model is trained according to a preset loss function, so that the trained model processes a plant leaf picture to be processed and obtains a plant leaf segmentation result. In this way, the model can automatically learn information from a large number of unlabeled pictures using only a small number of labeled pictures, which improves generalization and increases the efficiency and accuracy of plant leaf vein segmentation.
In an embodiment of the present application, inputting the unlabeled plant leaf picture into the feature extraction module, the rough vein extraction module and the fine vein extraction module for processing, and obtaining the rough vein picture and the fine vein picture of the unlabeled plant leaf picture, comprises:
the characteristic extraction module is used for extracting the characteristics of the unmarked plant leaf picture to obtain a characteristic picture;
the rough vein extraction module is used for processing the characteristic diagram to obtain an intermediate layer characteristic diagram and the rough vein diagram;
and the fine vein extraction module is used for processing the characteristic diagram and the intermediate layer characteristic diagram to obtain the fine vein diagram.
In an embodiment of the application, the training the deep neural network model with the vein segmentation map as the labeling information of the unlabeled plant leaf image according to a preset loss function includes:
and inputting the vein segmentation graph serving as the labeling information of the label-free plant leaf picture into the deep neural network model for training, calculating a difference value between a training result and the labeling information according to a loss function, and adjusting parameters of the deep neural network model until the training condition is met.
In an embodiment of the present application, a confidence map is generated from the rough vein segmentation map, and the loss is calculated over the high-confidence regions of the confidence map against the labeled plant leaf picture sample, where the loss function is as follows:

l(x, y) = L = {l_1, l_2, …, l_N}^T (1)

l_n = -w_n · y_n · log x_n (2)

where x_n is the predicted value of the n-th pixel output by the rough vein extraction module, y_n is the label value at the corresponding position of the labeled picture, and w_n ∈ {0, 1} is the confidence weight: w_n is 1 when the corresponding output position has high confidence, and 0 otherwise.
In an embodiment of the present application, the loss of the fine vein map is calculated over the regions corresponding to the labeled plant leaf picture sample, where the loss function is as follows:

l(x, y) = L = {l_1, l_2, …, l_N}^T (3)

l_n = -y_n · log x_n (4)

where x_n is the predicted value of the n-th pixel output by the fine vein extraction module and y_n is the label value at the corresponding position of the labeled picture.
In order to achieve the above object, a second aspect of the present application provides a self-learning based plant leaf vein segmentation apparatus, including:
the system comprises an acquisition training module, a characteristic extraction module, a rough vein extraction module and a fine vein extraction module, wherein the acquisition training module is used for acquiring a labeled plant leaf picture sample, training the labeled plant leaf sample through a deep neural network model, and acquiring the characteristic extraction module, the rough vein extraction module and the fine vein extraction module;
the acquisition module is used for acquiring a non-labeled plant leaf picture and inputting the non-labeled plant leaf picture into the feature extraction module, the rough vein extraction module and the fine vein extraction module for processing to acquire a rough vein picture and a fine vein picture of the non-labeled plant leaf picture;
and the processing module is used for fusing the rough leaf vein image and the fine leaf vein image to obtain a leaf vein segmentation image of the plant leaf image without the label, and taking the leaf vein segmentation image as the label information of the plant leaf image without the label and training the deep neural network model according to a preset loss function so as to enable the trained deep neural network model to process the plant leaf image to be processed to obtain a plant leaf segmentation result.
According to the self-learning-based plant leaf vein segmentation device, a labeled plant leaf sample is used to train a deep neural network model, and an unlabeled plant leaf picture is processed through the obtained feature extraction module, rough vein extraction module and fine vein extraction module to obtain a rough vein picture and a fine vein picture. The rough vein picture and the fine vein picture are fused to obtain a vein segmentation map, which serves as the labeling information of the unlabeled plant leaf picture, and the deep neural network model is trained according to a preset loss function, so that the trained model processes a plant leaf picture to be processed and obtains a plant leaf segmentation result. In this way, the model can automatically learn information from a large number of unlabeled pictures using only a small number of labeled pictures, which improves generalization and increases the efficiency and accuracy of plant leaf vein segmentation.
In an embodiment of the present application, the obtaining module is specifically configured to:
the characteristic extraction module is used for extracting the characteristics of the unmarked plant leaf picture to obtain a characteristic picture;
the rough vein extraction module is used for processing the characteristic diagram to obtain an intermediate layer characteristic diagram and the rough vein diagram;
and the fine vein extraction module is used for processing the characteristic diagram and the intermediate layer characteristic diagram to obtain the fine vein diagram.
In an embodiment of the present application, the processing module is specifically configured to:
and inputting the vein segmentation graph serving as the labeling information of the label-free plant leaf picture into the deep neural network model for training, calculating a difference value between a training result and the labeling information according to a loss function, and adjusting parameters of the deep neural network model until the training condition is met.
In an embodiment of the present application, a confidence map is generated from the rough vein segmentation map, and the loss is calculated over the high-confidence regions of the confidence map against the labeled plant leaf picture sample, where the loss function is as follows:

l(x, y) = L = {l_1, l_2, …, l_N}^T (1)

l_n = -w_n · y_n · log x_n (2)

where x_n is the predicted value of the n-th pixel output by the rough vein extraction module, y_n is the label value at the corresponding position of the labeled picture, and w_n ∈ {0, 1} is the confidence weight: w_n is 1 when the corresponding output position has high confidence, and 0 otherwise.
In an embodiment of the present application, the loss of the fine vein map is calculated over the regions corresponding to the labeled plant leaf picture sample, where the loss function is as follows:

l(x, y) = L = {l_1, l_2, …, l_N}^T (3)

l_n = -y_n · log x_n (4)

where x_n is the predicted value of the n-th pixel output by the fine vein extraction module and y_n is the label value at the corresponding position of the labeled picture.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart of a self-learning based plant leaf vein segmentation method according to an embodiment of the present application;
FIG. 2 is an exemplary diagram of a self-learning based plant leaf vein segmentation method according to an embodiment of the present application;
FIG. 3 is a diagram illustrating an example of picture data according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating an embodiment of extracting vein segmentation pseudo labels from an unlabeled picture using a pre-trained model;
fig. 5 is a schematic structural diagram of a self-learning based plant leaf vein segmentation apparatus according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The self-learning based plant leaf vein segmentation method and device according to the embodiments of the present application are described below with reference to the accompanying drawings.
The self-learning-based plant leaf vein segmentation method of the present application realizes contour and vein segmentation with few labeled samples. Through iterative self-learning training, the model fully learns the feature information of leaves in unlabeled pictures and achieves a coarse-to-fine segmentation effect; training is not limited to a handful of labeled data, so the method generalizes well.
Fig. 1 is a schematic flow chart of a self-learning based plant leaf vein segmentation method according to an embodiment of the present application.
As shown in FIG. 1, the self-learning based plant leaf vein segmentation method comprises the following steps:
101, obtaining a labeled plant leaf image sample, training the labeled plant leaf sample through a deep neural network model, and obtaining a feature extraction module, a rough vein extraction module and a fine vein extraction module.
And 102, acquiring a non-labeled plant leaf image, inputting the non-labeled plant leaf image into the feature extraction module, the rough vein extraction module and the fine vein extraction module for processing, and acquiring a rough vein image and a fine vein image of the non-labeled plant leaf image.
And 103, fusing the rough vein image and the fine vein image to obtain a vein segmentation image of the plant leaf image without the label, taking the vein segmentation image as the label information of the plant leaf image without the label, and training the deep neural network model according to a preset loss function so as to enable the trained deep neural network model to process the plant leaf image to be processed to obtain a plant leaf segmentation result.
Specifically, the present application uses a deep learning model to extract leaf information, uses these intermediate features to infer the leaf segmentation, and adopts a self-learning mode that lets the model automatically learn information from a large number of unlabeled pictures with a very small number of labeled pictures, thereby improving generalization. This makes the method more robust and less sensitive to local noise. A user of the algorithm does not need to understand the principle behind it: the trained model completes the plant leaf vein segmentation task directly.
The method uses a deep learning framework to clearly and accurately extract leaf contours and veins from pictures containing plant leaves, and performs iterative self-learning training of the neural network with as few labeled picture samples as possible (for example, 10 pictures) and a large number of unlabeled picture samples (for example, hundreds of pictures), improving network performance and generalization.
Specifically, as shown in fig. 2, first, a small number of labeled pictures are used for pre-training, and three modules are obtained by training the deep neural network model: a feature extraction module, a rough vein extraction module and a fine vein extraction module. Second, the feature extraction module extracts features from a new unlabeled picture; based on these features, the rough vein extraction module and the fine vein extraction module infer high-confidence pseudo labels of different fineness, and the two pseudo labels are fused. Third, in a self-learning iterative process, the obtained pseudo labels are used as labels of the input pictures to train the neural network again; through continuous iterative training, increasingly clear and complete contour and vein segmentation maps are obtained.
Specifically, examples of the small number of labeled picture samples are shown in fig. 3, where the white parts in the labeled pictures are the contours and veins to be segmented; no picture preprocessing is needed. The large number of unlabeled samples refers to samples that contain only the input picture on the left, without the manually labeled picture on the right.
In the embodiment of the application, a feature extraction module performs feature extraction on the plant leaf picture without the label to obtain a feature map; the rough vein extraction module is used for processing the characteristic diagram to obtain an intermediate layer characteristic diagram and the rough vein diagram; and the fine vein extraction module is used for processing the characteristic diagram and the middle layer characteristic diagram to obtain a fine vein diagram.
In the embodiment of the application, a confidence map is generated from the rough vein segmentation map, and the loss is calculated over the high-confidence regions of the confidence map against the labeled plant leaf picture sample, where the loss function is as follows:

l(x, y) = L = {l_1, l_2, …, l_N}^T (1)

l_n = -w_n · y_n · log x_n (2)

where x_n is the predicted value of the n-th pixel output by the rough vein extraction module, y_n is the label value at the corresponding position of the labeled picture, and w_n ∈ {0, 1} is the confidence weight: w_n is 1 when the corresponding output position has high confidence, and 0 otherwise (i.e., no loss is propagated for that position).
In the embodiment of the present application, the loss of the fine vein map is calculated over the regions corresponding to the labeled plant leaf picture sample, where the loss function is as follows:

l(x, y) = L = {l_1, l_2, …, l_N}^T (3)

l_n = -y_n · log x_n (4)

where x_n is the predicted value of the n-th pixel output by the fine vein extraction module and y_n is the label value at the corresponding position of the labeled picture. Since the estimation result here is considered highly reliable, no corresponding weight w_n is used.
Thus, by minimizing the above two loss functions, the relevant model is trained by a back propagation algorithm.
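As a minimal sketch (not the patent's own implementation), the two losses above reduce to a confidence-masked cross-entropy term; the mean reduction and the numerical clamp on the logarithm are assumptions added here:

```python
import numpy as np

def vein_loss(x, y, w=None):
    """Per-pixel loss of Eqs. (1)-(4): l_n = -w_n * y_n * log(x_n).

    x : predicted vein probabilities (rough or fine module output)
    y : {0, 1} label values at the corresponding positions
    w : optional {0, 1} confidence weights; None gives the unweighted
        fine-vein loss of Eqs. (3)-(4)
    """
    x = np.clip(x, 1e-7, 1.0)   # clamp so log() stays finite (an assumption)
    l = -y * np.log(x)          # the vector L = {l_1, ..., l_N}
    if w is not None:
        l = w * l               # low-confidence positions contribute no loss
    return l.mean()             # scalar reduction for back-propagation
```

With w_n = 0 at a pixel, that pixel contributes nothing to the gradient, which is how the "no loss is propagated" behavior of the confidence weight can be realized.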
Specifically, to obtain features containing rich image and semantic information, a feature extraction module is designed; it converts a fixed-size input image into a fixed-size feature map. To obtain clear and accurate veins, to separate mesophyll from veins, and to separate the leaf from the background at the same time, a rough vein extraction module and a fine vein extraction module are designed. The input of the rough vein extraction module is the feature map output by the feature extraction module; its output is a rough segmentation map of the same size as the original input leaf image, i.e. the probability of each point being an extraction target, together with a confidence value for each pixel. The fine vein extraction module then re-infers the most uncertain regions according to the confidence map, using multi-scale feature maps to infer a local fine vein map. To improve the generalization of each module, the rough and fine vein extraction modules share the image feature extraction module, and the fine vein extraction module also uses the multi-scale feature maps of the intermediate layers of the rough vein extraction module.
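The selection of "most uncertain regions" described above can be sketched as picking the pixels whose predicted vein probability is closest to 0.5; this uncertainty measure is an assumption, since the text only specifies sampling by confidence:

```python
import numpy as np

def sample_uncertain_points(prob_map, n):
    """Return the (row, col) coordinates of the n most uncertain pixels.

    Uncertainty is taken as closeness of the vein probability to 0.5;
    high-confidence pixels (near 0 or 1) are skipped.
    """
    uncertainty = -np.abs(prob_map - 0.5)              # larger = less certain
    flat = np.argsort(uncertainty.ravel())[::-1][:n]   # top-n uncertain indices
    rows, cols = np.unravel_index(flat, prob_map.shape)
    return list(zip(rows.tolist(), cols.tolist()))
```

The returned coordinates are where the fine vein extraction module would perform its secondary inference.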
The first step uses convolutional neural network (CNN) methods to build the feature extraction module, the rough vein extraction module and the fine vein extraction module. These three modules describe the shape features of the plant leaf image on one hand, and on the other hand use those features to infer global and local vein segmentation maps at different granularities, so that combining them realizes contour and vein segmentation of the leaf. The basic performance of the models obtained by pre-training in this step is the foundation for further learning in the whole algorithm. A common way to improve model performance in deep learning is to increase the amount of sample data; this method, however, is designed for a small amount of labeled data, so with the data volume kept small, a more efficient network model, the residual network (ResNet), is adopted. The residual network structure parameters of the feature extraction module in this application are shown in table 1: convolution 2 to convolution 5 consist of 6, 8, 12 and 6 convolution kernels of the same size respectively; the first convolution kernel of each layer has stride 2 and the remaining kernels have stride 1 (except convolution 2, whose strides are all 1, since pooling 1 has stride 2 and the feature map size has already been reduced at that step). The rough vein extraction module gradually increases the resolution of the feature map by deconvolution and finally obtains a rough vein map of the same size as the input image; its network parameters are shown in table 2.
The fine vein extraction module is realized with convolution kernels of size 1*1; this simple and effective network structure achieves the purpose of secondary inference. The network parameters of the fine vein extraction module are shown in table 3.
TABLE 1 feature extraction Module Structure parameter Table
(Table 1 is provided as an image in the original document; its contents are not reproduced here.)
TABLE 2 rough vein extraction Module Structure parameter Table
Layer name       Type           Kernel size  Stride  Input size   Output size
Deconvolution 1  Deconvolution  3*3          2*2     516*14*14    256*28*28
Deconvolution 2  Deconvolution  3*3          2*2     256*28*28    128*56*56
Deconvolution 3  Deconvolution  3*3          2*2     128*56*56    64*112*112
Deconvolution 4  Deconvolution  3*3          2*2     64*112*112   64*224*224
Deconvolution 5  Deconvolution  3*3          2*2     64*224*224   1*448*448
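The doubling of spatial resolution at each row of Table 2 (14 → 28 → … → 448) is consistent with a 3*3 transposed convolution of stride 2 with padding 1 and output padding 1; those two padding values are assumptions here, since the table lists only kernel size and stride:

```python
def deconv_out(in_size, kernel=3, stride=2, padding=1, output_padding=1):
    """Spatial output size of a transposed convolution (standard formula)."""
    return (in_size - 1) * stride - 2 * padding + kernel + output_padding

# Walk the five deconvolutions of Table 2, starting from the 14*14 input.
sizes = [14]
for _ in range(5):
    sizes.append(deconv_out(sizes[-1]))
# sizes is now [14, 28, 56, 112, 224, 448], matching the table's rows
```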
TABLE 3 Fine vein extraction Module Structure parameter Table
Layer name     Type         Kernel size  Stride  Input size  Output size
Convolution 1  Convolution  1*1          1*1     1029*1*1    512*1*1
Convolution 2  Convolution  1*1          1*1     513*1*1     256*1*1
Convolution 3  Convolution  1*1          1*1     257*1*1     256*1*1
Convolution 4  Convolution  1*1          1*1     257*1*1     64*1*1
Convolution 5  Convolution  1*1          1*1     65*1*1      1*1*1
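Because every layer in Table 3 operates on a 1*1 spatial map, the fine vein extraction module is effectively a per-point multi-layer perceptron. One pattern worth noting: each layer's input is exactly one channel wider than the previous layer's output (513 = 512 + 1, and so on), which suggests an extra scalar is concatenated at every stage; that reading is an inference from the table, not stated in the text:

```python
# (input channels, output channels) for convolutions 1-5 of Table 3
layers = [(1029, 512), (513, 256), (257, 256), (257, 64), (65, 1)]

def extra_channels(layers):
    """Channels each layer's input adds on top of the previous layer's
    output (all 1 here, hinting at a concatenated per-point scalar)."""
    return [layers[i + 1][0] - layers[i][1] for i in range(len(layers) - 1)]
```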
It should be noted that, when the feature extraction module performs feature extraction and segmentation on the original image, different deep learning network models may be used to implement the feature extraction, and different schemes may also be used for the size of the feature map.
The second step extracts a feature map from a new unlabeled picture with the feature extraction module, then uses the rough vein extraction module and the fine vein extraction module to infer high-confidence pseudo labels of different fineness based on those features, and fuses the two pseudo labels; the whole flow is shown in fig. 4. The feature extraction module operates on the input picture to obtain a feature map containing rich semantic information. The rough vein extraction module performs segmentation using the information in the feature map while gradually increasing the resolution, producing a vein segmentation map of the same size as the original input; because only part of its regions have high confidence, this result is called the rough vein segmentation map. The low-confidence parts are sampled and inferred a second time by the fine vein extraction module, whose input includes the feature map from the feature extraction module, the intermediate-layer feature maps of the rough vein module, and the points sampled from the rough vein confidence map; integrating this information generates fine vein segmentation results at the sampled points. Finally, the rough veins and the fine veins are fused to obtain the final vein segmentation picture, i.e. the pseudo label of the unlabeled leaf picture.
It should be noted that the rough vein extraction module may use different deep learning network models when segmenting the features; the confidence map may be sampled in various ways, such as uniform or non-uniform sampling; the fine vein extraction module may likewise use different deep learning models for the secondary inference on the sampled-point features; and the pseudo labels may be fused in various ways, such as additive, multiplicative, maximum or minimum fusion.
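The fusion modes just listed (additive, multiplicative, maximum, minimum) can be sketched directly; clipping the additive mode to [0, 1] is an assumption made here so the result stays a valid probability map:

```python
import numpy as np

def fuse_pseudo_labels(coarse, fine, mode="max"):
    """Fuse the rough and fine vein maps into one pseudo label."""
    if mode == "add":
        return np.clip(coarse + fine, 0.0, 1.0)  # keep a valid probability map
    if mode == "mul":
        return coarse * fine                     # both maps must agree
    if mode == "max":
        return np.maximum(coarse, fine)          # keep the stronger response
    if mode == "min":
        return np.minimum(coarse, fine)          # keep the weaker response
    raise ValueError(f"unknown fusion mode: {mode}")
```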
In the third step, the vein segmentation map obtained in the second step, i.e. the pseudo label, is used as the label of the unlabeled picture and fed into the model obtained in the first step, and the feature extraction module and the rough vein extraction module are retrained repeatedly, following the algorithm flow below. As described above, the feature extraction module uses a residual network (ResNet) as its backbone, and the rough vein extraction module consists of multiple deconvolution layers. After training converges, vein segmentation of any test picture follows the same forward process and directly outputs the segmentation map Ŷ, i.e. the segmentation result; no iterative training is required.
Specifically, the self-learning iteration proceeds as follows. Input: unlabeled leaf pictures I, the pre-trained feature extraction module m_e, rough vein extraction module m_c and fine vein extraction module m_f, and the number N of secondary-inference sampling points. For each unlabeled leaf picture I: extract the feature map ε = m_e(I); compute the rough vein map ŷ_c and the confidence map D_c of the leaf with the rough vein extraction module; sample the N most uncertain points according to the confidence map and compute the corresponding positions in the feature map; perform secondary inference at these points with the fine vein extraction module, combining the feature map ε, the rough vein map ŷ_c and the intermediate-layer features of the rough vein extraction module, to obtain the fine veins ŷ_f; fuse the rough vein map ŷ_c and the fine veins ŷ_f into the pseudo label ŷ; use ŷ as the label of I, compute the loss function l, and update the model parameters by gradient back-propagation. Return to the first step and iterate until convergence.
Therefore, for a plant leaf whose veins are to be extracted, the only requirement is that the whole leaf be fully contained in the shot picture against a simple background. A deep learning model automatically learns to extract leaf features, automatic vein segmentation is then performed on these intermediate features, and a high-confidence vein segmentation map is obtained through two stages of inference. The secondary inference lets the model re-infer local information and correct errors made in the first pass, so the extracted result is more accurate and more robust.
In addition, through self-learning the model uses only a very small number of labeled pictures and automatically learns information from a large number of unlabeled pictures, which improves its generalization ability. At the same time, the dependence on large amounts of labeled training data is reduced and unlabeled data are fully utilized, so the method applies to a wider range of scenarios.
Based on the description of the embodiment: multi-scale feature information of the leaf is extracted by a deep learning method; the outline and veins of a plant leaf are extracted by one deep learning network, the rough vein extraction module, and refined by another, the fine vein extraction module; the obtained contour and veins are further inferred and segmented by sampling the confidence map and re-inferring at the sampled points. The pre-training of each module requires only a small number of labeled leaf pictures, while the iterative self-learning algorithm converges on unlabeled leaf pictures alone, with no extra labeled data. Confidence inference is combined with self-learning: confidence inference safeguards the quality of the self-learning pseudo labels, and the self-learned model in turn produces a more trustworthy confidence map during inference. The pseudo label is generated by fusing the rough vein map and the fine vein map.
In the related art, most methods place high requirements on the acquired pictures, such as hyperspectral images or projection-scanning images, and some require complex manual processing, such as manual cropping, point-cloud conversion, denoising and binarization, to obtain the data format the algorithm needs. Moreover, these methods rely on image processing algorithms with manually tuned parameters: noise cannot be removed completely, the segmented veins contain many burrs, the final result depends heavily on the quality of the input image, and the pipeline cannot run automatically. More importantly, leaves in real scenes often have fine veins, and these methods either need special equipment and high-definition pictures to produce fine results, or yield only simple rough veins from casually shot pictures, which limits their convenience and generalization. Deep learning segmentation algorithms similar to that of the invention usually need large amounts of carefully labeled training data to perform well, so a method that runs automatically and reduces dependence on manually labeled data is urgently needed.
For a plant leaf whose veins are to be extracted, the application only needs the leaf photographed against a clean background with common handheld equipment (a mobile phone, camera, etc.); veins are extracted automatically without any preprocessing of the picture, and the model can be trained with only a very small amount of labeled data. Compared with other schemes, this offers great advantages and convenience and saves time and labor.
The method and the device use a deep learning model to extract leaf features from pictures taken by a portable handheld camera alone, without special sensors or complex preprocessing; a test picture can be input directly into the trained model. This overcomes the drawback of demanding high picture quality and yields better accuracy and generalization.
The method and the device learn their parameters automatically with a deep learning model, without manual parameter tuning, and achieve a better denoising effect. Since no special sensor is needed to obtain the picture, the application difficulty is greatly reduced and the range of application scenarios is widened; the neural network captures richer feature information from the picture and avoids the information loss of the related art.
The method and the device avoid tedious manual preprocessing, stay closer to real scenarios, and overcome the limitation of manually designed parameters in the related art; the deep learning neural network algorithm adopted here has small error, strong robustness and a wide application range.
The deep learning network adopted by the application denoises far better than Canny-operator filtering and is more robust: it avoids burrs along vein edges, reduces vein-fracture regions, and completes vein segmentation automatically once the test picture is input into the trained model, without manual intervention.
Because the deep learning network processes pictures automatically, no manually set threshold is needed; the algorithm error is small, no burrs are produced, and the method is robust and flexible.
Both training and test pictures are taken directly with a common camera; no point-cloud preprocessing is needed, which avoids the information loss that preprocessing causes. Whereas the related art can recover only the single main vein in the middle, this method obtains finer multi-level veins with better results.
The method and the device do not require high-quality pictures; pictures shot with an ordinary camera suffice, which greatly lowers the barrier to applying the technique. In addition, the deep learning algorithm here has smaller error than the skeletonization algorithms of the related art: the resulting vein segmentation map is finer, avoids the over-thin or over-thick veins caused by the iterative thinning of skeletonization, and stays closer to the pixel width of the veins in the original image.
According to the self-learning-based plant leaf vein segmentation method, a labeled plant leaf sample is used to train a deep neural network model, yielding the feature extraction module, the rough vein extraction module and the fine vein extraction module; these modules process an unlabeled plant leaf picture to obtain a rough vein picture and a fine vein picture. The two are fused into a vein segmentation map that serves as the label information of the unlabeled picture, and the deep neural network model is trained according to a preset loss function, so that the trained model can process a plant leaf picture to be processed and obtain the vein segmentation result. In this way, with a small number of labeled pictures the model automatically learns information from a large number of unlabeled pictures, improving generalization as well as the efficiency and accuracy of plant leaf vein segmentation.
In order to realize the embodiment, the application also provides a self-learning-based plant leaf vein segmentation device.
Fig. 5 is a schematic structural diagram of a self-learning-based plant leaf vein segmentation device according to an embodiment of the present application.
As shown in fig. 5, the self-learning based plant leaf vein segmentation apparatus includes an acquisition training module 510, an acquisition module 520, and a processing module 530.
The acquisition training module 510 is configured to acquire a labeled plant leaf picture sample, train the labeled plant leaf sample through a deep neural network model, and obtain the feature extraction module, the rough vein extraction module and the fine vein extraction module.

The obtaining module 520 is configured to acquire an unlabeled plant leaf picture and input it into the feature extraction module, the rough vein extraction module and the fine vein extraction module for processing, obtaining a rough vein picture and a fine vein picture of the unlabeled plant leaf picture.

The processing module 530 is configured to fuse the rough vein picture and the fine vein picture into a vein segmentation map of the unlabeled plant leaf picture, use this map as the label information of the unlabeled picture, and train the deep neural network model according to a preset loss function, so that the trained model processes a plant leaf picture to be processed and obtains the vein segmentation result.
According to the self-learning-based plant leaf vein segmentation device, a labeled plant leaf sample is used to train a deep neural network model, yielding the feature extraction module, the rough vein extraction module and the fine vein extraction module; these modules process an unlabeled plant leaf picture to obtain a rough vein picture and a fine vein picture. The two are fused into a vein segmentation map that serves as the label information of the unlabeled picture, and the deep neural network model is trained according to a preset loss function, so that the trained model can process a plant leaf picture to be processed and obtain the vein segmentation result. In this way, with a small number of labeled pictures the model automatically learns information from a large number of unlabeled pictures, improving generalization as well as the efficiency and accuracy of plant leaf vein segmentation.
It should be noted that the foregoing explanation of the embodiment of the plant leaf vein segmentation method based on self-learning also applies to the plant leaf vein segmentation apparatus based on self-learning of this embodiment, and details thereof are not repeated here.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those of ordinary skill in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. A self-learning based plant leaf vein segmentation method is characterized by comprising the following steps:
obtaining a marked plant leaf picture sample, training the marked plant leaf sample through a deep neural network model, and obtaining a feature extraction module, a rough vein extraction module and a fine vein extraction module;
acquiring a non-labeled plant leaf picture, inputting the non-labeled plant leaf picture into the feature extraction module, the rough vein extraction module and the fine vein extraction module for processing, and acquiring a rough vein picture and a fine vein picture of the non-labeled plant leaf picture;
and fusing the rough leaf vein image and the fine leaf vein image to obtain a leaf vein segmentation image of the plant leaf image without the label, and taking the leaf vein segmentation image as the label information of the plant leaf image without the label and training the deep neural network model according to a preset loss function so as to enable the trained deep neural network model to process the plant leaf image to be processed and obtain a plant leaf segmentation result.
2. The method of claim 1, wherein the obtaining the image of the plant leaf without label input into the feature extraction module, the rough vein extraction module and the fine vein extraction module for processing, and obtaining the rough vein image and the fine vein image of the plant leaf without label comprises:
the characteristic extraction module is used for extracting the characteristics of the unmarked plant leaf picture to obtain a characteristic picture;
the rough vein extraction module is used for processing the characteristic diagram to obtain an intermediate layer characteristic diagram and the rough vein diagram;
and the fine vein extraction module is used for processing the characteristic diagram and the intermediate layer characteristic diagram to obtain the fine vein diagram.
3. The method of claim 1, wherein the training the deep neural network model with the vein segmentation map as labeled information of the unlabeled plant leaf image according to a preset loss function comprises:
and inputting the vein segmentation graph serving as the labeling information of the label-free plant leaf picture into the deep neural network model for training, calculating a difference value between a training result and the labeling information according to a loss function, and adjusting parameters of the deep neural network model until the training condition is met.
4. The method of claim 1,
generating a confidence map according to the rough vein segmentation map, and calculating the loss of a region corresponding to the labeled plant leaf picture sample in a high confidence region in the confidence map, wherein the loss function is as follows:
l(x, y) = L = {l_1, l_2, …, l_N}^T  (1)

l_n = −w_n[y_n · log x_n]  (2)

wherein x_n is the predicted value of the n-th pixel output by the rough vein extraction module, y_n is the label value at the corresponding position of the labeled picture, and w_n ∈ {0, 1} is the confidence weight: w_n is 1 when the corresponding output position has high confidence, and 0 otherwise.
5. The method of claim 1,
calculating the loss of the region corresponding to the fine leaf vein image and the marked plant leaf image sample, wherein the loss function is as follows:
l(x, y) = L = {l_1, l_2, …, l_N}^T  (3)

l_n = −y_n · log x_n  (4)

wherein x_n is the predicted value of the n-th pixel output by the fine vein extraction module, and y_n is the label value at the corresponding position of the labeled picture.
6. A self-learning based plant leaf vein segmentation device is characterized by comprising:
the system comprises an acquisition training module, a characteristic extraction module, a rough vein extraction module and a fine vein extraction module, wherein the acquisition training module is used for acquiring a labeled plant leaf picture sample, training the labeled plant leaf sample through a deep neural network model, and acquiring the characteristic extraction module, the rough vein extraction module and the fine vein extraction module;
the acquisition module is used for acquiring a non-labeled plant leaf picture and inputting the non-labeled plant leaf picture into the feature extraction module, the rough vein extraction module and the fine vein extraction module for processing to acquire a rough vein picture and a fine vein picture of the non-labeled plant leaf picture;
and the processing module is used for fusing the rough leaf vein image and the fine leaf vein image to obtain a leaf vein segmentation image of the plant leaf image without the label, and taking the leaf vein segmentation image as the label information of the plant leaf image without the label and training the deep neural network model according to a preset loss function so as to enable the trained deep neural network model to process the plant leaf image to be processed to obtain a plant leaf segmentation result.
7. The apparatus of claim 6, wherein the obtaining module is specifically configured to:
the characteristic extraction module is used for extracting the characteristics of the unmarked plant leaf picture to obtain a characteristic picture;
the rough vein extraction module is used for processing the characteristic diagram to obtain an intermediate layer characteristic diagram and the rough vein diagram;
and the fine vein extraction module is used for processing the characteristic diagram and the intermediate layer characteristic diagram to obtain the fine vein diagram.
8. The apparatus of claim 6, wherein the processing module is specifically configured to:
and inputting the vein segmentation graph serving as the labeling information of the label-free plant leaf picture into the deep neural network model for training, calculating a difference value between a training result and the labeling information according to a loss function, and adjusting parameters of the deep neural network model until the training condition is met.
9. The apparatus of claim 6,
generating a confidence map according to the rough vein segmentation map, and calculating the loss of a region corresponding to the labeled plant leaf picture sample in a high confidence region in the confidence map, wherein the loss function is as follows:
l(x, y) = L = {l_1, l_2, …, l_N}^T  (1)

l_n = −w_n[y_n · log x_n]  (2)

wherein x_n is the predicted value of the n-th pixel output by the rough vein extraction module, y_n is the label value at the corresponding position of the labeled picture, and w_n ∈ {0, 1} is the confidence weight: w_n is 1 when the corresponding output position has high confidence, and 0 otherwise.
10. The apparatus of claim 6,
calculating the loss of the region corresponding to the fine leaf vein image and the marked plant leaf image sample, wherein the loss function is as follows:
l(x, y) = L = {l_1, l_2, …, l_N}^T  (3)

l_n = −y_n · log x_n  (4)

wherein x_n is the predicted value of the n-th pixel output by the fine vein extraction module, and y_n is the label value at the corresponding position of the labeled picture.
CN202011528023.1A 2020-12-22 2020-12-22 Self-learning-based plant leaf vein segmentation method and device Active CN112581483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011528023.1A CN112581483B (en) 2020-12-22 2020-12-22 Self-learning-based plant leaf vein segmentation method and device


Publications (2)

Publication Number Publication Date
CN112581483A true CN112581483A (en) 2021-03-30
CN112581483B CN112581483B (en) 2022-10-04

Family

ID=75138943

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011528023.1A Active CN112581483B (en) 2020-12-22 2020-12-22 Self-learning-based plant leaf vein segmentation method and device

Country Status (1)

Country Link
CN (1) CN112581483B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101969852A (en) * 2008-03-04 2011-02-09 断层放疗公司 Method and system for improved image segmentation
CN109544554A (en) * 2018-10-18 2019-03-29 中国科学院空间应用工程与技术中心 A kind of segmentation of plant image and blade framework extracting method and system
CN109636809A (en) * 2018-12-03 2019-04-16 西南交通大学 A kind of image segmentation hierarchy selection method based on scale perception
CN111354002A (en) * 2020-02-07 2020-06-30 天津大学 Kidney and kidney tumor segmentation method based on deep neural network
CN111598174A (en) * 2020-05-19 2020-08-28 中国科学院空天信息创新研究院 Training method of image ground feature element classification model, image analysis method and system
WO2020215236A1 (en) * 2019-04-24 2020-10-29 哈尔滨工业大学(深圳) Image semantic segmentation method and system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
NOOR M. AL-SHAKARJI 等: "Unsupervised Learning Method for Plant and Leaf Segmentation", 《2017 IEEE APPLIED IMAGERY PATTERN RECOGNITION WORKSHOP (AIPR)》 *
许新华: "基于改进LBP 和Otsu 相结合的病害叶片图像分割方法", 《计算机产品与流通》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113327299A (en) * 2021-07-07 2021-08-31 北京邮电大学 Neural network light field method based on joint sampling structure
CN113327299B (en) * 2021-07-07 2021-12-14 北京邮电大学 Neural network light field method based on joint sampling structure
CN114140688A (en) * 2021-11-23 2022-03-04 武汉理工大学 Vein phenotype extraction method and device based on transmission scanning image and electronic equipment
CN114140688B (en) * 2021-11-23 2022-12-09 武汉理工大学 Vein phenotype extraction method and device based on transmission scanning image and electronic equipment

Also Published As

Publication number Publication date
CN112581483B (en) 2022-10-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant