CN114140688B - Vein phenotype extraction method and device based on transmission scanning image and electronic equipment - Google Patents


Info

Publication number: CN114140688B (granted from application CN202111395372.5A)
Authority: CN (China)
Prior art keywords: image, leaf, vein, network model, module
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN114140688A (published application)
Inventors: 刘唯真, 袁晓辉, 陈饶
Original and current assignee: Wuhan University of Technology (WUT)

Events:
Application CN202111395372.5A filed by Wuhan University of Technology (WUT)
Publication of application CN114140688A
Application granted; publication of CN114140688B

Classifications

    • G: Physics
    • G06F: Electric digital data processing
    • G06F 18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/2431: Pattern recognition; classification techniques relating to the number of classes; multiple classes
    • G06N: Computing arrangements based on specific computational models
    • G06N 3/045: Neural networks; architecture; combinations of networks
    • G06N 3/08: Neural networks; learning methods

Abstract

The invention relates to a vein phenotype extraction method and device, an electronic device, and a computer-readable storage medium based on transmission scanning images. The method comprises the following steps: acquiring a leaf image, labeling it to obtain a vein image of the leaf, and generating a data set from the leaf image and the vein image; constructing a network model and training it with the data set to obtain a fully trained network model; and acquiring a leaf image to be labeled and obtaining its vein image from the fully trained network model and the leaf image to be labeled. The method can accurately extract the vein phenotypes of the primary, secondary, and tertiary veins.

Description

Vein phenotype extraction method and device based on transmission scanning image and electronic equipment
Technical Field
The invention relates to the technical field of hierarchical vein segmentation and level-by-level vein phenotype extraction, and in particular to a vein phenotype extraction method and device based on transmission scanning images, an electronic device, and a computer-readable storage medium.
Background
Veins are vascular bundles of varying thickness distributed through the mesophyll tissue of a leaf, where they serve transport and support functions; their internal structure varies with vein size. Veins supply the leaf with water and inorganic salts, export the products of photosynthesis, support the leaf's extension in space, and ensure that the leaf's physiological functions proceed smoothly.
Plant veins are divided into primary (main) veins, secondary veins, tertiary veins, and still finer higher-order veins, and each level has characteristic properties such as length, width, and density. Studying plant veins is essential to further research on plant growth. Traditional vein-analysis methods usually apply image binarization to obtain the overall vein network and then compute its aggregate characteristics, but they cannot extract veins level by level.
Disclosure of Invention
In view of the above, it is desirable to provide a vein phenotype extraction method and device based on transmission scanning images, an electronic device, and a computer-readable storage medium, so as to solve the problem that the prior art cannot accurately extract vein phenotypes level by level.
In order to solve the above problems, the present invention provides a vein phenotype extraction method based on transmission scan image, comprising:
acquiring a leaf image, labeling the leaf image to obtain a vein image of a leaf, and generating a data set according to the leaf image and the vein image of the leaf;
constructing a network model, and training the network model with the data set to obtain a fully trained network model;
and acquiring a leaf image to be labeled, and obtaining a vein image of that leaf from the fully trained network model and the leaf image to be labeled.
Further, labeling the leaf image to obtain a vein image of the leaf includes:
labeling the leaf image with the three colors corresponding to the R, G, and B channels (red, green, and blue) to obtain a primary vein image, a secondary vein image, and a tertiary vein image of the leaf, and combining these three images with the leaf image to obtain the vein image of the leaf.
Further, labeling the leaf image by using three colors corresponding to RGB to obtain a primary vein image, a secondary vein image and a tertiary vein image of the leaf, including:
traversing all pixels of the leaf image in turn: a pixel is labeled as belonging to the primary vein image of the leaf when its R channel is 255 and the other channels are 0, to the secondary vein image when its G channel is 255 and the other channels are 0, and to the tertiary vein image when its B channel is 255 and the other channels are 0.
Further, generating a data set from the leaf image and the vein image of the leaf, comprising:
performing size-consistency processing on the leaf image and the vein image of the leaf to obtain a processed leaf image and a processed vein image, and generating a data set from the processed images.
Further, constructing a network model, comprising:
constructing a CTVE-Net network model comprising a feature encoder module, a context extractor module, and a feature decoder module, wherein the feature encoder module includes a dual attention module, and the context extractor module includes a dense atrous (dilated) convolution module and a spatial pyramid pooling module.
Further, training the network model by using the data set to obtain a network model with complete training, including:
training the CTVE-Net network model with the data set to obtain a trained CTVE-Net model, and optimizing the trained model with a preset loss function to obtain a fully trained CTVE-Net model.
Further, the method also comprises the following steps:
thinning the vein image of the leaf to be labeled into a skeleton map using a thinning algorithm, and obtaining vein phenotype data from the skeleton map.
The invention also provides a vein phenotype extraction device based on the transmission scanning image, which comprises a data acquisition module, a network training module and a vein extraction module;
the data acquisition module is used for acquiring a leaf image, labeling the leaf image to obtain a vein image of a leaf, and generating a data set according to the leaf image and the vein image of the leaf;
the network training module is used for constructing a network model, and training the network model by using the data set to obtain a network model with complete training;
and the vein extraction module is used for acquiring a leaf image to be labeled and obtaining a vein image of that leaf from the fully trained network model and the leaf image to be labeled.
The invention further provides an electronic device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the transmission-scanning-image-based vein phenotype extraction method of any of the above technical solutions.
The present invention also provides a computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method for extracting vein phenotype based on transmission scan image according to any of the above technical solutions.
The beneficial effects of the above embodiments are as follows: the vein phenotype extraction method based on transmission scanning images acquires a leaf image, labels it to obtain a vein image of the leaf, constructs and trains a network model, and uses the fully trained model to obtain the vein image of a leaf to be labeled, so the vein phenotypes of the primary, secondary, and tertiary veins can be accurately extracted.
Drawings
Fig. 1 is a schematic view of an application scenario of a vein phenotype extraction apparatus based on a transmission scan image according to the present invention;
FIG. 2 is a schematic flow chart of an embodiment of a transmission-scan-image-based vein phenotype extraction method according to the present invention;
FIG. 3 is a schematic view of a leaf image provided in an embodiment of the invention;
FIG. 4 is a schematic illustration of a vein image of a leaf provided in an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a CTVE-Net network model provided in an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a dual attention module provided in an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a dense atrous convolution module provided in an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a spatial pyramid pooling module provided in an embodiment of the present invention;
FIG. 9 is a schematic representation of a primary vein skeleton provided in an embodiment of the present invention;
FIG. 10 is a diagrammatic illustration of a secondary vein skeleton provided in an embodiment of the present invention;
FIG. 11 is a schematic representation of a tertiary vein skeleton provided in an embodiment of the present invention;
FIG. 12 is a block diagram illustrating an embodiment of a vein phenotype extraction apparatus based on transmission scan images according to the present invention;
fig. 13 is a block diagram of an embodiment of an electronic device provided in the present invention.
Detailed Description
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate preferred embodiments of the invention and together with the description, serve to explain the principles of the invention and not to limit the scope of the invention.
The invention provides a transmission scanning image-based vein phenotype extraction method, a transmission scanning image-based vein phenotype extraction device, electronic equipment and a computer-readable storage medium, which are respectively described in detail below.
Fig. 1 is a schematic view of an application scenario of the vein phenotype extraction apparatus based on transmission scanning images according to the present invention. The system may include a server 100 in which the vein phenotype extraction apparatus is integrated, such as the server shown in Fig. 1.
The server 100 in the embodiment of the present invention is mainly used for:
acquiring a leaf image, labeling the leaf image to obtain a vein image of a leaf, and generating a data set according to the leaf image and the vein image of the leaf;
constructing a network model, and training the network model with the data set to obtain a fully trained network model;
and acquiring a leaf image to be labeled, and obtaining a vein image of that leaf from the fully trained network model and the leaf image to be labeled.
In this embodiment of the present invention, the server 100 may be an independent server, or a server network or server cluster composed of multiple servers. For example, the server 100 described in this embodiment includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud server composed of multiple servers, where a cloud server consists of a large number of computers or web servers based on cloud computing.
It will be appreciated that the terminal 200 used in embodiments of the present invention may be a device having both receiving and transmitting hardware, i.e., hardware capable of two-way communication over a two-way communication link. Such a device may include a cellular or other communication device with a single-line display or a multi-line display, or a cellular or other communication device without a multi-line display. The terminal 200 may specifically be a desktop computer, a laptop, a web server, a personal digital assistant (PDA), a mobile phone, a tablet computer, a wireless terminal device, a communication device, an embedded device, or the like; the type of the terminal 200 is not limited in this embodiment.
It can be understood by those skilled in the art that the application environment shown in Fig. 1 is only one application scenario of the present invention and does not limit its application scenarios; other application environments may include more or fewer terminals than shown in Fig. 1. For example, only 2 terminals are shown in Fig. 1, and it can be understood that the vein phenotype extraction apparatus based on transmission scanning images may work with one or more other terminals, which is not limited herein.
In addition, referring to fig. 1, the vein phenotype extraction apparatus based on transmission scan image may further include a memory 200 for storing data, such as leaf image data.
It should be noted that the scenario diagram of the vein phenotype extraction apparatus shown in Fig. 1 is only an example; the apparatus and scenario described in this embodiment are intended to illustrate the technical solution of the embodiment more clearly and do not limit the technical solution provided by the embodiment of the present invention.
An embodiment of the invention provides a vein phenotype extraction method based on transmission scanning images, a flow diagram of which is shown in Fig. 2. The method comprises the following steps:
step S201, obtaining a leaf image, labeling the leaf image to obtain a vein image of a leaf, and generating a data set according to the leaf image and the vein image of the leaf;
step S202, constructing a network model, and training the network model by using the data set to obtain a network model with complete training;
Step S203, acquiring a leaf image to be labeled, and obtaining a vein image of that leaf from the fully trained network model and the leaf image to be labeled.
In a specific embodiment, a leaf is scanned by transmission scanning to obtain a raw image, which is subjected to grayscale conversion and threshold binarization to obtain a leaf foreground image; a black mask is generated and combined with the foreground image to obtain the leaf image, a schematic diagram of which is shown in Fig. 3.
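The preprocessing described above can be sketched as follows. This is an illustrative minimal version: the function names, the luminance weights, and the fixed threshold value are assumptions, since the patent does not specify them.

```python
import numpy as np

def leaf_foreground(scan_rgb: np.ndarray, threshold: int = 200) -> np.ndarray:
    """Grayscale conversion + threshold binarization of a transmission scan.

    Returns a boolean foreground mask (True = leaf). The fixed threshold
    is an assumption; the patent does not give a value.
    """
    gray = (0.299 * scan_rgb[..., 0]
            + 0.587 * scan_rgb[..., 1]
            + 0.114 * scan_rgb[..., 2])
    # In a transmission scan the background is bright and the leaf darker.
    return gray < threshold

def apply_black_mask(scan_rgb: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Combine the foreground mask with the scan: background becomes black."""
    out = np.zeros_like(scan_rgb)
    out[mask] = scan_rgb[mask]
    return out
```

In practice an adaptive threshold (e.g. Otsu's method) would replace the fixed value, but the mask-merge step stays the same.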
As a preferred embodiment, labeling the leaf image to obtain a vein image of the leaf includes:
labeling the leaf image with the three colors corresponding to the R, G, and B channels to obtain a primary vein image, a secondary vein image, and a tertiary vein image of the leaf, and combining these three images with the leaf image to obtain the vein image of the leaf.
In one particular embodiment, a schematic representation of a vein image of a leaf is shown in FIG. 4.
As a preferred embodiment, labeling the leaf image with three colors corresponding to RGB to obtain a primary vein image, a secondary vein image, and a tertiary vein image of the leaf includes:
traversing all pixels of the leaf image in turn: a pixel is labeled as belonging to the primary vein image of the leaf when its R channel is 255 and the other channels are 0, to the secondary vein image when its G channel is 255 and the other channels are 0, and to the tertiary vein image when its B channel is 255 and the other channels are 0.
It should be noted that labeling the primary, secondary, and tertiary veins with the R, G, and B channels allows the veins to be labeled more completely and distinctly.
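The per-pixel channel test above maps each labeled pixel to a vein level. A minimal NumPy sketch (the class-id scheme 0 to 3 is an assumption for illustration):

```python
import numpy as np

def decode_vein_labels(label_rgb: np.ndarray) -> np.ndarray:
    """Map an RGB label image to class ids.

    Class ids (assumed): 0 = background, 1 = primary vein,
    2 = secondary vein, 3 = tertiary vein.
    """
    r, g, b = label_rgb[..., 0], label_rgb[..., 1], label_rgb[..., 2]
    classes = np.zeros(label_rgb.shape[:2], dtype=np.uint8)
    classes[(r == 255) & (g == 0) & (b == 0)] = 1  # pure red: primary vein
    classes[(g == 255) & (r == 0) & (b == 0)] = 2  # pure green: secondary vein
    classes[(b == 255) & (r == 0) & (g == 0)] = 3  # pure blue: tertiary vein
    return classes
```

Because the test requires the other two channels to be exactly 0, mixed or anti-aliased colors fall back to background, which matches the strict per-channel rule in the text.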
As a preferred embodiment, generating a data set from the leaf image and the vein image of the leaf comprises:
performing size-consistency processing on the leaf image and the vein image of the leaf to obtain a processed leaf image and a processed vein image, and generating a data set from the processed images.
In a specific embodiment, the leaf images and the vein images of the leaves are processed into 1024 × 1024 three-channel RGB images, which are used to generate the data set.
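Because the leaf image and its label image must stay pixel-aligned after resizing, a nearest-neighbour resize is a natural choice for the size-consistency step; bilinear interpolation would blend the pure-RGB class colors of the label. The interpolation method is an assumption, as the patent does not name one.

```python
import numpy as np

def resize_nearest(img: np.ndarray, size: int = 1024) -> np.ndarray:
    """Nearest-neighbour resize to size x size.

    Works for both the leaf image (H, W, 3) and the label image,
    so the pair stays aligned and label colors stay exact.
    """
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    return img[rows[:, None], cols]
```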
As a preferred embodiment, constructing a network model includes:
constructing a CTVE-Net network model comprising a feature encoder module, a context extractor module, and a feature decoder module, wherein the feature encoder module includes a dual attention module, and the context extractor module includes a dense atrous (dilated) convolution module and a spatial pyramid pooling module.
In a specific embodiment, a schematic structural diagram of the CTVE-Net network model is shown in Fig. 5. The CTVE-Net model includes a feature encoder, a context extractor, and a feature decoder; the specific steps are as follows:
First, the input image of size 1024 × 1024 × 3 is convolved by 64 kernels of size 7 × 7 with a stride of 2 pixels, then normalized and activated with the ReLU function; max pooling is applied after activation to obtain a 512 × 512 × 64 pooled result, which is also passed directly to the feature decoder through a skip connection;
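The spatial size of this stem's output follows the standard convolution arithmetic. A quick check of the 1024 to 512 reduction (padding 3 is an assumption, as in a ResNet-style stem; the patent states only kernel size and stride):

```python
def conv2d_out(size: int, kernel: int, stride: int, padding: int) -> int:
    """Spatial output size of a 2-D convolution (PyTorch convention)."""
    return (size + 2 * padding - kernel) // stride + 1

# Encoder stem: 7x7 convolution, stride 2, assumed padding 3.
assert conv2d_out(1024, kernel=7, stride=2, padding=3) == 512
# A 3x3, stride-2, padding-1 convolution halves the size again.
assert conv2d_out(512, kernel=3, stride=2, padding=1) == 256
```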
Second, the pooled result of the first step is fed into the first encoding module, which consists of three basic blocks. Each basic block applies a 2-D convolution with 64 input channels and 64 output channels, 3 × 3 kernels, stride 1 and padding 1, followed by normalization and ReLU activation. After processing by the three basic blocks, the result is reshaped to 256 × 256 × 64 with torch's linear function, and this result is also passed directly to the feature decoder through a skip connection;
Third, the result of the previous step is fed into the second encoding module, which consists of four basic blocks. In the first basic block, a 2-D convolution with 64 input channels, 128 output channels, 3 × 3 kernels, stride 2 and padding 1 is followed by normalization and ReLU activation; the activation result then passes through a convolution with 128 input and output channels, 3 × 3 kernels, stride 1 and padding 1, again followed by normalization and ReLU activation. A downsampling branch with 64 input channels, 128 output channels, 1 × 1 kernels and stride 2, followed by normalization, produces a shortcut that is added to the main branch, giving a feature map. This feature map is processed by the remaining three basic blocks, which are identical to the basic blocks of the first encoding module except that all input and output channels are 128, and the processed feature map is also passed directly to the feature decoder through a skip connection;
Fourth, the result of the previous step is fed into the third encoding module, which consists of five basic blocks. In the first basic block, a 2-D convolution with 128 input channels, 256 output channels, 3 × 3 kernels, stride 2 and padding 1 is followed by normalization and ReLU activation; the activation result then passes through a convolution with 256 input and output channels, 3 × 3 kernels, stride 1 and padding 1, again followed by normalization and ReLU activation. A downsampling branch with 128 input channels, 256 output channels, 1 × 1 kernels and stride 2, followed by normalization, produces a shortcut that is added to the main branch, giving a feature map. This feature map is processed by the remaining four basic blocks, which are identical to the basic blocks of the first encoding module except that all input and output channels are 256. The dimension of the processed feature map is changed to 256 × 64 with torch's linear function, and the result is also passed directly to the feature decoder through a skip connection;
Fifth, the result of the previous step is fed into the fourth encoding module, which consists of three basic blocks. In the first basic block, a 2-D convolution with 256 input channels, 512 output channels, 3 × 3 kernels, stride 2 and padding 1 is followed by normalization and ReLU activation; the activation result then passes through a convolution with 512 input and output channels, 3 × 3 kernels, stride 1 and padding 1, again followed by normalization and ReLU activation. A downsampling branch with 256 input channels, 512 output channels, 1 × 1 kernels and stride 2, followed by normalization, produces a shortcut that is added to the main branch, giving a feature map. This feature map is processed by the remaining two basic blocks, which are identical to the basic blocks of the first encoding module except that all input and output channels are 512, giving a 512 × 32 × 32 feature map that is also passed directly to the feature decoder through a skip connection;
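The basic blocks of the second to fifth steps follow the familiar residual pattern: a stride-2 main branch plus a 1 × 1 downsampling shortcut. The following PyTorch sketch is an illustrative reconstruction under that assumption, not the patent's exact implementation:

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Residual basic block as described for the encoding modules.

    When stride == 2 (or channels change), a 1x1 downsampling branch
    matches the shortcut to the main branch, as in steps three to five.
    """
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        self.down = None
        if stride != 1 or in_ch != out_ch:
            self.down = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch))

    def forward(self, x):
        identity = x if self.down is None else self.down(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)
```

For example, the first block of the second encoding module would be `BasicBlock(64, 128, stride=2)`, halving the spatial size while doubling the channels.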
Sixth, the result of the previous step is fed into the dual attention module, the structure of which is shown in Fig. 6. The dual attention module consists of two sub-modules, a spatial attention module and a channel attention module. Each sub-module contains three 2-D convolutions and a Softmax function: the first with 512 input channels, 64 output channels, 1 × 1 kernels and stride 1; the second likewise with 512 input channels, 64 output channels, 1 × 1 kernels and stride 1; and the third with 512 input and output channels, 1 × 1 kernels and stride 1. The feature map from the previous step is convolved, encoded, and mean-pooled to obtain a feature map whose dimension remains 512 × 32 × 32, which is then fused with the input feature map by pixel-wise addition;
Seventh, the result of the previous step is fed into the dense atrous convolution module, the structure of which is shown in Fig. 7. The module comprises four 2-D convolution operations with 512 input and output channels, 3 × 3 kernels and stride 1, where rate = 1 means no dilation is applied, and any other rate value means an atrous convolution whose pixel span equals that rate. The module outputs a new feature map of dimension 512 × 32 × 32;
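With padding equal to the dilation rate, a 3 × 3 atrous convolution preserves spatial size, which is consistent with the module's 512 × 32 × 32 output; the padding choice is an assumption, since the patent does not state it. A quick check:

```python
def dilated_out(size: int, kernel: int, rate: int, stride: int = 1) -> int:
    """Output size of a dilated (atrous) 2-D conv with padding = rate."""
    eff = kernel + (kernel - 1) * (rate - 1)  # effective kernel extent
    padding = rate                            # keeps a 3x3 conv size-preserving
    return (size + 2 * padding - eff) // stride + 1

# All four branches of the dense atrous convolution module keep 32 x 32.
for rate in (1, 2, 4):
    assert dilated_out(32, kernel=3, rate=rate) == 32
```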
Eighth, the result of the previous step is fed into the spatial pyramid pooling module, the structure of which is shown in Fig. 8. The module comprises four max-pooling operations and a concatenation operation: the input is max-pooled with kernels of size 2 × 2, 3 × 3, 5 × 5 and 6 × 6 and strides of 2, 3, 5 and 6 respectively, yielding four feature maps of different sizes, which are concatenated into a feature map of dimension 512 × 32 × 32 with torch's cat function;
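The four pooling branches produce feature maps of different sizes before concatenation; the sizes follow from standard pooling arithmetic. To concatenate back into a 512 × 32 × 32 map, each branch presumably has to be upsampled to 32 × 32 first; that is an assumption, since the patent states only the concatenation.

```python
def pool_out(size: int, kernel: int, stride: int) -> int:
    """Spatial output size of max pooling without padding."""
    return (size - kernel) // stride + 1

# Four pooling branches on a 32 x 32 feature map: (kernel, stride) pairs.
branches = [(2, 2), (3, 3), (5, 5), (6, 6)]
sizes = [pool_out(32, k, s) for k, s in branches]
# sizes == [16, 10, 6, 5]
```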
Ninth, the result of the previous step is fed into the first decoding module: a 2-D convolution with 512 input channels, 128 output channels, 1 × 1 kernels and stride 1 is followed by normalization; a 2-D transposed convolution with 128 input and output channels, 3 × 3 kernels, stride 2 and padding 1 is applied to the normalized result; the deconvolution result then passes through a 2-D convolution with 128 input channels, 256 output channels, 1 × 1 kernels and stride 1, followed by normalization, giving a feature map of dimension 256 × 64 × 64;
Tenth, the result of the previous step is fed into the second decoding module: a 2-D convolution with 256 input channels, 64 output channels, 1 × 1 kernels and stride 1 is followed by normalization; a 2-D transposed convolution with 64 input and output channels, 3 × 3 kernels, stride 2 and padding 1 is applied to the normalized result; the deconvolution result then passes through a 2-D convolution with 64 input channels, 128 output channels, 1 × 1 kernels and stride 1, followed by normalization, giving a feature map of dimension 128 × 128 × 128;
Eleventh, the result of the previous step is fed into the third decoding module: a 2-D convolution with 128 input channels, 32 output channels, 1 × 1 kernels and stride 1 is followed by normalization; a 2-D transposed convolution with 32 input and output channels, 3 × 3 kernels, stride 2 and padding 1 is applied to the normalized result; the deconvolution result then passes through a 2-D convolution with 32 input channels, 64 output channels, 1 × 1 kernels and stride 1, followed by normalization, giving a feature map of dimension 256 × 256 × 64;
Twelfth, the result of the previous step is fed into the fourth decoding module: a 2-D convolution with 64 input channels, 16 output channels, 1 × 1 kernels and stride 1 is followed by normalization; a 2-D transposed convolution with 16 input and output channels, 3 × 3 kernels, stride 2 and padding 1 is applied to the normalized result; the deconvolution result then passes through a 2-D convolution with 16 input channels, 64 output channels, 1 × 1 kernels and stride 1, followed by normalization, giving a feature map of dimension 512 × 64 × 64;
and a thirteenth step of inputting the result obtained in the previous step into a 2-dimensional deconvolution with input channel number of 64, output channel number of 32, convolution kernel size of 4 × 4, step size of 2 and filling 1 to obtain a deconvolution result, using the 2-dimensional deconvolution with input channel number of 32, output channel number of 32, convolution kernel size of 3 × 3, step size of 1 and filling 1 to obtain a convolution result, using the 2-dimensional convolution with input channel number of 32, output channel number of 4, convolution kernel size of 3 × 3, step size of 1 and filling 1 to obtain an output picture with dimensionality of 1024 × 1024 × 3.
As a preferred embodiment, the training the network model by using the data set to obtain a network model with complete training, includes:
and training the CTVE-Net network model by using the data set to obtain a trained CTVE-Net network model, and optimizing the trained CTVE-Net network model by using a preset loss function to obtain a completely trained CTVE-Net network model.
In a specific embodiment, the predetermined loss function is an RMI loss function.
As a preferred embodiment, further comprising:
and thinning the vein image of the leaf to be marked into a skeleton map by using a thinning algorithm, and obtaining vein phenotype data according to the skeleton map.
In a specific embodiment, the leaf vein image of the leaf to be labeled is refined into a skeleton map by using a refinement algorithm, the primary leaf vein skeleton map is shown as fig. 9, the secondary leaf vein skeleton map is shown as fig. 10, the tertiary leaf vein skeleton map is shown as fig. 11, leaf vein phenotype data is obtained according to the skeleton map, and part of the leaf vein phenotype data is shown in table 1 below.
TABLE 1 partial vein phenotype data
Figure BDA0003369816510000131
The embodiment of the invention provides a vein phenotype extraction device based on a transmission scanning image, which has a structural block diagram, as shown in fig. 12, the vein phenotype extraction device based on the transmission scanning image comprises a data acquisition module 1201, a network training module 1202 and a vein extraction module 1203;
the data acquisition module 1201 is configured to acquire a leaf image, label the leaf image to obtain a vein image of a leaf, and generate a data set according to the leaf image and the vein image of the leaf;
the network training module 1202 is configured to construct a network model, and train the network model by using the data set to obtain a network model with complete training;
the vein extraction module 1203 is configured to obtain a leaf image to be labeled, and obtain a vein image of a leaf to be labeled according to the completely trained network model and the leaf image to be labeled.
As shown in fig. 13, the present invention further provides an electronic device, which may be a mobile terminal, a desktop computer, a notebook, a palm computer, a server, or other computing devices, based on the above method for extracting a vein phenotype based on a transmission scan image. The electronic device comprises a processor 10, a memory 20 and a display 30.
The storage 20 may in some embodiments be an internal storage unit of the computer device, such as a hard disk or a memory of the computer device. The memory 20 may also be an external storage device of the computer device in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), etc. provided on the computer device. Further, the memory 20 may also include both an internal storage unit and an external storage device of the computer device. The memory 20 is used for storing application software installed in the computer device and various data, such as program codes installed in the computer device. The memory 20 may also be used to temporarily store data that has been output or is to be output. In one embodiment, the memory 20 stores a transmission scan image-based vein phenotype extraction program 40, and the transmission scan image-based vein phenotype extraction program 40 can be executed by the processor 10, so as to implement the transmission scan image-based vein phenotype extraction method according to the embodiments of the present invention.
The processor 10 may be, in some embodiments, a Central Processing Unit (CPU), microprocessor or other data Processing chip, for executing program codes stored in the memory 20 or Processing data, such as executing a vein phenotype extraction program based on the transmission scan image.
The display 30 may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch panel, or the like in some embodiments. The display 30 is used for displaying information at the computer device and for displaying a visual user interface. The components 10-30 of the computer device communicate with each other via a system bus.
In one embodiment, the following steps are implemented when the processor 10 executes the leaf vein phenotype extraction program 40 based on the transmission scan image in the memory 20:
acquiring a leaf image, labeling the leaf image to obtain a vein image of a leaf, and generating a data set according to the leaf image and the vein image of the leaf;
constructing a network model, and training the network model by using the data set to obtain a network model with complete training;
and acquiring a leaf image to be marked, and acquiring a vein image of the leaf to be marked according to the network model with complete training and the leaf image to be marked.
The present embodiment also provides a computer-readable storage medium on which a transmission scan image-based vein phenotype extraction program is stored, the transmission scan image-based vein phenotype extraction program implementing the following steps when executed by a processor:
acquiring a leaf image, labeling the leaf image to obtain a vein image of a leaf, and generating a data set according to the leaf image and the vein image of the leaf;
constructing a network model, and training the network model by using the data set to obtain a network model with complete training;
and acquiring a leaf image to be marked, and acquiring a vein image of the leaf to be marked according to the completely trained network model and the leaf image to be marked.
The invention discloses a transmission scanning image-based vein phenotype extraction method, a transmission scanning image-based vein phenotype extraction device, electronic equipment and a computer-readable storage medium.
According to the technical scheme, the first-stage, second-stage and third-stage veins of the leaves are labeled, so that the veins are labeled more completely and clearly, the vein image of the leaves to be labeled is obtained by using the network model which is trained completely, the labeling process is simplified, the labeling efficiency is improved, the skeleton diagram is extracted by using the thinning algorithm, so that the vein phenotype can be better extracted, and the accuracy of phenotype extraction is improved.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by hardware instructions of a computer program, which may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), rambus (Rambus) direct RAM (RDRAM), direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM), among others.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention.

Claims (8)

1. A vein phenotype extraction method based on a transmission scanning image is characterized by comprising the following steps:
acquiring a leaf image, labeling the leaf image to obtain a vein image of a leaf, and generating a data set according to the leaf image and the vein image of the leaf;
constructing a network model, and training the network model by using the data set to obtain a network model with complete training;
acquiring a leaf image to be marked, and acquiring a vein image of the leaf to be marked according to the network model with complete training and the leaf image to be marked;
labeling the leaf image to obtain a vein image of the leaf, comprising:
labeling the leaf image by using three colors corresponding to RGB to obtain a primary vein image, a secondary vein image and a tertiary vein image of the leaf, and combining the primary vein image, the secondary vein image and the tertiary vein image of the leaf with the leaf image to obtain the vein image of the leaf;
constructing a network model, comprising:
constructing a CTVE-Net network model, wherein the CTVE-Net network model comprises a feature encoder module, a context extractor module and a feature decoder module, the feature encoder module comprises a double attention module, and the context extractor module comprises an intensive void convolution module and a spatial pyramid pooling module; the feature graph of the last step of the feature encoder module is input into the double attention module to be convoluted, the convolution result is encoded and then subjected to Softmax function to carry out mean pooling to obtain a feature graph, and intermediate decoding and input feature graphs are subjected to feature fusion, namely, added and then subjected to pixel classification to obtain a segmentation graph.
2. The method as claimed in claim 1, wherein the leaf vein phenotype extraction method using RGB to label the leaf vein images to obtain a primary leaf vein image, a secondary leaf vein image and a tertiary leaf vein image of the leaf comprises:
and traversing all pixel points of the blade image in sequence, marking the pixel points as first-stage vein images of the blade when the R channel of the pixel points is 255 and other channels are 0, marking the pixel points as second-stage vein images of the blade when the G channel of the pixel points is 255 and other channels are 0, and marking the pixel points as third-stage vein images of the blade when the B channel of the pixel points is 255 and other channels are 0.
3. The method of claim 1, wherein generating a data set from the leaf image and the leaf vein image of the leaf comprises:
and carrying out size consistency processing on the leaf image and the vein image of the leaf to obtain a processed leaf image and a processed vein image of the leaf, and generating a data set by using the processed leaf image and the processed vein image of the leaf.
4. The method for extracting vein phenotype based on transmission scan image of claim 1, wherein training the network model by using the data set to obtain a well-trained network model comprises:
and training the CTVE-Net network model by using the data set to obtain a trained CTVE-Net network model, and optimizing the trained CTVE-Net network model by using a preset loss function to obtain a completely trained CTVE-Net network model.
5. The method for extracting vein phenotype based on transmission scan image according to claim 1, further comprising:
and thinning the vein image of the leaf to be marked into a skeleton map by using a thinning algorithm, and obtaining vein phenotype data according to the skeleton map.
6. A vein phenotype extraction device based on a transmission scanning image is characterized by comprising a data acquisition module, a network training module and a vein extraction module;
the data acquisition module is used for acquiring a leaf image, labeling the leaf image to obtain a vein image of the leaf, and generating a data set according to the leaf image and the vein image of the leaf;
the network training module is used for constructing a network model, and training the network model by using the data set to obtain a network model with complete training;
the vein extraction module is used for acquiring a leaf image to be marked and acquiring a vein image of the leaf to be marked according to the completely trained network model and the leaf image to be marked;
labeling the leaf image to obtain a vein image of the leaf, comprising:
labeling the leaf image by using three colors corresponding to RGB to obtain a primary vein image, a secondary vein image and a tertiary vein image of the leaf, and combining the primary vein image, the secondary vein image and the tertiary vein image of the leaf with the leaf image to obtain the vein image of the leaf;
constructing a network model, comprising:
constructing a CTVE-Net network model, wherein the CTVE-Net network model comprises a feature encoder module, a context extractor module and a feature decoder module, the feature encoder module comprises a double attention module, and the context extractor module comprises a dense void convolution module and a spatial pyramid pooling module; the feature graph of the last step of the feature encoder module is input into the double attention module to be convoluted, the convolution result is encoded and then subjected to Softmax function to carry out mean pooling to obtain a feature graph, and intermediate decoding and input feature graphs are subjected to feature fusion, namely, added and then subjected to pixel classification to obtain a segmentation graph.
7. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program which, when executed by the processor, implements the transmission scan image-based vein phenotype extraction method according to any one of claims 1 to 5.
8. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, implements a method for vein phenotype extraction based on transmission scan images according to any one of claims 1 to 5.
CN202111395372.5A 2021-11-23 2021-11-23 Vein phenotype extraction method and device based on transmission scanning image and electronic equipment Active CN114140688B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111395372.5A CN114140688B (en) 2021-11-23 2021-11-23 Vein phenotype extraction method and device based on transmission scanning image and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111395372.5A CN114140688B (en) 2021-11-23 2021-11-23 Vein phenotype extraction method and device based on transmission scanning image and electronic equipment

Publications (2)

Publication Number Publication Date
CN114140688A CN114140688A (en) 2022-03-04
CN114140688B true CN114140688B (en) 2022-12-09

Family

ID=80391443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111395372.5A Active CN114140688B (en) 2021-11-23 2021-11-23 Vein phenotype extraction method and device based on transmission scanning image and electronic equipment

Country Status (1)

Country Link
CN (1) CN114140688B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105787519A (en) * 2016-03-21 2016-07-20 浙江大学 Tree species classification method based on vein detection
CN110148146A (en) * 2019-05-24 2019-08-20 重庆大学 A kind of plant leaf blade dividing method and system using generated data
CN110570443A (en) * 2019-08-15 2019-12-13 武汉理工大学 Image linear target extraction method based on structural constraint condition generation model
CN110660070A (en) * 2019-08-12 2020-01-07 北京瀚景锦河科技有限公司 Rice vein image extraction method and device
CN111723819A (en) * 2020-06-24 2020-09-29 武汉理工大学 Grading identification method for leaf veins of plant leaves
CN111753903A (en) * 2020-06-24 2020-10-09 武汉理工大学 Soybean variety identification method based on vein topological characteristics
CN112581483A (en) * 2020-12-22 2021-03-30 清华大学 Self-learning-based plant leaf vein segmentation method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401480B (en) * 2020-04-27 2023-07-25 上海市同济医院 Novel mammary gland MRI automatic auxiliary diagnosis method based on fusion attention mechanism
CN112183640A (en) * 2020-09-29 2021-01-05 无锡信捷电气股份有限公司 Detection and classification method based on irregular object

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105787519A (en) * 2016-03-21 2016-07-20 浙江大学 Tree species classification method based on vein detection
CN110148146A (en) * 2019-05-24 2019-08-20 重庆大学 A kind of plant leaf blade dividing method and system using generated data
CN110660070A (en) * 2019-08-12 2020-01-07 北京瀚景锦河科技有限公司 Rice vein image extraction method and device
CN110570443A (en) * 2019-08-15 2019-12-13 武汉理工大学 Image linear target extraction method based on structural constraint condition generation model
CN111723819A (en) * 2020-06-24 2020-09-29 武汉理工大学 Grading identification method for leaf veins of plant leaves
CN111753903A (en) * 2020-06-24 2020-10-09 武汉理工大学 Soybean variety identification method based on vein topological characteristics
CN112581483A (en) * 2020-12-22 2021-03-30 清华大学 Self-learning-based plant leaf vein segmentation method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"基于U型结构上下文编码解码网络的皮肤病变分割研究";蒋新辉 等;《激光与光电子学进展》;20210630;第58卷(第12期);第1-8页 *
"基于叶脉分割的植物叶片表型提取研究";甘扬静;《中国优秀硕士学位论文全文数据库 基础科学辑》;20200715;第A006-375页 *

Also Published As

Publication number Publication date
CN114140688A (en) 2022-03-04

Similar Documents

Publication Publication Date Title
CN111401371B (en) Text detection and identification method and system and computer equipment
CN107516096A (en) A kind of character identifying method and device
CN110853047A (en) Intelligent image segmentation and classification method and device and computer readable storage medium
CN113011420B (en) Character recognition method, model training method, related device and electronic equipment
CN103345493B (en) Method that content of text on mobile terminal shows, Apparatus and system
CN113012265B (en) Method, apparatus, computer device and medium for generating needle-type printed character image
US20220108478A1 (en) Processing images using self-attention based neural networks
CN109255826B (en) Chinese training image generation method, device, computer equipment and storage medium
CN109522436A (en) Similar image lookup method and device
CN110399760A (en) A kind of batch two dimensional code localization method, device, electronic equipment and storage medium
CN112084342A (en) Test question generation method and device, computer equipment and storage medium
CN106557549A (en) The method and apparatus of identification destination object
CN112581477A (en) Image processing method, image matching method, device and storage medium
CN113920296B (en) Text recognition method and system based on comparative learning
CN114140688B (en) Vein phenotype extraction method and device based on transmission scanning image and electronic equipment
CN104516899B (en) Character library update method and device
CN114694150B (en) Method and system for improving generalization capability of digital image classification model
CN116681581A (en) Font generation method and device, electronic equipment and readable storage medium
CN116152575A (en) Weak supervision target positioning method, device and medium based on class activation sampling guidance
US8488183B2 (en) Moving labels in graphical output to avoid overprinting
CN104657733B (en) A kind of massive medical image data memory storage and method
CN112036501A (en) Image similarity detection method based on convolutional neural network and related equipment thereof
CN113011132B (en) Vertical text recognition method, device, computer equipment and storage medium
Zhang et al. MF-Dfnet: a deep learning method for pixel-wise classification of very high-resolution remote sensing images
CN114359905B (en) Text recognition method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant