CN116740703B - Wheat phenotype parameter change rate estimation method and device based on point cloud information - Google Patents

Wheat phenotype parameter change rate estimation method and device based on point cloud information

Info

Publication number
CN116740703B
CN116740703B (application CN202310719552.7A)
Authority
CN
China
Prior art keywords
wheat
potted wheat
potted
point cloud
period
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310719552.7A
Other languages
Chinese (zh)
Other versions
CN116740703A (en)
Inventor
杨宝华
潘明
李云龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Agricultural University AHAU
Original Assignee
Anhui Agricultural University AHAU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Agricultural University AHAU filed Critical Anhui Agricultural University AHAU
Priority to CN202310719552.7A priority Critical patent/CN116740703B/en
Publication of CN116740703A publication Critical patent/CN116740703A/en
Application granted granted Critical
Publication of CN116740703B publication Critical patent/CN116740703B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/68Food, e.g. fruit or vegetables
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/776Validation; Performance evaluation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Abstract

The invention discloses a method and a device for estimating the change rate of wheat phenotype parameters based on point cloud information. The method comprises the following steps: 1. acquire image information of potted wheat in two different periods, obtain multi-view sequence RGB images and depth image data of the potted wheat, and construct a data set; 2. construct the WheatMVS model, which comprises a pyramid feature fusion module, a block matching module and a depth map fusion module; 3. after model training is completed, load the optimal weights, reconstruct the potted wheat with the WheatMVS model, and perform point cloud preprocessing to obtain the scale information of the potted wheat and a corrected three-dimensional point cloud model; 4. extract the phenotype parameters of the potted wheat with a phenotype extraction method, and estimate the phenotype parameter change rate of the potted wheat point cloud model with a parameter change rate method. The method can accurately reconstruct the point cloud model of the potted wheat and accurately estimate its phenotype parameter change rate.

Description

Wheat phenotype parameter change rate estimation method and device based on point cloud information
Technical Field
The invention relates to the field of nondestructive testing and image processing, in particular to a method and a device for estimating the change rate of a wheat phenotype parameter based on point cloud information.
Background
Current methods for estimating the change rate of wheat phenotype parameters rely on manual measurement, two-dimensional image data, or three-dimensional point cloud data. Manual measurement is labor-intensive and inefficient. The two-dimensional image-based approach can only provide texture features of the wheat canopy surface, so it has clear limitations for extracting wheat phenotype information. In contrast, point cloud data can capture information in both the horizontal and vertical dimensions of the potted wheat simultaneously and can effectively reflect accurate phenotype parameter information. However, a point cloud is scattered data without a specific topological structure, involves a large amount of computation, and is difficult to process directly.
Disclosure of Invention
The invention provides a method and a device for estimating the change rate of wheat phenotype parameters based on point cloud information, aiming to realize estimation of the phenotype parameter change rate of potted wheat and thereby improve the efficiency and accuracy of that estimation.
In order to achieve the aim of the invention, the invention adopts the following technical scheme:
the invention relates to a wheat phenotype parameter change rate estimation method based on point cloud information, which is characterized by comprising the following steps:
Step 1, shooting multi-view wheat RGB images and depth images of potted wheat in the v-th period and the b-th period with a depth camera, and constructing: a potted wheat RGB image dataset S^v of the v-th period with resolution H×W×3; a potted wheat RGB image dataset S^b of the b-th period; a potted wheat depth image dataset S^(v,d) of the v-th period with resolution H×W×1; and a potted wheat depth image dataset S^(b,d) of the b-th period. Here S^v_i denotes the i-th potted wheat RGB image in the v-th period and S^b_i the i-th potted wheat RGB image in the b-th period; S^(v,d)_i denotes the i-th potted wheat depth image in the v-th period and S^(b,d)_i the i-th potted wheat depth image in the b-th period; I denotes the total number of images; H denotes the image height and W the image width; 3 and 1 denote the channel numbers of the RGB and depth images of the potted wheat, respectively;
the true height of the flowerpot for measuring the photographed potted wheat is h A
The potted wheat RGB image dataset S^v and the potted wheat depth image dataset S^(v,d) form a set of data pairs, which is divided into a training set and a validation set;
step 2, constructing a deep learning-based WheatMVS potted wheat three-dimensional reconstruction model, inputting a training set into the WheatMVS potted wheat three-dimensional reconstruction model, training the WheatMVS potted wheat three-dimensional reconstruction model by using a gradient descent method, and screening out a potted wheat three-dimensional reconstruction model with optimal weight by using a verification set;
Step 3, the potted wheat RGB image dataset S^v is input into the optimal-weight potted wheat three-dimensional reconstruction model for processing, obtaining the potted wheat three-dimensional point cloud;
The potted wheat point cloud is corrected by using formula (1), obtaining the corrected three-dimensional point cloud data;
In formula (1), Matrix_T denotes the feature rotation matrix calculated from the centroid point of the original point cloud;
The height information h_re of the potted wheat point cloud is extracted from the corrected three-dimensional point cloud data by a distance-maximum traversal method, thereby obtaining the scale information ρ of the potted wheat point cloud;
Step 4, according to the scale information ρ, phenotype information of the potted wheat is extracted from the corrected potted wheat point cloud data, and the change rate of the target potted wheat phenotype parameters is estimated from the extracted phenotype information.
The wheat phenotype parameter change rate estimation method based on point cloud information is also characterized in that the WheatMVS potted wheat three-dimensional reconstruction model in step 2 comprises: a pyramid feature fusion module, a block matching module and a point cloud generation module;
Step 2.1, the pyramid feature fusion module comprises: the trunk feature extraction module and the dual-channel attention module;
step 2.1.1, the trunk feature extraction module is formed by cascade connection of N-stage convolution layer structures; the convolution layer structure of each stage sequentially comprises a two-dimensional convolution layer, a batch normalization layer and an activation function layer;
When n=1, the i-th RGB image S^v_i of the v-th period of the potted wheat is input into the WheatMVS potted wheat three-dimensional reconstruction model, and feature extraction is performed by the convolution layer structure of the n-th stage of the trunk feature extraction module in the pyramid feature fusion module, obtaining the n-th wheat trunk feature extraction map with resolution (H/2^(n-1))×(W/2^(n-1));
When n=2, 3, ..., N, the convolution layer structure of the n-th stage processes the (n-1)-th wheat trunk feature extraction map to obtain the n-th wheat trunk feature extraction map; the N-th stage thus outputs the N-th wheat trunk feature extraction map, and the N maps form the potted wheat trunk feature map set;
Step 2.1.2, the dual-channel attention module sequentially inputs the trunk feature map set into the channel attention layer and the spatial attention layer for weighting, obtaining the potted wheat dual-channel attention feature map set, whose n-th element is the n-th potted wheat dual-channel attention feature map;
Step 2.2, the block matching module comprises: a depth initialization module, an adaptive propagation module and an adaptive spatial cost volume module;
Step 2.2.1, the depth initialization module sets a preset inverse depth range [d_min, d_max] and processes each pixel to obtain the n-th potted wheat initial depth image, where d_min denotes the minimum depth value and d_max the maximum depth value;
Step 2.2.2, the adaptive propagation module processes the initial depth image to obtain the n-th potted wheat propagation depth image;
Step 2.2.3, the adaptive spatial cost volume module calculates a weighted average over the pixels of the propagation depth image, obtaining the weighted n-th potted wheat depth image; the N weighted depth images of the i-th view are fused to obtain the weighted potted wheat depth map corresponding to S^v_i, and the I weighted potted wheat depth maps form the weighted potted wheat depth image set;
Step 2.3, the weighted potted wheat depth image set is input into the point cloud generation module, where the depth images are processed and fused into the corresponding potted wheat point cloud data;
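The fusion of depth images into point cloud data in step 2.3 can be illustrated by pinhole back-projection. This is a minimal sketch under assumed camera intrinsics (`fx`, `fy`, `cx`, `cy`); the patent does not specify the camera model or its parameters:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with its depth value into a 3-D
    camera-space point using a pinhole model (intrinsics are assumed)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def depth_map_to_cloud(depth_map, fx, fy, cx, cy):
    """Fuse a dense depth map into a list of 3-D points,
    skipping pixels with no valid depth (d <= 0)."""
    return [backproject(u, v, d, fx, fy, cx, cy)
            for v, row in enumerate(depth_map)
            for u, d in enumerate(row) if d > 0]

# toy 2 x 2 depth map; one pixel (depth 0) is dropped
cloud = depth_map_to_cloud([[2.0, 0.0], [1.0, 4.0]],
                           fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

Fusing the per-view clouds of all I views would then amount to concatenating the point lists after transforming each into a common world frame.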
Step 2.4, the Loss function Loss of the WheatMVS potted wheat three-dimensional reconstruction model is established by formula (2):
Step 2.5, the WheatMVS potted wheat three-dimensional reconstruction model is trained on the training set and validated on the validation set after every t training rounds; training stops when the Loss function Loss no longer decreases, the optimal weights are selected according to the validation results, and the model corresponding to the optimal weights is taken as the final potted wheat three-dimensional reconstruction model.
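The schedule in step 2.5 (validate every t rounds, stop when the loss no longer falls, keep the best weights) can be sketched as a generic early-stopping loop; `train_step` and `validate` below are hypothetical stand-ins, not the actual WheatMVS routines:

```python
def train_with_early_stopping(train_step, validate, t=10, patience=1, max_steps=1000):
    """Run training, validating every t steps; track the lowest validation
    loss and stop once it has failed to decrease `patience` times in a row.
    Returns (best_loss, best_step)."""
    best_loss, best_step, bad = float("inf"), 0, 0
    for step in range(1, max_steps + 1):
        train_step(step)
        if step % t == 0:
            loss = validate(step)
            if loss < best_loss:
                best_loss, best_step, bad = loss, step, 0
            else:
                bad += 1
                if bad >= patience:
                    break
    return best_loss, best_step

# toy run: validation loss falls until step 50, then plateaus
losses = {10: 1.0, 20: 0.6, 30: 0.4, 40: 0.3, 50: 0.25, 60: 0.25, 70: 0.25}
best_loss, best_step = train_with_early_stopping(lambda s: None, lambda s: losses[s])
```

The weights saved at `best_step` would correspond to the patent's "optimal weight" model.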
The step 4 comprises the following steps:
Step 4.1, the wheat multi-view RGB image set S^b of the b-th period is input into the optimal-weight potted wheat three-dimensional reconstruction model, obtaining the potted wheat point cloud data of the b-th period; formula (1) is then used for correction, obtaining the corrected potted wheat point cloud data;
Step 4.2, the estimated potted wheat plant height change rate μ_L is obtained by formula (3):
In formula (3), z^v_max and z^v_min denote the maximum and minimum z-axis values in the coordinate point set of the corrected v-th-period potted wheat point cloud data, z^b_max and z^b_min denote the maximum and minimum z-axis values in the coordinate point set of the corrected b-th-period potted wheat point cloud data, and T_v and T_b denote the times at which the data of the v-th and b-th periods were acquired;
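As an illustration of step 4.2, the plant height change rate can be sketched as follows. Converting point-cloud height to real height by dividing by the per-period scale factor (real = point-cloud height / ρ, with ρ = h_re/h_A) is an assumption inferred from the embodiment's numbers, and the inputs below are illustrative, not from the patent:

```python
def plant_height_rate(z_max_v, z_min_v, rho_v, z_max_b, z_min_b, rho_b, T_v, T_b):
    """Estimated plant height change rate mu_L: per-period point-cloud
    heights (z-spread) are converted to real-world units with the scale
    factors, and the difference is divided by the acquisition-time gap."""
    h_v = (z_max_v - z_min_v) / rho_v   # real height, v-th period
    h_b = (z_max_b - z_min_b) / rho_b   # real height, b-th period
    return (h_b - h_v) / (T_b - T_v)

# illustrative numbers: the wheat grows 9 cm between day 0 and day 30
rate = plant_height_rate(0.6, 0.0, 0.1, 1.2, 0.0, 0.08, 0, 30)
```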
Step 4.3, convex hulls of the potted wheat are obtained from the corrected point cloud data of the two periods; according to the convex hulls, M_v triangular patches are obtained for the v-th period and M_b triangular patches for the b-th period, each patch carrying the three-dimensional coordinate data of its three vertices; the three-dimensional coordinate data within the potted wheat point cloud convex hulls of the v-th and b-th periods are then processed according to formula (4), obtaining the estimated potted wheat convex hull area change rate μ_Area;
In formula (4), m is the index of the triangular patch currently processed, M_v is the number of triangular patches in the v-th period, M_b is the number of triangular patches in the b-th period, and the remaining symbols are the three-dimensional coordinate values of the three vertices of the m-th triangular patch in the v-th period and in the b-th period, respectively.
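The area computation behind formula (4) can be sketched by summing the cross-product areas of the hull's triangular patches in each period and dividing the difference by the time gap; the toy hull vertices below are illustrative, not from the patent:

```python
import math

def triangle_area(p1, p2, p3):
    """Area of a 3-D triangle: half the magnitude of the cross product
    of two edge vectors."""
    ux, uy, uz = (p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2])
    vx, vy, vz = (p3[0] - p1[0], p3[1] - p1[1], p3[2] - p1[2])
    cx, cy, cz = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def hull_area_rate(patches_v, patches_b, T_v, T_b):
    """Estimated convex hull area change rate mu_Area: total patch area
    per period, difference over the acquisition-time gap."""
    area_v = sum(triangle_area(*p) for p in patches_v)
    area_b = sum(triangle_area(*p) for p in patches_b)
    return (area_b - area_v) / (T_b - T_v)

# toy hulls: one patch (area 0.5) grows into two patches (area 1.0 each)
pv = [((0, 0, 0), (1, 0, 0), (0, 1, 0))]
pb = [((0, 0, 0), (2, 0, 0), (0, 1, 0)),
      ((0, 0, 0), (0, 1, 0), (0, 0, 2))]
rate = hull_area_rate(pv, pb, 0, 30)
```

In practice the patches would come from a convex hull routine applied to the corrected point clouds of the two periods.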
The invention also relates to a wheat phenotype parameter change rate estimation device based on point cloud information, characterized by comprising: a data set construction module, a three-dimensional point cloud reconstruction module, a three-dimensional point cloud processing module and a wheat phenotype parameter change rate estimation module, wherein:
the data set construction module is used for acquiring wheat multi-view RGB images and depth images of a v-th period and a b-th period of a potted plant by using a camera, constructing a data pair set and dividing the data pair set into a training set and a verification set;
the three-dimensional point cloud reconstruction module is used for constructing the deep learning-based WheatMVS potted wheat three-dimensional reconstruction model, and screening out the potted wheat three-dimensional reconstruction model with optimal weight by using the verification set when training the model by using the training set;
The three-dimensional point cloud processing module acquires the potted wheat point cloud by using the optimal-weight potted wheat three-dimensional reconstruction model, corrects it, and obtains the corrected potted wheat point cloud model data.
The wheat phenotype parameter change rate estimation module is used for extracting corrected potted wheat point cloud model data to obtain potted wheat phenotype information; and estimating the change rate of the target potted wheat phenotype parameters according to the phenotype information of the potted wheat.
Compared with existing methods for estimating the phenotype parameter change rate of potted wheat, the invention applies deep learning to the three-dimensional reconstruction task and completes the reconstruction of the detailed features of the potted wheat progressively, from coarse to fine, through a multistage cascade reconstruction network, with the following beneficial effects:
1. The invention combines neural network features with the block matching idea to form the potted wheat three-dimensional reconstruction model, and takes multiple view-angle images as input, so that the three-dimensional reconstruction result of the wheat is more accurate and the phenotype parameter information of the potted wheat can be calculated. Based on the extracted phenotype parameter information, a mathematical model is used to calculate the phenotype parameter change rate of the potted wheat, which alleviates the problem of unclear edges in partially overlapping plant regions during three-dimensional reconstruction and allows the phenotype parameter change rate of the potted wheat to be estimated accurately.
2. To address the inconsistency between the point cloud coordinate axes and the world coordinate axes caused by the camera during image acquisition, the original point cloud is preprocessed. This resolves the axis inconsistency, yields usable point cloud data, provides a data basis for extracting the potted wheat phenotype parameters, and improves the estimation precision of the phenotype parameter change rate.
3. The invention adds the dual-channel attention module after the feature extraction network of the encoder, combining spatial attention and channel attention, so that the three-dimensional reconstruction of the potted wheat focuses more on the plant detail regions in the image and suppresses non-plant regions. An accurate three-dimensional point cloud model of the potted wheat can thus be obtained and its phenotype parameters conveniently calculated, improving the effect of the phenotype parameter change rate estimation method.
4. By using the block matching structure after the feature extraction network, the matching patterns and feature representations among images from different view angles can be captured, so that WheatMVS makes better use of multi-view image information during three-dimensional reconstruction. This improves the precision and accuracy of the reconstruction result, yields an accurate three-dimensional point cloud model of the potted wheat, facilitates the calculation of its phenotype parameters, and improves the effect of the phenotype parameter change rate estimation method.
Drawings
FIG. 1 is a flow chart of a method for estimating the rate of change of a wheat phenotype parameter based on point cloud information according to the present invention;
fig. 2 is a schematic diagram of a device for estimating a change rate of a wheat phenotype parameter based on point cloud information according to the present invention.
Detailed Description
In this embodiment, a method for estimating a change rate of a wheat phenotype parameter based on point cloud information is performed according to the following steps with reference to fig. 1:
Step 1, shooting multi-view wheat RGB images and depth images of potted wheat in the v-th period and the b-th period with a depth camera, and constructing: a potted wheat RGB image dataset S^v of the v-th period with resolution H×W×3; a potted wheat RGB image dataset S^b of the b-th period; a potted wheat depth image dataset S^(v,d) of the v-th period with resolution H×W×1; and a potted wheat depth image dataset S^(b,d) of the b-th period. Here S^v_i denotes the i-th potted wheat RGB image in the v-th period and S^b_i the i-th potted wheat RGB image in the b-th period; S^(v,d)_i denotes the i-th potted wheat depth image in the v-th period and S^(b,d)_i the i-th potted wheat depth image in the b-th period; I denotes the total number of images; H denotes the image height and W the image width; 3 and 1 denote the channel numbers of the RGB and depth images of the potted wheat, respectively;
the true height of the flowerpot for measuring the photographed potted wheat is h A
The potted wheat RGB image dataset S^v and the potted wheat depth image dataset S^(v,d) form a set of data pairs, which is divided into a training set and a validation set;
In the present embodiment, 1200 multi-view wheat RGB images with resolution 980×1280×3 and 1200 multi-view wheat depth images with resolution 980×1280×1 are acquired in the v-th period, and another 1200 multi-view RGB images with resolution 980×1280×3 and 1200 multi-view depth images with resolution 980×1280×1 are acquired in the b-th period; the flowerpot height of each group of potted wheat is measured and recorded as h_A = 10 cm, and the potted wheat image dataset S is composed.
Step 2, constructing a deep learning-based WheatMVS potted wheat three-dimensional reconstruction model, inputting a training set into the WheatMVS potted wheat three-dimensional reconstruction model, training the WheatMVS potted wheat three-dimensional reconstruction model by using a gradient descent method, and screening out a potted wheat three-dimensional reconstruction model with optimal weight by using a verification set;
Step 2.1, the pyramid feature fusion module comprises: the trunk feature extraction module and the dual-channel attention module;
step 2.1.1, a trunk feature extraction module is formed by cascading a convolution layer structure of N stages; the convolution layer structure of each stage sequentially comprises a two-dimensional convolution layer, a batch normalization layer and an activation function layer;
When n=1, the i-th RGB image S^v_i of the v-th period of the potted wheat is input into the WheatMVS potted wheat three-dimensional reconstruction model, and feature extraction is performed by the convolution layer structure of the n-th stage of the trunk feature extraction module in the pyramid feature fusion module, obtaining the n-th wheat trunk feature extraction map with resolution (H/2^(n-1))×(W/2^(n-1));
When n=2, 3, ..., N, the convolution layer structure of the n-th stage processes the (n-1)-th wheat trunk feature extraction map to obtain the n-th wheat trunk feature extraction map; the N maps output by the N stages form the potted wheat trunk feature map set. In this embodiment, the trunk feature extraction module is formed by cascading 4 stages of convolution layer structures, and the resolutions of the trunk feature extraction maps of the 4 stages are 980×1280, 490×640, 245×320 and 122×160, respectively.
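The stage resolutions listed in this embodiment (980×1280 → 490×640 → 245×320 → 122×160) are consistent with each stage after the first halving the previous resolution with integer floor division; the halving rule is an assumption inferred from those numbers:

```python
def pyramid_resolutions(h, w, stages):
    """Return the (height, width) of each stage's trunk feature map,
    assuming each stage after the first halves the previous resolution
    (integer floor division, so 245 // 2 = 122)."""
    sizes = [(h, w)]
    for _ in range(stages - 1):
        h, w = h // 2, w // 2
        sizes.append((h, w))
    return sizes

print(pyramid_resolutions(980, 1280, 4))
```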
Step 2.1.2, the dual-channel attention module sequentially inputs the trunk feature map set into the channel attention layer and the spatial attention layer for weighting, obtaining the potted wheat dual-channel attention feature map set, whose n-th element is the n-th potted wheat dual-channel attention feature map;
When n=2, 3, ..., N, the n-th trunk feature map is input into the channel attention layer, where average pooling and maximum pooling are performed to obtain its average and maximum values; each is input into a multi-layer perceptron for feature transformation, the transformed average and maximum are added, and the sum is input into a Sigmoid function for normalization, obtaining the wheat channel attention map;
The spatial attention layer performs average pooling and maximum pooling on the channel-weighted feature map in the spatial dimension, concatenates the resulting average and maximum maps along the channel dimension to obtain a two-channel feature map, and processes it sequentially with a convolution layer and a Sigmoid function to obtain the n-th wheat dual-channel attention map; the N-1 wheat dual-channel attention maps thus form the potted wheat dual-channel attention feature map set.
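The channel-then-spatial weighting described above can be sketched on a toy feature map. The identity mapping in place of the multi-layer perceptron and the sum in place of the convolution layer are simplifying assumptions, not the patent's actual layers:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# toy feature map: 2 channels, each a 2 x 2 spatial grid
F = [
    [[1.0, 2.0], [3.0, 4.0]],  # channel 0
    [[0.0, 1.0], [1.0, 2.0]],  # channel 1
]

def channel_attention(feat):
    """Weight each channel by a score built from its spatial average and
    maximum (a shared perceptron is replaced here by the identity map)."""
    out = []
    for ch in feat:
        vals = [v for row in ch for v in row]
        w = sigmoid(sum(vals) / len(vals) + max(vals))
        out.append([[v * w for v in row] for row in ch])
    return out

def spatial_attention(feat):
    """Weight each pixel by a score built from its per-pixel average and
    maximum across channels (a conv layer is replaced here by a sum)."""
    h, w = len(feat[0]), len(feat[0][0])
    att = [[sigmoid(sum(ch[i][j] for ch in feat) / len(feat)
                    + max(ch[i][j] for ch in feat))
            for j in range(w)] for i in range(h)]
    return [[[ch[i][j] * att[i][j] for j in range(w)] for i in range(h)]
            for ch in feat]

out = spatial_attention(channel_attention(F))
```

Because each Sigmoid weight lies in (0, 1), every non-negative activation is attenuated; the real module learns which regions to attenuate less, focusing reconstruction on plant detail areas.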
Step 2.2, a block matching module includes: the system comprises a depth initialization module, a self-adaptive propagation module and a self-adaptive space cost body module;
Step 2.2.1, the depth initialization module sets a preset inverse depth range [d_min, d_max] and processes each pixel to obtain the n-th potted wheat initial depth image, where d_min denotes the minimum depth value and d_max the maximum depth value; in this embodiment, the preset inverse depth range [d_min, d_max] is [0, 192].
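Per-pixel initialization over the preset range [0, 192] can be sketched as follows; uniform random sampling is an assumption, since the patent does not state the sampling distribution used:

```python
import random

def init_depth_map(h, w, d_min=0.0, d_max=192.0, seed=0):
    """Initialize an h x w depth map with one hypothesis per pixel,
    drawn uniformly from the preset inverse depth range [d_min, d_max]."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [[rng.uniform(d_min, d_max) for _ in range(w)] for _ in range(h)]

depth0 = init_depth_map(4, 5)
```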
Step 2.2.2 adaptive propagation Module pairsProcessing to obtain an nth potted wheat transmission depth image
Step 2.2.3, the adaptive spatial cost volume module calculates a weighted average over the pixels of the propagation depth image, obtaining the weighted n-th potted wheat depth image; the N weighted depth images of the i-th view are fused to obtain the weighted potted wheat depth map corresponding to S^v_i, and the I weighted potted wheat depth maps form the weighted potted wheat depth image set;
Step 2.3, the weighted potted wheat depth image set is input into the point cloud generation module, where the depth images are processed and fused into the corresponding potted wheat point cloud data;
Step 2.4, establishing a Loss function Loss of the wheatMVS potted wheat three-dimensional reconstruction model by using the formula (1):
Step 2.5, the WheatMVS potted wheat three-dimensional reconstruction model is trained on the training set and validated on the validation set after every t training rounds; training stops when the Loss function Loss no longer decreases, the optimal weights are selected according to the validation results, and the model corresponding to the optimal weights is taken as the final potted wheat three-dimensional reconstruction model. In this embodiment, t is set to 10.
Step 3, the potted wheat RGB image dataset S^v is input into the optimal-weight potted wheat three-dimensional reconstruction model for processing, obtaining the potted wheat three-dimensional point cloud;
The potted wheat point cloud is corrected by using formula (2), obtaining the corrected three-dimensional point cloud data;
In formula (2), Matrix_T denotes the feature rotation matrix calculated from the centroid point of the original point cloud;
In this embodiment, Matrix_T is:
The height information h_re of the potted wheat point cloud is extracted from the corrected three-dimensional point cloud data by the distance-maximum traversal method, thereby obtaining the scale information ρ of the potted wheat point cloud. In this embodiment, the true flowerpot height h_A is 10 cm, the pixel height h_re after three-dimensional point cloud reconstruction is 0.85, and the coordinate ratio ρ is 1:0.085.
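A sketch of the scale computation from this embodiment's numbers; ρ = h_re/h_A is inferred from h_A = 10 cm, h_re = 0.85 and the stated ratio 1:0.085, and the toy point list is illustrative:

```python
def scale_factor(h_re, h_A):
    """Scale information rho relating point-cloud coordinates to real
    ones; rho = h_re / h_A is inferred from the embodiment's numbers
    (h_A = 10 cm, h_re = 0.85, rho = 0.085)."""
    return h_re / h_A

def cloud_height(points):
    """Distance-maximum traversal along z: point-cloud height as the
    spread between the largest and smallest z coordinate."""
    zs = [p[2] for p in points]
    return max(zs) - min(zs)

rho = scale_factor(0.85, 10.0)
h = cloud_height([(0.0, 0.0, 0.10), (0.1, 0.2, 0.60), (0.0, 0.1, 0.95)])
```

With ρ in hand, any point-cloud length can be converted to centimetres by dividing by ρ.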
Step 4, according to the scale information rho, the corrected potted wheat point cloud data T r v are processed to extract the phenotype information of the potted wheat, and the change rate of the target potted wheat phenotype parameters is estimated from the phenotype information.
Step 4.1, the wheat multi-view RGB image data set S b of the b-th period is input into the potted wheat three-dimensional reconstruction model with the optimal weight to obtain the potted wheat point cloud data of the b-th period, which are corrected by formula (1) to obtain the corrected potted wheat point cloud data T r b.
Step 4.2, obtaining the estimated potted wheat plant height change rate mu by using the formula (3) L
In formula (3), the maximum and minimum z-axis values in the coordinate point set corresponding to the v-th-period potted wheat point cloud data T r v and the maximum and minimum z-axis values in the coordinate point set corresponding to the b-th-period potted wheat point cloud data T r b are used; T v and T b represent the times at which the data of the v-th period and the b-th period are acquired. In this embodiment, the estimated wheat height of the v-th period is 1.23 with scale information 0.085, the estimated wheat height of the b-th period is 0.6 with scale information 0.09, and the interval between the two acquisitions is 30 days, so the potted wheat plant height change rate mu L is 0.25 cm/day.
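A sketch of the plant height change rate using the embodiment's values. Whether each period's reconstructed height is divided by its own scale factor before differencing is an assumption; this reading gives roughly 0.26 cm/day against the 0.25 cm/day the embodiment reports:

```python
def height_change_rate(z_span_v, rho_v, z_span_b, rho_b, days):
    """Rate of change of real plant height between two acquisition dates:
    convert each period's reconstructed z-extent to centimetres with that
    period's scale factor, then divide the difference by the elapsed days."""
    h_v = z_span_v / rho_v   # real height at period v, in cm
    h_b = z_span_b / rho_b   # real height at period b, in cm
    return abs(h_v - h_b) / days

# numbers from the embodiment: heights 1.23 and 0.6, scales 0.085 and 0.09, 30 days
mu_L = height_change_rate(1.23, 0.085, 0.6, 0.09, 30)
```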
Step 4.3, the convex hulls of the potted wheat are obtained from T r v and T r b respectively; from the convex hulls, M v triangular patches are obtained for the v-th period and M b triangular patches for the b-th period, each triangular patch carrying the three-dimensional coordinate data of its three vertices; the three-dimensional coordinate data of the three vertices of each triangular patch in the potted wheat point cloud convex hulls of the v-th period and the b-th period are then processed by formula (4) to obtain the estimated potted wheat convex hull area change rate mu Area.
In formula (4), m is the sequence number of the triangular patch currently processed, M v is the number of triangular patches in the v-th period, M b is the number of triangular patches in the b-th period, and the three-dimensional coordinate values of the three vertices of the m-th triangular patch in the v-th period and in the b-th period are used. In this embodiment, the wheat point cloud convex hull area of the v-th period is 1.105, the wheat point cloud convex hull area of the b-th period is 0.425, and the interval between the two acquisitions is 30 days, so the potted wheat convex hull area change rate mu Area is 0.26 cm²/day.
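Formula (4) sums the areas of the convex hull's triangular patches; each patch's area is half the norm of the cross product of two edge vectors. A sketch follows; the scale conversion that would turn the raw reconstructed areas (1.105 and 0.425) into the reported 0.26 cm²/day is not spelled out in the text, so the rate below is left in reconstructed units:

```python
import numpy as np

def hull_area(triangles):
    """Total area of a set of triangular patches: for each triangle,
    area = 0.5 * |(v1 - v0) x (v2 - v0)|."""
    tri = np.asarray(triangles, dtype=float)  # shape (M, 3, 3): patches x vertices x xyz
    e1 = tri[:, 1] - tri[:, 0]
    e2 = tri[:, 2] - tri[:, 0]
    return 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1).sum()

def hull_area_change_rate(area_v, area_b, days):
    return abs(area_v - area_b) / days

# a unit right triangle in the xy-plane has area 0.5
tris = [[[0, 0, 0], [1, 0, 0], [0, 1, 0]]]
a = hull_area(tris)

# embodiment's convex hull areas over a 30-day interval, in reconstructed units
rate = hull_area_change_rate(1.105, 0.425, 30)
```

In practice the triangular patches would come from a convex hull routine (e.g. a Quickhull implementation) applied to each period's corrected point cloud.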
In this embodiment, as shown in fig. 2, a device for estimating the change rate of wheat phenotype parameters based on point cloud information comprises: a data set construction module, a three-dimensional point cloud reconstruction module, a three-dimensional point cloud processing module and a wheat phenotype parameter change rate estimation module, wherein,
the data set construction module is used for acquiring wheat multi-view RGB images and depth images of the v-th period and the b-th period of the potted plant with a depth camera, constructing a data pair set and dividing it into a training set and a verification set;
the three-dimensional point cloud reconstruction module is used for constructing the deep-learning-based WheatMVS potted wheat three-dimensional reconstruction model, and for screening out the potted wheat three-dimensional reconstruction model with the optimal weight by using the verification set while training the model on the training set;
the three-dimensional point cloud processing module is used for obtaining the potted wheat point cloud with the potted wheat three-dimensional reconstruction model with the optimal weight, and for correcting the potted wheat point cloud to obtain the corrected potted wheat point cloud model data.
The wheat phenotype parameter change rate estimation module is used for extracting corrected potted wheat point cloud model data to obtain potted wheat phenotype information; and estimating the change rate of the target potted wheat phenotype parameters according to the phenotype information of the potted wheat.

Claims (4)

1. A method for estimating the change rate of wheat phenotype parameters based on point cloud information, characterized by comprising the following steps:
step 1, shooting wheat multi-view RGB images and depth images of the v-th period and the b-th period of a potted plant with a depth camera, and constructing a potted wheat RGB image data set S v of the v-th period with resolution H×W×3, a potted wheat RGB image data set S b of the b-th period, a potted wheat depth image data set S v,d of the v-th period with resolution H×W×1, and a potted wheat depth image data set S b,d of the b-th period; wherein the i-th element of S v is the i-th potted wheat RGB image of the v-th period, the i-th element of S b is the i-th potted wheat RGB image of the b-th period, the i-th element of S v,d is the i-th potted wheat depth image of the v-th period, the i-th element of S b,d is the i-th potted wheat depth image of the b-th period, and I represents the total number of images; H represents the height of the image, W represents the width of the image, and 3 and 1 respectively represent the channel numbers of the potted wheat RGB images and depth images;
the true height h A of the flowerpot of the photographed potted wheat is measured;
the potted wheat RGB image data set S v and the potted wheat depth image data set S v,d form a data pair set, which is divided into a training set and a verification set;
step 2, constructing a deep learning-based WheatMVS potted wheat three-dimensional reconstruction model, inputting a training set into the WheatMVS potted wheat three-dimensional reconstruction model, training the WheatMVS potted wheat three-dimensional reconstruction model by using a gradient descent method, and screening out a potted wheat three-dimensional reconstruction model with optimal weight by using a verification set;
step 3, inputting the potted wheat RGB image data set S v into the potted wheat three-dimensional reconstruction model with the optimal weight for processing, obtaining the potted wheat three-dimensional point cloud;
the potted wheat point cloud is corrected by using formula (1), obtaining the corrected three-dimensional point cloud data T r v of the potted wheat in the v-th period;
in formula (1), Matrix_T represents the feature rotation matrix calculated from the centroid of the original point cloud;
the corrected three-dimensional point cloud data of the potted wheat are processed by the distance-maximum traversal method to extract the height information h re of the potted wheat point cloud, thereby obtaining the scale information rho;
step 4, according to the scale information rho, the corrected potted wheat point cloud data T r v are processed to extract the phenotype information of the potted wheat, and the change rate of the target potted wheat phenotype parameters is estimated from the phenotype information.
2. The method for estimating the rate of change of the phenotypic parameters of wheat based on the point cloud information according to claim 1, wherein the WheatMVS potted wheat three-dimensional reconstruction model in the step 2 comprises: the system comprises a pyramid feature fusion module, a block matching module and a point cloud generation module;
step 2.1, the pyramid feature fusion module comprises: a trunk feature extraction module and a dual-channel attention module;
step 2.1.1, the trunk feature extraction module is formed by cascade connection of N-stage convolution layer structures; the convolution layer structure of each stage sequentially comprises a two-dimensional convolution layer, a batch normalization layer and an activation function layer;
when n=1, the i-th potted wheat RGB image of the v-th period is input into the WheatMVS potted wheat three-dimensional reconstruction model, and features are extracted by the convolution layer structure of the n-th stage of the trunk feature extraction module in the pyramid feature fusion module, obtaining the n-th wheat trunk feature extraction map;
when n=2,3,...,N, the convolution layer structure of the n-th stage processes the (n-1)-th wheat trunk feature extraction map to obtain the n-th wheat trunk feature extraction map; the convolution layer structures of the N stages thus output N wheat trunk feature extraction maps, which form the potted wheat trunk feature map set;
Step 2.1.2, the dual-channel attention module is used for collecting main feature graphsSequentially inputting into the channel attention layer and the space attention layer for weighting treatment to obtain a potted wheat double-channel attention characteristic diagram setWherein (1)>Representing an nth potted wheat double-channel attention characteristic diagram;
step 2.2, the block matching module includes: the system comprises a depth initialization module, a self-adaptive propagation module and a self-adaptive space cost body module;
step 2.2.1, the depth initialization module sets a preset inverse depth range [d min, d max] and processes each pixel of the n-th potted wheat dual-channel attention feature map to obtain the n-th potted wheat initial depth image, wherein d min represents the minimum value of the depth value and d max represents the maximum value of the depth value;
step 2.2.2, the adaptive propagation module processes the n-th potted wheat initial depth image to obtain the n-th potted wheat propagation depth image;
step 2.2.3, the adaptive space cost volume module calculates the weighted average of the pixels in the n-th potted wheat propagation depth image, obtaining the weighted n-th potted wheat depth image; the N weighted potted wheat depth images are fused to obtain the weighted potted wheat depth map corresponding to the i-th input image, and the I weighted potted wheat depth maps form the weighted potted wheat depth image set;
step 2.3, the weighted potted wheat depth image set is input to the point cloud generation module, which processes the depth images in the set and fuses them into the corresponding potted wheat point cloud data;
step 2.4, establishing the Loss function Loss of the WheatMVS potted wheat three-dimensional reconstruction model by using formula (2):
step 2.5, the WheatMVS potted wheat three-dimensional reconstruction model is trained on the training set and verified on the verification set after every t training rounds; training stops when the Loss function Loss no longer decreases, the optimal weight is selected according to the verification-set results, and the WheatMVS potted wheat three-dimensional reconstruction model corresponding to the optimal weight is taken as the final potted wheat three-dimensional reconstruction model.
3. The method for estimating the rate of change of a wheat phenotype parameter based on the point cloud information according to claim 1, wherein the step 4 comprises:
step 4.1, inputting the wheat multi-view RGB image data set S b of the b-th period into the potted wheat three-dimensional reconstruction model with the optimal weight to obtain the potted wheat point cloud data of the b-th period, which are corrected by formula (1) to obtain the corrected potted wheat point cloud data T r b;
Step 4.2, obtaining the estimated potted wheat plant height change rate mu by using the formula (3) L
In the formula (4), the amino acid sequence of the compound,potted wheat point cloud data T representing the v-th period r v Maximum and minimum values of z-axis in the corresponding coordinate point set, +.>Representation ofPotted wheat point cloud data T in the b-th period r b Maximum and minimum values of z-axis in corresponding coordinate point set, T v 、T b A time for acquiring data in the v period and a time for acquiring data in the b period are represented;
step 4.3, obtaining the convex hulls of the potted wheat from T r v and T r b respectively, and correspondingly obtaining from them M v triangular patches for the v-th period and M b triangular patches for the b-th period, each triangular patch carrying the three-dimensional coordinate data of its three vertices; the three-dimensional coordinate data in the potted wheat point cloud convex hulls of the v-th period and the b-th period are then processed according to formula (4) to obtain the estimated potted wheat convex hull area change rate mu Area;
in formula (4), m is the sequence number of the triangular patch currently processed, M v is the number of triangular patches in the v-th period, M b is the number of triangular patches in the b-th period, and the three-dimensional coordinate values of the three vertices of the m-th triangular patch in the v-th period and in the b-th period are used.
4. A device for estimating the change rate of wheat phenotype parameters based on point cloud information, characterized by comprising: a data set construction module, a model construction module and a wheat phenotype extraction module, wherein,
the data set construction module is used for acquiring wheat multi-view RGB images and depth images of the v-th period and the b-th period of the potted plant with the depth camera, and for constructing a potted wheat RGB image data set S v of the v-th period with resolution H×W×3, a potted wheat RGB image data set S b of the b-th period, a potted wheat depth image data set S v,d of the v-th period with resolution H×W×1, and a potted wheat depth image data set S b,d of the b-th period; wherein the i-th element of S v is the i-th potted wheat RGB image of the v-th period, the i-th element of S b is the i-th potted wheat RGB image of the b-th period, the i-th element of S v,d is the i-th potted wheat depth image of the v-th period, the i-th element of S b,d is the i-th potted wheat depth image of the b-th period, and I represents the total number of images; H represents the height of the image, W represents the width of the image, and 3 and 1 respectively represent the channel numbers of the potted wheat RGB images and depth images;
the true height of the flowerpot for measuring the photographed potted wheat is h A
after the potted wheat RGB image data set S v and the potted wheat depth image data set S v,d form a data pair set, the data pair set is divided into a training set and a verification set;
the model construction module is used for constructing a WheatMVS potted wheat three-dimensional reconstruction model based on deep learning, inputting a training set into the WheatMVS potted wheat three-dimensional reconstruction model, and screening out a potted wheat three-dimensional reconstruction model with optimal weight by using a verification set when training the WheatMVS potted wheat three-dimensional reconstruction model by using a gradient descent method;
the potted wheat RGB image data set S v is input into the potted wheat three-dimensional reconstruction model with the optimal weight for processing, obtaining the potted wheat three-dimensional point cloud;
the potted wheat point cloud is corrected by using formula (1), obtaining the corrected potted wheat point cloud data T r v;
in formula (1), Matrix_T represents the feature rotation matrix calculated from the centroid of the original point cloud;
the wheat phenotype extraction module extracts the height information h re of the potted wheat point cloud from the corrected potted wheat point cloud data T r v by using the distance-maximum traversal method, thereby obtaining the scale information rho; then, according to the scale information rho, the corrected potted wheat point cloud data T r v are processed to extract the potted wheat phenotype information, and the change rate of the target potted wheat phenotype parameters is estimated from the phenotype information.
CN202310719552.7A 2023-06-16 2023-06-16 Wheat phenotype parameter change rate estimation method and device based on point cloud information Active CN116740703B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310719552.7A CN116740703B (en) 2023-06-16 2023-06-16 Wheat phenotype parameter change rate estimation method and device based on point cloud information

Publications (2)

Publication Number Publication Date
CN116740703A CN116740703A (en) 2023-09-12
CN116740703B true CN116740703B (en) 2023-11-24

Family

ID=87905846

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113112504A (en) * 2021-04-08 2021-07-13 浙江大学 Plant point cloud data segmentation method and system
CN115049945A (en) * 2022-06-10 2022-09-13 安徽农业大学 Method and device for extracting lodging area of wheat based on unmanned aerial vehicle image

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
AU2020355226A1 (en) * 2019-09-25 2022-04-07 Blue River Technology Inc. Treating plants using feature values and ground planes extracted from a single image

Non-Patent Citations (1)

Title
Extraction of sugar beet root phenotype parameters and root-type discrimination based on three-dimensional point clouds; 柴宏红, 邵科, 于超, 邵金旺, 王瑞利, 随洋, 白凯, 刘云玲, 马韫韬; Transactions of the Chinese Society of Agricultural Engineering (No. 10); full text *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant