CN112200854B - Leaf vegetable three-dimensional phenotype measuring method based on video image - Google Patents
Leaf vegetable three-dimensional phenotype measuring method based on video image
- Publication number: CN112200854B (application CN202011021158.9A)
- Authority: CN (China)
- Prior art keywords: point cloud; dimensional; cloud model; video; model
- Legal status: Active (assumed, not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/36—Videogrammetry, i.e. electronic processing of video signals from a single source or from different sources to give parallax or range information
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N2021/8466—Investigation of vegetal material, e.g. leaves, plants, fruits
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Abstract
The invention discloses a video-image-based three-dimensional phenotype measurement method for leafy vegetables, comprising the following steps: acquiring video image data of leafy vegetables with a data acquisition device; processing the video image data to remove blurred frames, and obtaining key frames containing leafy-vegetable regions using a vegetation index together with scale-invariant feature transform matching; reconstructing the key-frame images into a three-dimensional point cloud model and post-processing it in three-dimensional space to obtain a post-processed point cloud model; and extracting a point cloud skeleton from the post-processed model, performing point cloud segmentation, and computing the phenotypic parameters of the leafy vegetables to obtain the three-dimensional phenotype measurement result. The invention provides a convenient, low-cost means of three-dimensional phenotype measurement: no cumbersome image-shooting process is needed, phenotypic parameters are obtained by simply recording a video of the green vegetables, and the method can further be applied to the analysis of other leafy vegetables.
Description
Technical Field
The invention relates to the intersection of computer vision and agricultural plant phenotyping, and in particular to a video-image-based three-dimensional phenotype measurement method for leafy vegetables.
Background
Leafy vegetables are an important source of many nutrients required in people's daily lives. Quantifying vegetable phenotypes and estimating yield are prerequisites for improving variety selection and planting practices. Traditionally this is done by manual measurement, which is time-consuming and tedious, alters the growth state of the crop, and is destructive. There is therefore a need for efficient and convenient in-situ vegetable phenotyping methods to provide data support for breeding research and leafy-vegetable yield monitoring, and thereby improve and increase vegetable yield.
Over the last decade, researchers have widely used two-dimensional imaging to acquire the phenotypes of leafy vegetables. Some researchers collect rapeseed data with CCD cameras and have realized automatic measurement methods and devices for two-dimensional rapeseed phenotype parameters. More recently, researchers have combined machine vision with deep learning, using deep neural networks for yield estimation and similar tasks. Others have explored mobile phones in agricultural production, developing a tree-diameter calculation method based on phone-captured images. However, two-dimensional images lack depth information, so occlusion, which is very common under field conditions in particular, is difficult to resolve, making it hard to obtain accurate structural information about the study object.
In recent years, plant phenotyping with three-dimensional data acquisition techniques has become increasingly common. To collect three-dimensional plant data, researchers in different fields use a variety of three-dimensional sensing technologies, which fall broadly into two categories: active and passive. Active sensors emit their own light source; examples include hand-held laser scanning, structured light, terrestrial laser scanning, and time-of-flight. LiDAR is one of the most widely used active sensors for phenotypic analysis. Microsoft's Kinect is another common active sensor that collects RGB-D data at relatively low cost; however, its low resolution makes it difficult to collect reliable data in outdoor scenes.
Three-dimensional reconstruction based on passive methods has attracted attention in recent years. Images of the region of interest are captured from different angles with a camera, and the depth of the target is then computed by triangulation. Structure from Motion (SFM) reconstruction, being simple to use and robust, has been applied to measuring plant structure, estimating yield, and predicting quality. However, existing SFM-based work requires 30 to 50 images per plant; the shooting process is complex and cumbersome, and adjusting camera shutter and ISO parameters requires some experience. Meanwhile, the field environment is challenging: plants occlude one another, wind makes them shake, and illumination conditions vary.
Disclosure of Invention
The invention mainly aims to overcome the defects and shortcomings of the prior art and provides a three-dimensional phenotype measuring method for leaf vegetables based on video images.
The aim of the invention is achieved by the following technical scheme:
a three-dimensional phenotype measuring method of leaf vegetables based on video images comprises the following steps:
acquiring video image data of leaf vegetables through a data acquisition device;
processing the video image data to remove the blurred image frames, and obtaining key frames containing leaf vegetable areas in the video image data by using a vegetation index and scale invariant feature transformation matching method;
reconstructing the key frame image into a three-dimensional point cloud model, and performing post-processing on a three-dimensional space through the three-dimensional point cloud model to obtain a post-processing point cloud model;
and extracting a point cloud framework from the post-processing point cloud model, performing point cloud segmentation, and further calculating the phenotypic parameters of the leaf vegetables to obtain a three-dimensional phenotypic measurement result of the leaf vegetables.
Further, the obtaining the video image data of the leafy vegetables through the data obtaining device specifically includes: and shooting videos around the leafy vegetables at different angles through the data acquisition device to acquire video image data of the leafy vegetables.
Further, the blurred-image removal processing is performed on the video image data, and key frames containing leafy-vegetable regions in the video image data are obtained using a vegetation index and scale-invariant feature transform (SIFT) matching method, specifically:
s201, decoding video image data into single-frame images to be stored, sequentially judging whether each stored frame image is blurred or not, and marking the blurred image sequence number;
S202, calculating the ExGR vegetation index of each single-frame image not marked as blurred, obtaining a leafy-vegetable saliency map;
s203, sequentially grouping and calculating the leaf vegetable saliency maps to obtain key frames containing leaf vegetable areas in the video image data.
Further, the step S203 specifically includes:
Let I denote one frame of the video; a video with n frames is written S = {I_i | i = 1, 2, 3, ..., n}; given the expected number of key frames m, the video S having n frames is divided into m sets at equal frame intervals starting from the first frame;
SIFT features are computed for every leafy-vegetable saliency map in all sets, with the 1st saliency map I'_1 of the 1st set as the 1st key frame; starting from the 2nd set, feature-point matching is performed between all saliency frames in the current set and the previous key frame, and the frame with the most matching points in the current set is taken as the new key frame; the current step is repeated until m sets have been calculated, yielding m key frames, the calculation formula being:
K_i = argmax_{I'_l ∈ {I'}_i} N(I'_l, K_{i-1}), i = 2, ..., m,
wherein {I'}_i denotes a set of saliency maps, I'_l denotes each frame image within the i-th set {I'}_i, K_i denotes the key frame obtained for the i-th saliency-map set, and N(I'_l, K_{i-1}) denotes the number of feature points obtained by SIFT feature matching between the two frames I'_l and K_{i-1};
and taking frames corresponding to the m key frames in the original video as an image sequence of the original video, and reconstructing a three-dimensional point cloud model of the leaf vegetables.
Further, the post-processing in three-dimensional space comprises filtering, plane fitting, and simplification.
Further, the key-frame images are reconstructed into a three-dimensional point cloud model, and the model is post-processed in three-dimensional space to obtain a post-processed point cloud model, specifically:
s301, firstly reconstructing a key frame image by using an SFM algorithm to obtain a sparse three-dimensional point cloud model, and further reconstructing the key frame image by using an MVS algorithm to obtain a dense three-dimensional point cloud model;
s302, carrying out filtering processing on a three-dimensional space on the three-dimensional point cloud model to obtain a filtered three-dimensional point cloud model;
s303, performing plane fitting on the filtered three-dimensional point cloud model, taking the detected plane as the ground, defining the direction vertical to the ground as the Z-axis direction, and simultaneously deleting all point clouds below the plane to obtain a plane fitting three-dimensional point cloud model;
s304, simplifying the planar fitting point cloud model, and reducing the number of point clouds of each sample, so that the calculated amount of the subsequent step is reduced.
Further, the method extracts a point cloud skeleton from the post-processed point cloud model and performs point cloud segmentation, so as to calculate phenotypic parameters of the leaf vegetables, specifically:
s401, performing size conversion on the point cloud model to obtain a size conversion point cloud model, and further obtaining the real size of the leaf vegetable point cloud model;
s402, skeletonizing the size conversion cloud model to obtain a point cloud skeleton model;
s403, performing point cloud segmentation on the post-processing point cloud model on the basis of the point cloud skeleton model to obtain a point cloud segmentation model;
s404, calculating the phenotypic parameters of the leafy vegetables by using the point cloud segmentation model and the point cloud skeleton model to obtain three-dimensional phenotypic measurement of the leafy vegetables.
Further, the leaf vegetable phenotype parameters include: plant height, number of leaves, length of leaves, and angle of leaves.
Compared with the prior art, the invention has the following advantages and beneficial effects:
according to the method, the phenotype parameters are finally obtained by directly recording the video of the green leaf vegetables, and compared with other methods based on photogrammetry, the method does not need a complicated image shooting process; in the key frame extraction step, a high-quality point cloud model can be obtained through reconstruction by a fuzzy image removal and feature point matching-based method; the method is convenient and low in cost, and can be further applied to analysis of other leafy vegetables, so that the method has important practical significance for improvement of leafy vegetables and improvement of the yield of leafy vegetables.
Drawings
FIG. 1 is a flow chart of a three-dimensional phenotyping method of leafy vegetables based on video images according to the invention;
fig. 2 is a schematic diagram of video data acquisition through different angles according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but embodiments of the present invention are not limited thereto.
Examples:
a three-dimensional phenotype measuring method of leaf vegetables based on video images is shown in fig. 1, and comprises the following steps:
acquiring video image data of leaf vegetables through a data acquisition device;
deblurring the video image data, and acquiring a key frame containing leaf vegetable areas in the video image data by using a vegetation index and scale invariant feature transformation matching method;
reconstructing the key frame image into a three-dimensional point cloud model, and performing post-processing on a three-dimensional space through the three-dimensional point cloud model to obtain a post-processing point cloud model;
and extracting a point cloud framework from the post-processing point cloud model, performing point cloud segmentation, and further calculating the phenotypic parameters of the leaf vegetables to obtain a three-dimensional phenotypic measurement result of the leaf vegetables.
The method comprises the following steps:
firstly, shooting videos around leaf vegetables at different angles through a mobile phone; the mobile phone with 2000 ten thousand pixels of camera parameters is used, the shooting resolution is set to 2880x2160, the video frame rate is 30fps, the exposure parameters are automatic, the focusing mode is to perform manual focusing before each shooting, and the shooting video is shot by shooting around leaf vegetables in a stable mode as much as possible. In order to obtain as much information as possible, each plant was photographed at a distance of 30 to 50cm from the center of the plant, rotated 3 times at an included angle of 0 degrees, 45 degrees, 75 degrees, respectively, with respect to the ground plane. The photographing mode is shown in fig. 2. After the video is captured, the individual leafy vegetables are manually measured for their phenotypic parameters.
Step two: judging whether each frame of the obtained video is blurred and removing the blurred frames, then obtaining key frames containing the crop region from the remaining frames using a vegetation index and Scale Invariant Feature Transform (SIFT) matching algorithm;
for video shot around leafy vegetables by using a mobile phone, blurring of video frames occurs due to shaking of a person during shooting. The known blurred picture boundary information is less, while the normally clear picture boundary information is more. Therefore, the variance value of the second derivative of the picture can be used as a basis for judging whether the picture is blurred or not. The Laplacian operator has second order conductivity and can be used to calculate the region of the boundary in the picture. The degree of blurring of the image can be expressed as:
D(f) = Σ_y Σ_x |G(x, y)|,
where G (x, y) is the convolution of the Laplacian operator at pixel point (x, y) in image f.
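As an illustration only, the blur check above can be sketched in plain NumPy with a 3x3 Laplacian kernel (the function names `laplacian_response` and `blur_score` are ours, not from the patent; in practice one would typically use OpenCV's `cv2.Laplacian` and take the variance of its output):

```python
import numpy as np

# Standard 3x3 Laplacian kernel (second-derivative operator).
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=np.float64)

def laplacian_response(gray: np.ndarray) -> np.ndarray:
    """Convolve a 2-D greyscale array with the Laplacian (valid region only)."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return out

def blur_score(gray: np.ndarray) -> float:
    """Variance of the Laplacian response: low values indicate a blurred frame."""
    return float(laplacian_response(gray).var())
```

A frame is marked as blurred when its score falls below a chosen threshold; a sharp, edge-rich frame scores much higher than a smooth one.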
A vegetation index reflects differences between vegetation and the soil background in, for example, visible-light and near-infrared reflectance; it is used here to enhance the contrast between the crop and surrounding ground objects and to separate green crops from the soil background effectively. For an input RGB colour image, experiments show that the excess green minus excess red (ExGR) index performs well on green leafy-vegetable images; ExGR is calculated as follows:
ExG = 2g - r - b,
ExR = 1.4r - g,
ExGR = ExG - ExR,
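A minimal NumPy sketch of the ExGR saliency computation follows (assuming, as is standard for ExG/ExR, that r, g, b are the chromatic coordinates R/(R+G+B), G/(R+G+B), B/(R+G+B); the function name is our own):

```python
import numpy as np

def exgr_saliency(rgb: np.ndarray) -> np.ndarray:
    """ExGR = ExG - ExR on normalised chromatic coordinates.

    rgb: H x W x 3 array (any positive scale, e.g. 0-255).
    Returns a float saliency map; larger values indicate greener pixels.
    """
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=2)
    total[total == 0] = 1.0          # avoid division by zero on black pixels
    r, g, b = (rgb[..., i] / total for i in range(3))
    exg = 2.0 * g - r - b            # excess green
    exr = 1.4 * r - g                # excess red
    return exg - exr                 # ExGR
```

Thresholding this map at 0 gives the greyscale leafy-vegetable saliency regions used in the following steps.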
the Scale Invariant Feature Transform (SIFT) algorithm has excellent performance in complex changing environments such as image translation, scaling, rotation and the like, and is commonly used in algorithms such as image stitching, point cloud model reconstruction and the like. The complete key frame extraction method is therefore as follows:
1) Let I denote one frame of the video; a video with n frames is written S = {I_i | i = 1, 2, 3, ..., n}. Given the expected number of key frames m, the video S having n frames is divided into m sets at equal frame intervals starting from the first frame. Taking the 1st frame of each set as the starting frame, each frame is checked for blur in turn; a blurred frame is discarded directly and the next frame is checked, until every frame in the set has been examined. The frames remaining in each set are the sharp frames.
2) A greyscale leafy-vegetable saliency map I' is obtained by applying the ExGR method to the video frames in each set.
3) SIFT features are computed for every leafy-vegetable saliency map in all sets. The 1st saliency map I'_1 of the 1st set is taken as the 1st key frame; starting from the 2nd set, feature-point matching is performed between all saliency frames in the current set and the previous key frame, and the frame with the most matching points in the current set is taken as the new key frame. This step is repeated until all m sets have been processed, yielding m key frames, with the calculation formula:
K_i = argmax_{I'_l ∈ {I'}_i} N(I'_l, K_{i-1}), i = 2, ..., m,
where N(I'_l, K_{i-1}) is the number of SIFT feature matches between frames I'_l and K_{i-1}.
4) The frames corresponding to the m key frames in the original video are taken as the image sequence of the original video, from which the 3D point cloud model of the leafy vegetables is reconstructed.
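The grouping-and-matching selection in steps 1) to 4) can be sketched with the feature-matching backend injected as a callable, so the selection logic stays independent of the feature library (in the real pipeline `match_count` would count SIFT matches between two saliency maps, e.g. via OpenCV's `SIFT_create` and a brute-force matcher; all names here are our own):

```python
from typing import Callable, List, Sequence

def select_keyframes(frames: Sequence, m: int,
                     match_count: Callable) -> List[int]:
    """Greedy key-frame selection over m equal-interval groups.

    frames: saliency maps in video order (blurred frames already removed).
    match_count(a, b): number of feature matches between two maps.
    Returns the indices of the m chosen key frames.
    """
    n = len(frames)
    bounds = [round(i * n / m) for i in range(m + 1)]
    groups = [range(bounds[i], bounds[i + 1]) for i in range(m)]
    keys = [groups[0][0]]          # 1st map of the 1st group is key frame 1
    for grp in groups[1:]:
        prev = frames[keys[-1]]
        # Frame with the most matches against the previous key frame wins.
        best = max(grp, key=lambda j: match_count(frames[j], prev))
        keys.append(best)
    return keys
```

With a real matcher plugged in, the returned indices are mapped back to the original video frames to form the reconstruction image sequence.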
Step three: reconstructing the key-frame images into a three-dimensional point cloud model via the SFM algorithm, and performing post-processing such as filtering in three-dimensional space;
and (3) obtaining a sparse point cloud by using an SFM algorithm on the key frame image obtained in the step two, and then reconstructing the sparse point cloud into a dense point cloud by using an MVS algorithm. The generated point cloud model contains some noise with discrete distribution under the influence of external environment, so that the three-dimensional filtering processing of the point cloud model is needed. And adopting an outlier filter based on the radius to perform noise point filtering, so as to obtain a smoother point cloud model.
The reconstructed point cloud contains not only the plant but also the soil surface. Plane detection is first performed on the point cloud model; the detected plane is the ground plane. A random sample consensus (RANSAC) algorithm is used to fit the plane. Once the ground is detected, the direction perpendicular to it is defined as the Z axis, and all points below the plane are deleted, keeping the remainder for the subsequent plant-phenotype calculation. A reconstructed high-precision point cloud often contains hundreds of thousands or even millions of points, an enormous burden for subsequent computation. To improve efficiency, the point cloud is simplified, reducing each plant to roughly 10,000 to 30,000 points, which effectively cuts the computation of the subsequent steps while barely affecting model accuracy.
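The RANSAC ground-plane fit described above can be sketched as follows (a generic textbook RANSAC under our own parameter choices, not the patent's exact implementation):

```python
import numpy as np

def ransac_plane(points: np.ndarray, iters: int = 200,
                 threshold: float = 0.02, seed: int = 0):
    """Fit a plane n.x + d = 0 by RANSAC; returns (n, d, inlier_mask)."""
    rng = np.random.default_rng(seed)
    best_mask, best_n, best_d = None, None, None
    for _ in range(iters):
        i, j, k = rng.choice(len(points), size=3, replace=False)
        # Normal of the plane through the three sampled points.
        n = np.cross(points[j] - points[i], points[k] - points[i])
        norm = np.linalg.norm(n)
        if norm < 1e-12:             # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n.dot(points[i])
        mask = np.abs(points @ n + d) < threshold
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_n, best_d = mask, n, d
    return best_n, best_d, best_mask
```

Once the ground plane is found, its normal defines the Z axis, and points on the negative side of the plane (below the ground) are dropped.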
And fourthly, extracting a point cloud skeleton from the post-processed point cloud model, performing point cloud segmentation, and calculating the phenotypic parameters of the leaf vegetables on the basis.
To obtain the scale relationship between the point cloud and the real world, the dimensions of a reference cube are first measured in the three-dimensional space, giving the ratio k = L_real / L_virtual, where L_real is the size of the cube in the real world and L_virtual is its size in point-cloud space. The point cloud model is then scaled by the ratio k, yielding its corresponding real-world dimensions. A slice-clustering-based point cloud skeletonization algorithm is applied to the model to obtain the point cloud skeleton model, and the point cloud is segmented on this basis into leafy-vegetable point cloud models of different branches. Phenotypic parameters are then calculated from the skeleton model and the segmented point cloud.
Plant height; when the plant is nearly vertical to the ground, it is the Euclidean distance from the aerial part of the soil to the top of the plant. At this time, the maximum value and the minimum value of the Z-axis direction of the point cloud can be directly differed, namely the plant height H=Z max /Z min Wherein Z is max The value of the point with the largest Z axis direction in the plant point cloud is Z min Is the value of the point where the Z-axis direction is smallest. When a certain included angle theta exists between the plant and the ground, the plant height H= (Z) max -Z min )/sinθ。
Number of leaves: for leafy vegetables, the number of leaves is obtained by counting the branches of the plant skeleton.
Leaf length: calculated as the length of the leaf-branch skeleton line. For a branch containing n nodes v_i (i = 1, ..., n), the branch length is the sum of the distances between consecutive nodes, i.e. the leaf length L_b is:
L_b = Σ_{i=1}^{n-1} ‖v_{i+1} - v_i‖.
Leaf angle: the normal vector α of the ground is computed first, then the tangent vector γ at the branch point is computed from the leaf-skeleton branch; the leaf-branch included angle θ is:
θ = arccos( (α·γ) / (‖α‖ ‖γ‖) ).
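Given the formulas above, the three geometric parameters (plant height, leaf length, leaf angle) reduce to a few lines of NumPy. This is an illustrative sketch with our own helper names; the skeleton extraction and segmentation are assumed already done:

```python
import numpy as np

def plant_height(points: np.ndarray, theta_deg: float = 90.0) -> float:
    """H = (Z_max - Z_min) / sin(theta); theta = 90 deg for an upright plant."""
    z = points[:, 2]
    return float((z.max() - z.min()) / np.sin(np.radians(theta_deg)))

def blade_length(skeleton: np.ndarray) -> float:
    """Sum of distances between consecutive skeleton nodes of one leaf branch."""
    return float(np.linalg.norm(np.diff(skeleton, axis=0), axis=1).sum())

def blade_angle(ground_normal: np.ndarray, branch_tangent: np.ndarray) -> float:
    """Angle (degrees) between the ground normal and the branch tangent."""
    cosang = ground_normal.dot(branch_tangent) / (
        np.linalg.norm(ground_normal) * np.linalg.norm(branch_tangent))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```

The number of leaves, by contrast, is not a formula but a count of skeleton branches produced by the segmentation step.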
the above examples are preferred embodiments of the present invention, but the embodiments of the present invention are not limited to the above examples, and any other changes, modifications, substitutions, combinations, and simplifications that do not depart from the spirit and principle of the present invention should be made in the equivalent manner, and the embodiments are included in the protection scope of the present invention.
Claims (6)
1. The three-dimensional phenotype measuring method for the leafy vegetables based on the video image is characterized by comprising the following steps of:
acquiring video image data of leaf vegetables through a data acquisition device;
the method comprises the steps of performing fuzzy image removal processing on video image data, and acquiring key frames containing leaf vegetable areas in the video image data by using a vegetation index and scale invariant feature transformation matching method, wherein the key frames comprise the following specific steps:
s201, decoding video image data into single-frame images to be stored, sequentially judging whether each stored frame image is blurred or not, and marking the blurred image sequence number;
S202, calculating the ExGR vegetation index of each single-frame image not marked as blurred, obtaining a leafy-vegetable saliency map;
s203, sequentially grouping and calculating the leaf vegetable saliency maps to obtain key frames containing leaf vegetable areas in the video image data;
the step S203 specifically includes:
letting I denote one frame of the video, a video with n frames being written S = {I_i | i = 1, 2, 3, ..., n}; given the expected number of key frames m, dividing the video S having n frames into m sets at equal frame intervals starting from the first frame;
computing SIFT features for every leafy-vegetable saliency map in all sets, with the 1st saliency map I'_1 of the 1st set as the 1st key frame; starting from the 2nd set, performing feature-point matching between all saliency frames in the current set and the previous key frame, and taking the frame with the most matching points in the current set as the new key frame; repeating the current step until m sets have been calculated, obtaining m key frames, the calculation formula being:
K_i = argmax_{I'_l ∈ {I'}_i} N(I'_l, K_{i-1}), i = 2, ..., m,
wherein {I'}_i denotes a set of saliency maps, I'_l denotes each frame image within the i-th set {I'}_i, K_i denotes the key frame obtained for the i-th saliency-map set, and N(I'_l, K_{i-1}) denotes the number of feature points obtained by SIFT feature matching between the two frames I'_l and K_{i-1};
taking frames corresponding to m key frames in an original video as an image sequence of the original video, and reconstructing a three-dimensional point cloud model of the leaf vegetables;
reconstructing the key frame image into a three-dimensional point cloud model, and performing post-processing on a three-dimensional space through the three-dimensional point cloud model to obtain a post-processing point cloud model;
and extracting a point cloud skeleton from the post-processed point cloud model, performing point cloud segmentation, and then calculating the phenotypic parameters of the leaf vegetables to obtain the three-dimensional phenotype measurement results of the leaf vegetables.
2. The three-dimensional phenotype measuring method for leafy vegetables based on video images according to claim 1, wherein the video image data of the leafy vegetables are obtained by a data acquisition device, specifically: shooting videos around the leafy vegetables at different angles with the data acquisition device to obtain the video image data of the leafy vegetables.
3. The three-dimensional phenotyping method for leafy vegetables based on video images according to claim 1, wherein the post-processing of the three-dimensional space comprises filtering, plane fitting and simplification.
4. The three-dimensional phenotype measurement method for leafy vegetables based on video images according to claim 3, wherein reconstructing the key frame images into a three-dimensional point cloud model and performing three-dimensional-space post-processing on the point cloud model to obtain a post-processed point cloud model specifically comprises the following steps:
S301, first reconstructing the key frame images using the SfM (structure from motion) algorithm to obtain a sparse three-dimensional point cloud model, and further reconstructing them using the MVS (multi-view stereo) algorithm to obtain a dense three-dimensional point cloud model;
S302, performing three-dimensional-space filtering on the dense three-dimensional point cloud model to obtain a filtered three-dimensional point cloud model;
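Step S302 does not specify which filter is applied. Statistical outlier removal (as popularized by PCL and Open3D) is a typical choice for SfM/MVS clouds: points whose mean distance to their k nearest neighbours is far above the global average are dropped. A minimal NumPy version, assuming demo-sized clouds where a full pairwise distance matrix is affordable; k and std_ratio are illustrative parameters.

```python
import numpy as np

def statistical_outlier_removal(points, k=8, std_ratio=1.0):
    """Drop points whose mean k-NN distance exceeds mean + std_ratio * std.

    points : (n, 3) array. Returns the filtered (m, 3) array, m <= n.
    """
    # full pairwise distance matrix -- O(n^2), fine for small demo clouds
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)           # ignore self-distance
    knn_mean = np.sort(dist, axis=1)[:, :k].mean(axis=1)
    thresh = knn_mean.mean() + std_ratio * knn_mean.std()
    return points[knn_mean <= thresh]
```

A production implementation would use a k-d tree instead of the dense distance matrix.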
S303, performing plane fitting on the filtered three-dimensional point cloud model, taking the detected plane as the ground, defining the direction perpendicular to the ground as the Z-axis direction, and simultaneously deleting all point clouds below the plane to obtain a plane-fitted three-dimensional point cloud model;
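The plane detection of step S303 is commonly done with RANSAC: repeatedly fit a plane through three random points and keep the plane with the most inliers. The sketch below is one such implementation under assumed parameters (iteration count, inlier tolerance); the return convention `normal . p + d = 0` and the below-ground deletion mirror the claim's description but are not taken verbatim from the patent.

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.02, rng=None):
    """Fit the dominant plane (the ground of step S303) by RANSAC.

    Returns (normal, d, inlier_mask) with normal . p + d ~= 0 for plane points.
    """
    rng = rng or np.random.default_rng(0)
    best_n, best_d, best_mask, best = np.array([0.0, 0.0, 1.0]), 0.0, None, -1
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                         # degenerate (collinear) sample
        n = n / norm
        mask = np.abs((points - p0) @ n) < tol
        if mask.sum() > best:
            best, best_n, best_d, best_mask = mask.sum(), n, -n @ p0, mask
    return best_n, best_d, best_mask

def remove_below_ground(points, normal, d):
    """Delete all points below the fitted plane (the soil side).

    The normal is flipped if needed so that it points upwards (+Z).
    """
    if normal[2] < 0:
        normal, d = -normal, -d
    return points[(points @ normal + d) >= -1e-6]
```

After this step the cloud contains only the plant and the ground plane itself, with Z aligned to the plant's vertical axis.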
S304, simplifying the plane-fitted point cloud model to reduce the number of points in each sample, thereby reducing the calculation amount of the subsequent steps.
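Step S304 does not name a simplification scheme; voxel-grid downsampling, which replaces all points in each voxel by their centroid, is the standard choice because it thins dense regions while preserving overall shape. A NumPy sketch, with an illustrative voxel size that would depend on the sample's real scale:

```python
import numpy as np

def voxel_downsample(points, voxel=0.05):
    """Simplify a cloud by keeping one centroid per occupied voxel.

    points : (n, 3) array; returns an (m, 3) array with m <= n.
    """
    keys = np.floor(points / voxel).astype(np.int64)   # voxel index per point
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()                                  # flat label per point
    counts = np.bincount(inv).astype(np.float64)
    out = np.zeros((len(counts), 3))
    for dim in range(3):                               # per-voxel centroid
        out[:, dim] = np.bincount(inv, weights=points[:, dim]) / counts
    return out
```

Larger voxels give stronger simplification; the trade-off is loss of fine leaf-edge detail.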
5. The three-dimensional phenotype measurement method for leafy vegetables based on video images according to claim 1, wherein extracting the point cloud skeleton from the post-processed point cloud model and performing point cloud segmentation to calculate the phenotypic parameters of the leafy vegetables specifically comprises:
S401, performing size conversion on the point cloud model to obtain a size-converted point cloud model, thereby obtaining the real size of the leaf vegetable point cloud model;
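Step S401 is needed because SfM reconstructions are defined only up to an unknown global scale. A common calibration strategy, assumed here since the claim does not fix one, is to place a reference of known physical length in the scene, measure it in the model, and rescale uniformly:

```python
import numpy as np

def scale_to_real_size(points, model_ref_length, real_ref_length):
    """Size conversion of step S401: uniformly rescale a point cloud so
    that a reference distance measured in model units maps to its known
    real-world length (the reference marker is an assumed setup).
    """
    return np.asarray(points, float) * (real_ref_length / model_ref_length)
```

For example, a 50 mm marker that measures 0.25 model units yields a scale factor of 200, after which all coordinates and derived parameters are in millimetres.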
S402, skeletonizing the size-converted point cloud model to obtain a point cloud skeleton model;
S403, performing point cloud segmentation on the post-processed point cloud model based on the point cloud skeleton model to obtain a point cloud segmentation model;
S404, calculating the phenotypic parameters of the leafy vegetables using the point cloud segmentation model and the point cloud skeleton model to obtain the three-dimensional phenotype measurements of the leafy vegetables.
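Claim 6 lists the parameters (plant height, leaf count, leaf length, leaf angle) but not their formulas. The sketch below shows one plausible set of geometric definitions on the scaled, ground-aligned cloud of steps S303/S401; these are assumptions, since the patent does not fix the exact computations.

```python
import numpy as np

def plant_height(points, ground_z=0.0):
    """Plant height: top of the cloud above the fitted ground plane (Z up)."""
    return points[:, 2].max() - ground_z

def leaf_length(skeleton_pts):
    """Leaf length: polyline length along the leaf's skeleton branch."""
    d = np.diff(np.asarray(skeleton_pts, float), axis=0)
    return np.sqrt((d ** 2).sum(1)).sum()

def leaf_inclination_deg(leaf_base, leaf_tip):
    """Leaf angle: angle between the base-to-tip direction and the
    horizontal plane, in degrees (one plausible definition).
    """
    v = np.asarray(leaf_tip, float) - np.asarray(leaf_base, float)
    return np.degrees(np.arcsin(abs(v[2]) / np.linalg.norm(v)))
```

The leaf count would simply be the number of leaf segments produced by the skeleton-guided segmentation of step S403.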
6. The three-dimensional phenotype measurement method for leafy vegetables based on video images according to claim 5, wherein the leafy vegetable phenotypic parameters comprise: plant height, number of leaves, leaf length, and leaf angle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011021158.9A CN112200854B (en) | 2020-09-25 | 2020-09-25 | Leaf vegetable three-dimensional phenotype measuring method based on video image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011021158.9A CN112200854B (en) | 2020-09-25 | 2020-09-25 | Leaf vegetable three-dimensional phenotype measuring method based on video image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112200854A CN112200854A (en) | 2021-01-08 |
CN112200854B true CN112200854B (en) | 2023-10-17 |
Family
ID=74008283
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011021158.9A Active CN112200854B (en) | 2020-09-25 | 2020-09-25 | Leaf vegetable three-dimensional phenotype measuring method based on video image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112200854B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114061488B (en) * | 2021-11-15 | 2024-05-14 | 华中科技大学鄂州工业技术研究院 | Object measurement method, system and computer readable storage medium |
CN115115621B (en) * | 2022-08-24 | 2022-11-11 | 聊城市泓润能源科技有限公司 | Lubricating oil pollution degree detection method based on image processing |
CN115326805A (en) * | 2022-10-12 | 2022-11-11 | 云南瀚哲科技有限公司 | Image acquisition device and IBMR-based tobacco crop growth analysis method |
CN116704497B (en) * | 2023-05-24 | 2024-03-26 | 东北农业大学 | Rape phenotype parameter extraction method and system based on three-dimensional point cloud |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104021544A (en) * | 2014-05-07 | 2014-09-03 | 中国农业大学 | Greenhouse vegetable disease surveillance video key frame extracting method and extracting system |
CN106251399A (en) * | 2016-08-30 | 2016-12-21 | 广州市绯影信息科技有限公司 | A kind of outdoor scene three-dimensional rebuilding method based on lsd slam |
CN110866975A (en) * | 2019-04-25 | 2020-03-06 | 华中农业大学 | Multi-vision-based rape image acquisition device and three-dimensional feature extraction method |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7187809B2 (en) * | 2004-06-10 | 2007-03-06 | Sarnoff Corporation | Method and apparatus for aligning video to three-dimensional point clouds |
US11397088B2 (en) * | 2016-09-09 | 2022-07-26 | Nanyang Technological University | Simultaneous localization and mapping methods and apparatus |
2020
- 2020-09-25: CN application CN202011021158.9A filed (granted as CN112200854B, status Active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104021544A (en) * | 2014-05-07 | 2014-09-03 | 中国农业大学 | Greenhouse vegetable disease surveillance video key frame extracting method and extracting system |
CN106251399A (en) * | 2016-08-30 | 2016-12-21 | 广州市绯影信息科技有限公司 | A kind of outdoor scene three-dimensional rebuilding method based on lsd slam |
CN110866975A (en) * | 2019-04-25 | 2020-03-06 | 华中农业大学 | Multi-vision-based rape image acquisition device and three-dimensional feature extraction method |
Non-Patent Citations (2)
Title |
---|
Robot SLAM implementation based on an ORB key-frame matching algorithm; Ai Qinglin; Yu Jie; Hu Keyong; Chen Qi; Journal of Mechanical & Electrical Engineering (05); pp. 12-19 *
Centerline extraction method for three-dimensional plant root system models; Yang Zishang; China Master's Theses Full-text Database; pp. A006-713 *
Also Published As
Publication number | Publication date |
---|---|
CN112200854A (en) | 2021-01-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112200854B (en) | Leaf vegetable three-dimensional phenotype measuring method based on video image | |
CN109146948B (en) | Crop growth phenotype parameter quantification and yield correlation analysis method based on vision | |
CN111724433B (en) | Crop phenotype parameter extraction method and system based on multi-view vision | |
CN106651900B (en) | A kind of overhead strawberry three-dimensional modeling method in situ based on contours segmentation | |
CN109785379A (en) | The measurement method and measuring system of a kind of symmetric objects size and weight | |
Santos et al. | 3D plant modeling: localization, mapping and segmentation for plant phenotyping using a single hand-held camera | |
CN103218787B (en) | Multi-source heterogeneous remote sensing image reference mark automatic acquiring method | |
CN113112504A (en) | Plant point cloud data segmentation method and system | |
CN107607053B (en) | A kind of standing tree tree breast diameter survey method based on machine vision and three-dimensional reconstruction | |
CN109118544B (en) | Synthetic aperture imaging method based on perspective transformation | |
CN110070571B (en) | Phyllostachys pubescens morphological parameter detection method based on depth camera | |
CN115937151B (en) | Method for judging curling degree of crop leaves | |
WO2022179549A1 (en) | Calibration method and apparatus, computer device, and storage medium | |
CN115375842A (en) | Plant three-dimensional reconstruction method, terminal and storage medium | |
CN113674400A (en) | Spectrum three-dimensional reconstruction method and system based on repositioning technology and storage medium | |
CN110866975A (en) | Multi-vision-based rape image acquisition device and three-dimensional feature extraction method | |
Paturkar et al. | 3D reconstruction of plants under outdoor conditions using image-based computer vision | |
CN112906719A (en) | Standing tree factor measuring method based on consumption-level depth camera | |
Peng et al. | Binocular-vision-based structure from motion for 3-D reconstruction of plants | |
Xinmei et al. | Passive measurement method of tree height and crown diameter using a smartphone | |
CN110866945A (en) | Method for generating three-dimensional tree by automatic identification of oblique photography model | |
CN112017221B (en) | Multi-modal image registration method, device and equipment based on scale space | |
CN116843738A (en) | Tree dumping risk assessment system and method based on TOF depth camera | |
Li et al. | Automatic reconstruction and modeling of dormant jujube trees using three-view image constraints for intelligent pruning applications | |
CN115655157A (en) | Fish-eye image-based leaf area index measuring and calculating method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||