CN112200854A - Leaf vegetable three-dimensional phenotype measurement method based on video image - Google Patents

Leaf vegetable three-dimensional phenotype measurement method based on video image

Info

Publication number
CN112200854A
Authority
CN
China
Prior art keywords
point cloud
dimensional
leaf
cloud model
phenotype
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011021158.9A
Other languages
Chinese (zh)
Other versions
CN112200854B (en)
Inventor
韩宇星
杨自尚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Agricultural University
Original Assignee
South China Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Agricultural University filed Critical South China Agricultural University
Priority to CN202011021158.9A priority Critical patent/CN112200854B/en
Publication of CN112200854A publication Critical patent/CN112200854A/en
Application granted granted Critical
Publication of CN112200854B publication Critical patent/CN112200854B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/36Videogrammetry, i.e. electronic processing of video signals from a single source or from different sources to give parallax or range information
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N2021/8466Investigation of vegetal material, e.g. leaves, plants, fruits
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Geometry (AREA)
  • Chemical & Material Sciences (AREA)
  • Computer Graphics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a leaf vegetable three-dimensional phenotype measuring method based on video images, which comprises the following steps: acquiring video image data of leaf vegetables through a data acquisition device; removing blurred image frames from video image data, and obtaining a key frame containing a leaf vegetable area in the video image data by using a vegetation index and scale invariant feature transformation matching method; reconstructing the key frame image into a three-dimensional point cloud model, and performing post-processing of a three-dimensional space through the three-dimensional point cloud model to obtain a post-processing point cloud model; extracting a point cloud framework from the post-processing point cloud model, carrying out point cloud segmentation, and further calculating phenotype parameters of the leaf vegetables to obtain three-dimensional phenotype measurement results of the leaf vegetables; the invention provides a convenient and low-cost three-dimensional phenotype measurement means, does not need a complicated image shooting process, can obtain phenotype parameters by directly recording the video of green leaf vegetables, and further can be expanded and applied to the analysis of other leaf vegetables.

Description

Leaf vegetable three-dimensional phenotype measurement method based on video image
Technical Field
The invention relates to the field of cross research of computer vision technology and agricultural plant phenotype, in particular to a leaf vegetable three-dimensional phenotype measurement method based on video images.
Background
Leaf vegetables are an important source of many of the nutrients required in people's daily lives. Quantifying vegetable phenotype and estimating yield are prerequisites for selecting genetic varieties and improving planting patterns. The traditional approach relies on manual measurement, which is time-consuming, tedious and destructive, since it alters the growth state of the crop. There is therefore a need for efficient and convenient in-situ vegetable phenotyping methods that provide data support for breeding research and leaf vegetable yield monitoring, thereby improving vegetable yield.
Over the past decade, researchers have made extensive use of two-dimensional imaging techniques to obtain the phenotype of leafy vegetables. Some researchers have used CCD cameras to collect rape data and realized automatic measurement methods and devices for two-dimensional rape phenotype parameters. In recent years, researchers have combined machine vision with deep learning, using deep neural networks for yield estimation and related tasks. Others have explored the use of mobile phones in agricultural production and developed a tree-diameter calculation method based on images taken with a mobile phone. However, because two-dimensional images lack depth information, occlusion, which is very common under field conditions, is difficult to resolve, so accurate structural information about the study object is hard to obtain.
In recent years, plant phenotyping based on three-dimensional data acquisition has become an increasing trend. To collect three-dimensional plant data, researchers in different fields use a variety of three-dimensional sensing techniques, which fall into two main categories: active and passive. Active sensors emit their own light source, for example hand-held laser scanning, structured light, terrestrial laser scanning and time-of-flight. LiDAR is one of the most widely used active sensors for phenotypic analysis. Microsoft's Kinect is another common and relatively cheap active sensor that collects RGB-D data, but its low resolution makes it difficult to acquire usable data in outdoor scenes.
Three-dimensional reconstruction based on passive methods has also received attention in recent years. Images of the region of interest are acquired by a camera from different angles, and the depth of the target is then computed by the triangulation principle. Structure from Motion (SFM) is used for plant phenotype measurement, yield estimation and quality prediction owing to its ease of use and robustness. At present, SFM-related work usually requires shooting 30 to 50 images of a single crop, so the shooting process is complex and tedious, and adjusting camera shutter and ISO parameters requires some experience. Meanwhile, the complexity of the field environment, such as occlusion of plant targets, wind-induced plant motion and varying illumination, poses further challenges to this work.
Disclosure of Invention
The invention mainly aims to overcome the defects of the prior art and provide a leaf vegetable three-dimensional phenotype measuring method based on video images.
The purpose of the invention is realized by the following technical scheme:
a leaf vegetable three-dimensional phenotype measurement method based on video images comprises the following steps:
acquiring video image data of leaf vegetables through a data acquisition device;
removing blurred image frames from video image data, and obtaining a key frame containing a leaf vegetable area in the video image data by using a vegetation index and scale invariant feature transformation matching method;
reconstructing the key frame image into a three-dimensional point cloud model, and performing post-processing of a three-dimensional space through the three-dimensional point cloud model to obtain a post-processing point cloud model;
and extracting a point cloud framework from the post-processing point cloud model, carrying out point cloud segmentation, and further calculating phenotype parameters of the leaf vegetables to obtain three-dimensional phenotype measurement results of the leaf vegetables.
Further, the acquiring the video image data of the leafy vegetables by the data acquiring device specifically includes: the data acquisition device is used for shooting videos around the leaf vegetables at different angles to acquire video image data of the leaf vegetables.
Further, the blurred-frame removal is performed on the video image data, and a vegetation index and scale invariant feature transformation matching method is used to obtain a key frame containing a leaf vegetable region in the video image data, specifically:
s201, decoding the video image data into single-frame images for storage, sequentially judging whether each stored frame image is blurred, and marking the serial numbers of the blurred images;
s202, calculating the ExGR vegetation index of each single-frame image not marked as blurred to obtain a leaf vegetable saliency map;
and S203, sequentially grouping and calculating the leaf vegetable saliency maps to obtain a key frame containing a leaf vegetable region in the video image data.
Further, in step S203, specifically:
Let I denote one frame of the video, so that a video with n frames is S = {I_i | i = 1, 2, 3, ..., n}; given the desired number of key frames m, the video S with n frames is divided into m sets of equal frame count, starting from the first frame;
calculating the SIFT features of each leafy vegetable saliency map within all sets, with the 1st saliency map I'_1 of the 1st set as the 1st key frame; starting from the 2nd set, performing feature point matching between all the salient frames in the current set and the previous key frame, and taking the frame with the most matching points in the current set as the new key frame; repeating this step until all m sets have been calculated, obtaining m key frames, according to the formula:
K_i = argmax_{I'_l ∈ {I'}_i} M(I'_l, K_{i-1}), i = 2, ..., m,
wherein {I'}_i represents the i-th set comprising a plurality of saliency maps, I'_l denotes each frame image in the set {I'}_i, K_i represents the key frame obtained from the i-th saliency-map set, and M(I'_l, K_{i-1}) is the number of feature points obtained by SIFT feature matching between the two frames I'_l and K_{i-1};
and taking the frames corresponding to the m key frames in the original video as an image sequence of the original video for reconstructing the three-dimensional point cloud model of the leafy vegetables.
Further, the post-processing of the three-dimensional space includes filtering, plane fitting and simplification.
Further, reconstructing the key frame image into a three-dimensional point cloud model, and performing post-processing of a three-dimensional space through the three-dimensional point cloud model to obtain a post-processed point cloud model, specifically:
s301, reconstructing the key frame image by using an SFM algorithm to obtain a sparse three-dimensional point cloud model, and further reconstructing by using an MVS algorithm to obtain a dense three-dimensional point cloud model;
s302, carrying out three-dimensional space filtering processing on the three-dimensional point cloud model to obtain a filtered three-dimensional point cloud model;
s303, carrying out plane fitting on the filtering three-dimensional point cloud model, taking the detected plane as the ground, defining the direction vertical to the ground as the Z-axis direction, and simultaneously deleting all point clouds below the plane to obtain a plane fitting three-dimensional point cloud model;
s304, simplifying the plane fitting point cloud model, and reducing the point cloud number of each sample, thereby reducing the calculation amount of the subsequent steps.
Further, the point cloud framework is extracted from the post-processed point cloud model and subjected to point cloud segmentation, and then phenotype parameters of the leaf vegetables are calculated, specifically:
s401, carrying out size conversion on the point cloud model to obtain a size conversion point cloud model, and further obtaining the real size of the leaf vegetable point cloud model;
s402, skeletonizing the size-converted point cloud model to obtain a point cloud skeleton model;
s403, on the basis of the point cloud framework model, performing point cloud segmentation on the post-processing point cloud model to obtain a point cloud segmentation model;
s404, calculating phenotype parameters of the leaf vegetables by using the point cloud segmentation model and the point cloud framework model to obtain three-dimensional phenotype measurement of the leaf vegetables.
Further, the leaf vegetable phenotypic parameters include: plant height, leaf number, leaf length, leaf included angle.
Compared with the prior art, the invention has the following advantages and beneficial effects:
the phenotype parameters are finally obtained by directly recording the videos of the green vegetables, and compared with other methods based on photogrammetry, the method does not need a fussy image shooting process; in the key frame extraction step, a high-quality point cloud model can be obtained through reconstruction by means of fuzzy image removal and a method based on feature point matching; the method provides a convenient and low-cost three-dimensional phenotype measurement means, further can be expanded to be applied to analysis of other leaf vegetables, and has important practical significance for improvement of the leaf vegetables and improvement of yield of the leaf vegetables.
Drawings
FIG. 1 is a flow chart of a method for measuring three-dimensional phenotype of leafy vegetables based on video images according to the present invention;
fig. 2 is a schematic diagram of video data acquired from different angles according to the embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
Example:
a method for measuring three-dimensional phenotype of leafy vegetables based on video images is shown in figure 1 and comprises the following steps:
acquiring video image data of leaf vegetables through a data acquisition device;
blurred-frame removal is carried out on the video image data, and a key frame containing leaf vegetable areas in the video image data is obtained by using a vegetation index and scale invariant feature transformation matching method;
reconstructing the key frame image into a three-dimensional point cloud model, and performing post-processing of a three-dimensional space through the three-dimensional point cloud model to obtain a post-processing point cloud model;
and extracting a point cloud framework from the post-processing point cloud model, carrying out point cloud segmentation, and further calculating phenotype parameters of the leaf vegetables to obtain three-dimensional phenotype measurement results of the leaf vegetables.
The method comprises the following specific steps:
Firstly, videos are shot around the leafy vegetables from different angles with a mobile phone. A mobile phone with a 20-megapixel camera is used; the shooting resolution is set to 2880x2160, the video frame rate is 30 fps, exposure is automatic, and manual focusing is performed before each shot. Shooting is done by moving around the leafy vegetable as steadily as possible. In order to obtain as much information as possible, each plant is photographed 3 times at angles of 0 degrees, 45 degrees and 75 degrees to the ground plane, at a distance of 30 to 50 cm from the center of the plant. The shooting mode is shown in fig. 2. After the video is taken, the phenotypic parameters of the individual leafy vegetables are manually measured.
Secondly, judging whether each frame of image of the obtained video is blurred or not and removing the blurred image, and on the basis, obtaining a key frame containing a crop area in the video by using a feature invariant feature transform (SIFT) matching algorithm based on vegetation indexes and scale invariance;
for videos shot by a mobile phone around leafy vegetables, blurring of video frames occurs due to shaking of people in the shooting process. It is known that the blurred picture boundary information is small, and the normal clear picture boundary information is large. Therefore, the variance value of the second derivative of the picture can be used as a basis for judging whether the picture is blurred or not. The Laplacian operator has second-order conductibility and can be used to calculate the boundary region in the picture. The degree of blur of the image can be expressed as:
D(f)=∑yx|G(x,y)|,
wherein, G (x, y) is the convolution of Laplacian operators at the pixel point (x, y) in the image f.
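As a small illustration of this criterion, the sketch below flags blur via the variance of the Laplacian response, a common variant of the D(f) measure above; the 3x3 kernel and the threshold of 100 are illustrative assumptions, not values given in this patent (a real pipeline might use OpenCV's `cv2.Laplacian` instead):

```python
import numpy as np

# 3x3 discrete Laplacian kernel; an illustrative stand-in for cv2.Laplacian.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def laplacian_response(gray: np.ndarray) -> np.ndarray:
    """Apply the 3x3 Laplacian to a grayscale image (valid region only)."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return out

def is_blurred(gray: np.ndarray, threshold: float = 100.0) -> bool:
    """Flag a frame as blurred when the variance of its Laplacian response is low."""
    return bool(laplacian_response(gray.astype(np.float64)).var() < threshold)
```

A flat, texture-free frame yields near-zero variance and is rejected, while a sharp, edge-rich frame yields a large variance and is kept.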
The vegetation index mainly reflects differences between vegetation and the soil background in visible-light and near-infrared reflectance; it is used to enhance the contrast between crops and surrounding ground objects and to effectively separate green crops from the soil background. For an input image in RGB color space, experiments show that the excess green minus excess red (ExGR) index performs well on green leafy vegetable images. The ExGR index is calculated as:
r = R/(R+G+B), g = G/(R+G+B), b = B/(R+G+B),
ExG = 2g - r - b,
ExR = 1.4r - g,
ExGR = ExG - ExR,
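A minimal sketch of the ExGR saliency computation, assuming the conventional chromatic normalisation r = R/(R+G+B) etc. (this normalisation step is an assumption, since the original formula image did not survive extraction):

```python
import numpy as np

def exgr_map(rgb: np.ndarray) -> np.ndarray:
    """ExGR saliency for an H x W x 3 RGB image with values in [0, 255].

    ExG = 2g - r - b and ExR = 1.4r - g are computed on chromatically
    normalised channels; ExGR = ExG - ExR highlights green vegetation.
    """
    img = rgb.astype(np.float64)
    total = img.sum(axis=2) + 1e-9          # avoid division by zero on black pixels
    r, g, b = (img[..., i] / total for i in range(3))
    exg = 2.0 * g - r - b
    exr = 1.4 * r - g
    return exg - exr
```

Thresholding the map at zero (ExGR > 0) separates green crop pixels from the soil background.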
the Scale Invariant Feature Transform (SIFT) algorithm has excellent performance in complex change environments such as image translation, scaling and rotation, and is commonly used in algorithms such as image splicing and point cloud model reconstruction. The complete key frame extraction method is therefore as follows:
1) Let I denote one frame of the video, so that a video with n frames is S = {I_i | i = 1, 2, 3, ..., n}; assume the desired number of key frames is m. The video S with n frames is divided into m sets of equal frame count, starting from the first frame. Taking the 1st frame image of each set as the initial frame, first judge whether the current frame is blurred; if so, reject it directly and judge the next frame, until all video frames in the set have been judged; the frames remaining in each set are the clear ones.
2) A grayscale leaf vegetable saliency map I' is obtained by applying the ExGR method to the video frames in each set.
3) Calculate the SIFT features of each leafy vegetable saliency map within all sets. With the 1st saliency map I'_1 of the 1st set as the 1st key frame, starting from the 2nd set, match the feature points of all salient frames in the current set against the previous key frame, and take the frame with the most matching points in the current set as the new key frame. Repeat this step until all m sets have been processed, yielding m key frames, according to:
K_i = argmax_{I'_l ∈ {I'}_i} M(I'_l, K_{i-1}), i = 2, ..., m,
where {I'}_i is the i-th set of saliency maps and M(·,·) counts the SIFT feature matches between two frames.
4) Take the frames corresponding to the m key frames in the original video as the image sequence of the original video, from which the 3D point cloud model of the leafy vegetables is reconstructed.
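The grouped, greedy selection in steps 1)-3) can be sketched as follows. `match_count` is a simplified, hypothetical stand-in for SIFT descriptor matching with Lowe's ratio test (a real pipeline would match descriptors produced by e.g. OpenCV's SIFT), and blur removal plus ExGR saliency are assumed to have been applied upstream:

```python
import numpy as np

def match_count(desc_a: np.ndarray, desc_b: np.ndarray, ratio: float = 0.75) -> int:
    """Count Lowe-ratio matches between two descriptor sets (one row per descriptor)."""
    if len(desc_a) == 0 or len(desc_b) < 2:
        return 0
    # pairwise Euclidean distances between descriptors
    dist = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    good = 0
    for row in dist:
        nearest, second = np.sort(row)[:2]
        if nearest < ratio * second:        # unambiguous match
            good += 1
    return good

def select_keyframes(frame_descriptors: list, m: int) -> list:
    """Split frames into m equal groups; in each group keep the frame that
    best matches the previous key frame (the 1st frame seeds the chain)."""
    groups = np.array_split(np.arange(len(frame_descriptors)), m)
    keys = [int(groups[0][0])]
    for grp in groups[1:]:
        prev = frame_descriptors[keys[-1]]
        best = max(grp, key=lambda i: match_count(frame_descriptors[i], prev))
        keys.append(int(best))
    return keys
```

The greedy chaining keeps consecutive key frames visually overlapping, which is what the later SFM reconstruction needs.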
Thirdly, reconstructing the key frame image into a three-dimensional point cloud model through an SFM algorithm and carrying out post-processing such as filtering of a three-dimensional space;
and D, obtaining a sparse point cloud from the key frame image obtained in the step two by using an SFM algorithm, and then reconstructing the sparse point cloud into a dense point cloud by using an MVS algorithm. Under the influence of the external environment, the generated point cloud model contains a plurality of discretely distributed noises, so that the point cloud model needs to be subjected to three-dimensional filtering processing. And (4) performing noise point filtering by adopting an outlier filter based on the radius, thereby obtaining a smoother point cloud model.
And (4) reconstructing the obtained point cloud, wherein the point cloud comprises the plants and the soil ground. Firstly, carrying out plane detection on the point cloud model, wherein the detected plane is a ground plane. A plane is fitted by using a random sample consensus (RANSAC) algorithm, after the ground is detected, a direction vertical to the ground is defined as a Z axis, all point clouds below the plane are deleted, and the point clouds are reserved for calculation of subsequent plant phenotypes. The reconstructed high-precision point cloud often contains hundreds of thousands or even millions of orders of magnitude, which is huge for subsequent calculation. In order to improve the efficiency, the point clouds need to be simplified, the number of the point clouds of each plant is reduced to about 1-3 ten thousand, and the calculated amount of the subsequent steps is effectively reduced on the premise that the influence of the model precision is small.
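The ground-plane step can be illustrated with a small numpy-only RANSAC sketch. The patent does not specify an implementation; a practical pipeline would more likely use a point cloud library such as Open3D or PCL, and the iteration count, distance threshold and margin below are illustrative assumptions:

```python
import numpy as np

def ransac_plane(points: np.ndarray, n_iter: int = 200,
                 thresh: float = 0.01, seed: int = 0):
    """Fit the dominant plane n.x + d = 0 of an N x 3 cloud with RANSAC."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(n_iter):
        i, j, k = rng.choice(len(points), 3, replace=False)
        normal = np.cross(points[j] - points[i], points[k] - points[i])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                    # degenerate (collinear) sample
            continue
        normal = normal / norm
        d = -normal.dot(points[i])
        inliers = np.abs(points @ normal + d) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

def remove_below_plane(points: np.ndarray, normal: np.ndarray,
                       d: float, margin: float = 0.01) -> np.ndarray:
    """Orient the plane normal upward (+Z) and drop all points below the plane."""
    if normal[2] < 0:
        normal, d = -normal, -d
    return points[points @ normal + d > -margin]
```

With the ground plane found, the returned normal also fixes the Z axis used later for plant height and leaf angle.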
And fourthly, extracting a point cloud framework from the post-processed point cloud model, carrying out point cloud segmentation, and calculating phenotype parameters of the leaf vegetables on the basis.
In order to obtain the size relationship between the point cloud and the real world, the size of the cube in three-dimensional space is first measured, giving the ratio k = L_real / L_virtual, where L_real is the real-world size of the cube and L_virtual is the size of the cube in point-cloud space. The point cloud model is then scaled by the ratio k, giving the corresponding real-world size of the point cloud model. A point cloud skeletonization algorithm based on slice clustering is applied to the point cloud model to obtain a point cloud skeleton model, on the basis of which the point cloud is segmented to obtain leaf vegetable point cloud models of the different branches. The phenotype parameters are then calculated from the leaf vegetable point cloud skeleton model and the segmented point cloud.
Plant height: when the plant is nearly perpendicular to the ground, this is the Euclidean distance from the above-ground part at the soil surface to the topmost end of the plant. In this case it is simply the difference between the maximum and minimum values of the point cloud in the Z-axis direction, i.e. the plant height H = Z_max - Z_min, where Z_max is the value of the highest point of the plant point cloud in the Z-axis direction and Z_min is the value of the lowest point. When the plant leans at an angle θ to the ground, the plant height is H = (Z_max - Z_min)/sin θ.
Number of leaves: for leaf vegetables, the number of leaves can be obtained by counting the branches of the plant skeleton.
Leaf length: calculated from the length of the leaf's branch skeleton line. For a branch containing n nodes v_i (i = 1, ..., n), the branch length is the sum of the distances between adjacent nodes, i.e. each leaf length L_b is:
L_b = Σ_{i=1}^{n-1} ||v_{i+1} - v_i||,
Leaf angle: first the normal vector α of the ground is calculated, then the tangent vector γ at the branching position is computed from the leaf-skeleton branch; the leaf-branch included angle θ is:
θ = arccos( (α · γ) / (|α| |γ|) ),
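The three skeleton-based parameters can be sketched directly from these formulas; the function names and the input conventions (an N x 3 point array, an ordered branch of skeleton nodes, a ground normal and a branch tangent) are illustrative assumptions:

```python
import numpy as np

def plant_height(points: np.ndarray, theta: float = np.pi / 2) -> float:
    """Height H = (Z_max - Z_min) / sin(theta); theta = pi/2 for an upright plant."""
    return float((points[:, 2].max() - points[:, 2].min()) / np.sin(theta))

def leaf_length(branch_nodes: np.ndarray) -> float:
    """L_b: sum of distances between adjacent nodes along one ordered branch."""
    segments = np.diff(branch_nodes, axis=0)
    return float(np.linalg.norm(segments, axis=1).sum())

def leaf_angle(ground_normal: np.ndarray, branch_tangent: np.ndarray) -> float:
    """theta = arccos(alpha . gamma / (|alpha| |gamma|)), in radians."""
    cos_t = ground_normal.dot(branch_tangent) / (
        np.linalg.norm(ground_normal) * np.linalg.norm(branch_tangent))
    return float(np.arccos(np.clip(cos_t, -1.0, 1.0)))
```

The clip guards against floating-point round-off pushing the cosine slightly outside [-1, 1] before `arccos`.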
the above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (8)

1. A leaf vegetable three-dimensional phenotype measurement method based on video images is characterized by comprising the following steps:
acquiring video image data of leaf vegetables through a data acquisition device;
removing blurred images from video image data, and obtaining a key frame containing a leaf vegetable region in the video image data by using a vegetation index and scale invariant feature transformation matching method;
reconstructing the key frame image into a three-dimensional point cloud model, and performing post-processing of a three-dimensional space through the three-dimensional point cloud model to obtain a post-processing point cloud model;
and extracting a point cloud framework from the post-processing point cloud model, carrying out point cloud segmentation, and further calculating phenotype parameters of the leaf vegetables to obtain three-dimensional phenotype measurement results of the leaf vegetables.
2. The method for measuring three-dimensional phenotype of leaf vegetables based on video images as claimed in claim 1, wherein the obtaining of video image data of leaf vegetables by the data obtaining device specifically comprises: the data acquisition device is used for shooting videos around the leaf vegetables at different angles to acquire video image data of the leaf vegetables.
3. The method for measuring the three-dimensional phenotype of leaf vegetables based on the video image according to claim 1, wherein the blurred image removal processing is performed on the video image data, and a vegetation index and scale invariant feature transformation matching method is used to obtain a key frame of a leaf vegetable region in the video image data, specifically:
s201, decoding the video image data into single-frame images for storage, sequentially judging whether each stored frame image is blurred, and marking the serial numbers of the blurred images;
s202, calculating the ExGR vegetation index of each single-frame image not marked as blurred to obtain a leaf vegetable saliency map;
and S203, sequentially grouping and calculating the leaf vegetable saliency maps to obtain a key frame containing a leaf vegetable region in the video image data.
4. The method for measuring three-dimensional phenotype of leafy vegetables based on video image as claimed in claim 3, wherein said step S203 specifically comprises:
Let I denote one frame of the video, so that a video with n frames is S = {I_i | i = 1, 2, 3, ..., n}; given the desired number of key frames m, the video S with n frames is divided into m sets of equal frame count, starting from the first frame;
calculating the SIFT features of each leafy vegetable saliency map within all sets, with the 1st saliency map I'_1 of the 1st set as the 1st key frame; starting from the 2nd set, performing feature point matching between all the salient frames in the current set and the previous key frame, and taking the frame with the most matching points in the current set as the new key frame; repeating this step until all m sets have been calculated, obtaining m key frames, according to the formula:
K_i = argmax_{I'_l ∈ {I'}_i} M(I'_l, K_{i-1}), i = 2, ..., m,
wherein {I'}_i represents the i-th set comprising a plurality of saliency maps, I'_l denotes each frame image in the set {I'}_i, K_i represents the key frame obtained from the i-th saliency-map set, and M(I'_l, K_{i-1}) is the number of feature points obtained by SIFT feature matching between the two frames I'_l and K_{i-1};
and taking the frames of the original video corresponding to the m key frames as the image sequence of the original video for reconstructing the three-dimensional point cloud model of the leaf vegetables.
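The grouping-and-matching procedure of step S203 can be sketched as follows. This is a minimal illustration with a pluggable match-count function standing in for SIFT matching (in practice, e.g., OpenCV SIFT plus a brute-force matcher); all names are illustrative:

```python
def select_keyframes(frames, m, match_count):
    """Pick m key frames from an ordered list of saliency maps.

    frames      -- saliency maps of the sharp frames, in video order
    m           -- desired number of key frames
    match_count -- callable (frame_a, frame_b) -> number of matched
                   feature points (SIFT matching in the patent)
    """
    n = len(frames)
    # Split the frames into m contiguous groups of (nearly) equal size.
    bounds = [round(i * n / m) for i in range(m + 1)]
    groups = [frames[bounds[i]:bounds[i + 1]] for i in range(m)]

    keyframes = [groups[0][0]]  # 1st saliency map of the 1st set
    for group in groups[1:]:
        # Frame of the current set best matching the previous key frame.
        best = max(group, key=lambda f: match_count(f, keyframes[-1]))
        keyframes.append(best)
    return keyframes

# Toy usage: frames are integers, "matching" favors numeric closeness.
frames = list(range(12))
keys = select_keyframes(frames, 3, lambda a, b: -abs(a - b))
print(keys)  # [0, 4, 8]
```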
5. The method as claimed in claim 1, wherein the three-dimensional space post-processing includes filtering, plane fitting and simplification.
6. The method for measuring the three-dimensional phenotype of leaf vegetables based on a video image as claimed in claim 5, wherein the key frame images are reconstructed into a three-dimensional point cloud model and three-dimensional space post-processing is performed on the three-dimensional point cloud model to obtain the post-processed point cloud model, specifically:
S301, reconstructing the key frame images with an SFM (structure-from-motion) algorithm to obtain a sparse three-dimensional point cloud model, and further reconstructing with an MVS (multi-view stereo) algorithm to obtain a dense three-dimensional point cloud model;
S302, performing three-dimensional space filtering on the three-dimensional point cloud model to obtain a filtered three-dimensional point cloud model;
S303, performing plane fitting on the filtered three-dimensional point cloud model, taking the detected plane as the ground, defining the direction perpendicular to the ground as the Z-axis direction, and deleting all points below the plane, to obtain a plane-fitted three-dimensional point cloud model;
S304, simplifying the plane-fitted point cloud model to reduce the number of points per sample and thereby the computational load of the subsequent steps.
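The ground detection and below-plane removal of step S303 is commonly done with RANSAC plane fitting. The patent does not name the fitting algorithm; the following is a minimal NumPy sketch under that assumption (in practice a library routine such as Open3D's `segment_plane` would be used):

```python
import numpy as np

def ransac_ground_plane(points, n_iters=200, threshold=0.02, seed=0):
    """Fit a plane to an (N, 3) point cloud with RANSAC and drop points below it.

    Returns (normal, d, kept), where the plane is normal . p + d = 0 with
    the normal oriented so its Z component is positive ("up"), and kept
    contains only the points on or above the plane (within threshold).
    """
    rng = np.random.default_rng(seed)
    best_inliers, best_plane = 0, None
    for _ in range(n_iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(b - a, c - a)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:             # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(a)
        inliers = np.sum(np.abs(points @ normal + d) < threshold)
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (normal, d)
    normal, d = best_plane
    if normal[2] < 0:                # orient the normal along +Z (up)
        normal, d = -normal, -d
    kept = points[points @ normal + d > -threshold]
    return normal, d, kept

# Synthetic cloud: a flat ground at z = 0, plant points above, one outlier below.
rng = np.random.default_rng(1)
ground = np.c_[rng.uniform(-1, 1, (200, 2)), np.zeros(200)]
plant = np.c_[rng.uniform(-0.2, 0.2, (50, 2)), rng.uniform(0.1, 0.5, 50)]
below = np.array([[0.0, 0.0, -0.3]])
pts = np.vstack([ground, plant, below])
normal, d, kept = ransac_ground_plane(pts)
print(normal, len(kept))  # ~[0, 0, 1], 250 (the below-ground point is removed)
```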
7. The method for measuring the three-dimensional phenotype of leaf vegetables based on a video image as claimed in claim 1, wherein a point cloud skeleton is extracted from the post-processed point cloud model, point cloud segmentation is performed, and the phenotype parameters of the leaf vegetables are then calculated, specifically:
S401, performing size conversion on the point cloud model to obtain a size-converted point cloud model, thereby recovering the real size of the leaf vegetable point cloud model;
S402, skeletonizing the size-converted point cloud model to obtain a point cloud skeleton model;
S403, on the basis of the point cloud skeleton model, performing point cloud segmentation on the post-processed point cloud model to obtain a point cloud segmentation model;
S404, calculating the phenotype parameters of the leaf vegetables from the point cloud segmentation model and the point cloud skeleton model to obtain the three-dimensional phenotype measurements of the leaf vegetables.
8. The method as claimed in claim 7, wherein the leaf vegetable phenotype parameters include: plant height, leaf number, leaf length and leaf included angle.
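Once a segmented, real-scale point cloud and skeleton are available, the parameters listed in claim 8 reduce to simple geometry. A sketch under common definitions (the patent does not specify these exact formulas; all helper names are hypothetical):

```python
import numpy as np

def plant_height(points):
    """Plant height: extent of the (N, 3) cloud along the Z (up) axis."""
    return points[:, 2].max() - points[:, 2].min()

def leaf_length(skeleton_polyline):
    """Leaf length: arc length along the leaf's skeleton polyline."""
    segments = np.diff(skeleton_polyline, axis=0)
    return np.linalg.norm(segments, axis=1).sum()

def leaf_included_angle(stem_dir, leaf_dir):
    """Leaf included angle (degrees) between the stem axis and a leaf direction."""
    cos = np.dot(stem_dir, leaf_dir) / (
        np.linalg.norm(stem_dir) * np.linalg.norm(leaf_dir))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Toy example: a 0.30 m plant with one leaf at 45 degrees to the stem.
cloud = np.array([[0, 0, 0.0], [0, 0, 0.30], [0.05, 0, 0.15]])
print(plant_height(cloud))                                        # 0.3
print(leaf_included_angle([0, 0, 1], [1, 0, 1]))                  # 45.0
print(leaf_length(np.array([[0, 0, 0], [0, 0, 1], [0, 1, 1.0]]))) # 2.0
```

Leaf number would come directly from counting the segmented leaf clusters of step S403.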
CN202011021158.9A 2020-09-25 2020-09-25 Leaf vegetable three-dimensional phenotype measuring method based on video image Active CN112200854B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011021158.9A CN112200854B (en) 2020-09-25 2020-09-25 Leaf vegetable three-dimensional phenotype measuring method based on video image


Publications (2)

Publication Number Publication Date
CN112200854A true CN112200854A (en) 2021-01-08
CN112200854B CN112200854B (en) 2023-10-17

Family

ID=74008283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011021158.9A Active CN112200854B (en) 2020-09-25 2020-09-25 Leaf vegetable three-dimensional phenotype measuring method based on video image

Country Status (1)

Country Link
CN (1) CN112200854B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070031064A1 (en) * 2004-06-10 2007-02-08 Wenyi Zhao Method and apparatus for aligning video to three-dimensional point clouds
CN104021544A (en) * 2014-05-07 2014-09-03 中国农业大学 Greenhouse vegetable disease surveillance video key frame extracting method and extracting system
CN106251399A (en) * 2016-08-30 2016-12-21 广州市绯影信息科技有限公司 A kind of outdoor scene three-dimensional rebuilding method based on lsd slam
US20190226852A1 (en) * 2016-09-09 2019-07-25 Nanyang Technological University Simultaneous localization and mapping methods and apparatus
CN110866975A (en) * 2019-04-25 2020-03-06 华中农业大学 Multi-vision-based rape image acquisition device and three-dimensional feature extraction method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YANG Zishang: "Centerline Extraction Method for Three-Dimensional Plant Root Models", China Master's Theses Full-text Database, pages 006 - 713 *
AI Qinglin; YU Jie; HU Keyong; CHEN Qi: "Robot SLAM Implementation Based on the ORB Keyframe Matching Algorithm", Journal of Mechanical & Electrical Engineering, no. 05, pages 12 - 19 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114061488A (en) * 2021-11-15 2022-02-18 华中科技大学鄂州工业技术研究院 Object measuring method, system and computer readable storage medium
CN114061488B (en) * 2021-11-15 2024-05-14 华中科技大学鄂州工业技术研究院 Object measurement method, system and computer readable storage medium
CN115115621A (en) * 2022-08-24 2022-09-27 聊城市泓润能源科技有限公司 Lubricating oil pollution degree detection method based on image processing
CN115115621B (en) * 2022-08-24 2022-11-11 聊城市泓润能源科技有限公司 Lubricating oil pollution degree detection method based on image processing
CN115326805A (en) * 2022-10-12 2022-11-11 云南瀚哲科技有限公司 Image acquisition device and IBMR-based tobacco crop growth analysis method
CN116704497A (en) * 2023-05-24 2023-09-05 东北农业大学 Rape phenotype parameter extraction method and system based on three-dimensional point cloud
CN116704497B (en) * 2023-05-24 2024-03-26 东北农业大学 Rape phenotype parameter extraction method and system based on three-dimensional point cloud

Also Published As

Publication number Publication date
CN112200854B (en) 2023-10-17

Similar Documents

Publication Publication Date Title
CN112200854B (en) Leaf vegetable three-dimensional phenotype measuring method based on video image
CN109146948B (en) Crop growth phenotype parameter quantification and yield correlation analysis method based on vision
US10930065B2 (en) Three-dimensional modeling with two dimensional data
CN111724433B (en) Crop phenotype parameter extraction method and system based on multi-view vision
Jay et al. In-field crop row phenotyping from 3D modeling performed using Structure from Motion
Nielsen et al. Vision-based 3D peach tree reconstruction for automated blossom thinning
Santos et al. 3D plant modeling: localization, mapping and segmentation for plant phenotyping using a single hand-held camera
CN113112504A (en) Plant point cloud data segmentation method and system
Zhang et al. 3D monitoring for plant growth parameters in field with a single camera by multi-view approach
CN114359546B (en) Day lily maturity identification method based on convolutional neural network
Masuda Leaf area estimation by semantic segmentation of point cloud of tomato plants
CN110866975A (en) Multi-vision-based rape image acquisition device and three-dimensional feature extraction method
CN111429490A (en) Agricultural and forestry crop three-dimensional point cloud registration method based on calibration ball
Paturkar et al. 3D reconstruction of plants under outdoor conditions using image-based computer vision
CN115861409B (en) Soybean leaf area measuring and calculating method, system, computer equipment and storage medium
Paturkar et al. Non-destructive and cost-effective 3D plant growth monitoring system in outdoor conditions
Zhou et al. Individual tree crown segmentation based on aerial image using superpixel and topological features
CN115687850A (en) Method and device for calculating irrigation water demand of farmland
CN116862955A (en) Three-dimensional registration method, system and equipment for plant images
Amean et al. Automatic leaf segmentation and overlapping leaf separation using stereo vision
CN110866945A (en) Method for generating three-dimensional tree by automatic identification of oblique photography model
CN110689022A (en) Leaf matching-based image extraction method for each crop
CN112906719A (en) Standing tree factor measuring method based on consumption-level depth camera
CN115937151B (en) Method for judging curling degree of crop leaves
Dong et al. Three-dimensional quantification of apple phenotypic traits based on deep learning instance segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant