AU2019100886A4 - Plant organ image separation method and system - Google Patents
- Authority: AU (Australia)
- Legal status: Ceased (as listed by Google Patents; an assumption, not a legal conclusion)
- Classifications: Image Processing; Length Measuring Devices By Optical Means; Image Analysis
Abstract
The present invention discloses a plant organ image separation method and system. The separation method includes: acquiring an image of a to-be-measured plant captured by each camera; acquiring a three-dimensional point cloud under the perspective of each camera; unifying all three-dimensional point clouds into a global coordinate system for splicing, to obtain an initial three-dimensional point cloud of the to-be-measured plant; projecting the initial three-dimensional point cloud onto an OXY plane of the global coordinate system, to obtain a two-dimensional projection point image of the to-be-measured plant; locating stem and leaf regions of the to-be-measured plant in the two-dimensional projection point image by a plant image region segmentation model; acquiring a stem point cloud region and a leaf point cloud region of the to-be-measured plant in the initial three-dimensional point cloud, according to a corresponding relationship between the three-dimensional point cloud and a projection point in the global coordinate system; and performing organ point cloud segmentation of the to-be-measured plant by a K-means clustering algorithm, to obtain a three-dimensional point cloud region corresponding to each organ of the to-be-measured plant. The present invention can avoid a blind spot when data is spliced, thus improving the plant organ image separation accuracy.
Description
The present invention relates to the field of plant organ separation, and in particular, to a plant organ image separation method and system.
BACKGROUND
At present, three-dimensional scanning and measurement of plants directly uses point cloud data and relies on imaging software for modeling; the point cloud data is rarely classified and identified to separate plant organs. One prior study proposed a classification method targeting crowns, branches, trunks and the ground respectively; it is mainly suited to large-scale scene identification and cannot finely classify the organs of an individual tree. Another study constructed spatial distribution characteristics of scattered points and used a support vector machine (SVM) to classify the plant structure of vines, but the geometrical and morphological characteristics of leaf surfaces, branches and trunks were not fully considered. A further study used two types of classifiers, a proximal support vector machine (PSVM) and a proximal support vector machine via generalized eigenvalues (GEPSVM). Additionally, one study combined the geometrical and morphological characteristics of leaf surfaces, branches and trunks with the characteristics of a manifold structure to construct local tangent plane distribution characteristics, expecting to form a multi-dimensional integrated characteristic that improves the classification effect. However, these methods do not achieve fully automatic branch and leaf separation: at an earlier stage, some training samples need to be manually labeled, and the calibration of the training samples is restricted to some extent by human factors, resulting in low separation accuracy.
SUMMARY
An objective of the present invention is to provide a plant organ image separation method and system, to improve the plant organ image separation accuracy, and reduce the complexity of the separation process.
To achieve the above purpose, the present invention provides the following technical solutions.
2019100886 12 Aug 2019
A plant organ image separation method is provided, where the plant organ image separation method is applied to a plant organ image separation device, which includes: a scanner, a test rack, a test platform, and a computer; the scanner is fixed on the test rack, and the test rack can be moved up and down; the test platform is used for placing a to-be-measured plant; the scanner is used for scanning the to-be-measured plant along with the movement of the test rack to obtain plant images at different heights, and transmitting the plant images to the computer; the computer is used for separating the organs of the to-be-measured plant according to the plant images scanned by the scanner; the scanner includes a plurality of cameras and two linear laser emitters; the plurality of cameras are located on a first horizontal plane, and are distributed with a same spacing on a virtual arc; the two linear laser emitters are located on a second horizontal plane under the first horizontal plane; the upward sides of the two linear laser emitters respectively correspond to the cameras at the two endpoints of the virtual arc; laser lights emitted by the two linear laser emitters are located on a third horizontal plane parallel to the test platform; the third horizontal plane is parallel to both the first horizontal plane and the second horizontal plane; the shooting angles of the plurality of cameras are tilted downward;
the plant organ image separation method includes:
acquiring an image of the to-be-measured plant captured by each camera;
acquiring a three-dimensional point cloud under the perspective of each camera, according to calibrated parameters of each camera;
unifying all three-dimensional point clouds into a global coordinate system for splicing, to obtain an initial three-dimensional point cloud of the to-be-measured plant;
projecting the initial three-dimensional point cloud of the to-be-measured plant onto an OXY plane of the global coordinate system, to obtain a two-dimensional projection point image of the to-be-measured plant;
locating stem and leaf regions of the to-be-measured plant in the two-dimensional projection point image of the to-be-measured plant by a plant image region segmentation model;
acquiring a stem point cloud region and a leaf point cloud region of the to-be-measured plant in the initial three-dimensional point cloud of the to-be-measured plant, according to a corresponding relationship between the three-dimensional point cloud and a projection point in the global coordinate system; and performing organ point cloud segmentation of the to-be-measured plant in the stem point cloud region and the leaf point cloud region of the to-be-measured plant by a K-means clustering algorithm, to obtain a three-dimensional point cloud region corresponding to each organ of the to-be-measured plant.
Optionally, before the acquiring an image of the to-be-measured plant captured by each camera, the method further includes:
calibrating internal and external parameters of each camera to obtain the calibrated parameters of each camera; and
unifying a local coordinate system of each camera into the global coordinate system by a global correction method.
Optionally, the unifying all three-dimensional point clouds into a global coordinate system for splicing, to obtain an initial three-dimensional point cloud of the to-be-measured plant specifically includes:
extracting the centerline of a laser stripe by a gray centroid method that is based on a laser stripe skeleton;
acquiring a three-dimensional point cloud under the perspective of each camera, according to the calibrated parameters of each camera; and unifying all three-dimensional point clouds into the global coordinate system by the global correction method for splicing, to obtain the initial three-dimensional point cloud of the to-be-measured plant.
Optionally, the locating stem and leaf regions of the to-be-measured plant in the two-dimensional projection point image of the to-be-measured plant by a plant image region segmentation model specifically includes:
labeling the stem and leaf regions of the to-be-measured plant in the two-dimensional projection point image of the to-be-measured plant by the labeling tool labelme; and training a stem and leaf region detection model by a mask region-based convolutional neural network (Mask R-CNN) deep learning method, and using the stem and leaf region detection model to detect the two-dimensional projection point image of the to-be-measured plant, to obtain the stem and leaf regions of the to-be-measured plant.
A plant organ image separation system is provided, where the plant organ image separation system is applied to a plant organ image separation device, which includes: a scanner, a test rack, a test platform, and a computer; the scanner is fixed on the test rack, and the test rack can be moved up and down; the test platform is used for placing a to-be-measured plant; the scanner is used for scanning the to-be-measured plant along with the movement of the test rack to obtain plant images at different heights, and transmitting the plant images to the computer; the computer is used for separating the organs of the to-be-measured plant according to the plant images scanned by the scanner; the scanner includes a plurality of cameras and two linear laser emitters; the plurality of cameras are located on a first horizontal plane, and are distributed with a same spacing on a virtual arc; the two linear laser emitters are located on a second horizontal plane under the first horizontal plane; the upward sides of the two linear laser emitters respectively correspond to the cameras at the two endpoints of the virtual arc; laser lights emitted by the two linear laser emitters are located on a third horizontal plane parallel to the test platform; the third horizontal plane is parallel to both the first horizontal plane and the second horizontal plane;
the shooting angles of the plurality of cameras are tilted downward;
the plant organ image separation system includes:
a to-be-measured plant image acquisition module, for acquiring an image of the to-be-measured plant captured by each camera;
a three-dimensional point cloud acquisition module, for acquiring a three-dimensional point cloud under the perspective of each camera, according to calibrated parameters of each camera;
a splicing module, for unifying all three-dimensional point clouds into a global coordinate system for splicing, to obtain an initial three-dimensional point cloud of the to-be-measured plant;
a projection module, for projecting the initial three-dimensional point cloud of the to-be-measured plant onto an OXY plane of the global coordinate system, to obtain a two-dimensional projection point image of the to-be-measured plant;
a locating module, for locating stem and leaf regions of the to-be-measured plant in the two-dimensional projection point image of the to-be-measured plant by a plant image region segmentation model;
a plant organ three-dimensional point cloud region initial acquisition module, for acquiring a stem point cloud region and a leaf point cloud region of the to-be-measured plant in the initial three-dimensional point cloud of the to-be-measured plant, according to a corresponding relationship between the three-dimensional point cloud and a projection point in the global coordinate system; and a plant organ three-dimensional point cloud region determining module, for performing organ point cloud segmentation of the to-be-measured plant in the stem point cloud region and the leaf point cloud region of the to-be-measured plant by a K-means clustering algorithm, to obtain a three-dimensional point cloud region corresponding to each organ of the to-be-measured plant.
Optionally, the system further includes:
a camera calibration module, for calibrating internal and external parameters of each camera to obtain the calibrated parameters of each camera; and a global correction module, for unifying a local coordinate system of each camera into the global coordinate system by a global correction method.
Optionally, the splicing module specifically includes:
a laser stripe centerline extraction unit, for extracting the centerline of a laser stripe by a gray centroid method that is based on a laser stripe skeleton;
a three-dimensional point cloud acquisition unit, for acquiring a three-dimensional point cloud under the perspective of each camera, according to the calibrated parameters of each camera; and
a splicing unit, for unifying all three-dimensional point clouds into the global coordinate system by the global correction method for splicing, to obtain the initial three-dimensional point cloud of the to-be-measured plant.
Optionally, the locating module specifically includes:
a labeling unit, for labeling the stem and leaf regions of the to-be-measured plant in the two-dimensional projection point image of the to-be-measured plant by the labeling tool labelme; and a model training unit, for training a stem and leaf region detection model by a Mask R-CNN deep learning method, and using the stem and leaf region detection model to detect the two-dimensional projection point image of the to-be-measured plant, to obtain the stem and leaf regions of the to-be-measured plant.
According to specific embodiments provided in the present invention, the present invention discloses the following technical effects.
Through the cooperation of a plurality of cameras and laser emitters, the plant is scanned from a plurality of angles all around, avoiding blind spots when data is spliced; the system achieves higher efficiency, strong applicability and low cost for three-dimensional scanning of the plant; the point cloud data is more complete, and the reconstruction effect is better.
In order to complete three-dimensional plant organ separation, the three-dimensional points are projected to generate two-dimensional projection points, and triangulation network construction and lighting treatment are performed on the two-dimensional points to generate a two-dimensional projection image; stem and leaf regions are segmented in the two-dimensional projection image, and approximate stem and leaf point cloud regions are then automatically acquired from the three-dimensional point cloud according to the corresponding projection relationship between the three-dimensional points and the two-dimensional projection points; finally, point cloud segmentation of all the organs of the plant is realized by a K-means clustering algorithm [7]. In the whole three-dimensional organ separation process, three-dimensional space calculation is first converted into two-dimensional space calculation to obtain initially segmented stem and leaf point clouds, which reduces the complexity of point cloud segmentation performed directly on the three-dimensional point cloud.
Compared with a general three-dimensional point cloud segmentation method, the three-dimensional point cloud segmentation herein is based on the initially acquired three-dimensional stem and leaf point cloud regions, and is then based on all the initially segmented point cloud regions to carry out K-means clustering segmentation. The entire
three-dimensional point cloud segmentation is more efficient and more accurate than point cloud segmentation without prior knowledge.
In specific implementation, a high-performance two-dimensional stem and leaf detection model is trained on a small number of samples by a transfer learning method based on an image segmentation pre-training model; thus, good performance can be obtained from a finite set of labeled two-dimensional stem and leaf training samples. Three-dimensional corn plant samples are generated by an intelligent plant morphology modeling method based on gene expression programming, and a small number of finite samples are slightly perturbed for expansion. In this way, the corn plant samples generated by simulation based on the plant gene expression method are richer, more diverse, and closer to real samples.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
FIG. 1 is a schematic flow diagram of a plant organ image separation method according to the present invention;
FIG. 2 is a schematic structural diagram of a plant organ image separation system according to the present invention;
FIG. 3 is a schematic structural diagram of a plant organ image separation device according to the present invention;
FIG. 4 is a schematic flow diagram of a corn plant morphology modeling method based on gene expression programming according to a specific embodiment of the present invention; and
FIG. 5 is a schematic flow diagram of stem region detection model training by a mask region-based convolutional neural network (Mask R-CNN) deep learning method according to a specific embodiment of the present invention.
DETAILED DESCRIPTION OF EMBODIMENTS
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely a part rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall
fall within the protection scope of the present invention.
To make objectives, features, and advantages of the present invention more comprehensible, the following describes the present invention in more detail with reference to accompanying drawings and specific implementations.
FIG. 1 is a schematic flow diagram of a plant organ image separation method according to the present invention. As shown in FIG. 1, the plant organ image separation method includes the following steps:
step 100: acquiring an image of a to-be-measured plant captured by each camera;
step 200: acquiring a three-dimensional point cloud under the perspective of each camera, according to calibrated parameters of each camera;
step 300: unifying all three-dimensional point clouds into a global coordinate system for splicing, to obtain an initial three-dimensional point cloud of the to-be-measured plant;
step 400: projecting the initial three-dimensional point cloud of the to-be-measured plant onto an OXY plane of the global coordinate system, to obtain a two-dimensional projection point image of the to-be-measured plant;
step 500: locating stem and leaf regions of the to-be-measured plant in the two-dimensional projection point image of the to-be-measured plant by a plant image region segmentation model;
step 600: acquiring a stem point cloud region and a leaf point cloud region of the to-be-measured plant in the initial three-dimensional point cloud of the to-be-measured plant, according to a corresponding relationship between the three-dimensional point cloud and a projection point in the global coordinate system; and
step 700: performing organ point cloud segmentation of the to-be-measured plant in the stem point cloud region and the leaf point cloud region of the to-be-measured plant by a K-means clustering algorithm, to obtain a three-dimensional point cloud region corresponding to each organ of the to-be-measured plant.
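Steps 400 and 600 hinge on keeping the correspondence between each three-dimensional point and its two-dimensional projection. The patent gives no code for this, so the following is a minimal NumPy sketch under assumed names (`project_to_oxy`, `lift_region`) and an assumed grid resolution:

```python
import numpy as np

# Illustrative sketch (not the patented implementation): project a 3-D point
# cloud onto the OXY plane of the global coordinate system (step 400) and
# keep an index map so 2-D region labels can be lifted back to 3-D points
# (step 600). The grid resolution is a hypothetical parameter.

def project_to_oxy(cloud, resolution=0.005):
    """Return pixel coordinates for each 3-D point plus the index map."""
    xy = cloud[:, :2]                      # drop Z: orthographic projection
    origin = xy.min(axis=0)
    pix = np.floor((xy - origin) / resolution).astype(int)
    # For each occupied pixel, remember which 3-D points project into it.
    index_map = {}
    for i, p in enumerate(map(tuple, pix)):
        index_map.setdefault(p, []).append(i)
    return pix, index_map

def lift_region(region_pixels, index_map):
    """Collect the 3-D point indices whose projections fall in a 2-D region."""
    idx = []
    for p in region_pixels:
        idx.extend(index_map.get(tuple(p), []))
    return np.array(sorted(idx))
```

Once the stem and leaf regions are located in the projection image, `lift_region` returns the corresponding subsets of the initial three-dimensional point cloud.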
Before step 100, the method further includes:
calibrating internal and external parameters of each camera to obtain the calibrated parameters of each camera; and unifying a local coordinate system of each camera into the global coordinate system by a global correction method.
Step 300 specifically includes:
extracting the centerline of a laser stripe by a gray centroid method that is based on a laser stripe skeleton;
acquiring a three-dimensional point cloud under the perspective of each camera, according to the calibrated parameters of each camera; and
unifying all three-dimensional point clouds into the global coordinate system by the global correction method for splicing, to obtain the initial three-dimensional point cloud of the to-be-measured plant.
Step 500 specifically includes:
labeling stem and leaf regions of the to-be-measured plant in the two-dimensional projection point image of the to-be-measured plant by the labeling tool labelme; and training a stem and leaf region detection model by a mask region-based convolutional neural network (Mask R-CNN) deep learning method, and using the stem and leaf region detection model to detect the two-dimensional projection point image of the to-be-measured plant, to obtain the stem and leaf regions of the to-be-measured plant.
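labelme stores each labeled stem or leaf region as a polygon, and such polygons are typically rasterized to binary masks before Mask R-CNN training. As an illustration only (the patent does not specify this step), here is a pure-Python even-odd rasterizer; `point_in_polygon` and `polygon_to_mask` are hypothetical helper names:

```python
# Hedged sketch: rasterize a labelme-style polygon into a binary mask using
# the even-odd (ray casting) rule. Real pipelines usually rely on library
# utilities; this shows only the underlying geometry.

def point_in_polygon(x, y, poly):
    """Even-odd rule: count edge crossings of a horizontal ray from (x, y)."""
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # X coordinate where the edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def polygon_to_mask(poly, height, width):
    """Rasterize one polygon into a height x width binary mask (pixel centers)."""
    return [[1 if point_in_polygon(c + 0.5, r + 0.5, poly) else 0
             for c in range(width)] for r in range(height)]
```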
The present invention further provides a plant organ image separation system. FIG. 2 is a schematic structural diagram of the plant organ image separation system according to the present invention. As shown in FIG. 2, the plant organ image separation system includes the following structures:
a to-be-measured plant image acquisition module 201, for acquiring an image of a to-be-measured plant captured by each camera;
a three-dimensional point cloud acquisition module 202, for acquiring a three-dimensional point cloud under the perspective of each camera, according to calibrated parameters of each camera;
a splicing module 203, for unifying all three-dimensional point clouds into a global coordinate system for splicing, to obtain an initial three-dimensional point cloud of the to-be-measured plant;
a projection module 204, for projecting the initial three-dimensional point cloud of the to-be-measured plant onto an OXY plane of the global coordinate system, to obtain a two-dimensional projection point image of the to-be-measured plant;
a locating module 205, for locating stem and leaf regions of the to-be-measured plant in the two-dimensional projection point image of the to-be-measured plant by a plant image region segmentation model;
a plant organ three-dimensional point cloud region initial acquisition module 206, for acquiring a stem point cloud region and a leaf point cloud region of the to-be-measured plant in the initial three-dimensional point cloud of the to-be-measured plant, according to a corresponding relationship between the three-dimensional point cloud and a projection point in the global coordinate system; and a plant organ three-dimensional point cloud region determining module 207, for performing organ point cloud segmentation of the to-be-measured plant in the stem point cloud region and
the leaf point cloud region of the to-be-measured plant by a K-means clustering algorithm, to obtain a three-dimensional point cloud region corresponding to each organ of the to-be-measured plant.
The splicing module 203 specifically includes:
a laser stripe centerline extraction unit, for extracting the centerline of a laser stripe by a gray centroid method that is based on a laser stripe skeleton;
a three-dimensional point cloud acquisition unit, for acquiring a three-dimensional point cloud under the perspective of each camera, according to calibrated parameters of each camera; and a splicing unit, for unifying all three-dimensional point clouds into the global coordinate system by a global correction method for splicing, to obtain the initial three-dimensional point cloud of the to-be-measured plant.
The locating module 205 specifically includes:
a labeling unit, for labeling the stem and leaf regions of the to-be-measured plant in the two-dimensional projection point image of the to-be-measured plant by the labeling tool labelme; and a model training unit, for training a stem and leaf region detection model by a Mask R-CNN deep learning method, and using the stem and leaf region detection model to detect the two-dimensional projection point image of the to-be-measured plant, to obtain the stem and leaf regions of the to-be-measured plant.
The foregoing system further includes:
a camera calibration module, for calibrating internal and external parameters of each camera to obtain the calibrated parameters of each camera; and a global correction module, for unifying a local coordinate system of each camera into the global coordinate system by the global correction method.
The foregoing plant organ image separation method and plant organ image separation system are both applied to a plant organ image separation device. FIG. 3 is a schematic structural diagram of the plant organ image separation device according to the present invention. As shown in FIG. 3, the plant organ image separation device includes the following structures: a scanner, a test rack 3, a test platform 4, and a computer (not shown in the diagram). The scanner is fixed on the test rack 3, and the test rack 3 can be moved up and down; the test platform 4 is used for placing a to-be-measured plant; the scanner is used for scanning the to-be-measured plant along with the movement of the test rack 3 to obtain plant images at different heights, and transmitting the plant images to the computer; the computer is used for separating organs of the to-be-measured plant according to the plant images scanned by the scanner.
The scanner includes a plurality of cameras and two linear laser emitters 2-1 and 2-2; the plurality of cameras are located on a first horizontal plane, and are distributed with a same spacing on a virtual arc. In the diagram, three cameras are taken as an example, including a camera 1-1, a camera 1-2, and a camera 1-3.
The two linear laser emitters are located on a second horizontal plane under the first horizontal plane; the upward sides of the two linear laser emitters respectively correspond to the cameras at the two endpoints of the virtual arc; in the diagram, the upward side of the linear laser emitter 2-1 corresponds to the camera 1-1, and the upward side of the linear laser emitter 2-2 corresponds to the camera 1-3. Laser lights emitted by the two linear laser emitters are located on a third horizontal plane parallel to the test platform; the third horizontal plane is parallel to both the first horizontal plane and the second horizontal plane. The shooting angles of the plurality of cameras are tilted downward, so that when the laser lights emitted by the two linear laser emitters are located at the top of the to-be-measured plant, each camera can capture the image of the top of the to-be-measured plant. As green leaves have absorption peaks in the red and blue bands, the linear laser emitters can use a green laser for laser scanning.
In the present embodiment, the plant organ image separation device further includes: a movement control card and a stepping motor; the test rack 3 and the stepping motor are both connected to the movement control card to adjust the movement state of the test rack 3 through the stepping motor. In the present embodiment, the plant organ image separation device may further include a plurality of narrow-band filters, having a center wavelength of 532 nm; one of the narrow-band filters is correspondingly disposed on a lens of each camera.
The following further describes the solution of the present invention in combination with a specific embodiment.
The present specific embodiment was used to separate organs of a corn plant. The separation device was composed of three charge coupled device (CCD) industrial cameras, three filters, two linear laser emitters, a test rack, a test platform, and a computer. A green laser was used for laser scanning, and in order to increase the contrast ratio of a laser stripe for higher accuracy, a narrow-band filter having a center wavelength of 532 nm was added to a lens of each camera.
The linear lasers and the cameras were fixed to form a scanner and always kept their relative positions unchanged; the cameras were respectively fixed at three equally spaced positions on the rack. Linear laser lights emitted by the two lasers were maintained in a same plane parallel to the test platform; the cameras were tilted downward to keep a fixed angle with the lasers. The rack was linked to a movement control card, and a stepping motor was connected to the movement control card to move the rack up and down under LabVIEW program control.
Before the device was started, coordinate systems of the cameras needed to be unified; a global correction method was used to unify a local coordinate system of each camera into a global coordinate system, that is, to determine a rotation matrix and a translation matrix to transform the coordinate system of each camera into the global coordinate system.
In the present embodiment, three groups of scanners were installed. A camera calibration method and a laser stripe image based on a line-structured light were used to reconstruct a three-dimensional point cloud of the plant under the perspective of each camera. Herein, in order to organically splice the acquired three-dimensional point clouds, the coordinate system of the No. 1 camera was used as a reference coordinate system, and the coordinate systems of the other cameras were then globally corrected into the reference coordinate system. The global correction was to calculate the rotation and translation matrices that transform the coordinate systems of the other cameras into the reference coordinate system.
The coordinate systems of all the cameras had the following transformation relationship:
$$
\begin{bmatrix} X_r \\ Y_r \\ Z_r \\ 1 \end{bmatrix}
=
\begin{bmatrix} R & T \\ 0^{\mathrm{T}} & 1 \end{bmatrix}
\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}
\qquad (1)
$$
where R was the 3×3 rotation matrix between the two cameras, and T was the 3×1 translation vector between the two cameras.
Three-dimensional point cloud data obtained by all the cameras could be unified by calculating the rotation and translation matrices between each pair of cameras and then determining, via equation (1), the rotation and translation matrices between each camera coordinate system and the reference coordinate system.
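Equation (1) can be exercised numerically. The sketch below builds the 4×4 homogeneous matrix from an example rotation R and translation T (illustrative values, not calibrated ones) and maps a camera-frame point cloud into the reference frame:

```python
import numpy as np

# Illustrative application of equation (1): compose the homogeneous matrix
# [[R, T], [0, 1]] from a 3x3 rotation R and 3x1 translation T calibrated
# between a camera and the reference (No. 1) camera, then transform that
# camera's points into the reference coordinate system.

def homogeneous(R, T):
    """Build the 4x4 matrix [[R, T], [0^T, 1]] used in equation (1)."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = np.asarray(T).ravel()
    return M

def to_reference(points, R, T):
    """Transform an (N, 3) point cloud into the reference coordinate system."""
    M = homogeneous(R, T)
    homo = np.hstack([points, np.ones((len(points), 1))])   # append w = 1
    return (M @ homo.T).T[:, :3]
```

For example, a 90° rotation about Z plus a unit shift along X maps the camera-frame point (1, 0, 0) to (1, 1, 0) in the reference frame.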
After the device was started, the stepping motor drove the rack and the scanner to scan from top to bottom at a constant speed while the linear lasers projected a laser stripe onto the surface of the to-be-measured plant; the three cameras acquired images synchronously, triggered by a signal generator; the acquired image data was sent to the computer for real-time three-dimensional reconstruction; the three-dimensional reconstruction and organ point cloud separation of the whole corn plant were completed when the scanner reached the bottommost end. The specific process is as follows.
Step 1, three-dimensional reconstruction of the corn plant
1. Three-dimensional point cloud acquisition of the corn plant
Extract the center of a laser stripe by a gray centroid method that is based on a laser stripe skeleton, acquire a three-dimensional point cloud under the perspective of each camera by using calibrated parameters of each camera, and then adjust all three-dimensional point clouds into a unified coordinate system by a global correction method to complete splicing of the three-dimensional point clouds, thus obtaining an initial three-dimensional point cloud of the plant.
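The gray centroid step can be sketched as follows; the skeleton-based refinement mentioned in the text is omitted, and the intensity threshold is an assumption for illustration.

```python
import numpy as np

def stripe_centers_gray_centroid(image, threshold=50):
    """For each row of a grayscale image, estimate the laser stripe
    center as the intensity-weighted centroid (gray centroid) of the
    pixels above `threshold`. Returns (row, center_column) pairs."""
    centers = []
    for r, row in enumerate(np.asarray(image, dtype=float)):
        mask = row > threshold
        if not mask.any():
            continue  # no stripe pixels in this row
        cols = np.nonzero(mask)[0]
        weights = row[cols]
        centers.append((r, float((cols * weights).sum() / weights.sum())))
    return centers

# Synthetic 3-row image with a bright stripe near columns 1-3
img = np.array([[0, 10, 200, 210, 10],
                [0, 200, 220, 0, 0],
                [0, 0, 0, 0, 0]])
print(stripe_centers_gray_centroid(img))
```

The sub-pixel centers, combined with the calibrated camera parameters, are what triangulate into the per-camera point cloud.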
2. Point cloud de-noising
Remove an internal high-frequency point cloud from the spliced three-dimensional point clouds.
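The text does not specify the de-noising algorithm; a common choice with the same intent is statistical outlier removal, sketched here in brute-force numpy (the `k` and `std_ratio` parameters are assumptions).

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    """Statistical outlier removal: drop points whose mean distance to
    their k nearest neighbours is more than `std_ratio` standard
    deviations above the cloud-wide mean. Brute force; suitable only
    for small clouds."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # index 0 is the self-distance
    keep = mean_knn <= mean_knn.mean() + std_ratio * mean_knn.std()
    return pts[keep]

rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 3)) * 0.01          # dense plant-like cluster
cloud = np.vstack([cloud, [[5.0, 5.0, 5.0]]])     # one far-away noise point
print(remove_outliers(cloud).shape)
```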
Step 2, corn plant organ image separation
The organ point cloud separation method of the corn plant is as follows: firstly, project the acquired three-dimensional point cloud of the plant onto a YOZ plane to form a two-dimensional projection point of the corn plant, and construct a two-dimensional plant image by triangulation network construction and light treatment; secondly, locate stem and leaf regions in the two-dimensional plant image by an image region segmentation model of the corn plant; thirdly, obtain approximate stem and leaf point cloud regions by using a projection relationship between a three-dimensional point and the projection point; and at last, realize point cloud segmentation of all the organs of the plant by a K-means clustering algorithm.
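The projection step can be sketched as follows; the triangulation network construction and light treatment that turn the projection points into a filled image are omitted, and the grid resolution is an assumption. (The text names the YOZ plane here and the OXY plane elsewhere; the mechanics are identical with a different dropped axis.)

```python
import numpy as np

def project_to_yoz(points, resolution=0.01):
    """Project an (N, 3) point cloud onto the YOZ plane by dropping
    the X coordinate, then rasterise the 2-D projection points into a
    binary image. Also returns each 3-D point's pixel index -- the
    correspondence used later to map detected 2-D stem/leaf regions
    back onto the point cloud."""
    yz = np.asarray(points, dtype=float)[:, 1:3]
    pix = np.floor((yz - yz.min(axis=0)) / resolution).astype(int)
    img = np.zeros(pix.max(axis=0) + 1, dtype=np.uint8)
    img[pix[:, 0], pix[:, 1]] = 1
    return img, pix

pts = np.array([[0.30, 0.00, 0.00],
                [0.10, 0.02, 0.02],
                [0.70, 0.02, 0.04]])
img, pix = project_to_yoz(pts)
print(img.shape)
```

Keeping `pix` alongside the image is the key design point: any region mask drawn on `img` selects 3-D points directly by pixel index.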
(1) Three-dimensional model sample collection of the corn plant
Firstly, collect 50 corn plants of different growth cycles and morphologies; secondly, scan and reconstruct the 50 corn plants by the system herein; thirdly, respectively input the three-dimensional models of the 50 plants, and by an intelligent plant morphology modeling method based on gene expression programming, evolve the initial corn plants for 100 generations and simulate the three-dimensional models of superior individual corn plants, the three-dimensional simulation models of the entire corn plant totaling 3,951; and at last, collect the initial three-dimensional models of the 50 plants and the simulated three-dimensional models of the corn plant, there being a total of 4,001 three-dimensional model samples for the entire corn plant. FIG. 4 is a schematic flow diagram of a corn plant morphology modeling method based on gene expression programming according to the specific embodiment of the present invention.
In order to acquire more samples, the foregoing collected samples were all aligned into the global correction coordinate system, and then each corn plant model was slightly perturbed 5 times in the coordinate system to achieve a sample expansion purpose. The slight perturbation of the model refers to random rotation, scaling and movement centering on a centroid of the model. The rotation angle ranges from 0 to 5 degrees, the scaling size is 0.9 to 1.1 times the original model size, and the movement ranges from 0 to 0.1×L, L being the shortest side length of the whole model bounding box.
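The perturbation ranges above can be sketched as follows; the random axis of rotation and the sign convention of the movement are assumptions, since the text only gives magnitudes.

```python
import numpy as np

def perturb_model(points, rng):
    """Slightly perturb a plant model about its centroid for sample
    expansion: rotation of 0-5 degrees about a random axis, scaling by
    0.9-1.1, and translation of up to 0.1 * L, where L is the shortest
    side of the model's bounding box (ranges taken from the text)."""
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)
    # Rodrigues' formula: rotation of `angle` about unit vector `axis`
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = np.deg2rad(rng.uniform(0.0, 5.0))
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    scale = rng.uniform(0.9, 1.1)
    L = (pts.max(axis=0) - pts.min(axis=0)).min()
    shift = rng.uniform(-0.1 * L, 0.1 * L, size=3)
    return (pts - c) @ R.T * scale + c + shift

rng = np.random.default_rng(42)
model = rng.normal(size=(100, 3))          # stand-in for a plant model
expanded = [model] + [perturb_model(model, rng) for _ in range(5)]
print(len(expanded))
```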
After all the samples were collected, there were a total of 20,005 samples. The samples were divided into two parts: 90% of the samples were used as training samples for subsequent generation of two-dimensional stem and leaf regions, and 10% of the samples were used as test samples for subsequent two-dimensional stem and leaf region detection and for accuracy evaluation of the final organ separation.
(2) Sample labeling
Project three-dimensional point clouds of all the three-dimensional models onto an OYZ plane of the global coordinate system to form a two-dimensional projection point, perform triangulation network construction and light treatment on the two-dimensional projection point to form a two-dimensional corn plant image, and finally, label a stem region and a leaf region in each two-dimensional projection image of the corn plant by a labeling tool, labelme.
(3) Two-dimensional stem and leaf detection model
A Mask-RCNN deep learning method was used to train a stem and leaf region detection model. The main process is as follows: firstly, input the corn stem and leaf training samples, and load a pre-training model trained on the common objects in context (COCO) data set; and secondly, fine-tune the stem and leaf image region segmentation model of the corn plant by a transfer learning method. The training process of the image region segmentation model of the corn plant is shown in FIG. 5, which is a schematic flow diagram of stem and leaf region detection model training by the Mask-RCNN deep learning method.
Step 3, three-dimensional organ segmentation
Firstly, project a reconstructed three-dimensional point cloud model of the corn plant onto a YOZ plane, and construct a two-dimensional plant image by triangulation network construction and light treatment; secondly, perform two-dimensional image region segmentation of the corn plant by the stem and leaf detection model; thirdly, reversely calculate approximate stem and leaf point cloud regions in the three-dimensional model by using the detected information of the stem and leaf regions and a projection relationship between a two-dimensional point and a three-dimensional point; and at last, realize three-dimensional point cloud separation of all the organs by a K-means clustering algorithm, and classify and label each three-dimensional point, the label information including an index and a color.
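The final clustering step can be sketched with a plain K-means over the 3-D points. The seeding strategy is the point of the patent's design: initializing the cluster centres from the coarse stem/leaf regions recovered in step three (prior knowledge), rather than randomly. The synthetic blobs below are illustrative stand-ins for organ point clouds.

```python
import numpy as np

def kmeans(points, init_centers, iters=20):
    """Plain K-means over 3-D points, seeded with `init_centers`
    (here, centroids of the coarse stem/leaf regions)."""
    pts = np.asarray(points, dtype=float)
    centers = np.array(init_centers, dtype=float)
    for _ in range(iters):
        # assign every point to its nearest centre
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        # move each centre to the mean of its members
        for k in range(len(centers)):
            members = pts[labels == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return labels, centers

rng = np.random.default_rng(1)
stem = rng.normal([0.0, 0.0, 0.5], 0.05, size=(80, 3))   # stem-like blob
leaf = rng.normal([0.6, 0.0, 0.8], 0.05, size=(120, 3))  # leaf-like blob
cloud = np.vstack([stem, leaf])
# seed with the coarse region centroids from the 2-D detection step
labels, _ = kmeans(cloud, [stem.mean(axis=0), leaf.mean(axis=0)])
print(np.bincount(labels))
```

Each resulting cluster is then given its label information (an index and a color), as described above.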
The present embodiment can realize the following creative and beneficial effects.
1. Through the cooperation of a plurality of cameras and laser emitters, the plant was scanned from three angles in an all-round way to avoid a blind spot when data was spliced; the system had higher efficiency, strong applicability and low cost for the three-dimensional scanning of the plant; the point cloud data was more complete, and the reconstruction effect was better.
2. As green leaves have absorption peaks in the red and blue bands, we chose green lasers for laser scanning; and in order to increase the contrast ratio of the laser stripe for higher accuracy, we added a narrow-band filter having a center wavelength of 532 nm to the lens of each camera, thus simplifying the extraction of the line-structured laser stripe region and improving the central extraction accuracy of the line-structured laser stripe.
3. A high-performance two-dimensional stem and leaf detection model was trained on a small number of samples by a transfer learning method based on an image segmentation pre-training model; thus, a good performance effect could be obtained through a two-dimensional stem and leaf training sample that was finite and labeled.
4. Three-dimensional corn plant samples were generated by an intelligent plant morphology modeling method based on gene expression programming, and were slightly perturbed for expansion from a small number of finite samples. In this way, the corn plant samples generated by simulation based on the plant gene expression method were richer, more diverse, and closer to real samples.
5. In order to complete the three-dimensional organ separation of the plant, the three-dimensional points were projected to generate two-dimensional projection points, and triangulation network construction and light treatment were performed on the two-dimensional points to generate a two-dimensional projection image. Stem and leaf regions were segmented based on the two-dimensional projection image, and then approximate stem and leaf point cloud regions were automatically acquired from the three-dimensional point cloud according to the corresponding relationship between the two-dimensional projection points and the three-dimensional points. Finally, the point cloud segmentation of all the organs of the plant was realized by a K-means clustering algorithm [7]. In the whole three-dimensional organ separation process, the three-dimensional space calculation was first converted into a two-dimensional space calculation to obtain initially segmented stem and leaf point clouds, which reduced the complexity compared with point cloud segmentation performed directly on the three-dimensional point cloud.
6. Compared with a general three-dimensional point cloud segmentation method, the three-dimensional point cloud segmentation herein was based on the initially acquired three-dimensional stem and leaf point cloud regions, and was then based on all the initially segmented point cloud regions to carry out K-means clustering segmentation. The entire three-dimensional point cloud segmentation was more efficient and more accurate than point cloud segmentation without prior knowledge.
Each embodiment of the present specification is described in a progressive manner, each embodiment focuses on the difference from other embodiments, and the same and similar parts between the embodiments may refer to each other. For the system disclosed in the embodiments, since it corresponds to the method disclosed in the embodiments, the description is relatively simple, and reference can be made to the method description.
Several examples are used for illustration of the principles and implementation methods of the present invention. The description of the embodiments is merely used to help understand the method and core principles of the present invention. In addition, those skilled in the art can make various modifications in terms of specific embodiments and scope of application in accordance with the teachings of the present invention. In conclusion, the content of this specification shall not be construed as a limitation to the present invention.
Claims (5)
- What is claimed is: 1. A plant organ image separation method, wherein the plant organ image separation method is applied to a plant organ image separation device, which comprises: a scanner, a test rack, a test platform, and a computer; the scanner is fixed on the test rack, and the test rack can be moved up and down; the test platform is used for placing a to-be-measured plant; the scanner is used for scanning the to-be-measured plant along with the movement of the test rack to obtain plant images at different heights, and transmitting the plant images to the computer; the computer is used for separating the organs of the to-be-measured plant according to the plant images scanned by the scanner; the scanner comprises a plurality of cameras and two linear laser emitters; the plurality of cameras are located on a first horizontal plane, and are distributed with a same spacing on a virtual arc; the two linear laser emitters are located on a second horizontal plane under the first horizontal plane; the upward sides of the two linear laser emitters respectively correspond to the cameras at the two endpoints of the virtual arc; laser lights emitted by the two linear laser emitters are located on a third horizontal plane parallel to the test platform; the third horizontal plane is parallel to both the first horizontal plane and the second horizontal plane; the shooting angles of the plurality of cameras are tilted downward; the plant organ image separation method comprises: acquiring an image of the to-be-measured plant captured by each camera; acquiring a three-dimensional point cloud under the perspective of each camera, according to calibrated parameters of each camera; unifying all three-dimensional point clouds into a global coordinate system for splicing, to obtain an initial three-dimensional point cloud of the to-be-measured plant; preferably, wherein the unifying all three-dimensional point clouds into a global coordinate system for splicing, to obtain an
initial three-dimensional point cloud of the to-be-measured plant specifically comprises: extracting the centerline of a laser stripe by a gray centroid method that is based on a laser stripe skeleton; acquiring a three-dimensional point cloud under the perspective of each camera, according to the calibrated parameters of each camera; and unifying all three-dimensional point clouds into the global coordinate system by the global correction method for splicing, to obtain the initial three-dimensional point cloud of the to-be-measured plant; projecting the initial three-dimensional point cloud of the to-be-measured plant onto an OXY plane of the global coordinate system, to obtain a two-dimensional projection point image of the to-be-measured plant; locating stem and leaf regions of the to-be-measured plant in the two-dimensional projection point image of the to-be-measured plant by a plant image region segmentation model; preferably, wherein the locating stem and leaf regions of the to-be-measured plant in the two-dimensional projection point image of the to-be-measured plant by a plant image region segmentation model specifically comprises: labeling the stem and leaf regions of the to-be-measured plant in the two-dimensional projection point image of the to-be-measured plant by a labeling tool, labelme; and training a stem and leaf region detection model by a mask region-based convolutional neural network (Mask-RCNN) deep learning method, and using the stem and leaf region detection model to detect the two-dimensional projection point image of the to-be-measured plant, to obtain the stem and leaf regions of the to-be-measured plant; acquiring a stem point cloud region and a leaf point cloud region of the to-be-measured plant in the initial three-dimensional point cloud of the to-be-measured plant, according to a corresponding relationship between the three-dimensional point cloud and a projection point in the global coordinate system; and performing
organ point cloud segmentation of the to-be-measured plant in the stem point cloud region and the leaf point cloud region of the to-be-measured plant by a K-means clustering algorithm, to obtain a three-dimensional point cloud region corresponding to each organ of the to-be-measured plant.
- 2. The plant organ image separation method according to claim 1, before the acquiring an image of the to-be-measured plant captured by each camera, further comprising: calibrating internal and external parameters of each camera to obtain the calibrated parameters of each camera; and unifying a local coordinate system of each camera into the global coordinate system by a global correction method.
- 3. A plant organ image separation system, wherein the plant organ image separation system is applied to a plant organ image separation device, which comprises: a scanner, a test rack, a test platform, and a computer; the scanner is fixed on the test rack, and the test rack can be moved up and down; the test platform is used for placing a to-be-measured plant; the scanner is used for scanning the to-be-measured plant along with the movement of the test rack to obtain plant images at different heights, and transmitting the plant images to the computer; the computer is used for separating the organs of the to-be-measured plant according to the plant images scanned by the scanner; the scanner comprises a plurality of cameras and two linear laser emitters; the plurality of cameras are located on a first horizontal plane, and are distributed with a same spacing on a virtual arc; the two linear laser emitters are located on a second horizontal plane under the first horizontal plane; the upward sides of the two linear laser emitters respectively correspond to the cameras at the two endpoints of the virtual arc; laser lights emitted by the two linear laser emitters are located on a third horizontal plane parallel to the test platform; the third horizontal plane is parallel to both the first horizontal plane and the second horizontal plane; the shooting angles of the plurality of cameras are tilted downward; the plant organ image separation system comprises: a to-be-measured plant image acquisition module, for acquiring an image of the to-be-measured plant captured by each camera; a three-dimensional point cloud acquisition module, for acquiring a three-dimensional point cloud under the perspective of each camera, according to calibrated parameters of each camera; a splicing module, for unifying all three-dimensional point clouds into a global coordinate system for splicing, to obtain an initial three-dimensional point cloud of the to-be-measured
plant; a projection module, for projecting the initial three-dimensional point cloud of the to-be-measured plant onto an OXY plane of the global coordinate system, to obtain a two-dimensional projection point image of the to-be-measured plant; a locating module, for locating stem and leaf regions of the to-be-measured plant in the two-dimensional projection point image of the to-be-measured plant by a plant image region segmentation model; preferably, wherein the locating module specifically comprises: a labeling unit, for labeling the stem and leaf regions of the to-be-measured plant in the two-dimensional projection point image of the to-be-measured plant by a labeling tool, labelme; and a model training unit, for training a stem and leaf region detection model by a Mask-RCNN deep learning method, and using the stem and leaf region detection model to detect the two-dimensional projection point image of the to-be-measured plant, to obtain the stem and leaf regions of the to-be-measured plant; a plant organ three-dimensional point cloud region initial acquisition module, for acquiring a stem point cloud region and a leaf point cloud region of the to-be-measured plant in the initial three-dimensional point cloud of the to-be-measured plant, according to a corresponding relationship between the three-dimensional point cloud and a projection point in the global coordinate system; and a plant organ three-dimensional point cloud region determining module, for performing organ point cloud segmentation of the to-be-measured plant in the stem point cloud region and the leaf point cloud region of the to-be-measured plant by a K-means clustering algorithm, to obtain a three-dimensional point cloud region corresponding to each organ of the to-be-measured plant.
- 4. The plant organ image separation system according to claim 3, wherein the system further comprises: a camera calibration module, for calibrating internal and external parameters of each camera to obtain the calibrated parameters of each camera; and a global correction module, for unifying a local coordinate system of each camera into the global coordinate system by a global correction method.
- 5. The plant organ image separation system according to claim 3, wherein the splicing module specifically comprises: a laser stripe centerline extraction unit, for extracting the centerline of a laser stripe by a gray centroid method that is based on a laser stripe skeleton; a three-dimensional point cloud acquisition unit, for acquiring a three-dimensional point cloud under the perspective of each camera, according to the calibrated parameters of each camera; and a splicing unit, for unifying all three-dimensional point clouds into the global coordinate system by the global correction method for splicing, to obtain the initial three-dimensional point cloud of the to-be-measured plant.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910135760.6 | 2019-02-25 | ||
CN201910135760.6A CN109887020B (en) | 2019-02-25 | 2019-02-25 | Plant organ separation method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
AU2019100886A4 true AU2019100886A4 (en) | 2019-09-12 |
Family
ID=66929125
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU2019100886A Ceased AU2019100886A4 (en) | 2019-02-25 | 2019-08-12 | Plant organ image separation method and system |
Country Status (3)
Country | Link |
---|---|
CN (1) | CN109887020B (en) |
AU (1) | AU2019100886A4 (en) |
ZA (1) | ZA201906149B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110895810B (en) * | 2019-10-24 | 2022-07-05 | 中科院广州电子技术有限公司 | Chromosome image example segmentation method and device based on improved Mask RCNN |
CN110930424B (en) * | 2019-12-06 | 2023-04-18 | 深圳大学 | Organ contour analysis method and device |
CN111080612B (en) * | 2019-12-12 | 2021-01-01 | 哈尔滨市科佳通用机电股份有限公司 | Truck bearing damage detection method |
CN111487646A (en) * | 2020-03-31 | 2020-08-04 | 安徽农业大学 | Online detection method for corn plant morphology |
CN113112504B (en) * | 2021-04-08 | 2023-11-03 | 浙江大学 | Plant point cloud data segmentation method and system |
CN114812418B (en) * | 2022-04-25 | 2023-10-27 | 安徽农业大学 | Portable plant density and plant spacing measurement system |
CN117689823B (en) * | 2024-02-02 | 2024-05-14 | 之江实验室 | Plant three-dimensional model generation method and device based on splicing technology |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102184564A (en) * | 2011-04-13 | 2011-09-14 | 北京农业信息技术研究中心 | Facility horticulture plant three-dimensional reconstruction method based on double-scaling three-dimensional digitized data |
CN103065352B (en) * | 2012-12-20 | 2015-04-15 | 北京农业信息技术研究中心 | Plant three-dimensional reconstruction method based on image and scanning data |
CN104599318B (en) * | 2014-12-25 | 2017-08-04 | 北京农业信息技术研究中心 | A kind of method and system of the seamless fusion of plant three-dimensional model gridding |
US10008035B1 (en) * | 2015-05-18 | 2018-06-26 | Blue River Technology Inc. | System and method of virtual plant field modelling |
US20170053168A1 (en) * | 2015-08-21 | 2017-02-23 | Murgyver Consulting Ltd. | Method and system for the optimization of plant growth |
CN105675549B (en) * | 2016-01-11 | 2019-03-19 | 武汉大学 | A kind of Portable rural crop parameter measurement and growing way intellectual analysis device and method |
CN105844244A (en) * | 2016-03-28 | 2016-08-10 | 北京林业大学 | Fruit tree ratoon identifying and positioning method |
CN107687816B (en) * | 2017-08-22 | 2019-05-14 | 大连理工大学 | A kind of measurement method of the fit-up gap based on point cloud local feature extraction |
CN107869962B (en) * | 2017-10-31 | 2020-12-15 | 南京农业大学 | High-flux facility crop three-dimensional morphological information measuring system based on space carving technology |
-
2019
- 2019-02-25 CN CN201910135760.6A patent/CN109887020B/en active Active
- 2019-08-12 AU AU2019100886A patent/AU2019100886A4/en not_active Ceased
- 2019-09-18 ZA ZA2019/06149A patent/ZA201906149B/en unknown
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111768413A (en) * | 2020-05-29 | 2020-10-13 | 北京农业信息技术研究中心 | Plant three-dimensional point cloud segmentation method and system |
CN111768413B (en) * | 2020-05-29 | 2023-12-05 | 北京农业信息技术研究中心 | Plant three-dimensional point cloud segmentation method and system |
CN111811442A (en) * | 2020-07-14 | 2020-10-23 | 上海船舶工艺研究所(中国船舶工业集团公司第十一研究所) | Method for rapidly measuring and calculating planeness of large-area deck of ship |
CN116612129A (en) * | 2023-06-02 | 2023-08-18 | 清华大学 | Low-power consumption automatic driving point cloud segmentation method and device suitable for severe environment |
Also Published As
Publication number | Publication date |
---|---|
CN109887020B (en) | 2020-08-04 |
CN109887020A (en) | 2019-06-14 |
ZA201906149B (en) | 2022-02-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AU2019100886A4 (en) | Plant organ image separation method and system | |
Onishi et al. | An automated fruit harvesting robot by using deep learning | |
Chen et al. | A YOLOv3-based computer vision system for identification of tea buds and the picking point | |
CN109708578B (en) | Plant phenotype parameter measuring device, method and system | |
CN105069746B (en) | Video real-time face replacement method and its system based on local affine invariant and color transfer technology | |
CN113591766B (en) | Multi-source remote sensing tree species identification method for unmanned aerial vehicle | |
Silwal et al. | Apple identification in field environment with over the row machine vision system | |
Medeiros et al. | Modeling dormant fruit trees for agricultural automation | |
Karkee et al. | A method for three-dimensional reconstruction of apple trees for automated pruning | |
CN106846462B (en) | insect recognition device and method based on three-dimensional simulation | |
CN114119574B (en) | Picking point detection model construction method and picking point positioning method based on machine vision | |
Ge et al. | Three dimensional apple tree organs classification and yield estimation algorithm based on multi-features fusion and support vector machine | |
Li et al. | An improved binocular localization method for apple based on fruit detection using deep learning | |
CN113538666A (en) | Rapid reconstruction method for three-dimensional model of plant | |
Liu et al. | Development of a machine vision algorithm for recognition of peach fruit in a natural scene | |
CN116051783A (en) | Multi-view-based soybean plant three-dimensional reconstruction and shape analysis method | |
CN117409339A (en) | Unmanned aerial vehicle crop state visual identification method for air-ground coordination | |
CN116682106A (en) | Deep learning-based intelligent detection method and device for diaphorina citri | |
CN106886754B (en) | Object identification method and system under a kind of three-dimensional scenic based on tri patch | |
CN114926357B (en) | LED array light source pose self-correction method for computing microscopic imaging system | |
CN114677474A (en) | Hyperspectral three-dimensional reconstruction system and method based on SfM and deep learning and application | |
CN114494586B (en) | Lattice projection deep learning network broadleaf branch and leaf separation and skeleton reconstruction method | |
Xiang et al. | Measuring stem diameter of sorghum plants in the field using a high-throughput stereo vision system | |
CN115424247A (en) | Greenhouse tomato identification and detection method adopting CBAM and octave convolution to improve YOLOV5 | |
Zhong et al. | Identification and depth localization of clustered pod pepper based on improved Faster R-CNN |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FGI | Letters patent sealed or granted (innovation patent) | ||
MK22 | Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry |