CN116862955A - Three-dimensional registration method, system and equipment for plant images - Google Patents

Three-dimensional registration method, system and equipment for plant images

Info

Publication number
CN116862955A
CN116862955A (application number CN202310813414.5A)
Authority
CN
China
Prior art keywords
point cloud
cloud data
target plant
target
plant
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310813414.5A
Other languages
Chinese (zh)
Inventor
王敏娟
李桂鑫
郑立华
张漫
李寒
李民赞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Agricultural University
Original Assignee
China Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Agricultural University filed Critical China Agricultural University
Priority to CN202310813414.5A priority Critical patent/CN116862955A/en
Publication of CN116862955A publication Critical patent/CN116862955A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/40 Image enhancement or restoration using histogram techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a three-dimensional registration method, system and equipment for plant images, relating to the technical field of computer vision in artificial intelligence. The method comprises: extracting target plant point cloud data from three-dimensional point cloud images of a target plant to be measured taken at different shooting angles; determining a preliminary point cloud space mapping matrix of the target plant from two groups of target plant point cloud data at adjacent shooting angles, based on a voxel grid method and a Kd-Tree feature matching algorithm, and then determining a rotation matrix and a translation matrix between the source point cloud data and the target point cloud data of those two groups; constructing a point cloud registration conversion model with the minimum registration error between the source and target point cloud data as the objective, solving it to obtain the optimal rotation matrix and translation matrix, and then determining the final point cloud space mapping matrix of the target plant. The method realizes three-dimensional registration of plant images rapidly and accurately.

Description

Three-dimensional registration method, system and equipment for plant images
Technical Field
The invention relates to the technical field of computer vision in artificial intelligence, and in particular to a three-dimensional registration method, system and equipment for plant images.
Background
Smart agriculture takes information and knowledge as its core elements and realizes a new mode of agricultural production through the fusion of multiple disciplines; it is the advanced stage in the development of agricultural informatization from digitization to intelligence. With the vigorous development of research on and application of smart agriculture, plant visualization research has also gradually advanced. Because plants are complex organisms, 3D images carry more information than 2D images, and a three-dimensional model can reflect the morphological structure of a plant throughout its growth and development, so rapid data acquisition and accurate reconstruction of three-dimensional plant models have long been research hot spots in botany and computer graphics. Establishing an accurate three-dimensional static morphological model of a plant therefore benefits research related to plant spatial structure and is an important aspect of work on virtual plants, plant modeling and similar problems. However, plant leaves have complex shapes and occlude one another severely, so a single shot from ordinary equipment cannot capture the complete morphological structure of the leaves. Acquiring complete plant phenotype information non-destructively and studying crop growth are therefore important development trends in the field of plant phenotyping research.
Machine vision uses a computer to simulate human visual function: it extracts information from image sequences of objective things, processes and understands that information, and finally applies it to detection, measurement and control. Machine vision detects in a non-invasive way without damaging the target crops and, compared with chemical or physical measurement methods, is real-time, fast and low-cost. The three-dimensional morphology of plants can therefore be constructed and analyzed by machine vision methods. As three-dimensional data is applied in more and more scenarios, three-dimensional data acquisition technology is also maturing. Three-dimensional laser scanning suits medium and large objects and achieves high precision, but it is easily affected by the surrounding environment, restores local detail features poorly, and has low restoration and resolution of texture colors. Raster projection scanning suits small and medium objects and achieves high precision and detail restoration, but its data completeness is poor for highly reflective materials and the resulting three-dimensional model data volume is large. Photogrammetry is low-cost and has a small mean projection error, but the completeness and detail restoration of the acquired data are low.
Point cloud registration is the task of relatively transforming and aligning two or more point clouds acquired with a scanner, and it plays a vital role in many applications such as lidar, three-dimensional reconstruction and mapping. Unlike the conventional image matching problem, the sparsity of point clouds often makes it impossible to find two exactly matching points in the source and target point clouds. Moreover, the same object looks quite different when observed by the scanner from different viewing angles, which increases the difficulty of feature extraction.
Disclosure of Invention
The invention aims to provide a three-dimensional registration method, a system and equipment for plant images, which can rapidly and accurately realize three-dimensional registration of plant images.
In order to achieve the above object, the present invention provides the following solutions:
in a first aspect, the present invention provides a method for three-dimensional registration of plant images, comprising:
acquiring three-dimensional point cloud images of a target plant to be detected under different shooting visual angles; each three-dimensional point cloud image comprises target plant point cloud data and background point cloud data;
extracting the target plant point cloud data from each three-dimensional point cloud image;
determining a preliminary point cloud space mapping matrix of the target plant according to two groups of target plant point cloud data under adjacent shooting view angles based on a voxel grid method and a Kd-Tree feature matching algorithm;
determining a rotation matrix and a translation matrix between source point cloud data and target point cloud data in two groups of target plant point cloud data under adjacent shooting angles based on the preliminary point cloud space mapping matrix of the target plant;
constructing a point cloud registration conversion model from the rotation matrix and translation matrix between the source point cloud data and the target point cloud data, taking the minimum registration error between the source point cloud data and the target point cloud data as the objective;
solving the point cloud registration conversion model to obtain an optimal rotation matrix and an optimal translation matrix;
and determining the final point cloud space mapping matrix of the target plant based on the optimal rotation matrix and the optimal translation matrix.
Optionally, based on a voxel grid method and a Kd-Tree feature matching algorithm, determining a preliminary point cloud space mapping matrix of the target plant according to two groups of target plant point cloud data under adjacent shooting angles, wherein the method specifically comprises the following steps:
downsampling to construct a first voxel grid and a second voxel grid based on target plant point cloud data under adjacent shooting angles;
calculating a first characteristic histogram characteristic of the target plant based on the first voxel grid; calculating a second feature histogram feature of the target plant based on the second voxel grid;
and matching the first characteristic histogram characteristic of the target plant with the second characteristic histogram characteristic of the target plant based on a Kd-Tree characteristic matching algorithm to obtain a preliminary point cloud space mapping matrix of the target plant.
Optionally, calculating the first feature histogram feature of the target plant based on the first voxel grid specifically includes:
for a key point corresponding to any voxel in the first voxel grid, calculating the surface normal of the key point;
determining the neighborhood radius of the key point;
determining a plurality of ternary characteristic histogram characteristic groups in the range of the neighborhood radius by taking the key points as the origins; the ternary characteristic histogram characteristic group comprises the product of the distance between the key point and any adjacent key point in the range of the neighborhood radius, the angle difference of the surface normal and the surface normal module length;
determining a plurality of ternary characteristic histogram characteristic groups of adjacent key points in the neighborhood radius range aiming at any adjacent key point in the neighborhood radius range;
and carrying out weighted calculation on the plurality of ternary characteristic histogram characteristic groups in the range of the neighborhood radius to obtain the first characteristic histogram characteristic of the target plant.
In a second aspect, the present invention provides a three-dimensional registration system for plant images, comprising:
the point cloud data acquisition module is used for acquiring three-dimensional point cloud images of the target plant to be detected under different shooting visual angles; each three-dimensional point cloud image comprises target plant point cloud data and background point cloud data;
the point cloud data extraction module is used for extracting the target plant point cloud data from each three-dimensional point cloud image;
the preliminary mapping registration module is used for determining a preliminary point cloud space mapping matrix of the target plant according to two groups of target plant point cloud data under adjacent shooting view angles based on a voxel grid method and a Kd-Tree feature matching algorithm;
the matrix conversion module is used for determining a rotation matrix and a translation matrix between the source point cloud data and the target point cloud data in two groups of target plant point cloud data under adjacent shooting angles based on the preliminary point cloud space mapping matrix of the target plant;
the optimization model construction module is used for constructing a point cloud registration conversion model according to a rotation matrix and a translation matrix between the source point cloud data and the target point cloud data by taking the minimum registration error between the source point cloud data and the target point cloud data as a target;
the optimization model solving module is used for solving the point cloud registration conversion model to obtain an optimal rotation matrix and an optimal translation matrix;
and the final registration module is used for determining the final point cloud space mapping matrix of the target plant based on the optimal rotation matrix and the translation matrix.
In a third aspect, the invention provides an electronic device comprising a memory for storing a computer program and a processor that runs the computer program to cause the electronic device to perform a method of three-dimensional registration of plant images.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention discloses a three-dimensional registration method, a system and equipment for plant images, which are used for extracting target plant point cloud data from three-dimensional point cloud images of a target plant to be detected under different shooting visual angles, so that the processing of subsequent data is facilitated, the calculated amount is reduced, and the three-dimensional registration speed is improved. Determining a preliminary point cloud space mapping matrix of the target plant according to two groups of target plant point cloud data under adjacent shooting angles based on a voxel grid method and a Kd-Tree feature matching algorithm, wherein the voxel grid method and the Kd-Tree feature matching algorithm are adopted, so that the registration efficiency of the three-dimensional point cloud of the lettuce can be improved while the point cloud features are maintained, and the registration precision is ensured; further determining a rotation matrix and a translation matrix between the source point cloud data and the target point cloud data in two groups of target plant point cloud data under adjacent shooting view angles; constructing a point cloud registration conversion model by taking the minimum registration error between the source point cloud data and the target point cloud data as a target, and solving to obtain an optimal rotation matrix and translation matrix; and finally, determining the final point cloud space mapping matrix of the target plant based on the optimal rotation matrix and the optimal translation matrix. By setting the preliminary point cloud space mapping matrix and the final point cloud space mapping matrix, the three-dimensional registration accuracy of plant images under different shooting angles can be ensured.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a three-dimensional registration method of plant images of the present invention;
fig. 2 is a schematic structural diagram of a three-dimensional registration system of plant images of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort fall within the scope of protection of the present invention.
The invention provides a three-dimensional registration method, system and equipment for plant images in which point cloud data from multiple views are fused by a multi-view three-dimensional reconstruction algorithm, so that the point clouds of common parts correspond to one another and the point clouds of non-common parts complement one another, realizing superposition-type registration.
In order that the above objects, features and advantages of the present invention become more readily apparent, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Example 1
As shown in fig. 1, the present invention provides a three-dimensional registration method of plant images, comprising:
step 100, acquiring three-dimensional point cloud images of a target plant to be detected under different shooting visual angles; each three-dimensional point cloud image comprises target plant point cloud data and background point cloud data.
In a specific embodiment, the target plant to be measured is lettuce in its growing period. In a greenhouse environment suitable for lettuce growth, the lettuce is grown hydroponically with Hoagland nutrient solution. In different growth periods of the lettuce, with black light-absorbing cloth as the background, three-dimensional image data of the lettuce at different shooting angles are acquired with an Azure Kinect depth camera and an automatic turntable. The specific setup is as follows: the hydroponic basin with the lettuce is placed on the automatic turntable, and the Azure Kinect camera is fixed on a tripod positioned directly in front of the turntable. As the turntable rotates, three-dimensional point cloud images of the lettuce at different shooting angles are obtained.
Step 200: extracting the target plant point cloud data from each three-dimensional point cloud image. Specifically, the target plant point cloud data are first separated from the background point cloud data with CloudCompare; outlier noise in the target plant point cloud data is then detected and deleted using the average distance between point clouds and a filtering algorithm.
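As an illustration of the filtering step, the following is a minimal sketch of statistical outlier removal based on the average distance to neighbouring points, using NumPy and SciPy; the neighbour count k and the std_ratio cut-off are illustrative assumptions, since the embodiment does not name specific parameters.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outlier_noise(points, k=16, std_ratio=1.0):
    """Drop points whose mean distance to their k nearest neighbours is unusually large.

    points: (N, 3) array of target plant point coordinates.
    A point is kept if its mean neighbour distance stays below the global mean
    plus std_ratio standard deviations, a common average-distance filter.
    """
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)        # first column is the point itself (distance 0)
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]
```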
Step 300, determining a preliminary point cloud space mapping matrix of the target plant according to two groups of target plant point cloud data under adjacent shooting angles based on a voxel grid method and a Kd-Tree feature matching algorithm.
Step 300 specifically includes:
(1) Based on the target plant point cloud data at adjacent shooting view angles, downsample to construct a first voxel grid and a second voxel grid, so that the point clouds are simplified while their features are maintained and loss of feature information is prevented.
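A minimal NumPy sketch of this voxel grid downsampling is given below; replacing the points of each occupied voxel by their centroid is an assumption, since the embodiment does not fix how the representative point of a voxel is chosen.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Quantise points into a voxel grid and keep one centroid per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)       # voxel index of every point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)   # map each point to its voxel id
    counts = np.bincount(inverse).astype(float)
    centroids = np.empty((len(counts), 3))
    for dim in range(3):                                        # centroid of the points in each voxel
        centroids[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return centroids
```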
(2) Calculate a first feature histogram feature of the target plant based on the first voxel grid, and a second feature histogram feature of the target plant based on the second voxel grid. The two features are calculated in the same way; calculating the first feature histogram feature of the target plant based on the first voxel grid specifically comprises:
21) For the key point corresponding to any voxel in the first voxel grid, calculate the surface normal n of the key point. Specifically, the PCL point cloud library provides an algorithm for computing point cloud normals: after the search radius is given, the normal estimation object uses a KD-Tree to find the nearest neighbouring points and computes the normal of each point.
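A NumPy/SciPy sketch of this kind of normal estimation (principal component analysis over the k nearest neighbours found with a Kd-Tree, the same idea the PCL normal-estimation object applies) is shown below; the neighbour count k is an illustrative assumption, and the code is not the PCL implementation itself.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=20):
    """Surface normal per point: eigenvector of the local covariance with the smallest eigenvalue."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)              # k nearest neighbours of every point
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)              # 3x3 covariance of the neighbourhood
        _, eigvecs = np.linalg.eigh(cov)
        normals[i] = eigvecs[:, 0]                # smallest-eigenvalue direction approximates the normal
    return normals
```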
22) Determine the neighborhood radius r of the key point.
23) With the key point as the origin, determine a plurality of ternary feature histogram feature groups within the neighborhood radius; the ternary feature histogram feature group includes the product of the distance between the key point and any adjacent key point within the neighborhood radius, the angle difference of the surface normals, and the surface normal modulus.
In practical use, the two groups of target plant point cloud data at adjacent shooting angles are defined as the source point cloud data and the target point cloud data respectively. For a point pair p_i, p_j (i ≠ j) in the neighborhood, with normals n_i, n_j, the coordinate axes of a local coordinate system are calculated as follows:

u = n_i, v = u × (p_j − p_i) / ‖p_j − p_i‖, w = u × v,

where u is the coordinate axis constructed along the normal direction of the source point cloud data, v is the longitudinal coordinate axis orthogonal to u, and w is the transverse coordinate axis orthogonal to u.

The angular transformation of the surface normals can then be determined by:

α = v · n_j, φ = u · (p_j − p_i) / ‖p_j − p_i‖, θ = arctan(w · n_j, u · n_j),

where α characterizes the angle between the target normal vector n_j and the v axis, φ characterizes the angle between the source normal vector n_i and the line connecting it to the point of the point cloud to be registered, and θ characterizes the angle between the projection of the target normal vector n_j onto the uw plane and the u axis.
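For illustration, the local frame and angular features reconstructed above can be evaluated for a single point pair with the following NumPy sketch; the function name pair_features is hypothetical, and α and φ are returned as cosines and θ via arctan2, as is usual in point-feature-histogram implementations.

```python
import numpy as np

def pair_features(p_i, n_i, p_j, n_j):
    """Darboux frame (u, v, w) and the ternary features for one point pair."""
    d = p_j - p_i
    dist = np.linalg.norm(d)
    u = n_i / np.linalg.norm(n_i)             # axis along the source normal
    v = np.cross(u, d / dist)                 # longitudinal axis orthogonal to u
    v /= np.linalg.norm(v)
    w = np.cross(u, v)                        # transverse axis orthogonal to u and v
    alpha = np.dot(v, n_j)                    # angle (cosine) between n_j and v
    phi = np.dot(u, d / dist)                 # angle (cosine) between n_i and the connecting line
    theta = np.arctan2(np.dot(w, n_j), np.dot(u, n_j))  # angle of n_j's projection in the uw plane
    return alpha, phi, theta, dist
```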
24) For each adjacent key point within the neighborhood radius, determine a plurality of ternary feature histogram feature groups within that point's own neighborhood radius. That is, a k-neighborhood is determined for each point in the k-neighborhood of the key point, and its SPFH (simplified point feature histogram) is formed by the steps above.
When the triples are calculated, not only the triples between the key point and the adjacent points in its neighborhood are considered, but also the adjacent points of those adjacent points. This enlarges the range over which triple information is gathered, alleviates the problem of insufficient precision, and at the same time markedly improves computational efficiency.
25) Perform a weighted calculation on the plurality of ternary feature histogram feature groups within the neighborhood radius to obtain the first feature histogram feature of the target plant. Specifically, the first feature histogram feature of the target plant is calculated according to the formula

FPFH(p) = SPFH(p) + (1/k) · Σ (1/w_k) · SPFH(p_k),

where the sum runs over the k adjacent key points p_k, FPFH(p) denotes the first feature histogram feature of the target plant, SPFH(p) denotes the ternary feature histogram feature group of the key point p, k denotes the number of adjacent key points, w_k denotes the weight, characterized by the distance between the key point p and the adjacent key point p_k, and SPFH(p_k) denotes the ternary feature histogram feature group of the adjacent key point p_k.
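A direct transcription of this weighting formula into NumPy might look like the sketch below; the function name and array shapes (one histogram row per neighbour) are assumptions made for illustration.

```python
import numpy as np

def fpfh(spfh_p, spfh_neighbors, distances):
    """FPFH(p) = SPFH(p) + (1/k) * sum_k (1/w_k) * SPFH(p_k).

    spfh_p: (bins,) histogram of the key point p
    spfh_neighbors: (k, bins) histograms of its k adjacent key points
    distances: (k,) distances w_k from p to each adjacent key point
    """
    spfh_neighbors = np.asarray(spfh_neighbors, dtype=float)
    weights = 1.0 / np.asarray(distances, dtype=float)    # 1 / w_k
    k = len(weights)
    return np.asarray(spfh_p, dtype=float) + (weights[:, None] * spfh_neighbors).sum(axis=0) / k
```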
(3) Match the first feature histogram feature of the target plant with the second feature histogram feature of the target plant based on the Kd-Tree feature matching algorithm to obtain the preliminary point cloud space mapping matrix of the target plant. The FPFH feature values are matched with the Kd-Tree feature matching algorithm, a space mapping matrix between the two point clouds is obtained preliminarily, and a preliminary, fast global registration of the point clouds is realized.
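The Kd-Tree matching of FPFH feature values can be sketched with SciPy's cKDTree as follows; the 33-bin descriptor size and the helper name match_fpfh are assumptions, and the resulting correspondences would then be used to estimate the preliminary space mapping matrix (for example with the RANSAC procedure described next).

```python
import numpy as np
from scipy.spatial import cKDTree

def match_fpfh(source_fpfh, target_fpfh):
    """Nearest-neighbour matching of FPFH descriptors in feature space.

    source_fpfh: (N, 33) array, target_fpfh: (M, 33) array.
    Returns an (N, 2) array of index pairs (i, j): target descriptor j is the
    nearest neighbour of source descriptor i in the Kd-Tree built over targets.
    """
    tree = cKDTree(target_fpfh)
    _, nearest = tree.query(source_fpfh, k=1)
    return np.column_stack([np.arange(len(source_fpfh)), nearest])
```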
In one specific example, to further improve the accuracy of the preliminary point cloud space mapping matrix of the target plant, a Random Sample Consensus (RANSAC) algorithm is used to reject mismatched point pairs. RANSAC is an iterative algorithm that correctly estimates the parameters of a mathematical model from a set of data containing outliers. Outliers include noise points in the data and, in three-dimensional point cloud registration, mismatched correspondences; inliers, in contrast, are the data that fit the model parameters. RANSAC is a non-deterministic algorithm: it produces a correct result only with a certain probability, and that probability increases as the number of iterations increases.
The specific steps of the algorithm in the point cloud registration process are as follows: 1) Randomly select three corresponding point pairs from the corresponding three-dimensional point sets and solve for their rigid body transformation matrix. 2) Compute the distance error of the remaining point pairs under this transformation matrix and compare it with a preset threshold to judge whether each pair is an inlier or an outlier: if the error is smaller than the threshold, the pair is an inlier; otherwise it is an outlier. 3) Repeat the above steps until the upper limit on the number of iterations is reached, count the number of inliers under the different rigid body transformation models, take the model with the largest number of inliers as the optimal model, remove the outliers and keep the inliers in the samples for the subsequent point cloud registration operation.
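A compact sketch of this procedure is shown below, with the rigid transform of each three-pair sample solved in closed form by SVD; the iteration count, inlier threshold and function names are illustrative assumptions rather than the parameters of the embodiment.

```python
import numpy as np

def rigid_from_pairs(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points (SVD / Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def ransac_rigid(src, dst, iters=1000, threshold=0.01, rng=np.random.default_rng(0)):
    """Sample 3 corresponding pairs, fit a rigid transform, count inliers, keep the best model."""
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        sample = rng.choice(len(src), size=3, replace=False)
        R, t = rigid_from_pairs(src[sample], dst[sample])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = err < threshold             # distance error below the preset threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on all inliers of the best model; outliers are discarded
    return rigid_from_pairs(src[best_inliers], dst[best_inliers]), best_inliers
```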
The registration accuracy is represented and evaluated by the distance error after registration of the two point clouds:

d_i = sqrt( (x_i − x′_i)² + (y_i − y′_i)² + (z_i − z′_i)² ).

The overall registration distance error is expressed as:

d = (1/n) · Σ_{i=1}^{n} d_i,

where x_i, y_i, z_i are the three-dimensional coordinates of point i of the source point cloud, x′_i, y′_i, z′_i are the three-dimensional coordinates of the corresponding point i′ of the target point cloud, and n is the number of corresponding points.
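These two measures translate directly into NumPy, assuming source and target are arrays of corresponding points after registration:

```python
import numpy as np

def registration_error(source, target):
    """Per-pair Euclidean distance d_i and the overall mean distance error."""
    d = np.linalg.norm(source - target, axis=1)
    return d, d.mean()
```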
The advantage of the RANSAC algorithm is that it does not require a good initial estimate in advance; instead, it randomly samples groups of points from the data set for fitting, avoiding dependence on an initial estimate. Through repeated random sampling and model fitting, the model producing the largest number of inliers is recorded as the best model. After a sufficient number of iterations the RANSAC algorithm converges and returns optimal model parameters that are supported only by inliers and can be regarded as a good estimate for the data set. The algorithm therefore performs well on noisy data sets with abnormal points when it is used to purify the mapping relation between the two point clouds in the initial point cloud registration, yielding a better initial position.
Step 400: determining, based on the preliminary point cloud space mapping matrix of the target plant, a rotation matrix and a translation matrix between the source point cloud data and the target point cloud data in the two groups of target plant point cloud data at adjacent shooting angles. The formula is

P_1 = R_1 · Q_0 + T_1,

where P_1 and Q_0 denote points in the point cloud P and the point cloud Q, respectively.

The T_1 and R_1 obtained above are used to transform the coordinates of Q_1, giving a new transformed point set Q_2:

Q_2 = R_1 · Q_1 + T_1.

The transformation by the rotation matrix and the translation matrix is repeated and computed iteratively:

P_m = R_m · Q_{m−1} + T_m,

Q_{m+1} = R_m · Q_m + T_m,

where m denotes the iteration number of the corresponding point cloud transformation.
Step 500: constructing a point cloud registration conversion model from the rotation matrix and translation matrix between the source point cloud data and the target point cloud data, taking the minimum registration error between the source point cloud data and the target point cloud data as the objective. The objective function in the point cloud registration conversion model is

f(R, T) = (1/N_p) · Σ_{i=1}^{N_p} ‖P_i − (R · Q_i + T)‖²,

where f(R, T) denotes the value of the objective function, N_p denotes the number of points in the target point cloud data, Q_i denotes the i-th source point cloud datum, P_i denotes the i-th target point cloud datum, R denotes the rotation matrix and T denotes the translation matrix.
Step 600: solving the point cloud registration conversion model to obtain the optimal rotation matrix and the optimal translation matrix. Specifically, iteration stops when the objective function reaches its minimum, yielding the mean square error d_{m+1}. An iteration convergence threshold τ (τ > 0) is set; when the mean square errors of two adjacent iterations satisfy d_m − d_{m+1} < τ, the iteration stops, otherwise the iterative calculation is repeated until the requirement is met.
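Steps 400 to 600 together amount to an iterative-closest-point style loop. The sketch below pairs a Kd-Tree correspondence search with the closed-form SVD solution of the objective function and the convergence test d_m − d_{m+1} < τ; the default values of tau and max_iter are assumptions, and this is a minimal sketch rather than the patent's own implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def fine_registration(source, target, tau=1e-6, max_iter=50):
    """Refine the alignment of source (N,3) onto target (M,3); returns the accumulated R, T."""
    src = source.copy()
    R_total, T_total = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    prev_mse = np.inf
    for _ in range(max_iter):
        dists, idx = tree.query(src, k=1)             # nearest-neighbour correspondences
        matched = target[idx]
        cs, cm = src.mean(axis=0), matched.mean(axis=0)
        H = (src - cs).T @ (matched - cm)
        U, _, Vt = np.linalg.svd(H)                   # closed-form solution of min f(R, T)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:
            Vt[-1] *= -1
            R = Vt.T @ U.T
        T = cm - R @ cs
        src = src @ R.T + T                           # Q_{m+1} = R_m * Q_m + T_m
        R_total, T_total = R @ R_total, R @ T_total + T
        mse = np.mean(dists ** 2)
        if abs(prev_mse - mse) < tau:                 # d_m - d_{m+1} < tau
            break
        prev_mse = mse
    return R_total, T_total
```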
Step 700: determining the final point cloud space mapping matrix of the target plant based on the optimal rotation matrix and the optimal translation matrix.
In another specific application example, a specific process of acquiring three-dimensional point cloud images of the lettuce to be detected under 8 different shooting angles and performing three-dimensional registration on the three-dimensional point cloud images is as follows:
(1) In a greenhouse under LED illumination, with black light-absorbing cloth as the background, three-dimensional image data of lettuce in different growth periods are acquired with an Azure Kinect depth camera and an automatic turntable. The tripod center is 78 cm above the ground and 76 cm horizontally from the center of the turntable. The turntable is programmed to rotate 45° at a time while the camera automatically acquires point cloud images; 3 frames of point cloud images are acquired at each viewing angle, so a full 360° rotation yields 24 frames of point cloud images over 8 viewing angles. The camera used is an Azure Kinect DK, with a 1-megapixel advanced depth camera, a 360° microphone array, a 12-megapixel full high-definition camera and an orientation sensor.
(2) The plant is separated from the acquisition background with CloudCompare to extract the target plant point cloud, and outlier noise in the leaf data is detected and deleted using the average distance between point clouds and a filtering algorithm.
(3) After the denoised point cloud data are obtained, two point clouds from adjacent viewing angles are selected as input. The point cloud data are first downsampled with a voxel grid. The Fast Point Feature Histogram (FPFH) features of the downsampled points are then calculated, the FPFH feature values are matched with the Kd-Tree feature matching algorithm, and a space mapping matrix between the two point clouds is obtained preliminarily, realizing a preliminary, fast global registration of the point clouds.
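This coarse registration stage can also be reproduced with the open-source Open3D library, which bundles voxel-grid downsampling, FPFH computation and feature-matching-based RANSAC; the sketch below assumes Open3D 0.14 or later (the argument order of the RANSAC matcher differs across versions), and the voxel size and distance thresholds are illustrative rather than the values used in this embodiment.

```python
import open3d as o3d

def coarse_register(source_path, target_path, voxel=0.005):
    """Voxel downsample, compute FPFH, then match features to get a preliminary 4x4 mapping matrix."""
    downs, feats = [], []
    for path in (source_path, target_path):
        pcd = o3d.io.read_point_cloud(path)
        down = pcd.voxel_down_sample(voxel)                                   # voxel grid downsampling
        down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
        downs.append(down)
        feats.append(fpfh)
    dist = 1.5 * voxel                                                        # correspondence distance threshold
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        downs[0], downs[1], feats[0], feats[1], True, dist,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(dist)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result.transformation
```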
(4) In the fine registration of the lettuce point clouds, the target point cloud P and the source point cloud Q are first determined; the corresponding point pairs with a mapping relation between P and Q are found with a Kd-Tree search and collected into a set. From this set of corresponding point pairs, an initial rotation matrix and translation matrix are obtained, for example by singular value decomposition. The point cloud Q_1 is then transformed to obtain a new transformed point set Q_2. These steps are repeated so that the objective function of the registration conversion between the new source point cloud and the transformed point set is minimized; iteration stops when the objective function is minimal and the mean square error is obtained. An iteration convergence threshold is set: when the mean square errors of two adjacent iterations differ by less than this threshold, iteration stops; otherwise the iterative calculation is repeated until the requirement is met.
(5) The 8 acquired viewing angles are registered pairwise in the order 1-2, 3-4, 5-6, 7-8, giving four groups of registration results; these four groups are likewise registered pairwise to give two groups of results, and finally those two results are registered to obtain the final registration result combining all 8 viewing angles.
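The merging order described in this step can be expressed as a small hierarchical pairwise merge; pairwise_register below stands for any caller-supplied routine that returns a 4x4 transformation mapping the source cloud into the target frame (for example the coarse-plus-fine pipeline sketched above), and the clouds are assumed to behave like Open3D point clouds (supporting transform() and +).

```python
def hierarchical_merge(clouds, pairwise_register):
    """Merge a list of views by repeated pairwise registration: (1,2),(3,4),... then the results pairwise."""
    level = list(clouds)
    while len(level) > 1:
        merged = []
        for i in range(0, len(level) - 1, 2):
            target, source = level[i], level[i + 1]
            T = pairwise_register(source, target)     # 4x4 spatial mapping matrix
            source.transform(T)                       # bring the source view into the target frame (in place)
            merged.append(target + source)            # concatenate the aligned pair
        if len(level) % 2 == 1:                       # an odd leftover view is carried up unchanged
            merged.append(level[-1])
        level = merged
    return level[0]
```

With 8 views this yields four pairwise results, then two, then the single combined cloud, matching the order described above.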
Overall, on the one hand, by automatically adjusting the important registration parameters during the automatic registration process, the invention improves the registration efficiency of the three-dimensional lettuce point cloud while preserving the point cloud features and maintaining registration accuracy. On the other hand, although the number of points grows exponentially as the lettuce continues to grow and the leaves occlude one another severely, the combination of fast global registration and the iterative closest point algorithm gives ideal registration results for lettuce of different varieties and growth periods; experiments also show good reconstruction results for plants such as aloe and scindapsus, demonstrating a certain robustness and generalization ability.
Example two
As shown in fig. 2, the present invention further provides a three-dimensional registration system for plant images, comprising:
the point cloud data acquisition module 101 is used for acquiring three-dimensional point cloud images of the target plant to be detected under different shooting visual angles; each three-dimensional point cloud image comprises target plant point cloud data and background point cloud data.
The point cloud data extraction module 201 is configured to extract the target plant point cloud data from each of the three-dimensional point cloud images.
The preliminary mapping registration module 301 is configured to determine a preliminary point cloud space mapping matrix of the target plant according to two sets of target plant point cloud data under adjacent shooting angles based on a voxel grid method and a Kd-Tree feature matching algorithm.
The matrix conversion module 401 is configured to determine a rotation matrix and a translation matrix between the source point cloud data and the target point cloud data in two sets of target plant point cloud data under adjacent shooting angles based on the preliminary point cloud space mapping matrix of the target plant.
The optimization model construction module 501 is configured to construct a point cloud registration conversion model according to a rotation matrix and a translation matrix between the source point cloud data and the target point cloud data, with a minimum registration error between the source point cloud data and the target point cloud data as a target.
And the optimization model solving module 601 is configured to solve the point cloud registration transformation model to obtain an optimal rotation matrix and an optimal translation matrix.
The final registration module 701 is configured to determine the final point cloud space mapping matrix of the target plant based on the optimal rotation matrix and the optimal translation matrix.
Example III
The present embodiment provides an electronic device including a memory and a processor, the memory storing a computer program, the processor running the computer program to cause the electronic device to perform the three-dimensional registration method of the plant image of the first embodiment.
Alternatively, the electronic device may be a server.
In addition, the embodiment of the present invention further provides a computer readable storage medium storing a computer program, where the computer program when executed by a processor implements the three-dimensional registration method of plant images of the first embodiment.
In this specification, the embodiments are described progressively, each focusing on its differences from the others; for identical or similar parts, the embodiments may be referred to one another. Since the system disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief, and the relevant points can be found in the description of the method.
Specific examples are used herein to explain the principles and embodiments of the present invention; the above description of the embodiments is only intended to help understand the method of the present invention and its core idea. Modifications made by those of ordinary skill in the art in light of the present teachings also fall within the scope of the invention. In summary, the contents of this specification should not be construed as limiting the invention.

Claims (8)

1. A method of three-dimensional registration of plant images, the method comprising:
acquiring three-dimensional point cloud images of a target plant to be detected under different shooting visual angles; each three-dimensional point cloud image comprises target plant point cloud data and background point cloud data;
extracting the target plant point cloud data from each three-dimensional point cloud image;
determining a preliminary point cloud space mapping matrix of the target plant according to two groups of target plant point cloud data under adjacent shooting view angles based on a voxel grid method and a Kd-Tree feature matching algorithm;
determining a rotation matrix and a translation matrix between source point cloud data and target point cloud data in two groups of target plant point cloud data under adjacent shooting angles based on the preliminary point cloud space mapping matrix of the target plant;
constructing a point cloud registration conversion model from the rotation matrix and translation matrix between the source point cloud data and the target point cloud data, taking the minimum registration error between the source point cloud data and the target point cloud data as the objective;
solving the point cloud registration conversion model to obtain an optimal rotation matrix and an optimal translation matrix;
and determining the final point cloud space mapping matrix of the target plant based on the optimal rotation matrix and the optimal translation matrix.
2. The three-dimensional registration method of a plant image according to claim 1, wherein determining a preliminary point cloud space mapping matrix of a target plant according to two sets of target plant point cloud data under adjacent shooting angles based on a voxel grid method and a Kd-Tree feature matching algorithm specifically comprises:
downsampling to construct a first voxel grid and a second voxel grid based on target plant point cloud data under adjacent shooting angles;
calculating a first characteristic histogram characteristic of the target plant based on the first voxel grid; calculating a second feature histogram feature of the target plant based on the second voxel grid;
and matching the first characteristic histogram characteristic of the target plant with the second characteristic histogram characteristic of the target plant based on a Kd-Tree characteristic matching algorithm to obtain a preliminary point cloud space mapping matrix of the target plant.
3. The three-dimensional registration method of plant images according to claim 2, wherein calculating the first feature histogram feature of the target plant based on the first voxel grid specifically comprises:
for a key point corresponding to any voxel in the first voxel grid, calculating the surface normal of the key point;
determining the neighborhood radius of the key point;
determining a plurality of ternary characteristic histogram characteristic groups in the range of the neighborhood radius by taking the key points as the origins; the ternary characteristic histogram characteristic group comprises the product of the distance between the key point and any adjacent key point in the range of the neighborhood radius, the angle difference of the surface normal and the surface normal module length;
determining a plurality of ternary characteristic histogram characteristic groups of adjacent key points in the neighborhood radius range aiming at any adjacent key point in the neighborhood radius range;
and carrying out weighted calculation on the plurality of ternary characteristic histogram characteristic groups in the range of the neighborhood radius to obtain the first characteristic histogram characteristic of the target plant.
4. A method of three-dimensional registration of plant images according to claim 3, wherein weighting the plurality of sets of ternary feature histogram features within the neighborhood radius to obtain the first feature histogram feature of the target plant comprises:
calculating the first feature histogram feature of the target plant according to the formula

FPFH(p) = SPFH(p) + (1/k) · Σ (1/w_k) · SPFH(p_k),

where the sum runs over the k adjacent key points p_k, FPFH(p) denotes the first feature histogram feature of the target plant, SPFH(p) denotes the ternary feature histogram feature group of the key point p, k denotes the number of adjacent key points, w_k denotes the weight, characterized by the distance between the key point p and the adjacent key point p_k, and SPFH(p_k) denotes the ternary feature histogram feature group of the adjacent key point p_k.
5. The method of three-dimensional registration of plant images according to claim 1, wherein the objective function in the point cloud registration transformation model is:
f(R, T) = (1/N_p) · Σ_{i=1}^{N_p} ‖P_i − (R · Q_i + T)‖²,

where f(R, T) denotes the value of the objective function, N_p denotes the number of points in the target point cloud data, Q_i denotes the i-th source point cloud datum, P_i denotes the i-th target point cloud datum, R denotes the rotation matrix and T denotes the translation matrix.
6. The three-dimensional registration method of plant images according to claim 1, wherein, after extracting the target plant point cloud data from each of the three-dimensional point cloud images, the method further comprises:
and detecting and deleting outlier noise of the target plant point cloud data by utilizing an average distance between point clouds and a filtering algorithm.
7. A three-dimensional registration system for plant images, the system comprising:
the point cloud data acquisition module is used for acquiring three-dimensional point cloud images of the target plant to be detected under different shooting visual angles; each three-dimensional point cloud image comprises target plant point cloud data and background point cloud data;
the point cloud data extraction module is used for extracting the target plant point cloud data from each three-dimensional point cloud image;
the preliminary mapping registration module is used for determining a preliminary point cloud space mapping matrix of the target plant according to two groups of target plant point cloud data under adjacent shooting view angles based on a voxel grid method and a Kd-Tree feature matching algorithm;
the matrix conversion module is used for determining a rotation matrix and a translation matrix between the source point cloud data and the target point cloud data in two groups of target plant point cloud data under adjacent shooting angles based on the preliminary point cloud space mapping matrix of the target plant;
the optimization model construction module is used for constructing a point cloud registration conversion model according to a rotation matrix and a translation matrix between the source point cloud data and the target point cloud data by taking the minimum registration error between the source point cloud data and the target point cloud data as a target;
the optimization model solving module is used for solving the point cloud registration conversion model to obtain an optimal rotation matrix and an optimal translation matrix;
and the final registration module is used for determining the final point cloud space mapping matrix of the target plant based on the optimal rotation matrix and the translation matrix.
8. An electronic device comprising a memory for storing a computer program and a processor that runs the computer program to cause the electronic device to perform the method of three-dimensional registration of plant images according to any one of claims 1 to 6.
CN202310813414.5A 2023-07-04 2023-07-04 Three-dimensional registration method, system and equipment for plant images Pending CN116862955A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310813414.5A CN116862955A (en) 2023-07-04 2023-07-04 Three-dimensional registration method, system and equipment for plant images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310813414.5A CN116862955A (en) 2023-07-04 2023-07-04 Three-dimensional registration method, system and equipment for plant images

Publications (1)

Publication Number Publication Date
CN116862955A true CN116862955A (en) 2023-10-10

Family

ID=88227940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310813414.5A Pending CN116862955A (en) 2023-07-04 2023-07-04 Three-dimensional registration method, system and equipment for plant images

Country Status (1)

Country Link
CN (1) CN116862955A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117274512A (en) * 2023-11-23 2023-12-22 岭南现代农业科学与技术广东省实验室河源分中心 Plant multi-view image processing method and system
CN117274512B (en) * 2023-11-23 2024-04-26 岭南现代农业科学与技术广东省实验室河源分中心 Plant multi-view image processing method and system
CN117496359A (en) * 2023-12-29 2024-02-02 浙江大学山东(临沂)现代农业研究院 Plant planting layout monitoring method and system based on three-dimensional point cloud
CN117496359B (en) * 2023-12-29 2024-03-22 浙江大学山东(临沂)现代农业研究院 Plant planting layout monitoring method and system based on three-dimensional point cloud

Similar Documents

Publication Publication Date Title
Gibbs et al. Approaches to three-dimensional reconstruction of plant shoot topology and geometry
CN109146948B (en) Crop growth phenotype parameter quantification and yield correlation analysis method based on vision
CN113112504B (en) Plant point cloud data segmentation method and system
Sodhi et al. In-field segmentation and identification of plant structures using 3D imaging
Guo et al. Realistic procedural plant modeling from multiple view images
CN110796694A (en) Fruit three-dimensional point cloud real-time acquisition method based on KinectV2
Medeiros et al. Modeling dormant fruit trees for agricultural automation
CN116862955A (en) Three-dimensional registration method, system and equipment for plant images
CN102222357B (en) Foot-shaped three-dimensional surface reconstruction method based on image segmentation and grid subdivision
McKinnon et al. Towards automated and in-situ, near-real time 3-D reconstruction of coral reef environments
CN115375842A (en) Plant three-dimensional reconstruction method, terminal and storage medium
CN117197333A (en) Space target reconstruction and pose estimation method and system based on multi-view vision
Peng et al. Binocular-vision-based structure from motion for 3-D reconstruction of plants
Ma et al. A method for calculating and simulating phenotype of soybean based on 3D reconstruction
CN113932712A (en) Melon and fruit vegetable size measuring method based on depth camera and key points
Sodhi et al. Robust plant phenotyping via model-based optimization
Li et al. Automatic reconstruction and modeling of dormant jujube trees using three-view image constraints for intelligent pruning applications
Akhtar et al. Unlocking plant secrets: A systematic review of 3D imaging in plant phenotyping techniques
Ambrus et al. Autonomous meshing, texturing and recognition of object models with a mobile robot
Zhi et al. Unifying Scene Representation and Hand-Eye Calibration with 3D Foundation Models
Zuo et al. A Review of Plant Phenotype Research Based on 3D Point Cloud Technology
Hu et al. Multiview point clouds denoising based on interference elimination
CN118522055B (en) Method, system, equipment and storage medium for realizing real wrinkle detection
Ma et al. Research on Crop 3D Model Reconstruction Based on RGB‐D Binocular Vision
Dai et al. Research on Leaf Area Index Extraction Algorithm Based on 3D Reconstruction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination