CN115035254A - Regional vegetation three-dimensional green quantity estimation method based on reconstructed point cloud - Google Patents

Regional vegetation three-dimensional green quantity estimation method based on reconstructed point cloud

Info

Publication number
CN115035254A
Authority
CN
China
Prior art keywords
point cloud
vegetation
dimensional
voxel
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210769692.0A
Other languages
Chinese (zh)
Inventor
黄方
何伟丙
彭书颖
陈胜亿
强晓勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202210769692.0A
Publication of CN115035254A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a regional vegetation three-dimensional green volume estimation method based on reconstructed point cloud, and belongs to the technical field of three-dimensional green volume estimation. The method addresses the shortcomings of existing urban three-dimensional green volume calculation models and is designed around the inherent characteristics of three-dimensionally reconstructed point clouds and the requirement of model universality. Three-dimensional reconstruction is performed on unmanned aerial vehicle remote sensing images to obtain reconstructed point cloud data, and a scan-line filling optimization step is added to an octree-voxel model to obtain an improved three-dimensional green volume calculation model, with which the urban three-dimensional green volume is measured. The method effectively alleviates the inaccuracy of existing quantitative three-dimensional green volume estimation methods.

Description

Regional vegetation three-dimensional green quantity estimation method based on reconstructed point cloud
Technical Field
The invention belongs to the technical field of three-dimensional green volume estimation, and particularly relates to a regional vegetation three-dimensional green volume estimation method based on reconstructed point cloud.
Background
The three-dimensional green volume, or living vegetation volume (LVV), is a relatively new quantitative ecological index. It reflects how the greening effect of an area to be measured varies with spatial distribution and can be related more directly to other ecological indicators such as biomass, so it is widely used in urban ecological environment assessment and urban planning. Three-dimensional point cloud data, meanwhile, are widely used in remote sensing and surveying, but have rarely been applied directly to estimating the vegetation three-dimensional green volume of urban areas. With the maturing of unmanned aerial vehicle (UAV) remote sensing and three-dimensional reconstruction technology, real-scene three-dimensional reconstruction from UAV remote sensing images has emerged rapidly. This cross-disciplinary field provides a faster, cheaper data acquisition means with richer ground-object information for estimating the three-dimensional green volume of urban areas; the surface texture of point clouds generated by three-dimensional reconstruction is finer, giving them an advantage over other kinds of point cloud data in recording vegetation canopy characteristics. However, investigation shows that the related studies still have the following problems:
(1) existing quantitative three-dimensional green volume models, such as the work of Fang Huang et al. (Fang Huang, Shuying Peng, Shengyi Chen, Hongxia Cao, Ning Ma. VO-LVV-A novel urban regional living vegetation volume quantitative estimation model based on the voxel measurement method and an octree data structure [J]. Remote Sensing, 2022, 14(4): 855.), are mainly based on measuring the growth parameters of individual tree species; they require several sets of different model parameters to handle different kinds of trees and can hardly achieve model universality;
(2) existing quantitative three-dimensional green volume models are mainly based on modelling single plants; they cannot handle areas containing many tree species and complex stand structures, and are therefore not suitable for large-scale application in urban areas;
(3) existing three-dimensional reconstruction from UAV remote sensing images is time-consuming and inefficient; because the texture of ground objects such as vegetation is complex and the photographing angles of UAV remote sensing are limited, the vegetation reconstruction quality is poor and under-canopy information is missing, which makes it difficult to compute the urban-area three-dimensional green volume directly from three-dimensionally reconstructed point cloud data.
Therefore, how to perform convenient, fast and universal quantitative analysis of the three-dimensional green volume of urban areas from three-dimensionally reconstructed point cloud data has become a problem to be solved.
Disclosure of Invention
In view of the problems in the background art, the invention aims to provide a regional vegetation three-dimensional green volume estimation method based on reconstructed point cloud. The method collects UAV remote sensing data and uses high-performance computing to achieve rapid three-dimensional reconstruction of the regional vegetation scene, thereby obtaining reconstructed point cloud data; the vegetation point cloud of the region is then extracted from the reconstructed point cloud with a PointNet++ network model, yielding a dense point cloud model that reflects the spatial distribution of green plants; finally, an octree and voxel (Voxel) quantitative model suited to the three-dimensional green volume is established, and the quantitative estimation of the three-dimensional green volume is completed.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a regional vegetation three-dimensional green quantity estimation method based on reconstructed point cloud comprises the following steps:
step 1, selecting an urban area containing typical vegetation as the area to be measured, and performing oblique photogrammetry over the area to be measured with an unmanned aerial vehicle (UAV) oblique photogrammetry system to obtain oblique image data;
step 2, based on an open-source three-dimensional reconstruction algorithm, performing sparse reconstruction and dense reconstruction on the oblique image data set obtained in step 1 to obtain scene three-dimensional reconstruction point cloud data of the measured area;
step 3, extracting vegetation point cloud data from the scene three-dimensional reconstruction point cloud data obtained in step 2 with a point cloud semantic segmentation neural network model;
step 4, constructing a regional three-dimensional green volume quantification model for the vegetation point cloud data obtained in step 3, specifically: first establishing a voxel model under an octree search structure, and then correcting the voxel structure of the regional vegetation with a scan-line filling algorithm, thereby obtaining the regional three-dimensional green volume quantification model;
step 5, inputting the vegetation point cloud data extracted in step 3 into the regional three-dimensional green volume quantification model obtained in step 4, and estimating the three-dimensional green volume.
Further, the typical vegetation in step 1 refers to vegetation whose canopy has a regular geometric shape, so that a ground-truth three-dimensional green volume can be calculated from a crown diameter-crown height equation; a sketch of such a calculation is given below.
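For illustration, a minimal Python sketch of such a ground-truth calculation follows; the specific ellipsoid and cylinder crown diameter-crown height equations and the function names are assumptions made for the example and are not prescribed by the invention.

```python
import math

def ellipsoid_crown_volume(crown_diameter_m: float, crown_height_m: float) -> float:
    """Illustrative ground-truth 3D green volume for an ellipsoidal canopy.

    Assumes the crown is an ellipsoid with horizontal diameter
    `crown_diameter_m` and vertical extent `crown_height_m`,
    i.e. V = (pi / 6) * d^2 * h.
    """
    return math.pi / 6.0 * crown_diameter_m ** 2 * crown_height_m

def cylinder_crown_volume(crown_diameter_m: float, crown_height_m: float) -> float:
    """Illustrative ground-truth volume for a cylindrical canopy: V = (pi / 4) * d^2 * h."""
    return math.pi / 4.0 * crown_diameter_m ** 2 * crown_height_m

if __name__ == "__main__":
    # Example: a 4 m wide, 3 m tall ellipsoidal crown (about 25.13 m^3).
    print(round(ellipsoid_crown_volume(4.0, 3.0), 2), "m^3")
```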
Further, the UAV oblique photogrammetry system in step 1 consists of a DJI Matrice 300 RTK UAV and a Zenmuse P1 image sensor; the image sensor is mounted on the UAV, and the relevant aerial survey parameters are set, including a flight altitude h = 60 m, a flight speed v = 3 m/s, a forward overlap of 80% and a side overlap of 70%.
Further, 3-4 image control points are laid out in the area to be measured in step 1, and their spatial position information is measured with an RTK base station; each image control point should be captured by as many oblique images as possible and should be placed at an easily identifiable corner position.
Further, the specific process of step 2 is:
step 2.1, extracting feature points from the oblique image data set obtained in step 1 with the Scale-Invariant Feature Transform (SIFT) algorithm, and then matching the extracted feature points to obtain feature matching points;
step 2.2, based on the feature matching points, performing sparse point cloud reconstruction with the Structure from Motion (SfM) algorithm to obtain sparse point cloud data;
step 2.3, on the basis of the sparse point cloud, performing dense point cloud reconstruction with the Clustering Views for Multi-View Stereo and Patch-based Multi-View Stereo (CMVS + PMVS) algorithms to obtain dense point cloud data; the dense point cloud data are the scene three-dimensional reconstruction point cloud data. A sketch of the feature extraction and matching stage follows.
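As an illustration of the feature extraction and matching stage of step 2.1, the following minimal Python sketch uses OpenCV's SIFT implementation with Lowe's ratio test; OpenCV, the file paths and the function name are assumptions made for the example and are not prescribed by the method.

```python
import cv2  # requires opencv-python

def sift_match_pair(path_a: str, path_b: str, ratio: float = 0.75):
    """Extract SIFT features from two oblique images and match them.

    Returns the keypoints of both images and the matches passing the ratio test.
    """
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kps_a, desc_a = sift.detectAndCompute(img_a, None)
    kps_b, desc_b = sift.detectAndCompute(img_b, None)

    # Brute-force matcher with k nearest neighbours, filtered by Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(desc_a, desc_b, k=2)
    good = []
    for pair in raw:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return kps_a, kps_b, good
```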
Further, the point cloud semantic segmentation neural network model in step 3 comprises an encoder and a decoder; the encoder produces the feature extraction result, and the decoder outputs vegetation point cloud data with the same number of points as the original point cloud data;
the encoder consists of a sampling layer and a grouping layer; the sampling layer maps the dense point cloud data into a high-dimensional space formed by coordinate values and feature values, and selects, by farthest point sampling, N_1 points that are farthest apart from the N input points in the metric space, taking this group of points as a set of region center points; in the grouping layer, each region is represented by the k points nearest to its center point, yielding the feature extraction result;
the decoder consists of several groups of decoding units, each composed of an interpolation layer and a PointNet unit layer; after the encoder finishes feature extraction, the model uses the attached PointNet layer as a feature extractor and aggregates the k neighbouring features through trainable weights;
the interpolation layer first completes point cloud up-sampling through the distance-weighted features of the k nearest neighbouring points, as shown in formula (1):
f^{(j)}(x) = \frac{\sum_{i=1}^{k} w_i(x)\, f_i^{(j)}}{\sum_{i=1}^{k} w_i(x)}, \quad w_i(x) = \frac{1}{d(x, x_i)^p}, \quad j = 1, \ldots, C    (1)
where d(x, x_i) is the Euclidean distance from point x to its i-th neighbouring point, p is the power applied to the Euclidean distance, f_i^{(j)} is the j-th feature component of the i-th of the k neighbouring points, f^{(j)}(x) is the j-th component of the output at point x, and C is the total number of feature components; a sketch of this interpolation is given below.
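For illustration only, the following minimal NumPy sketch implements the inverse-distance-weighted interpolation of formula (1); the array shapes, function name and parameter defaults are assumptions made for the example rather than part of the disclosed network.

```python
import numpy as np

def interpolate_features(query_xyz, support_xyz, support_feat, k=3, p=2, eps=1e-8):
    """Up-sample point features by inverse-distance weighting of k nearest neighbours.

    query_xyz   : (N, 3) points x at which features are interpolated
    support_xyz : (M, 3) points x_i carrying features from the previous layer
    support_feat: (M, C) per-point features f_i^(j), one column per component j
    Returns an (N, C) array of interpolated features f^(j)(x).
    """
    # Pairwise Euclidean distances d(x, x_i), shape (N, M).
    diff = query_xyz[:, None, :] - support_xyz[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)

    # Indices and weights of the k nearest neighbours of each query point.
    nn_idx = np.argsort(dist, axis=1)[:, :k]            # (N, k)
    nn_dist = np.take_along_axis(dist, nn_idx, axis=1)  # (N, k)
    w = 1.0 / (nn_dist ** p + eps)                      # w_i(x) = 1 / d(x, x_i)^p
    w = w / w.sum(axis=1, keepdims=True)

    # Weighted sum of neighbour features, per component.
    return np.einsum("nk,nkc->nc", w, support_feat[nn_idx])
```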
Further, the specific process of step 3 is:
step 3.1, screening the three-dimensionally reconstructed point cloud and removing data of poor reconstruction quality, i.e. vegetation point cloud data whose canopy morphology is incomplete because of missing photogrammetric viewing angles, in particular data with large holes on the lower surface of the canopy;
step 3.2, cutting the remaining data into blocks and manually assigning a semantic label to each block: the vegetation information of interest is labelled as one class and all other ground objects as another class;
step 3.3, dividing the labelled data into a training set and a test set at a ratio of approximately 5:1; the training set is used to train the parameters of the point cloud semantic segmentation neural network model, and the test set is used to check the actual performance of the trained network, yielding the network model parameters;
step 3.4, inputting the scene three-dimensional reconstruction point cloud data into the trained point cloud semantic segmentation neural network model to obtain the vegetation point cloud data.
Further, the specific process of step 4 is as follows:
step 4.1, counting the maximum and minimum values of the vegetation point cloud data along the X, Y and Z coordinate directions, recorded as (x_min, x_max), (y_min, y_max), (z_min, z_max); the diagonal formed by (x_min, y_min, z_min) and (x_max, y_max, z_max) defines the three-dimensional bounding box of the vegetation point cloud data, which serves as the partition space of the voxel model and the octree;
step 4.2, constructing a voxel model according to the set voxel resolution, and establishing an octree index for each vegetation point cloud on the basis;
step 4.3, sorting all non-empty voxels by their voxel center coordinates and layering the voxel set by the Z coordinate; within each layer of the non-empty voxel subset, sorting first by the X coordinate and then by the Y coordinate, and traversing the voxels in that order; when a voxel and the previous voxel lie on the same line along the X axis, they form a segment of a scan line;
step 4.4, presetting a scan-line length threshold and, for each scan-line segment, judging which canopy part it belongs to: if the segment is too long, it is considered to belong to the gap between two adjacent trees; otherwise it is considered part of the crown;
step 4.5, for scan lines inside the crown, computing the volume from the scan-line length and the voxel cross-sectional area to obtain the cross-sectional volume of the canopy layer on that scan line;
step 4.6, repeating steps 4.3 to 4.5 in the same way for each voxel along the X and Y directions of the contour to obtain the volume of each canopy layer, and accumulating the layer volumes to obtain the total canopy volume V, i.e. the three-dimensional green volume to be measured. A sketch of the bounding-box and voxelization steps 4.1-4.2 follows.
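For illustration, the following minimal NumPy sketch covers the bounding-box and voxelization part of steps 4.1-4.2 (the octree index itself is omitted); the function name and return convention are assumptions made for the example.

```python
import numpy as np

def voxelize(points_xyz: np.ndarray, voxel_size: float):
    """Bounding box and voxel occupancy for a vegetation point cloud.

    points_xyz : (N, 3) array of x, y, z coordinates.
    Returns (occupied, mins, maxs), where `occupied` is the set of integer
    (ix, iy, iz) indices of non-empty voxels and mins/maxs are the corners
    of the axis-aligned bounding box.
    """
    mins = points_xyz.min(axis=0)   # (x_min, y_min, z_min)
    maxs = points_xyz.max(axis=0)   # (x_max, y_max, z_max)
    # Integer voxel index of every point relative to the bounding-box origin.
    idx = np.floor((points_xyz - mins) / voxel_size).astype(np.int64)
    occupied = set(map(tuple, idx))
    return occupied, mins, maxs
```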
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
the method analyzes the problems in the existing urban three-dimensional green amount calculation model, and is designed by combining the requirements of the inherent characteristics and the model universality of the three-dimensional reconstruction point cloud. The method effectively solves the problem of inaccurate measurement in the conventional three-dimensional green quantity quantitative measuring and calculating method.
Drawings
FIG. 1 is a schematic flow chart of the regional vegetation three-dimensional green volume estimation method of the present invention.
FIG. 2 is a schematic flow chart of constructing a three-dimensional dense point cloud based on an unmanned aerial vehicle image.
FIG. 3 is a flow chart of the prior-art three-dimensional green volume estimation method based on the octree-voxel method.
FIG. 4 is a flow chart of the scan-line-filling-based three-dimensional green volume calculation method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the following embodiments and accompanying drawings.
A method for estimating regional vegetation three-dimensional green quantity based on reconstructed point cloud is shown in figure 1 and comprises the following steps:
Step 1, selecting an urban area containing typical vegetation as the area to be measured, the typical vegetation being vegetation whose canopy has a regular geometric shape so that a ground-truth three-dimensional green volume can be calculated from a crown diameter-crown height equation, and performing oblique photogrammetry over the area with a UAV oblique photogrammetry system to obtain oblique image data; the specific process is as follows:
step 1.1, selecting a DJI Matrice 300 RTK UAV and a Zenmuse P1 image sensor to form the UAV oblique photogrammetry system;
step 1.2, mounting the image sensor on the UAV and powering the UAV on; connecting to the UAV over a wireless hotspot in the accompanying flight-operation software, and setting the relevant aerial survey parameters, including a flight altitude h = 60 m, a flight speed v = 3 m/s, a forward overlap of 80% and a side overlap of 70%;
step 1.3, delimiting in the operation software a quadrilateral area that completely covers the area to be measured as the operation area, and automatically planning the flight route according to the aerial survey parameters set in step 1.2: in general, the direction parallel to the longest edge of the operation area is taken as the flight direction, and the UAV performs back-and-forth scanning along this direction until the whole operation area has been flown;
step 1.4, in order to improve the accuracy of the subsequent three-dimensional reconstruction, laying out 3-4 image control points in the survey area and measuring their spatial position information with an RTK base station; each image control point should be captured by as many oblique images as possible and be placed at an easily identifiable corner position;
Step 2, based on an open-source three-dimensional reconstruction algorithm, performing sparse reconstruction and dense reconstruction on the oblique image data set obtained in step 1 to obtain the scene three-dimensional reconstruction point cloud data of the measured area; a flow diagram of step 2 is shown in FIG. 2, and the specific process is as follows:
step 2.1, extracting feature points from the oblique image data set obtained in step 1 with the SIFT algorithm, and then matching the extracted feature points across multiple images to obtain feature matching points; the specific matching process is as follows:
the goal is to find the point p in three-dimensional space closest to the rays defined by the cameras P_j = K_j [R_j | t_j] and the positions {x_j} of the observed feature point on image j, where K_j is the intrinsic parameter matrix and [R_j | t_j] are the extrinsic parameters; the camera ray can be represented by formula (1):
q = c_j + d \hat{v}_j    (1)
where q is a point on the ray, c_j is the camera position, d is a length coefficient, and \hat{v}_j is the direction vector of the observation;
let q_j be the point on each ray closest to the point p; then q_j must minimize formula (2):
r_j^2 = \min_d \| c_j + d \hat{v}_j - p \|^2    (2)
according to this mathematical model, the parameters are solved as formula (3):
d_j = \hat{v}_j^{\top} (p - c_j), \quad q_j = c_j + d_j \hat{v}_j, \quad r_j^2 = \| p - q_j \|^2    (3)
where r_j^2 is the squared 2-norm between p and q_j, d_j is the value of d that minimizes formula (2), r_j is the distance between p and q_j, and j is the camera index in the photogrammetric sequence;
the solved p is a feature matching point; the optimal p can be obtained by accumulating the r_j^2 terms and solving a least-squares problem, or by minimizing the residual of the measurement equations; a sketch of this triangulation is given below;
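For illustration, the following NumPy sketch solves the same least-squares triangulation of a point from several camera rays; the closed-form normal-equation formulation and the function name are assumptions made for the example, and rays are treated as infinite lines.

```python
import numpy as np

def triangulate_point(centers, directions):
    """Least-squares 3D point p closest to a set of camera rays.

    centers    : (J, 3) camera positions c_j
    directions : (J, 3) viewing directions v_j (need not be unit length)
    Minimizes sum_j r_j^2, where r_j is the distance from p to ray j.
    """
    v = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    # The distance to a ray uses the projector (I - v v^T); summing the
    # normal equations over all rays gives A p = b.
    eye = np.eye(3)
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c_j, v_j in zip(centers, v):
        proj = eye - np.outer(v_j, v_j)
        A += proj
        b += proj @ c_j
    # lstsq is used so near-parallel ray configurations still return a result.
    return np.linalg.lstsq(A, b, rcond=None)[0]
```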
step 2.2, based on the feature matching points, performing sparse point cloud reconstruction with the Structure from Motion (SfM) algorithm to obtain sparse point cloud data;
step 2.3, on the basis of the sparse point cloud, performing dense point cloud reconstruction with the Clustering Views for Multi-View Stereo and Patch-based Multi-View Stereo (CMVS + PMVS) algorithms to obtain dense point cloud data; that is, starting from the sparse point cloud, patches are created and the patch structure is expanded by patch diffusion to obtain the dense point cloud, where a patch is a local tangent plane approximating the object surface, jointly determined by a center point c(p), a normal vector n(p) and an associated reference image set V(p); the obtained dense point cloud data are the scene three-dimensional reconstruction point cloud data;
Step 3, for the scene three-dimensional reconstruction point cloud data obtained in step 2, training a point cloud semantic segmentation neural network model and using it to extract the vegetation point cloud data; the specific process is as follows:
step 3.1, screening the three-dimensionally reconstructed point cloud and removing data of poor reconstruction quality, i.e. vegetation point cloud data whose canopy morphology is incomplete because of missing photogrammetric viewing angles, in particular data with large holes on the lower surface of the canopy;
step 3.2, cutting the remaining data into blocks and manually assigning a semantic label to each block: the vegetation information of interest is labelled as one class and all other ground objects as another class;
step 3.3, dividing the labelled data into a training set and a test set at a ratio of approximately 5:1; the training set is used to train the parameters of the point cloud semantic segmentation neural network model, and the test set is used to check the actual performance of the trained network, yielding the network model parameters;
step 3.4, inputting the scene three-dimensional reconstruction point cloud data into the trained point cloud semantic segmentation neural network model to obtain the vegetation point cloud data.
In the prior art, the three-dimensional green volume is estimated from the extracted vegetation point cloud data directly with an octree-voxel method; the specific flow is shown in FIG. 3 and proceeds as follows:
step 4.1, constructing a voxel model for the extracted vegetation point cloud data, specifically: counting the maximum and minimum values of the vegetation point cloud data along the X, Y and Z coordinate directions, recorded as (x_min, x_max), (y_min, y_max), (z_min, z_max); the diagonal formed by (x_min, y_min, z_min) and (x_max, y_max, z_max) determines a maximum bounding box enclosing all the points, which becomes the level-0 root node of the octree;
In the octree structure, each point cloud space is divided equally along the three coordinate directions; after division into 8 parts, each subspace theoretically has three possibilities:
(1) the subspace is completely filled with data; however, point cloud data have no notion of volume, so the completely filled case does not occur;
(2) the subspace contains no data at all, i.e. it is an empty node, and it does not need to be divided further;
(3) the subspace contains some data, i.e. it is a non-empty node; in this case the point cloud space is further divided into 8 sub-cubes, which are recursively divided in turn.
These judging and dividing operations are repeated for every point cloud space in the octree until the current point cloud space contains no data point or the current space reaches the preset minimum volume, at which point the iteration stops and the voxel model is obtained; a sketch of this recursive subdivision is given below;
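For illustration, the following minimal Python sketch performs the recursive subdivision just described, stopping when a node is empty or reaches a minimum edge length; the data layout and function name are assumptions made for the example.

```python
def build_octree(points, lo, hi, min_edge):
    """Recursively subdivide the bounding box [lo, hi] until a node is empty
    or its edge reaches `min_edge`; returns the non-empty leaf cells as
    (lo, hi) pairs.  `points` is a list of (x, y, z) tuples."""
    if not points:
        return []                                   # case (2): empty node, stop
    edge = max(hi[k] - lo[k] for k in range(3))
    if edge <= min_edge:
        return [(lo, hi)]                           # minimum voxel size reached
    mid = tuple((lo[k] + hi[k]) / 2.0 for k in range(3))
    leaves = []
    for octant in range(8):                         # case (3): recurse into 8 children
        c_lo = tuple(mid[k] if (octant >> k) & 1 else lo[k] for k in range(3))
        c_hi = tuple(hi[k] if (octant >> k) & 1 else mid[k] for k in range(3))

        def inside(p):
            for k in range(3):
                upper_ok = p[k] < c_hi[k] or (c_hi[k] == hi[k] and p[k] == hi[k])
                if not (c_lo[k] <= p[k] and upper_ok):
                    return False
            return True

        leaves += build_octree([p for p in points if inside(p)], c_lo, c_hi, min_edge)
    return leaves
```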
step 4.2, establishing an octree search structure for the point cloud data in the voxel model; to facilitate voxel neighbour search, a uniform positional (locational) representation of octree nodes is introduced, as follows:
assuming that the child nodes created by dividing a point cloud space node are numbered 0-7, the child node number oct_i can be expressed by formula (5):
oct_i = x_i + 2 y_i + 4 z_i    (5)
where x_i, y_i, z_i are binary 0-1 variables whose geometric meaning is whether the child node lies in the front/rear, left/right and lower/upper half of the divided parent node; the position of a node can then be represented by a sequence of octal digits, each digit being the child number of one of the node's ancestors within that ancestor's parent; based on this numbering, the position of a node of depth n in the whole octree space can be expressed as formula (6):
pos = \sum_{i=1}^{n} oct_i \cdot 8^{\,n-i}    (6)
For a point to be inserted or searched, the voxel space containing it can be located by computing its spatial position code, and all points already in that voxel space jointly serve as the basis for deciding where a newly added point belongs, which completes the voxel neighbour search; a sketch of this node numbering is given below;
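For illustration, the following small Python sketch computes the child number of formula (5) and one possible integer packing of the locational code of formula (6); the exact packing used by the invention is an assumption here.

```python
def child_number(px, py, pz, cx, cy, cz):
    """oct_i = x_i + 2*y_i + 4*z_i for a point (px, py, pz) inside a node whose
    centre is (cx, cy, cz): each binary flag says in which half of the parent
    the point lies along the corresponding axis."""
    x_i = 1 if px >= cx else 0
    y_i = 1 if py >= cy else 0
    z_i = 1 if pz >= cz else 0
    return x_i + 2 * y_i + 4 * z_i

def locational_code(child_numbers):
    """Concatenate per-level child numbers oct_1..oct_n into one integer key
    (one octal digit per level), as one way to realise formula (6)."""
    code = 0
    for oct_i in child_numbers:
        code = code * 8 + oct_i
    return code

# Example: a node reached via children 3 -> 5 -> 1 has key 0o351 == 233.
assert locational_code([3, 5, 1]) == 3 * 64 + 5 * 8 + 1
```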
step 4.3, traversing all non-empty voxels; meanwhile, to avoid the situation in which voxels containing too few points inflate the final computed green volume, the point cloud contained in the voxel centred at m_i must satisfy a preset density condition; if the density threshold of a target voxel under the octree division is t, a voxel counted as vegetation must satisfy formula (7):
\frac{n(m_i)}{V_i} \geq t    (7)
where n(m_i) is the number of points in the voxel centred at m_i and V_i is the volume of a single voxel;
step 4.4, when the density is below the threshold, the voxel is considered to contain too little vegetation information and is discarded, so as to avoid over-estimation;
step 4.5, when the density is greater than or equal to the threshold, the voxel is considered to be completely covered by the vegetation point cloud; accumulating over the different levels gives the total estimated three-dimensional green volume, as shown in formula (8):
V = \sum_{m_i \in M} V(H(m_i))    (8)
where M is the set of all voxel centres associated with the vegetation point cloud, H(m_i) is the octree level of the node centred at m_i, and V(H(m_i)) is the volume of a single voxel at that level; a sketch of this density filtering and accumulation is given below;
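For illustration, the following Python sketch applies the density check of formula (7) and the level-wise accumulation of formula (8), assuming occupied leaf voxels are given with their point counts and octree levels; the data structures and names are assumptions made for the example.

```python
def lvv_from_voxels(voxel_points, voxel_level, root_edge, density_threshold):
    """Accumulate the three-dimensional green volume over octree leaf voxels.

    voxel_points      : dict voxel_key -> number of vegetation points inside
    voxel_level       : dict voxel_key -> octree level H(m_i)
    root_edge         : edge length of the level-0 bounding cube (metres)
    density_threshold : t, minimum points per cubic metre (formula (7))
    """
    total = 0.0
    for key, n_pts in voxel_points.items():
        edge = root_edge / (2 ** voxel_level[key])
        v_i = edge ** 3                        # single voxel volume V_i
        if n_pts / v_i >= density_threshold:   # formula (7): keep dense voxels only
            total += v_i                       # formula (8): accumulate by level
    return total
```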
step 4.6, according to the original observation data, the data acquisition mode and the characteristics of the vegetation type, correcting the derived three-dimensional green volume formula with an information completion parameter and a form completion parameter, among others, to obtain a more accurate corrected three-dimensional green volume, as shown in formula (9):
V_{corrected} = c(P) \cdot c(Q) \cdot V    (9)
where c(P) is the information completion parameter and c(Q) is the form completion parameter. Because three-dimensional point cloud data suffer from information loss, c(P) is defined as the ratio of the complete amount of vegetation point cloud information to the amount actually acquired, and is used to correct the measured result. Observation of the vegetation point cloud data shows that, for data acquired from an airborne platform (airborne LiDAR or airborne three-dimensional reconstruction), only the upper half of the vegetation canopy is captured, which can be regarded approximately as half of a complete canopy model, i.e. c(P) = 2; for vehicle-mounted LiDAR data, the information on the back of the crown can be considered missing, i.e. the amount of data obtained is only 0.75 times the complete amount, in which case c(P) = 4/3.
For c(Q), the true three-dimensional green volume is assumed to be given by a crown diameter-crown height-volume equation. Using the theoretical value of this formula as the ground truth rests on a major premise: the vegetation is assumed to grow uniformly in all directions, so its canopy is isotropic, i.e. the crown diameters measured from all directions are equal. The formula therefore models every cross-section of the plant as a regular circle, whereas the actually measured cross-sections are usually elliptical; the form completion parameter c(Q) is computed as shown in formula (10).
c(Q) = \frac{|\vec{D}_{max}|}{|\vec{D}_{\perp}|}    (10)
where \vec{D}_{max} is the maximum crown diameter vector measured at the largest cross-section of the vegetation point cloud, and \vec{D}_{\perp} is the crown diameter vector on that cross-section perpendicular to \vec{D}_{max};
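For illustration, the following Python sketch applies the completion parameters of formulas (9)-(10); the direction of the c(Q) ratio follows the reconstruction above and, like the function name and arguments, is an assumption that should be checked against the original publication.

```python
import numpy as np

def corrected_lvv(measured_volume, acquisition, d_max, d_perp):
    """Apply the information and form completion parameters of formulas (9)-(10).

    acquisition : 'airborne' (only the upper canopy is seen, c(P) = 2) or
                  'vehicle'  (crown back side missing, c(P) = 4/3)
    d_max, d_perp : crown diameter vectors at the widest cross-section
    """
    c_p = {"airborne": 2.0, "vehicle": 4.0 / 3.0}[acquisition]
    c_q = np.linalg.norm(d_max) / np.linalg.norm(d_perp)  # assumed ratio direction
    return c_p * c_q * measured_volume
```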
step 4.7, establishing a polynomial functional relationship between the corrected measured values and the true values: several candidate fitting functions with different highest orders are preset and solved simultaneously, the results are evaluated with indicators such as the correlation coefficient and the variance, and the better fitting function is selected; the final corrected three-dimensional green volume is obtained by tuning the parameters of this fitting function.
However, the green volume obtained by the above estimation method does not match the actual green volume well, so the invention proposes an improved octree-voxel three-dimensional green volume calculation method based on scan-line filling. The method still constructs an octree partition structure of the vegetation point cloud according to the set voxel resolution; the difference is that the three-dimensional green volume is no longer treated as a simple accumulation of the voxels in which vegetation points are detected, but the voxels of the hollow interior are also filled into the result. The flow of the improved method is shown in FIG. 4, and the specific process is as follows:
step 5.1, constructing a three-dimensional bounding box of vegetation point cloud data, which is used as a dividing space of a voxel model and an octree; constructing a voxel model according to the set voxel resolution, and establishing an octree index for each vegetation point cloud on the basis;
step 5.2, sorting all non-empty voxels by their voxel centre coordinates and layering the voxel set by the Z coordinate; following the idea of layered measurement, the vegetation voxels in space are partitioned along the Z axis, each layer of the voxel space is regarded as a slice of the vegetation point cloud at the same height, and the three-dimensional green volumes of the vegetation slices are accumulated to finally obtain an accurate measurement of the regional vegetation three-dimensional green volume;
step 5.3, within each layer of the non-empty voxel subset, sorting first by the X coordinate and then by the Y coordinate, and traversing the voxels in that order; when a voxel and the previous voxel lie on the same line along the X axis, they form a segment of a scan line;
step 5.4, presetting a scan-line length threshold and, for each scan-line segment, judging which canopy part it belongs to: if the segment is too long, it is considered to belong to the gap between two adjacent trees; otherwise it is considered part of the crown;
step 5.5, for scan lines inside the crown, computing the cross-sectional volume of the canopy layer on the scan line from the scan-line length and the voxel cross-sectional area, specifically as follows:
The key to measuring each layer of point cloud slices is computing its cross-sectional area. According to the scan-line filling algorithm, one layer of voxel space is scanned along the X-axis or Y-axis direction and the hollow zones of the vegetation canopy slice are filled. For example, for a scan along the X axis, a starting voxel is picked near the lower-left corner of the contour and a line is traced from it, in the positive X direction, to a voxel on the other side of the contour; along the scan line the first voxel corresponds to the minimum X coordinate and the last voxel to the maximum, and all empty voxels between the start and end voxels are marked, while the cross-sectional volume of the canopy over this segment of the scan line is obtained as shown in formula (11):
V_j = (x_j^{max} - x_j^{min}) \cdot \Delta y_i \cdot \Delta z_i    (11)
where x_j^{max} is the x value at the larger end of the j-th scan line, x_j^{min} is the x value at the smaller end of the j-th scan line, and \Delta y_i and \Delta z_i are the side lengths along the y and z axes of a single non-empty child node at the i-th level;
step 5.6, repeating steps 5.2 to 5.5 in the same way for each voxel along the X and Y directions of the contour to obtain the volume of each canopy layer, and then accumulating the layer volumes to obtain the total canopy volume V, i.e. the three-dimensional green volume to be measured, as shown in formula (12):
V = \sum_{l} \sum_{j} V_{l,j}    (12)
where the outer sum runs over the canopy layers l and the inner sum over the scan-line segments j within layer l; a sketch of this layered scan-line filling is given below.
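For illustration, the following Python sketch performs the layered scan-line filling of steps 5.2-5.6 on a uniform voxel grid (formulas (11)-(12)); the integer-index representation of occupied voxels, the gap-threshold handling and the function name are assumptions made for the example.

```python
from collections import defaultdict

def scanline_lvv(occupied, voxel, max_gap_m):
    """Three-dimensional green volume by layered scan-line filling.

    occupied  : iterable of integer voxel indices (ix, iy, iz) of non-empty voxels
    voxel     : voxel edge length in metres (assumed equal along x, y and z)
    max_gap_m : scan-line gap threshold separating the crown interior from the
                gap between two adjacent trees
    """
    # Group voxels by Z layer and by Y row, then sort each row along X.
    rows = defaultdict(list)                 # (iz, iy) -> [ix, ...]
    for ix, iy, iz in occupied:
        rows[(iz, iy)].append(ix)

    total = 0.0
    for xs in rows.values():
        xs.sort()
        start = prev = xs[0]
        for ix in xs[1:] + [None]:           # sentinel flushes the last segment
            gap = None if ix is None else (ix - prev - 1) * voxel
            if ix is None or gap > max_gap_m:
                # Formula (11): segment length times the voxel cross-section,
                # counting the empty voxels filled inside the crown.
                length = (prev - start + 1) * voxel
                total += length * voxel * voxel
                start = ix
            prev = ix if ix is not None else prev
    # Formula (12): `total` already accumulates all layers and segments.
    return total
```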
example 1
An ellipsoidal plant in the experimental area was selected, and three different three-dimensional green volume calculation methods were compared using three kinds of point cloud data acquired at the same time phase. Using the preprocessed single-plant LAS point cloud data, the voxel resolution was set to 0.21 m and the density threshold to 500 points/m³ according to practical experience in three-dimensional green volume measurement, and the scan-line length threshold was set to 4.0 m; the measurement results are shown in Table 1:
TABLE 1 Comparison of measurement results of different three-dimensional green volume methods (ellipsoidal single plant) (table available only as an image in the original document)
Testing this single-plant vegetation point cloud sample leads to the following experimental conclusions:
(1) compared with the convex hull method that constructs a triangular mesh, the three-dimensional green volume calculation method implemented in this work uses an octree organization and is faster; the speed-up generally reaches about 54:1 to 196:1, showing very good computational efficiency;
(2) in particular, when processing reconstructed point cloud data, the absolute error rate of the calculation is 11.2%, compared with 26.8% for the traditional triangular mesh convex hull method, so the method has better accuracy;
(3) for the improved octree-voxel measurement algorithm of this work, besides the high efficiency brought by the octree and the voxel model (the speed-up generally reaches about 34:1 to 166:1), the results are relatively stable across vegetation point cloud data obtained by different acquisition modes, and the three-dimensional green volumes computed on the three data sets are overall closer to the true value; in particular, for the three-dimensional reconstruction point cloud the absolute error rate is only 2.6% relative to the true value.
To verify whether the model is also reasonably universal for vegetation with other crown shapes, a plant with a cylindrical canopy was selected and, with all other test conditions unchanged, verified against a crown diameter-crown height-volume equation; the experimental results are shown in Table 2:
TABLE 2 Comparison of measurement results of different three-dimensional green volume methods (cylindrical single plant) (table available only as an image in the original document)
Testing this second type of single-plant point cloud sample leads to the following conclusions: (1) compared with the convex hull method, the octree-voxel based three-dimensional green volume calculation methods give results with smaller variance across tree types and acquisition modes, and therefore better stability and universality; (2) among the three kinds of point clouds, all three compared methods achieve their best results on the three-dimensional reconstruction point cloud; (3) compared with the convex hull method, the improved octree-voxel measurement method shows a slightly larger deviation in this case, but has an absolute advantage in measurement speed.
While the invention has been described with reference to specific embodiments, any feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise; all of the disclosed features, or all of the method or process steps, may be combined in any combination, except mutually exclusive features and/or steps.

Claims (8)

1. A regional vegetation three-dimensional green volume estimation method based on reconstructed point cloud, characterized by comprising the following steps:
step 1, selecting an urban area containing typical vegetation as the area to be measured, and performing oblique photogrammetry over the area to be measured with an unmanned aerial vehicle (UAV) oblique photogrammetry system to obtain oblique image data;
step 2, based on an open-source three-dimensional reconstruction algorithm, performing sparse reconstruction and dense reconstruction on the oblique image data set obtained in step 1 to obtain scene three-dimensional reconstruction point cloud data of the measured area;
step 3, extracting vegetation point cloud data from the scene three-dimensional reconstruction point cloud data obtained in step 2 with a point cloud semantic segmentation neural network model;
step 4, constructing a regional three-dimensional green volume quantification model for the vegetation point cloud data obtained in step 3, specifically: first establishing a voxel model under an octree search structure, and then correcting the voxel structure of the regional vegetation with a scan-line filling algorithm, thereby obtaining the regional three-dimensional green volume quantification model;
step 5, inputting the vegetation point cloud data extracted in step 3 into the regional three-dimensional green volume quantification model obtained in step 4, and estimating the three-dimensional green volume.
2. The regional vegetation three-dimensional green volume estimation method based on reconstructed point cloud according to claim 1, wherein the typical vegetation in step 1 refers to vegetation whose canopy has a regular geometric shape, so that a ground-truth three-dimensional green volume can be calculated from a crown diameter-crown height equation.
3. The regional vegetation three-dimensional green volume estimation method based on reconstructed point cloud according to claim 1, wherein the UAV oblique photogrammetry system in step 1 consists of a DJI Matrice 300 RTK UAV and a Zenmuse P1 image sensor; the image sensor is mounted on the UAV, and the relevant aerial survey parameters are set, including a flight altitude h = 60 m, a flight speed v = 3 m/s, a forward overlap of 80% and a side overlap of 70%.
4. The regional vegetation three-dimensional green volume estimation method based on reconstructed point cloud according to claim 1, wherein 3-4 image control points are laid out in the area to be measured in step 1, and their spatial position information is measured with an RTK base station; each image control point should be captured by as many oblique images as possible and should be placed at an easily identifiable corner position.
5. The method for estimating the regional vegetation three-dimensional green quantity based on the reconstructed point cloud of claim 1, wherein the specific process of the step 2 is as follows:
step 2.1, extracting feature points by adopting a scale-invariant feature transformation algorithm based on the oblique image data set obtained in the step 1, and then matching the extracted feature points to obtain feature matching points;
step 2.2, based on the feature matching points, performing sparse point cloud reconstruction with a Structure-from-Motion algorithm to obtain sparse point cloud data;
step 2.3, on the basis of the sparse point cloud, performing dense point cloud reconstruction by adopting a multi-view dense matching algorithm based on clustering and a patch model to obtain dense point cloud data; and the dense point cloud data is scene three-dimensional reconstruction point cloud data.
6. The regional vegetation three-dimensional green volume estimation method based on reconstructed point cloud according to claim 1, wherein the point cloud semantic segmentation neural network model in step 3 comprises an encoder and a decoder, the encoder being used to obtain the feature extraction result and the decoder being used to obtain vegetation point cloud data with the same number of points as the original point cloud data;
the encoder consists of a sampling layer and a grouping layer; the sampling layer maps the dense point cloud data into a high-dimensional space formed by coordinate values and feature values, and selects, by farthest point sampling, N_1 points that are farthest apart from the N input points in the metric space, taking this group of points as a set of region center points; in the grouping layer, each region is represented by the k points nearest to its center point, yielding the feature extraction result;
the decoder consists of several groups of decoding units, each composed of an interpolation layer and a PointNet unit layer; after the encoder finishes feature extraction, the model uses the attached PointNet layer as a feature extractor and aggregates the k neighbouring features through trainable weights;
the interpolation layer first completes point cloud up-sampling through the distance-weighted features of the k nearest neighbouring points, as shown in formula (1):
f^{(j)}(x) = \frac{\sum_{i=1}^{k} w_i(x)\, f_i^{(j)}}{\sum_{i=1}^{k} w_i(x)}, \quad w_i(x) = \frac{1}{d(x, x_i)^p}, \quad j = 1, \ldots, C    (1)
where d(x, x_i) is the Euclidean distance from point x to its i-th neighbouring point, p is the power applied to the Euclidean distance, f_i^{(j)} is the j-th feature component of the i-th of the k neighbouring points, f^{(j)}(x) is the j-th component of the output at point x, and C is the total number of feature components.
7. The method for estimating the regional vegetation three-dimensional green quantity based on the reconstructed point cloud of claim 1, wherein the specific process of the step 3 is as follows:
3.1, screening data of the three-dimensional reconstruction point cloud, and removing data with poor reconstruction quality, wherein the data with poor reconstruction quality refers to vegetation point cloud data with incomplete canopy morphology, particularly large-range cavities on the lower surface of a canopy due to the lack of photogrammetric viewing angles;
step 3.2, the obtained data are cut into blocks, each block of data is manually labeled with a semantic label, the vegetation information concerned by the research is labeled as an attribute class, and other ground objects are labeled as another class;
step 3.3, dividing the marked data into a training set and a testing set according to a certain proportion; the training set data is used for point cloud semantic segmentation neural network model parameter training, and the test set is used for detecting the actual network effect after training to obtain network model parameters;
step 3.4, inputting the scene three-dimensional reconstruction point cloud data into the trained point cloud semantic segmentation neural network model to obtain the vegetation point cloud data.
8. The method for estimating the regional vegetation three-dimensional green quantity based on the reconstructed point cloud of claim 1, wherein the specific process of the step 4 is as follows:
step 4.1, counting the maximum and minimum values of the vegetation point cloud data along the X, Y and Z coordinate directions, recorded as (x_min, x_max), (y_min, y_max), (z_min, z_max); the diagonal formed by (x_min, y_min, z_min) and (x_max, y_max, z_max) defines the three-dimensional bounding box of the vegetation point cloud data, which serves as the partition space of the voxel model and the octree;
step 4.2, constructing a voxel model according to the set voxel resolution, and establishing an octree index for each vegetation point cloud on the basis;
step 4.3, sorting all non-empty voxels by their voxel center coordinates and layering the voxel set by the Z coordinate; within each layer of the non-empty voxel subset, sorting first by the X coordinate and then by the Y coordinate, and traversing the voxels in that order; when a voxel and the previous voxel lie on the same line along the X axis, they form a segment of a scan line;
step 4.4, presetting a scan-line length threshold and, for each scan-line segment, judging which canopy part it belongs to: if the segment is too long, it is considered to belong to the gap between two adjacent trees; otherwise it is considered part of the crown;
step 4.5, for scan lines inside the crown, computing the volume from the scan-line length and the voxel cross-sectional area to obtain the cross-sectional volume of the canopy layer on that scan line;
step 4.6, repeating steps 4.3 to 4.5 in the same way for each voxel along the X and Y directions of the contour to obtain the volume of each canopy layer, and accumulating the layer volumes to obtain the total canopy volume V, i.e. the three-dimensional green volume to be measured.
CN202210769692.0A 2022-06-30 2022-06-30 Regional vegetation three-dimensional green quantity estimation method based on reconstructed point cloud Pending CN115035254A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210769692.0A CN115035254A (en) 2022-06-30 2022-06-30 Regional vegetation three-dimensional green quantity estimation method based on reconstructed point cloud

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210769692.0A CN115035254A (en) 2022-06-30 2022-06-30 Regional vegetation three-dimensional green quantity estimation method based on reconstructed point cloud

Publications (1)

Publication Number Publication Date
CN115035254A true CN115035254A (en) 2022-09-09

Family

ID=83128786

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210769692.0A Pending CN115035254A (en) 2022-06-30 2022-06-30 Regional vegetation three-dimensional green quantity estimation method based on reconstructed point cloud

Country Status (1)

Country Link
CN (1) CN115035254A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115311434A (en) * 2022-10-10 2022-11-08 深圳大学 Tree three-dimensional reconstruction method and device based on oblique photography and laser data fusion


Similar Documents

Publication Publication Date Title
CN109410321B (en) Three-dimensional reconstruction method based on convolutional neural network
CN112489212B (en) Intelligent building three-dimensional mapping method based on multi-source remote sensing data
CN110221311B (en) Method for automatically extracting tree height of high-canopy-closure forest stand based on TLS and UAV
CN111709981A (en) Registration method of laser point cloud and analog image with characteristic line fusion
CN109146948B (en) Crop growth phenotype parameter quantification and yield correlation analysis method based on vision
CN110060332B (en) High-precision three-dimensional mapping and modeling system based on airborne acquisition equipment
CN111047695B (en) Method for extracting height spatial information and contour line of urban group
CN105931234A (en) Ground three-dimensional laser scanning point cloud and image fusion and registration method
CN108007438A (en) The estimating and measuring method of unmanned plane aeroplane photography remote sensing wetland plant biomass
CN110517311A (en) Pest and disease monitoring method based on leaf spot lesion area
CN114332366A (en) Digital city single house point cloud facade 3D feature extraction method
CN111898688A (en) Airborne LiDAR data tree species classification method based on three-dimensional deep learning
CN114332348B (en) Track three-dimensional reconstruction method integrating laser radar and image data
CN103090946B (en) Method and system for measuring single fruit tree yield
CN113435282A (en) Unmanned aerial vehicle image ear recognition method based on deep learning
CN114581619A (en) Coal bunker modeling method based on three-dimensional positioning and two-dimensional mapping
CN115035254A (en) Regional vegetation three-dimensional green quantity estimation method based on reconstructed point cloud
CN115128628A (en) Road grid map construction method based on laser SLAM and monocular vision
CN115049925A (en) Method for extracting field ridge, electronic device and storage medium
CN115032648A (en) Three-dimensional target identification and positioning method based on laser radar dense point cloud
CN106709432A (en) Binocular stereoscopic vision based head detecting and counting method
CN112561981A (en) Photogrammetry point cloud filtering method fusing image information
CN109635834A (en) A kind of method and system that grid model intelligence is inlayed
CN110580468B (en) Single wood structure parameter extraction method based on image matching point cloud
Lin et al. Research on denoising and segmentation algorithm application of pigs’ point cloud based on DBSCAN and PointNet

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination