CN113379919A - Vegetation canopy height rapid extraction method based on unmanned aerial vehicle RGB camera


Info

Publication number: CN113379919A
Application number: CN202110649212.2A
Authority: CN (China)
Prior art keywords: ground, point cloud, points, filtering, aerial vehicle
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 田雨, 计楚柠, 赵义博
Current Assignee: Xuzhou Huance Digital Remote Technology Co., Ltd.
Original Assignee: Xuzhou Huance Digital Remote Technology Co., Ltd.
Priority date / Filing date: 2021-06-10
Publication date: 2021-09-10

Classifications

    • G06T17/20 Three-dimensional [3D] modelling: finite element generation, e.g. wire-frame surface description, tesselation
    • G06T7/136 Image analysis: segmentation; edge detection involving thresholding
    • G06T7/73 Image analysis: determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/10004 Image acquisition modality: still image; photographic image
    • G06T2207/10012 Image acquisition modality: stereo images
    • G06T2207/10032 Image acquisition modality: satellite or aerial image; remote sensing
    • G06T2207/30181 Subject of image: Earth observation
    • G06T2207/30188 Subject of image: vegetation; agriculture

Abstract

The invention relates to a rapid vegetation canopy height extraction method based on an unmanned aerial vehicle (UAV) RGB camera. A three-dimensional dense point cloud is generated from orthographic images acquired by a UAV along preset flight strips. Ground points are filtered by combining aggressive two-step filtering with Otsu segmentation of the ExG-ExR index, computed from the RGB bands of the point cloud, into a binary image. The ground points are resampled onto a regular grid and rapidly interpolated along rows and along columns, following the basic idea of least-squares quadratic fitting and prediction of section lines; the two interpolated ground models are averaged to obtain the final ground model. The original dense point cloud is gridded at the same resolution, and the final ground model is subtracted from the gridded original point cloud to obtain the vegetation canopy height model. Using the RGB information of the point cloud in ground point filtering simplifies the parameter selection of the two-step filtering, and the proposed interpolation algorithm achieves accuracy similar to traditional algorithms in far less time.

Description

Vegetation canopy height rapid extraction method based on unmanned aerial vehicle RGB camera
Technical Field
The invention relates to the technical field of ecological-environment vegetation canopy extraction, and in particular to a rapid vegetation canopy height extraction method based on an unmanned aerial vehicle RGB camera.
Background
Vegetation is an important component of the ecosystem and its main producer; it is closely linked to other elements (such as soil water and groundwater) and plays an important role in suppressing desertification, protecting biodiversity and similar aims. Vegetation canopy height is one of the most common indicators for assessing vegetation biomass and health in a region.
Remote sensing measurement based on unmanned aerial vehicles (UAVs) overcomes the low efficiency of traditional manual measurement. At present, the main means of obtaining vegetation height with UAVs is light detection and ranging (LiDAR), so ground filtering and interpolation algorithms have likewise focused mainly on LiDAR data. In recent years, simultaneous improvements in computer hardware performance and image matching software have made dense matching of stereo images using structure from motion (SfM) faster and more precise. These advances have made photogrammetry a competitive alternative to LiDAR.
SfM generates three-dimensional point clouds in formats similar to LiDAR data, but because of how they are produced, SfM point clouds differ from LiDAR point clouds mainly in that: they cannot penetrate dense vegetation canopies, so ground points are scarce in vegetated areas with complex canopy structure, and their ability to distinguish low vegetation is weaker than LiDAR's; the number of points is tens or even more than a hundred times that of LiDAR point clouds, making them true "dense point clouds"; and each point carries RGB information. Most current ground filtering algorithms rely on accurate structural threshold parameters and do not consider the RGB information of the point cloud. When non-ground points are not completely removed, traditional interpolation algorithms based on neighborhood ground points tend to overestimate the ground surface in vegetated regions and consequently underestimate canopy height. Moreover, no efficient interpolation algorithm has been introduced: computational complexity is a common problem of interpolation algorithms, traditional algorithms are too slow, and rapid acquisition of regional canopy height is difficult.
Disclosure of Invention
To address the defects of the prior art, the invention provides a rapid vegetation canopy height extraction method based on an unmanned aerial vehicle RGB camera.
A rapid vegetation canopy height extraction method based on an unmanned aerial vehicle RGB camera comprises the following steps:
step 1, generating a three-dimensional dense point cloud with RGB band information by the structure-from-motion (SfM) method, based on image data acquired by an unmanned aerial vehicle;
step 2, performing ground point filtering on the dense point cloud with aggressive two-step filtering, simultaneously computing the ExG-ExR index of the point cloud, obtaining ground points by Otsu segmentation of the ExG-ExR index, and taking the intersection of the ground points obtained by the aggressive two-step filtering and by the Otsu method to obtain the ground point cloud;
step 3, performing regular gridding on the ground point cloud obtained in step 2;
step 4, based on the regularly gridded ground point cloud of step 3, extracting section lines along the row and column directions, performing least-squares quadratic fitting on each section line, and predicting and filling all ground holes to obtain two surface models filled by rows and by columns, then taking the weighted average of the two surface models to obtain a weighted-average ground model;
step 5, performing on the three-dimensional dense point cloud obtained in step 1 the same regular gridding as in step 3 to obtain an original surface model, and subtracting the weighted-average ground model of step 4 from the original surface model to obtain the canopy structure model.
Further, in step 1, the unmanned aerial vehicle acquires the image data as follows: two mutually perpendicular groups of flight strips covering the whole area are preset according to the shape of the target area, with forward and side overlap both reaching 80%;
in step 1, the three-dimensional dense point cloud is generated as follows: feature points are extracted and matched with the SIFT operator and the random sample consensus algorithm RANSAC, the reconstruction result is optimized by bundle adjustment, and the dense point cloud is finally obtained with the patch-based multi-view stereo algorithm PMVS.
Further, the ground point cloud of step 2 is obtained as follows: (I) the size of an initial filtering window is determined from the largest plant canopy in the target area; applying this window to the three-dimensional dense point cloud generated in step 1, the lowest point in each window is identified and classified as an initial ground point, and the initial ground points are then linearly interpolated to obtain an initial terrain model;
(II) the remaining unclassified points are evaluated: a point is added to the ground class if it satisfies both of the following conditions,
(a) its distance to the initial terrain surface does not exceed a maximum distance threshold;
(b) the angular difference between the nearest initial ground point and the initial terrain surface and the angular difference between the nearest initial ground point and the evaluated point are both less than a maximum angle threshold;
the filtering window, the maximum distance threshold and the maximum angle threshold are gradually reduced following an aggressive strategy so as to remove vegetation step by step from large to small; a complex scene mixing trees, shrubs and grass is subjected to three rounds of aggressive filtering to obtain the ground point cloud;
(III) the ExG-ExR index of the three-dimensional dense point cloud generated in step 1 is computed with formula (1), a binary ground classification image is obtained by Otsu automatic segmentation of the index, and its ground class is intersected with the ground point cloud from the two-step filtering to obtain the ground point cloud of step 2.
ExG-ExR=3*G-2.4*R-B (1)
Further, the weighted-average ground model of step 4 is obtained as follows:
(I) all section lines of the regularly gridded point cloud of step 3 are extracted, by rows and by columns respectively;
(II) for each section line, the number and lengths of the hole regions are detected, and the holes are sorted from longest to shortest;
(III) let L be the length of the longest hole, incremented by one if even; 3(L-1)/2 points are taken outward from the left hole boundary and likewise 3(L-1)/2 points outward from the right boundary, and these 3(L-1) points plus the hole of length L are divided into a left, a middle and a right segment; the first 3(L-1)/2 points of the left segment are fitted with a least-squares quadratic to predict (L-1)/2 points rightward, the first 3(L-1)/2 points of the right segment are fitted with a least-squares quadratic to predict (L-1)/2 points leftward, and (L-1) points of the middle segment are fitted with a least-squares quadratic to predict the L hole points, so that the predictions of the left, middle and right segments coincide on L points;
(IV) let y^(i)(d1) and y^(i+1)(d2), d1, d2 = 1, 2, …, L, denote the fitted polynomials of the i-th and (i+1)-th segments respectively; the left-middle and middle-right overlapping regions are each weighted-averaged by formula (2), with weights decreasing linearly with the distance from the point to the center of the line segment, which effectively eliminates any jumps or discontinuities near the boundaries of adjacent parts;
(2): [formula (2) is rendered as an image in the original; per the surrounding description it is a linear blend of y^(i) and y^(i+1) over the L overlapping points, e.g. y(d) = ((L+1-d)*y^(i)(d) + d*y^(i+1)(d))/(L+1), d = 1, 2, …, L, with weights decreasing linearly away from each segment]
(V) the hole region is replaced with the weighted values obtained in (IV), and steps (II) to (IV) are executed cyclically until all holes are eliminated, yielding two terrain models extracted by rows and by columns;
(VI) the two terrain models are weighted-averaged to obtain the final terrain model.
Compared with the prior art, the invention has the following beneficial effects:
(1) The invention introduces the aggressive idea into two-step filtering: the thresholds are reduced step by step to remove trees, shrubs and grass in stages, which avoids the misclassification that single-pass filtering with a fixed threshold suffers in complex scenes owing to terrain slope and heterogeneous vegetation types, and filters vegetation points more effectively. The ExG-ExR index computed from the RGB bands of the point cloud is classified automatically by the Otsu method and intersected with the two-step filtering result to obtain the final ground point cloud; this effectively assists the selection of the two-step filtering parameters, because the Otsu method distinguishes low vegetation well, and thus avoids the loss of ground points that an overly strict aggressive threshold would cause, a loss that would degrade the accuracy of the interpolation algorithm.
(2) The least-squares quadratic fitting and prediction algorithm based on section lines adaptively intercepts, in one dimension and according to the size of the hole window, the ground points within a certain length and divides them into three segments for fitting and prediction. Its weighting scheme preserves the local terrain trend, while the introduced trend term effectively suppresses the misclassified points that are inevitable in ground point filtering. In addition, the proposed algorithm has low computational complexity and is easy to parallelize; compared with traditional interpolation algorithms such as inverse distance weighting and kriging, the running time is greatly reduced.
Drawings
Fig. 1 is a flow chart of a vegetation canopy height rapid extraction method based on an unmanned aerial vehicle RGB camera provided by the invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
The technical scheme adopted by the invention is a rapid vegetation canopy height extraction method based on a UAV RGB camera, consisting of two main parts: ground points are extracted by aggressive two-step filtering assisted by the RGB band information of the point cloud, and a fast interpolation algorithm based on least-squares quadratic fitting and prediction of section lines is proposed for the data holes remaining in the ground point cloud after filtering. The method comprises the following steps:
the invention is further illustrated with reference to fig. 1:
a vegetation canopy height rapid extraction method based on an unmanned aerial vehicle RGB camera comprises the following steps:
step 1, processing and generating a three-dimensional dense point cloud with RGB (red, green and blue) wave band information by adopting a motion structure method based on image data acquired by an unmanned aerial vehicle;
2, performing ground point filtering on the dense point cloud based on the aggressive two-step filtering, simultaneously calculating a point cloud (ExG-ExR) index, obtaining ground points based on an Otsu segmentation (ExG-ExR) index, and obtaining intersection of the ground points obtained by the aggressive two-step filtering and the Otsu to obtain ground point cloud;
step 3, performing regular meshing based on the ground point cloud obtained in the step 2;
step 4, based on the ground point cloud regularly gridded in the step 3, extracting section lines according to the row and column directions, performing least square quadratic fitting on each section line, predicting and filling all ground cavities to obtain two surface models filled according to the row and column, and performing weighted average on the two surface models to obtain a weighted average ground model;
and 5, performing the same regular meshing on the three-dimensional dense point cloud obtained in the step 1 as the regular meshing in the step 3 to obtain an original earth surface model, and subtracting the weighted average ground model obtained in the step 4 from the original earth surface model to obtain a canopy structure model.
In step 1, the flight route is designed as two mutually perpendicular groups of strips covering the whole study area, with forward and side overlap both reaching 80%, in order to widen the camera viewing angles, increase the number of ground points and improve the elevation accuracy of the three-dimensional processing; oblique photography may be introduced where conditions allow. The flight height is calculated from the camera parameters and the minimum ground resolution required by the task. Data should be acquired in periods of small illumination change and low wind speed, for example around 2 p.m., and the same period should be used across multiple flights. In addition, to guarantee reliable results, a certain number of ground control points should be laid out according to the accuracy requirement.
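As a worked example of the flight-height calculation, here is a minimal sketch using the standard pinhole relation GSD = H * pixel_pitch / focal_length; the camera parameters below are hypothetical, not taken from the patent:

```python
# Minimal sketch: flight height H that yields a requested ground sample
# distance (GSD), from GSD = H * pixel_pitch / focal_length.
def flight_height_m(gsd_m: float, focal_length_mm: float,
                    pixel_pitch_um: float) -> float:
    return gsd_m * (focal_length_mm * 1e-3) / (pixel_pitch_um * 1e-6)

# Hypothetical camera (not from the patent): 8.8 mm lens, 2.4 um pixels.
# A 2 cm GSD then requires flying at roughly 73 m.
print(flight_height_m(gsd_m=0.02, focal_length_mm=8.8, pixel_pitch_um=2.4))
```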
the unmanned aerial vehicle sequence image is subjected to feature point extraction and matching through an SIFT operator and a sampling consistency algorithm RANSC, a light beam method adjustment is adopted to optimize a reconstruction result, and finally a dense point cloud with RGB waveband information is obtained through a three-dimensional multi-view stereo vision algorithm PMVS based on a patch.
In step 2, the size of the initial filtering window is determined from the largest plant canopy in the study area; the window must be larger than the largest canopy so that every window is guaranteed to contain at least one ground point. Applying this window to the point cloud generated in step 1, the lowest point in each window is identified and classified as an initial ground point, and the initial ground points are linearly interpolated into an approximate initial terrain model. Second, all remaining unclassified points are evaluated and added to the ground class if they satisfy both of the following conditions: (i) their distance to the initial terrain surface does not exceed a maximum distance threshold, and (ii) the angular difference between the nearest initial ground point and the initial terrain surface and the angular difference between the nearest initial ground point and the evaluated point are both less than a maximum angle threshold. The filtering window, the maximum distance threshold and the maximum angle threshold are gradually reduced following an aggressive strategy, removing vegetation step by step from large to small; for a complex scene mixing trees, shrubs and grass, three rounds of aggressive filtering are suggested. Taking a mixed tree-shrub-grass area in a semi-arid region as an example, the parameters of the three aggressive rounds are: window sizes 10 m, 2 m, 0.5 m; maximum distance thresholds 0.6 m, 0.3 m, 0.1 m; maximum angle thresholds 7°, 3°, 1°. A simplified sketch of this progressive filtering is given below.
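The sketch implements only the window-minimum seeding and the distance criterion; the angle criterion is omitted for brevity, the vertical offset stands in for the point-to-surface distance, and the parameter rounds are the semi-arid example above:

```python
import numpy as np
from scipy.interpolate import griddata

def aggressive_filter(points: np.ndarray,
                      rounds=((10.0, 0.6), (2.0, 0.3), (0.5, 0.1))):
    """points: (N, 3) array of x, y, z. Returns the retained ground points."""
    ground = points
    for window, max_dist in rounds:
        # (1) lowest point per window cell -> provisional ground seeds
        ij = np.floor(ground[:, :2] / window).astype(np.int64)
        lowest = {}
        for idx, key in enumerate(map(tuple, ij)):
            if key not in lowest or ground[idx, 2] < ground[lowest[key], 2]:
                lowest[key] = idx
        seeds = ground[list(lowest.values())]
        # (2) interpolate an initial terrain surface from the seeds and keep
        #     only points within this round's distance threshold (vertical
        #     offset used as a stand-in for point-to-surface distance)
        z0 = griddata(seeds[:, :2], seeds[:, 2], ground[:, :2],
                      method='linear')
        z_near = griddata(seeds[:, :2], seeds[:, 2], ground[:, :2],
                          method='nearest')
        z0 = np.where(np.isnan(z0), z_near, z0)
        ground = ground[np.abs(ground[:, 2] - z0) <= max_dist]
    return ground
```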
The ExG-ExR index of the point cloud generated in step 1 is then computed with formula (1); a binary ground classification image is obtained by Otsu automatic segmentation of the index, and its ground class is intersected with the ground point cloud from the two-step filtering to obtain the final ground point cloud.
ExG-ExR=3*G-2.4*R-B (1)
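A minimal sketch of formula (1) plus per-point Otsu segmentation; treating points below the threshold as ground (vegetation scores high on ExG-ExR) is an assumed sign convention, and `threshold_otsu` is scikit-image's implementation:

```python
import numpy as np
from skimage.filters import threshold_otsu

def rgb_ground_mask(rgb: np.ndarray) -> np.ndarray:
    """rgb: (N, 3) array of R, G, B values in [0, 1].
    Returns True for points classified as ground."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    exg_exr = 3.0 * g - 2.4 * r - b      # formula (1): ExG - ExR
    t = threshold_otsu(exg_exr)          # automatic Otsu threshold
    return exg_exr < t                   # low index -> non-vegetated (assumed)

# Final ground set = intersection with the two-step filtering result, e.g.:
# ground = points[rgb_ground_mask(rgb) & twostep_mask]
```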
In step 3, the grid size used to regularize the ground point cloud follows the principle of effectively resolving the object of study: when shrub canopy height is the main target, for example, a 5 cm grid can be chosen, and the grid is suggested to be slightly larger than the ground resolution of the UAV imagery. The mean or maximum elevation of the points within each cell is taken as the cell value; cells containing no points are set to a null value. A minimal gridding sketch follows.
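The sketch bins an (N, 3) point array into such a grid, taking the mean elevation per cell and NaN as the null value; the 5 cm default mirrors the shrub example above:

```python
import numpy as np

def rasterize(points: np.ndarray, cell: float = 0.05) -> np.ndarray:
    """Returns a 2-D grid of mean elevations (NaN where a cell is empty)."""
    xy0 = points[:, :2].min(axis=0)
    ij = np.floor((points[:, :2] - xy0) / cell).astype(np.int64)
    ni, nj = ij.max(axis=0) + 1
    zsum = np.zeros((ni, nj))
    count = np.zeros((ni, nj))
    np.add.at(zsum, (ij[:, 0], ij[:, 1]), points[:, 2])   # sum z per cell
    np.add.at(count, (ij[:, 0], ij[:, 1]), 1.0)           # points per cell
    grid = np.full((ni, nj), np.nan)
    filled = count > 0
    grid[filled] = zsum[filled] / count[filled]
    return grid
```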
The specific operation steps of step 4 are as follows:
(I) extract all section lines of the gridded point cloud of step 3, by rows and by columns respectively;
(II) for each section line, detect the number and lengths of the hole regions (vegetation regions), and sort the holes from longest to shortest;
(III) let L be the length of the longest hole, incremented by one if even; take 3(L-1)/2 points outward from the left hole boundary and likewise 3(L-1)/2 points outward from the right boundary, and divide these 3(L-1) points plus the hole of length L into a left, a middle and a right segment; fit the first 3(L-1)/2 points of the left segment with a least-squares quadratic and predict (L-1)/2 points rightward, fit the first 3(L-1)/2 points of the right segment with a least-squares quadratic and predict (L-1)/2 points leftward, and fit (L-1) points of the middle segment with a least-squares quadratic to predict the L hole points, so that the predictions of the left, middle and right segments coincide on L points;
(IV) let y^(i)(d1) and y^(i+1)(d2), d1, d2 = 1, 2, …, L, denote the fitted polynomials of the i-th and (i+1)-th segments respectively; weighted-average the left-middle and middle-right overlapping regions by formula (2), with weights decreasing linearly with the distance from the point to the center of the line segment; the weighting is symmetric and effectively eliminates any jumps or discontinuities near the boundaries of adjacent parts, and in fact this scheme ensures that the filled region is continuous and smooth everywhere, consistent with generally regular terrain;
(2): [formula (2) is rendered as an image in the original; per the surrounding description it is a linear blend of y^(i) and y^(i+1) over the L overlapping points, e.g. y(d) = ((L+1-d)*y^(i)(d) + d*y^(i+1)(d))/(L+1), d = 1, 2, …, L, with weights decreasing linearly away from each segment]
(V) replace the hole region with the weighted values obtained in (IV), and execute steps (II) to (IV) cyclically until all holes are eliminated, obtaining two terrain models extracted by rows and by columns;
(VI) take the weighted average of the two terrain models to obtain the final terrain model; a simplified sketch of this section-line interpolation is given after this list.
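The sketch reduces the patent's three-segment scheme to two quadratic fits per hole (left and right supports) blended with linearly decreasing weights, and it skips supports that contain other holes; it is an approximation under those assumptions, not the exact procedure above:

```python
import numpy as np

def fill_profile(z: np.ndarray) -> np.ndarray:
    """Fill each NaN run (hole) in a 1-D grid profile by least-squares
    quadratic fits on the supports left and right of the hole, blended
    with linearly decreasing weights."""
    z = z.copy()
    delta = np.diff(np.concatenate(([0], np.isnan(z).astype(np.int8), [0])))
    starts, stops = np.flatnonzero(delta == 1), np.flatnonzero(delta == -1)
    for start, stop in zip(starts, stops):
        L = stop - start
        n = max(3, 3 * L // 2)                    # support length per side
        left = z[max(0, start - n):start]
        right = z[stop:stop + n]
        d = np.arange(L)                          # positions inside the hole
        yl = yr = None
        if len(left) >= 3 and not np.isnan(left).any():
            yl = np.polyval(np.polyfit(np.arange(-len(left), 0), left, 2), d)
        if len(right) >= 3 and not np.isnan(right).any():
            yr = np.polyval(np.polyfit(np.arange(L, L + len(right)),
                                       right, 2), d)
        if yl is not None and yr is not None:
            w = (L - d) / L                       # linearly decreasing weights
            z[start:stop] = w * yl + (1 - w) * yr
        elif yl is not None:
            z[start:stop] = yl
        elif yr is not None:
            z[start:stop] = yr
    return z

# Step 4 then fills every row profile and every column profile of the
# gridded ground points and averages the two filled grids.
```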
In step 5, the original three-dimensional dense point cloud obtained in step 1 is gridded with the same grid size as in step 3 to obtain the original surface model, and the terrain model interpolated in step 4 is subtracted from it to obtain the final regional vegetation canopy structure model.
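The subtraction itself is a single array operation; `dsm` and `dtm` below stand for the grids produced by the rasterization and interpolation sketches above, and clipping negative differences to zero is a common convention rather than something the patent specifies:

```python
import numpy as np

# canopy height model: per-cell difference of surface and terrain grids;
# small negative values from interpolation noise are clipped to zero
chm = np.clip(dsm - dtm, 0.0, None)
```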
Finally, it should be noted that step 4 takes the section lines of the gridded point cloud by rows and by columns as the basic object of study, which can be understood as four-direction interpolation of the ground points. If a more accurate and smoother vegetation canopy model is desired, a person skilled in the art can introduce the two diagonal directions of the gridded point cloud to realize eight-direction interpolation, or even sixteen, thirty-two and more directions; the interpolation results of the multiple directions in a hole area are then averaged, after removing erroneous points, as the final interpolation result. This makes the result more reliable and restores hydrologic connectivity to a certain degree.
The present invention and its embodiments have been described above. The description is not limiting; the drawings show only one embodiment, and the actual structure is not limited to it. In summary, those skilled in the art can devise similar methods and embodiments without departing from the spirit and scope of the present invention.

Claims (4)

1. A rapid vegetation canopy height extraction method based on an unmanned aerial vehicle RGB camera, characterized by comprising the following steps:
step 1, generating a three-dimensional dense point cloud with RGB band information by the structure-from-motion method, based on image data acquired by an unmanned aerial vehicle;
step 2, performing ground point filtering on the dense point cloud with aggressive two-step filtering, simultaneously computing the ExG-ExR index of the point cloud, obtaining ground points by Otsu segmentation of the ExG-ExR index, and taking the intersection of the ground points obtained by the aggressive two-step filtering and by the Otsu method to obtain the ground point cloud;
step 3, performing regular gridding on the ground point cloud obtained in step 2;
step 4, based on the regularly gridded ground point cloud of step 3, extracting section lines along the row and column directions, performing least-squares quadratic fitting on each section line, and predicting and filling all ground holes to obtain two surface models filled by rows and by columns, then taking the weighted average of the two surface models to obtain a weighted-average ground model;
step 5, performing on the three-dimensional dense point cloud obtained in step 1 the same regular gridding as in step 3 to obtain an original surface model, and subtracting the weighted-average ground model of step 4 from the original surface model to obtain the canopy structure model.
2. The rapid vegetation canopy height extraction method based on an unmanned aerial vehicle RGB camera of claim 1, wherein in step 1 the unmanned aerial vehicle acquires the image data as follows: two mutually perpendicular groups of flight strips covering the whole area are preset according to the shape of the target area, with forward and side overlap both reaching 80%;
in step 1, the three-dimensional dense point cloud is generated as follows: feature points are extracted and matched with the SIFT operator and the random sample consensus algorithm RANSAC, the reconstruction result is optimized by bundle adjustment, and the dense point cloud is finally obtained with the patch-based multi-view stereo algorithm PMVS.
3. The rapid vegetation canopy height extraction method based on an unmanned aerial vehicle RGB camera of claim 1, characterized in that:
the ground point cloud of step 2 is obtained as follows: (I) the size of an initial filtering window is determined from the largest plant canopy in the target area; applying this window to the three-dimensional dense point cloud generated in step 1, the lowest point in each window is identified and classified as an initial ground point, and the initial ground points are then linearly interpolated to obtain an initial terrain model;
(II) the remaining unclassified points are evaluated: a point is added to the ground class if it satisfies both of the following conditions,
(a) its distance to the initial terrain surface does not exceed a maximum distance threshold;
(b) the angular difference between the nearest initial ground point and the initial terrain surface and the angular difference between the nearest initial ground point and the evaluated point are both less than a maximum angle threshold;
the filtering window, the maximum distance threshold and the maximum angle threshold are gradually reduced following an aggressive strategy so as to remove vegetation step by step from large to small; a complex scene mixing trees, shrubs and grass is subjected to three rounds of aggressive filtering to obtain the ground point cloud;
(III) the ExG-ExR index of the three-dimensional dense point cloud generated in step 1 is computed with formula (1), a binary ground classification image is obtained by Otsu automatic segmentation of the index, and its ground class is intersected with the ground point cloud from the two-step filtering to obtain the ground point cloud of step 2.
ExG-ExR=3*G-2.4*R-B (1)
4. The rapid vegetation canopy height extraction method based on an unmanned aerial vehicle RGB camera of claim 1, wherein the weighted-average ground model of step 4 is obtained as follows:
(I) all section lines of the regularly gridded point cloud of step 3 are extracted, by rows and by columns respectively;
(II) for each section line, the number and lengths of the hole regions are detected, and the holes are sorted from longest to shortest;
(III) let L be the length of the longest hole, incremented by one if even; 3(L-1)/2 points are taken outward from the left hole boundary and likewise 3(L-1)/2 points outward from the right boundary, and these 3(L-1) points plus the hole of length L are divided into a left, a middle and a right segment; the first 3(L-1)/2 points of the left segment are fitted with a least-squares quadratic to predict (L-1)/2 points rightward, the first 3(L-1)/2 points of the right segment are fitted with a least-squares quadratic to predict (L-1)/2 points leftward, and (L-1) points of the middle segment are fitted with a least-squares quadratic to predict the L hole points, so that the predictions of the left, middle and right segments coincide on L points;
(IV) let y^(i)(d1) and y^(i+1)(d2), d1, d2 = 1, 2, …, L, denote the fitted polynomials of the i-th and (i+1)-th segments respectively; the left-middle and middle-right overlapping regions are each weighted-averaged by formula (2), with weights decreasing linearly with the distance from the point to the center of the line segment, which effectively eliminates any jumps or discontinuities near the boundaries of adjacent parts;
(2): [formula (2) is rendered as an image in the original; per the surrounding description it is a linear blend of y^(i) and y^(i+1) over the L overlapping points, e.g. y(d) = ((L+1-d)*y^(i)(d) + d*y^(i+1)(d))/(L+1), d = 1, 2, …, L, with weights decreasing linearly away from each segment]
(V) the hole region is replaced with the weighted values obtained in (IV), and steps (II) to (IV) are executed cyclically until all holes are eliminated, yielding two terrain models extracted by rows and by columns;
(VI) the two terrain models are weighted-averaged to obtain the final terrain model.
CN202110649212.2A (priority date 2021-06-10, filed 2021-06-10): Vegetation canopy height rapid extraction method based on unmanned aerial vehicle RGB camera. Status: Pending. Published as CN113379919A (en).

Priority Applications (1)

CN202110649212.2A (priority date 2021-06-10, filed 2021-06-10): Vegetation canopy height rapid extraction method based on unmanned aerial vehicle RGB camera

Publications (1)

CN113379919A, published 2021-09-10

Family

Family ID: 77573743
Family application: CN202110649212.2A (pending): Vegetation canopy height rapid extraction method based on unmanned aerial vehicle RGB camera
Country status: CN: CN113379919A (en)

Cited By (5)

* Cited by examiner, † Cited by third party

  • CN114000396A * (published 2022-02-01): Automatic repairing system and method for unevenness of highway road
  • CN114548277A * (published 2022-05-27): Method and system for fitting ground points and extracting crop height based on point cloud data
  • CN114548277B * (published 2023-09-08): Method and system for ground point fitting and crop height extraction based on point cloud data
  • CN117095134A * (published 2023-11-21): Three-dimensional marine environment data interpolation processing method
  • CN117095134B * (published 2023-12-22): Three-dimensional marine environment data interpolation processing method

Legal Events

  • PB01: Publication
  • SE01: Entry into force of request for substantive examination