CN117456364B - Grassland biomass estimation method and system based on SfM and grassland height factors - Google Patents


Info

Publication number: CN117456364B
Application number: CN202311487751.6A
Authority: CN (China)
Prior art keywords: grassland, data, target area, biomass, vegetation
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other versions: CN117456364A (Chinese, zh)
Inventors: 孙伟, 曹姗姗, 孔繁涛, 曹梦迪, 韩昀, 陈若彤
Original and current assignee: Agricultural Information Institute of CAAS
Application filed by Agricultural Information Institute of CAAS; priority to CN202311487751.6A; publication of application CN117456364A; application granted; publication of grant CN117456364B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/147 Details of sensors, e.g. sensor lenses
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones
    • G06V20/188 Vegetation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Remote Sensing (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Vascular Medicine (AREA)
  • Computer Graphics (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a grassland biomass estimation method and system based on SfM and grassland height factors, relating to the technical field of grassland biomass monitoring. An unmanned aerial vehicle acquires multispectral and ordinary image data of the grassland; based on feature points of the ordinary images and pre-set ground control point data, the SfM (Structure from Motion) method reconstructs three-dimensional scene coordinates, from which a digital surface model DSM and a vegetation-free digital terrain model DTM are obtained, yielding high-resolution grass layer height data. Vegetation indices are calculated from the multispectral image data, and a prediction model for grassland biomass is constructed by combining them with the grass layer height data. The method meets the accuracy, timeliness and cost requirements of aboveground grassland biomass estimation, better guides grassland grazing management, and helps maintain the grass-livestock balance.

Description

Grassland biomass estimation method and system based on SfM and grassland height factors
Technical Field
The invention relates to the technical field of grassland biomass monitoring, in particular to a grassland biomass estimation method and system based on SfM and grassland height factors.
Background
Grasslands are an important component of the terrestrial ecosystem, provide vital ecological services such as climate regulation, water conservation, windbreak and sand fixation, and carbon sequestration, and play an important role in global climate change. In recent years, increasing grazing intensity and the development of grassland resources have made grassland degradation increasingly prominent, so evaluating grassland productivity and maintaining the grass-livestock balance is particularly important. Aboveground grassland biomass (AGB) is a key index for evaluating grassland productivity, and accurate inversion of grassland AGB is of great significance for grassland growth monitoring and grass-livestock balance.
However, the existing grassland AGB estimation methods are mainly the harvesting method, vegetation-characteristic estimation and remote-sensing estimation. The harvesting method is highly destructive to the grassland and consumes enormous manpower and material resources, so it is only suitable for biomass estimation within small sample plots. Vegetation-characteristic estimation establishes a mathematical model between characteristics such as plant height, compressed grass height and grassland coverage and grassland biomass, and then applies it to other areas of the same or similar grassland; however, its accuracy is hard to guarantee under the influence of uneven grassland growth distribution, differing plant growth stages and similar factors. Remote-sensing estimation is suitable for long-term supervision and management of large-scale grasslands but is limited by the spatial and temporal resolution of satellite sensors, so it also faces insufficient spatial and temporal resolution when estimating grassland biomass. These estimation methods therefore struggle to meet the accuracy and timeliness requirements of grassland biomass estimation over the small-scale areas of a pasture.
Therefore, how to provide a method and a system for timely and accurately estimating the biomass on the grasslands is a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the invention provides a grassland biomass estimation method and system based on SfM and grassland height factors, so as to estimate grassland biomass in a timely and accurate manner.
In order to achieve the above object, the present invention provides the following technical solutions:
The invention discloses a grassland biomass estimation method based on SfM and grassland height factors, which comprises the following specific steps:
Step 1: setting ground control points in a target area and recording their coordinates;
Step 2: acquiring multispectral data and ordinary image data of the target area with an unmanned aerial vehicle, and acquiring grass layer height data and aboveground biomass data of sample quadrats in the target area in the field;
Step 3: calculating vegetation indices of the target area based on the multispectral data;
Step 4: acquiring the grass layer height of the target area with the SfM method based on the ordinary image data and the ground control point coordinates;
Step 5: constructing a random forest model and a support vector regression model from the target-area grass layer height, the vegetation indices and the quadrat biomass data, evaluating the accuracy of each model, and selecting the model with the highest fitting accuracy to estimate the aboveground biomass of the target area.
Further, step 2 further includes:
selecting a plurality of sample quadrats by random sampling, with the quadrat area set according to the pixel size of the unmanned aerial vehicle remote-sensing image; measuring the vertical distance from the plant canopy to the ground within each quadrat and taking its mean as the quadrat grass layer height; and obtaining plant samples within each quadrat by clipping flush with the ground, then weighing the fresh weight and dry weight of the samples to obtain quadrat biomass data.
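As a minimal illustration of the field measurements just described (the quadrat size and sample numbers below are assumed for illustration, not taken from the patent):

```python
# Quadrat grass layer height = mean plant canopy-to-ground distance;
# quadrat biomass = dry weight scaled to the quadrat area.
import statistics

def quadrat_grass_height(canopy_heights_cm):
    """Mean vertical canopy-to-ground distance within one quadrat (cm)."""
    return statistics.mean(canopy_heights_cm)

def quadrat_biomass_g_per_m2(dry_weight_g, quadrat_area_m2):
    """Dry weight of the clipped sample scaled to grams per square metre."""
    return dry_weight_g / quadrat_area_m2

heights = [23.5, 25.1, 21.8, 24.0, 22.6]    # five plants in one quadrat (assumed)
print(quadrat_grass_height(heights))         # -> 23.4 cm
print(quadrat_biomass_g_per_m2(86.4, 0.25))  # 0.5 m x 0.5 m quadrat -> 345.6 g/m2
```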
Further, step 3 specifically includes the following steps:
Step 3.1: performing radiometric correction, geometric correction, and mosaicking and cropping on the multispectral data;
Step 3.2: calculating vegetation indices of the target area based on the processed multispectral data, including the normalized difference vegetation index NDVI, difference vegetation index DVI, ratio vegetation index RVI, enhanced vegetation index EVI, soil-adjusted vegetation index SAVI and triangular vegetation index TVI, with the following calculation formulas:
NDVI=(NIR-Red)/(NIR+Red)
DVI=NIR-Red
RVI=NIR/Red
EVI=2.5(NIR-Red)/(NIR+6Red-7.5Blue+1)
SAVI=(1+L)(NIR-Red)/(NIR+Red+L)
TVI=0.5[120(NIR-Green)-200(Red-Green)]
where NIR, Red, Blue and Green are the reflectances of the near-infrared, red, blue and green bands respectively, and L is the soil-adjustment coefficient.
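The six indices above are computed per pixel on the reflectance rasters; a brief sketch on numpy arrays (the one-pixel reflectance values and L = 0.5 are illustrative assumptions):

```python
# Vegetation indices of Step 3.2 computed on reflectance arrays.
import numpy as np

def vegetation_indices(nir, red, green, blue, L=0.5):
    """Return the six indices named in the text as a dict of arrays."""
    return {
        "NDVI": (nir - red) / (nir + red),
        "DVI":  nir - red,
        "RVI":  nir / red,
        "EVI":  2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0),
        "SAVI": (1.0 + L) * (nir - red) / (nir + red + L),
        "TVI":  0.5 * (120.0 * (nir - green) - 200.0 * (red - green)),
    }

# One-pixel example with plausible grassland reflectances (assumed values)
idx = vegetation_indices(np.array([0.45]), np.array([0.08]),
                         np.array([0.12]), np.array([0.04]))
```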
Further, step 4 specifically includes the following steps:
Step 4.1: acquiring a digital surface model DSM of the target area from the ordinary image data and the ground control point coordinates;
Step 4.2: acquiring a vegetation-free digital terrain model DTM based on the digital surface model DSM;
Step 4.3: subtracting the vegetation-free digital terrain model DTM from the digital surface model DSM by raster overlay to obtain a grass layer height model, thereby obtaining the grass layer height data of the target area;
Step 4.4: checking the accuracy of the grass layer height model against the quadrat grass layer height data obtained in step 2.
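The overlay subtraction in step 4.3 is a per-cell raster difference; a minimal sketch assuming the DSM and DTM are co-registered grids in metres:

```python
# Grass layer height model = DSM minus DTM, clamped at zero so that small
# interpolation artefacts cannot produce negative heights.
import numpy as np

def grass_height_model(dsm, dtm):
    """Per-cell canopy height from co-registered DSM and DTM rasters."""
    return np.clip(dsm - dtm, 0.0, None)

dsm = np.array([[102.3, 102.6], [102.1, 102.4]])  # illustrative elevations (m)
dtm = np.array([[102.0, 102.0], [102.0, 102.5]])
print(grass_height_model(dsm, dtm))
```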
Further, step 4.1 further includes the following steps:
Step 4.1.1: preprocessing the ordinary image data of the target area, including denoising, distortion correction and image enhancement;
Step 4.1.2: extracting features from the preprocessed ordinary images: extracting feature points in the different images, describing them with the SIFT and SURF local feature descriptors, and using the ground control points to assist feature point detection and selection; then matching the feature points between different images to establish the corresponding matching relations;
Step 4.1.3: estimating the camera motion and pose between different images from the feature point matches by an optimization method that solves for the geometric transformation and camera pose;
Step 4.1.4: performing triangulation according to the camera poses and feature point matches, and computing the three-dimensional coordinates of the feature points from their matches across multiple view angles to obtain a sparse point cloud. For a feature point observed in two camera views, the projection model gives:
u1=fx1*X1/Z1+cx1, v1=fy1*Y1/Z1+cy1
u2=fx2*X2/Z2+cx2, v2=fy2*Y2/Z2+cy2
where (u1, v1) and (u2, v2) are the projection coordinates of the feature point in the two camera views; (cx1, cy1) and (cx2, cy2) are the optical center coordinates of the two views; (fx1, fy1) and (fx2, fy2) are the focal lengths of the two views; Z1 and Z2 are the depth values of the feature point in the two views; and P1=(X1, Y1, Z1) and P2=(X2, Y2, Z2) are the three-dimensional coordinates of the feature point in the two camera frames. The three-dimensional coordinates of the feature points are then refined by linear interpolation combined with the pixel displacement between the two images;
Step 4.1.5: estimating the depth values of missing regions by bilinear interpolation to obtain dense three-dimensional point cloud data and hence the digital surface model DSM, where the bilinear interpolation is calculated as:
Z=Z00*(1-dx)*(1-dy)+Z01*(1-dx)*dy+Z10*dx*(1-dy)+Z11*dx*dy
where Z is the depth estimate at pixel coordinates (x, y); Z00, Z01, Z10 and Z11 are the depth values at the four nearest pixel coordinates (x0, y0), (x0, y1), (x1, y0) and (x1, y1) respectively; and dx and dy are the interpolation ratios between (x, y) and those four coordinates in the horizontal and vertical directions.
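The two-view triangulation of step 4.1.4 can be sketched with the standard linear (DLT) solution; the camera intrinsics, baseline and feature point below are illustrative assumptions, and the SVD-based least-squares solver is a common stand-in for whichever optimization the pipeline actually uses:

```python
# Linear triangulation of one matched feature point from two camera views.
import numpy as np

def triangulate_two_views(P1, P2, uv1, uv2):
    """Recover the 3D point X from its pixel observations in two views.

    P1, P2: 3x4 projection matrices K[R|t]; uv1, uv2: matched pixel (u, v).
    Each observation gives two rows of the homogeneous system A X = 0,
    solved in the least-squares sense via SVD."""
    (u1, v1), (u2, v2) = uv1, uv2
    A = np.stack([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null-space vector, homogeneous 4-vector
    return X[:3] / X[3]

# Assumed setup: fx = fy = 800 px, optical centre (320, 240),
# second camera shifted 1 m along x (a simple stereo baseline).
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = triangulate_two_views(P1, P2, (400.0, 272.0), (240.0, 272.0))
```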
Further, step 4.2 further includes the following steps:
Step 4.2.1: converting the digital surface model DSM into a tiff-format DSM image, and separating vegetated from non-vegetated areas with image segmentation;
Step 4.2.2: extracting features from the DSM image and the original ordinary images, comparing their gray values and gradients, and distinguishing vegetation from non-vegetation by thresholding to generate a vegetation mask;
Step 4.2.3: removing the vegetation parts of the DSM image with the vegetation mask to obtain vegetation-free bare-ground information;
Step 4.2.4: filling the ground information missing after vegetation removal by Kriging interpolation to obtain a continuous vegetation-free digital terrain model DTM.
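The vegetation-removal and hole-filling steps above amount to masking vegetation cells and refilling them from bare-ground cells. The sketch below uses a simple inverse-distance-weighting fill as a lightweight stand-in for the Kriging interpolation named in the text; the DSM values and mask are illustrative assumptions:

```python
# Mask vegetation cells out of the DSM, then fill the holes from the
# remaining bare-ground cells to obtain a continuous terrain surface.
import numpy as np

def fill_masked_cells_idw(dsm, vegetation_mask):
    """Inverse-distance-weighted fill of masked cells (stand-in for Kriging)."""
    rows, cols = np.indices(dsm.shape)
    gr, gc = rows[~vegetation_mask], cols[~vegetation_mask]
    gz = dsm[~vegetation_mask]
    dtm = dsm.astype(float).copy()
    for r, c in zip(rows[vegetation_mask], cols[vegetation_mask]):
        w = 1.0 / ((gr - r) ** 2 + (gc - c) ** 2)  # inverse squared distance
        dtm[r, c] = np.sum(w * gz) / np.sum(w)
    return dtm

dsm = np.array([[100.0, 100.2, 100.1],
                [100.0, 101.7, 100.2],   # centre cell is a grass tussock
                [100.1, 100.0, 100.2]])
mask = np.zeros_like(dsm, dtype=bool)
mask[1, 1] = True                        # vegetation mask from thresholding
dtm = fill_masked_cells_idw(dsm, mask)
```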
Further, step 5 specifically includes the following steps:
Step 5.1: resampling the vegetation index data and the grass layer height data to a uniform spatial resolution;
Step 5.2: constructing a training set and a test set from the quadrat biomass data and the corresponding computed target-area grass layer height and vegetation index data; constructing a random forest model and a support vector regression model, and training each with the training set;
Step 5.3: testing model accuracy with the test set, and evaluating the fitting accuracy with the root mean square error (RMSE) and the coefficient of determination (R2), calculated as:
RMSE=sqrt((1/N)*Σ(yi-ŷi)^2)
R2=1-Σ(yi-ŷi)^2/Σ(yi-ȳ)^2
where yi is the measured aboveground biomass of the i-th sample quadrat, ŷi is its predicted aboveground biomass, ȳ is the mean of the measured quadrat biomass values, and N is the total number of test quadrats;
Step 5.4: selecting the model with the better fitting accuracy to estimate the grassland biomass of the target area and obtain the current grassland biomass state of the pasture.
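The two accuracy metrics of step 5.3 in code form:

```python
# RMSE and coefficient of determination as defined in Step 5.3.
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error of the quadrat biomass predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r_squared(y_true, y_pred):
    """R2 = 1 - SS_res / SS_tot over the test quadrats."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```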
The invention also discloses a grassland biomass estimation system based on SfM and grassland height factors, comprising:
A recording module: setting ground control points in a target area and recording their coordinates;
An acquisition module: acquiring multispectral data and ordinary image data of the target area with an unmanned aerial vehicle, and acquiring grass layer height data and aboveground biomass data of sample quadrats in the target area in the field;
A vegetation index calculation module: calculating vegetation indices of the target area based on the multispectral data;
A grass layer height acquisition module: acquiring the grass layer height of the target area with the SfM method based on the ordinary image data and the ground control point coordinates;
An estimation module: constructing a random forest model and a support vector regression model from the target-area grass layer height, the vegetation indices and the quadrat biomass data, evaluating the accuracy of each model, and selecting the model with the highest fitting accuracy to estimate the aboveground biomass of the target area.
Further, the grass layer height acquisition module further includes:
A first acquisition unit: acquiring a digital surface model DSM of the target area from the ordinary image data and the ground control point coordinates;
A second acquisition unit: acquiring a vegetation-free digital terrain model DTM based on the digital surface model DSM;
A grass layer height calculation unit: subtracting the vegetation-free digital terrain model DTM from the digital surface model DSM by raster overlay to obtain a grass layer height model, thereby obtaining the grass layer height data of the target area;
A checking unit: checking the accuracy of the grass layer height model against the quadrat grass layer height data acquired by the acquisition module.
Compared with the prior art, the grassland biomass estimation method and system based on SfM and grassland height factors provided by the invention acquire multispectral and ordinary image data of the grassland with an unmanned aerial vehicle, offering good real-time performance, high accuracy and low cost. With the SfM method, three-dimensional scene coordinates are reconstructed from image features and ground control points, and the digital surface model DSM and vegetation-free digital terrain model DTM are obtained quickly and accurately, yielding high-resolution grass layer height data. By constructing the estimation model with the vegetation indices and the grass layer height jointly as explanatory variables, the estimation accuracy of aboveground grassland biomass is improved and the accuracy, timeliness and cost requirements of aboveground grassland biomass estimation are met, thereby better guiding pasture grazing management and maintaining the grass-livestock balance.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic overall flow chart of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The embodiment of the invention discloses a grassland biomass estimation method based on SfM and grassland height factors, which, as shown in FIG. 1, comprises the following specific steps:
Step 1: setting ground control points in a target area and recording their coordinates;
Step 2: acquiring multispectral data and ordinary image data of the target area with an unmanned aerial vehicle, and acquiring grass layer height data and aboveground biomass data of sample quadrats in the target area in the field;
Step 3: calculating vegetation indices of the target area based on the multispectral data;
Step 4: acquiring the grass layer height of the target area with the SfM method based on the ordinary image data and the ground control point coordinates;
Step 5: constructing a random forest model and a support vector regression model from the target-area grass layer height, the vegetation indices and the quadrat biomass data, evaluating the accuracy of each model, and selecting the model with the highest fitting accuracy to estimate the aboveground biomass of the target area.
In one implementation, uniformly distributed ground control points are first set in the target area, and the precise position of each is measured with a global navigation satellite system as a reference for correcting and georeferencing the aerial images, which improves the accuracy and reliability of the three-dimensional scene reconstruction. The unmanned aerial vehicle then carries a multispectral camera and an ordinary camera for synchronous ground photography, with a flying height of 30 m, a forward (along-track) image overlap of 80% and a side (cross-track) overlap of 50%, ensuring efficient flights while acquiring high-resolution image data from different shooting angles.
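The flight parameters above determine the photo and flight-line spacing. A small illustrative calculation follows; the sensor width, focal length and resulting 30 m footprint are assumed values, not taken from the patent:

```python
# Pinhole-model footprint and exposure spacing for given overlaps.
def ground_footprint_m(flying_height_m, sensor_dim_mm, focal_length_mm):
    """Ground distance covered by one image dimension."""
    return flying_height_m * sensor_dim_mm / focal_length_mm

def trigger_spacing_m(footprint_m, overlap):
    """Distance between exposures (or flight lines) for a given overlap."""
    return footprint_m * (1.0 - overlap)

# Assumed sensor: 8.8 mm wide with an 8.8 mm lens -> 30 m footprint at 30 m.
footprint = ground_footprint_m(30.0, 8.8, 8.8)
print(trigger_spacing_m(footprint, 0.80))  # along-track photo interval (~6 m)
print(trigger_spacing_m(footprint, 0.50))  # cross-track line spacing (~15 m)
```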
In a specific embodiment, step 2 further comprises:
selecting a plurality of sample quadrats by random sampling, with the quadrat area set according to the pixel size of the unmanned aerial vehicle remote-sensing image; measuring the vertical distance from the plant canopy to the ground within each quadrat and taking its mean as the quadrat grass layer height; and obtaining plant samples within each quadrat by clipping flush with the ground, weighing their fresh weight, then drying the samples to constant weight at 70 °C in the laboratory to obtain quadrat biomass data.
In one embodiment, step 3 is specifically as follows:
Step 3.1: performing radiometric correction, geometric correction, and mosaicking and cropping on the multispectral data.
Step 3.2: calculating vegetation indices of the target area based on the processed multispectral data, including the normalized difference vegetation index NDVI, difference vegetation index DVI, ratio vegetation index RVI, enhanced vegetation index EVI, soil-adjusted vegetation index SAVI and triangular vegetation index TVI, with the following calculation formulas:
NDVI=(NIR-Red)/(NIR+Red)
DVI=NIR-Red
RVI=NIR/Red
EVI=2.5(NIR-Red)/(NIR+6Red-7.5Blue+1)
SAVI=(1+L)(NIR-Red)/(NIR+Red+L)
TVI=0.5[120(NIR-Green)-200(Red-Green)]
where NIR, Red, Blue and Green are the reflectances of the near-infrared, red, blue and green bands respectively, and L is the soil-adjustment coefficient.
In one embodiment, step 4 is specifically as follows:
Step 4.1: acquiring a digital surface model DSM of the target area from the ordinary image data and the ground control point coordinates.
Step 4.2: acquiring a vegetation-free digital terrain model DTM based on the digital surface model DSM.
Step 4.3: subtracting the vegetation-free digital terrain model DTM from the digital surface model DSM by raster overlay to obtain a grass layer height model, thereby obtaining the grass layer height data of the target area.
Step 4.4: checking the accuracy of the grass layer height model against the quadrat grass layer height data obtained in step 2.
In a specific embodiment, step 4.1 further comprises the steps of:
Step 4.1.1: preprocessing the ordinary image data of the target area, including image denoising, distortion correction and image enhancement.
Step 4.1.2: extracting features from the preprocessed ordinary images: extracting feature points in the different images, describing them with the SIFT and SURF local feature descriptors, and using the ground control points to assist feature point detection and selection; then matching the feature points between different images to establish the corresponding matching relations.
Step 4.1.3: estimating the camera motion and pose between different images from the feature point matches by an optimization method that solves for the geometric transformation and camera pose.
Step 4.1.4: performing triangulation according to the camera poses and feature point matches, and computing the three-dimensional coordinates of the feature points from their matches across multiple view angles to obtain a sparse point cloud. For a feature point observed in two camera views, the projection model gives:
u1=fx1*X1/Z1+cx1, v1=fy1*Y1/Z1+cy1
u2=fx2*X2/Z2+cx2, v2=fy2*Y2/Z2+cy2
where (u1, v1) and (u2, v2) are the projection coordinates of the feature point in the two camera views; (cx1, cy1) and (cx2, cy2) are the optical center coordinates of the two views; (fx1, fy1) and (fx2, fy2) are the focal lengths of the two views; Z1 and Z2 are the depth values of the feature point in the two views; and P1=(X1, Y1, Z1) and P2=(X2, Y2, Z2) are the three-dimensional coordinates of the feature point in the two camera frames. The three-dimensional coordinates of the feature points are then refined by linear interpolation combined with the pixel displacement between the two images.
Step 4.1.5: estimating the depth value of the missing region by adopting a bilinear interpolation method to obtain dense three-dimensional point cloud data, and obtaining a digital surface model DSM, wherein the bilinear interpolation method comprises the following calculation formula:
Z=Z00*(1-dx)(1-dy)+Z01*(1-dx)dy+Z10*dx(1-dy)+Z11*dxdy
Where Z is the depth estimate at pixel coordinates (x, y), Z 00、Z01、Z10 and Z 11 represent the depth values at pixel coordinates (x 0,y0)、(x0,y1)、(x1,y0) and (x 1,y1), respectively, and d x and d y are the difference ratios between (x, y) and the nearest four pixel coordinates (x 0,y0)、(x0,y1)、(x1,y0) and (x 1,y1), respectively, representing the interpolation ratios in the horizontal and vertical directions.
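The bilinear interpolation formula of step 4.1.5, transcribed directly:

```python
# Bilinear depth estimate from the four nearest known depth values.
def bilinear_depth(z00, z01, z10, z11, dx, dy):
    """Z at (x, y); dx, dy are the horizontal/vertical ratios in [0, 1]."""
    return (z00 * (1 - dx) * (1 - dy) + z01 * (1 - dx) * dy
            + z10 * dx * (1 - dy) + z11 * dx * dy)

print(bilinear_depth(1.0, 2.0, 3.0, 4.0, 0.5, 0.5))  # midpoint -> 2.5
```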
In a specific embodiment, step 4.2 further comprises the steps of:
Step 4.2.1: converting the digital surface model DSM into a tiff-format DSM image, separating vegetated from non-vegetated areas with image segmentation, and smoothing the DSM image with a filter.
Step 4.2.2: extracting features from the DSM image and the original ordinary images, comparing their gray values and gradients, and distinguishing vegetation from non-vegetation by thresholding to generate a vegetation mask.
Step 4.2.3: removing the vegetation parts of the DSM image with the vegetation mask to obtain vegetation-free bare-ground information.
Step 4.2.4: filling the ground information missing after vegetation removal by Kriging interpolation to obtain a continuous vegetation-free digital terrain model DTM.
The optimal vegetation-free digital terrain model DTM is obtained by continuously adjusting parameters such as the cell size and the maximum search distance.
In one embodiment, step 5 is specifically as follows:
Step 5.1: resampling the vegetation index data and the grass layer height data to a uniform spatial resolution for estimating the grassland biomass.
Step 5.2: constructing a training set and a test set from the quadrat biomass data and the corresponding computed target-area grass layer height and vegetation index data, and normalizing each feature variable in the data set to eliminate the influence of differing dimensions; constructing a random forest model and a support vector regression model, training each with the training set, and improving model accuracy by continuously tuning the model parameters.
Step 5.3: testing model accuracy with the test set, and evaluating the fitting accuracy with the root mean square error (RMSE) and the coefficient of determination (R2), calculated as:
RMSE=sqrt((1/N)*Σ(yi-ŷi)^2)
R2=1-Σ(yi-ŷi)^2/Σ(yi-ȳ)^2
where yi is the measured aboveground biomass of the i-th sample quadrat, ŷi is its predicted aboveground biomass, ȳ is the mean of the measured quadrat biomass values, and N is the total number of test quadrats.
Step 5.4: the larger the R2 value and the smaller the RMSE value, the better the model fit; the model with the better fitting accuracy is selected to estimate the grassland biomass of the target area and obtain the current grassland biomass state.
One embodiment of the invention also discloses a grassland biomass estimation system based on SfM and grassland height factors, comprising:
A recording module: setting ground control points in a target area and recording their coordinates.
An acquisition module: acquiring multispectral data and ordinary image data of the target area with an unmanned aerial vehicle, and acquiring grass layer height data and aboveground biomass data of sample quadrats in the target area in the field.
A vegetation index calculation module: calculating vegetation indices of the target area based on the multispectral data.
A grass layer height acquisition module: acquiring the grass layer height of the target area with the SfM method based on the ordinary image data and the ground control point coordinates.
An estimation module: constructing a random forest model and a support vector regression model from the target-area grass layer height, the vegetation indices and the quadrat biomass data, evaluating the accuracy of each model, and selecting the model with the highest fitting accuracy to estimate the aboveground biomass of the target area.
In a specific embodiment, the grass layer height acquisition module further comprises:
A first acquisition unit: acquiring a digital surface model DSM of the target area from the ordinary image data and the ground control point coordinates.
A second acquisition unit: acquiring a vegetation-free digital terrain model DTM based on the digital surface model DSM.
A grass layer height calculation unit: subtracting the vegetation-free digital terrain model DTM from the digital surface model DSM by raster overlay to obtain a grass layer height model, thereby obtaining the grass layer height data of the target area.
A checking unit: checking the accuracy of the grass layer height model against the quadrat grass layer height data acquired by the acquisition module.
In the present specification, the embodiments are described in a progressive manner, each focusing mainly on its differences from the others; for identical and similar parts, the embodiments may be referred to one another. Since the system disclosed in the embodiments corresponds to the method disclosed therein, its description is relatively brief, and the relevant points can be found in the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (6)

1. A grassland biomass estimation method based on SfM and grassland height factors, characterized in that it comprises the following steps:
Step 1: setting ground control points in a target area and recording their coordinates;
Step 2: acquiring multispectral data and common image data of the target area with an unmanned aerial vehicle, and acquiring, in the field, grass layer height data and aboveground biomass data of sample quadrats in the target area;
Step 3: calculating vegetation indices of the target area based on the multispectral data;
Step 4: acquiring the grass layer height of the target area using the SfM method, based on the common image data and the ground control point coordinates;
Step 5: constructing a random forest model and a support vector regression model from the target area grass layer height, the vegetation indices and the sample quadrat biomass data, evaluating the accuracy of each model, and selecting the model with the highest fitting accuracy to estimate the aboveground biomass of the target area grassland;
The specific steps of step 4 are as follows:
Step 4.1: acquiring a digital surface model DSM of the target area from the common image data and the ground control point coordinates;
Step 4.2: deriving a vegetation-free digital terrain model DTM from the digital surface model DSM;
Step 4.3: subtracting the vegetation-free digital terrain model DTM from the digital surface model DSM to obtain a grass layer height model, and thereby the grass layer height data of the target area;
Step 4.4: performing an accuracy check of the grass layer height model against the sample quadrat grass layer height data obtained in step 2;
Step 4.1 further comprises the following steps:
Step 4.1.1: preprocessing the common image data of the target area, including denoising, distortion correction and image enhancement;
Step 4.1.2: performing feature extraction on the preprocessed images: extracting feature points from the different images, describing them with SIFT and SURF local feature descriptors, and using the ground control points to assist detection and selection of the feature points; then matching the feature points across images to establish correspondences between the feature points of different images;
Step 4.1.3: estimating the camera motion and pose between the different images from the feature point correspondences, by solving the geometric transformation and camera pose with an optimization method;
Step 4.1.4: triangulation is performed on the image according to the matching relation between the pose of the camera and the characteristic points, and three-dimensional coordinates of the characteristic points are calculated by using the matching relation between the characteristic points under a plurality of view angles, so that sparse point clouds are obtained, and the calculation formula is as follows:
Where, (u 1,v1) and (u 2,v2) are projection coordinates of the feature point in the two camera views, (c x1,cy1) and (c x2,cy2) are optical center coordinates of the two camera views; (f x1,fy1) and (f x2,fy2) are focal lengths of two camera views; z 1 and Z 2 are depth values of the feature points at two viewing angles, respectively; p 1 and P 2 are three-dimensional coordinates of the feature points in two camera view angles, and then the three-dimensional coordinates of the feature points are calculated by a linear interpolation method in combination with pixel displacement of two images;
Step 4.1.5: estimating the depth value of the missing region by adopting a bilinear interpolation method to obtain dense three-dimensional point cloud data, and obtaining a digital surface model DSM, wherein the bilinear interpolation method comprises the following calculation formula:
Z=Z00*(1-dx)(1-dy)+Z01*(1-dx)dy+Z10*dx(1-dy)+Z11*dxdy
Where Z is the depth estimate at pixel coordinates (x, y), Z 00、Z01、Z10 and Z 11 represent the depth values at pixel coordinates (x 0,y0)、(x0,y1)、(x1,y0) and (x 1,y1), respectively, and d x and d y are the difference ratios between (x, y) and the nearest four pixel coordinates (x 0,y0)、(x0,y1)、(x1,y0) and (x 1,y1), respectively, representing the interpolation ratios in the horizontal and vertical directions.
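The bilinear interpolation formula of step 4.1.5 maps directly to code. A minimal sketch (the function name and all argument values are illustrative, not from the patent):

```python
def bilinear_depth(z00, z01, z10, z11, dx, dy):
    """Bilinear depth estimate at (x, y) from the four nearest grid
    depths. dx and dy are the fractional offsets of (x, y) within the
    cell, in [0, 1]: dx toward x1, dy toward y1, matching the formula
    Z = Z00(1-dx)(1-dy) + Z01(1-dx)dy + Z10 dx(1-dy) + Z11 dx dy."""
    return (z00 * (1 - dx) * (1 - dy)
            + z01 * (1 - dx) * dy
            + z10 * dx * (1 - dy)
            + z11 * dx * dy)
```

At the cell centre (dx = dy = 0.5) the estimate reduces to the mean of the four corner depths, and at a corner it reproduces that corner's depth exactly.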
2. The grassland biomass estimation method based on SfM and grassland height factors as claimed in claim 1, wherein step 2 further comprises:
selecting a plurality of sample quadrats by random sampling, with the quadrat area set according to the pixel size of the UAV remote sensing imagery; measuring the vertical distance from the plant canopy to the ground within each quadrat and taking the mean of these distances as the quadrat grass layer height; and harvesting the plants in each quadrat by clipping them level with the ground, then weighing the fresh and dry weights of each quadrat sample to obtain the quadrat biomass data.
3. The grassland biomass estimation method based on SfM and grassland height factors as claimed in claim 1, wherein step 3 is specifically as follows:
Step 3.1: performing radiometric correction, geometric correction, and mosaicking and clipping of the multispectral data;
Step 3.2: calculating vegetation indices of the target area from the processed multispectral data, including: the normalized difference vegetation index, difference vegetation index, ratio vegetation index, enhanced vegetation index, soil-adjusted vegetation index, and delta vegetation index.
4. The grassland biomass estimation method based on SfM and grassland height factors as claimed in claim 1, wherein step 4.2 further comprises the following steps:
Step 4.2.1: converting the digital surface model DSM into a TIFF-format DSM image, and separating vegetated from non-vegetated areas by image segmentation;
Step 4.2.2: performing feature extraction on the DSM image and the original common images, comparing their grey values and gradients, and distinguishing vegetation from non-vegetation by thresholding to generate a vegetation mask;
Step 4.2.4: removing the vegetated parts of the DSM image with the vegetation mask to obtain the vegetation-free bare-ground information;
Step 4.2.5: filling the ground gaps created by the vegetation removal using Kriging interpolation to obtain a continuous vegetation-free digital terrain model DTM.
5. The grassland biomass estimation method based on SfM and grassland height factors as claimed in claim 1, wherein step 5 is specifically as follows:
Step 5.1: resampling the vegetation index data and the grass layer height data to a uniform spatial resolution;
Step 5.2: constructing a training set and a test set from the sample quadrat biomass data and the corresponding computed target area grass layer height and vegetation index data; constructing a random forest model and a support vector regression model, and training each with the training set;
Step 5.3: testing model accuracy on the test set, evaluating the fitting accuracy with the root mean square error (RMSE) and the coefficient of determination (R²), calculated as:

RMSE = sqrt( (1/N) * Σ (y_i − ŷ_i)² )
R² = 1 − Σ (y_i − ŷ_i)² / Σ (y_i − ȳ)²

where y_i is the measured aboveground biomass of the i-th sample quadrat, ŷ_i is the predicted aboveground biomass of the i-th sample quadrat, ȳ is the mean of the measured quadrat biomass values, and N is the total number of test quadrats;
Step 5.4: selecting the model with the higher fitting accuracy to estimate the grassland biomass of the target area, obtaining the current grassland biomass state of the pasture.
6. A grassland biomass estimation system based on SfM and grassland height factors, comprising:
A recording module: sets ground control points in a target area and records their coordinates;
An acquisition module: acquires multispectral data and common image data of the target area with an unmanned aerial vehicle, and acquires, in the field, grass layer height data and aboveground biomass data of sample quadrats in the target area;
A vegetation index calculation module: calculates vegetation indices of the target area based on the multispectral data;
A grass layer height acquisition module: acquires the grass layer height of the target area using the SfM method, based on the common image data and the ground control point coordinates;
An estimation module: constructs a random forest model and a support vector regression model from the target area grass layer height, the vegetation indices and the sample quadrat biomass data, evaluates the accuracy of each model, and selects the model with the highest fitting accuracy to estimate the aboveground biomass of the target area grassland;
The grass layer height acquisition module further comprises:
A first acquisition unit: acquires a digital surface model DSM of the target area from the common image data and the ground control point coordinates;
A second acquisition unit: derives a vegetation-free digital terrain model DTM from the digital surface model DSM;
A grass layer height calculation unit: subtracts the vegetation-free digital terrain model DTM from the digital surface model DSM to obtain a grass layer height model, and thereby the grass layer height data of the target area;
A checking unit: performs an accuracy check of the grass layer height model against the sample quadrat grass layer height data obtained by the acquisition module;
The first acquisition unit specifically performs the following steps:
Step 4.1.1: preprocessing the common image data of the target area, including denoising, distortion correction and image enhancement;
Step 4.1.2: performing feature extraction on the preprocessed images: extracting feature points from the different images, describing them with SIFT and SURF local feature descriptors, and using the ground control points to assist detection and selection of the feature points; then matching the feature points across images to establish correspondences between the feature points of different images;
Step 4.1.3: estimating the camera motion and pose between the different images from the feature point correspondences, by solving the geometric transformation and camera pose with an optimization method;
Step 4.1.4: performing triangulation on the images according to the camera poses and the feature point correspondences, and computing the three-dimensional coordinates of the feature points from their matches across multiple views to obtain a sparse point cloud; for a feature point observed in two camera views, the projection equations are:

u1 = fx1*X1/Z1 + cx1,  v1 = fy1*Y1/Z1 + cy1
u2 = fx2*X2/Z2 + cx2,  v2 = fy2*Y2/Z2 + cy2

where (u1,v1) and (u2,v2) are the projection coordinates of the feature point in the two camera views; (cx1,cy1) and (cx2,cy2) are the optical centre coordinates of the two views; (fx1,fy1) and (fx2,fy2) are the focal lengths of the two views; Z1 and Z2 are the depth values of the feature point in the two views; and P1 = (X1,Y1,Z1) and P2 = (X2,Y2,Z2) are the three-dimensional coordinates of the feature point in the two camera frames; the three-dimensional coordinates of the feature points are then refined by linear interpolation combined with the pixel displacement between the two images;
Step 4.1.5: estimating the depth values of missing regions by bilinear interpolation to obtain dense three-dimensional point cloud data and thereby the digital surface model DSM; the bilinear interpolation is calculated as:

Z = Z00*(1-dx)*(1-dy) + Z01*(1-dx)*dy + Z10*dx*(1-dy) + Z11*dx*dy

where Z is the depth estimate at pixel coordinates (x, y); Z00, Z01, Z10 and Z11 are the depth values at the four nearest pixel coordinates (x0,y0), (x0,y1), (x1,y0) and (x1,y1), respectively; and dx and dy are the fractional offsets of (x, y) between those four coordinates, i.e. the interpolation ratios in the horizontal and vertical directions.
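The two-view triangulation of step 4.1.4 is commonly solved as a homogeneous linear system (the direct linear transform), built from exactly the pinhole projection relation the claim describes. The sketch below uses illustrative intrinsics (fx = fy = 500, cx = cy = 320) and a 1 m baseline, none of which come from the patent:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Two-view triangulation by the direct linear transform: each pixel
    observation contributes two rows to a homogeneous system A @ Xh = 0,
    solved for the 3-D point Xh via SVD."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.vstack([u1 * P1[2] - P1[0],
                   v1 * P1[2] - P1[1],
                   u2 * P2[2] - P2[0],
                   v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    Xh = Vt[-1]
    return Xh[:3] / Xh[3]

# Illustrative intrinsics; camera 2 is translated 1 m along x.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 320.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X = np.array([0.2, 0.1, 5.0])                      # ground-truth 3-D point
p1 = P1 @ np.append(X, 1.0); p1 = p1[:2] / p1[2]   # its pixel in view 1
p2 = P2 @ np.append(X, 1.0); p2 = p2[:2] / p2[2]   # its pixel in view 2
X_hat = triangulate(P1, P2, p1, p2)                # recovered 3-D point
```

With noiseless matches the DLT recovers the point exactly; with real feature matches the SVD solution is the least-squares point, typically refined by bundle adjustment.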
CN202311487751.6A 2023-11-09 2023-11-09 Grassland biomass estimation method and system based on SfM and grassland height factors Active CN117456364B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311487751.6A CN117456364B (en) 2023-11-09 2023-11-09 Grassland biomass estimation method and system based on SfM and grassland height factors


Publications (2)

Publication Number Publication Date
CN117456364A CN117456364A (en) 2024-01-26
CN117456364B true CN117456364B (en) 2024-04-26

Family

ID=89579798

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311487751.6A Active CN117456364B (en) 2023-11-09 2023-11-09 Grassland biomass estimation method and system based on SfM and grassland height factors

Country Status (1)

Country Link
CN (1) CN117456364B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104778451A (en) * 2015-03-31 2015-07-15 中国科学院上海技术物理研究所 Grassland biomass remote sensing inversion method considering grassland height factor
CN108020211A (en) * 2017-12-01 2018-05-11 云南大学 A kind of method of unmanned plane aeroplane photography estimation instruction plant biomass
CN110647786A (en) * 2018-06-27 2020-01-03 中国科学院地理科学与资源研究所 Non-growing season grass and livestock balance assessment method based on unmanned aerial vehicle LIDAR aerial survey technology
CN111882242A (en) * 2020-08-06 2020-11-03 安阳师范学院 Evaluation method of vegetation index in herbage biomass estimation research
CN112215169A (en) * 2020-10-10 2021-01-12 华中农业大学 Crop plant height and biomass self-adaptive high-precision resolving method based on low-altitude unmanned-machine passive remote sensing
WO2022032329A1 (en) * 2020-08-14 2022-02-17 Agriculture Victoria Services Pty Ltd System and method for image-based remote sensing of crop plants
CN114972160A (en) * 2022-02-09 2022-08-30 中国农业科学院农业资源与农业区划研究所 Natural pasture grass yield rapid measurement method and system based on unmanned aerial vehicle
CN115453555A (en) * 2022-09-19 2022-12-09 中国科学院植物研究所 Unmanned aerial vehicle rapid monitoring method and system for grassland productivity
CN116469000A (en) * 2023-02-17 2023-07-21 煤炭科学研究总院有限公司 Inversion method and device for forest ground biomass and leaf area index

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114037911B (en) * 2022-01-06 2022-04-15 武汉大学 Large-scale forest height remote sensing inversion method considering ecological zoning


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Gil-Docampo, M.L. et al. Above-ground biomass estimation of arable crops using UAV-based SfM photogrammetry. 2020, Vol. 35, No. 7, full text. *
Sun Shize; Wang Chuanjian; Yin Xiaojun; Wang Weiqiang; Liu Wei; Zhang Ya; Zhao Qingzhan. Natural grassland biomass estimation from UAV multispectral imagery. Journal of Remote Sensing. 2018, No. 5, full text. *
Zhang Ya; Yin Xiaojun; Wang Weiqiang; Wang Chuanjian; Lu Weihua; Sun Shize; Gao Jun. Estimation of grassland aboveground biomass on the northern slope of the Tianshan Mountains based on Landsat 8 OLI remote sensing imagery. Remote Sensing Technology and Application. 2017, No. 6, full text. *
Niu Yaxiao. Estimation of growth parameters of field maize under different water stress based on UAV remote sensing. China Doctoral Dissertations Full-text Database. 2022, No. 3, abstract and pp. 1-30. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant