SE2150439A1 - System and method for measuring carbon sequestration

System and method for measuring carbon sequestration

Info

Publication number
SE2150439A1
Authority
SE (Sweden)
Prior art keywords
tree
dataset
training
image
pixel
Prior art date
Application number
SE2150439A
Other languages
Swedish (sv)
Other versions
SE544695C2 (en)
Inventor
Bo Kofod
Edward West
Rishabh Khanna
Smriti Smriti
Thomas Luke Duncan
Original Assignee
Earthbanc Ab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Earthbanc Ab filed Critical Earthbanc Ab
Priority to SE2150439A priority Critical patent/SE544695C2/en
Publication of SE2150439A1 publication Critical patent/SE2150439A1/en
Publication of SE544695C2 publication Critical patent/SE544695C2/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/188 Vegetation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/80 Management or planning
    • Y02P90/84 Greenhouse gas [GHG] management systems
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/80 Management or planning
    • Y02P90/84 Greenhouse gas [GHG] management systems
    • Y02P90/845 Inventory and reporting systems for greenhouse gases [GHG]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Investigating Or Analyzing Non-Biological Materials By The Use Of Chemical Means (AREA)

Abstract

A method for measuring carbon sequestration of a forest region includes: receiving an aerial image of the forest region at a given time unit, extracting features from the aerial image, computing vegetative indices based on the features, training a model by comparing the features and vegetative indices to a ground-truth dataset, determining tree instances and tree characteristics associated with each tree instance from the model, quantifying a number of trees and a carbon sequestration yield in the forest region, and aggregating time series carbon analysis data for the forest region.

Description

SYSTEM AND METHOD FOR MEASURING CARBON SEQUESTRATION

TECHNICAL FIELD

[0001] This invention relates generally to the forest region image processing field, and more specifically to a new and useful system and method for measuring carbon sequestration of a forest region in the forest region image processing field.
BRIEF DESCRIPTION OF FIGURES 1 TO 3
[0002] FIGURE 1 is a flowchart representation of the method of one embodiment.
[0003] FIGURE 2 is a flowchart representation of an embodiment of training a model.
[0004] FIGURE 3 is a flowchart representation of one variation of the method.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0005] The following description of the preferred embodiments of the invention is not intended to limit the invention to these preferred embodiments, but rather to enable any person skilled in the art to make and use this invention.
[0006] 1. Overview.
[0007] As shown in FIGS. 1-3, the method 100 for measuring carbon sequestration of a forest region includes receiving an aerial image of the forest region at a given time unit S110, extracting features from the aerial image S120, computing vegetative indices based on the features S130, training a model by comparing the features and vegetative indices to a ground-truth dataset S140, determining tree instances and tree characteristics associated with each tree instance from the model S150, quantifying a number of trees and a carbon sequestration yield in the forest region S160, and aggregating time series carbon analysis data for the forest region S170.
[0008] 2. Potential Benefits.
[0009] The method 100 can confer several benefits over conventional methods for measuring carbon sequestration. First, the method 100 uses aerial images (e.g., satellite images), such that entire forest regions can be analyzed for carbon sequestration yield automatically. This is in contrast with conventional methods, in which carbon verification of forest carbon projects involves mainly manually intensive processes (e.g., individuals physically enter the forest carbon project zone to measure the height and girth of all trees in a hectare plot, repeat this process for a few sample hectare plots, and extrapolate the collected data to the entirety of the forest carbon project zone).
[0010] Second, the method 100 uses a machine learning technique for a model supplemented by a ground-truth dataset that can generate a high level of accuracy. The ground-truth dataset is factual data that has been observed or measured at a forest carbon project zone, providing the reality that the model learns to predict by finding the underlying patterns in the data extracted from the aerial images. This is in contrast with conventional methods, in which manually intensive processes are prone to sampling errors that arise when the statistical characteristics of a population (e.g., a forest carbon project zone) are estimated from a subset of that population (e.g., a few hectare plots of the forest carbon project zone). Trees in a forest region are not homogeneous, and the carbon sequestration yield for each tree depends on its dimensions (e.g., height, girth) and species type.
[0011] Third, since forest carbon projects have a minimum requirement of 1,500 acres to be profitable, the method 100 can confer the additional benefit of circumventing the exclusion of small forest carbon projects (e.g., more than 60% of planted trees in the world are located on lot sizes smaller than 1,500 acres), thereby increasing accessibility to carbon verification and, as a result, the availability of carbon credits and carbon incentivization payments.
[0012] Fourth, the method 100 aggregates time series carbon analysis data for the forest region, allowing periodic monitoring, reporting, and verification of past and present carbon sequestration yield.
[0013] 3. Method.
[0014] Receiving an aerial image of the forest region at a given time unit S110.
[0015] As shown in FIG. 1, receiving an aerial image S110 functions to obtain an aerial image indicative of the state of trees in a forest region.
[0016] The aerial image is preferably of a forest region. Alternatively, the aerial image can include a portion of the forest region and areas surrounding the forest region. However, the aerial image can include any suitable content to be used in measuring the state of trees of the forest region.
[0017] The forest region is preferably a two- or three-dimensional physical region, but can alternatively be one-dimensional or be a point (e.g., a forest location). The forest region can be predetermined (e.g., by a political entity or a user), dynamically determined (e.g., automatically determined), or otherwise determined. The forest region can be defined by a geofence, a common land unit, geological features (e.g., mountains, rivers), or defined in any other suitable manner. The location of the forest region is a geolocation, in which the geolocation can be identified by a geographic coordinate system (e.g., geographic latitude and longitude, UTM and/or UPS system, Cartesian coordinates), or by any other suitable location identifier.
[0018] The aerial image is preferably a two-dimensional image, but can alternatively be a one-dimensional image, a three-dimensional image (e.g., generated from two or more images), or have any suitable number of dimensions. The aerial image can be a single image or frame or can be a composite image (e.g., mosaic) including multiple images that are stitched together. If the aerial image is a composite image, the individual images constituting the composite image are preferably recorded at substantially the same time unit. The aerial image can be a still image, a kinetic image (e.g., a video), or have any other suitable kinetic parameter. The aerial image is preferably a multispectral image. Alternatively, the aerial image can be a hyperspectral image, an ultraspectral image, an image captured within the visible range, a LiDAR-derived image, an ultrasound-derived image, a radar-derived image, or an image captured by any other suitable electromagnetic or acoustic frequency. The aerial image is preferably captured and/or received by a satellite system. Alternatively, the aerial image can be captured and/or received by a drone system or any other suitable aerial system.
[0019] The aerial image is preferably associated with one or more time units. The time unit can be a time unit relative to a time duration, an absolute time (e.g., indicated by a global timestamp), or any other suitable measure of time. The time duration can be a unique or non-unique time duration. The time duration includes a unique year (e.g., 2015), a unique season (e.g., spring of 2021), a relative time duration (e.g., spring), or any other suitable time duration. The time unit relative to the time duration can be a time unit within the time duration (e.g., a day of the month, a day of a year, a week of a year, a month of a year), or be any other suitable time unit. The time unit can be a recurrent time unit that recurs across multiple time durations (e.g., January of 2020 and January of 2021).
[0020] The aerial image is preferably received at a remote server that stores and processes the image. The aerial image is preferably received from a third-party source (e.g., a third-party service that captured the aerial images, NASA, European Space Agency, World Resource Institute, United Nations). Alternatively, the aerial image can be received from a direct source (e.g., directly from an image-taking device, privately owned microsatellites).
[0021] The aerial image is in the format of TXT (e.g., .txt), TIFF (e.g., .tif, .tiff), PNG (e.g., .png), JPEG (e.g., .jpg, .jpeg), EPS (e.g., .eps), RAW image files (e.g., .raw, .cr2, .sr2), or any other suitable image format. The aerial image preferably includes a pixel set. Alternatively, the pixel set can be a super pixel set, a digital value set, or a combination thereof.
[0022] Extracting features from the aerial image S120.
[0023] As shown in FIG. 1, extracting a set of features for each pixel from the pixel set of the aerial image S120 functions to receive a multispectral bands dataset and a light detection and ranging (LiDAR) dataset, and determine a forest canopy height dataset for the aerial image. The multispectral bands dataset includes data in multiple regions of the electromagnetic spectrum. The multispectral bands dataset contains a red-green-blue (RGB) visual bands dataset and a near-infrared (NIR) bands dataset. The RGB visual bands dataset includes a red band (0.64-0.67 micrometers wavelength), a green band (0.53-0.59 micrometers wavelength), and a blue band (0.45-0.51 micrometers wavelength). The NIR bands dataset includes a near-infrared band (0.85-0.88 micrometers wavelength).
[0024] The LiDAR dataset can be captured by a LiDAR system which can be used to measure forest canopy height across the forest region. LiDAR point clouds derived from the LiDAR system can be used to calculate a digital surface model (DSM) and a digital terrain model (DTM), which are raster-format LiDAR-derived data products. The DSM maps out the top-of-surface elevation, and the DTM maps out the ground elevation. By subtracting the DTM from the DSM, a canopy height model (CHM) is created. The CHM gives the height, or residual distance, between the ground and the top of the objects above the ground, distinguishing trees from other objects (e.g., bushes, rocks). The forest canopy height dataset for the aerial image is the CHM, which is used for retrieving the actual height of trees in the forest region with the influence of ground elevation removed.
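For illustration, a minimal sketch of the CHM computation described above, using synthetic stand-ins for the rasterized DSM and DTM; the 0.5 m ground threshold is an assumption for the sketch, not a value from the patent:

```python
import numpy as np

# Stand-in rasters (H x W) in meters; in practice these come from rasterized
# LiDAR point clouds (e.g., read from GeoTIFFs).
rng = np.random.default_rng(0)
dtm = rng.uniform(100.0, 110.0, (128, 128))      # ground elevation
dsm = dtm + rng.uniform(0.0, 25.0, (128, 128))   # surface elevation (ground + objects)

# CHM = DSM - DTM: residual height of objects above the ground.
chm = dsm - dtm
chm[chm < 0.5] = 0.0  # assumed threshold: treat small residuals as ground/low clutter
```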
[0025] Extracting features from the aerial image S120 can include identifying and/or removing outliers from the multispectral bands dataset and the LiDAR dataset. Outliers can be identified as values falling within a predetermined percentile (e.g., within the 10th percentile), values falling outside a predetermined percentile (e.g., above the 90th percentile), values falling outside a predetermined percentile range, or be identified in any other suitable manner. The outliers in the multispectral bands dataset and the LiDAR dataset can be removed.
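A minimal sketch of this percentile-based outlier screening; the 10th/90th percentile cutoffs mirror the examples in the text, and masking with NaN is one possible removal strategy:

```python
import numpy as np

def remove_outliers(values: np.ndarray, lo: float = 10.0, hi: float = 90.0) -> np.ndarray:
    """Mask values outside the [lo, hi] percentile range with NaN."""
    low, high = np.percentile(values, [lo, hi])
    cleaned = values.astype(np.float32).copy()
    cleaned[(values < low) | (values > high)] = np.nan
    return cleaned

# Example: screen a canopy height band before further processing.
band = np.random.default_rng(0).normal(10.0, 3.0, size=(256, 256))
screened = remove_outliers(band)
```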
[0026] Computing vegetative indices based on the features S130.
[0027] As shown in FIG. 1, computing a set of vegetative indices based on the set of features extracted for each pixel in the pixel set from the aerial image S130 functions to generate vegetative indices across the forest region captured by the aerial image at a given time unit.
[0028] The multispectral bands dataset (the red-green-blue (RGB) visual bands dataset and the near-infrared (NIR) bands dataset) for the pixel set of the aerial image is used to derive the set of vegetative indices. The set of vegetative indices preferably indicates the excess green index (ExG) at a given time unit for a forest region. Alternatively, the set of vegetative indices can indicate the normalized difference vegetation index (NDVI), the structure insensitive pigment index (SIPI), the atmospherically resistant vegetation index (ARVI), and/or any other suitable vegetative index for the forest region. Every vegetative index is a certain combination of sensor-measured reflectance properties (e.g., water content, chlorophyll content, pigment) at two or more wavelengths that reveals particular characteristics of vegetation. Since every vegetative index has its limitations, it is recommended to apply additional indices for a more accurate analysis of vegetation growth and structure in the forest region.
[0029] The ExG can be calculated using only the RGB visual bands dataset, as the ExG contrasts the green portion of the spectrum against the red and blue portions to distinguish vegetation from soil. The ExG can distinguish vegetation from unnatural green colors (e.g., paint, plastic), thereby reducing false positives when monitoring urban areas. However, combining the ExG with other vegetative indices is recommended for a more accurate picture of vegetation growth and structure in the forest region.
[0030] The NDVI can be derived from the ExG to determine the health of the forest region (e.g., relative biomass), particularly in cases of drought. The SIPI can detect plant disease or other causes of stress in plant health, and the ARVI can correct the influence of atmospheric noise (e.g., high aerosol interference caused by rain, fog, dust, smoke, air pollution). The SIPI and the ARVI can be calculated using both the RGB visual bands dataset and the NIR bands dataset.
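As a sketch, the four indices named above can be computed per pixel from the band datasets. The formulas below are the commonly used definitions (with a gamma of 1 for ARVI) and are stated here as assumptions, since the text does not spell them out; band arrays are assumed to hold reflectance values in [0, 1]:

```python
import numpy as np

EPS = 1e-9  # guard against division by zero

def exg(r, g, b):
    # Excess green index: contrasts green against red and blue.
    return 2.0 * g - r - b

def ndvi(nir, r):
    # Normalized difference vegetation index.
    return (nir - r) / (nir + r + EPS)

def sipi(nir, r, b):
    # Structure insensitive pigment index.
    return (nir - b) / (nir - r + EPS)

def arvi(nir, r, b):
    # Atmospherically resistant vegetation index (gamma = 1).
    rb = r - (b - r)
    return (nir - rb) / (nir + rb + EPS)
```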
[0031] Training a model by comparing the features and vegetative indices of a training aerial images dataset to a ground-truth dataset S140.
[0032] As shown in FIGS. 1 and 2, training a model by comparing the subset of features and the set of vegetative indices of a training aerial images dataset to a ground-truth dataset S140 functions to create a model that inputs the subset of features and the set of vegetative indices and outputs the set of tree instances and the set of tree characteristics. The ground-truth dataset generates ground-truth labels associated with the subset of features and the set of vegetative indices.
[0033] The model uses a machine learning technique which can be a computer vision model, a supervised learning model (e.g., random forest, support vector machine), and/or a deep learning model (e.g., convolutional neural networks, CNN). The model inputs the subset of features (e.g., forest canopy height dataset) combined with the set of vegetative indices (e.g., ExG, NDVI, SIPI, ARVI) and outputs the set of tree instances (e.g., number of trees) and the set of tree characteristics (e.g., tree species, tree height, tree girth, tree age, and tree health) associated with each tree instance in the set of tree instances.
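As one hedged sketch of the supervised-learning variant mentioned above (a random forest over per-pixel features), with synthetic stand-ins for the real rasters; the feature stack and three-class species labels are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins for real per-pixel rasters (H x W), for illustration only.
rng = np.random.default_rng(0)
canopy_height = rng.uniform(0.0, 30.0, (64, 64))
exg_b, ndvi_b, sipi_b, arvi_b = (rng.uniform(-1.0, 1.0, (64, 64)) for _ in range(4))
labels = rng.integers(0, 3, (64, 64))  # e.g. three tree-species classes

# Stack features per pixel: [canopy height, ExG, NDVI, SIPI, ARVI].
X = np.column_stack([a.ravel() for a in (canopy_height, exg_b, ndvi_b, sipi_b, arvi_b)])
y = labels.ravel()

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)
species_map = model.predict(X).reshape(canopy_height.shape)  # per-pixel class map
```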
[0034] Training the model includes comparing the subset of features and the set of vegetative indices from a training aerial images dataset to the ground-truth dataset S140. Training the model S140 can include receiving a training aerial image of a forest region at a given time unit S141, extracting a set of features from the training aerial image S142, computing a set of vegetative indices based on the set of features S143, gathering a ground-truth dataset for the training aerial image S144, iterating the previous steps S141, S142, S143, S144 for a set of training aerial images S145, and training a model for the set of training aerial images S146.
[0035] The ground-truth dataset includes a decentralized mobile device imagery dataset from a network of mobile devices and an unmanned aerial vehicle dataset collected by a network of unmanned aerial vehicles mounted with photogrammetry and LiDAR systems.
[0036] The network of mobile devices that collects the decentralized mobile device imagery dataset includes one or more mobile device users (e.g., local communities, conservationists) physically entering the forest region and capturing ground-truth images of trees using one or more mobile devices.
[0037] The mobile device can include a wireless phone, an iPhone, an iPad, a notebook computer, or any suitable wireless control device with a camera. A mobile device user can install and operate a computer program (e.g., packaged as an App) on the mobile device. The mobile device user can capture a ground-truth image at the geolocation during a time unit on the computer program. The ground-truth image can be a still image, a kinetic image (e.g., a video), or have any other suitable kinetic parameter.
[0038] The computer program can recognize the geolocation of the ground-truth image, process the ground-truth image, and predict the set of tree characteristics of the tree instance (e.g., tree species, tree height, tree girth, tree age, and tree health) from the ground-truth image. The computer program can attach the ground-truth image, the set of tree characteristics, the geolocation of the ground-truth image, the time unit of the ground-truth image, and information about the mobile device user (e.g., name, address, phone number, bank account information) who captured the ground-truth data as metadata to a data object (e.g., QR code, blockchain transaction). A tree species identification phone attachment can be attached to the mobile device to enhance tree species data collection.
[0039] The network of unmanned aerial vehicles mounted with photogrammetry and LiDAR systems that collects the unmanned aerial vehicle dataset includes one or more unmanned aerial vehicle users controlling one or more unmanned aerial vehicles. The unmanned aerial vehicle can include a fixed-wing drone, a rotor drone, a hybrid drone, or any other suitable unmanned aerial vehicle. The unmanned aerial vehicle mounted with photogrammetry and LiDAR systems can measure the set of tree characteristics of the tree instance (e.g., tree species, tree height, tree girth, tree age, and tree health) at the geolocation.
[0040] The model is preferably the deep learning model using convolutional neural networks because of its accuracy level. The deep learning model uses a CNN to adaptively learn spatial hierarchies of features through backpropagation by using convolution layers, pooling layers, and fully connected layers. Training the CNN is a process of finding kernels in convolution layers and weights in fully connected layers with the goal of minimizing the residuals between the predicted values of the set of tree characteristics and given ground-truth labels of the set of tree characteristics on the training aerial images dataset. The CNN performance under specific kernels and weights is computed by a loss function (e.g., Tversky loss function) through forward propagation on the training aerial images dataset. The kernels and weights are updated depending on the loss value through an optimization algorithm of backpropagation and gradient descent (e.g., Adadelta optimizer).
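A minimal PyTorch sketch of that training loop, pairing a Tversky loss with the Adadelta optimizer; the stand-in one-layer network, synthetic batch, and alpha/beta weights are illustrative assumptions, not the patent's configuration:

```python
import torch

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-7):
    # Tversky index: TP / (TP + alpha*FP + beta*FN); loss = 1 - index.
    tp = (pred * target).sum()
    fp = (pred * (1 - target)).sum()
    fn = ((1 - pred) * target).sum()
    return 1 - tp / (tp + alpha * fp + beta * fn + eps)

# Stand-in segmentation network and synthetic batch, for illustration only.
model = torch.nn.Conv2d(in_channels=5, out_channels=1, kernel_size=3, padding=1)
images = torch.rand(4, 5, 64, 64)                  # canopy height + 4 vegetative indices
labels = (torch.rand(4, 1, 64, 64) > 0.5).float()  # ground-truth tree masks

optimizer = torch.optim.Adadelta(model.parameters())
for _ in range(10):
    optimizer.zero_grad()
    probs = torch.sigmoid(model(images))  # forward propagation
    loss = tversky_loss(probs, labels)    # performance under current kernels/weights
    loss.backward()                       # backpropagation
    optimizer.step()                      # gradient-descent update
```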
[0041] Careful collection of the ground-truth dataset from the network of mobile devices and the network of unmanned aerial vehicles on which to train, validate, and test the model is important for performance (e.g., accuracy, F1 score, precision, recall), but obtaining the ground-truth dataset can be costly and time-consuming. Although deep learning models often outperform computer vision models and supervised learning models (e.g., support vector machines, random forests) for forest region image processing, computer vision models and supervised learning models are more suitable options when the ground-truth dataset is not available.
[0042] Determining tree instances and tree characteristics associated with each tree instance from the model S150.
[0043] As shown in FIG. 1, determining tree instances and tree characteristics associated with each tree instance from the model S150 functions to infer the set of tree instances and the set of tree characteristics.
[0044] Quantifying a number of trees and a carbon sequestration yield in the forest region S160.
[0045] As shown in FIG. 1, quantifying a number of trees and a carbon sequestration yield in the forest region S160 functions as a carbon quantifier to predict the number of trees and/or the carbon sequestration yield in the forest region of the aerial image.
[0046] If the deep learning model using convolutional neural networks was used to output the set of tree instances and the set of tree characteristics, the carbon quantifier inputs the set of tree instances, the set of tree characteristics, and an auxiliary data source set and outputs the carbon sequestration yield of the forest region in the aerial image at the time unit.
[0047] The auxiliary dataset preferably includes auxiliary information about local climate, drought status, flood status, soil health, crop failure, and bird migration of the forest region. For example, drought in forest regions can stunt the forests' ability to store carbon and to grow, which can result in less carbon being sequestered. The auxiliary dataset inputted into the carbon quantifier adjusts the amount of carbon sequestration yield according to the auxiliary information in the forest region.
[0048] The carbon quantifier sums up the carbon sequestration yield for each tree instance across the entire forest region in the aerial image. The carbon quantifier can use an equation method, a neural network method, and/or any other suitable mathematical carbon quantifier method.
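A minimal sketch of the equation method: compute a per-tree yield from height and girth via an allometric biomass equation, then sum across instances. The power-law form and coefficients (including the 0.6 g/cm³ wood density, 47% carbon fraction, and 44/12 CO2 conversion) are common forestry choices assumed here for illustration, not values given by the patent:

```python
import math

def tree_co2_kg(height_m: float, girth_m: float, wood_density: float = 0.6) -> float:
    """Estimate CO2-equivalent sequestered by one tree (illustrative allometry)."""
    dbh_cm = girth_m / math.pi * 100.0                                # girth -> diameter at breast height
    agb_kg = 0.0673 * (wood_density * dbh_cm**2 * height_m) ** 0.976  # aboveground biomass
    carbon_kg = 0.47 * agb_kg                                         # assumed carbon fraction of biomass
    return carbon_kg * 44.0 / 12.0                                    # carbon -> CO2 equivalent

# Sum the yield over all detected tree instances (hypothetical height/girth pairs).
trees = [(12.0, 0.9), (18.5, 1.4), (7.2, 0.5)]
total_co2_kg = sum(tree_co2_kg(h, g) for h, g in trees)
```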
[0049] Aggregating a time series carbon analysis dataset for the forest region S170.
[0050] As shown in FIG. 1, aggregating a time series carbon analysis dataset for the forest region S170 functions to display the number of trees and the carbon sequestration yield over a particular time period of the forest region. The time series carbon analysis dataset of the forest region can be attached as metadata to a data object (e.g., QR code, blockchain transaction). The time series carbon analysis dataset of the forest region can be transacted on a carbon credit marketplace platform. The carbon credit marketplace platform can be used to buy and sell one or more carbon credits.
[0051] Examples.
[0052] In a specific example, as shown in FIG. 3, the method 100 includes receiving an aerial image of a forest region from a satellite at a given time unit S118, extracting a RGB visual bands dataset, a NIR bands dataset, and a forest canopy height dataset from the aerial image S128, computing the ExG and the NDVI based on the RGB visual bands dataset S138, training a deep learning CNN model by comparing the forest canopy height, the ExG, and the NDVI of a training aerial images dataset to a ground-truth dataset S148, determining a set of tree instances and tree height, tree girth, and tree species associated with each tree instance of the aerial image from the model S158, quantifying a number of trees and a carbon sequestration yield of the forest region S168, and aggregating a time series carbon analysis for the forest region S178.
[0053] Alternative embodiments implement the above methods and/or processing modules in non-transitory computer-readable media, storing computer-readable instructions that, when executed by a processing system, cause the processing system to perform the method(s) discussed herein. The instructions can be executed by computer-executable components integrated with the computer-readable medium and/or processing system. The computer-readable medium may include any suitable computer-readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (CD or DVD), hard drives, floppy drives, non-transitory computer-readable media, or any suitable device. The computer-executable component can include a computing system and/or processing system (e.g., including one or more collocated or distributed, remote or local processors) connected to the non-transitory computer-readable medium, such as CPUs, GPUs, TPUs, microprocessors, or ASICs, but the instructions can alternatively or additionally be executed by any suitable dedicated hardware device.
[0054] Embodiments of the system and/or method can include every combination and permutation of the various system components and the various method processes, wherein one or more instances of the method and/or processes described herein can be performed asynchronously (e.g., sequentially), concurrently (e.g., in parallel), or in any other suitable order by and/or using one or more instances of the systems, elements, and/or entities described herein.
[0055] As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the preferred embodiments of the invention without departing from the scope of this invention defined in the following claims.
[0056] Referring now to Figures 4 to 8, there is described a second aspect of the invention that relates generally to the biomass quantification field, and more specifically to a new and useful method for calculating vegetation metrics that provide information about an area of land. This aspect may be referred to as a computative method for quantifying vegetation metrics, wherein:
[0057] FIGURE 4 is a schematic representation of the system.
[0058] FIGURE 5 is a schematic of a variation of the user interface.
[0059] FIGURE 6 is a schematic of a component of the user interface that shows vegetation index reports.
[0060] FIGURE 7 is a schematic of the flow of ground-truth information for the computational computer model.
[0061] FIGURE 8 is an image generated by the computational computer model to include the training area, training polygons, and data-filter mask applied to the original spectral band input to the image processing module.
[0062] The following description of the preferred embodiments of the second aspect of the invention is not intended to limit the invention to these preferred embodiments, but rather to enable any person skilled in the art to make and use this invention.

1. Method for calculating land biomass data
[0063] As shown in Fig. 4, the method of the preferred embodiment for quantifying biomass data in a portion of land S1000 includes an image processing module receiving a substantially aerial set of electromagnetic bands that capture the land in Block S1100, defining an area of interest based on the spectral bands in Block S1200, extracting a set of vegetation indices from the area of interest in Block S1300, generating a data-layer mask based on the set of vegetation indices in Block S1400, and applying the data-layer mask to the original set of electromagnetic bands to create a filtered, masked image in Block S1500. The method further includes providing the computational computer model inputs including the filtered, masked image, an intermediate output image from the image processing module, and ground-truth vegetation data in Block S1600, and then using said model to predict biomass data for the portion of land captured by the original set of electromagnetic bands in Block S1700. The method functions to extract data from captured electromagnetic bands to generate predictions for a series of vegetative metrics for a portion of land including carbon sequestration, biomass density, vegetation health, deforestation volatility, and vegetation gain/loss.
[0064] The preferred embodiment of this method has the set of electromagnetic bands being processed prior to the run of the computational computer model. An alternative version of this method has the processing of the set of electromagnetic bands completed within the computational computer model, wherein the computational computer model contains an internal spectral-band processor or processing algorithm. In this implementation, the computational computer model would read in a raw dataset of electromagnetic bands rather than processed images.
[0065] The method preferably interfaces with an internet web page, phone or web application, or other internet-accessible platform that allows users to access different functions including contributing ground-truth data, viewing outputted vegetative metrics, or receiving payment through a cloud-based blockchain payment system, as shown in Fig. 5. This user interface functions to allow clients to understand and track the overall status and health of the plot of land that is of interest to them, while also allowing other users to contribute ground-truth data, and creating a monetization system for the contribution of said data.

1.1 The Dataset of Electromagnetic Bands
[0066] Block S1100 describes the model as receiving a dataset of electromagnetic bands for an area of land. In the preferred embodiment, this is a multispectral image which includes RGB spectral bands and Near Infrared (NIR) or Color Infrared (CIR) spectral bands. The multispectral image is preferably generated from electro-optical sensors on satellites, but alternatively can be captured by a combination of remote-controlled model aircraft or other earth-bound machines with multispectral imaging capabilities. These capabilities can be intrinsic to the flying machine or attachable, including sensors and cameras. The image-capturing systems have the capability to generate a photo on at least a daily basis. The image is then accessed by the individual, group, or other user that is implementing the method of S1000 through means including connecting to internet archives or databases. The multispectral image functions to provide information about a given plot of land through spectral image bands.
[0067] An alternative embodiment of the method has the image processing module receiving a hyperspectral image. The hyperspectral image has a continuum of spectral bands including visual and infrared light. The hyperspectral image is generated through hyperspectral sensors attached to or included in airborne machines including aircraft, satellites, and remote flying objects. The image is then accessed by the individual, group, or other user that is implementing the method of S1000 through means including connecting to internet archives or databases. The hyperspectral image functions to provide information about a given plot of land through spectral image bands.

1.2 Image Processing Module
[0068] In the preferred embodiment, the image processing module takes in a set of data corresponding to spectral bands from the multispectral image. The spectral band data is read into the model as RGB spectral bands, NIR/CIR spectral bands, and other electromagnetic spectral bands. The RGB spectral bands are used to distinguish areas of vegetation from areas of non-vegetation like soil, such that there is a defined area of interest for extracting biomass data S1200. The area of interest is determined through a border identification process which maps RGB data to specific land characteristics like healthy vegetation, shorelines, roads, or cleared fields. This process is completed by weighing the RGB values according to the observed value of the land characteristic in the human visible light spectrum. In the preferred embodiment, the process of border detection is completed prior to calculating the Excess Green Index (ExG), but these can also be done interchangeably. ExG data functions to better distinguish vegetation from areas of non-vegetation and identify unnatural colors that result from paint, plastic, or other non-organic material that might be confused for plant matter. Once the ExG data has been calculated, the image processing module produces a duplicate image wherein the ExG data is mapped onto the multispectral image as different RGB values corresponding to different relative intensities of vegetation. RGB spectral bands and ExG data are further used in the determination of vegetation indices in Block S1300. The NIR/CIR bands function to assist in calculating these determinations.
[0069] In the implementation that utilizes hyperspectral imaging, the image processing module distinguishes the different bands of light included in the hyperspectral image into different wavelengths on the electromagnetic spectrum including visible, infrared, and ultraviolet light bands. The image processing module then defines an area of interest using RGB and NIR/CIR spectral bands in the same manner as for the multispectral image. The image processing module also calculates ExG data for the hyperspectral image in the same manner as for the multispectral image. Hyperspectral images capture a wider, more precise variety of light than multispectral images. As a result, this variation can calculate a larger range of vegetation indices with greater accuracy than the implementation that uses multispectral images.

1.2A Vegetation Indices
[0070] The vegetation indices indicated by Block S1300 are generated from the electromagnetic spectral bands of the preferred embodiment. These vegetation indices function to help increase the accuracy of the vegetative metrics that are output by the computational computer model. The set of vegetation indices includes ExG, NDVI, SIPI, and ARVI. NDVI functions to quantify the relative biomass density within the identified area of interest and contributes to monitoring drought in the area. The preferred embodiment of the method uses a combination of RGB and NIR/CIR data to determine the NDVI. Alternatively, the NDVI is calculated as a function of ExG from RGB data alone via extrapolation algorithms. SIPI functions to quantify the health of the vegetation by calculating the prevalence of plant disease, which further informs users, preferably clients who have monetary interest in the designated land area, of potential dangers to that area. ARVI accounts for atmospheric aerosol influence in the calculation of other vegetation indices.
[0071] However, a wide range of vegetation indices beyond ExG, NDVI, SIPI, and ARVI can be calculated including the Chlorophyll Index, Moisture Stress Index, and Greenness Above Bare Soil index. Variations of the image processing module use combinations of CIR/NIR data and RGB data in order to calculate these values per the data request of the client and/or method implementor. In the embodiment that utilizes hyperspectral data, a larger and more specific range of vegetation indices is able to be calculated in addition to the indices capable of being calculated from the multispectral image.
[0072] Once the vegetation index of interest is calculated within the area of interest, it is assigned an RGB value according to a predetermined color scale. The determined RGB value is then mapped back onto the multispectral image. This process results in an intermediate, new image wherein the RGB values of the electromagnetic dataset are overwritten with the newly determined RGB value for the vegetation index. Alternatively, it results in an image mask containing the RGB values associated with the predetermined color scale that is applied over the original electromagnetic dataset as a new layer. This process functions to create a visual representation of the desired vegetation index over the region of interest defined by the image processing module. This information can be accessed on the user interface 300 such that clients and other users can see visual interpretations of the vegetation indices and their associated values as seen in Fig. 6.

1.2B Data-Layer Mask
[0073] In the preferred embodiment, the data-layer mask functions to limit the amount of information within the electromagnetic dataset that needs to be processed by the computational model in Block S1600. The data-layer mask is formed by quantifying the biomass density in the area of interest and its relative health based on the vegetation indices for that area. This data is stored as RGB data values that can be layered over the original multispectral image. By measuring the intensity of the health and cover of the vegetation in that area, the data input into the computational computer model is limited to areas that can be recognized as vegetation cover. The data-layer mask is then applied to the electromagnetic band dataset in Block S1500, resulting in a masked image. The image processing module then translates the masked image into a monochromatic image, with the entirety of the RGB data in the multispectral image being combined into a single band that represents all of the colors of RGB. The masked image is now a monochromatic image of only the region of interest as defined by the ExG and RGB data from the original electromagnetic band dataset. In the preferred implementation, the monochromatic image is a panchromatic image.
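A minimal sketch of forming and applying such a mask, assuming the vegetation cover is taken from an ExG threshold and RGB is collapsed to one band with standard luminance weights; both choices are assumptions for illustration:

```python
import numpy as np

# Stand-in RGB bands scaled to [0, 1], for illustration only.
rng = np.random.default_rng(0)
r, g, b = (rng.uniform(0.0, 1.0, (64, 64)) for _ in range(3))

exg = 2.0 * g - r - b
vegetation_mask = exg > 0.1  # assumed ExG threshold for recognizable vegetation cover

# Collapse RGB into a single "panchromatic-style" band and keep only vegetation pixels.
luminance = 0.299 * r + 0.587 * g + 0.114 * b
mono = np.where(vegetation_mask, luminance, 0.0)
```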
1.3 Computational Computer Model

[0074] As called out in Block S1600, the computational computer model of method S1000 takes different computational inputs. In the preferred embodiment, the computational computer model reads in the ExG image of the area of interest as well as the panchromatic image generated by the image processing module. These images function to provide the information required to generate the desired vegetation metric(s). If the model has yet to be trained, it also reads in a training area file and training polygon files. These files function to further define the border of the vegetation, identify specific vegetation within the training area and area of interest, and incorporate boundary weights.
[0075] The computational computer model is further able to access a database of ground-truth information that includes different metrics like vegetation species data including carbon density, root volume, and standing carbon stock, all of which are not quantifiable through imaging data alone. In the preferred embodiment, the model has been trained on a dataset of electromagnetic bands of the area of interest, ground-truth data for that area of interest, and the training polygon files. If the computational computer model has been previously trained, then said model is directly applied to the filtered, panchromatic image and the ExG image in order to calculate the desired vegetation metrics. In one embodiment of the computational computer model, the model is a U-Net convolutional neural network. The neural network functions to predict the vegetation metrics of interest including the number of trees and plants in a region of interest and annual carbon sequestration.
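A compact U-Net-style sketch in PyTorch as one possible form of that model; the two input channels (panchromatic + ExG), depth, and channel widths are illustrative assumptions:

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class MiniUNet(nn.Module):
    def __init__(self, in_ch=2, out_ch=1):  # e.g. panchromatic + ExG inputs
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, 16), conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)         # 16 skip + 16 upsampled channels
        self.head = nn.Conv2d(16, out_ch, 1)  # per-pixel vegetation prediction

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d)

out = MiniUNet()(torch.rand(1, 2, 64, 64))  # -> (1, 1, 64, 64) prediction map
```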
[0076] Alternative implementations of this method may provide electromagnetic band datasets, panchromatic images, or other datasets that contain clippings of the region of interest with a single or multiple vegetation indices captured. This implementation would require that the model run its optimization and loss algorithms for each of the individual inputs and then calculate an intermediate output of the vegetation metric(s) of interest for each input. Next, an additional optimization and loss algorithm would be required to run with the incorporation of ground-truth data such that the output of the model was optimized to match the ground-truth data.

1.3A Ground Truth Data
[0077] To validate the predictions made by the computational computer model when in use and to train said model, the computational computer model also takes in ground-truth data. Ground-truth data captures a range of vegetation information including volume of vegetation (e.g., tree girth and height), species type and quantity, and soil qualities which are not discernible through imaging alone. Ground-truth data may be collected in a variety of ways.
[0078] In one implementation, ground-truth data is captured by crowdsourcing data from individuals. In this implementation, individuals may use a camera attachment with their smartphones, or the camera embedded within their smartphones, which allows them to photograph on-site vegetation at the area of interest. Through these photographs, vegetation species, height, density, and other information is stored in a database that can be accessed by the computational computer model. This database is accessible by the user through a user interface that may include a smartphone app, web browser page, or a cloud-based storage system.
[0079] An alternative manual method has individuals using manual means to gather vegetation information including tape measures, rulers, or laser measurement to record quantitative vegetation data, while qualitative data is recorded by hand and entered into a digital database similar to the one described for the camera attachment implementation. In one embodiment, users who contribute data to this system are compensated for the information that they provide through methods including a cloud-based block-chain payment rail system.
[0080] An alternative implementation for gathering ground-truth data 400 includes the use of hardware that updates a database with current information. This hardware does not need to be consistently operated by a user, in contrast to the manual implementation. This category of ground-truth-gathering hardware includes IoT soil sensors and probes, animal trackers for following the migration of native species, and satellite tracking of climate patterns like El Niño. The database 410 containing this information is accessible to the computational computer model 420 via a remote server, as shown in Fig. 7.

1.3B Training the Computative Computer Model
[0081] As mentioned in previous sections, the model is trained on available ground-truth data, identified training areas, training polygon files, and electromagnetic band datasets for the area of interest. In the preferred embodiment, a training run of the computative computer model begins with overlapping the ExG and monochromatic images generated from the image processing module. These images are cross-referenced with the training area that is provided to the model for training. If an overlap is found, then the computative computer model generates a new image wherein the overlapping areas of the ExG, panchromatic, and training area are analyzed using the training polygons file. The training polygon file includes a set of boundary weights for each polygon shape. These boundary weights are incorporated into the creation of the new image 500 as shown in Fig. 8. This process is repeated for a series of images of the area of interest.
[0082] The new images which are created by each training run are then separated into training frames, validation frames, and testing frames. The training frames function to train the computational computer model, the testing frames function to test the training of the computational computer model, and the validation frames function to validate the results of the trained computational computer model. The validation frames are based on information from the ground-truth data, which is available to the computational computer model during training so that the accuracy of the predictions made in the training frames is tested. The computer model is trained using optimization and loss algorithms including the adaDelta optimizer and Tversky loss index. However, other optimization algorithms and loss indices are suitable for training purposes.
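A minimal sketch of tiling the generated images into training, validation, and testing frames; the tile size and the 70/15/15 split are illustrative assumptions:

```python
import random
import numpy as np

def split_into_frames(image: np.ndarray, tile: int = 128, seed: int = 0):
    """Cut an image into square frames and split them into train/val/test sets."""
    h, w = image.shape[:2]
    frames = [image[y:y + tile, x:x + tile]
              for y in range(0, h - tile + 1, tile)
              for x in range(0, w - tile + 1, tile)]
    random.Random(seed).shuffle(frames)
    n = len(frames)
    return (frames[:int(0.70 * n)],               # training frames
            frames[int(0.70 * n):int(0.85 * n)],  # validation frames
            frames[int(0.85 * n):])               # testing frames

train_f, val_f, test_f = split_into_frames(np.zeros((1024, 1024)))
```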
1.3C Receiving an Orthophoto

[0083] In one implementation of method S1000, the computative computer model receives an orthophoto in addition to receiving the monochromatic and ExG images. The data is obtained through methods including LiDAR, manual measurement, image extrapolation, or SONAR. LiDAR and SONAR data is generated by flying machines capable of taking orthophotos of substantial aerial views including planes, satellites, and autonomous flying objects. Methods for manual measurement of height include tape measures, rulers, and hand-held laser measurement devices.
[0084] The orthographic data is used to create a Digital Surface Model (DSM) and a Digital Terrain Model (DTM), the difference of which generates a height model for the area captured in the orthophoto. The height model is then filtered to distinguish the canopy height, lowland vegetation, and ground. This is achieved through the application of a Gaussian filter. However, filtering the height model can be done in any other suitable manner. The filtered height model functions to allow the deep learning computational computer model to extract the quantity of vegetation and distinguish different types of vegetation such as trees from bushes, or even different types of trees or different types of vegetation based on vegetation species data.
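A minimal sketch of that filtering step with a Gaussian filter; the sigma and the height cutoffs separating ground, lowland vegetation, and canopy are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Stand-in DSM - DTM height model in meters, for illustration only.
rng = np.random.default_rng(0)
height_model = rng.uniform(0.0, 25.0, (128, 128))

smoothed = gaussian_filter(height_model, sigma=2.0)  # suppress point-cloud noise

ground = smoothed < 0.5                              # near-zero residual height
lowland_vegetation = (smoothed >= 0.5) & (smoothed < 3.0)
canopy = smoothed >= 3.0                             # tree-height vegetation
```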
1.4 User Interface

[0085] As shown in Fig. 5, the user interface 200 functions to allow clients to view metrics about the land that they are interested in, as well as allow contributors and other users to provide data about land of interest and receive payment for said data. The user interface 200 can be a smartphone application or a web browser page. However, any other internet-enabled platform for transferring and accessing information is suitable.
[0086] Metrics about the land of interest are viewable in a published report that provides an overview of the results of the computative computer model including metrics calculated, visual representations of the vegetative indices, and predictions about future metrics. Data that users who contribute ground-truth data (“contributors”) are able to input includes species information, height data, or soil information. The function of the user interface for this demographic of user is to gather larger quantities of ground-truth data that will be used by the computative model to increase accuracy.
[0087] In one implementation of the user interface, contributors are able to receive payments for the data that they contribute through a block-chain payment rail. The block-chain payment rail is also a suitable method for clients to pay for services including running of the computative computer model and monitoring the land area of interest.

1.5 Other Embodiments
[0088] Alternative embodiments implement the above methods and/or processing modules in non-transitory computer-readable media, storing computer-readable instructions that, when executed by a processing system, cause the processing system to perform the method(s) discussed herein. The instructions can be executed by computer-executable components integrated with the computer-readable medium and/or processing system. The computer-readable medium may include any suitable computer-readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (CD or DVD), hard drives, floppy drives, non-transitory computer-readable media, or any suitable device. The computer-executable component can include a computing system and/or processing system (e.g., including one or more collocated or distributed, remote or local processors) connected to the non-transitory computer-readable medium, such as CPUs, GPUs, TPUs, microprocessors, or ASICs, but the instructions can alternatively or additionally be executed by any suitable dedicated hardware device.
[0089] Embodiments of the system and/or method can include every combination and permutation of the various system components and the various method processes, wherein one or more instances of the method and/or processes described herein can be performed asynchronously (e.g., sequentially), concurrently (e.g., in parallel), or in any other suitable order by and/or using one or more instances of the systems, elements, and/or entities described herein.
[0090] As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the preferred embodiments of the invention without departing from the scope of this invention defined in the following claims.
[0091] According to a list of non-claimed embodiments A to T of the second aspect of the invention, there is provided:

A. A computer-implemented method for quantifying vegetation metrics in an area of land comprising:
a. Running an image processing module further comprising
i. Receiving first a substantially aerial data set of electromagnetic bands for said land;
ii. Defining a region of interest within the area of land based on the spectral bands;
iii. Extracting a set of vegetative indices from the region of interest;
iv. Generating a data-layer mask based on the region of interest and the set of vegetative indices;
v. Applying the data-layer mask to the received data set of electromagnetic bands to create a new image;
b. Reading the output of the image processing module into a computative computer model;
c. Running the computative computer model to predict vegetation metrics for the previously identified region of interest.

B. The method of embodiment A, wherein the area of land captured by the electromagnetic bands is substantially a forest.

C. The method of embodiment A, wherein the training for the computational computer model further comprises:
a. Receiving a set of image input, ground-truth data, and additional training files including training polygons and training area files;
b. Identifying overlap between the images in the image data set and training files;
c. Generating a new image that incorporates the boundary weight information from the training files in the overlap for each image in the image data set, thus creating a new image data set;
d. Separating the new image data set into training, validation, and testing images;
e. Generating predictions of vegetation metrics based off the training frames;
f. Validating the predictions using the validation frames;
g. Testing the model using the test frames;
h. Confirming the results of the testing frames against ground-truth data;
i. Repeating steps a-h to substantially cover the region of interest.
D. The method of embodiment A, wherein the results are reported and displayed on a virtual user interface.
E. The method of embodiment A, wherein the vegetative indices calculated by the image processing module are one of, or a combination of, NDVI, ARVI, ExG, and SIPI.
F. The method of embodiment A, wherein the electromagnetic bands include bands in the RGB and Near Infrared (NIR) or Color Infrared (CIR).
G. The method of embodiment D, wherein the user interface incorporates a payment system for users that contribute ground-truth data.
H. The method of embodiment A, wherein the computational computer model calculates the increase in vegetation required to meet a carbon sequestration metric.
I. The method of embodiment C, wherein the computational computer model training further receives height information for the land area of interest through means including LiDAR, SONAR, or manual measurement.
J. The method of embodiment C, wherein the computational computer model is a deep-learning convolutional neural network.
K. The method of embodiment G, wherein the ground-truth data is gathered by individuals via imaging, manual labor including manual measurement with measurement tools such as tape measures or handheld lasers, or written observation.
L. The method of embodiment K, wherein additional ground-truth data is gathered by hardware having at least soil information gathering capabilities or other vegetation identification abilities.
M. The method of embodiment K, wherein individuals who gather ground-truth data submit said data to an online database that is accessible by the computational computer model.

N. The method of embodiment M, wherein the individuals who gather ground-truth data submit said data through the user interface described in embodiment D.

O. The method of embodiment C, wherein the ground-truth dataset includes vegetation species, species volume, and species density.
P. The method of embodiment A, wherein the vegetation metric output is carbon sequestration.
Q. The method of embodiment A, wherein the image processing module is an incorporated aspect of the computational computer model such that the computational computer model is capable of receiving the raw dataset of electromagnetic bands.
R. The method of embodiment A, wherein the dataset of electromagnetic bands is received by the image processing module as a multispectral or hyperspectral image.
S. The method of embodiment R, wherein the image processing module generates a heatmap indicating the intensity of a calculated vegetative index overlaid onto the multispectral or hyperspectral image for the region of interest.
T. The method of embodiment A, wherein the computational computer model reads in the filtered image produced by the image processing module and an image with ExG data for the area of interest.
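The embodiments above name the vegetative indices and the data-layer mask but do not disclose reference code. The following Python sketch is a minimal, purely illustrative computation of the four indices named in embodiment E and of a simple NDVI-based mask as in embodiment A; the function names, the 0.3 threshold, and the choice of gamma = 1 in the ARVI formula are assumptions, not part of the disclosure.

```python
# Illustrative sketch only: per-pixel vegetative indices (embodiment E) and a
# simple data-layer mask (embodiment A) from reflectance bands. All names and
# the NDVI threshold of 0.3 are hypothetical assumptions.
import numpy as np

def vegetative_indices(red, green, blue, nir):
    """Return NDVI, ExG, SIPI, and ARVI for arrays of band reflectances."""
    eps = 1e-9  # guards against division by zero
    ndvi = (nir - red) / (nir + red + eps)
    # ExG is computed on sum-normalized (chromatic) RGB coordinates.
    total = red + green + blue + eps
    r, g, b = red / total, green / total, blue / total
    exg = 2.0 * g - r - b
    sipi = (nir - blue) / (nir - red + eps)
    # ARVI with the common choice gamma = 1, so the red-blue term is 2*red - blue.
    rb = 2.0 * red - blue
    arvi = (nir - rb) / (nir + rb + eps)
    return ndvi, exg, sipi, arvi

def data_layer_mask(ndvi, threshold=0.3):
    """Binary mask keeping pixels whose NDVI suggests vegetation."""
    return ndvi > threshold

# Dummy reflectance tiles stand in for the received electromagnetic bands.
red, green, blue, nir = np.random.rand(4, 256, 256) * 0.5 + 0.25
ndvi, exg, sipi, arvi = vegetative_indices(red, green, blue, nir)
mask = data_layer_mask(ndvi)
masked_nir = np.where(mask, nir, 0.0)  # the "new image" after applying the mask
```

A thresholded NDVI mask is only one plausible reading of the data-layer mask in embodiment A; any combination of the extracted indices could serve the same role.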
[0092] To summarize, the second aspect of the invention relates to a method for quantifying biomass data in a portion of land including an image processing module receiving a substantially aerial set of electromagnetic bands that capture the land; defining an area of interest; extracting a set of vegetation indices; generating a data-layer mask; and applying the data-layer mask to the original set of electromagnetic bands. The method further comprises a computational computer model that takes in the results of the image processing module and predicts biomass data metrics for the portion of land captured by the original set of electromagnetic bands through loss and optimization algorithms. The method functions to extract data from captured electromagnetic bands to generate predictions for a series of vegetative metrics for a portion of land, including carbon sequestration, biomass density, vegetation health, deforestation volatility, and vegetation gain/loss.
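The summary invokes "loss and optimization algorithms" without fixing a particular choice. The sketch below shows one conventional arrangement under stated assumptions: PyTorch, a toy convolutional regressor over a five-channel input (RGB, NIR, and an ExG layer, loosely following embodiment T), mean-squared-error loss, and the Adam optimizer. Every identifier here is hypothetical; none of this is the patented implementation.

```python
# Hypothetical sketch of the computational model stage: a small convolutional
# regressor mapping a masked multi-channel tile to a vector of vegetation
# metrics, trained with an MSE loss and the Adam optimizer.
import torch
import torch.nn as nn

class VegetationRegressor(nn.Module):
    def __init__(self, in_channels=5, n_metrics=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dimensions
        )
        # One output per metric, e.g. carbon sequestration, biomass density,
        # vegetation health, deforestation volatility, vegetation gain/loss.
        self.head = nn.Linear(32, n_metrics)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = VegetationRegressor()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative optimization step on dummy data.
images = torch.randn(8, 5, 64, 64)   # batch of masked image tiles
targets = torch.randn(8, 5)          # per-tile vegetation metric targets
optimizer.zero_grad()
loss = loss_fn(model(images), targets)
loss.backward()
optimizer.step()
```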

Claims (20)

1. A method for measuring carbon sequestration of a forest region, performed by a computer-based machine, the method comprising:
a) receiving an aerial image of a forest region at a given time unit, the aerial image including a pixel set;
b) for each pixel in a geolocation in the pixel set of the aerial image: extracting a set of features from the pixel set, and computing a set of vegetative indices based on the set of features extracted from the pixel set;
c) using a machine learning technique for a model, training the model by comparing a subset of features and the set of vegetative indices to a ground-truth dataset, determining a set of tree instances and a set of tree characteristics associated with each tree instance in the set of tree instances;
d) quantifying a number of trees and a carbon sequestration yield in the forest region based on the set of tree instances, the set of tree characteristics associated with each tree instance, and an auxiliary data source set; and
e) iterating previous steps a-d at different times in the forest region and aggregating time series data, the time series data including the number of trees and the carbon sequestration yield over a series of particular time periods, performing a time series carbon analysis based on the time series data.
2. The method of claim 1, wherein receiving the aerial image of the forest region comprises receiving a multispectral image of the forest region from an artificial satellite.
3. The method of claim 1, wherein extracting the set of features from the pixel set comprises extracting a multispectral bands dataset for the pixel set, and determining a forest canopy height dataset for the pixel set.
4. The method of claim 3, wherein determining the forest canopy height dataset for the pixel set comprises receiving a red-green-blue (RGB) visual bands dataset with a light detection and ranging (LiDAR) dataset based on LiDAR sensors.
5. The method of claim 1, wherein computing the set of vegetative indices based on the set of features comprises calculating the set of vegetative indices using the multispectral bands dataset.
6. The method of claim 5, wherein the set of vegetative indices comprises normalized difference vegetation index (NDVI), excess green index (ExG), structure insensitive pigment index (SIPI), and atmospherically resistant vegetation index (ARVI).
7. The method of claim 1, wherein the ground-truth dataset comprises a decentralized mobile device imagery dataset from a network of mobile devices and an unmanned aerial vehicle dataset collected by a network of unmanned aerial vehicles mounted with photogrammetry and LiDAR systems.
8. The method of claim 1, wherein the set of tree characteristics associated with each tree instance comprises tree species, tree height, tree girth, tree age, and tree health.
9. The method of claim 1, wherein the auxiliary data source set comprises local climate, drought status, flood status, soil health, crop failure, and bird migration.
10. The method of claim 1, further comprising: attaching the time series carbon analysis of the forest region as metadata to a data object; transacting the time series carbon analysis of the forest region on a carbon credit marketplace platform.
11. A method for training a model to count a set of tree instances and classify the set of tree characteristics associated with each tree instance, performed by a computer-based machine, the method comprising:
a) receiving a training aerial image of a forest region at a given time unit, the training aerial image including a pixel set;
b) for each pixel in a geolocation in the set of pixels of the training aerial image: extracting a set of features from the pixel set, and computing a set of vegetative indices based on the set of features extracted from the pixel set;
c) gathering a ground-truth dataset based on the geolocation of each pixel in the pixel set, the ground-truth dataset containing a set of tree instances and a set of tree characteristics associated with each tree instance in the set of tree instances;
d) iterating previous steps a-c for a set of training aerial images to create a training aerial images dataset, the training aerial images dataset including the set of features and the set of vegetative indices for each training aerial image in the set of training aerial images; and
e) training a model using a machine learning technique for the set of training aerial images, associating the set of tree instances and the set of tree characteristics for each tree instance from the ground-truth dataset to the corresponding set of features and the set of vegetative indices for each pixel in the pixel set.
12. The method of claim 11, wherein receiving the training aerial image of the forest region comprises receiving a multispectral image of the forest region from an artificial satellite.
13. The method of claim 11, wherein extracting the set of features from the pixel set comprises extracting a multispectral bands dataset for the pixel set, and determining a forest canopy height dataset for the pixel set.
14. The method of claim 13, wherein determining the forest canopy height dataset for the pixel set comprises receiving a red-green-blue (RGB) visual bands dataset with a light detection and ranging (LiDAR) dataset based on LiDAR sensors.
15. The method of claim 11, wherein computing the set of vegetative indices based on the set of features comprises calculating the set of vegetative indices using the multispectral bands dataset.
16. The method of claim 15, wherein the set of vegetative indices comprises normalized difference vegetation index (NDVI), excess green index (ExG), structure insensitive pigment index (SIPI), and atmospherically resistant vegetation index (ARVI).
17. The method of claim 11, wherein gathering the ground-truth dataset comprises capturing a decentralized mobile device imagery dataset by a network of mobile devices, and collecting an unmanned aerial vehicle dataset by a network of unmanned aerial vehicles mounted with photogrammetry and LiDAR systems.
18. The method of claim 11, wherein the set of tree characteristics associated with each tree instance comprises tree species, tree height, tree girth, tree age, and tree health.
19. The method of claim 11, wherein training a model using a machine learning technique comprises training a convolutional neural network model.
20. The method of claim 11, further comprising transacting the model on a carbon credit marketplace platform.
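Claims 11-19 leave the concrete assembly of the training dataset to the implementer. The sketch below is one hypothetical reading of steps c)-e) of claim 11: geolocated ground-truth tree records are paired with per-pixel feature tiles, and a deliberately tiny convolutional model (standing in for the convolutional neural network of claim 19) is fitted to them. Dataset fields, shapes, and tile sizes are assumptions for illustration only.

```python
# Hypothetical illustration of claim 11, steps c)-e): pair feature tiles
# (bands plus vegetative indices per pixel) with geolocated ground-truth
# tree counts, then fit a small convolutional regressor (cf. claim 19).
import torch
from torch.utils.data import Dataset, DataLoader

class TreeGroundTruthDataset(Dataset):
    """Pairs feature tiles with ground-truth tree counts per tile."""
    def __init__(self, tiles, tree_counts):
        self.tiles = tiles              # (N, C, H, W) float tensor
        self.tree_counts = tree_counts  # (N,) float tensor

    def __len__(self):
        return len(self.tiles)

    def __getitem__(self, idx):
        return self.tiles[idx], self.tree_counts[idx]

# Dummy stand-ins for the training aerial images dataset of step d).
tiles = torch.randn(32, 8, 64, 64)        # e.g. 4 bands + 4 vegetative indices
tree_counts = torch.randint(0, 50, (32,)).float()
loader = DataLoader(TreeGroundTruthDataset(tiles, tree_counts), batch_size=8)

model = torch.nn.Sequential(              # toy stand-in for the claim-19 CNN
    torch.nn.Conv2d(8, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
    torch.nn.Linear(16, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(2):  # step e): associate features/indices with ground truth
    for x, y in loader:
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x).squeeze(1), y)
        loss.backward()
        optimizer.step()
```

A per-tile count regressor is only one reading of the claim language; an instance-segmentation model that delineates individual tree crowns would fit it equally well.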