CN110070488B - Multi-angle remote sensing image forest height extraction method based on convolutional neural network - Google Patents
Multi-angle remote sensing image forest height extraction method based on convolutional neural network
- Publication: CN110070488B (application CN201910336776.3A)
- Authority: CN (China)
- Legal status: Active
Classifications
- G06F18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06N3/045: Neural networks; combinations of networks
- G06T3/4023: Scaling of whole images or parts thereof, based on decimating or inserting pixels or lines of pixels
- G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
- G06T5/00: Image enhancement or restoration
- G06T7/11: Image analysis; region-based segmentation
Abstract
The invention discloses a method for extracting forest height from multi-angle remote sensing images based on a convolutional neural network. The method comprises, in order: performing orthorectification and resampling on ZY-3 (Ziyuan-3) multi-angle remote sensing images; extracting forest heights from lidar data and recording the longitude and latitude coordinates of the corresponding laser footprints; cutting the multi-angle images centered on the footprint coordinates to build a training sample set; constructing a convolutional neural network, training it, and saving the model; cutting the multi-angle images with a sliding window; and loading the saved model to predict forest height and produce a forest-height distribution map of the research area. The invention provides a new approach to scale extrapolation of forest height; it is easy to implement in code, computationally efficient, and generalizes well, and the generated forest-height distribution map shows good regional consistency.
Description
Technical Field
The invention relates to a method for extracting forest height from multi-angle remote sensing images using a convolutional neural network. It belongs to the fields of deep learning and forestry, offers strong generalization and practicality, and can be used to map forest height over continuous areas.
Background
Forest height is an important characteristic of a forest's vertical structure. It matters for carbon-cycle research and plays an irreplaceable role in forest biomass estimation and dynamic-change studies. Traditional forest-height surveys rely on sample plots and manual measurement, which is time-consuming, labor-intensive, and hard to scale. Remote sensing largely compensates for these shortcomings, and lidar, as a newer technology, makes accurate measurement of forest vertical structure possible.
Research on estimating forest height from remote sensing falls into three categories: (1) optical remote sensing data alone: optical signals do not penetrate the canopy and therefore carry little information about forest vertical structure, so forest-height parameters are rarely extracted this way; (2) lidar data alone: lidar provides accurate measurements of forest vertical structure, and extracting forest height from laser point clouds or full-waveform data has been studied extensively; (3) lidar combined with optical remote sensing: the spatial continuity and easy availability of optical data overcome lidar's limitation to discrete footprints when estimating forest height over continuous areas, and jointly inverting forest height from the two data sources is a current research hotspot.
In recent years deep learning has developed rapidly, making it possible to learn high-level image features autonomously. The convolutional neural network, a well-known deep-learning model, excels at image processing. Remote sensing images are information-rich, high-resolution, and continuously acquired, and to some extent reflect properties of ground objects, so the CNN's ability to learn high-level features autonomously can be used to extract features encoding tree height from such images. Because little work to date has combined convolutional neural networks with remote sensing imagery to predict forest height, the invention designs an effective forest-height prediction model based on CNN principles, lidar tree-height data, and optical remote sensing images, and realizes forest-height prediction over large, continuous areas.
Disclosure of Invention
The invention aims to overcome the limitation that lidar data can estimate forest height only over small, discrete areas, and provides a new, effective method for forest-height scale-extrapolation research. The multi-angle imagery used by the invention is ZY-3 (Ziyuan-3) remote sensing imagery, comprising the panchromatic nadir, forward, and backward views. To achieve this purpose, the invention adopts the following technical scheme:
A convolutional neural network-based method for extracting forest height from multi-angle remote sensing images, characterized in that a convolutional neural network model is trained on lidar-estimated forest heights and ZY-3 multi-angle imagery to predict forest height over a large, continuous region; the method comprises the following steps in order:
step 1: performing orthorectification and resampling on the resource third multi-angle remote sensing image; the method comprises the following specific steps:
step 1.1: acquiring 30m resolution data of a Digital Elevation Model (DEM), namely ASTER GDEM, of a research area;
step 1.2: mosaic and splice a plurality of acquired DEM images to generate a composite DEM image;
step 1.3: performing orthorectification on the resource three-dimensional multi-angle remote sensing image by using Arcgis software and a synthesized DEM image;
step 1.4: resampling the multi-angle remote sensing image by using a cubic convolution interpolation method, so that the images with different angles are resampled to the same grid resolution, namely the grid resolution is L meters;
Step 2: store all lidar-derived forest heights located within the research area, together with the longitude and latitude coordinates of each corresponding laser footprint center and each footprint's unique identification ID, in a single shapefile;
Step 3: cut the images of each angle into a set of fixed-size patches to build a training sample set; the specific steps are:
Step 3.1: take the quotient of the lidar footprint diameter, H meters, and the multi-angle image raster resolution, L meters, as the patch size, i.e. n × n pixels;
Step 3.2: read the shapefile, convert each footprint's geographic coordinates to pixel coordinates on the multi-angle images, and cut an n × n pixel patch from each angle's image centered on the footprint pixel, naming the patch with the footprint ID; repeat step 3.2 until every footprint falling on the images has been cut;
Step 3.3: equalize the patch counts across angles: keep a patch only if patches with the same ID exist for every angle, otherwise discard it; the angles then have equal numbers of patches in one-to-one correspondence by name;
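As an illustrative sketch of step 3.2 (not the patented implementation), converting a footprint's geographic coordinates to pixel coordinates and cutting a centered n × n patch might look like this, assuming a simple north-up raster with a known origin and square pixel size; `geo_to_pixel` and `crop_centered` are hypothetical names, and real ZY-3 scenes would additionally need their map projection handled:

```python
import numpy as np

def geo_to_pixel(lon, lat, origin_x, origin_y, pix_size):
    # North-up raster: column grows with longitude, row grows as latitude
    # decreases from the top-left origin (projection handling omitted).
    col = int(round((lon - origin_x) / pix_size))
    row = int(round((origin_y - lat) / pix_size))
    return row, col

def crop_centered(img, row, col, n):
    # Cut an n x n window centered on the footprint pixel; return None when
    # the window runs off the image (such footprints would be discarded).
    half = n // 2
    r0, c0 = row - half, col - half
    if r0 < 0 or c0 < 0 or r0 + n > img.shape[0] or c0 + n > img.shape[1]:
        return None
    return img[r0:r0 + n, c0:c0 + n]
```

Running the same crop per angle and keeping only IDs present in all angles implements step 3.3's equalization.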
Step 4: construct a convolutional neural network model and divide the samples obtained above into training samples, used to train the model, and verification samples, used to evaluate its performance; the specific steps are:
Step 4.1: stack the same-named patches from the different angles to generate a new sample set;
Step 4.2: split the new sample set, using about 2/3 of the samples for training and the remaining 1/3 for verification;
Step 4.3: construct the convolutional neural network; pair each training sample with the forest height of its footprint ID as the network's input and target, with the predicted forest height as output; train the network and save the model;
Step 4.4: load the model saved in step 4.3, feed it the verification samples paired with the forest heights of their footprint IDs, and evaluate its performance; if the performance does not reach the expected target, adjust the network parameters or structure and return to step 4.3;
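The patent does not disclose the exact network architecture (only FIG. 3 of the embodiment). Purely as a hedged sketch of the data flow in steps 4.1-4.3, stacked multi-angle patches in and a scalar forest height out, a minimal NumPy forward pass might look like this; the layer sizes and the `conv2d`/`cnn_forward` helpers are hypothetical, and the embodiment builds the real model in TensorFlow:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernels):
    # x: (H, W, C_in); kernels: (k, k, C_in, C_out); "valid" convolution.
    k, _, _, c_out = kernels.shape
    H, W = x.shape[0] - k + 1, x.shape[1] - k + 1
    out = np.zeros((H, W, c_out))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.tensordot(x[i:i + k, j:j + k, :], kernels, axes=3)
    return out

def cnn_forward(x, params):
    # conv -> ReLU -> global average pooling -> linear head -> scalar height.
    h = np.maximum(conv2d(x, params["k"]), 0.0)
    feat = h.mean(axis=(0, 1))
    return float(feat @ params["w"] + params["b"])

# Step 4.1: nadir/forward/backward 13x13 patches stacked as channels.
sample = rng.normal(size=(13, 13, 3))
params = {"k": 0.1 * rng.normal(size=(3, 3, 3, 8)),
          "w": rng.normal(size=8), "b": 0.0}
height = cnn_forward(sample, params)

# Step 4.2: roughly 2/3 of the samples for training, 1/3 for verification.
idx = rng.permutation(90)
train_idx, val_idx = idx[:60], idx[60:]
```

Training (step 4.3) would fit the weights against the lidar heights, e.g. by minimizing mean squared error; that loop is omitted here.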
Step 5: cut the multi-angle images continuously and seamlessly with a sliding window, load the saved model, and produce a forest-height distribution map of the research area at a raster resolution of H meters; the specific steps are:
Step 5.1: extract the images of the overlapping region of the equal-resolution multi-angle images obtained in step 1.4;
Step 5.2: cut each angle's overlap-region image into a series of n × n pixel patches by seamless sliding cutting;
Step 5.3: stack the same-named patches from the different angles to generate a new sample set;
Step 5.4: load the model finally saved in step 4.4, feed it the new sample set, and output forest-height predictions;
Step 5.5: generate a raster image with the same shape, extent, and position as the overlap region at a resolution of H meters, and write the forest-height predictions into its cells one-to-one as attributes;
Step 5.6: display the cells in different colors according to classes of the forest-height attribute values, generating the research area's forest-height distribution map at a raster resolution of H meters.
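As a hedged sketch of steps 5.2 and 5.5 (hypothetical helper names; real code would operate on the georeferenced overlap region), cutting the overlap image into seamless n × n tiles and writing one predicted height per tile into a coarser raster can be expressed as:

```python
import numpy as np

def sliding_tiles(img, n):
    # Seamless, non-overlapping n x n tiles covering the image
    # (a trailing strip narrower than n pixels is dropped).
    H, W = img.shape[:2]
    tiles, cells = [], []
    for r in range(0, H - n + 1, n):
        for c in range(0, W - n + 1, n):
            tiles.append(img[r:r + n, c:c + n])
            cells.append((r // n, c // n))   # cell index in the output raster
    return tiles, cells

def assemble_height_map(preds, cells, shape):
    # One predicted forest height per tile -> raster at tile resolution
    # (n pixels x L meters per cell, e.g. 13 x 2.3 m, approximately 30 m).
    out = np.full(shape, np.nan)
    for h, (i, j) in zip(preds, cells):
        out[i, j] = h
    return out
```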
Drawings
FIG. 1 is a flow diagram of the basic process of the invention;
FIG. 2 is the panchromatic nadir-view remote sensing image of the example;
FIG. 3 is a diagram of the example's convolutional neural network architecture;
fig. 4 is the forest-height distribution map produced in the example.
Detailed Description
An embodiment of the invention provides a convolutional neural network-based method for extracting forest height from multi-angle remote sensing images; the invention is explained and illustrated below with reference to the drawings.
The data set used in the embodiment is ZY-3 multi-angle remote sensing imagery of a certain area acquired in 2017, comprising the panchromatic nadir, forward, and backward views. The TensorFlow deep-learning framework is chosen to construct the convolutional neural network model, and the trained model generates the forest-height distribution map of the research area. The embodiment is implemented as follows:
step 1: performing orthorectification and resampling on the resource three-dimensional multi-angle remote sensing image; the method comprises the following specific steps:
step 1.1: acquiring 30m resolution data of a Digital Elevation Model (DEM), namely ASTER GDEM, of a research area;
step 1.2: mosaic and splice the obtained 4 DEM images to generate a composite DEM image;
step 1.3: performing orthorectification on the resource three-dimensional multi-angle remote sensing image by using Arcgis software and a synthesized DEM image;
step 1.4: resampling the multi-angle remote sensing image by using a cubic convolution interpolation method, so that the images with different angles are resampled to the same raster resolution, namely the raster resolution is 2.3 m, and the full-color front-view remote sensing image is shown in FIG. 2;
Step 2: store all lidar-derived forest heights located within the research area, together with the longitude and latitude coordinates of each corresponding laser footprint center and each footprint's unique identification ID, in a single shapefile;
Step 3: cut the images of each angle into a set of fixed-size patches to build a training sample set; the specific steps are:
Step 3.1: take the quotient of the lidar footprint diameter, 30 meters, and the multi-angle image raster resolution, 2.3 meters, as the patch size, i.e. 13 × 13 pixels;
Step 3.2: read the shapefile, convert each footprint's geographic coordinates to pixel coordinates on the multi-angle images, and cut a 13 × 13 pixel patch from each angle's image centered on the footprint pixel, naming the patch with the footprint ID; repeat step 3.2 until every footprint falling on the images has been cut;
Step 3.3: equalize the patch counts across angles: keep a patch only if patches with the same ID exist for every angle, otherwise discard it; the angles then have equal numbers of patches in one-to-one correspondence by name;
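The 13 × 13 patch size in step 3.1 above follows from simple arithmetic; the rounding convention shown here is an assumption on our part, since 30/2.3 is not an integer:

```python
# Patch size = lidar footprint diameter / raster resolution,
# rounded to the nearest whole pixel: 30 m / 2.3 m = 13.04... -> 13.
spot_diameter_m = 30.0
resolution_m = 2.3
n = round(spot_diameter_m / resolution_m)  # -> 13
```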
Step 4: construct a convolutional neural network model and divide the samples obtained above into training samples, used to train the model, and verification samples, used to evaluate its performance; the specific steps are:
Step 4.1: stack the same-named patches from the different angles to generate a new sample set;
Step 4.2: split the new sample set, using about 2/3 of the samples for training and the remaining 1/3 for verification;
Step 4.3: construct the convolutional neural network; pair each training sample with the forest height of its footprint ID as the network's input and target, with the predicted forest height as output; train the network and save the model;
Step 4.4: load the model saved in step 4.3, feed it the verification samples paired with the forest heights of their footprint IDs, and evaluate its performance; if the performance does not reach the expected target, adjust the network parameters or structure and return to step 4.3; the final convolutional neural network structure is shown in fig. 3;
Step 5: cut the multi-angle images continuously and seamlessly with a sliding window, load the saved model, and produce a forest-height distribution map of the research area at a raster resolution of 30 meters; the specific steps are:
Step 5.1: extract the images of the overlapping region of the equal-resolution multi-angle images obtained in step 1.4;
Step 5.2: cut each angle's overlap-region image into a series of 13 × 13 pixel patches by seamless sliding cutting;
Step 5.3: stack the same-named patches from the different angles to generate a new sample set;
Step 5.4: load the model finally saved in step 4.4, feed it the new sample set, and output forest-height predictions;
Step 5.5: generate a raster image with the same shape, extent, and position as the overlap region at a resolution of 30 meters, and write the forest-height predictions into its cells one-to-one as attributes;
Step 5.6: display the cells in different colors according to classes of the forest-height attribute values, generating the research area's forest-height distribution map at a raster resolution of 30 meters, as shown in fig. 4.
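Step 5.6's class-colored display can be sketched as follows; the class breaks and the green palette are hypothetical choices, since the patent does not specify the height bins or colors used in FIG. 4:

```python
import numpy as np

# Hypothetical class breaks (meters) and palette; assumptions for illustration.
breaks = [5.0, 10.0, 15.0, 20.0]
palette = np.array([[237, 248, 233],   # < 5 m
                    [186, 228, 179],   # 5-10 m
                    [116, 196, 118],   # 10-15 m
                    [49, 163, 84],     # 15-20 m
                    [0, 109, 44]])     # >= 20 m

def colorize(height_map):
    # Assign each 30 m cell the color of its height class; NaN cells
    # (no prediction) are rendered black.
    cls = np.digitize(np.nan_to_num(height_map, nan=-1.0), breaks)
    rgb = palette[cls]
    rgb[np.isnan(height_map)] = 0
    return rgb
```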
The above example is intended only to describe the invention and does not limit the technical solutions it sets out. All technical solutions and modifications that do not depart from the spirit and scope of the invention shall be construed as falling within the scope of the appended claims.
Claims (3)
1. A convolutional neural network-based method for extracting forest height from multi-angle remote sensing images, characterized by comprising the following steps in order:
step 1: performing orthorectification and resampling on the ZY-3 multi-angle remote sensing images;
step 2: storing all lidar-derived forest heights located within the research area, the longitude and latitude coordinates of each corresponding laser footprint center, and each footprint's unique identification ID in a single shapefile;
step 3: cutting the images of each angle into a set of fixed-size patches to build a training sample set;
step 4: constructing a convolutional neural network model and dividing the obtained samples into training samples, used to train the model, and verification samples, used to evaluate its performance;
step 5: cutting the multi-angle images continuously and seamlessly with a sliding window, loading the saved model, and producing a forest-height distribution map of the research area at a raster resolution of H meters;
wherein step 3 is implemented as follows:
step 3.1: taking the quotient of the lidar footprint diameter, H meters, and the multi-angle image raster resolution, L meters, as the patch size, i.e. n × n pixels;
step 3.2: reading the shapefile, converting each footprint's geographic coordinates to pixel coordinates on the multi-angle images, and cutting an n × n pixel patch from each angle's image centered on the footprint pixel, naming the patch with the footprint ID; repeating step 3.2 until every footprint falling on the images has been cut;
step 3.3: equalizing the patch counts across angles, i.e. keeping a patch only if patches with the same ID exist for every angle and discarding it otherwise, so that the angles have equal numbers of patches in one-to-one correspondence by name;
and step 4 is implemented as follows:
step 4.1: stacking the same-named patches from the different angles to generate a new sample set;
step 4.2: splitting the new sample set, with about 2/3 of the samples used for training and the remaining 1/3 for verification;
step 4.3: constructing the convolutional neural network, pairing each training sample with the forest height of its footprint ID as the network's input and target, with the predicted forest height as output, training the network, and saving the model;
step 4.4: loading the model saved in step 4.3, feeding it the verification samples paired with the forest heights of their footprint IDs, and evaluating its performance; if the performance does not reach the expected target, adjusting the network parameters or structure and returning to step 4.3.
2. The convolutional neural network-based multi-angle remote sensing image forest height extraction method of claim 1, wherein step 1 is implemented as follows:
step 1.1: acquiring 30 m resolution digital elevation model (DEM) data, namely ASTER GDEM, covering the research area;
step 1.2: mosaicking the acquired DEM tiles into a single composite DEM image;
step 1.3: orthorectifying the ZY-3 multi-angle remote sensing images in ArcGIS software using the composite DEM;
step 1.4: resampling the multi-angle images by cubic convolution interpolation so that the images of different angles share the same raster resolution, namely L meters.
3. The convolutional neural network-based multi-angle remote sensing image forest height extraction method of claim 1, wherein step 5 is implemented as follows:
step 5.1: extracting the images of the overlapping region of the equal-resolution multi-angle images obtained in step 1.4;
step 5.2: cutting each angle's overlap-region image into a series of n × n pixel patches by seamless sliding cutting;
step 5.3: stacking the same-named patches from the different angles to generate a new sample set;
step 5.4: loading the model finally saved in step 4.4, feeding it the new sample set, and outputting forest-height predictions;
step 5.5: generating a raster image with the same shape, extent, and position as the overlap region at a resolution of H meters, and writing the forest-height predictions into its cells one-to-one as attributes;
step 5.6: displaying the cells in different colors according to classes of the forest-height attribute values, generating the research area's forest-height distribution map at a raster resolution of H meters.
Priority Applications (1)
- CN201910336776.3A (priority and filing date 2019-04-25): Multi-angle remote sensing image forest height extraction method based on convolutional neural network
Publications (2)
- CN110070488A: published 2019-07-30
- CN110070488B: granted 2023-01-03
Family
- Family ID: 67368737
- Family application: CN201910336776.3A, filed 2019-04-25, granted as CN110070488B (active)
Cited By (6)
- CN113673596B: Remote sensing image target detection sample generation method based on traversal source target
- CN113920438B: Method for checking hidden danger of trees near power transmission lines by combining ICESat-2 and Jilin-1 imagery
- CN114037911B: Large-scale forest height remote sensing inversion method considering ecological zoning
- CN114972989B: Single remote sensing image height information estimation method based on a deep learning algorithm
- CN115100630B: Obstacle detection method, apparatus, vehicle, medium and chip
- CN117435848B: Satellite multi-angle index-based large-scale forest height inversion method and system
Citations (3)
- CN103760565A: Regional-scale forest canopy height remote sensing retrieval method
- CN105866792A: Novel satellite-borne laser radar tree height extraction method
- CN108038448A: Semi-supervised random forest hyperspectral remote sensing image classification method based on weighted entropy
Family Cites (1)
- US7639842B2: Remote sensing and probabilistic sampling based forest inventory method
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant