CN116310800B - Terrace automatic extraction method and device based on deep learning - Google Patents


Info

Publication number: CN116310800B (application CN202310128146.3A)
Authority: CN (China)
Prior art keywords: terrace, remote sensing, training sample, sensing image, area
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Application number: CN202310128146.3A
Other languages: Chinese (zh)
Other versions: CN116310800A
Inventors: 卢亚晗, 王学, 辛良杰, 李秀彬
Current and original assignee: Institute of Geographic Sciences and Natural Resources of CAS
Filing: application CN202310128146.3A filed by Institute of Geographic Sciences and Natural Resources of CAS
Publications: CN116310800A (application), CN116310800B (granted patent)
Legal events: application granted; legal status active

Classifications

    • G06V20/188 — Terrestrial scenes; vegetation
    • G06N3/04 — Neural networks; architecture, e.g. interconnection topology
    • G06N3/08 — Neural networks; learning methods
    • G06V10/774 — Machine learning; generating sets of training patterns
    • G06V10/82 — Image or video recognition using neural networks
    • G06V20/13 — Terrestrial scenes; satellite images
    • Y02A90/10 — ICT supporting adaptation to climate change


Abstract

The invention discloses a deep-learning-based method and device for automatically extracting terraces. A first ultra-high-resolution remote sensing image of a study area is acquired, together with slope correction data and spectral correction data. The first image is visually interpreted to build a terrace training sample set, on which a U-net++ deep learning model is trained; the trained first U-net++ model then performs supervised classification of the first image to obtain a preliminary terrace distribution range. The preliminary range is corrected using the slope and spectral correction data to yield a terrace spatial distribution map with high spatial resolution. Compared with the prior art, the method applies the U-net++ model to the automatic extraction of terraces, whose texture characteristics are distinctive, corrects the preliminary distribution range using terrain and spectral characteristics, and improves the extraction accuracy of terrace distribution areas.

Description

Terrace automatic extraction method and device based on deep learning
Technical Field
The invention relates to the technical field of satellite remote sensing land-cover classification, and in particular to a deep-learning-based method and device for automatically extracting terraces.
Background
Terraces are stepped or wave-profiled strip fields built along contour lines on hills and mountain slopes. Many researchers are currently studying the natural, ecological, agricultural and cultural value of terraces, and important results have been obtained that are key to the sustainable development of mountain areas. However, current research focuses only on small regions and small watersheds; research at large regional scales is insufficient because terrace spatial distribution information is lacking. A method for extracting detailed terrace spatial information at large regional scales is therefore of great significance for related research, land resource protection and mountain ecological construction.
At present, research on methods for extracting the spatial distribution of terraces is limited, although some researchers have made early attempts and applications. Existing work falls into four main approaches: (1) using long time series of spectral features combined with state-of-the-art classification algorithms such as random forests; (2) using conventional computer vision methods such as image thresholding; (3) direct manual visual interpretation; and (4) high-precision terrace extraction using unmanned aerial vehicles carrying LiDAR.
Although these methods can extract terraces reasonably well, they have four disadvantages. (1) It is difficult to distinguish terraces from surrounding hillslopes: terraces are defined by morphological characteristics and their spectral signature is weak, so extraction based on spectral features alone struggles to separate them. (2) Spatial resolution is low: terraces are mainly distributed in hilly and mountainous areas and, constrained by the terrain, tend to be narrow and fragmented; in 30-metre-resolution remote sensing data, mixed pixels and salt-and-pepper noise are pronounced, introducing large errors and hindering further terrace research, including studies of terrace abandonment. (3) Although conventional computer vision methods can exploit rich texture detail, manual thresholding suits only specific regions, because terrace form and complexity vary across regions, and its accuracy in large-scale extraction is low. (4) Economic and time costs are high: manual visual interpretation is very accurate but expensive and slow, and although UAV-borne LiDAR can extract terrace extents with high precision, its cost and deployment logistics make large-area terrace mapping difficult.
Disclosure of Invention
The technical problem addressed by the invention is as follows: to provide a deep-learning-based method and device for automatically extracting terraces, in which the preliminary terrace distribution range is corrected using terrain and spectral features, improving the extraction accuracy of terrace distribution areas.
To solve this technical problem, the invention provides a deep-learning-based terrace automatic extraction method comprising the following steps:
acquiring an ultra-high-resolution remote sensing image of a study area, DEM elevation data and a land cover product; performing pixel preprocessing on low-quality pixels of the image to obtain a first ultra-high-resolution remote sensing image; calculating the slope of the study area from the DEM elevation data and using it as slope correction data; and obtaining the product attribute values of the land cover product and reassigning them to obtain land cover class values used as spectral correction data;
performing visual interpretation of the first ultra-high-resolution remote sensing image to obtain a terrace training sample set, which comprises samples with obvious terrace features, samples with ambiguous terrace features, non-terrace samples, and non-terrace samples that are easily misclassified as terraces;
performing data preprocessing on the terrace training sample set to obtain a first terrace training sample set, and training a pre-constructed U-net++ deep learning model on the first terrace training sample set to obtain a trained first U-net++ deep learning model;
performing supervised classification of the first ultra-high-resolution remote sensing image with the first U-net++ deep learning model to obtain a preliminary terrace distribution range with high spatial resolution;
and correcting the preliminary terrace distribution range using the slope correction data and the spectral correction data to obtain a terrace spatial distribution map with high spatial resolution.
In one possible implementation, performing visual interpretation of the first ultra-high-resolution remote sensing image to obtain a terrace training sample set specifically comprises:
selecting a sample-area remote sensing image within the first ultra-high-resolution remote sensing image; vectorizing the sample-area image to obtain a terrace vector data set; converting the vector data set to raster data; and labelling the sample-area image with the raster data to obtain a labelled sample-area remote sensing image;
selecting regions of the labelled sample-area image according to preset feature selection criteria, which include colour and texture features, terrain features and spatial distribution features, to obtain terrace region data for the image;
and partitioning the labelled sample-area image according to the terrace region data to obtain the terrace training sample set, which comprises samples with obvious terrace features, samples with ambiguous terrace features, non-terrace samples, and non-terrace samples that are easily misclassified as terraces.
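The patent describes the labelling step only in prose. The toy Python sketch below illustrates how one visually interpreted terrace polygon could be burned into a raster label mask; the function name, the even-odd point-in-polygon test and the pixel-centre convention are all illustrative assumptions, not the patent's implementation (which would typically use GDAL/rasterio rasterisation).

```python
import numpy as np

def rasterize_polygon(vertices, height, width):
    """Burn one polygon (list of (row, col) vertices) into a binary mask
    by an even-odd crossing test at each pixel centre -- a toy stand-in
    for rasterising interpreted terrace vectors into label data."""
    ys, xs = np.mgrid[0:height, 0:width]
    py, px = ys + 0.5, xs + 0.5          # pixel centres
    inside = np.zeros((height, width), dtype=bool)
    n = len(vertices)
    for i in range(n):
        y1, x1 = vertices[i]
        y2, x2 = vertices[(i + 1) % n]
        crosses = (py < y1) != (py < y2)  # edge spans this pixel row
        # x coordinate where the edge crosses the pixel-centre row
        xcross = x1 + (py - y1) / (y2 - y1 + 1e-12) * (x2 - x1)
        inside ^= crosses & (px < xcross)
    return inside.astype(np.uint8)
```

A square polygon burned into a small grid gives 1 inside and 0 outside, which is the per-pixel label the training samples are cut from.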
In one possible implementation, performing pixel preprocessing on low-quality pixels of the ultra-high-resolution remote sensing image to obtain the first ultra-high-resolution remote sensing image specifically comprises:
masking low-quality pixel regions of the image using a cloud-detection spectral threshold algorithm, the low-quality regions being those obscured by cloud or fog;
and filling or mosaicking the low-quality regions with selected high-quality pixel regions (regions free of cloud and fog) to obtain the first ultra-high-resolution remote sensing image.
In one possible implementation, before training the pre-constructed U-net++ deep learning model on the first terrace training sample set, the method further comprises:
setting the network parameters of the pre-constructed U-net++ model, the parameters comprising input image patch size, batch size, learning rate, number of iterations, objective function, gradient descent strategy, momentum, decay rate and activation function.
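The parameter categories above are named but not valued in the text. The following sketch collects them into one hypothetical configuration; every number is an assumption chosen for illustration, and the decay function is one common reading of the "learning rate + decay rate" pairing.

```python
# Hypothetical U-net++ hyperparameters -- the patent names the parameter
# categories but gives no values, so all numbers here are assumptions.
UNETPP_PARAMS = {
    "input_size": (256, 256, 3),          # image patch fed to the network
    "batch_size": 16,
    "learning_rate": 1e-3,
    "epochs": 100,                        # number of iterations
    "objective": "binary_crossentropy",   # terrace vs. non-terrace
    "optimizer": "sgd",                   # gradient-descent strategy
    "momentum": 0.9,
    "decay_rate": 0.95,                   # multiplicative LR decay per epoch
    "activation": "relu",
}

def decayed_lr(base_lr, decay_rate, epoch):
    """Exponential learning-rate decay: lr * decay_rate ** epoch."""
    return base_lr * (decay_rate ** epoch)
```

With these assumed values the learning rate would fall from 1e-3 to about 9.0e-4 over two epochs.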
In one possible implementation, performing supervised classification of the first ultra-high-resolution remote sensing image with the first U-net++ deep learning model to obtain a preliminary terrace distribution range with high spatial resolution specifically comprises:
obtaining the image patch size set in the network parameters of the U-net++ model, and configuring a sliding window of that size;
sliding the window over the first ultra-high-resolution remote sensing image and feeding each window into the U-net++ model, so that the model interprets the image content of every window and produces a set of interpretation results;
and stitching the interpretation results together to obtain the preliminary terrace distribution range with high spatial resolution.
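The sliding-window classification just described can be sketched as follows. `predict` stands in for the trained U-net++ model, and the non-overlapping-tile simplification (image dimensions divisible by the window size) is an assumption of this sketch.

```python
import numpy as np

def sliding_window_classify(image, window, predict):
    """Slide a square window over `image` (H, W), run `predict` on each
    tile, and stitch the per-tile masks back into a full-scene mask.
    Assumes H and W are multiples of `window` (no overlap handling)."""
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for r in range(0, h, window):
        for c in range(0, w, window):
            tile = image[r:r + window, c:c + window]
            out[r:r + window, c:c + window] = predict(tile)
    return out
```

Because every tile is written back to its source position, the stitched output covers the scene exactly once, matching the "interpret each window, then splice" flow of the text.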
In one possible implementation, performing data preprocessing on the terrace training sample set to obtain the first terrace training sample set specifically comprises:
obtaining the brightness, grey level and contrast of each sample in the terrace training sample set and enhancing them to obtain an enhanced terrace training sample set;
and applying scaling and rotation, using a preset scale factor and a preset rotation angle, to each sample in the enhanced set to obtain the first terrace training sample set.
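A minimal sketch of the augmentation just described, assuming a simple linear brightness/contrast stretch and 90-degree rotations; the actual enhancement operations and preset values are not specified in the text.

```python
import numpy as np

def augment(patch, brightness=20.0, contrast=1.2, k_rot=1):
    """Toy augmentation: contrast stretch about the patch mean, a
    brightness offset, then a 90-degree rotation. All parameter values
    are illustrative stand-ins for the preset enhancement, scaling and
    rotation settings named in the text."""
    out = patch.astype(np.float32)
    out = (out - out.mean()) * contrast + out.mean() + brightness
    out = np.clip(out, 0, 255).astype(np.uint8)
    return np.rot90(out, k=k_rot)
```

Applied to a uniform patch the stretch leaves pixel values shifted only by the brightness offset, while the rotation transposes the patch dimensions.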
In one possible implementation, correcting the preliminary terrace distribution range using the slope correction data and the spectral correction data specifically comprises:
obtaining, from the slope correction data, the slope value of each pixel in the preliminary terrace distribution range; if the slope is not less than 2 degrees, the pixel's area is kept as terrace, and if the slope is less than 2 degrees, it is reclassified as non-terrace;
and obtaining, from the spectral correction data, the land cover class value of each pixel in the preliminary range; if the value is not 1, the pixel's area is kept as terrace, and if the value is 1, it is reclassified as non-terrace.
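The two correction rules can be expressed compactly. This sketch assumes the preliminary range, the slope raster and the land cover class raster are co-registered per-pixel arrays of the same shape.

```python
import numpy as np

def correct_terrace_mask(mask, slope_deg, landcover_class):
    """Apply the two correction rules from the text: pixels with slope
    below 2 degrees, or with land-cover class value 1 (water, artificial
    surface, bare land, glacier/permanent snow), are reset to
    non-terrace; all other terrace pixels are kept."""
    corrected = mask.copy()
    corrected[slope_deg < 2] = 0       # slope rule
    corrected[landcover_class == 1] = 0  # spectral (land cover) rule
    return corrected
```

A pixel survives only if it passes both rules, which is exactly the intersection the text describes.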
The invention also provides a deep-learning-based terrace automatic extraction device, comprising: a study-area data acquisition module, a training sample set generation module, a model training module, a preliminary terrace distribution range acquisition module and a terrace distribution range correction module;
the study-area data acquisition module is configured to acquire an ultra-high-resolution remote sensing image, DEM elevation data and a land cover product; perform pixel preprocessing on low-quality pixels of the image to obtain a first ultra-high-resolution remote sensing image; calculate the slope of the study area from the DEM elevation data and use it as slope correction data; and obtain the product attribute values of the land cover product and reassign them to obtain land cover class values used as spectral correction data;
the training sample set generation module is configured to perform visual interpretation of the first ultra-high-resolution remote sensing image to obtain a terrace training sample set, which comprises samples with obvious terrace features, samples with ambiguous terrace features, non-terrace samples, and non-terrace samples that are easily misclassified as terraces;
the model training module is configured to perform data preprocessing on the terrace training sample set to obtain a first terrace training sample set, and to train a pre-constructed U-net++ deep learning model on the first terrace training sample set to obtain a trained first U-net++ deep learning model;
the preliminary terrace distribution range acquisition module is configured to perform supervised classification of the first ultra-high-resolution remote sensing image with the first U-net++ deep learning model to obtain a preliminary terrace distribution range with high spatial resolution;
and the terrace distribution range correction module is configured to correct the preliminary terrace distribution range using the slope correction data and the spectral correction data to obtain a terrace spatial distribution map with high spatial resolution.
In one possible implementation, the training sample set generation module is configured to perform visual interpretation of the first ultra-high-resolution remote sensing image to obtain the terrace training sample set, specifically by:
selecting a sample-area remote sensing image within the first ultra-high-resolution remote sensing image; vectorizing the sample-area image to obtain a terrace vector data set; converting the vector data set to raster data; and labelling the sample-area image with the raster data to obtain a labelled sample-area remote sensing image;
selecting regions of the labelled sample-area image according to preset feature selection criteria, which include colour and texture features, terrain features and spatial distribution features, to obtain terrace region data for the image;
and partitioning the labelled sample-area image according to the terrace region data to obtain the terrace training sample set, which comprises samples with obvious terrace features, samples with ambiguous terrace features, non-terrace samples, and non-terrace samples that are easily misclassified as terraces.
In one possible implementation, the study-area data acquisition module is configured to perform pixel preprocessing on low-quality pixels of the ultra-high-resolution remote sensing image to obtain the first ultra-high-resolution remote sensing image, specifically by:
masking low-quality pixel regions of the image using a cloud-detection spectral threshold algorithm, the low-quality regions being those obscured by cloud or fog;
and filling or mosaicking the low-quality regions with selected high-quality pixel regions (regions free of cloud and fog) to obtain the first ultra-high-resolution remote sensing image.
In one possible implementation, the model training module is further configured to set the network parameters of the pre-constructed U-net++ deep learning model, the parameters comprising input image patch size, batch size, learning rate, number of iterations, objective function, gradient descent strategy, momentum, decay rate and activation function.
In one possible implementation, the preliminary terrace distribution range acquisition module is configured to perform supervised classification of the first ultra-high-resolution remote sensing image with the first U-net++ deep learning model to obtain a preliminary terrace distribution range with high spatial resolution, specifically by:
obtaining the image patch size set in the network parameters of the U-net++ model, and configuring a sliding window of that size;
sliding the window over the first ultra-high-resolution remote sensing image and feeding each window into the U-net++ model, so that the model interprets the image content of every window and produces a set of interpretation results;
and stitching the interpretation results together to obtain the preliminary terrace distribution range with high spatial resolution.
In one possible implementation, the model training module is configured to perform data preprocessing on the terrace training sample set to obtain the first terrace training sample set, specifically by:
obtaining the brightness, grey level and contrast of each sample in the terrace training sample set and enhancing them to obtain an enhanced terrace training sample set;
and applying scaling and rotation, using a preset scale factor and a preset rotation angle, to each sample in the enhanced set to obtain the first terrace training sample set.
In one possible implementation, the terrace distribution range correction module is configured to correct the preliminary terrace distribution range using the slope correction data and the spectral correction data, specifically by:
obtaining, from the slope correction data, the slope value of each pixel in the preliminary terrace distribution range; if the slope is not less than 2 degrees, the pixel's area is kept as terrace, and if the slope is less than 2 degrees, it is reclassified as non-terrace;
and obtaining, from the spectral correction data, the land cover class value of each pixel in the preliminary range; if the value is not 1, the pixel's area is kept as terrace, and if the value is 1, it is reclassified as non-terrace.
The invention also provides a terminal device comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor, when executing the computer program, implements the deep-learning-based terrace automatic extraction method described above.
The invention also provides a computer-readable storage medium comprising a stored computer program, wherein, when the computer program runs, the device on which the storage medium resides is controlled to execute the deep-learning-based terrace automatic extraction method described above.
Compared with the prior art, the deep-learning-based terrace automatic extraction method and device of the invention have the following beneficial effects:
a first ultra-high-resolution remote sensing image of the study area, slope correction data and spectral correction data are acquired; the first image is visually interpreted to obtain a terrace training sample set, which is preprocessed into a first terrace training sample set used to train a U-net++ deep learning model; the trained first U-net++ model performs supervised classification of the first image to obtain a preliminary terrace distribution range; and the preliminary range is corrected with the slope and spectral correction data to obtain a terrace spatial distribution map with high spatial resolution. Compared with the prior art, the method corrects the preliminary terrace distribution range using terrain and spectral features, improves the extraction accuracy of terrace distribution areas, has low economic and time costs, and can automatically extract terraces over large areas.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of a terrace automatic extraction method based on deep learning provided by the invention;
FIG. 2 is a schematic structural diagram of an embodiment of the deep-learning-based terrace automatic extraction device provided by the invention.
Detailed Description
The following describes the embodiments of the invention clearly and completely with reference to the accompanying drawings. The described embodiments are evidently only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Example 1
Referring to FIG. 1, FIG. 1 is a schematic flow chart of an embodiment of the deep-learning-based terrace automatic extraction method. As shown in FIG. 1, the method comprises steps 101 to 105, as follows:
step 101: obtaining ultra-high resolution remote sensing images, DEM elevation data and land cover products, performing pixel pretreatment on low-quality pixels of the ultra-high resolution remote sensing images to obtain first ultra-high resolution remote sensing images, calculating the gradient of a research area based on the DEM elevation data, taking the gradient of the research area as gradient correction data, obtaining product attribute values of the land cover products, performing reassignment on the product attribute values, and obtaining land cover category values as spectrum correction data.
In one embodiment, ultra-high-resolution images are acquired from NASA, ESA or the Chinese Gaofen (GF) series. Preferably, in this embodiment, images provided by GeoEye-1, WorldView-2 and WorldView-3 are selected as the ultra-high-resolution remote sensing images. Their metre-level spatial resolution effectively reduces the classification error introduced by mixed pixels, improves the smoothness and accuracy of the edges of the extracted terrace extent, and thus effectively improves classification precision. A mixed pixel is a pixel containing several different land cover types, occurring mainly at class boundaries; mixed pixels are one of the main factors degrading recognition and classification precision, and their effect is especially pronounced for linear and fine ground objects.
In one embodiment, elevation data are obtained from the GEE platform; the elevation data come from the Shuttle Radar Topography Mission (SRTM). Spaceborne radar topographic mapping refers to remote sensing measurement of the Earth's surface from platforms such as artificial satellites, spacecraft and space shuttles. Earlier spaceborne mapping was limited in precision and could generally produce only medium- and small-scale maps, whereas the SRTM data adopted in this embodiment have a higher spatial resolution and effectively meet the requirements. The SRTM DEM has multiple versions (V1, V2, V4); version V4.1 is preferably adopted here, because V4 interpolates and repairs areas of missing data, giving higher product precision, better coverage and smaller errors, which effectively meets the subsequent requirements for correcting terrace areas and improves the accuracy of the finally extracted terrace spatial extent.
In one embodiment, the land cover product is obtained from the GlobeLand30 website. GlobeLand30 is a global land cover data set with 30-metre spatial resolution developed in China, and its classification precision is high. The classification time of the V2020 edition used here is close to the acquisition time of the ultra-high-resolution remote sensing images, effectively reducing errors caused by a mismatch in product dates, meeting the subsequent requirements for correcting terrace areas, and improving the accuracy of the finally extracted terrace spatial extent.
In one embodiment, pixel preprocessing, comprising masking, filling and mosaicking, is applied to the low-quality pixels of the ultra-high-resolution remote sensing image to obtain the first ultra-high-resolution remote sensing image. Specifically, cloud detection is performed on the image with a cloud-detection spectral threshold algorithm; regions whose cloud content exceeds 20% are treated as low-quality pixel regions and masked. High-quality pixel regions (regions free of cloud and fog) are then selected to fill or mosaic the low-quality regions, yielding the first ultra-high-resolution remote sensing image. Preferably, the high-quality regions are selected from the GeoEye-1, WorldView-2 and WorldView-3 images of the same region acquired closest to the target acquisition time.
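A sketch of the fill step, under the assumption that the fallback scene is already co-registered with the cloudy one; the cloud mask itself would come from the spectral threshold algorithm, whose details are not given in the text.

```python
import numpy as np

def fill_clouds(image, cloud_mask, fallback):
    """Replace cloud-masked pixels with the co-registered fallback scene
    acquired closest in time, per the fill/mosaic step in the text."""
    return np.where(cloud_mask, fallback, image)

def cloud_fraction(cloud_mask):
    """Scene-level cloud content; regions above 20% cloud are treated
    as low-quality in the embodiment."""
    return float(cloud_mask.mean())
```

In practice the fallback would be the cloud-free GeoEye/WorldView acquisition nearest in time, as the embodiment prefers.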
In one embodiment, the slope of the study area is calculated based on the DEM elevation data and used as gradient correction data. Specifically, the slope is obtained by applying the Slope tool of the Raster Surface toolset in the 3D Analyst toolbox of ArcGIS to the DEM elevation data, where the DEM elevation data are the elevation data corresponding to the ultra-high resolution remote sensing image area; the resulting study-area slope is a slope value for each pixel of the ultra-high resolution remote sensing image.
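The per-pixel slope computation can be approximated in code. The sketch below uses simple central differences via `numpy.gradient` rather than the Horn 3x3 kernel that the ArcGIS Slope tool actually applies, so values near terrain breaks will differ slightly:

```python
import numpy as np

def slope_degrees(dem, cellsize):
    """Slope in degrees from a DEM grid, analogous to the ArcGIS
    Slope tool (here with simple central differences rather than
    Horn's 3x3 kernel)."""
    dz_dy, dz_dx = np.gradient(dem, cellsize)          # rise per map unit
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
```

For a plane rising one elevation unit per cell, every pixel comes out at 45 degrees, as expected.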
In an embodiment, the spectral data of the land cover product are obtained and used as spectrum correction data; preferably, the land cover products include glaciers and permanent snow, water bodies, bare land and artificial surfaces.
Specifically, a land cover product of the study area is obtained from the GlobeLand30 website and mosaicked with the first ultra-high resolution remote sensing image of the study area so that the two are associated; the mosaicking can use the Mosaic tool of the Raster Dataset toolset in the Data Management toolbox of ArcGIS. The product attribute value of the land cover product is then obtained and reassigned: it is set to 1 if it equals one of the preset data values and to 0 otherwise. The reassigned product attribute value is taken as the land cover class value of the land cover product, and the land cover class value is used as the spectrum correction data, wherein the preset data values are 60, 80, 90 and 100.
In an embodiment, the product attribute values of the land cover product may be obtained from the GlobeLand30 V2020 data, where the product attribute value of glaciers and permanent snow is 100, that of water bodies is 60, that of bare land is 90, and that of artificial surfaces is 80.
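The reassignment of product attribute values to a 0/1 land cover class layer can be sketched as an illustrative `numpy` version of the raster reclassification described above (the function name is hypothetical):

```python
import numpy as np

# GlobeLand30 attribute values used for spectral correction:
# 60 water body, 80 artificial surface, 90 bare land,
# 100 glacier and permanent snow.
CORRECTION_CLASSES = (60, 80, 90, 100)

def land_cover_class_value(attr):
    """Reassign product attribute values: 1 where the pixel belongs to
    one of the preset classes, 0 otherwise (the spectral correction layer)."""
    return np.isin(attr, CORRECTION_CLASSES).astype(np.uint8)
```

Every other GlobeLand30 class (cropland, forest, grassland, etc.) maps to 0 and is left untouched by the later correction step.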
In one embodiment, since glaciers and permanent snow, water bodies, bare land and artificial surfaces are land types that are hard to confuse with terraces in terms of spectral information, their land cover class values are selected as the spectrum correction data, which improves the accuracy of the subsequent terrace discrimination.
Step 102: and performing visual interpretation on the first ultrahigh-resolution remote sensing image to obtain a terrace training sample set, wherein the terrace training sample set comprises a terrace feature obvious training sample set, a terrace feature fuzzy training sample set, a non-terrace training sample set and an error-prone non-terrace feature training sample set.
In one embodiment, a sample area remote sensing image is selected within the first ultra-high resolution remote sensing image. Specifically, the area of the study region corresponding to the first ultra-high resolution remote sensing image is obtained, and a remote sensing image region covering 5%-15% of that area is selected as the sample area remote sensing image; preferably, if the study area is small, a remote sensing image region larger than 15% of the study area may be selected as the sample area remote sensing image.
In an embodiment, when the first ultra-high resolution remote sensing image is visually interpreted, data vectorization is first performed in ArcGIS with the sample area remote sensing image as the base map to obtain a terrace vector data set; the terrace vector data set is then converted to a raster to obtain grid data, and the sample area remote sensing image is labeled based on the grid data to obtain a labeled sample area remote sensing image.
In one embodiment, when the terrace vector data set is rasterized, the raster information must match that of the base map, including the cell size, the processing extent and the coordinate system, and the raster is exported in TIFF format.
In an embodiment, region selection is performed on the labeled sample area remote sensing image based on preset feature selection criteria to obtain remote sensing image terrace region data, wherein the feature selection criteria comprise color texture features, morphological features and spatial distribution features. Specifically, for color texture features: because different data sources differ in acquisition time and imaging quality, i.e. in image brightness, gray scale and color contrast, obvious mosaicking differences easily appear, so to improve the generalization ability of the model, sample selection in terms of terrace color texture should cover the seasons (spring, summer, autumn and winter) and different brightness, gray scale and color contrast as much as possible. For morphological features: terraces of various shapes, such as strip and elliptical terraces, distributed in areas such as gentle piedmont slopes and steep mountain slopes, should be covered as much as possible. For spatial distribution features: the samples should be distributed as uniformly as possible in space.
In an embodiment, the labeled sample area remote sensing image is divided based on the remote sensing image terrace region data to obtain a terrace training sample set, wherein the terrace training sample set comprises a terrace feature obvious training sample set, a terrace feature fuzzy training sample set, a non-terrace training sample set and an error-prone non-terrace feature training sample set. In deep learning, the construction of the training sample data set directly affects the finally trained model, so constructing a suitable training sample data set is particularly critical. Ground objects with low distinguishability, i.e. those easily confused with terraces, require special attention. Therefore, unlike existing methods that directly divide the samples into a terrace data set and a non-terrace data set, adding the terrace feature fuzzy training sample set and the error-prone non-terrace feature training sample set enables the model, while learning the overall terrace features, to more accurately grasp the distinguishing features of ground types easily confused with terraces, thereby improving extraction accuracy and generalization ability and enabling use over large areas.
Step 103: and carrying out data preprocessing on the terraced field training sample set to obtain a first terraced field training sample set, and carrying out model training on a pre-constructed U-net++ deep learning model based on the first terraced field training sample set to obtain a trained first U-net++ deep learning model.
In one embodiment, a U-net++ deep learning model is built based on the U-net++ network, and network parameters are set for the pre-built model, including the input image size, batch size, learning rate, number of iterations, objective function, gradient descent strategy, momentum, decay rate and activation function, where the input image size is the size of the tiled remote sensing image.
Specifically, for the input image size: the image size is set to 400 x 400 pixels. For the batch size: the batch size mainly affects the convergence of the model; if each batch contains a single sample, the model is easily affected by random disturbance, while an overly large batch places high demands on device video memory, so the batch size is set to 6 according to the limits of the current device. For the learning rate: a two-step strategy is adopted; the learning rate is set to 0.001 for the first 500 generations with the goal of quickly converging to the target region, then to 0.0001 for generations 500-1000 so that the model is fine-tuned with a smaller learning rate, and finally the model with the highest classification accuracy is selected. For the gradient descent strategy: Adam is chosen, using momentum and an adaptive learning rate to accelerate convergence. For the objective function: a cross-entropy classification loss function is selected to improve the discrimination between classes; meanwhile, because the non-terrace area in the samples is far larger than the terrace area, in order to reduce the influence of this imbalance and pay more attention to terrace classification accuracy during training, the terrace and non-terrace loss values in the objective function are multiplied by 5 and 1 respectively. The default U-net++ settings are used for momentum, decay rate and activation function.
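The class-weighted objective described above (terrace loss x5, non-terrace loss x1) can be illustrated with a minimal `numpy` binary cross-entropy; this is a sketch of the weighting idea, not the exact framework loss used in training:

```python
import numpy as np

def weighted_binary_cross_entropy(y_true, p_pred, w_terrace=5.0, w_other=1.0):
    """Per-pixel binary cross-entropy where terrace pixels (label 1)
    are weighted 5x to counter the terrace/non-terrace imbalance."""
    p = np.clip(p_pred, 1e-7, 1 - 1e-7)      # avoid log(0)
    loss = -(w_terrace * y_true * np.log(p)
             + w_other * (1 - y_true) * np.log(1 - p))
    return loss.mean()
```

With these weights, missing a terrace pixel costs five times as much as a comparable false alarm on a non-terrace pixel, which is what steers training toward terrace classification accuracy.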
In an embodiment, the brightness, gray scale and contrast of each terrace training sample in the terrace training sample set are obtained and enhanced to obtain an enhanced terrace training sample set, so that the recognition ability of the U-net++ deep learning model for color features is strengthened in the subsequent model training stage.
In an embodiment, a preset scaling ratio and a preset rotation degree are also set; each terrace training sample in the enhanced terrace training sample set is scaled based on the preset scaling ratio and rotated based on the preset rotation degree, i.e. rotated by 0 to 360 degrees, to obtain the first terrace training sample set, which improves the recognition ability of the model for morphological features and thus the accuracy with which the U-net++ deep learning model extracts terraces in a region.
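The color enhancement and rotation augmentation can be sketched as below. As an assumption for self-containment, the rotation is restricted to 90-degree multiples (arbitrary 0-360 degree rotation as described above would require an interpolation library), and the perturbation ranges are illustrative:

```python
import numpy as np

def augment(image, rng):
    """Randomly perturb brightness/contrast and apply a random
    90-degree-multiple rotation; `image` is an (H, W, bands) array
    with values in [0, 1] and `rng` a numpy Generator."""
    out = image.astype(np.float32)
    out = out * rng.uniform(0.8, 1.2)          # contrast-like scaling
    out = out + rng.uniform(-0.1, 0.1)         # brightness shift
    out = np.clip(out, 0.0, 1.0)
    k = int(rng.integers(0, 4))                # 0/90/180/270 degrees
    return np.rot90(out, k, axes=(0, 1))
```

Each training sample would typically be passed through `augment` several times to multiply the effective size of the sample set.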
In an embodiment, since the size of a terrace training sample may be larger than the input image size set in the U-net++ deep learning model, when each terrace training sample in the terrace training sample set is input into the model, a sliding window is set based on the input image size obtained from the network parameters of the model, and each terrace training sample is fed to the model through the sliding window, so that the model is trained on every terrace training sample in the set.
In an embodiment, after model training of the pre-built U-net++ deep learning model, the training result output by the model is obtained and verified to obtain the model classification accuracy. If the model classification accuracy is greater than a preset accuracy threshold, the first U-net++ deep learning model is determined; otherwise, the classification behavior of the model is examined, namely which areas are classified well and which features tend to cause misclassification, the training sample set is readjusted accordingly, the proportion of error-prone samples is increased, and the model is trained again. Because the training sample data set is selected manually, it is difficult to choose the most suitable set at once; this embodiment therefore allows the composition of the training sample data set to be fine-tuned after its initial construction according to the actual training effect and the ground feature types prone to misclassification, so that the model, while learning the overall terrace features, more effectively grasps the distinguishing features of ground types easily confused with terraces, improving extraction accuracy and generalization ability for large-area use.
In one embodiment, the trained first U-net++ deep learning model is stored as an .h5 file, which stores the parameter data of the whole U-net++ network; when the model is used later, the model file is imported directly with the load_model() function of keras.models.
In this embodiment, by constructing four types of terrace training data sets and applying data preprocessing, data enhancement and similar methods to them, the generalization ability of the U-Net++ deep learning model and the terrace extraction accuracy are effectively improved, and the influence of color and shape differences caused by the imaging time and shooting angle of the remote sensing images is reduced. A terrace spatial distribution map of high accuracy and high spatial resolution can thus be provided, offering a strong guarantee for subsequent large-scale terrace spatial extraction and strong support for scientific research on terrace grain security, ecological services and the like at regional and even national scales, as well as for the formulation of related laws, regulations and policies.
Step 104: and based on the first U-net++ deep learning model, performing supervision classification on the first ultra-high resolution remote sensing image to obtain a high spatial resolution primary terrace distribution range.
In an embodiment, the input image size set in the network parameters of the first U-net++ deep learning model is a fixed value, while the input ultra-high resolution remote sensing image is often larger than this size. In this embodiment, the image size set in the network parameters of the first U-net++ deep learning model is obtained, a sliding window is set according to this size, and the first ultra-high resolution remote sensing image is fed to the model through the sliding window, so that the model interprets the image in each sliding window to obtain a plurality of interpretation results, which are then mosaicked to obtain a primary terrace distribution range with high spatial resolution. The sliding window effectively reduces the difficulty of use for operators and improves convenience: an operator can input the remote sensing image of the target study area as needed without considering whether the image must be cropped.
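The sliding-window interpretation and mosaicking can be sketched as follows; as an assumption, `model` is any callable mapping a fixed-size tile to a per-pixel label map, and edge tiles are zero-padded to the window size:

```python
import numpy as np

def predict_large_image(image, model, tile=400):
    """Slide a tile x tile window over an image larger than the model's
    fixed input size, classify each tile, and mosaic the results back
    into a full-size label map. `model` maps a (tile, tile, bands)
    array to a (tile, tile) label array."""
    h, w = image.shape[:2]
    labels = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            y2, x2 = min(y + tile, h), min(x + tile, w)
            patch = np.zeros((tile, tile) + image.shape[2:], image.dtype)
            patch[:y2 - y, :x2 - x] = image[y:y2, x:x2]   # zero-pad edges
            labels[y:y2, x:x2] = model(patch)[:y2 - y, :x2 - x]
    return labels
```

A production version might additionally overlap adjacent windows and blend the overlap to suppress seam artifacts, but the non-overlapping version above captures the mechanism described in this embodiment.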
Preferably, the first ultra-high resolution remote sensing image may also be divided based on the set input image size to obtain a plurality of regional ultra-high resolution remote sensing images, which are input into the first U-net++ deep learning model so that it outputs an interpretation result for each regional image, i.e. the terrace distribution region in that regional image; the interpretation results are then mosaicked to obtain the primary terrace distribution range with high spatial resolution.
Preferably, when the set input image size is 400 x 400 pixels, the output interpretation result is also 400 x 400 pixels. This input image size is the optimal size found by testing against actual interpretation results: the larger the input image, the greater the demand on graphics processing unit (Graphics Processing Unit, GPU) video memory and thus on hardware; the smaller the input image, the weaker the terrace morphology and texture features it covers, which reduces classification accuracy. The 400 x 400 pixel size is therefore used in this embodiment.
In one embodiment, a U-Net++ deep learning network suitable for extracting texture features of ground objects is introduced into terrace extraction, and simultaneously, the existing ultra-high definition image dataset is combined, so that the terrace distribution range with high spatial resolution can be automatically extracted.
Step 105: and correcting the primary terrace distribution range based on the gradient correction data and the spectrum correction data to obtain a terrace space distribution map with high spatial resolution.
In an embodiment, based on the gradient correction data, a gradient value corresponding to each pixel in the primary terrace distribution range is obtained, if the gradient value is not less than 2 degrees, the area where the pixel is located is considered to be a terrace area, and if the gradient value is less than 2 degrees, the area where the pixel is located is considered to be a non-terrace area.
In an embodiment, based on the spectral correction data, a land coverage class value corresponding to each pixel in the primary terrace distribution range is obtained, if the land coverage class value is not 1, the area where the pixel is located is considered to be a terrace area, and if the land coverage class value is 1, the area where the pixel is located is modified to be a non-terrace area.
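The two corrections can be combined in a few lines; this sketch assumes the primary terrace map, the slope raster and the 0/1 land cover class layer are co-registered pixel grids, and the function name is illustrative:

```python
import numpy as np

def correct_terrace_map(terrace, slope_deg, cover_value):
    """Apply the two post-classification corrections: pixels flatter
    than 2 degrees, or falling on a correction land-cover class
    (cover_value == 1), are reset to non-terrace."""
    corrected = terrace.astype(bool).copy()
    corrected &= slope_deg >= 2.0      # terrain (gradient) correction
    corrected &= cover_value != 1      # spectral correction
    return corrected.astype(np.uint8)
```

Note that both corrections only remove false positives; they never add terrace pixels the model did not predict.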
In one embodiment, the terrace training samples participating in training are two-dimensional planar data lacking elevation and slope information, and their bands are only the three RGB bands, lacking other spectral information, so misclassification of "same spectrum, different objects" and "different spectra, same object" easily occurs. Therefore, the calculated gradient correction data and spectrum correction data are used to correct the high-spatial-resolution primary terrace distribution range obtained from the first interpretation, further correcting such misclassified areas and improving the accuracy of the final high-spatial-resolution terrace spatial distribution map.
In summary, the deep-learning-based terrace automatic extraction method provided by this embodiment innovatively builds a terrace extraction model that makes full use of terrace texture features. The model takes the U-net++ deep learning network as its framework and high-definition images as the data source, and adopts supervised pixel classification; after the high-spatial-resolution primary terrace distribution range is first extracted, the terrain and spectral features not considered during extraction are supplemented, the primary terrace distribution range is corrected with gradient correction data derived from DEM terrain data and spectrum correction data derived from GlobeLand30 land cover data, and the terrace distribution map is finally drawn. The method solves the time-consuming and labor-intensive problems of manual interpretation, as well as the low spatial resolution of interpreted remote sensing images and the low terrace extraction accuracy caused by the multi-modality and complexity of terraces within the same area, and is simple, easy to use, highly automated and highly accurate in classification.
Example 2
Referring to fig. 2, fig. 2 is a schematic structural diagram of an embodiment of a terrace automatic extraction device based on deep learning, and as shown in fig. 2, the device includes a research area data acquisition module 201, a training sample set generation module 202, a model training module 203, a primary terrace distribution range acquisition module 204 and a terrace distribution range correction module 205, which are specifically as follows:
the research area data acquisition module 201 is configured to acquire an ultra-high resolution remote sensing image, DEM elevation data and a land cover product; perform pixel preprocessing on the low-quality pixels of the ultra-high resolution remote sensing image to obtain a first ultra-high resolution remote sensing image; calculate the slope of the research area based on the DEM elevation data as gradient correction data; and acquire the product attribute value of the land cover product and reassign it to obtain a land cover class value as spectrum correction data.
The training sample set generating module 202 is configured to perform visual interpretation on the first ultra-high resolution remote sensing image to obtain a terrace training sample set, where the terrace training sample set includes a terrace feature obvious training sample set, a terrace feature fuzzy training sample set, a non-terrace training sample set, and an error-prone non-terrace feature training sample set.
The model training module 203 is configured to perform data preprocessing on the terraced field training sample set to obtain a first terraced field training sample set, and perform model training on a pre-constructed U-net++ deep learning model based on the first terraced field training sample set to obtain a trained first U-net++ deep learning model.
The primary terrace distribution range obtaining module 204 is configured to perform supervised classification on the first ultra-high resolution remote sensing image based on the first U-net++ deep learning model, so as to obtain a primary terrace distribution range with high spatial resolution.
The terrace distribution range correction module 205 is configured to correct the primary terrace distribution range based on the gradient correction data and the spectrum correction data, so as to obtain a terrace spatial distribution map with high spatial resolution.
In one embodiment, the training sample set generating module 202 is configured to perform visual interpretation on the first ultra-high resolution remote sensing image to obtain a terrace training sample set, and specifically includes: selecting a sample area remote sensing image within the first ultra-high resolution remote sensing image, performing data vectorization on the sample area remote sensing image to obtain a terrace vector data set, converting the terrace vector data set to a raster to obtain grid data, and labeling the sample area remote sensing image based on the grid data to obtain a labeled sample area remote sensing image; performing region selection on the labeled sample area remote sensing image based on preset feature selection criteria to obtain remote sensing image terrace region data, wherein the feature selection criteria comprise color texture features, morphological features and spatial distribution features; and dividing the labeled sample area remote sensing image based on the remote sensing image terrace region data to obtain a terrace training sample set, wherein the terrace training sample set comprises a terrace feature obvious training sample set, a terrace feature fuzzy training sample set, a non-terrace training sample set and an error-prone non-terrace feature training sample set.
In an embodiment, the research area data obtaining module 201 is configured to perform pixel preprocessing on a low quality pixel of the ultra-high resolution remote sensing image to obtain a first ultra-high resolution remote sensing image, and specifically includes: performing mask processing on a low-quality pixel region in the ultra-high resolution remote sensing image based on a cloud detection spectrum threshold algorithm, wherein the low-quality pixel region comprises a remote sensing image region with cloud and fog shielding; and selecting a high-quality pixel area in the ultra-high resolution remote sensing image to fill or splice the low-quality pixel area to obtain a first ultra-high resolution remote sensing image, wherein the high-quality pixel area comprises a remote sensing image area without cloud and fog shielding.
In an embodiment, the model training module 203 is further configured to set network parameters for the pre-built U-net++ deep learning model, including the input image size, batch size, learning rate, number of iterations, objective function, gradient descent strategy, momentum, decay rate and activation function.
In an embodiment, the primary terrace distribution range obtaining module 204 is configured to perform supervised classification on the first ultra-high resolution remote sensing image based on the first U-net++ deep learning model to obtain a primary terrace distribution range with high spatial resolution, and specifically includes: acquiring the size of an image picture set by network parameters in the U-net++ deep learning model, and setting a sliding window according to the size of the image picture; based on the sliding window, the first ultra-high resolution remote sensing image is input into the U-net++ deep learning model in a sliding mode, so that the U-net++ deep learning model interprets the first ultra-high resolution remote sensing image input by each sliding window to obtain a plurality of interpretation results; and performing splicing treatment on the interpretation results to obtain a primary terrace distribution range with high spatial resolution.
In an embodiment, the model training module 203 is configured to perform data preprocessing on the terrace training sample set to obtain a first terrace training sample set, and specifically includes: obtaining the brightness, gray scale and contrast of each terrace training sample in the terrace training sample set and enhancing them to obtain an enhanced terrace training sample set; and performing scaling adjustment and rotation adjustment on each terrace training sample in the enhanced terrace training sample set based on a preset scaling ratio and a preset rotation degree to obtain the first terrace training sample set.
In one embodiment, the terrace distribution range correction module 205 is configured to correct the primary terrace distribution range based on the gradient correction data and the spectrum correction data, and specifically includes: acquiring the gradient value corresponding to each pixel in the primary terrace distribution range based on the gradient correction data; if the gradient value is not less than 2 degrees, the area where the pixel is located is considered a terrace area, and if the gradient value is less than 2 degrees, the area where the pixel is located is considered a non-terrace area; and acquiring the land cover class value corresponding to each pixel in the primary terrace distribution range based on the spectrum correction data; if the land cover class value is not 1, the area where the pixel is located is considered a terrace area, and if the land cover class value is 1, the area where the pixel is located is modified to a non-terrace area.
It will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding process in the foregoing method embodiment for the specific working process of the above-described apparatus, which is not described in detail herein.
It should be noted that, the embodiment of the terrace automatic extraction device based on deep learning is merely illustrative, where the modules described as separate components may or may not be physically separated, and the components displayed as the modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
On the basis of the embodiment of the terrace automatic extraction method based on deep learning, another embodiment of the invention provides a terrace automatic extraction terminal device based on deep learning, which comprises a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, wherein the processor executes the computer program to realize the terrace automatic extraction method based on deep learning of any embodiment of the invention.
Illustratively, in this embodiment the computer program may be partitioned into one or more modules, which are stored in the memory and executed by the processor to perform the present invention. The one or more modules may be a series of computer program instruction segments capable of performing specific functions for describing the execution of the computer program in the deep learning based terrace automatic extraction terminal device.
The terraced fields automatic extraction terminal equipment based on deep learning can be computing equipment such as a desktop computer, a notebook computer, a palm computer and a cloud server. The terraced fields automatic extraction terminal equipment based on deep learning can comprise, but is not limited to, a processor and a memory.
The processor may be a central processing unit (Central Processing Unit, CPU) or a graphics processing unit (Graphics Processing Unit, GPU), and may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor; the processor is the control center of the deep-learning-based terrace automatic extraction terminal device, and uses various interfaces and lines to connect the parts of the whole device.
The memory may be used to store the computer program and/or modules, and the processor implements the various functions of the deep-learning-based terrace automatic extraction terminal device by running or executing the computer program and/or modules stored in the memory and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application required by at least one function, and the like, and the data storage area may store data created according to use of the device, and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one disk storage device, a flash memory device, or other volatile solid-state storage device.
On the basis of the above embodiments of the deep-learning-based terrace automatic extraction method, another embodiment of the invention provides a storage medium comprising a stored computer program, wherein, when the computer program runs, a device in which the storage medium is located is controlled to perform the deep-learning-based terrace automatic extraction method of any embodiment of the invention.
In this embodiment, the storage medium is a computer-readable storage medium, and the computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer-readable medium may be added or removed as appropriate according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
In summary, the invention provides a terrace automatic extraction method and device based on deep learning. A first ultra-high resolution remote sensing image, gradient correction data, and spectrum correction data of a research area are acquired; visual interpretation is performed on the first ultra-high resolution remote sensing image to obtain a terrace training sample set, which is preprocessed to obtain a first terrace training sample set; a U-net++ deep learning model is trained on the first terrace training sample set, and the trained first U-net++ deep learning model is used to perform supervised classification of the first ultra-high resolution remote sensing image to obtain a primary terrace distribution range; the primary terrace distribution range is then corrected based on the gradient correction data and the spectrum correction data to obtain a terrace spatial distribution map with high spatial resolution. Compared with the prior art, the method corrects the primary terrace distribution range using topographic and spectral features, improving the extraction accuracy of terrace distribution areas.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that modifications and substitutions can be made by those skilled in the art without departing from the technical principles of the present invention, and these modifications and substitutions should also be considered as being within the scope of the present invention.

Claims (9)

1. The terrace automatic extraction method based on deep learning is characterized by comprising the following steps of:
acquiring an ultra-high resolution remote sensing image, DEM elevation data, and a land cover product of a research area; performing pixel preprocessing on low-quality pixels of the ultra-high resolution remote sensing image to obtain a first ultra-high resolution remote sensing image; calculating the gradient of the research area based on the DEM elevation data and taking the gradient of the research area as gradient correction data; and acquiring product attribute values of the land cover product and reassigning the product attribute values to obtain land cover class values as spectrum correction data;
performing visual interpretation on the first ultra-high resolution remote sensing image to obtain a terrace training sample set, wherein the terrace training sample set comprises a training sample set with obvious terrace features, a training sample set with fuzzy terrace features, a non-terrace training sample set, and an easily-misclassified non-terrace ground-feature training sample set;
performing data preprocessing on the terrace training sample set to obtain a first terrace training sample set, and performing model training on a pre-constructed U-net++ deep learning model based on the first terrace training sample set to obtain a trained first U-net++ deep learning model;
performing supervised classification on the first ultra-high resolution remote sensing image based on the first U-net++ deep learning model to obtain a primary terrace distribution range with high spatial resolution;
correcting the primary terrace distribution range based on the gradient correction data and the spectrum correction data to obtain a terrace spatial distribution map with high spatial resolution;
the correction of the primary terrace distribution range based on the gradient correction data and the spectrum correction data specifically comprises the following steps:
acquiring a gradient value corresponding to each pixel in the primary terrace distribution range based on the gradient correction data; if the gradient value is not less than 2 degrees, the area where the pixel is located is regarded as a terrace area, and if the gradient value is less than 2 degrees, the area where the pixel is located is regarded as a non-terrace area;
splicing the land cover product of the research area with the first ultra-high resolution remote sensing image of the research area so as to associate the land cover product with the first ultra-high resolution remote sensing image, wherein the splicing adopts the Mosaic tool under the Raster Dataset category of the Raster module in the ArcGIS data management toolbox; acquiring the product attribute values of the land cover product, setting a product attribute value to 1 if it equals a preset data value and to 0 otherwise, taking the reassigned product attribute values as the land cover class values of the land cover product, and taking the land cover class values as the spectrum correction data, wherein the preset data values are 60, 80, 90, and 100;
and acquiring the land cover class value corresponding to each pixel in the primary terrace distribution range based on the spectrum correction data; if the land cover class value is not 1, the area where the pixel is located is regarded as a terrace area, and if the land cover class value is 1, the area where the pixel is located is modified to a non-terrace area.
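Outside the claim language, the two correction rules above reduce to a pair of per-pixel masks. The following is an illustrative NumPy sketch, not code from the patent; the array names and the mask convention (1 = terrace pixel) are assumptions.

```python
import numpy as np

def reassign_cover(attr):
    # Product attribute values 60/80/90/100 are reset to 1 (non-terrace
    # cover types); every other value is reset to 0, per the claim.
    return np.isin(attr, [60, 80, 90, 100]).astype(np.uint8)

def correct_terrace_map(terrace_mask, slope_deg, cover_value):
    # terrace_mask: 1 = preliminary terrace pixel, 0 = non-terrace
    # slope_deg:    per-pixel gradient derived from the DEM
    # cover_value:  reassigned land cover class value (0 or 1)
    corrected = terrace_mask.copy()
    corrected[slope_deg < 2.0] = 0   # gradient rule: below 2 degrees is non-terrace
    corrected[cover_value == 1] = 0  # spectrum rule: class value 1 is non-terrace
    return corrected
```

A pixel survives only where both rules agree with the U-net++ prediction, which is exactly the intersection the two correction steps describe.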
2. The method for automatically extracting a terrace based on deep learning as claimed in claim 1, wherein the visual interpretation of the first ultra-high resolution remote sensing image is performed to obtain a terrace training sample set, specifically comprising:
selecting a sample-area remote sensing image within the first ultra-high resolution remote sensing image; performing data vectorization on the sample-area remote sensing image to obtain a terrace vector data set; performing raster conversion on the terrace vector data set to obtain raster data; and performing image labeling on the sample-area remote sensing image based on the raster data to obtain a labeled sample-area remote sensing image;
performing region selection on the labeled sample-area remote sensing image based on preset feature selection criteria to obtain terrace region data of the remote sensing image, wherein the feature selection criteria comprise color and texture features, topographic features, and spatial distribution features;
and dividing the labeled sample-area remote sensing image based on the terrace region data to obtain the terrace training sample set, wherein the terrace training sample set comprises a training sample set with obvious terrace features, a training sample set with fuzzy terrace features, a non-terrace training sample set, and an easily-misclassified non-terrace ground-feature training sample set.
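The vector-to-raster labelling step in claim 2 can be illustrated with a minimal point-in-polygon rasterizer. This is a didactic sketch only; in practice a GIS conversion tool would perform this step, and the even-odd ray-casting test and pixel-centre sampling used here are assumptions.

```python
import numpy as np

def point_in_poly(x, y, verts):
    # Even-odd ray casting: count edge crossings of a horizontal ray.
    inside = False
    n = len(verts)
    for i in range(n):
        x1, y1 = verts[i]
        x2, y2 = verts[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def rasterize_polygon(verts, height, width):
    # Burn one interpreted terrace polygon into a label grid (1 = terrace).
    grid = np.zeros((height, width), np.uint8)
    for r in range(height):
        for c in range(width):
            if point_in_poly(c + 0.5, r + 0.5, verts):  # sample pixel centre
                grid[r, c] = 1
    return grid
```

Stacking such per-polygon grids over the sample-area image yields the labeled raster used to cut training tiles.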
3. The terrace automatic extraction method based on deep learning as claimed in claim 1, wherein the pixel preprocessing is performed on the low quality pixels of the ultra-high resolution remote sensing image to obtain a first ultra-high resolution remote sensing image, and the method specifically comprises:
performing mask processing on low-quality pixel regions in the ultra-high resolution remote sensing image based on a cloud-detection spectral-threshold algorithm, wherein the low-quality pixel regions comprise remote sensing image regions occluded by cloud and fog;
and selecting high-quality pixel regions in the ultra-high resolution remote sensing image to fill or splice the low-quality pixel regions to obtain the first ultra-high resolution remote sensing image, wherein the high-quality pixel regions comprise remote sensing image regions free of cloud and fog occlusion.
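Claim 3 does not fix the bands or thresholds of the cloud-detection spectral-threshold algorithm, so the sketch below is a hedged illustration: it flags bright pixels in a single band as cloud and fills them from a co-registered cloud-free acquisition. The single-band test and the 0.3 reflectance cut are assumptions, not values from the patent.

```python
import numpy as np

def mask_and_fill(primary, fallback, band, threshold=0.3):
    # Clouds are bright: reflectance above the threshold marks a
    # low-quality pixel, which is masked and then filled with the
    # value from a high-quality (cloud-free) acquisition.
    cloudy = band > threshold
    filled = primary.copy()
    filled[cloudy] = fallback[cloudy]
    return filled, cloudy
```

The returned mask also records which pixels were spliced, which is useful when auditing the first ultra-high resolution image.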
4. The method for automatically extracting a terrace based on deep learning according to claim 1, wherein before model training is performed on a pre-constructed U-net++ deep learning model based on the first terrace training sample set, the method further comprises:
setting network parameters of the pre-constructed U-net++ deep learning model, wherein the network parameters comprise an input image size, a batch size, a learning rate, a number of iterations, an objective function, a gradient descent strategy, a momentum, a decay rate, and an activation function.
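Claim 4 enumerates the parameter names but fixes no values, so a hypothetical configuration, with every value an assumption, might look like:

```python
# Every value below is illustrative; claim 4 only names the parameters.
unetpp_params = {
    "input_size": (256, 256, 3),         # input image picture size (H, W, bands)
    "batch_size": 16,
    "learning_rate": 1e-4,
    "iterations": 100,                   # number of iterations
    "objective": "binary_crossentropy",  # terrace vs. non-terrace objective
    "gradient_descent": "sgd",           # gradient descent strategy
    "momentum": 0.9,
    "decay_rate": 1e-6,
    "activation": "relu",
}
```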
5. The method for automatically extracting terraces based on deep learning according to claim 4, wherein the first ultra-high resolution remote sensing image is supervised-classified based on the first U-net++ deep learning model to obtain the primary terrace distribution range with high spatial resolution, specifically comprising:
acquiring the input image size set in the network parameters of the U-net++ deep learning model, and setting a sliding window according to the image size;
inputting the first ultra-high resolution remote sensing image into the U-net++ deep learning model window by window through the sliding window, so that the U-net++ deep learning model interprets the image patch input at each sliding-window position to obtain a plurality of interpretation results;
and stitching the plurality of interpretation results to obtain the primary terrace distribution range with high spatial resolution.
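The window-by-window interpretation and stitching of claim 5 reduces to tiling the scene with the model's input size. In this sketch `model_fn` stands in for the trained U-net++ (any callable returning a class map of the same shape as its input); non-overlapping windows are an assumption, since the claim does not specify a stride.

```python
import numpy as np

def sliding_window_classify(image, tile, model_fn):
    h, w = image.shape
    out = np.zeros((h, w), np.uint8)
    for r in range(0, h, tile):          # slide the window over the scene
        for c in range(0, w, tile):
            window = image[r:r + tile, c:c + tile]
            out[r:r + tile, c:c + tile] = model_fn(window)  # stitch result
    return out
```

Because each window writes into its own slice of `out`, the stitched map has exactly the extent of the input scene.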
6. The terrace automatic extraction method based on deep learning according to claim 1, wherein the data preprocessing is performed on the terrace training sample set to obtain the first terrace training sample set, specifically comprising:
obtaining the brightness, gray scale, and contrast of each terrace training sample in the terrace training sample set, and performing enhancement processing on the brightness, gray scale, and contrast to obtain an enhanced terrace training sample set;
and performing scaling adjustment and rotation adjustment on each terrace training sample in the enhanced terrace training sample set based on a preset scaling ratio and a preset rotation angle to obtain the first terrace training sample set.
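A minimal sketch of the radiometric and geometric augmentations in claim 6. The linear gain/bias stretch, integer zoom by pixel repetition, and 90-degree rotation steps are simplifying assumptions; the claim only requires some preset scaling ratio and rotation angle.

```python
import numpy as np

def enhance(sample, gain=1.2, bias=10.0):
    # Linear contrast (gain) and brightness (bias) stretch, clipped to 8-bit.
    return np.clip(sample.astype(np.float32) * gain + bias, 0, 255).astype(np.uint8)

def scale_and_rotate(sample, scale=2, quarter_turns=1):
    # Integer zoom by pixel repetition, then rotation in 90-degree steps.
    zoomed = np.kron(sample, np.ones((scale, scale), sample.dtype))
    return np.rot90(zoomed, quarter_turns)
```

Applying both transforms to each labeled tile multiplies the effective size of the training sample set without new interpretation work.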
7. A terrace automatic extraction device based on deep learning, characterized by comprising: a research area data acquisition module, a training sample set generation module, a model training module, a primary terrace distribution range acquisition module, and a terrace distribution range correction module;
the research area data acquisition module is used for acquiring an ultra-high resolution remote sensing image, DEM elevation data, and a land cover product of a research area; performing pixel preprocessing on low-quality pixels of the ultra-high resolution remote sensing image to obtain a first ultra-high resolution remote sensing image; calculating the gradient of the research area based on the DEM elevation data and taking the gradient as gradient correction data; and acquiring product attribute values of the land cover product and reassigning the product attribute values to obtain land cover class values as spectrum correction data;
the training sample set generation module is used for performing visual interpretation on the first ultra-high resolution remote sensing image to obtain a terrace training sample set, wherein the terrace training sample set comprises a training sample set with obvious terrace features, a training sample set with fuzzy terrace features, a non-terrace training sample set, and an easily-misclassified non-terrace ground-feature training sample set;
the model training module is used for performing data preprocessing on the terrace training sample set to obtain a first terrace training sample set, and performing model training on a pre-constructed U-net++ deep learning model based on the first terrace training sample set to obtain a trained first U-net++ deep learning model;
the primary terrace distribution range acquisition module is used for performing supervised classification on the first ultra-high resolution remote sensing image based on the first U-net++ deep learning model to obtain a primary terrace distribution range with high spatial resolution;
the terrace distribution range correction module is used for correcting the primary terrace distribution range based on the gradient correction data and the spectrum correction data to obtain a terrace spatial distribution map with high spatial resolution;
the correction of the primary terrace distribution range based on the gradient correction data and the spectrum correction data specifically comprises the following steps:
acquiring a gradient value corresponding to each pixel in the primary terrace distribution range based on the gradient correction data; if the gradient value is not less than 2 degrees, the area where the pixel is located is regarded as a terrace area, and if the gradient value is less than 2 degrees, the area where the pixel is located is regarded as a non-terrace area;
splicing the land cover product of the research area with the first ultra-high resolution remote sensing image of the research area so as to associate the land cover product with the first ultra-high resolution remote sensing image, wherein the splicing adopts the Mosaic tool under the Raster Dataset category of the Raster module in the ArcGIS data management toolbox; acquiring the product attribute values of the land cover product, setting a product attribute value to 1 if it equals a preset data value and to 0 otherwise, taking the reassigned product attribute values as the land cover class values of the land cover product, and taking the land cover class values as the spectrum correction data, wherein the preset data values are 60, 80, 90, and 100;
and acquiring the land cover class value corresponding to each pixel in the primary terrace distribution range based on the spectrum correction data; if the land cover class value is not 1, the area where the pixel is located is regarded as a terrace area, and if the land cover class value is 1, the area where the pixel is located is modified to a non-terrace area.
8. A terminal device comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the deep learning-based terrace automatic extraction method according to any one of claims 1 to 6 when the computer program is executed.
9. A computer readable storage medium, characterized in that the computer readable storage medium comprises a stored computer program, wherein the computer program when run controls a device in which the computer readable storage medium is located to perform the deep learning based terrace automatic extraction method according to any one of claims 1 to 6.
CN202310128146.3A 2023-02-07 2023-02-07 Terrace automatic extraction method and device based on deep learning Active CN116310800B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310128146.3A CN116310800B (en) 2023-02-07 2023-02-07 Terrace automatic extraction method and device based on deep learning


Publications (2)

Publication Number Publication Date
CN116310800A CN116310800A (en) 2023-06-23
CN116310800B true CN116310800B (en) 2023-08-29

Family

ID=86837008

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310128146.3A Active CN116310800B (en) 2023-02-07 2023-02-07 Terrace automatic extraction method and device based on deep learning

Country Status (1)

Country Link
CN (1) CN116310800B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117315471B (en) * 2023-09-26 2024-07-05 中国水利水电科学研究院 Terrace identification method based on remote sensing image and machine learning
CN118115889B (en) * 2024-02-26 2024-08-27 华中师范大学 Terrace automatic extraction method and terrace automatic extraction system based on deep learning semantic segmentation model

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104899897A (en) * 2015-05-27 2015-09-09 中国科学院地理科学与资源研究所 High-resolution remote-sensing image land cover change detection method based on history data mining
CN113160237A (en) * 2021-03-02 2021-07-23 中国科学院地理科学与资源研究所 Method for drawing earth cover
CN115147727A (en) * 2022-06-02 2022-10-04 中山大学 Method and system for extracting impervious surface of remote sensing image


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Study on the abandonment degree and spatial pattern differentiation of terraces in China; Dong Shijie; Acta Geographica Sinica; pp. 1-15 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant