CN111914611B - Urban green space high-resolution remote sensing monitoring method and system - Google Patents

Urban green space high-resolution remote sensing monitoring method and system

Info

Publication number
CN111914611B
CN111914611B (application number CN202010386282.9A)
Authority
CN
China
Prior art keywords
image
remote sensing
net
model
urban green
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010386282.9A
Other languages
Chinese (zh)
Other versions
CN111914611A (en)
Inventor
周艺
王丽涛
王世新
朱金峰
刘文亮
徐知宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aerospace Information Research Institute of CAS
Original Assignee
Aerospace Information Research Institute of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aerospace Information Research Institute of CAS filed Critical Aerospace Information Research Institute of CAS
Priority to CN202010386282.9A priority Critical patent/CN111914611B/en
Publication of CN111914611A publication Critical patent/CN111914611A/en
Application granted granted Critical
Publication of CN111914611B publication Critical patent/CN111914611B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/188Vegetation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A30/00Adapting or protecting infrastructure or their operation
    • Y02A30/60Planning or developing urban green infrastructure

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an urban green space high-resolution remote sensing monitoring method and system. The method comprises a training sample set construction step, a multi-dimensional feature space construction step, a U-Net + model construction step and an image post-processing step. By constructing the multi-dimensional feature space to enhance feature richness, constructing a U-Net + deep learning model and combining an image post-processing method, the generalization and robustness of the monitoring method are improved and the overfitting problem that easily arises with limited training samples is overcome, thereby improving the precision and timeliness of urban green space high-resolution remote sensing monitoring.

Description

Urban green space high-resolution remote sensing monitoring method and system
Technical Field
The invention relates to the technical field of remote sensing monitoring, in particular to an urban green space high-resolution remote sensing monitoring method and system.
Background
Urban green space plays a very important role in the urban ecosystem and is closely related to human health, residents' quality of life, biodiversity and social safety. Its distribution is heterogeneous and highly dispersed, and how to accurately quantify its spatio-temporal pattern is a hot topic of current research and is very important for urban green space planning and management.
At present there is a great deal of research on the classification and extraction of urban green space, and with the development of remote sensing technology more and more high-resolution remote sensing images, such as SPOT, IKONOS, QuickBird, WorldView and the Gaofen series, are applied to urban green space mapping and change analysis. High-resolution remote sensing images contain abundant ground object information, with distinct spectral characteristics, clearer geometric characteristics and richer texture details; they can identify urban street trees and residential gardens at the sub-meter scale and provide more precise urban green space characteristics.
There are many remote sensing classification methods; traditional remote sensing image classification is mainly pixel-based or object-oriented. Pixel-based classification relies mainly on the spectral information of individual pixels, while high-resolution remote sensing images have high spatial resolution, relatively poor spectral information and relatively rich shape, texture and other features, so this information plays an important role in their classification. Object-oriented classification integrates the spectral, geometric and texture features of ground objects, with an experimental workflow of segmentation, feature selection and classification. The object-oriented method has no fixed segmentation parameters: if the segmentation scale is too large, many mixed pixels appear; if it is too small, shape information is lost; and the parameters must be determined through repeated experiments in the segmentation stage, which undoubtedly adds a great deal of work to the classification process. Traditional machine learning methods such as support vector machines and decision trees cannot fully learn complex features, cannot handle the large data volumes of high-resolution image samples, and their classification results cannot meet the requirements. Although traditional urban green space remote sensing classification and extraction methods are mature, their limitations require manual selection of optimal segmentation parameters and object features, and their precision is relatively low.
Current deep learning methods show strong potential for high-resolution remote sensing image classification, and the U-Net model in particular has begun to distinguish itself in remote sensing image classification applications. Deep learning is widely applied to tasks such as image recognition, target detection and image classification, with good results. Numerous experiments and studies on deep learning neural network models for remote sensing image classification, including the deep belief network (DBN), the stacked autoencoder (SAE) and the convolutional neural network (CNN), show that the convolutional neural network excels in fields such as image classification and segmentation and is one of the most widely applied deep learning models. However, current research has the following problems:
(1) Existing deep learning models have many shortcomings, for example in model structure, generalization and robustness, and the computation of the loss function; in particular, small image data sets can cause overfitting, poor robustness and poor generalization.
(2) Feature learning richness is insufficient. The Gaofen-2 remote sensing image has a limited number of bands and relatively poor spectral information, which limits to some extent the richness of features available for deep learning.
(3) Model misclassification occurs. Classified images usually contain small misclassified regions, and object boundaries are slightly smoothed, so an image post-processing method is needed to optimize the classification result and obtain a result closer to the real ground surface.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides an urban green space high-resolution remote sensing monitoring method that improves the generalization and robustness of the monitoring method by constructing a multi-dimensional feature space to enhance feature richness, constructing a U-Net + deep learning model, and combining an image post-processing method, and that overcomes the overfitting problem which easily arises when training samples are limited, thereby improving the precision and timeliness of urban green space high-resolution remote sensing monitoring. The invention also relates to an urban green space high-resolution remote sensing monitoring system.
The technical scheme of the invention is as follows:
An urban green space high-resolution remote sensing monitoring method, characterized by comprising the following steps:
a training sample set construction step, namely selecting a sample area aiming at the characteristics of the high-resolution remote sensing image, and constructing a training sample set;
a multi-dimensional feature space construction step, wherein the training sample data set is subjected to data enhancement, random cropping and feature calculation, and a multi-dimensional feature space comprising vegetation features, spatial features, contrast features, texture features and phenological features is constructed; a normalized vegetation index is constructed as the vegetation feature of the urban green space, an nDSM (normalized digital surface model) as the spatial feature, a local contrast feature map calculated by the AC algorithm as the contrast feature, an image texture feature map obtained from the gray-level co-occurrence matrix as the texture feature, and a contemporaneous winter image is added as the phenological feature of the urban green space;
a U-Net + model construction step, wherein, based on the constructed multi-dimensional feature space and oriented to urban green space characteristics, the U-Net model is improved by sequentially applying an image edge zero-padding improvement, a batch normalization improvement and a regularization improvement, and a multi-dimensional-feature U-Net + deep learning model for urban green space is established; the multi-dimensional feature data of the training samples are added to the U-Net + deep learning model for training, and after training the urban green space spatial distribution is predicted to obtain a U-Net + model prediction result;
and an image post-processing step, namely performing post-processing on the prediction result of the U-Net + model to obtain the high-resolution remote sensing monitoring result of the urban green land.
Preferably, in the training sample set construction step, according to the characteristics of the high-resolution remote sensing image, a set of typical areas is used as the sample area, samples are extracted from the high-resolution remote sensing image by visual interpretation, the vector file obtained by visual interpretation is converted from features to raster to obtain a label image, the real ground surface at the positions of different vegetation types in the remote sensing image is recorded, and a training sample set is constructed; in the multi-dimensional feature space construction step, image processing is first performed with a data enhancement method to highlight effective spectral information, then the label image and the data-enhanced remote sensing image are cropped with a random cropping method, and feature calculation is performed on the cropped label image and remote sensing image to construct the multi-dimensional feature space.
Preferably, the batch normalization improvement in the U-Net + model construction step adds batch normalization after each model convolution layer and feeds the normalized data into the next layer; the regularization improvement adds a dropout layer, which discards neurons with a specific probability, after each deconvolution of the model.
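The batch normalization inserted after each convolution layer can be illustrated in isolation. The following is a minimal sketch of the normalization step only, on a flat list of activations; the function name and eps value are illustrative assumptions, and in practice frameworks also apply a learned scale and shift (gamma, beta) and operate per channel on 4-D tensors:

```python
import math

def batch_normalize(activations, eps=1e-5):
    """Normalize a batch of activations to zero mean and unit variance,
    as inserted after each convolution layer in the U-Net+ model."""
    mean = sum(activations) / len(activations)
    var = sum((a - mean) ** 2 for a in activations) / len(activations)
    # eps guards against division by zero for a constant batch
    return [(a - mean) / math.sqrt(var + eps) for a in activations]
```

Libraries such as PyTorch provide this (with the learned affine parameters) as `nn.BatchNorm2d`.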
Preferably, in the U-Net + model construction step, the multi-dimensional feature data of the training samples are added to the U-Net + deep learning model for training: the remote sensing image, the multi-dimensional feature data and the corresponding label image of the training samples are input into the U-Net + deep learning model, which extracts features of the input image in the encoding part, restores spatial position and resolution in the decoding part, and classifies each pixel in the pixel classification layer to obtain category information; the loss value between the predicted classification map and the input label map is calculated with a cross-entropy function and back-propagated through the U-Net + deep learning model, the parameters of the model are optimized layer by layer, and training stops when the loss value reaches a certain threshold.
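The cross-entropy loss used to compare the predicted classification map with the label map can be sketched per pixel. This is an illustrative implementation, not the patent's code: names are assumptions, and a real model computes this over 2-D probability maps (one softmax vector per pixel) rather than flat lists:

```python
import math

def pixel_cross_entropy(pred_probs, labels):
    """Mean cross-entropy loss over pixels.
    pred_probs: one per-class probability vector (summing to 1) per pixel.
    labels: the true class index for each pixel."""
    # negative log-likelihood of the true class, averaged over pixels
    total = -sum(math.log(p[y]) for p, y in zip(pred_probs, labels))
    return total / len(labels)
```

A confident correct prediction gives a loss near zero; a uniform prediction over C classes gives log(C), so the loss falling below a threshold indicates the model has learned the label map.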
Preferably, in the image post-processing step, a fully-connected CRFs image post-processing method is used for post-processing the U-Net + model prediction result, and the classification result is processed by combining the relation among all pixels in the original high-resolution remote sensing image, so that the prediction result is optimized, and the high-resolution remote sensing monitoring result of the urban green space is obtained.
An urban green space high-resolution remote sensing monitoring system is characterized by comprising a training sample set construction module, a multi-dimensional characteristic space construction module, a U-Net + model construction module and an image post-processing module which are connected in sequence, wherein the training sample set construction module is connected with the U-Net + model construction module,
the training sample set construction module selects a sample area according to the characteristics of the high-resolution remote sensing image and constructs a training sample data set;
the multi-dimensional feature space construction module performs data enhancement, random cropping and feature calculation on the training sample data set of the training sample set construction module to construct a multi-dimensional feature space comprising vegetation features, spatial features, contrast features, texture features and phenological features; a normalized vegetation index is constructed as the vegetation feature of the urban green space, an nDSM (normalized digital surface model) as the spatial feature, a local contrast feature map calculated by the AC algorithm as the contrast feature, an image texture feature map obtained from the gray-level co-occurrence matrix as the texture feature, and a contemporaneous winter image is added as the phenological feature of the urban green space;
the U-Net + model construction module, based on the multi-dimensional feature space constructed by the multi-dimensional feature space construction module and oriented to urban green space characteristics, improves the U-Net model by sequentially applying an image edge zero-padding improvement, a batch normalization improvement and a regularization improvement, and establishes a multi-dimensional-feature U-Net + deep learning model for urban green space; the multi-dimensional feature data of the training samples are added to the U-Net + deep learning model for training, and after training the urban green space distribution is predicted to obtain a U-Net + model prediction result;
and the image post-processing module is used for post-processing the prediction result of the U-Net + model to obtain the high-resolution remote sensing monitoring result of the urban green land.
Preferably, the training sample set construction module, according to the characteristics of the high-resolution remote sensing image, takes a set of typical areas as the sample area, extracts samples from the high-resolution remote sensing image by visual interpretation, converts the vector file obtained by visual interpretation from features to raster to obtain a label image, records the real ground surface at the positions of different vegetation types in the remote sensing image, and constructs a training sample set; the multi-dimensional feature space construction module first performs image processing with a data enhancement method to highlight effective spectral information, then crops the label image and the data-enhanced remote sensing image with a random cropping method, and performs feature calculation on the cropped label image and remote sensing image to construct the multi-dimensional feature space.
Preferably, the batch normalization improvement in the U-Net + model construction module adds batch normalization after each model convolution layer and feeds the normalized data into the next layer; the regularization improvement adds a dropout layer, which discards neurons with a specific probability, after each deconvolution of the model.
Preferably, the U-Net + model construction module adds the multi-dimensional feature data of the training samples to the U-Net + deep learning model for training: the remote sensing image, the multi-dimensional feature data and the corresponding label image of the training samples are input into the U-Net + deep learning model, which extracts features of the input image in the encoding part, restores spatial position and resolution in the decoding part, and classifies each pixel in the pixel classification layer to obtain category information; the loss value between the predicted classification map and the input label map is calculated with a cross-entropy function and back-propagated through the U-Net + deep learning model, the parameters of the model are optimized layer by layer, and training stops when the loss value reaches a certain threshold.
Preferably, the image post-processing module post-processes the U-Net + model prediction result with a fully-connected CRFs image post-processing method, refining the classification result using the relations among all pixels of the original high-resolution remote sensing image, so that the prediction result is optimized and the urban green space high-resolution remote sensing monitoring result is obtained.
The invention has the following technical effects:
the invention relates to a high-resolution remote sensing monitoring method for urban green lands, which comprises the steps of obtaining sufficient training sample data sets in a sample area, processing the training sample data sets by adopting a method combining data enhancement, random cutting and feature calculation, and then constructing a specific multidimensional feature space, so that the overfitting problem is avoided while a network model is generalized; by constructing a multi-dimensional feature space of five types of remote sensing image features including vegetation features, spatial features, contrast features, textural features and phenological features, enhancing feature richness, particularly combining phenological features on the basis of the first four features, withering of leaf trees in winter, adding winter images into a deep learning model by adopting high-resolution remote sensing images and combining a phenological theory of vegetation to distinguish evergreen trees and leaf trees, optimizing classification results of single summer images and providing a basis for accuracy of follow-up urban greenbelt high-resolution remote sensing monitoring. 
Based on the constructed multi-dimensional feature space and oriented to urban green space characteristics, the U-Net model is improved by sequentially applying an image edge zero-padding improvement, a batch normalization improvement and a regularization improvement, and a U-Net + deep learning model for urban green space is established. The concept of a multi-dimensional-feature U-Net + deep learning model is proposed, the U-Net model is optimized, and a series of specific improvements give the U-Net + deep learning model high stability. The multi-dimensional feature data of the training samples are added to the U-Net + deep learning model for training, realizing both sample training and model training; after training, the urban green space spatial distribution is predicted to obtain the U-Net + model prediction result, and the urban green space high-resolution remote sensing monitoring result is then obtained by combining an image post-processing method. The generalization and robustness of the monitoring method are thereby improved and the overfitting problem that easily arises when training samples are limited is overcome, improving the precision and timeliness of urban green space high-resolution remote sensing monitoring.
The invention also relates to an urban green land high-resolution remote sensing monitoring system, which corresponds to the urban green land high-resolution remote sensing monitoring method and can be understood as a system for realizing the urban green land high-resolution remote sensing monitoring method.
Drawings
FIG. 1 is a flow chart of the urban green space high-resolution remote sensing monitoring method.
FIG. 2 is a preferred flow chart of the urban green space high-resolution remote sensing monitoring method.
FIG. 3 is a partial image of a data set and corresponding label images in the training sample set construction step.
FIGS. 4a and 4b are comparative diagrams of the image edge zero filling improvement mode in the U-Net + model construction step.
FIGS. 5a and 5b are comparative diagrams of regularization improvement modes in the U-Net + model construction step.
FIG. 6 is a structural diagram of the U-Net + deep learning model based on multi-dimensional features according to the present invention.
FIG. 7 is a flowchart of training a U-Net + deep learning model for multi-dimensional features.
FIG. 8 is a comparison of the U-Net + model prediction of the present invention with the real ground surface.
Fig. 9 is a structural diagram of the urban green space high-resolution remote sensing monitoring system.
Detailed Description
The present invention will be described with reference to the accompanying drawings.
The invention relates to an urban green space high-resolution remote sensing monitoring method, the flow of which is shown in FIG. 1, comprising the following steps.

A training sample set construction step: a sample area is selected according to the characteristics of the high-resolution remote sensing image, and a training sample data set is constructed in the sample area.

A multi-dimensional feature space construction step: data enhancement, random cropping and feature calculation are performed on the training sample data set (that is, feature calculation is performed on the data set after data enhancement and random cropping), and a multi-dimensional feature space of five feature types is constructed: vegetation features, spatial features, contrast features, texture features and phenological features. A normalized vegetation index is constructed as the vegetation feature of the urban green space, an nDSM (normalized digital surface model) as the spatial feature, a local contrast feature map calculated by the AC algorithm as the contrast feature, an image texture feature map obtained from the gray-level co-occurrence matrix as the texture feature, and a contemporaneous winter image is added as the phenological feature.

A U-Net + model construction step: based on the constructed multi-dimensional feature space and oriented to urban green space characteristics, the U-Net model is improved by sequentially applying an image edge zero-padding improvement, a batch normalization improvement and a regularization improvement, and a multi-dimensional-feature U-Net + deep learning model for urban green space is established; the multi-dimensional feature data of the training samples are added to the U-Net + deep learning model for training, and after training the urban green space distribution is predicted to obtain the U-Net + model prediction result.

An image post-processing step: the U-Net + model prediction result is post-processed to obtain the urban green space high-resolution remote sensing monitoring result.

By constructing the multi-dimensional feature space to enhance feature richness, constructing the U-Net + deep learning model for sample and model training, and combining the image post-processing method to optimize the prediction result, the method improves the generalization and robustness of the monitoring, overcomes the overfitting problem that easily arises when training samples are limited, and thereby improves the precision and timeliness of urban green space high-resolution remote sensing monitoring.
FIG. 2 is a preferred flow chart of the urban green space high-resolution remote sensing monitoring method. Preferably:
1. a training sample set construction step:
According to the characteristics of the high-resolution remote sensing image, a set of typical areas is taken as the sample area, samples are extracted from the high-resolution remote sensing image by visual interpretation, the vector file obtained by visual interpretation is converted from features to raster to obtain a label image, the real ground surface at the positions of different vegetation types in the remote sensing image is recorded, and a sufficient training sample data set is obtained.
As shown in FIG. 2, after the high-resolution remote sensing data is input, sample area selection and target interpretation are carried out. First, according to the spatial form, organization and vegetation types of the urban green space, several typical areas (which can also be called typical sample areas) are selected as sample areas, for example several typical areas in a park, several in a golf course, several in a residential area, and so on. Then, samples are extracted from the high-resolution remote sensing image by visual interpretation, and the vector file obtained by visual interpretation is converted from features to raster to obtain the label image, i.e. the real ground surface at the positions of different vegetation types in the remote sensing image is recorded. Part of the data set imagery and the corresponding label images are shown in FIG. 3, with labels for deciduous trees, evergreen trees, grassland and non-vegetated areas.
2. And (3) multi-dimensional feature space construction:
First, the images are processed with a data enhancement method to highlight effective spectral information; then the label image and the data-enhanced remote sensing image are cropped with a random cropping method, and feature calculation is performed on the cropped label image and remote sensing image to construct the multi-dimensional feature space. Data enhancement and random cropping of the training sample data set make the data set sufficient, improve the efficiency of constructing the training sample set, generalize the network model while avoiding overfitting, and provide the basic premise for constructing the multi-dimensional feature space data.
Firstly, image processing is carried out with a data enhancement method, highlighting effective spectral information and improving convergence efficiency in the deep learning process. The invention adopts the min-max standardization method, also known as dispersion standardization, which rescales the pixel values of each channel from the interval [0, 255] to [0, 1]:

x' = (x − x_min) / (x_max − x_min)   (1)

where x_min represents the minimum value of the sample data and x_max represents the maximum value of the sample data.
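The min-max rescaling above can be sketched per channel as follows; this is an illustrative NumPy implementation, not the patent's own code, and it uses each band's own minimum and maximum as the sample statistics.

```python
import numpy as np

def min_max_normalize(image):
    """Rescale each channel of an (H, W, C) image to [0, 1].

    Per-channel min-max (dispersion) normalization:
    x' = (x - x_min) / (x_max - x_min).
    """
    out = np.empty(image.shape, dtype=np.float32)
    for c in range(image.shape[2]):
        band = image[:, :, c].astype(np.float32)
        x_min, x_max = band.min(), band.max()
        out[:, :, c] = (band - x_min) / (x_max - x_min)
    return out
```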
Secondly, the number of training samples is increased with a random cropping method to meet the sample-size requirements of deep learning. A crop size is set (e.g. 256 × 256), and the crop start positions are generated with a random function according to the configured number of samples. Sample patches are then cut from the visually interpreted sample area and the high-resolution remote sensing image according to each start position and the crop size, greatly increasing the volume of sample data.
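A minimal sketch of this random-cropping step, assuming aligned image and label arrays; the function name and parameters are illustrative, not from the patent.

```python
import numpy as np

def random_crops(image, label, crop_size=256, n_crops=100, seed=0):
    """Cut n_crops aligned patches of crop_size x crop_size from an
    (H, W, C) image and its (H, W) label map.

    Start points are drawn with a random function so one interpreted
    scene yields many training samples.
    """
    rng = np.random.default_rng(seed)
    h, w = label.shape
    patches = []
    for _ in range(n_crops):
        top = int(rng.integers(0, h - crop_size + 1))
        left = int(rng.integers(0, w - crop_size + 1))
        patches.append((image[top:top + crop_size, left:left + crop_size],
                        label[top:top + crop_size, left:left + crop_size]))
    return patches
```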
Then, feature calculation is performed on the training sample data set after data enhancement and random cropping to construct a multi-dimensional feature space comprising five types of features: vegetation, spatial, contrast, texture and phenological features. Because high-resolution remote sensing images have few bands and therefore offer the model limited feature richness to learn from, five types of remote sensing image features are constructed from five different dimensions (spectrum, space, contrast, texture and phenology) to enrich the information available to deep learning.
Firstly, a normalized vegetation index is constructed as the vegetation feature of the urban green space; secondly, an nDSM digital surface model is constructed as the spatial feature; thirdly, a local contrast feature map is calculated with the AC algorithm as the contrast feature; fourthly, an image texture feature map is obtained through the gray-level co-occurrence matrix as the texture feature; fifthly, a contemporaneous winter image is added as the phenological feature of the urban green space.
(1) Features of vegetation
The normalized difference vegetation index is selected as the vegetation feature. It is an important index for measuring vegetation coverage and detecting plant growth conditions; it integrates the relevant spectral information, highlights vegetation in the image and suppresses non-vegetation information.
NDVI = (NIR − R) / (NIR + R)   (2)

where NIR is the value of the near-infrared band and R is the value of the red band. When NDVI < 0, the area may contain water, snow or clouds; when NDVI > 0, the area is likely covered with vegetation; when NDVI = 0, the area is most likely covered with rocks, bare land or similar surfaces.
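Formula (2) can be sketched directly with NumPy; the small epsilon guard against division by zero is an implementation detail added here, not part of the patent's formula.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - R) / (NIR + R).

    nir and red are arrays of the near-infrared and red band values;
    eps guards against division by zero over all-dark pixels.
    """
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)
```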
(2) Spatial characteristics
The present invention selects the nDSM as the spatial feature of the urban green space. The nDSM is a data model that removes the influence of terrain and records the height of all ground objects relative to the ground. A Digital Surface Model (DSM) is an elevation model that includes the heights of surface buildings, bridges, trees and other objects. A Digital Elevation Model (DEM) is a spatial data model describing the morphology of the bare terrain. Subtracting the DEM from the DSM, which contains both object heights and terrain relief, yields the nDSM, a data model containing only object heights. The calculation formula is as follows:
nDSM(i,j)=DSM(i,j)-DEM(i,j) (3)
where nDSM(i, j), DSM(i, j) and DEM(i, j) represent the elevation values of the nDSM, DSM and DEM at row i, column j, respectively.
(3) Contrast characteristics
The AC algorithm is a feature extraction algorithm based on local contrast. Its basic idea is to convert the image from the RGB color space to the Lab color space, compute local contrast values over windows of different sizes for each sensing unit in the image, and sum the local contrast values obtained under the different window sizes; traversing the whole image in this way yields the final image feature map.
The sensing unit R1 may be a pixel or a pixel block; its neighborhood is R2, and the mean of the feature values of all pixels contained in R1 (or R2) is used as the feature value of R1 (or R2). Keeping R1 fixed while varying the size of R2 realizes the multi-scale saliency calculation. Let pixel p be the center of R1 and R2; the local contrast at the position of p is:

c(p) = ‖ (1/N1) Σ_{k∈R1} v_k − (1/N2) Σ_{k∈R2} v_k ‖   (4)

where N1 and N2 are the numbers of pixels in R1 and R2, respectively, and v_k is the feature value (or feature vector) at position k. Since the AC method uses the Lab color space, the feature distance is computed as the Euclidean distance. By default R1 is a single pixel, and the side length of R2 ranges over [L/8, L/2], where L is the smaller of the image length and width. The feature saliency maps at the multiple scales are added directly to obtain the complete saliency map.
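A brute-force sketch of the multi-scale local-contrast computation, under the assumption that the image is already in Lab space (the RGB-to-Lab conversion is omitted) and with R1 taken as a single pixel; window sizes and loop structure are illustrative, and the naive per-pixel loops are only practical for small images.

```python
import numpy as np

def ac_contrast(lab, scales=(2, 4, 8)):
    """Multi-scale local-contrast (AC-style) saliency sketch.

    lab: (H, W, 3) float image in Lab space. For each scale s, R2 is a
    square neighborhood with half-size derived from L = min(H, W)
    divided by s, so the window sides are roughly L/2, L/4 and L/8.
    The per-scale contrast is the Euclidean distance between the pixel
    vector (R1) and the mean feature vector of R2; the scale maps are
    summed into one saliency map.
    """
    h, w, _ = lab.shape
    L = min(h, w)
    saliency = np.zeros((h, w))
    for s in scales:
        r = max(L // (2 * s), 1)  # half side length of R2
        for i in range(h):
            for j in range(w):
                win = lab[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
                mean = win.reshape(-1, 3).mean(axis=0)
                saliency[i, j] += np.linalg.norm(lab[i, j] - mean)
    return saliency
```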
(4) Texture features
Texture features capture both the macroscopic properties and the fine structure of an image and are important features for ground-object classification and extraction. There are many texture extraction methods, such as features based on local statistics, random-field models, spatial frequency and fractals; the most widely applied is based on the gray-level co-occurrence matrix.
The gray-level co-occurrence matrix, proposed in 1973, is a widely recognized texture analysis method. The co-occurrence matrix is a function of distance and direction: it describes the probability that a pair of pixels separated by distance d in direction θ have gray values i and j, respectively, with elements denoted P(i, j | d, θ). Each element of the gray-level co-occurrence matrix is calculated as:

P(i, j | d, θ) = #{[(k, l), (m, n)] : (k, l), (m, n) ∈ Zx × Zy separated by d in direction θ, f(k, l) = i, f(m, n) = j}   (5)
assume that the image to be analyzed has Nx and Ny pixels in each of the horizontal and vertical directions. Let Zx = {1,2, \8230;, nx } be the horizontal spatial domain, and Zy = {1,2, \8230;, ny } be the vertical spatial domain. When the distance is 1, the formula when θ is 0 degrees, 45 degrees, 90 degrees, 135 degrees respectively is:
Figure RE-GDA0002699480330000092
Figure RE-GDA0002699480330000093
Figure RE-GDA0002699480330000094
Figure RE-GDA0002699480330000095
where (k, l) and (m, n) are pixel coordinates within the selected calculation window, and # denotes the number of pixel pairs for which the condition in braces holds.
To compute the feature values, co-occurrence matrices are constructed in the four directions according to formulas (6)-(9), and then the energy, entropy, inertia moment and correlation texture parameters of the four co-occurrence matrices are computed to obtain the texture features of the image.
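The GLCM construction and the four texture statistics can be sketched in plain NumPy as below; this is an illustrative implementation (symmetric, normalized co-occurrence counts), not the patent's code, and the offsets (dx, dy) correspond to the four directions at distance 1.

```python
import numpy as np

def glcm(img, dx, dy, levels=8):
    """Symmetric, normalized gray-level co-occurrence matrix for one offset.

    img: 2-D integer array with values in [0, levels). Offsets such as
    (dx, dy) = (1, 0), (1, -1), (0, -1), (-1, -1) correspond to
    theta = 0, 45, 90 and 135 degrees at distance d = 1.
    """
    p = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                p[img[y, x], img[y2, x2]] += 1
                p[img[y2, x2], img[y, x]] += 1  # count symmetrically
    return p / p.sum()

def haralick_features(p):
    """Energy, entropy, inertia moment and correlation of a normalized GLCM."""
    i, j = np.indices(p.shape)
    energy = (p ** 2).sum()
    entropy = -(p[p > 0] * np.log(p[p > 0])).sum()
    inertia = ((i - j) ** 2 * p).sum()
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    correlation = ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j)
    return energy, entropy, inertia, correlation
```

Applying the two functions to a checkerboard, for example, gives maximal inertia and correlation of −1 in the horizontal direction.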
(5) Characteristic of phenological phenomena
The method adopts high-resolution, multi-temporal remote sensing images. Combining the phenological theory of vegetation, winter images are added to the deep learning model to distinguish evergreen from deciduous trees and to refine the classification result obtained from a single summer image.
3. U-Net+ model construction step:
based on the constructed multi-dimensional feature space and oriented to the urban green land features, the U-Net model is improved by sequentially utilizing an image edge zero filling improvement mode, a batch normalization processing improvement mode and a regularization improvement mode, and the U-Net + deep learning model oriented to the urban green land and based on the multi-dimensional features is established.
An image edge padding strategy is adopted to keep the sizes of the network input and output images consistent. To improve the generalization and robustness of the model, Dropout regularization and BN layers are added to the U-Net model, which avoids overfitting while accelerating training. To eliminate the influence of low prediction accuracy at image edges on the loss computation, the cross-entropy function is improved to raise prediction accuracy.
Meanwhile, for the first four feature types, the high-resolution remote sensing image (true color/false color) is stacked band-wise with the pre-constructed vegetation, spatial, contrast and texture features to obtain a multi-feature high-resolution image with 4 channels. For the phenological feature, the spectral difference between deciduous vegetation in its winter dormant period and summer vegetation is pronounced, so a second high-resolution winter image is added to the model in line with vegetation phenology theory; this removes winter deciduous trees and corrects the summer classification result. The prepared training set is input into the U-Net+ deep learning model for training to obtain the optimal parameter combination, and expansion prediction is adopted during model prediction to obtain the classification result. The classification maps obtained from the vegetation, spatial, contrast, texture and phenological features are fused by voting, and the best prediction result for urban green space, such as the central urban area of Beijing, is then selected.
The U-Net model is an improved FCN structure and one of the most extensible fully convolutional neural networks available. Its structure clearly presents the letter U, consisting of a contracting path on the left half and an expanding path on the right half. U-Net neatly combines an encoder-decoder structure with skip connections. The contracting path is the encoder, extracting image features layer by layer with a structure of 2 convolutional layers plus 1 max-pooling layer; the dimension of the feature map doubles after each pooling operation. The expanding path is the decoder, recovering the positional information of the image: it first applies one deconvolution operation to halve the feature map dimension, then concatenates the cropped feature map from the corresponding stage of the contracting path to again form a feature map of twice the dimension, applies 2 convolutional layers for feature extraction, and repeats this structure. At the final output layer, the 64-dimensional feature map is mapped into a 2-dimensional output map using 2 convolutional layers. The invention improves this original network structure and establishes the U-Net+ deep learning model for urban green space based on multi-dimensional features.
(1) Image edge zero padding improvement
To keep the input and output image sizes consistent, an image edge zero-padding strategy is adopted, as shown in the comparison of fig. 4a and 4b: fig. 4a shows the valid mode, i.e. without zero padding; fig. 4b shows the same mode, i.e. with zero padding.
(2) Batch normalization process improvement
In deep learning, the parameters are continuously updated during training, so for every layer after the input layer, parameter updates shift the distribution of that layer's input data; this easily causes gradient diffusion and also slows training. Because the network has many training parameters, to prevent vanishing and exploding gradients the invention adopts Batch Normalization (BN): a batch normalization step is added after each convolutional layer, and the normalized data is fed into the next layer. This prevents vanishing and exploding gradients, improves training speed, and better couples the output of the upper network to the upsampling result of the last layer. The normalization layer is a learnable network layer with parameters, as shown in the following formula.
y^(k) = γ^(k) · x̂^(k) + β^(k)

where x̂^(k) is the standardized result,

x̂^(k) = (x^(k) − E[x^(k)]) / √(Var[x^(k)])

and γ^(k), β^(k) are learnable parameters; with γ^(k) = √(Var[x^(k)]) and β^(k) = E[x^(k)] the layer reproduces the identity transform.
(3) Dropout layer improvements
Dropout is a regularization method in deep learning: during network training, neurons are temporarily discarded with a certain probability, preventing the model from overfitting a small number of samples. Adding a Dropout layer after each deconvolution of the model prevents overfitting to a certain extent, as shown in the comparison of fig. 5a and 5b: fig. 5a is the original network without Dropout layers, and fig. 5b is the network with Dropout layers added.
The structure of the improved U-Net+ deep learning model for urban green space based on multi-dimensional features is shown in fig. 6. The input image size of the U-Net+ model is 256 × 256 × c (c is the total number of image bands after the multi-source data are added). First, one 3 × 3 convolution plus BN operation (horizontal arrows) converts the input data into a 32-dimensional feature map; then the structure of 1 max-pooling layer followed by 2 convolutional layers is applied repeatedly. The expanding path is the decoder, gradually restoring the detail and positional information of the image. In the expanding path, one deconvolution operation (vertical hollow arrows) first halves the feature map dimension; then the corresponding feature map from the contracting path (hatched boxes) is concatenated to again form a feature map of twice the dimension, followed by 2 convolutional layers, and this structure repeats. The concatenation lets the network learn features at multiple scales and different levels, increasing network robustness and classification accuracy. At the final output layer, a convolutional layer with 1 × 1 kernels maps the feature map of the previous layer into a 6-dimensional output feature map. The letter "D" in the figure marks a Dropout layer, with neuron-discarding probability 0.5, added after the convolutional layer at that position. ReLU is used as the activation function, max-pooling is chosen for pooling, and Adam is used as the optimizer.
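The building blocks described above (zero-padded "same" convolutions, BN after each convolution, Dropout after the deconvolution, skip concatenation, and a 1 × 1 classification head) can be sketched in PyTorch as a single down/up stage; channel widths, depth and the class name are illustrative, not the patent's exact network.

```python
import torch
import torch.nn as nn

class UNetPlusBlockDemo(nn.Module):
    """Minimal one-level sketch of the described U-Net+ building blocks:
    'same' 3x3 convolutions so input and output sizes match, BatchNorm
    after each convolution, Dropout(p=0.5) after the deconvolution, a
    skip concatenation, and a 1x1 convolution to the class map."""

    def __init__(self, in_ch=4, n_classes=6):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
        )
        self.pool = nn.MaxPool2d(2)
        self.mid = nn.Sequential(
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
        )
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.drop = nn.Dropout2d(0.5)            # regularization after deconv
        self.dec = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, n_classes, 1)  # 1x1 conv to class map

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.pool(e))
        u = self.drop(self.up(m))
        d = self.dec(torch.cat([u, e], dim=1))   # skip connection
        return self.head(d)
```

Because every convolution is zero-padded ("same" mode), a 64 × 64 input yields a 64 × 64 per-class output map.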
The multi-dimensional feature data of the training samples are added to the U-Net+ model for training. As shown in the training flowchart of fig. 7, 350 remote sensing images of size 256 × 256, the multi-dimensional feature data and the corresponding labels are input into the U-Net+ model; the model extracts features from the input images in the encoding part, recovers spatial position and resolution in the decoding part, and classifies each pixel in the pixel classification layer to obtain the category information. The loss value between the predicted classification map and the input label map is computed with the cross-entropy function, propagated back through the model, and the parameters of the model are optimized layer by layer.
The activation function of the convolutional layers is the ReLU function, and the computation of a convolutional layer can be expressed as:

x_k^l = ReLU(w_k^l * x^(l−1) + b_k^l)

where w_k^l and b_k^l are the weights and bias term of the k-th dimension of layer l, x_k^l is the feature map of the k-th dimension of layer l, x^(l−1) is the output feature map of layer l − 1, and * denotes the convolution operation.
A deep learning model generally uses a loss function to quantify the difference between the ground-truth data and the predicted probabilities; the smaller the loss value, the more accurate the classification. The invention uses the categorical cross-entropy function to calculate the loss value:

Loss = −(1/N) Σ_i Σ_c y_(i,c) · log(ŷ_(i,c))

where y_(i,c) indicates whether pixel i belongs to class c and ŷ_(i,c) is the predicted probability. The model training process is the process of optimizing the loss function and reducing the loss value, i.e. back propagation. The method preferably adopts the Adam optimization algorithm for model training, updating the parameters of the model layer by layer; the Adam algorithm is easy to implement, computationally efficient and has low memory requirements. Training stops when the loss value reaches a certain threshold.
After U-Net+ training finishes, the parameters of each layer have reached their optimal values; forward propagation is then performed again with the test-set images, which is the model prediction process. FIG. 8 compares the model predictions with the real ground surface.
4. Image post-processing step:
The U-Net+ model prediction result is post-processed, preferably with the fully connected CRFs image post-processing method, which refines the classification result by exploiting the relations among all the pixels in the original high-resolution remote sensing image, so that the prediction result is optimized and the high-resolution remote sensing monitoring result for urban green space is obtained.
Fully connected CRFs link every pair of pixels in the image, describing the relationship between each pixel and all others. They measure the difference between pixels using color and actual relative distance, and encourage pixels with large differences to receive different labels; the segmentation therefore tends to follow boundaries and does not over-smooth them.
In fully connected CRFs, the energy of the predicted label assignment x is defined as:

E(x) = Σ_i ψ_u(x_i) + Σ_{i<j} ψ_p(x_i, x_j)

where i, j ∈ {1, 2, …, N} and N is the total number of image pixels; ψ_u(x_i) is the unary potential and ψ_p(x_i, x_j) is the binary (pairwise) potential. The unary potential ψ_u(x_i) is the class probability distribution map obtained by the network model predicting each pixel i independently; it contains considerable noise and discontinuity. The binary potential ψ_p(x_i, x_j) represents a fully connected graph linking all pixel pairs in the image; in practice the information of the original image supplies the binary potential, and its model expression is:
ψ_p(x_i, x_j) = μ(x_i, x_j) Σ_m ω^(m) k^(m)(f_i, f_j)

where μ(x_i, x_j) is the label compatibility term, which constrains conduction between pixels: energy can only be conducted between pixels under the condition of the same label; ω^(m) are the weight parameters and k^(m) are the feature functions, with the following formulas:

k^(1)(f_i, f_j) = exp(−‖p_i − p_j‖² / (2θ_α²) − ‖I_i − I_j‖² / (2θ_β²))

k^(2)(f_i, f_j) = exp(−‖p_i − p_j‖² / (2θ_γ²))

where I_i and I_j are color vectors and p_i and p_j are position vectors. The feature functions express the affinity between different pixels in feature form; the first term is called the appearance kernel and the second the smoothness kernel. In the practice of image post-processing with fully connected CRFs, the unary potential is the class probability distribution map, i.e. the result of applying the softmax function to the feature map output by the network model; the position and color information in the binary potential is provided by the original image.
The fully connected CRFs method is used to post-process the prediction result of the U-Net+ model. By exploiting the relations among all pixels in the original image, fully connected CRFs improve misclassification, refine ground-object edges and optimize the prediction result, forming a high-precision high-resolution remote sensing monitoring result for urban green space.
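A naive mean-field sketch of fully connected CRF inference with the two Gaussian kernels and a Potts-style label compatibility (μ = 1 for differing labels); all parameter values and the flattened (N, ·) array layout are illustrative assumptions. Real implementations (e.g. the pydensecrf library) use fast high-dimensional filtering instead of this brute-force O(N²) message passing, which is only practical for tiny images.

```python
import numpy as np

def dense_crf_meanfield(unary_probs, img, pos, n_iters=5,
                        w1=1.0, w2=0.3, t_alpha=3.0, t_beta=10.0, t_gamma=3.0):
    """Brute-force mean-field inference for a fully connected CRF.

    unary_probs: (N, C) softmax class probabilities from the network;
    img: (N, 3) color vectors; pos: (N, 2) pixel positions. The kernel
    is the appearance + smoothness Gaussian pair, and the Potts model
    penalizes each label by the kernel-weighted mass of other labels.
    """
    d_pos = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    d_col = ((img[:, None, :] - img[None, :, :]) ** 2).sum(-1)
    k = (w1 * np.exp(-d_pos / (2 * t_alpha ** 2) - d_col / (2 * t_beta ** 2))
         + w2 * np.exp(-d_pos / (2 * t_gamma ** 2)))
    np.fill_diagonal(k, 0.0)                   # no self-message
    neg_log_unary = -np.log(np.clip(unary_probs, 1e-12, 1.0))
    q = unary_probs.copy()
    for _ in range(n_iters):
        msg = k @ q                            # expected kernel mass per label
        pairwise = msg.sum(axis=1, keepdims=True) - msg  # other labels' mass
        logits = -neg_log_unary - pairwise
        logits -= logits.max(axis=1, keepdims=True)
        q = np.exp(logits)
        q /= q.sum(axis=1, keepdims=True)
    return q
```

On a toy strip of four same-colored pixels where one noisy pixel leans toward the wrong class, the smoothing from the neighbors pulls it back to the majority label.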
The invention also relates to an urban green space high-resolution remote sensing monitoring system corresponding to the monitoring method described above, which can be understood as a system for realizing that method. Its structure, shown in fig. 9, comprises a training sample set construction module, a multi-dimensional feature space construction module, a U-Net+ model construction module and an image post-processing module connected in sequence, with the training sample set construction module also connected to the U-Net+ model construction module. The modules cooperate with one another: after a sufficient training sample data set is constructed, the generalization and robustness of the monitoring method are improved by constructing a multi-dimensional feature space and enriching the features; a U-Net+ deep learning model for urban green space based on multi-dimensional features is established; and combined with image post-processing, the high-resolution remote sensing monitoring accuracy for urban green space is improved.
The training sample set construction module is used for selecting sample areas and constructing a training sample data set therein according to the characteristics of the high-resolution remote sensing image. The multi-dimensional feature space construction module is used for performing data enhancement, random cropping and feature calculation on the training sample data set of the training sample set construction module to construct a multi-dimensional feature space comprising vegetation, spatial, contrast, texture and phenological features; it constructs a normalized vegetation index as the vegetation feature of the urban green space, constructs an nDSM digital surface model as the spatial feature, calculates a local contrast feature map through the AC algorithm as the contrast feature, obtains an image texture feature map through the gray-level co-occurrence matrix as the texture feature, and adds a contemporaneous winter image as the phenological feature. The U-Net+ model construction module is used for improving the U-Net model, based on the multi-dimensional feature space built by the multi-dimensional feature space construction module and oriented to urban green space features, by successively applying the image edge zero-padding, batch normalization and regularization improvements, establishing the U-Net+ deep learning model for urban green space based on multi-dimensional features; the multi-dimensional feature data of the training samples are added to the U-Net+ deep learning model for training, and after training the urban green space distribution is predicted to obtain the U-Net+ model prediction result. The image post-processing module is used for post-processing the U-Net+ model prediction result to obtain the high-resolution remote sensing monitoring result of the urban green space.
Preferably, the training sample set construction module takes a set of typical regions as sample areas according to the characteristics of the high-resolution remote sensing image, extracts samples from the high-resolution remote sensing image by visual interpretation, converts the vector file obtained by visual interpretation from features to raster to obtain a label image recording the true ground-surface classes of the different vegetation types in the remote sensing image, and thereby constructs the training sample data set, such as the partial imagery and corresponding label images of the data set shown in fig. 3. The multi-dimensional feature space construction module first performs image processing with a data enhancement method to highlight effective spectral information, then crops the label image and the enhanced remote sensing image with a random cropping method, and performs feature calculation on the cropped label image and remote sensing image to construct the multi-dimensional feature space.
Preferably, the improvement mode of zero padding for the image edge in the U-Net + model building module is shown in FIG. 4 b; the batch normalization processing (BN) improvement mode is that batch normalization processing is added behind a model convolution layer, and data after normalization processing is input into the next layer of the model convolution layer, so that the output of an upper layer network is connected to the result of sampling on the last layer, gradient disappearance and gradient explosion are prevented, and the network training speed is improved; regularization improvement approach (dropout) as shown in fig. 5b, a dropout layer with a certain probability of discarding neurons is added after each deconvolution of the model.
Preferably, the structure of the U-Net + deep learning model based on the multidimensional features is shown in fig. 6, the training process is shown in fig. 7, the U-Net + model building module adds the multidimensional feature data of the training sample into the U-Net + deep learning model for training, the remote sensing image, the multidimensional feature data and the corresponding label image in the multidimensional feature data of the training sample are input into the U-Net + deep learning model, the U-Net + deep learning model extracts the features of the input image in the encoding part, the spatial position and the resolution of the input image are recovered in the decoding part, and each pixel is classified in the pixel classification layer to obtain the category information; calculating a loss value of the predicted classification diagram and the input label diagram in a cross entropy function, transmitting the loss value to a U-net + deep learning model for back propagation, optimizing parameters in the model layer by layer, and stopping training when the loss value reaches a certain threshold value.
Preferably, the image post-processing module performs post-processing on the U-Net + model prediction result by using a full-connection CRFs image post-processing method, and processes the classification result by combining the relation among all pixels in the original high-resolution remote sensing image, so that the prediction result is optimized, and the high-resolution remote sensing monitoring result of the urban green space is obtained.
It should be noted that the above-mentioned embodiments enable a person skilled in the art to more fully understand the invention, without restricting it in any way. Therefore, although the present invention has been described in detail with reference to the drawings and examples, it will be understood by those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention.

Claims (8)

1. An urban green space high-resolution remote sensing monitoring method, characterized by comprising the following steps:
a training sample set construction step, namely selecting a sample area aiming at the characteristics of the high-resolution remote sensing image, and constructing a training sample set;
a multi-dimensional feature space construction step, wherein the training sample data set is subjected to data enhancement, random cropping and feature calculation processing to construct a multi-dimensional feature space comprising vegetation features, spatial features, contrast features, texture features and phenological features; the method comprises constructing a normalized vegetation index as the vegetation feature of the urban green space, constructing an nDSM digital surface model as the spatial feature of the urban green space, calculating a local contrast feature map through the AC algorithm as the contrast feature of the urban green space, obtaining an image texture feature map through the gray-level co-occurrence matrix as the texture feature of the urban green space, and adding a contemporaneous winter image as the phenological feature of the urban green space;
a U-Net + model construction step, wherein based on the constructed multidimensional feature space, the U-Net model is improved by sequentially utilizing an image edge zero filling improvement mode, a batch normalization processing improvement mode and a regularization improvement mode for urban green land features, and a U-Net + deep learning model facing the urban green land and based on the multidimensional features is established; adding the multidimensional characteristic data of the training sample into a U-Net + deep learning model for training, and predicting the spatial distribution of the urban green land after the training is finished to obtain a U-Net + model prediction result; the batch normalization processing improvement mode is that batch normalization processing is added behind the model convolution layer, and then data after normalization processing is input into the next layer of the model convolution layer; the regularization improvement mode is that a dropout layer with specific neuron discarding probability is added after each deconvolution of the model;
and an image post-processing step, wherein the U-Net+ model prediction result is post-processed to obtain the urban green space high-resolution remote sensing monitoring result.
2. The urban green space high-resolution remote sensing monitoring method according to claim 1, characterized in that, in the training sample set construction step, part of a typical area is taken as the sample area according to the characteristics of the high-resolution remote sensing image, samples are extracted from the high-resolution remote sensing image by visual interpretation, the vector file obtained by visual interpretation is converted from features to raster to obtain label images, the ground-truth locations of the different vegetation types in the remote sensing image are recorded, and the training sample set is thereby constructed; in the multi-dimensional feature space construction step, the image is first processed by a data enhancement method to highlight effective spectral information, the label image and the enhanced remote sensing image are then cropped by random cropping, and feature calculation is performed on the cropped label image and remote sensing image to construct the multi-dimensional feature space.
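The feature-to-raster conversion in this claim turns interpreted vector polygons into a per-pixel label image. A toy even-odd ray-casting rasterizer (our own illustration — a real workflow would use a GIS tool such as GDAL's rasterize) makes the idea concrete:

```python
import numpy as np

def point_in_polygon(x, y, poly):
    # Even-odd rule: cast a ray to the right and count edge crossings.
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal line
            xi = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xi:
                inside = not inside
    return inside

def rasterize(polygons, shape):
    # polygons: list of (class_id, vertex list) in pixel coordinates.
    # Each pixel is labeled by the class whose polygon contains its center;
    # later polygons overwrite earlier ones. Background stays 0.
    label = np.zeros(shape, dtype=np.int32)
    for cls, poly in polygons:
        for y in range(shape[0]):
            for x in range(shape[1]):
                if point_in_polygon(x + 0.5, y + 0.5, poly):
                    label[y, x] = cls
    return label
```

The resulting integer label image lines up pixel-for-pixel with the remote sensing image, which is what the random-cropping step then cuts into training tiles.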
3. The urban green space high-resolution remote sensing monitoring method according to claim 1, wherein, in the U-Net+ model construction step, the multi-dimensional feature data of the training samples are fed into the U-Net+ deep learning model for training: the remote sensing image, the multi-dimensional feature data and the corresponding label image of each training sample are input into the U-Net+ deep learning model, the model extracts features of the input image in the encoding part, recovers the spatial position and resolution of the input image in the decoding part, and classifies each pixel in the pixel classification layer to obtain category information; the loss value between the predicted classification map and the input label map is calculated with a cross-entropy function and back-propagated through the U-Net+ deep learning model to optimize the parameters layer by layer, and training stops when the loss value reaches a set threshold.
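The loss named in this claim — cross-entropy between the predicted classification map and the label map — is computed per pixel and averaged. A numerically stable numpy sketch (array layout `(H, W, C)` is our convention, not the patent's):

```python
import numpy as np

def pixel_cross_entropy(logits, labels):
    """Mean pixel-wise cross-entropy.

    logits: (H, W, C) raw class scores; labels: (H, W) integer class ids.
    """
    # Log-softmax with the max subtracted for numerical stability.
    m = logits.max(axis=-1, keepdims=True)
    logp = logits - m - np.log(np.exp(logits - m).sum(axis=-1, keepdims=True))
    # Negative log-probability of the true class, averaged over all pixels.
    return float(-np.take_along_axis(logp, labels[..., None], axis=-1).mean())
```

During training this scalar is back-propagated to update the network weights; a completely uninformative prediction (equal scores over C classes) gives a loss of ln C, so the value dropping well below that indicates learning.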
4. The urban green space high-resolution remote sensing monitoring method according to claim 1 or 2, characterized in that, in the image post-processing step, a fully connected CRFs post-processing method is applied to the U-Net+ model prediction result, and the classification result is refined using the relationships among all pixels of the original high-resolution remote sensing image, thereby optimizing the prediction result and obtaining the urban green space high-resolution remote sensing monitoring result.
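Fully connected CRFs are normally solved by mean-field inference (e.g. with the pydensecrf library). As a drastically simplified stand-in — a local grid CRF with a 3x3 Potts-style smoothing term, not the fully connected, appearance-weighted model of the claim — the mean-field update loop looks like this:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def box_blur(q):
    # Average each pixel's class marginals over its 3x3 neighbourhood
    # (edge-padded); this is the message-passing step.
    p = np.pad(q, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros_like(q)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += p[dy:dy + q.shape[0], dx:dx + q.shape[1]]
    return out / 9.0

def mean_field(unary, iters=5, w=2.0):
    # unary: (H, W, C) class scores from the network (log-probabilities up
    # to a constant). Each iteration lets neighbours vote for agreement,
    # cleaning up isolated misclassified pixels.
    q = softmax(unary)
    for _ in range(iters):
        q = softmax(unary + w * box_blur(q))
    return q.argmax(axis=-1)
```

The real fully connected CRF replaces the 3x3 blur with Gaussian appearance and position kernels over all pixel pairs (efficiently approximated by high-dimensional filtering), so boundaries snap to image edges rather than merely being smoothed.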
5. An urban green space high-resolution remote sensing monitoring system, characterized by comprising a training sample set construction module, a multi-dimensional feature space construction module, a U-Net+ model construction module and an image post-processing module connected in sequence, the training sample set construction module also being connected to the U-Net+ model construction module, wherein
the training sample set construction module selects a sample area according to the characteristics of the high-resolution remote sensing image and constructs a training sample data set;
the multi-dimensional feature space construction module performs data enhancement, random cropping and feature calculation on the training sample data set of the training sample set construction module to construct a multi-dimensional feature space comprising vegetation features, spatial features, contrast features, texture features and phenological features; specifically, a normalized difference vegetation index is constructed as the vegetation feature of the urban green space, a normalized digital surface model (nDSM) is constructed as the spatial feature of the urban green space, a local contrast feature map is calculated by the AC algorithm as the contrast feature of the urban green space, an image texture feature map is obtained from the gray-level co-occurrence matrix as the texture feature of the urban green space, and a same-period winter image is added as the phenological feature of the urban green space;
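Of the feature terms above, the gray-level co-occurrence matrix is the least self-explanatory. A minimal sketch (our own loop implementation — a production pipeline would use e.g. scikit-image's `graycomatrix`) counts how often pairs of gray levels co-occur at a fixed pixel offset, from which texture statistics such as contrast are derived:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    # Gray-level co-occurrence matrix for a single offset (dx, dy):
    # g[i, j] = P(pixel has level i AND its offset neighbour has level j).
    # img must hold integer levels in [0, levels).
    g = np.zeros((levels, levels), dtype=np.float64)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def glcm_contrast(g):
    # Contrast statistic: sum_ij (i - j)^2 * P(i, j); 0 for flat texture,
    # large for busy texture such as tree canopies.
    i, j = np.indices(g.shape)
    return float(((i - j) ** 2 * g).sum())
```

Computed in a sliding window over the image, such statistics form the texture feature map that helps separate structurally rough vegetation from smooth surfaces like water or pavement.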
the U-Net+ model construction module, based on the multi-dimensional feature space constructed by the multi-dimensional feature space construction module, improves the U-Net model for urban green space features by applying, in sequence, an image-edge zero-padding improvement, a batch normalization improvement and a regularization improvement, and establishes a multi-dimensional-feature U-Net+ deep learning model oriented to urban green space; the multi-dimensional feature data of the training samples are fed into the U-Net+ deep learning model for training, and after training the spatial distribution of urban green space is predicted to obtain a U-Net+ model prediction result; the batch normalization improvement adds a batch normalization operation after each convolution layer of the model and passes the normalized data to the next layer; the regularization improvement adds a dropout layer with a specific neuron drop probability after each deconvolution of the model;
and the image post-processing module post-processes the U-Net+ model prediction result to obtain the urban green space high-resolution remote sensing monitoring result.
6. The urban green space high-resolution remote sensing monitoring system according to claim 5, characterized in that the training sample set construction module takes part of a typical area as the sample area according to the characteristics of the high-resolution remote sensing image, extracts samples from the high-resolution remote sensing image by visual interpretation, converts the vector file obtained by visual interpretation from features to raster to obtain label images, records the ground-truth locations of the different vegetation types in the remote sensing image, and constructs the training sample data set; the multi-dimensional feature space construction module first processes the image with a data enhancement method to highlight effective spectral information, then crops the label image and the enhanced remote sensing image by random cropping, and performs feature calculation on the cropped label image and remote sensing image to construct the multi-dimensional feature space.
7. The urban green space high-resolution remote sensing monitoring system according to claim 5, wherein the U-Net+ model construction module feeds the multi-dimensional feature data of the training samples into the U-Net+ deep learning model for training: the remote sensing image, the multi-dimensional feature data and the corresponding label image of each training sample are input into the U-Net+ deep learning model, the model extracts features of the input image in the encoding part, recovers the spatial position and resolution in the decoding part, and classifies each pixel in the pixel classification layer to obtain category information; the loss value between the predicted classification map and the input label map is calculated with a cross-entropy function and back-propagated through the U-Net+ deep learning model to optimize the parameters layer by layer, and training stops when the loss value reaches a set threshold.
8. The urban green space high-resolution remote sensing monitoring system according to claim 5 or 6, wherein the image post-processing module applies a fully connected CRFs post-processing method to the U-Net+ model prediction result, and refines the classification result using the relationships among all pixels of the original high-resolution remote sensing image, thereby optimizing the prediction result and obtaining the urban green space high-resolution remote sensing monitoring result.
CN202010386282.9A 2020-05-09 2020-05-09 Urban green space high-resolution remote sensing monitoring method and system Active CN111914611B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010386282.9A CN111914611B (en) 2020-05-09 2020-05-09 Urban green space high-resolution remote sensing monitoring method and system

Publications (2)

Publication Number Publication Date
CN111914611A CN111914611A (en) 2020-11-10
CN111914611B (en) 2022-11-15

Family

ID=73237554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010386282.9A Active CN111914611B (en) 2020-05-09 2020-05-09 Urban green space high-resolution remote sensing monitoring method and system

Country Status (1)

Country Link
CN (1) CN111914611B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580484B (en) * 2020-12-14 2024-03-29 中国农业大学 Remote sensing image corn straw coverage recognition method and device based on deep learning
CN112651145B (en) * 2021-02-05 2021-09-10 河南省航空物探遥感中心 Urban diversity index analysis and visual modeling based on remote sensing data inversion
CN113011294B (en) * 2021-03-08 2023-11-07 中国科学院空天信息创新研究院 Method, computer equipment and medium for identifying circular sprinkling irrigation land based on remote sensing image
CN112990024B (en) * 2021-03-18 2024-03-26 深圳博沃智慧科技有限公司 Urban dust monitoring method
CN112990365B (en) * 2021-04-22 2021-08-17 宝略科技(浙江)有限公司 Training method of deep learning model for semantic segmentation of remote sensing image
CN113705326B (en) * 2021-07-02 2023-12-15 重庆交通大学 Urban construction land identification method based on full convolution neural network
CN113822220A (en) * 2021-10-09 2021-12-21 海南长光卫星信息技术有限公司 Building detection method and system
CN114283286A (en) * 2021-12-30 2022-04-05 北京航天泰坦科技股份有限公司 Remote sensing image segmentation method and device and electronic equipment
CN114529721B (en) * 2022-02-08 2024-05-10 山东浪潮科学研究院有限公司 Urban remote sensing image vegetation coverage recognition method based on deep learning
CN115761518B (en) * 2023-01-10 2023-04-11 云南瀚哲科技有限公司 Crop classification method based on remote sensing image data

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7284502B2 (en) * 2018-06-15 2023-05-31 大学共同利用機関法人情報・システム研究機構 Image processing device and method
CN109446992B (en) * 2018-10-30 2022-06-17 苏州中科天启遥感科技有限公司 Remote sensing image building extraction method and system based on deep learning, storage medium and electronic equipment
CN109919206B (en) * 2019-02-25 2021-03-16 武汉大学 Remote sensing image earth surface coverage classification method based on full-cavity convolutional neural network

Similar Documents

Publication Publication Date Title
CN111914611B (en) Urban green space high-resolution remote sensing monitoring method and system
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN109871798B (en) Remote sensing image building extraction method based on convolutional neural network
CN109086773B (en) Fault plane identification method based on full convolution neural network
CN110969088B (en) Remote sensing image change detection method based on significance detection and deep twin neural network
CN111915592B (en) Remote sensing image cloud detection method based on deep learning
CN111639719B (en) Footprint image retrieval method based on space-time motion and feature fusion
CN108052966A (en) Remote sensing images scene based on convolutional neural networks automatically extracts and sorting technique
CN111582194A (en) Multi-temporal high-resolution remote sensing image building extraction method based on multi-feature LSTM network
CN111639587B (en) Hyperspectral image classification method based on multi-scale spectrum space convolution neural network
CN110675421B (en) Depth image collaborative segmentation method based on few labeling frames
CN113312993B (en) Remote sensing data land cover classification method based on PSPNet
CN112001293A (en) Remote sensing image ground object classification method combining multi-scale information and coding and decoding network
KR20180116588A (en) Method for Object Detection Using High-resolusion Aerial Image
CN113486886A (en) License plate recognition method and device in natural scene
CN116524189A (en) High-resolution remote sensing image semantic segmentation method based on coding and decoding indexing edge characterization
CN116091937A (en) High-resolution remote sensing image ground object recognition model calculation method based on deep learning
CN115019163A (en) City factor identification method based on multi-source big data
CN111242028A (en) Remote sensing image ground object segmentation method based on U-Net
CN110991374B (en) Fingerprint singular point detection method based on RCNN
CN112861802A (en) Full-automatic crop classification method based on space-time deep learning fusion technology
CN117132884A (en) Crop remote sensing intelligent extraction method based on land parcel scale
CN116682026A (en) Intelligent deep learning environment remote sensing system
Jing et al. Time series land cover classification based on semi-supervised convolutional long short-term memory neural networks
Zhang et al. Vegetation Coverage Monitoring Model Design Based on Deep Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant