CN117475301A - Side slope vegetation classification method and device based on multi-mode depth features - Google Patents

Side slope vegetation classification method and device based on multi-modal depth features

Info

Publication number
CN117475301A
Authority
CN
China
Prior art keywords
vegetation
features
remote sensing
sensing data
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311317121.4A
Other languages
Chinese (zh)
Inventor
郝珖存
杨光
张子鸣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CCCC Fourth Harbor Engineering Co Ltd
CCCC Fourth Harbor Engineering Institute Co Ltd
Original Assignee
CCCC Fourth Harbor Engineering Co Ltd
CCCC Fourth Harbor Engineering Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CCCC Fourth Harbor Engineering Co Ltd, CCCC Fourth Harbor Engineering Institute Co Ltd filed Critical CCCC Fourth Harbor Engineering Co Ltd
Priority to CN202311317121.4A priority Critical patent/CN117475301A/en
Publication of CN117475301A publication Critical patent/CN117475301A/en
Pending legal-status Critical Current

Classifications

    • G06V 20/188: Scenes; scene-specific elements; terrestrial scenes; vegetation
    • G06N 3/0464: Computing arrangements based on biological models; neural networks; convolutional networks [CNN, ConvNet]
    • G06N 3/09: Neural networks; learning methods; supervised learning
    • G06V 10/25: Image preprocessing; determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/764: Recognition using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V 10/806: Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V 10/82: Recognition using pattern recognition or machine learning using neural networks


Abstract

The invention provides a side slope vegetation classification method based on multi-modal depth features, comprising the following steps: acquiring multi-source remote sensing data of a target side slope region, wherein the multi-source remote sensing data comprises high-resolution optical data, hyperspectral remote sensing data, a digital elevation model and radar scattering data derived from a plurality of data source platforms; extracting multi-modal depth features from the multi-source remote sensing data by using a preset deep learning algorithm, wherein the multi-modal depth features comprise vegetation spatial features, vegetation spectral features and vegetation depth features; performing feature fusion on the multi-modal depth features through a preset fusion module to obtain target fusion features; and classifying the target fusion features through a preset vegetation classification model to obtain vegetation classification information of the target slope region. Although current slope vegetation information is difficult to extract, the method improves the accuracy of slope vegetation classification.

Description

Side slope vegetation classification method and device based on multi-modal depth features
Technical Field
The invention relates to the technical field of slope vegetation classification, and in particular to a slope vegetation classification method and device based on multi-modal depth features.
Background
Owing to natural disasters such as earthquakes and landslides, and to human activities such as mining and construction, many exposed side slopes have appeared, and restoring these slopes has become a frequent and routine demand. Classifying the slope vegetation in a slope restoration area provides important data support for simulating the carbon balance process of the restored slope ecosystem.
At present, slope vegetation classification research usually targets areas with flat terrain and relatively simple classification objects, and classification is mainly performed on a single data source collected by an unmanned aerial vehicle (UAV). However, the structure, morphology and composition of slope vegetation are complex, which hampers data acquisition and information mining from a single data source; as a result, vegetation information is difficult to extract accurately and the classification accuracy of vegetation types is low.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems existing in the prior art. Therefore, the invention provides a slope vegetation classification method and device based on multi-modal depth features, which can address the technical problem of low slope vegetation classification accuracy caused by the difficulty of extracting current slope vegetation information.
In a first aspect, an embodiment of the present invention provides a method for classifying vegetation on a side slope based on multi-modal depth features, including:
acquiring multi-source remote sensing data of a target side slope region, wherein the multi-source remote sensing data comprises high-resolution optical data, hyperspectral remote sensing data, a digital elevation model and radar scattering data derived from a plurality of data source platforms;
extracting multi-modal depth features from the multi-source remote sensing data by using a preset deep learning algorithm, wherein the multi-modal depth features comprise vegetation space features, vegetation spectrum features and vegetation depth features;
performing feature fusion on the multi-modal depth features through a preset fusion module to obtain target fusion features;
and classifying the target fusion features through a preset vegetation classification model to obtain vegetation classification information of the target slope region.
In some embodiments of the present invention, the extracting the multi-modal depth features of the multi-source remote sensing data using a preset deep learning algorithm includes:
filtering the multi-source remote sensing data based on preset mask data to obtain target multi-source remote sensing data, wherein the preset mask data comprises vegetation mask data extracted based on historical remote sensing data;
performing spatial feature extraction on the target multi-source remote sensing data by using a preset dilated (atrous) convolutional neural network to obtain vegetation spatial features;
performing spectral feature extraction on the target multi-source remote sensing data by using a preset dense autoencoder to obtain vegetation spectral features;
and performing depth feature extraction on the target multi-source remote sensing data based on the vegetation spatial features and the vegetation spectral features by using a preset deep neural network to obtain vegetation depth features.
In some embodiments of the present invention, the filtering the multi-source remote sensing data based on the preset mask data to obtain target multi-source remote sensing data includes:
performing image segmentation on a plurality of pieces of historical remote sensing data to extract a vegetation mask area in each piece of historical remote sensing data;
performing weighted fusion on the plurality of vegetation mask areas to establish the vegetation mask data;
matching the multi-source remote sensing data with the vegetation mask data, and determining an overlapped area as a vegetation area in the multi-source remote sensing data;
and reserving the vegetation region in the multi-source remote sensing data to obtain the target multi-source remote sensing data.
In some embodiments of the present invention, the performing spatial feature extraction on the target multi-source remote sensing data by using a preset dilated convolutional neural network to obtain vegetation spatial features includes:
extracting geometric features from the target multi-source remote sensing data and determining them as a geometric feature map; extracting texture features from the target multi-source remote sensing data and determining them as a texture feature map; and extracting position features from the target multi-source remote sensing data and determining them as a position feature map;
and performing convolution and full connection on the geometric feature map, the texture feature map and the position feature map by using a dilated convolution kernel in the preset dilated convolutional neural network to obtain the vegetation spatial features.
In some embodiments of the present invention, the performing spectral feature extraction on the target multi-source remote sensing data by using a preset dense autoencoder to obtain vegetation spectral features includes:
compressing the target multi-source remote sensing data through an encoder in the preset dense autoencoder to obtain a low-dimensional hidden representation;
inputting the low-dimensional hidden representation into a decoder in the preset dense autoencoder, performing a nonlinear inverse transformation on the low-dimensional hidden representation through the decoder, and reconstructing the dimension-reduced target multi-source remote sensing data;
and extracting spectral features from the dimension-reduced target multi-source remote sensing data to obtain vegetation spectral features in various wave bands.
In some embodiments of the present invention, the performing depth feature extraction on the target multi-source remote sensing data by using a preset deep neural network based on the vegetation spatial features and the vegetation spectral features to obtain vegetation depth features includes:
performing depth feature extraction on the vegetation spatial features by using the preset deep neural network to obtain spatial variation features of the target multi-source remote sensing data;
performing depth feature extraction on the vegetation spectral features by using the preset deep neural network to obtain spectral variation features of the target multi-source remote sensing data;
the spatially varying features and the spectrally varying features are determined as vegetation depth features.
In some embodiments of the present invention, the classifying the target fusion feature by a preset vegetation classification model to obtain vegetation classification information of the target slope area includes:
performing convolution and full connection on the target fusion features by using the preset vegetation classification model, and outputting a plurality of vegetation types of the target slope region;
and counting a plurality of vegetation types to obtain vegetation classification information.
In a second aspect, an embodiment of the present invention provides a slope vegetation classification device based on multi-modal depth features, including at least one control processor and a memory communicatively coupled to the at least one control processor; the memory stores instructions executable by the at least one control processor to enable the at least one control processor to perform the multi-modal depth feature-based slope vegetation classification method as described in the first aspect above.
In a third aspect, an embodiment of the present invention provides an electronic device, including the slope vegetation classification device based on multi-modal depth features according to the second aspect.
In a fourth aspect, an embodiment of the present invention provides a computer readable storage medium storing computer executable instructions for performing the method for classifying vegetation on a side slope based on the multi-modal depth features as described in the first aspect.
According to the multi-modal depth feature-based slope vegetation classification method provided by the embodiment of the invention, at least the following beneficial effects are achieved: multi-source remote sensing data of a target side slope region are acquired, comprising high-resolution optical data, hyperspectral remote sensing data, a digital elevation model and radar scattering data derived from a plurality of data source platforms; compared with a single UAV data source, the multi-source remote sensing data allow slope vegetation to be classified along several data-source dimensions. Multi-modal depth features, comprising vegetation spatial features, vegetation spectral features and vegetation depth features, are extracted from the multi-source remote sensing data by a preset deep learning algorithm and fused through a preset fusion module to obtain target fusion features, so that slope vegetation features are extracted along multiple feature dimensions and the difficulty of extracting them is reduced. Finally, the target fusion features are classified through a preset vegetation classification model to obtain vegetation classification information of the target slope region, which improves slope vegetation classification accuracy and provides more reliable data support for restoring the ecosystem of the slope region.
Drawings
FIG. 1 is a flow chart of a method for classifying vegetation on a side slope based on multi-modal depth features according to one embodiment of the present invention;
FIG. 2 is a flow chart of a method of extracting multi-modal depth features provided by another embodiment of the present invention;
FIG. 3 is a flow chart of a method for obtaining targeted multi-source remote sensing data according to another embodiment of the present invention;
FIG. 4 is a flow chart of a method of deriving a spatial signature of vegetation provided by another embodiment of the invention;
FIG. 5 is a flow chart of a method of deriving spectral features of vegetation provided in another embodiment of the invention;
FIG. 6 is a flow chart of a method for obtaining vegetation depth features according to another embodiment of the invention;
FIG. 7 is a flowchart of a method for obtaining vegetation classification information by a preset vegetation classification model according to another embodiment of the invention;
FIG. 8 is a block diagram of a slope vegetation classification device based on multi-modal depth features according to another embodiment of the invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
In the description of the present invention, it should be understood that references to orientation descriptions such as upper, lower, front, rear, left, right, etc. are based on the orientation or positional relationship shown in the drawings, are merely for convenience of description of the present invention and to simplify the description, and do not indicate or imply that the apparatus or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the present invention.
In the description of the present invention, "several" means one or more and "a plurality" means two or more; greater than, less than, exceeding, etc. are understood as excluding the stated number, while above, below, within, etc. are understood as including it. The terms first and second are used only to distinguish technical features and should not be construed as indicating or implying relative importance, the number of the technical features indicated, or their precedence.
In the description of the present invention, unless explicitly defined otherwise, terms such as arrangement, installation, connection, etc. should be construed broadly and the specific meaning of the terms in the present invention can be reasonably determined by a person skilled in the art in combination with the specific contents of the technical scheme.
The embodiment of the invention provides a side slope vegetation classification method based on multi-modal depth features, which achieves at least the following beneficial effects: multi-source remote sensing data of a target side slope region are acquired, comprising high-resolution optical data, hyperspectral remote sensing data, a digital elevation model and radar scattering data derived from a plurality of data source platforms; compared with a single UAV data source, the multi-source remote sensing data allow slope vegetation to be classified along several data-source dimensions. Multi-modal depth features, comprising vegetation spatial features, vegetation spectral features and vegetation depth features, are extracted from the multi-source remote sensing data by a preset deep learning algorithm and fused through a preset fusion module to obtain target fusion features, so that slope vegetation features are extracted along multiple feature dimensions and the difficulty of extracting them is reduced; at the same time, because features are extracted from the spatial, spectral and depth subspaces, the interference of complex slope topography with vegetation classification can be reduced. Finally, the target fusion features are classified through a preset vegetation classification model to obtain vegetation classification information of the target slope region, which improves slope vegetation classification accuracy and provides more reliable data support for restoring the ecosystem of the slope region.
The method according to the embodiment of the present invention is further described below with reference to the drawings.
Referring to fig. 1, fig. 1 is a flowchart of a method for classifying vegetation on a side slope based on multi-modal depth features according to an embodiment of the present invention, where the method for classifying vegetation on a side slope based on multi-modal depth features includes, but is not limited to, the following steps:
Step S101, acquiring multi-source remote sensing data of a target side slope area, wherein the multi-source remote sensing data comprises high-resolution optical data, hyperspectral remote sensing data, a digital elevation model and radar scattering data derived from a plurality of data source platforms;
The target slope region is the slope region in which vegetation is to be classified. The data source platforms comprise various data repositories and commercial remote sensing data platforms, and the multi-source remote sensing data can be obtained from optical sensors (e.g. satellites), synthetic aperture radar (SAR), thermal infrared sensors and the like. Optionally, the high-resolution optical data may take the form of a high-resolution optical image, the hyperspectral remote sensing data the form of a hyperspectral remote sensing image, and the radar scattering data the form of a UAV radar image. Compared with the common existing practice of acquiring slope vegetation data from a single UAV data source, acquiring multi-source remote sensing data of the target slope area enables multi-dimensional extraction of slope vegetation features and thereby improves the accuracy of the subsequent slope vegetation classification.
Step S102, extracting multi-modal depth features from the multi-source remote sensing data by using a preset deep learning algorithm, wherein the multi-modal depth features comprise vegetation spatial features, vegetation spectral features and vegetation depth features;
It should be noted that the preset deep learning algorithm includes, but is not limited to, a convolutional neural network, an autoencoder, a recurrent neural network, etc. For spatial features, a dilated convolutional neural network can extract spatial features at different scales; for vegetation spectral features, a dense autoencoder can reduce the data noise of the multi-source remote sensing data and improve the accuracy of spectral feature extraction; for depth features, a recurrent neural network can extract the deep features of the multi-source remote sensing data. In this embodiment, feature extraction is performed on the multi-source remote sensing data in the spatial, spectral and depth subspaces, so that the multi-dimensional features of the multi-source remote sensing data are combined to classify the slope vegetation, reducing the difficulty of feature extraction.
Step S103, performing feature fusion on the multi-modal depth features through a preset fusion module to obtain target fusion features;
It should be noted that the multi-modal depth features are concatenated, superimposed or weighted by the preset fusion module to form a new feature vector serving as the target fusion feature. Optionally, the feature fusion may be based on simple concatenation, weighted-sum fusion, principal component analysis (PCA) fusion, or learned fusion.
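As a concrete illustration of the fusion strategies just mentioned, the sketch below implements simple concatenation and weighted-sum fusion in plain Python; the feature vectors, their length and the fusion weights are hypothetical, since the patent does not fix any of them.

```python
# Illustrative sketch of two simple feature-fusion strategies:
# concatenation and weighted sum. All values are toy assumptions.

def concat_fusion(feature_sets):
    """Concatenate per-modality feature vectors into one fused vector."""
    fused = []
    for features in feature_sets:
        fused.extend(features)
    return fused

def weighted_sum_fusion(feature_sets, weights):
    """Element-wise weighted sum of equally sized feature vectors."""
    length = len(feature_sets[0])
    assert all(len(f) == length for f in feature_sets)
    return [
        sum(w * f[i] for w, f in zip(weights, feature_sets))
        for i in range(length)
    ]

spatial  = [0.2, 0.5, 0.1]   # vegetation spatial features (toy values)
spectral = [0.7, 0.3, 0.9]   # vegetation spectral features
depth    = [0.4, 0.6, 0.8]   # vegetation depth features

target_concat = concat_fusion([spatial, spectral, depth])      # length 9
target_wsum   = weighted_sum_fusion([spatial, spectral, depth],
                                    weights=[0.5, 0.3, 0.2])   # length 3
```

Either fused vector could then serve as the target fusion feature fed to the downstream classifier; learned fusion or PCA fusion would replace these hand-set weights with trained or data-derived ones.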
Step S104, classifying the target fusion characteristics through a preset vegetation classification model to obtain vegetation classification information of the target slope area.
It should be noted that, in this step, the preset vegetation classification model may be a convolutional neural network. To address the difficulty of extracting vegetation information and the low classification accuracy of vegetation types caused by the complex structure, morphology and composition of slope vegetation, the target fusion features, obtained by fusing the multi-level spatial, spectral and depth features of the slope vegetation, are fed into a convolutional neural network for processing. That is, after feature extraction and feature fusion of the multi-source remote sensing data of the target slope area, the target fusion features carry multi-dimensional characteristics, and inputting them into the preset vegetation classification model improves the accuracy of slope vegetation classification, achieving fine extraction of slope vegetation information.
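The classification step can be illustrated with a deliberately minimal stand-in for the preset vegetation classification model: a single fully connected layer over the fused feature vector followed by an argmax. The class names, weights and feature values below are invented for illustration; the patent specifies only that a preset (e.g. convolutional) model is used.

```python
# Minimal sketch of the final classification step; not the patent's model.

CLASSES = ["grass", "shrub", "tree"]  # hypothetical vegetation types

def linear_scores(features, weight_rows, biases):
    """One fully connected layer: scores[c] = w_c . x + b_c."""
    return [
        sum(w * x for w, x in zip(row, features)) + b
        for row, b in zip(weight_rows, biases)
    ]

def classify(features, weight_rows, biases):
    """Return the class whose score is largest (argmax)."""
    scores = linear_scores(features, weight_rows, biases)
    return CLASSES[scores.index(max(scores))]

fused = [0.9, 0.1, 0.4]          # toy target fusion feature
W = [[1.0, 0.0, 0.0],            # toy per-class weight rows
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
b = [0.0, 0.0, 0.0]
label = classify(fused, W, b)    # picks the class with the largest score
```

In the patented method the linear layer would be replaced by the trained convolutional classification model, and counting the per-pixel labels over the slope area would yield the vegetation classification information.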
In some embodiments, referring to fig. 2, fig. 2 is a flowchart of a method for extracting multi-modal depth features according to another embodiment of the present invention, step S102 further includes:
step S1021, filtering the multi-source remote sensing data based on preset mask data to obtain target multi-source remote sensing data, wherein the preset mask data comprises vegetation mask data extracted based on historical remote sensing data;
Step S1022, performing spatial feature extraction on the target multi-source remote sensing data by using a preset dilated convolutional neural network to obtain vegetation spatial features;
Step S1023, performing spectral feature extraction on the target multi-source remote sensing data by using a preset dense autoencoder to obtain vegetation spectral features;
Step S1024, performing depth feature extraction on the target multi-source remote sensing data based on the vegetation spatial features and the vegetation spectral features by using a preset deep neural network to obtain vegetation depth features.
It should be noted that, because the slope structure and morphology are complex, a great deal of noise interferes with slope vegetation classification. The present application therefore filters the multi-source remote sensing data using the prior knowledge in the preset mask data, reducing both the topographic noise interference and the difficulty of extracting slope vegetation features. At the same time, multi-scale spatial features are extracted by the dilated convolutional neural network, spectral noise interference is reduced by the dense autoencoder, and deeper features of the vegetation spatial features and vegetation spectral features are further extracted by the deep neural network, so that slope vegetation features are effectively extracted across multiple dimensions and the accuracy of slope vegetation classification is improved.
In some embodiments, referring to fig. 3, fig. 3 is a flowchart of a method for obtaining target multi-source remote sensing data according to another embodiment of the present invention, step S1021 further includes:
Step S211, performing image segmentation on a plurality of pieces of historical remote sensing data to extract a vegetation mask area in each piece of historical remote sensing data;
step S212, carrying out weighted fusion on a plurality of vegetation mask areas to establish vegetation mask data;
step S213, matching the multi-source remote sensing data with the vegetation mask data, and determining the overlapped area as a vegetation area in the multi-source remote sensing data;
step S214, reserving vegetation areas in the multi-source remote sensing data to obtain target multi-source remote sensing data.
The historical remote sensing data are multi-source remote sensing data of the target slope area over a recent historical period. Image segmentation is an image processing method that screens out the slope vegetation areas in remote sensing data and separates slope vegetation areas from non-vegetation areas; it can be realized by an instance segmentation algorithm and finally yields a vegetation mask area for each piece of historical remote sensing data. Because the slope area changes over time and exhibits topographic changes (for example, a landslide may destroy slope vegetation and expose bare slope, on which new vegetation grows back after a period of time), the present application performs weighted fusion on the vegetation mask areas from different times, according to these changes, to establish the vegetation mask data. Optionally, a higher weight is given to the vegetation mask areas corresponding to more recent historical remote sensing data and a lower weight to those corresponding to earlier data, so that the vegetation mask data better match the filtering of the current period's multi-source remote sensing data and the filtering accuracy of the slope vegetation area is improved. The multi-source remote sensing data are then matched against the vegetation mask data, the remote sensing area overlapping the vegetation mask data is taken as the vegetation area, and that vegetation area is retained in the multi-source remote sensing data to obtain the target multi-source remote sensing data.
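The mask construction and filtering steps above can be sketched as a weighted per-pixel vote over binary historical masks, with more recent masks weighted higher, followed by masking of the new data. The masks, weights and the 0.5 vote threshold below are illustrative assumptions, not values taken from the patent.

```python
# Hedged sketch of weighted mask fusion and vegetation-area filtering.

def fuse_masks(masks, weights, threshold=0.5):
    """Weighted per-pixel vote over binary masks (1 = vegetation)."""
    total = sum(weights)
    n = len(masks[0])
    return [
        1 if sum(w * m[i] for w, m in zip(weights, masks)) / total >= threshold
        else 0
        for i in range(n)
    ]

def apply_mask(pixels, mask, fill=0.0):
    """Keep pixel values where mask == 1; blank out the rest."""
    return [p if m == 1 else fill for p, m in zip(pixels, mask)]

# Three historical masks over 5 pixels, oldest first; recent masks weigh more.
old, mid, new = [1, 1, 0, 0, 1], [1, 0, 0, 1, 1], [1, 0, 1, 1, 1]
mask = fuse_masks([old, mid, new], weights=[1, 2, 3])

data = [0.8, 0.2, 0.5, 0.7, 0.9]   # toy single-band reflectance values
filtered = apply_mask(data, mask)  # target multi-source data (vegetation only)
```

A real implementation would operate on 2-D raster masks co-registered to the new acquisition, but the weighting logic is the same.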
In some embodiments, referring to fig. 4, fig. 4 is a flowchart of a method for obtaining vegetation spatial features according to another embodiment of the present invention, step S1022 includes:
Step S221, extracting geometric features from the target multi-source remote sensing data and determining them as a geometric feature map; extracting texture features from the target multi-source remote sensing data and determining them as a texture feature map; and extracting position features from the target multi-source remote sensing data and determining them as a position feature map;
Step S222, performing convolution and full connection on the geometric feature map, the texture feature map and the position feature map by using a dilated convolution kernel in the preset dilated convolutional neural network to obtain the vegetation spatial features.
In this embodiment, for spatial feature extraction, the geometric, position and texture features of the target multi-source remote sensing data are extracted by feature extraction algorithms such as the gray-level co-occurrence matrix (GLCM), Gabor filters, local binary patterns (LBP) or GIST, and the dilated convolutional neural network applies a dilated convolution kernel, together with fully connected layers, to the geometric feature map, texture feature map and position feature map to obtain the vegetation spatial features. The dilated convolution kernel is introduced to expand the spatial receptive field of the convolution kernel, so that spatial features at different scales are identified more effectively.
In some embodiments, referring to fig. 5, fig. 5 is a flowchart of a method for obtaining spectral features of vegetation according to another embodiment of the present invention, step S1023 includes:
step S231, compressing the target multi-source remote sensing data through an encoder in a preset dense self-encoder to obtain a low-dimensional hidden representation;
step S232, inputting the low-dimensional hidden representation into a decoder in the preset dense self-encoder, performing a nonlinear inverse transformation on the low-dimensional hidden representation through the decoder, and reconstructing to obtain dimension-reduced target multi-source remote sensing data;
step S233, extracting spectral features of the dimension-reduced target multi-source remote sensing data to obtain vegetation spectral features in various wave bands.
The dense self-encoder (Dense Autoencoder) is an unsupervised learning model composed of an encoder and a decoder; it can compress and represent input data and reconstruct an approximation of the original data. The encoder part maps the input data to a low-dimensional hidden representation, learning and extracting the key features of the input through the nonlinear transformations of several hidden layers; the encoder typically uses a fully connected neural network structure in which the number of nodes per hidden layer is progressively reduced, ultimately compressing the input into a lower-dimensional hidden representation. The decoder part remaps the hidden representation to the same dimension as the original input, reconstructing an output as close as possible to the original data through the nonlinear inverse transformations of several hidden layers; its structure mirrors the encoder, with the number of hidden-layer nodes gradually increasing until an output similar to the original input is generated. In this embodiment, the dense self-encoder is introduced to reduce the dimensionality of the target multi-source remote sensing data, thereby reducing data redundancy and noise interference.
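The shrinking-then-mirrored layer structure just described can be sketched as a toy forward pass. The layer sizes (32→16→8→4→8→16→32), tanh activations and random weights are illustrative assumptions; a real dense self-encoder would be trained to minimise reconstruction error.

```python
import math
import random

# Toy sketch of a dense autoencoder forward pass: the encoder's fully
# connected layers shrink toward a low-dimensional hidden representation,
# and the decoder mirrors them back to the input size. Untrained weights.

random.seed(0)

def dense_layer(x, n_out):
    """One fully connected layer with tanh non-linearity (random weights)."""
    w = [[random.uniform(-0.1, 0.1) for _ in x] for _ in range(n_out)]
    return [math.tanh(sum(wi * xi for wi, xi in zip(wrow, x))) for wrow in w]

def autoencode(x, encoder_sizes=(16, 8, 4), decoder_sizes=(8, 16)):
    h = x
    for n in encoder_sizes:               # node counts shrink: 16 -> 8 -> 4
        h = dense_layer(h, n)
    hidden = h                            # low-dimensional hidden representation
    for n in decoder_sizes + (len(x),):   # decoder mirrors: 8 -> 16 -> input dim
        h = dense_layer(h, n)
    return hidden, h                      # (compressed code, reconstruction)

spectrum = [random.random() for _ in range(32)]  # e.g. 32 spectral bands
code, recon = autoencode(spectrum)
```

The 4-dimensional `code` is the compressed representation on which the subsequent spectral feature extraction would operate, with redundancy and noise from the original 32 bands suppressed.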
Further, the plant spectrum has the following reflection characteristics: a small reflection peak with a reflectivity of 10%-20% near 0.55 μm in the visible band, with two obvious absorption valleys near 0.45 μm and 0.67 μm; a reflection peak with a reflectivity of 40% or more between 0.7 μm and 1.1 μm in the near-infrared band, the reflectivity rising sharply along a steep slope at 0.7-0.8 μm; and three absorption valleys at 1.4 μm, 1.9 μm and 2.6 μm. Healthy green plants contain a large amount of chlorophyll and usually reflect 40%-50% of the energy in the near-infrared band (0.7-1.1 μm) while absorbing nearly 80%-90% of the energy in the visible band (0.4-0.7 μm). This embodiment adopts spectral feature extraction methods such as principal component analysis (PCA), linear discriminant analysis (LDA) and wavelet transformation, and, according to the spectral reflection characteristics of plants, extracts the reflectivity or radiance spectral features of different vegetation types in different wave bands to realize spectral feature extraction and analysis.
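The red/near-infrared contrast described above is exactly what common vegetation indices exploit; NDVI is the standard example. The reflectance values and the idea of thresholding NDVI are illustrative — the patent itself does not prescribe NDVI:

```python
# NDVI = (NIR - Red) / (NIR + Red). Healthy vegetation reflects strongly in
# the NIR band (~40-50%) and absorbs strongly in the red band, so its NDVI
# is high; bare or stressed surfaces score much lower. Values illustrative.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from band reflectances."""
    return (nir - red) / (nir + red)

healthy = ndvi(nir=0.45, red=0.06)  # high NIR, deep red absorption valley
bare    = ndvi(nir=0.25, red=0.20)  # weak contrast between the two bands
```

The wide gap between the two scores shows why the 0.67 μm absorption valley and the 0.7-1.1 μm reflection peak are such discriminative inputs for spectral feature extraction.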
In some embodiments, referring to fig. 6, fig. 6 is a flowchart of a method for obtaining vegetation depth features according to another embodiment of the present invention, step S1024 includes:
step S241, performing depth feature extraction on the vegetation spatial features by using a preset deep neural network to obtain spatial variation features of the target multi-source remote sensing data;
step S242, deep feature extraction is carried out on vegetation spectral features by utilizing a preset deep neural network to obtain spectral variation features of target multi-source remote sensing data;
step S243, determining the spatial variation feature and the spectral variation feature as vegetation depth features.
It should be noted that the preset deep neural network may be a recurrent neural network (Recurrent Neural Network, RNN), which is suited to processing time-series remote sensing data, such as meteorological data or time-series remote sensing image data, and helps extract temporal information and long-term dependencies from sequence data. A slope region is easily damaged by natural disasters or human activity, so the spatial distribution and spectrum of slope vegetation exhibit certain temporal characteristics along a timeline. This embodiment therefore extracts deep features from the vegetation spatial features through the recurrent neural network to obtain spatial variation features, extracts deep features from the vegetation spectral features to obtain spectral variation features, and takes the spatial variation features and spectral variation features as the vegetation depth features, so as to improve the accuracy of vegetation classification.
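A vanilla RNN cell of the kind mentioned above can be folded over a sequence of per-acquisition feature vectors to yield one "variation feature" summarising how the slope changed over time. The weights, dimensions and input values below are illustrative assumptions:

```python
import math

# Minimal vanilla RNN sketch for the temporal feature extraction described
# above: h_t = tanh(W_h h_{t-1} + W_x x_t). The final hidden state condenses
# the whole sequence into one variation feature. Weights are illustrative.

def rnn_step(h, x, w_h, w_x):
    """One recurrence step; w_h and w_x are 2D weight lists."""
    return [math.tanh(sum(w_h[i][j] * h[j] for j in range(len(h))) +
                      sum(w_x[i][j] * x[j] for j in range(len(x))))
            for i in range(len(h))]

def encode_sequence(seq, hidden_dim=2):
    w_h = [[0.5 if i == j else 0.0 for j in range(hidden_dim)]
           for i in range(hidden_dim)]                       # 0.5 * identity
    w_x = [[0.3] * len(seq[0]) for _ in range(hidden_dim)]   # shared input weights
    h = [0.0] * hidden_dim
    for x in seq:           # spatial/spectral features at t1, t2, t3, ...
        h = rnn_step(h, x, w_h, w_x)
    return h                # "variation feature" summarising the sequence

feats = [[0.2, 0.1], [0.4, 0.3], [0.9, 0.8]]  # features per acquisition
variation = encode_sequence(feats)
```

Because each hidden state feeds into the next, later acquisitions are interpreted in the context of earlier ones, which is the long-term dependency property the embodiment relies on.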
In some embodiments, referring to fig. 7, fig. 7 is a flowchart of a method for obtaining vegetation classification information by presetting a vegetation classification model according to another embodiment of the present invention, step S104 includes:
step S1041, performing convolution and full connection on the target fusion features by using a preset vegetation classification model, and outputting various vegetation types of the target slope area;
step S1042, counting a plurality of vegetation types to obtain vegetation classification information.
It should be noted that the preset vegetation classification model classifies all the target fusion features in the target slope area in turn and outputs all the vegetation types of the target slope area; vegetation classification information is then obtained by counting the number of instances of each vegetation type.
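The counting step at the end of the pipeline is a straightforward tally; a sketch under assumed class names (the patent does not enumerate specific vegetation types):

```python
from collections import Counter

# Sketch of step S1042: per-pixel (or per-segment) vegetation type
# predictions from the classifier are tallied into classification
# information for the whole slope. The class names are illustrative.

def summarise_classification(predictions):
    counts = Counter(predictions)
    total = sum(counts.values())
    return {veg: {"count": n, "share": n / total}
            for veg, n in counts.most_common()}

preds = ["grass", "shrub", "grass", "bare", "grass", "shrub"]
info = summarise_classification(preds)
```

The resulting counts and per-type shares form the vegetation classification information for the target slope area, ordered from the most to the least common type.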
As shown in fig. 8, fig. 8 is a block diagram of a slope vegetation classification device based on multi-modal depth features according to an embodiment of the present invention. The invention also provides a side slope vegetation classification device based on the multi-mode depth characteristics, which comprises:
the processor 801 may be implemented by a general purpose central processing unit (Central Processing Unit, CPU), a microprocessor, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, etc. for executing related programs to implement the technical solutions provided by the embodiments of the present application;
the Memory 802 may be implemented in the form of a Read Only Memory (ROM), a static storage device, a dynamic storage device, or a random access Memory (Random Access Memory, RAM). Memory 802 may store an operating system and other application programs, and when the technical solutions provided by the embodiments of the present disclosure are implemented by software or firmware, relevant program codes are stored in memory 802, and the processor 801 invokes a method for executing the embodiments of the present disclosure;
an input/output interface 803 for implementing information input and output;
the communication interface 804 is configured to implement communication interaction between the device and other devices, and may implement communication in a wired manner (e.g., USB, network cable, etc.), or may implement communication in a wireless manner (e.g., mobile network, WIFI, bluetooth, etc.);
a bus 805 that transfers information between the various components of the device (e.g., the processor 801, the memory 802, the input/output interface 803, and the communication interface 804);
wherein the processor 801, the memory 802, the input/output interface 803, and the communication interface 804 implement communication connection between each other inside the device through a bus 805.
The embodiment of the application also provides electronic equipment, which comprises the side slope vegetation classification device based on the multi-mode depth characteristics.
The embodiment of the application also provides a storage medium, which is a computer readable storage medium, and the storage medium stores a computer program, and the computer program realizes the slope vegetation classification method based on the multi-mode depth characteristic when being executed by a processor.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The apparatus embodiments described above are merely illustrative, in which the elements illustrated as separate components may or may not be physically separate; that is, they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically include computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery media.
While the preferred embodiment of the present invention has been described in detail, the present invention is not limited to the above embodiments, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit and scope of the present invention, and these equivalent modifications or substitutions are included in the scope of the present invention as defined in the appended claims.

Claims (10)

1. A side slope vegetation classification method based on multi-mode depth features is characterized by comprising the following steps:
acquiring multi-source remote sensing data of a target side slope region, wherein the multi-source remote sensing data comprises high-resolution optical data, hyperspectral remote sensing data, a digital elevation model and radar scattering data which are derived from a plurality of data source platforms;
extracting multi-modal depth features from the multi-source remote sensing data by using a preset deep learning algorithm, wherein the multi-modal depth features comprise vegetation space features, vegetation spectrum features and vegetation depth features;
performing feature fusion on the multi-mode depth features through a preset fusion device to obtain target fusion features;
and classifying the target fusion characteristics through a preset vegetation classification model to obtain vegetation classification information of the target slope area.
2. The method for classifying vegetation on a side slope based on multi-modal depth features of claim 1, wherein the extracting multi-modal depth features of the multi-source remote sensing data using a preset deep learning algorithm comprises:
filtering the multi-source remote sensing data based on preset mask data to obtain target multi-source remote sensing data, wherein the preset mask data comprises vegetation mask data extracted based on historical remote sensing data;
carrying out space feature extraction on the target multi-source remote sensing data by using a preset cavity convolutional neural network to obtain vegetation space features;
carrying out spectral feature extraction on the target multi-source remote sensing data by using a preset dense self-encoder to obtain vegetation spectral features;
and carrying out depth feature extraction on the target multi-source remote sensing data based on the vegetation spatial features and the vegetation spectral features by using a preset depth neural network to obtain vegetation depth features.
3. The method for classifying vegetation on a side slope based on multi-modal depth features according to claim 2, wherein the filtering the multi-source remote sensing data based on preset mask data to obtain target multi-source remote sensing data comprises:
image segmentation is carried out on a plurality of historical remote sensing data so as to extract vegetation mask areas in each historical remote sensing data;
weighting and fusing a plurality of vegetation mask areas to establish the vegetation mask data;
matching the multi-source remote sensing data with the vegetation mask data, and determining an overlapped area as a vegetation area in the multi-source remote sensing data;
and reserving the vegetation region in the multi-source remote sensing data to obtain the target multi-source remote sensing data.
4. The method for classifying vegetation on a side slope based on multi-modal depth features of claim 2, wherein the performing spatial feature extraction on the target multi-source remote sensing data by using a preset hole convolutional neural network to obtain vegetation spatial features comprises:
extracting geometric features from the target multi-source remote sensing data and determining the geometric features as a geometric feature map, extracting texture features in the target multi-source remote sensing data and determining the texture features as a texture feature map, and extracting position features in the target multi-source remote sensing data and determining the position features as a position feature map;
and performing convolution and full connection on the geometric feature map, the texture feature map and the position feature map by using a cavity convolution kernel in the preset cavity convolutional neural network to obtain the vegetation spatial features.
5. The method for classifying vegetation on a side slope based on multi-modal depth features of claim 2, wherein the performing spectral feature extraction on the target multi-source remote sensing data by using a preset dense self-encoder to obtain vegetation spectral features comprises:
compressing the target multi-source remote sensing data through an encoder in the preset dense self-encoder to obtain a low-dimensional hidden representation;
inputting the low-dimensional hidden representation into a decoder in the preset dense self-encoder, performing a nonlinear inverse transformation on the low-dimensional hidden representation through the decoder, and reconstructing to obtain the dimension-reduced target multi-source remote sensing data;
and extracting spectral features of the target multi-source remote sensing data after dimension reduction to obtain vegetation spectral features in various wave bands.
6. The method for classifying vegetation on a side slope based on multi-modal depth features of claim 2, wherein the performing depth feature extraction on the target multi-source remote sensing data based on the vegetation spatial features and the vegetation spectral features by using a preset depth neural network to obtain vegetation depth features comprises:
performing depth feature extraction on the vegetation spatial features by using the preset depth neural network to obtain spatial variation features of the target multi-source remote sensing data;
performing depth feature extraction on the vegetation spectral features by using the preset depth neural network to obtain spectral variation features of the target multi-source remote sensing data;
and determining the spatial variation features and the spectral variation features as the vegetation depth features.
7. The method for classifying vegetation on a side slope based on multi-modal depth features of claim 1, wherein classifying the target fusion features by a preset vegetation classification model to obtain vegetation classification information of the target side slope region comprises:
performing convolution and full connection on the target fusion features by using the preset vegetation classification model, and outputting various vegetation types of the target slope area;
and counting a plurality of vegetation types to obtain vegetation classification information.
8. A multi-modal depth feature-based slope vegetation classification device comprising at least one control processor and a memory communicatively coupled to the at least one control processor; the memory stores instructions executable by the at least one control processor to enable the at least one control processor to perform the multi-modal depth feature-based slope vegetation classification method of any of claims 1 to 7.
9. An electronic device comprising the multi-modal depth feature-based slope vegetation classification device of claim 8.
10. A computer-readable storage medium storing computer-executable instructions for causing a computer to perform the multi-modal depth feature-based slope vegetation classification method of any of claims 1 to 7.
CN202311317121.4A 2023-10-11 2023-10-11 Side slope vegetation classification method and device based on multi-mode depth features Pending CN117475301A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311317121.4A CN117475301A (en) 2023-10-11 2023-10-11 Side slope vegetation classification method and device based on multi-mode depth features


Publications (1)

Publication Number Publication Date
CN117475301A true CN117475301A (en) 2024-01-30

Family

ID=89626628



Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117671521A (en) * 2024-02-02 2024-03-08 中交四航工程研究院有限公司 Invasive species biomass inversion method and device based on multi-source remote sensing data



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination