CN112861732A - Method, system and device for monitoring land in ecological environment fragile area - Google Patents


Info

Publication number: CN112861732A (application CN202110183122.9A)
Authority: CN (China)
Prior art keywords: image, monitoring, area, convolution, land
Legal status: Granted; currently Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN112861732B
Inventors: 王蕾, 姚允龙, 柴青宇, 杨利金, 柴一涵, 贾佳, 宁静, 王佳轩
Current assignee: Northeast Forestry University (also the original assignee)
Events: application filed by Northeast Forestry University with priority to CN202110183122.9A; publication of CN112861732A; application granted; publication of CN112861732B

Classifications

    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods (neural networks)
    • G06Q50/26 Government or public services
    • G06V10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V20/00 Scenes; Scene-specific elements


Abstract

A land monitoring method, system and device for ecologically fragile areas, belonging to the combined field of image recognition technology and environmental monitoring technology. The method aims to solve the problem of low working efficiency or low accuracy in current land monitoring of ecologically fragile areas. In the method, monitoring nodes are determined according to the monitoring task and images of the monitored area are acquired; the monitored-area image is converted to a grayscale image, denoted image A; the G and B channel images of its RGB channels are denoted image G and image B, respectively; the image with the B channel removed from the RGB channels is denoted image RG; a trained neural network model predicts the result for the image at each monitoring-node time; and land change in the ecologically fragile area is monitored according to the changes of the segmented regions between the prediction results at the monitoring-node times. The method is suitable for land monitoring in ecologically fragile areas.

Description

Method, system and device for monitoring land in ecological environment fragile area
Technical Field
The invention relates to a method, system and device for monitoring land in ecologically fragile areas, belonging to the combined field of image recognition technology and environmental monitoring technology.
Background
China is one of the countries with the most widespread fragile ecosystem types, including the northeast forest-grassland ecotone, the northern farming-pastoral ecotone, the northwest desert-oasis transition zone, the southern red-soil hilly areas, the southwest karst rocky-desertification mountain areas, the southwest mountain farming-pastoral ecotone, the Qinghai-Tibet Plateau composite-erosion areas, and the coastal land-water transition zones.
Research by relevant experts on typical fragile areas, such as desertification in the northern farming-pastoral ecotone, desertification in the northwest, and degradation of alpine grassland on the Qinghai-Tibet Plateau, shows that ecologically fragile areas account for about 70% of China's land area, and areas of moderate or greater fragility account for about 55%. As climate change and the influence of human activities intensify, fragile ecosystems change markedly over time, so monitoring, protecting and restoring land in ecologically fragile areas is of great significance; among these tasks, land monitoring is especially important, and many experts and scholars have carried out research on it.
At present, land monitoring of ecologically fragile areas is mostly based on manual field data collection, which requires a great deal of manpower and material resources, takes a long time and is inefficient. Some experts and scholars instead use remote-sensing inversion to monitor such areas, but the data obtained this way generally have low resolution, basically coarser than 30 m × 30 m, so different studies reach very different results, accuracy is not guaranteed, and data accuracy varies between databases, further reducing the reliability of the results.
Disclosure of Invention
The invention aims to solve the problem of low working efficiency or low accuracy in current land monitoring of ecologically fragile areas.
A land monitoring method for a fragile ecological environment area comprises the following steps:
determining the times of the two monitoring nodes as t1 and t2, respectively, according to the monitoring task; for each of the t1 and t2 monitoring-node times, performing the following steps:
acquiring an image of the monitored area to obtain a monitored-area image;
converting the monitored-area image to a grayscale image, denoted image A;
denoting the G and B channel images of the RGB channels of the monitored-area image as image G and image B, respectively;
denoting the image with the B channel removed from the RGB channels of the monitored-area image (R and G retained) as image RG;
inputting image A into the first processing path of the feature extraction network, inputting images G, B and RG into the second to fourth processing paths respectively, and predicting with the trained neural network model to obtain the prediction result for the image at the corresponding monitoring-node time;
monitoring land change in the ecologically fragile area according to the changes of the segmented regions between the prediction results for the t1 and t2 monitoring-node times;
the neural network model adopts a Mask R-CNN network model, and the feature extraction network structure in the Mask R-CNN network model is as follows:
the first processing path comprises four convolution units from a first convolution unit to a fourth convolution unit, and the four convolution units are sequentially connected;
a first convolution unit: 3 × 3 convolutional layers +3 × 3 pooling layers;
a second convolution unit: 1 × 1 convolution layer +3 × 3 convolution layer +1 × 1 convolution layer;
a third convolution unit: 1 × 1 convolution layer +3 × 3 convolution layer +1 × 1 convolution layer;
a fourth convolution unit: 1 x 1 convolutional layer;
the second processing path comprises four convolution units from the first convolution unit to the fourth convolution unit, and the four convolution units are sequentially connected;
a first convolution unit: 5 × 5 convolutional layer + 3 × 3 pooling layer;
a second convolution unit: 1 × 1 convolution layer +3 × 3 convolution layer +1 × 1 convolution layer;
a third convolution unit: 1 × 1 convolution layer +3 × 3 convolution layer +1 × 1 convolution layer;
a fourth convolution unit: 1 x 1 convolutional layer;
the third processing path and the fourth processing path have the same structure as the second processing path;
and after feature fusion, the feature maps of the four processing paths are passed through a 3 × 3 pooling layer to obtain the final feature map.
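As a concrete illustration, the four-path feature extraction structure described above can be sketched in PyTorch. This is a minimal sketch, not the patented implementation: the channel widths, strides, activation functions and fusion by concatenation are assumptions not specified in the text.

```python
import torch
import torch.nn as nn

def make_path(in_ch, ch, first_kernel):
    # First unit: k x k convolution + 3 x 3 pooling. Padding is chosen so that
    # all four paths produce feature maps of the same size (see the text).
    return nn.Sequential(
        nn.Conv2d(in_ch, ch, first_kernel, padding=first_kernel // 2), nn.ReLU(),
        nn.MaxPool2d(3, stride=2, padding=1),
        # Second unit: 1 x 1 + 3 x 3 + 1 x 1 convolutions.
        nn.Conv2d(ch, ch, 1), nn.ReLU(),
        nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        nn.Conv2d(ch, ch, 1), nn.ReLU(),
        # Third unit: 1 x 1 + 3 x 3 + 1 x 1 convolutions.
        nn.Conv2d(ch, ch, 1), nn.ReLU(),
        nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        nn.Conv2d(ch, ch, 1), nn.ReLU(),
        # Fourth unit: a single 1 x 1 convolution.
        nn.Conv2d(ch, ch, 1),
    )

class FourPathBackbone(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.path_a = make_path(1, ch, 3)    # grayscale image A, 3 x 3 first conv
        self.path_g = make_path(1, ch, 5)    # G-channel image, 5 x 5 first conv
        self.path_b = make_path(1, ch, 5)    # B-channel image, 5 x 5 first conv
        self.path_rg = make_path(2, ch, 5)   # two-channel image RG, 5 x 5 first conv
        self.fuse_pool = nn.MaxPool2d(3, stride=2, padding=1)

    def forward(self, a, g, b, rg):
        # Feature fusion by channel concatenation, then the final 3 x 3 pooling.
        fused = torch.cat(
            [self.path_a(a), self.path_g(g), self.path_b(b), self.path_rg(rg)], dim=1)
        return self.fuse_pool(fused)

net = FourPathBackbone(ch=8)
out = net(torch.zeros(1, 1, 64, 64), torch.zeros(1, 1, 64, 64),
          torch.zeros(1, 1, 64, 64), torch.zeros(1, 2, 64, 64))
```

In a Mask R-CNN pipeline this backbone would stand in for the usual ResNet feature extractor, with the fused map fed to the region proposal network.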
Further, each pixel of the monitored-area image represents an actual ground extent of 5-10 m in length and 5-10 m in width.
Further, during processing by the neural network model, the feature maps produced by the first convolution unit of the first processing path and by the first convolution units of the other processing paths must be the same size, which is achieved by adjusting the stride of the convolution operations in those first convolution units in combination with a padding operation.
Further, the training process of the neural network model comprises the following steps:
S1, acquiring images of the area ranges corresponding to different ecologically fragile areas to obtain area-range images; augmenting the acquired area-range images and constructing an image dataset from the augmented images;
performing image segmentation on each image in the image dataset, recording the result as the actual segmentation result, and manually labelling the segmented regions of the actual segmentation result, the label categories comprising sandy area, water area, forest and stone, with the remaining regions treated as background; constructing a sample dataset from the labelled images and dividing it into a training set and a test set;
S2, converting the images in the training set to grayscale images, denoted images A;
denoting the G and B channel images of the RGB channels of the training images as images G and images B, respectively;
denoting the images with the B channel removed from the RGB channels (R and G retained) as images RG;
training the Mask R-CNN network model, inputting image A into the first processing path of the feature extraction network during training, and inputting images G, B and RG into the second to fourth processing paths, respectively;
S3, testing with the test set; if the test results meet requirements, the trained neural network model is obtained; otherwise, the training set and test set are re-divided and training restarts.
Further, before the neural network model is trained, a feature extraction network in the Mask R-CNN network model is trained in advance.
Further, before the neural network model is trained, a feature extraction network in the Mask R-CNN network model is pre-trained by using an ISPRS data set.
Further, the ratio of the training set to the test set is 80%:20%.
Further, in the process of monitoring land change in the ecologically fragile area according to the changes of the segmented regions between the prediction results for the t1 and t2 monitoring-node times, meteorological data, geographic-information data and soil data of the monitored area are collected, an ecological-sensitivity evaluation index system is constructed according to the Interim Regulations on Ecological Function Zoning Technology, and the ecological-environment sensitivity is then evaluated.
A land monitoring system for ecologically fragile areas, configured to execute the above land monitoring method for ecologically fragile areas.
A land monitoring device for ecologically fragile areas, configured to store and/or run the above land monitoring system.
Advantageous effects:
The invention greatly reduces manpower and material costs and offers high monitoring efficiency. At the same time, the method obtains more accurate segmentation results during detection, divides the monitored area more finely, and fully monitors desertified, rocky-desertified and similar areas; the more accurate detection results provide more precise data for the protection and improvement of ecologically fragile areas, make protection work more targeted, and reduce phenomena such as "green in the short term, yellow in the long term".
Drawings
FIG. 1 is a schematic flow chart of the first embodiment;
FIG. 2 is an example of a 2D image and the corresponding DSM image.
Detailed Description
The first embodiment is as follows:
the embodiment is a land monitoring method for a fragile area of an ecological environment, which comprises the following steps:
1. Acquiring images of the area ranges corresponding to different ecologically fragile areas to obtain area-range images;
in fact, images obtained in various modes can be used for processing, and based on the consideration of processing effect, the embodiment does not adopt a Google Earth Engine platform to obtain data, but adopts an aerial photography mode to obtain images;
based on the mode of extracting features and performing the feature extraction on the whole scheme, research and analysis show that the pixel points of the acquired image represent the actual real area and need to be smaller than or equal to the real block area threshold, the block area threshold is not more than 30 meters by 30 meters, theoretically, the smaller the pixel points represent the actual real area, the better the pixel points represent, but the prediction accuracy and the calculation efficiency of the comprehensive network structure and the influence of image fusion are better, the length and the width of the pixel points of the acquired image representing the actual real area are respectively 5-10 m and 5-10 m, and verification shows that the accuracy of processing the image with the resolution ratio by using the method can reach more than 87.6%, meanwhile, the calculated amount can be ensured, and the calculation efficiency is further ensured.
Owing to the limits of the image-acquisition equipment and the extent of the monitored area, the image is obtained by acquiring local images and mosaicking them into an area image; each pixel of the mosaicked area image represents an actual ground extent of 5-10 m in length and 5-10 m in width.
To avoid overfitting caused by the limited number of samples, the acquired area-range images are augmented, and the image dataset is constructed from the augmented images; this also helps ensure the robustness of the invention.
Performing image segmentation on each image in the image dataset, recording the result as the actual segmentation result, and manually labelling the segmented regions of the actual segmentation result, the label categories comprising sandy area, water area, forest and stone, with the remaining regions treated as background; constructing a sample dataset from the labelled images and dividing it into a training set and a test set. In this embodiment the ratio of the training set to the test set is 80%:20%.
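The 80%:20% division can be sketched as follows; the random shuffle and fixed seed are illustrative assumptions, since the text does not state how the split is drawn.

```python
import numpy as np

def split_dataset(samples, train_ratio=0.8, seed=0):
    """Shuffle the labelled samples and split them into training and test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    n_train = int(len(samples) * train_ratio)
    train = [samples[i] for i in idx[:n_train]]
    test = [samples[i] for i in idx[n_train:]]
    return train, test

# Example with 100 placeholder samples: 80 for training, 20 for testing.
train_set, test_set = split_dataset(list(range(100)))
```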
2. Constructing and training a neural network model:
2.1. The invention adopts a Mask R-CNN network model and improves it: to reduce gradient diffusion and, on the basis of accurate feature extraction, to minimize the computation amount and improve operational efficiency, the feature extraction network in the Mask R-CNN model is improved to the following structure:
the first processing path includes four convolution units connected in sequence, the four convolution units being as follows:
a first convolution unit: 3 × 3 convolutional layers +3 × 3 pooling layers;
a second convolution unit: 1 × 1 convolution layer +3 × 3 convolution layer +1 × 1 convolution layer;
a third convolution unit: 1 × 1 convolution layer +3 × 3 convolution layer +1 × 1 convolution layer;
a fourth convolution unit: 1 x 1 convolutional layer;
the second processing path includes four convolution units connected in sequence, the four convolution units being as follows:
a first convolution unit: 5 × 5 convolutional layer + 3 × 3 pooling layer;
a second convolution unit: 1 × 1 convolution layer +3 × 3 convolution layer +1 × 1 convolution layer;
a third convolution unit: 1 × 1 convolution layer +3 × 3 convolution layer +1 × 1 convolution layer;
a fourth convolution unit: 1 x 1 convolutional layer;
the third processing path and the fourth processing path have the same structure as the second processing path; and the feature maps of the four processing paths are connected to a 3-by-3 pooling layer after feature fusion to obtain a final feature map.
Since the receptive fields in the first convolution unit of the first processing path and the first convolution units of the other processing paths are different, the sizes of the feature maps processed by the first convolution unit of the first processing path and the first convolution unit of the other processing paths need to be the same by adjusting the convolution step size in the respective convolution processes and combining the padding operation. The fourth convolution unit of the invention is convolution operation, and does not adopt full connection which is commonly used in ground information processing at present, thereby greatly reducing the calculation amount and ensuring the processing effect.
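The size-matching constraint can be checked with the standard convolution output-size formula. The stride value below is illustrative; the point is that padding 1 for a 3 × 3 kernel and padding 2 for a 5 × 5 kernel give the first convolution units of all paths identical output sizes.

```python
def conv_out(n, k, s, p):
    """Spatial output size of a convolution or pooling layer:
    floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

n = 512                                  # input side length in pixels (example)
size_3x3 = conv_out(n, k=3, s=2, p=1)    # first path: 3 x 3 kernel
size_5x5 = conv_out(n, k=5, s=2, p=2)    # other paths: 5 x 5 kernel
```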
Before the neural network model enters the training process of the invention, the feature extraction network in the Mask R-CNN network model is pre-trained; specifically, it is pre-trained using an ISPRS dataset.
In the prior art, a model would normally be trained directly on the collected data; however, because the number of data samples available to the invention is limited, training directly on the collected data would cause overfitting and weak generalization.
One remedy is to obtain more data samples, but the workload of sample collection and labelling is enormous. The invention therefore pre-trains the feature extraction network first and then trains on the collected data samples.
In fact, remote-sensing data and planar image data differ in both content and structure: ISPRS data carry spectral and spatial information whose content differs greatly from that of planar imagery, so in principle a model trained directly on remote-sensing data cannot be used to process planar image data; moreover, the features embodied in the two kinds of imagery (especially high-level features) differ, which likewise rules out directly using a remote-sensing-trained model on planar images. Even with 2D data, most current research on ecologically fragile areas uses remote-sensing data at large scales (resolution coarser than 30 m, hence low data precision), so applying such data directly to the invention would not be effective.
By analysing the low- and mid-level features of the actually acquired images and verifying the high-level features, the invention found that pre-training the feature extraction network with the ISPRS dataset and then training on the samples effectively overcomes the overfitting problem. During pre-training, to control workload and improve modelling efficiency, the invention does not train separate models for the RGB channels as in the actual processing flow, but trains the feature extraction network with the DSM images corresponding to the 2D images in the dataset; a 2D image and its DSM image are shown in fig. 2. This avoids the influence of spectral information and RGB channels. Analysis and verification show that although this weakens the influence of resolution to some extent and increases error, the stage is only pre-training, and the later training on actual images adjusts the model in the forward direction, so overall prediction accuracy is unaffected while generalization ability is greatly improved. Note that only the feature extraction network is pre-trained; all other network structures (model parameters) outside the feature extraction network are determined by the actual training process of the invention.
2.2. Converting the images in the training set to grayscale images, denoted images A;
denoting the G and B channel images of the RGB channels of the training images as images G and images B, respectively;
denoting the images with the B channel removed from the RGB channels (R and G retained) as images RG;
training the Mask R-CNN network model containing the pre-trained feature extraction network, inputting image A into the first processing path of the feature extraction network during training, and inputting images G, B and RG into the second to fourth processing paths, respectively;
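The preparation of the four network inputs can be sketched with NumPy. The grayscale conversion weights (ITU-R BT.601) are an assumption; the text only says the image is converted to grayscale.

```python
import numpy as np

def decompose(rgb):
    """Split an RGB image of shape (H, W, 3) into the four network inputs."""
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    image_a = 0.299 * r + 0.587 * g + 0.114 * b   # grayscale image A (assumed weights)
    image_g = g                                   # G-channel image
    image_b = b                                   # B-channel image
    image_rg = np.stack([r, g], axis=-1)          # B channel removed, R and G retained
    return image_a, image_g, image_b, image_rg
```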
the invention uses the idea of participating the characteristics of the gray-scale image into the characteristic extraction process to replace the process idea of adding the boundary information determined by an ACM (active contour model) into the characteristic extraction, the invention compensates the contour information of the low-level characteristics by using the high-level characteristics, compared with the neural network + ACM (such as FCN + ACM), the invention slightly sacrifices the performance of contour boundary determination, but greatly simplifies the model, because if the ACM extracted contour is adopted, the deconvolution operation needs to be added into the network for extracting the characteristics, so that the characteristic image is restored to be equal to the image size corresponding to the ACM (generally equal to the original image size), and the convolution network before deconvolution is assumed to be equal to the convolution network layer number of the invention, so that the invention is equivalent to reducing the model structure by half, so that the data processing amount is greatly reduced, thereby greatly improving the operation efficiency. More importantly, the adoption of the ACM requires a deconvolution operation, so that the feature information may be lost, which is actually an upsampling operation, and the feature map becomes large, which greatly increases the computation amount, compared with the method of the present invention, which can further reduce the computation amount.
The invention adopts an AdaBelief optimizer for training.
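The text names the AdaBelief optimizer without detail; as a hedged reference, a single parameter update can be written out in NumPy. Unlike Adam, the second moment tracks the squared deviation of the gradient from its running mean, (g - m)^2, rather than g^2. The hyperparameter values are the usual defaults, not values from the patent.

```python
import numpy as np

def adabelief_step(theta, grad, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One AdaBelief update of parameters `theta` given gradient `grad`."""
    state["t"] += 1
    t = state["t"]
    m = b1 * state["m"] + (1 - b1) * grad                    # first moment (as in Adam)
    s = b2 * state["s"] + (1 - b2) * (grad - m) ** 2 + eps   # "belief" in the gradient
    state["m"], state["s"] = m, s
    m_hat = m / (1 - b1 ** t)                                # bias correction
    s_hat = s / (1 - b2 ** t)
    return theta - lr * m_hat / (np.sqrt(s_hat) + eps)

state = {"m": np.zeros(2), "s": np.zeros(2), "t": 0}
theta = adabelief_step(np.array([1.0, -1.0]), np.array([0.5, -0.5]), state)
```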
The network structure and processing mode of the invention not only greatly reduce the computation amount but also make full use of high-level features, improving the accuracy of segmentation and prediction.
2.3, testing by using the test set to obtain a trained neural network model; otherwise, the training set and the test set are divided again, and the training is restarted.
3. Determining the time of the two monitoring nodes as t1 and t2 respectively according to the monitoring tasks; the following operations are performed for the t1 monitor node time and the t2 monitor node time, respectively:
acquiring an image aiming at a monitored area range to obtain a monitored area range image;
the monitoring area range image is obtained in the same way as the training set image.
Converting the monitored-area image to a grayscale image, denoted image A;
denoting the G and B channel images of the RGB channels of the monitored-area image as image G and image B, respectively;
denoting the images with the B channel removed from the RGB channels of the monitored-area image (R and G retained) as images RG;
inputting image A into the first processing path of the feature extraction network, inputting images G, B and RG into the second to fourth processing paths respectively, and predicting with the trained neural network model to obtain the prediction result for the image at the corresponding monitoring-node time;
and monitoring land change in the ecologically fragile area according to the changes of the segmented regions between the prediction results for the t1 and t2 monitoring-node times.
The specific detection time nodes can be set by year or by month, with comparative monitoring over several months. For example, images are collected and predicted monthly; months with severe climate change or frequent human activity are taken as key monitoring intervals, and the prediction results before and after those intervals are compared to judge the dynamic changes of each land type and, combined with other data, to obtain the distribution and dynamics of desertification, soil erosion, rocky desertification and the like.
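The comparison between the t1 and t2 prediction results can be sketched as a per-class area difference over the segmentation masks. The class encoding and the per-pixel ground area (here 8 m × 8 m, within the 5-10 m range given earlier) are illustrative assumptions.

```python
import numpy as np

# Assumed class encoding for the predicted masks (not specified in the patent).
CLASSES = {0: "background", 1: "sandy area", 2: "water area", 3: "forest", 4: "stone"}

def area_change(mask_t1, mask_t2, pixel_area_m2=64.0):
    """Ground-area change per class between the t1 and t2 prediction results,
    in square metres; positive values mean the class expanded."""
    changes = {}
    for cls, name in CLASSES.items():
        a1 = int((mask_t1 == cls).sum()) * pixel_area_m2
        a2 = int((mask_t2 == cls).sum()) * pixel_area_m2
        changes[name] = a2 - a1
    return changes
```

For example, a growing "sandy area" figure between two monitoring nodes would flag possible desertification in that interval.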
When used for land monitoring in ecologically fragile areas, the method effectively analyses image information, divides the different regions of an ecotone fully and finely, and fully analyses the trend of change of the fragile area. For example, when a monitored area is covered by vegetation but the cover is weak during vegetation degradation, existing methods cannot accurately separate the desertified land from the vegetation, so monitoring results obtained with the prior art are inaccurate, and ecological-sensitivity analysis then cannot yield accurate results.
To demonstrate the advantages of the invention, this embodiment compares it with existing methods; see Table 1.
TABLE 1
Accuracy (%)       Sandy area   Water area   Forest   Stone
SVM                61.0         82.1         67.1     59.2
Random forest      66.7         81.3         65.1     71.6
Multi-scale CNN    81.4         89.6         71.7     74.2
The invention      91.6         94.3         90.9     87.6
As the table shows, the invention obtains more accurate segmentation results, divides the monitored area more finely, and fully monitors desertified, rocky-desertified and similar areas with more accurate results, whereas monitoring modes other than manual field monitoring perform poorly.
The second embodiment is as follows:
in this embodiment, while monitoring land change in the fragile ecological environment area according to the change of each divided area in the prediction results corresponding to monitoring node times t1 and t2, meteorological data, geographic information data (including elevation data) and soil data of the monitored area can also be collected, and an ecological sensitivity evaluation index system can be constructed with reference to the "Interim Regulations on Ecological Function Zoning Technology" to evaluate the ecological sensitivity, as shown in Table 2.
TABLE 2 evaluation index and grading assignment of ecological environmental sensitivity
[Table 2 is reproduced as an image (Figure BDA0002942655170000081) in the original publication.]
Other steps and parameters are the same as in the first embodiment.
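The concrete indices and grade assignments of Table 2 are published only as an image, so the sketch below uses hypothetical index names, weights and thresholds purely to illustrate the weighted-sum form such an evaluation index system commonly takes:

```python
# Hypothetical index weights and thresholds; the actual indices and grading
# assignments of Table 2 are published as an image and are not reproduced here.
WEIGHTS = {"precipitation": 0.30, "slope": 0.25,
           "vegetation_cover": 0.25, "soil_texture": 0.20}

def sensitivity_score(grades):
    """Weighted sum of per-index grade assignments (grades such as 1/3/5/7/9)."""
    return sum(WEIGHTS[k] * grades[k] for k in WEIGHTS)

def sensitivity_level(score):
    """Map a score to a qualitative sensitivity level (thresholds illustrative)."""
    if score < 3:
        return "insensitive"
    if score < 5:
        return "slightly sensitive"
    if score < 7:
        return "sensitive"
    return "extremely sensitive"

# One evaluation cell of the monitored area, with illustrative grades.
cell = {"precipitation": 5, "slope": 7, "vegetation_cover": 3, "soil_texture": 5}
print(sensitivity_level(sensitivity_score(cell)))
```

Evaluating every cell of the monitored area in this way yields a sensitivity map that can be overlaid on the land-type changes detected between the t1 and t2 monitoring nodes.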
The third concrete implementation mode:
this embodiment is a land monitoring system for an ecologically fragile area, used to execute the land monitoring method for an ecologically fragile area described above.
The fourth concrete implementation mode:
this embodiment is a land monitoring device for an ecologically fragile area, used to store and/or run the land monitoring system for an ecologically fragile area.
The embodiment includes, but is not limited to, a memory such as a hard disk storing the land monitoring system, a personal computer (PC) storing and/or running the system, a mobile terminal device, and the like.
The present invention is capable of other embodiments and its several details are capable of modifications in various obvious respects, all without departing from the spirit and scope of the present invention.

Claims (10)

1. A land monitoring method for a fragile ecological environment area is characterized by comprising the following steps:
determining the time of the two monitoring nodes as t1 and t2 respectively according to the monitoring tasks; for the t1 monitor node time and the t2 monitor node time, the following steps are respectively carried out:
acquiring an image aiming at a monitored area range to obtain a monitored area range image;
changing the monitoring area range image into a gray scale image, and recording as an image A;
recording G, B channel images in RGB channels of the monitoring area range image as an image G and an image B respectively;
recording an image with a B channel removed from an RGB channel of the monitoring area range image as an image RG;
inputting an image A into a first processing path of a feature extraction network, respectively inputting an image G, an image B and an image RG into a second processing path to a fourth processing path of the feature extraction network, and predicting by using a trained neural network model to obtain a prediction result of the image at the time corresponding to the monitoring node;
monitoring the land change of the ecological environment fragile area according to the change of each partition area in the prediction result corresponding to the t1 monitoring node time and the t2 monitoring node time;
the neural network model adopts a Mask R-CNN network model, and the feature extraction network structure in the Mask R-CNN network model is as follows:
the first processing path comprises four convolution units from a first convolution unit to a fourth convolution unit, and the four convolution units are sequentially connected;
a first convolution unit: 3×3 convolution layer + 3×3 pooling layer;
a second convolution unit: 1×1 convolution layer + 3×3 convolution layer + 1×1 convolution layer;
a third convolution unit: 1×1 convolution layer + 3×3 convolution layer + 1×1 convolution layer;
a fourth convolution unit: 1×1 convolution layer;
the second processing path comprises four convolution units from the first convolution unit to the fourth convolution unit, and the four convolution units are sequentially connected;
a first convolution unit: 5×5 convolution layer + 3×3 pooling layer;
a second convolution unit: 1×1 convolution layer + 3×3 convolution layer + 1×1 convolution layer;
a third convolution unit: 1×1 convolution layer + 3×3 convolution layer + 1×1 convolution layer;
a fourth convolution unit: 1×1 convolution layer;
the third processing path and the fourth processing path have the same structure as the second processing path;
and after feature fusion, the feature maps of the four processing paths are connected to a 3×3 pooling layer to obtain the final feature map.
2. The method for monitoring the land in the eco-fragile area as claimed in claim 1, wherein each pixel point of the monitoring area range image represents an actual real area with a length of 5-10 m and a width of 5-10 m.
3. The method for monitoring the land in the eco-vulnerable area as claimed in claim 1, wherein, in the processing of the neural network model, the step size of the convolution operations in the first convolution unit of the first processing path and in the first convolution units of the other processing paths is adjusted, in combination with a padding operation, so that the feature maps output by the first convolution unit of the first processing path and by the first convolution units of the other processing paths have the same size.
4. The method for monitoring the land in the vulnerable area of ecological environment as claimed in claim 3, wherein the training process of the neural network model comprises the following steps:
s1, acquiring images for the area ranges corresponding to different fragile ecological environment areas to obtain area range images; performing image augmentation on the acquired area range images, and constructing an image data set from the augmented images;
performing image segmentation on each image in the image data set, recording the result as an actual segmentation result, and manually marking the divided areas of the actual segmentation result, wherein the marked types comprise sand area, water area, forest and stone, and the remaining regions are used as background; constructing a sample data set from the marked images, and dividing the sample data set into a training set and a test set.
s2, changing the images in the training set into a gray scale image, and recording the gray scale image as an image A;
recording G, B channel images in RGB channels of the images in the training set as an image G and an image B respectively;
recording the image obtained by removing the B channel from the RGB channels of each training-set image (retaining R and G) as an image RG;
training a Mask R-CNN network model, inputting an image A into a first processing path of a feature extraction network in the training process, and respectively inputting an image G, an image B and an image RG into a second processing path to a fourth processing path of the feature extraction network;
s3, testing with the test set; if the accuracy requirement is met, the trained neural network model is obtained; otherwise, the training set and the test set are divided again, and the training is restarted.
5. The method for monitoring the land in the eco-fragile area as claimed in claim 4, wherein, before the neural network model is trained, the feature extraction network in the Mask R-CNN network model is pre-trained.
6. The method for monitoring the land in the ecological fragile area as claimed in claim 5, wherein before the neural network model is trained, a feature extraction network in a Mask R-CNN network model is pre-trained by using an ISPRS data set.
7. The method for monitoring the land in the eco-fragile area as claimed in claim 6, wherein the ratio of the training set to the test set is 80%:20%.
8. The method for monitoring the land in the fragile ecological environment area according to any one of claims 1 to 7, wherein, in the process of monitoring the land change of the fragile ecological environment area according to the change of each divided area in the prediction results corresponding to the t1 monitoring node time and the t2 monitoring node time, meteorological data, geographic information data and soil data of the monitored area are collected, and an ecological sensitivity evaluation index system is constructed with reference to the "Interim Regulations on Ecological Function Zoning Technology" to evaluate the ecological sensitivity.
9. An eco-fragile zone land monitoring system for performing an eco-fragile zone land monitoring method of one of claims 1 to 8.
10. An eco-fragile zone land monitoring device for storing and/or operating an eco-fragile zone land monitoring system according to claim 9.
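The four network inputs recited in claim 1 (grayscale image A, channel images G and B, and image RG with the B channel removed) can be sketched as follows; the grayscale conversion uses the common ITU-R BT.601 weights, which is an assumption since the claims do not specify the conversion:

```python
def decompose(rgb_image):
    """Split an RGB image (nested lists of (r, g, b) tuples) into the four
    network inputs: grayscale image A, channel images G and B, and image RG
    with the B channel removed (R and G retained)."""
    # Assumed BT.601 luma weights; the patent does not state the conversion.
    A = [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
         for row in rgb_image]
    G = [[g for (_, g, _) in row] for row in rgb_image]
    B = [[b for (_, _, b) in row] for row in rgb_image]
    RG = [[(r, g) for (r, g, _) in row] for row in rgb_image]
    return A, G, B, RG

# A single-row, two-pixel illustrative image.
img = [[(120, 200, 40), (10, 20, 30)]]
A, G, B, RG = decompose(img)
```

Image A then feeds the first processing path and images G, B and RG feed the second to fourth processing paths, respectively.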
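Claim 3 requires the first convolution units of all four processing paths to output feature maps of the same size by adjusting stride and padding. Using the standard output-size formula out = ⌊(in + 2·padding − kernel) / stride⌋ + 1, a 3×3 convolution with padding 1 and a 5×5 convolution with padding 2 (both stride 1) indeed match; the concrete sizes below are illustrative:

```python
def conv_out(n, kernel, stride=1, padding=0):
    """Spatial output size of a convolution or pooling layer."""
    return (n + 2 * padding - kernel) // stride + 1

n = 224  # illustrative input width/height in pixels

# First convolution unit of the first processing path: 3x3 convolution.
path1 = conv_out(n, kernel=3, stride=1, padding=1)
# First convolution units of the other processing paths: 5x5 convolution.
paths_2_to_4 = conv_out(n, kernel=5, stride=1, padding=2)
assert path1 == paths_2_to_4 == n  # same feature-map size, as claim 3 requires

# Both are then followed by a 3x3 pooling layer (stride/padding illustrative).
pooled = conv_out(n, kernel=3, stride=2, padding=1)
print(pooled)  # 112
```

With matched sizes, the feature maps of the four paths can be fused channel-wise before the final 3×3 pooling layer of claim 1.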
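The dataset division of claims 4 and 7 (an 80%:20% training/test split, re-divided when the test fails) can be sketched as a shuffled split; the random seed and sample identifiers are illustrative:

```python
import random

def split_dataset(samples, train_frac=0.8, seed=0):
    """Shuffle the marked sample set and split it into training and test sets.

    Changing the seed re-divides the sets, as step s3 requires on a failed test."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Illustrative sample identifiers standing in for marked images.
train_set, test_set = split_dataset(range(100))
print(len(train_set), len(test_set))  # 80 20
```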
CN202110183122.9A 2021-02-10 2021-02-10 Method, system and device for monitoring land in ecological environment fragile area Active CN112861732B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110183122.9A CN112861732B (en) 2021-02-10 2021-02-10 Method, system and device for monitoring land in ecological environment fragile area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110183122.9A CN112861732B (en) 2021-02-10 2021-02-10 Method, system and device for monitoring land in ecological environment fragile area

Publications (2)

Publication Number Publication Date
CN112861732A true CN112861732A (en) 2021-05-28
CN112861732B CN112861732B (en) 2021-11-02

Family

ID=75988299

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110183122.9A Active CN112861732B (en) 2021-02-10 2021-02-10 Method, system and device for monitoring land in ecological environment fragile area

Country Status (1)

Country Link
CN (1) CN112861732B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516083A (en) * 2021-07-19 2021-10-19 中国农业科学院草原研究所 Ecological restoration modeling method for vegetation in abandoned farmland in grassland area
CN114564893A (en) * 2022-03-02 2022-05-31 东北林业大学 Wetland plant diversity monitoring and situation optimization method
CN115131370A (en) * 2022-07-04 2022-09-30 东北林业大学 Forest carbon sequestration and oxygen release capacity and benefit evaluation method and equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109670515A (en) * 2018-12-13 2019-04-23 南京工业大学 Method and system for detecting building change in unmanned aerial vehicle image
CN109886238A (en) * 2019-03-01 2019-06-14 湖北无垠智探科技发展有限公司 Unmanned plane Image Change Detection algorithm based on semantic segmentation
CN110930375A (en) * 2019-11-13 2020-03-27 广东国地规划科技股份有限公司 Method, system and device for monitoring land coverage change and storage medium
CN111462218A (en) * 2020-03-16 2020-07-28 西安理工大学 Urban waterlogging area monitoring method based on deep learning technology
CN111476129A (en) * 2020-03-27 2020-07-31 潍坊申海科技有限公司 Soil impurity detection method based on deep learning
CN111898477A (en) * 2020-07-13 2020-11-06 东南大学 Method for rapidly detecting changed building based on new and old time phase images of unmanned aerial vehicle


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHI Baokun et al.: "Farmland parcel recognition based on Mask R-CNN network", Modern Information Technology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516083A (en) * 2021-07-19 2021-10-19 中国农业科学院草原研究所 Ecological restoration modeling method for vegetation in abandoned farmland in grassland area
CN114564893A (en) * 2022-03-02 2022-05-31 东北林业大学 Wetland plant diversity monitoring and situation optimization method
CN115131370A (en) * 2022-07-04 2022-09-30 东北林业大学 Forest carbon sequestration and oxygen release capacity and benefit evaluation method and equipment
CN115131370B (en) * 2022-07-04 2023-04-18 东北林业大学 Forest carbon sequestration and oxygen release capacity and benefit evaluation method and equipment

Also Published As

Publication number Publication date
CN112861732B (en) 2021-11-02

Similar Documents

Publication Publication Date Title
US11521379B1 (en) Method for flood disaster monitoring and disaster analysis based on vision transformer
CN112861732B (en) Method, system and device for monitoring land in ecological environment fragile area
Li et al. Effects of land use changes on soil erosion in a fast developing area
Ioannis et al. Multi-temporal Landsat image classification and change analysis of land cover/use in the Prefecture of Thessaloiniki, Greece
CN103208028A (en) Waterfowl habitat suitability evaluation method based on combination of remote sensing and geographical information system (GIS)
CN112070056A (en) Sensitive land use identification method based on object-oriented and deep learning
CN113821925A (en) Wetland dynamic boundary determination method based on three elements of aquatic soil
Sabr et al. Assessment of land use and land cover change using spatiotemporal analysis of landscape: case study in south of Tehran
Bashir et al. Exploring geospatial techniques for spatiotemporal change detection in land cover dynamics along Soan River, Pakistan
Bindajam et al. Characterizing the urban decadal expansion and its morphology using integrated spatial approaches in semi-arid mountainous environment, Saudi Arabia
CN112166688B (en) Method for monitoring desert and desertification land based on minisatellite
CN117171533B (en) Real-time acquisition and processing method and system for geographical mapping operation data
CN113158770A (en) Improved mining area change detection method of full convolution twin neural network
Ramachandra et al. Exposition of urban structure and dynamics through gradient landscape metrics for sustainable management of Greater Bangalore
DEMİR et al. Analysis Temporal Land Use/Land Cover Change Based on Landscape Pattern and Dynamic Metrics in Protected Mary Valley, Trabzon from 1987 to 2015
Idris et al. Application of artificial neural network for building feature extraction in Abuja
Hiew et al. Land use classification and land use change analysis using satellite images in Lojing, Kelantan.
Vohra et al. Multi-scale extraction and spatial analysis of growth pattern changes in urban water bodies using sentinel-2 MSI imagery: a study in the central part of India
Yang et al. Post-earthquake spatio-temporal landslide analysis of Huisun Experimental Forest Station.
Parveen et al. Land use land cover mapping with change detection: A spatio-temporal analysis of NCT of Delhi from 1981 to 2015
Sawant et al. Temporal analysis of land use/land cover change in the Krishna river sub-basin using Google Earth Engine
Im et al. A genetic algorithm approach to moving threshold optimization for binary change detection
Al‐ysari et al. The effect of natural factors on changing soil uses in the marshes: An experimental study using Landsat satellite data
CN110440722B (en) Construction index construction method suitable for medium infrared-free data
Kaffo Applying a Landscape Ecology Approach to Forest Disturbance from Marcellus Shale Gas Development: A Case Study of Center Township, Greene County, Pennsylvania

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant