CN117315460A - FarSeg algorithm-based dust source extraction method for construction sites of urban construction area - Google Patents
FarSeg algorithm-based dust source extraction method for construction sites of urban construction area
Info
- Publication number
- CN117315460A (Application CN202311190699.8A)
- Authority
- CN
- China
- Prior art keywords
- farseg
- data
- dust source
- algorithm
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/176—Urban or other man-made structures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/17—Terrestrial scenes taken from planes or by drones
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A30/00—Adapting or protecting infrastructure or their operation
- Y02A30/60—Planning or developing urban green infrastructure
Abstract
The invention discloses a FarSeg-algorithm-based method for extracting construction-site dust sources in urban built-up areas, belonging to the technical field of computer vision. The method comprises the following steps: collecting remote sensing data of an urban built-up area and preprocessing it; labeling construction-site dust sources on the preprocessed remote sensing data, producing labeled samples, and applying data augmentation to the labeled samples; training a FarSeg model, inputting the data to be tested into the trained FarSeg model, performing semantic image segmentation, and extracting dust sources to obtain the corresponding vector results; and post-processing and optimizing the vector results to obtain the dust-source region boundaries of the data to be tested. Based on the FarSeg algorithm, the invention achieves high-precision extraction of construction-site dust sources from remote sensing images of urban built-up areas; through semantic image segmentation, the positions and boundaries of dust sources at construction sites in urban built-up areas are captured comprehensively and accurately, more accurate environmental monitoring data are provided, and urban management departments are helped to formulate targeted dust-pollution control measures.
Description
Technical Field
The invention relates to the technical field of computer vision, in particular to a FarSeg-algorithm-based method for extracting construction-site dust sources in urban built-up areas.
Background
Remote sensing image processing and deep learning techniques have developed rapidly in recent years. In remote sensing image processing, traditional segmentation methods such as edge detection and threshold segmentation can extract construction-site dust sources to some extent, but their performance is limited on complex scenes and remote sensing images. The rise of deep learning, and of semantic segmentation algorithms in particular, has brought new breakthroughs to the extraction of construction-site dust sources in urban built-up areas from remote sensing images.
Several approaches currently exist for extracting dust sources at construction sites in urban built-up areas. Traditional dust-source monitoring methods rely mainly on hand-crafted rules and thresholds; they struggle to fully mine the complex features in images, so extraction precision is limited and accuracy is low. They also depend on expert experience, which introduces subjectivity and instability, and they require extensive manual intervention and complex computation, so extraction is inefficient and cannot meet real-time monitoring needs. Image-processing-based approaches, commonly denoising, registration, and correction, improve image quality but do not address the dust-source extraction problem itself. Classical machine learning methods such as Support Vector Machines (SVM) and Random Forests are also often applied to remote sensing images.
Although these methods achieve some success in extracting dust sources at construction sites in urban built-up areas, they still have problems and limitations: they cope poorly with the diverse dust-source scenes of urban built-up areas, and their extraction performance is unstable across different types of sites.
Therefore, improving the accuracy and efficiency of dust-source extraction is a problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the invention provides a FarSeg-algorithm-based method for extracting construction-site dust sources in urban built-up areas. By combining deep learning with remote sensing image processing, it achieves high-precision extraction of construction-site dust sources from remote sensing images of urban built-up areas; through semantic image segmentation, the positions and boundaries of dust sources are captured comprehensively and accurately, providing more accurate environmental monitoring data and helping urban management departments formulate targeted dust-pollution control measures.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
a method for extracting dust source of construction site of urban construction area based on FarSeg algorithm comprises the following specific steps:
collecting remote sensing data of a built-up area of a city, and preprocessing the remote sensing data;
marking a construction site dust source on the preprocessed remote sensing data to obtain marked data; making the labeling data into labeling samples, and carrying out data enhancement on the labeling samples;
training the FarSeg model according to the labeling sample after data enhancement to obtain a trained FarSeg model;
inputting data to be tested into a trained FarSeg model, performing image semantic segmentation, extracting a dust source, and obtaining a corresponding vector result;
and carrying out post-processing optimization on the vector result to obtain the dust source region boundary of the data to be detected.
Preferably, the remote sensing data are acquired by satellite or unmanned aerial vehicle (UAV) remote sensing and offer wide coverage and high resolution; preprocessing includes cloud removal, radiometric calibration, atmospheric correction, and image registration to ensure data quality and consistency.
Preferably, labeling uses pixel-level labels suitable for semantic segmentation; the labeled data cover various types of dust-source scenes so that the model learns rich feature representations.
Preferably, labeled samples are produced as follows: the vectors of the labeled data are rasterized, and the images and corresponding rasterized vectors are sliced at a preset size; slicing converts one whole large remote sensing image and its vector labels into small, mutually corresponding samples, thereby producing the labeled samples.
Preferably, data augmentation includes rotation, cropping, resampling, and contrast change.
Preferably, the FarSeg model is trained as follows: labeled samples are fed into the neural network of the FarSeg model in batches, and the features of dust-source scenes are learned through iterative optimization, improving dust-source recognition accuracy and yielding the trained FarSeg model.
Preferably, the FarSeg model comprises: a backbone network, an ASPP module, a progressive segmentation module, a feature fusion module, and a semantic image segmentation module;
the backbone network extracts features from the input data to be tested, producing several groups of feature maps carrying semantic information at different resolutions;
the ASPP module applies atrous (dilated) convolutions to the extracted feature maps, enlarging the receptive field and capturing semantic information at different scales under different dilation rates, so the model can interpret image content under both small and large receptive fields;
the progressive segmentation module gradually upsamples the high-level semantic features and fuses them with the low-level semantic features, progressively raising the feature resolution to obtain segmentation results;
the feature fusion module fuses the segmentation results to obtain fused semantic information;
the semantic image segmentation module classifies each pixel of the data to be tested according to the fused semantic information, distinguishes dust-source pixels from pixels of other categories, and extracts the dust sources.
Preferably, post-processing optimization includes noise removal, hole filling, and edge smoothing: noise removal deletes patches whose area is below a preset value; hole filling fills holes whose area is below a preset value; edge smoothing flattens noisy protrusions along the edges of all patches.
According to the above technical scheme, the invention discloses a FarSeg-algorithm-based method for extracting construction-site dust sources in urban built-up areas. It makes full use of the strengths of deep learning, extracting dust sources by semantic segmentation, and improves extraction accuracy and efficiency through algorithm optimization and post-processing, solving the low accuracy, low efficiency, and poor generality of the prior art and making the method better suited to practical applications such as urban planning and environmental protection.
Compared with the prior art, the invention has the following advantages: 1. Significantly improved extraction accuracy: the deep learning model FarSeg is used to extract dust sources; compared with traditional rule- or threshold-based methods, a deep learning model can learn complex feature representations, effectively improving extraction accuracy and reducing the miss rate and false-detection rate. Through pixel-level semantic segmentation, the method captures the positions and boundaries of construction-site dust sources in urban built-up areas more comprehensively and accurately.
2. Greatly improved efficiency: traditional methods require extensive manual intervention and complex computation, making dust-source extraction inefficient; the invention adopts a deep learning model that supports parallel computation, improving extraction efficiency and meeting the real-time monitoring needs of large urban built-up areas.
3. Strong adaptability: remote sensing images are used as input, offering wide coverage and high resolution, which suits the dust-source extraction task in urban built-up areas; meanwhile, the deep learning model learns abstract features and thus adapts to complex and diverse dust-source scenes.
4. Strong generality: the FarSeg algorithm serves as the base model; since it is widely applicable in semantic image segmentation, the technical scheme generalizes well, can be used for other remote sensing image segmentation tasks, and has broad application prospects.
5. Fine-grained environmental monitoring: accurately extracting construction-site dust sources in urban built-up areas provides finer-grained data support for urban environmental monitoring; urban management departments can formulate targeted dust-pollution control measures from dust-source information extracted in real time, effectively improving air quality and protecting residents' health.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of the dust source extraction method provided by the invention;
FIG. 2 is a flow chart of labeled-sample production provided by the invention;
FIG. 3 is a diagram showing the effect of data augmentation provided by the invention;
FIG. 4 is a diagram showing the network structure of the FarSeg model provided by the invention;
FIG. 5 is an accuracy evaluation table comparing the models tested;
FIG. 6 is a diagram comparing the verification points with the identification points provided by the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
The embodiment of the invention discloses a FarSeg-algorithm-based method for extracting construction-site dust sources in an urban built-up area, comprising the following steps:
collecting remote sensing data of an urban built-up area and preprocessing the remote sensing data;
labeling construction-site dust sources on the preprocessed remote sensing data to obtain labeled data; producing labeled samples from the labeled data and applying data augmentation to the labeled samples;
training a FarSeg model on the augmented labeled samples to obtain a trained FarSeg model;
inputting data to be tested into the trained FarSeg model, performing semantic image segmentation, and extracting dust sources to obtain corresponding vector results;
and post-processing and optimizing the vector results to obtain the dust-source region boundaries of the data to be tested.
Specifically, the remote sensing data are acquired by satellite or unmanned aerial vehicle remote sensing and offer wide coverage and high resolution; preprocessing includes cloud removal, radiometric calibration, atmospheric correction, and image registration to ensure data quality and consistency.
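The patent gives no preprocessing code, so the following is a minimal Python sketch of the radiometric-calibration and cloud-screening steps only, assuming the rasterio library, hypothetical gain/offset coefficients (real values come from the GF-1/GF-6 product metadata), and a crude brightness threshold in place of a genuine cloud-detection algorithm; atmospheric correction and registration are omitted because they are normally performed with dedicated tools.

```python
import numpy as np
import rasterio

# Hypothetical calibration coefficients; real ones come from image metadata.
GAIN, OFFSET = 0.18, 0.0

def calibrate_and_mask(in_path: str, out_path: str, cloud_dn: float = 900.0) -> None:
    """Convert raw DN to top-of-atmosphere radiance (L = GAIN * DN + OFFSET)
    and zero out pixels whose mean DN exceeds a crude 'cloud' threshold."""
    with rasterio.open(in_path) as src:
        dn = src.read().astype("float32")        # shape: (bands, rows, cols)
        profile = src.profile
    radiance = GAIN * dn + OFFSET                # radiometric calibration
    cloud = dn.mean(axis=0) > cloud_dn           # simple brightness screen
    radiance[:, cloud] = 0.0                     # remove cloud pixels
    profile.update(dtype="float32")
    with rasterio.open(out_path, "w", **profile) as dst:
        dst.write(radiance)
```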
Specifically, labeling uses pixel-level labels suitable for semantic segmentation; the labeled data cover various types of dust-source scenes to ensure the model learns rich feature representations.
Specifically, labeled samples are produced as follows: the vectors of the labeled data are rasterized, and the images and corresponding rasterized vectors are sliced into fixed-size 512 × 512 tiles; slicing converts one whole large remote sensing image and its vector labels into small, mutually corresponding samples, thereby producing the labeled samples.
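A minimal sketch of this rasterize-then-slice flow, assuming the rasterio and geopandas libraries, a single foreground class, and tiles aligned to a regular 512-pixel grid; handling of edge remainders (padding or overlap) is omitted.

```python
import geopandas as gpd
import rasterio
from rasterio import features

TILE = 512  # fixed slice size used in this embodiment

def make_samples(image_path: str, shp_path: str):
    """Rasterize Shapefile labels onto the image grid, then slice the image
    and mask into matching 512 x 512 tiles."""
    with rasterio.open(image_path) as src:
        image = src.read()                         # (bands, H, W)
        transform, h, w = src.transform, src.height, src.width
    shapes = ((geom, 1) for geom in gpd.read_file(shp_path).geometry)
    mask = features.rasterize(shapes, out_shape=(h, w),
                              transform=transform, fill=0, dtype="uint8")
    tiles = []
    for r in range(0, h - TILE + 1, TILE):
        for c in range(0, w - TILE + 1, TILE):
            tiles.append((image[:, r:r + TILE, c:c + TILE],
                          mask[r:r + TILE, c:c + TILE]))
    return tiles
```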
Specifically, data augmentation includes rotation, cropping, resampling, and contrast change.
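A possible joint implementation of three of these operations (rotation, cropping, and contrast change), applied to an image tile and its mask together so that labels stay aligned; resampling is omitted, and the crop ratio and contrast range are illustrative assumptions.

```python
import numpy as np

def augment(image: np.ndarray, mask: np.ndarray,
            rng: np.random.Generator = np.random.default_rng()):
    """Apply one random augmentation to an (image, mask) pair: rotation,
    cropping, or contrast change. Geometric transforms are applied to the
    image (bands, H, W) and mask (H, W) identically."""
    choice = rng.integers(3)
    if choice == 0:                               # rotate by 90/180/270 deg
        k = int(rng.integers(1, 4))
        return np.rot90(image, k, axes=(1, 2)), np.rot90(mask, k)
    if choice == 1:                               # random crop to 80% size
        h, w = mask.shape
        size = int(min(h, w) * 0.8)
        r = int(rng.integers(0, h - size + 1))
        c = int(rng.integers(0, w - size + 1))
        return image[:, r:r + size, c:c + size], mask[r:r + size, c:c + size]
    factor = rng.uniform(0.7, 1.3)                # contrast change
    mean = image.mean(axis=(1, 2), keepdims=True)
    return np.clip((image - mean) * factor + mean, 0, None), mask
```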
Specifically, the FarSeg model is trained as follows: labeled samples are fed into the neural network of the FarSeg model in batches, and the features of dust-source scenes are learned through iterative optimization, improving dust-source recognition accuracy and yielding the trained FarSeg model.
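As an illustration of feeding labeled samples in batches and learning through iterative optimization, here is a minimal training-loop sketch. It is written in PyTorch purely for illustration (Example 2 below uses the PaddlePaddle framework, whose loop is analogous); the model, dataset, and hyperparameters are placeholders, not values taken from the patent.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader

def train(model: nn.Module, dataset, epochs: int = 50, batch_size: int = 8,
          lr: float = 1e-3, device: str = "cuda") -> nn.Module:
    """Feed labeled samples to the network in batches and iteratively
    optimize so the model learns dust-source scene features."""
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    model = model.to(device).train()
    criterion = nn.CrossEntropyLoss()            # pixel-wise classification
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        for images, masks in loader:             # masks: (N, H, W) class ids
            images, masks = images.to(device), masks.to(device)
            logits = model(images)               # (N, num_classes, H, W)
            loss = criterion(logits, masks)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch + 1}: last-batch loss = {loss.item():.4f}")
    return model
```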
Specifically, the FarSeg model comprises: a backbone network, an ASPP module, a progressive segmentation module, a feature fusion module, and a semantic image segmentation module;
the backbone network extracts features from the input data to be tested, producing several groups of feature maps carrying semantic information at different resolutions;
the ASPP module applies atrous (dilated) convolutions to the extracted feature maps, enlarging the receptive field and capturing semantic information at different scales under different dilation rates, so the model can interpret image content under both small and large receptive fields;
the progressive segmentation module gradually upsamples the high-level semantic features and fuses them with the low-level semantic features, progressively raising the feature resolution to obtain segmentation results;
the feature fusion module fuses the segmentation results to obtain fused semantic information;
the semantic image segmentation module classifies each pixel of the data to be tested according to the fused semantic information, distinguishes dust-source pixels from pixels of other categories, and extracts the dust sources.
Specifically, post-processing optimization includes noise removal, hole filling, and edge smoothing: noise removal deletes patches whose area is below a preset value; hole filling fills holes whose area is below a preset value; edge smoothing flattens noisy protrusions along the edges of all patches.
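The patent applies these three operations to the vector results; the sketch below shows the raster-domain equivalent using scikit-image morphology, with an assumed area threshold and smoothing radius standing in for the patent's unspecified preset values.

```python
import numpy as np
from skimage import morphology

def postprocess(mask: np.ndarray, min_area: int = 64,
                smooth_radius: int = 2) -> np.ndarray:
    """Clean a binary dust-source mask: drop small patches (noise removal),
    fill small holes (hole filling), and flatten jagged boundary bumps
    (edge smoothing)."""
    mask = mask.astype(bool)
    mask = morphology.remove_small_objects(mask, min_size=min_area)
    mask = morphology.remove_small_holes(mask, area_threshold=min_area)
    footprint = morphology.disk(smooth_radius)
    mask = morphology.binary_closing(mask, footprint)  # fill boundary dents
    mask = morphology.binary_opening(mask, footprint)  # shave boundary bumps
    return mask
```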
The workflow of this embodiment is shown in FIG. 1 and operates as follows: remote sensing data are collected and preprocessed, the preprocessing comprising cloud removal, radiometric calibration, atmospheric correction, and image registration; professionals then label the data manually.
In the remote sensing field, a whole scene image is usually very large, and labeling results are generally stored in the Shapefile vector format, which cannot be used directly for model training; the labeled data must therefore be processed into suitable samples. The sample production flow is shown in FIG. 2: the vector is rasterized, and the image and the corresponding rasterized vector are sliced into 512 × 512 tiles. In short, slicing converts one whole large remote sensing image and its corresponding vector labels into small, mutually corresponding samples, thereby producing the samples.
Before formal training, the samples can be augmented to improve the generality of the model. Common augmentation operations are rotation, contrast change, and cropping, as illustrated in FIG. 3: images A and a are the originals, B and b the rotated versions, C and c the contrast-changed versions, and D and d the cropped versions. The augmented data are used to train the deep learning model. The network model adopted in this embodiment is FarSeg, whose structure is shown in FIG. 4. The trained model can interpret other images to produce a result vector; in general this vector is not the final result, and post-processing optimization such as noise removal, hole filling, and edge smoothing is needed to form the final data.
The FarSeg model of this embodiment is improved and optimized on the basis of the DeepLab v3+ model and mainly comprises a backbone network, an atrous convolution module, a progressive segmentation module, and a feature fusion module. FarSeg uses the backbone network as a feature extractor that learns low-level and high-level image features, helping the model understand image content. To capture feature information under different receptive fields, FarSeg uses an atrous convolution module: ASPP enlarges the receptive field by convolving at different dilation rates, effectively capturing semantic information at different scales in the image. FarSeg also introduces a progressive segmentation module, its most significant difference from DeepLab v3+: through progressive feature decoding, features are gradually upsampled to the resolution of the original image and fused with low-level features, yielding finer segmentation results.
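The full FarSeg network is not reproduced here; the sketch below illustrates only the two components this paragraph describes, an ASPP block built from parallel atrous (dilated) convolutions and one progressive-decoder step that upsamples high-level features and fuses them with low-level ones. It is written in PyTorch for illustration; the channel counts and dilation rates are assumptions, not the patent's configuration.

```python
import torch
from torch import nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """Parallel dilated convolutions enlarge the receptive field and
    capture semantic information at multiple scales."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True))
            for r in rates)
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

class ProgressiveStep(nn.Module):
    """One decoding step: upsample high-level features to the low-level
    feature resolution and fuse the two by convolution."""
    def __init__(self, high_ch: int, low_ch: int, out_ch: int):
        super().__init__()
        self.fuse = nn.Conv2d(high_ch + low_ch, out_ch, 3, padding=1)

    def forward(self, high: torch.Tensor, low: torch.Tensor) -> torch.Tensor:
        high = F.interpolate(high, size=low.shape[2:], mode="bilinear",
                             align_corners=False)
        return F.relu(self.fuse(torch.cat([high, low], dim=1)))
```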
In this embodiment, the FarSeg network model was compared with other models such as UNet, SegNet, and PSPNet; IoU, F1 score, recall, and precision were computed for each model, and FarSeg performed best, as shown in FIG. 5.
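The indicators compared in FIG. 5 can all be derived from a pixel-level confusion matrix; a small sketch for the binary dust-source case:

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Pixel-level IoU, precision, recall, and F1 for a binary mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)            # true positives
    fp = np.sum(pred & ~truth)           # false positives
    fn = np.sum(~pred & truth)           # false negatives
    eps = 1e-9                           # avoid division by zero
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return {"IoU": tp / (tp + fp + fn + eps),
            "Precision": precision,
            "Recall": recall,
            "F1": 2 * precision * recall / (precision + recall + eps)}
```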
Example 2
GF-1 and GF-6 imagery of city A from four months (May 2022, October 2022, February 2023, and May 2023) was selected; after preprocessing operations such as radiometric correction, atmospheric correction, and fusion, 24 remote sensing images at 2 m resolution were obtained. Sample labeling was completed by manual visual interpretation, and a slicing script developed in Python produced 13,000 slice samples of size 512 × 512. An image segmentation model was trained with the domestic PaddlePaddle deep learning framework to obtain accurate dust-source results. Verification showed a pixel-level precision of 85% and a recall of 89%.
In city B, a simple verification was carried out with the model trained in Example 1. GF-1 imagery from June 2023 was acquired; the 2 m resolution images underwent preprocessing operations such as radiometric calibration, atmospheric correction, and fusion, and the segmentation model trained on the FarSeg network produced accurate dust-source interpretation results. Eight locations were then selected and checked manually by their longitude and latitude in ArcMap to confirm whether construction-site dust sources actually existed there; comparison with the model's interpretation showed that all eight were correctly identified, as shown in FIG. 6.
In the present specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, and identical and similar parts between the embodiments are all enough to refer to each other. For the device disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (8)
1. A FarSeg-algorithm-based method for extracting construction-site dust sources in an urban built-up area, characterized by comprising the following steps:
collecting remote sensing data of an urban built-up area and preprocessing the remote sensing data;
labeling construction-site dust sources on the preprocessed remote sensing data to obtain labeled data; producing labeled samples from the labeled data and applying data augmentation to the labeled samples;
training a FarSeg model on the augmented labeled samples to obtain a trained FarSeg model;
inputting data to be tested into the trained FarSeg model, performing semantic image segmentation, and extracting dust sources to obtain corresponding vector results;
and post-processing and optimizing the vector results to obtain the dust-source region boundaries of the data to be tested.
2. The FarSeg-algorithm-based method for extracting construction-site dust sources in an urban built-up area according to claim 1, characterized in that the remote sensing data are acquired by satellite remote sensing or unmanned aerial vehicle; the preprocessing steps include cloud removal, radiometric calibration, atmospheric correction, and image registration.
3. The FarSeg-algorithm-based method for extracting construction-site dust sources in an urban built-up area according to claim 1, characterized in that labeling uses pixel-level labels for semantic segmentation; the labeled data include various types of dust-source scenes.
4. The FarSeg-algorithm-based method for extracting construction-site dust sources in an urban built-up area according to claim 1, characterized in that the labeled samples are produced as follows: the vectors of the labeled data are rasterized, the images in the labeled data and the corresponding rasterized vectors are sliced at a preset size, and all slices are made into labeled samples.
5. The FarSeg-algorithm-based method for extracting construction-site dust sources in an urban built-up area according to claim 1, characterized in that the data augmentation comprises rotation, cropping, resampling, and contrast change.
6. The FarSeg-algorithm-based method for extracting construction-site dust sources in an urban built-up area according to claim 3, characterized in that the FarSeg model is trained as follows: the labeled samples are fed into the neural network of the FarSeg model in batches, and the features of dust-source scenes are learned through iterative optimization to obtain the trained FarSeg model.
7. The FarSeg-algorithm-based method for extracting construction-site dust sources in an urban built-up area according to claim 1, characterized in that the FarSeg model comprises: a backbone network, an ASPP module, a progressive segmentation module, a feature fusion module, and a semantic image segmentation module;
the backbone network extracts features from the input data to be tested, producing several groups of feature maps carrying semantic information at different resolutions;
the ASPP module applies atrous convolutions to the extracted feature maps, enlarging the receptive field and capturing semantic information at different scales under different dilation rates;
the progressive segmentation module gradually upsamples the high-level semantic features and fuses them with the low-level semantic features to obtain segmentation results;
the feature fusion module fuses the segmentation results to obtain fused semantic information;
the semantic image segmentation module classifies each pixel of the data to be tested according to the fused semantic information, distinguishes dust-source pixels from pixels of other categories, and extracts the dust sources.
8. The FarSeg-algorithm-based method for extracting construction-site dust sources in an urban built-up area according to claim 1, characterized in that the post-processing optimization comprises noise removal, hole filling, and edge smoothing: noise removal deletes patches whose area is below a preset value; hole filling fills holes whose area is below a preset value; edge smoothing flattens noisy protrusions along the edges of all patches.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311190699.8A CN117315460A (en) | 2023-09-15 | 2023-09-15 | FarSeg algorithm-based dust source extraction method for construction sites of urban construction area |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311190699.8A CN117315460A (en) | 2023-09-15 | 2023-09-15 | FarSeg algorithm-based dust source extraction method for construction sites of urban construction area |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117315460A true CN117315460A (en) | 2023-12-29 |
Family
ID=89254551
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311190699.8A Pending CN117315460A (en) | 2023-09-15 | 2023-09-15 | FarSeg algorithm-based dust source extraction method for construction sites of urban construction area |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117315460A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110175638A (en) * | 2019-05-13 | 2019-08-27 | 北京中科锐景科技有限公司 | A kind of fugitive dust source monitoring method |
CN114419449A (en) * | 2022-03-28 | 2022-04-29 | 成都信息工程大学 | Self-attention multi-scale feature fusion remote sensing image semantic segmentation method |
CN116071666A (en) * | 2023-02-16 | 2023-05-05 | 长光卫星技术股份有限公司 | Bare soil dust source monitoring method based on high-resolution satellite image |
- 2023-09-15: Application CN202311190699.8A filed; publication CN117315460A (status: Pending)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110175638A (en) * | 2019-05-13 | 2019-08-27 | 北京中科锐景科技有限公司 | A kind of fugitive dust source monitoring method |
CN114419449A (en) * | 2022-03-28 | 2022-04-29 | 成都信息工程大学 | Self-attention multi-scale feature fusion remote sensing image semantic segmentation method |
CN116071666A (en) * | 2023-02-16 | 2023-05-05 | 长光卫星技术股份有限公司 | Bare soil dust source monitoring method based on high-resolution satellite image |
Non-Patent Citations (2)
Title |
---|
ZHUO ZHENG et al.: "Foreground-Aware Relation Network for Geospatial Object Segmentation in High Spatial Resolution Remote Sensing Imagery", CVF, 31 December 2020, pages 4096-4105 *
LIU Chunting: "Extraction and spatio-temporal change analysis of dust-proof green nets in Jinan based on the DeepLabv3+ semantic segmentation model", Journal of Remote Sensing, 31 December 2022 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |