CN116580309A - Surface mine stope extraction method combining deep learning and object-oriented analysis
- Publication number
- CN116580309A (application CN202310855540.7A)
- Authority
- CN
- China
- Prior art keywords
- segmentation
- remote sensing
- surface mine
- oriented
- researched
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS; G06—COMPUTING; G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
  - G06V20/10—Terrestrial scenes (Scenes; Scene-specific elements)
  - G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
  - G06V10/40—Extraction of image or video features
  - G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
  - G06V10/82—Recognition or understanding using pattern recognition or machine learning using neural networks
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
  - G06N3/0464—Convolutional networks [CNN, ConvNet]
  - G06N3/08—Learning methods
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
  - Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The application discloses a surface mine stope extraction method combining deep learning and object-oriented analysis, comprising the following steps: a deep learning model performs preliminary identification on the remote sensing image of the study area to obtain spatial information results, comprising spatial positions and surface coverage areas, for all surface mine stopes in the image; optimal segmentation parameters of an object-oriented segmentation model, comprising segmentation scale, shape factor and compactness factor, are determined, and the corresponding segmentation result is taken as the final object-oriented segmentation result of the remote sensing image; the spatial information extraction results of the surface mine stopes are then combined with the object-oriented segmentation result of the remote sensing image to obtain all surface mine stopes in the image and their vector boundaries. The application achieves fine extraction of surface mine stope boundaries; the extracted stopes carry more complete boundary information and agree more closely with the actual boundaries.
Description
Technical Field
The application relates to the technical field of mineral resource development and supervision, and in particular to a surface mine stope extraction method combining deep learning and object-oriented analysis.
Background
With the development of Earth observation technology, remote sensing has become an important means for mineral resource development monitoring, mine geological environment investigation and monitoring, ecological environment monitoring, and related work. Experts and scholars have successively carried out related research, but in this research the information acquisition for typical surface elements of mining areas still relies mainly on expert visual interpretation and conventional human-computer interaction, and the degree of automation and intelligence of these technical methods is insufficient.
A deep learning method can hierarchically learn the most representative and separable features in a data set, end to end, and is widely applied in research fields such as pattern recognition. Compared with classical machine learning algorithms, which require expert experience to construct and select target features, deep learning can learn sample features autonomously without manually constructing features or designing rules, effectively improving the degree of automation and intelligence.
Although deep learning has shown good performance, some problems remain when it is applied to wide-area surface mine stope boundary extraction. On the one hand, deep learning requires a large number of samples, yet surface mine stopes occupy only a small portion of the ground surface, so only a small number of samples can be obtained for large-scale extraction, which cannot adequately support stope boundary extraction. On the other hand, surface mine scenes are complex and exhibit complicated spectral features, so a deep learning model may identify only parts of a scene, and fragmented regions and holes easily appear.
Disclosure of Invention
In view of the current state of the art, the application provides a surface mine stope extraction method combining deep learning and object-oriented analysis, which achieves fine extraction of surface mine stope boundaries; the extracted stopes carry more complete boundary information and agree more closely with the actual boundaries.
In order to achieve the above purpose, the application adopts the following technical scheme:
The surface mine stope extraction method combining deep learning and object-oriented analysis comprises the following steps:
S1, spatial information extraction for surface mine stopes:
S1.1, manually interpreting n surface mine stopes from the remote sensing image of the study area, taking the obtained manual interpretation objects as a training sample set, and training a deep learning model on this training sample set;
S1.2, performing preliminary identification on the remote sensing image of the study area with the trained deep learning model to obtain spatial information results, comprising spatial positions and surface coverage areas, for all surface mine stopes in the image;
S2, object-oriented segmentation of the remote sensing image:
S2.1, configuring object-oriented segmentation models with different segmentation parameters and segmenting the remote sensing image of the study area with each of them to obtain several groups of object-oriented segmentation results, wherein the segmentation parameters comprise segmentation scale, shape factor and compactness factor;
S2.2, for each group of object-oriented segmentation results, performing superposition analysis with the m manually interpreted objects, screening out the segmentation objects that intersect the m manually interpreted objects, calculating the coincidence degree D_i between each screened segmentation object and the manually interpreted object intersecting it, and calculating the average coincidence degree D̄, with the calculation formulas
D_i = min(S_A(i), S_B(i)) / max(S_A(i), S_B(i)), D̄ = (1/m) · Σ_{i=1..m} D_i,
wherein S_A(i) is the area calculation function for the manually interpreted object, S_B(i) is the area calculation function for the segmentation object intersecting that manually interpreted object, and m ≤ n;
S2.3, selecting the segmentation parameters for which the average coincidence degree D̄ is maximal as the optimal segmentation parameters of the object-oriented segmentation model, and taking the corresponding segmentation result as the final object-oriented segmentation result of the remote sensing image of the study area;
S3, vector boundary extraction for surface mine stopes:
S3.1, performing superposition analysis between the final object-oriented segmentation result of the remote sensing image of the study area and the spatial information extraction results of the surface mine stopes, and retaining the segmentation objects in the final object-oriented segmentation result that intersect the spatial information extraction results;
S3.2, for each retained segmentation object, calculating the ratio of the area of its intersection with the corresponding spatial information extraction result to the area of the segmentation object, and judging as follows:
if the ratio is smaller than a first set value, the segmentation object is removed;
if the ratio lies between the first set value and a second set value, it is manually judged whether the object is a surface mine stope; if so, the object remains retained, and if not, it is removed;
if the ratio is larger than the second set value, the object remains retained;
S3.3, merging adjacent objects among the retained segmentation objects to obtain all surface mine stopes and their vector boundaries in the remote sensing image of the study area.
Furthermore, the n surface mine stopes forming the training sample set are uniformly distributed across the remote sensing image of the study area.
Furthermore, the deep learning model adopts a lightweight network model U-Net.
Further, in the lightweight network model U-Net, sample slices of a fixed pixel size are generated within the patch area of each training sample, with 128 pixels as the step size.
Furthermore, in the lightweight network model U-Net, each training sample slice is rotated by 90, 180 and 270 degrees by an angle rotation method, so as to augment the training sample slices.
Further, in step S2.2, when calculating the coincidence degree D_i between each screened segmentation object and the manually interpreted object intersecting it, if more than one segmentation result and/or manual interpretation result participates in a given intersection relationship, their areas are first merged and the calculation is then performed.
Further, in step S3.2, the following judgment is also performed for each retained segmentation object in combination with land use classification data: if the object intersects buildings, cultivated land or water bodies, it is removed; if not, it is retained.
Further, in step S3.2, the first set value is 10% and the second set value is 20%.
Further, k surface mine stopes are manually interpreted from the remote sensing image of the study area and the obtained manual interpretation results are used as a verification sample set; after step S3.3 is completed, all surface mine stopes and their vector boundaries in the remote sensing image of the study area are verified against the verification sample set, wherein the verification sample set does not overlap the training sample set.
The beneficial effects of the application are as follows:
according to the method, a deep learning model is adopted to initially identify remote sensing images of a region to be researched, spatial information results of all surface mine stopes in the remote sensing images of the region to be researched are obtained, spatial positions and surface coverage areas of potential surface mine stopes are located, then the remote sensing images of the region to be researched are subjected to object-oriented segmentation for many times, segmented objects and manually interpreted objects are subjected to superposition analysis, the coincidence degree is evaluated based on area similarity, optimal segmentation parameters of the object-oriented segmentation model are obtained, the segmentation results are used as final object-oriented segmentation results of the remote sensing images of the region to be researched, finally, the spatial information extraction results of the surface mine stopes and the vector boundaries of the remote sensing images are combined, and fine extraction of the boundaries of the surface mine stopes is achieved after screening, and the extracted surface mine stopes contain more complete boundary information and are higher in similarity with actual boundaries. Through verification, the accuracy of the method for identifying the spatial position of the stope of the surface mine is 0.862, and the extraction accuracy of the average spatial range is 0.78.
Drawings
FIG. 1 is a flow chart of the surface mine stope extraction method combining deep learning and object-oriented analysis according to the application;
FIG. 2 is a schematic diagram (partial) of the object-oriented segmentation result of the remote sensing image of the study area;
FIG. 3 is a schematic diagram (partial) of the spatial information extraction result for surface mine stopes in the remote sensing image of the study area;
FIG. 4 is a schematic diagram (partial) of the superposition analysis between the object-oriented segmentation result of the remote sensing image of the study area and the spatial information extraction result for surface mine stopes;
FIG. 5 is a schematic diagram (partial) of the extraction result for surface mine stopes and their vector boundaries in the remote sensing image of the study area.
Detailed Description
The application is further described below with reference to the accompanying drawings.
Referring to FIG. 1, the surface mine stope extraction method combining deep learning and object-oriented analysis includes the following steps: S1, spatial information extraction for surface mine stopes; S2, object-oriented segmentation of the remote sensing image; S3, vector boundary extraction for surface mine stopes.
Referring to FIG. 3, the spatial information extraction for surface mine stopes from the remote sensing image of the study area includes the following steps:
S1.1, manually interpreting n surface mine stopes from the remote sensing image of the study area, taking the obtained manual interpretation objects, which are uniformly distributed across the image, as a training sample set, and training a deep learning model on this training sample set;
S1.2, performing preliminary identification on the remote sensing image of the study area with the trained deep learning model to obtain spatial information results, comprising spatial positions and surface coverage areas, for all surface mine stopes in the image.
In this embodiment, the deep learning model uses a lightweight network model U-Net.
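The patent does not give the network definition itself; the following is a minimal sketch of a lightweight U-Net in PyTorch, shown only to illustrate the encoder-decoder structure with skip connections behind the preliminary identification step. The channel widths, depth, and single-logit stope/background head are illustrative assumptions, not taken from the text.

```python
# Minimal lightweight U-Net sketch (PyTorch). Channel widths, depth, and the
# binary stope/background head are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions, each followed by BatchNorm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class LightUNet(nn.Module):
    def __init__(self, in_ch=3, n_classes=1, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 4, base * 8)
        self.up3 = nn.ConvTranspose2d(base * 8, base * 4, 2, stride=2)
        self.dec3 = conv_block(base * 8, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, 1)  # one-channel logit: stope vs. background

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        b = self.bottleneck(self.pool(e3))
        d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))  # skip connection
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # raw logits; apply sigmoid + threshold at inference
```

A model of this shape takes an image tensor of shape (B, 3, H, W), with H and W divisible by 8, and returns a per-pixel logit map that is thresholded to produce the stope mask.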
Specifically, in the lightweight network model U-Net, sample slices of a fixed pixel size are generated within the patch area of each training sample, with 128 pixels as the step size. Furthermore, each training sample slice is rotated by 90, 180 and 270 degrees by an angle rotation method, so as to augment the training sample slices.
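As a concrete reading of this step, the sketch below generates slices over a training-sample patch with the 128-pixel stride and the 90/180/270-degree rotations described above. The slice size itself is not spelled out in this text, so SLICE_SIZE is a placeholder value.

```python
# Sketch of training-slice generation and rotation augmentation. The 128-pixel
# step and the 90/180/270-degree rotations follow the text; SLICE_SIZE is an
# assumed placeholder for the slice pixel size, which the text does not give.
import numpy as np

SLICE_SIZE = 256  # placeholder; set to the slice size actually used
STEP = 128        # step size given in the text

def make_slices(image, mask):
    """Slide a SLICE_SIZE window over the sample patch with stride STEP and
    augment each slice with 0/90/180/270-degree rotations."""
    slices = []
    h, w = image.shape[:2]
    for top in range(0, h - SLICE_SIZE + 1, STEP):
        for left in range(0, w - SLICE_SIZE + 1, STEP):
            img = image[top:top + SLICE_SIZE, left:left + SLICE_SIZE]
            msk = mask[top:top + SLICE_SIZE, left:left + SLICE_SIZE]
            for k in (0, 1, 2, 3):  # rotations by k * 90 degrees
                slices.append((np.rot90(img, k).copy(), np.rot90(msk, k).copy()))
    return slices
```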
Referring to FIG. 2, the object-oriented segmentation of the remote sensing image of the study area includes the following steps:
S2.1, configuring object-oriented segmentation models with different segmentation parameters and segmenting the remote sensing image of the study area with each of them to obtain several groups of object-oriented segmentation results, wherein the segmentation parameters comprise segmentation scale, shape factor and compactness factor;
S2.2, performing superposition analysis between each group of object-oriented segmentation results and the m manually interpreted objects, screening out the segmentation objects intersecting the m manually interpreted objects, calculating the coincidence degree D_i between each screened segmentation object and the manually interpreted object intersecting it, and calculating the average coincidence degree D̄, with the calculation formulas
D_i = min(S_A(i), S_B(i)) / max(S_A(i), S_B(i)), D̄ = (1/m) · Σ_{i=1..m} D_i,
wherein S_A(i) is the area calculation function for the manually interpreted object, S_B(i) is the area calculation function for the segmentation object intersecting that manually interpreted object, and m ≤ n;
S2.3, selecting the segmentation parameters for which the average coincidence degree D̄ is maximal as the optimal segmentation parameters of the object-oriented segmentation model, and taking the corresponding segmentation result as the final object-oriented segmentation result of the remote sensing image of the study area.
In step S2.2, when calculating the coincidence degree D_i between each screened segmentation object and the manually interpreted object intersecting it, if more than one segmentation result and/or manual interpretation result participates in a given intersection relationship, their areas are first merged and the calculation is then performed; the calculation of the average coincidence degree D̄ is adjusted accordingly.
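Assuming the manual interpretation objects and segmentation objects are available as polygon geometries (shapely is used here purely for illustration), the screening, area merging, and coincidence calculation of step S2.2 might look as follows; the min/max area ratio mirrors the area-similarity formula as reconstructed above.

```python
# Sketch of the coincidence-degree evaluation for one set of segmentation
# results, assuming shapely polygon geometries.
from shapely.ops import unary_union

def coincidence(manual_objects, seg_objects):
    """Average coincidence degree D̄ between the m manual interpretation
    objects and the segmentation objects intersecting each of them."""
    degrees = []
    for manual in manual_objects:
        hits = [s for s in seg_objects if s.intersects(manual)]
        if not hits:
            continue  # no intersecting segmentation object for this sample
        merged = unary_union(hits)  # merge areas when more than one intersects
        s_a, s_b = manual.area, merged.area
        degrees.append(min(s_a, s_b) / max(s_a, s_b))
    return sum(degrees) / len(degrees) if degrees else 0.0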
The technical principle of performing remote sensing image object-oriented segmentation by using the optimal segmentation parameters is as follows:
the remote sensing image object-oriented segmentation is a process of dividing an image scene into a plurality of meaningful sub-areas based on homogeneity or heterogeneity criteria according to the difference of ground object targets on image characteristics. For any area to be researched, the types of mine development occupation area are more, the characteristic differences of pattern spectrum, texture, geometry and the like are obvious, and a single object capable of completely expressing the mine development occupation area outline information is not easy to directly separate. According to the application, segmentation results under different parameter combinations are generated by controlling segmentation scale, shape factor and compactness factor variables, and after superposition analysis is carried out on the segmentation results and the artificial interpretation objects, intersecting segmentation objects are screened out, and then the coincidence degree is evaluated based on area similarity. And finally selecting the segmentation scale, the shape factor and the compactness factor under the condition of highest average area similarity (optimal coincidence) as optimal segmentation parameters through multiple judgment. To avoid too much "fragmentation" of the segmentation result, the fewer segmented objects contained in the range, the better, i.e. the more complete the object, at the optimal overlap ratio.
Referring to FIGS. 4 and 5, the extraction of surface mine stopes and their vector boundaries from the remote sensing image of the study area includes the following steps:
S3.1, performing superposition analysis between the final object-oriented segmentation result of the remote sensing image of the study area and the spatial information extraction results of the surface mine stopes, and retaining the segmentation objects in the final object-oriented segmentation result that intersect the spatial information extraction results;
S3.2, for each retained segmentation object, calculating the ratio of the area of its intersection with the corresponding spatial information extraction result to the area of the segmentation object, and judging as follows:
if the ratio is smaller than a first set value, the segmentation object is removed;
if the ratio lies between the first set value and a second set value, it is manually judged whether the object is a surface mine stope; if so, the object remains retained, and if not, it is removed;
if the ratio is larger than the second set value, the object remains retained;
for each retained segmentation object, the following judgment is also performed in combination with land use classification data: if the object intersects buildings, cultivated land or water bodies, it is removed; if not, it is retained;
S3.3, merging adjacent objects among the retained segmentation objects to obtain all surface mine stopes and their vector boundaries in the remote sensing image of the study area.
In this embodiment, the first set value is 10% and the second set value is 20%.
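Putting S3.1 to S3.3 together with this embodiment's 10% and 20% set values, the screening and merging could be sketched as follows, again assuming shapely polygon geometries; ask_analyst() is a placeholder for the manual judgment step, and excluded_landuse is a hypothetical union of the building, cultivated land and water polygons from the land use classification data.

```python
# Sketch of the S3.1-S3.3 screening and merging with this embodiment's
# 10%/20% set values, assuming shapely polygon geometries.
from shapely.ops import unary_union

FIRST, SECOND = 0.10, 0.20  # first and second set values of this embodiment

def screen_and_merge(seg_objects, stope_extraction, excluded_landuse, ask_analyst):
    """stope_extraction: union of the deep-learning spatial information
    results; excluded_landuse: union of building / cultivated land / water
    polygons; ask_analyst: callable implementing the manual judgement."""
    kept = []
    for obj in seg_objects:
        if not obj.intersects(stope_extraction):
            continue                              # S3.1: keep only intersecting objects
        ratio = obj.intersection(stope_extraction).area / obj.area
        if ratio < FIRST:
            continue                              # below first set value: removed
        if ratio < SECOND and not ask_analyst(obj):
            continue                              # manual judgement: not a stope
        if obj.intersects(excluded_landuse):
            continue                              # intersects buildings/cultivated land/water
        kept.append(obj)
    return unary_union(kept)                      # S3.3: merge adjacent retained objects
```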
In addition, k surface mine stopes are manually interpreted from the remote sensing image of the study area and the obtained manual interpretation results are used as a verification sample set; after step S3.3 is completed, all surface mine stopes and their vector boundaries in the remote sensing image of the study area are verified against the verification sample set, wherein the verification sample set does not overlap the training sample set.
Accuracy verification of the identification results:
In a test study area where surface mine stopes account for only about 1.4% of the total area, the method identified 152 surface mine stope locations, of which 125 were correct and 27 were wrong, with 13 stopes missed; the identification accuracy F1 is 0.862, and the average spatial-extent extraction accuracy is 0.78.
In general, the method first performs preliminary identification on the remote sensing image of the study area with a deep learning model, obtaining spatial information results for all surface mine stopes in the image and locating the spatial positions and surface coverage areas of potential stopes. It then performs object-oriented segmentation of the remote sensing image several times, carries out superposition analysis between the segmentation objects and the manually interpreted objects, and evaluates the coincidence degree based on area similarity to obtain the optimal segmentation parameters of the object-oriented segmentation model, taking the corresponding segmentation result as the final object-oriented segmentation result. Finally, it combines the spatial information extraction results of the surface mine stopes with the object-oriented segmentation result of the remote sensing image and, after screening, obtains all surface mine stopes and their vector boundaries in the remote sensing image of the study area, thereby achieving fine extraction of stope boundaries; the extracted stopes carry more complete boundary information and agree more closely with the actual boundaries.
Of course, the above embodiments are only preferred embodiments of the present application, and the scope of the present application is not limited thereto, so that all equivalent modifications made in the principles of the present application are included in the scope of the present application.
Claims (9)
1. The surface mine stope extraction method combining deep learning and object-oriented analysis is characterized in that: the method comprises the following steps:
S1, spatial information extraction for surface mine stopes:
S1.1, manually interpreting n surface mine stopes from the remote sensing image of the study area, taking the obtained manual interpretation objects as a training sample set, and training a deep learning model on this training sample set;
S1.2, performing preliminary identification on the remote sensing image of the study area with the trained deep learning model to obtain spatial information results, comprising spatial positions and surface coverage areas, for all surface mine stopes in the image;
S2, object-oriented segmentation of the remote sensing image:
S2.1, configuring object-oriented segmentation models with different segmentation parameters and segmenting the remote sensing image of the study area with each of them to obtain several groups of object-oriented segmentation results, wherein the segmentation parameters comprise segmentation scale, shape factor and compactness factor;
S2.2, performing superposition analysis between each group of object-oriented segmentation results and the m manually interpreted objects, screening out the segmentation objects intersecting the m manually interpreted objects, calculating the coincidence degree D_i between each screened segmentation object and the manually interpreted object intersecting it, and calculating the average coincidence degree D̄, with the calculation formulas
D_i = min(S_A(i), S_B(i)) / max(S_A(i), S_B(i)), D̄ = (1/m) · Σ_{i=1..m} D_i,
wherein S_A(i) is the area calculation function for the manually interpreted object, S_B(i) is the area calculation function for the segmentation object intersecting that manually interpreted object, and m ≤ n;
S2.3, selecting the segmentation parameters for which the average coincidence degree D̄ is maximal as the optimal segmentation parameters of the object-oriented segmentation model, and taking the corresponding segmentation result as the final object-oriented segmentation result of the remote sensing image of the study area;
S3, vector boundary extraction for surface mine stopes:
S3.1, performing superposition analysis between the final object-oriented segmentation result of the remote sensing image of the study area and the spatial information extraction results of the surface mine stopes, and retaining the segmentation objects in the final object-oriented segmentation result that intersect the spatial information extraction results;
S3.2, for each retained segmentation object, calculating the ratio of the area of its intersection with the corresponding spatial information extraction result to the area of the segmentation object, and judging as follows:
if the ratio is smaller than a first set value, the segmentation object is removed;
if the ratio lies between the first set value and a second set value, it is manually judged whether the object is a surface mine stope; if so, the object remains retained, and if not, it is removed;
if the ratio is larger than the second set value, the object remains retained;
S3.3, merging adjacent objects among the retained segmentation objects to obtain all surface mine stopes and their vector boundaries in the remote sensing image of the study area.
2. The surface mine stope extraction method combining deep learning and object-oriented analysis of claim 1, wherein: the n surface mine stopes forming the training sample set are uniformly distributed across the remote sensing image of the study area.
3. The surface mine stope extraction method combining deep learning and object-oriented analysis of claim 1, wherein: the deep learning model adopts the lightweight network model U-Net.
4. The surface mine stope extraction method combining deep learning and object-oriented analysis of claim 3, wherein: in the lightweight network model U-Net, sample slices of a fixed pixel size are generated within the patch area of each training sample, with 128 pixels as the step size.
5. The surface mine stope extraction method combining deep learning and object-oriented analysis of claim 4, wherein: in the lightweight network model U-Net, each training sample slice is rotated by 90, 180 and 270 degrees by an angle rotation method, so as to augment the training sample slices.
6. The surface mine stope extraction method combining deep learning and object-oriented analysis of claim 1, wherein: in step S2.2, when calculating the coincidence degree D_i between each screened segmentation object and the manually interpreted object intersecting it, if more than one segmentation result and/or manual interpretation result participates in a given intersection relationship, their areas are first merged and the calculation is then performed.
7. The surface mine stope extraction method combining deep learning and object-oriented analysis of claim 1, wherein: in step S3.2, the following judgment is also performed for each retained segmentation object in combination with land use classification data: if the object intersects buildings, cultivated land or water bodies, it is removed; if not, it is retained.
8. The surface mine stope extraction method combining deep learning and object-oriented analysis of claim 1, wherein: in step S3.2, the first set value is 10% and the second set value is 20%.
9. The surface mine stope extraction method combining deep learning and object-oriented analysis according to any one of claims 1 to 8, wherein: k surface mine stopes are manually interpreted from the remote sensing image of the study area and the obtained manual interpretation results are used as a verification sample set; after step S3.3 is completed, all surface mine stopes and their vector boundaries in the remote sensing image of the study area are verified against the verification sample set, wherein the verification sample set does not overlap the training sample set.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310855540.7A CN116580309B (en) | 2023-07-13 | 2023-07-13 | Surface mine stope extraction method combining deep learning and object-oriented analysis |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310855540.7A CN116580309B (en) | 2023-07-13 | 2023-07-13 | Surface mine stope extraction method combining deep learning and object-oriented analysis |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116580309A | 2023-08-11
CN116580309B (en) | 2023-09-15
Family
ID=87536326
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310855540.7A Active CN116580309B (en) | 2023-07-13 | 2023-07-13 | Surface mine stope extraction method combining deep learning and object-oriented analysis |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116580309B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030176931A1 (en) * | 2002-03-11 | 2003-09-18 | International Business Machines Corporation | Method for constructing segmentation-based predictive models |
WO2020040734A1 (en) * | 2018-08-21 | 2020-02-27 | Siemens Aktiengesellschaft | Orientation detection in overhead line insulators |
EP3614308A1 (en) * | 2018-08-24 | 2020-02-26 | Ordnance Survey Limited | Joint deep learning for land cover and land use classification |
WO2021226977A1 (en) * | 2020-05-15 | 2021-11-18 | 安徽中科智能感知产业技术研究院有限责任公司 | Method and platform for dynamically monitoring typical ground features in mining on the basis of multi-source remote sensing data fusion and deep neural network |
CN111723712A (en) * | 2020-06-10 | 2020-09-29 | 内蒙古农业大学 | Method and system for extracting mulching film information based on radar remote sensing data and object-oriented mulching film information |
CN115327613A (en) * | 2022-06-20 | 2022-11-11 | 华北科技学院 | Mine micro-seismic waveform automatic classification and identification method in multilayer multistage mode |
Non-Patent Citations (3)
Title |
---|
XIAO Jian et al., "MFPA-Net: An efficient deep learning network for automatic ground fissures extraction in UAV images of the coal mining area", International Journal of Applied Earth Observation and Geoinformation, vol. 114, pp. 1-14 *
ZHANG Xian et al., "Research progress and prospects of remote sensing extraction of elements in open-pit mining areas" (露天开采矿区要素遥感提取研究进展及展望), Remote Sensing for Natural Resources (自然资源遥感), vol. 35, no. 2, pp. 25-33 *
ZHANG Zhengjian, LI Ainong, LEI Guangbin, BIAN Jinhu, WU Bingfang, "A change detection method for remote sensing images of mountainous areas based on multi-scale segmentation and decision tree algorithm: a case study of the Panxi area, Sichuan" (基于多尺度分割和决策树算法的山区遥感影像变化检测方法――以四川攀西地区为例), Acta Ecologica Sinica (生态学报), no. 24, pp. 7222-7232 *
Also Published As
Publication number | Publication date |
---|---|
CN116580309B (en) | 2023-09-15 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |