CN108363951B - Automatic acquisition method of deep learning sample library corresponding to remote sensing image land type identification - Google Patents

Automatic acquisition method of deep learning sample library corresponding to remote sensing image land type identification

Info

Publication number
CN108363951B
CN108363951B
Authority
CN
China
Prior art keywords
remote sensing
mask
sensing image
image
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810026909.2A
Other languages
Chinese (zh)
Other versions
CN108363951A (en
Inventor
张小国
贾友斌
陈孝烽
陈刚
韦国钧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201810026909.2A priority Critical patent/CN108363951B/en
Publication of CN108363951A publication Critical patent/CN108363951A/en
Application granted granted Critical
Publication of CN108363951B publication Critical patent/CN108363951B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V20/13 - Satellite images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract


The invention belongs to the technical field of land use remote sensing monitoring and in particular relates to an automatic acquisition method for a deep learning sample library corresponding to remote sensing image land type recognition. First, the current land use vector map and the remote sensing image are superimposed in the same coordinate system; points with small gradient values in the remote sensing image are marked by setting a threshold; flood filling is performed from the marked points, and the mask corresponding to each filled area is assigned values and stores the land type information; the segmented images are then extracted according to the masks and saved by class according to the land type information of the current land use status recorded in the masks, forming the sample library. By superimposing and comparing current land use data and remote sensing data of the same time phase, the invention automatically collects the remote sensing image feature libraries corresponding to different land types. Compared with traditional manual sample acquisition, whose workload is large and whose sample areas are difficult to obtain, the sample acquisition method of the invention is faster and more accurate, and the labor cost is significantly reduced.


Description

Automatic acquisition method of deep learning sample library corresponding to remote sensing image land type identification
Technical Field
The invention belongs to the technical field of land utilization remote sensing monitoring, and particularly relates to an automatic acquisition method of a deep learning sample library corresponding to remote sensing image land type identification.
Background
In the technical field of land use status investigation, timely land use information is important, and fully automatic interpretation of land types from remote sensing images is a major technical problem that the land and resources sector in China is striving to overcome. In recent years, with the rapid development of machine learning technologies represented by deep learning, applying deep learning to the automatic interpretation of remote sensing images and realizing automated land type recognition as far as possible has become an important research target and direction for researchers in China. However, deep learning works with deep neural networks (DNN) on the premise that the deep networks are sufficiently trained, which requires a large number of samples as training data. Traditionally, training images are obtained manually and labeled by hand, which is time-consuming and labor-intensive, involves a huge workload, and is easily affected by the mood and negligence of the operators.
Disclosure of Invention
The invention solves the technical problems in the prior art and provides an automatic acquisition method of a deep learning sample library corresponding to remote sensing image land type identification.
In order to solve the problems, the technical scheme of the invention is as follows:
an automatic acquisition method of a deep learning sample library corresponding to remote sensing image land type identification comprises the following steps,
the method superimposes the current land use vector map and the remote sensing image, uses the patch boundary information of the vector data to segment the remote sensing image into image patches, then extracts marked points from the patches, performs flood filling, and classifies and extracts the patches, thereby obtaining the large training sample libraries required for training the deep neural networks that recognize the remote sensing image features of different land types.
Preferably, the method for automatically acquiring the deep learning sample library corresponding to the remote sensing image land type identification comprises the following steps,
step 1: edge mapping, namely superimposing the current land use vector map and the remote sensing image in the same coordinate system, and then mapping the boundary of the current land use vector map into a closed edge consisting of continuous pixels in the remote sensing image;
step 2: marked point extraction, namely selecting marked points inside the closed edges;
step 3: flood filling, namely performing flood filling from the marked points, assigning values to the mask corresponding to each filled area, and storing the land type information;
step 4: image classification and extraction, namely extracting the segmented images according to the masks and saving them by class according to the land type information of the current land use status stored in the masks, thereby forming the sample library.
Preferably, the acquisition time of the land use status vector map in step 1 is the same as that of the remote sensing image.
Preferably, the mapping of the boundary of the land use status vector map into a closed edge consisting of continuous pixels in the remote sensing image is realized by a line rasterization method (the numerical differentiation, or DDA, method).
Preferably, the pixels on the closed edge are marked as edge pixels, and the edge pixels are set to have higher pixel values, so as to ensure that the edge pixels have larger gradient values.
Preferably, in step 2, the following formula is used to extract the marked points:
m(h, l) = 1, if g(h, l) < T(h, l); m(h, l) = 0, if g(h, l) ≥ T(h, l)
in the formula: h and l are respectively the row number and the column number of the pixel; g (h, l) is the gradient value of the pixel; t (h, l) is a threshold value corresponding to the pixel; when the value of m (h, l) is 1, a marked point is represented, and when the value of m (h, l) is 0, an unmarked point is represented.
The mark points are internal points in the image, and the gradient value is small; unmarked points are points at the edge and the vicinity of the edge in the image, and have larger gradient values; therefore, a certain threshold value can be set to distinguish the two according to the above formula.
Preferably, in the step 3, each closed region is correspondingly assigned with a mask, and all pixel values are set to 0, and each mask is used to record the classification hierarchy structure and the name of the corresponding closed region in the present land use situation in the form of a file path.
Preferably, in step 3, the obtained marked points are used as seed points to flood fill the area within each closed edge until all pixels within the edge constraint are marked. Because the constraint boundary has a higher pixel value and thus a larger gradient value, the whole constrained area is obtained after the flood filling.
Preferably, in the step 3, the mark point with the smallest gradient value is selected as the seed point.
Preferably, in step 3, during the flood filling process, for the mask corresponding to each closed region, the mask pixel values corresponding to the marked pixels are set to 1 and the remaining mask pixel values are 0.
Preferably, in the step 4, the image of each closed region is extracted according to the obtained mask, and each image is saved according to the file path and the name recorded by the mask, so as to finally generate training sample libraries corresponding to different categories.
Compared with the prior art, the invention has the advantages that,
the invention provides an automatic acquisition method for automatically acquiring a land type identification sample library required for remote sensing image interpretation by superposing the current land utilization situation and a remote sensing image, which realizes the automatic collection of remote sensing image feature libraries corresponding to different land types by superposing and comparing the current land utilization situation data and the remote sensing data at the same time phase, and uses the information in the current land utilization situation for extracting the remote sensing image land sample, thereby solving the problem of insufficient machine learning training samples in the task of remote sensing image land classification identification; compared with the defects of large workload and difficult acquisition of sample regions in the traditional method for manually acquiring samples, the method for acquiring the samples is faster and more accurate, and the labor cost is obviously reduced.
Drawings
FIG. 1 is a flow chart of an implementation of the method for automatically obtaining the deep learning sample library corresponding to remote sensing image land type identification;
FIG. 2 is a schematic diagram of edge mapping, wherein (a) shows the superposition of the land use status and the remote sensing image and (b) shows the edge pixels;
FIG. 3 is a schematic diagram of marker extraction;
FIG. 4 is a schematic of flood fill;
fig. 5 is a mask diagram.
Detailed Description
Example 1:
for a better understanding of the technical content of the present invention, specific embodiments are described below in conjunction with the appended drawings:
as shown in fig. 1, according to a preferred embodiment of the present invention, the method for automatically obtaining the deep learning sample library corresponding to the remote sensing image land type identification includes the following steps:
step 1: edge mapping, namely first superimposing the current land use status map and the remote sensing image in the same coordinate system, and then mapping the boundary of the current land use vector map into a closed edge consisting of continuous pixels in the remote sensing image;
step 2: marked point extraction, namely marking the points with smaller gradient values in the remote sensing image as marked points by setting a threshold;
step 3: flood filling, namely performing flood filling from the marked points, assigning values to the mask corresponding to each filled area, and storing the land type information;
step 4: image classification and extraction, namely extracting the segmented images according to the masks and saving them by class according to the land type information of the current land use status stored in the masks, thereby forming the sample library.
In this embodiment, in step 1, the method of superimposing the current land use status and the remote sensing image first requires converting the two data sets into the same coordinate system, so as to ensure that the current land use status matches the remote sensing image exactly. Otherwise, incomplete or even wrong samples would be obtained, which would be of little help for training the learner; it is therefore necessary to unify the two data sets into the same coordinate system.
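For illustration only, the following is a minimal Python sketch of re-projecting land use vector coordinates into the coordinate reference system of the remote sensing image; the pyproj library and the EPSG codes shown are assumptions made for the example and are not prescribed by the invention.

```python
# Sketch: unify the land use vector data and the remote sensing image into one
# coordinate system by re-projecting vector vertices into the image CRS.
# The EPSG codes are placeholders, not values taken from the patent.
from pyproj import Transformer

def to_image_crs(vertices, vector_epsg=4490, image_epsg=32650):
    """Re-project (x, y) polygon vertices from the vector CRS to the image CRS."""
    transformer = Transformer.from_crs(vector_epsg, image_epsg, always_xy=True)
    return [transformer.transform(x, y) for x, y in vertices]

# Example: one patch boundary given in the vector CRS
boundary = [(118.80, 32.06), (118.81, 32.06), (118.81, 32.07)]
print(to_image_crs(boundary))
```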
In this embodiment, in step 1, the current land use status used in the overlay analysis and the remote sensing image must be acquired at the same time. If the two come from different time phases, the land use status is usually updated later than the remote sensing image, so land type changes visible in the remote sensing image would not yet be reflected in the land use status and wrong land type samples would subsequently be obtained. Using a land use status and a remote sensing image of the same time phase therefore reduces the errors caused by inconsistency between the two.
Referring to fig. 2, in step 1, the edge mapping maps the boundary of each area in the current land use vector map onto the remote sensing image to form a closed edge constraint, so that the flood is confined to the interior of the constrained area during the subsequent flood filling; this is the key to image segmentation under the vector map constraint. The method superimposes the current land use vector map and the remote sensing image so that the vector boundaries are mapped onto the remote sensing image, and then uses a line rasterization algorithm from computer graphics, namely the numerical differentiation (DDA) method, to map each vector boundary into a closed edge consisting of continuous pixels. The pixels on the closed edge are marked as edge pixels and set to a high pixel value, for example the maximum RGB value (255, 255, 255), so that they have large gradient values and form the constraint area for the subsequent flood filling.
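As an illustration of the line rasterization described above, the following Python sketch walks each boundary segment with a DDA-style loop and writes the maximum pixel value into the edge pixels; pixel (row, column) coordinates are assumed to have already been obtained from the map coordinates.

```python
import numpy as np

def rasterize_edge(image, vertices, edge_value=255):
    """Draw a closed polygon boundary into a single-band image as continuous
    edge pixels, using a simple DDA (numerical differentiation) line walk."""
    h, w = image.shape[:2]
    for (r0, c0), (r1, c1) in zip(vertices, vertices[1:] + vertices[:1]):
        steps = int(max(abs(r1 - r0), abs(c1 - c0), 1))
        dr, dc = (r1 - r0) / steps, (c1 - c0) / steps
        r, c = float(r0), float(c0)
        for _ in range(steps + 1):
            ri, ci = int(round(r)), int(round(c))
            if 0 <= ri < h and 0 <= ci < w:
                image[ri, ci] = edge_value   # force a large gradient on the edge
            r, c = r + dr, c + dc
    return image

# Usage: rasterize a triangular patch boundary into a 100 x 100 test image
img = np.zeros((100, 100), dtype=np.uint8)
rasterize_edge(img, [(10, 10), (10, 80), (70, 40)])
```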
Referring to fig. 2, in step 2, after the edge mapping of step 1 the image pixels are divided into two classes: edge pixels and non-edge pixels. The marked points are a set of spatially adjacent non-edge pixels with smaller gradient values, corresponding to the interior regions of the image. The extraction of the marked points is the key to extracting the constrained areas.
The marked points are internal points in the image, and the gradient value is smaller (as shown in figure 3); unmarked points are points at the edge and the vicinity of the edge in the image, and have larger gradient values; therefore, by setting a certain threshold T, the two can be distinguished according to the following formula.
m(h, l) = 1, if g(h, l) < T(h, l); m(h, l) = 0, if g(h, l) ≥ T(h, l)
In the formula: h and l are respectively the row number and the column number of the pixel; g(h, l) is the gradient value of the pixel; T(h, l) is the threshold corresponding to the pixel, which can be a global threshold independent of position or a local threshold dependent on position. A value of m(h, l) = 1 indicates a marked point, and a value of 0 indicates an unmarked point.
The threshold T is selected according to the actual image, on the premise that each constraint area contains at least one marked point, as follows (a code sketch of these steps is given after the list):
1. First, calculate the gradient value distribution of the image pixels and the minimum and maximum gradient values:
Image gradient: g(i, j) = dx(i, j) + dy(i, j);
dx(i, j) = l(i+1, j) - l(i, j);
dy(i, j) = l(i, j+1) - l(i, j);
where l is the value of an image pixel (e.g., its RGB value) and (i, j) are the coordinates of the pixel.
2. The midpoint of the maximum and minimum gradient values is chosen as T, and all points in the image whose gradient values are equal or close to T (within an error of 1 pixel value) are selected;
3. Starting from each point whose gradient value is equal or close to T, search for a local minimum of the gradient value by gradient descent;
4. Judge whether several minimum points belong to the same constraint area (i.e. no edge point lies between them), and select the point with the smallest gradient value as the seed point of that area.
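The following Python sketch illustrates steps 1 and 2 of the list above: the forward-difference gradient and the marker map m(h, l). Taking absolute differences and using a single global midpoint threshold are assumptions made for the example, not requirements of the invention.

```python
import numpy as np

def gradient_map(band):
    """g(i, j) = dx(i, j) + dy(i, j) with forward differences; absolute values
    are used here (an assumption) so the gradient magnitude is non-negative."""
    band = band.astype(np.float32)
    dx = np.zeros_like(band)
    dy = np.zeros_like(band)
    dx[:-1, :] = np.abs(band[1:, :] - band[:-1, :])   # l(i+1, j) - l(i, j)
    dy[:, :-1] = np.abs(band[:, 1:] - band[:, :-1])   # l(i, j+1) - l(i, j)
    return dx + dy

def marker_map(gradient, T=None):
    """m(h, l) = 1 where g(h, l) < T(h, l), else 0; T defaults to the midpoint
    of the minimum and maximum gradient values, as in step 2 of the list."""
    if T is None:
        T = (float(gradient.min()) + float(gradient.max())) / 2.0
    return (gradient < T).astype(np.uint8)

# Usage on a toy single-band image
rng = np.random.default_rng(0)
band = rng.integers(0, 256, size=(100, 100)).astype(np.uint8)
markers = marker_map(gradient_map(band))
```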
Because the marked points will serve as seed points for the subsequent flood filling operation, and because a constraint area may contain more than one marked point depending on the size of the threshold T, if a constraint area still has more than one marked point after an appropriate threshold T is set, the marked point with the smallest gradient value is selected as the seed point.
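As a sketch of the seed selection just described, the code below keeps, for each constraint region, the marked point with the smallest gradient value; grouping the non-edge pixels into regions with scipy.ndimage.label is an implementation choice assumed here, not something the invention prescribes.

```python
import numpy as np
from scipy import ndimage

def pick_seeds(gradient, markers, edge_mask):
    """Return one (row, col) seed per constraint region: among the marked points
    of that region, the one with the smallest gradient value."""
    regions, n = ndimage.label(~edge_mask)      # connected regions of non-edge pixels
    seeds = []
    for label in range(1, n + 1):
        rows, cols = np.where((regions == label) & (markers == 1))
        if rows.size == 0:
            continue                            # region without a marked point
        k = int(np.argmin(gradient[rows, cols]))
        seeds.append((int(rows[k]), int(cols[k])))
    return seeds

# edge_mask: boolean array, True on the rasterized boundary pixels from step 1
```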
In step 3, after the marking is completed, a mask is first allocated to each closed region, with all pixel values set to 0, and each mask records, in the form of a file path, the land class hierarchy and name to which the corresponding closed region belongs in the current land use status. For example, if the land type information in the current land use map has the first-level class "residential land" and the second-level class "rural homestead", the mask records the path information "G:\residential land\rural homestead" and the name "rural homestead 001.jpg".
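A small sketch of the mask allocation and path recording described above; the directory and file naming follow the patent's own "residential land / rural homestead" example, and the helper name is hypothetical.

```python
import os
import numpy as np

def new_region_mask(shape, class_path, index):
    """Allocate an all-zero mask for one closed region and record the land class
    hierarchy and name as a file path, e.g. 'G:/residential land/rural homestead'."""
    mask = np.zeros(shape, dtype=np.uint8)
    name = f"rural homestead {index:03d}.jpg"    # naming per the patent's example
    return mask, os.path.join(class_path, name)

mask, save_path = new_region_mask((100, 100), r"G:/residential land/rural homestead", 1)
```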
Referring to fig. 4, in step 3, the obtained marked points are used as seed points to flood fill the area within each closed edge until all pixels within the edge constraint are marked. Because the constraint boundary has a higher pixel value and thus a larger gradient value, the whole constrained area is obtained after the flood filling.
Referring to fig. 5, in step 3, during the flood filling process, for the mask corresponding to each closed region, the mask pixel values corresponding to the marked pixels are set to 1 while the remaining mask pixel values stay 0. First the mask pixel of the seed point is set to 1; as the flood filling proceeds, the pixels whose mask value is 1 extend towards the edge until all mask pixel values of the constraint area are 1, so the mask can be used in the next step to extract the complete image within the constraint area.
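The following is a minimal queue-based flood fill sketch for step 3: it grows from the seed point over non-edge pixels and writes 1 into the mask, as described above. Using a plain BFS rather than a library routine is an implementation choice made for the example.

```python
from collections import deque
import numpy as np

def flood_fill_mask(edge_mask, seed, mask):
    """Flood fill from `seed` inside the closed edge: every reached non-edge
    pixel gets mask value 1, so the mask ends up covering the whole region."""
    h, w = edge_mask.shape
    queue = deque([seed])
    mask[seed] = 1
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and mask[nr, nc] == 0 and not edge_mask[nr, nc]:
                mask[nr, nc] = 1
                queue.append((nr, nc))
    return mask

# Usage: mask = flood_fill_mask(edge_mask, seeds[0], np.zeros(edge_mask.shape, np.uint8))
```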
In step 4, the image of each closed region is extracted according to the mask obtained in the previous step; the region where the mask pixel value is 1 is the region of the image to be extracted (as shown in fig. 5), which gives a complete image of each region. Each image is then saved according to the file path and name recorded by the mask in step 3, and the training sample libraries corresponding to the different categories are finally generated. For example, if the mask records the path "G:\residential land\rural homestead" and the name "rural homestead 001.jpg", and several images of the same type exist, the subsequent images are named in the order in which they are stored, e.g. "rural homestead 002.jpg", "rural homestead 003.jpg", and so on, finally forming the sample library.
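To round out step 4, here is a sketch of extracting the masked image and saving it under the recorded path; OpenCV's imwrite is assumed for output, and the example path reuses the patent's "rural homestead" naming.

```python
import os
import cv2
import numpy as np

def extract_and_save(image, mask, save_path):
    """Keep only the pixels where the mask is 1 and save the sample image under
    the file path and name recorded for that mask."""
    if image.ndim == 3:
        sample = np.where(mask[..., None] == 1, image, np.zeros_like(image))
    else:
        sample = np.where(mask == 1, image, np.zeros_like(image))
    os.makedirs(os.path.dirname(save_path), exist_ok=True)
    cv2.imwrite(save_path, sample)

# e.g. extract_and_save(rgb, mask, "G:/residential land/rural homestead/rural homestead 002.jpg")
```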
It should be noted that the above-mentioned embodiments are only preferred embodiments of the present invention, and are not intended to limit the scope of the present invention, and all equivalent substitutions or substitutions made on the above-mentioned embodiments are included in the scope of the present invention.

Claims (9)

1. An automatic acquisition method of a deep learning sample library corresponding to remote sensing image land type identification, characterized in that it comprises the following steps: superimposing the current land use vector map and the remote sensing image, segmenting the remote sensing image into image patches using the patch boundary information of the vector data, then extracting marked points from the image patches, performing flood filling, and performing classified extraction to obtain the sample library; and in that it specifically comprises the following steps:
Step 1: edge mapping, first superimposing the current land use vector map and the remote sensing image in the same coordinate system, and then mapping the boundary of the current land use vector map into a closed edge composed of continuous pixels in the remote sensing image;
Step 2: marked point extraction, selecting marked points inside the closed edges;
Step 3: flood filling, performing flood filling from the marked points, assigning values to the mask corresponding to each filled area, and storing the land type information;
Step 4: image classification and extraction, extracting the segmented images according to the masks, and classifying and saving them according to the land type information of the current land use status stored in the masks, forming the sample library.
2. The automatic acquisition method of a deep learning sample library corresponding to remote sensing image land type identification according to claim 1, characterized in that the acquisition time of the current land use vector map in step 1 is the same as that of the remote sensing image.
3. The automatic acquisition method of a deep learning sample library corresponding to remote sensing image land type identification according to claim 1, characterized in that the mapping of the boundary of the current land use vector map into a closed edge composed of continuous pixels in the remote sensing image in step 1 is realized by a line rasterization method.
4. The automatic acquisition method of a deep learning sample library corresponding to remote sensing image land type identification according to claim 1, characterized in that the pixels on the closed edge in step 1 are marked as edge pixels, and the edge pixels are set to higher pixel values.
5. The automatic acquisition method of a deep learning sample library corresponding to remote sensing image land type identification according to claim 4, characterized in that in step 2 the marked points are extracted with the following formula:
m(h, l) = 1, if g(h, l) < T(h, l); m(h, l) = 0, if g(h, l) ≥ T(h, l)
where h and l are respectively the row number and the column number of the pixel; g(h, l) is the gradient value of the pixel; T(h, l) is the threshold corresponding to the pixel; a value of m(h, l) = 1 indicates a marked point, and a value of 0 indicates an unmarked point.
6. The automatic acquisition method of a deep learning sample library corresponding to remote sensing image land type identification according to claim 1, characterized in that in step 3 a mask is allocated to each closed region with all pixel values set to 0, and each mask records, in the form of a file path, the land class hierarchy and name to which the corresponding closed region belongs in the current land use status.
7. The automatic acquisition method of a deep learning sample library corresponding to remote sensing image land type identification according to claim 6, characterized in that in step 3 the obtained marked points are used as seed points to flood fill the area within each closed edge until all pixels within the edge constraint are marked.
8. The automatic acquisition method of a deep learning sample library corresponding to remote sensing image land type identification according to claim 7, characterized in that in step 3, during the flood filling process, for the mask corresponding to each closed region, the mask pixel values corresponding to the already marked pixels are set to 1 and the remaining mask pixel values are 0.
9. The automatic acquisition method of a deep learning sample library corresponding to remote sensing image land type identification according to claim 8, characterized in that in step 4 the image of each closed region is extracted according to the obtained mask, and each image is saved according to the file path and name recorded by the mask, generating the training sample libraries corresponding to the different categories.
CN201810026909.2A 2018-01-11 2018-01-11 Automatic acquisition method of deep learning sample library corresponding to remote sensing image land type identification Active CN108363951B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810026909.2A CN108363951B (en) 2018-01-11 2018-01-11 Automatic acquisition method of deep learning sample library corresponding to remote sensing image land type identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810026909.2A CN108363951B (en) 2018-01-11 2018-01-11 Automatic acquisition method of deep learning sample library corresponding to remote sensing image land type identification

Publications (2)

Publication Number Publication Date
CN108363951A CN108363951A (en) 2018-08-03
CN108363951B true CN108363951B (en) 2022-02-22

Family

ID=63010981

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810026909.2A Active CN108363951B (en) 2018-01-11 2018-01-11 Automatic acquisition method of deep learning sample library corresponding to remote sensing image land type identification

Country Status (1)

Country Link
CN (1) CN108363951B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109657540B (en) * 2018-11-06 2020-11-27 北京农业信息技术研究中心 Dead tree location method and system
CN109657728B (en) * 2018-12-26 2021-03-30 江苏省基础地理信息中心 Sample production method and model training method
CN110363798B (en) * 2019-07-24 2022-02-18 宁波市测绘和遥感技术研究院 Method for generating remote sensing image interpretation sample set
CN111091054B (en) * 2019-11-13 2020-11-10 广东国地规划科技股份有限公司 Method, system and device for monitoring land type change and storage medium
CN111563928B (en) * 2020-03-26 2021-05-25 广东省国土资源测绘院 Exception photo abnormity identification and reminding method and system
CN111597377B (en) * 2020-04-08 2021-05-11 广东省国土资源测绘院 Deep learning technology-based field investigation method and system
CN113223042B (en) * 2021-05-19 2021-11-05 自然资源部国土卫星遥感应用中心 Intelligent acquisition method and equipment for remote sensing image deep learning sample
CN117788982A (en) * 2024-02-26 2024-03-29 中国铁路设计集团有限公司 Large-scale deep learning data set manufacturing method based on railway engineering topography result

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103077400B (en) * 2012-12-26 2015-11-25 中国土地勘测规划院 The ground category information remote sensing automatic identifying method that Land Use Database is supported
CN103546726B (en) * 2013-10-28 2017-02-08 东南大学 Method for automatically discovering illegal land use
CN104484682A (en) * 2014-12-31 2015-04-01 中国科学院遥感与数字地球研究所 Remote sensing image classification method based on active deep learning
CN105678818A (en) * 2016-03-08 2016-06-15 浙江工商大学 Method for extracting estuary intertidal zone classification area by using object-oriented classification technology
CN105956058B (en) * 2016-04-27 2019-05-21 东南大学 A kind of variation land used rapid discovery method using unmanned aerial vehicle remote sensing images
CN107133360B (en) * 2017-05-31 2021-02-02 东南大学 Construction method of large-scale remote sensing image feature point library

Also Published As

Publication number Publication date
CN108363951A (en) 2018-08-03

Similar Documents

Publication Publication Date Title
CN108363951B (en) Automatic acquisition method of deep learning sample library corresponding to remote sensing image land type identification
EP3896496B1 (en) Cuttings imaging for determining geological properties
Ping et al. A deep learning approach for street pothole detection
Xiao et al. Change detection of built-up land: A framework of combining pixel-based detection and object-based recognition
CN111598101B (en) Urban area intelligent extraction method, system and equipment based on remote sensing image scene segmentation
CN110059694A (en) The intelligent identification Method of lteral data under power industry complex scene
CN109033998A (en) Remote sensing image atural object mask method based on attention mechanism convolutional neural networks
CN112287807A (en) A road extraction method from remote sensing images based on multi-branch pyramid neural network
CN112084871B (en) High-resolution remote sensing target boundary extraction method based on weak supervised learning
CN108629777A (en) A kind of number pathology full slice image lesion region automatic division method
CN111160205A (en) Embedded multi-class target end-to-end unified detection method for traffic scene
CN113674216A (en) Subway tunnel disease detection method based on deep learning
CN112836614A (en) A high-score remote sensing image classification method based on residual network and transfer learning
CN107622239A (en) Detection method for remote sensing image specified building area constrained by hierarchical local structure
CN112749673A (en) Method and device for intelligently extracting stock of oil storage tank based on remote sensing image
CN117636160A (en) An automatic update method for high-scoring remote sensing cultivated land plots based on semi-supervised learning
CN116433634A (en) Industrial image anomaly detection method based on domain self-adaption
CN116681657A (en) Asphalt pavement disease detection method based on improved YOLOv7 model
Bickler et al. Scaling up deep learning to identify earthwork sites in Te Tai Tokerau, Northland, New Zealand
CN112784806A (en) Lithium-containing pegmatite vein extraction method based on full convolution neural network
He et al. Building extraction based on U-net and conditional random fields
CN117612136A (en) Automatic driving target detection method based on increment small sample learning
CN116543298A (en) Building Extraction Method of Remote Sensing Image Based on Fractal Geometric Features and Edge Supervision
CN116958801A (en) Karst cave identification method for open-air outcrop data
CN115423798A (en) Defect identification method, defect identification device, computer equipment, storage medium and computer program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant