CN110443822A - Semantic-edge-assisted fine extraction method for high-resolution remote sensing targets - Google Patents
Semantic-edge-assisted fine extraction method for high-resolution remote sensing targets
- Publication number
- CN110443822A (application number CN201910638370.0A)
- Authority
- CN
- China
- Prior art keywords
- target
- boundary
- edge
- image
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Biomedical Technology (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Astronomy & Astrophysics (AREA)
- Remote Sensing (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
Technical Field
The invention belongs to the field of remote sensing and to target detection and edge extraction in computer vision, and proposes a two-stage fine extraction method for remote sensing targets that combines deep-learning-based target detection with edge detection.
Technical Background
Target extraction is an important means of deriving information from remote sensing data. As the spatial resolution of remote sensing imagery keeps improving, the range of observable targets and the accuracy required of their extraction rise accordingly. Although instance segmentation from general computer vision can cover some applications, it falls far short of the accuracy demanded of real ground-object boundaries; for example, the widely used two-stage detection-then-mask-segmentation methods (such as Mask R-CNN) are hard to apply directly to fine target extraction from high-resolution remote sensing imagery. In fact, deep-learning-based target detection only has to locate a target and determine its bounding box, so its accuracy is generally high. What actually limits extraction quality is the fine determination of the real object boundary in the mask segmentation stage: because ground objects vary in complex ways in spectral appearance, spatial shape, and scale, recognizing boundaries from the local data inside a detection box is still difficult for current deep segmentation models. This is the main improvement pursued by the present method: semantic edges are introduced to raise the accuracy of target boundary extraction, thereby achieving fine extraction of high-resolution remote sensing targets as a whole and reaching practical usability.
Since the advent of deep learning, mainstream target detection algorithms have fallen roughly into two categories: region-proposal-based detection convolutional neural networks and end-to-end one-stage detection convolutional neural networks. End-to-end methods pursue faster detection by taking the result of a single pass as the final detection, so their accuracy is less satisfactory than that of region-proposal methods trained in multiple stages. Among region-proposal methods, R-CNN was the pioneering work: it replaced the tedious exhaustive sliding window with selective search to extract a small number of suitable region candidates, normalized the candidates to a fixed size, extracted features with a deep convolutional neural network, and finally classified them with an SVM to obtain the detection result. Later detectors such as Fast R-CNN, Faster R-CNN, FPN, and Mask R-CNN are targeted improvements on this basis. Because target detection only has to locate and classify targets, it achieves good detection accuracy.
Edge detection must determine target boundaries precisely, which is technically harder than merely locating targets. Among deep-learning edge detectors, RCF is a comparatively effective convolutional neural network model. Its main idea is to fine-tune a VGG backbone: the multiple convolutional layers in each of VGG's five stages are reduced in dimension to obtain a per-stage feature fusion map, each stage's map is deconvolved and its loss computed separately, and finally the five stage maps are fused across scales into a final feature fusion map on which a loss is also computed. Using the features of all convolutional layers improves on HED, which uses only the last convolutional layer of each stage. For strong boundaries the edges carry fairly rich semantic information, and convolution aggregates global context, so edge detection works well. For weak boundaries, however, the target edge provides little effective semantic information; shadows of tall objects may even erase the distinction between target and non-target edges entirely, so the network cannot recognize the target edge and the extracted edge breaks. In practice a broken edge cannot be regarded as a complete boundary extraction, so boundary repair aimed at boundary completeness is a very important part of the target boundary extraction task.
The present method attempts to combine the strengths of target detection and edge extraction: under the guidance of the high-accuracy detection bounding box, the edge detection result is used to locate the target boundary finely, while occluded or corrupted edges are actively inferred and connected to completion based on the detection result, achieving fine extraction of remote sensing targets.
Summary of the Invention
The invention seeks to overcome the inaccurate and incomplete edge extraction of ground-object targets in high-resolution remote sensing imagery caused by blurred object boundaries, mutual occlusion, and similar factors, and proposes an edge-assisted fine extraction method for high-resolution remote sensing targets.
Through edge assistance the invention overcomes the inaccurate mask boundaries of general instance segmentation, while the two-stage process, aided by the bounding box, keeps the target boundary complete and overcomes the broken lines to which general edge extraction is prone, finally achieving fine extraction of high-resolution remote sensing targets.
To achieve the above object, the technical scheme proposed by the invention is as follows:
An edge-assisted fine extraction method for high-resolution remote sensing targets, comprising the following steps:
Step 1: Prepare remote sensing target extraction samples, tracing the fine boundary of each target against the image and determining its type. Two kinds of annotation are generated for the same image sample: target bounding-box labels as required by the target detection model, and target boundary labels as required by the edge detection model. Specifically:
Step 1.1: Acquire high-resolution remote sensing imagery: use optical satellite data from visible/near-infrared sensors or aerial data from ordinary optical cameras; depending on the resolution requirement, multispectral imagery may be used directly or fused with the panchromatic band;
Step 1.2: Crop the imagery: select areas containing typical targets within the production region and crop the imagery to a uniform pixel size;
Step 1.3: Produce deep-learning training samples: draw the samples in ArcGIS or other GIS software, annotating the boundary of every ground-object target to obtain the corresponding .shp file; generate bounding-box labels as required by the target detection model and boundary labels as required by the edge detection model;
Step 1.4: Collect more than 200 images with corresponding annotations as training samples according to the task requirements, and prepare separate test samples for accuracy evaluation;
Step 2: Train the target detection model and the edge detection model with the prepared image samples: train a Faster R-CNN network to obtain the target detection model and an RCF network to obtain the edge detection model; the networks may be replaced or modified according to the production target. Specifically:
Step 2.1: Design the deep convolutional neural networks: to train the target detection model and the edge detection model, the two networks RCF and Faster R-CNN are chosen; they may be replaced or modified according to the production target;
Step 2.2: Initialize weights: initialize the RCF network with a VGG pre-trained model, and initialize Faster R-CNN with a model pre-trained on the COCO dataset;
Step 2.3: Set training hyperparameters: configure the hyperparameters, with the specific values fixed after model tuning;
RCF training parameters: iterations = 8000, batch_size = 4, learning-rate policy = step, step milestones = [3200, 4800, 6400, 8000], initial learning rate = 0.001, learning-rate decay factor = 0.1;
Faster R-CNN training parameters: training stages = 3, epochs per stage = [40, 120, 40], iterations per epoch = 1000, validation interval = 50, batch_size = 4, learning-rate policy = step, step size = 1000, initial learning rate = 0.001, momentum = 0.9, weight decay = 0.0001;
Step 2.4: Input the samples and train the models: feed the training samples to the RCF model and train with the hyperparameters of Step 2.3 to obtain an edge detection model able to extract the edge contours of ground-object targets; feed the training samples to Faster R-CNN and train with the hyperparameters of Step 2.3 to obtain a target detection model able to extract the bounding boxes of ground-object targets;
Step 3: Feed the image samples used for testing to the target detection model and the edge detection model to obtain the target bounding boxes and the edge intensity map. The edge intensity map indicates how likely each position of the remote sensing image is to be a target edge; a bounding box is a rectangle marking the location range and the type of a target. Specifically:
Step 3.1: Feed the high-resolution remote sensing image to the target detection model to obtain the rectangular bounding boxes of the ground-object targets; feed it to the edge detection model to obtain the target edge intensity map;
Step 3.2: Parameterize the bounding box: convert the rectangular bounding box output by the detection model into its lower-left and upper-right vertex coordinates. Specifically:
x1 = x - w/2, y1 = y - h/2
x2 = x + w/2, y2 = y + h/2
where x, y, w, h denote the center abscissa, center ordinate, width, and height of the rectangular bounding box, respectively;
Step 4: Thin the edge intensity map of Step 3 into a boundary one pixel wide. Specifically:
Step 4.1: Convert the target edge intensity map from a grayscale image into a binary image according to a set threshold. Let binary_image(x, y) denote the edge decision at image position (x, y) (1 for edge, 0 for non-edge); it can be expressed as:
binary_image(x, y) = 1 if intensity(x, y) ≥ threshold, and 0 otherwise,
where threshold is a real number in the interval [0, 1] that may be set by the user (initial default 0.5), and x, y are the image coordinates.
Step 4.2: For edge lines several pixels wide in the binary image, repeatedly erode toward the centerline of each edge line until every edge line is exactly one pixel wide, achieving skeleton extraction.
Step 5: Constrained by the target bounding boxes of Step 3, repair the single-pixel-wide boundaries of Step 4 to obtain complete, fine polygonal target boundaries.
Specifically:
Step 5.1: Find the unclosed parts of the target boundary lines and divide them into three types: boundary breaks at the image border (the boundary is incomplete at the border because of the skeleton extraction algorithm), boundary breaks inside the image (parts the edge detection model failed to recognize correctly), and dangling line ends inside the image (spurious boundary stubs on the target boundary produced by the skeleton extraction algorithm).
Step 5.2: Handle the three types of boundary breaks with three different methods.
Step 5.2.1: For boundary breaks at the image border, use the edge intensity map at the corresponding position as guidance and fill the break with pixels of high edge intensity; if a gap to the image border remains, draw a straight line perpendicular to the border, connect it to the broken target boundary, and join it with the image border to form a closed target boundary line.
Step 5.2.2: For boundary breaks inside the image, reset the threshold and, guided by the edge intensity map at the corresponding position, map the pixels whose intensity exceeds the new threshold into the binary image; if the target boundary is still not closed, connect the two endpoints in keeping with the original geometric characteristics. Repairing a break according to the original geometry proceeds as follows.
Step 5.2.2.1: Extract the innermost boundary line connected to the two endpoints.
Step 5.2.2.2: Divide the innermost boundary line into several parts according to changes of slope, determining the approximate geometric shape of the boundary line.
Step 5.2.2.3: Repair the break into a closed figure according to that geometric shape, so that the repaired part preserves the original geometric characteristics.
Step 5.2.3: Delete isolated boundary lines inside the image, i.e. boundary lines far away from all other broken lines.
Step 5.3: Perform skeleton extraction again, thinning the parts filled in Step 5.2 into single-pixel-wide target boundary lines.
Step 5.4: Traverse the image once more and delete all target boundary lines that do not close, obtaining the complete, fine polygonal target boundaries.
By adopting the above technical scheme, the invention has the following advantages and beneficial effects:
1. By combining a target detection model with an edge detection model, the invention both avoids the inaccurate mask boundaries of general instance segmentation algorithms and solves the broken lines and coarse contours of general edge extraction algorithms, guaranteeing complete and refined target boundaries.
2. Compared with traditional manual identification of remote sensing targets, the invention is much faster: a remote sensing image with several hundred targets is processed in under a second, whereas manual drawing often takes several hours. It is also more accurate, since the end-to-end design avoids the accuracy loss caused by slips in manual drawing.
3. The invention uses deep neural networks from machine learning: simply retraining on new samples is enough to recognize many kinds of targets and extract their boundaries, with no need to redesign the algorithm, giving strong reusability and robustness.
Brief Description of the Drawings
Figure 1 is a schematic diagram of the edge-assisted fine extraction method for high-resolution remote sensing targets of the invention;
Figure 2 is a target detection model sample in an embodiment of the invention;
Figure 3 is an edge detection model sample in an embodiment of the invention;
Figure 4 is an example prediction of the target detection model in an embodiment of the invention;
Figure 5 is an example prediction of the edge detection model in an embodiment of the invention;
Figure 6 is an example comparison of final boundary results in an embodiment of the invention.
Detailed Description of the Embodiments
To make the objectives, technical solutions, and advantages of the invention clearer, the invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the invention and do not limit it; the described embodiments are only some, not all, of the embodiments of the invention. The components of the embodiments of the invention, as generally described in the drawings here, can be arranged and designed in a variety of configurations.
Therefore, the following detailed description of the embodiments of the invention provided in the drawings is not intended to limit the claimed scope of the invention, but merely represents selected embodiments. All other embodiments obtained by those skilled in the art from the embodiments of the invention without creative work fall within the protection scope of the invention.
Figure 1 is a schematic diagram of the edge-assisted fine extraction method for high-resolution remote sensing targets.
Referring to Figure 1, a preferred embodiment of the invention comprises the following steps:
Step 1: Prepare remote sensing target extraction samples, tracing the fine boundary of each target against the image and determining its type;
Step 2: Train the target detection model and the edge detection model with the prepared image samples;
Step 3: Feed the image samples used for testing to the target detection model and the edge detection model to obtain the target bounding boxes and the edge intensity map;
Step 4: Thin the edge intensity map of Step 3 into a boundary one pixel wide;
Step 5: Constrained by the target bounding boxes of Step 3, repair the single-pixel-wide boundary of Step 4 to obtain complete, fine polygonal target boundaries.
According to the above embodiment, the detailed steps of Step 1 are as follows:
Step 1.1: Acquire high-resolution remote sensing imagery: use optical satellite data from visible/near-infrared sensors or aerial data from ordinary optical cameras; depending on the resolution requirement, multispectral imagery may be used directly or fused with the panchromatic band.
Step 1.2: Crop the imagery: select areas containing typical targets within the production region and crop the imagery uniformly to 512*512 pixels (the crop extents should be somewhat random and cover the production region widely).
Step 1.3: Produce deep-learning training samples: draw the samples in ArcGIS or other GIS software. In this embodiment the boundary of every ground-object target is annotated to obtain the corresponding .shp file; bounding-box labels are generated as required by the target detection model, as shown in Figure 2, and boundary labels as required by the edge detection model, as shown in Figure 3 (a sketch of this label generation follows).
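The patent does not prescribe tooling for deriving the two label types from the annotated polygons; the following is a minimal sketch of one possible conversion, assuming geopandas, rasterio, and numpy are available, that the image tile and the .shp file share one coordinate reference system, and that the file names are illustrative.

```python
import geopandas as gpd
import numpy as np
import rasterio
from rasterio.features import rasterize
from rasterio.transform import rowcol

# Assumed inputs: one cropped image tile and its polygon annotations (same CRS).
with rasterio.open("tile.tif") as src:
    transform, shape = src.transform, (src.height, src.width)

polys = gpd.read_file("targets.shp")

# Bounding-box labels for the detector: pixel-space (xmin, ymin, xmax, ymax).
boxes = []
for geom in polys.geometry:
    xmin, ymin, xmax, ymax = geom.bounds
    r0, c0 = rowcol(transform, xmin, ymax)  # top-left pixel
    r1, c1 = rowcol(transform, xmax, ymin)  # bottom-right pixel
    boxes.append((c0, r0, c1, r1))

# Boundary labels for the edge model: rasterized polygon outlines.
edges = rasterize(
    [(geom.boundary, 1) for geom in polys.geometry],
    out_shape=shape, transform=transform, fill=0, dtype="uint8",
)
np.save("edge_label.npy", edges)
```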
Step 1.4: Generally collect more than 200 images with corresponding annotations as training samples according to the task requirements; if accuracy is to be evaluated, test samples can be prepared separately.
The detailed steps of Step 2 are as follows:
According to the above embodiment, Step 2.1: Design the deep convolutional neural networks: to train the target detection model and the edge detection model, the two networks RCF and Faster R-CNN are chosen in the invention; they may be replaced or modified according to the production target.
Step 2.2: Initialize weights: initialize the RCF network with a VGG pre-trained model, and initialize Faster R-CNN with a model pre-trained on the COCO dataset.
Step 2.3: Set training hyperparameters: configure the hyperparameters; the tuned values are given below.
RCF training parameters: iterations = 8000, batch_size = 4, learning-rate policy = step, step milestones = [3200, 4800, 6400, 8000], initial learning rate = 0.001, learning-rate decay factor = 0.1 (an illustrative sketch of this schedule appears after the Faster R-CNN parameters below).
Faster R-CNN training parameters: training stages = 3, epochs per stage = [40, 120, 40], iterations per epoch = 1000, validation interval = 50, batch_size = 4, learning-rate policy = step, step size = 1000, initial learning rate = 0.001, momentum = 0.9, weight decay = 0.0001.
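As an illustration only (the patent does not name a training framework), the RCF step schedule above can be reproduced in PyTorch as follows; the model, data loading, and loss computation are omitted, and the placeholder parameter is an assumption.

```python
import torch

# Placeholder parameter; in practice these are the RCF network weights.
params = [torch.nn.Parameter(torch.zeros(1))]

# Initial LR 0.001, decayed by a factor of 0.1 at the listed milestones.
optimizer = torch.optim.SGD(params, lr=0.001)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[3200, 4800, 6400, 8000], gamma=0.1)

for iteration in range(8000):
    # ... forward pass, per-stage RCF losses, backward pass ...
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()  # one scheduler step per training iteration
```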
Step 2.4: Input the samples and train the models: feed the training samples to the RCF model and train with the hyperparameters of Step 2.3 to obtain an edge detection model able to extract the edge contours of ground-object targets; feed the training samples to Faster R-CNN and train with the hyperparameters of Step 2.3 to obtain a target detection model able to extract the bounding boxes of ground-object targets.
According to the above embodiment, the detailed steps of Step 3 are as follows:
Step 3.1: Feed the high-resolution remote sensing image to the target detection model to obtain the rectangular bounding boxes of the ground-object targets, as shown in Figure 4; feed it to the edge detection model to obtain the target edge intensity map, as shown in Figure 5.
Step 3.2: Parameterize the bounding box: convert the rectangular bounding box output by the detection model into its lower-left and upper-right vertex coordinates. Specifically:
x1 = x - w/2, y1 = y - h/2
x2 = x + w/2, y2 = y + h/2
where x, y, w, h denote the center abscissa, center ordinate, width, and height of the rectangular bounding box, respectively (a sketch of this conversion follows).
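A minimal sketch of the conversion, assuming the detector emits (x, y, w, h) boxes with full width and height (hence the halving); the function name is illustrative:

```python
def center_to_corners(x, y, w, h):
    """Convert a center/size box to its lower-left and upper-right corners."""
    x1, y1 = x - w / 2, y - h / 2
    x2, y2 = x + w / 2, y + h / 2
    return (x1, y1), (x2, y2)

# Example: a 40x20 box centered at (100, 50) -> ((80.0, 40.0), (120.0, 60.0))
print(center_to_corners(100, 50, 40, 20))
```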
According to the above embodiment, the detailed steps of Step 4 are as follows:
Step 4.1: Convert the target edge intensity map from a grayscale image into a binary image according to a set threshold. Let binary_image(x, y) denote the edge decision at image position (x, y) (1 for edge, 0 for non-edge); it can be expressed as:
binary_image(x, y) = 1 if intensity(x, y) ≥ threshold, and 0 otherwise,
where threshold is a real number in the interval [0, 1] that may be set by the user (initial default 0.5), and x, y are the image coordinates.
Step 4.2: For edge lines several pixels wide in the binary image, repeatedly erode toward the centerline of each edge line until every edge line is exactly one pixel wide, achieving skeleton extraction (see the sketch after this step).
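Steps 4.1 and 4.2 together can be sketched with numpy and scikit-image; skeletonize stands in for the patent's centerline-erosion loop, which may differ in detail, and the random demo input is only a placeholder for the RCF edge intensity map.

```python
import numpy as np
from skimage.morphology import skeletonize

def thin_edges(intensity, threshold=0.5):
    """Binarize an edge intensity map in [0, 1] and thin it to 1-px width."""
    binary_image = (intensity >= threshold).astype(np.uint8)  # Step 4.1
    skeleton = skeletonize(binary_image.astype(bool))         # Step 4.2
    return skeleton.astype(np.uint8)

# Demo on random data; the real input is the model's edge intensity map.
demo = thin_edges(np.random.rand(512, 512), threshold=0.5)
```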
According to the above embodiment, the detailed steps of Step 5 are as follows:
Step 5.1: Find the unclosed parts of the target boundary lines and divide them into three types: boundary breaks at the image border (the boundary is incomplete at the border because of the skeleton extraction algorithm), boundary breaks inside the image (parts the edge detection model failed to recognize correctly), and dangling line ends inside the image (spurious boundary stubs on the target boundary produced by the skeleton extraction algorithm).
Step 5.2: Handle the three types of boundary breaks with three different methods.
Step 5.2.1: For boundary breaks at the image border, use the edge intensity map at the corresponding position as guidance and fill the break with pixels of high edge intensity; if a gap to the image border remains, draw a straight line perpendicular to the border, connect it to the broken target boundary, and join it with the image border to form a closed target boundary line.
Step 5.2.2: For boundary breaks inside the image, reset the threshold and, guided by the edge intensity map at the corresponding position, map the pixels whose intensity exceeds the new threshold into the binary image; if the target boundary is still not closed, connect the two endpoints in keeping with the original geometric characteristics (a simplified sketch appears after Step 5.2.3). Repairing a break according to the original geometry proceeds as follows.
Step 5.2.2.1: Extract the innermost boundary line connected to the two endpoints.
Step 5.2.2.2: Divide the innermost boundary line into several parts according to changes of slope, determining the approximate geometric shape of the boundary line.
Step 5.2.2.3: Repair the break into a closed figure according to that geometric shape, so that the repaired part preserves the original geometric characteristics.
Step 5.2.3: Delete isolated boundary lines inside the image, i.e. boundary lines far away from all other broken lines.
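The patent leaves the geometry-preserving connection of Step 5.2.2 abstract; as a loud simplification, the sketch below re-thresholds and then, if exactly one break remains, joins the two endpoints with a straight segment via OpenCV. Endpoints are detected as skeleton pixels with exactly one 8-connected neighbor; the 0.3 fallback threshold and function names are assumptions.

```python
import cv2
import numpy as np

# Center weighted 10 so an endpoint (itself + one neighbor) scores exactly 11.
KERNEL = np.array([[1, 1, 1], [1, 10, 1], [1, 1, 1]], dtype=np.uint8)

def endpoints(skel):
    """Pixels of a 1-px-wide skeleton that have exactly one 8-neighbor."""
    hits = cv2.filter2D(skel, -1, KERNEL)
    return np.argwhere(hits == 11)

def repair_interior_break(skel, intensity, low_threshold=0.3):
    """Simplified Step 5.2.2: re-threshold, then bridge a leftover break."""
    skel = skel | (intensity >= low_threshold).astype(np.uint8)
    pts = endpoints(skel)
    if len(pts) == 2:  # one remaining break: connect with a straight segment
        (r0, c0), (r1, c1) = pts
        cv2.line(skel, (int(c0), int(r0)), (int(c1), int(r1)), color=1)
    return skel
```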
Step 5.3: Perform skeleton extraction again, thinning the parts filled in Step 5.2 into single-pixel-wide target boundary lines.
Step 5.4: Traverse the image once more and delete all target boundary lines that do not close, obtaining the complete, fine ground-object boundary shown in Figure 6-A; Figure 6-B shows the boundary predicted by Mask R-CNN, and the comparison shows that the invention has a clearly visible accuracy advantage. A sketch of the unclosed-line removal follows.
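One concrete way to realize the unclosed-line deletion of Step 5.4 (an assumption, not the patent's prescribed algorithm) is iterative pruning: repeatedly deleting skeleton pixels that have fewer than two 8-connected neighbors removes every open line, while closed loops survive intact.

```python
import numpy as np
from scipy.ndimage import convolve

NEIGHBORS = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=np.uint8)

def drop_unclosed(skel):
    """Iteratively prune dangling pixels; only closed boundary loops remain."""
    skel = skel.astype(np.uint8).copy()
    while True:
        counts = convolve(skel, NEIGHBORS, mode="constant", cval=0)
        dangling = (skel == 1) & (counts < 2)
        if not dangling.any():
            return skel
        skel[dangling] = 0
```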
What is described in the embodiments of this specification merely enumerates realization forms of the inventive concept; the protection scope of the invention should not be regarded as limited to the specific forms stated in the embodiments, but extends equally to equivalent technical means that those skilled in the art can conceive from the inventive concept.
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910638370.0A CN110443822B (en) | 2019-07-16 | 2019-07-16 | Semantic-edge-assisted fine extraction method for high-resolution remote sensing targets |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910638370.0A CN110443822B (en) | 2019-07-16 | 2019-07-16 | Semantic-edge-assisted fine extraction method for high-resolution remote sensing targets |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110443822A true CN110443822A (en) | 2019-11-12 |
CN110443822B CN110443822B (en) | 2021-02-02 |
Family
ID=68430338
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910638370.0A Active CN110443822B (en) | 2019-07-16 | 2019-07-16 | Semantic-edge-assisted fine extraction method for high-resolution remote sensing targets |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110443822B (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111818557A (en) * | 2020-08-04 | 2020-10-23 | 中国联合网络通信集团有限公司 | Network coverage problem identification method, device and system |
CN111967526A (en) * | 2020-08-20 | 2020-11-20 | 东北大学秦皇岛分校 | Remote sensing image change detection method and system based on edge mapping and deep learning |
CN112084872A (en) * | 2020-08-10 | 2020-12-15 | 浙江工业大学 | High-resolution remote sensing target accurate detection method fusing semantic segmentation and edge |
CN112084871A (en) * | 2020-08-10 | 2020-12-15 | 浙江工业大学 | High-resolution remote sensing target boundary extraction method based on weak supervised learning |
CN113128388A (en) * | 2021-04-14 | 2021-07-16 | 湖南大学 | Optical remote sensing image change detection method based on space-time spectrum characteristics |
CN113160258A (en) * | 2021-03-31 | 2021-07-23 | 武汉汉达瑞科技有限公司 | Method, system, server and storage medium for extracting building vector polygon |
CN114219816A (en) * | 2021-11-22 | 2022-03-22 | 浙江工业大学 | A Semantic Edge Extraction Method for High-scoring Remote Sensing Images Based on Transfer Learning |
CN114241326A (en) * | 2022-02-24 | 2022-03-25 | 自然资源部第三地理信息制图院 | Progressive intelligent production method and system for ground feature elements of remote sensing images |
CN114485694A (en) * | 2020-11-13 | 2022-05-13 | 元平台公司 | System and method for automatically detecting building coverage area |
CN115273154A (en) * | 2022-09-26 | 2022-11-01 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Thermal infrared pedestrian detection method and system based on edge reconstruction and storage medium |
CN115797633A (en) * | 2022-12-02 | 2023-03-14 | 中国科学院空间应用工程与技术中心 | Remote sensing image segmentation method, system, storage medium and electronic equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7341841B2 (en) * | 2003-07-12 | 2008-03-11 | Accelr8 Technology Corporation | Rapid microbial detection and antimicrobial susceptibility testing |
CN103247032A (en) * | 2013-04-26 | 2013-08-14 | 中国科学院光电技术研究所 | Weak extended target positioning method based on attitude compensation |
US20180150958A1 (en) * | 2016-11-30 | 2018-05-31 | Brother Kogyo Kabushiki Kaisha | Image processing apparatus, method and computer-readable medium for binarizing scanned data |
US20180253828A1 (en) * | 2017-03-03 | 2018-09-06 | Brother Kogyo Kabushiki Kaisha | Image processing apparatus that specifies edge pixel in target image by calculating edge strength |
CN109712140A (en) * | 2019-01-02 | 2019-05-03 | 中楹青创科技有限公司 | Method and device of the training for the full link sort network of evaporating, emitting, dripping or leaking of liquid or gas detection |
- 2019-07-16: application CN201910638370.0A filed; granted as patent CN110443822B (status: Active)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7341841B2 (en) * | 2003-07-12 | 2008-03-11 | Accelr8 Technology Corporation | Rapid microbial detection and antimicrobial susceptibility testing |
CN103247032A (en) * | 2013-04-26 | 2013-08-14 | 中国科学院光电技术研究所 | Weak extended target positioning method based on attitude compensation |
US20180150958A1 (en) * | 2016-11-30 | 2018-05-31 | Brother Kogyo Kabushiki Kaisha | Image processing apparatus, method and computer-readable medium for binarizing scanned data |
US20180253828A1 (en) * | 2017-03-03 | 2018-09-06 | Brother Kogyo Kabushiki Kaisha | Image processing apparatus that specifies edge pixel in target image by calculating edge strength |
CN109712140A (en) * | 2019-01-02 | 2019-05-03 | 中楹青创科技有限公司 | Method and device of the training for the full link sort network of evaporating, emitting, dripping or leaking of liquid or gas detection |
Non-Patent Citations (2)
Title |
---|
Tianjun Wu et al.: "Prior Knowledge-Based Automatic Object-Oriented Hierarchical Classification for Updating Detailed Land Cover Maps", ISRS *
Jia Tao: "Research on Target Detection Technology Based on Convolutional Neural Networks", China Master's Theses Full-text Database, Information Science and Technology *
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111818557A (en) * | 2020-08-04 | 2020-10-23 | 中国联合网络通信集团有限公司 | Network coverage problem identification method, device and system |
CN112084872A (en) * | 2020-08-10 | 2020-12-15 | 浙江工业大学 | High-resolution remote sensing target accurate detection method fusing semantic segmentation and edge |
CN112084871A (en) * | 2020-08-10 | 2020-12-15 | 浙江工业大学 | High-resolution remote sensing target boundary extraction method based on weak supervised learning |
CN112084872B (en) * | 2020-08-10 | 2024-12-31 | 浙江工业大学 | A high-resolution remote sensing target accurate detection method integrating semantic segmentation and edge |
CN112084871B (en) * | 2020-08-10 | 2024-02-13 | 浙江工业大学 | High-resolution remote sensing target boundary extraction method based on weak supervised learning |
CN111967526B (en) * | 2020-08-20 | 2023-09-22 | 东北大学秦皇岛分校 | Remote sensing image change detection method and system based on edge mapping and deep learning |
CN111967526A (en) * | 2020-08-20 | 2020-11-20 | 东北大学秦皇岛分校 | Remote sensing image change detection method and system based on edge mapping and deep learning |
CN114485694A (en) * | 2020-11-13 | 2022-05-13 | 元平台公司 | System and method for automatically detecting building coverage area |
CN113160258A (en) * | 2021-03-31 | 2021-07-23 | 武汉汉达瑞科技有限公司 | Method, system, server and storage medium for extracting building vector polygon |
CN113128388A (en) * | 2021-04-14 | 2021-07-16 | 湖南大学 | Optical remote sensing image change detection method based on space-time spectrum characteristics |
CN113128388B (en) * | 2021-04-14 | 2022-09-02 | 湖南大学 | Optical remote sensing image change detection method based on space-time spectrum characteristics |
CN114219816A (en) * | 2021-11-22 | 2022-03-22 | 浙江工业大学 | A Semantic Edge Extraction Method for High-scoring Remote Sensing Images Based on Transfer Learning |
CN114241326A (en) * | 2022-02-24 | 2022-03-25 | 自然资源部第三地理信息制图院 | Progressive intelligent production method and system for ground feature elements of remote sensing images |
CN115273154A (en) * | 2022-09-26 | 2022-11-01 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Thermal infrared pedestrian detection method and system based on edge reconstruction and storage medium |
CN115273154B (en) * | 2022-09-26 | 2023-01-17 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Thermal infrared pedestrian detection method and system based on edge reconstruction and storage medium |
CN115797633A (en) * | 2022-12-02 | 2023-03-14 | 中国科学院空间应用工程与技术中心 | Remote sensing image segmentation method, system, storage medium and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN110443822B (en) | 2021-02-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110443822A (en) | A kind of high score remote sensing target fine extracting method of semanteme edge auxiliary | |
CN111563442B (en) | Slam method and system for fusing point cloud and camera image data based on laser radar | |
Wei et al. | Toward automatic building footprint delineation from aerial images using CNN and regularization | |
CN112084872B (en) | A high-resolution remote sensing target accurate detection method integrating semantic segmentation and edge | |
CN109255776B (en) | An automatic identification method for cotter pin defects in transmission lines | |
CN109615611B (en) | Inspection image-based insulator self-explosion defect detection method | |
Huang et al. | Morphological building/shadow index for building extraction from high-resolution imagery over urban areas | |
US8155391B1 (en) | Semi-automatic extraction of linear features from image data | |
CN103400151B (en) | The optical remote sensing image of integration and GIS autoregistration and Clean water withdraw method | |
CA3174351A1 (en) | Feature extraction from mobile lidar and imagery data | |
CN103049763B (en) | Context-constraint-based target identification method | |
CN110321815A (en) | A kind of crack on road recognition methods based on deep learning | |
CN110378196A (en) | A kind of road vision detection method of combination laser point cloud data | |
Zhao et al. | Road network extraction from airborne LiDAR data using scene context | |
CN103839267B (en) | Building extracting method based on morphological building indexes | |
CN105180890A (en) | Rock mass structural plane attitude measuring method integrating laser point cloud and digital image | |
Chen et al. | Automatic building extraction via adaptive iterative segmentation with LiDAR data and high spatial resolution imagery fusion | |
CN103020970A (en) | Corn ear image grain segmentation method | |
CN111027538A (en) | Container detection method based on instance segmentation model | |
CN110309808A (en) | An adaptive smoke root node detection method in a large scale space | |
CN115731257A (en) | Image-based Leaf Shape Information Extraction Method | |
CN115512247A (en) | Regional building damage grade assessment method based on image multi-parameter extraction | |
CN112146647B (en) | Binocular vision positioning method and chip for ground texture | |
CN108022245A (en) | Photovoltaic panel template automatic generation method based on upper thread primitive correlation model | |
Gavrilov et al. | A method for aircraft labeling in aerial and satellite images based on continuous morphological models |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||