CN106340005A - High-resolution remote sensing image unsupervised segmentation method based on scale parameter automatic optimization - Google Patents
- Publication number: CN106340005A
- Application number: CN201610664398.8A
- Authority: CN (China)
- Legal status: Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
Abstract
The invention discloses an unsupervised segmentation method for high-resolution remote sensing images based on automatic scale-parameter optimization, comprising three main steps: 1) adaptive scale parameter (SP) selection based on the J-value, a local homogeneity index; 2) image segmentation based on an inter-scale object-boundary constraint strategy; and 3) multi-feature region merging. Experiments on multiple sets of high-resolution remote sensing images from different sensor types, compared against the well-known commercial software eCognition and a traditional supervised segmentation method, show that the proposed method locates object edges more accurately, extracts object contours more completely, and requires no human intervention during segmentation, making it a general and effective unsupervised solution.
Description
Technical Field
The invention relates to an unsupervised segmentation method for high-resolution remote sensing images based on automatic scale-parameter optimization, and belongs to the technical field of remote sensing image segmentation.
Background Art
In recent years, object-based image analysis (OBIA) has been receiving increasing attention in GIScience (geographic information science) and remote sensing, especially in high-resolution remote sensing image applications. Image segmentation is one of the core steps of OBIA: it extracts the contour information of geographic objects in a scene, and is the basis and prerequisite for subsequent feature extraction and target recognition. Compared with ordinary images, remote sensing images cover wide areas, contain multiple bands and multiple spatial resolutions, and include a much richer variety of ground objects, so traditional segmentation methods are difficult to apply to them directly. At the same time, as the spatial resolution of remote sensing satellites keeps improving, meter- and sub-meter-level high-resolution data represented by SPOT 5, IKONOS, and QuickBird have been widely used in many areas of social life, such as crop yield surveys, urban land planning, and disaster monitoring and early warning. Image segmentation technology for high-resolution remote sensing images has therefore become one of the research hotspots in the remote sensing field.
Compared with medium- and low-resolution remote sensing images, high-resolution images provide much richer spectral and texture features; spatial details such as object structure and shape can be clearly expressed, and the formerly prominent mixed-pixel problem is effectively alleviated, improving the inter-class separability of adjacent ground objects. On the other hand, higher spatial resolution also brings new difficulties and challenges for image segmentation: the phenomenon of "same object, different spectra" becomes more prominent, i.e., ground objects of the same type may show significantly different spectral characteristics; along with the increase in detail, interference factors such as object shadows, noise, and cloud cover have a more significant impact; and the changing ecological environment, rich object types, and structurally complex man-made objects in urban scenes all make it difficult to extract geographical objects accurately.
To address this, scholars have proposed a number of countermeasures, one of the most important being the introduction of multi-scale segmentation strategies, which better reveal the spatial structure of objects at different scales. For example, C. Burnett et al. proposed a fractal-based multi-scale segmentation algorithm that achieved good results by estimating the homogeneity and heterogeneity of spectral features in local regions and optimizing iteratively [1]; the well-known commercial remote sensing software eCognition uses the fractal net evolution algorithm (FNEA) for multi-scale segmentation, making full use of object spectral, texture, shape, hierarchy, and inter-class information. It should be pointed out that existing multi-scale segmentation methods all require the scale parameter (SP) to be determined by manual interpretation or trial and error, so none of them can be called automatic image segmentation. Multi-scale segmentation solutions with adaptive SP selection remain rare, which has become one of the main bottlenecks restricting the wide application of OBIA technology.
References
[1] Burnett C, Blaschke T. A multi-scale segmentation/object relationship modelling methodology for landscape analysis[J]. Ecological Modelling, 2003, 168(3): 233-249.
[2] Shao P, Yang G, Niu X, et al. Information extraction of high-resolution remotely sensed image based on multiresolution segmentation[J]. Sustainability, 2014, 6(8): 5300-5310.
[3] Deng Y, Manjunath B S. Unsupervised segmentation of color-texture regions in images and video[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001, 23(8): 800-810.
[4] Baraldi A, Boschetti L. Operational automatic remote sensing image understanding systems: Beyond geographic object-based and object-oriented image analysis (GEOBIA/GEOOIA). Part 2: Novel system architecture, information/knowledge representation, algorithm design and implementation[J]. Remote Sensing, 2012, 4(9): 2768-2817.
Summary of the Invention
Purpose of the invention: Aiming at the problems and deficiencies of the prior art, the present invention introduces the multi-scale J-image sequence from JSEG, a traditional color-texture segmentation method, into high-resolution remote sensing image segmentation, and proposes an adaptive SP selection strategy based on the J-value local homogeneity index, an inter-scale object-boundary constraint strategy, and a multi-feature region-merging strategy, thereby realizing automatic multi-scale segmentation. Experiments on remote sensing images of different sensor types and spatial resolutions, compared with the results of two supervised segmentation methods (eCognition and reference [2]), show that the proposed algorithm not only locates object edges more accurately and extracts object contours more completely, but also requires no human intervention, improving the automation and robustness of the segmentation process.
Technical solution: An unsupervised segmentation method for high-resolution remote sensing images based on automatic scale-parameter optimization, comprising three main steps: 1) adaptive SP selection based on the J-value local homogeneity index, which determines the optimal multi-scale J-image sequence; 2) image segmentation based on an inter-scale object-boundary constraint strategy, realizing coarse-to-fine multi-scale segmentation; and 3) multi-feature region merging, to handle possible over-segmentation in the results.
Adaptive SP Selection
The J-image sequence is chosen as the multi-scale analysis platform, and an adaptive SP selection strategy is proposed.
The multi-scale J-image is computed as follows. First, the original image is color-quantized in LUV space. In the quantized image, a window Z of M×M pixels (M being the SP) is centered on pixel z, and the coordinates z(x, y) of each pixel in the window are taken as its pixel value, with z(x, y) ∈ Z. The corner points of the window are removed.
Let P be the total number of gray levels in the quantized image, let Z_p be the set of all pixels in window Z that belong to gray level p, and let m_p be the mean position of the pixels belonging to gray level p. The sum of the variances of same-gray-level pixels within window Z can then be expressed as:

S_W = Σ_{p=1..P} Σ_{z∈Z_p} ||z − m_p||²  (1)

The total variance of all pixels within window Z can be expressed as:

S_T = Σ_{z∈Z} ||z − m||²  (2)

where m is the mean position of all pixels in window Z. The local homogeneity index J-value is then defined as:

J-value = (S_T − S_W)/S_W  (3)

The J-value corresponding to pixel z is taken as that pixel's value; traversing the entire quantized image yields the J-image for SP = M, and varying the SP yields a multi-scale J-image sequence.
Adaptive SP selection strategy based on the J-value:

Step 1: Compute the J-image sequence for SP = M (M = 5, 6, ..., N), where M = 5 is the minimum window size allowed for a J-image and N corresponds to the coarsest-scale J-image.

Step 2: Compute the mean pixel J-value of the J-image at every scale and construct the mean J-value curve.

Step 3: Among the inflection points of the mean J-value curve, select only the most prominent ones, i.e., those satisfying the prominence criterion.
Constrained Multi-Scale Segmentation
In the segmentation stage, a segmentation strategy based on inter-scale object-boundary constraints is proposed. Suppose the optimal J-image sequence contains L scales, denoted S_k (k = 1, 2, ..., L). The procedure is as follows:
Step 1: First segment the coarsest scale S_1. Determine the threshold T_1 for seed-region extraction according to Equation (4), where μ_k and σ_k denote the mean and standard deviation of the J-values of all pixels at scale S_k:

T_k = μ_k − 0.2σ_k, (k = 1, 2, ..., L)  (4)

At S_1, all pixels with J-values below T_1 form 4-connected regions, each serving as a seed region. Starting from the seed regions, region growing proceeds in the four directions (up, down, left, right) in order of increasing J-value; the boundaries where neighboring regions meet constitute the segmentation result at S_1.
Step 2: Map the object boundaries of the previous scale onto the current scale and correct them. Convert the current-scale J-image into a binary image that retains only the object boundaries extracted by inter-scale mapping, and apply a morphological dilation with an M×M structuring element, M being the SP of the current scale. The dilated boundaries divide the current-scale J-image into independent seed regions, which are grown in order of increasing J-value; the boundaries where neighboring regions meet form the corrected boundaries.

Segmentation at the current scale is then performed only inside the objects delimited by the corrected boundaries. To avoid over-segmentation, an object with high internal homogeneity is considered to already match an actual ground-object type and is not segmented further at the current scale; the criterion is that its internal mean J-value is below the threshold T_k of the current scale (see Equation (4)). The remaining objects are then segmented according to T_k, using the same procedure as at scale 1, and the result is mapped to the next scale.

Step 3: Repeat the segmentation process of Step 2 until scale L has been segmented, yielding the preliminary segmentation result.
Multi-Feature Region Merging
The multi-scale J-image sequence of each band of the original image is computed at the optimal SPs. Suppose the original image contains F bands. For any object q, define the feature vector J_q = (J_q1, J_q2, ..., J_qF), where each component J_qf (f = 1, 2, ..., F) is the mean J-value of object q over the L scale J-images of band f. By its definition, the J-value comprehensively reflects the spectral, texture, and scale information of a local region (object), so the similarity of adjacent objects q_A and q_B is judged by the Euclidean distance between their feature vectors, as in Equation (5):

D(q_A, q_B) = ||J_qA − J_qB||  (5)
A region adjacency graph (RAG) is used for region merging; the procedure is as follows:
Step 1: From the segmentation result at scale L, generate the RAG of all adjacent objects.

Step 2: Select all objects adjacent to an object q_A and compute the Euclidean distances according to Equation (5).

Step 3: If there exists an object q_B with D(q_A, q_B) ≤ 0.1, q_A and q_B are considered to belong to the same object; merge them and generate a new RAG. Otherwise, return to Step 2.

Step 4: Repeat Steps 2 and 3 over all objects to obtain the final segmentation result.
Brief Description of the Drawings
Fig. 1 is a flowchart of the method of the present invention;
Fig. 2 shows the window Z for M = 9;
Fig. 3 is a QuickBird image from 2005;
Fig. 4 shows the mean J-value curve and the optimal SP selection;
Fig. 5 shows the segmentation result of the method of the present invention;
Fig. 6 shows the segmentation result of Method 2;
Fig. 7 shows the segmentation result of Method 3;
Fig. 8 is a QuickBird image from 2005;
Fig. 9 shows the mean J-value curve and the optimal SP selection;
Fig. 10 shows the segmentation result of the method of the present invention;
Fig. 11 shows the segmentation result of Method 2;
Fig. 12 shows the segmentation result of Method 3;
Fig. 13 shows the accuracy evaluation of Experiment 1;
Fig. 14 shows the accuracy evaluation of Experiment 2.
Detailed Description of the Embodiments
The present invention is further illustrated below in conjunction with specific embodiments. It should be understood that these embodiments are intended only to illustrate the invention, not to limit its scope; after reading this disclosure, modifications of various equivalent forms by those skilled in the art all fall within the scope defined by the claims appended to this application.
As shown in Fig. 1, the unsupervised segmentation method for high-resolution remote sensing images based on automatic scale-parameter optimization comprises three main steps: 1) adaptive SP selection based on the J-value local homogeneity index, which determines the optimal multi-scale J-image sequence; 2) image segmentation based on an inter-scale object-boundary constraint strategy, realizing coarse-to-fine multi-scale segmentation; and 3) multi-feature region merging, to handle possible over-segmentation in the results.
Adaptive SP Selection
Multi-scale segmentation usually relies on one core control parameter, the SP, to divide an image into independent geographic objects. The SP controls the spectral homogeneity within objects and the average object size in the segmentation result, and in practice also affects the number of objects. Therefore, choosing a suitable multi-scale analysis tool and determining the SP are among the first key problems this invention must solve.
Multi-Scale J-Image Sequence
The multi-scale J-image sequence used by JSEG fully reflects the homogeneity of the local spectral distribution, and avoids a limitation of traditional multi-scale analysis tools (such as the wavelet and contourlet transforms), which are sensitive only to high-frequency information in particular directions when computing multi-scale images; however, it likewise faces the problem of choosing the SP reasonably. This method therefore adopts the J-image sequence as the multi-scale analysis platform and proposes an adaptive SP selection strategy.
The multi-scale J-image is computed as follows. First, the original image is color-quantized in LUV space. In the quantized image, a window Z of M×M pixels (M being the SP) is centered on pixel z, and the coordinates z(x, y) of each pixel in the window are taken as its pixel value, with z(x, y) ∈ Z. To ensure consistency in all directions, the corner points of the window are removed. Taking M = 9 as an example, the window Z centered on pixel z is shown in Fig. 2.
Let P be the total number of gray levels in the quantized image, let Z_p be the set of all pixels in window Z that belong to gray level p, and let m_p be the mean position of the pixels belonging to gray level p. The sum of the variances of same-gray-level pixels within window Z can then be expressed as:

S_W = Σ_{p=1..P} Σ_{z∈Z_p} ||z − m_p||²  (1)

The total variance of all pixels within window Z can be expressed as:

S_T = Σ_{z∈Z} ||z − m||²  (2)

where m is the mean position of all pixels in window Z. The local homogeneity index J-value is then defined as:

J-value = (S_T − S_W)/S_W  (3)

The J-value corresponding to pixel z is taken as that pixel's value; traversing the entire quantized image yields the J-image for SP = M, and varying the SP yields a multi-scale J-image sequence.
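The window-level computation of Equations (1)-(3) can be sketched in a few lines of Python. This is a minimal illustration, not the patented implementation: the window is a plain rectangular grid of quantized gray-level labels (the corner-removal step is omitted for brevity), and all names are hypothetical.

```python
# Sketch of Eqs. (1)-(3): pixel "values" are their (x, y) coordinates,
# grouped by quantized gray level, per the J-value definition.

def j_value(window):
    """window: 2D list of gray-level (class) labels; returns (S_T - S_W)/S_W."""
    by_class = {}   # gray level -> list of coordinates
    coords = []     # all coordinates in the window
    for y, row in enumerate(window):
        for x, label in enumerate(row):
            by_class.setdefault(label, []).append((x, y))
            coords.append((x, y))

    def scatter(points):
        # Sum of squared distances of points to their mean position.
        n = len(points)
        mx = sum(p[0] for p in points) / n
        my = sum(p[1] for p in points) / n
        return sum((p[0] - mx) ** 2 + (p[1] - my) ** 2 for p in points)

    s_t = scatter(coords)                                  # S_T, Eq. (2)
    s_w = sum(scatter(pts) for pts in by_class.values())   # S_W, Eq. (1)
    return (s_t - s_w) / s_w if s_w > 0 else float("inf")  # Eq. (3)

# Two homogeneous halves give a high J-value; a fully mixed
# checkerboard gives a J-value near 0.
uniform_halves = [[0, 0, 1, 1]] * 4
checkerboard = [[0, 1, 0, 1], [1, 0, 1, 0]] * 2
print(j_value(uniform_halves) > j_value(checkerboard))  # True
```

This matches the intuition stated above: a large J-value indicates that the gray levels in the window are spatially separated (a likely region boundary), while a J-value near zero indicates a uniformly mixed, homogeneous texture.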
Optimal Scale Selection Based on the J-Value
Using the J-image sequence as the multi-scale analysis tool, an adaptive SP selection strategy based on the J-value is proposed.
Step 1: Compute the J-image sequence for SP = M (M = 5, 6, ..., N), where M = 5 is the minimum window size allowed for a J-image and N (which may be adjusted to the actual image size; N = 30 is used here) corresponds to the coarsest-scale J-image.
Step 2: Compute the mean pixel J-value of the J-image at every scale and construct the mean J-value curve. By the definition of the J-value, an inflection point of this curve indicates that the homogeneity of the spectral distribution at the current scale increases sharply compared with the neighboring scales. We therefore assume that these inflection points indicate that certain representative ground-object types in the scene are suitable for segmentation at the current scale; that is, at these inflection points the objects in the segmentation result just match the actual ground-object types, with the same or similar degrees of spectral homogeneity, and these representative objects have a noticeable effect on the curve.
Step 3: Among the many inflection points, only the most prominent ones, satisfying the prominence criterion, are selected, so that the most representative ground-object types in the scene are effectively extracted. In addition, to preserve the detail information of the image, the finest scale (M = 5) is always selected. The SPs corresponding to the selected inflection points, together with the finest scale, jointly determine the optimal multi-scale J-image sequence.
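The scale-selection steps above can be sketched as follows. Since the exact prominence formula for the "most prominent" inflection points is not reproduced here, the code uses a hypothetical local-minimum test with a `prominence` margin as an illustrative stand-in; the function name and the toy curve are likewise assumptions, not part of the patent.

```python
# Illustrative sketch of adaptive SP selection on the mean J-value curve.
# A "prominent inflection point" is approximated as a local minimum whose
# dip below both neighbours exceeds a hypothetical prominence margin.

def select_scales(j_means, m_min=5, prominence=0.05):
    """j_means[i] is the mean J-value of the J-image at SP M = m_min + i.
    Returns the selected SPs; the finest scale (M = m_min) is always kept."""
    selected = {m_min}  # finest scale always retained to preserve detail
    for i in range(1, len(j_means) - 1):
        left, mid, right = j_means[i - 1], j_means[i], j_means[i + 1]
        # A dip: homogeneity sharply better than at neighbouring scales.
        if mid < left and mid < right and min(left, right) - mid >= prominence:
            selected.add(m_min + i)
    return sorted(selected)

# Toy curve with pronounced dips at M = 9 and M = 14.
curve = [0.9, 0.88, 0.86, 0.85, 0.70, 0.84, 0.83, 0.82, 0.81, 0.60,
         0.80, 0.79]
print(select_scales(curve))  # [5, 9, 14]
```

The selected SPs (here 5, 9, and 14) would then define the optimal multi-scale J-image sequence used in the segmentation stage.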
Constrained Multi-Scale Segmentation
In the segmentation stage, a segmentation strategy based on inter-scale object-boundary constraints is proposed. Suppose the optimal J-image sequence contains L scales, denoted S_k (k = 1, 2, ..., L). The procedure is as follows:
Step 1: First segment the coarsest scale S_1. Determine the threshold T_1 for seed-region extraction according to Equation (4), where μ_k and σ_k denote the mean and standard deviation of the J-values of all pixels at scale S_k:

T_k = μ_k − 0.2σ_k, (k = 1, 2, ..., L)  (4)

At S_1, all pixels with J-values below T_1 form 4-connected regions, each serving as a seed region. Starting from the seed regions, region growing proceeds in the four directions (up, down, left, right) in order of increasing J-value; the boundaries where neighboring regions meet constitute the segmentation result at S_1.
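A minimal sketch of Step 1, assuming the J-image is a small 2D list of floats: the threshold of Equation (4) selects low-J (homogeneous) pixels, and a 4-connected flood fill labels the seed regions. The subsequent growth of the seeds to full coverage is omitted, and all names are hypothetical.

```python
# Sketch of seed extraction at one scale: threshold T = mu - 0.2*sigma
# (Eq. 4), then label 4-connected components of below-threshold pixels.

def extract_seeds(j_image):
    vals = [v for row in j_image for v in row]
    mu = sum(vals) / len(vals)
    sigma = (sum((v - mu) ** 2 for v in vals) / len(vals)) ** 0.5
    t = mu - 0.2 * sigma                     # threshold of Eq. (4)

    h, w = len(j_image), len(j_image[0])
    labels = [[0] * w for _ in range(h)]     # 0 = not a seed pixel
    next_label = 0
    for y in range(h):
        for x in range(w):
            if j_image[y][x] < t and labels[y][x] == 0:
                next_label += 1              # start a new 4-connected seed
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if not (0 <= cy < h and 0 <= cx < w):
                        continue
                    if labels[cy][cx] or j_image[cy][cx] >= t:
                        continue
                    labels[cy][cx] = next_label
                    stack += [(cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)]
    return labels, next_label

# Toy J-image: two homogeneous (low-J) patches separated by high-J pixels.
j_img = [[0.9, 0.9, 0.1, 0.1],
         [0.9, 0.9, 0.1, 0.1],
         [0.1, 0.1, 0.9, 0.9]]
labels, n = extract_seeds(j_img)
print(n)  # 2: two separate seed regions
```

Low J-values mark the homogeneous interiors of objects, so the seeds naturally sit inside objects while the high-J boundary pixels are left to be claimed by region growing.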
Step 2: Map the object boundaries of the previous scale onto the current scale and correct them. By the definition of the J-image, every image in the multi-scale J-image sequence has the same size as the original image, so the object boundaries extracted at the previous, coarser scale can be mapped by coordinates to the same positions at the current, finer scale, constraining the segmentation at the current scale. Although the boundaries extracted at the coarse scale determine an object's position and approximate outline, they cannot locate its edges accurately, so they must be corrected at the current scale, as follows.
Convert the current-scale J-image into a binary image that retains only the object boundaries extracted by inter-scale mapping, and apply a morphological dilation. The structuring element is M×M pixels, where M is the SP of the current scale. The dilated boundaries divide the current-scale J-image into independent seed regions, which are grown in order of increasing J-value; the boundaries where neighboring regions meet are the corrected boundaries.
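The boundary-thickening operation in this step is ordinary binary dilation with a square structuring element. A pure-Python toy version is sketched below for illustration (in practice a library routine such as scipy.ndimage.binary_dilation would normally be used); the function name and toy mask are assumptions.

```python
# Sketch of morphological dilation of a mapped boundary mask with an
# M x M square structuring element, as used to thicken coarse-scale
# boundaries before re-partitioning the current-scale J-image.

def dilate(mask, m):
    """Binary dilation of a 2D 0/1 mask with an m x m square element."""
    h, w = len(mask), len(mask[0])
    r = m // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Output pixel is set if any input pixel lies in its m x m window.
            if any(mask[yy][xx]
                   for yy in range(max(0, y - r), min(h, y + r + 1))
                   for xx in range(max(0, x - r), min(w, x + r + 1))):
                out[y][x] = 1
    return out

boundary = [[0, 0, 0, 0, 0],
            [0, 0, 1, 0, 0],
            [0, 0, 0, 0, 0]]
thick = dilate(boundary, 3)
print(sum(map(sum, thick)))  # 9: the single boundary pixel grows to 3x3
```

Thickening the boundary by the current SP deliberately over-covers the true edge, so that the subsequent region growing inside each partition can relocate the edge precisely at the finer scale.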
Segmentation at the current scale is then performed only inside the objects delimited by the corrected boundaries. To avoid over-segmentation, an object with high internal homogeneity is considered to already match an actual ground-object type and is not segmented further at the current scale; the criterion is that its internal mean J-value is below the threshold T_k of the current scale (see Equation (4)). The remaining objects are then segmented according to T_k, using the same procedure as at scale 1, and the result is mapped to the next scale.

Step 3: Repeat the segmentation process of Step 2 until scale L has been segmented, yielding the preliminary segmentation result.
Multi-Feature Region Merging
Although objects with high internal homogeneity are identified before segmentation at each scale, over-segmentation is still hard to avoid, and further region merging is needed. Because of the prominent "same object, different spectra" and "same spectrum, different objects" phenomena in high-resolution remote sensing images, using only the spectral features inside objects may cause erroneous merging; a multi-feature region-merging strategy is therefore proposed.
A multi-scale J-image sequence is computed for each band of the original image according to the optimal SPs. Suppose the original image contains F bands. For any object q, define the feature vector Jq = (Jq1, Jq2, ..., JqF), where component Jqf (f = 1, 2, ..., F) is the mean J-value of object q over the L scale J-images of band f. By the definition of the J-value, this vector jointly reflects the spectral, texture, and scale information of the local region (object), so the similarity of two adjacent objects qA and qB is judged by the Euclidean distance between their feature vectors, as given in formula (5).
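The feature vector and the distance of formula (5) can be sketched as below. This is an illustrative assumption about the data layout: each band is represented by a list of L J-images (NumPy arrays), and an object by a boolean mask.

```python
import numpy as np

def object_feature(j_images_per_band, object_mask):
    """Feature vector J_q: for each of the F bands, the mean J-value of
    the object over that band's L-scale J-image sequence (averaged over
    both the L scales and the object's pixels)."""
    return np.array([
        np.mean([j[object_mask].mean() for j in band_scales])
        for band_scales in j_images_per_band   # F entries, each a list of L J-images
    ])

def feature_distance(jq_a, jq_b):
    """Formula (5): Euclidean distance between the feature vectors of two
    adjacent objects qA and qB."""
    return float(np.linalg.norm(np.asarray(jq_a) - np.asarray(jq_b)))
```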
Region merging is performed using a Region Adjacency Graph (RAG), as follows:
Step 1: Generate the RAG of all adjacent objects from the segmentation result at scale L.
Step 2: Select all objects adjacent to an object qA and compute the Euclidean distances according to formula (5).
Step 3: If there exists an object qB such that D(qA, qB) ≤ 0.1, then qA and qB are considered to belong to the same object; merge them and regenerate the RAG. Otherwise, return to Step 2.
Step 4: Repeat Steps 2 and 3 until all objects have been traversed, yielding the final segmentation result.
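The four steps above can be sketched as a greedy merge loop over an adjacency dictionary. This is a simplified illustration, not the patented procedure: the merged feature vector is taken as the unweighted average of the two objects' vectors (an assumption; an area-weighted average would also be reasonable).

```python
import numpy as np

def merge_regions(features, adjacency, threshold=0.1):
    """Greedy RAG merging: repeatedly merge an adjacent pair whose feature
    distance (formula (5)) is <= threshold, updating the graph after each
    merge, until no such pair remains.
    features:  {region id: feature vector}
    adjacency: {region id: set of neighbor ids} (symmetric)
    Returns (surviving features, {absorbed id: surviving id})."""
    feats = {q: np.asarray(v, dtype=float) for q, v in features.items()}
    adj = {q: set(n) for q, n in adjacency.items()}
    merged_into = {}
    changed = True
    while changed:
        changed = False
        for qa in list(feats):
            for qb in sorted(adj[qa]):
                if np.linalg.norm(feats[qa] - feats[qb]) <= threshold:
                    # merge qb into qa: average features, pool neighbors
                    feats[qa] = (feats[qa] + feats[qb]) / 2.0
                    for n in adj[qb]:
                        if n != qa:
                            adj[qa].add(n)
                            adj[n].discard(qb)
                            adj[n].add(qa)
                    adj[qa].discard(qb)
                    del feats[qb], adj[qb]
                    merged_into[qb] = qa
                    changed = True
                    break
            if changed:
                break       # restart the scan on the updated graph
    return feats, merged_into
```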
Experiments and analysis
To verify the validity and reliability of the proposed method, experiments were conducted on two multispectral high-resolution remote sensing images with different resolutions and sensor types, and the results were compared with the commercial software eCognition ("method 2") and the supervised high-resolution remote sensing image segmentation method proposed by Shao P et al. (reference 2, "method 3").
eCognition, developed by Definiens Imaging (Germany), is an internationally known object-oriented remote sensing image classification package. Its FNEA segmentation strategy jointly exploits the spectral, texture, and shape features of objects and performs excellently in segmenting high-resolution remote sensing imagery. Segmentation in eCognition is controlled mainly by three parameters: the Scale Parameter, which governs the average object size in the segmentation result; the Shape Parameter, which helps preserve the integrity of object outlines during segmentation; and the Compactness Parameter, which helps improve the inter-class separability of objects. The method of Shao P et al. introduces classical edge detection into high-resolution remote sensing image segmentation and achieves multi-scale feature extraction and segmentation of objects by constructing an object hierarchy, obtaining good results on Chinese ZY-3 satellite imagery. The relevant parameters of both methods must be set through manual interpretation; in the experiments here, the optimal parameter combinations were determined by trial and error.
Results and analysis of experiment 1
Experiment 1 used four-band pan-sharpened QuickBird data acquired in 2005 over Wuhan, China, with a spatial resolution of 2.4 m and an image size of 512×512 pixels. The image is a typical urban scene against a complex background, containing a rich variety of land-cover types such as roads, playgrounds, water bodies, and man-made objects with complex structures, as shown in Figure 3.
Figure 4 shows the index curve of the proposed method, describing how the index changes as the scale parameter M increases. The vertical dashed lines correspond to the optimal SPs; these scales, together with the finest scale, constitute the optimal multi-scale J-image sequence, with scale parameters M ∈ {5, 13, 18, 28}. The final segmentation result is shown in Figure 5.
For method 2, the Scale Parameter was set to 77, the Shape Parameter to 50, and the Compactness Parameter to 40. For the method of reference 2, all bands were weighted equally, the Scale Parameter was 30, the Shape Heterogeneous Degree was 0.4, and the Compactness Parameter and Smoothness Parameter were both 0.5. The results of the two methods are shown in Figures 6 and 7, respectively.
To facilitate comparison of the experimental results of the different methods, typical objects or locations in the scene were labeled: locations A and B are sports fields; locations C, D, and F are buildings; location E is a road; and location G is an artificial lake. Visual analysis shows that all three methods effectively extract the sports-field area at location A, with the proposed method and method 2 locating the lawn edge noticeably more accurately than method 3, which also exhibits some over-segmentation. In the playground area at location B, method 2 fails to extract the outline of the lawn, while method 3 confuses part of the track with the lawn. The building at location C has a complex structure; only the proposed method both preserves the integrity of the object outline and locates the object edges accurately, whereas methods 2 and 3 exhibit mis-segmentation and over-segmentation, respectively. The buildings at locations D and F have regular shapes; the proposed method and method 2 locate the roof edges more accurately, though method 2 under-segments at location F. Only method 3 effectively extracts the outline of the artificial lake at location G.
In general, the proposed method and method 2 are clearly better than method 3 at locating the edge details of objects, and the proposed method preserves the outlines of large homogeneous regions more completely. Method 3 can effectively distinguish adjacent ground objects of different classes with similar spectral characteristics, but suffers from prominent problems of low positioning accuracy and over-segmentation.
Results and analysis of experiment 2
Experiment 2 used a three-band high-resolution aerial DOM (Digital Orthophoto Map) image acquired in March 2009 over Nanjing, China, with a spatial resolution of 0.6 m and a size of 512×512 pixels, as shown in Figure 8. Compared with Figure 3, the background of this image is less complex, but the spectral, texture, edge, and other detail features of the different land-cover types are more pronounced; the objects include large homogeneous regions of regular shape as well as man-made buildings with complex structures and rich textures, placing higher demands on a segmentation algorithm's ability both to locate object edges accurately and to keep object outlines intact.
The index curve of the proposed method is shown in Figure 9. According to the curve, the scale parameters of the optimal multi-scale sequence are M ∈ {5, 10, 12, 21, 25, 27}; the segmentation result is shown in Figure 10.
For method 2, the Scale Parameter was set to 100, the Shape Parameter to 50, and the Compactness Parameter to 50. For the method of reference 2, all bands were weighted equally, the Scale Parameter was 50, the Shape Heterogeneous Degree was 0.3, and the Compactness Parameter and Smoothness Parameter were both 0.5. The results of the two methods are shown in Figures 11 and 12, respectively.
As in experiment 1, typical objects or locations in the scene were labeled. Visual analysis of the results of the different methods shows that all three methods accurately segment the playground lawn at location A and the track at location B, but only the proposed method fully preserves the integrity of the playground outline; method 2 produces narrow spurious units between large segmented regions, such as the strip where the outside of the track adjoins the lawn. For the small lawn patch at location C, method 2 under-segments. For the building roof at location D, the three methods perform similarly; methods 2 and 3 additionally extract the texture inside the roof, but method 3 over-segments and locates edges inaccurately. The grandstands at locations E and F have complex structures and rich texture; only the proposed method keeps the canopy region of the stands intact while effectively extracting the details of the auxiliary buildings on both sides. The proposed method and method 2 accurately extract the road areas at locations G and H, whereas method 3 mis-segments them. For the badminton courts at location I, the tennis courts at location J, and the large vegetated area at location K, all three methods locate the object edges fairly accurately, but method 2 again produces narrow spurious units in the segmentation of the tennis courts. These observations lead to conclusions similar to those of experiment 1, further verifying the reliability of the proposed algorithm.
Accuracy evaluation
The segmentation results of the different methods were evaluated above mainly by visual analysis; here an accuracy index is used for a further quantitative analysis. In the method proposed by Deng et al. (reference 3), the J-value is used not only to compute the multi-scale J-image sequence but also as an evaluation index of segmentation accuracy, defined (reconstructed here from the definitions below) as

J̄ = (1/U) · Σ_{r=1}^{R} W_r · J_r

where U is the total number of pixels in the image, R is the total number of regions in the segmentation result, and W_r and J_r are the number of pixels inside the r-th region and its corresponding J-value, respectively. The smaller the accuracy index J̄ of a segmentation result, the higher the average spectral homogeneity inside its objects, and the better the segmentation. Because remote sensing images contain many kinds of ground objects, the criteria for evaluating segmentation accuracy vary across applications; the approach of Deng et al., which evaluates the spectral distribution of the segmentation result as a whole, has good generality [4]. The index J̄ is therefore used here to evaluate the accuracy of the three methods' experimental results.
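The index above is a pixel-count-weighted mean of the per-region J-values. A minimal sketch, with one simplifying assumption labeled in the comment: J_r is approximated by the mean J-value of the region's pixels rather than recomputed over the region.

```python
import numpy as np

def j_bar(j_image, labels):
    """Accuracy index J-bar = (1/U) * sum_r W_r * J_r.
    Assumption: J_r is taken as the mean J-value of region r's pixels
    (the original recomputes the J-value over each region)."""
    u = labels.size                      # U: total pixels in the image
    total = 0.0
    for r in np.unique(labels):          # r = 1..R regions
        mask = labels == r
        w_r = mask.sum()                 # W_r: pixels in region r
        j_r = j_image[mask].mean()       # J_r: region's J-value (assumption)
        total += w_r * j_r
    return total / u
```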
To analyze the distribution of J_r in the segmentation results, the J-values of all objects were uniformly quantized into 20 bins over the interval [0, 1]; the resulting J_r distribution curves are shown in Figures 13 and 14.
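The quantization described above is an ordinary normalized histogram; a minimal sketch:

```python
import numpy as np

def jr_histogram(region_j_values, bins=20):
    """Proportion of objects whose J_r falls in each of 20 uniform bins
    over [0, 1] (the distribution curves of Figures 13-14)."""
    hist, _ = np.histogram(region_j_values, bins=bins, range=(0.0, 1.0))
    return hist / len(region_j_values)
```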
In the figures, solid lines of different colors show, for each of the three methods, the proportion of objects in the segmentation result falling within each J-value bin, while the dashed lines mark each method's accuracy index. Comparing Figures 13 and 14 shows that, in both experiments, the accuracy index of the proposed method is clearly better than those of the other two methods, consistent with the visual analysis. The trends of the J_r curves are roughly the same in the two experiments; the differences appear mainly in the bins where the typical ground objects of each scene concentrate, namely [0.2, 0.55] in experiment 1 and [0.1, 0.4] in experiment 2. In addition, the segmentation accuracy of all three algorithms is markedly higher in experiment 2 than in experiment 1, mainly because the image used in experiment 2 has a higher spatial resolution, a relatively simple background, and more prominent object details, so the extracted objects and their boundaries are closer to the actual land-cover types in the scene.
For the automatic segmentation of high-resolution remote sensing images, the present invention proposes a novel unsupervised multi-scale segmentation method. The method jointly exploits the spectral and texture features of objects and proposes an adaptive scale parameter (SP) selection strategy based on the J-value, a local homogeneity index, so that each typical land-cover type in the scene is segmented in the J-image at its best-matching scale. On this basis, the proposed multi-scale segmentation strategy constrains region segmentation by the object boundaries extracted at the previous scale and corrects these boundaries at the current scale, avoiding the accumulation of errors between scales. The multi-feature region-merging strategy effectively distinguishes different kinds of ground objects with similar spectral characteristics, avoiding erroneous merges. Experiments show that, compared with eCognition and a traditional supervised segmentation method, the proposed method locates object edges accurately and extracts object outlines more completely, achieves higher segmentation accuracy, and segments high-resolution remote sensing images automatically without any manual intervention, making it a generic and effective unsupervised solution.
Claims (4)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610664398.8A CN106340005B (en) | 2016-08-12 | 2016-08-12 | The non-supervisory dividing method of high score remote sensing image based on scale parameter Automatic Optimal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106340005A true CN106340005A (en) | 2017-01-18 |
CN106340005B CN106340005B (en) | 2019-09-17 |
Family
ID=57824778
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107610118A (en) * | 2017-09-25 | 2018-01-19 | 中国科学院遥感与数字地球研究所 | One kind is based on dMImage segmentation quality evaluating method |
CN107657616A (en) * | 2017-08-28 | 2018-02-02 | 南京信息工程大学 | A kind of high score Remote Sensing Image Segmentation towards geographic object |
CN109859219A (en) * | 2019-02-26 | 2019-06-07 | 江西理工大学 | In conjunction with the high score Remote Sensing Image Segmentation of phase and spectrum |
CN109918449A (en) * | 2019-03-16 | 2019-06-21 | 中国农业科学院农业资源与农业区划研究所 | A method and system for remote sensing extraction of agricultural disaster information based on the Internet of Things |
CN109948415A (en) * | 2018-12-30 | 2019-06-28 | 中国科学院软件研究所 | Object detection method of optical remote sensing image based on background filtering and scale prediction |
CN109993753A (en) * | 2019-03-15 | 2019-07-09 | 北京大学 | Method and device for segmentation of urban functional areas in remote sensing images |
CN116681711A (en) * | 2023-04-25 | 2023-09-01 | 中国科学院地理科学与资源研究所 | Multi-scale segmentation method for high-resolution remote sensing image under partition guidance |
CN118366059A (en) * | 2024-06-20 | 2024-07-19 | 山东锋士信息技术有限公司 | Crop water demand calculating method based on optical and SAR data fusion |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103632363A (en) * | 2013-08-27 | 2014-03-12 | 河海大学 | Object-level high-resolution remote sensing image change detection method based on multi-scale fusion |
CN104361589A (en) * | 2014-11-12 | 2015-02-18 | 河海大学 | High-resolution remote sensing image segmentation method based on inter-scale mapping |
CN105335966A (en) * | 2015-10-14 | 2016-02-17 | 南京信息工程大学 | Multi-scale remote-sensing image segmentation method based on local homogeneity index |
Non-Patent Citations (3)
Title |
---|
YINING DENG et al.: "Color Image Segmentation", 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition * |
YINING DENG et al.: "Unsupervised Segmentation of Color-Texture Regions in Images and Video", IEEE Transactions on Pattern Analysis and Machine Intelligence * |
HUANG Fei et al.: "Color image segmentation based on HIS space", 《小型微型计算机系统》 (Journal of Chinese Computer Systems) * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB03 | Change of inventor or designer information | ||
CB03 | Change of inventor or designer information |
Inventor after: Gu Aihua Inventor before: Gu Aihua Inventor before: Wang Chao Inventor before: Li Shujun |
|
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 20240326 Address after: 224000 No. 617, building 1, Longyuan Xincun business district, Yannan high tech Zone, Yancheng City, Jiangsu Province (CNW) Patentee after: Jiangsu Youji Technology Co.,Ltd. Country or region after: China Address before: 224002 No. 50, open avenue, Jiangsu, Yancheng City Patentee before: YANCHENG TEACHERS University Country or region before: China |
|
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 20240703 Address after: Room 502, Unit 1, Building 2, Tongtai Meiling Garden, No. 181 Furong South Road, Tianxin District, Changsha City, Hunan Province, 410000 Patentee after: Hunan Liren Land Consulting Co.,Ltd. Country or region after: China Address before: 224000 No. 617, building 1, Longyuan Xincun business district, Yannan high tech Zone, Yancheng City, Jiangsu Province (CNW) Patentee before: Jiangsu Youji Technology Co.,Ltd. Country or region before: China |