CN110070545A - A method for automatically extracting urban built-up areas from urban texture feature density - Google Patents

A method for automatically extracting urban built-up areas from urban texture feature density

Info

Publication number
CN110070545A
CN110070545A CN201910212324.4A
Authority
CN
China
Prior art keywords
image
cities
towns
density
gray level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910212324.4A
Other languages
Chinese (zh)
Other versions
CN110070545B (en)
Inventor
胡龙飞
仲波
罗小波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN201910212324.4A priority Critical patent/CN110070545B/en
Publication of CN110070545A publication Critical patent/CN110070545A/en
Application granted granted Critical
Publication of CN110070545B publication Critical patent/CN110070545B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/10 Image enhancement or restoration using non-spatial domain filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • G06T2207/20056 Discrete and fast Fourier transform, [DFT, FFT]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A30/00 Adapting or protecting infrastructure or their operation
    • Y02A30/60 Planning or developing urban green infrastructure

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention claims a method for automatically extracting urban built-up areas from urban texture feature density, belonging to the field of remote sensing image information extraction. For high-resolution remote sensing data, the method constructs an urban texture feature density for the remote sensing image with a multi-scale simple linear iterative clustering algorithm and uses it to extract urban built-up areas automatically. The method first analyzes, with the J-M distance, the sensitivity of the different texture features of the gray level co-occurrence matrix for urban extraction, and adopts the dissimilarity texture feature as the classification basis; it then performs superpixel segmentation at multiple scales with the simple linear iterative clustering (SLIC) algorithm, takes each superpixel as the range over which local feature density is computed, computes the feature density images of the successive segmentations, and superimposes them to construct the feature density image of the image to be classified. The extraction results of the method achieve good completeness and correctness; in particular, with respect to completeness, the urban boundary extracted by the method agrees closely with the actual boundary, approaching the quality of manual interpretation.

Description

A Method for Automatically Extracting Urban Built-up Areas from Urban Texture Feature Density

Technical Field

The invention belongs to the field of remote sensing image information extraction, and specifically relates to a method that, for high-resolution remote sensing imagery (10 m resolution and finer), constructs an urban texture feature density based on multi-layer superpixel segmentation and automatically extracts the urban built-up areas in the image.

Background

Urban information extraction based on remote sensing imagery refers to identifying and delineating urban built-up areas in satellite, aerial, or UAV images. The extent, distribution, and change of urban built-up areas are of great significance for urban planning, land use and resource allocation, and environmental monitoring; they also provide the data basis for many research directions and have attracted wide attention. However, complex ground information and the diversity of urban settlements make urban extraction a very challenging task. With the rapid development of the global aerospace industry, the quality and resolution of acquired image data have improved and high-resolution imagery has become easier to obtain. High-resolution remote sensing images contain clear urban texture information, providing an important data source for extracting urban information from remote sensing imagery.

Methods for extracting urban built-up areas fall into two main categories: spectrum-based methods and texture-based methods. Spectrum-based methods distinguish urban from non-urban areas by analyzing the differences in their spectral curves in the multispectral bands of medium- and low-resolution data, and are mostly used for large-area urban extraction and change detection. However, medium- and low-resolution data are limited by their resolution, making it difficult to obtain high-precision urban extents. Texture-based methods distinguish urban from non-urban areas by their texture differences in high-resolution data, and can be further divided into edge-density-based methods, line-detection-based methods, and texture-detection-model-based methods. The first two depend heavily on the quality of the input data and require a resolution finer than 2 meters. Texture-detection-model-based methods are widely used and typically build texture features with Gabor filters or the gray level co-occurrence matrix; Gabor filtering places high demands on data resolution, while gray-level-co-occurrence-matrix-based methods can be applied to data of various resolutions. Existing texture-based methods place high demands on data quality, usually requiring resolutions finer than 5 meters, but high-resolution data are expensive to acquire, which hinders large-area extraction of built-up areas. In addition, existing methods rely too heavily on the quality of the data itself and the precision of the extracted features, yet the phenomenon of "different objects with the same spectrum" is common in remote sensing imagery: many natural land covers have textures similar to urban ones, leading to large numbers of false extractions.

Remote sensing satellite data with a spatial resolution of 10 meters not only contain clear texture information, with smoother texture inside urban built-up areas, but are also freely available; so far, however, they have seen little use in urban extraction. Based on the characteristics of 10 m resolution remote sensing satellite imagery, the invention investigates texture features suited to urban extraction and proposes a method that uses superpixel segmentation to construct a feature density, improving the accuracy of urban extraction.

Summary of the Invention

The invention aims to solve the above problems of the prior art. It proposes a method whose extraction results have good completeness and correctness; in particular, with respect to completeness, the urban boundary extracted by the method agrees closely with the actual boundary, approaching the quality of manual interpretation. The technical scheme of the invention is as follows:

A method for automatically extracting urban built-up areas from urban texture feature density, comprising the following steps:

1) Acquire 10 m resolution remote sensing imagery in the visible bands and perform cropping, mosaicking, and cloud removal to obtain the area to be extracted.

2) Construct a grayscale image from the preprocessed imagery, filter out low-frequency texture with a Fourier-transform-based filter, and obtain the high-frequency grayscale image.

3) From the high-frequency grayscale image obtained in step 2), compute the gray level co-occurrence matrix dissimilarity features in several directions and extract a rotation-invariant texture feature.

4) Map the image formed by the rotation-invariant texture features obtained in step 3) to a grayscale image with values 0 to 255, and extract the feature regions with the Otsu threshold method (OTSU).

5) Segment the image of the area to be extracted obtained in step 1) at multiple scales with the simple linear iterative clustering (SLIC) algorithm to obtain multi-layer superpixel partitions; taking each superpixel as a unit, compute the feature density image at each scale, and superimpose all feature density images to construct a refined feature density image.

Further, step 1) crops, mosaics, and removes clouds from the acquired imagery, specifically as follows. A single remote sensing scene usually cannot cover the whole area to be extracted, so several adjacent scenes must be downloaded, mosaicked into one fully covering image with the Map Based Mosaic function of the ENVI software, and then cropped to the boundary of the area to be extracted (a vector file, .shp) with the Subset Data from ROIs function of ENVI, removing the part outside the extraction area. Ground objects in remote sensing images are very commonly occluded by clouds, and the cloud-covered part of the image cannot be extracted well, so cloud removal is required. A common cloud removal procedure is: manually identify the cloud regions, find a cloud-free image of the same region from another date, crop out that region, and mosaic it into the image being processed to obtain a clean, cloud-free image. The preprocessing can be done with the ENVI software; a scripted alternative is sketched below.
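
The patent performs these steps in ENVI's graphical tools; purely as an illustrative sketch, assuming the open-source rasterio and geopandas packages are available (file names here are hypothetical placeholders, and the cloud-patching step is omitted), the equivalent mosaic-and-crop could look like this:

```python
# Hypothetical mosaic-and-crop preprocessing, approximating ENVI's
# "Map Based Mosaic" and "Subset Data from ROIs" steps (step 1).
import rasterio
from rasterio.merge import merge
from rasterio.mask import mask
import geopandas as gpd

scene_paths = ["scene_a.tif", "scene_b.tif"]          # adjacent scenes (placeholders)
datasets = [rasterio.open(p) for p in scene_paths]

mosaic, transform = merge(datasets)                    # pixel-aligned mosaic
meta = datasets[0].meta.copy()
meta.update(count=mosaic.shape[0], height=mosaic.shape[1],
            width=mosaic.shape[2], transform=transform)
with rasterio.open("mosaic.tif", "w", **meta) as dst:
    dst.write(mosaic)

# Crop the mosaic to the study-area polygon (.shp), as in the patent.
roi = gpd.read_file("study_area.shp")
with rasterio.open("mosaic.tif") as src:
    roi = roi.to_crs(src.crs)                          # match projections
    clipped, clip_transform = mask(src, roi.geometry, crop=True)
    meta = src.meta.copy()
    meta.update(count=clipped.shape[0], height=clipped.shape[1],
                width=clipped.shape[2], transform=clip_transform)
with rasterio.open("study_area.tif", "w", **meta) as dst:
    dst.write(clipped)
```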

Further, step 2) filters out low-frequency texture from the grayscale image with a Fourier-transform-based filter to obtain the high-frequency grayscale image. Specifically: transform the spatial-domain image to the frequency domain with the two-dimensional fast Fourier transform (FFT), filter the frequency-domain image with a Gaussian low-pass filter, transform the filtered result back to the spatial domain with the inverse fast Fourier transform to obtain a background image, and subtract the background image from the original grayscale image to obtain a grayscale image with the low-frequency components suppressed.

Further, the two-dimensional fast Fourier transform (FFT) transforms the spatial-domain image to the frequency domain according to

F(u,v) = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x,y)·e^(−j2π(ux/M + vy/N))

where M and N respectively denote the width and height of the image, (x,y) are the spatial coordinates of the image, (u,v) are the coordinates of the frequency image, f(x,y) is the input image, and F(u,v) is the resulting Fourier frequency-domain image. The frequency-domain image is filtered with a Gaussian low-pass filter:

F′(u,v) = F(u,v)·H(u,v),  H(u,v) = e^(−D²(u,v) / (2σ²))

where H is the Gaussian filter weight function, D is the Euclidean distance from the point (u,v) of the filter window to the center of the Gaussian window, and σ controls the spread of the Gaussian weight function. The filtered result is transformed back to the spatial domain with the inverse fast Fourier transform to obtain the background image f′(x,y), and subtracting it from the original grayscale image f(x,y) gives the grayscale image with the low-frequency components suppressed: G(x,y) = f(x,y) − f′(x,y), where G(x,y) is the final high-frequency component image.
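
A minimal numpy sketch of this filtering step, assuming a single-band float image (the choice of σ is left to the caller, as the patent does not fix it here):

```python
# Frequency-domain high-pass filtering of a grayscale image (step 2):
# FFT -> Gaussian low-pass -> inverse FFT -> subtract the background.
import numpy as np

def suppress_low_frequencies(gray: np.ndarray, sigma: float) -> np.ndarray:
    """Return G = f - f', where f' is the Gaussian low-pass background."""
    M, N = gray.shape
    F = np.fft.fftshift(np.fft.fft2(gray))         # center the spectrum

    # Euclidean distance D(u, v) of each frequency bin to the window center.
    u = np.arange(M) - M / 2
    v = np.arange(N) - N / 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2

    H = np.exp(-D2 / (2.0 * sigma ** 2))           # Gaussian low-pass H(u, v)
    background = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
    return gray - background                       # high-frequency image G
```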

Further, step 3) uses the high-frequency component image obtained in the previous step to construct gray level co-occurrence matrix dissimilarity features in several directions and extract a rotation-invariant texture feature. The construction proceeds as follows.

The gray level co-occurrence matrix is an n×n matrix, where n is the number of gray levels of the image, defined as M(i,j) = [m(i,j,Δx,Δy); i,j ∈ (1,2,…,n)]. The element of M in row i, column j is the number of pixel pairs within a w×w window of the image whose positional offset is (Δx,Δy) and whose gray values are i and j respectively. The dissimilarity formula is

DIS = Σ_{i=1}^{n} Σ_{j=1}^{n} m(i,j)·|i−j|

where m(i,j) is the element of the co-occurrence matrix M in row i, column j. Co-occurrence matrices are built for n1 directions, giving n1 dissimilarity features, and the minimum over the n1 directions is taken to construct the rotation-invariant texture feature. The n1 directions are (Δx,Δy) ∈ {(1,0), (2,0), (2,1), (1,1), (1,2), (−2,1), (−1,1), (1,2), (0,2), (0,1)}.
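
A compact sketch of the per-window computation, assuming numpy and an input window already quantized to integer gray levels (the duplicated offset (1,2) is kept exactly as printed in the source):

```python
# Rotation-invariant GLCM dissimilarity for one image window (step 3).
import numpy as np

# The ten offsets named in the patent; (1, 2) appears twice in the
# source text, which may be a misprint for another direction.
OFFSETS = [(1, 0), (2, 0), (2, 1), (1, 1), (1, 2),
           (-2, 1), (-1, 1), (1, 2), (0, 2), (0, 1)]

def glcm_dissimilarity(window: np.ndarray, dx: int, dy: int,
                       levels: int = 32) -> float:
    """Dissimilarity sum_{i,j} m(i,j)*|i-j| for offset (dx, dy).

    `window` must hold integer gray levels in [0, levels).
    """
    h, w = window.shape
    glcm = np.zeros((levels, levels), dtype=np.int64)
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            i, j = window[y, x], window[y + dy, x + dx]
            glcm[i, j] += 1                      # count the co-occurring pair
    i_idx, j_idx = np.indices(glcm.shape)
    return float((glcm * np.abs(i_idx - j_idx)).sum())

def rotation_invariant_feature(window: np.ndarray) -> float:
    """Minimum dissimilarity over the ten directions."""
    return min(glcm_dissimilarity(window, dx, dy) for dx, dy in OFFSETS)
```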

Further, step 4) extracts the feature regions with the Otsu threshold method (OTSU), specifically as follows (a code sketch follows the list):

1) Linearly map the feature image to a grayscale image with values 0-255 and compute the gray level histogram, i.e., the number of pixels at each gray level.

2) Normalize the histogram, i.e., divide the number of pixels at each gray level by the total number of pixels.

3) Compute the maximum between-class variance. Let t be a gray threshold. Let w0 be the proportion of the pixels of the whole image with gray levels 0 to t and u0 their average gray value; let w1 be the proportion of the pixels with gray levels t to 255 and u1 their average gray value. Compute g = w0 × w1 × (u0 − u1)², setting t to 0-255 in turn; the t that maximizes g is the threshold. The part above t is the feature region and the part below t is the other regions.
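
A minimal numpy sketch of this exhaustive search, mirroring the three steps above (equivalent ready-made routines exist in common image libraries):

```python
# Otsu threshold by exhaustive search over t = 0..255 (step 4).
import numpy as np

def otsu_threshold(gray_u8: np.ndarray) -> int:
    """Return the t maximizing g = w0 * w1 * (u0 - u1)^2."""
    hist = np.bincount(gray_u8.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                        # normalized histogram
    levels = np.arange(256)
    best_t, best_g = 0, -1.0
    for t in range(256):
        w0, w1 = p[:t + 1].sum(), p[t + 1:].sum()
        if w0 == 0 or w1 == 0:                   # skip degenerate splits
            continue
        u0 = (levels[:t + 1] * p[:t + 1]).sum() / w0   # mean of class 0..t
        u1 = (levels[t + 1:] * p[t + 1:]).sum() / w1   # mean of class t+1..255
        g = w0 * w1 * (u0 - u1) ** 2
        if g > best_g:
            best_t, best_g = t, g
    return best_t

# Pixels above the threshold form the feature region:
# feature_mask = gray_u8 > otsu_threshold(gray_u8)
```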

Further, the superpixels obtained in step 5) with the simple linear iterative clustering (SLIC) algorithm form a uniform, cell-like arrangement. Specifically: the visible bands of the image are segmented n times at different scales based on the SLIC algorithm, the scales being {N_i, i ∈ 1,2,3…n}, where N_i is the average number of pixels per superpixel in the i-th segmentation, N_1 is the smallest segmentation scale, N_n is the largest, and the scale increases from N_1 to N_n in equal steps S. The n superpixel layers produced are denoted {L_i, i ∈ 1,2,3…n}; {P_ij, j ∈ 1,2,3…} denotes the j-th superpixel of L_i, and Num_ij denotes the number of pixels in P_ij.

From the dissimilarity feature image with window size 23 in step 3), the OTSU method yields a feature map denoted L_f. Based on L_f, the feature density D_ij of every P_ij is computed as

D_ij = Num_f / Num_ij

where Num_f is the number of feature points of L_f falling within the extent of P_ij. This gives the feature density L′_i of each L_i, and the L′_i are finally accumulated into the final feature density distribution

L_d = Σ_{i=1}^{n} L′_i

i.e., the pixels of the individual feature density images are added to give the total feature density L_d. A suitable threshold is then set on the density image L_d, and the denser part is the urban extraction result. The threshold is set between 0 and 1 and is multiplied by the number of segmentation layers; the part above the threshold is extracted as urban.
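
The sketch below illustrates the multi-layer density construction, assuming scikit-image's slic is available; deriving the segment count from the target average superpixel size, and the helper names, are this document's assumptions rather than the patent's:

```python
# Multi-scale superpixel feature density (step 5), sketched with
# scikit-image's SLIC. feature_mask is the binary OTSU feature map L_f.
import numpy as np
from skimage.segmentation import slic

def feature_density(rgb: np.ndarray, feature_mask: np.ndarray,
                    n1: int = 300, step: int = 10,
                    layers: int = 5) -> np.ndarray:
    """Accumulate per-superpixel feature densities over several scales."""
    h, w = feature_mask.shape
    density = np.zeros((h, w), dtype=float)
    for i in range(layers):
        avg_pixels = n1 + i * step                  # scale N_i = N_1 + i*S
        n_segments = max(1, (h * w) // avg_pixels)  # slic takes a segment count
        labels = slic(rgb, n_segments=n_segments, start_label=0)
        # D_ij = Num_f within superpixel / Num_ij pixels of superpixel.
        num_f = np.bincount(labels.ravel(),
                            weights=feature_mask.ravel().astype(float))
        num_ij = np.bincount(labels.ravel()).astype(float)
        density += (num_f / np.maximum(num_ij, 1))[labels]  # back to pixels
    return density
```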

The advantages and beneficial effects of the invention are as follows:

Based on the advantages of frequency-domain filtering, the invention proposes a filtering method for remote sensing images that removes the interference of the low-frequency information in the original image. In the texture feature image computed in step (3), the gap between the gray values of urban texture and those of natural texture is therefore widened, so the OTSU algorithm yields a more accurate feature region.

The invention proposes using superpixels as the computation range of the local feature density, introducing the boundaries between land covers as a constraint: different land covers do not have their feature density computed within the same local range, yielding a better-behaved feature density distribution. Superimposing multiple layers of feature density increases the density contrast between the inside and outside of the built-up area while refining the urban boundary. More accurate urban information is obtained, and extraction with an empirical threshold achieves better results.

In extraction experiments on Sentinel-2 MSI L1C visible-band remote sensing imagery, the extraction results of the built-up area extraction method proposed by the invention show considerably higher completeness and correctness than existing mainstream methods.

Description of Drawings

Figure 1 is the obtained Sentinel-2 MSI L1C brightness image

Figure 2 is the image after frequency-domain filtering

Figure 3 is the computed rotation-invariant dissimilarity feature

Figure 4 is the feature regions binarized with OTSU

Figure 5 is the feature density computed from multi-layer superpixels

Figure 6 is the urban extraction result of the threshold method

Figure 7 is the flowchart of the method

Detailed Description

The technical solutions in the embodiments of the present invention will be described clearly and in detail below with reference to the accompanying drawings. The described examples are only some of the examples of the invention.

The technical scheme of the present invention for solving the above technical problems is as follows:

Targeting the characteristics of ground objects in 10 m remote sensing imagery, the invention selects rotation-invariant dissimilarity features in 10 directions and obtains the feature regions with the OTSU threshold method. To address the problem that existing methods rely too heavily on feature extraction, the invention proposes a method for computing local feature density: based on SLIC segmentation, whose superpixels adhere to land-cover boundaries, multi-layer superpixels are constructed and the feature density is computed inside each superpixel. This lowers the feature density outside the urban area and thus achieves a high-precision division between urban and non-urban regions.

The technical scheme of the method is as follows.

(1) Acquire 10 m resolution remote sensing imagery in the visible bands and perform the relevant preprocessing;

(2) Construct a grayscale image, filter out the low-frequency information based on the Fourier transform, and obtain the high-frequency grayscale image;

(3) From the high-frequency image obtained in step (2), compute the gray level co-occurrence matrix dissimilarity features in several directions and extract a rotation-invariant texture feature;

(4) Map the rotation-invariant texture feature image obtained in step (3) to a grayscale image with values 0 to 255, and extract the feature regions with the Otsu threshold method (OTSU);

(5) Segment the true-color composite of Sentinel-2 at multiple scales with the simple linear iterative clustering (SLIC) algorithm to obtain multi-layer superpixel partitions. Taking each superpixel as a unit, obtain the feature density image at each scale and superimpose all feature density images to construct a refined feature density image.

(6) Finally, select a suitable density threshold and extract the urban area.

The invention is further described below:

(1) Crop, mosaic, and remove clouds from the acquired imagery to obtain a clean, complete visible-band image.

(2) Construct a grayscale image from the brightness information of the processed multi-band image, then filter it in the frequency domain. The two-dimensional fast Fourier transform (FFT) converts the spatial-domain image to the frequency domain: F(u,v) = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x,y)·e^(−j2π(ux/M + vy/N)), where M and N respectively denote the width and height of the image, (x,y) are the spatial coordinates of the image, (u,v) are the coordinates of the frequency image, f(x,y) is the input image, and F(u,v) is the resulting Fourier frequency-domain image. The frequency-domain image is filtered with a Gaussian low-pass filter: F′(u,v) = F(u,v)·H(u,v), H(u,v) = e^(−D²(u,v) / (2σ²)), where H is the Gaussian filter weight function, D is the Euclidean distance from the point (u,v) of the filter window to the center of the Gaussian window, and σ is the spread of the Gaussian weight function. The filtered result is converted back to the spatial domain with the inverse fast Fourier transform to obtain the background image f′(x,y), and subtracting it from the original grayscale image f(x,y) gives the grayscale image with the low-frequency components suppressed: G(x,y) = f(x,y) − f′(x,y), where G(x,y) is the final high-frequency component image.

(3) The gray level co-occurrence matrix is an n×n matrix, n being the number of gray levels of the image (usually 256 for a grayscale image). It is defined as M(i,j) = [m(i,j,Δx,Δy); i,j ∈ (1,2,…,n)]: the element of M in row i, column j is the number of pixel pairs within a w×w window of the image whose positional offset is (Δx,Δy) and whose gray values are i and j respectively. The invention builds co-occurrence matrices for 10 positional relations, namely (Δx,Δy) ∈ {(1,0), (2,0), (2,1), (1,1), (1,2), (−2,1), (−1,1), (1,2), (0,2), (0,1)}. The dissimilarity formula is DIS = Σ_{i=1}^{n} Σ_{j=1}^{n} m(i,j)·|i−j|, where m(i,j) is the element of the gray level co-occurrence matrix M in row i, column j. Co-occurrence matrices are built in the 10 directions, the dissimilarity features of the 10 directions are computed, and the minimum over the 10 directions is selected to construct the rotation-invariant texture feature.

(4) The Otsu threshold method (Nobuyuki Otsu; OTSU) is a classical automatic thresholding algorithm that divides an image into two parts, background and target, according to its gray level characteristics. The larger the between-class variance of background and target, the larger the difference between the two parts of the image; misclassifying part of the target as background, or part of the background as target, shrinks the difference between the two parts. The segmentation that maximizes the between-class variance therefore minimizes the probability of misclassification. The rotation-invariant texture feature image obtained in step (3) is mapped to a grayscale image with values 0 to 255 and split into two parts with OTSU; the part with the larger gray values is the feature region to be extracted.

(5) Simple linear iterative clustering (SLIC) is an efficient superpixel segmentation algorithm. The superpixels it produces form a uniform, cell-like arrangement of similar sizes, and each superpixel deforms adaptively to follow the boundaries of the underlying image, aligning with land-cover edges. SLIC computes in the CIELAB color space: the (l,a,b) color and spatial coordinates (x,y) of each pixel form a 5-dimensional vector, and the similarity of two pixels is measured by Euclidean distance, computed as

d_c = √((l_i − l_j)² + (a_i − a_j)² + (b_i − b_j)²)
d_s = √((x_i − x_j)² + (y_i − y_j)²)
D = √((d_c / N_c)² + (d_s / N_s)²)

where i and j denote two pixels, d_c is the color distance, d_s is the spatial distance, D is the similarity of the two pixels, N_c is the desired maximum within-class color distance, and N_s is the desired maximum within-class spatial distance. The distance D between a pixel and a cluster center determines which superpixel the pixel belongs to. N_c and N_s are two important quantities affecting the clustering result. In general, N_s is determined by the number of cluster centers: N_s = √(N/K), where N is the total number of pixels of the image and K is the number of cluster centers, the only input parameter of the algorithm. In the invention, the average number of pixels per superpixel is used as the scale measuring the superpixel segmentation.
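
A small numpy sketch of this distance measure (illustrating only the formula, not the full SLIC iteration):

```python
# SLIC distance between a pixel and a cluster center, each given as a
# 5-vector [l, a, b, x, y]; Nc and Ns are the normalizers defined above.
import numpy as np

def slic_distance(p: np.ndarray, q: np.ndarray, Nc: float, Ns: float) -> float:
    dc = np.linalg.norm(p[:3] - q[:3])     # CIELAB color distance d_c
    ds = np.linalg.norm(p[3:] - q[3:])     # spatial distance d_s
    return float(np.hypot(dc / Nc, ds / Ns))

# N_s is usually sqrt(N / K) for N pixels and K cluster centers:
# Ns = np.sqrt(n_pixels / k_centers)
```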

Based on the SLIC algorithm, the invention segments the visible bands of the image n times at different scales {N_i, i ∈ 1,2,3…n}, where N_i is the average number of pixels per superpixel in the i-th segmentation, N_1 is the smallest scale, N_n is the largest, and the scale increases from N_1 to N_n in equal steps S. The n superpixel layers produced are denoted {L_i, i ∈ 1,2,3…n}; {P_ij, j ∈ 1,2,3…} denotes the j-th superpixel of L_i and Num_ij the number of pixels of P_ij. From the dissimilarity feature image with window size 23 in step (3), the OTSU method yields the feature map denoted L_f. Based on L_f, the feature density D_ij of every P_ij is computed as D_ij = Num_f / Num_ij, where Num_f is the number of feature points of L_f falling within the extent of P_ij. This gives the feature density L′_i of each L_i, and the L′_i are finally accumulated into the final feature density distribution L_d = Σ_{i=1}^{n} L′_i, i.e., the pixels of the individual feature density images are added to give the total feature density L_d. With N_1 = 300, S = 10, and n = 5, five layers of superpixel segmentation are obtained; the feature density of each is computed with the above method, and the layers are finally superimposed into the feature density image L_d.

(6) Based on the feature density image L_d obtained in step (5), a suitable threshold is set; the denser part is the urban extraction result. A threshold of 0.7 × n, n being the number of segmentation layers, usually gives good results.
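
Continuing the earlier sketch (the helper feature_density is this document's illustrative placeholder, not part of the patent):

```python
# Final urban mask (step 6): threshold the accumulated density at 0.7 * n.
n_layers = 5
density = feature_density(rgb, feature_mask, n1=300, step=10, layers=n_layers)
urban_mask = density > 0.7 * n_layers   # True where pixels fall in the built-up area
```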

1. The example uses an image of northern Beijing taken in September 2017 by the Sentinel-2A satellite launched by the European Space Agency, with product identifier "S2A_MSIL1C_20170922T025541_N0205_R032_T50". Since the invention extracts the urban extent from the texture information of the image, radiometric calibration and atmospheric correction are generally unnecessary; only cloud removal and cropping are needed. The image shown in Figure 1 is obtained.

2. The obtained Sentinel-2A satellite data have multiple bands; the maximum of the three visible bands is taken to obtain the brightness image, as shown in Figure 2. Frequency-domain filtering of the brightness image yields the grayscale image with the low-frequency information removed, shown in Figure 3. In the example, the value of σ is set according to the image dimensions, where M and N denote the width and height of the image.
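
A one-step sketch of the brightness construction, assuming the three visible bands are stacked along the last axis (the σ value shown is illustrative only, since the source does not reproduce its exact expression):

```python
# Brightness image as the per-pixel maximum of the three visible bands.
import numpy as np

# rgb has shape (height, width, 3): the red, green, and blue bands.
brightness = rgb.max(axis=2)

# High-pass result feeding step 3, reusing the earlier sketch's helper.
high_freq = suppress_low_frequencies(brightness.astype(float), sigma=30.0)
```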

3. Reduce the gray levels of the image obtained in the previous step to 0-31, set the window size w to 32, compute the texture dissimilarity features in the 10 directions, and take the minimum to obtain the feature image, as shown in Figure 4.

4. Map the feature image obtained in the previous step to gray levels 0-255, select a threshold with the Otsu threshold method, divide the image into two parts, and take the part with the larger gray values as the feature region, as shown in Figure 5.

5. With the simple linear iterative clustering algorithm, set the smallest segmentation scale to 300 pixels, the step to 10, and the largest segmentation scale to 340; performing 5 segmentations in this way gives a 5-layer superpixel segmentation. Compute the feature density of each superpixel layer per superpixel, and superimpose the 5 resulting feature density images into the total feature density image. The result is shown in Figure 6.

6. Finally, set a feature density threshold; the part above the threshold is the urban area. The threshold set in the example is 3.5.

The urban extraction method proposed by the invention can extract a precise urban extent. Compared with the result of manual interpretation, the example above achieves a completeness of 0.96 and a correctness of 0.92.

The above embodiments should be understood as only illustrating the invention and not limiting its scope of protection. After reading the content of the invention, a person skilled in the art can make various changes or modifications to the invention, and such equivalent changes and modifications likewise fall within the scope defined by the claims of the invention.

Claims (7)

1. A method for automatically extracting urban built-up areas from urban texture feature density, characterized by comprising the following steps:
1) acquiring 10 m resolution remote sensing imagery in the visible bands, and performing cropping, mosaicking, and cloud removal to obtain the area to be extracted;
2) constructing a grayscale image from the preprocessed imagery, filtering out low-frequency texture from the grayscale image based on the Fourier transform, and obtaining the grayscale image of the high-frequency components;
3) for the grayscale image of the high-frequency components obtained in step 2), computing the gray level co-occurrence matrix dissimilarity features in several directions, and extracting a rotation-invariant texture feature;
4) mapping the image formed by the rotation-invariant texture features obtained in step 3) to a grayscale image with values 0 to 255, and extracting the feature regions with the Otsu threshold method OTSU;
5) performing multi-scale segmentation of the image of the area to be extracted obtained in step 1) with the simple linear iterative clustering SLIC algorithm to obtain multi-layer superpixel partitions, computing, with the superpixel as the unit, the feature density image at each scale, and superimposing all feature density images to construct a refined feature density image.
2. The method for automatically extracting urban built-up areas from urban texture feature density according to claim 1, characterized in that step 1) crops, mosaics, and removes clouds from the acquired imagery, specifically comprising: a single remote sensing scene usually cannot completely cover the area to be extracted, so several adjacent scenes are downloaded, mosaicked into a fully covering image with the Map Based Mosaic function of the ENVI software, and then cropped to the extent of the area to be extracted (a vector file, .shp) with the Subset Data from ROIs function of ENVI, removing the part outside the extraction area; ground objects in remote sensing images are very commonly occluded by clouds, and the cloud-covered part of the image to be processed cannot be extracted well, so cloud removal is performed as follows: manually identify the cloud regions, find cloud-free images of the same region from other dates, crop out that region, and mosaic it into the image to be processed to obtain a clean, cloud-free image; the preprocessing is done with the ENVI software.
3. The method for automatically extracting urban built-up areas from urban texture feature density according to claim 1, characterized in that step 2) filters out low-frequency texture from the grayscale image based on the Fourier transform and obtains the grayscale image of the high-frequency components, specifically comprising: transforming the spatial-domain image to the frequency domain with the two-dimensional fast Fourier transform FFT, filtering the frequency-domain image with a Gaussian low-pass filter, transforming the filtered result back to the spatial domain with the inverse fast Fourier transform to obtain the background image, and subtracting the background image from the original grayscale image to obtain the grayscale image with the low-frequency components suppressed.
4. The method for automatically extracting urban built-up areas from urban texture feature density according to claim 3, characterized in that the spatial-domain image is transformed to the frequency domain with the two-dimensional fast Fourier transform FFT according to the formula F(u,v) = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x,y)·e^(−j2π(ux/M + vy/N)), where M and N respectively denote the width and height of the image, (x,y) denotes the spatial coordinates of the image, (u,v) denotes the coordinates of the frequency image, f(x,y) is the input image, and F(u,v) is the resulting Fourier frequency-domain image; the frequency-domain image is filtered with a Gaussian low-pass filter: F′(u,v) = F(u,v)·H(u,v), H(u,v) = e^(−D²(u,v) / (2σ²)), where H is the Gaussian filter weight function, D denotes the Euclidean distance from the point (u,v) of the Gaussian filter window to the center of the Gaussian window, and σ denotes the degree of spread of the Gaussian weight function; the filtered result is transformed back to the spatial domain with the inverse fast Fourier transform to obtain the background image f′(x,y), and the background image f′(x,y) is subtracted from the original grayscale image f(x,y) to obtain the grayscale image with the low-frequency components suppressed: G(x,y) = f(x,y) − f′(x,y), where G(x,y) is the finally obtained high-frequency component image.
5. The method for automatically extracting urban built-up areas from urban texture feature density according to claim 1, characterized in that step 3) uses the high-frequency component grayscale image obtained in the previous step to construct the gray level co-occurrence matrix dissimilarity features in several directions and extract the rotation-invariant texture feature, the process being as follows:
the gray level co-occurrence matrix is an n×n matrix, n being the number of gray levels of the image, defined as M(i,j) = [m(i,j,Δx,Δy); i,j ∈ (1,2,…,n)]; the element of the matrix M in row i, column j is the number of pixel pairs within a w×w window of the image whose positional offset is (Δx,Δy) and whose gray values are i and j respectively; the dissimilarity formula is DIS = Σ_{i=1}^{n} Σ_{j=1}^{n} m(i,j)·|i−j|, where m(i,j) is the value of the element of the gray level co-occurrence matrix M in row i, column j; n1 gray level co-occurrence matrices are constructed in n1 directions, the dissimilarity features of the n1 directions are thereby computed, and the minimum over the n1 directions is selected to construct the rotation-invariant texture feature; the n1 directions are respectively (Δx,Δy) ∈ {(1,0), (2,0), (2,1), (1,1), (1,2), (−2,1), (−1,1), (1,2), (0,2), (0,1)}.
6. The method for automatically extracting urban built-up areas from urban texture feature density according to claim 1, characterized in that step 4) extracts the feature regions with the Otsu threshold method OTSU, specifically comprising:
1) linearly mapping the feature image to a grayscale image with values 0-255 and computing the gray level histogram, i.e., the number of pixels at each gray level;
2) normalizing the histogram, i.e., dividing the number of pixels at each gray level by the total number of pixels;
3) computing the maximum between-class variance: let t be a gray threshold; the proportion of the pixels of the whole image with gray levels 0 to t is w0 and their average gray value is u0; the proportion of the pixels with gray levels t to 255 is w1 and their average gray value is u1; compute g = w0 × w1 × (u0 − u1)², setting t to 0-255 in turn; the t maximizing g is the threshold; the part above t is the feature region and the part below t is the other regions.
7. The method for automatically extracting urban built-up areas from urban texture feature density according to claim 1, characterized in that the superpixels obtained in step 5) with the linear iterative clustering algorithm SLIC form a uniform, cell-like arrangement, specifically comprising: segmenting the visible bands of the image n times at different scales based on the SLIC algorithm, the scales being {N_i, i ∈ 1,2,3…n}, where N_i denotes the average number of pixels per superpixel of the i-th segmentation, N_1 is the smallest segmentation scale, N_n is the largest segmentation scale, increasing from N_1 to N_n in equal steps S; the n superpixel layers produced are denoted {L_i, i ∈ 1,2,3…n}, {P_ij, j ∈ 1,2,3…} denotes the j-th superpixel of L_i, and Num_ij denotes the number of pixels of P_ij;
from the dissimilarity feature image with window size 23 in step 3), the OTSU method yields a feature map denoted L_f; based on L_f, the feature density D_ij of all P_ij is computed as D_ij = Num_f / Num_ij, where Num_f denotes the number of feature points of L_f falling within the extent of P_ij; the feature density L′_i of each L_i is obtained, and the L′_i are finally accumulated into the final feature density distribution L_d = Σ_{i=1}^{n} L′_i, i.e., the pixels of the individual feature density images are added to give the total feature density L_d; a suitable threshold is set on the density image L_d, and the denser part is the urban extraction result; the set threshold lies between 0 and 1 and is multiplied by the number of segmentation layers; the part above the threshold is extracted as urban.
CN201910212324.4A 2019-03-20 2019-03-20 A Method for Automatically Extracting Urban Built-up Areas from Urban Texture Feature Density Active CN110070545B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910212324.4A CN110070545B (en) 2019-03-20 2019-03-20 A Method for Automatically Extracting Urban Built-up Areas from Urban Texture Feature Density

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910212324.4A CN110070545B (en) 2019-03-20 2019-03-20 A Method for Automatically Extracting Urban Built-up Areas from Urban Texture Feature Density

Publications (2)

Publication Number Publication Date
CN110070545A true CN110070545A (en) 2019-07-30
CN110070545B CN110070545B (en) 2023-05-26

Family

ID=67366390

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910212324.4A Active CN110070545B (en) 2019-03-20 2019-03-20 A Method for Automatically Extracting Urban Built-up Areas from Urban Texture Feature Density

Country Status (1)

Country Link
CN (1) CN110070545B (en)

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006230910A (en) * 2005-02-28 2006-09-07 Konica Minolta Medical & Graphic Inc Image processor and image processing method
US20090212199A1 (en) * 2007-12-28 2009-08-27 Fujifilm Corporation Radiation image detection apparatus and manufacturing method of radiation image detector
CN101702019A (en) * 2009-11-26 2010-05-05 上海电机学院 Extraction method of residential area in SAR image
CN102687140A (en) * 2009-12-30 2012-09-19 诺基亚公司 Methods and apparatuses for facilitating content-based image retrieval
CN104508707A (en) * 2012-05-28 2015-04-08 奥普托斯股份有限公司 Methods and apparatus for image processing, and laser scanning ophthalmoscope having an image processing apparatus
CN103065296A (en) * 2012-12-14 2013-04-24 中南大学 High-resolution remote sensing image residential area extraction method based on edge feature
CN103745453A (en) * 2013-12-11 2014-04-23 河海大学 Town information extraction method based on Google Earth remote sensing image
CN103824309A (en) * 2014-03-12 2014-05-28 武汉大学 Automatic extracting method of urban built-up area border
CN104182754A (en) * 2014-08-19 2014-12-03 山东临沂烟草有限公司 Rural resident point information extraction method based on high-resolution remote-sensing image
CN104376330A (en) * 2014-11-19 2015-02-25 西安电子科技大学 Polarization SAR image ship target detection method based on superpixel scattering mechanism
CN104835016A (en) * 2015-05-27 2015-08-12 北京搜狐新媒体信息技术有限公司 Crowd density calculation method and device
CN105096315A (en) * 2015-06-19 2015-11-25 西安电子科技大学 Method for segmenting heterogeneous super-pixel SAR (Synthetic Aperture Radar) image based on Gamma distribution
CN105046262A (en) * 2015-06-29 2015-11-11 中国人民解放军国防科学技术大学 Robust extended local binary pattern textural feature extraction method
CN107229917A (en) * 2017-05-31 2017-10-03 北京师范大学 A kind of several remote sensing image general character well-marked target detection methods clustered based on iteration
CN107330875A (en) * 2017-05-31 2017-11-07 河海大学 Based on the forward and reverse heterogeneous water body surrounding enviroment change detecting method of remote sensing images
CN108171193A (en) * 2018-01-08 2018-06-15 西安电子科技大学 Polarization SAR Ship Target Detection method based on super-pixel local message measurement
CN109271864A (en) * 2018-08-17 2019-01-25 武汉烽火凯卓科技有限公司 A kind of crowd density estimation method based on wavelet transformation and support vector machines

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
SCOTT G J et al.: "Fusion of deep convolutional neural networks for land cover classification of high-resolution imagery", IEEE Geoscience & Remote Sensing Letters, 2017, 14(9) *
JIANG Huiwei: "Object-oriented change detection in high-resolution remote sensing images using multi-feature fusion", Journal of Shaanxi University of Technology (Natural Science Edition) *
ZHANG Ningxin et al.: "Research on residential area detection algorithms for high-resolution remote sensing images", Computer Engineering & Science *
LIN Xiangguo et al.: "Extraction of residential areas from high-resolution remote sensing images by fusing right-angle corner and right-angle edge features", Acta Geodaetica et Cartographica Sinica *
ZHENG Qingqing et al.: "Improved texture image segmentation method based on region merging", Journal of Huazhong University of Science and Technology (Natural Science Edition) *
GUO Chunmei et al.: "Superpixel tracking algorithm fusing saliency and spatio-temporal context", Pattern Recognition and Artificial Intelligence *
GAO Yang et al.: "Improved ACM model based on composite vector field and medical image segmentation", Computer Engineering and Applications *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111739073A (en) * 2020-06-24 2020-10-02 刘秀萍 Efficient and rapid image registration optimization method for handheld device
CN114037747A (en) * 2021-11-25 2022-02-11 佛山技研智联科技有限公司 Image feature extraction method and device, computer equipment and storage medium
CN114943897A (en) * 2022-05-31 2022-08-26 南京大学 Town development boundary defining method based on superpixel segmentation
CN114943897B (en) * 2022-05-31 2023-11-24 南京大学 Urban development boundary delineation method based on super-pixel segmentation
CN116310806A (en) * 2023-02-28 2023-06-23 北京理工大学珠海学院 Intelligent agriculture integrated management system and method based on image recognition
CN116310806B (en) * 2023-02-28 2023-08-29 北京理工大学珠海学院 Intelligent agriculture integrated management system and method based on image recognition

Also Published As

Publication number Publication date
CN110070545B (en) 2023-05-26

Similar Documents

Publication Publication Date Title
CN108573276B (en) A change detection method based on high-resolution remote sensing images
Li et al. Multi-feature combined cloud and cloud shadow detection in GaoFen-1 wide field of view imagery
Liang et al. Material based salient object detection from hyperspectral images
CN103971115B (en) Automatic extraction method for newly-increased construction land image spots based on NDVI and PanTex index
Zhang et al. Object-oriented shadow detection and removal from urban high-resolution remote sensing images
CN107358260B (en) A Multispectral Image Classification Method Based on Surface Wave CNN
CN105335966B (en) Multiscale morphology image division method based on local homogeney index
CN111079556A (en) Multi-temporal unmanned aerial vehicle video image change area detection and classification method
CN110070545B (en) A Method for Automatically Extracting Urban Built-up Areas from Urban Texture Feature Density
CN111079596A (en) System and method for identifying typical marine artificial target of high-resolution remote sensing image
CN103489171B (en) Based on the even color method of the even light of the robotization of remote sensing image on a large scale in standard color storehouse
CN108898065A (en) Candidate regions quickly screen and the depth network Ship Target Detection method of dimension self-adaption
CN106971397B (en) Based on the city high-resolution remote sensing image dividing method for improving JSEG algorithms
CN104951765B (en) Remote Sensing Target dividing method based on shape priors and visual contrast
CN107301420A (en) A kind of thermal infrared imagery object detection method based on significance analysis
CN103077515A (en) Multi-spectral image building change detection method
CN103366373B (en) Multi-time-phase remote-sensing image change detection method based on fuzzy compatible chart
CN111339948A (en) An automatic identification method for newly added buildings in high-resolution remote sensing images
CN107292328A (en) The remote sensing image shadow Detection extracting method and system of multiple dimensioned multiple features fusion
CN113744191A (en) Automatic cloud detection method for satellite remote sensing image
Liu et al. Object-oriented detection of building shadow in TripleSat-2 remote sensing imagery
Chehata et al. Object-based forest change detection using high resolution satellite images
Huang et al. Multi-feature combined for building shadow detection in GF-2 Images
Manaf et al. Hybridization of SLIC and Extra Tree for Object Based Image Analysis in Extracting Shoreline from Medium Resolution Satellite Images.
CN110310263B (en) A method for detecting residential areas in SAR images based on saliency analysis and background priors

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant