JP2013012034A - Area extraction method, area extraction program and area extraction device


Info

Publication number: JP2013012034A
Application number: JP2011144266A
Authority: JP (Japan)
Prior art keywords: area, region, discretization, building, extraction
Legal status: Withdrawn (the listed status is an assumption, not a legal conclusion)
Other languages: Japanese (ja)
Inventor: Junichi Suzaki (純一 須崎)
Current assignee: Kyoto University (listed assignees may be inaccurate)
Original assignee: Kyoto University
Application filed by Kyoto University
Priority application: JP2011144266A
Related applications: PCT/JP2012/061204 (WO2012169294A1); US14/122,082 (US20140081605A1)


Abstract

PROBLEM TO BE SOLVED: To provide a technique for extracting building regions more accurately on the basis of data such as aerial photographs.
SOLUTION: When building regions are extracted from data such as aerial photographs, a plurality of different discretization widths are set, and for each discretization width the luminance values of the data are discretized into a plurality of values set at that width. In each discretized image, pixels having the same value are connected, and regions whose shape is close to a rectangle are extracted as building-region candidates. Then, among the plurality of regions extracted for the respective discretization widths, the regions whose shapes are closer to a rectangle are adopted as building regions.

Description

The present invention relates to a method, a program, and an apparatus for extracting building regions on the basis of data of photographs taken from an aircraft or an artificial satellite.

Feature extraction and region extraction from overhead views of cities have long been important themes (for example, Non-Patent Document 1). For example, road extraction from aerial photographs or satellite images has been studied actively (for example, Non-Patent Documents 2 and 3). Classification methods for remote sensing images can be broadly divided into pixel-based classification and region (object)-based classification. Pixel-based methods, represented by clustering (so-called unsupervised), the maximum likelihood method (supervised), and Support Vector Machines (SVM) (supervised), assign each pixel to a class on the basis of statistical theory (for example, Non-Patent Document 4). Region-based methods, on the other hand, use the context around each pixel; classification using mathematical morphological approaches is a representative example (for example, Non-Patent Documents 5 to 9).

FIGS. 1(a) and 1(b) are examples of aerial photographs of an urban area where houses are densely packed (Higashiyama Ward, Kyoto City). In such dense urban areas, conventional region extraction methods do not work effectively. Analyzing the causes: first, (a) because buildings stand close to each other, the boundary between buildings becomes unclear when the luminance values and textures of their roofs are similar. Also, (b) the shadows of adjacent buildings are easily cast into the image, a factor peculiar to dense urban areas. Furthermore, since traditional Japanese buildings often use roofs with corrugated cross-sectional shapes (for example, slate roofs), (c) it is difficult to extract the contours of roofs whose textures have widely varying luminance values, a factor due to the building characteristics of the target area.

Non-Patent Document 1: Weng, Q. and Quattrochi, D.A., "Urban Remote Sensing", CRC Press, 2007
Non-Patent Document 2: Hu, J., Razdan, A., Femiani, J.C., Cui, M. and Wonka, P., "Road network extraction and intersection detection from aerial images by tracking road footprints", IEEE Transactions on Geoscience and Remote Sensing, Vol. 45, No. 12, pp. 4144-4157, 2007
Non-Patent Document 3: Movaghati, S., Moghaddamjoo, A. and Tavakoli, A., "Road extraction from satellite images using particle filtering and extended Kalman filtering", IEEE Transactions on Geoscience and Remote Sensing, Vol. 48, No. 7, pp. 2807-2817, 2010
Non-Patent Document 4: Tso, B. and Mather, P.M., "Classification methods for remotely sensed data, 2nd ed.", CRC Press, 2009
Non-Patent Document 5: Soille, P. and Pesaresi, M., "Advances in mathematical morphology applied to geoscience and remote sensing", IEEE Transactions on Geoscience and Remote Sensing, Vol. 40, No. 9, pp. 2042-2055, 2002
Non-Patent Document 6: Benediktsson, J.A., Pesaresi, M. and Amason, K., "Classification and feature extraction for remote sensing images from urban areas based on morphological transformations", IEEE Transactions on Geoscience and Remote Sensing, Vol. 41, No. 9, pp. 1940-1949, 2003
Non-Patent Document 7: Benediktsson, J.A., Palmason, J.A. and Sveinsson, J.R., "Classification of hyperspectral data from urban areas based on extended morphological profiles", IEEE Transactions on Geoscience and Remote Sensing, Vol. 43, No. 3, pp. 480-491, 2005
Non-Patent Document 8: Bellens, R., Gautama, S., Martinez-Fonte, L., Philips, W., Chan, J.C.-W. and Canters, F., "Improved Classification of VHR Images of Urban Areas Using Directional Morphological Profiles", IEEE Transactions on Geoscience and Remote Sensing, Vol. 46, No. 10, pp. 2803-2813, 2008
Non-Patent Document 9: Tuia, D., Pacifici, F., Kanevski, M. and Emery, W.J., "Classification of Very High Spatial Resolution Imagery Using Mathematical Morphology and Support Vector Machines", IEEE Transactions on Geoscience and Remote Sensing, Vol. 47, No. 11, pp. 3866-3879, 2009
Non-Patent Document 10: Canny, J., "A Computational Approach to Edge Detection", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-8, No. 6, pp. 679-698, 1986
Non-Patent Document 11: Zhong, J. and Ning, R., "Image denoising based on wavelets and multifractals for singularity detection", IEEE Transactions on Image Processing, Vol. 14, No. 10, pp. 1435-1447, 2005
Non-Patent Document 12: Tonazzini, A., Bedini, L. and Salerno, E., "A Markov model for blind image separation by a mean-field EM algorithm", IEEE Transactions on Image Processing, Vol. 15, No. 2, pp. 473-482, 2006
Non-Patent Document 13: Berlemont, S. and Olivo-Marin, J.-C., "Combining Local Filtering and Multiscale Analysis for Edge, Ridge, and Curvilinear Objects Detection", IEEE Transactions on Image Processing, Vol. 19, No. 1, pp. 74-84, 2010
Non-Patent Document 14: Ding, J., Ma, R. and Chen, S., "A Scale-Based Connected Coherence Tree Algorithm for Image Segmentation", IEEE Transactions on Image Processing, Vol. 17, No. 2, pp. 204-216, 2008
Non-Patent Document 15: Chien, S.Y., Ma, S.Y. and Chen, L.G., "Efficient moving object segmentation algorithm using background registration technique", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 12, No. 7, pp. 577-586, 2002
Non-Patent Document 16: Tsai, V.J.D., "A comparative study on shadow compensation of color aerial images in invariant color models", IEEE Transactions on Geoscience and Remote Sensing, Vol. 44, No. 6, pp. 1661-1671, 2006
Non-Patent Document 17: Ma, H., Qin, Q. and Shen, X., "Shadow Segmentation and Compensation in High Resolution Satellite Images", Proceedings of the IEEE International Geoscience and Remote Sensing Symposium 2008, Vol. 2, pp. II-1036 - II-1039, 2008

Factor (a) above poses, in other words, the question of how to cope with cases where the edges of a region are not sufficiently obtained. Excluding unnecessary edges from the excess edges extracted because of factor (c) is also a problem, and the problem of (c) can be considered related to that of (a). For example, in Non-Patent Document 10, Canny proposed an edge extraction operator that is robust to noise; it is one of the most widely used edge extraction operators to date. Methods that remove noise using wavelets (for example, Non-Patent Document 11), or that exploit mean values (for example, Non-Patent Document 12) or multi-resolution images (for example, Non-Patent Document 13), have also been reported.

In connection with factor (b) above, studies on restoring the luminance values of shadow regions have been reported (for example, Non-Patent Documents 14 to 17). However, as far as can be seen from the restoration results in, for example, Non-Patent Document 17, two problems remain unsolved: the boundary between the original shadow and non-shadow regions appears clearly in the restored image, and the correction values within the shadow region are difficult to determine, sometimes leading to over-restoration.

In view of these conventional problems, an object of the present invention is to provide a technique for extracting building regions more accurately on the basis of data such as aerial photographs.

(1) The present invention is a region extraction method for extracting building regions on the basis of data of photographs taken from an aircraft or an artificial satellite, characterized in that a plurality of different discretization widths are set; for each discretization width, the luminance values of the data are discretized into a plurality of values set at that discretization width; in the discretized image thus obtained, pixels having the same value are connected and regions whose shape is close to a rectangle are extracted as building-region candidates; and, among the groups of regions extracted for the respective discretization widths, regions whose shapes are closer to a rectangle are adopted as building regions.

In the region extraction method described above, discretization makes it possible to extract even a region whose texture has a large variance of luminance values as a single region. Connecting pixels with the same value and extracting regions close to a rectangle in shape also makes it easier to remove non-building regions. Furthermore, adopting, among the groups of regions extracted for the plurality of different discretization widths, the regions closer to a rectangle exhibits a function equivalent to applying a locally optimal spatial smoothing parameter.

(2) In the region extraction method of (1) above, a rectangular index defined by (area of the region / area of the rectangle enclosing the region) can be used as the index representing closeness to a rectangle.
In this case, the degree of "closeness" of a shape to a rectangle can be expressed simply and accurately as an index.

(3) In the region extraction method of (1) or (2) above, when merging mutually adjacent regions in the above extraction would make the result even closer to a rectangle, the merge may be made executable.
In this case, building regions can be captured more accurately.

(4) In the region extraction method of (2) above, when the rectangular index is smaller than a predetermined value, it is preferable not to adopt the region as a building region.
In this case, regions that are highly unlikely to be buildings can be excluded from extraction.

(5) In the region extraction method of (1) above, regions presumed to be vegetation may be excluded from extraction on the basis of their RGB luminance values.
In this case, such regions can be excluded from extraction in consideration of the color characteristics of vegetation.

(6) The present invention is also a region extraction program for extracting building regions on the basis of data of photographs taken from an aircraft or an artificial satellite, the program causing a computer to realize: a function of setting a plurality of different discretization widths and, for each discretization width, discretizing the luminance values of the data into a plurality of values set at that width; a function of connecting pixels having the same value in the discretized image thus obtained and extracting regions whose shape is close to a rectangle as building-region candidates; and a function of adopting, among the groups of regions extracted for the respective discretization widths, regions whose shapes are closer to a rectangle as building regions.

(7) The present invention is also a region extraction apparatus that extracts building regions on the basis of data of photographs taken from an aircraft or an artificial satellite, the apparatus having: a function of setting a plurality of different discretization widths and, for each discretization width, discretizing the luminance values of the data into a plurality of values set at that width; a function of connecting pixels having the same value in the discretized image thus obtained and extracting regions whose shape is close to a rectangle as building-region candidates; and a function of adopting, among the groups of regions extracted for the respective discretization widths, regions whose shapes are closer to a rectangle as building regions.

In the region extraction program/apparatus of (6) and (7) above, discretization makes it possible to extract even a region whose texture has a large variance of luminance values as a single region. Connecting pixels with the same value and extracting regions close to a rectangle in shape also makes it easier to remove non-building regions. Furthermore, adopting the regions closer to a rectangle among the groups of regions extracted for the plurality of different discretization widths exhibits a function equivalent to applying a locally optimal spatial smoothing parameter.

According to the present invention, building regions can be extracted more accurately on the basis of photograph data such as aerial photographs.

FIG. 1 is an example of aerial photographs of an urban area where houses are densely packed (Higashiyama Ward, Kyoto City).
FIG. 2 is a photograph taken on the ground of the area from Hokanji Temple to around Higashioji Street, Higashiyama Ward, Kyoto City.
FIG. 3 is a block diagram showing an example of a computer apparatus for carrying out the region extraction method.
FIG. 4 is a conceptual diagram of pixel connection.
FIG. 5 is a conceptual diagram of rectangular-index calculation.
FIG. 6 is a conceptual diagram of merging a target region with an adjacent region.
FIG. 7 shows edge extraction results in an urban area dense with low-rise buildings (Area 1).
FIG. 8 shows edge extraction results in an urban area where high-rise and low-rise buildings are mixed (Area 2).
FIG. 9 shows edge extraction results in an area where tall trees adjoin buildings (Area 3).
FIG. 10 shows edge extraction results in an area with many hipped-roof buildings (Area 4).
FIGS. 11 to 14 show region extraction results in Areas 1 to 4, respectively.
FIGS. 15 to 18 show comparative region extraction results for Areas 1 to 4, respectively.

A region extraction method and region extraction program according to one embodiment of the present invention are described below. As the model area for region extraction, the area from Hokanji Temple to around Higashioji Street in Higashiyama Ward, Kyoto City was chosen; in this area, wooden buildings stand side by side, filling the full frontage of their lots. FIG. 2 is a photograph of the area taken on the ground. As aerial photograph data of this area, commercially available orthorectified data with 25 cm resolution was used; the data is registered to a plane rectangular coordinate system. Although the sides of buildings remain slightly visible even after orthorectification, this poses no particular problem in practice.

Next, the region extraction method and region extraction program for extracting building regions (roof contour shapes) are described. FIG. 3 is a block diagram showing an example of a computer apparatus 10 for carrying out the region extraction method, and the region extraction program causes this computer apparatus 10 to realize its functions. The computer apparatus 10 on which the functions of the region extraction program are installed is a region extraction apparatus according to one embodiment of the present invention.

The computer apparatus 10 includes a CPU 1, a memory 3 connected to the CPU 1 via a bus 2, an auxiliary storage device 4 such as a hard disk, interfaces 5 and 6, and a drive 7 and a display 8 connected to the interfaces 5 and 6, respectively; typically, it is a personal computer. By loading a storage medium 9 such as a CD or DVD containing aerial photographs into the drive 7, which serves as an input device, the aerial photograph data can be read. The region extraction program, separately from the aerial photographs, is installed on the auxiliary storage device 4 via a storage medium or a network and then executed; that is, the region extraction program can also exist and be distributed as a storage (recording) medium.

《Region Extraction Procedure》
The region extraction (the main steps of the method/program) is described in detail below.
In outline, to extract effectively even roofs whose textures have a large variance of luminance values, the luminance values are discretized into a predetermined number of values, and regions having the same discretized value are labeled. Regions whose shapes are close to the rectangles seen in buildings in plan view are then extracted preferentially. Furthermore, from the groups of regions obtained by applying a plurality of different discretization widths, regions are adopted in order starting from those closest to a rectangle, which realizes processing equivalent to applying a locally optimal spatial smoothing parameter. Each step is described in detail below.

(First step)
First, N_disc kinds of discretization widths are set for the luminance values of an arbitrary single band of a 1-byte (luminance range: 0-255) × 3-band (RGB) image. Under each discretization width, N_off different offset values are applied to discretize the image luminance values. For example, when the discretization width of the luminance value is 40 and the number of offsets is 5, the offset value width is 8, and N_off discretized images are obtained with offset values {0, 8, 16, 24, 32}. Under offset value 0, the original luminance values 0-39 are assigned the same discretized value, and the range is likewise discretized into seven bins: 40-79, 80-119, 120-159, 160-199, 200-239, and 240-255. In this experiment, the following parameters were used as an example.

Band used: red (R) band
Discretization widths of luminance values: Δd = {40, 30, 20}
Number of offsets: N_off = 5
Offset value width: Δ_off = Δd / N_off = {8, 6, 4}
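The discretization of the first step can be sketched as follows. This is a minimal sketch, not the patent's implementation: the helper name `discretize` and the convention that the offset shifts the bin boundaries are assumptions.

```python
import numpy as np

def discretize(band, width, offset):
    # Quantize a uint8 luminance band into bins of size `width`, with the
    # bin boundaries shifted by `offset`. With width=40 and offset=0:
    # 0-39 -> 0, 40-79 -> 1, ..., 240-255 -> 6 (seven bins).
    return (band.astype(np.int32) + offset) // width

band = np.array([[0, 39, 40, 79, 255]], dtype=np.uint8)
# N_off = 5 discretized images for discretization width 40 (offsets 0, 8, 16, 24, 32)
images = [discretize(band, 40, off) for off in range(0, 40, 8)]
```

Each of the N_off images groups the same luminance range into slightly shifted bins, so an edge missed at one offset can still appear at another.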

(Second step)
Next, each discretized image is searched in the four directions (up, down, left, right), and pixels having the same value are connected and extracted as regions. Large regions of at least a certain area (for example, 6400 pixels or more) are removed. Small regions below a certain area (for example, fewer than 80 pixels) are merged into a neighboring region of at least the threshold area if one exists, and removed otherwise. The edges (contours) of each region are then extracted.
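The 4-direction connection of this step is ordinary connected-component labeling, which can be sketched as follows (the function name and the stack-based traversal are illustrative choices, not the patent's implementation; the area-based removal/merging would follow this labeling):

```python
import numpy as np

def label_regions(img):
    """Label 4-connected groups of equal-valued pixels.

    Returns (labels, n), where labels assigns each pixel a region id 1..n.
    """
    h, w = img.shape
    labels = np.zeros((h, w), dtype=np.int32)
    n = 0
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                continue  # already assigned to a region
            n += 1
            v = img[y, x]
            stack = [(y, x)]
            labels[y, x] = n
            while stack:  # flood-fill over same-valued 4-neighbors
                cy, cx = stack.pop()
                for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                    if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0 and img[ny, nx] == v:
                        labels[ny, nx] = n
                        stack.append((ny, nx))
    return labels, n
```

On the labeled result, region sizes can be counted per label to apply the 6400-pixel and 80-pixel thresholds of the text.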

(Third step)
Next, all the edges obtained from the N_off discretized images at a given discretization width are superposed onto a single image.

(Fourth step)
Next, edges with at least a certain strength are retained, and the edges connected to them are also extracted. Because at this point the edges between adjacent roofs or buildings with nearly identical luminance values have often not been extracted, edges are linked even where none exist, provided that straight runs of edges can be confirmed in the surroundings.

A supplementary explanation of this edge linking follows. First, the eight directions (up, down, left, right, and the four diagonals) around a non-edge pixel are searched, and the number of edge pixels within a fixed pixel distance d is counted.
FIG. 4 shows an example of searching the edge group 1a in the upper-left direction and the edge group 1b in the lower-right direction. For example, prepare the conditions (a) edge groups of at least d1 pixels exist in both group 1a and group 1b, and (b) edge groups of at most d2 pixels exist in both group 2a and group 2b; when both (a) and (b) are satisfied, the non-edge center pixel is filled in as an edge. Condition (b) prevents pixels inside a region near the corner of a rectangular region from being erroneously extracted as edges. The search is performed four times: vertically, horizontally, upper-left to lower-right, and upper-right to lower-left. Here, d = 7, d1 = 5, and d2 = 1 were set.
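A single direction of this gap-filling test can be sketched as follows. This is an interpretation, not the patent's code: groups 1a/1b are taken as the runs of edge pixels to the left/right of the candidate pixel, and groups 2a/2b as the pixels above/below it (FIG. 4 is not reproduced here, so this pairing is an assumption); the full method repeats the test for all four search directions.

```python
import numpy as np

def link_edge_horizontal(edge, y, x, d=7, d1=5, d2=1):
    """Decide whether the non-edge pixel (y, x) should be filled in as an edge,
    testing only the horizontal search direction."""
    h, w = edge.shape

    def count(dy, dx):
        # number of edge pixels within d steps in direction (dy, dx)
        return sum(
            1
            for k in range(1, d + 1)
            if 0 <= y + k * dy < h and 0 <= x + k * dx < w and edge[y + k * dy, x + k * dx]
        )

    cond_a = count(0, -1) >= d1 and count(0, 1) >= d1  # enough edges on both sides
    cond_b = count(-1, 0) <= d2 and count(1, 0) <= d2  # few edges perpendicular
    return (not edge[y, x]) and cond_a and cond_b
```

Condition (b) is what stops interior pixels near a rectangle's corner, which also see edges on two sides, from being promoted to edges.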

(Fifth step)
Next, label numbers are assigned to the regions closed by edges (labeling), and regions judged to be vegetation from their RGB luminance values are removed. As in the second step, large regions of at least a certain area (for example, 6400 pixels or more) are removed, and small regions at or below a certain area (for example, 30 pixels or less) are merged into a neighboring region of at least the threshold area if one exists, and removed otherwise.

A supplementary explanation of the vegetation removal follows. In the target areas, both green-toned and red-toned vegetation could be confirmed. Therefore, the ratios Rgrn_veg and Rred_veg of pixels satisfying either
(1) Bgrn ≧ Tveg,DN, Bblue ≧ Tveg,DN, and Bgrn / Bblue ≧ Tveg,g2b_ratio
(2) Bred ≧ Tveg,DN, Bblue ≧ Tveg,DN, and Bred / Bblue ≧ Tveg,r2b_ratio
are calculated. Here Bred, Bgrn, and Bblue are the luminance values of the red, green, and blue bands; Tveg,DN is the luminance threshold; and Tveg,g2b_ratio and Tveg,r2b_ratio are the thresholds of the band-ratio indices. If either Rgrn_veg or Rred_veg is greater than or equal to the threshold Tveg,ratio, the region is removed as vegetation. Here, the values Tveg,DN = 20, Tveg,g2b_ratio = 1.25, Tveg,r2b_ratio = 2.0, and Tveg,ratio = 0.3 were adopted.
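A minimal sketch of this vegetation test, assuming each region's per-pixel luminance values are available as arrays. The function name and data layout are illustrative; the thresholds are the values given in the text.

```python
import numpy as np

# Thresholds from the text
T_DN, T_G2B, T_R2B, T_RATIO = 20, 1.25, 2.0, 0.3

def is_vegetation(red, grn, blue):
    """Decide whether one labeled region looks like vegetation.

    red, grn, blue: 1-D arrays of the luminance values of the pixels
    belonging to the region (assumed layout)."""
    red, grn, blue = (np.asarray(a, float) for a in (red, grn, blue))
    bright_blue = blue >= T_DN
    safe_blue = np.maximum(blue, 1e-9)  # guard against division by zero
    # Condition (1): green-toned vegetation pixels
    green_veg = (grn >= T_DN) & bright_blue & (grn / safe_blue >= T_G2B)
    # Condition (2): red-toned vegetation pixels
    red_veg = (red >= T_DN) & bright_blue & (red / safe_blue >= T_R2B)
    # Occupancy ratios R_grn_veg and R_red_veg
    return green_veg.mean() >= T_RATIO or red_veg.mean() >= T_RATIO
```

A region that is uniformly green relative to blue is flagged, while a gray roof (equal bands) passes both ratio tests below threshold and is kept.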

(6th step)
Next, an index called the rectangle index of each region is calculated as follows.
(a) The first and second axes are determined from the two-dimensional coordinates of the set of edge pixels of the region (called the edge group).
(b) The extent of the region is expressed in terms of first- and second-axis values, and the first axis's (maximum − minimum + 1) is multiplied by the second axis's (maximum − minimum + 1) to obtain the area of the rectangle enclosing the region.
(c) (actual area of the region / area of the rectangle enclosing the region) is defined as the rectangle index.
If the rectangle index falls below a certain value (for example, 0.4), the region is judged unlikely to be a building and is excluded from extraction.

A supplementary explanation of the rectangle index follows. FIG. 5 shows a conceptual diagram of the rectangle-index calculation. The first and second axes are computed from the edge group of a region, and among the rectangles whose sides are parallel to the first and second axes, the smallest one enclosing the target region, as illustrated, is obtained. To determine the first and second axes, the slopes of the straight lines through pairs of edge pixels whose separation lies within a fixed range (5 to 20 pixels) are voted on; the slope with the highest frequency is taken as the direction of the first axis, and the second axis is set orthogonal to the first. The index is defined as
idx = Sactual / Srect ・・・(1)
where idx is the rectangle index, Sactual is the area of the region, and Srect is the area of the rectangle enclosing the region. The rectangle index takes values in the range 0 to 1; the closer it is to 1, the closer the shape is to a rectangle. This index expresses the degree of "closeness" to a rectangular shape simply and accurately.
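The axis voting and equation (1) can be sketched as follows. This is an illustrative reconstruction under stated assumptions: continuous coordinates are used rather than a pixel raster, the angle histogram has an assumed bin width of 1°, and the "+1" pixel terms are kept as in the text.

```python
import numpy as np
from itertools import combinations

def principal_angle(edge_pts, dmin=5, dmax=20, nbins=180):
    """Vote on the slopes of lines through edge-pixel pairs whose separation
    is in [dmin, dmax] pixels; return the modal direction in radians."""
    votes = np.zeros(nbins)
    for (y1, x1), (y2, x2) in combinations(np.asarray(edge_pts, float), 2):
        d = np.hypot(y2 - y1, x2 - x1)
        if dmin <= d <= dmax:
            ang = np.arctan2(y2 - y1, x2 - x1) % np.pi  # direction mod 180°
            votes[int(ang / np.pi * nbins) % nbins] += 1
    return (np.argmax(votes) + 0.5) * np.pi / nbins     # bin center

def rectangle_index(region_pts, edge_pts):
    """Equation (1): idx = S_actual / S_rect, with axes from the modal slope.
    region_pts: (y, x) pixels of the region; edge_pts: its edge pixels."""
    theta = principal_angle(edge_pts)
    p = np.asarray(region_pts, float)
    y, x = p[:, 0], p[:, 1]
    u = x * np.cos(theta) + y * np.sin(theta)    # first-axis coordinate
    v = -x * np.sin(theta) + y * np.cos(theta)   # second-axis coordinate
    s_rect = (u.max() - u.min() + 1.0) * (v.max() - v.min() + 1.0)
    return len(p) / s_rect
```

For a filled axis-aligned rectangle the index comes out close to 1; only the 1° angular quantization keeps it slightly below.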

(7th step)
In step 7, the rectangle index of equation (1) is used to merge roofs and buildings that have been split by shadows and to estimate their original regions.
For example, the neighborhood of a region A is searched, and it is judged whether
(a) the rectangle index after merging improves over each region's own rectangle index,
(b) the merged index is at or above a specified threshold (0.7), and
(c) the difference between the regions' mean luminance values is within a certain value (30).
If these conditions are satisfied, the neighboring region that yields the largest rectangle index when merged is taken as the merge candidate; call it region B. The same search is performed from region B, and if region A satisfies all of conditions (a) to (c) and yields the largest rectangle index when merged, regions A and B are merged with each other.

For example, as shown in FIG. 6, the rectangle index is computed for a trial merge of the target region α with each of the adjacent regions β, γ, and δ. The first and second axes in this case are computed from the combined edge groups of the two regions, excluding the edge pixels that lie close to the other region. As before, the axes are determined by voting on the slopes of lines through pairs of edge pixels whose separation lies within a fixed range (5 to 20 pixels); the slope with the highest frequency gives the direction of the first axis, and the second axis is set orthogonal to the first.
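The three merge conditions can be sketched as a single predicate; the function and argument names are illustrative, while the thresholds (0.7 and 30) are the values from the text.

```python
def should_merge(idx_a, idx_b, idx_merged, mean_a, mean_b,
                 t_idx=0.7, t_lum=30):
    """Conditions (a)-(c) of step 7 for a trial merge of regions A and B.

    idx_a, idx_b: rectangle indices of the two regions
    idx_merged:   rectangle index of the trial merge
    mean_a, mean_b: mean luminance values of the two regions"""
    improves = idx_merged > idx_a and idx_merged > idx_b   # (a)
    above_threshold = idx_merged >= t_idx                  # (b)
    similar_brightness = abs(mean_a - mean_b) <= t_lum     # (c)
    return improves and above_threshold and similar_brightness
```

Condition (c) is what prevents clearly distinct roofs from being over-merged, as noted in the discussion section.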

(8th step)
Next, from the regions of at least a certain area obtained under the Ndisc different discretization widths, regions are selected in descending order of rectangle index. However, a region is not selected if any part of it overlaps a region that has already been selected.
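The greedy selection of this step can be sketched as follows, assuming each candidate region is represented by its rectangle index and its set of pixel coordinates (an assumed data layout).

```python
def select_regions(regions):
    """Step 8: regions is a list of (rect_index, pixel_set) pairs pooled
    from all discretization widths. Pick regions in descending order of
    rectangle index, skipping any that overlaps an already-picked one."""
    chosen = []
    taken = set()  # pixels already claimed by selected regions
    for idx, pixels in sorted(regions, key=lambda r: r[0], reverse=True):
        if pixels & taken:      # any overlap disqualifies the region
            continue
        chosen.append((idx, pixels))
        taken |= pixels
    return chosen
```

Because selection is by descending index, the version of a roof extracted under the locally best discretization width wins, which is the "locally optimal smoothing" behavior discussed later.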

(9th step)
The regions that were not selected are then considered again. This time, if a region's overlap with the already-selected regions is below a certain fraction (30%) and below a certain area (80 pixels), only its non-overlapping part is additionally selected as a new region. However, this is done only when the rectangle index computed for the non-overlapping part is at or above a threshold (0.45).

(10th step)
Finally, the holes inside each region are filled.
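The patent does not specify the hole-filling algorithm; one common way is a flood fill of the background from the image border, after which any background pixel not reached from the border is a hole. A stdlib-only sketch:

```python
from collections import deque

def fill_holes(mask):
    """Fill holes in a boolean 2-D grid (list of lists).

    Background connected to the border is 'outside'; background not
    reachable from the border is a hole and gets filled."""
    h, w = len(mask), len(mask[0])
    outside = [[False] * w for _ in range(h)]
    # Seed the flood fill with all background pixels on the border.
    q = deque((y, x) for y in range(h) for x in range(w)
              if (y in (0, h - 1) or x in (0, w - 1)) and not mask[y][x])
    for y, x in q:
        outside[y][x] = True
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            yy, xx = y + dy, x + dx
            if (0 <= yy < h and 0 <= xx < w
                    and not mask[yy][xx] and not outside[yy][xx]):
                outside[yy][xx] = True
                q.append((yy, xx))
    # A pixel is filled if it was foreground or is an enclosed hole.
    return [[bool(mask[y][x]) or not outside[y][x] for x in range(w)]
            for y in range(h)]
```

A ring-shaped region comes back as a solid block, with the enclosed pixel filled in.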

《Examples》
Examples are shown below. FIG. 7 shows the results of edge extraction in an urban area densely packed with low-rise buildings (hereinafter, area 1). (a) is an aerial photograph of an area about 75 m × 75 m in actual size, captured at a resolution of 300 × 300 pixels. (b) is the result of applying the well-known Canny filter to the data of (a). (c) is the edge-extraction result when a discretization width of 40 is used in the discretization described above (first stage). Further, (d) is the edge-extraction result with a discretization width of 30 (second stage), and (e) the edge-extraction result with a discretization width of 20 (third stage).

FIG. 8 shows the results of edge extraction in an urban area where high-rise and low-rise buildings are mixed (hereinafter, area 2). (a) to (e) are as in FIG. 7: (a) is the aerial photograph, (b) the result of the Canny filter, (c) the edge-extraction result with a discretization width of 40 (first stage), (d) with a discretization width of 30 (second stage), and (e) with a discretization width of 20 (third stage).

FIG. 9 shows the results of edge extraction in an area where tall trees are adjacent to buildings (hereinafter, area 3). (a) to (e) are as in FIG. 7: (a) is the aerial photograph, (b) the result of the Canny filter, (c) the edge-extraction result with a discretization width of 40 (first stage), (d) with a discretization width of 30 (second stage), and (e) with a discretization width of 20 (third stage).

FIG. 10 shows the results of edge extraction in an area with many hipped-roof buildings (hereinafter, area 4). (a) to (e) are as in FIG. 7: (a) is the aerial photograph, (b) the result of the Canny filter, (c) the edge-extraction result with a discretization width of 40 (first stage), (d) with a discretization width of 30 (second stage), and (e) with a discretization width of 20 (third stage).

FIGS. 11, 12, 13, and 14 show the results of region extraction in area 1 (FIG. 7), area 2 (FIG. 8), area 3 (FIG. 9), and area 4 (FIG. 10), respectively. In each of FIGS. 11 to 14, (a) is the aerial photograph of the area, (b) the region-extraction result with a discretization width of 40 (first stage), (c) with a discretization width of 30 (second stage), (d) with a discretization width of 20 (third stage), and (e) the final result obtained by integrating the three results ((b) to (d)) using the rectangle index. In every one of FIGS. 11 to 14, comparison with the individual results (b) to (d) makes clear that (e) locally takes the best of the three results, and the building regions (edges) are extracted more accurately.

FIG. 15 concerns area 1. (a) to (f) are, respectively: (a) the aerial photograph; (b) reference points for comparing the region-extraction results; (c) the region-extraction result when principal component analysis is used to compute the principal directions; (d) the region-extraction result when the principal directions are determined by voting on the slopes of lines through pairs of points; (e) the region-extraction result with ITT Visual Information Solutions' image-processing software ENVI EX set to Scale = 50, Merge = 50; and (f) the region-extraction result with ENVI EX set to Scale = 50, Merge = 80. In (c) to (f), the building edges are clearly displayed in the color images; in this figure, what looks like an outline around the buildings is the edges.

The parameters that must be set in ENVI EX's "Feature Extraction" function are "Scale Level" and "Merge Level". In the target areas of this example, many roofs have textures with large variance. Because changing the "Merge Level" changes the size and number of the regions that finally remain, "Scale Level = 50.0" was kept common and the two values "Merge Level = 50.0" and "Merge Level = 80.0" were used.

FIG. 16 concerns area 2. (a) to (f) are, respectively: (a) the aerial photograph; (b) reference points for comparing the region-extraction results; (c) the region-extraction result using principal component analysis for the principal directions; (d) the region-extraction result determining the principal directions by voting on the slopes of lines through pairs of points; (e) the region-extraction result with ENVI EX set to Scale = 50, Merge = 50; and (f) the region-extraction result with ENVI EX set to Scale = 50, Merge = 80.

FIG. 17 concerns area 3. (a) to (f) are, respectively: (a) the aerial photograph; (b) reference points for comparing the region-extraction results; (c) the region-extraction result using principal component analysis for the principal directions; (d) the region-extraction result determining the principal directions by voting on the slopes of lines through pairs of points; (e) the region-extraction result with ENVI EX set to Scale = 50, Merge = 50; and (f) the region-extraction result with ENVI EX set to Scale = 50, Merge = 80.

FIG. 18 concerns area 4. (a) to (f) are, respectively: (a) the aerial photograph; (b) reference points for comparing the region-extraction results; (c) the region-extraction result using principal component analysis for the principal directions; (d) the region-extraction result determining the principal directions by voting on the slopes of lines through pairs of points; (e) the region-extraction result with ENVI EX set to Scale = 50, Merge = 50; and (f) the region-extraction result with ENVI EX set to Scale = 50, Merge = 80.

In each of FIGS. 15 to 18, as is clear from comparing the middle row (c), (d) with the bottom row (e), (f), the region extraction of the present embodiment in (c) and (d) extracts the rectangular building regions more accurately. The ENVI EX results in (e) and (f) contain very many unnecessary or overly fine edges, and the quality of the region extraction is poor compared with (c) and (d).

《Discussion》
The region-extraction method of the above embodiment integrates edges obtained by applying different discretization widths; this is essentially the same as converting the image to different spatial resolutions and processing each. However, because the offset value is varied before applying and integrating, edges are preserved, unlike with a simple smoothing filter. Most importantly, adopting regions in descending order of rectangle index from the groups of regions obtained with multiple different discretization widths is equivalent to applying a locally optimal spatial smoothing parameter.

FIGS. 11 to 14 show that, through the integration process, a locally optimal spatial scale parameter is selected according to the size of each roof or building. However, labeling using the discretized images takes time, and the processing time grows with the image size. For example, on a computer with a CPU clocked at 3.2 GHz and 6 GB of memory, the average processing time per target area was about 90 seconds.

It is preferable to use shape indices computed from region edges to preferentially extract the rectangular regions typical of roofs and buildings. Many roofs and buildings do not yield completely closed contours even after edge extraction, which lowers the accuracy of region extraction; for edge groups that are not completely closed, it is therefore preferable to add a process that closes the edges when a rectangular or triangular shape is likely.

By using, for example, the three different discretization widths described above, the results for Δd = 20 in particular ((e) in FIGS. 7 to 10) show that the edges of shadowed roofs, such as those at reference points 1-a, 1-b, 3-a, and 4-a in FIGS. 15 to 18, are clearly extracted. It was also confirmed that even in these cases the regions are not separated without the edge-closing process, confirming the effectiveness of the edge-completion (connection) process. That is, two factors, the edge-completion process and the integration of results from different discretization widths, are considered to have improved the region-extraction accuracy. However, without the step-7 condition that "the difference in the regions' mean luminance values is within a certain value", clearly separate roofs were confirmed to be merged excessively.

The triangular regions seen on hipped roofs are also extracted well in the results of FIGS. 15 to 18. The rectangle index of an ideal triangle is only 0.5, so triangles are not adopted preferentially. One factor behind the good extraction this time is that the rectangular regions around the triangular regions were extracted well. Depending on the discretization width and offset value, a triangular region and a rectangular region may be merged. However, when the region-extraction results obtained with the three discretization widths are evaluated with the rectangle index, the index of a merged triangle-plus-rectangle region is lower than that of the original rectangle alone and is unlikely to be adopted. Through this selection process, the rectangular regions were selected first, and the triangular regions were then extracted.

Compared with the results of existing region-extraction methods, the results of the above method leave a wider range in which no region exists. One reason is the upper limit placed on region area, but the main cause is that regions whose rectangle index is below a certain value (0.4) are removed as unlikely to be roofs or buildings. Most, though not all, of the vegetation, and the area including the cemetery in FIG. 15, are far from rectangular in shape and are consequently removed. In other words, the method functions well from the standpoint of building-region extraction.

The rectangle index is now discussed from the viewpoint of region area. Because this method extracts regions in descending order of rectangle index, relatively small regions are often extracted preferentially, and a region that captures the rough outline of a building may fail to be selected even though its rectangle index is only slightly lower. To serve applications that want to capture such rough outlines, a correction method was examined. For example, by applying the correction of equation (2) below, regions larger than those shown in FIGS. 15 to 18, corresponding to multiple roofs or buildings, were obtained in some areas.
Didx = C1・ln(Sactual・C2 + C3) ・・・(2)

Here Didx is the correction value of the rectangle index, and C1 to C3 are empirically determined coefficients. Empirical checks on the target areas described above led to C1 = 0.05, C2 = 100.0, and C3 = 1.0. Among the coefficients of equation (2), C1 was found to have a particularly large influence on the final extraction result. Setting C1 = 1.0 made the correction too large: roads, and the large connected areas joined to them, acquired high rectangle indices and were preferentially selected, reducing the number of roofs and buildings, the intended targets, that were extracted. The present targets are areas lined with traditional Japanese buildings; the function and parameter values of this rectangle-index correction should be tuned with the buildings to be extracted in mind.
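Equation (2) is a one-liner; the coefficients below are the empirically determined values from the text. How Didx is combined with the plain rectangle index (for example, whether it is added to it) is not spelled out in this passage, so only the correction term itself is shown.

```python
import math

C1, C2, C3 = 0.05, 100.0, 1.0  # empirically determined coefficients from the text

def corrected_index(s_actual):
    """Correction value of equation (2): Didx = C1 * ln(S_actual * C2 + C3).

    s_actual: area of the region in pixels. The term grows slowly
    (logarithmically) with area, favoring larger outline-capturing regions."""
    return C1 * math.log(s_actual * C2 + C3)
```

With C1 = 0.05 the term stays well below 1 for realistic region sizes, which matches the observation that a too-large C1 lets roads dominate the selection.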

Turning to the computation of the rectangle index, (c) in FIGS. 15 to 18 shows the region-extraction results when the axes are determined by principal component analysis. At reference points 2-a, 3-b, 3-c, and 4-b, applying principal component analysis split the slate-roof regions, which were extracted with parts missing. Unless a region is a perfect rectangle, applying principal component analysis to the edge group, particularly for roofs split by shadows as targeted here, gives a first principal component that is not necessarily parallel to the sides of the roof. On the other hand, at 1-c the result using principal component analysis extracted four roofs as one large region.

Since principal component analysis was thus confirmed to make the region-extraction results unstable, it was not applied; instead, the slopes of lines through pairs of edge pixels whose separation lies within a fixed range were voted on, the slope with the highest frequency was taken as the direction of the first axis, and the second axis was set orthogonal to the first. Changing the upper and lower limits of this pairwise distance changed some of the region-extraction results, but not as greatly as the results using principal component analysis. These thresholds must be decided empirically according to the characteristics of the target area.

Finally, the removal of vegetation regions is described. The aim of this embodiment is building extraction in urban areas, so vegetation is a target for removal. As preprocessing for region extraction, one could also remove pixels that look strongly like vegetation by referring to per-pixel luminance values. However, removing vegetation that partially covers a roof or road can destroy the original shape of the roof or building: the region may be split, or become too small to be extracted at all.

Therefore, after examination, it was decided to carry the vegetation regions through the region-extraction process and apply the removal at the final stage. This policy can be said to contribute to high-accuracy extraction in practice. For example, attempts to remove red vegetation sometimes removed red roofs by mistake; to remove red vegetation cautiously, the threshold was therefore set high, at Tveg,r2b_ratio = 2.0. Following the same idea, shadows were also extracted as regions and merged with neighboring regions when doing so improved the rectangle index. Shadow-like regions include many roads, and removing them based on luminance values was also considered, but cases were confirmed in which roofs and buildings were removed by mistake, so in the end they were not removed.

As described above, the region extraction of this embodiment can effectively extract buildings and roofs as regions in densely built-up areas. To extract even roofs whose textures have a large variance of luminance values, the method discretizes the luminance values into a small number of values and labels the regions having the same discretized value. The rectangle index computed from the region edges is then used to preferentially extract the rectangular regions typical of roofs and buildings. In addition, for edge groups that are not completely closed, a process was added that closes the edges when a rectangular or triangular shape is likely.

In an experiment using 25 cm resolution aerial photographs of areas of Kyoto City densely packed with traditional buildings, three different discretization widths were applied. The two factors of edge completion and integration of results from different discretization widths took effect, and shadowed regions that existing methods cannot extract well came to be extracted clearly. The most important feature of the method is that, by adopting regions in descending order of rectangle index from the region groups obtained with multiple different discretization widths, it realizes processing that applies a locally optimal spatial smoothing parameter.

The triangular regions seen on hipped roofs have low rectangle indices and are not adopted preferentially, but were nevertheless extracted well. The rectangle index of a merged triangle-plus-rectangle region is lower than that of the original rectangle alone and is unlikely to be adopted; because the rectangular regions were selected first, the triangular regions were subsequently extracted. Ultimately, in all the target areas, an urban area dense with low-rise buildings, an urban area mixing high-rise and low-rise buildings, an area where tall trees adjoin buildings, and an area with many hipped-roof buildings, better results were obtained than with conventional region-extraction methods. It was thus confirmed that the method, characterized by discretization of luminance values and use of the rectangle index, is useful.

Although the above embodiment uses aerial photographs as the source data, data from photographs taken with high precision from artificial satellites can be used instead of aerial photographs.
The embodiment disclosed here should be considered illustrative in all respects and not restrictive. The scope of the present invention is defined by the claims, and is intended to include all modifications within the meaning and scope equivalent to the claims.

10: Computer device (region-extraction device)

Claims (7)

1. An area extraction method for extracting an area of a building based on data of a photograph taken from an aircraft or an artificial satellite, the method comprising:
setting a plurality of different discretization widths and, for each discretization width, discretizing luminance values of the data into a plurality of values discretely set by that discretization width;
connecting pixels having the same value in the discretized image obtained by the discretization, and extracting an area having a shape close to a rectangle as a candidate for the area of the building; and
adopting, from among the plurality of area groups extracted for the respective different discretization widths, an area having a shape closer to a rectangle as the area of the building.

2. The area extraction method according to claim 1, wherein a rectangle index defined by (area of the region / area of the rectangle enclosing the region) is used as an index representing closeness to a rectangle.

3. The area extraction method according to claim 1 or 2, wherein, in the extraction, merging of mutually adjacent areas is permitted when the merged area becomes closer to a rectangle.

4. The area extraction method according to claim 2, wherein an area whose rectangle index is smaller than a predetermined value is not adopted as the area of a building.

5. The area extraction method according to claim 1, wherein an area presumed to be vegetation is excluded from extraction targets on the basis of RGB luminance values.

6. An area extraction program for extracting an area of a building based on data of a photograph taken from an aircraft or an artificial satellite, the program causing a computer to realize:
a function of setting a plurality of different discretization widths and, for each discretization width, discretizing luminance values of the data into a plurality of values discretely set by that discretization width;
a function of connecting pixels having the same value in the discretized image obtained by the discretization, and extracting an area having a shape close to a rectangle as a candidate for the area of the building; and
a function of adopting, from among the plurality of area groups extracted for the respective different discretization widths, an area having a shape closer to a rectangle as the area of the building.

7. An area extraction device for extracting an area of a building based on data of a photograph taken from an aircraft or an artificial satellite, the device comprising:
a function of setting a plurality of different discretization widths and, for each discretization width, discretizing luminance values of the data into a plurality of values discretely set by that discretization width;
a function of connecting pixels having the same value in the discretized image obtained by the discretization, and extracting an area having a shape close to a rectangle as a candidate for the area of the building; and
a function of adopting, from among the plurality of area groups extracted for the respective different discretization widths, an area having a shape closer to a rectangle as the area of the building.
JP2011144266A 2011-06-09 2011-06-29 Area extraction method, area extraction program and area extraction device Withdrawn JP2013012034A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2011144266A JP2013012034A (en) 2011-06-29 2011-06-29 Area extraction method, area extraction program and area extraction device
PCT/JP2012/061204 WO2012169294A1 (en) 2011-06-09 2012-04-26 Dtm estimation method, dtm estimation program, dtm estimation device, and method for creating 3-dimensional building model, as well as region extraction method, region extraction program, and region extraction device
US14/122,082 US20140081605A1 (en) 2011-06-09 2012-04-26 Dtm estimation method, dtm estimation program, dtm estimation device, and method for creating 3-dimensional building model, and region extraction method, region extraction program, and region extraction device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2011144266A JP2013012034A (en) 2011-06-29 2011-06-29 Area extraction method, area extraction program and area extraction device

Publications (1)

Publication Number Publication Date
JP2013012034A true JP2013012034A (en) 2013-01-17

Family

ID=47685868

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2011144266A Withdrawn JP2013012034A (en) 2011-06-09 2011-06-29 Area extraction method, area extraction program and area extraction device

Country Status (1)

Country Link
JP (1) JP2013012034A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019028657A (en) * 2017-07-28 2019-02-21 株式会社パスコ Learned model for building region extraction
JP7048225B2 (en) 2017-07-28 2022-04-05 株式会社パスコ Trained model for building area extraction
CN109919990A (en) * 2019-02-19 2019-06-21 北京工业大学 Forest Height Prediction method is carried out using depth perception network and parallax remote sensing image
CN112697068A (en) * 2020-12-11 2021-04-23 中国计量大学 Method for measuring length of bubble of tubular level bubble
KR20220162487A (en) * 2021-06-01 2022-12-08 국방과학연구소 Meghod and apparatus for generating digital building and terrain model, computer-readable storage medium and computer program
KR102550233B1 (en) * 2021-06-01 2023-06-30 국방과학연구소 Meghod and apparatus for generating digital building and terrain model, computer-readable storage medium and computer program

Similar Documents

Publication Publication Date Title
Lalonde et al. Detecting ground shadows in outdoor consumer photographs
WO2012169294A1 (en) Dtm estimation method, dtm estimation program, dtm estimation device, and method for creating 3-dimensional building model, as well as region extraction method, region extraction program, and region extraction device
CN107835997B (en) Vegetation management for powerline corridor monitoring using computer vision
Xiao et al. Image-based street-side city modeling
Qin et al. A hierarchical building detection method for very high resolution remotely sensed images combined with DSM using graph cut optimization
Figueroa et al. Background recovering in outdoor image sequences: An example of soccer players segmentation
Montoya-Zegarra et al. Semantic segmentation of aerial images in urban areas with class-specific higher-order cliques
US10062005B2 (en) Multi-scale correspondence point matching using constellation of image chips
US8744177B2 (en) Image processing method and medium to extract a building region from an image
KR101436369B1 (en) Apparatus and method for detecting multiple object using adaptive block partitioning
Liu et al. Tracking objects using shape context matching
Wendel et al. Unsupervised facade segmentation using repetitive patterns
JP2008217706A (en) Labeling device, labeling method and program
KR101417527B1 (en) Apparatus and method for topographical change detection using aerial images photographed in aircraft
JP2013012034A (en) Area extraction method, area extraction program and area extraction device
Ibrahim et al. CNN-based watershed marker extraction for brick segmentation in masonry walls
Awad A morphological model for extracting road networks from high-resolution satellite images
Femiani et al. Shadow-based rooftop segmentation in visible band images
Recky et al. Façade segmentation in a multi-view scenario
Fengping et al. Road extraction using modified dark channel prior and neighborhood FCM in foggy aerial images
Giveki et al. Atanassov's intuitionistic fuzzy histon for robust moving object detection
Nagahashi et al. Image segmentation using iterated graph cuts based on multi-scale smoothing
Favorskaya et al. Digital video stabilization in static and dynamic scenes
Raikar et al. Automatic building detection from satellite images using internal gray variance and digital surface model
Kröhnert et al. Segmentation of environmental time lapse image sequences for the determination of shore lines captured by hand-held smartphone cameras

Legal Events

Date Code Title Description
A300 Application deemed to be withdrawn because no request for examination was validly filed

Free format text: JAPANESE INTERMEDIATE CODE: A300

Effective date: 20140902