CN108491826B - Automatic extraction method of remote sensing image building - Google Patents
- Publication number
- CN108491826B CN108491826B CN201810307327.1A CN201810307327A CN108491826B CN 108491826 B CN108491826 B CN 108491826B CN 201810307327 A CN201810307327 A CN 201810307327A CN 108491826 B CN108491826 B CN 108491826B
- Authority
- CN
- China
- Prior art keywords
- layer
- pixel point
- building
- value
- num
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/176—Urban or other man-made structures
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/247—Aligning, centring, orientation detection or correction of the image by affine transforms, e.g. correction due to perspective effects; Quadrilaterals, e.g. trapezoids
- G06V10/40—Extraction of image or video features
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention relates to an automatic extraction method for buildings in remote sensing images. The method comprises the following steps: step 1, input a multispectral remote sensing image; step 2, traverse the distances from all pixel points to two randomly selected points and extract two layers; step 3, calculate the fluctuation intensity of the two layers; step 4, judge candidate building layers and non-building layers; step 5, initialize the number of regions; step 6, process the candidate building layer; step 7, judge the iteration stopping condition of step 6; step 8, perform labeling; step 9, perform post-processing; and step 10, output the result. The method can accurately extract buildings, especially dense buildings, from multispectral remote sensing images, and can be applied to updating buildings in urban basic geographic information databases.
Description
Technical Field
The invention relates to the field of remote sensing image processing, in particular to an automatic extraction method of a remote sensing image building.
Background
Buildings are among the main geographic elements of a city and important content of various urban thematic maps, so research on building extraction is of great significance for a comprehensive survey of the urban geographic information environment. With the rapid development of high-resolution remote sensing image acquisition technology, better data sources are available for remote sensing image processing, analysis and application, and digital products find wider and deeper use. Computer image processing, pattern recognition and artificial intelligence have all progressed to varying degrees, making it possible to efficiently extract effective information from massive volumes of imagery. However, building information is much harder to extract than other information such as roads and water bodies, mainly for the following reasons:
(1) the data source is mainly two-dimensional remote sensing imagery, and direct three-dimensional data is lacking in most cases;
(2) different remote sensing images often differ considerably owing to factors such as spectral range, resolution, sensor imaging geometry and imaging conditions;
(3) the appearance, texture details and the like of different types of buildings vary widely and differ greatly across remote sensing images, so a unified building model library is difficult to establish and automatic information extraction is difficult;
(4) the complexity of building scenes, such as low contrast, mutual occlusion of houses, shadows of the building itself and shadows of other objects, makes it difficult to automatically extract buildings with clear boundaries from the background.
Disclosure of Invention
The invention provides an automatic extraction method for buildings in remote sensing images that overcomes the present difficulty of extracting buildings from remote sensing images. It makes full use of the characteristics of the three components R, G and B of a remote sensing image and detects building targets based on the distance between feature vectors, requiring no manual intervention and offering a high degree of automation.
The technical scheme adopted to realize the aim of the invention is as follows. The method comprises the following steps:
Step 1: preprocess an input multispectral remote sensing image containing the three color components R, G and B to obtain an image I_in.
Step 2: randomly select two points in the RGB color space SPA of image I_in, denoted P1 and P2. For each pixel point Px of image I_in, calculate its distances to P1 and P2, denoted d_1x and d_2x respectively. When d_1x ≤ d_2x, merge pixel point Px into the same set as P1, denoted S1; otherwise merge pixel point Px into the same set as P2, denoted S2. After a new pixel is added to S1, update P1 to the average position of all pixels in S1 (rounding the averaged coordinate values); after a new pixel is added to S2, update P2 to the average position of all pixels in S2 (rounding the averaged coordinate values). Iterate this process of step 2 until all pixel points of image I_in have been traversed, obtaining two layers, Layer1 and Layer2.
Step 3: process the two layers Layer1 and Layer2 of step 2 using the following formula (reconstructed here from the variable definitions, the original formula image being unavailable):
Flt_layer_num = (1 / N_layer_num) · Σ_{i=1}^{N_layer_num} ||I_i − mean_layer_num||²   (1)
In formula (1), Flt_layer_num is the fluctuation intensity of layer Layer_layer_num, mean_layer_num is the mean of layer Layer_layer_num in the RGB color space SPA, layer_num indexes the two layers of step 2 and takes the values 1 and 2, I_i is the value of the ith pixel in the RGB color space SPA, and N_layer_num is the number of pixels in layer Layer_layer_num.
Step 4: when Flt_layer_num satisfies condition T0, judge the layer Layer_layer_num to be a candidate building layer Layer_cb; otherwise judge it to be a non-building layer Layer_nb.
Step 5: initialize the number of regions of the candidate building layer Layer_cb of step 4 to C0.
Step 6: process the candidate building layer Layer_cb using the following formula (likewise reconstructed from the variable definitions):
ObjF = Σ_{Q=1}^{C0} Σ_{i=1}^{N} p_Q(i) · ||H_i − H_Q||   (2)
In formula (2), ObjF is the objective function for region extraction; H_i = (R_i, G_i, B_i) is the feature vector of the ith pixel point in the RGB color space SPA, where R_i, G_i and B_i are the values of the ith pixel point in the R, G and B components; H_Q = (R_Qc, G_Qc, B_Qc) is the feature vector of the centroid pixel point Qc of region Q in the RGB color space SPA, where R_Qc, G_Qc and B_Qc are the values of pixel point Qc in the R, G and B components; p_Q(i) is the probability that the ith pixel belongs to region Q; and ||H_i − H_Q|| denotes the distance between the feature vectors H_i and H_Q.
Step 7: when |ObjF^(k+1) − ObjF^(k)| ≤ Thr, obtain the region set RS and proceed to step 8; otherwise update C0 to C0 + 1 and return to step 5, where ObjF^(k) is the value of ObjF after the kth iteration.
Step 8: perform labeling on the region set RS of step 7.
Step 9: delete non-building regions whose area is less than S0, and extract buildings using rectangularity and aspect ratio as constraints.
Step 10: output the building extraction result.
The preprocessing in step 1 includes geometric correction, radiometric correction and contrast enhancement.
The distance ||H_i − H_Q|| in step 6 is obtained using the Chebyshev distance.
The invention has the beneficial effects that: the method can accurately extract buildings, especially dense buildings, in the multispectral remote sensing image, and can be applied to updating of the buildings in the urban geographical basic information database.
Drawings
FIG. 1 is an overall process flow diagram of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings.
In step 101, a multispectral remote sensing image containing three color components of R, G and B is input.
In step 102, the input multispectral remote sensing image of step 101 is preprocessed, including geometric correction, radiometric correction and contrast enhancement, to obtain an image I_in.
In step 103, two points are randomly selected in the RGB color space SPA of image I_in and denoted P1 and P2.
In step 104, the distances from a pixel point Px of image I_in to P1 and P2 are calculated and denoted d_1x and d_2x respectively. When d_1x ≤ d_2x, pixel point Px is merged into the same set as P1, denoted S1; otherwise pixel point Px is merged into the same set as P2, denoted S2. After a new pixel is added to S1, P1 is updated to the average position of all pixels in S1 (rounding the averaged coordinate values); after a new pixel is added to S2, P2 is updated to the average position of all pixels in S2 (rounding the averaged coordinate values).
In step 105, it is determined whether all pixel points of image I_in have been traversed; if so, two layers Layer1 and Layer2 are obtained and the flow proceeds to step 106, otherwise it returns to step 104.
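Steps 103 to 105 amount to a two-centre clustering of the pixels in RGB space with incrementally updated, rounded centres. A minimal Python sketch under that reading follows; the function name `split_two_layers` and the seeded random-number handling are illustrative conveniences, not from the patent:

```python
import numpy as np

def split_two_layers(img, seed=0):
    """Two-centre clustering of steps 103-105 (a sketch, not the patent's code).

    Two pixels are drawn at random as initial centres P1 and P2; each pixel
    joins the set of the nearer centre, and that centre is then updated to
    the rounded mean of its set, as described in step 104.
    """
    rng = np.random.default_rng(seed)
    pixels = img.reshape(-1, 3).astype(float)
    idx = rng.choice(len(pixels), size=2, replace=False)
    centres = [pixels[idx[0]].copy(), pixels[idx[1]].copy()]
    sums = [np.zeros(3), np.zeros(3)]
    counts = [0, 0]
    labels = np.empty(len(pixels), dtype=int)
    for i, p in enumerate(pixels):
        d1 = np.linalg.norm(p - centres[0])  # distance d_1x to P1
        d2 = np.linalg.norm(p - centres[1])  # distance d_2x to P2
        k = 0 if d1 <= d2 else 1             # d_1x <= d_2x -> join S1
        labels[i] = k
        sums[k] += p
        counts[k] += 1
        centres[k] = np.round(sums[k] / counts[k])  # rounded mean of the set
    return labels.reshape(img.shape[:2]), centres
```

Note the patent updates a centre immediately after each pixel joins its set, which this sketch reproduces; a batch k-means with k = 2 would be the conventional alternative.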
In step 106, the two layers Layer1 and Layer2 of step 105 are processed using the following formula (reconstructed from the variable definitions, the original formula image being unavailable):
Flt_layer_num = (1 / N_layer_num) · Σ_{i=1}^{N_layer_num} ||I_i − mean_layer_num||²   (3)
In formula (3), Flt_layer_num is the fluctuation intensity of layer Layer_layer_num, mean_layer_num is the mean of layer Layer_layer_num in the RGB color space SPA, layer_num indexes the two layers and takes the values 1 and 2, I_i is the value of the ith pixel in the RGB color space SPA, and N_layer_num is the number of pixels in layer Layer_layer_num.
In step 107, it is judged whether Flt_layer_num satisfies condition T0; if so, the layer Layer_layer_num is judged to be a candidate building layer Layer_cb and the flow proceeds to step 109, otherwise the layer is judged to be a non-building layer Layer_nb and the flow proceeds to step 108. Based on experimental data, when dense buildings are to be extracted, condition T0 is set to Flt_layer_num ≥ 465; when sparse buildings are to be extracted, condition T0 is set to 250 ≤ Flt_layer_num < 465.
In step 108, a non-building layer Layer_nb is obtained, which indicates that the input multispectral remote sensing image of step 101 contains no buildings.
In step 109, the number of regions of the candidate building layer Layer_cb of step 107 is initialized to C0; to balance running speed against building extraction quality, C0 is set to 10.
In step 110, the region extraction objective function ObjF is constructed (reconstructed from the variable definitions, the original formula image being unavailable):
ObjF = Σ_{Q=1}^{C0} Σ_{i=1}^{N} p_Q(i) · ||H_i − H_Q||   (4)
In formula (4), ObjF is the objective function for region extraction; H_i = (R_i, G_i, B_i) is the feature vector of the ith pixel point in the RGB color space SPA, where R_i, G_i and B_i are the values of the ith pixel point in the R, G and B components; H_Q = (R_Qc, G_Qc, B_Qc) is the feature vector of the centroid pixel point Qc of region Q in the RGB color space SPA, where R_Qc, G_Qc and B_Qc are the values of pixel point Qc in the R, G and B components; p_Q(i) is the probability that the ith pixel belongs to region Q; and ||H_i − H_Q|| denotes the Chebyshev distance between the feature vectors H_i and H_Q.
In step 111, the candidate building layer Layer_cb is processed using the region extraction objective function ObjF of step 110.
In step 112, with ObjF^(k) denoting the value of ObjF after the kth iteration, it is judged whether ObjF satisfies the stopping condition |ObjF^(k+1) − ObjF^(k)| ≤ Thr; if so, the region set RS is obtained and the flow proceeds to step 113, otherwise C0 is updated to C0 + 1 and the flow returns to step 111. To ensure the accuracy of the extraction result, the threshold Thr is set to 10^-4.
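The objective of step 110, with the Chebyshev distance fixed by claim 3, can be read as a membership-weighted sum of distances between pixel features and region centroids. The following sketch assumes that summation structure, since the formula image itself is not reproduced in the text:

```python
import numpy as np

def chebyshev(h1, h2):
    """Chebyshev distance between two RGB feature vectors (claim 3)."""
    return float(np.max(np.abs(np.asarray(h1, float) - np.asarray(h2, float))))

def objective(pixels, memberships, centroids):
    """Region-extraction objective ObjF (assumed form of formula (2)/(4)).

    pixels:      list of N feature vectors H_i = (R_i, G_i, B_i)
    memberships: N x C nested list, memberships[i][q] = p_Q(i), the
                 probability that pixel i belongs to region q
    centroids:   list of C centroid features H_Q = (R_Qc, G_Qc, B_Qc)
    """
    total = 0.0
    for q, hq in enumerate(centroids):
        for i, hi in enumerate(pixels):
            total += memberships[i][q] * chebyshev(hi, hq)
    return total
```

In step 112 this value would be recomputed after each iteration until successive values differ by at most Thr; how the memberships and centroids themselves are updated between iterations is not specified in the text.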
In step 113, labeling processing is performed on the region set RS in step 112.
In step 114, post-processing is performed: non-building regions whose area is less than S0 are deleted, and buildings are extracted using rectangularity and aspect ratio as constraints. To suppress interference from small ground features, S0 is set to 50.
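Steps 113 and 114 (labeling, then deletion of small regions and shape-constrained extraction) can be sketched as a connected-component pass followed by per-region filters. S0 = 50 comes from the text; the aspect-ratio and rectangularity thresholds below are illustrative assumptions, since the patent names the constraints but not their values:

```python
import numpy as np
from collections import deque

def extract_buildings(mask, min_area=50, max_aspect=5.0, min_rect=0.4):
    """Post-processing of steps 113-114, a sketch.

    Connected regions of the candidate mask are labelled (4-connectivity),
    regions smaller than S0 = min_area pixels are deleted, and survivors
    are kept only if the bounding-box aspect ratio and rectangularity
    (region area / bounding-box area) pass the thresholds.
    """
    mask = np.asarray(mask, dtype=bool)
    H, W = mask.shape
    labels = np.zeros(mask.shape, dtype=int)
    keep = np.zeros(mask.shape, dtype=bool)
    next_label = 0
    for r in range(H):
        for c in range(W):
            if mask[r, c] and labels[r, c] == 0:
                next_label += 1
                comp = []
                queue = deque([(r, c)])
                labels[r, c] = next_label
                while queue:  # BFS over the connected component
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            queue.append((ny, nx))
                ys = [p[0] for p in comp]
                xs = [p[1] for p in comp]
                h = max(ys) - min(ys) + 1
                w = max(xs) - min(xs) + 1
                aspect = max(h, w) / min(h, w)
                rect = len(comp) / (h * w)
                if len(comp) >= min_area and aspect <= max_aspect and rect >= min_rect:
                    for y, x in comp:
                        keep[y, x] = True
    return keep
```

A compact square region survives both filters, while blobs under 50 pixels or extremely elongated, sparse regions are discarded.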
In step 115, the result is output.
Claims (3)
1. An automatic extraction method of a remote sensing image building is characterized by comprising the following steps:
step 1: preprocessing an input multispectral remote sensing image containing three color components of R, G and B to obtain an image Iin;
Step 2: in image IinTwo points are randomly selected from the RGB color space SPA and are respectively marked as P1And P2Calculating an image IinPixel point P inxRespectively to P1And P2Are respectively marked as d1xAnd d2xWhen d is1x≤d2xThen, the pixel point P is setxAnd P1Are combined into the same set and marked as S1Otherwise, the pixel point P isxAnd P2Are combined into the same set and marked as S2When S is1After new pixel is added, for S1The average coordinate value of the positions of all the pixels is rounded and taken as P1When new position of S2After new pixel is added, for S2The average coordinate value of the positions of all the pixels is rounded and taken as P2The new location of (2); step 2 is executed iteratively until the image I is traversedinObtaining two layered layers of all the pixel points in the image1And Layer2;
step 3: process the two layers Layer1 and Layer2 of step 2 using the following formula (reconstructed from the variable definitions, the original formula image being unavailable):
Flt_layer_num = (1 / N_layer_num) · Σ_{i=1}^{N_layer_num} ||I_i − mean_layer_num||²   (1)
in formula (1), Flt_layer_num is the fluctuation intensity of layer Layer_layer_num, mean_layer_num is the mean of layer Layer_layer_num in the RGB color space SPA, layer_num indexes the two layers of step 2 and takes the values 1 and 2, I_i is the value of the ith pixel in the RGB color space SPA, and N_layer_num is the number of pixels in layer Layer_layer_num;
step 4: when Flt_layer_num falls within a set interval, the condition is satisfied and the layer Layer_layer_num is judged to be a candidate building layer Layer_cb, otherwise it is judged to be a non-building layer Layer_nb;
step 5: initialize the number of regions of the candidate building layer Layer_cb of step 4 to C0;
step 6: process the candidate building layer Layer_cb using the following formula (likewise reconstructed from the variable definitions):
ObjF = Σ_{Q=1}^{C0} Σ_{i=1}^{N} p_Q(i) · ||H_i − H_Q||   (2)
in formula (2), ObjF is the objective function for region extraction; H_i = (R_i, G_i, B_i) is the feature vector of the ith pixel point in the RGB color space SPA, where R_i, G_i and B_i are the values of the ith pixel point in the R, G and B components; H_Q = (R_Qc, G_Qc, B_Qc) is the feature vector of the centroid pixel point Qc of region Q in the RGB color space SPA, where R_Qc, G_Qc and B_Qc are the values of pixel point Qc in the R, G and B components; p_Q(i) is the probability that the ith pixel belongs to region Q; and ||H_i − H_Q|| denotes the distance between the feature vectors H_i and H_Q;
step 7: when |ObjF^(k+1) − ObjF^(k)| ≤ Thr, where Thr is an accuracy threshold for the extraction result, obtain the region set RS and proceed to step 8; otherwise update C0 to C0 + 1 and return to step 5, where ObjF^(k) is the value of ObjF after the kth iteration;
step 8: perform labeling on the region set RS of step 7;
step 9: delete non-building regions whose area is less than S0, and extract buildings using rectangularity and aspect ratio as constraints;
step 10: output the building extraction result.
2. The automatic extraction method of a remote sensing image building according to claim 1, wherein the preprocessing in step 1 comprises geometric correction, radiometric correction and contrast enhancement.
3. The automatic extraction method of a remote sensing image building according to claim 1, wherein the distance ||H_i − H_Q|| in step 6 is obtained using the Chebyshev distance.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810307327.1A CN108491826B (en) | 2018-04-08 | 2018-04-08 | Automatic extraction method of remote sensing image building |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108491826A CN108491826A (en) | 2018-09-04 |
CN108491826B true CN108491826B (en) | 2021-04-30 |
Family
ID=63315035
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810307327.1A Expired - Fee Related CN108491826B (en) | 2018-04-08 | 2018-04-08 | Automatic extraction method of remote sensing image building |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108491826B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109635715B (en) * | 2018-12-07 | 2022-09-30 | 福建师范大学 | Remote sensing image building extraction method |
CN110298348B (en) * | 2019-06-12 | 2020-04-28 | 苏州中科天启遥感科技有限公司 | Method and system for extracting remote sensing image building sample region, storage medium and equipment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102015200260A1 (en) * | 2014-01-10 | 2015-07-16 | Mitsubishi Electric Corporation | Method of creating a descriptor for a scene image |
CN104794478A (en) * | 2015-05-04 | 2015-07-22 | 福建师范大学 | Method for extracting buildings with uniform spectral characteristics from remote sensing images |
CN105761266A (en) * | 2016-02-26 | 2016-07-13 | 民政部国家减灾中心 | Method of extracting rectangular building from remote sensing image |
Non-Patent Citations (2)
Title |
---|
Research on 3D Reconstruction of Urban Building Complexes Based on Object Recognition and Parameterization Techniques; Wu Ning; China Doctoral Dissertations Full-text Database, Engineering Science & Technology II; 2014-07-15 (No. 7); Chapter 4 * |
Building Extraction from Multispectral Remote Sensing Images; Shi Wenzao et al.; Computer Systems & Applications; 2017-08-31 (No. 8); pp. 201-205 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Deschaud et al. | A fast and accurate plane detection algorithm for large noisy point clouds using filtered normals and voxel growing | |
CN113706482A (en) | High-resolution remote sensing image change detection method | |
CN108428220B (en) | Automatic geometric correction method for ocean island reef area of remote sensing image of geostationary orbit satellite sequence | |
CN110796691B (en) | Heterogeneous image registration method based on shape context and HOG characteristics | |
Hu et al. | Efficient and automatic plane detection approach for 3-D rock mass point clouds | |
CN112883850A (en) | Multi-view aerospace remote sensing image matching method based on convolutional neural network | |
CN112489099A (en) | Point cloud registration method and device, storage medium and electronic equipment | |
CN111814792B (en) | Feature point extraction and matching method based on RGB-D image | |
CN108491826B (en) | Automatic extraction method of remote sensing image building | |
Zhang et al. | Lidar-guided stereo matching with a spatial consistency constraint | |
Kong et al. | Local stereo matching using adaptive cross-region-based guided image filtering with orthogonal weights | |
CN113887624A (en) | Improved feature stereo matching method based on binocular vision | |
CN109635715B (en) | Remote sensing image building extraction method | |
CN113409332B (en) | Building plane segmentation method based on three-dimensional point cloud | |
Parmehr et al. | Automatic parameter selection for intensity-based registration of imagery to LiDAR data | |
CN112329662B (en) | Multi-view saliency estimation method based on unsupervised learning | |
CN110766708B (en) | Image comparison method based on contour similarity | |
CN116681839A (en) | Live three-dimensional target reconstruction and singulation method based on improved NeRF | |
CN116385892A (en) | Digital elevation model extraction method based on target context convolution neural network | |
CN108596088B (en) | Building detection method for panchromatic remote sensing image | |
Liu et al. | Adaptive algorithm for automated polygonal approximation of high spatial resolution remote sensing imagery segmentation contours | |
Haque et al. | Robust feature-preserving denoising of 3D point clouds | |
van de Wouw et al. | Hierarchical 2.5-d scene alignment for change detection with large viewpoint differences | |
Huang et al. | Geological segmentation on UAV aerial image using shape-based LSM with dominant color | |
Liu et al. | Binocular depth estimation using convolutional neural network with Siamese branches |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| CF01 | Termination of patent right due to non-payment of annual fee | |
Granted publication date: 20210430 |