CN116258958B - Building extraction method and device for homologous high-resolution images and DSM data - Google Patents

Building extraction method and device for homologous high-resolution images and DSM data Download PDF

Info

Publication number
CN116258958B
CN116258958B (application CN202211655920.8A)
Authority
CN
China
Prior art keywords
building
preset
dsm
object layer
extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211655920.8A
Other languages
Chinese (zh)
Other versions
CN116258958A (en)
Inventor
丁媛
邢云飞
杜凤兰
原媛
屈鸿钧
文强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Twenty First Century Aerospace Technology Co ltd
Original Assignee
Twenty First Century Aerospace Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Twenty First Century Aerospace Technology Co ltd filed Critical Twenty First Century Aerospace Technology Co ltd
Priority to CN202211655920.8A priority Critical patent/CN116258958B/en
Publication of CN116258958A publication Critical patent/CN116258958A/en
Application granted granted Critical
Publication of CN116258958B publication Critical patent/CN116258958B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06V20/176 Scenes; Scene-specific elements; Terrestrial scenes; Urban or other man-made structures
    • G06V10/60 Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • G06V20/13 Scenes; Scene-specific elements; Terrestrial scenes; Satellite images
    • G06V20/17 Scenes; Scene-specific elements; Terrestrial scenes; Terrestrial scenes taken from planes or by drones

Abstract

The invention discloses a building extraction method and device for homologous high-resolution images and DSM data, relates to the technical field of data extraction, and mainly aims to realize rapid extraction of buildings in large-scale complex scenes. The main technical scheme of the invention is as follows: acquiring high-resolution orthophoto data of a research area and the DSM data corresponding to it; performing multi-scale segmentation on both with a preset multi-scale segmentation rule to generate a parent object layer and a child object layer, wherein the parent object layer generates features that assist extraction on the child object layer; removing non-building objects on the child object layer with a preset non-building object removal rule; extracting building objects on the child object layer with preset building classification extraction rules; and, based on the building objects extracted from the child object layer, growing the corresponding building regions with a preset building region growing rule and merging the fragmented building patches to obtain the target building. The invention is used for building extraction.

Description

Building extraction method and device for homologous high-resolution images and DSM data
Technical Field
The invention relates to the technical field of data extraction, in particular to a building extraction method and device for homologous high-resolution images and DSM data.
Background
In recent years, with the development of earth observation technology, humanity's capability for comprehensive observation of the earth has reached an unprecedented level. Remote sensing image data are acquired faster, update cycles are shorter, and timeliness keeps improving. The spatial resolution of remote sensing imagery has also improved markedly, revealing richer ground-feature detail with more distinct geometric structure and texture. Quickly and accurately extracting buildings from remote sensing imagery is therefore of great significance for land-use monitoring, digital city construction, smart city construction, urban planning and so on, and remote sensing is increasingly advantageous for rapid, large-scale building extraction.
At present, depending on the data source, high-resolution remote sensing imagery is mainly used to obtain spectral, textural and geometric information, while three-dimensional information data supply height information; combining the two realizes building extraction based on complementary two-dimensional image data and three-dimensional information data.
However, the data sources of such methods are not uniform: different sources differ in acquisition time, image resolution, imaging angle and so on. These differences disturb the subsequent extraction process, so the two data sets from different sources can only be processed separately and their respective results merged afterwards. The two data sets therefore cannot be used in organic combination, building extraction in large-scale complex scenes is slow, and the extraction results generalize poorly.
Disclosure of Invention
In view of the above problems, the present invention provides a building extraction method and apparatus for homologous high-resolution images and DSM data, with the main aim of achieving rapid extraction of buildings in large-scale complex scenes.
In order to solve the technical problems, the invention provides the following scheme:
in a first aspect, the present invention provides a method of building extraction of homologous high resolution imagery and DSM data, the method comprising:
acquiring high-resolution orthographic image data of a research area and DSM data corresponding to the high-resolution orthographic image data;
performing multi-scale segmentation on the high-resolution orthophoto data of the research area and the DSM data corresponding to it by using a preset multi-scale segmentation rule to generate a parent object layer and a child object layer, wherein the parent object layer generates features that assist extraction on the child object layer;
removing non-building objects on the sub-object layer by using a preset non-building object removing rule, wherein the non-building objects at least comprise vegetation, water and shadows;
extracting building objects on the sub object layer by using a preset building classification extraction rule, wherein the preset building classification extraction rule is used for classifying and extracting according to different colors of building roofs;
and, based on the building objects extracted from the child object layer, growing the building regions corresponding to the building objects by using a preset building region growing rule, and merging the fragmented building patches corresponding to the building objects after region growing to obtain the target building.
Preferably, the acquiring the high resolution orthographic image data of the research area and the DSM data corresponding to the high resolution orthographic image data includes:
acquiring high-resolution stereopair data of the research area, wherein the high-resolution stereopair data at least comprise forward-looking, backward-looking and nadir-looking high-resolution image data and include panchromatic-band and multispectral-band information;
performing preprocessing such as atmospheric correction, orthorectification and data fusion on the nadir-looking high-resolution image data of the stereopair to obtain the high-resolution orthophoto data;
the DSM data is generated from the high resolution stereopair data.
Preferably, the multi-scale segmentation of the high-resolution orthophoto data of the research area and the corresponding DSM data using a preset multi-scale segmentation rule to generate a parent object layer and a child object layer includes:
acquiring the data corresponding to the R, G, B and NIR bands from the high-resolution orthophoto data of the research area;
and performing multi-scale segmentation on the R, G, B and NIR band data and the DSM data according to preset proportional weights to generate the parent object layer and the child object layer.
Preferably, the removing the non-building objects on the sub-object layer by using a preset non-building object removing rule includes:
based on the normalized vegetation index NDVI and the mean_blue characteristic, removing vegetation on the sub-object layer by using a preset vegetation removing rule;
based on the normalized water index NDWI and Brightness characteristics, removing the water on the sub-object layer by utilizing a preset water removal rule;
and eliminating shadows on the sub-object layer by using a preset shadow elimination rule based on the Brightness feature and the mean_scale_filtered feature.
Preferably, the building objects at least comprise highlight (bright-roof) buildings, red-roof buildings, blue-roof buildings, brown-roof buildings, gray-roof buildings and black-roof buildings; the preset building classification extraction rules at least comprise a preset highlight-building extraction rule, a preset red-roof extraction rule, a preset blue-roof extraction rule, a preset brown-roof extraction rule, a preset gray-roof extraction rule and a preset black-roof extraction rule;
the extracting of building objects on the child object layer by using the preset building classification extraction rules comprises:
extracting highlight buildings on the child object layer by using the preset highlight-building extraction rule based on the Brightness feature, the Mean NIR feature, the Area feature and the Rectangular fit feature;
extracting red-roof buildings on the child object layer by using the preset red-roof extraction rule based on the RRI red-band ratio index feature, the Area feature and the Brightness feature;
extracting blue-roof buildings on the child object layer by using the preset blue-roof extraction rule based on the Mean Blue feature, the BRDI blue-red difference index, the Area feature and the Brightness feature;
extracting brown-roof buildings on the child object layer by using the preset brown-roof extraction rule based on the Diff(DSM) feature, the Mean scale_filtered feature, the Shape index feature, the Rectangular fit feature and the Area feature;
extracting gray-roof buildings on the child object layer by using the preset gray-roof extraction rule based on the Diff(DSM, unclassified) feature, the Mean scale_filtered feature, the Shape index feature, the Rectangular fit feature and the Area feature;
and extracting black-roof buildings on the child object layer by using the preset black-roof extraction rule based on the Diff(DSM, unclassified), Mean scale_filtered, Mean Red, Shape index and Area features.
Preferably, the growing of the building regions corresponding to the building objects extracted from the child object layer according to the preset building region growing rule, and the merging of the fragmented building patches after region growing to obtain the target building, comprises:
based on the Diff(DSM) feature and the Mean scale_filtered feature, removing from the building objects those whose height is lower than that of the nearby unclassified category, obtaining the remaining building objects and determining them as building seed nodes;
and growing the building region with the objects adjacent to the building seed nodes as candidate regions to obtain fragmented building patches, and merging the fragmented building patches to obtain the target building.
Preferably, after the building regions are grown and the fragmented building patches are merged to obtain the target building, the method further comprises:
optimizing the target building by using a preset optimization rule, wherein the preset optimization rule at least comprises building supplement optimization, boundary smoothing optimization and vector optimization.
In order to achieve the above object, according to a second aspect of the present invention, there is provided an apparatus for performing the building extraction method of homologous high resolution image and DSM data according to the first aspect.
In order to achieve the above object, according to a third aspect of the present invention, there is provided a storage medium comprising a stored program, wherein the program, when run, controls a device in which the storage medium is located to perform the method for extracting homologous high resolution image and DSM data according to the first aspect.
In order to achieve the above object, according to a fourth aspect of the present invention, there is provided an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing all or part of the steps of the building extraction method for homologous high-resolution images and DSM data of the first aspect when executing the program.
By means of the above technical scheme, the building extraction method and device for homologous high-resolution images and DSM data provided by the invention address the low speed of existing extraction methods based on two-dimensional image data and three-dimensional information data when extracting buildings in large-scale complex scenes. To this end, the invention acquires high-resolution orthophoto data of a research area and the DSM data corresponding to it; performs multi-scale segmentation on both with a preset multi-scale segmentation rule to generate a parent object layer and a child object layer; removes non-building objects on the child object layer with a preset non-building object removal rule; extracts building objects on the child object layer with preset building classification extraction rules; and, based on the extracted building objects, performs building region growing with a preset building region growing rule and merges the fragmented building patches after growing to obtain the target building. The invention works in complex scene areas with diverse building types, complex structures and rich materials, requires no large volume of training samples, and realizes rapid, wide-area, low-cost building extraction.
It should be noted that other current methods are slow because a large number of high-quality sample labels must be produced; the present method is fast because no samples need to be produced and building extraction proceeds directly from the generated fusion image and DSM data. Moreover, other methods usually address only single buildings with simple structures over small areas.
The foregoing is only an overview of the technical scheme of the invention. In order that the technical means of the invention may be understood more clearly and implemented in accordance with this description, and that the above and other objects, features and advantages of the invention may be more readily apparent, specific embodiments are set forth below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a flow chart of a method for building extraction of homologous high resolution images and DSM data according to an embodiment of the present invention;
FIG. 2 is a flow chart of another method for building extraction of homologous high resolution images and DSM data according to an embodiment of the present invention;
FIG. 3 is a block diagram showing a building extraction apparatus for homologous high resolution imaging and DSM data according to an embodiment of the present invention;
fig. 4 shows a block diagram of another apparatus for building extraction of homologous high resolution images and DSM data according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
At present, extraction methods based on two-dimensional image data and three-dimensional information data obtain spectral, textural and geometric information from high-resolution remote sensing imagery and height information from three-dimensional data, combining the two to realize complementary building extraction. However, such methods cannot organically combine the two data sets to fully exploit their respective advantages, and cannot meet the needs of large-scene building extraction. In view of these problems, the inventors conceived of extracting buildings from homologous high-resolution imagery and DSM data, so that buildings can be extracted quickly in complex scenes with diverse building types, complex structures and rich materials.
Therefore, the embodiment of the invention provides a building extraction method for homologous high-resolution images and DSM data, which can realize rapid extraction of buildings in large-scale complex scenes, and the specific implementation steps are shown in fig. 1, and the method comprises the following steps:
First, the custom features, the key threshold determination method and the iterative algorithm applied in feature extraction, all used throughout the invention, are explained.
When constructing the feature space, the scheme makes full use of the spectral, shape and texture features of the high-resolution image data, and provides context information for the extraction process by creating the custom features Mean scale_filtered and Diff(DSM, class). These two features are presented as follows:
The Mean scale_filtered feature reflects, to some extent, the spectral uniformity and texture complexity inside an object: higher values indicate lower spectral uniformity and higher texture complexity. It is created as follows. First, edge features of the image are computed with an absolute mean deviation filter, generating a filtered band (band_filtered). Then, taking each parent object as a unit, the mean of band_filtered is computed in the parent object layer created by large-scale segmentation, producing the Mean scale_filtered feature and a scale_filtered layer; the Mean scale_filtered value of each large-scale parent patch is inherited by the small-scale patches in the child object layer, providing context features for the child objects.
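The feature-creation steps above can be sketched in Python. This is a minimal, illustrative version assuming a single-band numpy array and a parent-object label image; the function names, the 3×3 window and the label representation are assumptions, not the patent's implementation:

```python
import numpy as np

def mad_filter(band, size=3):
    """Edge/texture band: per-pixel mean absolute deviation within a local
    window (a stand-in for the 'absolute mean deviation filtering' step)."""
    pad = size // 2
    padded = np.pad(band.astype(float), pad, mode="edge")
    out = np.empty(band.shape, dtype=float)
    for i in range(band.shape[0]):
        for j in range(band.shape[1]):
            w = padded[i:i + size, j:j + size]
            out[i, j] = np.abs(w - w.mean()).mean()
    return out

def mean_scale_filtered(band_filtered, parent_labels):
    """Mean of the filtered band per parent object; every pixel (and hence
    every child object inside that parent) inherits the parent's mean."""
    out = np.zeros_like(band_filtered, dtype=float)
    for lbl in np.unique(parent_labels):
        mask = parent_labels == lbl
        out[mask] = band_filtered[mask].mean()
    return out
```

A spectrally uniform parent object yields a low Mean scale_filtered value, matching the interpretation given above.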
The Diff(DSM, class) feature is generated from the DSM data and fully exploits the fact that DSM data directly represent the height of ground features: a relative height value is formed by differencing adjacent target objects, and this difference serves as a spatial feature to screen elevated objects for building extraction.
Diff(DSM, class) = Mean diff. to (DSM, class) = v_a − v_b
The Diff(DSM, class) feature describes the height difference between two adjacent objects, computed as the mean difference on the DSM band between object a and its adjacent object b of the given class, where v_a is the value of object a on the DSM band and v_b is the value of object b on the DSM band.
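A minimal sketch of the Diff(DSM, class) computation above, assuming objects are stored as a dictionary of per-object DSM means and class labels plus an adjacency table (this representation is an illustrative assumption, not the patent's data model):

```python
import numpy as np

def diff_dsm(objects, adjacency, a, neighbor_class):
    """Diff(DSM, class) for object a: v_a - v_b, where v_b is the mean DSM
    value of a's adjacent objects belonging to neighbor_class (relative height)."""
    neigh = [o for o in adjacency[a] if objects[o]["class"] == neighbor_class]
    if not neigh:
        return None  # no adjacent object of that class
    v_b = float(np.mean([objects[o]["dsm"] for o in neigh]))
    return objects[a]["dsm"] - v_b
```

A strongly positive value marks an object elevated above its unclassified surroundings, the condition later used to keep building seed objects.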
In this scheme, the thresholds that determine the constraint conditions are mainly obtained by a hierarchical sample extremum method. The specific method is as follows:
On the basis of the lower (or upper) threshold limit determined by the sample extremum method, the hierarchical sample extremum method randomly selects sample data satisfying the initial threshold condition, groups the samples by the upper and lower quartiles of the core feature (taking the complexity of the class into account), uses the feature values at those quartiles to subdivide the threshold interval a second time and determine a step threshold, and then constrains with other auxiliary features, thereby realizing hierarchical extraction of complex ground features. Taking the minimum-value determination of the threshold as an example:
K_1 = max(k_1, k_2, ..., k_n),  K_2 = Q_1(q_1, q_2, ..., q_n),  K_3 = Q_3(q_1, q_2, ..., q_n)
where K_1 is the threshold determined from the critical-point sample set (the threshold determined by the sample extremum method); q_1, q_2, ..., q_n are the values, on a given feature, of randomly selected samples 1 to n that satisfy the threshold K_1; Q_1() is the lower-quartile function; and Q_3() is the upper-quartile function (the choice of quartile depends on the distribution of the actual class).
Q() = quantile[a](L_i),  a ∈ (0, 100)
where Q() is the quantile function, a is the quantile value, and L_i is the band participating in the computation.
The hierarchical sample extremum method grades the step thresholds by how hard they are to satisfy: the most demanding threshold is set as the first-level threshold, and the level decreases as the requirement relaxes. Because its condition is strict, the first-level threshold can often extract the target ground features from a single feature alone; however, few objects satisfy it, so relying on it alone misses many targets. The targets must therefore be extracted complementarily with the lower-level thresholds (second level, third level and so on) in the step-threshold ladder. Compared with first-level extraction, more targets satisfy a low-level threshold, but more non-target features are falsely extracted as well; at this point the step thresholds of other features must be combined into step-threshold groups to extract the targets and reduce false extraction.
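The threshold ladder K_1/K_2/K_3 above can be sketched directly from the formulas (the function name and argument split are illustrative assumptions; this is the minimum-value case):

```python
import numpy as np

def step_thresholds(critical_samples, qualifying_values):
    """Step-threshold ladder (minimum-value case): K1 from the critical-point
    sample set, K2/K3 from the quartiles of feature values of randomly
    selected samples that already satisfy K1."""
    k1 = max(critical_samples)                        # K1 = max(k1, ..., kn)
    k2 = float(np.percentile(qualifying_values, 25))  # K2 = Q1(q1, ..., qn)
    k3 = float(np.percentile(qualifying_values, 75))  # K3 = Q3(q1, ..., qn)
    return k1, k2, k3
```

K1 then serves as the strict first-level threshold for seed selection, while K2/K3 provide the relaxed levels used in combination with auxiliary features.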
In this scheme, the iterative algorithm applied during ground-feature extraction is a region growing algorithm. The specific method is as follows:
Region growing is the process of developing groups of pixels or regions into larger areas. Starting from a set of seed points, the region grows by merging in adjacent pixels with properties similar to the seed, such as intensity, gray level, texture and color. The invention takes a threshold satisfying the stricter feature requirement (the high-level threshold determined by the hierarchical threshold method) as the criterion for selecting seed objects. Then, through adjacency relations, objects neighboring the seed points become the objects under investigation, a suitable combination of low-level thresholds from the step-threshold ladder serves as the criterion for ground-feature extraction, and the ground features are extracted by cyclic iteration under this mechanism.
The complete scheme of the invention is as follows:
101. high-resolution orthophoto data of the investigation region and DSM data corresponding to the high-resolution orthophoto data are acquired.
First, high-resolution stereopair data of the research area are acquired, and the nadir-looking image data of the stereopair are preprocessed by atmospheric correction, orthorectification, data fusion and so on to form a high-resolution orthophoto fusion image, thereby obtaining the high-resolution orthophoto data; the DSM data are then generated from the high-resolution stereopair data.
It should be noted that the DSM data originate from the same satellite as the high-resolution orthophoto data. The word "corresponding" in "DSM data corresponding to the high-resolution orthophoto data" may be understood either as data of the corresponding region provided by other means (e.g. a drone) or, as here, as data acquired by the same satellite.
102. And performing multi-scale segmentation on the high-resolution orthophoto data of the research area and DSM data corresponding to the high-resolution orthophoto data by using a preset multi-scale segmentation rule to generate a father object layer and a child object layer.
This step generates the object layers by adaptive multi-scale segmentation.
First, the R, G, B and NIR bands are selected from the high-resolution orthophoto data of the research area obtained in step 101 and combined with the DSM band (i.e. the DSM data corresponding to the orthophoto data); the two groups of images are segmented at a large scale and a small scale to generate upper and lower (parent and child) object layers, where the parent object layer generates features that assist extraction on the child object layer. The two groups of images are the image of the R, G, B and NIR bands selected from the orthophoto data and the image of the DSM band.
The preset multi-scale segmentation rule (i.e. the multi-scale segmentation method) segments the image into objects by setting parameters such as the weights of the layers participating in segmentation, shape heterogeneity, spectral heterogeneity and compactness, so that the mean heterogeneity between different objects is maximal while the heterogeneity between pixels within the same object is minimal. The algorithm implemented in the invention is a region merging algorithm based on minimal regional heterogeneity: starting from the pixel level and keeping the overall heterogeneity as small as possible, adjacent image regions are merged pairwise into larger image objects until no further merge is possible at the specified scale. For example, if s1 and s2 are merged into s, the regional heterogeneity f of s is:
f = w_color · h_color + (1 − w_color) · h_shape
where w_color is the spectral weight of the object generated by merging, and h_color and h_shape are the spectral heterogeneity and shape heterogeneity of the object, respectively.
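The merge criterion can be sketched as a cost function over candidate merges; the helper below (names are illustrative assumptions) picks the pairwise merge with the smallest combined heterogeneity, as a region-merging segmenter would at each step:

```python
def merge_heterogeneity(h_color, h_shape, w_color):
    """Merge cost f = w_color * h_color + (1 - w_color) * h_shape; region
    merging performs the candidate merge with the smallest f first."""
    return w_color * h_color + (1.0 - w_color) * h_shape

def best_merge(candidates, w_color):
    """candidates: list of (pair_id, h_color, h_shape); return the pair
    whose merge would increase heterogeneity the least."""
    return min(candidates, key=lambda c: merge_heterogeneity(c[1], c[2], w_color))[0]
```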
The invention selects the optimal segmentation scale according to the rate of change of local variance, ROC-LV (rates of change of LV): the ROC-LV of image homogeneity is computed under different segmentation-scale parameters, and where ROC-LV is maximal, i.e. where a peak appears, the corresponding segmentation scale is optimal.
The segmentation scales are determined by adaptive recursive optimization: the scale a corresponding to the maximum ROC-LV and the scale b corresponding to the second maximum are selected; the smaller of a and b is used to generate the child object layer and the larger to generate the parent object layer, the parent layer providing features that assist child-layer extraction.
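The scale-selection rule above can be sketched as follows, assuming local variance has already been measured at a series of candidate scales (function names and the indexing convention are illustrative assumptions):

```python
import numpy as np

def roc_lv(local_variance):
    """Rate of change of local variance between consecutive scales:
    ROC-LV = (LV_i - LV_{i-1}) / LV_{i-1} * 100."""
    lv = np.asarray(local_variance, dtype=float)
    return (lv[1:] - lv[:-1]) / lv[:-1] * 100.0

def pick_scales(scales, local_variance):
    """Scales at the two largest ROC-LV peaks: the smaller generates the
    child object layer, the larger generates the parent object layer."""
    roc = roc_lv(local_variance)
    top2 = np.argsort(roc)[::-1][:2]
    a, b = (scales[i + 1] for i in top2)  # roc[i] is the change arriving at scales[i+1]
    return min(a, b), max(a, b)
```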
103. And eliminating the non-building objects on the sub-object layer by utilizing a preset non-building object eliminating rule.
Wherein the non-architectural objects include at least vegetation, water, and shadows; the preset non-building object rejection rules at least comprise preset vegetation rejection rules, preset water rejection rules and preset shadow rejection rules;
Non-building ground features such as vegetation, water bodies and shadows are removed on the child object layer generated by the adaptive multi-scale segmentation of step 102.
(1) Preset vegetation removal rule
Tall vegetation interferes with building extraction when processing the DSM data, so vegetation is removed first. During removal, vegetation is extracted with the normalized difference vegetation index NDVI, and non-vegetation with high NDVI values is rejected with the Mean Blue feature. The NDVI and Mean Blue formulas are as follows:
NDVI = (NIR − Red) / (NIR + Red)
where NIR is the near-infrared band gray value and Red is the red band gray value;
Mean Blue(V) = (1/n) Σ_{i∈V} blue_i
The mean describes the average intensity of an object on a band, where V is the segmented object, i indexes a pixel on the blue band, blue_i is its gray value, and n is the number of pixels.
Further, the NDVI and Mean Blue thresholds are determined by analyzing the sample minima with a hierarchical sample extremum method, and an NDVI threshold ladder is constructed.
Further, objects meeting the high-level threshold condition are taken as vegetation seed nodes; then, combining adjacent-object relations, objects to be grown are determined using the feature Rel.border to vegetation >= a, a ∈ (0, 1); vegetation regions are grown with reference to the region-growing algorithm, and vegetation objects meeting the low-level threshold combination are rejected.
The boundary coincidence ratio describes the boundary overlap between two objects: Rel.border(i, j) = e_{i,j} / p_i, where i is the object under investigation, j is the adjacent object, e_{i,j} is the shared border length of objects i and j, and p_i is the perimeter of object i.
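A hedged sketch of the three vegetation-rule features (per-pixel NDVI, the Mean Blue object statistic, and the relative border Rel.border); the small band arrays and the thresholds reused from the later example are illustrative assumptions:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index per pixel."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def mean_band(band, obj_mask):
    """Mean intensity of a segmented object on one band (e.g. Mean Blue)."""
    return float(np.asarray(band, float)[obj_mask].mean())

def rel_border(shared_edge_len, perimeter_i):
    """Rel.border(i, j): share of object i's perimeter touching object j."""
    return shared_edge_len / perimeter_i

nir = np.array([[800.0, 820.0], [790.0, 810.0]])  # toy near-infrared band
red = np.array([[300.0, 310.0], [305.0, 295.0]])  # toy red band
veg = ndvi(nir, red) > 0.35  # seed-level NDVI threshold from the example
```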
(2) Preset water body removing rule
The water body is easily confused with shadow, so water is removed before shadow. Water is extracted using the normalized difference water index NDWI, and non-water objects with high NDWI values are rejected using the Brightness feature. The NDWI and Brightness thresholds select the sample minimum as the threshold via the sample extremum method. The NDWI and Brightness formulas are as follows:
NDWI = (Green - NIR) / (Green + NIR)
wherein Green is the green band gray value and NIR is the near-infrared band gray value;
Brightness = (1/n) * sum over i = 1..n of c_i
Brightness represents the spectral mean of an object over all bands, where n is the total number of multispectral bands of the image and c_i is the mean gray value of the object in the i-th band.
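The two water-rule features might be computed as below; a minimal sketch, with illustrative array values rather than embodiment data:

```python
import numpy as np

def ndwi(green, nir):
    """Normalized difference water index per pixel."""
    green, nir = np.asarray(green, float), np.asarray(nir, float)
    return (green - nir) / (green + nir)

def brightness(band_means):
    """Spectral mean of an object over all n bands (mean of per-band means)."""
    return sum(band_means) / len(band_means)

water_like = ndwi(np.array([600.0]), np.array([400.0]))  # positive for water
obj_brightness = brightness([400.0, 500.0, 600.0, 500.0])
```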
(3) Presetting shadow eliminating rules
On the basis of removing vegetation and water, shadow regions, which are easily confused with black buildings, are removed using Brightness as the core feature and Mean scale_filtered as an auxiliary feature. When removing shadows, a hierarchical sample extremum method is applied to select the sample maximum and construct a Brightness threshold ladder; objects meeting the first-level threshold are taken as shadow seed nodes; then, combining adjacent-object relations, objects to be grown are determined using the relation Rel.border to shadow >= a, a ∈ (0, 1), shadow regions are grown, and shadow objects meeting the low-level threshold condition are rejected.
104. And extracting the building objects on the sub-object layer by using a preset building classification extraction rule.
The preset building classification extraction rules are used for classification extraction according to different colors of building roofs. The preset building classification extraction rules at least comprise preset highlight room extraction rules, preset red roof extraction rules, preset blue roof extraction rules, preset brown roof extraction rules, preset gray roof extraction rules and preset black roof extraction rules.
After vegetation, water, and shadows are removed from the child object layer, buildings are classified by roof color into six categories (highlight room, red roof room, blue roof room, brown roof room, gray roof room, and black roof room) and extracted per class; brightness ladders are constructed with the hierarchical sample extremum method to determine the extraction threshold of each building class.
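The hierarchical sample extremum idea (class thresholds taken from the minima, or maxima, of labelled sample objects) can be sketched as follows; the feature names and sample values are illustrative assumptions, not thresholds from the embodiment:

```python
def extremum_thresholds(samples, lower_bounded, upper_bounded):
    """Derive per-feature thresholds from labelled samples: the sample minimum
    for features constrained from below, the sample maximum for features
    constrained from above."""
    thr = {}
    for f in lower_bounded:
        thr[f] = (">", min(s[f] for s in samples))
    for f in upper_bounded:
        thr[f] = ("<", max(s[f] for s in samples))
    return thr

# hypothetical red-roof sample objects with per-object feature values
red_roof_samples = [
    {"RRI": 0.030, "Area": 150, "Brightness": 900},
    {"RRI": 0.025, "Area": 120, "Brightness": 950},
]
thr = extremum_thresholds(red_roof_samples, ["RRI", "Area"], ["Brightness"])
```

Repeating this on progressively looser sample subsets would yield the multi-level "ladder" of thresholds the method describes.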
The specific method comprises the following steps:
(1) Presetting a highlight room extraction rule:
The gray values of a highlight room are relatively high in every band. The highlight room is extracted using the Brightness feature combined with the feature combination formed by Mean NIR, Area, and the rectangular fit (Rectangular fit). The threshold combination selects the sample minimum as the threshold via the hierarchical sample extremum method.
Wherein, area describes the size of the object, ai is a certain pixel in the object;
the Rectangular fitting degree of the Rectangular is the matching condition of the study object and the rectangle with similar shape and size, and the value of the Rectangular fitting degree is the area of the object outside the rectangle and the area of the rectangle outside the objectA ratio of 0 indicates no match and a value of 1 indicates complete match. Wherein: p (P) v (x, y) is the pixel, Q, contained in the object v v (x, y) corresponds to the pixel contained in the fitting rectangle for the object v.
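As a rough sketch of the rectangularity idea: the snippet below approximates "Rectangular fit" with the axis-aligned bounding box rather than a fitted rectangle of equal area, so it is only a proxy (1.0 for a perfect rectangle, smaller for irregular shapes); the toy masks are invented:

```python
import numpy as np

def rectangular_fit(mask):
    """Fraction of the object's bounding box covered by the object:
    1.0 for a perfect axis-aligned rectangle, smaller otherwise."""
    ys, xs = np.nonzero(mask)
    box_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return mask.sum() / box_area

square = np.ones((4, 4), dtype=bool)    # perfectly rectangular object
l_shape = np.zeros((4, 4), dtype=bool)  # L-shaped object
l_shape[:, 0] = True
l_shape[3, :] = True
```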
(2) Presetting a red roof room extraction rule:
The red roof room has high gray values in the red band. The invention creates the red-band ratio index RRI (red ratio index) as the core feature and extracts the red roof room jointly with the feature combination formed by Area and Brightness. The threshold combination selects the sample minimum as the threshold via the hierarchical sample extremum method.
Wherein Red is the red band gray value and Brightness is the brightness value.
(3) Presetting a blue top room extraction rule:
The blue roof room has high gray values in the blue band. The blue roof room is extracted using the Mean Blue feature combined with the blue-red deviation index BRDI (blue red deviation index), Area, and Brightness features. The threshold combination selects the sample minimum as the threshold via the hierarchical sample extremum method.
Wherein Red is the red band gray value and Blue is the blue band gray value.
(4) Presetting a brown top room extraction rule:
The invention distinguishes bare ground from the brown roof room using the height difference information provided by the Diff(DSM) feature, and extracts the brown roof room jointly with the Mean scale_filtered, Shape index, Rectangular fit, and Area features. The Mean scale_filtered and Area thresholds select the sample minimum via the hierarchical sample extremum method, and the Shape index and Rectangular fit thresholds select the sample maximum.
The Shape index describes whether an object is regular: Shape index = p / (4 * sqrt(Area)), where p is the perimeter, i.e. the object boundary length given by the sum of the edge lengths e_i of each segment of the object, and Area is the object area.
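Assuming the common object-based definition (boundary length over four times the square root of area, which equals 1.0 for a square), the shape index can be computed as below; the perimeter/area pairs are illustrative:

```python
import math

def shape_index(perimeter, area):
    """Shape regularity: 1.0 for a square, larger for more irregular objects."""
    return perimeter / (4.0 * math.sqrt(area))

# a 10 x 10 square vs. an elongated 2 x 50 strip of the same area
compact = shape_index(40.0, 100.0)
elongated = shape_index(104.0, 100.0)
```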
(5) Presetting a gray roof room extraction rule:
The gray roof room spectrum has no obvious characteristic, so a Brightness interval must be determined when describing gray. Cement ground is distinguished from the gray roof room using the height difference information provided by the Diff(DSM, unclassified) feature, and the gray roof room is extracted jointly with the Mean scale_filtered, Shape index, Rectangular fit, and Area features.
(6) Presetting a black top room extraction rule:
The black roof room has spectral features similar to shadow. A Brightness interval is determined when describing the black roof room to distinguish it from shadow as far as possible; the ground is distinguished from the black roof room using the height difference information provided by the Diff(DSM) feature, and the black roof room is extracted jointly with the Mean scale_filtered, Mean Red, Shape index, and Area features. The Mean scale_filtered, Mean Red, and Area thresholds select the sample minimum as the threshold via the hierarchical sample extremum method, and the Shape index threshold selects the sample maximum.
105. And based on the building objects extracted from the sub-object layer, growing building areas corresponding to the building objects by utilizing a preset building area growing rule, and merging broken building pattern spots corresponding to the building objects after the building areas grow to obtain the target building.
The highlight rooms, red roof rooms, blue roof rooms, brown roof rooms, gray roof rooms, and black roof rooms extracted in step 104 are all classified as buildings; objects whose height is lower than that of nearby unclassified objects are removed from the building class according to the Diff(DSM) and Mean scale_filtered features, and the remaining building objects are selected as seed nodes.
Further, building region growth optimization is performed with objects adjacent to the building seed nodes as candidate regions. The spectral values of red roof rooms and blue roof rooms are distinctive and their segmented patches are complete, so their extraction results contain no broken patches and need no growth optimization. For the four classes of highlight rooms, brown roof rooms, gray roof rooms, and black roof rooms, the DSM band is combined: the height difference to adjacent objects is calculated with the Diff(DSM, building) feature, objects meeting the condition -1 < Diff(DSM, building) < 1 are grown class by class with the region-growing algorithm according to the low-level threshold combinations of the four building classes, the newly grown regions are classified as buildings and become new seed points, and this process is iterated until no object meets the condition.
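The iterative growth just described might be sketched as follows; the object table, DSM values, adjacency list, and the simplified low-level threshold are illustrative assumptions:

```python
# Hedged sketch of step 105's iterative region growth: an object adjacent to a
# building seed whose DSM difference to that building lies in (-1, 1) and that
# passes a (simplified) low-level threshold is reclassified as a building and
# becomes a new seed; repeat until no object qualifies.

def grow_buildings(objects, adjacency, seeds, low_threshold):
    buildings = set(seeds)
    changed = True
    while changed:
        changed = False
        for b in list(buildings):
            for cand in adjacency.get(b, []):
                if cand in buildings:
                    continue
                diff_dsm = objects[cand]["dsm"] - objects[b]["dsm"]
                if -1.0 < diff_dsm < 1.0 and low_threshold(objects[cand]):
                    buildings.add(cand)  # grown object becomes a new seed
                    changed = True
    return buildings

objects = {
    1: {"dsm": 20.0, "brightness": 700},
    2: {"dsm": 20.5, "brightness": 650},  # grows from object 1
    3: {"dsm": 21.2, "brightness": 640},  # grows from object 2 in a later pass
    4: {"dsm": 5.0, "brightness": 660},   # ground: DSM gap too large
}
adjacency = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
result = grow_buildings(objects, adjacency, {1}, lambda o: o["brightness"] > 600)
```

Object 3 is only reachable once object 2 has been reclassified, which is exactly the iterated-seed behavior the method relies on.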
And further, merging the broken building pattern spots after the region growth to obtain the target building.
Based on the implementation of the embodiment of fig. 1, the invention provides a building extraction method for homologous high-resolution images and DSM data that is applicable to complex scenes with diverse building types, complicated structures, and rich roof materials; it requires no large set of training samples and enables fast, wide-area, and low-cost building extraction.
Further, as a refinement and extension to the embodiment shown in fig. 1, the embodiment of the present invention further provides another building extraction method for homologous high resolution images and DSM data, as shown in fig. 2, which specifically includes the following steps:
201. high-resolution orthophoto data of the investigation region and DSM data corresponding to the high-resolution orthophoto data are acquired.
This step is described in conjunction with step 101 in the above method, and the same contents are not repeated here.
Illustrating:
Taking Beijing-3 high-resolution stereo pair data acquired in 2021 at a resolution of 0.3 m as an example, the invention selects a high-resolution stereo remote sensing image (i.e., the high-resolution stereo pair data) of the Jiangnan District of Nanning City as research data. The data were captured by the Beijing-3 satellite on December 14, 2021; the image quality is good, with no cloud cover. The multispectral bands (red, green, blue, and near-infrared) have a spatial resolution of 1.2 m, and the panchromatic band has a spatial resolution of 0.3 m. The nadir high-resolution image data corresponding to the stereo pair are preprocessed by atmospheric correction, orthorectification, data fusion, and the like to form a high-resolution orthophoto; the DSM data are generated by processing the high-resolution stereo pair data, with a grid size of 0.3 m.
202. And performing multi-scale segmentation on the high-resolution orthophoto data of the research area and DSM data corresponding to the high-resolution orthophoto data by using a preset multi-scale segmentation rule to generate a father object layer and a child object layer.
This step is described in conjunction with step 102 in the above method, and the same contents are not repeated here.
High-resolution stereo pair data of the research area are acquired, including at least forward-looking, backward-looking, and nadir-looking high-resolution image data, comprising panchromatic band information and multispectral band information; preprocessing such as atmospheric correction, orthorectification, and data fusion is applied to the nadir high-resolution image data corresponding to the stereo pair to obtain the high-resolution orthophoto data; the DSM data are generated from the high-resolution stereo pair data.
Continuing with the example of 201, the illustration is:
The R, G, B, and NIR bands of the high-resolution orthophoto data of the research area are selected and, combined with the DSM band (i.e., the DSM data corresponding to the high-resolution orthophoto data), multi-scale segmentation with equal weights of 1:1:1:1:1 is performed to generate the parent and child object layers. Through adaptive recursive optimization, the optimal segmentation scale of the child object layer is determined as 40, with the shape index set to 0.2 and the compactness index set to 0.5; the optimal segmentation scale of the parent object layer is 95, with the shape index set to 0.3 and the compactness index set to 0.5.
203. And eliminating the non-building objects on the sub-object layer by utilizing a preset non-building object eliminating rule.
This step is described in conjunction with step 103 in the above method, and the same contents are not repeated here.
Based on the normalized vegetation index NDVI and the mean_blue characteristic, removing vegetation on the sub-object layer by using a preset vegetation removing rule; based on the normalized water index NDWI and Brightness characteristics, removing the water on the sub-object layer by utilizing a preset water removal rule; and eliminating shadows on the sub-object layer by using a preset shadow elimination rule based on the Brightness feature and the mean_scale_filtered feature. From step 202, a sub-object layer of the research area may be obtained, and non-building features such as vegetation, water, shadows, etc. in the sub-object layer need to be removed.
Continuing with the example of 202, the illustration:
(1) Removing vegetation
When processing the DSM data, tall vegetation interferes with building extraction, so vegetation is removed first. Vegetation is extracted using the normalized difference vegetation index NDVI, and non-vegetation objects with high NDVI values are rejected using the Mean Blue feature. The sample minimum is selected with the hierarchical sample extremum method to construct the NDVI threshold ladder: NDVI > 0.35; NDVI > 0.26 & Mean Blue < 500;
Taking an object meeting an advanced threshold as a vegetation seed, then combining an adjacent object relation, determining an object to be grown by utilizing the adjacent object relation, carrying out vegetation region cyclic growth, eliminating the vegetation object meeting the condition, and the characteristic combination threshold of the cyclic growth is as follows:
rel. Border to vegetation >0.1&NDVI>0.16&Mean Blue<530
(2) Removing water body
The water body is easily confused with shadow, so water is removed before shadow. During rejection, water is extracted using the normalized difference water index NDWI feature, and non-water objects with high NDWI values are rejected using the Brightness feature. The NDWI and Brightness thresholds select the sample minimum via the sample extremum method; the feature combination is: NDWI > 0.15 & Brightness > 310;
(3) Shadow removal
On the basis of removing vegetation and water, shadow regions, which are easily confused with black buildings, are removed using Brightness as the core feature and Mean scale_filtered and Rectangular fit as auxiliary features. The sample maximum is selected with the hierarchical sample extremum method to construct the Brightness threshold ladder: Brightness < 330; Brightness < 340 & 5 < Mean scale_filtered < 14;
Objects meeting the ladder threshold are taken as shadow seed nodes; then, combining adjacent-object relations, objects to be grown are determined and the shadow region is grown, rejecting shadow objects meeting the low-level threshold condition. The following feature combination is the specific description of shadow growth in this embodiment:
Rel.border to shadow >= 0.25 & Brightness < 330 & Diff(DSM, shadow) < 1.
204. And extracting the building objects on the sub-object layer by using a preset building classification extraction rule.
This step is described in conjunction with step 104 in the above method, and the same contents are not repeated here.
The building objects include at least highlight rooms, red roof rooms, blue roof rooms, brown roof rooms, gray roof rooms, and black roof rooms; the preset building classification extraction rules include at least a preset highlight room extraction rule, a preset red roof room extraction rule, a preset blue roof room extraction rule, a preset brown roof room extraction rule, a preset gray roof room extraction rule, and a preset black roof room extraction rule. Based on the Brightness, Mean NIR, Area, and Rectangular fit features, highlight rooms on the child object layer are extracted using the preset highlight room extraction rule; based on the RRI red-band ratio index, Area, and Brightness features, red roof rooms are extracted using the preset red roof room extraction rule; based on the Mean Blue feature, the BRDI blue-red deviation index, and the Area and Brightness features, blue roof rooms are extracted using the preset blue roof room extraction rule; based on the Diff(DSM), Mean scale_filtered, Shape index, Rectangular fit, and Area features, brown roof rooms are extracted using the preset brown roof room extraction rule; based on the Diff(DSM, unclassified), Mean scale_filtered, Shape index, Rectangular fit, and Area features, gray roof rooms are extracted using the preset gray roof room extraction rule; and based on the Diff(DSM, unclassified), Mean scale_filtered, Mean Red, Shape index, and Area features, black roof rooms are extracted using the preset black roof room extraction rule.
Continuing with the example of 203, the illustration:
After vegetation, water, and shadow are removed from the object layer, buildings are divided by roof color into six classes (highlight room, red roof room, blue roof room, brown roof room, gray roof room, and black roof room) and extracted per class; brightness ladders are constructed with the hierarchical sample extremum method to determine the thresholds for each building class.
(1) Extracting a highlight room:
The core feature of highlight room extraction is the Brightness value, combined with the feature combination formed by Mean NIR, Area, and Rectangular fit. A number of highlight room samples are selected, the hierarchical sample extremum method is applied to each spatial feature of the samples, and the optimal spatial-feature ladder threshold combination of the highlight room is obtained, according to which highlight rooms are extracted. The following feature combinations are the specific description in this embodiment: (1) Brightness > 700 & Mean NIR > 800 & Area > 60 & Rectangular fit > 0.87; (2) Brightness > 660 & Mean NIR > 750 & Area > 500 & Rectangular fit > 0.81; (3) Brightness > 600 & Mean NIR > 600 & Area > 1500 & Rectangular fit > 0.78.
(2) Extracting a red roof house:
The red roof room has high gray values in the red band; the RRI (red ratio index) constructed by the invention reflects the spectral characteristics of the red roof room to the greatest extent. During extraction, the RRI red-band ratio index is taken as the core feature, constrained by the feature combination formed by Area and Brightness. A number of red roof room samples are selected, the hierarchical sample extremum method is applied to each spatial feature of the samples, and the optimal spatial-feature ladder threshold combination of the red roof room is obtained, according to which red roof rooms are extracted. The following feature combinations are the specific description in this embodiment: (1) RRI > 0.025; (2) RRI > 0.017 & Area > 100 & Brightness < 1000.
(3) Extracting a blue top room:
Similar to the red roof room extraction idea, the blue roof room has high gray values in the blue band; the BRDI (blue red deviation index) constructed by the invention reflects the spectral characteristics of the blue roof room to a certain extent. During extraction, BRI, Area, and Brightness are selected as auxiliary features, and a ladder threshold is established around Mean Blue to form feature combinations that jointly extract the blue roof room. A number of blue roof room samples are selected, the hierarchical sample extremum method is applied to each spatial feature of the samples, and the optimal spatial-feature ladder threshold combination of the blue roof room is obtained, according to which blue roof rooms are extracted. The following feature combinations are the specific description in this embodiment: (1) Mean Blue > 500 & BRI > 0.4 & Brightness > 425 & Area > 100; (2) Mean Blue > 550 & BRI > 0.17 & Brightness > 425 & Area > 100; (3) Mean Blue > 600 & BRI > 0.05 & Brightness > 425 & Area > 100 & Rel.border to blue roof room >= 0.01.
(4) Extracting brown top house:
A number of brown roof room samples are selected; the Brightness interval of the brown roof room is determined as Brightness ∈ (500, 650) by the sample extremum method, and thresholds are determined with the Mean scale_filtered, Shape index, and Area features using the hierarchical sample extremum method to extract brown roof rooms. The following feature combinations are the specific description in this embodiment: (1) 500 < Brightness < 650 & Area > 100 & Mean scale_filtered > 20 & Shape index < 2.3; (2) 500 < Brightness < 650 & Area > 100 & Mean scale_filtered > 15 & Shape index < 2. At this point a number of bare-ground objects remain among the extracted brown roof rooms; to ensure extraction accuracy, bare ground is distinguished from the brown roof room by the height difference information provided by the Diff(DSM) feature, and brown objects satisfying Diff(DSM) < 0 are removed.
(5) Extracting the gray roof room:
The gray roof room has no obvious spectral features, so a Brightness interval must be determined when describing gray ground objects; the interval is determined as Brightness ∈ (450, 550) by the sample extremum method, and thresholds are determined with the Mean scale_filtered, Shape index, and Area features using the hierarchical sample extremum method to extract gray ground objects. The following feature combinations are the specific description in this embodiment: (1) 450 < Brightness < 550 & Area > 100 & Mean scale_filtered > 30 & Shape index < 2.5; (2) 450 < Brightness < 550 & Area > 300 & Mean scale_filtered > 15 & Shape index < 3; (3) 450 < Brightness < 550 & Area > 300 & Rectangular fit > 0.85 & Shape index < 2 & Diff(DSM, unclassified) > 0. Only gray ground objects are extracted at this point, and many non-building objects such as cement ground are mixed into the result. To ensure extraction accuracy, the gray roof room is distinguished from other gray ground features by the height difference information provided by the Diff(DSM) feature, and gray features satisfying Diff(DSM) < 0 are removed.
(6) Extracting a black top room:
The spectral features of the black roof room are similar to shadow, so a Brightness interval is determined when describing the black roof room to distinguish it from shadow as far as possible; the interval is determined as Brightness ∈ (350, 450) by the sample extremum method, and thresholds are determined with the Mean scale_filtered, Mean Red, Shape index, and Area features using the hierarchical sample extremum method to extract black ground objects. The following feature combinations are the specific description in this embodiment: (1) 350 < Brightness < 450 & Area > 100 & Mean scale_filtered > 25 & Mean Red > 300 & Shape index < 2.5; (2) 350 < Brightness < 450 & Area > 400 & Mean scale_filtered > 20 & Mean Red > 300 & Shape index < 2.5; (3) 350 < Brightness < 450 & Area > 300 & Rectangular fit > 0.85 & Shape index < 2 & Diff(DSM, unclassified) > 0. Only black ground objects are extracted at this point, and many non-building objects such as roads and dark ground are mixed into the result. To ensure extraction accuracy, the black roof room is distinguished from other black ground features by the height difference information provided by the Diff(DSM) feature, and black features satisfying Diff(DSM) < 0 are removed.
205. And based on the building objects extracted from the sub-object layer, growing building areas corresponding to the building objects by utilizing a preset building area growing rule, and merging broken building pattern spots corresponding to the building objects after the building areas grow to obtain the target building.
This step is described in conjunction with step 105 of the above method, and the same contents are not repeated here.
Based on Diff (DSM) features and Mean scale_filtered features, removing objects with a height lower than that of the nearby unclassified category from the building objects to obtain remaining building objects, and determining the remaining building objects as building seed nodes; and growing a building area by taking an object adjacent to the building seed node as a candidate area to obtain broken building pattern spots, and merging the broken building pattern spots to obtain the target building.
Continuing with the example of 204, the illustration:
The highlight rooms, red roof rooms, blue roof rooms, brown roof rooms, gray roof rooms, and black roof rooms extracted in step 204 are all classified as buildings; objects meeting the condition Diff(DSM) < 0.5 & Mean scale_filtered < 20 are removed from the building class, and the remaining building objects are taken as seed nodes. On this basis, objects meeting the adjacency relation Rel.border to building > 0 are taken as candidate regions, and objects meeting -1 < Diff(DSM, building) < 1 are grown class by class with the region-growing algorithm according to the low-level threshold combinations of the four classes (highlight room, gray roof room, brown roof room, and black roof room). The following feature combinations are the specific description of growth optimization for each building class in this embodiment:
Specific description of highlight room growth optimization: Brightness > 600 & Mean NIR > 750 & Area > 500 & Rel.border to building > 0.5 & -1 < Diff(DSM, building) < 1;
Specific description of gray roof room growth optimization: Brightness > 520 & Mean NIR > 600 & Area > 500 & Rel.border to building > 0.4 & -1 < Diff(DSM, building) < 1;
Specific description of brown roof room growth optimization: Brightness > 460 & Mean NIR > 400 & Area > 900 & Rel.border to building > 0.4 & -1 < Diff(DSM, building) < 1;
Specific description of black roof room growth optimization: Brightness > 360 & Mean NIR > 370 & Area > 300 & Rel.border to building > 0.4 & -1 < Diff(DSM, building) < 1;
The buildings after region growth are merged as new seed points, and the process is iterated continuously until no object meets the condition.
Further, the broken building patches after region growth are merged, and the initial extraction result is supplemented and optimized to obtain the target building.
206. And optimizing the target building by using a preset optimizing rule.
The preset optimization rules at least comprise building supplementary optimization, boundary smooth optimization and vector optimization.
(1) Building supplement: patches mistakenly extracted as vegetation or shadow are converted back into buildings.
(a) Vegetation -> building: vegetation objects are first merged; objects surrounded by buildings are screened from the vegetation using the feature Rel.border to building >= a, a ∈ (0, 1), where the value of a is determined by the hierarchical sample extremum method; a ladder threshold combination is built with the Area and Rectangular fit features, and vegetation meeting the conditions is converted into building. The Area and Rectangular fit thresholds select the sample maximum via the sample extremum method. For example: vegetation is first merged, objects to be studied are screened from the vegetation objects by Rel.border to building >= 0.7, and vegetation meeting Rel.border to building >= 0.95 is converted directly into building; a ladder threshold combination Rectangular fit < 0.8 & Area < 2000 is built from the Area and Rectangular fit features, and vegetation meeting the conditions is converted into building.
(b) Shadow -> building: objects surrounded by buildings are screened from the shadow objects using the feature Rel.border to building >= a, a ∈ (0, 1), with a determined by the hierarchical sample extremum method; shadow objects satisfying Diff(DSM, building) > 0 are calculated and converted into building. Shadows are then merged, shadow objects surrounded by buildings are again screened using Rel.border to building >= a, a ∈ (0, 1), a determined by the hierarchical sample extremum method, shadow objects satisfying Diff(DSM, building) > -0.5 are calculated, and, constrained by the Brightness feature, converted into building. For example: objects to be studied are screened from the shadow objects by Rel.border to shadow >= 0.8, shadow objects satisfying Diff(DSM, building) >= 0 are calculated and converted into building; further, the shadow objects are merged, objects to be studied are again screened by Rel.border to shadow >= 0.5, shadow objects satisfying Diff(DSM, building) >= -0.5 are calculated and, constrained by Brightness > 300, converted into building.
(2) Boundary smoothing: and (3) carrying out post-processing on the extraction result, removing noise and holes by adopting opening and closing operation, and optimizing the extraction result of the building contour by combining small spot removing operation.
For example: to optimize the building extraction result, first set the structuring-element size to 5 and apply an opening operation to the building objects, removing isolated small points, burrs and bridges; the following feature combinations specify the small-spot removal after the opening operation: (1) Area < 55 & Rel.border to building < 0.8; (2) Area < 110 & Rel.border to building < 0.8 & Shape index > 1.5. Then set the structuring-element size to 4 and apply a closing operation to the building objects, filling small holes and closing gaps; the following feature combination specifies the small-spot removal after the closing operation: Area < 240 & Rel.border to building < 0.8 & Shape index > 1.5.
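A minimal sketch of the opening/closing post-processing, assuming a binary building mask in NumPy and the structuring-element sizes (5 and 4) quoted above; the subsequent feature-based spot removal is not shown:

```python
import numpy as np
from scipy import ndimage

def smooth_building_mask(mask, open_size=5, close_size=4):
    """Opening removes isolated points, burrs and bridges; closing fills small
    holes and closes gaps, matching the structuring-element sizes in the text."""
    opened = ndimage.binary_opening(mask, structure=np.ones((open_size, open_size)))
    closed = ndimage.binary_closing(opened, structure=np.ones((close_size, close_size)))
    return closed
```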
(3) Vector optimization: to address the jagged appearance of the extracted building outlines, the outlines of the extracted building objects are converted to vectors, and the building outlines are optimized on that basis. The simplification adopts a graded processing strategy: the Douglas-Peucker algorithm is used to simplify the polygons and, combined with the graded sample extremum method, different simplification percentages are set in three tiers according to the Area feature to simplify the building outlines tier by tier. The objective of the Douglas-Peucker algorithm is to find a similar curve with fewer points; the degree of simplification is controlled by the maximum distance between the original shape and the approximating vector, or by the percentage of points retained from the original shape.
For example: to address the jagged appearance of the extracted building outlines, convert the outlines of the extracted building objects to vectors and optimize the building outlines on that basis. First, use the Douglas-Peucker algorithm with the maximum node-to-segment distance set to 0.3 to preliminarily simplify the vector boundaries. The subsequent simplification then adopts the graded processing strategy, combining the graded sample extremum method to simplify the building outlines in three tiers with different simplification percentages according to the Area feature. The following feature combinations detail the graded vector simplification: (1) Area < 5000: percentage = 80%; (2) 5000 < Area < 30000: percentage = 50%; (3) Area > 30000: percentage = 35%.
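The Douglas-Peucker step can be sketched in pure Python as below; `tol` plays the role of the 0.3 maximum node-to-segment distance, while the three-tier percentage-based pass would be layered on top and is not shown:

```python
def douglas_peucker(points, tol):
    """Recursive Douglas-Peucker simplification: keep the point farthest from the
    chord if its perpendicular distance exceeds `tol`, otherwise drop the interior."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    dmax, idx = -1.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm  # distance to the chord
        if d > dmax:
            dmax, idx = d, i
    if dmax <= tol:
        return [points[0], points[-1]]
    left = douglas_peucker(points[: idx + 1], tol)
    right = douglas_peucker(points[idx:], tol)
    return left[:-1] + right  # splice, dropping the duplicated split point
```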
Based on the implementation of fig. 2, the present invention provides a building extraction method for homologous high-resolution images and DSM data, and proposes an RRI (Red Ratio Index) feature, applied in the red-roof-building extraction step to effectively extract red features; by creating the Diff(DSM) feature, the elevation difference between adjacent objects is calculated to form a relative height value, used as a spatial feature for screening elevated features in building extraction. This feature is applied in the classified building extraction step and in the growth and optimization of the extracted building pattern spots. Satellite DSM data is introduced so that buildings are extracted according to their elevation features, which differ most clearly from other low-lying ground features: in the image segmentation process, the image bands and the DSM band jointly participate in multi-scale segmentation to form objects, improving segmentation quality; in the building extraction process, the adjacent-object height-difference feature Diff(DSM) is used repeatedly, exploiting the advantages of DSM data and effectively resolving the bottlenecks met when extracting buildings from two-dimensional imagery alone, such as confusion between brown-roof buildings and bare land, gray-roof buildings and cement ground, and black-roof buildings and roads.
The invention adopts the graded sample extremum method to determine stepped thresholds and, combined with other features, establishes threshold combinations of optimal features to realize graded extraction of complex ground features; using the small elevation difference between adjacent ground surfaces as a reference, a threshold is set for ground growth in the region growing algorithm, low non-building objects are continuously removed from the candidate buildings, and the remaining objects are finally taken as buildings. The invention can extract buildings at large area scale and, compared with aerial oblique photography (unmanned aerial vehicles, etc.) or LiDAR data as the data source, greatly reduces cost; it is suitable for complex scene areas with diverse building types, complex structures and rich material types, requires no large number of training samples, and realizes rapid, wide-area, low-cost building extraction.
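As a rough sketch of the seeded region growing described above: building seeds absorb adjacent unclassified objects whose elevation is close to their own. The object adjacency graph and the 1.5 m growth tolerance are illustrative assumptions; the patent derives its thresholds from samples:

```python
def grow_buildings(heights, labels, adjacency, grow_tol=1.5):
    """Seeded region growing over an object adjacency graph: iterate until no
    adjacent unclassified object within `grow_tol` metres of a building remains."""
    changed = True
    while changed:
        changed = False
        for a, b in adjacency:
            for seed, cand in ((a, b), (b, a)):
                if (labels[seed] == "building" and labels[cand] == "unclassified"
                        and abs(heights[seed] - heights[cand]) <= grow_tol):
                    labels[cand] = "building"   # candidate joins the building region
                    changed = True
    return labels
```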
Furthermore, as an implementation of the method shown in fig. 1, the embodiment of the invention further provides a building extraction device for homologous high-resolution images and DSM data, which is used for implementing the method shown in fig. 1. The embodiment of the device corresponds to the embodiment of the method, and for convenience of reading, details of the embodiment of the method are not repeated one by one, but it should be clear that the device in the embodiment can correspondingly realize all the details of the embodiment of the method. As shown in fig. 3, the apparatus includes:
an acquisition unit 31 for acquiring high-resolution orthophoto data of a study area and DSM data corresponding to the high-resolution orthophoto data;
a dividing unit 32, configured to perform multi-scale division, using a preset multi-scale division rule, on the high-resolution orthophoto data of the research area obtained from the obtaining unit 31 and the DSM data corresponding to the high-resolution orthophoto data, to generate a parent object layer and a child object layer, where the parent object layer is used to generate features that assist the child object layer in building extraction;
a removing unit 33, configured to remove non-building objects on the sub-object layer obtained from the dividing unit 32 by using a preset non-building object removing rule, where the non-building objects include at least vegetation, water, and shadows;
An extracting unit 34, configured to extract the building objects on the sub-object layer obtained from the rejecting unit 33 by using a preset building classification extracting rule, where the preset building classification extracting rule performs classification extraction according to different colors of building roofs;
and a merging unit 35, configured to perform building area growth corresponding to the building object according to a preset building area growth rule based on the building object extracted from the sub-object layer obtained from the extracting unit 34, and merge broken building pattern spots corresponding to the building object after the building area growth, so as to obtain a target building.
Further, as an implementation of the method shown in fig. 2, another apparatus for extracting building with high resolution image and DSM data is provided in the embodiment of the present invention, for implementing the method shown in fig. 2. The embodiment of the device corresponds to the embodiment of the method, and for convenience of reading, details of the embodiment of the method are not repeated one by one, but it should be clear that the device in the embodiment can correspondingly realize all the details of the embodiment of the method. As shown in fig. 4, the apparatus includes:
An acquisition unit 31 for acquiring high-resolution orthophoto data of a study area and DSM data corresponding to the high-resolution orthophoto data;
a dividing unit 32, configured to perform multi-scale division, using a preset multi-scale division rule, on the high-resolution orthophoto data of the research area obtained from the obtaining unit 31 and the DSM data corresponding to the high-resolution orthophoto data, to generate a parent object layer and a child object layer, where the parent object layer is used to generate features that assist the child object layer in building extraction;
a removing unit 33, configured to remove non-building objects on the sub-object layer obtained from the dividing unit 32 by using a preset non-building object removing rule, where the non-building objects include at least vegetation, water, and shadows;
an extracting unit 34, configured to extract the building objects on the sub-object layer obtained from the rejecting unit 33 by using a preset building classification extracting rule, where the preset building classification extracting rule performs classification extraction according to different colors of building roofs;
a merging unit 35, configured to perform building area growth corresponding to the building object by using a preset building area growth rule based on the building object extracted from the sub-object layer obtained from the extracting unit 34, and merge broken building pattern spots corresponding to the building object after the building area growth, so as to obtain a target building;
An optimizing unit 36, configured to perform optimization processing on the target building obtained from the merging unit 35 by using preset optimization rules, where the preset optimization rules include at least building supplementary optimization, boundary smoothing optimization, and vector optimization.
Further, the obtaining unit 31 includes:
a first acquisition module 311, configured to acquire high-resolution stereopair data of the study area, where the high-resolution stereopair data includes at least front-view high-resolution image data, rear-view high-resolution image data, and lower-view high-resolution image data, and includes panchromatic band and multispectral band information;
a second obtaining module 312, configured to perform preprocessing such as atmospheric correction, orthographic correction, and data fusion on the lower-view high-resolution image data corresponding to the high-resolution stereopair data obtained from the first obtaining module 311, to obtain the high-resolution orthographic image data;
a generation module 313 for generating the DSM data from the high resolution stereopair data obtained from the first acquisition module 311.
Further, the dividing unit 32 includes:
an obtaining module 321, configured to obtain R, G, B and data corresponding to an NIR band from high resolution orthophoto data of the research area;
The segmentation module 322 is configured to perform multi-scale segmentation on the data corresponding to the R, G, B and NIR bands and the DSM data obtained from the obtaining module 321 according to a preset proportion weight, so as to generate the parent object layer and the child object layer.
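The preset proportion weights determine how strongly each layer drives object formation. The sketch below is a simplified stand-in for the spectral part only, not the full multiresolution criterion (which also weighs shape heterogeneity and compactness); the band names and weight values are illustrative assumptions:

```python
import statistics

def weighted_heterogeneity(band_values, band_weights):
    """Band-weighted spectral heterogeneity of one segment: the weight-averaged
    per-band standard deviation, so a larger DSM weight lets elevation variation
    dominate the decision of whether pixels merge into one object."""
    total_w = sum(band_weights.values())
    return sum(w * statistics.pstdev(band_values[b])
               for b, w in band_weights.items()) / total_w
```

Giving the DSM layer a higher weight makes segments that mix roof and ground pixels look more heterogeneous, so they are less likely to be merged.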
Further, the rejection unit 33 includes:
the first rejecting module 331 is configured to reject vegetation on the sub-object layer by using a preset vegetation reject rule based on the normalized vegetation index NDVI and the mean_blue feature;
the second rejection module 332 is configured to reject the water on the sub-object layer obtained from the first rejection module 331 using a preset reject water rule based on the normalized water index NDWI and bright feature;
and a third culling module 333, configured to cull shadows on the sub-object layer obtained from the second culling module 332 by using a preset culling shadow rule based on the bright feature and the mean_scale_filtered feature.
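The first two culling modules rely on the standard index definitions NDVI = (NIR - R)/(NIR + R) and NDWI = (G - NIR)/(G + NIR). A minimal per-pixel sketch follows, with illustrative thresholds in place of the patent's sample-derived ones and without the Brightness and Mean_scale_filtered shadow test:

```python
import numpy as np

def reject_non_buildings(nir, red, green, ndvi_thr=0.3, ndwi_thr=0.2):
    """Mask out vegetation (high NDVI) and water (high NDWI) pixels; returns a
    boolean `keep` array that is True where neither index flags the pixel."""
    eps = 1e-9  # avoid division by zero on empty bands
    ndvi = (nir - red) / (nir + red + eps)
    ndwi = (green - nir) / (green + nir + eps)
    keep = (ndvi < ndvi_thr) & (ndwi < ndwi_thr)
    return keep, ndvi, ndwi
```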
Further, the building object at least comprises a highlight room, a red roof room, a blue roof room, a brown roof room, a gray roof room and a black roof room; the preset building classification extraction rules at least comprise preset highlight room extraction rules, preset red roof room extraction rules, preset blue roof room extraction rules, preset brown roof room extraction rules, preset gray roof room extraction rules and preset black roof room extraction rules; the extraction unit 34 includes:
A first extraction module 341, configured to extract highlight rooms on the sub-object layer using the preset highlight-room extraction rule, based on the Brightness feature, the Mean_NIR feature, the Area feature, and the Rectangular fit feature;
a second extraction module 342, configured to extract red roof rooms on the sub-object layer using the preset red-roof-room extraction rule, based on the RRI (Red Ratio Index) feature, the Area feature, and the Brightness feature;
a third extraction module 343, configured to extract blue roof rooms on the sub-object layer using the preset blue-roof-room extraction rule, based on the Mean Blue feature, the BRDI blue-red difference index, the Area feature, and the Brightness feature;
a fourth extraction module 344, configured to extract brown roof rooms on the sub-object layer using the preset brown-roof-room extraction rule, based on the Diff(DSM, unclassified) feature, the Mean scale_filtered feature, the Shape index feature, the Rectangular fit feature, and the Area feature;
a fifth extraction module 345, configured to extract gray roof rooms on the sub-object layer using the preset gray-roof-room extraction rule, based on the Diff(DSM, unclassified) feature, the Mean scale_filtered feature, the Shape index feature, the Rectangular fit feature, and the Area feature;
A sixth extraction module 346, configured to extract black roof rooms on the sub-object layer using the preset black-roof-room extraction rule, based on the Diff(DSM, unclassified) feature, the Mean scale_filtered feature, the Mean Red feature, the Shape index feature, and the Area feature.
Further, the merging unit 35 includes:
a culling module 351, configured to cull objects with a height lower than that of a neighboring unclassified category in the building objects based on Diff (DSM) features and Mean scale_filtered features, obtain remaining building objects, and determine the remaining building objects as building seed nodes;
and a merging module 352, configured to perform building area growth with an object adjacent to the building seed node obtained from the culling module 351 as a candidate area, obtain broken building spots, and merge the broken building spots to obtain a target building.
Further, an embodiment of the present invention further provides a processor, where the processor is configured to run a program, where the program runs to perform the building extraction method for homologous high resolution images and DSM data described in fig. 1-2.
Further, an embodiment of the present invention further provides a storage medium, where the storage medium is configured to store a computer program, where the computer program controls a device where the storage medium is located to execute the method for extracting the building from the homologous high resolution image and the DSM data described in fig. 1-2.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
It will be appreciated that the relevant features of the methods and apparatus described above may be referenced to one another. In addition, "first", "second", and the like in the above embodiments are used to distinguish the embodiments and do not represent the relative merits of the embodiments.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein. The structure required to construct such a system is apparent from the description above. In addition, the present invention is not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided to disclose an enabling and best mode of the present invention.
Furthermore, the memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, etc., such as Read Only Memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (7)

1. A method of building extraction of homologous high resolution imagery and DSM data, the method comprising:
acquiring high-resolution orthographic image data of a research area and DSM data corresponding to the high-resolution orthographic image data;
performing multi-scale segmentation on the high-resolution orthophoto data of the research area and the DSM data corresponding to the high-resolution orthophoto data by using a preset multi-scale segmentation rule to generate a parent object layer and a child object layer, wherein the child object layer is used for extracting building objects, and the parent object layer is used for generating features to assist the child object layer in extracting the building objects; the preset multi-scale segmentation rule segments the image into objects by setting the weights of the layers participating in segmentation, the shape heterogeneity, the spectral heterogeneity and the compactness;
Removing non-building objects on the sub-object layer by using a preset non-building object removing rule, wherein the non-building objects at least comprise vegetation, water and shadows;
extracting building objects on the sub object layer by using a preset building classification extraction rule, wherein the preset building classification extraction rule is used for classifying and extracting according to different colors of building roofs;
based on the building objects extracted from the sub-object layer, growing building areas corresponding to the building objects by utilizing a preset building area growing rule, and merging broken building pattern spots corresponding to the building objects after the building areas grow to obtain a target building;
the method for eliminating the non-building objects on the sub-object layer by using a preset non-building object eliminating rule comprises the following steps:
based on the normalized vegetation index NDVI and the Mean_blue characteristic, removing vegetation on the sub-object layer by using a preset vegetation removal rule; wherein the Mean_blue feature is
Mean_blue = (1/n) * Σ_{i=1}^{n} c_i
the mean value describing the average intensity of the object on the blue band, wherein V represents the segmented object, i represents a pixel on the blue band, c_i is its gray value, and n represents the number of pixels;
based on the normalized water index NDWI and the Brightness characteristic, removing the water body on the sub-object layer by using a preset water-body removal rule; wherein the Brightness feature is
Brightness = (1/n) * Σ_{i=1}^{n} c̄_i
where Brightness represents the spectral mean of the object over all bands, n is the total number of multispectral bands of the image, and c̄_i is the mean gray value of the object on the i-th band;
based on the Brightness feature and the Mean scale_filtered feature, eliminating shadows on the sub-object layer by using a preset shadow elimination rule; the Mean scale_filtered feature is created as follows: first, the edge features of the image are calculated with an absolute-mean-deviation filtering algorithm to generate a filtered band; then, in the parent object layer created by large-scale segmentation, the mean feature Mean scale_filtered of the filtered band is calculated per parent object to generate the scale_filtered layer; the Mean scale_filtered features of the large-scale object patches in the parent object layer are inherited by the small-scale object patches in the child object layer, providing context features for the objects in the child object layer;
wherein the building object at least comprises a highlight house, a red roof house, a blue roof house, a brown roof house, a gray roof house and a black roof house; the preset building classification extraction rules at least comprise preset highlight room extraction rules, preset red roof room extraction rules, preset blue roof room extraction rules, preset brown roof room extraction rules, preset gray roof room extraction rules and preset black roof room extraction rules;
The extracting the building object on the sub-object layer by using a preset building classification extraction rule comprises the following steps:
extracting the highlight rooms on the sub-object layer by using a preset highlight-room extraction rule based on the Brightness feature, the Mean_NIR feature, the Area feature and the Rectangular fit feature; wherein the Area feature is
Area = Σ_{i=1}^{n} a_i
where Area describes the size of the object and a_i is the area of pixel i within the object;
based on RRI red wave duty ratio index features, area features and Brightness features, extracting red roof rooms on the sub-object layer by using a preset red roof room extraction rule;
based on the Mean Blue characteristic, the BRDI Blue-red difference index, the Area characteristic and the Brightness characteristic, extracting the Blue top room on the sub-object layer by utilizing a preset Blue top room extraction rule; wherein the BRDI blue-red difference index is
wherein Red is the red band gray value and Blue is the blue band gray value;
based on the Diff(DSM, unclassified) feature, the Mean scale_filtered feature, the Shape index feature, the Rectangular fit feature and the Area feature, extracting the brown roof rooms on the sub-object layer by using the preset brown-roof-room extraction rule; wherein the Diff(DSM, unclassified) feature follows
Diff(DSM, class) = Mean Diff to (DSM, class) = v_a − v_b
the Diff(DSM) feature describing the difference between two adjacent objects, its result being the average difference on the DSM band between object a and an adjacent object b of the given class, where v_a represents the feature value of object a on the DSM band and v_b represents the feature value of object b on the DSM band;
based on the Diff(DSM, unclassified) feature, the Mean scale_filtered feature, the Shape index feature, the Rectangular fit feature and the Area feature, extracting the gray roof rooms on the sub-object layer by using a preset gray-roof-room extraction rule;
based on the Diff(DSM, unclassified) feature, the Mean scale_filtered feature, the Mean Red feature, the Shape index feature and the Area feature, extracting the black roof rooms on the sub-object layer by using a preset black-roof-room extraction rule;
the method for obtaining the target building based on the building objects extracted from the sub-object layer comprises the steps of utilizing a preset building area growth rule to grow building areas corresponding to the building objects, combining broken building pattern spots corresponding to the building objects after the building areas grow, and obtaining the target building, wherein the method comprises the following steps:
based on the Diff(DSM) feature and the Mean scale_filtered feature, removing from the building objects those whose height is lower than that of the adjacent unclassified class to obtain the remaining building objects, and determining the remaining building objects as building seed nodes;
And growing a building area by taking an object adjacent to the building seed node as a candidate area to obtain broken building pattern spots, and merging the broken building pattern spots to obtain the target building.
2. The method of claim 1, wherein the acquiring high resolution orthographic image data of the study area and DSM data corresponding to the high resolution orthographic image data comprises:
acquiring high-resolution stereopair data of the research area, wherein the high-resolution stereopair data at least comprises front-view high-resolution image data, rear-view high-resolution image data and lower-view high-resolution image data, and comprises panchromatic wave band information and multispectral wave band information;
performing preprocessing of atmospheric correction, orthographic correction and data fusion on the downward-looking high-resolution image data corresponding to the high-resolution stereopair data to obtain the high-resolution orthographic image data;
the DSM data is generated from the high resolution stereopair data.
3. The method of claim 1, wherein the multi-scale segmentation of the high resolution orthographic image data of the study area and the DSM data corresponding to the high resolution orthographic image data using a preset multi-scale segmentation rule to generate a parent object layer and a child object layer, comprises:
Acquiring R, G, B and NIR band corresponding data from the high-resolution orthophoto data of the research area;
and respectively carrying out multi-scale segmentation on the data corresponding to the R, G, B and NIR wave bands and the DSM data according to preset proportion weights to generate the father object layer and the child object layer.
4. A method according to any one of claims 1-3, wherein after said building object extracted from said sub-object layer is based on a preset building area growing rule to perform building area growth corresponding to said building object, and broken building spots corresponding to said building object after said building area growth are combined to obtain a target building, the method further comprises:
and optimizing the target building by using a preset optimizing rule, wherein the preset optimizing rule at least comprises building supplementary optimization, boundary smooth optimization and vector optimization.
5. A building extraction apparatus for homologous high resolution imaging and DSM data, comprising:
an acquisition unit, configured to acquire high-resolution orthophoto data of a study area and DSM data corresponding to the high-resolution orthophoto data;
The segmentation unit is used for carrying out multi-scale segmentation on the high-resolution orthographic image data of the research area and the DSM data corresponding to the high-resolution orthographic image data by utilizing a preset multi-scale segmentation rule to generate a parent object layer and a child object layer, wherein the child object layer is used for extracting building objects, and the parent object layer is used for generating features to assist the child object layer in extracting the building objects; the preset multi-scale segmentation rule segments the image into objects by setting the weights of the layers participating in segmentation, the shape heterogeneity, the spectral heterogeneity and the compactness;
a rejection unit, configured to reject non-building objects on the child object layer using a preset non-building-object rejection rule, wherein the non-building objects comprise at least vegetation, water bodies and shadows;
an extraction unit, configured to extract the building objects on the child object layer using preset building classification extraction rules, wherein the preset building classification extraction rules perform classified extraction according to the different colors of building roofs;
a merging unit, configured to perform, based on the building objects extracted from the child object layer, the building area growth corresponding to the building objects using a preset building area growing rule, and to merge the broken building patches corresponding to the building objects after the building area growth to obtain a target building;
wherein the rejection unit comprises:
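A minimal Python sketch of how such non-building rejection rules can be applied per object. The NDVI/NDWI formulas are the standard normalized-difference definitions; the threshold values are hypothetical, since the patent does not publish its rule parameters:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from the object's NIR and
    red band means (standard definition)."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def ndwi(green, nir):
    """Normalized difference water index from the green and NIR band
    means (McFeeters form, assumed here)."""
    return (green - nir) / (green + nir) if (green + nir) else 0.0

# Hypothetical thresholds -- the patent's actual values are not disclosed.
def is_vegetation(nir, red, mean_blue, ndvi_t=0.3, blue_t=120.0):
    """Vegetation rule per the claim: high NDVI combined with Mean_Blue."""
    return ndvi(nir, red) > ndvi_t and mean_blue < blue_t

def is_water(green, nir, brightness, ndwi_t=0.2, bright_t=80.0):
    """Water rule per the claim: high NDWI combined with low Brightness."""
    return ndwi(green, nir) > ndwi_t and brightness < bright_t
```

Each rule combines a spectral index with one of the object features defined below, mirroring the first and second rejection modules.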
a first rejection module, configured to reject vegetation on the child object layer using a preset vegetation rejection rule based on the normalized difference vegetation index NDVI and the Mean_Blue feature; wherein the Mean_Blue feature is
Mean_Blue = (1/n) * Σ_{i∈V} b_i,
which describes the average intensity of the object on the blue band, where V denotes the segmented object, i denotes a pixel of the object on the blue band, b_i its gray value, and n the number of pixels;
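The Mean_Blue feature is the per-object mean of the blue-band gray values, as a short sketch (the pixel values are made up for illustration):

```python
def mean_band(object_pixels):
    """Mean intensity of an object on one band:
    Mean = (1/n) * sum(b_i for pixel i in the object)."""
    n = len(object_pixels)
    return sum(object_pixels) / n if n else 0.0

# Blue-band gray values of one segmented object's pixels (example data).
mean_blue = mean_band([90.0, 100.0, 110.0])  # -> 100.0
```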
a second rejection module, configured to reject water bodies on the child object layer using a preset water rejection rule based on the normalized difference water index NDWI and the Brightness feature; wherein the Brightness feature is
Brightness = (1/n) * Σ_{i=1..n} c_i,
the spectral mean of the object over all bands, where n is the total number of multispectral bands of the image and c_i is the object's mean gray value on the i-th band;
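Brightness averages the object's per-band means over all multispectral bands; a minimal sketch with four example band means (R, G, B, NIR):

```python
def brightness(band_means):
    """Brightness = (1/n) * sum of the object's mean gray value c_i on
    each of the n multispectral bands."""
    n = len(band_means)
    return sum(band_means) / n if n else 0.0

# Example per-band object means for R, G, B, NIR.
b = brightness([120.0, 110.0, 100.0, 150.0])  # -> 120.0
```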
a third rejection module, configured to reject shadows on the child object layer using a preset shadow rejection rule based on the Brightness feature and the Mean scale_filtered feature; the Mean scale_filtered feature is created as follows: first, edge features of the image are computed by an absolute-average-deviation filtering algorithm to generate a filtered band; then, in the parent object layer created by the large-scale segmentation, the mean feature Mean scale_filtered of the filtered band is computed per parent object to generate a scale_filtered layer; finally, the Mean scale_filtered features of the large-scale object patches in the parent object layer are inherited by the small-scale object patches in the child object layer, providing context features for the objects in the child object layer;
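The two steps of the Mean scale_filtered construction can be sketched as follows. The window radius and the clamped-border handling are assumptions; the patent only names the absolute-average-deviation filter and the parent-to-child inheritance:

```python
def abs_avg_deviation_filter(img, radius=1):
    """Edge band per the description: for each pixel, the mean absolute
    deviation of its (2r+1)x(2r+1) window from the window mean.
    `img` is a list of rows of gray values; borders clamp to the edge."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            win = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                   for dy in range(-radius, radius + 1)
                   for dx in range(-radius, radius + 1)]
            m = sum(win) / len(win)
            out[y][x] = sum(abs(v - m) for v in win) / len(win)
    return out

def inherit_parent_feature(child_to_parent, parent_feature):
    """Each small-scale child object inherits Mean scale_filtered from its
    large-scale parent object, giving the child layer context features."""
    return {c: parent_feature[p] for c, p in child_to_parent.items()}
```

A flat region filters to zero while an edge produces a positive response, which is what lets the rule separate textureless shadows from textured roofs.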
wherein the building objects comprise at least highlight-roof, red-roof, blue-roof, brown-roof, gray-roof and black-roof buildings; the preset building classification extraction rules comprise at least a preset highlight-roof extraction rule, a preset red-roof extraction rule, a preset blue-roof extraction rule, a preset brown-roof extraction rule, a preset gray-roof extraction rule and a preset black-roof extraction rule; and the extraction unit comprises:
a first extraction module, configured to extract the highlight-roof buildings on the child object layer using the preset highlight-roof extraction rule based on the Brightness feature, the Mean_NIR feature, the Area feature and the Rectangular fit feature; wherein the Area feature is
Area = Σ_i a_i,
which describes the size of the object, where a_i is a pixel of the object;
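The Area feature sums the object's pixel areas; paired with it, a rectangular-fit degree is commonly computed as the object's coverage of a fitted rectangle. The second function is an assumed form, since the patent names the Rectangular fit feature without defining it:

```python
def object_area(pixel_areas):
    """Area = sum_i a_i over the object's pixels (the pixel count when
    each a_i is 1)."""
    return sum(pixel_areas)

def rectangular_fit(object_area_px, rect_area_px):
    """Assumed rectangular-fit degree: fraction of a fitted rectangle
    covered by the object (1.0 for a perfectly rectangular roof)."""
    return object_area_px / rect_area_px if rect_area_px else 0.0
```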
a second extraction module, configured to extract the red-roof buildings on the child object layer using the preset red-roof extraction rule based on the RRI red-band ratio index feature, the Area feature and the Brightness feature;
a third extraction module, configured to extract the blue-roof buildings on the child object layer using the preset blue-roof extraction rule based on the Mean_Blue feature, the BRDI blue-red difference index, the Area feature and the Brightness feature; wherein the BRDI blue-red difference index is computed from the gray values of the blue and red bands, where Red is the red-band gray value and Blue is the blue-band gray value;
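The BRDI formula image did not survive extraction; a plausible sketch assuming the usual normalized-difference form of such band-difference indices (the exact expression in the patent may differ):

```python
def brdi(blue, red):
    """Assumed blue-red difference index:
    BRDI = (Blue - Red) / (Blue + Red),
    positive for blue-dominated roofs. The normalized form is an
    assumption; the patent only names the index and its two inputs."""
    return (blue - red) / (blue + red) if (blue + red) else 0.0
```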
a fourth extraction module, configured to extract the brown-roof buildings on the child object layer using the preset brown-roof extraction rule based on the Diff(DSM) feature, the Mean scale_filtered feature, the Shape index feature, the Rectangular fit feature and the Area feature; wherein the Diff(DSM) feature is
Diff(DSM, class) = Mean diff. to DSM, class = (v_a - v_b),
which describes the difference between two adjacent objects, calculated as the average difference on the DSM band between object a and its adjacent object b of the given class, where v_a is the feature value of object a on the DSM band and v_b is the feature value of object b on the DSM band;
a fifth extraction module, configured to extract the gray-roof buildings on the child object layer using the preset gray-roof extraction rule based on the Diff(DSM) feature, the Mean scale_filtered feature, the Shape index feature, the Rectangular fit feature and the Area feature;
a sixth extraction module, configured to extract the black-roof buildings on the child object layer using the preset black-roof extraction rule based on the Diff(DSM) feature, the Mean scale_filtered feature, the Mean_Red feature, the Shape index feature and the Area feature;
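Diff(DSM) compares the mean DSM (height) values of an object and an adjacent object of a given class; a short sketch with example height samples:

```python
def mean_dsm(heights):
    """Mean DSM value v of an object from its per-pixel heights."""
    return sum(heights) / len(heights)

def diff_dsm(obj_a_heights, obj_b_heights):
    """Diff(DSM) = v_a - v_b: height difference between object a and an
    adjacent object b, positive when a stands above b."""
    return mean_dsm(obj_a_heights) - mean_dsm(obj_b_heights)

# A roof ~12 m above its neighbouring ground object is a building
# candidate; equal heights give Diff(DSM) near zero.
d = diff_dsm([15.0, 16.0, 17.0], [4.0, 4.0, 4.0])  # -> 12.0
```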
and wherein the merging unit comprises:
a rejection module, configured to reject, based on the Diff(DSM) feature and the Mean scale_filtered feature, those building objects whose height is lower than that of the nearby unclassified class, to obtain the remaining building objects, and to determine the remaining building objects as building seed nodes; and
a merging module, configured to perform the building area growth with the objects adjacent to the building seed nodes as candidate areas to obtain broken building patches, and to merge the broken building patches to obtain the target building.
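The merging unit's seed-and-grow step can be sketched as follows. The adjacency graph and the height predicate are illustrative stand-ins for the patent's Diff(DSM)-based candidate test:

```python
def grow_building_region(seeds, adjacency, candidate_ok):
    """Region growing per the merging unit: start from building seed
    objects, repeatedly absorb adjacent candidate objects that pass the
    predicate; the grown set is the merged target building."""
    region, frontier = set(seeds), list(seeds)
    while frontier:
        obj = frontier.pop()
        for nb in adjacency.get(obj, ()):
            if nb not in region and candidate_ok(nb):
                region.add(nb)
                frontier.append(nb)
    return region

# Example: objects 1, 2, 4 are elevated roof fragments; 3 and 5 are ground.
adjacency = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2, 5], 5: [4]}
heights = {1: 12.0, 2: 11.5, 3: 0.5, 4: 11.0, 5: 0.3}
building = grow_building_region([1], adjacency, lambda o: heights[o] > 5.0)
```

Starting from seed 1, the growth absorbs fragments 2 and 4 and stops at the low-lying neighbours, merging the broken patches into one building.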
6. A storage medium comprising a stored program, wherein, when running, the program controls a device in which the storage medium is located to perform the building extraction method for homologous high-resolution images and DSM data according to any one of claims 1 to 4.
7. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the building extraction method for homologous high-resolution images and DSM data according to any one of claims 1 to 4.
CN202211655920.8A 2022-12-22 2022-12-22 Building extraction method and device for homologous high-resolution images and DSM data Active CN116258958B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211655920.8A CN116258958B (en) 2022-12-22 2022-12-22 Building extraction method and device for homologous high-resolution images and DSM data


Publications (2)

Publication Number Publication Date
CN116258958A CN116258958A (en) 2023-06-13
CN116258958B true CN116258958B (en) 2023-12-05

Family

ID=86683456

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211655920.8A Active CN116258958B (en) 2022-12-22 2022-12-22 Building extraction method and device for homologous high-resolution images and DSM data

Country Status (1)

Country Link
CN (1) CN116258958B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103279951A (en) * 2013-05-13 2013-09-04 武汉理工大学 Object-oriented building and shadow extraction method for remote sensing images
CN106384081A (en) * 2016-08-30 2017-02-08 水利部水土保持监测中心 Slope farmland extracting method and system based on high-resolution remote sensing image
CN107784283A (en) * 2017-10-24 2018-03-09 防灾科技学院 Object-oriented land cover classification method for coal mine fire areas using high-resolution UAV imagery
WO2019043356A1 (en) * 2017-09-04 2019-03-07 LLEO Limited Improvements relating to crop harvesting
CN110427961A (en) * 2019-06-19 2019-11-08 华南农业大学 Building information extraction method and system fusing rules and samples
CN110852207A (en) * 2019-10-29 2020-02-28 北京科技大学 Blue roof building extraction method based on object-oriented image classification technology
CN111178175A (en) * 2019-12-12 2020-05-19 中国资源卫星应用中心 Automatic building information extraction method and system based on high-view satellite image
CN111707620A (en) * 2020-06-11 2020-09-25 中国电建集团华东勘测设计研究院有限公司 Classification rule set for land utilization and water and soil loss monitoring method and system
CN112927252A (en) * 2021-04-12 2021-06-08 二十一世纪空间技术应用股份有限公司 Newly-added construction land monitoring method and device
CN113362359A (en) * 2021-06-18 2021-09-07 天津市勘察设计院集团有限公司 Building automatic extraction method of oblique photography data fused with height and spectrum information
CN114119634A (en) * 2021-11-22 2022-03-01 武汉大学 Automatic building extraction method and system combining vegetation elimination and image feature consistency constraint
CN114417489A (en) * 2022-03-30 2022-04-29 宝略科技(浙江)有限公司 Building base contour refinement extraction method based on real-scene three-dimensional model

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110390267B (en) * 2019-06-25 2021-06-01 东南大学 Mountain landscape building extraction method and device based on high-resolution remote sensing image


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Building Segmentation of Satellite Image Based on Area and Perimeter using Region Growing; Ervin Yohannes et al.; pp. 579-585 *
Research on a classification method for small-scale ground objects in farmland landscapes based on UAV imagery; 陈柳; China Master's Theses Full-text Database, Agricultural Science and Technology, No. 8; pp. D043-23 *


Similar Documents

Publication Publication Date Title
Ghaffarian Automatic building detection based on supervised classification using high resolution Google Earth images
Jung et al. A framework for land cover classification using discrete return LiDAR data: Adopting pseudo-waveform and hierarchical segmentation
CN110309781B (en) House damage remote sensing identification method based on multi-scale spectrum texture self-adaptive fusion
CN110879992A (en) Grassland surface covering object classification method and system based on transfer learning
CN109829426A (en) Railway construction temporary building monitoring method and system based on high score remote sensing image
Zhan et al. Quantitative analysis of shadow effects in high-resolution images of urban areas
Lak et al. A new method for road detection in urban areas using high-resolution satellite images and Lidar data based on fuzzy nearest-neighbor classification and optimal features
Attarzadeh et al. Object-based building extraction from high resolution satellite imagery
Van de Voorde et al. Extraction of land use/land cover related information from very high resolution data in urban and suburban areas
CN112989985B (en) Urban built-up area extraction method integrating night light data and Landsat8OLI images
CN114387195A (en) Infrared image and visible light image fusion method based on non-global pre-enhancement
Wang et al. Region-line association constraints for high-resolution image segmentation
O’Neil-Dunne et al. Incorporating contextual information into object-based image analysis workflows
CN104008374B (en) Miner&#39;s detection method based on condition random field in a kind of mine image
CN112927252A (en) Newly-added construction land monitoring method and device
CN116258958B (en) Building extraction method and device for homologous high-resolution images and DSM data
Saba et al. The optimazation of multi resolution segmentation of remotely sensed data using genetic alghorithm
JP6334281B2 (en) Forest phase analysis apparatus, forest phase analysis method and program
Jiang et al. Object-oriented building extraction by DSM and very highresolution orthoimages
Hashim et al. Multi-level image segmentation for urban land-cover classifications
Lari et al. Automatic extraction of building features from high resolution satellite images using artificial neural networks
JP6218678B2 (en) Forest phase analysis apparatus, forest phase analysis method and program
Barrile et al. Fast extraction of roads for emergencies with segmentation of satellite imagery
CN113378924A (en) Remote sensing image supervision and classification method based on space-spectrum feature combination
Milan An integrated framework for road detection in dense urban area from high-resolution satellite imagery and Lidar data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant