CN112529075A - Method for classifying buildings by using building facades - Google Patents

Method for classifying buildings by using building facades

Info

Publication number
CN112529075A
CN112529075A (application CN202011433405.6A)
Authority
CN
China
Prior art keywords
building
image
facade
classifying
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011433405.6A
Other languages
Chinese (zh)
Inventor
Xiao Changlin (肖长林)
Hong Jingke (洪竞科)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University
Priority to CN202011433405.6A
Publication of CN112529075A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/56 Extraction of image or video features relating to colour
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/176 Urban or other man-made structures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for classifying buildings by using building facades, which comprises the steps of obtaining texture information of the building facades from a three-dimensional building model reconstructed by oblique photography; carrying out image segmentation and feature extraction on the facade texture information; and classifying and identifying building functions by using the segmented facade texture and the extracted features. Oblique aerial images captured by an unmanned aerial vehicle are used as input; they contain facade texture information that coarse remote sensing data does not provide, so buildings can be classified from the texture of their facades.

Description

Method for classifying buildings by using building facades
Technical Field
The invention relates to the technical field of remote sensing classification, in particular to a method for classifying buildings by using building facades.
Background
Classifying buildings according to their use is valuable for urban design and management. Such information provides indicators for issues related to population, resources and the environment, such as demographics, power supply and traffic system design, and it forms the basis for urban planning, policy making and disaster management. Currently, such statistics are mostly collected manually from street data or extracted, laboriously and coarsely, from remote sensing data. With remote sensing data, some land-use classification methods can distinguish residential areas and identify airports, public facilities or industrial areas, but they cannot identify the category of an individual building. Conversely, classification techniques make it possible to detect and classify individual buildings in large-scale data (e.g., satellite and aerial images), but most of these building detection methods rely on top-view information, such as roof appearance and height information from a DSM (digital surface model), which is not sufficient for identifying the category of an individual building. How to obtain such information efficiently on a large scale (e.g., for whole cities) therefore remains an open problem. Oblique aerial images, however, can greatly assist in identifying building categories; for example, balconies commonly appear on residential buildings.
Disclosure of Invention
This section summarizes some aspects of embodiments of the invention and briefly introduces some preferred embodiments. Simplifications or omissions may be made in this section, in the abstract and in the title of the application to avoid obscuring their purpose; such simplifications or omissions are not intended to limit the scope of the invention.
The present invention has been made in view of the above-mentioned problems occurring in the conventional classification of buildings.
Therefore, the technical problems solved by the invention are as follows: building statistics are currently compiled manually or from remote sensing data, which is laborious and coarse; and when buildings are identified, the category of an individual building cannot be determined.
In order to solve the above technical problems, the invention provides the following technical scheme: acquiring texture information of the building facade from a three-dimensional building model reconstructed by oblique photography; carrying out image segmentation and feature extraction on the facade texture information; and classifying and identifying the building function by using the segmented facade texture and the extracted features.
As a preferable aspect of the method for classifying buildings by using building facades according to the present invention: the acquiring of the texture information of the building facade comprises selecting the most representative facade of the building, converting an oblique view of the facade into a front view through projective transformation, selecting the best facade texture by using a texture quality measurement algorithm, and checking the visibility of the selected texture by means of the projection distance.
As a preferable aspect of the method for classifying buildings by using building facades according to the present invention: the selecting of the most representative facade of the building comprises extracting the edges of the three-dimensional building model as regular polygons by using the Douglas-Peucker algorithm and taking the two longest edges as the vertical planes in which the building facades lie. For each vertical plane and its building edge, a spatial rectangle with four corner points (P1, P2, P3, P4) is set; the georeferenced 3D coordinates (X, Y, Z) of the four points in object space are obtained from the orthophoto and the digital surface model, so that the corresponding oblique image coordinates can be calculated through perspective transformation, expressed as follows:
[Perspective-transformation equation, reproduced in the original publication only as an image.]
wherein: s is an oblique image coordinate, P is four corner point coordinates, and the selected three-dimensional coordinate point is converted to a two-dimensional plane; selecting an optimal representative image from a plurality of oblique images by using a texture quality measurement algorithm, wherein the selection of the optimal representative image and the optimal representative image is realized by calculating different oblique image qualities by using three different weighing factors and weights thereof, and the formula is as follows:
Q(f)=m1*V(f)+m2*N(f)+m3*O(f)
wherein: q (f) is the final quality of the image f, m1,m2,m3For the weights of the different scale factors, v (f) is the angle between the normal to the plane and the camera imaging plane and the better its quality the closer 90 degrees, n (f) is the angle between the normal and the line through the camera and the center of the face and the better its quality the closer 0 degrees, o (f) is the proportion of the observable and the better its quality the larger.
As a preferable aspect of the method for classifying buildings by using building facades according to the present invention: the checking of visibility through distance comprises calculating the observable part of each oblique image. Whether the image is occluded is checked by casting rays in three-dimensional space and examining the distance between the ray's collision point with the ground-surface three-dimensional model and the building facade. For each pixel in a facade oblique image, a ray is emitted from the camera centre towards the pixel; if the collision point of the ray with the digital surface model is close to the spatial three-dimensional facade of the building, the pixel is a valid pixel of the selected facade image, otherwise it is an invalid pixel.
As a preferable aspect of the method for classifying buildings by using building facades according to the present invention: the image segmentation and feature extraction of the facade comprise extracting features from the best representative facade image, calculating the average colour and standard deviation of the image over the red, green and blue colour channels, and extracting Haar-like features as well as the combination of colour features and Haar-like features.
As a preferable aspect of the method for classifying buildings by using building facades according to the present invention: the extraction of the Haar-like features comprises defining a rectangular Haar-like feature as the difference between the sums of pixel intensities in different rectangles; for the facade texture information, three-rectangle Haar-like patterns of 3 different sizes are designed and applied in the vertical and horizontal directions respectively.
As a preferable aspect of the method for classifying buildings by using building facades according to the present invention: the classification of the features comprises classifying the combined features using a random forest classifier.
As a preferable aspect of the method for classifying buildings by using building facades according to the present invention: the acquiring of the texture information of the building facade further comprises performing oblique aerial photography of the sides of the building with an unmanned aerial vehicle to obtain aerial data of the building facade, and obtaining a facade image of the building from the reconstructed digital surface model.
The invention has the following beneficial effects: oblique aerial images captured by an unmanned aerial vehicle are used as input, and they contain facade texture information that coarse remote sensing data does not provide; buildings can therefore be classified from the texture of their facades.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise. Wherein:
FIG. 1 is a schematic flow chart of the method for classifying buildings using building facades according to the first embodiment of the present invention;
FIG. 2 illustrates the extraction of a facade texture map from multi-view oblique images in the method for classifying buildings using building facades according to the first embodiment of the present invention;
FIG. 3 illustrates occlusion detection in the method for classifying buildings using building facades according to the first embodiment of the present invention;
FIG. 4 shows examples of side-view (facade) images from the oblique aerial imagery used in the method for classifying buildings using building facades according to the second embodiment of the present invention;
FIG. 5 shows the building-category classification results for different numbers of training samples in the method for classifying buildings using building facades according to the second embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, specific embodiments accompanied with figures are described in detail below, and it is apparent that the described embodiments are a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present invention, shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described and will be readily apparent to those of ordinary skill in the art without departing from the spirit of the present invention, and therefore the present invention is not limited to the specific embodiments disclosed below.
Furthermore, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
The present invention will be described in detail below with reference to the drawings. For convenience of illustration, the drawings are merely examples, are not drawn to a uniform scale, and should not be construed as limiting the scope of the present invention. In addition, the actual three-dimensional dimensions of length, width and depth should be considered in practice.
Meanwhile, in the description of the present invention, it should be noted that the terms "upper, lower, inner and outer" and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of describing the present invention and simplifying the description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation and operate, and thus, cannot be construed as limiting the present invention. Furthermore, the terms first, second, or third are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted, connected and connected" in the present invention are to be understood broadly, unless otherwise explicitly specified or limited, for example: can be fixedly connected, detachably connected or integrally connected; they may be mechanically, electrically, or directly connected, or indirectly connected through intervening media, or may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Example 1
Referring to fig. 1 to 3, a first embodiment of the present invention provides a method for classifying buildings by using building facades, including:
s1: and acquiring texture information of the outer facade of the building from the three-dimensional building model reconstructed by oblique photography. In which it is to be noted that,
acquiring the texture information of the building facade comprises obtaining facade texture images with an unmanned aerial vehicle that photographs the building obliquely from its sides, and obtaining facade texture image information from the footprint or a level-of-detail-2 (LoD2) building model of the existing building. Referring to the facade texture extracted from multi-view oblique images in FIG. 2, the 3D facade of the building is shown in panel (a) and its possible projections in the multi-view oblique images are shown in panel (b). A facade representing the building is selected, its oblique view is converted into a front view by projective transformation, the best representative facade image is selected using a texture quality measurement algorithm, and visibility is checked by distance.
Further, the edges of the three-dimensional building model are extracted as regular polygons using the Douglas-Peucker algorithm, and the two longest edges are then taken as the vertical planes in which the building facades lie. For each vertical plane and its building edge, a spatial rectangle with four corner points (P1, P2, P3, P4) is set; the georeferenced 3D coordinates (X, Y, Z) of the four points in object space are obtained from the orthophoto and the digital surface model, so that their corresponding oblique image coordinates can be calculated by perspective transformation, expressed as follows:
[Perspective-transformation equation, reproduced in the original publication only as an image.]
wherein: s is an oblique image coordinate, P is four lattice point coordinates, the selected three-dimensional coordinate point is converted onto a two-dimensional plane, and the original and corrected facade textures are displayed with reference to images (c) and (d) in fig. 2;
An optimal representative image is then selected from the multiple oblique images by using a texture quality measurement algorithm, in which the quality of each oblique image is computed from three weighting factors and their weights, according to the formula:
Q(f)=m1*V(f)+m2*N(f)+m3*O(f)
wherein: q (f) is the final quality of the image f, m1,m2,m3V (f) is the angle between the normal to the plane and the camera imaging plane and the better its quality the closer 90 degrees, n (f) is the angle between the normal to the plane and the line passing through the camera and the center of the face and the better its quality the closer 0 degrees, o (f) is the proportion of the observable and the better its quality the larger; and checking visibility through distance, wherein observable parts of oblique images are calculated in a texture quality measurement algorithm for extracting texture of the building facade texture from a three-dimensional building model reconstructed by oblique photography; checking the distance between the collision point of the three-dimensional space ray projection and the ground surface three-dimensional model and the building facade to determine whether the image is blocked, emitting light from the center of the camera towards the collision point of the three-dimensional space ray projection for each pixel in the facade oblique image, if the collision point of the light and the digital surface model is near the three-dimensional facade of the building, the pixel is the effective pixel of the selected facade image, otherwise the ineffective pixel of the characteristic extraction is considered invisible, and referring to P2 in FIG. 3, a texture point (such as P1) is required to be close to the facade (light rectangle) in the object space, otherwise (such as P2 points to a tree), the texture point is required to be a blocking point, and the finally generated mask image is referred to image (d) in FIG. 2.
S2: carrying out image segmentation and feature extraction on the facade texture information. It should be noted that,
the image segmentation and feature extraction comprise extracting features from the best representative facade image: the average colour and standard deviation of the image are calculated over the red, green and blue (R, G, B) colour channels, and Haar-like features as well as the combination of colour features and Haar-like features are extracted. A rectangular Haar-like feature is defined as the difference between the sums of pixel intensities in different rectangles; for the facade texture, three-rectangle Haar-like patterns (for example, black and white) of 3 different sizes (6 feature values in total) are designed and applied in the vertical and horizontal directions respectively.
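As an illustration, the sketch below computes the per-channel colour statistics and a simple three-rectangle Haar-like response; the exact rectangle layouts, the three sizes and the normalisation are assumed, since they are not given numerically in the description.

```python
import numpy as np

def color_stats(facade_rgb):
    """Mean and standard deviation of the R, G and B channels (6 values)."""
    pixels = facade_rgb.reshape(-1, 3).astype(np.float64)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

def three_rect_haar(gray, vertical):
    """Three-rectangle Haar-like response over the image: the two outer bands minus
    the middle band (one of several possible three-rectangle layouts)."""
    bands = np.array_split(gray.astype(np.float64), 3, axis=0 if vertical else 1)
    return (bands[0].sum() + bands[2].sum() - bands[1].sum()) / gray.size

def facade_features(facade_rgb, scales=(1.0, 0.5, 0.25)):
    """Colour statistics plus Haar-like responses at 3 sizes in the vertical and
    horizontal directions (6 Haar-like values); the scales are placeholders."""
    gray = facade_rgb.astype(np.float64).mean(axis=2)
    haar = []
    for s in scales:
        h, w = max(3, int(gray.shape[0] * s)), max(3, int(gray.shape[1] * s))
        window = gray[:h, :w]                      # crude stand-in for resizing
        haar.append(three_rect_haar(window, vertical=True))
        haar.append(three_rect_haar(window, vertical=False))
    return np.concatenate([color_stats(facade_rgb), np.array(haar)])
```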
S3: classifying and identifying the building function by using the segmented facade texture and the extracted features. It should be noted that,
the combined features are classified using a random forest classifier.
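A minimal classification sketch with scikit-learn's RandomForestClassifier is shown below; the feature matrix, labels and train/test split are placeholders, and only the use of a random forest on the combined features follows the description.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder combined colour + Haar-like feature vectors and building-category labels.
rng = np.random.default_rng(0)
X = rng.random((262, 20))                 # one row per facade image (feature count assumed)
y = rng.integers(0, 4, size=262)          # 0=house, 1=education, 2=office, 3=apartment

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=500,       # 500 trees, as in the second embodiment
                             max_features="sqrt",    # variables per split = sqrt(feature dim)
                             random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```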
Example 2
Referring to FIGS. 4 and 5, in a second embodiment of the present invention, it is noted that current statistics are mainly collected manually from street data or roughly extracted from laborious or overly coarse remote sensing data. Using remote sensing data, land-use classification methods can, on the one hand, automatically distinguish airports from residential areas or industrial areas from public facilities, but they do not identify the category of individual buildings in detail; on the other hand, individual buildings can be classified and detected from large-scale data (such as satellite and aerial images), but most building detection methods use top-view information, such as roof appearance and DSM (digital surface model) height information, which is not sufficient for identifying the category of an individual building. The present invention uses oblique images to help identify the category of individual buildings. To verify the technical effect of the method, buildings around the campus of the National University of Singapore (NUS) were selected for testing in this embodiment, and the test results demonstrate the real effect of the method.
A total of 306 aerial images were selected as the research data for this experiment, comprising 73 top views, 64 front views, 47 back views, 62 left views and 60 right views captured by five on-board Leica RCD30 cameras, with the four oblique cameras mounted at a tilt angle of 35 degrees; all acquired images have a size of 10336 x 7788 pixels. The images were calibrated with the professional software Pix4Dmap, and an orthophoto and a DSM were generated with a ground sampling distance (GSD) of about 7.8 cm. Referring to FIG. 4, the boundaries of 106 buildings around the National University of Singapore campus, including conventional houses, teaching buildings, offices and apartments, were manually selected and delineated, and 262 facade images were cropped, selected and rectified into front views from these building boundaries to form the experimental data set; detailed statistics and image examples are given in FIG. 4 and Table 1 below.
Table 1: Statistics of the facade images of the different building types.
Type             Number
Ordinary house   110
Education        78
Office           31
Apartment        43
Total            262
The images in FIG. 4 are, from left to right, examples of facade images of a conventional house, an education building, an office and an apartment. To capture facade characteristics, the average colour and standard deviation are calculated in the R, G and B channels, while texture is described with Haar-like features: three-rectangle Haar-like patterns are designed for the vertical and horizontal directions respectively, each with 3 different sizes (6 features in total), and these features are combined to describe the facade texture for recognition. For classification with the random forest classifier, 500 decision trees are used for training, and the number of variables considered at each split is set to the square root of the feature dimension, which is 12 in this experiment. The overall classification accuracy and Kappa value (including training data) of all facade images for different numbers of training samples are shown in FIG. 5.
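To make the evaluation protocol concrete, the sketch below trains the 500-tree random forest for several training-sample fractions and reports overall accuracy and Cohen's Kappa on the remaining facades; the fractions and the placeholder data are assumptions, and unlike the figure the scores here exclude the training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

def accuracy_and_kappa_by_fraction(X, y, fractions=(0.1, 0.2, 0.3, 0.5, 0.7)):
    """Overall accuracy and Kappa for several training-sample fractions."""
    results = []
    for frac in fractions:
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=frac,
                                                  stratify=y, random_state=0)
        clf = RandomForestClassifier(n_estimators=500, max_features="sqrt",
                                     random_state=0).fit(X_tr, y_tr)
        pred = clf.predict(X_te)
        results.append((frac, accuracy_score(y_te, pred), cohen_kappa_score(y_te, pred)))
    return results

# Placeholder data standing in for the 262 facade feature vectors (144-dimensional,
# so that the square root of the feature dimension is 12) and their category labels.
rng = np.random.default_rng(0)
X_demo, y_demo = rng.random((262, 144)), rng.integers(0, 4, size=262)
for frac, acc, kappa in accuracy_and_kappa_by_fraction(X_demo, y_demo):
    print(f"training fraction {frac:.0%}: accuracy {acc:.2f}, Kappa {kappa:.2f}")
```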
As can be seen from FIG. 5, both the overall classification accuracy and the Kappa value increase as the number of training samples increases, and even when only 30% of the data is used as training samples the classification accuracy still reaches 0.6. Because the experimental data are limited, intra-class variability and inter-class similarity may be the main difficulties when training on smaller subsets, but the results suggest that facade textures can provide useful clues for classifying building categories. This embodiment uses only a small sample; with more samples and more sophisticated feature extractors and classifiers (such as neural networks), building category classification based on facade images can achieve higher accuracy and greater practicality.
It should be noted that the above-mentioned embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention, which should be covered by the claims of the present invention.

Claims (8)

1. A method for classifying buildings by using building facades, characterized by comprising the steps of:
acquiring texture information of the building facade from a three-dimensional building model reconstructed by oblique photography;
carrying out image segmentation and feature extraction on the facade texture information;
and classifying and identifying the building function by using the segmented facade texture and the extracted features.
2. The method of classifying a building using a building facade of claim 1, wherein: the obtaining of the texture information of the building facade comprises,
selecting the most representative facade of the building, converting an oblique view of the facade into a front view through projective transformation, selecting the best facade image by using a texture quality measurement algorithm, and checking visibility through distance.
3. The method of classifying a building using a building facade of claim 2, wherein: the selecting of the most representative facade of the building comprises,
extracting the edges of the three-dimensional building model as regular polygons by using the Douglas-Peucker algorithm, then taking the two longest edges as the vertical planes in which the building facades lie, and setting, for each vertical plane and its building edge, a spatial rectangle with four corner points (P1, P2, P3, P4); the georeferenced 3D coordinates (X, Y, Z) of the four points in object space are obtained from the orthophoto and the digital surface model, so that the corresponding oblique image coordinates can be calculated through perspective transformation, expressed as follows:
[Perspective-transformation equation, reproduced in the original publication only as an image.]
wherein: s is an oblique image coordinate, P is four corner point coordinates, and the selected three-dimensional coordinate point is converted to a two-dimensional plane; selecting an optimal representative image from a plurality of oblique images by using a texture quality measurement algorithm, wherein the selection of the optimal representative image and the optimal representative image is realized by calculating different oblique image qualities by using three different weighing factors and weights thereof, and the formula is as follows:
Q(f)=m1*V(f)+m2*N(f)+m3*O(f)
wherein: q (f) is the final quality of the image f, m1,m2,m3For the weights of the different scale factors, v (f) is the angle between the normal to the plane and the camera imaging plane and the better its quality the closer 90 degrees, n (f) is the angle between the normal and the line through the camera and the center of the face and the better its quality the closer 0 degrees, o (f) is the proportion of the observable and the better its quality the larger.
4. A method of classifying a building using a building facade as recited in claim 3, wherein: the checking for visibility by distance includes,
checking visibility, namely calculating the observable part of the oblique image: whether the image is occluded is checked by casting rays in three-dimensional space and examining the distance between the ray's collision point with the ground-surface three-dimensional model and the building facade; for each pixel in a facade oblique image, a ray is emitted from the camera centre towards the pixel; if the collision point of the ray with the digital surface model is close to the spatial three-dimensional facade of the building, the pixel is a valid pixel of the selected facade image, otherwise it is an invalid pixel that is excluded from feature extraction.
5. The method for classifying buildings by using building facades according to any one of claims 1 to 4, wherein: the image segmentation and feature extraction of the facade comprise,
performing feature extraction on the best representative facade image, calculating the average colour and standard deviation of the image over the red, green and blue colour channels, and extracting Haar-like features and the combination of colour features and Haar-like features.
6. The method of classifying a building using a building facade of claim 5, wherein: the extracting of the haar-like features comprises,
the rectangular Haar-like feature is defined as the difference between the sums of pixel intensities in different rectangles; for the facade texture information, three-rectangle Haar-like patterns of 3 different sizes are designed and applied in the vertical and horizontal directions respectively.
7. The method of classifying a building using a building facade of claim 6, wherein: the classifying of the features comprises,
classifying the combined features using a random forest classifier.
8. The method of classifying a building using a building facade of claim 1, wherein: the obtaining of the texture information of the building facade further comprises,
performing oblique aerial photography of the sides of the building with an unmanned aerial vehicle to obtain aerial data of the building facade, and obtaining a facade image of the building from the reconstructed digital surface model.
CN202011433405.6A 2020-12-10 2020-12-10 Method for classifying buildings by using building facades Pending CN112529075A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011433405.6A CN112529075A (en) 2020-12-10 2020-12-10 Method for classifying buildings by using building facades

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011433405.6A CN112529075A (en) 2020-12-10 2020-12-10 Method for classifying buildings by using building facades

Publications (1)

Publication Number Publication Date
CN112529075A true CN112529075A (en) 2021-03-19

Family

ID=74999938

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011433405.6A Pending CN112529075A (en) 2020-12-10 2020-12-10 Method for classifying buildings by using building facades

Country Status (1)

Country Link
CN (1) CN112529075A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107292339A (en) * 2017-06-16 2017-10-24 重庆大学 The unmanned plane low altitude remote sensing image high score Geomorphological Classification method of feature based fusion
CN109186551A (en) * 2018-08-08 2019-01-11 广州市城市规划勘测设计研究院 Oblique photograph measures building feature point extracting method, device and storage medium
CN109816708A (en) * 2019-01-30 2019-05-28 北京建筑大学 Building texture blending method based on oblique aerial image
CN110796152A (en) * 2020-01-06 2020-02-14 杭州鲁尔物联科技有限公司 Group building earthquake damage extraction method and system based on oblique photography
CN111191628A (en) * 2020-01-06 2020-05-22 河海大学 Remote sensing image earthquake damage building identification method based on decision tree and feature optimization

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
C. Xiao et al.: "Efficient Building Category Classification with Façade Information from Oblique Aerial Images", The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 43, no. 2, 14 August 2020 (2020-08-14), pages 1309-1313 *
Liu Guiwen et al.: "Real-time carbon emission monitoring system for building construction sites based on cyber-physical integration", Journal of Chongqing University, vol. 43, no. 9, 15 September 2020 (2020-09-15), pages 24-31 *
Yang Yurong et al.: "Research on remote sensing monitoring methods for the damage degree of post-earthquake buildings", Journal of Seismological Research, vol. 41, no. 4, 15 October 2018 (2018-10-15), pages 630-636 *
Liang Chen: "Research on the application of bridge landscape art in arch bridge structural design", China Master's Theses Full-text Database, Engineering Science and Technology II, no. 5, 15 May 2015 (2015-05-15), pages 034-117 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114219958A (en) * 2021-11-18 2022-03-22 中铁第四勘察设计院集团有限公司 Method, device, equipment and storage medium for classifying multi-view remote sensing images
CN114528625A (en) * 2022-02-17 2022-05-24 深圳须弥云图空间科技有限公司 Building spacing calibration method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination