CN114742968A - Elevation map generation method based on building elevation point cloud - Google Patents

Elevation map generation method based on building elevation point cloud

Info

Publication number
CN114742968A
CN114742968A (application CN202210659630.4A)
Authority
CN
China
Prior art keywords
image
building
point cloud
elevation
band
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210659630.4A
Other languages
Chinese (zh)
Other versions
CN114742968B (en)
Inventor
Yu Bing
Hu Jinlong
Wang Bing
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Petroleum University
Original Assignee
Southwest Petroleum University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Petroleum University filed Critical Southwest Petroleum University
Priority to CN202210659630.4A priority Critical patent/CN114742968B/en
Publication of CN114742968A publication Critical patent/CN114742968A/en
Application granted granted Critical
Publication of CN114742968B publication Critical patent/CN114742968B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06T: Image data processing or generation, in general
    • G06T 17/10: Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 5/20: Image enhancement or restoration using local operators
    • G06T 5/40: Image enhancement or restoration using histogram techniques
    • G06T 5/70: Denoising; Smoothing
    • G06T 7/13: Edge detection
    • G06T 7/194: Segmentation; Edge detection involving foreground-background segmentation
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20156: Automatic seed setting (image segmentation details)
    • G06T 2210/04: Architectural design, interior design
    • Y02A 30/60: Planning or developing urban green infrastructure

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an elevation map generation method based on a building facade point cloud, which comprises: generating an initial feature map from the building facade point cloud; performing image enhancement on the initial feature map to obtain a single-band feature image and a three-band feature image; extracting the building boundary from the single-band feature image and the door and window boundaries from the three-band feature image; and combining the building boundary with the door and window boundaries to obtain the building elevation map. The method addresses the technical problem of extracting an elevation map from a building facade point cloud, realizes automatic and stable elevation map extraction from facade point clouds, lowers the quality requirements on point cloud data, and improves the transferability of elevation map extraction.

Description

Elevation map generation method based on building elevation point cloud
Technical Field
The invention relates to the field of building surveying and mapping, and in particular to a method for generating an elevation map based on a building facade point cloud.
Background
Buildings are a major type of man-made object in urban scenes. With the continuous development of city planning, smart cities and Building Information Modeling (BIM), the demand in these fields for building structure information and its characteristics keeps growing, and how to efficiently and accurately acquire the data and information required for 3D modeling is one of the main problems currently facing the building surveying and mapping field. The building elevation map (building facade map) reflects the geometric structure and surface characteristics of a building; it can directly serve old-city redevelopment, city planning, smart-city construction and the like, and provides a simple and flexible way to build 3D building models at scale. However, since the geometry of most early urban buildings was never recorded, facade information must be obtained by measurement, and collecting such facade data with conventional survey methods is inefficient and costly.
With the continuous development of remote sensing technology, obtaining 3D point cloud data of an object by laser scanning has become a mature practice, so attempts have been made to extract the elevation map directly from the 3D point cloud of a building. The main approaches are: (I) direct extraction, in which the facade map is extracted directly from the raw or preprocessed 3D point cloud based on its geometric information, such as distance changes, normal changes, curvature changes and point density; and (II) indirect extraction, in which the building facade is segmented and identified by extracting building features (such as doors and windows), yielding the facade map.
However, these prior arts still have many limitations in practical application: (1) they place high demands on the 3D point cloud data, relying on feature information such as RGB colour, intensity and 2D depth images, and struggle with point clouds containing only coordinate information, which raises the hardware requirements and cost of point cloud acquisition; (2) robustness is low: when data quality is poor, or the facade point cloud suffers from occlusion, noise or uneven density, the results are poor, accuracy is low, or the methods fail outright; (3) they require substantial user involvement, making automatic extraction of the building elevation map from unordered 3D point cloud data difficult; (4) transferability is low: a method or model that works well on data from one particular scene generally performs poorly on other scenes or even other data.
In addition, the inventor previously proposed a "constrained building facade orthophoto map extraction method" (CN113256813B), which generates a building facade orthophoto from an acquired facade point cloud; however, it relies on conventional plane-equation projection to generate the orthophoto, and, as is common knowledge in the art, a facade orthophoto is a raster image, which is entirely different from the vectorized facade map the present application aims to obtain.
In conclusion, how to stably extract the building elevation map from unordered facade point cloud data of varying quality remains a technical problem to be solved urgently.
Disclosure of Invention
The invention provides an elevation map generation method based on a building facade point cloud, which aims to solve the prior-art problem of extracting an elevation map from a building facade point cloud, to realize automatic and stable elevation map extraction from the facade point cloud, to lower the quality requirements on point cloud data, and to improve the transferability of the elevation map extraction method.
The invention is realized by the following technical scheme:
a facade graph generation method based on a building facade point cloud comprises the following steps:
generating an initial characteristic map based on the point cloud of the building facade;
carrying out image enhancement on the initial characteristic image to obtain a single-band characteristic image and a three-band characteristic image;
building boundaries are extracted from the single-band characteristic image, and door and window boundaries are extracted from the three-band characteristic image;
and combining the building boundary and the door and window boundary to obtain a building elevation.
In the facade map generation method provided by the invention, the building facade point cloud is taken as the known input from which an initial feature map is generated; image enhancement is then applied to the initial feature map to obtain a single-band feature image and a three-band feature image, preparing the feature images for subsequent facade-map extraction. The building boundary and the door and window boundaries are then extracted from the single-band and three-band feature images respectively, and the two are combined to obtain the building elevation map required by the application.
The method abandons the direct and indirect extraction modes of the prior art and adopts a completely different facade-map extraction idea: it neither depends on the geometric information of the 3D point cloud nor needs to identify the building facade through building features, and it can stably and automatically obtain a vectorized facade map from the facade point cloud. It requires no auxiliary information such as point intensity or colour, places low demands on the data, needs little manual involvement, and therefore has strong transferability and universality.
By means of feature enhancement, the method creates a three-band feature image carrying richer features and extracts the door and window boundaries from it; this markedly improves the effect of the subsequent deep learning and thus the extraction accuracy of the door and window boundaries.
Further, the method for generating the initial feature map based on the building facade point cloud comprises the following steps:
converting the set of the point clouds of the building facade into a point set which takes the corresponding building facade as a reference coordinate system;
discarding the height parameter of the points in the set relative to the facade plane to obtain a two-dimensional planar point cloud, i.e. the 3D point cloud projected onto the corresponding facade;
creating a grid, dividing the two-dimensional planar point cloud by the grid, counting the number of points in each cell, and rasterizing the point cloud into a single-band two-dimensional image with the point count as the pixel value;
and masking the areas with pixel value 0 in the single-band two-dimensional image to obtain the initial feature map.
The point count is one of the typical characteristics of a 3D point cloud. By counting it, this technique generates a point-cloud feature image from point clouds that lack intensity and colour information and carry only geometry, which lowers the equipment requirements for point cloud acquisition and further reduces its cost.
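For illustration, the following is a minimal sketch of this rasterization step (not the patented implementation; the array name `points_2d`, the cell size, and the use of NumPy are assumptions of this text):

```python
# Hedged sketch: rasterize a facade-plane point cloud into a count image.
# `points_2d` (an assumed name) is an (N, 2) array of in-plane coordinates.
import numpy as np

def rasterize_counts(points_2d: np.ndarray, cell: float) -> np.ma.MaskedArray:
    x, y = points_2d[:, 0], points_2d[:, 1]
    x_edges = np.arange(x.min(), x.max() + cell, cell)
    y_edges = np.arange(y.min(), y.max() + cell, cell)
    # Pixel value = number of points falling in each grid cell.
    counts, _, _ = np.histogram2d(y, x, bins=(y_edges, x_edges))
    # Mask the cells with pixel value 0, as described above.
    return np.ma.masked_equal(counts, 0)
```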
Further, the method for image enhancement of the initial feature map comprises the following steps:
performing histogram equalization on the initial feature map to obtain a first feature image I1;
performing band-pass filtering and then histogram equalization on the initial feature map in sequence to obtain a second feature image I2;
performing edge detection on the initial feature map to obtain an edge-detection feature image I3;
taking I1 as the single-band feature image; and merging I1, I2 and I3 to obtain the three-band feature image.
This scheme provides an image enhancement method tailored to facade-map generation, which markedly enhances the image and yields the single-band and three-band feature images required subsequently.
For the second feature image I2, band-pass filtering is applied to the initial feature map first, and histogram equalization is then applied to the band-pass-filtered result.
Further, the histogram equalization process includes:
calculating the cumulative histogram hist_c of the feature map being processed;
calculating the histogram-equalization mapping cdf: cdf = hist_c × (2^q / (m × n)), where m and n are the length and width of the feature map being processed and q is the quantization bit depth of the initial feature map;
and mapping the original image through cdf to obtain the histogram-equalized feature image.
It should be noted that the histogram equalization described here is used both in obtaining the first feature image and in obtaining the second feature image.
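A minimal sketch of this equalization, assuming an integer-valued NumPy image with values in [0, 2^q) (the function and variable names are illustrative, not from the patent):

```python
import numpy as np

def hist_equalize(img: np.ndarray, q: int = 8) -> np.ndarray:
    """Equalize following cdf = hist_c * (2**q / (m * n))."""
    m, n = img.shape
    levels = 2 ** q
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    hist_c = np.cumsum(hist)                  # cumulative histogram hist_c
    cdf = hist_c * (levels / (m * n))         # mapping relation cdf
    lut = np.clip(np.round(cdf), 0, levels - 1).astype(img.dtype)
    return lut[img]                           # map the original image through cdf
```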
Further, the band pass filtering is performed by the following formula:
$$G(u,v) = B(u,v)\,F(u,v), \qquad B(u,v) = \begin{cases} 1, & D_0 \le D(u,v) \le D_1 \\ 0, & \text{otherwise} \end{cases}$$
where G(u, v) is the Fourier transform of the band-pass-filtered output image, F(u, v) is the Fourier transform of the original image containing noise, B(u, v) is the band-pass transfer function, D(u, v) is the Euclidean distance to the origin of the frequency plane, and D0 and D1 are the minimum and maximum cutoff frequencies.
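A hedged sketch of an ideal band-pass filter consistent with the formula above (the FFT-based implementation details are assumptions of this text, not the patent's code):

```python
import numpy as np

def ideal_bandpass(img: np.ndarray, d0: float, d1: float) -> np.ndarray:
    """Keep frequency components with D0 <= D(u, v) <= D1."""
    f = np.fft.fftshift(np.fft.fft2(img))            # F(u, v), centred
    rows, cols = img.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    d = np.hypot(u[:, None], v[None, :])             # D(u, v): distance to origin
    b = ((d >= d0) & (d <= d1)).astype(float)        # transfer function B(u, v)
    g = b * f                                        # G(u, v) = B(u, v) F(u, v)
    return np.real(np.fft.ifft2(np.fft.ifftshift(g)))
```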
Further, the method for edge detection includes:
performing edge detection on the initial feature map separately with the horizontal and vertical detection operators of the Prewitt operator, and adding the two edge-detection results to obtain the edge-detection feature image I3.
This edge detection method uses the grey-level differences between a pixel and its upper/lower and left/right neighbours, which reach an extremum at edges; it thereby removes some pseudo-edges and smooths noise.
Those skilled in the art will appreciate that "Prewitt operator" is a term commonly used in the art with no standard Chinese translation.
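A short sketch of this step using the standard Prewitt templates (the use of SciPy convolution is an assumption; the patent does not prescribe a library):

```python
import numpy as np
from scipy.ndimage import convolve

# Standard Prewitt templates: horizontal and vertical detection operators.
GX = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
GY = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]], dtype=float)

def prewitt_edges(img: np.ndarray) -> np.ndarray:
    ex = convolve(img.astype(float), GX)   # horizontal edge response
    ey = convolve(img.astype(float), GY)   # vertical edge response
    return ex + ey                         # add the two results to get I3
```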
Further, the method for extracting the building boundary from the single-band feature image comprises:
performing the closing operation on the single-band feature image by the following formula:
$$O_1 = O \bullet B = (O \oplus B) \ominus B$$
where O1 is the feature image after closing, O is the original single-band feature image, B is the structuring element of the closing operation, ⊕ is the dilation operator, and ⊖ is the erosion operator;
performing neighbourhood connected-domain detection on the closed image;
obtaining the maximum connected domain from the detection result, performing raster-to-vector conversion, and taking the obtained vector polygon as the initial building boundary;
filling holes in the initial building boundary, where only holes smaller than 20% of the whole area are filled;
and optimizing the filled boundary through boundary simplification and orthogonalization to obtain the building boundary.
In this scheme, the closing operation reduces the influence of noise and occlusion, and connected-domain detection effectively locates the region corresponding to the building boundary. Moreover, the obtained initial boundary may contain holes caused by factors such as occlusion, yet some holes are real; hole filling therefore overcomes occlusion interference while preserving real holes as far as possible.
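For illustration, a library-based sketch of the closing and maximum-connected-domain steps (assuming OpenCV and a uint8 grayscale input; the patent does not prescribe a library, and the seed-filling variant of connected-domain detection is detailed next):

```python
import cv2
import numpy as np

def building_region(single_band: np.ndarray, ksize: int = 5) -> np.ndarray:
    """Closing (dilation then erosion) followed by keeping the largest
    connected component, per the steps described above."""
    b = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    closed = cv2.morphologyEx(single_band, cv2.MORPH_CLOSE, b)
    binary = (closed > 0).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    # stats[0] is the background; pick the largest foreground component
    # (assumes at least one foreground component exists).
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return (labels == largest).astype(np.uint8)
```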
Further, a seed filling method is adopted to perform neighborhood connected domain detection on the image after the closed operation, and the specific detection method comprises the following steps:
step a, taking all pixels with value 0 in the closed image as background pixels, left unprocessed, and all non-zero pixels as foreground pixels marked 1, obtaining a binary image;
step b, initializing the label label and the stack s of the seed filling method, letting label = 1;
step c, traversing the binary image from the upper left corner, left to right and top to bottom, until a pixel equal to 1 is found and taken as the seed point;
step d, modifying the pixel value of the seed point to the value of label, then pushing all foreground pixels adjacent to the seed point onto the stack s;
step e, popping the top pixel, modifying its value to the value of label, and pushing all foreground pixels adjacent to it onto the stack s;
step f, repeating step e until the stack s is empty;
step g, letting label = label + 1;
and step h, repeating steps c to g until all foreground pixels in the image are marked, obtaining all connected regions in the image.
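The steps above translate directly into a short Python sketch (names are illustrative, not the patented implementation):

```python
import numpy as np

def seed_fill_label(closed_img: np.ndarray) -> np.ndarray:
    """8-neighbourhood connected-domain detection by seed filling (steps a-h)."""
    binary = (closed_img != 0).astype(np.uint8)     # step a: binarization
    labels = np.zeros(binary.shape, dtype=np.int32)
    label = 1                                       # step b
    h, w = binary.shape
    for i in range(h):                              # step c: scan for a seed
        for j in range(w):
            if binary[i, j] == 1 and labels[i, j] == 0:
                stack = [(i, j)]                    # step d: push the seed
                while stack:                        # steps e-f: pop until empty
                    y, x = stack.pop()
                    if labels[y, x] != 0:
                        continue
                    labels[y, x] = label
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and binary[ny, nx] == 1
                                    and labels[ny, nx] == 0):
                                stack.append((ny, nx))
                label += 1                          # step g
    return labels                                   # step h: all regions marked
```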
Further, the method for extracting the door and window boundary on the three-band characteristic image comprises the following steps:
establishing a deep-learning object detection model, with the Faster R-CNN model as the backbone network and the ResNet50 model as the pre-training network;
training the object detection model with labelled door and window samples;
and feeding the slices of the three-band feature image requiring door and window detection into the trained detection model to obtain the detection results of all slices, which are merged to obtain the required door and window boundaries.
The Faster R-CNN model and the ResNet50 model are both existing deep-learning network models and are not described here again.
Further, non-maximum suppression is performed on the detection results of all slices, so that only the highest-confidence detection is retained at any given position on the same plane, ensuring that each position on the facade map holds only one door or window target, consistent with reality.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The method abandons the direct and indirect extraction modes of the prior art and adopts a completely different facade-map extraction idea, so it neither depends on the geometric information of the 3D point cloud nor identifies the facade through building features; it stably and automatically obtains a vectorized facade map from the facade point cloud without auxiliary information such as point intensity or colour.
2. The method reduces the dependence on 3D point cloud quality: the facade map can still be stably extracted when the facade point cloud suffers from poor quality, occlusion, noise or uneven density, markedly improving robustness.
3. The method generates the facade map from the geometric information of the point cloud alone, without acquiring intensity, colour or other information, lowering the requirements on acquisition equipment and further reducing acquisition cost.
4. The method needs little human involvement in use and has strong transferability and universality.
5. Compared with traditional geometry-based methods, the method still achieves good results when data quality is poor (e.g. occlusion, noise, uneven density).
6. The method obtains the initial building boundary through a series of digital image processing techniques and performs hole filling and boundary optimization based on the characteristics of building boundaries; the final boundary is horizontal and vertical, free of fragmented edges and spurious holes, meeting the application requirements for building boundaries.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention. In the drawings:
FIG. 1 is a schematic flow chart of an embodiment of the present invention;
FIG. 2 is a building facade point cloud in an embodiment of the invention;
FIG. 3 is a diagram illustrating an elevation extraction result according to an embodiment of the present invention;
FIG. 4 is a box plot of elevation extraction accuracy indicators during coarse extraction according to an embodiment of the present invention;
FIG. 5 is a box plot of elevation extraction accuracy index during accurate extraction according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to examples and accompanying drawings, and the exemplary embodiments and descriptions thereof are only used for explaining the present invention and are not meant to limit the present invention. In the description of the present application, it is to be understood that the terms "front", "back", "left", "right", "upper", "lower", "vertical", "horizontal", "high", "low", "inner", "outer", etc. indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience in describing the present invention and simplifying the description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed in a particular orientation, and be operated, and thus should not be construed as limiting the scope of the present application.
Example 1:
Fig. 1 shows an elevation map generation method based on a building facade point cloud:
generating an initial feature map based on the building facade point cloud;
performing image enhancement on the initial feature map to obtain a single-band feature image and a three-band feature image;
extracting the building boundary from the single-band feature image and the door and window boundaries from the three-band feature image;
and combining the building boundary with the door and window boundaries to obtain the building elevation map.
Preferably, the single-band feature image in this embodiment is a feature image with exactly one band, and the three-band feature image is composed of three image-enhanced single-band feature images.
Example 2:
a method for generating a facade graph based on a point cloud of a facade of a building mainly comprises the following procedures:
(I) Feature image generation
(1) Initial feature map generation
The building facade point cloud set B(x, y, z) obtained by facade extraction is converted into a point set B′(x′, y′, z′) that takes the corresponding building facade as the reference coordinate system; the conversion formula is as follows:
[The conversion formula appears as an image in the original and is not reproduced here.]
where A and B are coefficients of the general plane equation corresponding to the building facade, and α is the angle between the building facade and the XOZ plane.
The height parameter z′ relative to the facade plane is discarded to obtain the two-dimensional planar point cloud B″(x′, y′), i.e. the 3D point cloud projected onto the corresponding facade. A grid with a specified cell side length is created to divide B″(x′, y′); the number of points in each cell is counted, and the point cloud is rasterized into a single-band two-dimensional image with this count as the pixel value. The areas with pixel value 0 in the two-dimensional image are then masked to obtain the initial feature map I.
(2) Image enhancement
Let the input initial feature map I be of size m × n, stored with 8-bit quantization. The image enhancement process mainly comprises the following steps:
a) histogram equalization
First, the cumulative histogram of the initial feature map I is calculated: hist_c = cumsum(histogram(I)),
where cumsum is the accumulation operator and histogram is the histogram operator.
Second, the histogram-equalization mapping is calculated: cdf = hist_c × (256 / (m × n)), where m and n are the length and width of the initial feature map I.
Finally, the original image is mapped to obtain the histogram-equalized image I1: I1 = cdf[I].
b) Bandpass filtering and histogram equalization
First, band-pass filtering is applied to the initial feature map I:
$$G(u,v) = B(u,v)\,F(u,v), \qquad B(u,v) = \begin{cases} 1, & D_0 \le D(u,v) \le D_1 \\ 0, & \text{otherwise} \end{cases}$$
where G(u, v) is the Fourier transform of the band-pass-filtered output image, F(u, v) is the Fourier transform of the original image containing noise, B(u, v) is the band-pass transfer function, D(u, v) is the Euclidean distance to the origin of the frequency plane, and D0 and D1 are the minimum and maximum cutoff frequencies.
Then, the histogram equalization described in "a) histogram equalization" above is applied to the band-pass-filtered image to obtain the second feature image I2.
c) Prewitt edge detection
The Prewitt operator is a first-order differential edge detector: it uses the grey-level differences between a pixel and its upper, lower, left and right neighbours, which reach an extremum at edges, to detect edges, removing some pseudo-edges and smoothing noise. Its principle is to convolve the image with two directional templates in the image space, one detecting horizontal edges and the other vertical edges. The edge detection operators can be calculated as:
$$G_x = [f(i-1,j+1) + f(i,j+1) + f(i+1,j+1)] - [f(i-1,j-1) + f(i,j-1) + f(i+1,j-1)]$$
$$G_y = [f(i-1,j-1) + f(i-1,j) + f(i-1,j+1)] - [f(i+1,j-1) + f(i+1,j) + f(i+1,j+1)]$$
where G_x and G_y are the horizontal and vertical detection operators, and f(i, j) is the pixel value at row i, column j of the image. From the above formulas, the Prewitt templates are:
$$G_x = \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix}, \qquad G_y = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ -1 & -1 & -1 \end{bmatrix}$$
Edge detection is performed on the initial feature map I with the G_x and G_y operators separately, and the two results are added to obtain the edge-detection feature image I3.
(3) Feature image generation
The images I1, I2 and I3 obtained by the above image enhancement are synthesized into a feature image containing 3 bands, used as the three-band feature image; the histogram-equalized image I1 is used as the single-band feature image.
(II) door and window detection
Door and window detection in this embodiment is performed on the three-band feature image obtained above, using the Faster R-CNN model, one of the more accurate image object detection models. Its key feature is the Region Proposal Network, which generates candidate regions from the feature map produced by the CNN convolutions, replacing methods such as Selective Search and Edge Boxes; this improves detection speed while preserving detection accuracy.
It should be noted that, since the same position on the same plane of a building facade holds only one target, this embodiment applies non-maximum suppression to the detection results, so that only the highest-confidence extraction is retained at each position.
Preferably, this embodiment uses the Faster R-CNN model provided by the Torchvision module as the backbone network, with ResNet50 as the pre-training network. The model is trained with labelled samples to obtain the building door and window detection model used in this embodiment. The feature images requiring door and window detection are sliced and fed into the model for inference; after inference on all slices is completed, the per-slice results are merged to obtain the required door and window boundaries.
The Torchvision module is a module of the PyTorch framework designed for image processing; as those skilled in the art will appreciate, it has no standard Chinese translation.
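A hedged sketch of this setup (the Torchvision entry point and the NMS call are real APIs, but the weights, thresholds and slice handling here are assumptions, not the patent's exact configuration):

```python
import torch
import torchvision
from torchvision.ops import nms

# Faster R-CNN with a ResNet50-FPN backbone from the Torchvision module;
# in practice this would be fine-tuned on labelled door/window samples.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_slice(image: torch.Tensor, iou_thresh: float = 0.5):
    """Detect doors/windows on one feature-image slice; keep only the
    highest-confidence box at each position via non-maximum suppression."""
    with torch.no_grad():
        out = model([image])[0]           # image: (3, H, W) float tensor
    keep = nms(out["boxes"], out["scores"], iou_thresh)
    return out["boxes"][keep], out["scores"][keep], out["labels"][keep]
```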
(III) building boundary extraction
Let the single-band feature image obtained above be O. The building boundary extraction process is as follows:
(1) feature image closing operation
In order to reduce the influence of noise and shielding, the single-waveband feature image is subjected to closed operation:
$$O_1 = O \bullet B = (O \oplus B) \ominus B$$
where O1 is the feature image after closing, O is the original single-band feature image, B is the structuring element of the closing operation, ⊕ is the dilation operator, and ⊖ is the erosion operator.
In this embodiment, the dilation and erosion of the image O are expressed as:
$$O \oplus B = \{\, x \mid B_x \cap O \neq \varnothing \,\}, \qquad O \ominus B = \{\, x \mid B_x \subseteq O \,\}$$
where B_x is the structuring element B translated by x.
(2) Connected component detection
To obtain the region corresponding to the building boundary, 8-neighbourhood connected-domain detection is performed on the closed image using the seed filling method, as follows:
a) Image binarization: all pixels with value 0 in the image are taken as background pixels and left unprocessed; all non-zero pixels are taken as foreground pixels and marked 1, i.e. the binarization result is:
$$O_2(i,j) = \begin{cases} 1, & O_1(i,j) \neq 0 \\ 0, & O_1(i,j) = 0 \end{cases}$$
b) Initialize the label label = 1 and the stack s used to store process data.
c) Select a seed point: traverse the binary image O2 from the upper left corner, left to right and top to bottom, until a pixel O2(i, j) = 1 is found.
d) Take the pixel O2(i, j) = 1 as the seed, modify its pixel value to the value of label, and then push all foreground pixels adjacent to the seed onto the stack s.
e) Pop the top pixel, change its value to the value of label, and push all foreground pixels adjacent to it onto the stack s.
f) Repeat step e until the stack is empty. At this point one connected region of the image O2 has been found, its pixels marked label.
g) Let label = label + 1.
h) Repeat steps c) to g) until all foreground pixels in the image are marked, ending the scan. All connected regions of O2 are thereby obtained; the labelled image is O3.
(3) Building initial boundary acquisition
The maximum connected domain is obtained from O3; raster-to-vector conversion is carried out with the RasterToPolygon tool of the Esri ArcGIS platform, and the obtained vector polygon is taken as the initial building boundary.
(4) Initial boundary data hole filling
Due to factors such as occlusion, holes may exist in the acquired initial boundary data, so hole filling is applied to the initial building boundary using the EliminatePolygonPart tool of the Esri ArcGIS platform. Considering that real holes may exist in the boundary polygon, only holes smaller than 20% of the whole area are filled; holes larger than 20% of the whole area are retained.
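Outside the ArcGIS environment, the same rule can be sketched with Shapely (an assumption of this text, not the embodiment's tooling; "whole area" is taken here as the outer-boundary area):

```python
from shapely.geometry import Polygon

def fill_small_holes(boundary: Polygon, ratio: float = 0.2) -> Polygon:
    """Fill holes smaller than `ratio` of the whole area; keep larger,
    presumably real, holes."""
    whole = Polygon(boundary.exterior.coords).area
    kept = [list(ring.coords) for ring in boundary.interiors
            if Polygon(ring.coords).area >= ratio * whole]
    return Polygon(boundary.exterior.coords, kept)
```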
(5) Boundary optimization
Considering that most building boundaries are straight lines orthogonal to each other, the boundaries of the building data are simplified and orthogonalized with the SimplifyBuilding tool of the Esri ArcGIS platform to complete the optimization.
(IV) Result generation
The boundary-optimized building boundary is combined with the extracted door and window boundaries to obtain the final building elevation map.
In this embodiment, the elevation-map extraction method of the present application is verified on 13 building facades, I to XIII, shown in fig. 2; the results for three typical facades, IV, VI and VII, are used for illustration, with the extraction results shown in fig. 3.
As can be seen from fig. 3, most windows and doors are successfully detected, and the extracted boundaries are mostly horizontal and vertical and match the actual door and window boundaries well. The extraction accuracy of the building boundary is slightly lower, but its general trend is consistent with reality, and the extraction effect meets expectations.
This example further evaluates the window extraction results quantitatively.
In the field of object detection, accuracy is commonly evaluated with indices such as Precision, Recall, F1 score, Accuracy, the Confusion Matrix, Average Precision (AP) and IoU (Intersection over Union). Since window extraction in this embodiment is single-class object detection and the confusion matrix is an index for multi-class detection, the remaining indices (precision, recall, F1 score and average precision, with IoU as the matching criterion) are used for evaluation; the extraction accuracy of each facade's windows is shown in Table 1.
Table 1. Extraction accuracy of each facade's windows
[Table 1 appears as an image in the original; its per-facade values are not recoverable from this text.]
As can be seen from Table 1, at coarse window extraction (minimum IoU = 50%), the overall precision, recall and F1 score reached 0.982, 0.977 and 0.979 respectively, meaning most windows were correctly extracted. At accurate extraction (minimum IoU = 85%), the overall precision, recall and F1 score reached 0.887, 0.882 and 0.884 respectively, meaning accurate edges were obtained for most windows. Meanwhile, at minimum IoU = 85%, the minimum F1 score and average precision are 0.774 and 0.621 respectively, and their averages are 0.990 and 0.827 respectively, so overall accuracy is high.
This embodiment further analyses the extraction accuracy as follows:
Box plots are used to show the extraction accuracy indices for each facade, as shown in fig. 4 and fig. 5.
As can be seen from fig. 4, at coarse extraction the medians of the four accuracy indices all exceed 0.97, the lower quartiles all exceed 0.96, and the medians are close to 1.00;
as can be seen from fig. 5, at accurate extraction the medians of precision, recall and F1 score all exceed 0.90, and the lower quartiles all exceed 0.85.
In summary, this embodiment achieves coarse extraction of all windows and accurate extraction of most windows. Quantitative evaluation of the door extraction results can be verified in the same way, showing coarse extraction of all doors and accurate extraction of most doors. The extraction accuracy of this embodiment therefore meets the application requirements.
Example 3:
In the facade map generation method of any of the above embodiments, the building facade point cloud used as raw data is obtained as follows:
1. Three-dimensional point cloud data of the building are obtained with a laser scanner, and the point cloud is preprocessed.
Because the point cloud data contain a large number of ground points and are dense, the ground points are removed first. To reduce the amount of computation, the point cloud is then translated to the coordinate origin and voxel down-sampling is performed. Finally, statistical outlier removal is applied to the down-sampled data to remove noise points.
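A minimal sketch of this preprocessing with Open3D (the file name and parameter values are placeholders; ground-point removal is omitted as it depends on the data):

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("facade.ply")          # placeholder path
pcd.translate(-pcd.get_center())                     # move to the coordinate origin
pcd = pcd.voxel_down_sample(voxel_size=0.05)         # voxel down-sampling
# Statistical outlier removal on the down-sampled data.
pcd, inlier_idx = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
```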
2. Extraction of potential planes by improved 3D Hough transform
(1) Creating counters and offset copies
When discretizing the parameters of the 3D Hough transform plane, ρ is discretized into the set Q according to the prior art, and θ and φ are discretized into the sets M = {0, s_θ, 2s_θ, …, 2π} and N = {0, s_φ, 2s_φ, …, 2π}. Copies of M and N are then created with every element offset by s_θ/2 and s_φ/2 respectively, defined as the offset copies M′ and N′:
$$M' = \{\, \theta + s_\theta/2 \mid \theta \in M \,\}, \qquad N' = \{\, \varphi + s_\varphi/2 \mid \varphi \in N \,\}$$
where θ is the angle between the plane normal vector of the point cloud and the z axis; φ is the angle between the plane normal vector and the x axis; ρ is the distance from the origin to the point-cloud plane; s_θ is the discretization step of the set M; and s_φ is the discretization step of the set N.
On this basis, the counters A and A′ are created respectively:
$$A = \{\, (\theta_j, \varphi_i, \rho) \mid \theta_j \in M,\ \varphi_i \in N,\ \rho \in Q \,\}, \qquad A' = \{\, (\theta'_j, \varphi'_i, \rho) \mid \theta'_j \in M',\ \varphi'_i \in N',\ \rho \in Q \,\}$$
where θ′ is the copy element of θ offset by s_θ/2; φ′ is the copy element of φ offset by s_φ/2; and the subscripts j and i denote element positions.
(2) 3D high-pass filtering
3D high-pass filtering is applied to the counters A and A′ respectively to remove their low-frequency part and weaken the influence of peak blurring. In this embodiment, the central cell of the 3D high-pass convolution kernel has value 1/2, the values of the remaining cells are determined by inverse-distance weighting according to their distance from the central cell, and the values of the whole kernel sum to 1.
(3) Potential plane acquisition
The filtered counters A and A′ are voted on to obtain the candidate plane sets S(θ, φ, ρ) and S′(θ′, φ′, ρ) that satisfy the conditions; the union S ∪ S′ of the two is used as the final candidate plane set.
3. Facade constraints
These are mainly realized through the coplanar constraint and the vertical-plane constraint.
(1) Coplanar constraint
The coplanar constraint aims to further remove duplicate planes and pseudo-planes caused by excessive point-cloud density, improper algorithm thresholds and peak blurring; it is determined by three features: the dihedral angle between the planes, the plane distance, and the common-point proportion. If two planes p1 and p2 satisfy the following formula, p1 and p2 are treated as the same plane and merged:
$$\left( \arccos\frac{|\boldsymbol{n}_1 \cdot \boldsymbol{n}_2|}{|\boldsymbol{n}_1||\boldsymbol{n}_2|} \le \alpha_{th} \ \wedge\ \max(|\boldsymbol{r}_{12} \cdot \boldsymbol{n}_1|, |\boldsymbol{r}_{12} \cdot \boldsymbol{n}_2|) \le \Delta d_{th} \right) \ \vee\ ComProp(p_1, p_2) \ge cp_{th}$$
where r12 is the distance vector between the feet of the perpendiculars from the origin to planes p1 and p2; n1 and n2 are the plane normal vectors of p1 and p2; ComProp is the operator computing the common-point proportion of the two planes; α_th is the threshold on the dihedral angle between the two planes; Δd_th is the threshold on the distance between the two planes; cp_th is the threshold on the common-point proportion; max is the maximum operator; ∧ is the logical AND operator; and ∨ is the logical OR operator.
(2) Vertical plane constraint
The potential planes include not only building facades but also many other planes. In general, a building facade should be a vertical plane, so applying a vertical-angle constraint to each plane after the coplanar constraint can exclude non-facades. The condition of the vertical-plane constraint is:
$$\arccos\frac{|\boldsymbol{m}' \cdot \boldsymbol{n}'|}{|\boldsymbol{m}'||\boldsymbol{n}'|} \le \alpha_{v,th}$$
where m′ is the plane normal vector of the current plane; n′ is the plane normal vector of a vertical plane; and α_v,th is the vertical-plane constraint threshold.
If a potential plane does not satisfy the vertical-plane constraint, it is discarded directly.
The potential facades are obtained after the coplanar and vertical-plane constraints.
4. Elevation refining
(1) Each potential facade is iterated over, and the point cloud corresponding to each potential facade is obtained with the RANSAC (random sample consensus) algorithm;
(2) each potential facade point cloud is clustered with the HDBSCAN algorithm (Hierarchical Density-Based Spatial Clustering of Applications with Noise, a term of the art with no standard Chinese translation) to obtain the point cloud clusters;
(3) the plane equation of the potential facade corresponding to each point cloud cluster is obtained with the RANSAC algorithm, and the result is taken as a new potential facade;
(4) the potential-facade constraints are applied again to remove duplicate facades and pseudo-planes, yielding the refined building facades.
It should be noted that, since this embodiment is aimed at facade extraction and a building facade is usually vertical, when plane extraction is performed with the RANSAC algorithm, C = 0 is set in the plane equation Ax + By + Cz + D = 0, constraining the plane to be vertical and improving plane-extraction accuracy.
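A minimal RANSAC sketch with the C = 0 constraint (an illustration of the idea, not the embodiment's code): a vertical plane a·x + b·y + d = 0 is determined by two sampled points, and inliers are counted by horizontal point-to-plane distance.

```python
import numpy as np

def ransac_vertical_plane(pts: np.ndarray, thresh: float = 0.05,
                          iters: int = 1000, seed: int = 0):
    """Fit Ax + By + Cz + D = 0 with C = 0 (a vertical plane)."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, np.array([], dtype=int)
    for _ in range(iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        dx, dy = pts[j, 0] - pts[i, 0], pts[j, 1] - pts[i, 1]
        a, b = -dy, dx                       # normal lies in the XY plane
        norm = np.hypot(a, b)
        if norm < 1e-9:
            continue                         # the two points coincide in XY
        a, b = a / norm, b / norm
        d = -(a * pts[i, 0] + b * pts[i, 1])
        dist = np.abs(a * pts[:, 0] + b * pts[:, 1] + d)
        inliers = np.flatnonzero(dist < thresh)
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b, 0.0, d), inliers
    return best_model, best_inliers
```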
5. Elevation boundary calibration
The point cloud after facade refinement is clustered with the HDBSCAN algorithm; then, for each resulting cluster, the facade equation and the corresponding facade point cloud are extracted with the RANSAC algorithm, and the minimum bounding box of the facade point cloud is taken as the facade boundary.
Because HDBSCAN clustering separates the clustered point clouds and may split one facade into several clusters, the "facade constraint" described in step 3 is executed once more on the boundary-calibration result, yielding the bounded building facades and their corresponding point cloud data, which can be used directly for the subsequent facade map generation.
The above method of obtaining the building facade point cloud addresses the problems of peak blurring, the trade-off between accuracy and efficiency, and facade-boundary confusion that arise when the 3D Hough transform is used for building facade extraction; it balances accuracy and efficiency, improves the robustness and accuracy of facade extraction, and overcomes peak blurring and erroneous merging.
The above-mentioned embodiments, objects, technical solutions and advantages of the present invention are further described in detail, it should be understood that the above-mentioned embodiments are only examples of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.

Claims (10)

1. A method for generating a facade graph based on a point cloud of a facade of a building is characterized by comprising the following steps:
generating an initial feature map based on the building facade point cloud;
performing image enhancement on the initial feature map to obtain a single-band feature image and a three-band feature image;
extracting the building boundary from the single-band feature image and the door and window boundaries from the three-band feature image;
and combining the building boundary with the door and window boundaries to obtain the building elevation map.
2. The method for generating the elevation map based on the point cloud of the building elevation as claimed in claim 1, wherein the method for generating the initial feature map based on the point cloud of the building elevation comprises:
converting the set of building facade point clouds into a point set that takes the corresponding building facade as the reference coordinate system;
discarding the height parameter of the points in the set relative to the facade plane to obtain a two-dimensional planar point cloud, i.e. the 3D point cloud projected onto the corresponding facade;
creating a grid, dividing the two-dimensional planar point cloud by the grid, counting the number of points in each cell, and rasterizing the point cloud into a single-band two-dimensional image with the point count as the pixel value;
and masking the areas with pixel value 0 in the single-band two-dimensional image to obtain the initial feature map.
3. The method of claim 1, wherein the method of image enhancement of the initial feature map comprises:
performing histogram equalization on the initial feature map to obtain a first feature image I1;
performing band-pass filtering and then histogram equalization on the initial feature map in sequence to obtain a second feature image I2;
performing edge detection on the initial feature map to obtain an edge-detection feature image I3;
taking I1 as the single-band feature image; and merging I1, I2 and I3 to obtain the three-band feature image.
4. The method of claim 3, wherein the histogram equalization process comprises:
calculating the cumulative histogram hist_c of the feature map being processed;
calculating the histogram-equalization mapping cdf: cdf = hist_c × (2^q / (m × n)), where m and n are the length and width of the feature map being processed and q is the quantization bit depth of the initial feature map;
and mapping the original image through cdf to obtain the histogram-equalized feature image.
5. The method of claim 3, wherein the band-pass filtering is performed by the following equation:
$$G(u,v) = B(u,v)\,F(u,v), \qquad B(u,v) = \begin{cases} 1, & D_0 \le D(u,v) \le D_1 \\ 0, & \text{otherwise} \end{cases}$$
where G(u, v) is the Fourier transform of the band-pass-filtered output image, F(u, v) is the Fourier transform of the original image containing noise, B(u, v) is the band-pass transfer function, D(u, v) is the Euclidean distance to the origin of the frequency plane, and D0 and D1 are the minimum and maximum cutoff frequencies.
6. The method of claim 3, wherein the edge detection method comprises:
respectively using the horizontal and vertical detection operators of the Prewitt operator to perform edge detection on the initial feature map; and adding the two edge-detection results to obtain the edge-detection feature image I3.
7. The method of claim 1, wherein the method of extracting the building boundary from the single-band feature image comprises:
performing the closing operation on the single-band feature image by the following formula:
$$O_1 = O \bullet B = (O \oplus B) \ominus B$$
where O1 is the feature image after closing, O is the original single-band feature image, B is the structuring element of the closing operation, ⊕ is the dilation operator, and ⊖ is the erosion operator;
performing neighborhood connected domain detection on the image after the closed operation;
acquiring the maximum connected domain from the neighbourhood connected-domain detection result and performing raster-to-vector conversion, taking the obtained vector polygon as the initial building boundary;
filling holes in the initial building boundary, where only holes smaller than 20% of the whole area are filled;
and optimizing the filled boundary through boundary simplification and orthogonalization to obtain the building boundary.
8. The method for generating the elevation map based on the point cloud of the facade of the building according to claim 7, wherein a neighborhood connected domain detection is performed on the image after the closed operation by adopting a seed filling method, and the specific detection method comprises the following steps:
step a, taking all pixels with value 0 in the closed image as background pixels, left unprocessed, and all non-zero pixels as foreground pixels marked 1, obtaining a binary image;
step b, initializing the label label and the stack s of the seed filling method, letting label = 1;
step c, traversing the binary image from the upper left corner, left to right and top to bottom, until a pixel equal to 1 is found and taken as the seed point;
step d, modifying the pixel value of the seed point to the value of label, then pushing all foreground pixels adjacent to the seed point onto the stack s;
step e, popping the top pixel, modifying its value to the value of label, and pushing all foreground pixels adjacent to it onto the stack s;
step f, repeating step e until the stack s is empty;
step g, letting label = label + 1;
and step h, repeating steps c to g until all foreground pixels in the image are marked, obtaining all connected regions in the image.
9. The method of claim 1, wherein the method of extracting the door and window boundaries on the three-band feature image comprises:
establishing a deep-learning object detection model, with the Faster R-CNN model as the backbone network and the ResNet50 model as the pre-training network;
training the object detection model with labelled door and window samples;
and feeding the slices of the three-band feature image requiring door and window detection into the trained detection model to obtain the detection results of all slices, which are merged to obtain the required door and window boundaries.
10. The method of claim 9, further comprising performing a non-maximum suppression operation on the detection results of all slices, such that only the detection result with the highest confidence level is retained at the same position of the same plane.
CN202210659630.4A 2022-06-13 2022-06-13 Elevation map generation method based on building elevation point cloud Active CN114742968B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210659630.4A CN114742968B (en) 2022-06-13 2022-06-13 Elevation map generation method based on building elevation point cloud

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210659630.4A CN114742968B (en) 2022-06-13 2022-06-13 Elevation map generation method based on building elevation point cloud

Publications (2)

Publication Number Publication Date
CN114742968A true CN114742968A (en) 2022-07-12
CN114742968B CN114742968B (en) 2022-08-19

Family

ID=82287967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210659630.4A Active CN114742968B (en) 2022-06-13 2022-06-13 Elevation map generation method based on building elevation point cloud

Country Status (1)

Country Link
CN (1) CN114742968B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937461A (en) * 2022-11-16 2023-04-07 泰瑞数创科技(北京)股份有限公司 Multi-source fusion model construction and texture generation method, device, medium and equipment
CN116310197A (en) * 2023-05-11 2023-06-23 四川省非物质文化遗产保护中心 Three-dimensional model construction method, device and storage medium for non-genetic building

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150063683A1 (en) * 2013-08-28 2015-03-05 Autodesk, Inc. Building datum extraction from laser scanning data
CN109816708A (en) * 2019-01-30 2019-05-28 北京建筑大学 Building texture blending method based on oblique aerial image
CN110910387A (en) * 2019-10-09 2020-03-24 西安理工大学 Point cloud building facade window extraction method based on significance analysis
CN111612806A (en) * 2020-01-10 2020-09-01 江西理工大学 Building facade window extraction method and device
CN113256813A (en) * 2021-07-01 2021-08-13 西南石油大学 Constrained building facade orthophoto map extraction method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150063683A1 (en) * 2013-08-28 2015-03-05 Autodesk, Inc. Building datum extraction from laser scanning data
CN109816708A (en) * 2019-01-30 2019-05-28 北京建筑大学 Building texture blending method based on oblique aerial image
CN110910387A (en) * 2019-10-09 2020-03-24 西安理工大学 Point cloud building facade window extraction method based on significance analysis
CN111612806A (en) * 2020-01-10 2020-09-01 江西理工大学 Building facade window extraction method and device
CN113256813A (en) * 2021-07-01 2021-08-13 西南石油大学 Constrained building facade orthophoto map extraction method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Song Yongcun, "Research on multi-view laser point cloud data fusion and 3D modelling methods", Bulletin of Surveying and Mapping *
Li Yao, "Research on automatic reconstruction of LoD3 urban building models based on multi-source LiDAR data", China Master's Theses Full-text Database, Basic Sciences (monthly) *
Liang Yan et al., "Building facade reconstruction constrained by line features of close-range image sequences", Science of Surveying and Mapping *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937461A (en) * 2022-11-16 2023-04-07 泰瑞数创科技(北京)股份有限公司 Multi-source fusion model construction and texture generation method, device, medium and equipment
CN115937461B (en) * 2022-11-16 2023-09-05 泰瑞数创科技(北京)股份有限公司 Multi-source fusion model construction and texture generation method, device, medium and equipment
CN116310197A (en) * 2023-05-11 2023-06-23 四川省非物质文化遗产保护中心 Three-dimensional model construction method, device and storage medium for non-genetic building
CN116310197B (en) * 2023-05-11 2023-08-25 四川省非物质文化遗产保护中心 Three-dimensional model construction method, device and storage medium for non-genetic building

Also Published As

Publication number Publication date
CN114742968B (en) 2022-08-19

Similar Documents

Publication Publication Date Title
CN112489212B (en) Intelligent building three-dimensional mapping method based on multi-source remote sensing data
CN115861135B (en) Image enhancement and recognition method applied to panoramic detection of box body
CN109522908B (en) Image significance detection method based on region label fusion
CN114742968B (en) Elevation map generation method based on building elevation point cloud
CN101901343B (en) Remote sensing image road extracting method based on stereo constraint
WO2018107939A1 (en) Edge completeness-based optimal identification method for image segmentation
Brandtberg et al. Automated delineation of individual tree crowns in high spatial resolution aerial images by multiple-scale analysis
CN110309781B (en) House damage remote sensing identification method based on multi-scale spectrum texture self-adaptive fusion
CN103258203B (en) The center line of road extraction method of remote sensing image
Ma et al. Two graph theory based methods for identifying the pectoral muscle in mammograms
CN106778668B (en) A kind of method for detecting lane lines of robust that combining RANSAC and CNN
Friedman et al. Online detection of repeated structures in point clouds of urban scenes for compression and registration
CN110047036B (en) Polar grid-based ground laser scanning data building facade extraction method
CN114742957B (en) Building facade extraction method based on point cloud data
CN106327451A (en) Image restorative method of ancient animal fossils
CN114387288A (en) Single standing tree three-dimensional information extraction method based on vehicle-mounted laser radar point cloud data
Ouma et al. Urban features recognition and extraction from very-high resolution multi-spectral satellite imagery: a micro–macro texture determination and integration framework
CN110675396A (en) Remote sensing image cloud detection method, device and equipment and computer readable storage medium
CN106780718A (en) A kind of three-dimensional rebuilding method of paleontological fossil
CN117292076A (en) Dynamic three-dimensional reconstruction method and system for local operation scene of engineering machinery
CN115063698A (en) Automatic identification and information extraction method and system for slope surface deformation crack
CN117611485B (en) Three-dimensional core permeability prediction method based on space-time diagram neural network
Peng et al. Incorporating generic and specific prior knowledge in a multiscale phase field model for road extraction from VHR images
Jiang et al. Semi-automatic building extraction from high resolution imagery based on segmentation
CN111091071A (en) Underground target detection method and system based on ground penetrating radar hyperbolic wave fitting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant