CN113569856B - Model semantic segmentation method for actual measurement and laser radar - Google Patents


Info

Publication number
CN113569856B
Authority
CN
China
Prior art keywords
voxel
voxels
point cloud
cloud data
data points
Prior art date
Legal status
Active
Application number
CN202110789947.5A
Other languages
Chinese (zh)
Other versions
CN113569856A (en)
Inventor
李辉
金海建
Current Assignee
Angrui Hangzhou Information Technology Co ltd
Original Assignee
Angrui Hangzhou Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Angrui Hangzhou Information Technology Co ltd
Priority to CN202110789947.5A
Publication of CN113569856A
Application granted
Publication of CN113569856B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses a model semantic segmentation method and a laser radar for actual measurement of real quantities. The model semantic segmentation method comprises the following steps: acquiring three-dimensional point cloud data of a target area; voxelizing the three-dimensional point cloud data to obtain voxels, wherein each voxel comprises a plurality of point cloud data points; acquiring expression information of each voxel; acquiring the adjacent relation of the voxels by using the expression information; acquiring the connection relations of all voxels according to the adjacent relation; dividing the voxels into clusters according to the morphology of the point cloud data points in each voxel and the connection relation to obtain voxel groups; and acquiring semantic information of the voxel groups according to the relations and positions among the voxel groups. The model semantic segmentation method and the laser radar can turn the raw scanned spatial data into structured, normalized information, provide a basis for further calculation and processing of the model, raise the level of informatization in the construction industry, make acceptance inspection and rectification of walls more convenient for users, and speed up construction.

Description

Model semantic segmentation method for actual measurement and laser radar
Technical Field
The invention relates to a model semantic segmentation method for actual measurement and a laser radar.
Background
Actual measurement of real quantities refers to a method in which measuring tools are used on site to test, measure, and truthfully reflect the quality data of a product. In accordance with the relevant quality acceptance standards, errors in the engineering quality data are measured and controlled so that they stay within the range permitted by national residential construction standards.
The project development stages involving actual measurement mainly include the main structure stage, the masonry stage, the plastering stage, the equipment installation stage, and the finishing stage. The scope of measurement covers concrete structures, masonry works, plastering works, waterproofing works, door and window works, painting works, finishing works, and the like.
Existing tools for actual measurement have a single function and are inefficient for measurement and rectification.
Disclosure of Invention
The invention aims to overcome the defects of existing actual measurement tools, namely their single function and low efficiency in measurement and rectification, and provides a model semantic segmentation method and a laser radar for actual measurement of real quantities that can turn the raw scanned spatial data into structured, normalized information, provide a basis for further calculation and processing of the model, raise the level of informatization in the construction industry, make acceptance inspection and rectification of walls more convenient for users, and speed up construction.
The invention solves the technical problems by the following technical scheme:
the model semantic segmentation method for the actual measurement real quantity is characterized by comprising the following steps of:
acquiring three-dimensional point cloud data of a target area;
voxelized three-dimensional point cloud data to obtain voxels, wherein each voxel comprises a plurality of point cloud data points;
Acquiring expression information of each voxel;
Acquiring the adjacent relation of the voxels by using the expression information;
acquiring connection relations of all voxels according to the adjacent relation;
Dividing the voxel clusters according to the morphology of the point cloud data points in each voxel and the connection relation to obtain voxel groups;
and acquiring semantic information of the voxel groups according to the relation and the position among the voxel groups.
Preferably, the expression information is used for recording the adjacent relation and the connection relation, all voxels are of the same size, and the expression information comprises voxel center point coordinates or centroid point coordinates.
Preferably, for a target voxel, the target voxel is a cuboid; the adjacent relation includes the relations between the target voxel and the 26 adjacent voxels surrounding it, and the adjacent relation is expressed using the expression information.
Preferably, dividing the voxels into clusters according to the morphology of the point cloud data points in each voxel and the connection relation to obtain voxel groups comprises:
for a seed voxel, acquiring the normal vector of the surface fitted to the point cloud data points of the seed voxel;
according to the connection relation, starting from the seed voxel, acquiring the voxels whose normal vectors are similar to that of the seed voxel and assigning them to the same class as the seed voxel;
and dividing all voxels by class to obtain the voxel groups.
Preferably, the model semantic segmentation method comprises the following steps:
selecting a seed voxel, wherein the normal vectors of the adjacent voxels of the seed voxel are similar to the normal vector of the seed voxel.
Preferably, the model semantic segmentation method comprises the following steps:
acquiring an overall voxel plane according to the coordinates of the voxels;
and acquiring the voxel at the center of the voxel plane as the seed voxel.
Preferably, the model semantic segmentation method comprises the following steps:
for a voxel, connecting the point cloud data points in the voxel into a plurality of triangles;
judging whether any two triangles under the same voxel have an included angle larger than a preset value; if not, taking the normal vector of the plane in which the triangles lie as the normal vector of the plane formed by the point cloud data points of the voxel.
Preferably, the expression information comprises quality information, the quality information of a voxel is high when all the point cloud data points in the voxel can be fitted to one plane and low when they cannot, and the model semantic segmentation method comprises:
for a low-quality voxel, searching for the adjacent voxels of the low-quality voxel according to the adjacent relation;
for each point cloud data point in the low-quality voxel, searching the low-quality voxel and its adjacent voxels for adjacent data points whose distance to that point cloud data point is smaller than a preset value;
and merging the point cloud data points in the low-quality voxel into the high-quality voxel in which the associated adjacent data points reside.
Preferably, a point cloud data point in a low-quality voxel is associated with an adjacent data point if the point cloud data point can be fitted to the same plane as that adjacent data point.
The invention also provides a laser radar, characterized in that the laser radar is used for implementing the model semantic segmentation method described above.
On the basis of conforming to the common knowledge in the field, the above preferred conditions can be arbitrarily combined to obtain the preferred examples of the invention.
The invention has the following positive effects:
the model semantic segmentation method and the laser radar can turn the raw scanned spatial data into structured, normalized information, provide a basis for further calculation and processing of the model, raise the level of informatization in the construction industry, make acceptance inspection and rectification of walls more convenient for users, and speed up construction.
Drawings
Fig. 1 is a flowchart of a model semantic segmentation method according to embodiment 1 of the present invention.
Fig. 2 is a flowchart of a model semantic segmentation method according to embodiment 1 of the present invention.
Detailed Description
The invention is further illustrated by means of the following examples, which are not intended to limit the scope of the invention.
Example 1
The present embodiment provides a lidar for actual measurement of real quantities.
The laser radar comprises a scanning component and a calculating module.
In this embodiment, the data are processed and calculated directly after the laser radar scan; in other embodiments, the three-dimensional point cloud data may also be processed on a tablet or a PC.
The scanning component is used for scanning three-dimensional point cloud data of a target area.
The computing module is used for acquiring three-dimensional point cloud data of a target area;
the computing module is used for voxelizing the three-dimensional point cloud data to obtain voxels, and each voxel comprises a plurality of point cloud data points;
In this embodiment, the scanned point cloud is first voxelized. The point cloud is merely a set of points in three-dimensional space: it is spatially discontinuous and discrete, whereas a voxel is the 3D counterpart of a pixel, a quantized cube of fixed size. Each voxel unit has a fixed size and fixed coordinates.
The center point or centroid point within each cube (voxel) is used to represent the corresponding voxel cube.
The data set formed by all center points or centroid points is the voxel cloud data.
A voxel may contain many point cloud data points of the original three-dimensional point cloud data, so using voxels downsamples the large original point cloud: the amount of point cloud data is greatly reduced while the surface shape features and geometric structure of the data are preserved, which speeds up the system's processing of the point cloud data.
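
As a concrete illustration of the voxelization just described, the following minimal sketch (assuming the raw scan is an N x 3 NumPy array and a fixed voxel edge length is chosen; the function and field names are hypothetical, not the patented implementation) bins the points into fixed-size cubes and keeps both the cube center and the centroid as representative points:

    import numpy as np
    from collections import defaultdict

    def voxelize(points: np.ndarray, voxel_size: float) -> dict:
        """Group an (N, 3) point array into fixed-size cubic voxels.

        Returns a dict mapping integer grid indices (i, j, k) to the points in
        that voxel, the geometric center of the cube, and the centroid of the points.
        """
        origin = points.min(axis=0)                                   # anchor of the voxel grid
        indices = np.floor((points - origin) / voxel_size).astype(int)
        buckets = defaultdict(list)
        for idx, p in zip(map(tuple, indices), points):
            buckets[idx].append(p)
        voxels = {}
        for idx, pts in buckets.items():
            pts = np.asarray(pts)
            voxels[idx] = {
                "points": pts,
                "center": origin + (np.array(idx) + 0.5) * voxel_size,  # fixed-coordinate representative
                "centroid": pts.mean(axis=0),                           # data-driven representative
            }
        return voxels

The set of all "center" (or "centroid") entries is the downsampled voxel cloud referred to above.
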
Acquiring expression information of each voxel;
the expression information in this embodiment includes voxel center point coordinates.
Acquiring the adjacent relation of the voxels by using the expression information;
acquiring connection relations of all voxels according to the adjacent relation;
dividing the voxels into clusters according to the morphology of the point cloud data points in each voxel and the connection relation to obtain voxel groups;
The cluster segmentation may be performed as follows: voxels lying on the same surface are identified from the coordinates in the expression information and aggregated into one class, thereby realizing wall segmentation of the model.
And acquiring semantic information of the voxel groups according to the relation and the position among the voxel groups.
The three-dimensional point cloud carries orientation information from the scan, and this information can be used to determine which surfaces are walls and which are the ceiling and the floor, so that the meaning of each segmented voxel group can be judged.
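
The embodiment does not spell out the labeling rule, but one plausible reading of the above paragraph is to classify each voxel group by the orientation of its fitted normal together with its height: a near-vertical normal suggests floor or ceiling (distinguished by elevation), a near-horizontal normal suggests a wall. A minimal sketch under that assumption (the thresholds and names are illustrative):

    import numpy as np

    def label_group(normal: np.ndarray, mean_z: float, floor_z: float, ceil_z: float,
                    vertical_tol_deg: float = 15.0) -> str:
        """Assign a coarse semantic label to a voxel group from its fitted normal."""
        n = normal / np.linalg.norm(normal)
        angle_to_up = np.degrees(np.arccos(np.clip(abs(n[2]), 0.0, 1.0)))  # 0 deg: normal points up or down
        if angle_to_up < vertical_tol_deg:                                 # horizontal surface
            return "floor" if abs(mean_z - floor_z) < abs(mean_z - ceil_z) else "ceiling"
        if angle_to_up > 90.0 - vertical_tol_deg:                          # vertical surface
            return "wall"
        return "other"
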
The expression information is used for recording the adjacent relation and the connection relation, all voxels are of the same size, and the expression information comprises voxel center point coordinates or centroid point coordinates.
For a target voxel, the target voxel is a cuboid; the adjacent relation comprises the relations between the target voxel and the 26 adjacent voxels surrounding it, and the adjacent relation is expressed using the expression information.
After the voxels are established, their neighborhood relations are established. After voxelization of the point cloud data points, three topological neighborhood structures exist in space: 6-adjacency, 18-adjacency, and 26-adjacency. This embodiment uses the 26-neighborhood relation; knowing the neighborhood relation of every voxel amounts to knowing the connection relations of all voxels in space.
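
Since all voxels live on one grid, the 26-neighborhood can be read directly from the integer grid indices: two voxels are adjacent when each index component differs by at most one. A minimal sketch, reusing the hypothetical (i, j, k) keys of the voxelization sketch above:

    from itertools import product

    def neighbors_26(index, occupied):
        """Return the occupied voxels among the 26 grid cells surrounding `index`."""
        i, j, k = index
        found = []
        for di, dj, dk in product((-1, 0, 1), repeat=3):
            if (di, dj, dk) == (0, 0, 0):
                continue                                # skip the voxel itself
            candidate = (i + di, j + dj, k + dk)
            if candidate in occupied:
                found.append(candidate)
        return found

    def adjacency(voxels):
        """26-adjacency (and hence connectivity) for every occupied voxel index."""
        occupied = set(voxels)
        return {idx: neighbors_26(idx, occupied) for idx in occupied}
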
Specifically, the computing module is configured to:
for a seed voxel, acquiring the normal vector of the surface fitted to the point cloud data points of the seed voxel;
according to the connection relation, starting from the seed voxel, acquiring the voxels whose normal vectors are similar to that of the seed voxel and assigning them to the same class as the seed voxel;
and dividing all voxels by class to obtain the voxel groups.
In this embodiment, two normal vectors are considered similar when the included angle between them is smaller than a preset angle.
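
Combining the connection relation with this similarity test, the class of a seed voxel can be grown by a plain breadth-first traversal that accepts a neighbor whenever its fitted normal stays within the preset angle of the seed normal. A minimal sketch under these assumptions, where `normals` maps a voxel index to a unit normal (or None when no reliable fit exists) and the names are hypothetical:

    import numpy as np
    from collections import deque

    def grow_region(seed, normals, adj, max_angle_deg=10.0):
        """Collect all voxels reachable from `seed` whose normals stay within
        `max_angle_deg` of the seed normal."""
        seed_normal = normals[seed]
        cos_thresh = np.cos(np.radians(max_angle_deg))
        group, queue = {seed}, deque([seed])
        while queue:
            current = queue.popleft()
            for nb in adj[current]:
                nb_normal = normals.get(nb)
                if nb in group or nb_normal is None:
                    continue
                if abs(np.dot(seed_normal, nb_normal)) >= cos_thresh:  # similar orientation
                    group.add(nb)
                    queue.append(nb)
        return group
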
The computing module is further configured to:
selecting a seed voxel, wherein the normal vectors of the adjacent voxels of the seed voxel are similar to the normal vector of the seed voxel.
In this embodiment, a seed voxel is selected such that it has adjacent voxels and the normal vectors of those adjacent voxels are similar to its own normal vector.
Specifically, the computing module is configured to:
acquiring an overall voxel plane according to the coordinates of the voxels;
and acquiring the voxel at the center of the voxel plane as the seed voxel.
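
One way to read this seed-selection rule (an assumption, since the embodiment does not detail it) is to fit a plane to the voxel center coordinates and take the voxel closest to the center of that plane as the seed. A minimal sketch:

    import numpy as np

    def pick_seed(centers: dict):
        """`centers` maps a voxel index to its 3D center coordinate."""
        indices = list(centers)
        pts = np.stack([centers[i] for i in indices])
        centroid = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - centroid)                 # best-fit plane through the centroid
        normal = vt[-1]
        offsets = pts - centroid
        in_plane = offsets - np.outer(offsets @ normal, normal)  # project offsets onto that plane
        return indices[int(np.argmin(np.linalg.norm(in_plane, axis=1)))]
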
Further, the computing module is configured to:
for a voxel, connecting the point cloud data points in the voxel into a plurality of triangles;
judging whether any two triangles under the same voxel have an included angle larger than a preset value; if not, taking the normal vector of the plane in which the triangles lie as the normal vector of the plane formed by the point cloud data points of the voxel.
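
A minimal sketch of this check, assuming the triangles are built from consecutive point triples (the embodiment does not fix a triangulation scheme, so this choice and the names are illustrative); returning None corresponds to the low-quality case introduced below:

    import numpy as np

    def voxel_normal(points: np.ndarray, max_angle_deg: float = 15.0):
        """Estimate a voxel normal from triangles over its points, or return None
        when two triangle normals differ by more than `max_angle_deg`."""
        normals = []
        for a, b, c in zip(points[0::3], points[1::3], points[2::3]):
            n = np.cross(b - a, c - a)
            if np.linalg.norm(n) > 1e-12:                # skip degenerate triangles
                normals.append(n / np.linalg.norm(n))
        if not normals:
            return None
        cos_thresh = np.cos(np.radians(max_angle_deg))
        for i in range(len(normals)):
            for j in range(i + 1, len(normals)):
                if abs(np.dot(normals[i], normals[j])) < cos_thresh:
                    return None                          # included angle too large: not one plane
        mean = np.mean(normals, axis=0)                  # all triangles agree: average their normals
        return mean / np.linalg.norm(mean)
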
Further, the expression information comprises quality information; the quality information of a voxel is high when all the point cloud data points in the voxel can be fitted to one plane and low when they cannot. The calculation module is specifically configured to:
for a low-quality voxel, search for the adjacent voxels of the low-quality voxel according to the adjacent relation; the adjacent voxels include the immediate neighbors of the low-quality voxel and may also include the neighbors of those neighbors, the search depth being chosen according to a preset value;
for each point cloud data point in the low-quality voxel, search the low-quality voxel and its adjacent voxels for adjacent data points whose distance to that point cloud data point is smaller than a preset value;
and merge the point cloud data points in the low-quality voxel into the high-quality voxel in which the associated adjacent data points reside.
Low-quality voxels typically lie in corner regions.
If a point cloud data point in a low-quality voxel can be fitted to the same plane as an adjacent data point, that point cloud data point is determined to be associated with the adjacent data point.
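
A minimal sketch of this merging step, assuming that the distance threshold and the point-to-plane tolerance are free parameters and that "fitted to the same plane" is tested as the distance from the point to the neighboring voxel's fitted plane (the data layout and names follow the earlier hypothetical sketches):

    import numpy as np

    def merge_low_quality(low_idx, voxels, adj, normals, max_dist=0.05, plane_tol=0.01):
        """Move points of a low-quality voxel into adjacent high-quality voxels whose
        fitted plane they lie on; return the points that could not be merged."""
        unmerged = []
        for p in voxels[low_idx]["points"]:
            merged = False
            for nb in adj[low_idx]:
                n = normals.get(nb)
                if n is None:                                        # neighbor is itself low quality
                    continue
                nb_pts = voxels[nb]["points"]
                if np.linalg.norm(nb_pts - p, axis=1).min() > max_dist:
                    continue                                         # no adjacent data point close enough
                if abs(np.dot(p - voxels[nb]["centroid"], n)) < plane_tol:
                    voxels[nb]["points"] = np.vstack([nb_pts, p])    # associated: merge into this voxel
                    merged = True
                    break
            if not merged:
                unmerged.append(p)
        return unmerged

The points left unmerged feed directly into the wall-explosion-point check described next.
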
For a low-quality voxel, the percentage of its point cloud data points that were merged into high-quality voxels, relative to the total number of point cloud data points in the low-quality voxel, is calculated; if this percentage is smaller than a preset value, the low-quality voxel is determined to be a wall explosion point.
For a low-quality voxel that is a wall explosion point, its low-quality adjacent voxels are searched according to the adjacent relation. For a low-quality data point in the wall explosion point voxel and a low-quality data point in a low-quality adjacent voxel, it is judged whether a high-quality data point exists between them; if not, the low-quality adjacent voxel is regarded as belonging to the same wall explosion point.
These adjacent voxels are likewise judged by the percentage of point cloud data points merged into high-quality voxels relative to the total number of point cloud data points they contain.
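
A minimal sketch of the wall-explosion-point rule, assuming that the merge-ratio threshold and the segment tolerance are free parameters and that "a high-quality data point exists between two low-quality data points" is tested as proximity to the segment joining them (the names are hypothetical):

    import numpy as np

    def is_explosion_voxel(total_points: int, merged_points: int, min_ratio: float = 0.5) -> bool:
        """Flag a low-quality voxel as a wall explosion point when too few of its
        points could be merged into surrounding high-quality voxels."""
        return total_points > 0 and merged_points / total_points < min_ratio

    def point_between(a: np.ndarray, b: np.ndarray, high_quality_points: np.ndarray,
                      tol: float = 0.02) -> bool:
        """True if any high-quality point lies close to the segment from `a` to `b`."""
        ab = b - a
        length_sq = float(ab @ ab)
        if length_sq == 0.0:
            return False
        t = np.clip((high_quality_points - a) @ ab / length_sq, 0.0, 1.0)  # segment parameter per point
        closest = a + t[:, None] * ab
        return bool((np.linalg.norm(high_quality_points - closest, axis=1) < tol).any())

Two adjacent low-quality voxels are grouped into the same wall explosion point whenever point_between finds no high-quality point separating any pair of their data points.
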
Referring to fig. 1, with the above laser radar, this embodiment further provides a model semantic segmentation method, including:
step 100, acquiring three-dimensional point cloud data of a target area;
step 101, voxelizing the three-dimensional point cloud data to obtain voxels, wherein each voxel comprises a plurality of point cloud data points;
step 102, acquiring expression information of each voxel;
step 103, acquiring the adjacent relation of the voxels by using the expression information;
step 104, acquiring the connection relations of all voxels according to the adjacent relation;
step 105, dividing the voxels into clusters according to the morphology of the point cloud data points in each voxel and the connection relation to obtain voxel groups;
and step 106, acquiring semantic information of the voxel groups according to the relations and positions among the voxel groups.
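
Tying steps 100 to 106 together, a minimal driver might look as follows; it simply chains the hypothetical helpers sketched earlier in this description and illustrates the flow of fig. 1, not the patented implementation:

    import numpy as np

    def segment(points: np.ndarray, voxel_size: float = 0.1, max_angle_deg: float = 10.0):
        voxels = voxelize(points, voxel_size)            # steps 100-101: scan data to voxels
        # step 102: the expression information (centers/centroids) is stored inside `voxels`
        adj = adjacency(voxels)                          # steps 103-104: adjacency and connectivity
        normals = {i: voxel_normal(v["points"]) for i, v in voxels.items()}
        groups, assigned = [], set()                     # step 105: cluster segmentation
        for seed, n in normals.items():
            if n is None or seed in assigned:
                continue
            group = grow_region(seed, normals, adj, max_angle_deg)
            assigned |= group
            groups.append(group)
        zs = [v["centroid"][2] for v in voxels.values()]
        labels = []                                      # step 106: semantic information per group
        for group in groups:
            normal = normals[next(iter(group))]
            mean_z = float(np.mean([voxels[i]["centroid"][2] for i in group]))
            labels.append(label_group(normal, mean_z, min(zs), max(zs)))
        return groups, labels
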
Specifically, the expression information is used for recording the adjacent relation and the connection relation, all voxels are of the same size, and the expression information comprises voxel center point coordinates or centroid point coordinates.
Wherein, for a target voxel, the target voxel is a cuboid; the adjacent relation comprises the relations between the target voxel and the 26 adjacent voxels surrounding it, and the adjacent relation is expressed using the expression information.
Step 105 specifically includes:
for a seed voxel, acquiring the normal vector of the surface fitted to the point cloud data points of the seed voxel;
according to the connection relation, starting from the seed voxel, acquiring the voxels whose normal vectors are similar to that of the seed voxel and assigning them to the same class as the seed voxel;
and dividing all voxels by class to obtain the voxel groups.
The seed voxel serves as the starting point; from it, every voxel is reached and evaluated by expanding along the connection relations.
Step 105 is preceded by:
selecting a seed voxel, wherein the normal vectors of the adjacent voxels of the seed voxel are similar to the normal vector of the seed voxel.
The seed voxel selection comprises the following steps:
acquiring an overall voxel plane according to the coordinates of the voxels;
and acquiring the voxel at the center of the voxel plane as the seed voxel.
The model semantic segmentation method further comprises the following steps:
for a voxel, connecting the point cloud data points in the voxel into a plurality of triangles;
judging whether any two triangles under the same voxel have an included angle larger than a preset value; if not, taking the normal vector of the plane in which the triangles lie as the normal vector of the plane formed by the point cloud data points of the voxel.
Further, the expression information comprises quality information; the quality information of a voxel is high when all the point cloud data points in the voxel can be fitted to one plane and low when they cannot. Step 105 further comprises:
step 1051, for a low-quality voxel, searching for the adjacent voxels of the low-quality voxel according to the adjacent relation;
step 1052, for each point cloud data point in the low-quality voxel, searching the low-quality voxel and its adjacent voxels for adjacent data points whose distance to that point cloud data point is smaller than a preset value;
step 1053, merging the point cloud data points in the low-quality voxel into the high-quality voxel in which the associated adjacent data points reside.
The point cloud data points in the low-quality voxel are associated with the adjacent data points if they can be fitted to the same plane as those adjacent data points.
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that these are by way of example only, and the scope of the invention is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the principles and spirit of the invention, but such changes and modifications fall within the scope of the invention.

Claims (8)

1. A model semantic segmentation method for actual measurement of real quantities, characterized in that the model semantic segmentation method comprises:
acquiring three-dimensional point cloud data of a target area;
voxelizing the three-dimensional point cloud data to obtain voxels, wherein each voxel comprises a plurality of point cloud data points;
acquiring expression information of each voxel;
acquiring the adjacent relation of the voxels by using the expression information;
acquiring the connection relations of all voxels according to the adjacent relation;
dividing the voxels into clusters according to the morphology of the point cloud data points in each voxel and the connection relation to obtain voxel groups;
acquiring semantic information of the voxel groups according to the relations and positions among the voxel groups;
wherein the expression information comprises quality information, the quality information of a voxel is high when all the point cloud data points in the voxel can be fitted to one plane and low when they cannot, and the model semantic segmentation method comprises:
for a low-quality voxel, searching for the adjacent voxels of the low-quality voxel according to the adjacent relation;
for each point cloud data point in the low-quality voxel, searching the low-quality voxel and its adjacent voxels for adjacent data points whose distance to that point cloud data point is smaller than a preset value;
and merging the point cloud data points in the low-quality voxel into the high-quality voxel in which the associated adjacent data points reside, wherein a point cloud data point in the low-quality voxel is associated with an adjacent data point if it can be fitted to the same plane as that adjacent data point.
2. The model semantic segmentation method according to claim 1, wherein the expression information is used for recording the adjacent relation and the connection relation, all voxels are of the same size, and the expression information comprises voxel center point coordinates or centroid point coordinates.
3. The model semantic segmentation method according to claim 2, wherein for a target voxel, the target voxel is a cuboid, the adjacent relation comprises the relations between the target voxel and the 26 adjacent voxels surrounding it, and the adjacent relation is expressed using the expression information.
4. The model semantic segmentation method according to claim 1, wherein dividing the voxels into clusters according to the morphology of the point cloud data points in each voxel and the connection relation to obtain voxel groups comprises:
for a seed voxel, acquiring the normal vector of the surface fitted to the point cloud data points of the seed voxel;
according to the connection relation, starting from the seed voxel, acquiring the voxels whose normal vectors are similar to that of the seed voxel and assigning them to the same class as the seed voxel;
and dividing all voxels by class to obtain the voxel groups.
5. The model semantic segmentation method according to claim 4, characterized in that the model semantic segmentation method comprises:
selecting a seed voxel, wherein the normal vectors of the adjacent voxels of the seed voxel are similar to the normal vector of the seed voxel.
6. The model semantic segmentation method according to claim 4, characterized in that the model semantic segmentation method comprises:
acquiring an overall voxel plane according to the coordinates of the voxels;
and acquiring the voxel at the center of the voxel plane as the seed voxel.
7. The model semantic segmentation method according to claim 4, characterized in that the model semantic segmentation method comprises:
for a voxel, connecting the point cloud data points in the voxel into a plurality of triangles;
judging whether any two triangles under the same voxel have an included angle larger than a preset value; if not, taking the normal vector of the plane in which the triangles lie as the normal vector of the plane formed by the point cloud data points of the voxel.
8. A laser radar for implementing the model semantic segmentation method according to any one of claims 1 to 7.
CN202110789947.5A 2021-07-13 2021-07-13 Model semantic segmentation method for actual measurement and laser radar Active CN113569856B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110789947.5A CN113569856B (en) 2021-07-13 2021-07-13 Model semantic segmentation method for actual measurement and laser radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110789947.5A CN113569856B (en) 2021-07-13 2021-07-13 Model semantic segmentation method for actual measurement and laser radar

Publications (2)

Publication Number Publication Date
CN113569856A CN113569856A (en) 2021-10-29
CN113569856B true CN113569856B (en) 2024-06-04

Family

ID=78164632

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110789947.5A Active CN113569856B (en) 2021-07-13 2021-07-13 Model semantic segmentation method for actual measurement and laser radar

Country Status (1)

Country Link
CN (1) CN113569856B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102855663A (en) * 2012-05-04 2013-01-02 北京建筑工程学院 Method for building CSG (Constructive Solid Geometry) model according to laser radar grid point cloud
CN106600622A (en) * 2016-12-06 2017-04-26 西安电子科技大学 Point cloud data partitioning method based on hyper voxels
WO2020004740A1 (en) * 2018-06-25 2020-01-02 재단법인실감교류인체감응솔루션연구단 Three-dimensional plane extraction method and device therefor
CN111325837A (en) * 2020-01-23 2020-06-23 江西理工大学 Side slope DEM generation method based on ground three-dimensional laser point cloud
KR20200080970A (en) * 2018-12-27 2020-07-07 포항공과대학교 산학협력단 Semantic segmentation method of 3D reconstructed model using incremental fusion of 2D semantic predictions
CN111932688A (en) * 2020-09-10 2020-11-13 深圳大学 Indoor plane element extraction method, system and equipment based on three-dimensional point cloud
CN112149677A (en) * 2020-09-14 2020-12-29 上海眼控科技股份有限公司 Point cloud semantic segmentation method, device and equipment
WO2021097618A1 (en) * 2019-11-18 2021-05-27 深圳市大疆创新科技有限公司 Point cloud segmentation method and system, and computer storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11125861B2 (en) * 2018-10-05 2021-09-21 Zoox, Inc. Mesh validation
US11321398B2 (en) * 2019-01-30 2022-05-03 Sony Group Corporation Discretization for big data analytics

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102855663A (en) * 2012-05-04 2013-01-02 北京建筑工程学院 Method for building CSG (Constructive Solid Geometry) model according to laser radar grid point cloud
CN106600622A (en) * 2016-12-06 2017-04-26 西安电子科技大学 Point cloud data partitioning method based on hyper voxels
WO2020004740A1 (en) * 2018-06-25 2020-01-02 재단법인실감교류인체감응솔루션연구단 Three-dimensional plane extraction method and device therefor
KR20200080970A (en) * 2018-12-27 2020-07-07 포항공과대학교 산학협력단 Semantic segmentation method of 3D reconstructed model using incremental fusion of 2D semantic predictions
WO2021097618A1 (en) * 2019-11-18 2021-05-27 深圳市大疆创新科技有限公司 Point cloud segmentation method and system, and computer storage medium
CN111325837A (en) * 2020-01-23 2020-06-23 江西理工大学 Side slope DEM generation method based on ground three-dimensional laser point cloud
CN111932688A (en) * 2020-09-10 2020-11-13 深圳大学 Indoor plane element extraction method, system and equipment based on three-dimensional point cloud
CN112149677A (en) * 2020-09-14 2020-12-29 上海眼控科技股份有限公司 Point cloud semantic segmentation method, device and equipment

Also Published As

Publication number Publication date
CN113569856A (en) 2021-10-29

Similar Documents

Publication Publication Date Title
CN109828284B (en) Actual measurement method and device based on artificial intelligence
KR102125959B1 (en) Method and apparatus for determining a matching relationship between point cloud data
Wunderlich et al. Areal Deformation Analysis from TLS Point Clouds-The Challenge/Flächenhafte Deformationsanalyse Aus TLS-Punktwolken-Die Herausforderung
CN106338277B (en) A kind of building change detecting method based on baseline
CN116030103B (en) Method, device, apparatus and medium for determining masonry quality
CN114332291A (en) Oblique photography model building outer contour rule extraction method
CN113569856B (en) Model semantic segmentation method for actual measurement and laser radar
CN117095038A (en) Point cloud filtering method and system for laser scanner
CN113654538B (en) Room square finding method, laser radar and measuring system for actual measurement
CN113436244B (en) Model processing method and system for actual measurement actual quantity and laser radar
CN113375556B (en) Full stack type actual measurement real quantity system, measurement method and laser radar
CN113805157B (en) Height measurement method, device and equipment based on target
CN115453549A (en) Method for extracting environment right-angle point coordinate angle based on two-dimensional laser radar
CN114332178A (en) Tower tilt model registration method and device
CN114089376A (en) Single laser radar-based negative obstacle detection method
Wiemann et al. Optimizing triangle mesh reconstructions of planar environments
CN115931901A (en) Wall cavity measuring method with high precision, laser radar and system
Zeng et al. An improved extraction method of individual building wall points from mobile mapping system data
CN116203554B (en) Environment point cloud data scanning method and system
CN117607829B (en) Ordered reconstruction method of laser radar point cloud and computer readable storage medium
Smith et al. 3-D urban modelling using airborne oblique and vertical imagery
CN117092655A (en) Point cloud processing method for actual measurement and laser radar
WO2024042661A1 (en) Obstacle proximity detection device, obstacle proximity detection method, and obstacle proximity detection program
CN115130191A (en) Positive and reverse BIM fusion modeling engine
CN117475002A (en) Building inclination measuring method based on laser scanning technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: Room 390, Building 17, No. 2723 Fuchunwan Avenue, Chunjiang Street, Fuyang District, Hangzhou City, Zhejiang Province, 311400

Applicant after: Angrui (Hangzhou) Information Technology Co.,Ltd.

Address before: 201703 No.206, building 1, no.3938 Huqingping Road, Qingpu District, Shanghai

Applicant before: UNRE (SHANGHAI) INFORMATION TECHNOLOGY Co.,Ltd.

Country or region before: China

GR01 Patent grant