CN111932671A - Three-dimensional solid model reconstruction method based on dense point cloud data - Google Patents

Three-dimensional solid model reconstruction method based on dense point cloud data

Info

Publication number
CN111932671A
Authority
CN
China
Prior art keywords: point cloud, cloud data, data, dimensional, point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010853185.6A
Other languages
Chinese (zh)
Inventor
扆亮海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202010853185.6A
Publication of CN111932671A
Legal status: Withdrawn

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10024: Color image
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20024: Filtering details

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a three-dimensional solid model reconstruction method based on dense point cloud data. Dense point cloud data are acquired with a laser 3D scanner: 3D laser scanning rapidly acquires three-dimensional coordinate data of the measured object surface over a large area and at high resolution, and, combined with a fast, high-precision modeling method, the technique is fast, non-contact, active and accurate. The data acquired in real time are of high precision and high density, which makes reconstruction of the three-dimensional model more convenient and rapid, allows a digital model of a real object to be created in the virtual world, and realizes fast, high-precision three-dimensional solid model reconstruction based on dense point cloud data. The semantic-based feature extraction method saves a large amount of working time; operating in a semi-automatic mode effectively improves the precision and accuracy of feature extraction, greatly reduces cost, and markedly improves the precision of the reconstructed three-dimensional solid model.

Description

Three-dimensional solid model reconstruction method based on dense point cloud data
Technical Field
The invention relates to a three-dimensional solid model reconstruction method, in particular to a three-dimensional solid model reconstruction method based on dense point cloud data, and belongs to the technical field of point cloud three-dimensional reconstruction.
Background
3D laser scanning, also known as live-action replication, is a technological revolution following the global positioning system technology in the field of mapping. The 3D laser scanning technology breaks through the traditional single-point measurement method, has the unique advantages of high efficiency and high precision, and provides an excellent technical means for establishing a three-dimensional solid model of an object by acquiring the information of the outer surface of the target rapidly in a large-scale high-resolution manner through the high-speed laser scanning measurement method. With the wide application of computer technology in various fields, reverse engineering is now one of the main technologies for fast three-dimensional reconstruction, and the key technology of reverse engineering is to perform three-dimensional model reconstruction on a physical model according to the obtained dense point cloud data.
High-precision three-dimensional reconstruction of a target object requires two steps: first, rapidly and accurately acquiring initial data of the surface of the real object; second, reconstructing a high-quality three-dimensional model of that object. For acquiring the initial surface data, a 3D laser scanner is generally adopted: it collects array-type spatial point position information of the object surface in the form of a point cloud through high-resolution, high-efficiency, high-precision, digital, non-contact measurement. The 3D laser point cloud data can then be used for various downstream operations such as three-dimensional city model generation and digitization of physical objects. The dominant three-dimensional scanners at present are optical three-dimensional scanners and laser three-dimensional scanners. An optical scanner acquires point cloud data by projecting grating stripes onto the target surface with its projection system and then tracking, analyzing and feature-matching those stripes, thereby obtaining the point cloud of the target surface. Optical three-dimensional scanners are widely applied; they can measure the width and aperture size of regular objects as well as the shape of irregular objects, work particularly well on small objects, and are cheaper and more economical than the approaches used for modeling large objects.
The laser three-dimensional scanner is mainly applied to large-scale scanning of large scenes such as streets and cities, from which panoramic three-dimensional point cloud data and simulation models are constructed. An airborne laser radar system (LiDAR) is generally built by combining GPS, INS and high-resolution camera technology: LiDAR transmits signals through a radar transmitting system, collects them with a receiving system after reflection from the target, and determines the target distance by measuring the travel time of the reflected light. LiDAR is divided into ground-based LiDAR and airborne LiDAR. Laser pulses emitted by an airborne LiDAR sensor can partially penetrate forest cover and directly acquire high-precision three-dimensional surface topographic data; after software processing, a high-precision digital elevation model (DEM), contour maps and orthographic projection maps can be generated. These advantages cannot be matched by traditional photogrammetry or conventional ground survey, so the technology has broad development prospects and strong application demand in fields such as three-dimensional city modeling.
The dense point cloud data are acquired by a laser 3D scanner. The acquired point cloud data are rich in content, including the spatial three-dimensional coordinates and reflection intensity of each point. After mathematical processing, these points are used to build a model that expresses the basic three-dimensional information, and further research and applications are then carried out on the basis of the three-dimensional model, such as feature extraction, measurement and feature analysis of the measured target, obtaining topological relations and shape information, and expressing the target object more intuitively.
3D laser scanning can rapidly acquire three-dimensional coordinate data of the measured object surface over a large area and at high resolution. Combined with a fast, high-precision modeling method, the technique highlights the characteristics of speed, non-contact operation, activeness and accuracy, and the data acquired in real time are of high precision and high density. Applying three-dimensional scanning makes reconstruction of a three-dimensional model more convenient and rapid and allows a digital model of a real object to be created in the virtual world; this brand-new technical means will bring a major revolution to the surveying and mapping field.
Point cloud data reconstruction comprises point cloud data acquisition, processing and reconstruction. The prior art has achieved many results in point cloud data acquisition and processing, and these results are applied in reverse engineering. How the point cloud data are used differs with the requirements: in computer graphics applications, reconstruction of a polygonal mesh model of the object must exploit hardware advantages to reconstruct high-quality objects; in mechanical product design, because the scale is small, object surface information must be simulated vividly to meet production requirements, so the effort concentrates on making the surface parameters tend to be smooth. 3D laser scanning is a technology for rapidly and directly acquiring surface data of the measured target; its final purpose is to establish an accurate digital three-dimensional model of the measured target to serve the various fields of production and life.
When establishing a polygonal mesh, the data acquired from the laser scanner are initially unordered. To make data processing more convenient, the scattered point cloud must be turned into a polygonal mesh; the generated mesh also defines the structural relationship among the scattered point cloud data, and this relationship is generally expressed with polygons. The prior-art method for generating a triangular mesh from scattered, unordered point cloud data is triangulation. Delaunay triangulation establishes the polygonal mesh as well as the relationship between points, laying a foundation for later data processing. The most representative prior-art methods for Delaunay triangulation are the Lawson method and the Bowyer-Watson method: the Lawson method is realized by diagonal swapping, and the Bowyer-Watson method is based on point-by-point insertion. In practical use, however, the Lawson method slows down when the amount of data grows, and it also generates illegal triangles when the point cloud region is not convex or contains inner rings; the Bowyer-Watson method has high time complexity for random point insertion and for triangulation, so the efficiency of finite element mesh generation is very low.
In the prior art, various RANSAC-based methods identify and extract basic shapes in point cloud data on a statistical basis. Their defect is that when the sampling data for a specific shape are few, i.e. when that shape occupies a small proportion of all the point cloud data, searching among all point clouds takes a long time and the efficiency is low.
The prior art also extracts basic geometric shapes from point cloud data with RANSAC by extracting candidate contour points one by one for each basic geometric shape, optimizing the extracted contour points by reducing an energy function to its minimum, and reconstructing with the optimized contour points as the standard. This RANSAC approach, however, considers optimization in isolation and ignores the constraints between geometric bodies, so the joints between the geometric bodies of the reconstructed model are inconsistent and gaps easily appear. The main defect of prior-art reconstruction based on small-area shapes is that large areas of missing data cannot be filled in, because the reconstruction adheres strongly to small-area standard shape templates: the adopted geometric template describes only a small area of the point cloud shape, matching of small-area shapes is time-consuming, and the method is inefficient.
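The runtime issue described above can be made concrete with the standard RANSAC iteration bound N = log(1 - p) / log(1 - w^s), where p is the desired success probability, w the inlier ratio and s the minimal sample size (3 for a plane): as w shrinks, N grows rapidly. The following C++ sketch is a minimal illustration of RANSAC plane extraction, not the patent's own method; all names and thresholds are our assumptions.

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

struct P3 { double x, y, z; };

// Iterations needed for success probability p, inlier ratio w, sample size s.
static int ransacIterations(double p, double w, int s) {
    return static_cast<int>(std::ceil(std::log(1.0 - p) / std::log(1.0 - std::pow(w, s))));
}

// Fit a plane n.x + d = 0 through three points; returns false if degenerate.
static bool planeFrom3(const P3& a, const P3& b, const P3& c, P3& n, double& d) {
    P3 u{b.x - a.x, b.y - a.y, b.z - a.z};
    P3 v{c.x - a.x, c.y - a.y, c.z - a.z};
    n = {u.y * v.z - u.z * v.y, u.z * v.x - u.x * v.z, u.x * v.y - u.y * v.x};
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len < 1e-12) return false;
    n = {n.x / len, n.y / len, n.z / len};
    d = -(n.x * a.x + n.y * a.y + n.z * a.z);
    return true;
}

// RANSAC plane extraction: return indices of the largest inlier set found.
std::vector<int> ransacPlane(const std::vector<P3>& pts, double distThresh,
                             double expectedInlierRatio, double successProb = 0.99) {
    std::vector<int> best;
    if (pts.size() < 3) return best;
    int iters = ransacIterations(successProb, expectedInlierRatio, 3);
    for (int it = 0; it < iters; ++it) {
        int i = std::rand() % pts.size(), j = std::rand() % pts.size(), k = std::rand() % pts.size();
        if (i == j || j == k || i == k) continue;
        P3 n; double d;
        if (!planeFrom3(pts[i], pts[j], pts[k], n, d)) continue;
        std::vector<int> inliers;
        for (int m = 0; m < static_cast<int>(pts.size()); ++m) {
            double dist = std::fabs(n.x * pts[m].x + n.y * pts[m].y + n.z * pts[m].z + d);
            if (dist < distThresh) inliers.push_back(m);
        }
        if (inliers.size() > best.size()) best = inliers;  // keep the plane with most support
    }
    return best;
}
```

With w = 0.5 the bound is only about 35 iterations, but with w = 0.05 it exceeds 36,000, which is exactly the inefficiency attributed above to RANSAC when the target shape occupies only a small fraction of the cloud.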
In summary, the prior art has some obvious disadvantages, which are shown in the following aspects:
firstly, in the prior art it is difficult to quickly obtain large-area, high-resolution three-dimensional coordinate data of the measured object surface, and a fast, high-precision modeling method is also lacking; the technology is therefore not fast, non-contact, active and accurate, the obtained data are not of high precision and high density, reconstruction of a three-dimensional model faces many difficulties, and it is hard to create a digital model of a real object in the virtual world;
secondly, in point cloud data acquisition, the cost is high and the speed is low, dense point cloud data that meet the requirements of subsequent processing cannot be acquired, and no foundation is laid for high-precision three-dimensional solid model reconstruction; point cloud registration precision and the degree of automation are low; and the large amount of human-computer interactive drawing and extraction in the prior art consumes much working time at high cost;
thirdly, when building polygonal meshes, prior-art Delaunay triangulation adopts the Lawson method and the Bowyer-Watson method: the Lawson method is realized by diagonal swapping and the Bowyer-Watson method by point-by-point insertion. In practice the Lawson method slows down when a large amount of data is encountered and generates illegal triangles when the point cloud region is not convex or contains inner rings, while the Bowyer-Watson method has high time complexity for random point insertion and triangulation, so finite element mesh generation is inefficient;
fourthly, prior-art RANSAC-based methods are inefficient when the sampling data of a specific shape are few, i.e. when that shape is a small fraction of all the point cloud data, because searching all point clouds is time-consuming; the prior-art RANSAC method also considers optimization in isolation and ignores constraints between geometric bodies, so the joints between the geometric bodies of the reconstructed model are inconsistent and gaps easily appear; and reconstruction based on small-area shapes cannot fill in large areas of missing data, because it adheres strongly to small-area standard shape templates, the geometric template describes only a small area of the point cloud shape, small-area shape matching is time-consuming, and the method is inefficient.
Disclosure of Invention
According to the dense point cloud data-based three-dimensional solid model reconstruction method provided by the invention, dense point cloud data are acquired with a laser 3D scanner. 3D laser scanning rapidly acquires three-dimensional coordinate data of the measured object surface over a large area and at high resolution; combined with a fast, high-precision modeling method, the technique is fast, non-contact, active and accurate, the data acquired in real time are of high precision and high density, reconstruction of the three-dimensional model becomes more convenient and rapid, and a digital model of a real object can be created in the virtual world. In point cloud data acquisition, the method effectively saves acquisition cost, raises acquisition speed, and obtains dense point cloud data that meet the requirements of subsequent processing, laying a good foundation for high-precision three-dimensional solid model reconstruction. Compared with methods based on a large amount of human-computer interactive drawing and extraction, the semantic-based feature extraction method saves much working time; operating in a semi-automatic mode effectively improves the precision and accuracy of feature extraction, greatly reduces cost, and markedly improves the precision of the reconstructed three-dimensional solid model.
In order to achieve the technical effects, the technical scheme adopted by the invention is as follows:
a three-dimensional solid model reconstruction method based on dense point cloud data performs high-precision three-dimensional reconstruction of a target object in two steps. First, dense point cloud data are acquired and processed: the initial data of the target object surface are acquired efficiently, the point cloud acquired by the laser scanner is massive, scattered point information whose three-dimensional coordinates and reflection intensity are obtained, and a model structure is then established for the point cloud by mathematical methods to express its three-dimensional information intuitively. Second, high-precision three-dimensional reconstruction of the target object is performed: the surface data of the acquired point cloud are analyzed and solved, and features are then extracted from the three-dimensional model. Feature extraction uses two methods: segmenting the point cloud data and extracting features directly with an algorithm, or converting the LiDAR data into images and segmenting and extracting from them. Semantic description is the description of geometric bodies, covering the geometric characteristics and categories of the point cloud data and the interrelation information among the geometric bodies; semantic-based feature extraction strengthens the computer's ability to understand point cloud data;
in the first step, a 3D laser scanner acquires the initial data of the target object surface: it collects array-type spatial point position information of the object surface in the form of a point cloud by non-contact measurement and digital acquisition. Three-dimensional model reconstruction is then carried out; the workflow of three-dimensional reconstruction mainly comprises data acquisition, data registration, data fusion and meshing, and texture mapping;
the dense point cloud data-based three-dimensional solid model reconstruction method comprises acquisition and processing of dense point cloud data and three-dimensional reconstruction of the dense point cloud data, and the three-dimensional reconstruction comprises the following steps: firstly, presenting the point cloud library on the VS platform: each module of the point cloud library is encapsulated, dense point cloud data are processed with the VS platform, and PCD files of point cloud data are presented with the Microsoft Foundation Class library; secondly, automatic registration of several depth images: the automatic registration method is improved and features including normals, key points and VFH descriptors are found; thirdly, automatic segmentation of the aggregated point cloud image; fourthly, semantic-based feature extraction. The automatic registration of several depth images includes automatic registration of depth images and registration of depth images based on curved-surface features.
The invention relates to a three-dimensional solid model reconstruction method based on dense point cloud data, which comprises the following steps of:
firstly, for data at the level of hundreds of GB, an Oracle database management mode is adopted: an octree index is built for the initial point cloud data and stored in the database, dynamic scheduling of the dense point cloud data is then realized according to the octree index, and the index is used when the dense point cloud data are browsed and measured;
secondly, for data from hundreds of MB to tens of GB, an octree-indexed multi-file point cloud management mode based on external hard-disk storage is adopted; after direct processing, a dynamic scheduling engine performs variable scheduling, the dynamic scheduling being specific to octree nodes;
and thirdly, for data of tens of MB, a memory-based management mode is adopted: the point cloud data are read completely into memory, which facilitates point cloud post-processing operations.
For presenting the point cloud library on the VS platform, the invention makes full use of point cloud library resources to find a convenient display platform. The development platform is Visual Studio: each module in the point cloud library is opened with VS, compiled on the VS interface, and the tasks are programmed, building a good communication bridge between users and resources so that a point cloud data graph can be processed with one key. The presentation and function of each module are as follows:
the input and output module: after the point cloud data of the target object surface are acquired, they are transmitted to the point cloud library and compared; this step is carried out in the input module. After comparison with the initial point cloud data, the point cloud is filtered or segmented, and the input point cloud data are transferred to the other processing modules, i.e. the point cloud is output;
the K-D tree module: classification of similar and dissimilar point cloud data is completed quickly through K-D tree search, and ragged point cloud data are permuted, combined, completed or deleted; the K-D tree is obtained by extending the binary search tree to multi-dimensional retrieval and is suitable for three-dimensional point clouds, and a non-empty K-D tree has the properties of a binary tree;
the octree module: the octree is the data structure obtained by generalizing the quadtree to three-dimensional space; it manages point cloud data in three-dimensional space well, and the specific position of a small target object is determined from the eight child nodes of each octree node;
these three modules are basic modules with the basic functions for processing point cloud data; besides them there are advanced processing modules, including point cloud filtering, depth image, sample consensus, point cloud registration, point cloud segmentation and point cloud surface reconstruction.
A three-dimensional solid model reconstruction method based on dense point cloud data is further provided,
the point cloud filtering module: a point cloud data set with heavy noise is filtered and data that do not belong to the measured object surface are masked automatically; point cloud filtering yields inner points and outer points, the inner points being the point cloud data enclosed by the boundary points and the outer points being the boundary points of the point cloud data, i.e. the contour line. After the collected point cloud data are filtered, a preliminary three-dimensional model is established, i.e. the initial point cloud data are visualized, which makes the graphic information easier to understand intuitively and facilitates subsequent processing (a sketch combining this filter with a K-D tree query is given after the module list below);
the depth image processing module: mainly performs depth estimation on the point cloud data, with rich estimation content; image information can be acquired and analyzed from the brightness of the image and the understanding of its content;
the key point extraction module: extracts the different geometric characteristics of different measured objects, rapidly acquires the geometric information of the measured object and realizes rapid 3D modeling;
the point cloud data image segmentation module: performs local division of the point cloud data image, realizing quick indexing and improving the precision of the modeling image, i.e. point cloud data segmentation processing;
and presenting the point cloud library on the VS platform: each module of the point cloud library is encapsulated, dense point cloud data are processed with the VS platform, and the PCD file of the point cloud data is presented with the Microsoft Foundation Class library.
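The following sketch, our own minimal example with the open-source Point Cloud Library, illustrates the filtering and K-D tree modules described above; all threshold values are assumptions, not values from the patent.

```cpp
#include <pcl/filters/statistical_outlier_removal.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <vector>

// Remove noisy points that do not belong to the measured surface: points whose
// mean distance to their 50 neighbours deviates by more than one standard
// deviation from the global mean are treated as outliers and discarded.
pcl::PointCloud<pcl::PointXYZ>::Ptr denoise(const pcl::PointCloud<pcl::PointXYZ>::Ptr& raw) {
    pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
    sor.setInputCloud(raw);
    sor.setMeanK(50);              // neighbours used for the mean-distance estimate
    sor.setStddevMulThresh(1.0);   // distance threshold in standard deviations
    sor.filter(*filtered);
    return filtered;
}

// K-D tree query: indices of the 10 scan points nearest to a query position.
std::vector<int> nearestTen(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
                            const pcl::PointXYZ& query) {
    pcl::KdTreeFLANN<pcl::PointXYZ> kdtree;
    kdtree.setInputCloud(cloud);                       // build the K-D tree index
    std::vector<int> indices(10);
    std::vector<float> sqrDistances(10);
    kdtree.nearestKSearch(query, 10, indices, sqrDistances);
    return indices;                                    // indices into the input cloud
}
```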
The three-dimensional solid model reconstruction method based on dense point cloud data further comprises automatic registration of several depth images: the scene is scanned from different viewing angles, the point clouds obtained from several stations are stitched into a three-dimensional data point set in a unified coordinate system, and registration is performed on the basis of the point cloud data;
registering point cloud data means finding the correspondence between two point cloud data sets and converting the point cloud data in one coordinate system into the other coordinate system: the correspondence is found first and the transformation parameters are then solved;
the key to automatic registration is realizing feature matching automatically, so feature matching is determined first. Consider two feature sets F and G containing n and m features respectively, F = {f1, f2, ..., fn} and G = {g1, g2, ..., gm}. Feature matching is finding a one-to-one mapping from some subset of F to some subset of G:
fi1 ←→ gj1
fi2 ←→ gj2
……
fik ←→ gjk
where i1, i2, ..., ik ∈ {1, 2, ..., n} and j1, j2, ..., jk ∈ {1, 2, ..., m};
The mapping relation of the optimal feature matching found by the invention satisfies the following conditions:
condition one: any pair of corresponding features in the mapping should be similar, that is, for any l ∈ {1, 2, ..., k}, fil and gjl should be corresponding, similar features;
condition two: on the premise of satisfying condition one, the number k of corresponding features in the mapping should be as large as possible;
the line registration process comprises the following steps: extracting line segments from adjacent point clouds → finding homonymic line segments → solving unit direction vectors of homonymic line segments → solving rotation parameters → determining the intersection of two line segments → solving translation parameters → point cloud registration.
The three-dimensional solid model reconstruction method based on dense point cloud data further comprises automatic registration of a depth image, with the following steps:
Step 1, data acquisition: before image processing, a planar image of the three-dimensional target object is obtained; point cloud data are acquired using the triangulation principle of the laser scanner together with a high-resolution color image, the data are collected with a registration system DCR formed by pairing a CCD camera with the laser scanner, and the data acquired by the 3D laser scanner are taken as depth data;
Step 2, feature extraction: feature extraction covers three aspects, namely extraction of feature points, feature lines and regions; when extracting feature points, a selection method is determined first, and there are three feature point extraction methods: applying directional derivatives, applying the image brightness contrast relation, and applying mathematical morphology;
Step 3, stereo matching: stereo matching connects the several acquired point cloud data images into a complete model so as to highlight the actual 3D shape information of the target object; some uncertain factors must be noted or avoided;
Step 4, data de-duplication and meshing: the acquired point cloud data are placed in the same coordinate system; to avoid data appearing several times, the data are de-duplicated and the scattered data are integrated together;
Step 5, geometric mapping: the acquired picture information contains color information, grey-level display represents the color display, and geometric mapping is realized so that the 3D model has better color realism.
In the dense point cloud data-based three-dimensional solid model reconstruction method, depth image registration based on curved-surface features matches and stitches the largely similar parts of two surfaces: point cloud data with similar features are put into correspondence during stitching, and the two point clouds are matched pairwise to obtain a complete three-dimensional solid model. The implementation solves a rigid transformation, i.e. the shape of the point cloud data is not changed by the rotation and translation; the same scene scanned from different directions and angles is placed in the same coordinate system for registration so that the multi-angle graphs correspond to each other.
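A minimal sketch of such pairwise, feature-then-surface registration with the open-source Point Cloud Library is given below (our own illustration, not the patent's algorithm): FPFH descriptors and sample-consensus initial alignment produce a coarse rigid transform, which ICP then refines; all radii and iteration counts are assumptions.

```cpp
#include <pcl/features/fpfh.h>
#include <pcl/features/normal_3d.h>
#include <pcl/point_types.h>
#include <pcl/registration/ia_ransac.h>
#include <pcl/registration/icp.h>
#include <pcl/search/kdtree.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;
using Features = pcl::PointCloud<pcl::FPFHSignature33>;

// Compute normals and FPFH descriptors for one depth-image point cloud.
Features::Ptr describe(const Cloud::Ptr& cloud) {
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
    ne.setInputCloud(cloud);
    ne.setSearchMethod(tree);
    ne.setRadiusSearch(0.05);            // assumed normal-estimation radius (metres)
    ne.compute(*normals);

    Features::Ptr fpfh(new Features);
    pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> est;
    est.setInputCloud(cloud);
    est.setInputNormals(normals);
    est.setSearchMethod(tree);
    est.setRadiusSearch(0.1);            // descriptor radius, larger than the normal radius
    est.compute(*fpfh);
    return fpfh;
}

// Coarse alignment of a source scan to a target scan from matched features.
Eigen::Matrix4f coarseAlign(const Cloud::Ptr& src, const Cloud::Ptr& tgt) {
    pcl::SampleConsensusInitialAlignment<pcl::PointXYZ, pcl::PointXYZ, pcl::FPFHSignature33> sac;
    sac.setInputSource(src);
    sac.setSourceFeatures(describe(src));
    sac.setInputTarget(tgt);
    sac.setTargetFeatures(describe(tgt));
    sac.setMaximumIterations(500);
    Cloud aligned;
    sac.align(aligned);                  // source transformed into the target frame
    return sac.getFinalTransformation();
}

// ICP refinement of the rigid transform (rotation + translation, shape preserved).
Eigen::Matrix4f refineAlign(const Cloud::Ptr& src, const Cloud::Ptr& tgt,
                            const Eigen::Matrix4f& initialGuess) {
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(src);
    icp.setInputTarget(tgt);
    icp.setMaxCorrespondenceDistance(0.1);   // assumed correspondence radius (metres)
    icp.setMaximumIterations(50);
    Cloud aligned;
    icp.align(aligned, initialGuess);        // start from the coarse, feature-based estimate
    return icp.getFinalTransformation();
}
```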
SACMODEL_PERPENDICULAR_PLANE is adopted to segment the vertical plane: the Sample Consensus Model Perpendicular Plane detects a plane that is perpendicular with respect to a known vector, an angular critical value must be set, and the plane is then detected (a sketch is given directly below);
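A minimal sketch of this segmentation with the open-source Point Cloud Library follows (our own example): SACMODEL_PERPENDICULAR_PLANE accepts planes whose normal lies within an angular tolerance of a chosen axis, so with the X axis as reference it extracts vertical planes facing the X direction; the axis and thresholds are assumptions.

```cpp
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/point_types.h>
#include <pcl/sample_consensus/method_types.h>
#include <pcl/sample_consensus/model_types.h>
#include <pcl/segmentation/sac_segmentation.h>

// Segment a plane whose normal is (within an angular tolerance) parallel to a
// chosen reference axis; the inliers of that plane are returned.
pcl::PointIndices::Ptr segmentPerpendicularPlane(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud) {
    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setModelType(pcl::SACMODEL_PERPENDICULAR_PLANE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setAxis(Eigen::Vector3f(1.0f, 0.0f, 0.0f));  // reference axis for the plane normal
    seg.setEpsAngle(0.17);                           // angular critical value, about 10 degrees
    seg.setDistanceThreshold(0.02);                  // inlier distance to the plane (metres)
    seg.setInputCloud(cloud);

    pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
    pcl::ModelCoefficients::Ptr coeffs(new pcl::ModelCoefficients);
    seg.segment(*inliers, *coeffs);                  // plane inliers + [a, b, c, d] coefficients
    return inliers;
}
```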
when the point cloud data of a large complex scene is processed, the point cloud data is segmented, and the data volume processed after segmentation and scribing is greatly reduced; after the local point cloud data matching is completed, clustering work is carried out, namely, all fine small parts are reassembled, and vivid 3D modeling is completed.
The dense point cloud data-based three-dimensional solid model reconstruction method segments the point cloud data as follows: for 3D modeling of a large, complex scene, the point cloud data graph is divided locally, which enables quick indexing and improves the precision of the modeling image, i.e. the point cloud data are segmented:
step one, height: the Z coordinate of the center point of the contour line of a segmentation slice represents the height of the slice; this center-point coordinate is obtained as follows: when the slice is perpendicular to plane XOY, the Z value of the projection of the slice onto plane YOZ or plane XOZ is the Z coordinate of the slice; when the slice is not perpendicular to plane XOY, the projection of the slice center point F onto the XOY plane is the point F', the center point of the slice in the two-dimensional plane is solved first, the distance h from F' to the slice is then solved, and AA' = h/cos α is obtained, where α is the angle between the slice and plane XOY; the distance AA' is the Z value of the center point of the slice;
step two, direction: the angle between plane XOY and the slice represents the direction of the slice;
thirdly, area calculation in three-dimensional space: suppose there are m points in total; loop over the points i (0 < i < m-1), compute the area contributions of point i and its two neighbours on the projections onto the YOZ, XOZ and XOY planes, accumulate them with the previously projected areas, and then compute the area of the slice in three-dimensional space:
Areayz += Zi*(Yi+1 - Yi-1)
Areaxz += Xi*(Zi+1 - Zi-1)
Areaxy += Xi*(Yi+1 - Yi-1)
Area = 0.5 * sqrt(Areayz² + Areaxz² + Areaxy²)
fourthly, calculating the projection area on the XOY plane: loop over the points i (0 < i < m-1), compute the contribution of point i and its adjacent point on the XOY plane, and then obtain the total area of the slice on the XOY plane:
area += (Xi*Yi+1 - Xi+1*Yi)
AreaXOY = 0.5*|area|
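A small sketch of the projected-area accumulation described in steps three and four (our own code; it assumes the slice contour is given as an ordered, closed polygon):

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y, z; };

// Signed projected areas of a closed contour onto the YOZ, XOZ and XOY planes,
// accumulated with the shoelace-style sums of steps three and four, plus the
// projected area magnitude on the XOY plane.
void sliceAreas(const std::vector<Pt>& contour,
                double& areaYZ, double& areaXZ, double& areaXY, double& areaXOY) {
    areaYZ = areaXZ = areaXY = 0.0;
    const int m = static_cast<int>(contour.size());
    for (int i = 0; i < m; ++i) {
        const Pt& prev = contour[(i + m - 1) % m];  // point i-1 (wrap around the closed contour)
        const Pt& cur  = contour[i];                // point i
        const Pt& next = contour[(i + 1) % m];      // point i+1
        areaYZ += cur.z * (next.y - prev.y);
        areaXZ += cur.x * (next.z - prev.z);
        areaXY += cur.x * (next.y - prev.y);
    }
    areaXOY = 0.5 * std::fabs(areaXY);              // projection area on the XOY plane
}
```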
after the point cloud data are segmented and combined, the key point is to realize automatic operation and reduce manual interaction as much as possible.
The invention discloses a three-dimensional solid model reconstruction method based on dense point cloud data, and further relates to semantic-based feature extraction, wherein the method comprises two steps:
firstly, the point cloud data are segmented and features are extracted directly with an algorithm: the point cloud data are managed and the features of the target object are extracted, mainly with an octree method based on segmentation-fusion-interpolation;
secondly, the LiDAR data are converted into an image and the data are segmented and extracted: semantic feature extraction is performed for each scene, and each feature is described before extraction; the point cloud data acquired for each scene are then filtered to reduce image noise points, the different geometric bodies obtained are classified and labeled, contour lines are extracted and given intelligent semantic definitions, and local cleaning is performed on this basis so that the originally fuzzy three-dimensional model becomes more accurate and closer to the real object;
after the semantics are established, the features of the contour line are extracted according to the semantic feature description, the object is refined under the constraint of the large frame, redundant points are removed and missing points are completed, so that the detected target is clear, complete and real, and high-precision three-dimensional modeling is realized through three-dimensional reconstruction.
Compared with the prior art, the invention has the following contributions and innovation points:
firstly, the dense point cloud data are acquired with a laser 3D scanner and, after mathematical processing, the points are used to build a model that expresses the basic three-dimensional information; 3D laser scanning rapidly acquires three-dimensional coordinate data of the measured object surface over a large area and at high resolution, and combined with a fast, high-precision modeling method the technique is fast, non-contact, active and accurate, the data acquired in real time are of high precision and high density, reconstruction of the three-dimensional model becomes more convenient and rapid, a digital model of a real object can be created in the virtual world, and this brand-new technical means will bring great change to the surveying and mapping field, realizing fast, high-precision three-dimensional solid model reconstruction based on dense point cloud data;
secondly, the dense point cloud data-based three-dimensional solid model reconstruction method provides a dense point cloud data acquisition and processing method; in point cloud data acquisition it effectively saves acquisition cost, raises acquisition speed and obtains dense point cloud data that meet the requirements of the method's subsequent processing, laying a good foundation for high-precision three-dimensional solid model reconstruction;
thirdly, the three-dimensional solid model reconstruction uses point cloud feature description and extraction, puts the dense point cloud data into the point cloud library platform for processing, displays the point cloud data PCD file with the point cloud library, and proposes an automatic registration method based on line features of the point cloud data, with an example given; the point cloud data registration precision and the degree of automation are high, so the method has obvious innovation and outstanding advantages;
fourthly, the invention proposes a semantic-based feature extraction method on the basis of the constructed three-dimensional model; compared with methods based on a large amount of human-computer interactive drawing and extraction it saves much working time, and the semi-automatic mode of operation effectively improves the precision and accuracy of feature extraction, greatly reduces cost and markedly improves the precision of the reconstructed three-dimensional solid model.
Drawings
FIG. 1 is a schematic diagram comparing the point cloud before and after the point cloud filtering of the present invention.
Fig. 2 is a schematic diagram for comparing the effect before and after the geometric mapping of the present invention.
FIG. 3 is a schematic view of the positional relationship of the divided pieces of the present invention.
FIG. 4 is a schematic diagram of contour line extraction effect of semantic feature extraction according to the present invention.
FIG. 5 is a schematic diagram of line features of a contour line for semantic feature extraction according to the present invention.
Detailed Description
The technical solution of the dense point cloud data-based three-dimensional solid model reconstruction method provided by the present invention is further described below with reference to the accompanying drawings, so that those skilled in the art can better understand the present invention and can implement the present invention.
3D laser scanning breaks through traditional single-point measurement and has the unique advantages of high efficiency and high precision; by acquiring target object surface information rapidly, over a large range and at high resolution, the high-speed laser scanning measurement method provides a brand-new technical means for establishing a three-dimensional solid model of an object. Reverse engineering is one of the main technologies for rapid three-dimensional reconstruction: a geometric modeling automation system in reverse engineering reflects the feature-modeling characteristics of the design intent, does not limit how the data points are organized, and outputs a B-rep model fully compatible with CAD systems, its key points being automatic feature extraction and smooth joining of combined free-form surfaces. The key to reverse engineering is reconstructing a three-dimensional solid model from the acquired dense point cloud data.
The invention provides a dense point cloud data-based three-dimensional solid model reconstruction method that performs high-precision three-dimensional reconstruction of a target object in two steps. First, dense point cloud data are acquired and processed: acquisition of the initial data of the target object surface is completed efficiently, the point cloud acquired by the laser scanner is massive, scattered point information whose three-dimensional coordinates and reflection intensity are obtained, and a model structure is then established for the point cloud by mathematical methods to express its three-dimensional information intuitively. Second, high-precision three-dimensional reconstruction of the target object is performed: the surface data of the acquired point cloud are analyzed and solved, and features are then extracted from the three-dimensional model, using one of two methods: segmenting the point cloud data and extracting features directly with an algorithm, or converting the LiDAR data into images and segmenting and extracting from them. The point cloud data obtained by the invention contain only three-dimensional geometric samples of the object to be reconstructed, with no other information to refer to, in particular no semantic description. Semantic description is the abstract, high-level description of geometric bodies, covering the geometric characteristics and categories of the point cloud data as well as the interrelation information among the geometric bodies; adding semantic description strengthens the computer's understanding of point cloud data and plays a great role in its reconstruction. In the first step, a 3D laser scanner acquires the initial surface data of the target object, collecting array-type spatial point position information of the object surface in the form of a point cloud through high-precision, high-resolution, high-efficiency, non-contact measurement; three-dimensional model reconstruction is then carried out, and its workflow mainly comprises data acquisition, data registration, data fusion and meshing, and texture mapping.
The dense point cloud data-based three-dimensional entity model reconstruction method comprises the steps of obtaining and processing dense point cloud data and three-dimensional reconstruction of the dense point cloud data, wherein the three-dimensional reconstruction of the dense point cloud data comprises displaying a point cloud base based on a VS platform, automatically registering a plurality of depth images, automatically segmenting and aggregating the point cloud images and extracting features based on semantics, and the automatically registering of the plurality of depth images comprises the automatically registering of the depth images and the registering of the depth images based on curved surface features.
Acquisition and processing of dense point cloud data
Acquiring dense point cloud data
The invention provides surface measurement on the basis of point measurement and line measurement: the surface is measured through the displacement of a group of gratings, the grating data projected onto the object surface are then collected by a sensor, and a scanner based on surface measurement has rapid three-dimensional modeling capability.
Management of dense point cloud data
The point cloud data can be efficiently applied only through efficient management, and the dense point cloud data management of the invention is mainly divided according to the size of a point cloud data file:
firstly, for data at the level of hundreds of GB, an Oracle database management mode is adopted: an octree index is built for the initial point cloud data and stored in the database, dynamic scheduling of the dense point cloud data is then realized according to the octree index, and the index is used when the dense point cloud data are browsed and measured;
secondly, for data from hundreds of MB to tens of GB, an octree-indexed multi-file point cloud management mode based on external hard-disk storage is adopted; after direct processing, a dynamic scheduling engine performs variable scheduling, the dynamic scheduling being specific to octree nodes, so that the point cloud data can be processed comprehensively and more finely;
and thirdly, for data of tens of MB, a memory-based management mode is adopted: the point cloud data are read completely into memory, which facilitates point cloud post-processing operations.
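A minimal sketch of such octree-node indexing with the open-source Point Cloud Library (our own illustration; the resolution value is an assumption):

```cpp
#include <pcl/octree/octree_search.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <vector>

// Build an octree index over a dense cloud and fetch the points that fall in
// the leaf voxel containing a query point, which is the kind of node-level
// access a dynamic scheduling engine would perform.
std::vector<int> pointsInLeaf(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
                              const pcl::PointXYZ& query) {
    const float resolution = 0.5f;   // leaf (voxel) edge length, assumed in metres
    pcl::octree::OctreePointCloudSearch<pcl::PointXYZ> octree(resolution);
    octree.setInputCloud(cloud);
    octree.addPointsFromInputCloud();

    std::vector<int> indices;
    octree.voxelSearch(query, indices);   // indices of cloud points in the query's voxel
    return indices;
}
```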
Storage of dense point cloud data
With the rapid development of 3D laser scanning hardware and software, the range of scanned target scenes keeps growing and their complexity keeps rising; mobile three-dimensional scanning vehicles are developing rapidly, the capacity for acquiring dense point cloud data keeps improving, and the volume of acquired point cloud data reaches the GB or even TB level. A key core problem to be solved is therefore how to store the acquired dense point cloud data reliably and efficiently.
The initial point cloud data acquired by a three-dimensional laser scanner are massive and scattered; the scattered points are not isolated, but carry rich information such as three-dimensional coordinates, reflection intensity and echo count. Three-dimensional laser scanners store laser data in several data formats, and their post-processing software provides read-write modules for those formats.
The custom format and the common formats of each instrument system are listed per instrument. Data obtained by scanning are exported directly in the custom format and then processed with the instrument's companion software system; the scanning data can also be exported directly, or via the companion software, in a common format, which suits most systems and facilitates conversion and storage of three-dimensional laser scanning data across different systems.
1. Storing in a custom format
The RIEGL instrument self-defines a binary 3DD format, stores three-dimensional space data and attaches a time tag, and support software of the 3DD format comprises Orpheus, QTSculptor and Reconstructor. In order to be suitable for different platforms, the RIEGL instrument is provided with a RISCANLIB library decoding 3DD file, and the LEICA instrument directly exports data into a common PTX file in use; the RWP format of the TRIMBLE instrument is a self-defined format aiming at matching software RealworksSurvey, the project file formats self-defined by TRIMBLE series instruments POCKETSCAPE and POINTSCAPE are DCP and PPF respectively, and the self-defined format of a TRIMBLE instrument scanning file is an SOI format; the OPTECH instrument is in a custom IXF format, mainly for the scanning system ILRIS; after the instrument matching software is read, the instrument matching software is converted into a corresponding file format according to the requirement.
2. Storing in a common format
(1) ASCII formatted file parsing
ASCII format files are a commonly adopted data format and include the main file formats XYZ and PTX. An ASCII-type file consists of a file header and records of the three-dimensional coordinates of the points together with other values such as reflectivity; the header gives auxiliary information about the file. The PTX format appears when exchanging scan points and the coordinate transformations corresponding to those points; all values are given in ASCII form and the units must be metric. In a PTX file obtained from a Cyrax scan containing 100 (rows) × 80 (columns) points, rows 1 and 2 give the numbers of rows and columns of points, row 3 is the translation component of the points, rows 4, 5 and 6 give the rotation matrix (3 × 3), and rows 7, 8, 9 and 10 give the global transformation matrix (4 × 4), a transformation matrix obtained by multiplying the translation transformation and the rotation transformation, which depend on the translation component and the rotation matrix respectively; the (X, Y, Z) coordinates and reflectivity of the points start from row 11. The Cyrax 2500 measures distance by laser ranging; when the scanning laser does not hit the target surface, or the probability of the laser being reflected is low, the record is filled with 0 0 0 0 in the file and that point is an invalid point.
Each row of an XYZ file contains one scan point, and the XYZ file has two variants: the first assigns each scan point its x, y, z coordinates and a reflection value; the second specifies two further attributes for each scan point, namely the row and column in a flat-panel display. On export the coordinates are converted to the export unit selected at the time, and a reflection value between 0 and 255 is exported for screen representation; on import the coordinates are received in the selected import unit. The advantages of the ASCII-type format are that its structure is roughly uniform and simple, it is easy to read and write, and it is supported by most instruments and software. Its disadvantages are that ASCII data occupy a large space, making massive LiDAR data hard to store and process, and that the format stores only the basic (X, Y, Z) coordinates and reflection value, so the remaining point information is incomplete, which hinders information extraction and data application.
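A minimal sketch of reading such an ASCII point file (our own code; it assumes one "x y z reflectance" record per line and no header):

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct XyzPoint { double x, y, z, reflectance; };

// Parse an ASCII point file with one "x y z reflectance" record per line.
std::vector<XyzPoint> readXyz(const std::string& path) {
    std::vector<XyzPoint> points;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ss(line);
        XyzPoint p{};
        if (ss >> p.x >> p.y >> p.z >> p.reflectance)  // skip malformed or empty lines
            points.push_back(p);
    }
    return points;
}
```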
(2) PTC point cloud format parsing
The PTC is in binary point cloud format, and the PTC file can be converted or written out by some software, or can be directly exported by an instrument. The PTC file stores both three-dimensional coordinate information and high-resolution data-corresponding image pixel information. Advantages of the PTC point cloud format: compared with the ASCII format, the method is simpler in storage, and the point cloud data can be more efficiently imported, displayed and drawn in the AutoCAD; the disadvantages are as follows: the memory requirement is large because the export of data is to collect all scan points first and then write the files.
(3) Binary OBJ file parsing
The binary OBJ file is generated directly by the VIVID 910, and its lines have the following specific meanings: lines 1, 2, 3 and 4 are the file header, containing the file name, file type and creation time of the file; the line "g" marks the start of the point cloud data information, which begins on the next line. Each of the following lines starts with "v", and the three numbers after "v" are the XYZ coordinate components of a data point; every line starting with "v" represents one point, and the points are numbered sequentially line by line. After the coordinates of the points in the point cloud data have been recorded, the marker "gscene-cut 01" appears and the recording of patch information begins: each subsequent line starts with "f" and records one patch, which is a polygonal patch; the three or four integers following "f" are the numbers of the patch vertices in the vertex sequence; when a quadrilateral is drawn, it is split into two triangles for processing.
The OBJ format can store discrete points, record line data, and record polygon and free-form-surface data; its explicit physical information and topological relation information allow the data to be modeled and displayed with their characteristics better preserved. However, the attribute information of points is incomplete, and encoding and decoding a format extended with point attribute information are complex, which limits its application.
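A minimal sketch of reading the "v"/"f" records described above (our own code; it keeps only vertex coordinates and face vertex indices and skips header and group lines):

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct ObjVertex { double x, y, z; };

// Read vertices ("v x y z") and polygonal faces ("f i j k [l]") from an OBJ file.
// OBJ face indices are 1-based, so they are converted to 0-based here.
void readObj(const std::string& path,
             std::vector<ObjVertex>& vertices,
             std::vector<std::vector<int>>& faces) {
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ss(line);
        std::string tag;
        ss >> tag;
        if (tag == "v") {
            ObjVertex v{};
            ss >> v.x >> v.y >> v.z;
            vertices.push_back(v);
        } else if (tag == "f") {
            std::vector<int> face;
            int idx;
            while (ss >> idx) face.push_back(idx - 1);
            faces.push_back(face);
        }   // header, "g" and other lines are skipped
    }
}
```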
(4) LAS point cloud format parsing
The LAS point cloud format is a general LiDAR data format; LAS files are stored according to its organizational structure and contain rich information. A complete LAS file consists of a public header block, variable-length records and the LiDAR point set records; the header covers all the basic information of the LAS file, and the basic information of the header file is then displayed with envi_info_wid.
(IV) dense point cloud data feature analysis
The 3D laser scanner has different structures and point cloud data acquisition principles, and the point cloud data are arranged in different forms, wherein the obtained point cloud data mainly comprises scanning line point clouds, array point clouds, grid point clouds and scattered point clouds; the scanning line point cloud, the array point cloud and the grid type point cloud belong to ordered or partially ordered point cloud data in an arrangement mode, and certain topological relations exist between the point cloud data points and the points; the point cloud data features mainly refer to feature points, feature lines and feature surfaces in a measured target. After point cloud data characteristics are analyzed, most important point cloud characteristics are extracted, and the method mainly comprises the following steps:
firstly, the extracted point cloud features are mainly used for constraint point cloud modeling; secondly, in the technical field of rapid prototyping, firstly, obtaining a contour characteristic line of an object, and then inputting the contour characteristic line into a rapid prototyping machine to produce an object model; converting the point cloud data into a slice form, then extracting the contour line of each layer of slice, and realizing object modeling based on the contour line; third, salient object features may be described and represented; fourthly, the method can be used for the registration of point clouds among different coordinate systems, and the reliability and the precision of the registration can be greatly improved based on the point cloud registration of the characteristics or the registration of other remote sensing data and the point cloud;
feature extraction comprises feature point extraction, feature line extraction and feature surface extraction; in point cloud data processing the feature lines are the most important for later model reconstruction, so point cloud feature extraction mainly revolves around feature line extraction and is aimed at two situations: first, extracting features directly from the scattered point cloud; second, extracting features from a mesh.
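Purely as an illustration of extracting features directly from a scattered point cloud (not the invention's own extraction algorithm), the following PCL sketch estimates per-point normals and flags boundary points as feature-line candidates; the file name and neighbourhood sizes are assumptions:

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/boundary.h>
#include <pcl/search/kdtree.h>
#include <iostream>

int main() {
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::io::loadPCDFile("scan.pcd", *cloud);            // hypothetical scattered cloud

    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);

    // Surface normals: the basic per-point feature most other features build on
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
    ne.setInputCloud(cloud);
    ne.setSearchMethod(tree);
    ne.setKSearch(30);                                    // assumed neighbourhood size
    ne.compute(*normals);

    // Boundary points as a simple proxy for feature-line candidates
    pcl::PointCloud<pcl::Boundary> boundaries;
    pcl::BoundaryEstimation<pcl::PointXYZ, pcl::Normal, pcl::Boundary> be;
    be.setInputCloud(cloud);
    be.setInputNormals(normals);
    be.setSearchMethod(tree);
    be.setKSearch(30);
    be.compute(boundaries);

    int count = 0;
    for (const auto& b : boundaries.points)
        if (b.boundary_point) ++count;
    std::cout << count << " boundary (feature-line candidate) points\n";
    return 0;
}
```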
Two, three-dimensional reconstruction of dense point cloud data
3D laser scanning is a technology for rapidly and directly obtaining a surface model of the measured target; its purpose is to establish an accurate digital three-dimensional model of the measured target and to serve many fields of production and daily life. After data acquisition by 3D laser scanning, high-density point cloud data of the target surface are obtained; the point cloud data are a complete and explicit expression of the measured surface, but under ordinary conditions the point cloud data of the prior art only store visible information: the points in the point cloud are scattered, the spatial relations between them are unknown, and they contain noise, which makes subsequent modeling exceptionally difficult. Therefore, three-dimensional modeling of the acquired point cloud data through improvement and optimization is the primary problem of 3D laser scanning, and its essence is the process of turning points into a solid. In this process the points are first turned into surfaces, that is, the point cloud data are reconstructed directly.
Reverse engineering is used to process the acquired point cloud data and restore them to a vivid model for reconstruction. The point cloud data of the target surface are obtained by 3D laser scanning; point cloud reconstruction restores the real shape of the measured target and expresses its surface model digitally. Point cloud reconstruction plays an important role in the implementation of the subsequent methods, the optimization of their efficiency, the generation of the final model and the correct extraction of information, and is therefore the basis and premise of all other post-processing.
The rapid three-dimensional reconstruction of dense point cloud data is mainly based on two aspects: first, for a large complex scene the laser scanner must scan massive dense point cloud data several times from different viewpoints, obtaining a large number of depth images and thus the data of the complete scene; second, three-dimensional reconstruction requires the obtained depth images to be registered accurately, which mainly relies on point cloud feature description and extraction. Rapid reconstruction of massive point cloud data is the more difficult requirement, and the improvement lies in reducing interaction and automating registration to a greater extent.
The invention looks for a platform on which to apply the point cloud library, so that the modules of the point cloud library can be integrated and work together and reconstruction can be carried out efficiently and quickly. When data are acquired in an outdoor scene with a 3D laser scanner, the data need to be registered and reconstructed. This can be achieved with the point cloud library, but when a large amount of point cloud data is involved, processing them one by one is troublesome. Therefore a method is needed to implement reconstruction quickly; the specific scheme is as follows:
first, the point cloud library is presented based on the VS platform: each module of the point cloud library is encapsulated, dense point cloud data are processed with the VS platform, and the PCD files of the point cloud data are displayed with the Microsoft Foundation Class library;
second, automatic registration of several depth images is carried out: the automatic registration method is improved, and various features including normals, key points and VFH descriptors are extracted;
third, the point cloud image is automatically segmented and aggregated: the three-dimensional image is modified, and as far as possible a way is found to freely segment and patch the three-dimensional image on screen with the mouse.
(I) Presenting the point cloud library based on the VS platform
The point cloud library consists of many modules; to make full use of the point cloud library resources a convenient platform is needed for presenting them, and the development platform of the invention is Visual Studio. Each module of the point cloud library is opened with VS, compiled in the VS interface and programmed as the task requires, which builds a good communication bridge between users and resources so that a point cloud data graph can be processed with one click. The appearance and function of each module are as follows:
the input and output module: after the point cloud data of the target object surface are acquired, they are transmitted to the point cloud library and compared; this step is carried out in the input module; after the initial point cloud data have been compared they are filtered or segmented, and the input point cloud data are handed over to the other processing modules, that is, the point cloud is output;
the K-D tree module: the classification of similar and dissimilar point cloud data is completed quickly through K-D tree search, and ragged point cloud data are permuted and combined, completed or deleted. The K-D tree is obtained by extending the binary search tree, applies to multi-dimensional retrieval and is suitable for three-dimensional point clouds; if it is not an empty tree, a K-D tree has the properties of a binary tree;
the octree module: a data structure that generalizes the quadtree to three-dimensional space; it manages point cloud data in three-dimensional space well, and the specific position of a small target object is determined from the eight child nodes of each octree node.
The three modules above are basic modules with the basic functions for processing point cloud data; besides them there are also advanced processing modules, including point cloud filtering, depth images, sample consensus, point cloud registration, point cloud segmentation and point cloud surface reconstruction. A small usage sketch of the two neighbourhood-search modules follows.
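As an illustration only, this sketch shows how the K-D tree and octree modules of the point cloud library are typically driven from C++ on the VS platform; the toy point values, the query point, the octree resolution and the search radius are made-up assumptions:

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/octree/octree_search.h>
#include <iostream>
#include <vector>

int main() {
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    for (int i = 0; i < 100; ++i)                         // toy cloud (assumption)
        cloud->push_back(pcl::PointXYZ(0.01f * i, 0.02f * i, 0.03f * i));

    pcl::PointXYZ query(0.5f, 1.0f, 1.5f);

    // K-D tree: classify points by proximity to the query point
    pcl::KdTreeFLANN<pcl::PointXYZ> kdtree;
    kdtree.setInputCloud(cloud);
    std::vector<int> idx;
    std::vector<float> sqDist;
    kdtree.nearestKSearch(query, 5, idx, sqDist);
    std::cout << "kd-tree: " << idx.size() << " nearest neighbours\n";

    // Octree: spatial subdivision with a chosen leaf resolution (assumption: 0.1 m)
    pcl::octree::OctreePointCloudSearch<pcl::PointXYZ> octree(0.1);
    octree.setInputCloud(cloud);
    octree.addPointsFromInputCloud();
    idx.clear(); sqDist.clear();
    octree.radiusSearch(query, 0.2, idx, sqDist);
    std::cout << "octree: " << idx.size() << " points within 0.2 m\n";
    return 0;
}
```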
The point cloud filtering module filters a point cloud data set with much noise and automatically masks data that do not belong to the surface of the measured object; point cloud filtering yields inner points and outer points, where the inner points are point cloud data surrounded by boundary points and the outer points are the boundary points of the point cloud data, that is, the contour lines. After the collected point cloud data have been filtered, a preliminary three-dimensional model is established, that is, the initial point cloud data are visualized, which makes the graphic information easier to understand intuitively and facilitates subsequent processing.
As shown in figure 1, (a) is the initial image, and (b) and (c) are the filtered inner-point and outer-point images; the initial image contains many outliers, noise points and other factors that interfere with image clarity, while the filtered images are clear and ready for further processing.
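A minimal filtering sketch with the point cloud library follows; it uses statistical outlier removal as one common noise filter, and the file name and the parameter values are assumptions for illustration only:

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/filters/statistical_outlier_removal.h>

int main() {
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointCloud<pcl::PointXYZ>::Ptr kept(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointCloud<pcl::PointXYZ>::Ptr rejected(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::io::loadPCDFile("scan.pcd", *cloud);        // hypothetical PCD file

    pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
    sor.setInputCloud(cloud);
    sor.setMeanK(50);                 // neighbours used for the mean-distance estimate
    sor.setStddevMulThresh(1.0);      // distance threshold in standard deviations
    sor.filter(*kept);                // points kept as belonging to the surface

    sor.setNegative(true);            // re-run to obtain the rejected (noisy) points
    sor.filter(*rejected);
    return 0;
}
```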
The depth image processing module is mainly used for depth estimation of the point cloud data; the estimation content is rich, and the information of the image can be acquired and analysed from the brightness of the image and from an understanding of its content. The invention uses an image-based depth estimation method.
The key point extraction module extracts key points for measured objects with different geometric characteristics; key point extraction quickly captures the geometric information of the measured object and thus enables fast 3D modeling;
and the point cloud segmentation module divides the point cloud data map locally, which enables fast indexing and improves the precision of the modeling image, that is, the segmentation processing of the point cloud data.
The above are the modules mainly adopted by the invention; they are a small part of the point cloud library and contain various processing capabilities. The VS platform gathers these modules together, so the processing modules no longer have to be sought one by one. A defect remains, however: even when these small modules are presented on the VS platform the process is still tedious, and processing point cloud data on the VS platform still requires manual operation. The point cloud library is presented based on the VS platform, all of its modules are encapsulated, dense point cloud data are processed with the VS platform, and the PCD files of the point cloud data are presented with the Microsoft Foundation Class library.
(II) Automated registration of several depth images
In ground-based 3D laser scanning, because of the limited field of view of the laser scanner and occlusion between objects, each scanning station can only acquire point cloud data in the coordinate system of the current scanner, and complete surface information of the object cannot be acquired in a single scan; at the same time, the scanner cannot scan the parts of the object whose surface normal is perpendicular to the laser emission direction, and in areas where the surface normal changes frequently the viewpoint must be changed many times during scanning to obtain complete surface points. Therefore the method scans the scene from different viewing angles and splices the point clouds obtained at several stations into a three-dimensional data point set in a unified coordinate system, which is based on point cloud data registration.
Point cloud data registration finds the correspondence between two point cloud data sets and converts the point cloud data in one coordinate system into the other coordinate system; the correspondence is obtained first, and then the transformation parameters are solved.
The key to automatic registration is to realize feature matching automatically, so feature matching must be determined first. Consider two feature sets F and G containing n and m features respectively:

F = {f_1, f_2, ..., f_n}, G = {g_1, g_2, ..., g_m}

Feature matching is to find a one-to-one mapping from some subset of F to some subset of G:

f_{i1} <--> g_{j1}
f_{i2} <--> g_{j2}
......
f_{ik} <--> g_{jk}

where i1, i2, ..., ik ∈ {1, 2, ..., n} and j1, j2, ..., jk ∈ {1, 2, ..., m};
The mapping relation of the optimal feature matching found by the invention satisfies the following conditions:
condition one, any pair of corresponding features in the mapping relationship should be similar, that is, for any l ∈ {1, 2, ..., y} the corresponding features f_{il} and g_{jl} should be approximately equal;
condition two, on the premise that condition one is satisfied, the number y of corresponding feature pairs in the mapping relationship should be as large as possible;
the line registration process comprises the following steps: extract line segments from adjacent point clouds → find homonymous (corresponding) line segments → compute the unit direction vectors of the corresponding segments → solve the rotation parameters → determine the intersections of the segment pairs → solve the translation parameters → register the point clouds.
Line feature registration is verified with simulation data: W1, W2, W3 and W4 are four initial line segments, and W1', W2', W3' and W4' are the transformed segments obtained by rotating the initial segments 45 degrees about the z axis of the coordinate system and translating the coordinate origin by (6, 6, 6); both sets are displayed in the same figure. Using the line feature registration method, the rotation and translation parameters are solved from the two sets of segment data; the calculated values are essentially the same as the set values, so registration based on line features performs automatic registration well.
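Only as a sketch of this rotation/translation solution, the code below uses Eigen and a Kabsch/SVD step, which is one standard way to recover a rotation from corresponding unit direction vectors; the segment directions, the sample points and the 45-degree/(6,6,6) transform are made-up values mirroring the simulation above, not the invention's exact solver:

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    const double kPi = std::acos(-1.0);
    // Transform used to generate the simulated segments: 45 deg about z, shift (6,6,6)
    Eigen::Matrix3d R_true =
        Eigen::AngleAxisd(kPi / 4.0, Eigen::Vector3d::UnitZ()).toRotationMatrix();
    Eigen::Vector3d t_true(6, 6, 6);

    // Unit direction vectors of the initial segments W1..W4 (made-up values)
    std::vector<Eigen::Vector3d> dir = {
        {1, 0, 0}, {0, 1, 0}, {0, 0, 1}, {1, 1, 1}};
    // One point on each initial segment (made-up values)
    std::vector<Eigen::Vector3d> pts = {
        {0, 0, 0}, {1, 2, 3}, {4, 0, 1}, {2, 2, 2}};

    // Rotation from corresponding unit directions (Kabsch / SVD)
    Eigen::Matrix3d H = Eigen::Matrix3d::Zero();
    for (std::size_t i = 0; i < dir.size(); ++i) {
        Eigen::Vector3d d = dir[i].normalized();
        H += d * (R_true * d).transpose();             // source x target^T
    }
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
    if (R.determinant() < 0) {                         // guard against a reflection
        Eigen::Matrix3d V = svd.matrixV();
        V.col(2) *= -1.0;
        R = V * svd.matrixU().transpose();
    }

    // Translation from one pair of corresponding points on the segments
    Eigen::Vector3d t = (R_true * pts[0] + t_true) - R * pts[0];

    std::cout << "recovered rotation:\n" << R
              << "\nrecovered translation: " << t.transpose() << "\n";
    return 0;
}
```

With noise-free simulated data such as this, the recovered rotation and translation match the set values up to numerical precision, mirroring the comparison reported above.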
1. Automated registration of depth images
step 1, data acquisition: before image processing a planar image of the three-dimensional target object must be obtained; point cloud data are acquired using the triangulation principle of the laser scanner together with a high-resolution color image, data are collected with the registration system DCR formed by pairing a CCD (charge-coupled device) camera with the laser scanner, and the data acquired by the 3D laser scanner are taken as the depth data;
step 2, feature extraction: feature extraction covers three aspects, namely the extraction of feature points, feature lines and regions; when extracting feature points the selection method is determined first, and there are three feature point extraction methods: applying directional derivatives, applying image brightness contrast relations, and applying mathematical morphology;
step 3, stereo matching: stereo matching connects the several acquired point cloud data images into a complete model in order to bring out the actual 3D shape information of the target object; uncertain factors such as light intensity, chemical and physical changes, noise interference, deformation of the target object and camera characteristics must be noted or avoided.
step 4, data deduplication and networking: the acquired point cloud data are placed in the same coordinate system; to prevent data from appearing several times, the data are deduplicated and the scattered data are integrated together;
step 5, geometric mapping: the acquired picture information contains color information, the gray-level display represents the color display, and geometric mapping is realized so that the 3D model has better color realism. Fig. 2 compares the effect before and after geometric mapping.
2. Depth image registration based on curved surface features
Curved-surface depth image registration matches and stitches the two most similar surfaces; point cloud data with similar features correspond to each other during stitching, the two point clouds are matched pairwise to obtain a complete three-dimensional solid model, and a rigid transformation is solved in the process, that is, the shape of the point cloud data does not change under rotation and translation; the same scene scanned from different directions and angles is placed in the same coordinate system for registration so that the multi-angle graphs correspond to each other.
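The pairwise rigid transform between two overlapping scans can be solved in several ways; purely as an illustration, and not as the invention's surface-feature matching itself, the sketch below uses the iterative closest point module of the point cloud library with assumed file names and an assumed iteration budget:

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>
#include <iostream>

int main() {
    pcl::PointCloud<pcl::PointXYZ>::Ptr source(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointCloud<pcl::PointXYZ>::Ptr target(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::io::loadPCDFile("view_a.pcd", *source);   // hypothetical scans of one scene
    pcl::io::loadPCDFile("view_b.pcd", *target);

    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(source);
    icp.setInputTarget(target);
    icp.setMaximumIterations(50);                  // assumed iteration budget

    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align(aligned);                            // rigid transform: rotation + translation

    std::cout << "converged: " << icp.hasConverged()
              << ", fitness: " << icp.getFitnessScore() << "\n"
              << icp.getFinalTransformation() << std::endl;
    return 0;
}
```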
(III) automatically segmenting the aggregated point cloud image
For segmenting vertical planes, pcl is used: the model SACMODEL_PERPENDICULAR_PLANE (Sample Consensus Model Perpendicular Plane) detects planes that are perpendicular to a known axis vector; an angle critical value must be set, and the plane is then detected.
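A minimal sketch of this model type in the point cloud library follows; the axis vector, the roughly 10-degree angle threshold and the inlier distance are assumptions chosen only for illustration:

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <iostream>

int main() {
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::io::loadPCDFile("scene.pcd", *cloud);       // hypothetical scene cloud

    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setInputCloud(cloud);
    seg.setModelType(pcl::SACMODEL_PERPENDICULAR_PLANE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setAxis(Eigen::Vector3f(1.0f, 0.0f, 0.0f));  // the known vector (assumed here)
    seg.setEpsAngle(0.1745);                         // angle critical value, about 10 degrees
    seg.setDistanceThreshold(0.02);                  // assumed inlier distance in metres

    pcl::PointIndices inliers;
    pcl::ModelCoefficients coefficients;
    seg.segment(inliers, coefficients);              // plane perpendicular to the given axis
    std::cout << "plane inliers: " << inliers.indices.size() << "\n";
    return 0;
}
```

The inlier indices returned by segment() identify the points of the detected plane, which can then be extracted as one segmentation slice.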
When the point cloud data of a large complex scene are processed, placing all of the acquired point cloud images in the same coordinate system makes processing more complex, so after the point clouds are segmented into slices the amount of data to be processed is greatly reduced and the modeling precision is also improved; after the local point cloud data have been matched, clustering is carried out, that is, all the fine small parts are reassembled and vivid 3D modeling is completed.
The point cloud segmentation module: for 3D modeling of a large complex scene the point cloud data are not processed as a whole, because the resulting data information graph would be rough and unclear; dividing the point cloud data graph locally enables fast indexing and improves the accuracy of the modeling image, that is, the point cloud data are segmented:
step one, the Z coordinate of the center point of the contour line of a segmentation slice represents the height of the slice, and the coordinates of this center point are obtained as shown in fig. 3: when the segmentation slice is perpendicular to the plane XOY, the Z value of the coordinates of its projection point on the plane YOZ or the plane XOZ is the Z coordinate of the slice; when the segmentation slice is not perpendicular to the plane XOY, the projection of the center point F of the slice onto the XOY plane is the point F'; the center point of the slice in the two-dimensional plane is computed, then the distance h from the point F' to the slice is computed, and the distance AA' is obtained as AA' = h / cos α; the distance AA' is the Z value of the center point of the slice;
step two, direction: the included angle between the plane XOY and the segmentation sheet represents the direction of the segmentation sheet;
step three, the method of calculating the area in three-dimensional space: suppose there are m points in total; loop over the points i (0 < i < m-1), calculate the area contribution of point i and its two adjacent points on the X, Y and Z projection planes, sum it with the previously projected areas, and then calculate the area of the slice in three-dimensional space:
Area_yz += Z_i * (Y_{i+1} - Y_{i-1})
Area_xz += X_i * (Z_{i+1} - Z_{i-1})
Area_xy += X_i * (Y_{i+1} - Y_{i-1})
Area = 0.5 * sqrt(Area_yz^2 + Area_xz^2 + Area_xy^2)
step four, the method of calculating the projected area on the XOY plane: loop over the points i (0 < i < m-1), calculate the geometric area of point i and its two adjacent points on the XOY plane, and then calculate the total area of the slice on the XOY plane (a small sketch of both area computations follows the formulas below);
area += (X_i * Y_{i+1} - X_{i+1} * Y_i)
Area_XOY = 0.5 * |area|
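A plain C++ sketch of these area formulas follows; the cyclic treatment of the first and last points and the tilted test polygon are assumptions made only so that the sums are well defined:

```cpp
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

struct Point3 { double x, y, z; };

// Area of a planar polygon in 3D from its three projected (shoelace) sums.
double sliceArea(const std::vector<Point3>& p) {
    const std::size_t m = p.size();
    double areaYZ = 0, areaXZ = 0, areaXY = 0;
    for (std::size_t i = 0; i < m; ++i) {
        const Point3& prev = p[(i + m - 1) % m];   // cyclic neighbours (assumption)
        const Point3& next = p[(i + 1) % m];
        areaYZ += p[i].z * (next.y - prev.y);
        areaXZ += p[i].x * (next.z - prev.z);
        areaXY += p[i].x * (next.y - prev.y);
    }
    return 0.5 * std::sqrt(areaYZ * areaYZ + areaXZ * areaXZ + areaXY * areaXY);
}

// Projected area of the same polygon on the XOY plane.
double projectedAreaXOY(const std::vector<Point3>& p) {
    const std::size_t m = p.size();
    double area = 0;
    for (std::size_t i = 0; i < m; ++i) {
        const Point3& next = p[(i + 1) % m];
        area += p[i].x * next.y - next.x * p[i].y;
    }
    return 0.5 * std::fabs(area);
}

int main() {
    // Unit square tilted 45 degrees about the X axis (made-up test slice)
    std::vector<Point3> slice = {
        {0, 0, 0}, {1, 0, 0}, {1, 0.7071, 0.7071}, {0, 0.7071, 0.7071}};
    std::cout << "3D area: " << sliceArea(slice)
              << ", projected XOY area: " << projectedAreaXOY(slice) << "\n";
    return 0;
}
```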
after the point cloud data are segmented and combined, the key point is to realize automatic operation and reduce manual interaction as much as possible.
(IV) semantic-based feature extraction
The point cloud data obtained by the invention comprise only the three-dimensional geometric sampling data of the object to be reconstructed; no other information is available for reference, in particular no semantic description. Semantic description refers to the description of geometric bodies; it is an abstract, high-level description that comprises the geometric characteristics and geometric category of the point cloud data and the interrelation information between the geometric bodies. Adding a semantic description strengthens the computer's understanding of the point cloud data and plays a great role in its reconstruction.
After the dense point cloud data have undergone high-quality reconstruction, the surface data information of the target object in the acquired point cloud data can be analysed and solved, and the features of the three-dimensional model are then extracted. For feature extraction the method of the invention offers two approaches:
first, segmenting the point cloud data and extracting features directly with an algorithm, managing the point cloud data and extracting the features of the target object, mainly using an octree method based on segmentation-fusion-interpolation;
second, converting the LiDAR data into images and then segmenting the data and extracting features; for example, semantic feature extraction is performed on three scenes: a playground, buildings and indoor furniture. All features need to be described before feature extraction; fig. 4 shows a scene at one corner of a campus, in which the playground opposite the roof of the junior students' dormitory can be seen from a bedroom, and the three scenes are mainly defined from four aspects: the size of the area, the mutual relation between positions, the direction in a certain space, and the topological relation among the three scenes.
Then the point cloud data obtained from the three scenes are filtered to reduce image noise, the different geometric bodies obtained are classified and labelled, the contour lines are extracted, and intelligent semantic definitions are made; the contour line extraction is shown in fig. 4: the scene shows a colourful simulated three-dimensional object after feature extraction; although many noise impurities have not been filtered out and some wrong points remain, the shape of the whole model is shown clearly after contour line extraction, and local cleaning on this basis makes the originally fuzzy three-dimensional model more accurate and closer to the real object.
After the semantics are established, the features of the contour lines are extracted according to the semantic feature description, as shown in fig. 5; the object is refined under the constraint of the large frame, redundant points are removed and missing points are completed, so that the detected object is clear, complete and realistic, and high-precision three-dimensional modeling is achieved through three-dimensional reconstruction.
The invention is a three-dimensional reconstruction method for dense point cloud data; it improves on the reconstruction methods of the prior art and provides a scheme for realizing the reconstruction of dense point cloud data: first, the point cloud library is presented based on the VS platform: each module of the point cloud library is encapsulated, dense point cloud data are processed with the VS platform, and the PCD files of the point cloud data are presented with the Microsoft Foundation Class library; second, the aggregated point cloud image is segmented automatically; third, automatic registration of several depth images: the automatic registration method is improved, and various features including normals, key points and VFH descriptors are extracted. On the premise of the established high-precision three-dimensional model, the invention provides a semantic-based feature extraction method and case.

Claims (10)

1. The dense point cloud data-based three-dimensional solid model reconstruction method is characterized in that high-precision three-dimensional reconstruction of a target object is carried out and the method comprises the following two steps: the first step is the acquisition and processing of dense point cloud data, in which the acquisition of the initial data information of the target object surface is completed efficiently; the point cloud data acquired by the laser scanner are characterized by massive scattered point cloud information, the three-dimensional coordinates and reflection intensity information of the point cloud data are acquired, and a model structure is then established for the point cloud by a mathematical method to express its three-dimensional information visually; the second step is high-precision three-dimensional reconstruction of the target object, in which the surface data information of the obtained point cloud is analysed and solved and feature extraction is then carried out on the three-dimensional model, the feature extraction comprising two methods: firstly, segmenting the point cloud data and extracting features directly with an algorithm; secondly, converting the LiDAR data into images and segmenting and extracting the data; the semantic description is the description of geometric bodies and comprises the geometric characteristics and geometric categories of the point cloud data and the interrelation information among the geometric bodies, and semantic-based feature extraction enhances the computer's ability to understand the point cloud data;
the method comprises the steps that firstly, a 3D laser scanner is adopted to obtain initial data of the surface of a target object, the 3D laser scanner is used for carrying out non-contact measurement and digital acquisition on array type space point location information of the surface of a real object in a point cloud mode, then three-dimensional model reconstruction is carried out, and the working flow of the three-dimensional reconstruction mainly comprises data acquisition, data registration, data fusion and network construction and texture mapping;
the dense point cloud data-based three-dimensional solid model reconstruction method comprises the steps of obtaining and processing dense point cloud data and three-dimensional reconstruction of the dense point cloud data, wherein the three-dimensional reconstruction of the dense point cloud data comprises the following steps: firstly, a point cloud library is displayed based on a VS platform: packaging each module of the point cloud library, processing dense point cloud data by using a VS platform, and presenting PCD files of the point cloud data by using a Microsoft basic class library; secondly, automatic registration of a plurality of depth images: improving an automatic registration method, and finding out various characteristics including a normal line, key points and VFH; thirdly, automatically segmenting the aggregation point cloud image; fourthly, a feature extraction method based on semantics; the automatic registration of the several depth images includes automatic registration of the depth images and registration of depth images based on curved features.
2. The dense point cloud data-based three-dimensional solid model reconstruction method according to claim 1, wherein the dense point cloud data management of the present invention is divided mainly according to the size of the point cloud data file:
firstly, establishing octree indexes for initial point cloud data and storing the octree indexes into a database by adopting an Oracle database management mode for data based on hundreds of GB levels, then realizing dynamic scheduling of dense point cloud data according to the octree indexes, and selecting the dense point cloud data to index when being browsed and measured;
secondly, for data of hundreds of MB to dozens of GB levels, an octree index multi-file point cloud management mode based on hard disk external memory is adopted, and after direct processing, a dynamic scheduling engine is used for variable scheduling, wherein the dynamic scheduling is specific to octree nodes;
and thirdly, for dozens of MB data, a management mode based on a memory is adopted, and the method can be directly adopted, namely point cloud data is completely read into the memory, so that point cloud post-processing operation is facilitated.
3. The dense point cloud data-based three-dimensional solid model reconstruction method according to claim 1, wherein a point cloud library is displayed based on a VS platform, and the point cloud library resources are fully utilized, and a convenient platform needs to be found for displaying, the development platform of the invention is Visual Studio, each module in the point cloud library is opened by VS, compiling processing is performed on a VS interface, programming is performed on tasks, a good communication bridge is built in users and resources, a point cloud data graph is processed by one key, and the appearance and functions of each module are specifically as follows:
the input and output module is used for transmitting point cloud data to the point cloud library after acquiring and obtaining the point cloud data on the surface of the target object, comparing the point cloud data, wherein the step is carried out in the input module, filtering or dividing the point cloud data after comparing the initial point cloud data, and transferring the input point cloud data to other processing modules, namely outputting the point cloud;
the K-D tree module is used for quickly finishing the classification of the approximate point cloud data and the dissimilar point cloud data through the search of a K-D tree, and carrying out permutation and combination, completion or deletion on the ragged point cloud data; expanding the binary search tree to obtain a K-D tree, applying to multi-dimensional retrieval, and being suitable for three-dimensional point cloud, wherein the K-D tree has the property of a binary tree if not an empty tree;
the system comprises an octree module, a data structure and a data processing module, wherein the octree module is a data structure which is popularized to a three-dimensional space by a quadtree, well manages point cloud data in the three-dimensional space, and determines the specific position of a small target object according to eight sub-nodes of each node of the octree;
the three modules are basic modules, have basic functions of processing point cloud data, and are also advanced processing modules including point cloud filtering, depth image, sampling consistency, point cloud registration, point cloud segmentation and point cloud curved surface reconstruction besides the basic modules.
4. The dense point cloud data-based three-dimensional solid model reconstruction method according to claim 3, wherein:
the point cloud filtering module is used for filtering a point cloud data set with large noise, automatically shielding data which do not belong to the surface of a measured target object, and obtaining an inner point and an outer point through point cloud filtering, wherein the inner point is point cloud data surrounded by boundary points, and the outer point is a boundary point of the point cloud data, namely a contour line; after filtering the collected point cloud data, establishing a primary three-dimensional model, namely realizing the visualization processing of the initial point cloud data, facilitating the more intuitive understanding of graphic information and facilitating the subsequent work processing;
the depth image processing module is mainly used for carrying out depth estimation on the point cloud data, has rich estimation contents, and can acquire and analyze information of the image according to the brightness of the image and the understanding of the content of the image;
the key point extraction module is used for extracting different measured objects with different geometric characteristics, rapidly acquiring geometric information of the measured objects and realizing rapid 3D modeling;
the point cloud data image is subjected to local division processing by the point cloud data image segmentation module, so that quick indexing is realized, and the precision of a modeling image, namely the point cloud data segmentation processing is improved;
and (3) displaying the point cloud base based on the VS platform, packaging all modules of the point cloud base, processing intensive point cloud data by using the VS platform, and presenting a PCD file of the point cloud data by using the Microsoft basic class base.
5. The dense point cloud data-based three-dimensional solid model reconstruction method according to claim 1, wherein the automated registration of several depth images: scanning a scene from different visual angles, splicing point clouds obtained by a plurality of stations to obtain a three-dimensional data point set under a unified coordinate system, and registering based on the point cloud data;
the point cloud data are registered to find the corresponding relation between the two point cloud data sets, and the point cloud data in one coordinate system are converted into the point cloud data in the other coordinate system; firstly, obtaining a corresponding relation, and then resolving a transformation parameter;
the key to automatic registration is to realize feature matching automatically, so feature matching must be determined first; consider two feature sets F and G containing n and m features respectively:

F = {f_1, f_2, ..., f_n}, G = {g_1, g_2, ..., g_m}

feature matching is to find a one-to-one mapping from some subset of F to some subset of G:

f_{i1} <--> g_{j1}
f_{i2} <--> g_{j2}
......
f_{ik} <--> g_{jk}

wherein i1, i2, ..., ik ∈ {1, 2, ..., n} and j1, j2, ..., jk ∈ {1, 2, ..., m};
The mapping relation of the optimal feature matching found by the invention satisfies the following conditions:
condition one, any pair of corresponding features in the mapping relationship should be similar, that is, for any l ∈ {1, 2, ..., y} the corresponding features f_{il} and g_{jl} should be approximately equal;
secondly, on the premise of meeting the first condition, the number y of the corresponding features in the mapping relation is as large as possible;
the line registration process comprises the following steps: extract line segments from adjacent point clouds → find homonymous (corresponding) line segments → compute the unit direction vectors of the corresponding segments → solve the rotation parameters → determine the intersections of the segment pairs → solve the translation parameters → register the point clouds.
6. The dense point cloud data-based three-dimensional solid model reconstruction method according to claim 5, wherein the automatic registration of the depth image is:
step 1, data acquisition: before image processing a planar image of the three-dimensional target object must be obtained; point cloud data are acquired using the triangulation principle of the laser scanner together with a high-resolution color image, data are collected with the registration system DCR formed by pairing a CCD (charge-coupled device) camera with the laser scanner, and the data acquired by the 3D laser scanner are taken as the depth data;
and 2, feature extraction: the feature extraction is divided into three aspects, namely extraction of feature points, feature lines and regions; when extracting the characteristic points, firstly determining a selection method, wherein the characteristic point extraction method comprises three methods: three methods of applying directional derivatives, applying image brightness contrast relation and applying mathematical morphology are applied;
and 3, stereo matching: the stereo matching enables a plurality of acquired point cloud data images to be connected into a complete model for highlighting the actual 3D shape information of the target object, and some uncertain factors need to be noticed or avoided;
and 4, data deduplication and networking: sharing the acquired point cloud data in the same coordinate system, and in order to avoid the data from appearing for many times, removing the duplication of the data and integrating scattered data together;
step 5, geometric mapping: the acquired picture information contains color information, gray level display represents color display, and geometric mapping is realized, so that 3D modeling has better color reality.
7. The dense point cloud data-based three-dimensional solid model reconstruction method according to claim 5, wherein the depth image registration based on the surface features is to match and stitch the two most similar curved surfaces; point cloud data with similar features correspond to each other during stitching, the two point clouds are matched pairwise to obtain a complete three-dimensional solid model, and a rigid transformation is solved in the implementation process, namely the shape of the point cloud data does not change under rotation and translation transformation; and the same scene obtained by scanning from different directions and angles is placed in the same coordinate system for registration so that the multi-angle graphs correspond to each other.
8. The dense point cloud data-based three-dimensional solid model reconstruction method according to claim 1, wherein in the automatic segmentation and aggregation of the point cloud image, for the segmentation of the vertical plane, pcl is adopted: SACMODEL_PERPENDICULAR_PLANE (Sample Consensus Model Perpendicular Plane) is adopted to segment the vertical plane and to detect planes perpendicular to the known vector; an angle critical value is required to be set, and the plane is then detected;
when the point cloud data of a large complex scene is processed, the point cloud data is segmented, and the data volume processed after segmentation and scribing is greatly reduced; after the local point cloud data matching is completed, clustering work is carried out, namely, all fine small parts are reassembled, and vivid 3D modeling is completed.
9. The dense point cloud data-based three-dimensional solid model reconstruction method according to claim 8, wherein the point cloud segmentation module: for 3D modeling of a large complex scene, a point cloud data graph is locally divided, so that quick indexing can be realized, and the accuracy of a modeling image can be improved, namely the point cloud data is segmented:
step one, the Z coordinate of the center point of the contour line of a segmentation slice represents the height of the slice, and the coordinates of this center point are obtained as follows: when the segmentation slice is perpendicular to the plane XOY, the Z value of the coordinates of its projection point on the plane YOZ or the plane XOZ is the Z coordinate of the slice; when the segmentation slice is not perpendicular to the plane XOY, the projection of the center point F of the slice onto the XOY plane is the point F'; the center point of the slice in the two-dimensional plane is solved, then the distance h from the point F' to the slice is solved, and the distance AA' is solved as AA' = h / cos α; the distance AA' is the Z value of the center point of the slice;
step two, direction: the included angle between the plane XOY and the segmentation sheet represents the direction of the segmentation sheet;
thirdly, an area calculating method in the three-dimensional space comprises the following steps: setting m points in total, circulating the points i (0< i < m-1), calculating the area of the points i and the adjacent two points on the X, Y, Z surface, summing the areas with the previous projected areas, and calculating the area of the segment in the three-dimensional space:
Area_yz += Z_i * (Y_{i+1} - Y_{i-1})
Area_xz += X_i * (Z_{i+1} - Z_{i-1})
Area_xy += X_i * (Y_{i+1} - Y_{i-1})
Area = 0.5 * sqrt(Area_yz^2 + Area_xz^2 + Area_xy^2)
fourthly, a method for calculating the projection area on the XOY plane comprises the following steps: for the circulation of the point i (0< i < m-1), calculating the geometric area of two adjacent points on the XOY surface, and then calculating the total area of the total area division piece on the XOY surface;
area += (X_i * Y_{i+1} - X_{i+1} * Y_i)
Area_XOY = 0.5 * |area|
after the point cloud data are segmented and combined, the key point is to realize automatic operation and reduce manual interaction as much as possible.
10. The dense point cloud data-based three-dimensional solid model reconstruction method according to claim 1, wherein the method of the present invention is based on semantic feature extraction, and comprises two methods:
firstly, directly utilizing an algorithm to carry out segmentation and feature extraction on point cloud data, managing the point cloud data and extracting features of a target object, and mainly adopting an octree method based on segmentation-fusion-interpolation;
secondly, converting LiDAR data into an image, segmenting and extracting the data, performing semantic feature extraction on each scene, describing each feature before feature extraction, then filtering point cloud data acquired by each scene, reducing image noise points, performing classification processing and labeling on acquired different geometric bodies, extracting contour lines, performing intelligent semantic definition, and performing local cleaning processing on the basis to enable an originally fuzzy three-dimensional model to become more accurate and close to a real object;
after the semantics are established, extracting the characteristics of the contour line according to the characteristic description of the semantics, refining the object under the condition of a large frame constraint, removing redundant points, complementing the missing points, enabling the detected target to be clear, complete and real, and realizing high-precision three-dimensional modeling through three-dimensional reconstruction.
CN202010853185.6A 2020-08-22 2020-08-22 Three-dimensional solid model reconstruction method based on dense point cloud data Withdrawn CN111932671A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010853185.6A CN111932671A (en) 2020-08-22 2020-08-22 Three-dimensional solid model reconstruction method based on dense point cloud data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010853185.6A CN111932671A (en) 2020-08-22 2020-08-22 Three-dimensional solid model reconstruction method based on dense point cloud data

Publications (1)

Publication Number Publication Date
CN111932671A true CN111932671A (en) 2020-11-13

Family

ID=73304618

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010853185.6A Withdrawn CN111932671A (en) 2020-08-22 2020-08-22 Three-dimensional solid model reconstruction method based on dense point cloud data

Country Status (1)

Country Link
CN (1) CN111932671A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104299260A (en) * 2014-09-10 2015-01-21 西南交通大学 Contact network three-dimensional reconstruction method based on SIFT and LBP point cloud registration
CN106709947A (en) * 2016-12-20 2017-05-24 西安交通大学 RGBD camera-based three-dimensional human body rapid modeling system
CN110264567A (en) * 2019-06-19 2019-09-20 南京邮电大学 A kind of real-time three-dimensional modeling method based on mark point

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Ning Dehuai: "Research on landslide deformation analysis and prediction using terrestrial 3D laser point cloud data", China Master's Theses Full-text Database, Basic Sciences, no. 11, 15 November 2017 (2017-11-15), pages 008-16 *
Yang Junjian: "Design and implementation of a point cloud data processing system", China Master's Theses Full-text Database, Basic Sciences, no. 11, 15 November 2016 (2016-11-15), pages 008-29 *
Hu Minjie et al.: "Research on management methods for massive point cloud data", Ship Design Communications, no. 136, 30 September 2013 (2013-09-30), pages 62-66 *
Pei Shuyu: "Research on building feature extraction and modeling based on LiDAR point cloud data", China Master's Theses Full-text Database, Basic Sciences, no. 05, 15 May 2019 (2019-05-15), pages 008-142 *

Cited By (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112595258A (en) * 2020-11-23 2021-04-02 扆亮海 Ground object contour extraction method based on ground laser point cloud
CN112595258B (en) * 2020-11-23 2022-04-22 湖南航天智远科技有限公司 Ground object contour extraction method based on ground laser point cloud
CN112381945A (en) * 2020-11-27 2021-02-19 中国科学院自动化研究所 Reconstruction method and system of three-dimensional model transition surface
CN112381945B (en) * 2020-11-27 2021-05-25 中国科学院自动化研究所 Reconstruction method and system of three-dimensional model transition surface
US20220187463A1 (en) * 2020-12-14 2022-06-16 Luminar, Llc Generating Scan Patterns Using Cognitive Lidar
CN112649813A (en) * 2020-12-15 2021-04-13 北京星天地信息科技有限公司 Method for indoor safety inspection of important place, inspection equipment, robot and terminal
CN112649813B (en) * 2020-12-15 2022-02-08 北京星天地信息科技有限公司 Method for indoor safety inspection of important place, inspection equipment, robot and terminal
CN112614234A (en) * 2020-12-28 2021-04-06 深圳市人工智能与机器人研究院 Method for editing mixed reality three-dimensional scene and mixed reality equipment
CN112837271A (en) * 2021-01-11 2021-05-25 浙江大学 Muskmelon germplasm resource character extraction method and system
CN112837271B (en) * 2021-01-11 2023-11-10 浙江大学 Melon germplasm resource character extraction method and system
CN112862929B (en) * 2021-03-10 2024-05-28 网易(杭州)网络有限公司 Method, device, equipment and readable storage medium for generating virtual target model
CN112862929A (en) * 2021-03-10 2021-05-28 网易(杭州)网络有限公司 Method, device and equipment for generating virtual target model and readable storage medium
CN112967384A (en) * 2021-03-24 2021-06-15 扆亮海 Point cloud intelligent segmentation method for identifying building surveying and mapping component
CN112734760A (en) * 2021-03-31 2021-04-30 惠州高视科技有限公司 Semiconductor bump defect detection method, electronic device, and storage medium
CN112734760B (en) * 2021-03-31 2021-08-06 高视科技(苏州)有限公司 Semiconductor bump defect detection method, electronic device, and storage medium
CN113140036B (en) * 2021-04-30 2024-06-07 中德(珠海)人工智能研究院有限公司 Three-dimensional modeling method, device, equipment and storage medium
CN113140036A (en) * 2021-04-30 2021-07-20 中德(珠海)人工智能研究院有限公司 Three-dimensional modeling method, device, equipment and storage medium
CN113223173A (en) * 2021-05-11 2021-08-06 华中师范大学 Three-dimensional model reconstruction migration method and system based on graph model
CN113223173B (en) * 2021-05-11 2022-06-07 华中师范大学 Three-dimensional model reconstruction migration method and system based on graph model
CN113362445A (en) * 2021-05-25 2021-09-07 上海奥视达智能科技有限公司 Method and device for reconstructing object based on point cloud data
CN113362445B (en) * 2021-05-25 2023-05-05 上海奥视达智能科技有限公司 Method and device for reconstructing object based on point cloud data
CN113283102B (en) * 2021-06-08 2023-08-22 中国科学院光电技术研究所 Quick simulation method for astronomical telescope cloud cluster crossing field of view
CN113283102A (en) * 2021-06-08 2021-08-20 中国科学院光电技术研究所 Rapid simulation method for astronomical telescope cloud cluster to pass through field of view
CN113344956B (en) * 2021-06-21 2022-02-01 深圳市武测空间信息有限公司 Ground feature contour extraction and classification method based on unmanned aerial vehicle aerial photography three-dimensional modeling
CN113344956A (en) * 2021-06-21 2021-09-03 深圳市武测空间信息有限公司 Ground feature contour extraction and classification method based on unmanned aerial vehicle aerial photography three-dimensional modeling
CN113674574A (en) * 2021-07-05 2021-11-19 河南泊云电子科技股份有限公司 Augmented reality semi-physical complex electromechanical device training system
CN113674574B (en) * 2021-07-05 2023-10-13 河南泊云电子科技股份有限公司 Augmented reality semi-physical complex electromechanical equipment training system
CN113470060A (en) * 2021-07-08 2021-10-01 西北工业大学 Coronary artery multi-angle curved surface reconstruction visualization method based on CT image
CN113689558A (en) * 2021-08-04 2021-11-23 广州市运通水务有限公司 Three-dimensional reconstruction system based on three-dimensional laser scanner and PCL point cloud base
CN113822914A (en) * 2021-09-13 2021-12-21 中国电建集团中南勘测设计研究院有限公司 Method for unifying oblique photography measurement model, computer device, product and medium
WO2023076913A1 (en) * 2021-10-29 2023-05-04 Hover Inc. Methods, storage media, and systems for generating a three-dimensional line segment
CN114036643B (en) * 2021-11-10 2024-05-14 中国科学院沈阳自动化研究所 Digital twin body modeling method for deformation cabin
CN114036643A (en) * 2021-11-10 2022-02-11 中国科学院沈阳自动化研究所 Deformation cabin digital twin body modeling method
CN114119731A (en) * 2021-11-29 2022-03-01 苏州中科全象智能科技有限公司 Equal-interval sampling method for point cloud contour line of line laser 3D camera
CN114119731B (en) * 2021-11-29 2024-06-25 苏州中科全象智能科技有限公司 Equidistant sampling method for line laser 3D camera point cloud contour line
CN113870267A (en) * 2021-12-03 2021-12-31 深圳市奥盛通科技有限公司 Defect detection method, defect detection device, computer equipment and readable storage medium
CN114140586A (en) * 2022-01-29 2022-03-04 苏州工业园区测绘地理信息有限公司 Indoor space-oriented three-dimensional modeling method and device and storage medium
CN114140586B (en) * 2022-01-29 2022-05-17 苏州工业园区测绘地理信息有限公司 Three-dimensional modeling method and device for indoor space and storage medium
CN114373358A (en) * 2022-03-07 2022-04-19 中国人民解放军空军工程大学航空机务士官学校 Aviation aircraft maintenance operation simulation training system based on rapid modeling
CN114373358B (en) * 2022-03-07 2023-11-24 中国人民解放军空军工程大学航空机务士官学校 Aviation aircraft maintenance operation simulation training system based on rapid modeling
CN114677468A (en) * 2022-05-27 2022-06-28 深圳思谋信息科技有限公司 Model correction method, device, equipment and storage medium based on reverse modeling
CN114937124A (en) * 2022-07-25 2022-08-23 武汉大势智慧科技有限公司 Three-dimensional reconstruction method, device and equipment of sheet-shaped target object based on oblique photography
CN115256950B (en) * 2022-09-27 2023-02-28 西安知象光电科技有限公司 Three-dimensional copying device and working method thereof
CN115256950A (en) * 2022-09-27 2022-11-01 西安知象光电科技有限公司 Three-dimensional copying device and working method thereof
CN115901621A (en) * 2022-10-26 2023-04-04 中铁二十局集团第六工程有限公司 Digital identification method and system for concrete defects on outer surface of high-rise building
CN116188716A (en) * 2022-12-09 2023-05-30 北京城建集团有限责任公司 Method and device for reconstructing three-dimensional model of irregular complex building
CN116244730B (en) * 2022-12-14 2023-10-13 思看科技(杭州)股份有限公司 Data protection method, device and storage medium
CN116244730A (en) * 2022-12-14 2023-06-09 思看科技(杭州)股份有限公司 Data protection method, device and storage medium
CN116524109B (en) * 2022-12-30 2024-02-02 中铁大桥科学研究院有限公司 WebGL-based three-dimensional bridge visualization method and related equipment
CN116524109A (en) * 2022-12-30 2023-08-01 中铁大桥科学研究院有限公司 WebGL-based three-dimensional bridge visualization method and related equipment
CN115880442A (en) * 2023-02-06 2023-03-31 宝略科技(浙江)有限公司 Three-dimensional model reconstruction method and system based on laser scanning
CN115880442B (en) * 2023-02-06 2023-06-09 宝略科技(浙江)有限公司 Three-dimensional model reconstruction method and system based on laser scanning
CN116109788B (en) * 2023-02-15 2023-07-04 张春阳 Method for modeling and reconstructing solid piece
CN116109788A (en) * 2023-02-15 2023-05-12 张春阳 Method for modeling and reconstructing solid piece
CN116993905B (en) * 2023-07-10 2024-05-28 中建三局第三建设工程有限责任公司 Three-dimensional pipeline reconstruction method and system based on B/S architecture
CN116993905A (en) * 2023-07-10 2023-11-03 中建三局第三建设工程有限责任公司 Three-dimensional pipeline reconstruction method and system based on B/S architecture
CN117078873A (en) * 2023-07-19 2023-11-17 达州市斑马工业设计有限公司 Three-dimensional high-precision map generation method, system and cloud platform
CN116758238B (en) * 2023-08-17 2024-01-23 山东高速工程检测有限公司 Road guardrail automatic modeling method based on vehicle-mounted laser point cloud
CN116758238A (en) * 2023-08-17 2023-09-15 山东高速工程检测有限公司 Road guardrail automatic modeling method based on vehicle-mounted laser point cloud
CN117274527A (en) * 2023-08-24 2023-12-22 东方电气集团科学技术研究院有限公司 Method for constructing three-dimensional visualization model data set of generator equipment
CN117274527B (en) * 2023-08-24 2024-06-11 东方电气集团科学技术研究院有限公司 Method for constructing three-dimensional visualization model data set of generator equipment
CN117190983A (en) * 2023-09-05 2023-12-08 湖南天桥嘉成智能科技有限公司 Tunnel over- and under-excavation detection system, method, equipment and storage medium
CN117190983B (en) * 2023-09-05 2024-04-26 湖南天桥嘉成智能科技有限公司 Tunnel over- and under-excavation detection system, method, equipment and storage medium
CN117496092A (en) * 2023-12-29 2024-02-02 先临三维科技股份有限公司 Three-dimensional scanning reconstruction method, device, equipment and storage medium
CN117496092B (en) * 2023-12-29 2024-04-19 先临三维科技股份有限公司 Three-dimensional scanning reconstruction method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111932671A (en) Three-dimensional solid model reconstruction method based on dense point cloud data
Wang 3D building modeling using images and LiDAR: A review
CN108919944B (en) Virtual roaming method for realizing data lossless interaction at display terminal based on digital city model
De Luca et al. Reverse engineering of architectural buildings based on a hybrid modeling approach
Stoter et al. 3D GIS, where are we standing
Cheng et al. BIM applied in historical building documentation and refurbishing
Fabio From point cloud to surface: the modeling and visualization problem
Moyano et al. Operability of point cloud data in an architectural heritage information model
Zhu et al. Leveraging photogrammetric mesh models for aerial-ground feature point matching toward integrated 3D reconstruction
Manferdini et al. Reality-based 3D modeling, segmentation and web-based visualization
Pan et al. Rapid scene reconstruction on mobile phones from panoramic images
CN111784840B (en) LOD (level-of-detail) three-dimensional data singulation method and system based on automatic vector data segmentation
US8600713B2 (en) Method of online building-model reconstruction using photogrammetric mapping system
CN114820975B (en) Three-dimensional scene simulation reconstruction system and method based on all-element parameter symbolization
Grussenmeyer et al. 4.1 ARCHITECTURAL PHOTOGRAMMETRY
Galanakis et al. SVD-based point cloud 3D stone by stone segmentation for cultural heritage structural analysis–The case of the Apollo Temple at Delphi
Laing et al. Monuments visualization: from 3D scanned data to a holistic approach, an application to the city of Aberdeen
Gaiani et al. A mono-instrumental approach to high-quality 3D reality-based semantic models application on the PALLADIO library
Wu et al. [Retracted] Intelligent City 3D Modeling Model Based on Multisource Data Point Cloud Algorithm
Cheng The workflows of 3D digitizing heritage monuments
Fang et al. 3D shape recovery of complex objects from multiple silhouette images
Liu et al. A survey on processing of large-scale 3D point cloud
Jazayeri Trends in 3D land information collection and management
Qing et al. Research on Application of 3D Laser Point Cloud Technology in 3D Geographic Location Information Modeling of Electric Power
CN204229472U (en) System for three-dimensional digital modeling of museum objects based on reverse engineering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20201113