CN114140586A - Indoor space-oriented three-dimensional modeling method and device and storage medium - Google Patents


Info

Publication number
CN114140586A
Authority
CN
China
Prior art keywords
point cloud
cloud data
indoor
standard template
data
Prior art date
Legal status
Granted
Application number
CN202210109965.9A
Other languages
Chinese (zh)
Other versions
CN114140586B (en)
Inventor
沈姜威 (Shen Jiangwei)
钱程杨 (Qian Chengyang)
蒋如乔 (Jiang Ruqiao)
邢万里 (Xing Wanli)
王涛 (Wang Tao)
Current Assignee
Yuance Information Technology Co ltd
Original Assignee
Suzhou Industrial Park Surveying Mapping And Geoinformation Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Industrial Park Surveying Mapping And Geoinformation Co ltd filed Critical Suzhou Industrial Park Surveying Mapping And Geoinformation Co ltd
Priority to CN202210109965.9A
Publication of CN114140586A
Application granted
Publication of CN114140586B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T 2200/08 - Indexing scheme involving all processing steps from image acquisition to 3D model generation
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10028 - Range image; Depth image; 3D point clouds

Abstract

The application discloses a three-dimensional modeling method, device and storage medium for indoor spaces, relating to the technical field of automatic modeling. The method comprises the following steps: acquiring indoor point cloud data of an indoor space with a laser scanning device; running the indoor point cloud data through a semantic segmentation network to obtain the semantic category of each point; for each semantic category, performing instance segmentation on the points belonging to that category according to the point cloud standard template of that category to obtain instance point cloud data of each instance; matching each instance point cloud data with a corresponding point cloud standard template; automatically constructing a three-dimensional white model from the matched instance point cloud data; and placing a prefabricated refined three-dimensional model template according to the geometric center and main direction of the matched instance point cloud data. This solves the problems in the prior art that, owing to numerous noise points and holes, indoor elements are difficult to segment automatically, models are rough and of poor quality, the level of refinement is low, and models of similar objects are inconsistent.

Description

Indoor space-oriented three-dimensional modeling method and device and storage medium
Technical Field
The invention relates to a three-dimensional modeling method and device for an indoor space and a storage medium, and belongs to the technical field of automatic modeling.
Background
Indoor spaces are man-made environments. With the continuous development of building technology and interior design, indoor spaces keep growing in area and complexity, and human activity within them becomes ever more frequent, posing new challenges for complex applications such as indoor navigation and positioning, sweeping robots, intelligent buildings and emergency firefighting. Conventional two-dimensional maps can no longer meet the needs of these applications. Efficient, convenient data acquisition and high-precision construction and rendering of indoor three-dimensional models have therefore become key to supporting the various complex indoor applications.
As a modern measurement technology, three-dimensional laser scanning offers a large scanning range, high efficiency and high data precision; the massive point clouds it acquires can depict the complex real world completely, truthfully and in fine detail, meeting the modeling requirements of indoor spaces. Current model construction methods for indoor point clouds mainly work as follows: feature lines and feature surfaces are automatically extracted and segmented from the indoor point cloud with point cloud processing algorithms (such as RANSAC, Euclidean clustering and region growing), and the line and surface elements are extruded in three dimensions by combining prior knowledge and context information, thereby achieving automatic modeling of simple surface structures. However, such methods lose a large amount of texture and semantic information, offer a low level of refinement and a low degree of intelligent recognition of different targets; they work well only for coarse models of simple structures such as walls, floors, doors and windows, and cannot intelligently build refined three-dimensional models of indoor elements with more complex structures, such as office furniture.
Disclosure of Invention
The object of the invention is to provide a three-dimensional modeling method, device and storage medium for an indoor space that solve the above problems in the prior art.
To achieve this object, the invention provides the following technical solutions:
According to a first aspect, an embodiment of the present invention provides a three-dimensional modeling method for an indoor space, the method comprising:
acquiring indoor point cloud data of an indoor space with a laser scanning device;
running the indoor point cloud data through a semantic segmentation network to obtain the semantic category of each point;
for each semantic category, performing instance segmentation on the points belonging to that category according to the point cloud standard template of that category to obtain instance point cloud data of each instance;
matching each instance point cloud data with a corresponding point cloud standard template;
automatically constructing a three-dimensional white model from the matched instance point cloud data;
and placing a refined three-dimensional model template according to the geometric center and main direction of the matched instance point cloud data.
Optionally, running the indoor point cloud data through the semantic segmentation network to obtain the semantic category of each point includes:
preprocessing the indoor point cloud data to obtain a preprocessed data file;
and running the data file through the semantic segmentation network to obtain the semantic category of each point.
Optionally, preprocessing the indoor point cloud data to obtain a preprocessed data file includes:
normalizing the coordinate system of the indoor point cloud data, the normalized coordinate origin being the initial scanning position of the laser scanning device;
arranging the normalized indoor point cloud data in a preset column order, the preset order being XYZRGB;
and preprocessing the sorted indoor point cloud data to obtain the processed data file.
Optionally, for each semantic category, performing instance segmentation on the points belonging to that category according to the point cloud standard template of that category to obtain instance point cloud data of each instance includes:
for each semantic category, denoising and segmenting the points belonging to that category by means of Euclidean clustering;
and comparing the geometric information of the point cloud standard template with the geometric attributes of the Euclidean-denoised instance segmentation result, and dividing the points belonging to the semantic category into a set consisting of several point cloud standard templates.
Optionally, matching each instance point cloud data with a corresponding point cloud standard template includes:
extracting corresponding feature point pairs from the geometric information of the instance point cloud data and the geometric information of the corresponding point cloud standard template, and preliminarily matching the instance point cloud data to the point cloud standard template with a singular value decomposition algorithm;
and finally matching the preliminarily matched instance point cloud data to the point cloud standard template with an iterative closest point algorithm.
Optionally, extracting corresponding feature point pairs from the geometric information of the instance point cloud data and the geometric information of the corresponding point cloud standard template, and preliminarily matching the instance point cloud data to the point cloud standard template with a singular value decomposition algorithm, includes:
extracting m feature corner points from the instance point cloud data, m being a positive integer;
determining the main direction of the instance point cloud data;
corresponding the m extracted feature corner points to the feature corner points in the point cloud standard template according to the determined main direction;
and preliminarily matching the instance point cloud data to the point cloud standard template with the singular value decomposition algorithm according to each feature corner point pair.
Optionally, finally matching the preliminarily matched instance point cloud data to the point cloud standard template with the iterative closest point algorithm includes:
iterating the transformation matrix between the preliminarily matched instance point cloud data and the point cloud standard template with the iterative closest point algorithm;
and continuously registering the preliminarily matched instance point cloud data with the point cloud standard template according to the obtained transformation matrix to obtain the final matching result.
Optionally, automatically constructing a three-dimensional white model from the matched instance point cloud data includes:
determining the corresponding category and association relations of the instance point cloud data in a CityGML standard data model according to the semantic category of each instance point cloud;
splitting the point cloud standard template corresponding to the instance point cloud data into line-surface combinations;
filling the matched line-surface feature contour coordinates of the instance point cloud data into a linear ring under a polygon surface label, according to the line-surface combination and the feature contour points of the point cloud standard template;
and reading the linear rings with CityGML visualization software or a program, and automatically constructing the three-dimensional white model.
In a second aspect, there is provided an indoor-space-oriented three-dimensional modeling apparatus, the apparatus comprising a memory and a processor, the memory having at least one program instruction stored therein, the processor implementing the method according to the first aspect by loading and executing the at least one program instruction.
In a third aspect, there is provided a computer storage medium having stored therein at least one program instruction which is loaded and executed by a processor to implement the method of the first aspect.
Indoor point cloud data of an indoor space are acquired with a laser scanning device; the indoor point cloud data are run through a semantic segmentation network to obtain the semantic category of each point; for each semantic category, instance segmentation is performed on the points belonging to that category according to the point cloud standard template of that category to obtain instance point cloud data of each instance; each instance point cloud data is matched with a corresponding point cloud standard template; a three-dimensional white model is constructed automatically from the matched instance point cloud data; and a refined three-dimensional model template is placed according to the geometric center and main direction of the matched instance point cloud data. This solves the problems in the prior art that, owing to numerous noise points and holes, indoor elements are difficult to segment automatically, models are rough and of poor quality, the level of refinement is low, and models of similar objects are inconsistent, and achieves fast, automatic construction of refined three-dimensional models of indoor furniture from the indoor point cloud data.
The foregoing description is only an overview of the technical solutions of the present invention. In order that the technical means of the invention may be understood more clearly and implemented in accordance with the contents of the description, preferred embodiments of the invention are described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a flowchart of a three-dimensional modeling method for an indoor space according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of indoor point cloud data acquired for a medium-sized indoor scene according to an embodiment of the present invention;
Fig. 3 is a diagram illustrating a segmentation result obtained after semantic segmentation according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the instance point cloud data of office desks and chairs obtained by segmentation according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the point clouds and contours of the office desk and chair obtained by matching according to an embodiment of the present invention;
Figs. 6 and 7 are schematic diagrams of the constructed three-dimensional models of the office desk and the office chair, respectively, provided by an embodiment of the present invention;
Fig. 8 is a schematic diagram of one possible implementation of the CityGML model of the office chairs and desks constructed according to an embodiment of the present invention;
Fig. 9 is a schematic representation of the indoor space after placement of the three-dimensional models, provided by an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings. It is to be understood that the described embodiments are some, but not all, embodiments of the invention; all other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the invention. In the description of the invention, it should be noted that terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner" and "outer" indicate orientations or positional relationships based on those shown in the drawings; they are used only for convenience and simplicity of description and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the invention. Furthermore, the terms "first", "second" and "third" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance. It should also be noted that, unless otherwise explicitly specified or limited, the terms "mounted", "coupled" and "connected" are to be construed broadly: for example, a fixed connection, a removable connection or an integral connection; a mechanical or an electrical connection; a direct connection or an indirect connection through an intermediate medium; or internal communication between two elements. The specific meanings of these terms in the present invention can be understood by those skilled in the art on a case-by-case basis. In addition, the technical features involved in the different embodiments of the invention described below may be combined with one another as long as they do not conflict.
Referring to fig. 1, a flowchart of a method for three-dimensional modeling of an indoor space according to an embodiment of the present application is shown, where the method includes:
Step 101, acquiring indoor point cloud data of an indoor space with a laser scanning device;
In one possible embodiment of the present application, this step includes:
First, field data acquisition: three-dimensional laser point cloud data of the indoor space are collected with a mobile LiDAR device and the 3D Scanner software;
In actual implementation, the mobile scanning device is moved at normal walking speed and rotated left and right so as to scan the objects on both sides of and in front of the operator, while the reconstruction shown on the screen is monitored to ensure that every object in the scene is scanned successfully.
Second, in-office point cloud processing: the point cloud data are processed with the 3D Scanner software, textures are added to obtain a true-color three-dimensional mesh model, and the model is exported as a txt file containing six-dimensional XYZ and RGB data.
The indoor space can be classified as large, medium or small; for example, a complete indoor scene counts as large, half of one as medium, and a quarter as small.
In one possible implementation of this embodiment, fig. 2 shows a schematic diagram of indoor point cloud data acquired for a medium-sized indoor scene.
Step 102, running the indoor point cloud data through a semantic segmentation network to obtain the semantic category of each point;
This step includes the following:
First, preprocessing the indoor point cloud data to obtain a preprocessed data file;
(1) Normalize the coordinate system of the indoor point cloud data, the normalized coordinate origin being the initial scanning position of the laser scanning device;
In practice, after the normalization operation the floor plane is parallel to the XY plane and the elevation direction corresponds to the Z axis.
(2) Arrange the normalized indoor point cloud data in a preset column order, the preset order being XYZRGB;
The indoor point cloud data, i.e. the txt file, has six columns; here the second and third columns are swapped to obtain the XYZRGB order. In actual implementation, different LiDAR acquisition devices output data in different formats, so the specific column adjustment in this step may differ; it is only required that the adjusted data end up in XYZRGB format.
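By way of illustration only, a minimal preprocessing sketch could look as follows (assuming NumPy, a plain-text export whose six columns arrive in X Z Y R G B order, and hypothetical file names; taking the first recorded point as the scan origin is also an assumption):

```python
import numpy as np

# Load the exported six-column point cloud (assumed column order: X Z Y R G B).
pts = np.loadtxt("scan_export.txt")          # shape (N, 6)

# Shift the coordinate origin to the initial scanning position of the device
# (here assumed to be the first recorded point).
origin = pts[0, :3].copy()
pts[:, :3] -= origin

# Swap the second and third columns so the order becomes X Y Z R G B.
pts[:, [1, 2]] = pts[:, [2, 1]]

np.savetxt("office_1.txt", pts, fmt="%.4f %.4f %.4f %d %d %d")
```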
(3) Preprocess the sorted indoor point cloud data to obtain the processed data file.
In practical implementation, following the data storage layout of S3DIS (Stanford Large-Scale 3D Indoor Spaces dataset), the large, medium and small area point clouds are placed in turn into the office_1, office_2 and office_3 folders of Area_1, each office folder containing the txt-format scanned point cloud processed in the steps above. After the data format and paths are set, the side length of the input sampling grid is set to 0.04 m and the point clouds are preprocessed automatically with the data_prepare_s3dis.py script of the RandLA-Net open-source framework; each area point cloud yields three preprocessed files, such as the grid-sampled Area_1_office_1.ply file, the Area_1_office_1_KDTree.pkl file for fast neighborhood search, and the Area_1_office_1_proj.pkl file used during prediction to map the result back to the original resolution.
Second, running the data file through the semantic segmentation network to obtain the semantic category of each point.
The semantic segmentation network described in this embodiment may be a network trained in advance. Specifically, the S3DIS data set may be trained with the main_S3DIS.py file of the RandLA-Net open-source framework to obtain a trained S3DIS model; this trained model is the semantic segmentation network.
After the semantic segmentation network has been trained, the preprocessed data file is read by it and tested with the main_S3DIS.py file of the RandLA-Net open-source framework; the classification label of each point is output (e.g. 14 object labels in total, such as desk, chair, floor, wall, door and window), and the original point cloud coordinates XYZ are stitched with the RGB values corresponding to the classified labels to obtain the final semantically segmented point cloud. Points of the same class share a consistent color, e.g. yellow points represent office desks, dark green points office chairs, light green points the floor, cyan points the ceiling, blue points clutter and red points the walls. Fig. 3 shows a schematic diagram of one possible segmentation result.
In practical implementation, stitching the original point cloud coordinates XYZ with the RGB values corresponding to the classified labels may include: storing the original point cloud coordinates and the RGB values corresponding to the classified labels as txt data in XYZRGB order.
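As a small illustration of this stitching step, the sketch below maps predicted labels to the colors named above and writes the result in XYZRGB order (the label ids, color values and file names are assumptions chosen for the example, not values fixed by the embodiment):

```python
import numpy as np

# Hypothetical label ids and colors, chosen only to mirror the colors named above.
LABEL_RGB = {
    0: (255, 255, 0),    # office desk  -> yellow
    1: (0, 100, 0),      # office chair -> dark green
    2: (144, 238, 144),  # floor        -> light green
    3: (0, 255, 255),    # ceiling      -> cyan
    4: (0, 0, 255),      # clutter      -> blue
    5: (255, 0, 0),      # wall         -> red
}

xyz = np.loadtxt("office_1.txt")[:, :3]                # original coordinates
labels = np.loadtxt("office_1_pred.txt", dtype=int)    # per-point predicted labels

rgb = np.array([LABEL_RGB.get(int(l), (128, 128, 128)) for l in labels])
np.savetxt("office_1_semantic.txt", np.hstack([xyz, rgb]),
           fmt="%.4f %.4f %.4f %d %d %d")
```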
Step 103, for each semantic category, performing instance segmentation on the points belonging to that category according to the point cloud standard template of that category to obtain instance point cloud data of each instance;
Before this step is executed, point cloud standard templates corresponding to the various kinds of instances can be constructed. For example, complete point clouds of a desk and of a chair are selected as the point cloud standard templates of their respective categories, and the geometric center coordinates, the identifiers, the feature contour point coordinates of each template, and the length and width of the desktop and of the seat are recorded. The feature contour points of the office desk comprise the edge contour of the desktop and the contours of the circular bases of its three legs; the feature contour points of the office chair comprise the two corner points at the top edge of the backrest and the two corresponding corner points on the seat farthest from the perpendicular through the two backrest corner points.
Accordingly, the step may include:
(1) For each semantic category, denoise and segment the points belonging to that category by means of Euclidean clustering;
For a point P in the point cloud of that semantic category, the k points nearest to P are found with a KD-tree neighbor search algorithm, and the points whose distance is smaller than a set threshold are clustered into a set Q. If the number of elements in Q no longer increases, the whole clustering process ends; otherwise, a point other than P is selected in the set Q and the process is repeated until the number of elements in Q no longer increases.
In one possible embodiment, the neighbor search radius is set to 0.15 m for office desks and 0.02 m for office chairs, the minimum cluster size to 1000 points and the maximum to 200000 points, which finally yields 4 office-desk cluster blocks and 14 office-chair cluster blocks (a minimal clustering sketch is given below);
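A minimal Euclidean-clustering sketch in this spirit, using SciPy's cKDTree for the neighbor search (the function and parameter names are illustrative, not part of the embodiment):

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_cluster(points, radius=0.15, min_size=1000, max_size=200000):
    """Greedy Euclidean clustering: grow each cluster by repeated radius queries."""
    tree = cKDTree(points)
    unvisited = np.ones(len(points), dtype=bool)
    clusters = []
    for seed in range(len(points)):
        if not unvisited[seed]:
            continue
        queue, members = [seed], []
        unvisited[seed] = False
        while queue:
            idx = queue.pop()
            members.append(idx)
            for nb in tree.query_ball_point(points[idx], radius):
                if unvisited[nb]:
                    unvisited[nb] = False
                    queue.append(nb)
        if min_size <= len(members) <= max_size:
            clusters.append(np.asarray(members))
    return clusters

# e.g. desk_clusters = euclidean_cluster(desk_points, radius=0.15)
```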
(2) Compare the geometric information of the point cloud standard template with the geometric attributes of the Euclidean-denoised instance segmentation result, and divide the points belonging to the semantic category into a set consisting of several point cloud standard templates.
Comparing the geometric information of the office-desk point cloud standard template (length l = 3.5 m, width w = 1.4 m) with the geometric attributes of a Euclidean clustering result (for example, length L of about 7 m and width W of about 3 m), the cluster block can be divided into approximately 4 standard templates according to the aspect-ratio formula (L/l × W/w), with instance segmentation carried out along the center lines of the long and short sides. The same operation is applied to all cluster blocks; see diagram a in fig. 4. Finally 10 new office-desk clusters are obtained, see diagram b in fig. 4, and 14 office-chair clusters can be obtained in the same way. In actual implementation, L/l and W/w may be rounded before being multiplied.
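The template-based split described above could be sketched as follows, rounding the two ratios as suggested (an axis-aligned simplification with illustrative names):

```python
import numpy as np

def split_cluster(points, template_l=3.5, template_w=1.4):
    """Split one cluster block into roughly template-sized instances along the
    center lines of its long and short sides (axis-aligned simplification)."""
    mins = points[:, :2].min(axis=0)
    extents = points[:, :2].max(axis=0) - mins
    long_axis = int(np.argmax(extents))
    short_axis = 1 - long_axis
    L, W = extents[long_axis], extents[short_axis]
    n_l = max(1, round(L / template_l))      # e.g. round(7 / 3.5) -> 2
    n_w = max(1, round(W / template_w))      # e.g. round(3 / 1.4) -> 2
    # Assign every point to one of the n_l x n_w cells and collect the pieces.
    i = np.minimum(((points[:, long_axis] - mins[long_axis]) / (L / n_l)).astype(int), n_l - 1)
    j = np.minimum(((points[:, short_axis] - mins[short_axis]) / (W / n_w)).astype(int), n_w - 1)
    return [points[(i == a) & (j == b)] for a in range(n_l) for b in range(n_w)]
```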
Step 104, matching each instance point cloud data with a corresponding point cloud standard template;
First, corresponding feature point pairs are extracted from the geometric information of the instance point cloud data and the geometric information of the corresponding point cloud standard template, and the instance point cloud data are preliminarily matched to the point cloud standard template with a singular value decomposition algorithm;
(1) Extract m feature corner points from the instance point cloud data, m being a positive integer;
In the present embodiment, m is an integer of 4 or more.
In one possible embodiment, taking an office desk as an example, the four feature corner points of the desktop can be obtained with the getMinMax3D function of the PCL library and used to solve the rotation transformation matrix. For an office chair, the four feature corner points comprise two corner points of the backrest and two corner points of the seat. The two backrest corner points can be obtained in batch from an elevation threshold (they lie at the higher part of the chair) and the maximum Euclidean distance (they are the two points farthest apart), this distance being the backrest length. From the two backrest corner points, the equation of the straight line through them in the plane, Ax + By + C = 0, is obtained; the distance from any point (x0, y0) to this line is
d = |Ax0 + By0 + C| / sqrt(A^2 + B^2).
The coordinate point of the planar projection of the office chair that is farthest from this line can thus be found, and the length d of the perpendicular is the seat width. Extending a segment of length d from each backrest corner point towards the seat, the other end point of the segment is the seat corner point corresponding to that backrest corner point, so that four feature points are obtained.
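An illustrative NumPy sketch of this corner-point derivation for the office chair (assuming the chair points are already expressed in the normalized coordinate system, with the planar projection and heights passed in separately; names are hypothetical):

```python
import numpy as np

def chair_corners(points_xy, z, z_thresh):
    """Return the two backrest corner points and the two corresponding seat corner points."""
    # Backrest corners: among the high points, the pair with the largest mutual distance.
    high = points_xy[z > z_thresh]
    d2 = np.sum((high[:, None, :] - high[None, :, :]) ** 2, axis=-1)
    i, j = np.unravel_index(np.argmax(d2), d2.shape)
    b1, b2 = high[i], high[j]

    # Straight line through b1 and b2:  A*x + B*y + C = 0.
    A, B = b2[1] - b1[1], b1[0] - b2[0]
    C = -(A * b1[0] + B * b1[1])
    dist = np.abs(A * points_xy[:, 0] + B * points_xy[:, 1] + C) / np.hypot(A, B)

    d = dist.max()                             # seat width (longest perpendicular)
    n = np.array([A, B]) / np.hypot(A, B)      # unit normal of the backrest line
    if np.dot(points_xy[np.argmax(dist)] - b1, n) < 0:
        n = -n                                 # make the normal point towards the seat
    return b1, b2, b1 + d * n, b2 + d * n      # backrest corners, then seat corners
```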
(2) Determine the main direction of the instance point cloud data;
Taking an office desk in the scene as an example, the main-direction information can be reduced to an "M type" and a "W type". First the long side of the instance point cloud to be matched is aligned with the X axis and its short side with the Y axis; the point cloud is then rasterized, dividing the planar space into 0.2 m by 0.2 m square grid cells, and the number of cells containing points is counted row by row along the Y axis from bottom to top. If the number of occupied cells at the bottom is larger than at the top, the desk is judged to be "W type"; otherwise it is "M type". For an office chair, the direction of the perpendicular from the backrest to the seat is taken as the main direction, which guarantees a one-to-one correspondence of the four feature corner points.
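The "M type"/"W type" decision could be sketched as follows, comparing only the bottom-most and top-most grid rows as a simplification of the row-by-row count (illustrative code, not the exact implementation):

```python
import numpy as np

def desk_orientation(points_xy, cell=0.2):
    """Classify a desk cluster as "W type" or "M type" from its occupied grid cells."""
    mins = points_xy.min(axis=0)
    ij = np.floor((points_xy - mins) / cell).astype(int)   # grid index of every point
    occupied = np.zeros(ij.max(axis=0) + 1, dtype=bool)    # occupancy grid (X rows, Y cols)
    occupied[ij[:, 0], ij[:, 1]] = True
    bottom = occupied[:, 0].sum()      # occupied cells in the bottom row (smallest Y)
    top = occupied[:, -1].sum()        # occupied cells in the top row (largest Y)
    return "W type" if bottom > top else "M type"
```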
(3) Correspond the m extracted feature corner points to the feature corner points in the point cloud standard template according to the determined main direction;
(4) Preliminarily match the instance point cloud data to the point cloud standard template with the singular value decomposition algorithm, according to each feature corner point pair.
A rotation transformation matrix is solved with the singular value decomposition algorithm from at least four pairs of corresponding points, achieving the preliminary matching of the desk and chair point clouds with their respective standard templates.
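A standard SVD-based (Kabsch-style) estimate of the rigid transform from corresponding point pairs is sketched below; it illustrates the idea, not necessarily the exact formulation used in the embodiment:

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Estimate rotation R and translation t with dst ~ R @ src + t
    from m >= 3 corresponding points (rows of src and dst)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # avoid a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

# e.g. R, t = rigid_transform_svd(instance_corners, template_corners)
```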
Second, the preliminarily matched instance point cloud data are finally matched to the point cloud standard template with an iterative closest point algorithm.
(1) Iterate the transformation matrix between the preliminarily matched instance point cloud data and the point cloud standard template with the iterative closest point algorithm;
(2) Continuously register the preliminarily matched instance point cloud data with the point cloud standard template according to the obtained transformation matrix, and obtain the final matching result.
The ICP (Iterative Closest Point) algorithm of the PCL (Point Cloud Library) library is adopted to iteratively compute the transformation matrix between the template point cloud and the coarsely matched point cloud, and scene registration is carried out continuously until the best match is obtained. Specifically, the rotation part of the computed transformation matrix is R and the translation vector is t; applying R·P + t to the coordinates P of the preliminarily matched instance point cloud yields new point coordinates, and matching continues in this way until the optimal matching result is obtained.
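For illustration, a minimal point-to-point ICP loop in the same spirit is sketched below with NumPy/SciPy; PCL's own implementation differs in detail:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit(src, dst):
    """SVD-based rigid transform (same idea as the sketch above)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, template, R=np.eye(3), t=np.zeros(3), iters=50, tol=1e-6):
    """Point-to-point ICP: refine the coarse alignment of src to the template."""
    tree = cKDTree(template)
    prev_err = np.inf
    for _ in range(iters):
        moved = src @ R.T + t                 # current estimate: R @ p + t for each point p
        dist, idx = tree.query(moved)         # closest template point for every source point
        dR, dt = best_fit(moved, template[idx])
        R, t = dR @ R, dR @ t + dt            # compose the incremental update
        if abs(prev_err - dist.mean()) < tol:
            break
        prev_err = dist.mean()
    return R, t
```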
For example, diagram a in fig. 5 shows the point cloud after desk template matching, and diagram b in fig. 5 shows the feature contour points after desk template matching; similarly, diagram c in fig. 5 shows the point cloud and the feature contour points after office-chair matching.
Step 105, automatically constructing a three-dimensional white model from the matched instance point cloud data;
First, determining the corresponding category and association relations of the instance point cloud data in the CityGML standard data model according to the semantic category of each instance point cloud;
A new xml-format file is created and, based on the semantic information, the office desk and chair are defined in the CityGML standard as the building furniture class <bldg:BuildingFurniture> under the interior furniture class <bldg:interiorFurniture>, carrying level-of-detail LOD4 information expressed by <bldg:lod4Geometry>.
Second, splitting the point cloud standard template corresponding to the instance point cloud data into line-surface combinations;
The CityGML model is composed of surface patches. For example, the office desktop is formed from the desktop contour points of the standard template; the two end faces of a cylindrical desk leg are each represented by connecting their base contour points in sequence, and the lateral surface of the leg is approximated by several rectangular patches formed from corresponding contour points of the two end faces; the patches of other indoor elements follow similar composition rules.
Third, filling the matched line-surface feature contour coordinates of the instance point cloud data into a linear ring under a polygon surface label, according to the line-surface combination and the feature contour points of the point cloud standard template;
Fourth, reading the linear rings and the semantic categories with CityGML visualization software or a program, and automatically constructing the three-dimensional white model.
According to the composition rule of each indoor element, all matched feature contour points are passed, as a closed coordinate string whose first and last coordinates coincide, into the <gml:posList> that stores the patch coordinate information; this element lies in a surface member <gml:surfaceMember> under a multi-surface <gml:MultiSurface>. Each newly filled, complete <gml:surfaceMember> indicates that a new patch has been constructed. The generation of all indoor elements of the same kind is identical, so the model can be constructed automatically simply by passing in different contour point coordinates according to this rule.
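A simplified sketch of serializing one such patch is shown below; the element names follow the CityGML/GML conventions cited above, while the surrounding document skeleton, namespaces and the <bldg:lod4Geometry> wrapper are omitted for brevity:

```python
def patch_to_gml(ring_points):
    """Serialize one closed contour (list of (x, y, z)) as a gml:surfaceMember patch."""
    ring = list(ring_points) + [ring_points[0]]          # close the ring: first == last
    pos_list = " ".join(f"{x:.3f} {y:.3f} {z:.3f}" for x, y, z in ring)
    return (
        "<gml:surfaceMember>"
        "<gml:Polygon><gml:exterior><gml:LinearRing>"
        f"<gml:posList>{pos_list}</gml:posList>"
        "</gml:LinearRing></gml:exterior></gml:Polygon>"
        "</gml:surfaceMember>"
    )

# Usage (desk_contours is an illustrative list of contour-point lists):
#   patches = "".join(patch_to_gml(p) for p in desk_contours)
# All patches of one piece of furniture are then collected under
# <gml:MultiSurface> inside <bldg:lod4Geometry> of a <bldg:BuildingFurniture>.
```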
Fig. 6 shows one possible schematic diagram of the constructed CityGML model of the office desk; similarly, fig. 7 shows a possible schematic diagram of the constructed CityGML model of the office chair, and fig. 8 shows a possible schematic diagram of the resulting CityGML models of the office chairs and desks together.
Step 106, placing a refined three-dimensional model template according to the geometric center and main direction of the matched instance point cloud data.
The geometric center and main direction of the matched point cloud are read, and refined three-dimensional model templates are placed in batches in commercial modeling software, reproducing the whole indoor furniture scene completely, with high precision and high fidelity. The refined three-dimensional model template is obtained by constructing and rendering a refined three-dimensional model of typical furniture with commercial modeling software.
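For illustration, the placement parameters of one instance might be derived as follows (a sketch; treating the main direction as a yaw angle about the Z axis is an assumption about the placement convention):

```python
import numpy as np

def placement_params(matched_points, main_dir_xy):
    """Geometric center and yaw angle used to drop a refined template into the scene."""
    center = matched_points.mean(axis=0)                          # geometric center (x, y, z)
    yaw = np.degrees(np.arctan2(main_dir_xy[1], main_dir_xy[0]))  # rotation about Z, in degrees
    return center, yaw

# Each (center, yaw) pair can then be exported and used to batch-place the
# corresponding refined furniture template in the modeling software.
```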
Referring to FIG. 9, a schematic of the interior space after placement of the three-dimensional model is shown.
In summary, indoor point cloud data of an indoor space are acquired with the laser scanning device; the indoor point cloud data are run through a semantic segmentation network to obtain the semantic category of each point; for each semantic category, instance segmentation is performed on the points belonging to that category according to the point cloud standard template of that category to obtain instance point cloud data of each instance; each instance point cloud data is matched with a corresponding point cloud standard template; a three-dimensional white model is constructed automatically from the matched instance point cloud data; and a refined three-dimensional model template is placed according to the geometric center and main direction of the matched instance point cloud data. This solves the problems in the prior art that, owing to numerous noise points and holes, indoor elements are difficult to segment automatically, models are rough and of poor quality, the level of refinement is low, and models of similar objects are inconsistent, and achieves fast, automatic construction of refined three-dimensional models of indoor furniture from the indoor point cloud data.
The method supports automatic and accurate instance segmentation of all elements (walls, floors, desks, chairs and the like) in the indoor point cloud and retains the semantic information of the indoor furniture, overcoming the problem of traditional point cloud segmentation, in which specific objects are frequently fragmented and indoor elements are represented only by non-semantic surface, line and column features.
The method supports multi-level three-dimensional display of indoor office furniture: a single modeling workflow yields not only the pure point cloud model and the template-matched CityGML white model but also a refined three-dimensional rendered model, meeting the modeling requirements of different scenarios.
The application further provides a three-dimensional modeling apparatus for an indoor space, comprising a memory and a processor, wherein at least one program instruction is stored in the memory and the processor loads and executes the at least one program instruction to implement the method described above.
Also provided is a computer storage medium having stored therein at least one program instruction, which is loaded and executed by a processor to implement the method described above.
The technical features of the above embodiments may be combined in any manner. For the sake of brevity, not all possible combinations of these technical features are described; nevertheless, as long as such combinations are not contradictory, they should be considered to fall within the scope of this specification.
The above embodiments express only several implementations of the invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the invention, and these all fall within the protection scope of the invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (8)

1. A three-dimensional modeling method for an indoor space, the method comprising:
acquiring indoor point cloud data of an indoor space with a laser scanning device;
running the indoor point cloud data through a semantic segmentation network to obtain the semantic category of each point, specifically: normalizing the coordinate system of the indoor point cloud data, the normalized coordinate origin being the initial scanning position of the laser scanning device; arranging the normalized indoor point cloud data in a preset column order, the preset order being XYZRGB; preprocessing the sorted indoor point cloud data to obtain a processed data file; and running the data file through the semantic segmentation network to obtain the semantic category of each point;
for each semantic category, performing instance segmentation on the points belonging to that category according to the point cloud standard template of that category to obtain instance point cloud data of each instance;
matching each instance point cloud data with a corresponding point cloud standard template;
automatically constructing a three-dimensional white model from the matched instance point cloud data;
and placing a refined three-dimensional model template according to the geometric center and main direction of the matched instance point cloud data.
2. The three-dimensional modeling method for an indoor space according to claim 1, wherein, for each semantic category, performing instance segmentation on the points belonging to that category according to the point cloud standard template of that category to obtain instance point cloud data of each instance comprises:
for each semantic category, denoising and segmenting the points belonging to that category by means of Euclidean clustering;
and comparing the geometric information of the point cloud standard template with the geometric attributes of the Euclidean-denoised instance segmentation result, and dividing the points belonging to the semantic category into a set consisting of several point cloud standard templates.
3. The method according to claim 1, wherein matching each instance point cloud data with a corresponding point cloud standard template comprises:
extracting corresponding feature point pairs from the geometric information of the instance point cloud data and the geometric information of the corresponding point cloud standard template, and preliminarily matching the instance point cloud data to the point cloud standard template with a singular value decomposition algorithm;
and finally matching the preliminarily matched instance point cloud data to the point cloud standard template with an iterative closest point algorithm.
4. The three-dimensional modeling method for an indoor space according to claim 3, wherein extracting corresponding feature point pairs from the geometric information of the instance point cloud data and the geometric information of the corresponding point cloud standard template, and preliminarily matching the instance point cloud data to the point cloud standard template with a singular value decomposition algorithm, comprises:
extracting m feature corner points from the instance point cloud data, m being a positive integer;
determining the main direction of the instance point cloud data;
corresponding the m extracted feature corner points to the feature corner points in the point cloud standard template according to the determined main direction;
and preliminarily matching the instance point cloud data to the point cloud standard template with the singular value decomposition algorithm according to each feature corner point pair.
5. The three-dimensional modeling method for an indoor space according to claim 3, wherein finally matching the preliminarily matched instance point cloud data to the point cloud standard template with the iterative closest point algorithm comprises:
iterating the transformation matrix between the preliminarily matched instance point cloud data and the point cloud standard template with the iterative closest point algorithm;
and continuously registering the preliminarily matched instance point cloud data with the point cloud standard template according to the obtained transformation matrix to obtain the final matching result.
6. The three-dimensional modeling method for an indoor space according to claim 1, wherein automatically constructing a three-dimensional white model from the matched instance point cloud data comprises:
determining the corresponding category and association relations of the instance point cloud data in a CityGML standard data model according to the semantic category of each instance point cloud;
splitting the point cloud standard template corresponding to the instance point cloud data into line-surface combinations;
filling the matched line-surface feature contour coordinates of the instance point cloud data into a linear ring under a polygon surface label, according to the line-surface combination and the feature contour points of the point cloud standard template;
and reading the linear rings and the semantic categories with CityGML visualization software or a program, and automatically constructing the three-dimensional white model.
7. An indoor-space-oriented three-dimensional modeling apparatus comprising a memory and a processor, wherein at least one program instruction is stored in the memory, and the processor implements the three-dimensional modeling method for an indoor space according to any one of claims 1 to 6 by loading and executing the at least one program instruction.
8. A computer storage medium having stored therein at least one program instruction, which is loaded and executed by a processor to implement the three-dimensional modeling method for an indoor space according to any one of claims 1 to 6.
CN202210109965.9A 2022-01-29 2022-01-29 Three-dimensional modeling method and device for indoor space and storage medium Active CN114140586B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210109965.9A CN114140586B (en) 2022-01-29 2022-01-29 Three-dimensional modeling method and device for indoor space and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210109965.9A CN114140586B (en) 2022-01-29 2022-01-29 Three-dimensional modeling method and device for indoor space and storage medium

Publications (2)

Publication Number Publication Date
CN114140586A true CN114140586A (en) 2022-03-04
CN114140586B CN114140586B (en) 2022-05-17

Family

Family ID: 80381826

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210109965.9A Active CN114140586B (en) 2022-01-29 2022-01-29 Three-dimensional modeling method and device for indoor space and storage medium

Country Status (1)

Country Link
CN (1) CN114140586B (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110349247A (en) * 2018-04-08 2019-10-18 哈尔滨工业大学 A kind of indoor scene CAD 3D method for reconstructing based on semantic understanding
CN110363849A (en) * 2018-04-11 2019-10-22 株式会社日立制作所 A kind of interior three-dimensional modeling method and system
CN109887082A (en) * 2019-01-22 2019-06-14 武汉大学 A kind of interior architecture three-dimensional modeling method and device based on point cloud data
CN110097593A (en) * 2019-04-15 2019-08-06 上海海事大学 A method of identifying cylindrical surface from multi-line laser radar point cloud data
US20210358206A1 (en) * 2020-05-14 2021-11-18 Star Institute Of Intelligent Systems Unmanned aerial vehicle navigation map construction system and method based on three-dimensional image reconstruction technology
WO2022016311A1 (en) * 2020-07-20 2022-01-27 深圳元戎启行科技有限公司 Point cloud-based three-dimensional reconstruction method and apparatus, and computer device
CN111932671A (en) * 2020-08-22 2020-11-13 扆亮海 Three-dimensional solid model reconstruction method based on dense point cloud data
CN113129311A (en) * 2021-03-10 2021-07-16 西北大学 Label optimization point cloud example segmentation method
CN113128405A (en) * 2021-04-20 2021-07-16 北京航空航天大学 Plant identification and model construction method combining semantic segmentation and point cloud processing
CN113051654A (en) * 2021-06-02 2021-06-29 苏州工业园区测绘地理信息有限公司 Indoor stair three-dimensional geographic entity model construction method based on two-dimensional GIS data
CN113740864A (en) * 2021-08-24 2021-12-03 上海宇航系统工程研究所 Self-pose estimation method for soft landing tail segment of detector based on laser three-dimensional point cloud

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Teppei Suzuki et al.: "Rethinking PointNet Embedding for Faster and Compact Model", 2020 International Conference on 3D Vision (3DV)
Jia Xiaofeng et al.: "Construction of Fine Indoor Three-Dimensional Models Based on Laser Point Cloud Data", Beijing Surveying and Mapping (《北京测绘》)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114926699A (en) * 2022-07-20 2022-08-19 深圳大学 Indoor three-dimensional point cloud semantic classification method, device, medium and terminal
CN114926699B (en) * 2022-07-20 2022-12-06 深圳大学 Indoor three-dimensional point cloud semantic classification method, device, medium and terminal
CN115273645A (en) * 2022-08-09 2022-11-01 南京大学 Map making method for automatically clustering indoor surface elements
CN115273645B (en) * 2022-08-09 2024-04-09 南京大学 Map making method for automatically clustering indoor surface elements
CN116246069A (en) * 2023-02-07 2023-06-09 北京四维远见信息技术有限公司 Method and device for self-adaptive terrain point cloud filtering, intelligent terminal and storage medium
CN116246069B (en) * 2023-02-07 2024-01-16 北京四维远见信息技术有限公司 Method and device for self-adaptive terrain point cloud filtering, intelligent terminal and storage medium
CN116188713A (en) * 2023-04-25 2023-05-30 煤炭科学研究总院有限公司 Method and device for dynamically generating coal mine three-dimensional scene based on point cloud mirror image model
CN116188713B (en) * 2023-04-25 2023-08-15 煤炭科学研究总院有限公司 Method and device for dynamically generating coal mine three-dimensional scene based on point cloud mirror image model
CN117058314A (en) * 2023-08-16 2023-11-14 广州葛洲坝建设工程有限公司 Cast-in-situ structure template reverse modeling method based on point cloud data
CN117058314B (en) * 2023-08-16 2024-04-12 广州葛洲坝建设工程有限公司 Cast-in-situ structure template reverse modeling method based on point cloud data

Also Published As

Publication number Publication date
CN114140586B (en) 2022-05-17

Similar Documents

Publication Publication Date Title
CN114140586B (en) Three-dimensional modeling method and device for indoor space and storage medium
EP3574473B1 (en) Apparatus, method, and system for alignment of 3d datasets
US7728833B2 (en) Method for generating a three-dimensional model of a roof structure
US20210027532A1 (en) Primitive-based 3d building modeling, sensor simulation, and estimation
US7639250B2 (en) Sketching reality
CN105354883B (en) The quick subtle three-dimensional modeling methods of 3ds Max and system based on a cloud
CN110349247B (en) Indoor scene CAD three-dimensional reconstruction method based on semantic understanding
CN111008422A (en) Building live-action map making method and system
IL266060A (en) Robust merge of 3d textured meshes
CN110363849A (en) A kind of interior three-dimensional modeling method and system
CN110827398A (en) Indoor three-dimensional point cloud automatic semantic segmentation algorithm based on deep neural network
CN114926699B (en) Indoor three-dimensional point cloud semantic classification method, device, medium and terminal
CN109118588B (en) Automatic color LOD model generation method based on block decomposition
JP2019168976A (en) Three-dimensional model generation device
Zhang et al. A data-driven approach for adding facade details to textured lod2 citygml models
Gruen et al. Semantically enriched high resolution LoD 3 building model generation
Cantzler Improving architectural 3D reconstruction by constrained modelling
Sahebdivani et al. Deep learning based classification of color point cloud for 3D reconstruction of interior elements of buildings
Weibel et al. Robust 3D object classification by combining point pair features and graph convolution
Nguyen et al. High resolution 3d content creation using unconstrained and uncalibrated cameras
CN111915720B (en) Automatic conversion method from building Mesh model to CityGML model
CN109509249B (en) Virtual scene light source intelligent generation method based on components
US11107257B1 (en) Systems and methods of generating playful palettes from images
Gruen et al. An Operable System for LoD3 Model Generation Using Multi-Source Data and User-Friendly Interactive Editing
Xiong Reconstructing and correcting 3d building models using roof topology graphs

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 215000 No. 101, Suhong Middle Road, industrial park, Wuzhong District, Suzhou City, Jiangsu Province

Patentee after: Yuance Information Technology Co.,Ltd.

Address before: 215000 No. 101, Suhong Middle Road, industrial park, Wuzhong District, Suzhou City, Jiangsu Province

Patentee before: SUZHOU INDUSTRIAL PARK SURVEYING MAPPING AND GEOINFORMATION Co.,Ltd.