CN117974899A - Three-dimensional scene display method and system based on digital twinning - Google Patents


Info

Publication number
CN117974899A
CN117974899A
Authority
CN
China
Prior art keywords
component, semantic, scene, fixed, points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410160269.XA
Other languages
Chinese (zh)
Inventor
郑昊 (Zheng Hao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yuntu Digital Creative Technology Co ltd
Original Assignee
Shenzhen Yuntu Digital Creative Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Yuntu Digital Creative Technology Co ltd filed Critical Shenzhen Yuntu Digital Creative Technology Co ltd
Priority to CN202410160269.XA priority Critical patent/CN117974899A/en
Publication of CN117974899A publication Critical patent/CN117974899A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a three-dimensional scene display method and system based on digital twinning. The method comprises the following steps: acquiring indoor scene point cloud data and extracting a plurality of semantic member units from the point cloud data; calculating the position information and size information of each semantic member unit and generating a scene layout diagram from them; dividing each semantic member unit into a corresponding scene component semantic category; generating a fixed member entity structure according to the component category of each fixed member in the scene layout, and searching a preset digital twin object model asset library, indexed by object category, to obtain the digital twin object model corresponding to each accessory object; assembling the fixed member entity structures and the digital twin object models of the accessory objects in the scene layout diagram to obtain a three-dimensional model of the indoor scene; and displaying the three-dimensional model as a three-dimensional scene. The invention reduces the amount of data processed during reconstruction and improves the fineness of the geometric surfaces of the fixed members and accessory objects.

Description

Three-dimensional scene display method and system based on digital twinning
Technical Field
The invention relates to the technical field of digital twinning, in particular to a three-dimensional scene display method and system based on digital twinning.
Background
Three-dimensional reconstruction refers to techniques that scan a real physical scene with various sensor devices and compute a corresponding digitized model. It has many applications in digital content authoring, industrial design, surveying and mapping, smart cities, virtual reality/metaverse, and other fields. By reconstructed object, the technique can be divided into object-level and scene-level three-dimensional reconstruction, and scene-level reconstruction can be further divided into indoor scenes, outdoor scenes, and so on. Compared with outdoor scenes, indoor scenes are more complex and difficult: they contain large numbers of objects, their layouts are intricate, and the objects occlude one another, which poses great challenges for three-dimensional reconstruction of indoor scenes. In the related art, point cloud data of an indoor scene is acquired, a triangular mesh model is generated from the point cloud data, and the triangular mesh model is then segmented to obtain a three-dimensional reconstruction result.
Disclosure of Invention
Aiming at the technical problems in the prior art, the invention provides a three-dimensional scene display method and system based on digital twinning, to overcome the defects of the traditional approach: a large amount of redundant data in the reconstruction process, high memory occupation, and components of the reconstructed model obtained by triangular mesh optimization, which leaves the geometric surfaces of the components uneven and their details rough.
The technical scheme for solving the technical problems is as follows: a digital twinning-based three-dimensional scene display method, comprising:
Acquiring indoor scene point cloud data, and extracting a plurality of semantic member units from the point cloud data;
Calculating the position information and the size information of each semantic member unit, and generating a scene layout chart according to the position information and the size information of the semantic member units;
Dividing each semantic component unit into corresponding scene component semantic categories, wherein each scene component semantic category comprises fixed components and auxiliary objects, generating a fixed component entity structure according to the component category corresponding to each fixed component in the scene layout, searching a preset digital twin object model asset library by taking the object category corresponding to the auxiliary object as an index, and acquiring a digital twin object model corresponding to the auxiliary object;
assembling the fixed component entity structure and the digital twin object model corresponding to the accessory object in the scene layout diagram to obtain a three-dimensional model of the indoor scene;
And displaying the three-dimensional scene of the three-dimensional model.
In an embodiment, after extracting the plurality of semantic member units from the point cloud data, the method further includes:
Optimizing each semantic component unit specifically comprises the following steps:
Initializing an optimized point set corresponding to each semantic component unit, wherein the optimized point set is used for storing the points contained in the optimized semantic component units;
Randomly selecting three non-collinear points from all points of the semantic component unit, and calculating a semantic component unit plane according to coordinates of the three non-collinear points;
Calculating the distances from other points in the semantic member unit to the plane of the semantic member unit respectively, counting the number of points with the distance smaller than a preset error threshold, if the number of points with the distance smaller than the preset error threshold is greater than a preset number record variable, setting the preset number record variable as the number of points with the current distance smaller than the preset error threshold, and storing the points with the distance smaller than the preset error threshold into a corresponding optimized point set;
And re-selecting three non-collinear points in the semantic component units, calculating a semantic component unit plane, updating a corresponding optimized point set according to the distance from other points to the semantic component unit plane until the preset optimization times are reached, and taking the points in the optimized set as the points of the optimized semantic component units to obtain the optimized semantic component units.
In an embodiment, when the semantic member units are fixed members, the calculating the position information and the size information of each semantic member unit includes:
Respectively calculating the maximum coordinate position and the minimum coordinate position of each optimized fixed component in the three-dimensional coordinate axis direction;
And calculating the position coordinates of the central point of the optimized fixing member and the geometric dimension of the optimized fixing member according to the maximum coordinate position and the minimum coordinate position of each coordinate axis direction.
In an embodiment, when the semantic component units are attached objects, the calculating the location information and the size information of each semantic component unit includes:
calculating the maximum coordinate position and the minimum coordinate position of the axis alignment bounding box of the accessory object in the three-dimensional coordinate axis direction;
and calculating the center point position coordinates of the auxiliary object according to the maximum coordinate position and the minimum coordinate position of the three-dimensional coordinate axis direction, and acquiring the geometric dimension of the auxiliary object according to the dimension of the axis alignment bounding box.
In one embodiment, the scene layout is a hierarchical structure; the number of layers of the scene layout diagram is determined by the spatial structure of the indoor scene and by the types and number of semantic member units included in the indoor scene;
Each layer of the scene layout map corresponds to one or more component categories and is expressed as a cell grid comprising a plurality of cells; each cell comprises a semantic label, a center position, a geometric dimension and an identifier, and the identifier is used to assist in indicating position information for the fixed member in the cell.
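One way to realize the per-layer cell grid just described is sketched below; the class and field names are assumptions for illustration, not from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class LayoutCell:
    """One cell of a scene-layout layer: semantic label, center, size, identifier."""
    label: str = ""        # e.g. "wall", "door" (illustrative labels)
    center: tuple = ()     # (x, y, z) center position
    size: tuple = ()       # (dx, dy, dz) geometric dimension
    identifier: str = ""   # auxiliary position info for the fixed member

@dataclass
class LayoutLayer:
    """A layer of the layout map: a rows x cols grid of cells."""
    rows: int
    cols: int
    cells: list = field(default_factory=list)

    def __post_init__(self):
        # Fill the grid with empty cells if none were supplied.
        if not self.cells:
            self.cells = [[LayoutCell() for _ in range(self.cols)]
                          for _ in range(self.rows)]

    def cell(self, r, c):
        return self.cells[r][c]
```

A multi-layer layout is then simply a list of `LayoutLayer` objects, one per component-category group.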
In an embodiment, the component categories include at least one of floor layers, walls, doors, windows, ceilings, beams, and columns.
In one embodiment, generating the scene layout according to the position information and the size information of the fixed members and the accessory objects includes:
projecting points in the fixed component optimization point set corresponding to each component category to an XY plane, and respectively calculating the maximum coordinate position and the minimum coordinate position of the projected points in the X, Y direction;
Calculating vertex coordinates of the fixed component in the scene layout according to the maximum and minimum coordinate positions of the X, Y directions;
Acquiring a coordinate range of a cell corresponding to the fixed member according to the vertex coordinates of the fixed member in the scene layout;
determining a center point of the fixed member according to the coordinate range of the corresponding cell of the fixed member;
If the fixed member is a wall-type component, calculating the distances from the other projection points of the wall component to the center point of the wall component, and selecting the cell corresponding to the center point with the smallest distance as the wall-component cell; if no data exists in the wall-component cell, filling the semantic label, center position, geometric dimension and identifier of the wall component into it; if the wall-component cell already holds data, averaging the new center position with the stored center position to obtain a new center position, and taking the union of the new geometric dimension and the stored geometric dimension as the new geometric dimension;
If the fixed member is a door or window component, calculating the distances from the other projection points of the component to its center point, and selecting the cell corresponding to the center point with the smallest distance as the door/window-component cell; if no data exists in that cell, filling the semantic label, center position, geometric dimension and identifier of the component into it; if the cell already holds data, filling the component into the higher-priority layer of the scene layout according to a preset priority, taking the average of the new center position and the stored center position as the new center position, taking the union of the new geometric dimension and the stored geometric dimension as the new geometric dimension, and indicating the wall on which the door or window component is located through the identifier.
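The cell-merging rule above — average the stored and incoming center positions, union the geometric extents — can be sketched as follows. This is a minimal illustration, assuming each cell stores its geometry as per-axis (min, max) extent intervals; the dictionary keys and function name are not from the patent:

```python
def merge_cell(existing, incoming):
    """Merge new member data into an occupied layout cell:
    average the two centers, union the per-axis extent intervals."""
    center = tuple((a + b) / 2
                   for a, b in zip(existing["center"], incoming["center"]))
    extent = tuple((min(lo1, lo2), max(hi1, hi2))
                   for (lo1, hi1), (lo2, hi2)
                   in zip(existing["extent"], incoming["extent"]))
    return {"label": existing["label"], "center": center, "extent": extent}
```

Representing the geometric dimension as per-axis intervals makes the "union" well defined: the merged box is the smallest axis-aligned box covering both contributions.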
In an embodiment, generating a fixed component entity structure according to a component class corresponding to each fixed component in the scene layout includes:
Calculating the space coordinates of all vertexes of the fixed component according to the central position and the geometric dimension in each fixed component cell in the scene layout;
performing triangular mesh dissection by taking the space coordinates of all vertexes as key points to obtain a triangular mesh model of the fixed component;
Inputting the component category corresponding to the fixed component into a texture generation neural network model, and obtaining a texture map of the fixed component, wherein the texture generation neural network model is obtained based on the component category and the corresponding texture map training;
and generating a solid structure of the fixing member according to the triangular mesh model of the fixing member and the texture map of the fixing member.
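For a box-like fixed member, the step of computing all vertex space coordinates from a cell's center position and geometric dimension and then triangulating can be sketched as follows. This is a minimal illustration of an axis-aligned box (8 vertices, 12 triangles); the patent does not specify its meshing at this level of detail:

```python
import itertools

def box_mesh(center, size):
    """Vertices and triangles (two per face) of an axis-aligned box
    with the given center position and (dx, dy, dz) geometric dimension."""
    cx, cy, cz = center
    hx, hy, hz = (s / 2 for s in size)   # half-extents
    # Vertices ordered by the sign pattern of (x, y, z):
    # index = 4*(sx>0) + 2*(sy>0) + (sz>0)
    verts = [(cx + sx * hx, cy + sy * hy, cz + sz * hz)
             for sx, sy, sz in itertools.product((-1, 1), repeat=3)]
    faces = [
        (0, 1, 3), (0, 3, 2),  # x = cx - hx face
        (4, 6, 7), (4, 7, 5),  # x = cx + hx face
        (0, 4, 5), (0, 5, 1),  # y = cy - hy face
        (2, 3, 7), (2, 7, 6),  # y = cy + hy face
        (0, 2, 6), (0, 6, 4),  # z = cz - hz face
        (1, 5, 7), (1, 7, 3),  # z = cz + hz face
    ]
    return verts, faces
```

The resulting vertex/triangle lists are the kind of triangular mesh model onto which the generated texture map would then be applied.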
The invention also provides a three-dimensional scene display system based on digital twinning, which comprises:
the acquisition module is used for acquiring indoor scene point cloud data and extracting a plurality of semantic member units from the point cloud data;
The computing module is used for computing the position information and the size information of each semantic component unit and generating a scene layout chart according to the position information and the size information of the semantic component units;
The generation module is used for dividing each semantic component unit into corresponding scene component semantic categories, wherein each scene component semantic category comprises fixed components and auxiliary objects, generating a fixed component entity structure according to the component category corresponding to each fixed component in the scene layout, searching a preset digital twin object model asset library by taking the object category corresponding to the auxiliary object as an index, and acquiring a digital twin object model corresponding to the auxiliary object;
The assembling module is used for assembling the fixed component entity structure and the digital twin object model corresponding to the accessory object in the scene layout diagram to obtain a three-dimensional model of the indoor scene;
And the three-dimensional scene display module is used for displaying the three-dimensional scene of the three-dimensional model.
The invention also provides electronic equipment, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes the three-dimensional scene display method based on digital twinning when executing the program.
The invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the digital twinning-based three-dimensional scene display method of any of the above.
The invention also provides a computer program product having stored thereon a computer program which, when executed by a processor, implements the digital twinning-based three-dimensional scene display method according to any of the above.
The beneficial effects of the invention are as follows: a scene layout diagram is generated first, specific components are refined according to the scene, and the three-dimensional model is then generated from those components, which reduces the amount of data processed during reconstruction; fixed members are generated according to component category and accessory objects are obtained from an asset library, which improves the fineness of their geometric surfaces.
Drawings
FIG. 1 is a flow diagram of a digital twinning-based three-dimensional scene display method provided by the invention;
FIG. 2 is a schematic diagram of a scene layout provided by the present invention;
FIG. 3 is a schematic diagram of a two-layer scene layout diagram in accordance with the present invention;
FIG. 4 is a schematic diagram of a texture generating neural network model according to the present invention;
FIG. 5 is a schematic diagram of a digital twinning-based three-dimensional scene display system provided by the invention;
Fig. 6 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In the description of the present application, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the description of the present application, the term "for example" is used to mean "serving as an example, instance, or illustration". Any embodiment described as "for example" in this disclosure is not necessarily to be construed as preferred or advantageous over other embodiments. The following description is presented to enable any person skilled in the art to make and use the application. In the following description, details are set forth for purposes of explanation. It will be apparent to one of ordinary skill in the art that the present application may be practiced without these specific details. In other instances, well-known structures and processes have not been described in detail so as not to obscure the description of the application with unnecessary detail. Thus, the present application is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
Fig. 1 is a flowchart of a three-dimensional scene display method based on digital twinning according to an embodiment of the present invention, and as shown in fig. 1, the three-dimensional scene display method based on digital twinning according to an embodiment of the present invention includes:
Step 101, acquiring indoor scene point cloud data, and extracting a plurality of semantic member units from the point cloud data.
In the embodiment of the invention, the indoor scene to be reconstructed can be scanned directly with a lidar, a scanner or similar equipment to obtain the corresponding three-dimensional point cloud, or image or depth information can be captured with a camera and a three-dimensional point cloud of the scene computed from it using photogrammetry or multi-view stereo geometry. The invention does not limit the technique used to acquire the indoor scene point cloud data.
Step 102, calculating the position information and the size information of each semantic member unit, and generating a scene layout diagram according to the position information and the size information of the semantic member units;
Step 103, dividing each semantic component unit into corresponding scene component semantic categories, wherein the scene component semantic categories comprise fixed components and auxiliary objects, generating a fixed component entity structure according to the component category corresponding to each fixed component in the scene layout, and searching a preset digital twin object model asset library by taking the object category corresponding to the auxiliary object as an index to obtain a digital twin object model corresponding to the auxiliary object.
In the embodiment of the invention, the design of the semantic category set S is shown in Table 1; the whole set S is divided into two classes, fixed members and accessory objects. The fixed members comprise component objects that are invariable parts of the scene itself, such as walls, floors, ceilings, beams, etc.; the accessory objects comprise additional variable objects in the scene, such as lights, tables, chairs, air conditioners, etc. It should be noted that those skilled in the art may add semantic categories according to the actual situation.
TABLE 1 Scene component semantic category table
Class               Example categories
Fixed members       wall, floor, ceiling, beam, etc.
Accessory objects   light, table, chair, air conditioner, etc.
And 104, assembling the fixed component entity structure and the digital twin object model corresponding to the accessory object in the scene layout diagram to obtain a three-dimensional model of the indoor scene.
And 105, performing three-dimensional scene display on the three-dimensional model.
The traditional digital twinning-based three-dimensional scene display method acquires point cloud data of an indoor scene, generates a triangular mesh model from the point cloud data, and then segments the triangular mesh model to obtain a three-dimensional reconstruction result.
According to the digital twinning-based three-dimensional scene display method provided by the embodiment of the invention, a plurality of semantic member units are extracted from point cloud data by acquiring indoor scene point cloud data; calculating the position information and the size information of each semantic member unit, and generating a scene layout according to the position information and the size information of the semantic member units; dividing each semantic component unit into corresponding scene component semantic categories, wherein the scene component semantic categories comprise fixed components and auxiliary objects, generating a fixed component entity structure according to the component category corresponding to each fixed component in a scene layout chart, searching a preset digital twin object model asset library by taking the object category corresponding to the auxiliary object as an index, and acquiring a digital twin object model corresponding to the auxiliary object; and assembling the fixed component entity structure and the digital twin object model corresponding to the accessory object in the scene layout diagram to obtain a three-dimensional model of the indoor scene, and displaying the three-dimensional scene by the three-dimensional model, so that the scene layout diagram is generated firstly, specific components are refined according to the scene, the three-dimensional model is generated according to the specific components, the data processing amount in the reconstruction process is reduced, the fixed components are acquired according to the component types, the accessory object is acquired according to the asset library, and the fineness of the geometric surfaces of the fixed components and the accessory object can be improved.
Based on any one of the above embodiments, the digital twinning-based three-dimensional scene display method provided by the embodiment of the present invention includes:
in step 201, a plurality of semantic component units in the point cloud data are identified through the point cloud network structure, each semantic component unit includes a structural semantic tag, and the structural semantic tag corresponds to a specific category in the component category or a specific category of the object category.
In the embodiment of the invention, all semantic member units {Pi} in the point cloud are identified through network structures such as PointNet and PointNet++. Each semantic member unit Pi consists of a semantic label and all of its contained points, and the identified structural semantic label corresponds to a category in the semantic category set S. PointNet takes the point cloud data directly as input and can output either a single overall class label or a per-point label for every point.
Step 202, optimizing each semantic member unit;
in the embodiment of the invention, each semantic member unit is optimized, and the method specifically comprises the following steps:
Initializing an optimized point set corresponding to each semantic component unit, wherein the optimized point set is used for storing the points contained in the optimized semantic component units;
Randomly selecting three non-collinear points from all points of the semantic component unit, and calculating a semantic component unit plane according to coordinates of the three non-collinear points;
calculating distances from other points in the semantic member units to the plane of the semantic member unit respectively, counting the number of points with the distance smaller than a preset error threshold, if the number of points with the distance smaller than the preset error threshold is larger than a preset number record variable, setting the preset number record variable as the number of points with the current distance smaller than the preset error threshold, and storing the points with the distance smaller than the preset error threshold into a corresponding optimized point set;
And re-selecting three non-collinear points in the semantic component units, calculating a semantic component unit plane, updating a corresponding optimized point set according to the distance from other points to the semantic component unit plane until the preset optimization times are reached, and taking the points in the optimized set as the points of the optimized semantic component units to obtain the optimized semantic component units.
Taking a certain member Pi, whose contained points are Pi1, Pi2, ..., Pin, as an example, optimizing and refining Pi comprises the following specific steps:
a) Initializing a preset error threshold d for controlling the precision of the optimization refinement, where d is determined by the actual precision requirement; initializing a count record variable c = 0; setting the number of optimization iterations K (the larger K is, the higher the optimization precision); and initializing the optimized point set Q of member Pi to empty.
b) Randomly selecting three non-collinear points from all the points of Pi; the plane A on which the three points lie can be calculated from their coordinates.
c) Respectively calculating the distances from the other points in Pi to plane A and counting the number x of points whose distance is smaller than the preset threshold d; if x > c, setting c = x and updating the set Q to the points whose distance to plane A is smaller than d.
d) Repeating steps b) and c) until the preset number of optimization iterations K is reached. The points in set Q are then the points of member Pi after optimization.
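The steps above describe a RANSAC-style plane fit: repeatedly hypothesize a plane from three random points and keep the largest inlier set. A minimal sketch, with illustrative parameter values (the threshold d and iteration count K are placeholders, not values from the patent):

```python
import random

def fit_plane(p1, p2, p3):
    """Plane through three non-collinear points, returned as (unit normal n, d)
    with the plane equation n . x + d = 0."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    norm = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    n = tuple(c / norm for c in n)          # raises ZeroDivisionError if collinear
    d = -sum(n[i] * p1[i] for i in range(3))
    return n, d

def refine_unit(points, err_threshold=0.01, iterations=100, seed=0):
    """Return the largest inlier set Q found over K random plane hypotheses.
    Falls back to the original points if no valid plane is ever sampled."""
    rng = random.Random(seed)
    best_count, best_set = 0, list(points)
    for _ in range(iterations):
        sample = rng.sample(points, 3)
        try:
            n, d = fit_plane(*sample)
        except ZeroDivisionError:           # collinear sample: plane undefined
            continue
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) < err_threshold]
        if len(inliers) > best_count:       # step c): keep the best set Q so far
            best_count, best_set = len(inliers), inliers
    return best_set
```

Run on a noisy planar member (for example a wall slab with a stray point), the stray point falls outside every large inlier set and is dropped from the refined unit.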
In the embodiment of the invention, the identified semantic component point cloud is optimized and refined, and the optimized component point cloud is obtained through random sampling and repeated iterative optimization, so that the accuracy of the component point cloud is higher than that of the original component point cloud.
Step 203, calculating the position information and the size information of each optimized semantic member unit, and generating a scene layout according to the position information and the size information of the semantic member units;
204, dividing each optimized semantic component unit into corresponding scene component semantic categories, wherein the scene component semantic categories comprise fixed components and auxiliary objects, generating a fixed component entity structure according to the component category corresponding to each fixed component in the scene layout, and searching a preset digital twin object model asset library by taking the object category corresponding to the auxiliary object as an index to acquire a digital twin object model corresponding to the auxiliary object;
Step 205, assembling the fixed component entity structure and the digital twin object model corresponding to the accessory object in the scene layout diagram to obtain the three-dimensional model of the indoor scene.
In the embodiment of the invention, the space layout of the scene is represented by dividing the objects in the scene into two types of fixed members and auxiliary objects and combining the multi-layer layout diagram, so that the space structural representation of the reconstructed target scene is realized. Aiming at the problem of poor quality of model details in the traditional reconstruction method, the quality of the reconstructed model can be improved by combining a scene layout diagram and different object classifications, generating fixed component objects based on mesh subdivision and accessory objects based on asset library replacement, and realizing the structured three-dimensional reconstruction of the scene by means of assembling the scene diagram.
In the embodiment of the present invention, when the semantic component units are fixed components, calculating the position information and the size information of each semantic component unit includes:
Respectively calculating the maximum coordinate position and the minimum coordinate position of each optimized fixed component in the three-dimensional coordinate axis direction;
And calculating the central point position coordinates of the optimized fixing member and the geometric dimension of the optimized fixing member according to the maximum coordinate position and the minimum coordinate position of each coordinate axis direction.
In the embodiment of the invention, for the fixed members, the maximum and minimum coordinate positions (x_min, x_max), (y_min, y_max), (z_min, z_max) of each member along the X, Y and Z coordinate axes are calculated respectively; for an accessory object, the maximum and minimum coordinate positions of its axis-aligned bounding box along the X, Y and Z axes are calculated. The geometric dimension of a fixed member is (x_max - x_min) * (y_max - y_min) * (z_max - z_min).
In the embodiment of the present invention, when the semantic component units are attached objects, calculating the position information and the size information of each semantic component unit includes:
calculating the maximum coordinate position and the minimum coordinate position of the axis alignment bounding box of the accessory object in the three-dimensional coordinate axis direction;
and calculating the central point position coordinates of the auxiliary object according to the maximum coordinate position and the minimum coordinate position of the three-dimensional coordinate axis direction, and acquiring the geometric dimension of the auxiliary object according to the dimension of the axis alignment bounding box.
In the embodiment of the invention, the position coordinate O_i of the center point of each component and the geometric dimension of the component can be calculated according to the maximum and minimum coordinate positions of the component in the X, Y and Z coordinate axis directions; the geometric dimension of an attached object is the size of its axis-aligned bounding box.
Based on any of the above embodiments, the scene layout diagram is mainly used for representing the spatial layout of the scene's fixed components. The scene layout diagram is a layered structure, and the number of layers of the scene layout diagram is determined by the spatial structure of the indoor scene and by the types and number of semantic component units included in the indoor scene;
each layer of the scene layout map corresponds to one or more component categories, each layer of the scene layout map is expressed as a cell grid, the cell grid comprises a plurality of cells, each cell comprises a semantic label, a center position, a geometric dimension and an identifier, and the identifier is used for assisting in indicating position information of the one or more component categories corresponding to fixed components.
In an embodiment of the invention, the component categories include at least one of floor layers, walls, doors, windows, ceilings, beams and columns. Ceilings include simple-shape ceilings and complex-shape ceilings; the simple shape ceiling is represented by a central ceiling cell, the other ceiling cells being empty; a complex-shaped ceiling is represented by a combination of a plurality of ceiling cells.
As shown in fig. 2, the first layer represents the floor layer, walls, doors and windows; the second layer represents the ceiling, doors and windows. The layout representation method is extensible: for example, when fixed components of categories such as beams and columns exist, they can be represented by adding a new layout layer. The following description takes the two-layer representation as an example.
Each layer of the layout diagram is represented as a 3×3 cell grid, and the data for each cell is defined as follows:
(semantic tags, central location, geometry, identifier)
The semantic tag is the semantic labeling result obtained in the first step, and the center position and the geometric dimension are the results obtained in the second step. The identifier is used for assisting in indicating the position information corresponding to door and window type components. For example, for the door component in the first cell in the upper left corner of the first layer: when the identifier of the cell is 0, the door component is located on the wall represented by the adjacent cell to the right; when the identifier of the cell is 1, the door component is located on the wall of the adjacent cell below. The identifier of wall, ceiling and similar components is empty.
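The cell tuple and identifier convention above can be sketched with a hypothetical data structure; all names and the two identifier values are assumptions drawn from the example in the text:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class LayoutCell:
    """One cell of a layout layer: (semantic tag, center, size, identifier)."""
    semantic_tag: str                      # e.g. "wall", "door", "ceiling"
    center: Tuple[float, float, float]
    size: Tuple[float, float, float]
    identifier: Optional[int] = None       # empty for wall/ceiling components

# Assumed mapping from identifier value to the wall a door/window sits on.
IDENTIFIER_TO_WALL = {
    0: "wall in the adjacent cell to the right",
    1: "wall in the adjacent cell below",
}

def locate_door(cell: LayoutCell) -> str:
    """Resolve a door/window cell's identifier to its host wall."""
    return IDENTIFIER_TO_WALL[cell.identifier]
```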
The second layer is primarily used to represent ceiling elements in which multiple ceiling cells can be combined to represent a complex-shaped ceiling, a simple-shaped planar ceiling can be represented by only the central ceiling cell, and the other ceiling cells can be empty.
The door and window cells of the second layer are used to assist the door and window representation of the first layer. Where door or window components exist on two adjacent walls, the components can be distributed across the two layout layers; for example, where doors exist on both the left and the upper walls, this can be accomplished with the representation shown in fig. 3.
Based on any one of the above embodiments, the digital twinning-based three-dimensional scene display method provided by the present invention further includes:
Step 301, constructing a mapping relation between the value of the identifier and the position information of the door and window type components;
step 302, obtaining the value of the identifier in the unit cell of the door and window type component, and obtaining auxiliary indication information according to the matching between the value of the identifier and the mapping relation of the value of the identifier and the position information of the door and window type component;
step 303, obtaining the position information corresponding to the door and window components according to the auxiliary indication information.
In the embodiment of the invention, the cells of different layers of the scene layout are used for representing the position information of different fixed components or representing the distribution of the same fixed component at different positions in space.
In the embodiment of the invention, in the construction process of the scene layout diagram, the central positions of the cells are iteratively updated;
the center position of the current cell is the average value of the center position calculated in the last iteration and the center position calculated in the current iteration.
In the embodiment of the invention, in the construction process of the scene layout diagram, the geometric dimension of the cell is updated iteratively;
the geometric dimension of the current cell is the union of the geometric dimension calculated in the previous iteration and the geometric dimension calculated in the current iteration.
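The iterative cell update described above (averaged center position, element-wise union of geometric dimensions) can be sketched as follows; the function name is an illustrative assumption:

```python
def update_cell(stored_center, stored_size, new_center, new_size):
    """Merge a newly computed component into an already occupied cell:
    average the center positions, take the element-wise union (max) of sizes."""
    center = tuple((a + b) / 2.0 for a, b in zip(stored_center, new_center))
    size = tuple(max(a, b) for a, b in zip(stored_size, new_size))
    return center, size
```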
Based on any of the above embodiments, generating a scene layout from the position information and the size information of the fixed member and the subordinate object includes:
projecting points in the fixed component optimization point set corresponding to each component category to an XY plane, and respectively calculating the maximum coordinate position and the minimum coordinate position of the projected points in the X, Y direction;
Calculating vertex coordinates of the fixed component in the scene layout according to the maximum and minimum coordinate positions in the X, Y directions;
Acquiring a coordinate range of a cell corresponding to the fixed member according to the vertex coordinates of the fixed member in the scene layout;
Determining a center point of the fixed member according to the coordinate range of the corresponding cell of the fixed member;
If the fixed component is a wall component, calculating the distances from the other projection points of the wall component to the center point of the wall component, and selecting the cell corresponding to the center point with the smallest distance as the wall component cell; if no data exists in the wall component cell, filling the semantic tag, center position, geometric dimension and identifier of the wall component into the wall component cell; if the wall component cell already has data, averaging the center position of the wall component and the center position stored in the wall component cell to obtain a new center position, and taking the union of the geometric dimension of the wall component and the geometric dimension stored in the wall component cell as the new geometric dimension;
If the fixed component is a door and window type component, calculating the distances from the other projection points of the door and window component to its center point, and selecting the cell corresponding to the center point with the smallest distance as the door and window component cell; if no data exists in the door and window component cell, filling the semantic tag, center position, geometric dimension and identifier of the door and window component into that cell; if the door and window component cell already has data, filling the cell of the higher-priority layer in the scene layout diagram according to the preset priority, taking the average of the center position of the door and window component and the center position stored in the cell as the new center position, and taking the union of the geometric dimension of the door and window component and the geometric dimension stored in the cell as the new geometric dimension, the wall where the door and window type component is located being indicated by the identifier.
Specific examples of generating a scene layout according to fig. 3 include:
a) Initializing the layout: the points in the optimized point cloud sets of all the components are projected to the XY plane, and the maximum and minimum coordinate positions (X_min, X_max), (Y_min, Y_max) of the projected points in the X and Y directions are obtained respectively; the coordinates of the four vertexes of the layout are (X_min, Y_min), (X_min, Y_max), (X_max, Y_min), (X_max, Y_max). From the four vertex positions, the coordinate range of each sub-cell in the layout can be calculated.
b) Projecting the center points of all the wall components to the XY plane, calculating the distances from each projection point to the four wall sub-cells respectively, selecting the sub-cell closest to the center point, and filling the component into the sub-cell at the corresponding position of layer 1 of the layout diagram. If the sub-cell already has data, the cell data is processed as follows:
center position: taking an average value of the central position of the current processing component and the central position stored by the cell as a new central position;
geometric dimensions: the geometric dimension (W1, L1, H1) of the currently processed component is merged with the geometric dimension (W2, L2, H2) stored in the cell to give the new geometric dimension, i.e. (max(W1, W2), max(L1, L2), max(H1, H2)).
c) The projection calculation method for ceiling type components is similar to that for wall type components, and the ceiling type component is represented by the 5 sub-cells of layer 2.
d) The projection calculation method for door and window type components is similar to that for wall type components; the layer-1 layout diagram is filled preferentially, and when the sub-cell at the corresponding position of the layer-1 layout diagram already has data, the sub-cell at the corresponding position of the layer-2 layout diagram is filled, the wall where the door and window type component is located being indicated by the identifier.
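Steps a) and b) above can be sketched as follows; the 3×3 grid size, the function names and the row-major cell indexing are illustrative assumptions:

```python
import numpy as np

def layout_cell_ranges(points_xy: np.ndarray, n: int = 3):
    """Step a): bound all projected points and split the XY extent
    into the edges of an n×n cell grid."""
    mins, maxs = points_xy.min(axis=0), points_xy.max(axis=0)
    xs = np.linspace(mins[0], maxs[0], n + 1)
    ys = np.linspace(mins[1], maxs[1], n + 1)
    return xs, ys

def nearest_cell(xs: np.ndarray, ys: np.ndarray, point_xy: np.ndarray):
    """Step b): pick the cell whose center is closest to a component's
    projected center point; returns (row, col) in the grid."""
    cx = (xs[:-1] + xs[1:]) / 2
    cy = (ys[:-1] + ys[1:]) / 2
    centers = np.array([(x, y) for y in cy for x in cx])  # row-major cell centers
    d = np.linalg.norm(centers - point_xy, axis=1)
    return divmod(int(d.argmin()), len(cx))
```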
Based on any of the above embodiments, generating a fixed component entity structure according to a component class corresponding to each fixed component in the scene layout diagram includes:
Step 401, calculating the space coordinates of all vertexes of the fixed component according to the central position and the geometric dimension in each fixed component cell in the scene layout;
Step 402, performing triangular mesh dissection by taking the space coordinates of all vertexes as key points to obtain a triangular mesh model of the fixed component;
In the embodiment of the invention, triangulation can be performed by adopting a Delaunay equal subdivision algorithm.
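The meshing step can be illustrated with a minimal fan triangulation of a convex face, a simple stand-in for a full Delaunay-type subdivision (such as `scipy.spatial.Delaunay`); the function name is an assumption:

```python
def fan_triangulate(vertices):
    """Triangulate a convex polygon (e.g. the rectangular face of a wall
    component) by fanning from vertex 0; returns index triples."""
    return [(0, i, i + 1) for i in range(1, len(vertices) - 1)]

# Four corner vertices of a wall face, as computed from the cell's
# center position and geometric dimension.
wall_face = [(0, 0), (4, 0), (4, 2.5), (0, 2.5)]
triangles = fan_triangulate(wall_face)
```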
Step 403, inputting a component category corresponding to the fixed component into a texture generation neural network model, and obtaining a texture map of the fixed component, wherein the texture generation neural network model is obtained by training based on the component category and the corresponding texture map;
In an embodiment of the invention, a texture generation neural network model comprises a coding module and a multi-layer perceptron;
The coding module is used for coding the component categories to obtain implicit characteristics of each component category;
the multi-layer perceptron is used for performing perception learning on the implicit features of each component category and generating the texture map corresponding to the component category.
In the embodiment of the invention, the texture of the fixed component is automatically generated through the texture generation neural network model, the structure of which is shown in fig. 4. The input component category, such as 'wall', is encoded into implicit features by the encoder, and the texture map of the wall component is then obtained through the texture generation network of a multi-layer perceptron (MLP). Compared with methods based on triangular mesh optimization, generating the texture map through the texture generation neural network model enables the geometric surface of the fixed component to be finer.
Step 404, generating a solid structure of the fixed component according to the triangular mesh model of the fixed component and the texture map of the fixed component.
Based on any of the above embodiments, the preset digital twin object model asset library is constructed according to the semantic category of the auxiliary object, and after obtaining the digital twin object model corresponding to the auxiliary object, the method further includes:
And replacing the auxiliary objects in the scene layout by using the acquired digital twin object models corresponding to the auxiliary objects.
In the embodiment of the invention, the digital twin object models in the preset digital twin object model asset library are refined models; the digital twin object model corresponding to the auxiliary object can be pre-constructed or obtained from the library at a later stage. The auxiliary object in the scene layout is replaced by the acquired digital twin object model corresponding to it, so that the auxiliary object does not need to be reconstructed, and the geometric surface of the reconstructed model of a small-sized auxiliary object can be made finer.
In the embodiment of the present invention, before replacing the auxiliary object in the scene layout by the acquired digital twin object model corresponding to the auxiliary object, the method further includes:
scaling the digital twin object model indexed from the preset digital twin object model asset library according to the axis alignment bounding box size of the accessory object.
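The scaling step can be sketched as a non-uniform scale of the asset's vertices about its bounding-box center, so that its bounding box matches the accessory object's measured one; the function name is an assumption:

```python
import numpy as np

def scale_to_bbox(model_verts: np.ndarray, target_size: np.ndarray) -> np.ndarray:
    """Non-uniformly scale a retrieved asset so its axis-aligned bounding
    box matches the accessory object's bounding-box size."""
    mins, maxs = model_verts.min(axis=0), model_verts.max(axis=0)
    factors = target_size / (maxs - mins)   # per-axis scale factors
    center = (mins + maxs) / 2.0
    return (model_verts - center) * factors + center
```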
According to the digital twinning-based three-dimensional scene display method provided by the embodiment of the invention, the geometric precision quality of each semantic component of the point cloud can be improved, and the problem of low quality of the acquired or calculated three-dimensional point cloud can be solved. According to the scene organization method based on the layout diagram, objects in the scene are divided into two types, namely a fixed component and an accessory object, and the scene space layout is represented by combining the multi-layer layout diagram, so that the space structural representation of the reconstructed target scene is realized, and the problem that the objects are more and more complicated in the scene reconstruction is solved. Combining a scene layout diagram and different object classifications, generating fixed component objects based on mesh division and accessory objects based on asset library replacement, and assembling by means of the scene diagram to realize structural high-quality three-dimensional reconstruction of the scene, so that the problem of poor detail quality of a model in the traditional reconstruction method is solved.
The digital twinning-based three-dimensional scene display system provided by the invention is described below, and the digital twinning-based three-dimensional scene display system described below and the digital twinning-based three-dimensional scene display method described above can be correspondingly referred to each other. Fig. 5 is a schematic structural diagram of a three-dimensional scene display system based on digital twinning according to an embodiment of the present invention, where, as shown in fig. 5, the three-dimensional scene display system based on digital twinning according to an embodiment of the present invention includes:
an obtaining module 501, configured to obtain indoor scene point cloud data, and extract a plurality of semantic component units from the point cloud data;
The calculating module 502 is configured to calculate position information and size information of each semantic component unit, and generate a scene layout according to the position information and the size information of the semantic component unit;
A generating module 503, configured to divide each semantic component unit into corresponding scene component semantic categories, where the scene component semantic categories include fixed components and auxiliary objects, generate a fixed component entity structure according to a component category corresponding to each fixed component in the scene layout, and search a preset object model asset library by using an object category corresponding to the auxiliary object as an index, to obtain an object model corresponding to the auxiliary object;
The assembling module 504 is configured to assemble the fixed component entity structure and the object model corresponding to the auxiliary object in the scene layout diagram to obtain a three-dimensional model of the indoor scene;
the three-dimensional scene display module 505 is configured to display the three-dimensional scene of the three-dimensional model.
According to the embodiment of the invention, the indoor scene point cloud data are acquired, and a plurality of semantic component units are extracted from the point cloud data; the position information and the size information of each semantic component unit are calculated, and a scene layout diagram is generated from them; each semantic component unit is divided into a corresponding scene component semantic category, the scene component semantic categories including fixed components and auxiliary objects; a fixed component entity structure is generated according to the component category corresponding to each fixed component in the scene layout diagram, and a preset object model asset library is searched with the object category corresponding to the auxiliary object as an index to obtain the object model corresponding to the auxiliary object; the fixed component entity structures and the object models corresponding to the auxiliary objects are assembled in the scene layout diagram to obtain a three-dimensional model of the indoor scene; and the three-dimensional model is displayed as a three-dimensional scene. In this way, the scene layout diagram is generated, the specific components are refined according to the scene, and the three-dimensional model is generated from the specific components, which reduces the amount of data processed in the reconstruction process; the fixed components are obtained according to the component categories and the auxiliary objects are obtained from the asset library, which can improve the fineness of the geometric surfaces of the fixed components and the auxiliary objects.
Referring to fig. 6, fig. 6 is a schematic diagram of an embodiment of an electronic device according to the present invention. As shown in fig. 6, an embodiment of the present invention provides an electronic device 600, including a memory 610, a processor 620, and a computer program 611 stored in the memory 610 and executable on the processor 620, wherein the processor 620 executes the computer program 611 to implement the following steps:
Acquiring indoor scene point cloud data, and extracting a plurality of semantic member units from the point cloud data;
Calculating the position information and the size information of each semantic member unit, and generating a scene layout chart according to the position information and the size information of the semantic member units;
Dividing each semantic component unit into corresponding scene component semantic categories, wherein each scene component semantic category comprises fixed components and auxiliary objects, generating a fixed component entity structure according to the component category corresponding to each fixed component in the scene layout, searching a preset digital twin object model asset library by taking the object category corresponding to the auxiliary object as an index, and acquiring a digital twin object model corresponding to the auxiliary object;
assembling the fixed component entity structure and the digital twin object model corresponding to the accessory object in the scene layout diagram to obtain a three-dimensional model of the indoor scene;
And displaying the three-dimensional scene of the three-dimensional model.
In an embodiment, after extracting the plurality of semantic member units from the point cloud data, the method further includes:
Optimizing each semantic component unit specifically comprises the following steps:
Initializing an optimized point set corresponding to each semantic component unit, wherein the optimized point set is used for storing the points contained in the optimized semantic component units;
Randomly selecting three non-collinear points from all points of the semantic component unit, and calculating a semantic component unit plane according to coordinates of the three non-collinear points;
Calculating the distances from other points in the semantic member unit to the plane of the semantic member unit respectively, counting the number of points with the distance smaller than a preset error threshold, if the number of points with the distance smaller than the preset error threshold is greater than a preset number record variable, setting the preset number record variable as the number of points with the current distance smaller than the preset error threshold, and storing the points with the distance smaller than the preset error threshold into a corresponding optimized point set;
And re-selecting three non-collinear points in the semantic component units, calculating a semantic component unit plane, updating a corresponding optimized point set according to the distance from other points to the semantic component unit plane until the preset optimization times are reached, and taking the points in the optimized set as the points of the optimized semantic component units to obtain the optimized semantic component units.
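The optimization procedure above is essentially a RANSAC-style plane fit: repeatedly sample three non-collinear points, fit a plane, and keep the largest inlier set. A minimal sketch, with all names, the error threshold and the fixed random seed being illustrative assumptions:

```python
import numpy as np

def optimize_component(points: np.ndarray, err_thresh: float = 0.02,
                       n_iters: int = 100, seed: int = 0) -> np.ndarray:
    """RANSAC-style optimization of one semantic component unit: keep the
    largest set of points lying within err_thresh of a sampled plane."""
    rng = np.random.default_rng(seed)
    best_count, best_inliers = 0, points
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                          # collinear sample: redraw
            continue
        normal /= norm
        dist = np.abs((points - p0) @ normal)    # point-to-plane distances
        inliers = points[dist < err_thresh]
        if len(inliers) > best_count:            # update the count record variable
            best_count, best_inliers = len(inliers), inliers
    return best_inliers
```

Points farther than the error threshold from the best plane (noise, spurious returns) are dropped, yielding the optimized point set for that component.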
In an embodiment, when the semantic member units are fixed members, the calculating the position information and the size information of each semantic member unit includes:
Respectively calculating the maximum coordinate position and the minimum coordinate position of each optimized fixed component in the three-dimensional coordinate axis direction;
And calculating the position coordinates of the central point of the optimized fixing member and the geometric dimension of the optimized fixing member according to the maximum coordinate position and the minimum coordinate position of each coordinate axis direction.
In an embodiment, when the semantic component units are attached objects, the calculating the location information and the size information of each semantic component unit includes:
calculating the maximum coordinate position and the minimum coordinate position of the axis alignment bounding box of the accessory object in the three-dimensional coordinate axis direction;
and calculating the center point position coordinates of the auxiliary object according to the maximum coordinate position and the minimum coordinate position of the three-dimensional coordinate axis direction, and acquiring the geometric dimension of the auxiliary object according to the dimension of the axis alignment bounding box.
In one embodiment, the scene layout diagram is a hierarchical structure; the number of layers of the scene layout diagram is determined by the spatial structure of the indoor scene and by the types and number of semantic component units included in the indoor scene;
Each layer of the scene layout map corresponds to one or more component categories, and each layer of the scene layout map is expressed as a unit grid, the unit grid comprises a plurality of unit grids, each unit grid comprises a semantic label, a central position, a geometric dimension and an identifier, and the identifier is used for assisting in indicating position information of one or more component categories corresponding to fixed components.
In an embodiment, the component categories include at least one of floor layers, walls, doors, windows, ceilings, beams, and columns.
In one embodiment, generating a scene layout according to the position information and the size information of the fixed member and the subordinate object includes:
projecting points in the fixed component optimization point set corresponding to each component category to an XY plane, and respectively calculating the maximum coordinate position and the minimum coordinate position of the projected points in the X, Y direction;
Calculating vertex coordinates of the fixed component in the scene layout according to the maximum and minimum coordinate positions of the X, Y directions;
Acquiring a coordinate range of a cell corresponding to the fixed member according to the vertex coordinates of the fixed member in the scene layout;
determining a center point of the fixed member according to the coordinate range of the corresponding cell of the fixed member;
If the fixed component is a wall component, calculating the distances from other projection points of the wall component to the center point of the wall component, selecting the cell corresponding to the center point with the smallest distance as the wall component cell, and if no data exists in the wall component cell, filling the semantic label, center position, geometric dimension and identifier of the wall component into the wall component cell; if the wall component cell has data, averaging the center position of the wall component and the center position stored in the wall component cell to obtain the new center position; taking the union of the geometric dimension of the wall component and the geometric dimension stored in the wall component cell as the new geometric dimension;
If the fixed component is a door and window component, calculating the distances from other projection points of the door and window component to its center point, selecting the cell corresponding to the center point with the smallest distance as the door and window component cell, and if no data exists in the door and window component cell, filling the semantic label, center position, geometric dimension and identifier of the door and window component into that cell; if the door and window component cell has data, filling the cell of the higher-priority layer in the scene layout diagram according to the preset priority, and taking the average of the center position of the door and window component and the center position stored in the cell as the new center position; taking the union of the geometric dimension of the door and window component and the geometric dimension stored in the cell as the new geometric dimension, the wall where the door and window component is located being indicated by the identifier.
In an embodiment, generating a fixed component entity structure according to a component class corresponding to each fixed component in the scene layout includes:
Calculating the space coordinates of all vertexes of the fixed component according to the central position and the geometric dimension in each fixed component cell in the scene layout;
performing triangular mesh dissection by taking the space coordinates of all vertexes as key points to obtain a triangular mesh model of the fixed component;
Inputting the component category corresponding to the fixed component into a texture generation neural network model, and obtaining a texture map of the fixed component, wherein the texture generation neural network model is obtained based on the component category and the corresponding texture map training;
and generating a solid structure of the fixing member according to the triangular mesh model of the fixing member and the texture map of the fixing member.
In the foregoing embodiments, the descriptions of the embodiments are focused on, and for those portions of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create a system for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. A three-dimensional scene display method based on digital twinning, characterized by comprising the following steps:
acquiring indoor scene point cloud data, and extracting a plurality of semantic component units from the point cloud data;
calculating the position information and the size information of each semantic component unit, and generating a scene layout diagram according to the position information and the size information of the semantic component units;
dividing each semantic component unit into a corresponding scene component semantic category, wherein the scene component semantic categories comprise fixed components and accessory objects; generating a fixed component entity structure according to the component category corresponding to each fixed component in the scene layout diagram; and searching a preset digital twin object model asset library, using the object category corresponding to each accessory object as an index, to obtain the digital twin object model corresponding to the accessory object;
assembling the fixed component entity structures and the digital twin object models corresponding to the accessory objects in the scene layout diagram to obtain a three-dimensional model of the indoor scene; and
performing three-dimensional scene display of the three-dimensional model.
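The claimed pipeline can be sketched as a driver that wires the stages together; every stage name below is hypothetical, injected as a callable so the sketch stays independent of any concrete segmentation, layout, or rendering implementation:

```python
def display_scene(point_cloud, asset_library, *,
                  extract, layout, classify, build_fixed, lookup_twin,
                  assemble, render):
    """Claim-1 pipeline with the individual stages injected as callables."""
    units = extract(point_cloud)                 # semantic component units
    plan = layout(units)                         # scene layout diagram
    fixed, accessory = classify(units)           # fixed components vs. accessory objects
    structures = [build_fixed(plan, f) for f in fixed]      # entity structures
    twins = [lookup_twin(asset_library, a) for a in accessory]  # asset-library lookup
    model = assemble(plan, structures, twins)    # indoor-scene three-dimensional model
    return render(model)                         # three-dimensional scene display
```

Passing the stages as parameters also makes the ordering constraints of the claim (extraction before layout, layout before assembly) explicit and testable.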
2. The digital twinning-based three-dimensional scene display method according to claim 1, further comprising, after extracting the plurality of semantic component units from the point cloud data:
optimizing each semantic component unit, which specifically comprises:
initializing an optimized point set corresponding to each semantic component unit, the optimized point set being used for storing the points contained in the optimized semantic component unit;
randomly selecting three non-collinear points from the points of the semantic component unit, and calculating a semantic component unit plane from the coordinates of the three non-collinear points;
calculating the distance from each remaining point in the semantic component unit to the semantic component unit plane, and counting the number of points whose distance is smaller than a preset error threshold; if this number is greater than a preset number record variable, setting the number record variable to this number and storing the points whose distance is smaller than the preset error threshold into the corresponding optimized point set; and
re-selecting three non-collinear points in the semantic component unit, recalculating the semantic component unit plane, and updating the corresponding optimized point set according to the distances from the remaining points to the plane, until a preset number of optimization iterations is reached; and taking the points in the optimized point set as the points of the optimized semantic component unit to obtain the optimized semantic component unit.
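The optimization of claim 2 is a RANSAC-style plane fit: repeatedly sample three non-collinear points, fit a plane, and keep the largest inlier set. A minimal sketch (function names and the seeded random generator are illustrative, not from the claim):

```python
import random

def fit_plane(p1, p2, p3):
    # Plane from three points: normal = (p2 - p1) x (p3 - p1), then ax+by+cz+d = 0.
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    a = uy * vz - uz * vy
    b = uz * vx - ux * vz
    c = ux * vy - uy * vx
    d = -(a * p1[0] + b * p1[1] + c * p1[2])
    return a, b, c, d

def ransac_plane(points, error_threshold=0.05, iterations=100, seed=0):
    rng = random.Random(seed)
    best_inliers = []                      # plays the role of the "number record variable"
    for _ in range(iterations):
        a, b, c, d = fit_plane(*rng.sample(points, 3))
        norm = (a * a + b * b + c * c) ** 0.5
        if norm == 0:                      # collinear sample: no plane, re-draw
            continue
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c * p[2] + d) / norm < error_threshold]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers         # optimized point set for this unit
    return best_inliers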
3. The digital twinning-based three-dimensional scene display method according to claim 2, wherein, when the semantic component units are fixed components, calculating the position information and the size information of each semantic component unit comprises:
respectively calculating the maximum and minimum coordinate positions of each optimized fixed component along each three-dimensional coordinate axis; and
calculating the center point position coordinates and the geometric dimensions of the optimized fixed component from the maximum and minimum coordinate positions along each coordinate axis.
4. The digital twinning-based three-dimensional scene display method according to claim 2, wherein, when the semantic component units are accessory objects, calculating the position information and the size information of each semantic component unit comprises:
calculating the maximum and minimum coordinate positions of the axis-aligned bounding box of the accessory object along each three-dimensional coordinate axis; and
calculating the center point position coordinates of the accessory object from these maximum and minimum coordinate positions, and obtaining the geometric dimensions of the accessory object from the dimensions of the axis-aligned bounding box.
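Claims 3 and 4 both reduce to the same axis-aligned bounding-box arithmetic: the center is the midpoint of the per-axis extremes and the size is their difference. A minimal sketch (function name is illustrative):

```python
def aabb_center_and_size(points):
    # Axis-aligned bounding box: per-axis minima and maxima over all points.
    mins = [min(p[i] for p in points) for i in range(3)]
    maxs = [max(p[i] for p in points) for i in range(3)]
    center = [(mins[i] + maxs[i]) / 2 for i in range(3)]   # center point position
    size = [maxs[i] - mins[i] for i in range(3)]           # geometric dimensions
    return center, size
```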
5. The digital twinning-based three-dimensional scene display method according to claim 1, wherein the scene layout diagram has a hierarchical structure, the number of layers of the scene layout diagram being determined by the spatial structure of the indoor scene and by the types and numbers of semantic component units included in the indoor scene; and
each layer of the scene layout diagram corresponds to one or more component categories and is expressed as a grid comprising a plurality of cells, each cell comprising a semantic label, a center position, a geometric dimension, and an identifier, the identifier being used to auxiliarily indicate the position information of the fixed component corresponding to the one or more component categories.
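The layered cell structure of claim 5 maps naturally onto a small record type per cell plus a per-layer collection. The field and class names below are hypothetical, chosen only to mirror the four cell attributes the claim enumerates:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Cell:
    semantic_label: str                     # e.g. "wall", "door"
    center: Tuple[float, float, float]      # center position
    size: Tuple[float, float, float]        # geometric dimension
    identifier: int                         # auxiliary position indicator (e.g. host wall id)

@dataclass
class SceneLayout:
    # One grid of cells per layer; each layer maps to one or more component categories.
    layers: Dict[str, List[Cell]] = field(default_factory=dict)

    def add(self, layer: str, cell: Cell) -> None:
        self.layers.setdefault(layer, []).append(cell)
```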
6. The digital twinning-based three-dimensional scene display method according to claim 5, wherein the component categories include at least one of floors, walls, doors, windows, ceilings, beams, and columns.
7. The digital twinning-based three-dimensional scene display method according to claim 6, wherein generating the scene layout diagram from the position information and the size information of the fixed components and the accessory objects comprises:
projecting the points in the fixed component optimized point set corresponding to each component category onto the XY plane, and respectively calculating the maximum and minimum coordinate positions of the projected points in the X and Y directions;
calculating the vertex coordinates of the fixed component in the scene layout diagram from the maximum and minimum coordinate positions in the X and Y directions;
obtaining the coordinate range of the cell corresponding to the fixed component from the vertex coordinates of the fixed component in the scene layout diagram;
determining the center point of the fixed component from the coordinate range of its corresponding cell;
if the fixed component is a wall component, calculating the distances from the other projected points of the wall component to the center point of the wall component, and selecting the cell corresponding to the center point with the smallest distance as the wall component cell; if the wall component cell contains no data, filling the semantic label, center position, geometric dimension, and identifier of the wall component into the wall component cell; if the wall component cell already contains data, averaging the new center position with the stored center position to obtain a new center position, and taking the union of the new geometric dimension and the stored geometric dimension as a new geometric dimension; and
if the fixed component is a door or window component, calculating the distances from the other projected points of the door or window component to the center point of the door or window component, and selecting the cell corresponding to the center point with the smallest distance as the door or window component cell; if the door or window component cell contains no data, filling the semantic label, center position, geometric dimension, and identifier of the door or window component into the cell; if the door or window component cell already contains data, filling the cell in the layer with the higher priority in the scene layout diagram according to a preset priority, averaging the new center position with the stored center position to obtain a new center position, taking the union of the new geometric dimension and the stored geometric dimension as a new geometric dimension, and indicating the wall on which the door or window component is located by means of the identifier.
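Two pieces of claim 7 are directly mechanical: the XY projection with its per-axis extremes, and the merge rule when a cell already contains data. A minimal sketch, assuming "union of geometric dimensions" means the componentwise maximum of the two extents (an interpretation, not stated in the claim; all names are illustrative):

```python
def project_xy(points):
    # Project 3-D optimized points onto the XY plane and take the planar extent.
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys)), (max(xs), max(ys))

def merge_into_cell(cell, label, center, size, identifier):
    # Claim-7 merge rule: fill an empty cell; otherwise average the centers
    # and take the union (here: componentwise max) of the extents.
    if cell is None:
        return {"label": label, "center": list(center), "size": list(size),
                "id": identifier}
    cell["center"] = [(a + b) / 2 for a, b in zip(cell["center"], center)]
    cell["size"] = [max(a, b) for a, b in zip(cell["size"], size)]
    return cell
```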
8. The digital twinning-based three-dimensional scene display method according to claim 6, wherein generating the fixed component entity structure according to the component category corresponding to each fixed component in the scene layout diagram comprises:
calculating the spatial coordinates of all vertices of the fixed component from the center position and the geometric dimension in each fixed component cell of the scene layout diagram;
performing triangular mesh subdivision with the spatial coordinates of all vertices as key points to obtain a triangular mesh model of the fixed component;
inputting the component category corresponding to the fixed component into a texture generation neural network model to obtain a texture map of the fixed component, the texture generation neural network model being trained on component categories and corresponding texture maps; and
generating the entity structure of the fixed component from the triangular mesh model and the texture map of the fixed component.
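The geometric half of claim 8 amounts to expanding a cell's center and dimensions into the eight corners of an axis-aligned box and triangulating its faces. A minimal sketch (the face index table assumes the vertex ordering produced by `itertools.product`; vertex winding is not addressed here):

```python
import itertools

def box_vertices(center, size):
    # Eight corners of an axis-aligned box from its center position and dimensions.
    cx, cy, cz = center
    hx, hy, hz = (s / 2 for s in size)
    return [(cx + sx * hx, cy + sy * hy, cz + sz * hz)
            for sx, sy, sz in itertools.product((-1, 1), repeat=3)]

def box_triangles():
    # Twelve triangles (two per face) indexing the vertex list above.
    faces = [(0, 1, 3, 2), (4, 6, 7, 5), (0, 4, 5, 1),
             (2, 3, 7, 6), (0, 2, 6, 4), (1, 5, 7, 3)]
    tris = []
    for a, b, c, d in faces:
        tris += [(a, b, c), (a, c, d)]
    return tris
```

The texture step of the claim (a neural network mapping component category to texture map) is a learned model and is not sketched here.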
9. A three-dimensional scene display system based on digital twinning, comprising:
an acquisition module, used for acquiring indoor scene point cloud data and extracting a plurality of semantic component units from the point cloud data;
a calculation module, used for calculating the position information and the size information of each semantic component unit and generating a scene layout diagram according to the position information and the size information of the semantic component units;
a generation module, used for dividing each semantic component unit into a corresponding scene component semantic category, wherein the scene component semantic categories comprise fixed components and accessory objects, generating a fixed component entity structure according to the component category corresponding to each fixed component in the scene layout diagram, and searching a preset digital twin object model asset library, using the object category corresponding to each accessory object as an index, to obtain the digital twin object model corresponding to the accessory object;
an assembly module, used for assembling the fixed component entity structures and the digital twin object models corresponding to the accessory objects in the scene layout diagram to obtain a three-dimensional model of the indoor scene; and
a three-dimensional scene display module, used for performing three-dimensional scene display of the three-dimensional model.
10. A non-transitory computer-readable storage medium, wherein the storage medium stores a computer software program which, when executed by a processor, implements the digital twinning-based three-dimensional scene display method according to any one of claims 1-8.
CN202410160269.XA 2024-02-02 2024-02-02 Three-dimensional scene display method and system based on digital twinning Pending CN117974899A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410160269.XA CN117974899A (en) 2024-02-02 2024-02-02 Three-dimensional scene display method and system based on digital twinning


Publications (1)

Publication Number Publication Date
CN117974899A (en) 2024-05-03

Family

ID=90851336




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination