CN116502303B - BIM model visualization method based on scene hierarchy instance information enhancement - Google Patents

BIM model visualization method based on scene hierarchy instance information enhancement

Info

Publication number
CN116502303B
Authority
CN
China
Prior art keywords
model
data
target
target model
bim
Prior art date
Legal status
Active
Application number
CN202310260322.9A
Other languages
Chinese (zh)
Other versions
CN116502303A (en)
Inventor
贺彪
唐骜巍
蒯希
郭仁忠
林浩嘉
张琛
朱维
Current Assignee
Shenzhen University
Original Assignee
Shenzhen University
Priority date
Filing date
Publication date
Application filed by Shenzhen University
Priority to CN202310260322.9A
Publication of CN116502303A
Application granted
Publication of CN116502303B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/10 Geometric CAD
    • G06F 30/13 Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00 Indexing scheme for image generation or computer graphics
    • G06T 2210/04 Architectural design, interior design

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Structural Engineering (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Civil Engineering (AREA)
  • Health & Medical Sciences (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses a BIM model visualization method based on scene-level instance information enhancement, which comprises the following steps: acquiring target BIM model data and classifying the target BIM model data to obtain a first target model, a second target model and a third target model; extracting instance information from the first target model and the second target model to obtain first 3D data and second 3D data; performing clustering and face-reduction processing on the third target model to obtain third 3D data; and constructing a hybrid spatial index for the first 3D data, the second 3D data and the third 3D data according to the scene-level instance information and the spatial position information, and performing visual rendering to obtain a target 3D model. The method designs a hybrid spatial index strategy based on scene-level instance enhancement, thereby realizing large-scale BIM-CIM three-dimensional scene fusion and efficient visual rendering, and provides an efficient and feasible path for BIM-CIM fusion.

Description

BIM model visualization method based on scene hierarchy instance information enhancement
Technical Field
The invention relates to the technical field of building information models, in particular to a BIM model visualization method based on scene level instance information enhancement and related equipment.
Background
In the prior art, a Building Information Model (BIM) can accurately express building structures, component compositions and their business semantic attributes, and plays an important role in the construction of the spatio-temporal base of a smart city and in the digital management of the whole "planning, construction, management and operation" life cycle of a building. BIM-GIS fusion gives the smart city CIM platform the capability of integrated macroscopic-microscopic expression. However, BIM data are large in volume, diverse in format and rich in component-level semantic description, and at present it remains difficult to dynamically access BIM data through a CIM platform and to perform large-scale BIM-CIM three-dimensional scene fusion rendering.
Accordingly, there is a need for improvement and advancement in the art.
Disclosure of Invention
In view of the above shortcomings of the prior art, the present invention provides a BIM model visualization method based on scene-level instance information enhancement and related equipment, aiming to solve the prior-art problems that dynamically accessing BIM data through a CIM platform and performing large-scale BIM-CIM three-dimensional scene fusion rendering are difficult.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
in a first aspect of the present invention, there is provided a method for visualizing a BIM model based on scene-level instance information enhancement, the method comprising:
Acquiring target BIM model data, and classifying the target BIM model data to obtain a first target model, a second target model and a third target model, wherein the first target model is an explicit instantiation model, the second target model is an implicit instantiation model, and the third target model is a non-instantiation model;
extracting example information of the first target model and the second target model to obtain first 3D data and second 3D data;
performing clustering and face-reduction processing on the third target model to obtain third 3D data;
and constructing a hybrid spatial index for the first 3D data, the second 3D data and the third 3D data according to scene level instance information and spatial position information, and performing visual rendering to obtain a target 3D model.
The BIM model visualization method based on scene level instance information enhancement, wherein the classifying the BIM target model data to obtain a first target model, a second target model and a third target model, comprises the following steps:
extracting a model with geometric multiplexing semantic information from the target BIM model data to obtain the first target model;
Performing geometric semantic screening on the model without geometric multiplexing semantic information in the target BIM model data, and extracting an implicit instantiation model to obtain the second target model;
the model other than the first target model and the second target model in the target model data is the third target model.
The method for visualizing the BIM model based on scene-level instance information enhancement, wherein the extracting the instance information of the first target model and the second target model to obtain first 3D data and second 3D data comprises the following steps:
directly acquiring geometric multiplexing information in the first target model, and extracting instance information to obtain the first 3D data;
and extracting instantiation enhancement information of the second target model, obtaining a space conversion matrix between the standard model corresponding to the second target model and the target model, and extracting instance information according to the standard model corresponding to the second target model and the space conversion matrix to obtain the second 3D data.
The method for visualizing the BIM model based on scene-level instance information enhancement, wherein the obtaining the spatial transformation matrix between the standard model corresponding to the second target model and the target model comprises the following steps:
And acquiring an instantiation model rotation space transformation matrix, an instantiation translation space transformation matrix and an instantiation model scaling space transformation matrix of the second target model.
The method for visualizing the BIM model based on scene-level instance information enhancement, wherein the performing of clustering and face-reduction processing on the third target model to obtain third 3D data comprises the following steps:
performing bottom-up spatial clustering on the third target model based on an R-tree, selecting clustering objects according to the distance between the centroids of the spatial bounding boxes of the nodes, taking the average number of triangle faces of the bounding boxes in the current scene as the clustering threshold, and iterating the clustering continuously until all nodes are merged into a single node, so as to obtain the third 3D data, wherein face-reduction processing is performed in each clustering iteration.
The BIM model visualization method based on scene-level instance information enhancement, wherein the constructing a hybrid spatial index for the first 3D data, the second 3D data and the third 3D data according to scene-level instance information and spatial position information, and performing visualization rendering comprises the following steps:
clustering the first 3D data and the second 3D data by using a scene hierarchy instance space index, and then carrying out instance rendering by taking clusters as units;
And adopting a Lod-based scheduling strategy to carry out visual rendering on the third 3D model.
The BIM model visualization method based on scene level instance information enhancement, wherein the clustering of the first 3D data and the second 3D data by using the scene level instance spatial index and then the instantiation rendering by taking the cluster as a unit comprises the following steps:
performing space division on the first 3D data and the second 3D data by using a KD tree, and clustering the instances using the same geometric grid to obtain a plurality of instance clusters;
performing visibility elimination calculation on each example cluster to obtain a plurality of target clusters;
and respectively carrying out instantiation rendering on each target cluster.
In a second aspect of the present invention, there is provided a BIM model visualization apparatus based on scene-level instance information enhancement, comprising:
the classification module is used for acquiring target BIM model data, classifying the target BIM model data to obtain a first target model, a second target model and a third target model, wherein the first target model is an explicit instantiation model, the second target model is an implicit instantiation model, and the third target model is a non-instantiation model;
The 3D data acquisition module is used for extracting instance information of the first target model and the second target model to obtain first 3D data and second 3D data;
the 3D data acquisition module is further used for clustering and face reduction processing of the third target model to obtain third 3D data;
and the rendering module is used for constructing a hybrid spatial index for the first 3D data, the second 3D data and the third 3D data according to scene level instance information and spatial position information, and performing visual rendering to obtain a target 3D model.
In a third aspect of the present invention, a terminal is provided, the terminal comprising a processor, a computer readable storage medium communicatively coupled to the processor, the computer readable storage medium adapted to store a plurality of instructions, the processor adapted to invoke the instructions in the computer readable storage medium to perform the steps of implementing the scene-level instance information-based enhanced BIM model visualization method according to any of the above.
In a fourth aspect of the present invention, there is provided a computer readable storage medium storing one or more programs executable by one or more processors to implement the steps of the method for visualizing a BIM model based on scene level instance information enhancement as set forth in any one of the above.
Compared with the prior art, in the BIM model visualization method based on scene-level instance information enhancement, target BIM model data are acquired and classified according to their instantiation semantic information to obtain a first target model, a second target model and a third target model, wherein the first target model is an explicit instantiation model, the second target model is an implicit instantiation model, and the third target model is a non-instantiation model; instance information is then extracted from the first target model and the second target model to obtain first 3D data and second 3D data; meanwhile, clustering and face-reduction processing are performed on the third target model to obtain third 3D data; finally, a hybrid spatial index is constructed for the first 3D data, the second 3D data and the third 3D data according to scene-level instance information and spatial position information, and visual rendering is performed to obtain the target 3D model. The BIM model visualization method based on scene-level instance information enhancement designs a set of efficient visualization schemes based on hierarchical-instance-enhanced spatial indexes, realizes large-scale BIM-CIM three-dimensional scene fusion rendering, and provides an efficient and feasible path for BIM-CIM fusion.
Drawings
FIG. 1 is a flow chart of an embodiment of a BIM model visualization method based on scene level instance information enhancement provided by the present invention;
FIG. 2 is a process flow diagram of an embodiment of a BIM model visualization method based on scene level instance information enhancement provided by the present invention;
FIG. 3 is a transition GlTF flow chart diagram I of an embodiment of a BIM model visualization method based on scene-level instance information enhancement provided by the present invention;
FIG. 4 is a transition GlTF flow chart II of an embodiment of the BIM model visualization method based on scene level instance information enhancement provided by the present invention;
FIG. 5 is a transition GlTF flowchart of an embodiment of a BIM model visualization device based on scene-level instance information enhancement provided by the present invention;
FIG. 6 is an R-tree segmentation diagram of an embodiment of a BIM model visualization device based on scene level instance information enhancement provided by the present invention;
FIG. 7 is a flow chart of non-instantiated model processing for an embodiment of the BIM model visualization device based on scene level instance information enhancement provided by the present invention;
FIG. 8 is an exemplary render culling diagram I of an embodiment of a BIM model visualization device based on scene level instance information enhancement provided by the present invention;
FIG. 9 is an exemplary rendering culling diagram II of an embodiment of a BIM model visualization device based on scene level instance information enhancement provided by the present invention;
FIG. 10 is a schematic diagram of an embodiment of a BIM model visualization device based on scene level instance information enhancement provided by the present invention;
fig. 11 is a schematic diagram of an embodiment of a terminal provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and effects of the present invention clearer and more specific, the present invention will be described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The BIM model visualization method based on scene-level instance information enhancement provided by the present invention can be applied to terminals with computing capability. A terminal can obtain the target 3D model by executing the BIM model visualization method based on scene-level instance information enhancement provided by the present invention, and the terminal can be, but is not limited to, various computers, mobile terminals, smart home appliances, wearable devices and the like.
Example 1
In recent years, BIM-GIS integrated visualization research has received considerable attention, mainly covering two aspects: data organization and scheduling optimization. Existing research mostly focuses on the visualization of a single BIM model or a few BIM models based on a local index structure. Although some studies consider partial BIM semantic information, the data characteristics of BIM model data themselves are not fully considered: the total number of components of a complete BIM system of a physical facility is usually as high as 300,000 to 500,000, and building an index independently for each component under such a huge data volume leads to index redundancy and complexity, while the semantic information of the BIM model is still not fully exploited. Most existing BIM visualization data organizations still follow the spatial three-dimensional data organization ideas of traditional GIS. GIS three-dimensional spatial data are often widely distributed, multi-source and heterogeneous, whereas BIM model data are often densely distributed in space and highly reusable at the model level, so traditional GIS spatial data organization methods have difficulty efficiently compressing and organizing BIM model data with massive spatial aggregation.
The data flow of BIM-GIS integration is always BIM to GIS to visualization engine, and the construction of the 3D GIS data index belongs to the middle part of the whole BIM-GIS integration, so the index construction has to take into account both the spatial distribution characteristics of BIM and GIS data and the pressure on the terminal when rendering large-scale spatial data, thereby serving as the link between the two ends. The IFC civil engineering community and the computer graphics community have similar ideas for processing large-scale building data, namely adopting instantiation to optimize large-scale repeated or similar three-dimensional models. Existing algorithms identify identical BIM models based on the ifcXML data structure and geometric structure, and remove duplicated information through iterative reference mapping while maintaining the ifcXML data structure, which belongs to a conservative instantiated-storage idea within IFC storage; instanced rendering, in turn, is the recognized solution of the computer graphics community when facing large-scale scenes of identical or similar three-dimensional models. Therefore, the construction of the 3D GIS index needs to consider both the instantiable characteristics of the BIM model and the convenience of instanced scheduling at rendering time on the terminal. Starting from this, the present embodiment proposes a spatial hybrid index based on instantiation semantics for spatial data organization.
In terms of visualization scheduling optimization, the prior art generally follows two directions: reducing the number of triangle patches in the viewport and reducing the number of rendering draw calls. Traditional visualized loading and scheduling optimization targets the characteristics of traditional GIS three-dimensional data, which are widely distributed in space and geometrically complex, and therefore generally focuses on the number of triangle patches in the view; the adopted three-dimensional spatial index structures are thus designed to better match potentially visible set (PVS) computation and to cull triangle patches outside the viewport during the visualization stage. However, BIM model data have the characteristics that a single geometric model contains few triangle patches while the total number of components is large, and a scene may contain many models that share the same vertex data but carry different world-coordinate transformations, such as round pipes, square pipes, beams and doors of the same type; the resulting bottleneck is therefore excessive CPU draw-call scheduling pressure rather than overly fine individual models. Consequently, the spatial index design for large-scale BIM model data must consider not only the convenience of potentially visible set (PVS) computation but also the optimization of the draw-call scheduling bottleneck. Therefore, this embodiment provides a hierarchical instantiation fusion rendering scheme, in which instantiated models and non-instantiated models are optimized according to their respective characteristics so as to achieve smooth real-time visual rendering.
Specifically, as shown in fig. 1, one embodiment of the method for visualizing a BIM model based on scene-level instance information enhancement includes the steps of:
s100, acquiring target BIM model data, and classifying the target BIM model data to obtain a first target model, a second target model and a third target model, wherein the first target model is an explicit instantiation model, the second target model is an implicit instantiation model, and the third target model is a non-instantiation model.
Specifically, referring to fig. 2, in this embodiment, before acquiring the target model data, the method further includes:
and acquiring an original BIM model, and converting the information of the original BIM model by using an information model IFC constructed in a cooperative mode to obtain target model data.
After the target BIM model data is obtained, it is parsed from an IFC file, a Revit file or the like. In this embodiment, the IFC file is used: the IFC file is parsed to obtain the geometric semantics and attribute semantics of the original BIM model, and the BIM models that can be instantiated are screened out from the parsed semantic information.
The classifying the target model data to obtain a first target model, a second target model and a third target model includes:
S110, extracting a model with geometric multiplexing semantic information from the target BIM model data to obtain the first target model;
s120, performing geometric semantic screening on the model without geometric multiplexing semantic information in the target BIM model data, and extracting an implicit instantiation model to obtain the second target model;
s130, the models except the first target model and the second target model in the target BIM model data are the third target model.
In particular, the term "instantiation" has different meanings in different parts of the graphics ecosystem. In authoring tools and data formats, "instantiation" refers to reusing any resource for efficiency, whether or not the resource corresponds to a GPU draw call; this technique is referred to as data instantiation. In computer graphics, instancing generally refers to a method in which the geometric representations of two or more models are expressed by multiplying one primitive geometric model by one or more spatial transformation matrices; only the primitive geometric model and the transformation matrices that map it to the other models are stored, so that multiple instance models can be rendered. This is called instanced rendering.
In the present embodiment, the IFC file is taken as an example; similar processing ideas apply to files such as Rvt and 3D Max. The geometric representations in an IFC file mainly fall into three categories: Tessellation (indexed triangle mesh model), SweptSolid (extrusion parametric model) and MappedRepresentation. The indexed triangle mesh (Tessellation) representation is a classical model representation in computer graphics: it maintains a vertex table (vertex buffer) and a triangle table (index buffer), where the vertex table records vertex coordinates and the triangle table records the vertex indices that form each triangle, thereby expressing the three-dimensional model as triangles. The SweptSolid (extruded solid) representation is common in building modeling software, because many square and round pipes are typically modeled by extrusion and are therefore recorded as such in the IFC file; concretely, taking a cuboid pipe and a cylindrical pipe as examples, a bottom curve (curve points), an extrusion direction and an extrusion depth are given to express the three-dimensional model. The MappedRepresentation element can carry a "mapped geometry" representation that reuses the product-type geometry defined in the corresponding product type (the concept object type); the reused shape representations are exactly the two main types above, Tessellation and SweptSolid, i.e., the types whose geometric models are multiplexed.
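For illustration, the three geometric representation categories can be pictured with the following minimal Python data structures. This is a sketch only: the class and field names are chosen for readability and are not the actual IFC entity attributes.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Illustrative stand-ins for the three IFC geometry categories described above;
# the names below are assumptions for readability, not IFC schema attributes.

@dataclass
class TessellatedMesh:
    """Indexed triangle mesh: a vertex table plus a triangle (index) table."""
    vertices: List[Tuple[float, float, float]]   # vertex buffer: xyz coordinates
    triangles: List[Tuple[int, int, int]]        # index buffer: vertex indices per triangle

@dataclass
class SweptSolid:
    """Extruded solid: a closed bottom curve swept along a direction by a depth."""
    base_curve: List[Tuple[float, float]]        # 2D points of the closed bottom profile
    direction: Tuple[float, float, float]        # extrusion direction
    depth: float                                 # extrusion depth

@dataclass
class MappedInstance:
    """Reuse of a shared geometry (by id) together with a per-instance transform."""
    source_mesh_id: str                          # id of the reused geometric mesh
    transform: Tuple[Tuple[float, ...], ...]     # 4x4 spatial transformation matrix

# A unit square as an indexed triangle mesh: 4 vertices, 2 triangles.
square = TessellatedMesh(
    vertices=[(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
    triangles=[(0, 1, 2), (0, 2, 3)],
)
```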
In this embodiment, models with geometric multiplexing semantic information in the target BIM model data are extracted to obtain the first target model. Specifically, by parsing the IFC semantics of the target BIM model data, the geometric multiplexing information of BIM models that carry geometric multiplexing semantic information is obtained directly, so as to obtain the first target model. The instantiation information of the first target model is easy to acquire, so it is an explicit instantiation model.
Geometric semantic screening is performed on the models without geometric multiplexing semantic information in the target BIM model data, and the implicit instantiation models are extracted to obtain the second target model. That is, for models without multiplexing information, the BIM models suitable for instantiated storage need to be identified by geometric semantics to obtain the second target model. The second target model is obtained through geometric semantic analysis of the model. For example, a model of the SweptSolid (extruded solid) type in IFC is formed by connecting the bottom points into a curve and then extruding it, and some regular extrusions are special: for a cylindrical model or a cuboid model, a cylinder at any spatial position with any radius and length can be obtained from a standard cylinder of unit diameter and height through rotation, translation and scaling. Therefore, instantiation enhancement information can be extracted according to the characteristics of the SweptSolid type. That is, the second target model needs to be obtained through geometric semantic screening, so it is an implicit instantiation model.
And the model which does not belong to the explicit instantiation model or the implicit instantiation model is the non-instantiation model, and in this embodiment, the filtered non-instantiation model is the third target model.
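A minimal sketch of this three-way classification step is given below, assuming each parsed component has already been reduced to a record carrying its representation type, its base profile kind and a reuse flag; the field names and type strings are simplified placeholders rather than the exact IFC attribute names.

```python
from typing import Optional

def classify_component(rep_type: str, profile_type: Optional[str], has_mapped_item: bool) -> str:
    """Classify one parsed BIM component into one of the three target-model groups.

    rep_type        -- geometric representation type parsed from the IFC file,
                       e.g. "MappedRepresentation", "SweptSolid" or "Tessellation"
    profile_type    -- for SweptSolid geometry, the kind of base profile
                       (e.g. "circle", "rectangle"); otherwise None
    has_mapped_item -- True if the component explicitly reuses a shared geometry
    """
    if rep_type == "MappedRepresentation" or has_mapped_item:
        # geometric reuse is recorded explicitly -> explicit instantiation model
        return "first_target_model"
    if rep_type == "SweptSolid" and profile_type in ("circle", "rectangle"):
        # regular extrusions (cylinders, cuboids) can be derived from a unit
        # standard model by rotation/translation/scaling -> implicit instantiation
        return "second_target_model"
    # everything else is treated as a non-instantiated model
    return "third_target_model"

# Example: a round pipe modelled as a circular extrusion goes to the second group.
assert classify_component("SweptSolid", "circle", False) == "second_target_model"
```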
And S200, extracting example information of the first target model and the second target model to obtain first 3D data and second 3D data.
Semantic information of the first target model and the second target model is extracted and stored in instantiated form in the glTF data format to obtain the first 3D data and the second 3D data.
glTF (GL Transmission Format, also known as Graphics Language Transmission Format) is a standard file format for three-dimensional scenes and models.
In this embodiment, taking an IFC file as an example, in order to optimize a storage space problem of converting a BIM model recorded in an IFC file format into a GLTF format, according to an example classification result, the first target model and the second target model are stored in an "instantiated" form as GLTF data formats, so as to obtain the first 3D data and the second 3D data.
glTF provides the EXT_mesh_gpu_instancing extension to store instanced models. The instancing extension is added to any node that carries a mesh (mesh model) to define the instancing behavior of that mesh: the spatial transformation information of the multiple BIM models that use the mesh is filled into the extension's attribute information such as "TRANSLATION", "ROTATION" and "SCALE". Through instanced storage, multiple BIM models using the same geometric mesh can be stored as one glTF model; the concrete flow is shown in FIG. 3.
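The node-level structure of the EXT_mesh_gpu_instancing extension can be sketched as follows. The JSON skeleton follows the published extension, but the mesh and accessor indices are placeholders, and the fragment is not a complete, valid glTF asset.

```python
import json

# Skeleton of a glTF node that renders one shared mesh many times via
# EXT_mesh_gpu_instancing. TRANSLATION / ROTATION / SCALE point to accessors
# holding one value per instance; the indices below are placeholders.
instanced_node = {
    "mesh": 0,  # index of the shared geometric mesh reused by all instances
    "extensions": {
        "EXT_mesh_gpu_instancing": {
            "attributes": {
                "TRANSLATION": 1,  # accessor of per-instance translations (VEC3)
                "ROTATION": 2,     # accessor of per-instance quaternion rotations (VEC4)
                "SCALE": 3,        # accessor of per-instance scales (VEC3)
            }
        }
    },
}

gltf_fragment = {
    "extensionsUsed": ["EXT_mesh_gpu_instancing"],
    "nodes": [instanced_node],
}
print(json.dumps(gltf_fragment, indent=2))
```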
Specifically, the extracting the instance information of the first target model and the second target model to obtain first 3D data and second 3D data includes:
s210, directly acquiring geometric multiplexing information in the first target model, and extracting instance information to obtain the first 3D data.
For the first target model, the IFC file has already recorded its type multiplexing information, which only needs to be accessed directly. Specifically, the geometric and attribute information parsed from the IFC file is read, the multiplexing semantic information of all MappedRepresentation-type components is obtained, the reusable geometric mesh and the BIM model IDs of that geometry are obtained through the MappedRepresentation list of these components, the spatial transformation matrix of the first target model is then obtained through the BIM model ID, and the result is converted into the glTF format for storage to obtain the first 3D data; see FIG. 4 for details.
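A sketch of this grouping step follows, assuming the IFC parse has already been flattened into per-component records of (BIM model id, reused mesh id, 4x4 world transform); the record layout is an assumption made for illustration.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

Matrix4 = Tuple[Tuple[float, ...], ...]  # 4x4 spatial transformation matrix

def group_explicit_instances(
    components: List[Tuple[str, str, Matrix4]],
) -> Dict[str, List[Tuple[str, Matrix4]]]:
    """Group explicit-instantiation components by the geometric mesh they reuse.

    components -- (bim_model_id, reused_mesh_id, world_transform) records taken
                  from the MappedRepresentation entries of the IFC parse
    returns    -- mesh_id -> list of (bim_model_id, transform); each group maps
                  onto one instanced glTF node as in the sketch above
    """
    groups: Dict[str, List[Tuple[str, Matrix4]]] = defaultdict(list)
    for model_id, mesh_id, transform in components:
        groups[mesh_id].append((model_id, transform))
    return dict(groups)
```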
S220, extracting instantiation enhancement information of the second target model, obtaining a space conversion matrix between the standard model corresponding to the second target model and the target model, and extracting instance information according to the standard model corresponding to the second target model and the space conversion matrix to obtain the second 3D data.
Specifically, for the second target model, consider the models belonging to the instance-enhancement part, such as the cylindrical and cuboid models of the IFC SweptSolid type: a cylinder at any spatial position with any radius and length can be obtained from a standard cylinder of unit diameter and height through rotation, translation and scaling. Referring to FIG. 5, the geometric semantic information of BIM models that belong to the instantiation models and are of the SweptSolid type needs to be extracted and stored as a glTF structure in instantiated form. Specifically, instantiation enhancement information is extracted from the second target model, the spatial transformation matrix between the standard model corresponding to the second target model and the target model is obtained, and instance information is then extracted according to the standard model corresponding to the second target model and the spatial transformation matrix to obtain the second 3D data.
The obtaining the spatial transformation matrix between the standard model and the target model corresponding to the second target model includes:
and acquiring an instantiation model rotation space transformation matrix, an instantiation translation space transformation matrix and an instantiation model scaling space transformation matrix of the second target model.
Specifically, in this embodiment, the SweptSolid (extruded solid) type in IFC is taken as an example. A SweptSolid model is formed by connecting the bottom points into a curve and then extruding it, and some regular extrusions are special: for example, a cylinder at any spatial position with any radius and length can be obtained from a standard cylinder of unit diameter and height through rotation, translation and scaling, and the same holds for a cuboid. Therefore, instantiation enhancement information can be extracted according to the characteristics of the SweptSolid type.
Specifically, the cylindrical and cuboid extrusion models of the SweptSolid type are first identified through the model semantic information, and then the spatial transformation matrix from the instance model to the standard model is computed. The instantiation model rotation spatial transformation matrix R(θ_x, θ_y, θ_z) is computed according to the first formula, where θ_x, θ_y and θ_z are the rotation angles of the second target model, parsed from the IFC file, around the x, y and z axes, respectively.
The first formula is:
R(θ_x, θ_y, θ_z) = R_z(θ_z) R_y(θ_y) R_x(θ_x)
where R_x, R_y and R_z are the elementary rotation matrices about the x, y and z axes, e.g. R_x(θ_x) = [[1, 0, 0], [0, cos θ_x, -sin θ_x], [0, sin θ_x, cos θ_x]].
Further, the instantiation translation spatial transformation matrix T(t_x, t_y, t_z) is computed according to the second formula, where t_x, t_y and t_z are the translation distances of the second target model along the X, Y and Z axes, respectively.
The second formula is:
T(t_x, t_y, t_z) = [[1, 0, 0, t_x], [0, 1, 0, t_y], [0, 0, 1, t_z], [0, 0, 0, 1]]
Further, the instantiation model scaling spatial transformation matrix S(s_x, s_y, s_z) is computed according to the third formula. From the geometric semantic parsing result of the IFC file, let the radius and height of the instantiated cylindrical-extrusion BIM component be r and h, and let the radius and height of the standard reusable cylinder model be R and H; the scaling parameters are then computed according to the fourth formula. The scaling parameters of the cuboid-extrusion instantiation model are computed according to the fifth formula: the geometric parsing result of the IFC file gives the X-axis and Y-axis lengths of the closed rectangle at the bottom of the cuboid extrusion together with the extrusion depth, so its length, width and height are x, y and depth, while the length, width and height of the standard reusable cuboid model are X, Y and Z, respectively. For an explicit instantiation model that carries geometric reference relationships, the scaling parameters s_x, s_y and s_z can be obtained directly from the IFC geometric semantic parsing.
Wherein the third formula is:
S(s_x, s_y, s_z) = diag(s_x, s_y, s_z)
The fourth formula is:
s_x = s_y = r / R,  s_z = h / H
The fifth formula is:
s_x = x / X,  s_y = y / Y,  s_z = depth / Z
In this embodiment, when the second target model is instantiated, only one geometric model is recorded and mapped to different spatial positions through multiple spatial transformation matrices, so that one geometric shape expresses multiple models. The Transform (the spatial matrix of the coordinate transformation) is the result of the rotation, translation and scaling described above.
Referring to fig. 1, the method for visualizing a BIM model based on scene level instance information enhancement further comprises the steps of:
s300, clustering and face reduction processing are conducted on the third target model, and third 3D data are obtained.
The performing of clustering and face-reduction processing on the third target model to obtain third 3D data includes:
performing bottom-up spatial clustering on the third target model based on an R-tree, selecting clustering objects according to the distance between the centroids of the spatial bounding boxes of the nodes, taking the average number of triangle faces of the bounding boxes in the current scene as the clustering threshold, and iterating the clustering continuously until all nodes are merged into a single node, so as to obtain the third 3D data, wherein face-reduction processing is performed in each clustering iteration.
The third target model is a non-instantiated model, and such complex models are generally large in volume. If they are converted directly to the glTF format, two problems arise. First, the number of triangle faces of the non-instantiated models is large, which consumes a large amount of resources both in storage and in the rendering stage. Second, because the number of non-instantiated models is large, storing each one as a separate glTF data structure causes excessive draw calls in the rendering stage, which in turn leads to excessive CPU draw-scheduling pressure, frame stuttering and similar problems. Therefore, in this embodiment the third target model is optimized with respect to these two problems.
Specifically, referring to FIG. 6, an R-tree construction method is adopted to spatially cluster the non-instantiated models. The R-tree encloses spatial objects with minimum bounding rectangles (MBRs) whose sides are parallel to the coordinate axes. Using the R-tree, the non-instantiated models, i.e., the third target model, are spatially clustered bottom-up: clustering objects are selected according to the distance between the centroids of the spatial bounding boxes of the nodes, the average number of triangle faces of the bounding boxes in the current scene is used as the clustering threshold, and iterative clustering is carried out continuously until all nodes are merged into a single node, which is stored as a glTF structure to obtain the third 3D data, wherein face-reduction processing is performed in each clustering iteration. Specifically, in the processing of the third target model, as shown in FIG. 7, the distances between nodes are computed first, with each model (data point) initially taken as a separate node; similar nodes are then clustered according to the distance function to obtain a number of new nodes, and the step of computing the distances between nodes is repeated until all nodes are merged into one node, so as to obtain the third 3D data.
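A simplified sketch of the bottom-up clustering loop follows: greedy merging of the closest bounding-box centroids, with the scene's average triangle count used here as the trigger for the simplification step. Both the role of the threshold and the reduction ratio are assumptions for illustration, and the face-reduction itself is a placeholder.

```python
import numpy as np
from dataclasses import dataclass
from typing import List

@dataclass
class Node:
    centroid: np.ndarray   # centroid of the node's bounding box
    tri_count: int         # number of triangle faces held by the node
    members: List[int]     # ids of the original non-instantiated models

def simplify(node: Node) -> Node:
    """Placeholder for the per-iteration face-reduction (mesh simplification) step."""
    node.tri_count = int(node.tri_count * 0.7)  # assumed reduction ratio, illustrative only
    return node

def cluster_bottom_up(nodes: List[Node]) -> Node:
    """Greedily merge the two nodes with the closest bounding-box centroids
    until a single root remains; assumes a non-empty input list."""
    avg_tris = sum(n.tri_count for n in nodes) / len(nodes)  # clustering threshold
    while len(nodes) > 1:
        # find the pair of nodes with the closest centroids
        best, best_d = None, float("inf")
        for i in range(len(nodes)):
            for j in range(i + 1, len(nodes)):
                d = float(np.linalg.norm(nodes[i].centroid - nodes[j].centroid))
                if d < best_d:
                    best, best_d = (i, j), d
        i, j = best
        a, b = nodes[i], nodes[j]
        merged = Node(
            centroid=(a.centroid * a.tri_count + b.centroid * b.tri_count)
            / (a.tri_count + b.tri_count),
            tri_count=a.tri_count + b.tri_count,
            members=a.members + b.members,
        )
        if merged.tri_count > avg_tris:
            merged = simplify(merged)  # face reduction applied in this iteration
        nodes = [n for k, n in enumerate(nodes) if k not in (i, j)] + [merged]
    return nodes[0]
```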
S400, constructing a hybrid spatial index for the first 3D data, the second 3D data and the third 3D data according to scene level instance information and spatial position information, and performing visual rendering to obtain a target 3D model.
Specifically, GPU instanced rendering is a draw-call optimization method that renders multiple copies of a geometric mesh sharing the same geometric information within a single draw call. Each copy of the geometric mesh is called an instance, which is very useful for drawing things that appear many times in a scene. Submitting triangles to the GPU for rendering in Direct3D is a relatively slow operation; Wloka (2003) showed that a 1 GHz CPU could only render about 10,000 to 40,000 batches per second in Direct3D. On more modern CPUs, this number rises to between 30,000 and 120,000 batches per second (approximately 1,000 to 4,000 batches per frame at 30 frames per second), whereas the total number of components of a whole set of BIM model data is as high as hundreds of thousands, which can easily exceed the draw-call budget of modern CPUs; it is therefore more feasible to use an instanced rendering scheme for the visualization loading of large-scale BIM scenes. In this embodiment, all BIM models are organized in the form of a hybrid index, a KD-tree for the instantiated models and an R-tree for the non-instantiated models, so as to render the original BIM models more efficiently.
The step of constructing a hybrid spatial index for the first 3D data, the second 3D data and the third 3D data according to scene level instance information and spatial position information, and performing visual rendering, including:
s410, clustering the first 3D data and the second 3D data by using a scene level instance space index, and then carrying out instance rendering by taking clusters as a unit.
The performing instantiation rendering on the first 3D data and the second 3D data by using a scene hierarchy instance spatial index in a cluster unit includes:
performing space division on the first 3D data and the second 3D data by using a KD tree, and clustering the instances using the same geometric grid to obtain a plurality of instance clusters;
performing visibility elimination calculation on each example cluster to obtain a plurality of target clusters;
and respectively carrying out instantiation rendering on each target cluster.
Although the instanced rendering scheme controls the number of draw calls, so that a large number of identical geometric meshes can be drawn in one draw call, all the identical geometric meshes are bound to that one draw call, which makes model visibility culling extremely inefficient for this rendering mode. Traditional visibility culling algorithms include view frustum culling, back-face culling, occlusion culling and others, which remove the invisible parts of a scene as much as possible, reduce the total number of triangle faces and thereby relieve the rendering pressure on the GPU. However, instanced rendering submits all identical geometric meshes in one draw call, so as long as the bounding box of one geometric mesh is visible from the viewpoint, as shown in FIG. 8, all instances using that geometric mesh will be rendered.
Moreover, although instanced rendering reduces the number of draw calls, the number of instances of a geometric mesh in each draw call can be too large, and the sudden increase in the number of triangle patches also puts the GPU under performance pressure. Therefore, in this embodiment, a KD-tree is adopted to spatially divide the first 3D data and the second 3D data, the instances using the same geometric mesh are clustered to obtain a plurality of instance clusters, visibility culling computation is then performed on each instance cluster to obtain a plurality of target clusters, and finally instanced rendering is performed on each target cluster.
Specifically, referring to FIG. 9, spatial division is performed with a KD-tree and the instances of the same geometric mesh are clustered, so that what was originally treated as a single whole in visibility culling is divided into multiple clusters for the culling computation. After KD-tree clustering, an overall bounding box is built for each cluster of instances and used for the visibility culling computation; the culling process is the same as that of a common culling algorithm, and the visibility culling effect is shown in FIG. 9. After the visibility culling computation is finished, each cluster independently issues a draw call to render the instances within the cluster.
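A sketch of this per-cluster scheme is shown below: instances of one mesh are partitioned KD-tree style by median splits, each cluster's bounding box is tested for visibility, and one instanced draw call is issued per visible cluster. The axis-aligned view volume and the maximum cluster size are simplifying assumptions; a real renderer would test against the view frustum planes and submit GPU draw calls instead of printing.

```python
import numpy as np
from typing import List

def kd_partition(positions: np.ndarray, ids: np.ndarray, max_size: int, depth: int = 0) -> List[np.ndarray]:
    """Median-split the instance positions (KD-tree style) into clusters of at most max_size."""
    if len(ids) <= max_size:
        return [ids]
    axis = depth % 3
    order = np.argsort(positions[ids, axis])
    half = len(ids) // 2
    left, right = ids[order[:half]], ids[order[half:]]
    return (kd_partition(positions, left, max_size, depth + 1)
            + kd_partition(positions, right, max_size, depth + 1))

def visible(aabb_min: np.ndarray, aabb_max: np.ndarray,
            view_min: np.ndarray, view_max: np.ndarray) -> bool:
    """Stand-in visibility test: AABB overlap with an axis-aligned view volume."""
    return bool(np.all(aabb_max >= view_min) and np.all(aabb_min <= view_max))

def draw_instanced(mesh_id: str, instance_ids: np.ndarray) -> None:
    """Placeholder for one instanced draw call covering a single visible cluster."""
    print(f"draw {mesh_id}: {len(instance_ids)} instances in one call")

def render_mesh_instances(mesh_id: str, positions: np.ndarray,
                          view_min: np.ndarray, view_max: np.ndarray,
                          max_cluster: int = 256) -> None:
    ids = np.arange(len(positions))
    for cluster in kd_partition(positions, ids, max_cluster):
        lo, hi = positions[cluster].min(axis=0), positions[cluster].max(axis=0)  # cluster AABB
        if visible(lo, hi, view_min, view_max):   # per-cluster visibility culling
            draw_instanced(mesh_id, cluster)      # one draw call per visible cluster
```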
S420, adopting an LOD-based scheduling strategy to perform visual rendering on the third 3D model.
Specifically, in this embodiment, for the rendering of non-instantiated models such as the third 3D model, the spatial index obtained by merging the models through R-tree segmentation is adopted, and visual rendering is performed with the merged model as the unit under an LOD-based scheduling strategy.
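A minimal sketch of a distance-based LOD scheduling rule for the merged models follows; the distance thresholds and number of levels are assumptions for illustration, and real schedulers often use screen-space error instead of raw distance.

```python
def select_lod(camera_distance: float, thresholds=(50.0, 200.0, 800.0)) -> int:
    """Pick an LOD level for a merged model: 0 = full detail, higher = simpler."""
    for level, limit in enumerate(thresholds):
        if camera_distance < limit:
            return level
    return len(thresholds)  # beyond the last threshold: coarsest representation

# Example: a merged model 120 units from the camera would be drawn at LOD 1.
assert select_lod(120.0) == 1
```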
In summary, this embodiment provides a BIM model visualization method based on scene-level instance information enhancement. The method acquires target BIM model data and classifies it to obtain a first target model, a second target model and a third target model, where the first target model is an explicit instantiation model, the second target model is an implicit instantiation model, and the third target model is a non-instantiation model; instance information is then extracted from the first target model and the second target model to obtain first 3D data and second 3D data; meanwhile, clustering and face-reduction processing are performed on the third target model to obtain third 3D data; finally, a hybrid spatial index is constructed for the first 3D data, the second 3D data and the third 3D data according to scene-level instance information and spatial position information, and visual rendering is performed to obtain the target 3D model. The BIM model visualization method based on scene-level instance information enhancement performs instantiated organization and storage according to the geometric organization characteristics of the BIM model and, on top of this data organization, designs a set of efficient visualization schemes based on hierarchical-instance-enhanced spatial indexes, thereby providing an efficient and feasible path for BIM-CIM fusion.
It should be understood that, although the steps in the flowcharts shown in the drawings of this specification are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least a portion of the steps in the flowcharts may include a plurality of sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, the order in which the sub-steps or stages are performed is not necessarily sequential, and may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps or other steps.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), memory bus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
Example two
Based on the above embodiment, the present invention further provides a BIM model visualization device based on scene level instance information enhancement, as shown in fig. 10, where the BIM model visualization device based on scene level instance information enhancement includes:
the classification module is configured to obtain target BIM model data, classify the target BIM model data, and obtain a first target model, a second target model and a third target model, where the first target model is an explicit instantiation model, the second target model is an implicit instantiation model, and the third target model is a non-instantiation model, as described in embodiment one;
the 3D data acquisition module is configured to extract instance information of the first target model and the second target model to obtain first 3D data and second 3D data, which is specifically described in embodiment one;
the 3D data obtaining module is further configured to perform clustering and face subtracting processing on the third target model to obtain third 3D data, which is specifically described in the first embodiment;
and the rendering module is used for performing visual rendering on the first 3D data, the second 3D data and the third 3D data to obtain a target 3D model, and the method is specifically described in the first embodiment.
Example III
Based on the above embodiment, the present application also provides a terminal correspondingly, as shown in fig. 11, where the terminal includes a processor 10 and a memory 20. Fig. 11 shows only some of the components of the terminal, but it should be understood that not all of the illustrated components are required to be implemented and that more or fewer components may be implemented instead.
The memory 20 may in some embodiments be an internal storage unit of the terminal, such as a hard disk or a memory of the terminal. The memory 20 may in other embodiments also be an external storage device of the terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal. Further, the memory 20 may also include both an internal storage unit and an external storage device of the terminal. The memory 20 is used for storing application software and various data installed in the terminal. The memory 20 may also be used to temporarily store data that has been output or is to be output. In one embodiment, the memory 20 stores a BIM model visualization program 30 enhanced based on scene level instance information, and the BIM model visualization program 30 enhanced based on scene level instance information can be executed by the processor 10, so as to implement the BIM model visualization method enhanced based on scene level instance information in the present application.
The processor 10 may in some embodiments be a Central Processing Unit (CPU), microprocessor or other chip for executing the program code or processing the data stored in the memory 20, for example, executing the BIM model visualization method based on scene-level instance information enhancement, and the like.
In one embodiment, referring to FIG. 11, the following steps are implemented when the processor 10 executes the BIM model visualization program 30, based on scene-level instance information enhancement, stored in the memory 20:
acquiring target BIM model data, and classifying the target BIM model data to obtain a first target model, a second target model and a third target model, wherein the first target model is an explicit instantiation model, the second target model is an implicit instantiation model, and the third target model is a non-instantiation model;
extracting example information of the first target model and the second target model to obtain first 3D data and second 3D data;
performing clustering and face-reduction processing on the third target model to obtain third 3D data;
and constructing a hybrid spatial index for the first 3D data, the second 3D data and the third 3D data according to scene level instance information and spatial position information, and performing visual rendering to obtain a target 3D model.
The classifying the target BIM model data to obtain a first target model, a second target model and a third target model includes:
extracting a model with geometric multiplexing semantic information from the target BIM model data to obtain the first target model;
performing geometric semantic screening on the model without geometric multiplexing semantic information in the target BIM model data, and extracting an implicit instantiation model to obtain the second target model;
the model other than the first target model and the second target model in the target model data is the third target model.
The extracting the instance information of the first target model and the second target model to obtain first 3D data and second 3D data includes:
directly acquiring geometric multiplexing information in the first target model, and extracting instance information to obtain the first 3D data;
and extracting instantiation enhancement information of the second target model, obtaining a space conversion matrix between the standard model corresponding to the second target model and the target model, and extracting instance information according to the standard model corresponding to the second target model and the space conversion matrix to obtain the second 3D data.
The obtaining the spatial transformation matrix between the standard model and the target model corresponding to the second target model includes:
and acquiring an instantiation model rotation space transformation matrix, an instantiation translation space transformation matrix and an instantiation model scaling space transformation matrix of the second target model.
The performing of clustering and face-reduction processing on the third target model to obtain third 3D data includes:
performing bottom-up spatial clustering on the third target model based on an R-tree, selecting clustering objects according to the distance between the centroids of the spatial bounding boxes of the nodes, taking the average number of triangle faces of the bounding boxes in the current scene as the clustering threshold, and iterating the clustering continuously until all nodes are merged into a single node, so as to obtain the third 3D data, wherein face-reduction processing is performed in each clustering iteration.
The step of constructing a hybrid spatial index for the first 3D data, the second 3D data and the third 3D data according to scene level instance information and spatial position information, and performing visual rendering includes:
clustering the first 3D data and the second 3D data by using a scene hierarchy instance space index, and then carrying out instance rendering by taking clusters as units;
And adopting a Lod-based scheduling strategy to carry out visual rendering on the third 3D model.
The step of performing instantiation rendering on the first 3D data and the second 3D data by using a scene hierarchy instance spatial index in a cluster unit after clustering includes:
performing space division on the first 3D data and the second 3D data by using a KD tree, and clustering the instances using the same geometric grid to obtain a plurality of instance clusters;
performing visibility elimination calculation on each example cluster to obtain a plurality of target clusters;
and respectively carrying out instantiation rendering on each target cluster.
Example IV
The present invention also provides a computer readable storage medium having stored therein one or more programs executable by one or more processors to implement the steps of a BIM model visualization method based on scene-level instance information enhancement as described above.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (7)

1. A method of visualizing a BIM model based on scene-level instance information enhancement, the method comprising:
acquiring target BIM model data, and classifying the target BIM model data to obtain a first target model, a second target model and a third target model, wherein the first target model is an explicit instantiation model, the second target model is an implicit instantiation model, and the third target model is a non-instantiation model;
extracting example information of the first target model and the second target model to obtain first 3D data and second 3D data;
performing clustering and face reduction processing on the third target model to obtain third 3D data;
constructing a hybrid spatial index for the first 3D data, the second 3D data and the third 3D data according to scene level instance information and spatial position information, and performing visual rendering to obtain a target 3D model;
the classifying the target BIM model data to obtain a first target model, a second target model and a third target model includes:
extracting a model with geometric multiplexing semantic information from the target BIM model data to obtain the first target model;
performing geometric semantic screening on the model without geometric multiplexing semantic information in the target BIM model data, and extracting an implicit instantiation model to obtain the second target model;
the models in the target BIM model data except the first target model and the second target model are the third target model;
the extracting the instance information of the first target model and the second target model to obtain first 3D data and second 3D data includes:
directly acquiring geometric multiplexing information in the first target model, and extracting instance information to obtain the first 3D data;
extracting instantiation enhancement information of the second target model, obtaining a spatial transformation matrix between a standard model corresponding to the second target model and the target model, and extracting instance information according to the standard model corresponding to the second target model and the spatial transformation matrix to obtain second 3D data;
the clustering and face reduction processing performed on the third target model to obtain third 3D data includes:
performing bottom-up spatial clustering on the third target model based on an R-tree, selecting clustering objects according to the distance between the centroids of the spatial bounding boxes of the nodes, taking the average triangle count of the bounding boxes in the current scene as the clustering threshold, and iterating the clustering until a single node remains, so as to obtain the third 3D data, wherein face reduction is applied in each iteration of the clustering.
2. The method for visualizing a BIM model based on scene-level instance information enhancement as in claim 1, wherein said obtaining a spatial transformation matrix between the standard model corresponding to the second target model and the target model comprises:
acquiring the rotation transformation matrix, the translation transformation matrix and the scaling transformation matrix of the instantiated model of the second target model.
3. The method for visualizing a BIM model based on scene-level instance information enhancement according to claim 1, wherein the constructing a hybrid spatial index for the first 3D data, the second 3D data and the third 3D data according to the scene-level instance information and the spatial location information, and performing the visual rendering, comprises:
clustering the first 3D data and the second 3D data using the scene-level instance spatial index and then performing instanced rendering cluster by cluster;
adopting an LOD-based scheduling strategy to perform visual rendering on the third 3D data.
4. The method for visualizing a BIM model based on scene-level instance information enhancement according to claim 3, wherein the clustering of the first 3D data and the second 3D data using the scene-level instance spatial index, followed by instanced rendering cluster by cluster, includes:
partitioning the space of the first 3D data and the second 3D data with a KD-tree and clustering the instances that share the same geometric mesh to obtain a plurality of instance clusters;
performing visibility culling on each instance cluster to obtain a plurality of target clusters;
performing instanced rendering on each target cluster separately.
5. A BIM model visualization device based on scene level instance information enhancement, comprising:
the classification module is used for acquiring target BIM model data, classifying the target BIM model data to obtain a first target model, a second target model and a third target model, wherein the first target model is an explicit instantiation model, the second target model is an implicit instantiation model, and the third target model is a non-instantiation model;
the 3D data acquisition module is used for extracting instance information of the first target model and the second target model to obtain first 3D data and second 3D data;
the 3D data acquisition module is further used for clustering and face reduction processing of the third target model to obtain third 3D data;
the rendering module is used for constructing a hybrid spatial index for the first 3D data, the second 3D data and the third 3D data according to scene level instance information and spatial position information, and performing visual rendering to obtain a target 3D model;
the classifying the target BIM model data to obtain a first target model, a second target model and a third target model includes:
extracting a model with geometric multiplexing semantic information from the target BIM model data to obtain the first target model;
performing geometric semantic screening on the model without geometric multiplexing semantic information in the target BIM model data, and extracting an implicit instantiation model to obtain the second target model;
the models in the target BIM model data except the first target model and the second target model are the third target model;
the extracting the instance information of the first target model and the second target model to obtain first 3D data and second 3D data includes:
directly acquiring geometric multiplexing information in the first target model, and extracting instance information to obtain the first 3D data;
extracting instantiation enhancement information of the second target model, obtaining a spatial transformation matrix between a standard model corresponding to the second target model and the target model, and extracting instance information according to the standard model corresponding to the second target model and the spatial transformation matrix to obtain second 3D data;
the clustering and face reduction processing performed on the third target model to obtain third 3D data includes:
performing bottom-up spatial clustering on the third target model based on an R-tree, selecting clustering objects according to the distance between the centroids of the spatial bounding boxes of the nodes, taking the average triangle count of the bounding boxes in the current scene as the clustering threshold, and iterating the clustering until a single node remains, so as to obtain the third 3D data, wherein face reduction is applied in each iteration of the clustering.
6. A terminal, comprising: a processor and a computer-readable storage medium communicatively coupled to the processor, wherein the computer-readable storage medium is adapted to store a plurality of instructions and the processor is adapted to invoke the instructions in the computer-readable storage medium to perform the steps of the BIM model visualization method based on scene-level instance information enhancement according to any one of claims 1 to 4.
7. A computer-readable storage medium storing one or more programs, wherein the one or more programs are executable by one or more processors to implement the steps of the BIM model visualization method based on scene-level instance information enhancement according to any one of claims 1 to 4.
CN202310260322.9A 2023-03-10 2023-03-10 BIM model visualization method based on scene hierarchy instance information enhancement Active CN116502303B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310260322.9A CN116502303B (en) 2023-03-10 2023-03-10 BIM model visualization method based on scene hierarchy instance information enhancement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310260322.9A CN116502303B (en) 2023-03-10 2023-03-10 BIM model visualization method based on scene hierarchy instance information enhancement

Publications (2)

Publication Number Publication Date
CN116502303A (en) 2023-07-28
CN116502303B (en) 2023-10-27

Family

ID=87323802

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310260322.9A Active CN116502303B (en) 2023-03-10 2023-03-10 BIM model visualization method based on scene hierarchy instance information enhancement

Country Status (1)

Country Link
CN (1) CN116502303B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117556519A (en) * 2023-12-11 2024-02-13 深圳大学 Conversion method and equipment for digital twin-oriented Revit model data

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114791986A (en) * 2021-01-25 2022-07-26 广东博智林机器人有限公司 Three-dimensional information model processing method and device
WO2022257099A1 (en) * 2021-06-09 2022-12-15 青岛理工大学 Prefabricated building intelligent drawing output method based on bim

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on a BIM-based Project Text Information Integration Method; Jiang Shaohua; Li Lina; Dai Liren; Journal of Engineering Management (04); full text *

Also Published As

Publication number Publication date
CN116502303A (en) 2023-07-28

Similar Documents

Publication Publication Date Title
CN108133044B (en) Spatial big data three-dimensional visualization method and platform based on attribute separation
US10262392B2 (en) Distributed and parallelized visualization framework
AU2018258094B2 (en) Octree-based convolutional neural network
CN111862292B (en) Data rendering method and device for transmission line corridor and computer equipment
CN112785710B (en) Rapid unitization method, system, memory and equipment for OSGB three-dimensional model building
DE112022004435T5 (en) Accelerating triangle visibility tests for real-time ray tracing
CN116502303B (en) BIM model visualization method based on scene hierarchy instance information enhancement
CN110211234A (en) A kind of grid model sewing system and method
CN115795629A (en) Data conversion method, data conversion system and electronic equipment
CN115994197A (en) GeoSOT grid data calculation method
Shan et al. Interactive visual exploration of halos in large-scale cosmology simulation
CN106846457B (en) Octree parallel construction method for CT slice data visual reconstruction
CN116414316B (en) Illusion engine rendering method based on BIM model in digital city
Stolte et al. Parallel spatial enumeration of implicit surfaces using interval arithmetic for octree generation and its direct visualization
CN105093283A (en) Three-dimensional observation system surface element attribute multi-thread rapid display method
Guo et al. A 3D Surface Reconstruction Method for Large‐Scale Point Cloud Data
CN114648607B (en) Inclined three-dimensional model reconstruction and dynamic scheduling method based on CAD platform
CN113434514B (en) Voxelization index and output method of offshore oil and gas field point cloud model
DE102022111609A1 (en) ACCELERATED PROCESSING VIA BODY-BASED RENDERING ENGINE
Wang et al. A composition-free parallel volume rendering method
CN113392348A (en) BIM-based tunnel main body structural steel IFC2x3 data visualization method
CN114328769A (en) WebGL-based Beidou grid drawing method and device
CN1773494A (en) Pattern drawing platform-oriented scene graph optimizational designing method
Cai et al. Application of Gpu parallel in BIM model lightweight
Tang et al. A high-efficiency data compression and visualization method for large complex BIM model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant