CN117332489A - Tunnel environment parameter fusion modeling method based on space semantic constraint

Publication number: CN117332489A (application CN202311366521.4A)
Authority: CN (China)
Prior art keywords: tunnel, terrain, ray, model, point
Legal status: Granted
Application number: CN202311366521.4A
Other languages: Chinese (zh)
Other versions: CN117332489B (en)
Inventors:
徐胜华
江文星
马钰
王勇
王琢璐
罗安
车向红
Current Assignee: Chinese Academy of Surveying and Mapping
Original Assignee: Chinese Academy of Surveying and Mapping
Application filed by Chinese Academy of Surveying and Mapping
Priority to CN202311366521.4A
Publication of CN117332489A
Application granted
Publication of CN117332489B
Legal status: Active

Classifications

    • G06F 30/13 — Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • G06T 15/04 — Texture mapping
    • G06T 15/205 — Image-based rendering
    • G06T 15/87 — Gouraud shading
    • G06T 17/05 — Geographic models
    • G06T 17/20 — Finite element generation, e.g. wire-frame surface description, tessellation
    • G06F 2111/04 — Constraint-based CAD

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Structural Engineering (AREA)
  • Remote Sensing (AREA)
  • Civil Engineering (AREA)
  • Architecture (AREA)
  • Computing Systems (AREA)
  • Image Generation (AREA)

Abstract

The invention relates to the fields of tunnel engineering and emergency rescue, and in particular to a tunnel environment parameter fusion modeling method based on spatial semantic constraints, which realizes the fusion of tunnel components with three-dimensional terrain under spatial semantic constraint rules and accurately fuses the environmental parameters with the terrain-tunnel model. The realism of the scene is improved, invalid voxels of the environmental parameters are culled, comparative analysis of the visual effects of the tunnel and environmental parameter field distributions at multiple transparencies is supported, and transverse, longitudinal and irregular subdivision is supported for viewing a user's region of interest.

Description

Tunnel environment parameter fusion modeling method based on space semantic constraint
Technical Field
The invention relates to the fields of tunnel engineering and emergency rescue, and in particular to a tunnel environment parameter fusion modeling method based on spatial semantic constraints.
Background
Tunnel engineering refers to structures built underground, underwater or through mountains that carry railways or roads for motor vehicles. By location, tunnels can be divided into mountain tunnels, underwater tunnels and urban tunnels: a tunnel passing under a mountain or hill to shorten the route and avoid steep grades is called a mountain tunnel; a tunnel passing under a river or the sea floor to cross a river or strait is called an underwater tunnel; and a tunnel carrying a railway underground through a city to meet the needs of large cities is called an urban tunnel.
Tunnel engineering requires route selection, longitudinal section design, cross-section design and auxiliary tunnel design, as well as portal design and the choice of excavation method and lining type, and these designs must take into account the environment and parameters inside the tunnel. In the modeling of a tunnel scene, the terrain, the tunnel components and the tunnel environmental parameters are modeled independently with different modeling methods, which causes significant differences in data structures and combination modes, so the various models in the scene cannot be fused rapidly.
In the field of emergency rescue, a tunnel is an occluded space characterized, when a disaster occurs, by a narrow field of view, limited communication and complex internal conditions; to achieve safe and accurate rescue and protect rescuers to the greatest extent, the tunnel, the terrain around it and the environmental parameters inside it must be fused and modeled rapidly.
Therefore, a spatially semantically constrained tunnel environment parameter fusion modeling method is designed, which realizes the fusion of tunnel components with three-dimensional terrain under spatial semantic constraint rules while accurately fusing the environmental parameters with the terrain-tunnel model.
Disclosure of Invention
The invention aims to remedy the defects of the prior art and provides a spatially semantically constrained tunnel environment parameter fusion modeling method, which realizes the fusion of tunnel components with three-dimensional terrain under spatial semantic constraint rules and accurately fuses the environmental parameters with the tunnel-terrain model.
To this end, the invention provides a spatially semantically constrained tunnel environment parameter fusion modeling method comprising the following steps:
tunnel basic scene modeling, three-dimensional visual modeling of tunnel scene environmental parameters, construction of spatial semantic constraint rules, and spatially semantically constrained tunnel environment parameter fusion modeling;
the tunnel basic scene modeling is:
S1. Taking a digital elevation model as basic data, construct the three-dimensional terrain grid of the tunnel and perform texture mapping with remote sensing imagery, thereby establishing the three-dimensional terrain scene at the tunnel entrance and above the tunnel; the terrain scene data comprise the coordinates and elevation values of the terrain grid cells, and the coordinates of each terrain grid cell are calculated as:

$$x = x_{start} + G_{col}\,G_{size}, \qquad y = y_{start} + (R - G_{row})\,G_{size}$$

where $x_{start}$ and $y_{start}$ are the terrain origin coordinates, $G_{size}$ is the size of each grid cell, $G_{col}$ and $G_{row}$ are the column and row numbers of the current grid cell, and $R$ is the total number of rows;
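A minimal Python sketch of this grid-coordinate layout follows; the linear mapping is the reconstruction given above and the DEM values and parameter names are illustrative assumptions, not data from the patent:

```python
import numpy as np

def grid_cell_coords(x_start, y_start, g_size, g_col, g_row, total_rows):
    """Planar coordinates of one terrain grid cell (rows counted from the top)."""
    x = x_start + g_col * g_size
    y = y_start + (total_rows - g_row) * g_size
    return x, y

# Hypothetical 3 x 4 DEM patch with a 30 m cell size.
elevations = np.array([[812.0, 815.2, 818.9, 820.1],
                       [809.4, 811.7, 816.3, 819.0],
                       [805.0, 808.2, 812.5, 817.8]])
rows, cols = elevations.shape
vertices = [(*grid_cell_coords(500000.0, 3300000.0, 30.0, c, r, rows), elevations[r, c])
            for r in range(rows) for c in range(cols)]
print(vertices[0])   # (x, y, elevation) of the first terrain grid vertex
```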
S2. Traverse the vertices of each triangle in the triangular mesh model in anticlockwise order, and store the vertex coordinates and their corresponding indices, so that the topological relations among the points, lines and faces of the whole terrain triangular mesh model are completely described and stored;
S3. By storing the vertex data and their index numbers, obtain the terrain coordinate-index data structure; by accessing and reading the terrain coordinate-index data in real time and combining it with the remote sensing imagery for texture mapping of the mesh model, the three-dimensional terrain scene around the tunnel can be constructed quickly;
S4. Standardize the existing tunnel cross-section design drawing, retaining only the lining inner edge line and lining outer edge line and removing other irrelevant lines to obtain a basic sketch of the tunnel cross-section; import the processed cross-section drawing into modeling software, generate the geometric model of the tunnel body through lofting and extrusion operations, and export it to obtain the tunnel component model;
the three-dimensional visual modeling of the tunnel scene environmental parameters is:
S10. Fill the blank areas inside the tunnel by numerical interpolation with the co-kriging method, so that the environmental parameter data are displayed continuously and completely throughout the tunnel space; for a given tunnel environmental parameter, select the other environmental parameters as covariates, with main variable $Z(p_i)$ and covariate $Z_c(p_j)$; the prediction is calculated as:

$$Z^*(p_0)=\sum_{i=1}^{m}\lambda_i Z(p_i)+\sum_{j=1}^{n}\mu_j Z_c(p_j)$$

where $Z^*(p_0)$ is the co-kriging prediction; $Z(p_i)\ (i=1,2,\dots,m)$ are the main-variable environmental parameter data at the sample points; $Z_c(p_j)\ (j=1,2,\dots,n)$ are the covariate environmental parameter data; $\lambda_i$ and $\mu_j$ are the weighting coefficients of the co-kriging model;
introducing the Lagrange multipliers $\phi_1$ and $\phi_2$ yields:

$$\begin{cases}\sum_{i=1}^{m}\lambda_i\,\gamma_1(p_i,p_k)+\sum_{j=1}^{n}\mu_j\,\gamma_{12}(p_j,p_k)+\phi_1=\gamma_1(p_0,p_k), & k=1,\dots,m\\ \sum_{i=1}^{m}\lambda_i\,\gamma_{21}(p_i,p_l)+\sum_{j=1}^{n}\mu_j\,\gamma_2(p_j,p_l)+\phi_2=\gamma_{21}(p_0,p_l), & l=1,\dots,n\\ \sum_{i=1}^{m}\lambda_i=1,\qquad \sum_{j=1}^{n}\mu_j=0\end{cases}$$

where $\gamma_1$ is the variogram model of the main variable, $\gamma_2$ the variogram model of the covariate, and $\gamma_{12}=\gamma_{21}$ the cross-variogram model; solving the above gives the weighting coefficients $\lambda_i$ and $\mu_j$, which are substituted into the prediction formula to obtain the co-kriging estimate $Z^*(p_0)$;
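A minimal sketch of the co-kriging predictor above, assuming the weights $\lambda_i$ and $\mu_j$ have already been obtained by solving the co-kriging system (sample values are hypothetical):

```python
import numpy as np

def cokriging_predict(z_main, z_cov, lam, mu):
    """Z*(p0) = sum_i lam_i * Z(p_i) + sum_j mu_j * Zc(p_j)."""
    # Unbiasedness conditions from the co-kriging system.
    assert np.isclose(lam.sum(), 1.0) and np.isclose(mu.sum(), 0.0)
    return float(lam @ z_main + mu @ z_cov)

temps = np.array([18.2, 18.9, 19.4])   # main variable Z(p_i), e.g. temperature
humid = np.array([0.61, 0.63])         # covariate Zc(p_j), e.g. humidity
lam = np.array([0.5, 0.3, 0.2])        # weights from solving the kriging system
mu = np.array([0.04, -0.04])
print(cokriging_predict(temps, humid, lam, mu))
```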
S20. Implement volume rendering with the ray casting algorithm on the graphics processor (GPU), improving rendering speed through hardware-accelerated texture mapping; the algorithm comprises a vertex shader and a fragment shader, where the vertex shader transforms the data vertex coordinates and passes the result to the fragment shader, and the fragment shader performs the ray casting, data resampling, transfer function design and ray synthesis steps;
S30. Ray casting:
sample the tunnel environmental parameter volume data along the emitted rays; before sampling, detect whether each ray intersects the volume data bounding box, and compute the sampling start point and end point;
an axis-aligned bounding box (AABB) is used as the volume data bounding box, and the intersection of a ray with the environmental parameter volume bounding box is solved by a Ray-AABB algorithm;
with the slab-based AABB intersection detection algorithm, the GPU fragment shader determines whether a ray intersects the bounding box and computes the intersection points; a slab is the space between two parallel planes, and the AABB of the environmental parameter volume data can be regarded as the intersection of three slabs formed by three pairs of parallel planes;
according to the normal directions of the six faces of the bounding box, the faces are divided into three near faces and three far faces; if the intersection intervals of the ray with the three slabs overlap, the ray necessarily intersects the AABB; the algorithm is:
S301. Establish the ray equation: let the ray origin be $Ray_o$, $dir$ the ray direction vector and $s$ the sampling step along the ray; the ray equation is then $Ray(s)=Ray_o+s\cdot dir$;
S302. Detect whether the ray intersects the AABB: from the ray equation, compute the intersection interval with each slab and test whether the three intervals overlap, with the detection formula $\max(s_{x\,near},s_{y\,near},s_{z\,near})<\min(s_{x\,far},s_{y\,far},s_{z\,far})$, where $s_{x\,near},s_{y\,near},s_{z\,near}$ are the smaller and $s_{x\,far},s_{y\,far},s_{z\,far}$ the larger parameter values of the intersections with the three slabs;
S303. Compute the sampling start and end coordinates: the detected intersections are the points where the ray enters and leaves the bounding box, with start point $Ray_o+\max(s_{x\,near},s_{y\,near},s_{z\,near})\cdot dir$ and end point $Ray_o+\min(s_{x\,far},s_{y\,far},s_{z\,far})\cdot dir$; the length of the segment they determine is the ray traversal distance;
S304. Computing the ray-box intersections in this way removes the invalid sampling, and the results are used in the subsequent resampling and ray synthesis calculations;
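A minimal CPU-side sketch of the slab test of S301-S303 (the patent executes this in a GPU fragment shader; the function name and example ray are illustrative):

```python
import numpy as np

def ray_aabb(ray_o, direction, box_min, box_max):
    """Return (s_near, s_far) ray parameters of the box interval, or None on a miss."""
    inv = 1.0 / direction                     # assumes no zero direction components
    t0 = (box_min - ray_o) * inv              # parameters at one plane of each slab
    t1 = (box_max - ray_o) * inv              # parameters at the other plane
    s_near = np.maximum(np.minimum(t0, t1), 0.0).max()  # latest slab entry (S302 max)
    s_far = np.maximum(t0, t1).min()                    # earliest slab exit (S302 min)
    return (s_near, s_far) if s_near < s_far else None  # overlap test of S302

ray_o = np.array([-1.0, 0.2, 0.3])
direction = np.array([1.0, 0.2, 0.1])
hit = ray_aabb(ray_o, direction, np.zeros(3), np.ones(3))
if hit:
    s_near, s_far = hit
    print(ray_o + s_near * direction, ray_o + s_far * direction)  # S303 start/end
```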
S40. Data resampling:
resample the volume data to convert it into a continuous data field; interpolate from the attribute values of the voxels adjacent to each resampling point and take the interpolation result as the value of the sampling point, specifically:
S401. First interpolation: from the attribute values of the eight voxels surrounding the sampling point, interpolate along the x direction to obtain the points $i_{1\text{-}1},i_{1\text{-}2},i_{1\text{-}3},i_{1\text{-}4}$, each computed from an x-aligned voxel pair $(v_0,v_1)$ as $i_{1\text{-}k}=v_1\,x+v_0\,(1-x)$;
S402. Second interpolation: from the $i_{1\text{-}1},i_{1\text{-}2},i_{1\text{-}3},i_{1\text{-}4}$ calculated in S401, interpolate along the z direction to obtain the points $i_{2\text{-}1}$ and $i_{2\text{-}2}$: $i_{2\text{-}1}=i_{1\text{-}2}\,z+i_{1\text{-}1}\,(1-z)$, $i_{2\text{-}2}=i_{1\text{-}4}\,z+i_{1\text{-}3}\,(1-z)$;
S403. Third interpolation: from the $i_{2\text{-}1},i_{2\text{-}2}$ calculated in the previous step, interpolate along the y direction to obtain the value of point $i$: $i=i_{2\text{-}1}\,y+i_{2\text{-}2}\,(1-y)$;
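A minimal sketch of the trilinear resampling of S401-S403, following the weighting convention of S403 (value = a·t + b·(1−t)); which corner pairs feed $i_{1\text{-}1},\dots,i_{1\text{-}4}$ is an assumption, since the patent shows only the final y-direction step:

```python
import numpy as np

def trilinear(corners, x, y, z):
    """corners[ix, iy, iz] holds the eight voxel attribute values of one cell."""
    c = corners
    # S401: four interpolations along x
    i11 = c[1, 0, 0] * x + c[0, 0, 0] * (1 - x)
    i12 = c[1, 0, 1] * x + c[0, 0, 1] * (1 - x)
    i13 = c[1, 1, 0] * x + c[0, 1, 0] * (1 - x)
    i14 = c[1, 1, 1] * x + c[0, 1, 1] * (1 - x)
    # S402: two interpolations along z
    i21 = i12 * z + i11 * (1 - z)
    i22 = i14 * z + i13 * (1 - z)
    # S403: final interpolation along y
    return i22 * y + i21 * (1 - y)

cell = np.arange(8, dtype=float).reshape(2, 2, 2)  # toy voxel attribute values
print(trilinear(cell, 0.5, 0.5, 0.5))              # cell centre -> corner mean 3.5
```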
S50. Transfer function design:
convert the three-dimensional voxel attribute values into the optical coefficients used for image synthesis, such as color and opacity, through a mapping relation; mathematically, the transfer function is defined as:
$$\tau: D_1\times D_2\times\cdots\times D_n \rightarrow O_1\times O_2\times\cdots\times O_m$$
where $D$ is the function domain, representing the attribute values of the volume data, i.e. the attribute values of the tunnel environmental parameter data; $O$ is the range of the transfer function, representing the optical coefficients of the volume data after mapping; and $\tau$ is the mapping rule;
transfer functions are divided into pre-interaction and post-interaction transfer functions; a two-dimensional texture image is passed in as an optical-coefficient map to act as the transfer function, the resampling result is input to the transfer function as a parameter, and the color value and opacity of each sampling point are obtained by lookup in the optical-coefficient texture map;
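A minimal sketch of a lookup-table transfer function in the spirit of S50, mapping a normalized attribute value to an RGBA optical coefficient; the four-entry color ramp is a made-up example, not the patent's optical-coefficient map:

```python
import numpy as np

lut = np.array([  # blue -> cyan -> yellow -> red, increasingly opaque
    [0.0, 0.0, 1.0, 0.05],
    [0.0, 1.0, 1.0, 0.20],
    [1.0, 1.0, 0.0, 0.50],
    [1.0, 0.0, 0.0, 0.90],
])

def transfer(value):
    """Map a resampled attribute value in [0, 1] to an RGBA optical coefficient."""
    pos = value * (len(lut) - 1)
    lo = int(np.floor(pos))
    hi = min(lo + 1, len(lut) - 1)
    t = pos - lo
    return (1 - t) * lut[lo] + t * lut[hi]   # linear filtering, like a texture lookup

print(transfer(0.42))  # RGBA optical coefficient for one sampling point
```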
S60. Ray synthesis:
composite along the sampling points in each ray direction, computing the color and opacity of every pixel on the imaging plane according to the ray casting direction, so that a two-dimensional image reflecting the complete environmental parameter volume data is obtained on the screen;
by synthesis direction, ray synthesis is divided into back-to-front and front-to-back compositing;
the back-to-front compositing formula is: $C_{out}=C_{in}(1-a_{now})+C_{now}\,a_{now}$
where $C$ denotes a color and $a$ an opacity value in the range 0 to 1, the transparency and opacity values always summing to 1; $C_0$ is the initial color value, $C$ the final composited color value, $C_j$ and $a_j$ the color and opacity values of the $j$-th sampling point, and $\beta_j=1-a_j$ its transparency;
iterating this formula continuously along the ray direction according to the back-to-front method finally yields the compositing result of the pixel, with the final composited color value expressed as $C=\sum_{j=0}^{n}C_j\,a_j\prod_{k=j+1}^{n}\beta_k$; the front-to-back compositing formulas are: $C_{out}\,a_{out}=C_{in}\,a_{in}+C_{now}\,a_{now}(1-a_{in})$ and $a_{out}=a_{now}(1-a_{in})+a_{in}$;
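A minimal sketch of front-to-back compositing along one ray, written in the commonly used premultiplied form with early termination (the sample values are hypothetical):

```python
import numpy as np

def composite_front_to_back(rgba_samples):
    """rgba_samples: (n, 4) array of (R, G, B, opacity) along one ray, near to far."""
    color = np.zeros(3)
    alpha = 0.0
    for r, g, b, a_now in rgba_samples:
        color += (1.0 - alpha) * a_now * np.array([r, g, b])  # accumulated C_out
        alpha = alpha + (1.0 - alpha) * a_now                 # accumulated a_out
        if alpha > 0.99:        # ray is effectively opaque; stop sampling early
            break
    return color, alpha

samples = np.array([[0.0, 0.0, 1.0, 0.1],
                    [0.0, 1.0, 1.0, 0.3],
                    [1.0, 0.0, 0.0, 0.8]])
print(composite_front_to_back(samples))
```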
the spatial semantic constraint rules are constructed as follows:
give the corresponding models and data a unified semantic description and store it effectively, on which basis the rapid fused construction of the tunnel environmental parameter scene is realized;
establish spatial semantic constraint rules for the various spatial relations among the terrain model, the tunnel component model and the tunnel environmental parameter model, comprising spatial position constraints, spatial attitude constraints, spatial scale constraints and spatial topology constraints; under these constraint rules, the fusion of the tunnel component with the three-dimensional terrain and the precise fusion of the tunnel component with the environmental parameters are realized;
the spatial position constraint is:
taking the geocentric rectangular coordinate system as reference, apply a translation transformation to the tunnel component and the tunnel environmental parameter field, transforming both models into the geocentric rectangular coordinate system of the three-dimensional terrain so that they match the real geographic position, with the formula:
$$\begin{bmatrix}X'\\Y'\\Z'\\1\end{bmatrix}=M_t\begin{bmatrix}X\\Y\\Z\\1\end{bmatrix},\qquad M_t=\begin{bmatrix}1&0&0&T_x\\0&1&0&T_y\\0&0&1&T_z\\0&0&0&1\end{bmatrix}$$
representing the translation of the point $(X,Y,Z)$ to the point $(X',Y',Z')$, where $M_t$ is the translation matrix and $(T_x,T_y,T_z)$ are the translation parameters;
the spatial attitude constraint is:
unify the directions of the coordinate axes: since the XYZ axis directions differ between coordinate systems, the axis directions of the different systems are unified through a rotation transformation; the overall rotation matrix $M_r$ is calculated from the matrices $M_x$, $M_y$, $M_z$ of rotations about the X, Y and Z axes as: $M_r=M_x\,M_y\,M_z$;
the rotation matrices about the X, Y and Z axes are respectively:
$$M_x=\begin{bmatrix}1&0&0\\0&\cos\alpha&-\sin\alpha\\0&\sin\alpha&\cos\alpha\end{bmatrix},\quad M_y=\begin{bmatrix}\cos\beta&0&\sin\beta\\0&1&0\\-\sin\beta&0&\cos\beta\end{bmatrix},\quad M_z=\begin{bmatrix}\cos\gamma&-\sin\gamma&0\\\sin\gamma&\cos\gamma&0\\0&0&1\end{bmatrix}$$
where $\alpha$, $\beta$ and $\gamma$ are the rotation angles about the X, Y and Z axes respectively;
the spatial scale constraint is:
used to unify the units of different coordinate systems; since a model in a local coordinate system may be scaled by some ratio, a spatial-scale scaling is applied, i.e. the model is scaled in the X, Y and Z directions, with scaling matrix:
$$M_s=\begin{bmatrix}S_x&0&0&0\\0&S_y&0&0\\0&0&S_z&0\\0&0&0&1\end{bmatrix}$$
where $S_x$, $S_y$ and $S_z$ are the scaling factors on the X, Y and Z axes respectively;
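A minimal sketch composing the position, attitude and scale constraints into one homogeneous transform applied to model vertices; the composition order and all parameter values are assumptions for illustration (the patent itself states $M=M_t\,M_r$ for the parameter field in S500):

```python
import numpy as np

def translation(tx, ty, tz):
    m = np.eye(4); m[:3, 3] = (tx, ty, tz); return m

def rotation_z(gamma):
    c, s = np.cos(gamma), np.sin(gamma)
    m = np.eye(4); m[:2, :2] = [[c, -s], [s, c]]; return m

def scaling(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

# Hypothetical parameters: move a tunnel component into the geocentric frame.
M = (translation(-2178309.0, 4388178.0, 4070000.0)
     @ rotation_z(np.deg2rad(30.0))
     @ scaling(1.0, 1.0, 1.0))
vertex = np.array([12.5, 3.0, -1.2, 1.0])   # local model coordinates (homogeneous)
print((M @ vertex)[:3])                     # coordinates in the terrain frame
```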
the spatial topology constraint is:
three topological relations are selected as the spatial topology constraint rules: containment, adjacency and intersection, with the formula:
$$T(A,B)=C(A,B)+P(A,B)+I(A,B)$$
where $T$ denotes the topological relation between models $A$ and $B$; $C$ denotes that models $A$ and $B$ are in a containment relation, i.e. model $B$ lies inside model $A$; $P$ denotes that model $A$ is adjacent to model $B$; and $I$ denotes that model $A$ intersects model $B$;
the spatially semantically constrained tunnel environment parameter fusion modeling is:
S100. Perform fusion modeling on the multiple data models of the tunnel scene based on the constructed spatial semantic rules, comprising the fusion of the tunnel component with the terrain and the fusion of the tunnel environmental parameter field with the tunnel component;
S200. The preliminary fusion of the tunnel component with the terrain comprises extracting the tunnel component parameters and performing the fusion modeling operations; extraction of the tunnel component parameters yields its fusion modeling parameters: spatial position (lon, lat, height), spatial attitude (heading, pitch, roll) and scaling (scale);
S300. Under the restriction and guidance of the spatial semantic constraint rules, perform the positioning, rotation and scaling fusion modeling operations on the tunnel component according to its parameters (lon, lat, height, heading, pitch, roll, scale), realizing fusion matching of the tunnel component with the three-dimensional terrain in position, attitude and scale;
S400. At the tunnel portal, the tunnel component and the three-dimensional terrain are in an intersection topological relation; to achieve a seamless join where the portal meets the terrain, the rendering of the terrain in the portal-terrain intersection region must be restricted;
calculate the intersection line of the terrain and the tunnel portal according to the intersection topological relation of the tunnel and the terrain:
S4001. Calculate the position and size of the terrain grid in the intersection region; the outer contour of the tunnel cross-section is regarded as composed of an arc segment and straight-line segments: any point $P_i(x_i,y_i)$ on the arc segment of the tunnel outer contour is calculated from the arc centre and radius given by the cross-section design, and any point $P_j(x_j,y_j)$ on a straight-line segment $P_{n+1}P_{n+2}$ is calculated by linear interpolation between its two endpoints;
after the coordinates of any point of the tunnel cross-section are calculated, compute the intersection line of the tunnel and the terrain; because the cross-section is orthogonal to the terrain slope, the intersection line is obtained by projecting the cross-section along the orthogonal direction; taking the orthogonal direction of the tunnel cross-section as the z axis and the portal slope gradient as 1:M from the design data, the z coordinate of a point $P_i$ on the intersection line is calculated as:
$$Z_i = M\,X_i$$
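A minimal sketch of S4001 under stated assumptions: contour points are generated from an arc plus a straight segment with illustrative parameters, then projected with $Z_i=M\,X_i$; the patent's exact contour formulas come from its cross-section design drawings:

```python
import numpy as np

def arc_points(center, radius, theta0, theta1, n):
    th = np.linspace(theta0, theta1, n)
    return np.stack([center[0] + radius * np.cos(th),
                     center[1] + radius * np.sin(th)], axis=1)

def segment_points(p_a, p_b, n):
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) * np.asarray(p_a) + t * np.asarray(p_b)

contour = np.vstack([
    arc_points((0.0, 1.5), 5.0, 0.0, np.pi, 50),   # lining arch (assumed radius)
    segment_points((-5.0, 1.5), (5.0, 1.5), 20),   # floor segment P_{n+1}P_{n+2}
])
M = 1.25                                           # portal slope gradient 1:M (assumed)
z = M * contour[:, 0]                              # Z_i = M * X_i
intersection_line = np.column_stack([contour, z])  # (x, y, z) on the portal cut
print(intersection_line[:3])
```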
S4002. Set a reference value for the fusion region and compare it with the stencil value of each terrain fragment in the stencil test stage of the rendering pipeline; judge whether the terrain fragment lies inside the fusion region, discard the fragments inside it and keep those outside it, so that terrain rendering in the intersection region of the tunnel component and the terrain is precisely restricted and a seamless fused display of the tunnel portal and the terrain scene is realized;
S500. According to the defined spatial semantic constraint rules, transform the tunnel environmental parameter field model into the geocentric rectangular coordinate system of the three-dimensional terrain through rotation and translation operations: $M=M_t\,M_r$, where $M$ is the transformation matrix and $M_t$, $M_r$ are the translation and rotation matrices respectively;
taking the containment relation between the environmental parameter field and the tunnel component as the constraint condition, delete the voxels not contained in the tunnel according to the geometric boundary of the tunnel component, with the specific steps:
S5001. Obtain the tunnel component, acquiring its three-dimensional model data, including the shape, dimensions and centerline information of the tunnel;
S5002. Calculate the distance from the tunnel environmental parameter voxels to the tunnel centerline: for each voxel in the cuboid environmental parameter field, compute its shortest distance to the tunnel centerline with a geometric algorithm;
S5003. Delete non-tunnel voxels according to this distance: set a threshold from the tunnel cross-section design parameters and, in each section, delete the voxels whose distance from the tunnel centerline exceeds the threshold;
through S5001-S5003, the part of the environmental parameter field that does not lie inside the tunnel component is eliminated, so that the environmental parameter field and the tunnel component are fused and displayed better.
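A minimal sketch of the voxel culling of S5001-S5003, using point-to-polyline distance and a hypothetical design radius as the threshold (centerline and grid values are illustrative):

```python
import numpy as np

def dist_point_to_segment(p, a, b):
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def cull_voxels(voxel_centers, centerline, radius):
    """Keep only voxels within `radius` of the centerline polyline (S5003)."""
    keep = []
    for p in voxel_centers:
        d = min(dist_point_to_segment(p, centerline[k], centerline[k + 1])
                for k in range(len(centerline) - 1))
        keep.append(d <= radius)
    return np.asarray(keep)

centerline = np.array([[0.0, 0.0, 0.0], [50.0, 0.0, 1.0], [100.0, 5.0, 2.0]])
xs, ys, zs = np.meshgrid(np.arange(0, 100, 10.0),
                         np.arange(-10, 10, 5.0),
                         np.arange(-5, 10, 5.0))
voxels = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])
mask = cull_voxels(voxels, centerline, radius=6.0)   # 6 m design radius (assumed)
print(mask.sum(), "of", len(voxels), "voxels kept")
```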
Compared with the prior art, the invention performs fusion modeling on the multiple kinds of tunnel scene data: matching fusion of the tunnel component with the terrain in spatial position, spatial attitude and spatial scale is realized through the spatial semantic constraint rules; the intersection line of the tunnel portal and the terrain is calculated from their topological relation, the rendering of terrain fragments in that region is restricted, fusion of the tunnel portal with the terrain is realized, and the realism of the scene is improved.
For the tunnel environmental parameter data, three-dimensional visual modeling of multiple environmental parameters is realized, with volume rendering performed separately on the tunnel temperature, humidity and air pressure data. Translation and rotation of the environmental parameter field are realized through the spatial semantic constraint rules, matching the field to its real geographic position; and, using the containment relation between the tunnel component and the environmental parameters with the tunnel geometry as the constraint condition, the invalid voxels of the environmental parameters are culled.
The three-dimensional visual modeling of the tunnel environmental parameters supports modifying the illumination model, increasing and decreasing the illumination intensity, and user-defined opacity transfer functions and thresholds; it supports comparative analysis of the visual effects of the tunnel and environmental parameter field distribution characteristics at multiple opacities, and supports transverse, longitudinal and irregular subdivision for viewing a user's region of interest.
Drawings
FIG. 1 is a schematic diagram of a detailed modeling process of the present invention.
FIG. 2 is a flowchart of a GPU-accelerated ray casting algorithm according to the present invention.
FIG. 3 is a schematic diagram of the spatial semantic constraint rules of the present invention.
FIG. 4 is a schematic diagram of the effect of the tunnel component of the present invention before and after fusion with the three-dimensional terrain.
FIG. 5 is a schematic diagram of the effect of fusion modeling of the environment parameters of the tunnel scene.
FIG. 6 is a schematic diagram of the multi-view fusion modeling effect of the present invention.
Detailed Description
The invention will now be further described with reference to the accompanying drawings.
As shown in FIGS. 1-6, the invention provides a spatially semantically constrained tunnel environment parameter fusion modeling method comprising tunnel basic scene modeling, three-dimensional visual modeling of tunnel scene environmental parameters, construction of spatial semantic constraint rules, and spatially semantically constrained tunnel environment parameter fusion modeling;
the tunnel base scene is modeled as:
S1. Taking a digital elevation model as basic data, construct the three-dimensional terrain grid of the tunnel and perform texture mapping with remote sensing imagery, thereby establishing the three-dimensional terrain scene at the tunnel entrance and above the tunnel; the terrain scene data comprise the coordinates and elevation values of the terrain grid cells, and the coordinates of each terrain grid cell are calculated as:

$$x = x_{start} + G_{col}\,G_{size}, \qquad y = y_{start} + (R - G_{row})\,G_{size}$$

where $x_{start}$ and $y_{start}$ are the terrain origin coordinates, $G_{size}$ is the size of each grid cell, $G_{col}$ and $G_{row}$ are the column and row numbers of the current grid cell, and $R$ is the total number of rows;
S2. Traverse the vertices of each triangle in the triangular mesh model in anticlockwise order, and store the vertex coordinates and their corresponding indices, so that the topological relations among the points, lines and faces of the whole terrain triangular mesh model are completely described and stored;
S3. By storing the vertex data and their index numbers, obtain the terrain coordinate-index data structure; by accessing and reading the terrain coordinate-index data in real time and combining it with the remote sensing imagery for texture mapping of the mesh model, the three-dimensional terrain scene around the tunnel can be constructed quickly;
S4. Standardize the existing tunnel cross-section design drawing, retaining only the lining inner edge line and lining outer edge line and removing other irrelevant lines to obtain a basic sketch of the tunnel cross-section; import the processed cross-section drawing into modeling software, generate the geometric model of the tunnel body through lofting and extrusion operations, and export it to obtain the tunnel component model;
the three-dimensional visual modeling of the tunnel scene environmental parameters is:
S10. Fill the blank areas inside the tunnel by numerical interpolation with the co-kriging method, so that the environmental parameter data are displayed continuously and completely throughout the tunnel space; for a given tunnel environmental parameter, select the other environmental parameters as covariates, with main variable $Z(p_i)$ and covariate $Z_c(p_j)$; the prediction is calculated as:

$$Z^*(p_0)=\sum_{i=1}^{m}\lambda_i Z(p_i)+\sum_{j=1}^{n}\mu_j Z_c(p_j)$$

where $Z^*(p_0)$ is the co-kriging prediction; $Z(p_i)\ (i=1,2,\dots,m)$ are the main-variable environmental parameter data at the sample points; $Z_c(p_j)\ (j=1,2,\dots,n)$ are the covariate environmental parameter data; $\lambda_i$ and $\mu_j$ are the weighting coefficients of the co-kriging model;
introducing the Lagrange multipliers $\phi_1$ and $\phi_2$ yields:

$$\begin{cases}\sum_{i=1}^{m}\lambda_i\,\gamma_1(p_i,p_k)+\sum_{j=1}^{n}\mu_j\,\gamma_{12}(p_j,p_k)+\phi_1=\gamma_1(p_0,p_k), & k=1,\dots,m\\ \sum_{i=1}^{m}\lambda_i\,\gamma_{21}(p_i,p_l)+\sum_{j=1}^{n}\mu_j\,\gamma_2(p_j,p_l)+\phi_2=\gamma_{21}(p_0,p_l), & l=1,\dots,n\\ \sum_{i=1}^{m}\lambda_i=1,\qquad \sum_{j=1}^{n}\mu_j=0\end{cases}$$

where $\gamma_1$ is the variogram model of the main variable, $\gamma_2$ the variogram model of the covariate, and $\gamma_{12}=\gamma_{21}$ the cross-variogram model; solving the above gives the weighting coefficients $\lambda_i$ and $\mu_j$, which are substituted into the prediction formula to obtain the co-kriging estimate $Z^*(p_0)$;
S20. Implement volume rendering with the ray casting algorithm on the graphics processor (GPU), improving rendering speed through hardware-accelerated texture mapping; the algorithm comprises a vertex shader and a fragment shader, where the vertex shader transforms the data vertex coordinates and passes the result to the fragment shader, and the fragment shader performs the ray casting, data resampling, transfer function design and ray synthesis steps;
S30. Ray casting:
sample the tunnel environmental parameter volume data along the emitted rays; before sampling, detect whether each ray intersects the volume data bounding box, and compute the sampling start point and end point;
an axis-aligned bounding box (AABB) is used as the volume data bounding box, and the intersection of a ray with the environmental parameter volume bounding box is solved by a Ray-AABB algorithm;
with the slab-based AABB intersection detection algorithm, the GPU fragment shader determines whether a ray intersects the bounding box and computes the intersection points; a slab is the space between two parallel planes, and the AABB of the environmental parameter volume data can be regarded as the intersection of three slabs formed by three pairs of parallel planes;
according to the normal directions of the six faces of the bounding box, the faces are divided into three near faces and three far faces; if the intersection intervals of the ray with the three slabs overlap, the ray necessarily intersects the AABB; the algorithm is:
S301. Establish the ray equation: let the ray origin be $Ray_o$, $dir$ the ray direction vector and $s$ the sampling step along the ray; the ray equation is then $Ray(s)=Ray_o+s\cdot dir$;
S302. Detect whether the ray intersects the AABB: from the ray equation, compute the intersection interval with each slab and test whether the three intervals overlap, with the detection formula $\max(s_{x\,near},s_{y\,near},s_{z\,near})<\min(s_{x\,far},s_{y\,far},s_{z\,far})$, where $s_{x\,near},s_{y\,near},s_{z\,near}$ are the smaller and $s_{x\,far},s_{y\,far},s_{z\,far}$ the larger parameter values of the intersections with the three slabs;
S303. Compute the sampling start and end coordinates: the detected intersections are the points where the ray enters and leaves the bounding box, with start point $Ray_o+\max(s_{x\,near},s_{y\,near},s_{z\,near})\cdot dir$ and end point $Ray_o+\min(s_{x\,far},s_{y\,far},s_{z\,far})\cdot dir$; the length of the segment they determine is the ray traversal distance;
S304. Computing the ray-box intersections in this way removes the invalid sampling, and the results are used in the subsequent resampling and ray synthesis calculations;
S40. Data resampling:
resample the volume data to convert it into a continuous data field; interpolate from the attribute values of the voxels adjacent to each resampling point and take the interpolation result as the value of the sampling point, specifically:
S401. First interpolation: from the attribute values of the eight voxels surrounding the sampling point, interpolate along the x direction to obtain the points $i_{1\text{-}1},i_{1\text{-}2},i_{1\text{-}3},i_{1\text{-}4}$, each computed from an x-aligned voxel pair $(v_0,v_1)$ as $i_{1\text{-}k}=v_1\,x+v_0\,(1-x)$;
S402. Second interpolation: from the $i_{1\text{-}1},i_{1\text{-}2},i_{1\text{-}3},i_{1\text{-}4}$ calculated in S401, interpolate along the z direction to obtain the points $i_{2\text{-}1}$ and $i_{2\text{-}2}$: $i_{2\text{-}1}=i_{1\text{-}2}\,z+i_{1\text{-}1}\,(1-z)$, $i_{2\text{-}2}=i_{1\text{-}4}\,z+i_{1\text{-}3}\,(1-z)$;
S403. Third interpolation: from the $i_{2\text{-}1},i_{2\text{-}2}$ calculated in the previous step, interpolate along the y direction to obtain the value of point $i$: $i=i_{2\text{-}1}\,y+i_{2\text{-}2}\,(1-y)$;
S50. Transfer function design:
convert the three-dimensional voxel attribute values into the optical coefficients used for image synthesis, such as color and opacity, through a mapping relation; mathematically, the transfer function is defined as:
$$\tau: D_1\times D_2\times\cdots\times D_n \rightarrow O_1\times O_2\times\cdots\times O_m$$
where $D$ is the function domain, representing the attribute values of the volume data, i.e. the attribute values of the tunnel environmental parameter data; $O$ is the range of the transfer function, representing the optical coefficients of the volume data after mapping; and $\tau$ is the mapping rule;
transfer functions are divided into pre-interaction and post-interaction transfer functions; a two-dimensional texture image is passed in as an optical-coefficient map to act as the transfer function, the resampling result is input to the transfer function as a parameter, and the color value and opacity of each sampling point are obtained by lookup in the optical-coefficient texture map;
S60. Ray synthesis:
composite along the sampling points in each ray direction, computing the color and opacity of every pixel on the imaging plane according to the ray casting direction, so that a two-dimensional image reflecting the complete environmental parameter volume data is obtained on the screen;
by synthesis direction, ray synthesis is divided into back-to-front and front-to-back compositing;
the back-to-front compositing formula is: $C_{out}=C_{in}(1-a_{now})+C_{now}\,a_{now}$
where $C$ denotes a color and $a$ an opacity value in the range 0 to 1, the transparency and opacity values always summing to 1; $C_0$ is the initial color value, $C$ the final composited color value, $C_j$ and $a_j$ the color and opacity values of the $j$-th sampling point, and $\beta_j=1-a_j$ its transparency;
iterating this formula continuously along the ray direction according to the back-to-front method finally yields the compositing result of the pixel, with the final composited color value expressed as $C=\sum_{j=0}^{n}C_j\,a_j\prod_{k=j+1}^{n}\beta_k$; the front-to-back compositing formulas are: $C_{out}\,a_{out}=C_{in}\,a_{in}+C_{now}\,a_{now}(1-a_{in})$ and $a_{out}=a_{now}(1-a_{in})+a_{in}$;
the spatial semantic constraint rules are constructed as follows:
give the corresponding models and data a unified semantic description and store it effectively, on which basis the rapid fused construction of the tunnel environmental parameter scene is realized;
establish spatial semantic constraint rules for the various spatial relations among the terrain model, the tunnel component model and the tunnel environmental parameter model, comprising spatial position constraints, spatial attitude constraints, spatial scale constraints and spatial topology constraints; under these constraint rules, the fusion of the tunnel component with the three-dimensional terrain and the precise fusion of the tunnel component with the environmental parameters are realized;
the spatial position constraint is:
taking the geocentric rectangular coordinate system as reference, apply a translation transformation to the tunnel component and the tunnel environmental parameter field, transforming both models into the geocentric rectangular coordinate system of the three-dimensional terrain so that they match the real geographic position, with the formula:
$$\begin{bmatrix}X'\\Y'\\Z'\\1\end{bmatrix}=M_t\begin{bmatrix}X\\Y\\Z\\1\end{bmatrix},\qquad M_t=\begin{bmatrix}1&0&0&T_x\\0&1&0&T_y\\0&0&1&T_z\\0&0&0&1\end{bmatrix}$$
representing the translation of the point $(X,Y,Z)$ to the point $(X',Y',Z')$, where $M_t$ is the translation matrix and $(T_x,T_y,T_z)$ are the translation parameters;
the spatial attitude constraint is:
unify the directions of the coordinate axes: since the XYZ axis directions differ between coordinate systems, the axis directions of the different systems are unified through a rotation transformation; the overall rotation matrix $M_r$ is calculated from the matrices $M_x$, $M_y$, $M_z$ of rotations about the X, Y and Z axes as: $M_r=M_x\,M_y\,M_z$;
the rotation matrices about the X, Y and Z axes are respectively:
$$M_x=\begin{bmatrix}1&0&0\\0&\cos\alpha&-\sin\alpha\\0&\sin\alpha&\cos\alpha\end{bmatrix},\quad M_y=\begin{bmatrix}\cos\beta&0&\sin\beta\\0&1&0\\-\sin\beta&0&\cos\beta\end{bmatrix},\quad M_z=\begin{bmatrix}\cos\gamma&-\sin\gamma&0\\\sin\gamma&\cos\gamma&0\\0&0&1\end{bmatrix}$$
where $\alpha$, $\beta$ and $\gamma$ are the rotation angles about the X, Y and Z axes respectively;
the spatial scale constraint is:
used to unify the units of different coordinate systems; since a model in a local coordinate system may be scaled by some ratio, a spatial-scale scaling is applied, i.e. the model is scaled in the X, Y and Z directions, with scaling matrix:
$$M_s=\begin{bmatrix}S_x&0&0&0\\0&S_y&0&0\\0&0&S_z&0\\0&0&0&1\end{bmatrix}$$
where $S_x$, $S_y$ and $S_z$ are the scaling factors on the X, Y and Z axes respectively;
the spatial topology constraint is:
three topological relations are selected as the spatial topology constraint rules: containment, adjacency and intersection, with the formula:
$$T(A,B)=C(A,B)+P(A,B)+I(A,B)$$
where $T$ denotes the topological relation between models $A$ and $B$; $C$ denotes that models $A$ and $B$ are in a containment relation, i.e. model $B$ lies inside model $A$; $P$ denotes that model $A$ is adjacent to model $B$; and $I$ denotes that model $A$ intersects model $B$;
the spatially semantically constrained tunnel environment parameter fusion modeling is:
S100. Perform fusion modeling on the multiple data models of the tunnel scene based on the constructed spatial semantic rules, comprising the fusion of the tunnel component with the terrain and the fusion of the tunnel environmental parameter field with the tunnel component;
S200. The preliminary fusion of the tunnel component with the terrain comprises extracting the tunnel component parameters and performing the fusion modeling operations; extraction of the tunnel component parameters yields its fusion modeling parameters: spatial position (lon, lat, height), spatial attitude (heading, pitch, roll) and scaling (scale);
S300. Under the restriction and guidance of the spatial semantic constraint rules, perform the positioning, rotation and scaling fusion modeling operations on the tunnel component according to its parameters (lon, lat, height, heading, pitch, roll, scale), realizing fusion matching of the tunnel component with the three-dimensional terrain in position, attitude and scale;
S400. At the tunnel portal, the tunnel component and the three-dimensional terrain are in an intersection topological relation; to achieve a seamless join where the portal meets the terrain, the rendering of the terrain in the portal-terrain intersection region must be restricted;
calculate the intersection line of the terrain and the tunnel portal according to the intersection topological relation of the tunnel and the terrain:
S4001. Calculate the position and size of the terrain grid in the intersection region; the outer contour of the tunnel cross-section is regarded as composed of an arc segment and straight-line segments: any point $P_i(x_i,y_i)$ on the arc segment of the tunnel outer contour is calculated from the arc centre and radius given by the cross-section design, and any point $P_j(x_j,y_j)$ on a straight-line segment $P_{n+1}P_{n+2}$ is calculated by linear interpolation between its two endpoints;
after the coordinates of any point of the tunnel cross-section are calculated, compute the intersection line of the tunnel and the terrain; because the cross-section is orthogonal to the terrain slope, the intersection line is obtained by projecting the cross-section along the orthogonal direction; taking the orthogonal direction of the tunnel cross-section as the z axis and the portal slope gradient as 1:M from the design data, the z coordinate of a point $P_i$ on the intersection line is calculated as:
$$Z_i = M\,X_i$$
S4002. Set a reference value for the fusion region and compare it with the stencil value of each terrain fragment in the stencil test stage of the rendering pipeline; judge whether the terrain fragment lies inside the fusion region, discard the fragments inside it and keep those outside it, so that terrain rendering in the intersection region of the tunnel component and the terrain is precisely restricted and a seamless fused display of the tunnel portal and the terrain scene is realized;
S500. According to the defined spatial semantic constraint rules, transform the tunnel environmental parameter field model into the geocentric rectangular coordinate system of the three-dimensional terrain through rotation and translation operations: $M=M_t\,M_r$, where $M$ is the transformation matrix and $M_t$, $M_r$ are the translation and rotation matrices respectively;
taking the containment relation between the environmental parameter field and the tunnel component as the constraint condition, delete the voxels not contained in the tunnel according to the geometric boundary of the tunnel component, with the specific steps:
S5001. Obtain the tunnel component, acquiring its three-dimensional model data, including the shape, dimensions and centerline information of the tunnel;
S5002. Calculate the distance from the tunnel environmental parameter voxels to the tunnel centerline: for each voxel in the cuboid environmental parameter field, compute its shortest distance to the tunnel centerline with a geometric algorithm;
S5003. Delete non-tunnel voxels according to this distance: set a threshold from the tunnel cross-section design parameters and, in each section, delete the voxels whose distance from the tunnel centerline exceeds the threshold;
through S5001-S5003, the part of the environmental parameter field that does not lie inside the tunnel component is eliminated, so that the environmental parameter field and the tunnel component are fused and displayed better.
The above is only a preferred embodiment of the invention, intended only to help understand the method and core idea of the application; the protection scope of the invention is not limited to the above example, and all technical solutions within the concept of the invention belong to its protection scope. It should be noted that those skilled in the art may make modifications and adaptations without departing from the principles of the invention, and these also fall within the protection scope of the invention.
The invention solves the problem in the prior art that, because the terrain, the tunnel components and the tunnel environmental parameters are modeled independently with different modeling methods, significant differences exist in data structures and combination modes and the multiple models in a scene cannot be fused rapidly. Under the spatial semantic constraint rules, the fusion of the tunnel components with the three-dimensional terrain and their accurate fusion with the environmental parameters are realized, the realism of the scene is improved, invalid voxels of the environmental parameters are culled, comparative viewing of the visual effects of the tunnel and environmental parameter field distributions at multiple opacities is supported, and transverse, longitudinal and irregular subdivision is supported for viewing a user's region of interest.

Claims (1)

1. A spatially semantically constrained tunnel environment parameter fusion modeling method, characterized by comprising tunnel basic scene modeling, three-dimensional visual modeling of tunnel scene environmental parameters, construction of spatial semantic constraint rules, and spatially semantically constrained tunnel environment parameter fusion modeling;
the tunnel base scene is modeled as:
S1. Taking a digital elevation model as basic data, construct the three-dimensional terrain grid of the tunnel and perform texture mapping with remote sensing imagery, thereby establishing the three-dimensional terrain scene at the tunnel entrance and above the tunnel; the terrain scene data comprise the coordinates and elevation values of the terrain grid cells, and the coordinates of each terrain grid cell are calculated as:

$$x = x_{start} + G_{col}\,G_{size}, \qquad y = y_{start} + (R - G_{row})\,G_{size}$$

where $x_{start}$ and $y_{start}$ are the terrain origin coordinates, $G_{size}$ is the size of each grid cell, $G_{col}$ and $G_{row}$ are the column and row numbers of the current grid cell, and $R$ is the total number of rows;
S2. Traverse the vertices of each triangle in the triangular mesh model in anticlockwise order, and store the vertex coordinates and their corresponding indices, so that the topological relations among the points, lines and faces of the whole terrain triangular mesh model are completely described and stored;
S3. By storing the vertex data and their index numbers, obtain the terrain coordinate-index data structure; by accessing and reading the terrain coordinate-index data in real time and combining it with the remote sensing imagery for texture mapping of the mesh model, the three-dimensional terrain scene around the tunnel can be constructed quickly;
S4. Standardize the existing tunnel cross-section design drawing, retaining only the lining inner edge line and lining outer edge line and removing other irrelevant lines to obtain a basic sketch of the tunnel cross-section; import the processed cross-section drawing into modeling software, generate the geometric model of the tunnel body through lofting and extrusion operations, and export it to obtain the tunnel component model;
the three-dimensional visual modeling of the tunnel scene environmental parameters is:
S10. Fill the blank areas inside the tunnel by numerical interpolation with the co-kriging method, so that the environmental parameter data are displayed continuously and completely throughout the tunnel space; for a given tunnel environmental parameter, select the other environmental parameters as covariates, with main variable $Z(p_i)$ and covariate $Z_c(p_j)$; the prediction is calculated as:

$$Z^*(p_0)=\sum_{i=1}^{m}\lambda_i Z(p_i)+\sum_{j=1}^{n}\mu_j Z_c(p_j)$$

where $Z^*(p_0)$ is the co-kriging prediction; $Z(p_i)\ (i=1,2,\dots,m)$ are the main-variable environmental parameter data at the sample points; $Z_c(p_j)\ (j=1,2,\dots,n)$ are the covariate environmental parameter data; $\lambda_i$ and $\mu_j$ are the weighting coefficients of the co-kriging model;
introducing the Lagrange multipliers $\phi_1$ and $\phi_2$ yields:

$$\begin{cases}\sum_{i=1}^{m}\lambda_i\,\gamma_1(p_i,p_k)+\sum_{j=1}^{n}\mu_j\,\gamma_{12}(p_j,p_k)+\phi_1=\gamma_1(p_0,p_k), & k=1,\dots,m\\ \sum_{i=1}^{m}\lambda_i\,\gamma_{21}(p_i,p_l)+\sum_{j=1}^{n}\mu_j\,\gamma_2(p_j,p_l)+\phi_2=\gamma_{21}(p_0,p_l), & l=1,\dots,n\\ \sum_{i=1}^{m}\lambda_i=1,\qquad \sum_{j=1}^{n}\mu_j=0\end{cases}$$

where $\gamma_1$ is the variogram model of the main variable, $\gamma_2$ the variogram model of the covariate, and $\gamma_{12}=\gamma_{21}$ the cross-variogram model; solving the above gives the weighting coefficients $\lambda_i$ and $\mu_j$, which are substituted into the prediction formula to obtain the co-kriging estimate $Z^*(p_0)$;
S20. Implement volume rendering with the ray casting algorithm on the graphics processor (GPU), improving rendering speed through hardware-accelerated texture mapping; the algorithm comprises a vertex shader and a fragment shader, where the vertex shader transforms the data vertex coordinates and passes the result to the fragment shader, and the fragment shader performs the ray casting, data resampling, transfer function design and ray synthesis steps;
S30. Ray casting:
sample the tunnel environmental parameter volume data along the emitted rays; before sampling, detect whether each ray intersects the volume data bounding box, and compute the sampling start point and end point;
an axis-aligned bounding box (AABB) is used as the volume data bounding box, and the intersection of a ray with the environmental parameter volume bounding box is solved by a Ray-AABB algorithm;
with the slab-based AABB intersection detection algorithm, the GPU fragment shader determines whether a ray intersects the bounding box and computes the intersection points; a slab is the space between two parallel planes, and the AABB of the environmental parameter volume data is regarded as the intersection of three slabs formed by three pairs of parallel planes;
according to the normal directions of the six faces of the bounding box, the faces are divided into three near faces and three far faces; if the intersection intervals of the ray with the three slabs overlap, the ray necessarily intersects the AABB; the algorithm is:
S301. Establish the ray equation: let the ray origin be $Ray_o$, $dir$ the ray direction vector and $s$ the sampling step along the ray; the ray equation is then $Ray(s)=Ray_o+s\cdot dir$;
S302. Detect whether the ray intersects the AABB: from the ray equation, compute the intersection interval with each slab and test whether the three intervals overlap, with the detection formula $\max(s_{x\,near},s_{y\,near},s_{z\,near})<\min(s_{x\,far},s_{y\,far},s_{z\,far})$, where $s_{x\,near},s_{y\,near},s_{z\,near}$ are the smaller and $s_{x\,far},s_{y\,far},s_{z\,far}$ the larger parameter values of the intersections with the three slabs;
S303. Compute the sampling start and end coordinates: the detected intersections are the points where the ray enters and leaves the bounding box, with start point $Ray_o+\max(s_{x\,near},s_{y\,near},s_{z\,near})\cdot dir$ and end point $Ray_o+\min(s_{x\,far},s_{y\,far},s_{z\,far})\cdot dir$; the length of the segment they determine is the ray traversal distance;
S304. Computing the ray-box intersections in this way removes the invalid sampling, and the results are used in the subsequent resampling and ray synthesis calculations;
S40. Data resampling:
resample the volume data to convert it into a continuous data field; interpolate from the attribute values of the voxels adjacent to each resampling point and take the interpolation result as the value of the sampling point, specifically:
S401. First interpolation: from the attribute values of the eight voxels surrounding the sampling point, interpolate along the x direction to obtain the points $i_{1\text{-}1},i_{1\text{-}2},i_{1\text{-}3},i_{1\text{-}4}$, each computed from an x-aligned voxel pair $(v_0,v_1)$ as $i_{1\text{-}k}=v_1\,x+v_0\,(1-x)$;
S402. Second interpolation: from the $i_{1\text{-}1},i_{1\text{-}2},i_{1\text{-}3},i_{1\text{-}4}$ calculated in S401, interpolate along the z direction to obtain the points $i_{2\text{-}1}$ and $i_{2\text{-}2}$: $i_{2\text{-}1}=i_{1\text{-}2}\,z+i_{1\text{-}1}\,(1-z)$, $i_{2\text{-}2}=i_{1\text{-}4}\,z+i_{1\text{-}3}\,(1-z)$;
S403. Third interpolation: from the $i_{2\text{-}1},i_{2\text{-}2}$ calculated in the previous step, interpolate along the y direction to obtain the value of point $i$: $i=i_{2\text{-}1}\,y+i_{2\text{-}2}\,(1-y)$;
S50, the transfer function is designed as:
converting the three-dimensional voxel attribute values into optical coefficients such as color, opacity and the like for image synthesis through a certain mapping relation, and mathematically defining a transfer function as follows:
τ:D 1 ×D 2 ×…×D n →O 1 ×O 2 ×…×O m
d is a function definition domain, which represents attribute values of the volume data, namely attribute values of tunnel environment parameter data, O is a value domain of a transfer function, which represents optical coefficients of the volume data after being mapped by the transfer function, and τ represents a mapping rule;
the transfer function is divided into a pre-interaction transfer function and a post-interaction transfer function; a two-dimensional texture image is passed in as an optical-coefficient map to play the role of the transfer function; the resampling result is input into the transfer function as a parameter, and the color value and opacity corresponding to the sampling point are obtained through lookup in the optical-coefficient mapping texture, as sketched below;
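For illustration, a minimal sketch of such a lookup-based transfer function follows; the one-dimensional 256-entry RGBA table and the normalisation range are assumptions standing in for the patent's texture map:

    import numpy as np

    def apply_transfer_function(sample_value, tf_lut, v_min, v_max):
        # normalise the resampled attribute value into [0, 1]
        t = (sample_value - v_min) / (v_max - v_min)
        idx = int(np.clip(t, 0.0, 1.0) * (len(tf_lut) - 1))
        rgba = tf_lut[idx]            # lookup in the optical-coefficient map
        return rgba[:3], rgba[3]      # color value, opacity

    tf_lut = np.linspace([0.0, 0.0, 0.0, 0.0], [1.0, 0.5, 0.2, 1.0], 256)  # toy RGBA ramp
    color, opacity = apply_transfer_function(0.7, tf_lut, 0.0, 1.0)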
S60, ray compositing:
compositing is performed according to the sampling points along the ray direction, and the color and opacity of every pixel point on the imaging plane are calculated along the ray-casting direction, so that a two-dimensional image reflecting the complete environment parameter volume data is obtained on the screen;
according to the compositing direction, ray compositing is divided into back-to-front compositing and front-to-back compositing;
the back-to-front compositing formula is: C_out = C_in*(1 - a_now) + C_now*a_now;
wherein C represents the color and a represents the opacity, with a ranging from 0 to 1 and the sum of the transparency value and the opacity value always equal to 1; C_0 is the initial color value of the compositing and C is the final composited color value; C_j, a_j are the color value and opacity value of the jth sampling point, and β_j = 1 - a_j is the transparency value;
according to the back-to-front compositing method, the formula is iterated continuously along the ray direction to finally obtain the compositing result of the pixel point, and the final composited color value is expressed as: C = Σ_{j=0}^{n} C_j·a_j·∏_{k=j+1}^{n} β_k; the front-to-back compositing formulas are: C_out·a_out = C_in·a_in + C_now·a_now·(1 - a_in) and a_out = a_now·(1 - a_in) + a_in;
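For illustration, a minimal front-to-back compositing loop in the spirit of S60 follows, written CPU-side in Python instead of a fragment shader; the early-termination threshold is an added optimisation that is assumed, not taken from the patent:

    def composite_front_to_back(samples):
        # samples: (color, opacity) pairs ordered from the eye outward, e.g.
        # produced by resampling (S40) and the transfer function (S50)
        c_acc, a_acc = 0.0, 0.0  # c_acc holds the premultiplied color C_in*a_in
        for c_now, a_now in samples:
            c_acc += c_now * a_now * (1.0 - a_acc)  # C_out*a_out = C_in*a_in + C_now*a_now*(1 - a_in)
            a_acc += a_now * (1.0 - a_acc)          # a_out = a_in + a_now*(1 - a_in)
            if a_acc >= 0.99:                       # early ray termination (assumption)
                break
        return c_acc, a_acc

    pixel_color, pixel_alpha = composite_front_to_back([(0.8, 0.3), (0.5, 0.6), (0.2, 0.9)])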
The spatial semantic constraint rules are constructed as follows:
a unified semantic description is given to the corresponding models and data and stored effectively, and on this basis the rapid fusion construction of the tunnel environment parameter scene is realized;
spatial semantic constraint rules are established for the various spatial relations among the terrain model, the tunnel component model and the tunnel environment parameter model, including spatial position constraint, spatial attitude constraint, spatial scale constraint and spatial topology constraint; under these constraint rules, the fusion of the tunnel component with the three-dimensional terrain and the precise fusion of the tunnel component with the environment parameters are realized;
the spatial position constraint is:
taking the geocentric rectangular coordinate system as the reference, translation transformation is performed on the tunnel component and the tunnel environment parameter field to transform the tunnel component and environment parameter field models into the geocentric rectangular coordinate system of the three-dimensional terrain, so that they match the real geographic position; the formula is:
(X', Y', Z', 1)^T = M_t · (X, Y, Z, 1)^T, representing the translation of a point (X, Y, Z) to a point (X', Y', Z'), wherein M_t is the translation matrix

    M_t = | 1  0  0  T_x |
          | 0  1  0  T_y |
          | 0  0  1  T_z |
          | 0  0  0   1  |

and (T_x, T_y, T_z) are the translation parameters;
the spatial attitude constraint is:
used to unify the directions of the coordinate axes: because the XYZ axis directions differ between different coordinate systems, the axis directions are unified through rotation transformation; the overall rotation matrix M_r is calculated from the matrices M_x, M_y, M_z rotating about the X, Y and Z axes by the formula: M_r = M_x * M_y * M_z;
the rotation matrices about the X, Y and Z axes are respectively:

    M_x = | 1    0      0   |   M_y = |  cosβ  0  sinβ |   M_z = | cosγ  -sinγ  0 |
          | 0  cosα  -sinα  |         |   0    1   0   |         | sinγ   cosγ  0 |
          | 0  sinα   cosα  |         | -sinβ  0  cosβ |         |  0      0    1 |
wherein α, β and γ represent the rotation angles about the X, Y and Z axes respectively;
the spatial scale constraint is:
used to unify the units under different coordinate systems; because the model in a local coordinate system may be scaled by a certain ratio, spatial scaling is performed, namely the model is scaled in the X, Y and Z directions; the formula is:
the scaling matrix is

    M_s = | S_x   0    0  |
          |  0   S_y   0  |
          |  0    0   S_z |
wherein S_x, S_y and S_z represent the scaling on the X, Y and Z axes respectively;
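For illustration, a minimal sketch composing the position, attitude and scale constraints as homogeneous 4×4 matrices follows; the homogeneous form and the numpy helpers are assumptions made so that the translation, rotation and scaling matrices can be chained by multiplication:

    import numpy as np

    def translation(tx, ty, tz):
        m = np.eye(4)
        m[:3, 3] = (tx, ty, tz)            # translation parameters (T_x, T_y, T_z)
        return m

    def rotation_x(a):                     # M_x
        c, s = np.cos(a), np.sin(a)
        m = np.eye(4)
        m[1:3, 1:3] = [[c, -s], [s, c]]
        return m

    def rotation_y(b):                     # M_y
        c, s = np.cos(b), np.sin(b)
        m = np.eye(4)
        m[0, 0], m[0, 2], m[2, 0], m[2, 2] = c, s, -s, c
        return m

    def rotation_z(g):                     # M_z
        c, s = np.cos(g), np.sin(g)
        m = np.eye(4)
        m[:2, :2] = [[c, -s], [s, c]]
        return m

    def scaling(sx, sy, sz):               # M_s
        return np.diag([sx, sy, sz, 1.0])

    M_r = rotation_x(0.1) @ rotation_y(0.2) @ rotation_z(0.3)  # M_r = M_x * M_y * M_z
    M = translation(100.0, 200.0, 50.0) @ M_r @ scaling(1.0, 1.0, 1.0)
    p = M @ np.array([10.0, 0.0, 0.0, 1.0])  # transform a homogeneous point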
the spatial topology constraint is:
three topological relations are selected as the spatial topology constraint rules: containment, adjacency and intersection; the formula is:
T(A, B) = C(A, B) + P(A, B) + I(A, B), wherein T represents the topological relationship between models A and B; C represents that model A and model B are in a containment relationship, namely model B is located inside model A; P represents that model A is adjacent to model B; I represents that model A intersects model B;
the tunnel environment parameter fusion modeling under spatial semantic constraints is as follows:
S100, fusion modeling is performed on the multiple data models of the tunnel scene based on the constructed spatial semantic rules, including fusion of the tunnel component with the terrain and fusion of the tunnel environment parameter field with the tunnel component;
S200, the preliminary fusion of the tunnel component and the terrain comprises the extraction of tunnel component parameters and the modeling operation of fusing the component with the terrain; through the extraction of the tunnel component parameters, the fusion modeling parameters of spatial position (lon, lat, height), spatial attitude (heading, pitch, roll) and scaling (scale) of the tunnel component are obtained;
S300, positioning, rotation and scaling fusion modeling operations are performed on the tunnel component under the restriction and guidance of the spatial semantic constraint rules according to the relevant parameters (lon, lat, height, heading, pitch, roll, scale) of the tunnel component, realizing the fusion matching of the tunnel component with the three-dimensional terrain in position, attitude and scale;
S400, a topological relation of intersection exists between the tunnel component and the three-dimensional terrain at the tunnel portal; to realize seamless connection where the tunnel portal intersects the terrain, the rendering of the terrain in the intersection area of the tunnel portal and the terrain needs to be restricted;
calculating the intersection line of the terrain and the tunnel portal according to the intersection topological relation of the tunnel and the terrain:
S4001, calculating the position and size of the terrain mesh in the intersection area; the outer contour of the tunnel cross-section is regarded as being composed of an arc segment and straight-line segments, and any point P_i(x_i, y_i) on the arc segment of the outer contour can be calculated; the calculation formula is:
the calculation formula of a point P_j(x_j, y_j) on the straight-line segment P_{n+1}P_{n+2} is:
after the coordinates of any point of the tunnel cross-section are calculated, the intersection line of the tunnel and the terrain is calculated; owing to the orthogonality between the cross-section and the terrain slope, the intersection line is obtained by projecting the cross-section in the orthogonal direction; taking the orthogonal direction of the tunnel cross-section as the z axis, and with the tunnel portal slope being 1:M according to the design data, the z-axis coordinate of a point P_i on the intersection line is calculated by the formula:
Z_i = M * X_i;
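For illustration, a minimal sketch of this projection step follows; how the contour points (x_i, y_i) are sampled from the arc and straight-line segments is left abstract, and only the Z_i = M*X_i relation from the design slope is applied:

    def intersection_line(contour_xy, M):
        # contour_xy: (x_i, y_i) points sampled on the cross-section outer contour;
        # returns the 3D points of the portal/terrain intersection line, z_i = M * x_i
        return [(x, y, M * x) for (x, y) in contour_xy]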
S4002, a reference value is set up in the fusion area and compared with the stencil value of each terrain fragment in the stencil test stage of the rendering pipeline; whether a terrain fragment is located inside the fusion area is judged, the terrain fragments located inside the fusion area are discarded and those outside it are retained, so that the rendering of the terrain in the intersection area of the tunnel component and the terrain is accurately restricted and a seamless fused display of the tunnel portal and the terrain scene is realized;
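For illustration, a minimal CPU-side analogue of this stencil test follows; the patent performs it inside the GPU rendering pipeline, so the buffer layout, the reference value 1 and the comparison rule here are assumptions:

    import numpy as np

    REF = 1  # reference value written for the fusion area

    def build_stencil(height, width, fusion_mask):
        # mark fusion-area pixels with the reference value
        stencil = np.zeros((height, width), dtype=np.uint8)
        stencil[fusion_mask] = REF
        return stencil

    def keep_terrain_fragment(stencil, y, x):
        # discard terrain fragments whose stencil value equals the reference
        return stencil[y, x] != REF

    mask = np.zeros((4, 4), dtype=bool)
    mask[1:3, 1:3] = True                      # toy fusion area
    stencil = build_stencil(4, 4, mask)
    keep_terrain_fragment(stencil, 0, 0)       # True: outside the area, fragment kept
    keep_terrain_fragment(stencil, 1, 1)       # False: inside the area, fragment discarded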
S500, according to the defined spatial semantic constraint rules, the tunnel environment parameter field model is transformed into the geocentric rectangular coordinate system of the three-dimensional terrain through rotation and translation operations: M = M_t * M_r, wherein M is the transformation matrix and M_t, M_r are the translation matrix and rotation matrix respectively;
taking the containment relation between the environment parameter field and the tunnel component as the constraint condition, the voxels not contained in the tunnel are deleted according to the geometric boundary of the tunnel component; the specific steps are as follows:
S5001, acquiring the tunnel component: the three-dimensional model data of the tunnel component are obtained, including the shape, size and centerline information of the tunnel;
S5002, calculating the distance between each tunnel environment parameter voxel and the tunnel centerline: for every voxel in the cubic environment parameter field, the shortest distance between the voxel and the tunnel centerline is calculated with a geometric algorithm;
S5003, deleting voxels outside the tunnel according to the distance: a threshold is set according to the tunnel cross-section design parameters, and in each section the voxels whose distance from the tunnel centerline exceeds the threshold are deleted;
through S5001-S5003, the parts of the environment parameter field that do not belong to the interior of the tunnel component are removed, so that the environment parameter field and the tunnel component are fused and displayed better, as in the sketch below.
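For illustration, a minimal sketch of S5001-S5003 follows; treating the tunnel centerline as a polyline and the cross-section threshold as a single radius are simplifying assumptions:

    import numpy as np

    def point_segment_distance(p, a, b):
        # S5002: shortest distance from point p to the centerline segment a-b
        ab, ap = b - a, p - a
        t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
        return np.linalg.norm(p - (a + t * ab))

    def cull_voxels(voxel_centers, centerline, threshold):
        # S5003: keep only the voxels within `threshold` of the centerline
        kept = []
        for p in voxel_centers:
            d = min(point_segment_distance(p, centerline[i], centerline[i + 1])
                    for i in range(len(centerline) - 1))
            if d <= threshold:
                kept.append(p)
        return kept

    centerline = [np.array([0.0, 0.0, 0.0]), np.array([100.0, 0.0, 0.0])]
    voxels = [np.array([50.0, 2.0, 0.0]), np.array([50.0, 20.0, 0.0])]
    inside = cull_voxels(voxels, centerline, threshold=6.0)  # keeps only the first voxel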

