CN111179398A - Motor vehicle exhaust diffusion simulation and stereoscopic visualization method based on 3DGIS - Google Patents


Info

Publication number: CN111179398A
Application number: CN201911296170.8A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 刘坡, 刘晓霞
Original/Current Assignee: Chinese Academy of Surveying and Mapping
Application filed by Chinese Academy of Surveying and Mapping
Priority: CN201911296170.8A
Legal status: Withdrawn
Prior art keywords: point, motor vehicle, concentration, data, model

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 — 3D [Three Dimensional] image rendering
    • G06T 15/08 — Volume rendering
    • G06T 17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The invention provides a motor vehicle exhaust diffusion simulation and stereoscopic visualization method based on 3DGIS, which builds a stereoscopic visualization framework for 3DGIS-based motor vehicle exhaust diffusion modeling and visualization. First, the CALINE4 model is fully integrated into the 3DGIS, and the dynamic diffusion process is expressed as a continuous field; second, the process data are analyzed visually with a direct volume rendering method, and the diffusion process of the motor vehicle exhaust is expressed dynamically by optimizing key algorithms such as gradient calculation, volume texture resampling and dynamic interpolation; finally, the reliability and efficiency of the method are tested through an exhaust diffusion experiment on an actual road.

Description

Motor vehicle exhaust diffusion simulation and stereoscopic visualization method based on 3DGIS
Technical Field
The invention relates to a three-dimensional visualization processing method, in particular to a motor vehicle exhaust diffusion simulation and stereoscopic visualization method based on 3DGIS.
Background
In cities, the main source of air pollutants is motor vehicle exhaust, and exploring the diffusion rules of urban motor vehicle exhaust and actively searching for improvement measures have become urgent problems in many cities. Studying the exhaust diffusion rules of motor vehicles at the city-block scale, and scientifically and effectively simulating and analyzing the diffusion process and the concentration field distribution, can play a positive and necessary role in pollution-improvement measures. After decades of development, the CALINE4 (California Line Source Dispersion Model 4) model has become the most widely recognized model for simulating the diffusion of road vehicle exhaust; it is also one of the models recommended by the Ministry of Environmental Protection and has been verified in various cities in China.
In recent years many scholars have studied motor vehicle exhaust diffusion models: pollutant diffusion of motor vehicles on different roads in Shenzhen has been simulated based on the CALINE4 model; the actual motor vehicle fleet of Taiyuan (vehicle numbers, vehicle types, vehicle conditions and so on) has been surveyed, and pollutant diffusion in streets has been simulated with CALINE4 and similar models according to the geometric characteristics of Taiyuan's streets and the city's meteorological characteristics; and CAL3QHC, GIS and three-dimensional visualization technology have been integrated to analyze road pollution impacts. In these studies, however, the model and the GIS are independent systems or modules connected only through data, and the developed systems are not easy to maintain. For example, domestic patent application CN201910231133.2, "A simulation method of motor vehicle exhaust diffusion in a three-dimensional scene", discloses such a method. Air pollution diffusion is a three-dimensional dynamic process; 3DGIS has natural advantages for expressing dynamic pollution processes, and fully integrating an air pollution model into the 3DGIS is gradually becoming a development trend. Meanwhile, cities already contain a large number of three-dimensional models, which can be used to improve model precision.
Visualization is an important analysis tool, but common software such as NCAR Graphics and GrADS mainly offers two-dimensional or three-dimensional static display, lacks dynamic display and is difficult to integrate with geographic information. Simulating air pollution on a 3DGIS platform has gradually attracted the attention of experts, but existing research focuses on model mechanism analysis or simple visualization; further research is needed on modeling, visualization and integrated analysis of the air pollution space.
Although the related literature includes studies on visualization of air pollution, the following problems exist. First, diffusion simulation data are generally spatially continuous field data; GIS usually expresses a spatially continuous field with a Voronoi diagram (VD) and its dual Delaunay triangulation (DT), which are inconvenient to visualize. Meanwhile, the 3DGIS contains a large number of building models, vector data, meteorological data and so on, and to support query and visualization a uniform data model is needed to model the multi-source data. Second, the distribution characteristics of pollutants are currently expressed mainly by the iso-surface method: as shown in fig. 1, the concentration distribution is expressed by vertices attached to the building surfaces. This method has the following problems: 1) the pollutant occupies a volume, and an iso-surface cannot completely express the internal structure of the pollutant concentration field; 2) the dynamic diffusion process is difficult to express through discrete points on building surfaces.
Stereoscopic (volume) visualization is an important visualization technology widely applied in computed tomography, medical magnetic resonance imaging, fluid mechanics and other fields. The advantage of volume rendering is that the whole three-dimensional data field can be observed from the resulting image, rather than only an iso-surface of interest, which makes it particularly suited to representing field data. Common volume rendering algorithms include splatting, shear-warp, 3D texture mapping and ray casting; 3D texture mapping is widely applied because of its hardware acceleration. Volume rendering has gradually drawn the attention of geographers: in recent years, spherical expression of large-scale dust data has been realized based on octrees, and spherical expression of meteorological field data has been adapted through coordinate correction from different angles, with good visualization effect. These studies are mainly directed at large-area events and regions and focus on static visualization; relatively little work addresses process modeling and dynamic visualization.
Disclosure of Invention
The invention provides a 3DGIS-based motor vehicle exhaust diffusion simulation and stereoscopic visualization method, which solves the problem of real-time simulation and analysis of urban motor vehicle exhaust diffusion rules, and adopts the following technical scheme:
A 3DGIS-based motor vehicle exhaust diffusion simulation and stereoscopic visualization method comprises the following steps:
s1: integrating a CALINE4 model into 3DGIS to establish a motor vehicle exhaust diffusion modeling and visualization frame;
s2: loading the simulation data of the pollution process based on a direct volume rendering method;
s3: and performing gradient calculation, volume texture resampling and dynamic interpolation, and dynamically and visually expressing the diffusion process of the motor vehicle exhaust.
Further, in step S1, the CALINE4 model integrates over the nth finite line source to obtain the contribution of the pollutant emitted by that line source to the concentration at the measurement point:

C_n = Q_m/(π·u·σ_y·σ_z) · exp(−z²/(2σ_z²)) · ∫_{y1}^{y2} exp(−y²/(2σ_y²)) dy    (1)
and summing to calculate the pollution concentration generated by the whole road pollution source at the measuring point:
C = ∑C_n    (2)
wherein: c is the contaminant concentration at the spatial point (x, y, z), mg/m3;QmThe source strength of the line source is mg/(m × s),
Figure BDA0002320608810000032
q is total source intensity, and y1 and y2 are variables respectively; u is the near-ground wind speed, m/s; sigmayIs a horizontal diffusion parameter; sigmazIs a vertical direction diffusion parameter; l is the length of the line source;
on the road center line, the CALINE4 model treats the pollution source as an infinitely long line source; the pollutant concentration is the same everywhere in the crosswind direction, and the concentration on the line is:

C = √(2/π) · Q_m/(u·σ_z)    (3)

In addition, when the wind direction is not perpendicular to the line source but makes an included angle φ with it, the concentration can be corrected as:

C_φ = C / sin φ    (4)
the pollution source is generally defined in a space absolute coordinate system, and when the pollution concentration of the point to be measured is calculated in the CALINE4 model, the point to be measured is taken as the origin, and the wind direction is taken as the positive direction of the X axis.
Wherein, the total source strength Q = ∑ (hourly vehicle count × percentage of each vehicle type × comprehensive emission factor of that vehicle type); L is the length of the whole road and l is the length of a line source, with l = W·L_f^n, where W is the road width, L_f is the line-source length growth factor and n is the line-source number; L_f = 1.1 + θ³/(2.5×10⁵), where θ is the included angle between the road and the wind direction.
Further, each unit in the model is called a voxel v, where v = (D, T, C): D ⊂ ℝ³ is the spatial domain of the field, i.e. the three-dimensional region (XYZ axes) occupied by the field; T ⊂ ℝ is the time domain of the field, i.e. the time interval of the field's evolution, corresponding to the T axis; and C is the pollution concentration at voxel v, i.e. the voxel color, calculated for each voxel from the diffusion equation. The concentration values are converted into integers from 0 to 255 as follows:

C_byte = round(255 · (Value − V_min)/(V_max − V_min))    (5)

where Value represents a concentration value and V_max, V_min represent the maximum and minimum pollution concentrations in the field data; the result forms a process texture stored in memory or on disk.
Further, in step S1, the data model integrating the 3DGIS and the CALINE4 models includes: 1) three-dimensional building and component models; 2) road data; 3) meteorological data; 4) topographic data; 5) motor vehicle data; 6) dynamic diffusion concentration field model.
Further, in step S3, real-time gradient calculation enhances the visualization effect by using the gray-level change in the neighborhood of each voxel during volume rendering; the gradient is:

∇f = (∂f/∂x)·i + (∂f/∂y)·j + (∂f/∂z)·k    (6)

where x, y and z are the three-dimensional coordinates of a sample point f, and i, j and k are the unit vectors along the coordinate axes.
In step S3, the sampling method first numbers the 8 vertices and 12 edges of a voxel and records the start-point number of each edge, then proceeds as follows:
1) transform the vertices of the volume bounding box into view coordinates with the model-view matrix, and compare their Z values in view coordinates to obtain the closest point minpt and the farthest point maxpt from the viewpoint;
2) obtain the view direction Normal from the inverse matrix invVM of the model-view matrix, compute the nearest and farthest values minv and maxv from the bounding-box points minpt and maxpt, compute the distance between them, and determine the sampling interval d from the sample count Sample;
3) starting from the closest point minpt, solve the intersection of each voxel edge with the tangent plane;
4) compute the intersection points of all edges; several intersections may coincide within a small local range, so repeated points are removed;
5) advance the distance d along the slice vector to obtain the next tangent point, and repeat steps 2)-4) to obtain a new tangent plane until the specified sample count is reached;
6) when the viewpoint changes, repeat steps 1)-5); all triangular faces are uploaded to video memory, the texture is sampled, and the final rendering is composited.
In step 3), from the nearest point sp and the slice vector spn, assuming one edge has start point p0 and end point p1:

h = spn·(sp − p0) / (spn·(p1 − p0)),  P = p0 + h·(p1 − p0)

where h is the linear interpolation parameter on the edge; the geometric and texture coordinates of the intersection point P are obtained from h.
In step S3, dynamic interpolation outputs a simulation value at fixed time intervals during simulation and interpolates the data linearly: assuming a pixel point of the volume texture has observed value Value_t1 at time t_1 and Value_t2 at time t_2, then at an intermediate time t the concentration value at this point is:

Value_t = Value_t1 + (t − t_1)/(t_2 − t_1) · (Value_t2 − Value_t1)    (9)
the invention adopts a complete integration mode, completely integrates the CALINE4 model in the 3DGIS, carries out unified modeling on diffusion process data, realizes the real-time simulation and analysis of the motor vehicle diffusion model, further improves the three-dimensional texture mapping volume rendering method, optimizes key algorithms such as real-time gradient calculation, volume texture resampling, dynamic interpolation and the like based on the dynamic volume visualization technology of the GPU, and realizes the dynamic visualization of the tail gas diffusion process.
Drawings
FIG. 1 is a schematic illustration of air pollution visualization in 3DGIS, with the concentration distribution represented by vertices attached to the building surface;
FIG. 2 is a schematic diagram of the motor vehicle exhaust diffusion simulation and visualization framework coupling the 3DGIS and CALINE4 models;
FIG. 3 is a schematic diagram of the division of the line source;
FIG. 4 is a schematic diagram of a continuous field model;
FIG. 5 is a schematic diagram of a data model integrating a 3DGIS and a diffusion model;
FIG. 6 is a schematic diagram of a dynamic rendering flow;
FIG. 7 is a schematic diagram of GPU gradient-based computation;
FIG. 8 is a schematic diagram of equidistant incremental generation of sampling planes;
FIG. 9a is an edge structure of a voxel topology;
FIG. 9b is a schematic diagram of a counterclockwise ordering of voxel topology;
FIG. 10 is a schematic diagram of dynamic interpolation;
FIG. 11 shows the visualization effect a) without a gradient and b) with a gradient;
FIG. 12 shows the diffusion of the tail gas of the vehicle in different wind directions (a, 0 degrees; b, 60 degrees; c, 120 degrees; d, 180 degrees);
FIG. 13 is a schematic view of the diffusion of motor vehicle exhaust at various times;
FIG. 14 is a schematic diagram of a volume-contour surface analysis.
Detailed Description
The motor vehicle exhaust diffusion simulation and stereoscopic visualization method based on 3DGIS is built on a three-dimensional platform and integrates the CALINE4 model with the dynamic stereoscopic visualization algorithm provided by the invention, forming a 3DGIS-based motor vehicle exhaust diffusion modeling and visualization framework.
Motor vehicle exhaust diffusion modeling and visualization framework
As shown in fig. 2, a basic framework for simulation and visualization of the motor vehicle diffusion process at street scale is provided, which mainly comprises input data of the basic model, model simulation of the diffusion process, expression of the pollution continuous field, and visualization analysis via stereoscopic visualization.
The input data of the basic model comprise city underlying-surface data, meteorological data and leakage source data: the block terrain and building data form the underlying-surface (wall) condition of the air pollution diffusion model, while the meteorological conditions (boundary conditions) and leakage source parameters are used to set the atmospheric boundary and pollution source parameters respectively.
Process modeling targets the diffusion of the spatially continuous field, and the input data provide basic support for visualization analysis. The air pollution diffusion process is expressed dynamically with stereoscopic visualization methods such as volume visualization, slicing and iso-surfaces. The CALINE4 model and the data modeling are described below.
1 CALINE4 model
The basic idea of the CALINE4 model is to divide a road into a series of line-source elements, calculate the contribution of the pollutants discharged by each element to the concentration at a receiving point, and then sum the contributions to obtain the pollutant concentration generated by the whole road source at that point. As shown in fig. 3, integrating over the nth finite line source gives the contribution of the pollutant emitted by that line source to the concentration at the measurement point:

C_n = Q_m/(π·u·σ_y·σ_z) · exp(−z²/(2σ_z²)) · ∫_{y1}^{y2} exp(−y²/(2σ_y²)) dy    (1)
and summing to calculate the pollution concentration generated by the whole road pollution source at the measuring point:
C = ∑C_n    (2)
in the formula: c is the contaminant concentration at the spatial point (x, y, z), mg/m3;QmThe source strength of the line source is mg/(m × s),
Figure BDA0002320608810000081
q is total source intensity, and y1 and y2 are variables respectively; u is the near-ground wind speed, m/s; sigmayIs a horizontal diffusion parameter; sigmazIs a vertical direction diffusion parameter.
The total source strength Q = ∑ (hourly vehicle count × percentage of each vehicle type × comprehensive emission factor of that vehicle type). L is the length of the whole road and l is the length of a line source, with l = W·L_f^n, where W is the road width, L_f is the line-source length growth factor and n is the line-source number; L_f = 1.1 + θ³/(2.5×10⁵), where θ is the included angle between the road and the wind direction.
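The line-source division above can be sketched as follows. This is an illustrative reconstruction, not the patent's code; the helper names (`length_factor`, `element_lengths`) are hypothetical, and the assumption that elements tile the road from n = 0 until its full length is covered is ours.

```python
import math

def length_factor(theta_deg: float) -> float:
    """Line-source length growth factor L_f = 1.1 + theta^3 / 2.5e5 (theta in degrees)."""
    return 1.1 + theta_deg ** 3 / 2.5e5

def element_lengths(road_length: float, width: float, theta_deg: float):
    """Divide a road into line-source elements of length l_n = W * L_f**n,
    starting at n = 0, until the whole road length is covered."""
    lf = length_factor(theta_deg)
    lengths, covered, n = [], 0.0, 0
    while covered < road_length:
        l = min(width * lf ** n, road_length - covered)  # clamp the last element
        lengths.append(l)
        covered += l
        n += 1
    return lengths

# Example: a 500 m road, 10 m wide, wind at 30 degrees to the road axis;
# element lengths grow geometrically away from the receptor.
lengths = element_lengths(500.0, 10.0, 30.0)
```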
On the road center line, the CALINE4 model treats the pollution source as an infinitely long line source; the pollutant concentration is the same everywhere in the crosswind direction, and the concentration on the line is:

C = √(2/π) · Q_m/(u·σ_z)    (3)

In addition, when the wind direction is not perpendicular to the line source but makes an included angle φ with it, the concentration can be corrected as:

C_φ = C / sin φ    (4)
the pollution source is generally defined in a space absolute coordinate system, and when the pollution concentration of the point to be measured is calculated in the CALINE4 model, the point to be measured is taken as the origin, and the wind direction is taken as the positive direction of the X axis.
2 data model
The invention adopts a field data model to express the continuous concentration field, and uses a regular grid to express the pollution concentration for convenient visualization. As shown in fig. 4, each unit in the model is called a voxel v, where v = (D, T, C): D ⊂ ℝ³ is the spatial domain of the field, i.e. the three-dimensional region (XYZ axes) occupied by the field; T ⊂ ℝ is the time domain of the field, i.e. the time interval (T axis) of the field's evolution; and C is the pollution concentration at voxel v, i.e. the voxel color, calculated for each voxel from the diffusion equation. For convenience of subsequent rendering, the concentration values are converted into integers from 0 to 255:

C_byte = round(255 · (Value − V_min)/(V_max − V_min))    (5)

where Value represents a concentration value and V_max, V_min represent the maximum and minimum pollution concentrations in the field data; the result forms a process texture stored in memory or on disk.
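The conversion of a concentration field into an 8-bit process texture can be sketched as follows (an illustrative sketch; the constant-field handling is our assumption, not stated in the patent):

```python
import numpy as np

def to_byte_texture(field: np.ndarray) -> np.ndarray:
    """Map concentration values onto 0..255 integers (Eq. 5) so one time step
    of the field can be stored as an 8-bit process texture."""
    vmin, vmax = field.min(), field.max()
    if vmax == vmin:                       # degenerate field: constant concentration
        return np.zeros(field.shape, dtype=np.uint8)
    scaled = 255.0 * (field - vmin) / (vmax - vmin)
    return np.rint(scaled).astype(np.uint8)

# Example: a tiny 1-D "field" mapped onto byte values
tex = to_byte_texture(np.array([0.0, 0.5, 1.0]))  # -> [0, 128, 255]
```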
Pollution process simulation is carried out in the 3DGIS, and a unified data model is established in the semantic space to represent buildings, roads, trees and the dynamic diffusion process. The data model integrating the 3DGIS and CALINE4 models shown in fig. 5 mainly includes:
1) three-dimensional building and component models: the geometric structure of the building model is expressed with an irregular triangulated network, and the surface characteristics of the building with textures;
2) road data: vector data are used directly if they exist; otherwise the peripheral outline of the road can be extracted from the three-dimensional road model;
3) meteorological data, including the main wind direction, solar altitude angle and so on, which mainly determine the initial parameters of the diffusion model;
4) terrain data, which together with the building models determine the roughness parameters of the model;
5) motor vehicle data: mainly the types of motor vehicles and the emissions of the different vehicle types;
6) the dynamic diffusion concentration field model, which expresses the dynamic diffusion process with time-series volume data.
Dynamic stereoscopic acceleration method based on GPU
As shown in fig. 6, the flow of dynamic stereoscopic visualization is: 1) load the pollution process simulation data from disk into memory according to the current simulation time and viewpoint position, then load them into the texture memory of the graphics hardware to generate a three-dimensional texture, and sample the volume texture according to the viewpoint position; 2) generate the transfer function in real time from the interactive interface settings and upload it to video memory for color mapping; 3) perform texture sampling, color mapping, gradient calculation, illumination calculation and dynamic interpolation in the GPU; 4) obtain the final rendering result by Alpha-blending the colors of the pixels in the sampling surfaces. The invention makes full use of the parallelism of the GPU, placing transfer-function mapping, gradient calculation and dynamic interpolation in the GPU, which improves both the efficiency and the effect of the algorithm.
1 real-time gradient calculation
In volume rendering, to express different rendering effects the gradient of each voxel must be calculated, and the gray-level change in the neighborhood of each voxel is used to enhance the visualization effect. The gradient is:

∇f = (∂f/∂x)·i + (∂f/∂y)·j + (∂f/∂z)·k    (6)

where x, y and z are the three-dimensional coordinates of a sample point f, and i, j and k are the unit vectors along the coordinate axes.
Gradient computation is very time-consuming, and the conventional algorithm has two problems. On one hand, since older hardware does not support per-ray computation, the gradients used for volume rendering must be pre-computed, and both the volume data and the normalized gradients stored in texture memory; this occupies much of the limited memory of common graphics hardware and increases the data volume transmitted over the network. On the other hand, during rendering, tri-linear interpolation is applied to the pre-computed gradients, and the interpolated, no-longer-normalized gradients are used for illumination, causing shading artifacts and reducing image quality. Moreover, after the transfer function changes, the gradients must be recomputed, which hurts the interactivity of the system. One study proposed performing illumination calculation and data classification in software, with the illumination done on the CPU to guarantee rendering quality; this avoids the non-normalized gradient problem but sacrifices real-time rendering.
With the development of graphics hardware, the programmable pipeline of commodity PC GPUs can compute and normalize gradients in real time. The invention provides a GPU-based three-dimensional texture rendering method with real-time gradient calculation: the normalized gradient of each voxel is computed at rendering time, avoiding pre-computing and storing gradients and improving the rendering capability for large-scale volume data. Because the whole rendering process is completed in one rendering pass, interactive rendering speed is achieved. Pseudocode for GPU-based real-time gradient computation is shown in fig. 7.
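The normalized gradient of Eq. (6) can be illustrated on the CPU as follows. This is only a stand-in for the per-fragment GPU computation of fig. 7 (the function name is hypothetical); central differences replace the shader's neighborhood texture fetches:

```python
import numpy as np

def normalized_gradients(volume: np.ndarray) -> np.ndarray:
    """Central-difference gradient of a scalar volume (Eq. 6), normalized per
    voxel, as the illumination computation expects unit-length gradients."""
    gx, gy, gz = np.gradient(volume.astype(np.float64))
    g = np.stack([gx, gy, gz], axis=-1)
    norm = np.linalg.norm(g, axis=-1, keepdims=True)
    norm[norm == 0] = 1.0                  # avoid division by zero in flat regions
    return g / norm

# Example: a linear ramp along x has normalized gradient (1, 0, 0) everywhere
vol = np.fromfunction(lambda x, y, z: x, (4, 4, 4))
grads = normalized_gradients(vol)
```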
2 viewpoint-based volume texture resampling
In texture-mapping-based volume rendering, the most important step is the resampling of the texture planes; the invention adopts an image-space texture resampling method. The sampling planes are perpendicular to the view direction and distributed at equal intervals through the voxel volume; they change with the viewpoint and must be recomputed continuously while roaming, so the efficiency of sampling-plane computation directly determines rendering efficiency. It has been proposed to use the adjacent-edge structure as the traversal order of the volume edges, but that algorithm suffers from blind and repeated traversal. On that basis, later work derived the traversal priority of the volume edges from the order of the traversed voxel vertices and defined a global variable to record the numbers of the traversed edges, but this method excludes some valid sampling planes and its computation cost is large. As shown in fig. 8, during rendering the spatial texture is sampled from front to back or from back to front relative to the viewpoint. The number N of intersection points between a sampling plane and the voxel can be 3, 4, 5 or 6; planes with fewer than 3 points are removed automatically. In the figure, d_0 is the spacing between the sampling planes.
The invention improves the sampling method: as shown in fig. 9a, the 8 vertices and 12 edges of a voxel are numbered and the start-point number of each edge is recorded. The main calculation steps are:
1) Transform the vertices of the volume bounding box into view coordinates with the model-view matrix, and compare their Z values in view coordinates to obtain the closest point minpt and the farthest point maxpt from the viewpoint; in the figure, vertex 7 is closest to the viewpoint and vertex 1 is farthest.
2) Obtain the view direction Normal from the inverse matrix invVM of the model-view matrix, compute the nearest and farthest values minv and maxv from the bounding-box points minpt and maxpt, compute the distance between them, and determine the sampling interval d from the sample count Sample:

d = (maxv − minv) / Sample    (7)
3) Starting from the closest point minpt, solve the intersection of each voxel edge with the tangent plane. From the nearest point sp and the slice vector spn, assuming an edge has start point p0 and end point p1:

h = spn·(sp − p0) / (spn·(p1 − p0))
P = p0 + h·(p1 − p0)    (8)

where h is the linear interpolation parameter on the edge; the geometric and texture coordinates of the intersection point P are obtained from h.
4) The intersection points of all edges are obtained by calculation; several intersections may coincide within a small local range, and repeated points are removed. If the number of intersections is less than 3, the current loop is exited and the next surface is calculated. As shown in fig. 9b, since no edge ordering can be derived, the intersection points are sorted instead. Suppose the tangent plane intersects the voxel at t_6, t_5, t_7 and t_4: first the center o of all intersection points is computed, the quadrant of each intersection point relative to o is determined, and all intersection points are sorted counterclockwise to obtain t_6, t_5, t_4 and t_7, forming the triangular faces t_6 t_5 t_4 and t_6 t_4 t_7 in counterclockwise order.
5) Advance the distance d along the slice vector to obtain the next tangent point, and repeat steps 2)-4) to obtain a new tangent plane, until the specified sampling number is reached.
6) When the viewpoint changes, repeat steps 1)-5); all triangles are transmitted to the video memory, the texture is sampled, and the final rendering effect is obtained by compositing.
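The slicing steps above (edge-plane intersection by formula (8), duplicate removal, and counterclockwise sorting about the center) can be sketched in Python. The function name, tolerances, and the unit-cube test setup are illustrative assumptions, not from the patent:

```python
import numpy as np

def slice_voxel_cube(verts, edges, sp, spn):
    """Intersect one tangent plane with the voxel's 12 edges.
    verts: (8, 3) cube vertices; edges: list of (i, j) vertex index pairs.
    sp: a point on the slice plane; spn: the slice normal.
    Returns the intersection points sorted counterclockwise about their center,
    or [] if fewer than 3 points are found (step 4)."""
    pts = []
    for i, j in edges:
        p0, p1 = verts[i], verts[j]
        denom = np.dot(spn, p1 - p0)
        if abs(denom) < 1e-12:                # edge parallel to the plane
            continue
        h = np.dot(spn, sp - p0) / denom      # linear interpolation component, eq. (8)
        if 0.0 <= h <= 1.0:                   # intersection lies on the edge
            pts.append(p0 + h * (p1 - p0))
    if len(pts) < 3:
        return []
    # remove (near-)duplicate points that coincide in a local range
    uniq = []
    for p in pts:
        if all(np.linalg.norm(p - q) > 1e-9 for q in uniq):
            uniq.append(p)
    # sort counterclockwise around the center o by angle in the slice plane
    o = np.mean(uniq, axis=0)
    u = uniq[0] - o
    u /= np.linalg.norm(u)
    v = np.cross(spn, u)                      # second in-plane basis vector
    ang = [np.arctan2(np.dot(p - o, v), np.dot(p - o, u)) for p in uniq]
    return [uniq[k] for k in np.argsort(ang)]
```

Slicing a unit cube with the plane z = 0.5 yields the expected square of 4 intersection points, which can then be fanned into triangles as in step 4).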
3 Dynamic interpolation
In the simulation process, simulated values are generally output at fixed time intervals; if only these values are drawn, the result jumps between frames and cannot express a continuous dynamic process. To express the dynamic diffusion process, the data are densified along the time axis so that rendering transitions smoothly between 2 key frames.
The invention interpolates the data by linear interpolation. Suppose that for each pixel point of the voxel the observed value at time t1 is Value_t1 and the observed value at time t2 is Value_t2; then the concentration value at an intermediate time t is:

Value_t = Value_t1 + (t − t1) / (t2 − t1) × (Value_t2 − Value_t1)
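As a minimal illustration of this key-frame formula (function and parameter names are illustrative):

```python
def lerp_concentration(value_t1, value_t2, t1, t2, t):
    """Linearly interpolate a voxel concentration between two key frames:
    Value_t = Value_t1 + (t - t1)/(t2 - t1) * (Value_t2 - Value_t1)."""
    w = (t - t1) / (t2 - t1)
    return value_t1 + w * (value_t2 - value_t1)

# e.g. a quarter of the way from frame t1=0 to frame t2=1:
# lerp_concentration(10.0, 20.0, 0.0, 1.0, 0.25) -> 12.5
```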
The traditional interpolation algorithm runs in main memory, which occupies a large amount of storage and must traverse all the data; moreover, since the simulation time changes constantly during rendering, the interpolation must be recomputed repeatedly.
The method makes full use of the parallel computing advantage of the GPU and moves the linear interpolation into the GPU rendering pass. As shown in fig. 10, according to the current time t, the volume textures of the previous key frame t1 and the next key frame t2 are both transmitted to the video memory, the textures of the 2 key frames are sampled at the vertex geometric and texture coordinates computed in the visualization analysis, and the shader code is essentially:

color = lerp(tex3D(Tex3Dt1, texcoord), tex3D(Tex3Dt2, texcoord), (t − t1) / (t2 − t1))
In the formula, Tex3Dt1 and Tex3Dt2 represent the volume texture data at times t1 and t2, texcoord is the texture coordinate of the current rendering point, and the color value at the current time t is interpolated in real time with the linear interpolation function lerp. When the observed value Value_t1 at time t1 is empty, i.e. there is no observation at t1, it defaults to 0, so the interpolated result changes gradually from 0 to Value_t2; this expresses the variable going from nothing to something, and in the visual result an object slowly appears.
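A CPU-side sketch of the same key-frame interpolation over a whole volume, including the default-to-zero behavior when the t1 frame is absent (the function and variable names are assumptions for illustration, not the patent's shader):

```python
import numpy as np

def interp_volume(vol_t1, vol_t2, t1, t2, t):
    """CPU analogue of the GPU lerp: interpolate a whole volume texture
    between key frames at times t1 and t2, evaluated at time t.
    If the t1 frame is missing (None), it defaults to zeros, so values
    fade in from 0 -- the "from nothing to something" effect."""
    if vol_t1 is None:
        vol_t1 = np.zeros_like(vol_t2)
    w = (t - t1) / (t2 - t1)
    return (1.0 - w) * vol_t1 + w * vol_t2
```

Unlike the in-memory per-point loop criticized above, this vectorized form touches each voxel once per frame; the GPU version goes further by evaluating the same lerp per fragment in parallel.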
4 Algorithm comparison
The invention first compares the stereoscopic visualization effects: fig. 11a shows the visualization without gradient shading, and fig. 11b shows the final stereoscopic visualization with gradient shading; adding the gradient brings out fine features. Because the gradient is computed in real time in the GPU, it does not need to be precomputed, and the data volume is only about 25% of that with preprocessing.
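A rough CPU analogue of the real-time gradient computation can be written with central differences via NumPy's `np.gradient`; this is an illustrative sketch of the technique (per-voxel gradient vectors normalized for shading), not the patent's GPU shader:

```python
import numpy as np

def gradient_normals(vol, eps=1e-12):
    """Central-difference gradient of a scalar concentration volume,
    normalized per voxel for use as shading normals in volume rendering."""
    # np.gradient returns one array of partial differences per axis (x, y, z)
    g = np.stack(np.gradient(vol.astype(float)), axis=-1)   # shape (X, Y, Z, 3)
    norm = np.linalg.norm(g, axis=-1, keepdims=True)
    return g / np.maximum(norm, eps)                        # avoid divide-by-zero
```

For a concentration field that increases linearly along X, every interior normal comes out as (1, 0, 0), as expected for a planar gradient.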
The minimum time scale of the temporal change is set to 1 h, and the data are stored on 32 layers of 64 × 64 grid points. The method is first compared with the volume texture resampling algorithm, and the efficiency impact of the dynamic interpolation and gradient calculation algorithms is tested. As table 1 shows, the improved three-dimensional texture mapping volume rendering algorithm improves rendering efficiency by about 30%; the dynamic interpolation algorithm has little influence on efficiency, while the gradient calculation reduces rendering efficiency by 12% on average.
TABLE 1 Rendering efficiency (FPS of rendering)
Third, visual analysis
The CALINE4 model and the dynamic stereoscopic visualization algorithm of the invention were integrated on the Newmap World three-dimensional platform, and a typical open road in an urban area, satisfying the model's applicability conditions, was selected to test the algorithm. The experiments used a Quadro M2000M graphics card with 4 GB of video memory, a 2.9 GHz quad-core processor and 16 GB of physical memory, running 64-bit Windows 7 with a VS2013 .NET programming environment.
Fig. 12 shows how the vehicle exhaust diffuses with the wind when the angle θ between the road and the wind direction is 0, 60, 120 and 180 degrees, respectively. In the figure, red indicates relatively high concentration, blue indicates relatively light pollution, and the arrows indicate the wind direction. Fig. 12a shows the 0-degree case: the pollutant concentration increases along the X-axis direction, and the longitudinal section is wider. Fig. 12c shows the 120-degree case, where the pollutants diffuse along the Y-axis.
Fig. 13 simulates the diffusion of the motor vehicle exhaust at multiple moments, showing the exhaust diffusion on one road at the 1st, 2nd, 3rd and 4th moments of pollution. The figure shows that, over time, the pollution concentration gradually diffuses downwind. At the first moment the pollutants are mainly concentrated at the center of the road; as time goes on, they diffuse farther downwind. The pollutant distribution as a whole conforms to the normal distribution characteristic, and the pollutant concentration gradually decreases with increasing height. Since the pollution concentration around the buildings increases over time, integrating the diffusion model with the building model allows the exhaust diffusion process to be dynamically simulated and analyzed in the 3DGIS.
To better observe the motor vehicle exhaust diffusion, concentration isosurface analysis can also be realized based on the stereoscopic visualization method: as shown in fig. 14, isosurface analysis can be performed along the street surface according to the position. Compared with two-dimensional isosurface methods, the analysis position can be specified arbitrarily, and using trilinear interpolation in the graphics card improves the interpolation precision.
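Trilinear interpolation, the scheme the graphics card applies when sampling the volume texture, can be illustrated as a plain-Python sketch (function name and clamping behavior are illustrative assumptions):

```python
import numpy as np

def trilinear_sample(vol, x, y, z):
    """Trilinear interpolation of a 3D scalar grid at fractional coordinates,
    mimicking hardware volume-texture sampling (coordinates clamped at edges)."""
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    x1 = min(x0 + 1, vol.shape[0] - 1)
    y1 = min(y0 + 1, vol.shape[1] - 1)
    z1 = min(z0 + 1, vol.shape[2] - 1)
    fx, fy, fz = x - x0, y - y0, z - z0
    # interpolate along x on the four cell edges
    c00 = vol[x0, y0, z0] * (1 - fx) + vol[x1, y0, z0] * fx
    c10 = vol[x0, y1, z0] * (1 - fx) + vol[x1, y1, z0] * fx
    c01 = vol[x0, y0, z1] * (1 - fx) + vol[x1, y0, z1] * fx
    c11 = vol[x0, y1, z1] * (1 - fx) + vol[x1, y1, z1] * fx
    # then along y, then along z
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz
```

Because the sample position is continuous, an isosurface slice can be evaluated at any chosen position rather than only at grid nodes, which is the precision advantage noted above.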

Claims (9)

1. A 3DGIS-based motor vehicle exhaust diffusion simulation and stereoscopic visualization method, comprising the following steps:
S1: integrating the CALINE4 model into 3DGIS to establish a motor vehicle exhaust diffusion modeling and visualization framework;
S2: loading the pollution process simulation data based on a direct volume rendering method;
S3: performing gradient calculation, volume texture resampling and dynamic interpolation, and dynamically and visually expressing the diffusion process of the motor vehicle exhaust.
2. The 3DGIS-based motor vehicle exhaust diffusion simulation and visualization method of claim 1, wherein: in step S1, for the nth finite line source integrated by the CALINE4 model, the contribution of the pollutant emitted by that line source to the concentration at the measurement point is:

Cn = Qm / (2·√(2π)·u·σz) · exp(−z²/(2σz²)) · [erf(y1/(√2·σy)) − erf(y2/(√2·σy))]    (1)
and summing gives the pollution concentration produced by the whole road pollution source at the measurement point:

C = ΣCn    (2)
wherein: c is the contaminant concentration at the spatial point (x, y, z), mg/m3;QmThe source strength of the line source is mg/(m × s),
Figure FDA0002320608800000012
q is the total intensity, y1 and y2 are variablesAn amount; u is the near-ground wind speed, m/s; sigmayIs a horizontal diffusion parameter; sigmazIs a vertical direction diffusion parameter; l is the length of the line source;
on the road center line, the CALINE4 model treats the pollution source as an infinitely long line source; the pollutant concentration is the same everywhere in the crosswind direction, and the concentration on the line is:

C = 2·Qm / (√(2π)·u·σz) · exp(−z²/(2σz²))    (3)
in addition, when the wind direction is not perpendicular to the line source but makes an included angle φ with it, the concentration is corrected as:

Cφ = C / sin φ    (4)
the pollution source is generally defined in an absolute spatial coordinate system; when the CALINE4 model calculates the pollution concentration at the point to be measured, the point to be measured is taken as the origin and the wind direction as the positive X-axis direction.
3. The 3DGIS-based motor vehicle exhaust diffusion simulation and stereoscopic visualization method according to claim 2, characterized in that: the total source strength Q = Σ(vehicle number per hour × percentage of a certain vehicle type × comprehensive emission factor); wherein L is the length of the whole road and l is the length of a line source, l = W × Lf^n, where W is the road width, Lf is the line source length growth factor, and n is the line source number; Lf = 1.1 + θ³/(2.5×10⁵), where θ is the included angle between the road and the wind direction.
4. The 3DGIS-based motor vehicle exhaust diffusion simulation and stereoscopic visualization method according to claim 2, characterized in that: each voxel in the model is denoted v, with v = (D, T, C), where D ⊆ R³ is the spatial domain of the field, i.e. the three-dimensional spatial region (XYZ axes) occupied by the field; T ⊆ R is the time domain of the field, i.e. the time interval of the field's evolution, corresponding to the T axis; and C is the pollution concentration at the position of voxel v, i.e. the voxel color, computed for each voxel from the diffusion equation; the concentration values are converted into integers from 0 to 255 by:

C' = 255 × (Value − Vmin) / (Vmax − Vmin)

wherein Value is a concentration value and Vmax, Vmin are respectively the maximum and minimum pollution concentrations in the field data; the result forms a process texture stored in memory or on disk.
5. The 3 DGIS-based motor vehicle exhaust diffusion simulation and visualization method of claim 1, wherein: in step S1, the data model integrating the 3DGIS and the CALINE4 models includes: 1) three-dimensional building and component models; 2) road data; 3) meteorological data; 4) topographic data; 5) motor vehicle data; 6) dynamic diffusion concentration field model.
6. The 3DGIS-based motor vehicle exhaust diffusion simulation and visualization method of claim 1, wherein: in step S3, the real-time gradient calculation enhances the visualization effect using the gray-level change in the neighborhood of each voxel pixel during volume rendering, with the gradient formula:

grad f(x, y, z) = (∂f/∂x)·i + (∂f/∂y)·j + (∂f/∂z)·k

wherein x, y and z are the three-dimensional coordinates of a pixel point f, and i, j and k are the unit vectors along the three coordinate axes.
7. The 3DGIS-based motor vehicle exhaust diffusion simulation and visualization method of claim 1, wherein: in step S3, the sampling method first numbers the 8 vertices and 12 edges of a voxel and records the start point number of each edge, and comprises the following steps:
1) transforming the vertices of the volume bounding box into observation coordinates according to the observation model matrix, and comparing the vertex Z values in observation coordinates to obtain the point minpt closest to the viewpoint and the farthest point maxpt;
2) obtaining the view direction Normal from the inverse matrix invVM of the observation model matrix, calculating minv and maxv from the nearest point minpt and the farthest point maxpt of the volume bounding box, calculating the distance between them, and determining the sampling interval d from the sampling number Sample;
3) starting from the nearest point minpt, solving the intersection point of each voxel edge with the tangent plane;
4) calculating the intersection points of all edges, several of which may coincide at one point in a local range, and removing the duplicated points;
5) advancing the distance d along the slice vector to obtain the next tangent point, and repeating steps 2)-4) to obtain a new tangent plane, until the specified sampling number is reached;
6) when the viewpoint changes, repeating steps 1)-5), transmitting all triangles to the video memory, sampling the texture, and compositing to obtain the final rendering effect.
8. The 3DGIS-based motor vehicle exhaust diffusion simulation and visualization method of claim 7, wherein: in step 3), according to the nearest point sp and the slice normal spn, and assuming an edge with start point p0 and end point p1, the formulas are:

h = spn·(sp − p0) / (spn·(p1 − p0))
p = p0 + h·(p1 − p0)

wherein h is the linear interpolation component on the edge, and the geometric and texture coordinates of the intersection point p are computed from h.
9. The 3DGIS-based motor vehicle exhaust diffusion simulation and visualization method of claim 1, wherein: in step S3, the dynamic interpolation outputs a simulated value at fixed time intervals during the simulation, and interpolates the data linearly: assuming that for each pixel point of the voxel the observed value at time t1 is Value_t1 and the observed value at time t2 is Value_t2, the concentration value at an intermediate time t is:

Value_t = Value_t1 + (t − t1) / (t2 − t1) × (Value_t2 − Value_t1)
CN201911296170.8A 2019-12-16 2019-12-16 Motor vehicle exhaust diffusion simulation and stereoscopic visualization method based on 3DGIS Withdrawn CN111179398A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911296170.8A CN111179398A (en) 2019-12-16 2019-12-16 Motor vehicle exhaust diffusion simulation and stereoscopic visualization method based on 3DGIS


Publications (1)

Publication Number Publication Date
CN111179398A true CN111179398A (en) 2020-05-19

Family

ID=70646613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911296170.8A Withdrawn CN111179398A (en) 2019-12-16 2019-12-16 Motor vehicle exhaust diffusion simulation and stereoscopic visualization method based on 3DGIS

Country Status (1)

Country Link
CN (1) CN111179398A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112597260A (en) * 2020-12-17 2021-04-02 中科三清科技有限公司 Visualization method and device for air quality mode forecast data

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1481512A (en) * 2000-12-15 2004-03-10 Ŭ���Ƽ����޹�˾ Location-based weather nowcast system and method
CN102495854A (en) * 2011-11-18 2012-06-13 中国测绘科学研究院 Method for realizing dynamic label placement based on basic state modification
CN107947967A (en) * 2017-11-01 2018-04-20 许继集团有限公司 A kind of high-tension apparatus presence detecting system of plug and play
CN110175345A (en) * 2019-03-26 2019-08-27 中国测绘科学研究院 The analogy method that motor-vehicle tail-gas is spread under a kind of three-dimensional scenic


Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
WU, MIN-RU; LI, WEI-ZHONG; TUNG, CHUN-YI: "NO gas sensor based on ZnGa2O4 epilayer grown by metalorganic chemical vapor deposition", Scientific Reports *
DING Yulin, HE Xiaobo, ZHU Qing, LIN Hui, HU Mingyuan: "Dynamic optimization of indoor fire evacuation paths based on real-time threat situation awareness", Acta Geodaetica et Cartographica Sinica *
WEN Jialong: "Study on regional atmospheric environment conditions and their influence on efficient solar energy utilization", CNKI *
CAO Xiaoguang, LEI Yang: "Study on the diffusion of automobile emissions in green environments", Development & Innovation of Machinery & Electrical Products *
WANG Ying, LI Chengming, ZHAO Zhanjie, LIU Zhendong, WANG Fei, LIU Po: "Urban building influence on the diffusion parameters of the CALINE4 model", Science of Surveying and Mapping *
WANG Ying: "Research on motor vehicle exhaust diffusion simulation and three-dimensional dynamic visualization", CNKI *
XU Wangding: "Numerical simulation and experimental study of the LLDPE/SBS blending process", CNKI *
CHEN Jing, ZOU Cheng, HUANG Wumeng, LIU Boyang: "A visualization method for 3D meteorological fields oriented to virtual globes", Geomatics and Information Science of Wuhan University *


Similar Documents

Publication Publication Date Title
KR101085390B1 (en) Image presenting method and apparatus for 3D navigation, and mobile apparatus comprising the same apparatus
CN105336003A (en) Three-dimensional terrain model real-time smooth drawing method with combination of GPU technology
CN108537869B (en) Cone tracking dynamic global illumination method based on cascade texture
US20080012853A1 (en) Generating mesh from implicit surface
Liang et al. Visualizing 3D atmospheric data with spherical volume texture on virtual globes
Kaufman Voxels as a computational representation of geometry
CN102831634B (en) Efficient accurate general soft shadow generation method
CN113436308A (en) Three-dimensional environment air quality dynamic rendering method
CN113593051A (en) Live-action visualization method, dam visualization method and computer equipment
CN114511659B (en) Volume rendering optimization method under digital earth terrain constraint
Hufnagel et al. A survey of cloud lighting and rendering techniques
Vyatkin et al. Voxel Volumes volume-oriented visualization system
CN111179398A (en) Motor vehicle exhaust diffusion simulation and stereoscopic visualization method based on 3DGIS
Laine et al. Hierarchical penumbra casting
Yang et al. Game engine support for terrain rendering in architectural design
CN117152334B (en) Three-dimensional simulation method based on electric wave and meteorological cloud image big data
Reis et al. High-quality rendering of quartic spline surfaces on the GPU
Li et al. Research on Landscape Architecture Modeling Simulation System Based on Computer Virtual Reality Technology
Bittner Hierarchical techniques for visibility determination
Bolla High quality rendering of large point-based surfaces
EP3940651A1 (en) Direct volume rendering apparatus
Favorskaya et al. Large scene rendering
Collado et al. Guided modeling of natural scenarios: vegetation and terrain
CN115222880A (en) Three-dimensional cloud scene programmed modeling method, device and equipment based on atmosphere layered model
Akdemir et al. Right-triangular subdivision for texture mapping ray-traced objects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200519