CN115546374B - Digital twin three-dimensional scene rendering method, device and equipment and medium - Google Patents

Digital twin three-dimensional scene rendering method, device, equipment and medium

Info

Publication number
CN115546374B
CN115546374B (application CN202211496053.8A)
Authority
CN
China
Prior art keywords
pixel point
scene
dimensional
determining
ground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211496053.8A
Other languages
Chinese (zh)
Other versions
CN115546374A (en)
Inventor
杨斌
贺业凤
王秋茹
邢迎伟
王彩宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Jerei Digital Technology Co Ltd
Original Assignee
Shandong Jerei Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Jerei Digital Technology Co Ltd filed Critical Shandong Jerei Digital Technology Co Ltd
Priority to CN202211496053.8A priority Critical patent/CN115546374B/en
Publication of CN115546374A publication Critical patent/CN115546374A/en
Application granted granted Critical
Publication of CN115546374B publication Critical patent/CN115546374B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The application discloses a digital twin three-dimensional scene rendering method, device, equipment and medium, relating to the technical field of three-dimensional rendering. The method comprises: constructing a virtual scene containing ground relief in a three-dimensional engine according to a real scene, and acquiring real weather data; determining the relative water-accumulation depths of different pixel points in the virtual scene according to the real weather data, and determining the ground wetting effect at each relative water-accumulation depth; and performing three-dimensional rendering of the virtual scene with the three-dimensional engine based on the ground wetting effects at the different water-accumulation depths. Because the relative water-accumulation depths of different pixel points and the corresponding ground wetting effects are derived from real weather data, the simulated effect is realistic, and real-time, high-fidelity restoration of the digital twin scene is achieved.

Description

Digital twin three-dimensional scene rendering method, device and equipment and medium
Technical Field
The present disclosure relates to the field of three-dimensional rendering technologies, and in particular, to a method and an apparatus for rendering a digital twin three-dimensional scene, an electronic device, and a computer-readable storage medium.
Background
Real-time modeling and rendering of real natural scenes is one of the hot spots, and difficulties, in building high-fidelity scenes with digital twin technology. Among the various virtual natural phenomena, the virtual-real mapping of the ground in rainy weather plays an indispensable role in faithfully reproducing a rainy virtual scene.
In the related art, a path-tracing technique combined with the Fresnel formula is used to simulate and draw the road-surface wetting effect in a pre-built post-rain scene. However, because the target is a pre-built virtual scene, the calculation assumes a fixed water-accumulation area and depth. It cannot dynamically express water-accumulation depth that changes in real time with actual rainfall and real terrain relief, so the virtual-real mapping looks rigid, the simulation effect is unrealistic, and the requirement for real-time, high-fidelity restoration of a digital twin scene cannot be met.
Disclosure of Invention
The application aims to provide a digital twin three-dimensional scene rendering method and device, an electronic device and a computer-readable storage medium, so as to solve the problems in the prior art that the virtual-real mapping of the ground in rainy weather is rigid, the simulation effect is unrealistic, and the requirement for real-time, high-fidelity restoration of a digital twin scene cannot be met.
In order to achieve the above object, the present application provides a digital twin three-dimensional scene rendering method, including:
constructing a virtual scene containing ground fluctuation in a three-dimensional engine according to a real scene, and acquiring real weather data;
determining the relative water accumulation depths of different pixel points in the virtual scene according to the real weather data, and determining the ground wetting effect under different relative water accumulation depths;
and performing three-dimensional rendering on the virtual scene by utilizing the three-dimensional engine based on the ground wetting effect under different water accumulation depths.
Wherein, the constructing a virtual scene containing ground relief in the three-dimensional engine according to the real scene comprises:
constructing a virtual scene in a three-dimensional engine;
generating a height map of the virtual terrain according to the height data of the real scene;
and generating a ground surface with high and low undulations corresponding to a real scene in the virtual scene according to the height map.
The real weather data comprise ponding data of different pixel points; determining the relative water accumulation depth of different pixel points in the virtual scene according to the real weather data, comprising:
determining the height value of each pixel point in the virtual scene;
determining the ratio of the water accumulation data of each pixel point in the virtual scene to the historical highest water accumulation data as the relative water accumulation degree of each pixel point;
determining the height value of each pixel point and the relative water accumulation depth corresponding to the relative water accumulation degree according to the target corresponding relation; wherein the target correspondence describes a linear relationship between a product of the height value and the relative water accumulation degree and the relative water accumulation depth.
Wherein the determining the ground wetting effect at different relative water-accumulation depths includes:
mixing the concave-convex material on the ground and the material of the accumulated water according to the relative accumulated water depth of each pixel point to obtain the final concave-convex material of each pixel point;
and determining the ground wetting effect of each pixel point based on the final concave-convex material of each pixel point.
Wherein the determining the ground wetting effect of each pixel point based on the final concave-convex material of each pixel point includes:
calculating the reflection value of each pixel point relative to the user visual angle based on the final concave-convex material of each pixel point;
calculating the reflectivity of each pixel point to the surrounding environment based on the reflection value of each pixel point relative to the user viewing angle;
calculating the Fresnel value of each pixel point based on the reflectivity of each pixel point to the surrounding environment according to a Fresnel formula;
and performing mixed calculation on the difference value of the ground reflection and the accumulated water reflection of each pixel point according to the Fresnel value of each pixel point, and superposing the high light reflection effect to obtain the ground wetting effect of each pixel point.
Wherein the real weather data comprises rainfall; after the determining the ground wetting effect at different relative water-accumulation depths, the method further includes:
determining a ripple diffusion effect of the raindrops touching the accumulated water according to the rainfall;
correspondingly, the utilizing the three-dimensional engine to perform three-dimensional rendering on the virtual scene based on the ground wetting effect under different ponding depths comprises:
and performing three-dimensional rendering on the virtual scene by using the three-dimensional engine based on ground wetting effects under different water accumulation depths and ripple diffusion effects of raindrops touching the water accumulation.
Wherein the determining, according to the rainfall, the ripple diffusion effect of raindrops touching the accumulated water includes:
determining the rainfall speed according to the rainfall;
acquiring the random occurrence time of ripples, and calculating the speed at which ripples appear and disappear according to the random occurrence time of the ripples and the rainfall speed;
acquiring an initial ripple sampling height, and calculating a ripple amplitude according to the initial ripple sampling height and the rainfall speed;
determining the ripple diffusion effect based on the speed at which the ripples disappear and the ripple amplitude.
To achieve the above object, the present application provides a digital twin three-dimensional scene rendering apparatus, comprising:
the building module is used for building a virtual scene containing ground fluctuation in the three-dimensional engine according to the real scene and acquiring real weather data;
the first determining module is used for determining the relative water accumulation depths of different pixel points in the virtual scene according to the real weather data and determining the ground wetting effect under different relative water accumulation depths;
and the rendering module is used for performing three-dimensional rendering on the virtual scene based on the ground wetting effect under different water accumulation depths by utilizing the three-dimensional engine.
To achieve the above object, the present application provides an electronic device including:
a memory for storing a computer program;
a processor for implementing the steps of the three-dimensional scene rendering method as described above when executing the computer program.
To achieve the above object, the present application provides a computer-readable storage medium having stored thereon a computer program, which when executed by a processor, implements the steps of the three-dimensional scene rendering method as described above.
According to the scheme, the three-dimensional scene rendering method provided by the application comprises the following steps: constructing a virtual scene containing ground fluctuation in a three-dimensional engine according to a real scene, and acquiring real weather data; determining the relative water accumulation depths of different pixel points in the virtual scene according to the real weather data, and determining the ground wetting effect under different relative water accumulation depths; and performing three-dimensional rendering on the virtual scene by utilizing the three-dimensional engine based on the ground wetting effect under different water accumulation depths.
According to the three-dimensional scene rendering method, the relative water-accumulation depths of different pixel points in the virtual scene and the ground wetting effects at those depths are determined from real weather data, so the simulation effect is realistic and the digital twin scene is restored with high fidelity in real time. The method simplifies the calculation of the post-rain wetting effect, making the computation simpler and more efficient while improving the expressiveness of the effect. By twinning the real scene, the effect of rain gradually wetting the ground can be presented in the virtual scene, and viewers can judge the amount of rainfall visually without consulting rainfall data; the realism is high, increasing the expressiveness and immersion of the virtual scene. The application also discloses a three-dimensional scene rendering apparatus, an electronic device and a computer-readable storage medium that achieve the same technical effects.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort. The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting it. In the drawings:
FIG. 1 is a flow diagram illustrating a method of rendering a three-dimensional scene in accordance with an exemplary embodiment;
FIG. 2 is a flow diagram illustrating another method of rendering a three-dimensional scene in accordance with an exemplary embodiment;
FIG. 3 is a diagram illustrating a non-raining ground effect according to an exemplary embodiment;
FIG. 4 is a diagram illustrating a rainy ground effect for a light rain according to an exemplary embodiment;
FIG. 5 is a diagram illustrating a rainy ground effect for heavy rain according to an exemplary embodiment;
FIG. 6 is a block diagram illustrating a three-dimensional scene rendering apparatus according to an exemplary embodiment;
FIG. 7 is a block diagram illustrating an electronic device in accordance with an exemplary embodiment.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application. In addition, in the embodiments of the present application, "first", "second", and the like are used for distinguishing similar objects, and are not necessarily used for describing a specific order or a sequential order.
The embodiment of the application discloses a three-dimensional scene rendering method, aiming to solve the problems in the prior art that the virtual-real mapping of the ground in rainy weather is rigid, the simulation effect is unrealistic, and the requirement for real-time, high-fidelity restoration of a digital twin scene cannot be met.
Referring to fig. 1, a flowchart of a three-dimensional scene rendering method according to an exemplary embodiment is shown, as shown in fig. 1, including:
s101: constructing a virtual scene containing ground fluctuation in a three-dimensional engine according to a real scene, and acquiring real weather data;
in this step, a virtual scene with corresponding ground relief is constructed in the three-dimensional engine according to the real scene; the scene may include road surfaces, buildings, vegetation, raindrops generated by a particle system, and the like. The three-dimensional engine in this embodiment may be Unity, J3D, or the like.
As a possible implementation, the building a virtual scene containing ground relief from a real scene in a three-dimensional engine includes: constructing a virtual scene in a three-dimensional engine; generating a height map of the virtual terrain according to the height data of the real scene; and generating a ground surface with high and low undulations corresponding to a real scene in the virtual scene according to the height map.
In specific implementation, a virtual scene is constructed based on a three-dimensional engine, a height map of a virtual terrain is generated according to GIS height data of a real scene, and a ground with high and low undulations corresponding to an actual scene is generated in the virtual scene according to the height map.
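The height-map step above can be sketched on the CPU side. This is a minimal illustration under assumed conventions (not the patent's implementation): the Alpha channel already holds heights remapped to 0-1, and the real scene's elevation range is known; all names and the 2x2 sample map are hypothetical.

```python
def heightmap_to_elevation(alpha_rows, min_h, max_h):
    """alpha_rows: rows of Alpha-channel height samples in [0, 1], one per pixel.
    Returns world-space elevations linearly remapped into [min_h, max_h]."""
    span = max_h - min_h
    return [[min_h + max(0.0, min(1.0, a)) * span for a in row] for row in alpha_rows]

# Hypothetical 2x2 height map; real terrain assumed to span 20 m to 120 m.
heights = heightmap_to_elevation([[0.0, 0.5], [1.0, 0.25]], min_h=20.0, max_h=120.0)
```

Each virtual-terrain vertex would then be displaced by the elevation of its corresponding height-map pixel.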
Further, the real weather data of the real geographic area corresponding to the virtual scene are read and stored; they may include whether it is raining, the rainfall amount, water-accumulation data, and the like.
S102: determining the relative water accumulation depths of different pixel points in the virtual scene according to the real weather data, and determining the ground wetting effect under different relative water accumulation depths;
in this step, the surface water-accumulation area and the water-surface rendering effect at different water-accumulation depths are controlled by twinning the water-accumulation data. Specifically, the relative water-accumulation depths of different pixel points in the virtual scene are determined from the water-accumulation data, and the ground wetting effects at those depths are then determined for each pixel point in the virtual scene.
As a possible implementation, the real weather data includes water accumulation data of different pixel points; determining the relative water accumulation depth of different pixel points in the virtual scene according to the real weather data, comprising: determining the height value of each pixel point in the virtual scene; determining the ratio of the accumulated water data of each pixel point in the virtual scene to the historical highest accumulated water data as the relative accumulated water degree of each pixel point; determining the height value of each pixel point and the relative water accumulation depth corresponding to the relative water accumulation degree according to the target corresponding relation; wherein the target correspondence describes a linear relationship between a product of the height value and the relative water accumulation degree and the relative water accumulation depth.
In a specific implementation, a linear correspondence is established from the water-accumulation data, the historical maximum water-accumulation data, and an Alpha-channel sample of the height map, yielding the relative water-accumulation height of each pixel point in three-dimensional space; the depth and area of surface water accumulation are thereby controlled in real time. The height map of a given area can be obtained directly from mapping software; its principle is to remap the area's height range to 0-1 and store the height information in the map's Alpha channel. Each pixel point reads its height value, in the range 0-1, from the Alpha channel according to its height-map UV coordinates. Dividing the acquired water-accumulation data by the area's historical maximum gives the current relative water-accumulation degree. Because water accumulation in each area is also affected by factors such as drainage speed, a user-adjustable water-accumulation parameter for the current area is exposed, allowing the water-accumulation data, height and area to be tuned to actual conditions. The relative water-accumulation depth of each pixel point is the product of its height value, the relative water-accumulation degree, and the adjustment parameter.
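The per-pixel depth computation described above can be sketched in a few lines; this mirrors the shader-side saturate of height x relative degree x adjustment parameter, with all names being illustrative:

```python
def relative_water_depth(height_alpha, water_now, water_max_hist, adjust=1.0):
    """height_alpha: Alpha-channel height sample in [0, 1].
    water_now / water_max_hist: current vs. historical-maximum water data.
    adjust: user-tunable per-area parameter (drainage speed, etc.)."""
    degree = water_now / water_max_hist      # relative water-accumulation degree
    depth = height_alpha * degree * adjust   # linear relation from the patent
    return max(0.0, min(1.0, depth))         # saturate() equivalent: clamp to [0, 1]
```

For example, a pixel at half the remapped height, with current water at half the historical maximum, gets a relative depth of 0.25; the clamp keeps extreme inputs inside the sampling range.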
As a possible embodiment, the determining the ground wetting effect at different relative water accumulation depths comprises: mixing the concave-convex material of the ground and the material of the accumulated water according to the relative accumulated water depth of each pixel point to obtain the final concave-convex material of each pixel point; and determining the ground wetting effect of each pixel point based on the final concave-convex material of each pixel point.
In a specific implementation, the concave-convex (bump) material of the ground and the material of the accumulated water are blended according to the relative water-accumulation depth of each pixel point to obtain the final bump material of the area; the higher the water level, the more pronounced the water-surface effect. Following the post-rain wetting-effect algorithm, the ground wetting effect for different scenes and water-accumulation depths is drawn in real time using the Fresnel formula and the water-reflection theory.
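The depth-weighted blend of ground and water materials is essentially a linear interpolation, as in a shader's lerp(). A hypothetical sketch (renormalization of the blended normal, which a real shader would apply, is omitted for brevity):

```python
def blend_bump(ground_normal, water_normal, depth):
    """Linearly interpolate between the ground bump normal and the (flat)
    water-surface normal; depth in [0, 1]. Equivalent to lerp(ground, water, depth):
    deeper water pulls the result toward the smooth water normal."""
    return tuple(g + (w - g) * depth for g, w in zip(ground_normal, water_normal))
```

At depth 0 the dry bump normal is kept unchanged; at depth 1 the pixel behaves like a flat water surface.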
As a possible implementation manner, the determining the ground wetting effect of each pixel point based on the final concave-convex material of each pixel point includes: calculating a reflection value of each pixel point relative to a user visual angle based on the final concave-convex material of each pixel point; calculating the reflectivity of each pixel point to the surrounding environment based on the reflectivity value of each pixel point relative to the user visual angle; calculating the Fresnel value of each pixel point based on the reflectivity of each pixel point to the surrounding environment according to a Fresnel formula; and performing mixed calculation on the difference value of the ground reflection and the accumulated water reflection of each pixel point according to the Fresnel value of each pixel point, and superposing the high light reflection effect to obtain the ground wetting effect of each pixel point.
In a specific implementation, the reflectivity of the accumulated water to the surrounding environment is calculated based on the water-reflection theory, and the Fresnel value of the current water-surface wetting effect is calculated with the Fresnel formula. If it is not raining, the ground texture is sampled normally; if it is raining, the ground and water reflections are blended by interpolation according to the Fresnel value, and a specular highlight is superimposed, finally presenting the post-rain ground wetting effect.
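The Fresnel-weighted blend can be sketched as follows. It uses the Schlick-style formula given later in the description; the coefficient values, names and the omission of the specular term are assumptions for illustration:

```python
def wet_ground_color(ground_rgb, reflection_rgb, v_dot_n,
                     f_base=0.02, f_scale=0.98, power=5.0):
    """Blend ground albedo with the water reflection by the Fresnel value:
    grazing views (v_dot_n near 0) show mostly reflection, while top-down
    views (v_dot_n near 1) show mostly ground. Specular highlight omitted."""
    fresnel = f_base + f_scale * (1.0 - v_dot_n) ** power
    return tuple(g + (r - g) * fresnel for g, r in zip(ground_rgb, reflection_rgb))
```

Looking straight down (v_dot_n = 1) the blend stays close to the ground color; at grazing angles the water reflection dominates, which is exactly the wet-road look the patent describes.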
S103: and performing three-dimensional rendering on the virtual scene by utilizing the three-dimensional engine based on the ground wetting effect under different water accumulation depths.
In this step, the three-dimensional engine renders the virtual scene in real time based on the ground wetting effects at different water-accumulation depths, forming a digital-twin-based real-time virtual-real mapping of the rainy ground.
According to the three-dimensional scene rendering method provided by the embodiment of the application, the relative water-accumulation depths of different pixel points in the virtual scene and the ground wetting effects at those depths are determined from real weather data, so the simulation effect is realistic and the digital twin scene is restored with high fidelity in real time. The embodiment simplifies the calculation of the post-rain wetting effect, making the computation simpler and more efficient while improving its expressiveness. By twinning the real scene, the effect of rain gradually wetting the ground can be shown in the virtual scene, and viewers can judge the amount of rainfall visually without consulting rainfall data; the realism is high, increasing the expressiveness and immersion of the virtual scene.
The embodiment of the application discloses a three-dimensional scene rendering method, and compared with the previous embodiment, the embodiment further explains and optimizes the technical scheme. Specifically, the method comprises the following steps:
referring to fig. 2, a flowchart illustrating another three-dimensional scene rendering method according to an exemplary embodiment is shown, as shown in fig. 2, including:
s201: constructing a virtual scene containing ground fluctuation in a three-dimensional engine according to a real scene, and acquiring real weather data;
s202: determining relative water accumulation depths of different pixel points in the virtual scene according to the real weather data, and determining ground wetting effects under different relative water accumulation depths;
s203: determining ripple diffusion effect of the accumulated water touched by the raindrops according to the rainfall;
in this step, the speed at which raindrops hit the accumulated water and the ripple diffusion effect of the water are controlled by twinning the real-time rainfall.
As a possible implementation, the determining the ripple diffusion effect of raindrops touching the accumulated water according to the rainfall includes: determining the rainfall speed according to the rainfall amount; acquiring the random occurrence time of ripples, and calculating the speed at which ripples appear and disappear from the random occurrence time and the rainfall speed; acquiring an initial ripple sampling height, and calculating the ripple amplitude from the initial sampling height and the rainfall speed; and determining the ripple diffusion effect based on the ripple disappearance speed and the ripple amplitude.
In a specific implementation, the initial ripple sampling height is stored in the g and b channels of a ripple sampling map, and the time at which a ripple randomly appears is stored in the alpha channel, forming an RGBA ripple sampling map. The alpha channel (a linear gradient from the ripple center outward, in the range 1-0, giving the initial ripple lifetime), the rainfall speed, and the engine's own time parameter together control the speed at which ripples appear and disappear. The g and b channels are sampled and, together with the rainfall, form a mathematical model of the water-surface ripple amplitude, which controls the amplitude of the ripples.
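A toy CPU-side model of the ripple behaviour described above; every constant and name here is an illustrative assumption, not the patent's actual shader:

```python
def ripple_amplitude(sample_gb, rain_speed, base_amp=0.05):
    """Amplitude grows with the sampled g/b height and the rainfall speed."""
    return base_amp * sample_gb * rain_speed

def ripple_life(alpha_start, rain_speed, t):
    """Remaining ripple life in [0, 1]: starts at the alpha-channel value
    (1 at the ripple center, 0 at the rim) and decays over time t, with
    heavier rain making ripples cycle faster."""
    return max(0.0, alpha_start - rain_speed * t)
```

A renderer would spawn a new ripple once the life reaches 0, reproducing the appear/disappear cycle driven by rainfall and engine time.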
S204: and performing three-dimensional rendering on the virtual scene by using the three-dimensional engine based on ground wetting effects under different water accumulation depths and ripple diffusion effects of raindrops touching the water accumulation.
In this step, the three-dimensional engine renders the virtual scene in real time based on the ground wetting effects at different water-accumulation depths and the ripple diffusion effects of raindrops touching the water, forming a digital-twin-based real-time virtual-real mapping of the rainy ground.
According to the three-dimensional scene rendering method, the relative water-accumulation depths of different pixel points in the virtual scene and the ground wetting effects at those depths are determined from real weather data, so the simulation effect is realistic and the digital twin scene is restored with high fidelity in real time. The embodiment uses computer graphics to simplify the calculation of the post-rain wetting effect and the ripple diffusion effect, making the computation simpler and more efficient while improving the expressiveness of the wetting effect. By twinning the real scene, the effect of rain gradually wetting the ground can be presented in the virtual scene, and viewers can judge the amount of rainfall visually without consulting rainfall data; the realism is high, increasing the expressiveness and immersion of the virtual scene.
An application embodiment provided by the present application is described below, which specifically includes the following steps:
step 1: and establishing a virtual scene corresponding to the real scene in the Unity, wherein the virtual scene comprises a road surface, buildings, vegetation, raindrops generated through a particle system and the like.
Step 2: establish a data processing module for reading and storing the real weather data.
Step 3: establish a linear correspondence from the water-accumulation data, the historical maximum water-accumulation data, and an Alpha-channel sample of the height map, acquiring the relative water-accumulation height of each pixel point in three-dimensional space, so that the depth and area of surface water accumulation are controlled in real time.
Relative water-accumulation height of each pixel point in three-dimensional space = saturate(tex2D(heightMap, heightMapUV).a * (waterData / maxHistoricalWaterData) * currentAreaAdjustParam);
The height map of a specific region can be obtained directly; its principle is to remap the region's height range to 0-1 and store the height information in the map's Alpha channel. Using the CG language's tex2D() 2D texture-sampling function, each pixel point reads the height value stored in the Alpha channel according to its height-map UV coordinates; the value lies in the range 0-1. Dividing the acquired water-accumulation data by the region's historical maximum gives the current relative water-accumulation degree. Because water accumulation in each area is also affected by factors such as drainage speed, a user-adjustable adjustment parameter for the current area is exposed so the water-accumulation data, height and area can be tuned to actual conditions. Multiplying the height value, the relative water-accumulation degree, and the adjustment parameter of the current region gives the relative water-accumulation height of each coordinate point. Since the sampling range is 0-1, the CG built-in saturate() function clamps the relative water height of each coordinate point to 0-1.
Step 4: according to the relative water accumulation height of each coordinate point, blend the concave-convex material of the ground with the material of the accumulated water to obtain the final concave-convex material of the area, i.e., worldNormal; the higher the water accumulation, the more pronounced the water-surface effect.
Step 5: according to the after-rain wetting-effect algorithm, draw the ground wetting effect for different scenes and different water accumulation depths in real time, using the Fresnel formula and the water accumulation reflection theory.
5.1 Calculate the degree to which the accumulated water reflects the surrounding environment according to the water accumulation reflection theory.
worldRefl=reflect(worldPos-ViewDir,worldNormal);
reflcol=texCUBE(_Cubemap,worldRefl).rgb;
The reflect() reflection function of the CG language computes the reflection vector of the current coordinate point (stored in worldPos) relative to the user's viewing direction (stored in ViewDir) for the different concave-convex materials (stored in worldNormal). The texCUBE() cube sampling function built into the CG language then samples the environment to be reflected (stored in _Cubemap) along that reflection vector to obtain the water accumulation reflection.
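The reflect() call above follows the CG definition reflect(i, n) = i - 2*dot(n, i)*n, with the incident vector i = worldPos - ViewDir. A Python sketch for illustration (helper names assumed, not patent code):

```python
def dot3(a, b):
    """Three-component dot product."""
    return sum(x * y for x, y in zip(a, b))

def reflect(i, n):
    """Mirror of CG's reflect(): i is the incident direction, n the unit normal."""
    d = dot3(n, i)
    return tuple(ix - 2.0 * d * nx for ix, nx in zip(i, n))
```

For a downward ray onto an upward-facing normal, the result points back up, as expected of a mirror reflection.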
5.2 Calculate the Fresnel value of the current water-surface wetting effect according to the Fresnel formula.
fresnel=F_base+F_scale*pow(1.0-dot(ViewDir,worldNormal),power);
This formula is simplified from the Fresnel reflection formula already disclosed in computer graphics:
fresnel = F_base + F_scale*((1-v*n)^power);
wherein F_base, F_scale, and power are Fresnel control coefficients; exposing these three coefficients allows the water-surface effect to be adjusted dynamically. ViewDir is the user's viewing direction, and worldNormal is the concave-convex material of the current position point.
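The simplified Fresnel term can be sketched in Python for verification; the default coefficient values below are illustrative assumptions, not values fixed by the application:

```python
def fresnel(cos_vn, f_base=0.02, f_scale=1.0, power=5.0):
    """cos_vn stands for dot(ViewDir, worldNormal); grazing angles
    (cos_vn near 0) yield stronger reflection than head-on views."""
    return f_base + f_scale * (1.0 - cos_vn) ** power
```

The monotone behavior matches the expected water-surface look: the flatter the viewing angle, the more the puddle mirrors its surroundings.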
5.3 If it is not raining, sample the ground map normally; if it is raining, blend the ground and the accumulated water reflection by interpolation according to the Fresnel value, superimpose a specular highlight effect, and finally present the after-rain ground wetting effect.
The degree of color blending is determined by whether it is currently raining in the area. When _rain is 0, there is no rain; when _rain is not 0, the ground and the water surface are rendered and blended with the lerp() function built into the CG language, based on the Fresnel value obtained in 5.2 and the relative water accumulation height of each pixel point in the three-dimensional space.
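One way to read this blending step in Python, treating _rain as an on/off flag and the product of the Fresnel value and the relative water accumulation height as the lerp weight; this is an interpretation of the paragraph above, not verbatim patent code:

```python
def lerp(a, b, t):
    """Linear interpolation, as in CG's lerp()."""
    return a + (b - a) * t

def blended_ground(ground_rgb, water_rgb, fresnel_value, water_height, rain):
    """Blend a ground-map sample with the water-surface reflection color."""
    if rain == 0.0:
        return ground_rgb          # no rain: plain ground-map sample
    t = fresnel_value * water_height
    return tuple(lerp(g, w, t) for g, w in zip(ground_rgb, water_rgb))
```

With rain off the ground color passes through unchanged; with rain on, deeper water and stronger Fresnel reflection shift the pixel toward the reflected environment color.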
Step 6: control the speed at which raindrops strike the accumulated water and the ripple diffusion effect of the accumulated water according to the twinned real-time rainfall.
6.1 Sample the ripple map (_RippleNormal) at its UV coordinates (uv) with the tex2D() 2D texture sampling function built into the CG language, obtaining the values of the different channels:
ripple=tex2D(_RippleNormal,uv);
6.2 Establish a continuous mathematical model of the water-surface ripples, and control the speed of ripple generation and disappearance through the alpha channel of the ripple sampling map (a linear gradient from the ripple center outward, ranging from 1 to 0, serving as the initial ripple-lifetime data), the rainfall speed, and the engine's built-in time parameter:
dropFrac = frac(ripple.a + rainSpeed * _Time.y);
rippleDuration = 1.0 - frac(ripple.a + rainSpeed * _Time.y);
The value of the alpha channel of the ripple sampling map is obtained (different ripple regions are given different transparency values, in the range 1~0, serving as the initial ripple-lifetime data). Time is represented by the CG language's _Time.y (the count restarts each time the scene is refreshed). Multiplying the rainfall speed by the time gives the parameter describing the influence of the rainfall speed on the ripple lifetime; this parameter is added to the initial ripple-lifetime data, the sum is limited to 0-1 with the built-in frac() function of the CG language, and the lifetime of the ripple is finally obtained.
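The two timing lines can be sketched in Python, reproducing frac() as the fractional-part operation; time_y plays the role of _Time.y and all names are illustrative:

```python
def frac(x):
    """Fractional part in [0, 1), like CG's frac()."""
    return x % 1.0

def ripple_phase(ripple_alpha, rain_speed, time_y):
    """ripple_alpha: the per-pixel lifetime seed from the alpha channel.
    Returns (dropFrac, rippleDuration) as in the shader lines above."""
    drop_frac = frac(ripple_alpha + rain_speed * time_y)
    return drop_frac, 1.0 - drop_frac
```

A higher rain_speed makes the phase wrap around faster, so ripples appear and die off more quickly in heavy rain.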
6.3 Establish a mathematical model of the water-surface ripple amplitude, and control the ripple amplitude through the g and b channels of the ripple sampling map and the real-time rainfall:
finalRipple = ripple.gb * rippleDuration * sin(lerp(ripple trough adjustment parameter, ripple peak adjustment parameter, dropFrac) * UNITY_PI);
ripple.gb denotes the g and b channels of the ripple map, which store the initial undulation data of the ripples.
The peak and trough height data of the ripples are obtained through the built-in lerp() function of the CG language, driven by the parameter describing the influence of the rainfall speed on the ripple lifetime, and the rise and fall of the peaks and troughs are simulated through the built-in sin() function. UNITY_PI is static data predefined by the Unity shader environment and represents π. The initial undulation data of the ripples (ripple.gb), together with the peak and trough height data and the ripple lifetime, control the final rendered ripple effect.
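A possible Python sketch of this amplitude model, under the assumption that the lerp() interpolant is the rain-driven factor dropFrac and that the trough and peak parameters are free controls (both assumptions, as the original expression is only partially legible):

```python
import math

def lerp(a, b, t):
    """Linear interpolation, as in CG's lerp()."""
    return a + (b - a) * t

def final_ripple(ripple_gb, ripple_duration, trough, peak, drop_frac):
    """ripple_gb: the (g, b) channel pair holding initial undulation data.
    sin(x * pi) shapes the rise and fall between troughs and peaks."""
    amplitude = math.sin(lerp(trough, peak, drop_frac) * math.pi)
    return tuple(c * ripple_duration * amplitude for c in ripple_gb)
```

As rippleDuration decays toward 0 the ripple flattens out, which matches the fade-out behavior described in step 6.2.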
Step 7: render the virtual scene in real time with the Unity three-dimensional engine to form the digital-twin-based real-time virtual mapping effect of the ground in rainy weather.
Another application example provided by the present application is described below, which specifically includes the following steps:
Step 1: a virtual scene corresponding to the real scene is established in J3D; the virtual scene includes a road surface, buildings, vegetation, raindrops generated through a particle system, and the like.
Step 2: establish a data processing module for reading and storing the real weather data.
Step 3: establish a linear correspondence among the accumulated water data, the historical highest accumulated water data, and the Alpha-channel samples of the height map, thereby controlling the area of the surface water accumulation in real time.
Relative water accumulation height of each pixel point in the three-dimensional space = saturate(texture(height map, height map UV coordinates).a * (accumulated water data / historical highest accumulated water data) * current regional water accumulation adjustment parameter);
The height map of a specific region can be obtained directly; its principle is to remap the region's height range to 0-1 and store the height information of the current region in the Alpha channel of the height map. Using the texture() 2D texture sampling function built into the GLSL language, the height datum of each pixel point stored in the Alpha channel of the height map is fetched at the height-map UV coordinates; its range is 0-1. Dividing the acquired accumulated water data by the region's historical highest accumulated water data gives the current relative water accumulation degree. Because the accumulated water in each area is also affected by factors such as drainage speed, an adjustable water accumulation adjustment parameter for the current region is exposed, with which the user can tune the water accumulation data, height, and area according to the actual situation. Multiplying the height datum, the relative water accumulation degree, and the current region's water accumulation adjustment parameter yields the relative water accumulation height of each coordinate point. Since the sampling range is 0-1, a saturate() clamp (expressed in GLSL as clamp(x, 0.0, 1.0)) is applied to limit the relative water accumulation height of each coordinate point to 0-1.
Step 4: according to the relative water accumulation height of each coordinate point, blend the concave-convex material of the ground with the material of the accumulated water to obtain the final concave-convex material of the area, i.e., worldNormal; the higher the water accumulation, the more pronounced the water-surface effect.
Step 5: according to the after-rain wetting-effect algorithm, draw the ground wetting effect for different scenes and different water accumulation depths in real time, using the Fresnel formula and the water accumulation reflection theory.
5.1 Calculate the degree to which the accumulated water reflects the surrounding environment according to the water accumulation reflection theory.
worldRefl=reflect(worldPos-ViewDir,worldNormal);
reflcol=texture(_Cubemap,worldRefl).rgb*_reflectIntensity;
The reflect() reflection function of the GLSL language computes the reflection vector of the current coordinate point (stored in worldPos) relative to the user's viewing direction (stored in ViewDir) for the different concave-convex materials (stored in worldNormal). The texture() cube sampling function of the GLSL language then samples the environment to be reflected (stored in _Cubemap) along that reflection vector, scaled by _reflectIntensity, to obtain the water accumulation reflection.
5.2 Calculate the Fresnel value of the current water-surface wetting effect according to the Fresnel formula.
fresnel=F_base+F_scale*pow(1.0-dot(ViewDir,worldNormal),power);
This formula is simplified from the Fresnel reflection formula already disclosed in computer graphics:
fresnel = F_base + F_scale*((1-v*n)^power);
wherein F_base, F_scale, and power are Fresnel control coefficients; exposing these three coefficients allows the water-surface effect to be adjusted dynamically. ViewDir is the user's viewing direction, and worldNormal is the concave-convex material of the current position point.
5.3 If it is not raining, sample the ground map normally; if it is raining, blend the ground and the accumulated water reflection by interpolation according to the Fresnel value, superimpose a specular highlight effect, and finally present the after-rain ground wetting effect.
The degree of color blending is determined by whether it is currently raining in the area. When _rainArea is 0, there is no rain; when _rainArea is not 0, the ground and the water surface are rendered and blended with the mix() function built into the GLSL language, based on the Fresnel value obtained in 5.2 and the relative water accumulation height of each pixel point in the three-dimensional space.
Step 6: control the speed at which raindrops strike the accumulated water and the ripple diffusion effect of the accumulated water according to the twinned real-time rainfall.
6.1 Sample the ripple map (_RippleNormal) at its UV coordinates (uv) with the texture() 2D texture sampling function built into the GLSL language, obtaining the values of the different channels:
ripple=texture(_RippleNormal,uv);
6.2 Establish a continuous mathematical model of the water-surface ripples, and control the speed of ripple generation and disappearance through the alpha channel of the ripple sampling map (a linear gradient from the ripple center outward, ranging from 1 to 0, serving as the initial ripple-lifetime data), the rainfall speed, and the engine's built-in time parameter:
dropFrac = fract(ripple.a + rainSpeed * time);
rippleDuration = 1.0 - fract(ripple.a + rainSpeed * time);
The value of the alpha channel of the ripple sampling map is obtained (different ripple regions are given different transparency values, in the range 1~0, serving as the initial ripple-lifetime data). Time is represented by the GLSL time uniform (the count restarts each time the scene is refreshed). Multiplying the rainfall speed by the time gives the parameter describing the influence of the rainfall speed on the ripple lifetime; this parameter is added to the initial ripple-lifetime data, the sum is limited to 0-1 with the GLSL fract() function, and the lifetime of the ripple is finally obtained.
6.3 Establish a mathematical model of the water-surface ripple amplitude, and control the ripple amplitude through the g and b channels of the ripple sampling map and the real-time rainfall:
finalRipple = ripple.gb * rippleDuration * sin(mix(ripple trough adjustment parameter, ripple peak adjustment parameter, dropFrac) * J3D_PI);
ripple.gb denotes the g and b channels of the ripple map, which store the initial undulation data of the ripples.
The peak and trough height data of the ripples are obtained through the GLSL mix() function, driven by the parameter describing the influence of the rainfall speed on the ripple lifetime, and the rise and fall of the peaks and troughs are simulated through the GLSL sin() function. J3D_PI is static data predefined by the J3D environment and represents π. The initial undulation data of the ripples (ripple.gb), together with the peak and trough height data and the ripple lifetime, control the final rendered ripple effect.
Step 7: render the virtual scene in real time with the J3D engine to form the digital-twin-based real-time virtual mapping effect of the ground in rainy weather.
The pattern of the effect on the ground in the absence of rain is shown in fig. 3, the pattern of the effect on the ground in rainy days in light rain is shown in fig. 4, and the pattern of the effect on the ground in rainy days in heavy rain is shown in fig. 5.
In the following, a three-dimensional scene rendering device provided by an embodiment of the present application is introduced, and a three-dimensional scene rendering device described below and a three-dimensional scene rendering method described above may be referred to each other.
Referring to fig. 6, a block diagram of a three-dimensional scene rendering apparatus according to an exemplary embodiment is shown, as shown in fig. 6, including:
the building module 601 is used for building a virtual scene containing ground fluctuation in the three-dimensional engine according to the real scene and acquiring real weather data;
a first determining module 602, configured to determine, according to the real weather data, relative water accumulation depths of different pixel points in the virtual scene, and determine ground wetting effects at different relative water accumulation depths;
a rendering module 603, configured to perform three-dimensional rendering on the virtual scene based on ground wetting effects at different water accumulation depths by using the three-dimensional engine.
The three-dimensional scene rendering device provided by the embodiment of the application determines, from real weather data, the relative water accumulation depths of different pixel points in the virtual scene and the ground wetting effect at different relative depths; the simulated effect is realistic, achieving real-time, high-fidelity restoration of the digital twin scene. The embodiment simplifies the computation of the after-rain wetting effect, making the calculation simpler and more efficient while improving the expressiveness of the effect. By twinning the real scene, the effect of rain on the ground can be shown gradually in the virtual scene, so that viewers can judge the amount of rainfall visually without consulting rainfall data; the realism is high, increasing the expressiveness and immersion of the virtual scene.
On the basis of the foregoing embodiment, as a preferred implementation manner, the building module 601 is specifically configured to: constructing a virtual scene in a three-dimensional engine; generating a height map of the virtual terrain according to the height data of the real scene; generating a ground surface with high and low undulations corresponding to a real scene in the virtual scene according to the height map; and acquiring real weather data.
On the basis of the foregoing embodiment, as a preferred implementation manner, the first determining module 602 includes:
the first determining unit is used for determining the height value of each pixel point in the virtual scene;
the second determining unit is used for determining the ratio of the accumulated water data of each pixel point in the virtual scene to the historical highest accumulated water data as the relative accumulated water degree of each pixel point;
the third determining unit is used for determining the height value of each pixel point and the relative water accumulation depth corresponding to the relative water accumulation degree according to the target corresponding relation; wherein the target correspondence describes a linear relationship between a product of the height value and the relative water accumulation degree and the relative water accumulation depth.
On the basis of the foregoing embodiment, as a preferred implementation, the first determining module 602 includes:
the mixing unit is used for mixing the concave-convex material on the ground and the material of the accumulated water according to the relative accumulated water depth of each pixel point to obtain the final concave-convex material of each pixel point;
and the fourth determining unit is used for determining the ground wetting effect of each pixel point based on the final concave-convex material of each pixel point.
On the basis of the foregoing embodiment, as a preferred implementation manner, the fourth determining unit is specifically configured to: calculating a reflection value of each pixel point relative to a user visual angle based on the final concave-convex material of each pixel point; calculating the reflectivity of each pixel point to the surrounding environment based on the reflectivity value of each pixel point relative to the user visual angle; calculating the Fresnel value of each pixel point based on the reflectivity of each pixel point to the surrounding environment according to a Fresnel formula; and performing difference value mixed calculation on the ground reflection and the accumulated water reflection of each pixel point according to the Fresnel value of each pixel point, and superposing high light reflection effects to obtain the ground wetting effect of each pixel point.
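The four sub-steps of this unit can be chained in one illustrative Python sketch; the cube-map lookup is replaced by a fixed environment color, the specular highlight superposition is omitted, and all defaults are assumptions rather than values from the application:

```python
import math

def dot3(a, b):
    """Three-component dot product."""
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(dot3(v, v))
    return tuple(x / n for x in v)

def wet_ground_color(ground_rgb, env_rgb, view_dir, normal,
                     f_base=0.02, f_scale=1.0, power=5.0):
    """Blend the ground reflection with the accumulated-water (environment)
    reflection by the per-pixel Fresnel value."""
    cos_vn = max(0.0, dot3(normalize(view_dir), normalize(normal)))
    fres = f_base + f_scale * (1.0 - cos_vn) ** power
    return tuple(g + (e - g) * fres for g, e in zip(ground_rgb, env_rgb))
```

Viewed head-on, the pixel stays close to the ground color; at grazing angles the Fresnel weight grows and the environment reflection dominates.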
On the basis of the above embodiment, as a preferred implementation, the real weather data includes rainfall; the device further comprises:
the second determining module is used for determining the ripple diffusion effect of the raindrops touching the accumulated water according to the rainfall;
correspondingly, the rendering module 603 is specifically configured to: and performing three-dimensional rendering on the virtual scene by using the three-dimensional engine based on ground wetting effects under different water accumulation depths and ripple diffusion effects of raindrops touching the water accumulation.
On the basis of the foregoing embodiment, as a preferred implementation manner, the second determining module is specifically configured to: determine the rainfall speed according to the rainfall; acquire the random occurrence time of ripples, and calculate the ripple generation and disappearance speed according to the random occurrence time of the ripples and the rainfall speed; acquire an initial ripple sampling height, and calculate the ripple amplitude according to the initial ripple sampling height and the rainfall speed; and determine the ripple diffusion effect based on the ripple generation and disappearance speed and the ripple amplitude.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be described in detail here.
Based on the hardware implementation of the program module, and in order to implement the method according to the embodiment of the present application, an embodiment of the present application further provides an electronic device, and fig. 7 is a structural diagram of an electronic device according to an exemplary embodiment, as shown in fig. 7, the electronic device includes:
a communication interface 1 capable of performing information interaction with other devices such as network devices and the like;
The processor 2 is connected with the communication interface 1 to realize information interaction with other devices, and is configured to execute, when running a computer program, the three-dimensional scene rendering method provided by one or more of the foregoing technical solutions. The computer program is stored on the memory 3.
In practice, of course, the various components in the electronic device are coupled together by the bus system 4. It will be appreciated that the bus system 4 is used to enable connection communication between these components. The bus system 4 comprises, in addition to a data bus, a power bus, a control bus and a status signal bus. For the sake of clarity, however, the various buses are labeled as bus system 4 in fig. 7.
The memory 3 in the embodiment of the present application is used to store various types of data to support the operation of the electronic device. Examples of such data include: any computer program for operating on an electronic device.
It will be appreciated that the memory 3 may be either volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferromagnetic random access memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 3 described in the embodiments of the present application is intended to comprise, without being limited to, these and any other suitable types of memory.
The method disclosed in the above embodiment of the present application may be applied to the processor 2, or implemented by the processor 2. The processor 2 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 2. The processor 2 described above may be a general purpose processor, a DSP, or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. The processor 2 may implement or perform the methods, steps and logic blocks disclosed in the embodiments of the present application. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed in the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software modules may be located in a storage medium located in the memory 3, and the processor 2 reads the program in the memory 3 and performs the steps of the foregoing method in combination with its hardware.
When the processor 2 executes the program, the corresponding processes in the methods of the embodiments of the present application are implemented, and for brevity, are not described herein again.
In an exemplary embodiment, the present application further provides a storage medium, i.e. a computer storage medium, specifically a computer readable storage medium, for example, including a memory 3 storing a computer program, which can be executed by a processor 2 to implement the steps of the foregoing method. The computer readable storage medium may be Memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash Memory, magnetic surface Memory, optical disk, or CD-ROM.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof that contribute to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for enabling an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (6)

1. A digital twin three-dimensional scene rendering method, comprising:
constructing a virtual scene containing ground fluctuation in a three-dimensional engine according to a real scene, and acquiring real weather data;
determining the relative water accumulation depths of different pixel points in the virtual scene according to the real weather data, and determining the ground wetting effect under different relative water accumulation depths;
performing three-dimensional rendering on the virtual scene by using the three-dimensional engine based on ground wetting effects at different relative ponding depths;
the real weather data comprise ponding data of different pixel points; the determining the relative water accumulation depth of different pixel points in the virtual scene according to the real weather data comprises:
determining the height value of each pixel point in the virtual scene;
determining the ratio of the accumulated water data of each pixel point in the virtual scene to the historical highest accumulated water data as the relative accumulated water degree of each pixel point;
determining the height value of each pixel point and the relative water accumulation depth corresponding to the relative water accumulation degree according to the target corresponding relation; wherein the target correspondence describes a linear relationship between a product of the height value and the relative ponding degree and the relative ponding depth;
wherein the determining the ground wetting effect at different relative water accumulation depths comprises:
mixing the concave-convex material on the ground and the material of the accumulated water according to the relative accumulated water depth of each pixel point to obtain the final concave-convex material of each pixel point;
determining the ground wetting effect of each pixel point based on the final concave-convex material of each pixel point;
wherein the real weather data comprises rainfall; after the determining the ground wetting effect at different relative water accumulation depths, the method further comprises:
determining the rainfall speed according to the rainfall;
acquiring the random occurrence time of ripples, and calculating the ripple generation disappearance speed according to the random occurrence time of the ripples and the rainfall speed;
acquiring an initial ripple sampling height, and calculating a ripple amplitude according to the initial ripple sampling height and the rainfall speed;
determining a ripple dispersion effect based on the ripple generation disappearance speed and the ripple amplitude;
correspondingly, the three-dimensional rendering of the virtual scene based on the ground wetting effect under different relative water accumulation depths by using the three-dimensional engine includes:
and three-dimensional rendering is carried out on the virtual scene by utilizing the three-dimensional engine based on the ground wetting effect under different relative water accumulation depths and the ripple diffusion effect of raindrops touching the water accumulation.
2. The three-dimensional scene rendering method according to claim 1, wherein the building of the virtual scene containing the ground relief from the real scene in the three-dimensional engine comprises:
constructing a virtual scene in a three-dimensional engine;
generating a height map of the virtual terrain according to the height data of the real scene;
and generating a ground surface with high and low undulations corresponding to a real scene in the virtual scene according to the height map.
3. The three-dimensional scene rendering method according to claim 1, wherein the determining the ground wetting effect of each of the pixel points based on the final concave-convex texture of each of the pixel points comprises:
calculating a reflection value of each pixel point relative to a user visual angle based on the final concave-convex material of each pixel point;
calculating the reflectivity of each pixel point to the surrounding environment based on the reflectivity value of each pixel point relative to the user visual angle;
calculating the Fresnel value of each pixel point based on the reflectivity of each pixel point to the surrounding environment according to a Fresnel formula;
and performing difference value mixed calculation on the ground reflection and the accumulated water reflection of each pixel point according to the Fresnel value of each pixel point, and superposing high light reflection effects to obtain the ground wetting effect of each pixel point.
4. A digital twin three-dimensional scene rendering apparatus, comprising:
a building module configured to build a virtual scene containing ground relief in a three-dimensional engine according to a real scene, and to acquire real weather data;
a first determining module configured to determine the relative water accumulation depths of different pixel points in the virtual scene according to the real weather data, and to determine the ground wetting effect at different relative water accumulation depths;
a rendering module configured to perform three-dimensional rendering of the virtual scene with the three-dimensional engine based on the ground wetting effects at the different relative water accumulation depths;
wherein the real weather data comprises water accumulation data of different pixel points, and the first determining module is specifically configured to:
determine the height value of each pixel point in the virtual scene;
determine, as the relative water accumulation degree of each pixel point, the ratio of the water accumulation data of that pixel point to the historical maximum water accumulation data;
determine, according to a target correspondence, the relative water accumulation depth corresponding to the height value and the relative water accumulation degree of each pixel point, the target correspondence describing a linear relationship between the product of a height value and a relative water accumulation degree, on the one hand, and a relative water accumulation depth, on the other;
mix the concave-convex material of the ground with the material of the accumulated water according to the relative water accumulation depth of each pixel point to obtain the final concave-convex material of that pixel point; and
determine the ground wetting effect of each pixel point based on its final concave-convex material;
wherein the real weather data further comprises rainfall, and the apparatus further comprises:
a second determining module configured to: determine a rainfall speed according to the rainfall; acquire a random ripple occurrence time and calculate a ripple generation-and-disappearance speed from the random occurrence time and the rainfall speed; acquire an initial ripple sampling height and calculate a ripple amplitude from the initial sampling height and the rainfall speed; and determine a ripple diffusion effect based on the ripple generation-and-disappearance speed and the ripple amplitude;
correspondingly, the rendering module is specifically configured to perform three-dimensional rendering of the virtual scene with the three-dimensional engine based on the ground wetting effects at the different relative water accumulation depths and the ripple diffusion effect produced where raindrops strike the accumulated water.
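Numerically, the determining modules of claim 4 reduce to a few arithmetic rules: a depth that is linear in the product (height value × relative accumulation degree), and ripple parameters derived from the rainfall speed. The sketch below is an illustrative reading of the claim, not the patented code; the slope `k`, the `speed_factor`, and the exact combining formulas for the fade speed and amplitude are all assumptions.

```python
import random

def relative_water_depth(height, water_amount, historical_max, k=1.0):
    """Relative accumulation degree is the ratio of the current water data
    to the historical maximum; the claimed target correspondence is a
    linear map from (height * degree) to relative depth. k is an assumed
    slope for that linear relationship."""
    degree = water_amount / historical_max
    return k * height * degree

def ripple_params(rainfall_mm_per_h, initial_sample_height,
                  speed_factor=0.1, rng=random):
    """Derive the ripple fade speed and amplitude in the way claim 4
    describes: a rainfall speed obtained from the rainfall amount, a
    random ripple occurrence time, and an amplitude scaled by the initial
    sampling height. The scale factors are illustrative only."""
    rain_speed = rainfall_mm_per_h * speed_factor
    occurrence_time = rng.uniform(0.0, 1.0)   # random ripple start offset
    fade_speed = rain_speed / (occurrence_time + 1.0)
    amplitude = initial_sample_height * rain_speed
    return fade_speed, amplitude
```

Under this reading, heavier rain both deepens the water layer (via the accumulation degree) and produces faster-fading, larger-amplitude ripples, which is the coupling the rendering module then feeds into the three-dimensional engine.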
5. An electronic device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the three-dimensional scene rendering method according to any of claims 1 to 3 when executing the computer program.
6. A computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the three-dimensional scene rendering method according to any one of claims 1 to 3.
CN202211496053.8A 2022-11-28 2022-11-28 Digital twin three-dimensional scene rendering method, device and equipment and medium Active CN115546374B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211496053.8A CN115546374B (en) 2022-11-28 2022-11-28 Digital twin three-dimensional scene rendering method, device and equipment and medium

Publications (2)

Publication Number Publication Date
CN115546374A (en) 2022-12-30
CN115546374B (en) 2023-04-07

Family

ID=84722102


Country Status (1)

Country Link
CN (1) CN115546374B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116883563B (en) * 2023-05-18 2024-01-16 苏州高新区测绘事务所有限公司 Method, device, computer equipment and storage medium for rendering annotation points

Citations (1)

Publication number Priority date Publication date Assignee Title
CN111598999A (en) * 2020-05-13 2020-08-28 河海大学 Drought event identification method based on three-dimensional drought body structure

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
US9097800B1 (en) * 2012-10-11 2015-08-04 Google Inc. Solid object detection system using laser and radar sensor fusion
CN105843942B (en) * 2016-04-01 2019-03-29 浙江大学城市学院 A kind of Urban Flood control decision support method based on big data technology
US10311630B2 (en) * 2017-05-31 2019-06-04 Verizon Patent And Licensing Inc. Methods and systems for rendering frames of a virtual scene from different vantage points based on a virtual entity description frame of the virtual scene
US11935288B2 (en) * 2019-12-01 2024-03-19 Pointivo Inc. Systems and methods for generating of 3D information on a user display from processing of sensor data for objects, components or features of interest in a scene and user navigation thereon
CN111368397B (en) * 2020-02-04 2021-02-12 中国水利水电科学研究院 Method and device for predicting waterlogging risk
CN112221150B (en) * 2020-10-19 2023-01-10 珠海金山数字网络科技有限公司 Ripple simulation method and device in virtual scene
CN113457137B (en) * 2021-06-30 2022-05-17 完美世界(北京)软件科技发展有限公司 Game scene generation method and device, computer equipment and readable storage medium
CN114820964A (en) * 2022-04-25 2022-07-29 浙江省水利水电勘测设计院有限责任公司 Method and system for constructing digital twin real-time weather scene based on unknown Engine



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant