CN116883607B - Virtual reality scene generation system based on radiation transmission - Google Patents

Virtual reality scene generation system based on radiation transmission

Info

Publication number
CN116883607B
CN116883607B (application CN202311142381.2A)
Authority
CN
China
Prior art keywords
scene
dimensional
virtual reality
pixel
dimensional scene
Prior art date
Legal status
Active
Application number
CN202311142381.2A
Other languages
Chinese (zh)
Other versions
CN116883607A (en)
Inventor
袁梁
罗翼鹏
徐勇
Current Assignee
Sichuan Wutong Technology Co ltd
Original Assignee
Sichuan Wutong Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Sichuan Wutong Technology Co ltd
Priority to CN202311142381.2A
Publication of CN116883607A
Application granted
Publication of CN116883607B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10: Complex mathematical operations
    • G06F 17/11: Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems

Abstract

The invention belongs to the technical field of virtual reality, and in particular relates to a virtual reality scene generation system based on radiation transmission. The system comprises a 3D scene generation unit and a virtual reality scene generation unit. The 3D scene generation unit is configured to generate a first three-dimensional scene and a second three-dimensional scene from an input two-dimensional picture. The virtual reality scene generation unit is configured to import the first and second three-dimensional scenes, overlap them pixel by pixel to obtain an overlapped three-dimensional scene, divide the overlapped three-dimensional scene into a plurality of voxels, establish a radiation transmission equation for the overlapped three-dimensional scene, solve the equation to obtain a solution result, and emit rays into the overlapped three-dimensional scene to obtain the virtual reality scene. Through pixel overlapping and three-dimensional scene generation, a more realistic and natural virtual reality scene is obtained, with an enhanced sense of depth and realism.

Description

Virtual reality scene generation system based on radiation transmission
Technical Field
The invention belongs to the technical field of virtual reality, and particularly relates to a virtual reality scene generation system based on radiation transmission.
Background
Virtual reality technology has developed rapidly in recent years and is applied in fields such as entertainment, education and medical treatment. By simulating a virtual world, users can personally experience diverse scenes. The realism and visual quality of a virtual reality scene are among the most important factors in the user experience, and the simulation of lighting effects is critical to creating a more realistic and engaging virtual scene. However, the prior art still faces challenges in virtual reality illumination simulation.
In the field of virtual reality illumination simulation, methods such as ray tracing and radiation transmission have been widely used. Ray tracing computes the illumination produced by the interaction of light and objects by simulating the propagation of rays from the observation point to object surfaces. Radiation transmission methods focus more on the propagation and interaction of light within a scene, taking into account behaviors such as reflection and refraction. These methods can produce realistic lighting effects in many cases, but problems remain in some specific situations.
Although existing methods can simulate the propagation and interaction of light rays, depth perception is often difficult to restore accurately when generating virtual scenes. Especially when a scene contains multiple depth layers, how illumination propagates between them remains a challenging problem. When there is an overlapping region in a virtual reality scene, it is often difficult for the prior art to achieve natural transitions and effects; illumination propagation and pixel value calculation in the overlapping region require more accurate processing to avoid results that look too flat or too deep. In addition, a virtual reality scene may contain a large number of objects, light sources and textures, resulting in a huge amount of computation. How to improve the computational efficiency of illumination simulation while preserving realistic effects is therefore a problem to be solved.
Disclosure of Invention
The main object of the invention is to provide a virtual reality scene generation system based on radiation transmission which, through pixel overlapping and three-dimensional scene generation, achieves a more realistic and natural virtual reality scene with an enhanced sense of depth and realism.
In order to solve the above technical problems, the invention provides a virtual reality scene generation system based on radiation transmission, comprising a 3D scene generation unit and a virtual reality scene generation unit. The 3D scene generation unit is configured to generate, from an input two-dimensional picture, a first three-dimensional scene according to a set first depth value and a second three-dimensional scene according to a set second depth value. The virtual reality scene generation unit is configured to import the first and second three-dimensional scenes and overlap them pixel by pixel, keeping a set distance between each pixel in the first three-dimensional scene and its corresponding pixel in the second three-dimensional scene, so as to obtain an overlapped three-dimensional scene; divide the overlapped scene into a plurality of voxels; set an observation point position in the overlapped three-dimensional scene; establish a radiation transmission equation for the overlapped three-dimensional scene and solve it to obtain an equation solution result; emit rays from the observation point position into the overlapped three-dimensional scene and determine the pixel value of each pixel from the interaction of the rays with the overlapped three-dimensional scene together with the equation solution result; and finally render the overlapped three-dimensional scene based on the calculated pixel values to obtain the virtual reality scene.
Further, the radiation transmission equation is expressed using the following formula:
wherein: $L_o(v,\omega_o)$ is the radiance emitted from voxel $v$ along direction $\omega_o$; $L_e(v,\omega_o)$ is the self-emitted radiance from voxel $v$ along direction $\omega_o$; $f_r(v,\omega_i,\omega_o)$ is the bidirectional reflection distribution function, describing the light reflection characteristic at voxel $v$ from incident direction $\omega_i$ towards direction $\omega_o$; $L_i(v,\omega_i)$ is the radiance entering voxel $v$ along direction $\omega_i$; $n$ is the normal vector of the surface; $\Omega$ is the hemisphere; $h_2$ is the second depth value; $h_1$ is the first depth value; $d$ is the distance to the corresponding pixel in the overlapped three-dimensional scene; $(\omega_i \cdot n)$ denotes the dot product of direction $\omega_i$ and normal vector $n$.
Further, the bi-directional reflection distribution function is expressed using the following formula:
wherein: $k_d$ is the diffuse reflection coefficient of voxel $v$; $k_s$ is the specular reflection coefficient of voxel $v$; $n$ is the normal vector of the surface; $r$ is the perfect reflection direction, calculated as $r = 2(\omega_i \cdot n)\,n - \omega_i$; $\alpha$ is the Phong exponent, which controls the sharpness of the specular highlight.
Further, the method for solving the radiation transmission equation to obtain the equation solving result comprises: for each voxel, emitting a ray from the observation point; when the ray intersects the overlapped three-dimensional scene, determining the intersection point; emitting one or more random rays from the intersection point into the scene, calculating the radiance of each random ray and taking a weighted average to obtain an intermediate value, and using the intermediate value as the outgoing radiance of the voxel; and taking the set composed of the outgoing radiance of all voxels as the equation solving result.
Further, the outgoing radiance of the voxel is calculated using the following formula:
wherein, $L_o(v)$ is the outgoing radiance of voxel $v$; $N$ is the number of random rays emitted from the intersection point; $p(\omega_i)$ is the probability density function for selecting direction $\omega_i$.
Further, the method for determining the pixel value of each pixel in the overlapped three-dimensional scene comprises: starting from the observation point, emitting rays into the overlapped three-dimensional scene; judging whether each emitted ray intersects a voxel in the overlapped three-dimensional scene and, if so, recording the position of the intersection point; at the intersection point, calculating the radiance according to the radiation transmission equation; when the surface at the intersection point has a reflection attribute, emitting a reflected ray from the intersection point according to the reflection rule, the direction of the reflected ray being calculated from the incident direction and the surface normal according to the law of reflection; then continuing collision detection and radiance calculation for the reflected ray in the same manner, stopping after a set number of iterations; if a surface has refractive properties, when a ray hits this surface the direction of the refracted ray is calculated by the following formula:
wherein $n_1$ is the refractive index of the incident medium and $n_2$ is the refractive index of the refracting medium; then collision detection and radiance calculation are likewise carried out for the refracted ray, stopping after a set number of iterations; the radiance values at the intersection points of each ray are accumulated and averaged to calculate the pixel value of the pixel point; the above steps are repeated until the pixel values of all pixel points have been calculated.
Further, the first depth value satisfies the following constraint relationship:
wherein, $a_1$ is the first adjustment coefficient, with a value range of 0.25 to 0.35; $P$ is the number of pixels of the two-dimensional picture; $L$ is the length of the two-dimensional picture in pixels; $W$ is the width of the two-dimensional picture in pixels; $a_2$ is the second adjustment coefficient, with a value range of 0.35 to 0.5; $S$ is the area of the two-dimensional picture in units of square pixels.
Further, the second depth value satisfies the following constraint relationship:
wherein, $P$ is the number of pixels of the two-dimensional picture; $L$ is the length of the two-dimensional picture in pixels; $W$ is the width of the two-dimensional picture in pixels; $a_2$ is the second adjustment coefficient, with a value range of 0.35 to 0.5; $S$ is the area of the two-dimensional picture in units of square pixels; $a_3$ is the third adjustment coefficient, with a value range of 0.6 to 0.9.
Further, when the pixels are overlapped, the distance between each pixel in the first three-dimensional scene and each corresponding pixel in the second three-dimensional scene is calculated by using the following formula:
wherein, $d$ is the set distance.
The virtual reality scene generation system based on radiation transmission of the invention has the following beneficial effects. First, by adopting a radiation-transmission-based generation system, more realistic illumination simulation can be achieved. Existing virtual reality techniques such as ray tracing and radiation transmission are widely applied, but restoring depth perception and accurately simulating illumination effects remain challenging. Through accurate processing of the overlapped region, the invention better simulates illumination propagation between layers of different depths, so the sense of depth of the virtual reality scene is more realistic and the user experience is more engaging. Second, the pixel overlap technique of the invention handles overlapping scenes well. A virtual reality scene may contain overlapping of multiple depth layers, and it is difficult for existing techniques to handle these areas naturally, often producing results that look too flat or too deep. By means of the pixel overlap technique, the invention accurately calculates the position and spacing of each pixel in the overlapping region, so that scene transitions are more natural and balanced and the user experience is more realistic. In addition, the invention takes computational efficiency into account: through reasonable simulation and optimization for the large numbers of objects, light sources and textures in a virtual reality scene, the computational efficiency of illumination simulation is improved. This makes the virtual reality scene generation process smoother, and the user obtains a satisfactory visual effect in a shorter time.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic system structure diagram of a virtual reality scene generating system based on radiation transmission according to an embodiment of the present invention.
Detailed Description
The method of the present invention will be described in further detail with reference to the accompanying drawings.
Example 1: referring to fig. 1, a radiation transmission based virtual reality scene generation system: the 3D scene generating unit and the virtual reality scene generating unit; the 3D scene generation unit is configured to generate a first three-dimensional scene according to a set first depth value and generate a second three-dimensional scene according to a set second depth value based on the input two-dimensional picture; the virtual reality scene generating unit is configured to import a first three-dimensional scene and a second three-dimensional scene, overlap the first three-dimensional scene and the second three-dimensional scene in pixels, keep a set distance between each pixel in the first three-dimensional scene and each corresponding pixel in the second three-dimensional scene when the pixels are overlapped, so as to obtain an overlapped three-dimensional scene, divide the overlapped scene into a plurality of voxels, set an observation point position in the overlapped three-dimensional scene, establish a radiation transmission equation of the overlapped three-dimensional scene, solve the radiation transmission equation to obtain an equation solving result, then transmit rays to the overlapped three-dimensional scene from the observation point position, determine a pixel value of each pixel in the overlapped three-dimensional scene through interaction of the rays and the overlapped three-dimensional scene and the equation solving result, and finally render the overlapped three-dimensional scene based on the calculated pixel value to obtain the virtual reality scene.
Specifically, the 3D scene generation unit creates three-dimensional scenes at different depth levels according to the input two-dimensional picture and the set depth values. This simulates the presence of objects at different depth locations, thereby increasing realism. The pixel overlapping step in the virtual reality scene generation unit synthesizes scenes of different depths into one scene; the distance between the pixels is set to ensure a visual blending effect when overlapping, thereby creating a stereoscopic impression. Establishing the radiation transmission equation takes into account the propagation and interaction of light rays in the scene: the equation describes the reflection, refraction and absorption of light as it passes over various surfaces. By solving this equation, the propagation and interaction of light in the scene can be simulated and the luminance value of each pixel can be calculated. According to the result of the radiation transmission equation, the system emits rays into the scene and, from their interactions with the scene, calculates the brightness and color information of each pixel. These pixel values represent the propagation of rays in the scene and determine the appearance of the final scene.
Assuming a virtual reality system, a user may provide a two-dimensional picture and a depth setting. For example, the user provides a photograph of a forest as input and sets different depth values for the foreground trees and the background. The 3D scene generation unit in the system creates a three-dimensional scene containing the foreground trees and the background based on this information.
Then, the virtual reality scene generation unit performs pixel overlapping on the three-dimensional scenes of the foreground and the background, and ensures that the distance between the foreground and the background is consistent with the setting. In this way, the foreground trees and the background are organically fused together visually, producing a real sense of depth.
Then, the system establishes a radiation transmission equation, and takes reflection, refraction, scattering and other phenomena of light rays in the foreground and the background into consideration. By solving this equation, the system calculates the propagation path of the light rays emitted from the user viewpoint in the scene and determines the luminance value of each pixel.
Finally, the system renders the entire virtual reality scene using the calculated pixel values. The foreground trees present a realistic lighting effect, and the background matches the depth set by the user, forming a virtual reality scene similar to the real world.
Pixel overlap refers to the superposition of scenes of different depths at the pixel level to simulate the presence of objects at different depth locations. In the pixel overlapping process, the system needs to ensure that corresponding pixels in different depth scenes are kept at a certain distance visually so as to realize a real mixing effect. Pixel overlap can enhance the sense of depth of a virtual reality scene. By properly overlapping scene pixels of the foreground and the background, the human eyes can perceive the positions of different objects at different depths, thereby enhancing the layering and realism of the scene. Pixel overlap is a key step in creating a stereoscopic impression. By compositing scene pixels of different depths together, the observer will perceive that the object has an actual depth in the scene as if the object were actually present in virtual reality space. In the real world, the foreground and background of an object tend to blend visually rather than separate. The pixel overlapping can simulate the phenomenon, so that scenes with different depths can generate natural fusion effect visually, and the fidelity of the virtual reality scene is improved. By setting the distance between pixels in different depth scenes, smoothness of scene transitions can be achieved. Thus, at the junction of the foreground and the background, no obvious fracture sense appears, and a smooth transition effect is presented.
Taking a beach scene as an example, a palm tree is assumed to be used as a foreground, and sea waves and sky are assumed to be used as a background. When the pixels overlap, the system will ensure that there is a distance between the pixels of the palm tree and the pixels of the sea wave and sky that mimics the depth differences of the object in the real scene. Thus, in the final rendering process, the palm tree can be visually and stereoscopically integrated into sea waves and sky, and realistic depth and layering sense are generated, so that the observer can feel the stereoscopicity of the scene. This example demonstrates how pixel overlap can achieve blending and stereoscopic effects for a scene by setting the inter-pixel distance.
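The pixel overlap described above can be pictured with a short sketch. The following Python snippet is an illustrative assumption only: the function name, the use of simple NumPy arrays of 3D pixel positions, and the way the set distance is applied along the depth axis are not taken from the patent.

```python
import numpy as np

def overlap_layers(foreground_pts, background_pts, d):
    """Offset corresponding pixels of two depth layers by a set distance d
    along the depth (z) axis, then merge them into one overlapped scene.

    foreground_pts, background_pts: (H, W, 3) arrays of 3D pixel positions
    generated from the same 2D picture at two depth settings.
    d: the set distance kept between corresponding pixels.
    """
    background_shifted = background_pts.copy()
    # Keep the set spacing between each pixel and its corresponding pixel.
    background_shifted[..., 2] = foreground_pts[..., 2] + d
    # The overlapped scene simply contains both layers' points.
    return np.concatenate([foreground_pts.reshape(-1, 3),
                           background_shifted.reshape(-1, 3)], axis=0)
```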
The goal of generating a three-dimensional scene is to convert an input two-dimensional image into a three-dimensional scene with depth information to increase the realism and stereoscopic impression of the scene. This process is based on the parallax effect, which refers to the illusion of object position due to the angular difference between the lines of sight when two eyes see the same object, respectively. First, the system will preprocess the input two-dimensional picture, including segmentation and feature extraction of the image. This helps to identify different objects and regions in the image. Based on the depth value set by the user, the system calculates the parallax of each pixel. The pixels of the foreground object have larger parallax values, and the parallax values of the background object are smaller. And generating a depth map according to the calculated parallax value. In the depth map, pixels of the foreground object correspond to larger depth values and pixels of the background object correspond to smaller depth values. The system converts the two-dimensional coordinates of each pixel and the corresponding depth values into three-dimensional coordinates using the depth map. Thus, for different pixels of the same object, they are mapped onto different depth levels, thereby generating a three-dimensional scene with depth information.
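The depth-map-based lifting of a 2D picture into a 3D layer can likewise be sketched. The snippet below omits segmentation and feature extraction, and the pinhole-style back-projection and parameter names are assumptions made for illustration, not the patent's exact procedure.

```python
import numpy as np

def picture_to_3d(image, depth_value, focal_length=1.0):
    """Lift a 2D picture to a 3D point layer at a set depth value.

    image: (H, W, 3) RGB array.
    depth_value: the set depth (first or second depth value) for this layer.
    Returns an (H, W, 3) array of 3D positions plus the per-pixel colors.
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Simple pinhole back-projection: pixel coordinates scaled by depth.
    z = np.full((h, w), depth_value, dtype=float)
    x = (xs - w / 2.0) * z / focal_length
    y = (ys - h / 2.0) * z / focal_length
    points = np.stack([x, y, z], axis=-1)
    return points, image
```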
Example 2: the radiation transmission equation is expressed using the following formula:
wherein: $L_o(v,\omega_o)$ is the radiance emitted from voxel $v$ along direction $\omega_o$; $L_e(v,\omega_o)$ is the self-emitted radiance from voxel $v$ along direction $\omega_o$; $f_r(v,\omega_i,\omega_o)$ is the bidirectional reflection distribution function, describing the light reflection characteristic at voxel $v$ from incident direction $\omega_i$ towards direction $\omega_o$; $L_i(v,\omega_i)$ is the radiance entering voxel $v$ along direction $\omega_i$; $n$ is the normal vector of the surface; $\Omega$ is the hemisphere; $h_2$ is the second depth value; $h_1$ is the first depth value; $d$ is the distance to the corresponding pixel in the overlapped three-dimensional scene; $(\omega_i \cdot n)$ denotes the dot product of direction $\omega_i$ and normal vector $n$.
In particular, the radiation transmission equation is a physical equation describing the various effects that light is subjected to as it propagates through a medium. In a virtual reality scene, it is used to simulate the propagation path of light from a light source to an observer (or camera) and the interaction of light with the surface of an object during propagation. The radiation transmission equation can model the process by which light rays emanate from a light source, propagate through a scene, and ultimately reach an observer or camera. This enables ray behavior in a virtual scene to be accurately simulated and reproduced. By taking into account the factors of spontaneous emission, reflection, refraction, etc. in the equation, the radiation transmission equation can calculate the illumination intensity and color seen by the observer in different positions and directions. This enables the virtual scene to present a realistic lighting effect, increasing the realism of the scene. The bi-directional reflectance distribution function (BRDF) in the equation describes the reflectance characteristics of light at the surface of an object, including diffuse reflectance, specular reflectance, etc. This allows the surfaces of different materials to reflect light in a proper manner, increasing the detail and texture of the virtual scene. By considering the propagation path and interaction of light rays, the radiation transmission equation can simulate the shielding relation between objects, and real depth feeling and stereoscopic impression are generated. This makes the virtual reality scene more spatially perceived. In a virtual indoor scene, a window and a table are located in a room. Sunlight is emitted into a room through a window, and the surface of the table is illuminated. Radiation transmission equations can be used to model this process. It takes into account the spontaneous emission of sunlight, the reflective properties of windows and table surfaces, and the propagation path of light. By calculating the radiation transfer equation, the system can determine the illumination intensity, color, and shadow effect seen by the observer at different locations. Thus, sunlight irradiation and illumination effects in the virtual scene can be presented in a realistic manner, and the sense of reality of the virtual reality experience is enhanced.
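For orientation, the classical hemispherical form of such an equation, written with the symbols defined above, is sketched below; the patent's own formula additionally involves the first depth value $h_1$, the second depth value $h_2$ and the pixel distance $d$, and its exact form is not reproduced here.

$$ L_o(v,\omega_o) \;=\; L_e(v,\omega_o) \;+\; \int_{\Omega} f_r(v,\omega_i,\omega_o)\, L_i(v,\omega_i)\,(\omega_i \cdot n)\, \mathrm{d}\omega_i $$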
Example 3: the bi-directional reflection distribution function is expressed using the following formula:
wherein: $k_d$ is the diffuse reflection coefficient of voxel $v$; $k_s$ is the specular reflection coefficient of voxel $v$; $n$ is the normal vector of the surface; $r$ is the perfect reflection direction, calculated as $r = 2(\omega_i \cdot n)\,n - \omega_i$; $\alpha$ is the Phong exponent, which controls the sharpness of the specular highlight.
In particular, this bidirectional reflectance distribution function (BRDF) formula describes the distribution characteristics of light reflected at the voxel surface, i.e. the intensity distribution of the reflected light in different directions. $f_r(v,\omega_i,\omega_o)$ represents the distribution function for light arriving at the surface of voxel $v$ from direction $\omega_i$ and reflected into direction $\omega_o$; it determines the intensity distribution of the reflected light in different directions. $k_d$ is the diffuse reflection coefficient, representing the diffuse reflection characteristic of the surface of voxel $v$; it determines the intensity of diffuse reflection for different light directions. $k_s$ is the specular reflection coefficient, representing the specular reflection characteristic of the surface of voxel $v$; it determines the intensity of specular reflection for different light directions. $n$ is the surface normal vector, representing the normal direction of the surface of voxel $v$ at the given point. $r$ is the perfect reflection direction, an idealized direction representing the direction of light when mirror-reflected at the surface; it is calculated as $r = 2(\omega_i \cdot n)\,n - \omega_i$, where $\omega_i$ is the direction of the incident light. $\alpha$ is the Phong exponent, which controls the sharpness of highlights; it affects the distribution shape of the reflected light around the specular direction. In essence, this formula describes the reflection characteristics of the object surface in different directions in a mathematical way: the diffuse term accounts for light being scattered uniformly, the specular term accounts for the strong reflection of light in a specific direction, and the Phong exponent controls the degree of specular spread.
Consider a virtual metal sphere scene. The sphere surface has metallic properties and therefore a high specular reflectivity. In the formula, $k_d$ represents the (usually low) diffuse reflection property of the sphere, while $k_s$ represents its strong specular reflection characteristic. The surface normal vector $n$ points outward from the sphere surface at each point.
Assuming that light is incident on the sphere surface from a particular direction $\omega_i$, the perfect reflection direction $r$ obtained when calculating the specular term according to the formula corresponds to the reflection direction symmetric to the incident direction about the normal. In this way, the surface of the sphere exhibits a bright highlight, strongly reflecting the incident light.
The Phong exponent $\alpha$ controls the spread of the highlight: a larger value makes the highlight more concentrated and sharper in appearance, while a smaller value makes the highlight more diffuse and softer. This enables the virtual metal sphere to exhibit highlights and reflections in a realistic manner, enhancing the realism of the virtual reality scene.
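A minimal sketch of a Phong-style BRDF evaluation consistent with this example is given below; the function, its argument conventions and the simple (unnormalized) combination of diffuse and specular terms are assumptions for illustration rather than the patent's exact formula.

```python
import numpy as np

def phong_brdf(w_i, w_o, n, k_d, k_s, alpha):
    """Evaluate a Phong-style BRDF: a diffuse term plus a specular lobe
    around the perfect reflection direction r = 2(w_i . n) n - w_i.

    w_i: unit vector towards the light (incident direction).
    w_o: unit vector towards the viewer (outgoing direction).
    n:   unit surface normal.
    k_d, k_s: diffuse and specular reflection coefficients.
    alpha: Phong exponent controlling highlight sharpness.
    """
    r = 2.0 * np.dot(w_i, n) * n - w_i          # perfect reflection direction
    spec = max(np.dot(r, w_o), 0.0) ** alpha    # specular lobe
    return k_d / np.pi + k_s * spec             # simple (unnormalized) Phong BRDF
```

A larger `alpha` concentrates the specular lobe around `r`, which is the sharper-highlight behaviour described above.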
Example 4: the method for solving the radiation transmission equation to obtain the equation solving result comprises the following steps: for each voxel, emitting rays from the viewpoint; when the ray intersects with the overlapped three-dimensional scene, determining an intersection point; emitting one or more random rays from the intersection point into the scene, calculating the radiation of the random rays and weighting and averaging to obtain an intermediate value, and using the intermediate value as a voxelAnd taking the geometry of the outgoing radiation composition of all the voxels as an equation solving result.
Specifically, the radiation is emitted: for a viewpoint, rays are emitted from the viewpoint in a particular direction. These rays simulate the direction of the observer's line of sight.
Intersection point of ray and scene: the ray intersects an object in the overlapping three-dimensional scene. At the intersection point, the intersection point coordinates of the ray and the object surface are determined. These intersections represent locations in the scene where the ray arrived from the viewpoint.
Random ray emission: one or more random rays are emitted from the intersection point into the scene. These random rays represent secondary rays, such as diffusely reflected or scattered rays, emanating from the intersection point.
Radiance calculation and weighted average: the radiance of each random ray, i.e. the intensity carried along that ray, is calculated, taking into account the light sources, the material reflection characteristics and so on. The radiance values of all random rays are then weighted and averaged to obtain an intermediate value.
Intermediate value as outgoing radiance: this intermediate value is used as the outgoing radiance of the voxel. It reflects the intensity of light leaving the voxel in a particular direction.
Combining all voxel results: the above steps are repeated for all voxels to obtain the outgoing radiance of each voxel. The outgoing radiance values of all voxels are combined to form the solution of the radiation transmission equation for the whole scene. This result represents a simulation of ray propagation and interaction at the observation point position.
The method obtains an approximate solution of the radiation transmission equation by simulating the light propagation and interaction process of the observation point position and considering the influence of random light. This enables the virtual reality scene to present the lighting effect in a realistic manner, increasing the realism and detail of the scene.
Example 5: the voxel isIs calculated using the following formula:
wherein, $L_o(v)$ is the outgoing radiance of voxel $v$; $N$ is the number of random rays emitted from the intersection point; $p(\omega_i)$ is the probability density function for selecting direction $\omega_i$.
Specifically, the formula takes a plurality of randomly sampled rays at the voxel, calculates the reflection at that location and the radiance of the incident light for each ray, and forms a weighted average to obtain the outgoing radiance of the voxel. This process simulates the interaction of rays at the voxel surface and takes the influence of multiple random rays into account, thereby computing the intensity of the outgoing light more accurately.
This calculation is typically used in rendering engines in computer graphics to simulate the propagation and interaction of rays in a virtual reality scene and so generate realistic lighting effects.
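A minimal sketch of such a Monte Carlo estimate is given below; the callable names (`brdf`, `incident_radiance`, `sample_dir`) and the cosine-weighted averaging are assumptions used for illustration, not the patent's implementation.

```python
import numpy as np

def estimate_outgoing_radiance(x, n, w_o, brdf, incident_radiance, sample_dir, num_rays=64):
    """Estimate outgoing radiance at point x by averaging N random rays,
    each weighted by BRDF * incident radiance * cos(theta) / pdf.

    brdf(w_i, w_o, n)         -> scalar reflectance        (assumed callable)
    incident_radiance(x, w_i) -> scalar radiance along w_i  (assumed callable)
    sample_dir(n)             -> (w_i, pdf) over hemisphere (assumed callable)
    """
    total = 0.0
    for _ in range(num_rays):
        w_i, pdf = sample_dir(n)               # random direction and its PDF value
        cos_term = max(np.dot(w_i, n), 0.0)
        total += brdf(w_i, w_o, n) * incident_radiance(x, w_i) * cos_term / pdf
    return total / num_rays                    # weighted average = intermediate value
```

For instance, `sample_dir` could be a uniform hemisphere sampler that returns a PDF value of 1/(2π) for every direction.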
Example 6: the method for determining the pixel value of each pixel in the overlapped three-dimensional scene comprises the following steps: then, starting from the observation point, emitting rays into the overlapped three-dimensional scene, judging whether each emitted ray intersects with a voxel in the overlapped three-dimensional scene, if so, recording the position of the intersection point, at the intersection point, calculating the radiance of the intersection point according to a radiation transmission equation, and when the plane of the intersection point has a reflection attribute, emitting a reflection ray from the intersection point according to the reflection rule, wherein the direction of the reflection ray is calculated by using the following formula:the method comprises the steps of carrying out a first treatment on the surface of the Then, continuing to perform collision detection and radiance calculation on the reflected ray, and performing the same, and stopping collision detection and radiance calculation after iterating for set times; if a surface has refractive properties, when a ray hits this surface, the direction of the refracted ray is calculated by the following formula:
wherein,is the refractive index of the incident medium and n2 is the refractive index of the refractive medium. Then, collision detection and radiance calculation are carried out on the refraction ray, so that the collision detection and radiance calculation are stopped after iteration is carried out for a set number of times; calculating the pixel value of the pixel point by taking an average value after carrying out cumulative operation on the radiance at the intersection point of each ray; repeating the steps until the pixel values of all the pixel points are obtained through calculation.
Specifically, ray emission and intersection judgment: from the observation point position, rays are emitted into the overlapped three-dimensional scene, and for each ray it is judged whether it intersects a voxel in the scene. This step determines the ray propagation path.
Intersection radiance calculation: if the ray intersects a voxel, the position of the intersection point is recorded, and the radiance at the intersection point is then calculated from the radiation transmission equation. This step gives the intensity of the ray at the intersection point.
Reflected ray calculation: if the surface at the intersection point has a reflection attribute, the direction of the reflected ray is calculated according to the reflection rule. This step simulates the behavior of light when it is reflected off a surface.
Refracted ray calculation: if the surface has refractive properties, the direction of the refracted ray is calculated according to the law of refraction. This step simulates the behavior of light when it is refracted at a surface.
Iterative calculation: collision detection and radiance calculation are carried out for the reflected or refracted rays, and this process is iterated several times to simulate multiple reflections or refractions of light at surfaces.
Pixel value calculation: the radiance values at the intersection points of each ray are accumulated and finally averaged to obtain the pixel value of the pixel point. This value represents the illumination intensity in the scene as seen from the observation point position.
Repeated calculation: the above steps are repeated to calculate the pixel values of all pixel points. Through this process, the pixel value of every pixel in the overlapped three-dimensional scene is obtained.
This approach is commonly used in rendering engines to generate realistic lighting effects. By simulating the propagation, reflection and refraction of light rays, and taking into account the effects of multiple reflections or refractions, a more realistic virtual reality scene can be generated.
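The reflection and refraction steps used in the tracing loop above can be sketched as follows; the helper names are assumptions, `reflect` follows the mirror rule and `refract` follows Snell's law with refractive indices $n_1$ and $n_2$, returning None in the case of total internal reflection. The scene and intersection machinery is omitted.

```python
import numpy as np

def reflect(d, n):
    """Mirror-reflect incoming direction d about unit normal n."""
    return d - 2.0 * np.dot(d, n) * n

def refract(d, n, n1, n2):
    """Refract incoming direction d at a surface with unit normal n,
    going from refractive index n1 into n2 (Snell's law).
    Returns None on total internal reflection."""
    eta = n1 / n2
    cos_i = -np.dot(d, n)                     # assumes n opposes the incoming ray
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None                           # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n
```

In the iterative loop, each bounce would replace the current ray direction with `reflect(...)` or `refract(...)` and stop after the set number of iterations.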
Example 7: the first depth value satisfies the following constraint relationship:
wherein, $a_1$ is the first adjustment coefficient, with a value range of 0.25 to 0.35; $P$ is the number of pixels of the two-dimensional picture; $L$ is the length of the two-dimensional picture in pixels; $W$ is the width of the two-dimensional picture in pixels; $a_2$ is the second adjustment coefficient, with a value range of 0.35 to 0.5; $S$ is the area of the two-dimensional picture in units of square pixels.
In particular, this constraint is to ensure that the generated virtual reality scene can remain in the proper range in terms of depth values to avoid too planar or too deep scenes. Specifically:
left side constraintAn expression is represented that relates the number of pixels, length and width of a picture. Wherein (1)>Representing the calculation->Absolute value of logarithm of (c). Left side multiplied by the first adjustment factor->To control the range of constraints.
The right-hand constraint is an expression related to the picture area, where $S$ is the area of the picture combined with a linear combination of the picture's length and width; it is multiplied by the second adjustment coefficient $a_2$ to control the range of the constraint.
This constraint requires the first depth value to lie between these two bounds. The purpose of this design is to ensure that the depth values of the generated virtual reality scene are within a proper range, neither too flat nor too deep, thereby increasing the realism of the scene.
In practical applications, this constraint relationship may be used as one of the parameter settings for generating a virtual reality scene, so as to ensure that the generated scene meets the visual expectations in terms of sense of depth.
Example 8: the second depth value satisfies the following constraint relationship:
wherein, $P$ is the number of pixels of the two-dimensional picture; $L$ is the length of the two-dimensional picture in pixels; $W$ is the width of the two-dimensional picture in pixels; $a_2$ is the second adjustment coefficient, with a value range of 0.35 to 0.5; $S$ is the area of the two-dimensional picture in units of square pixels; $a_3$ is the third adjustment coefficient, with a value range of 0.6 to 0.9.
In particular, this constraint relation aims to ensure that the second depth value of the generated virtual reality scene lies within an appropriate range, neither too shallow nor too deep, so as to enhance the realism of the scene.
The left-hand constraint is an expression related to the picture area, where $S$ is the area of the picture combined with a linear combination of the picture's length and width; it is multiplied by the second adjustment coefficient $a_2$ to control the range of the constraint.
The right-hand constraint is also an expression related to the picture area; it is multiplied by the third adjustment coefficient $a_3$ to control the range of the constraint.
This constraint requires the second depth value to lie between these two bounds, ensuring that the depth values of the generated virtual reality scene are within the proper range. By adjusting the two coefficients $a_2$ and $a_3$, the value range of the second depth value can be controlled flexibly to meet different scene and visual requirements.
In practice, this constraint relationship may be used to adjust the sense of depth and fidelity of the virtual reality scene to achieve a more desirable visual effect.
Example 9: when the pixels are overlapped, the distance between each pixel in the first three-dimensional scene and each corresponding pixel in the second three-dimensional scene is calculated by using the following formula:
wherein, $d$ is the set distance.
In particular, when generating overlapping virtual reality scenes, it is desirable to keep pixels of different depth layers at a certain spacing so that the scene has a visually adequate sense of depth. The formula calculates the distance between pixels by taking the influence of the first depth value and the second depth value into account and determining the magnitude of the distance through an exponential function.
When the second depth value approaches the first depth value, the value of the exponential function is close to 1, meaning the set distance $d$ is near 0 and the interval between pixels is small. This applies to areas of smaller depth in the scene, keeping the pixels relatively close.
When the second depth value is greater than the first depth value, the value of the exponential function increases, meaning the distance $d$ gradually increases and the interval between pixels grows. This applies to areas of larger depth in the scene, ensuring adequate spacing between pixels.
This approach may adaptively adjust the spacing between pixels according to the change in depth values, thereby achieving a suitable sense of depth and visual effect in the overlapping virtual reality scene.
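The behaviour described in this example, spacing near zero when the two depth values coincide and growing as the second depth value exceeds the first, is consistent with an exponential spacing rule. The concrete form used below, exp(h2 - h1) - 1, is an assumption made for illustration only; the patent's exact formula is not reproduced here.

```python
import math

def pixel_spacing(h1, h2):
    """Illustrative spacing between corresponding pixels of the two layers.
    ASSUMED form: exp(h2 - h1) - 1, which is ~0 when h2 is close to h1 and
    grows as h2 exceeds h1, matching the behaviour described in the text."""
    return math.exp(h2 - h1) - 1.0

# Nearly equal depth values give almost no spacing; larger gaps grow quickly.
print(pixel_spacing(1.0, 1.0))   # 0.0
print(pixel_spacing(1.0, 2.5))   # ~3.48
```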
While specific embodiments of the present invention have been described above, it will be understood by those skilled in the art that these specific embodiments are by way of example only, and that various omissions, substitutions, and changes in the form and details of the methods and systems described above may be made by those skilled in the art without departing from the spirit and scope of the invention. For example, it is within the scope of the present invention to combine the above-described method steps to perform substantially the same function in substantially the same way to achieve substantially the same result. Accordingly, the scope of the invention is limited only by the following claims.

Claims (8)

1. A radiation transmission-based virtual reality scene generation system, the system comprising: a 3D scene generation unit and a virtual reality scene generation unit; the 3D scene generation unit is configured to generate, from an input two-dimensional picture, a first three-dimensional scene according to a set first depth value and a second three-dimensional scene according to a set second depth value; the virtual reality scene generation unit is configured to import the first and second three-dimensional scenes and overlap them pixel by pixel, keeping a set distance between each pixel in the first three-dimensional scene and its corresponding pixel in the second three-dimensional scene, so as to obtain an overlapped three-dimensional scene; divide the overlapped scene into a plurality of voxels; set an observation point position in the overlapped three-dimensional scene; establish a radiation transmission equation for the overlapped three-dimensional scene and solve it to obtain an equation solution result; emit rays from the observation point position into the overlapped three-dimensional scene and determine the pixel value of each pixel from the interaction of the rays with the overlapped three-dimensional scene together with the equation solution result; and finally render the overlapped three-dimensional scene based on the calculated pixel values to obtain the virtual reality scene; the radiation transmission equation is expressed using the following formula:
wherein: $L_o(v,\omega_o)$ is the radiance emitted from voxel $v$ along direction $\omega_o$; $L_e(v,\omega_o)$ is the self-emitted radiance from voxel $v$ along direction $\omega_o$; $f_r(v,\omega_i,\omega_o)$ is the bidirectional reflection distribution function, describing the light reflection characteristic at voxel $v$ from incident direction $\omega_i$ towards direction $\omega_o$; $L_i(v,\omega_i)$ is the radiance entering voxel $v$ along direction $\omega_i$; $n$ is the normal vector of the surface; $\Omega$ is the hemisphere; $h_2$ is the second depth value; $h_1$ is the first depth value; $d$ is the distance to the corresponding pixel in the overlapped three-dimensional scene; $(\omega_i \cdot n)$ denotes the dot product of direction $\omega_i$ and normal vector $n$.
2. The radiation transmission-based virtual reality scene generation system of claim 1, wherein the bidirectional reflection distribution function is expressed using the following formula:
wherein: $k_d$ is the diffuse reflection coefficient of voxel $v$; $k_s$ is the specular reflection coefficient of voxel $v$; $n$ is the normal vector of the surface; $r$ is the perfect reflection direction, calculated as $r = 2(\omega_i \cdot n)\,n - \omega_i$; $\alpha$ is the Phong exponent, which controls the sharpness of the specular highlight.
3. The radiation transmission-based virtual reality scene generation system of claim 2, wherein the method for solving the radiation transmission equation to obtain the equation solving result comprises: for each voxel, emitting a ray from the observation point; when the ray intersects the overlapped three-dimensional scene, determining the intersection point; emitting one or more random rays from the intersection point into the scene, calculating the radiance of each random ray and taking a weighted average to obtain an intermediate value, and using the intermediate value as the outgoing radiance of the voxel; and taking the set composed of the outgoing radiance of all voxels as the equation solving result.
4. The radiation transmission-based virtual reality scene generation system of claim 3, wherein the outgoing radiance of the voxel is calculated using the following formula:
wherein, $L_o(v)$ is the outgoing radiance of voxel $v$; $N$ is the number of random rays emitted from the intersection point; $p(\omega_i)$ is the probability density function for selecting direction $\omega_i$.
5. The radiation transmission-based virtual reality scene generation system of claim 4, wherein the method of determining the pixel value of each pixel in the overlapped three-dimensional scene comprises: starting from the observation point, emitting rays into the overlapped three-dimensional scene; judging whether each emitted ray intersects a voxel in the overlapped three-dimensional scene and, if so, recording the position of the intersection point; at the intersection point, calculating the radiance according to the radiation transmission equation; when the surface at the intersection point has a reflection attribute, emitting a reflected ray from the intersection point according to the reflection rule, the direction of the reflected ray being calculated from the incident direction and the surface normal according to the law of reflection; then continuing collision detection and radiance calculation for the reflected ray in the same manner, stopping after a set number of iterations; if a surface has refractive properties, when a ray hits this surface the direction of the refracted ray is calculated by the following formula:
wherein $n_1$ is the refractive index of the incident medium and $n_2$ is the refractive index of the refracting medium; then collision detection and radiance calculation are likewise carried out for the refracted ray, stopping after a set number of iterations; the radiance values at the intersection points of each ray are accumulated and averaged to calculate the pixel value of the pixel point; the above steps are repeated until the pixel values of all pixel points have been calculated.
6. The radiation transmission-based virtual reality scene generation system of claim 5, wherein the first depth value satisfies the following constraint relationship:
wherein, $a_1$ is the first adjustment coefficient, with a value range of 0.25 to 0.35; $P$ is the number of pixels of the two-dimensional picture; $L$ is the length of the two-dimensional picture in pixels; $W$ is the width of the two-dimensional picture in pixels; $a_2$ is the second adjustment coefficient, with a value range of 0.35 to 0.5; $S$ is the area of the two-dimensional picture in units of square pixels.
7. The radiation transmission-based virtual reality scene generation system of claim 6, wherein the second depth value satisfies the following constraint relationship:
wherein, $P$ is the number of pixels of the two-dimensional picture; $L$ is the length of the two-dimensional picture in pixels; $W$ is the width of the two-dimensional picture in pixels; $a_2$ is the second adjustment coefficient, with a value range of 0.35 to 0.5; $S$ is the area of the two-dimensional picture in units of square pixels; $a_3$ is the third adjustment coefficient, with a value range of 0.6 to 0.9.
8. The radiation transmission-based virtual reality scene generation system of claim 7, wherein the distance that each pixel in the first three-dimensional scene keeps to its corresponding pixel in the second three-dimensional scene during pixel overlapping is calculated using the following formula:
wherein, $d$ is the set distance.
CN202311142381.2A 2023-09-06 2023-09-06 Virtual reality scene generation system based on radiation transmission Active CN116883607B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311142381.2A CN116883607B (en) 2023-09-06 2023-09-06 Virtual reality scene generation system based on radiation transmission

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311142381.2A CN116883607B (en) 2023-09-06 2023-09-06 Virtual reality scene generation system based on radiation transmission

Publications (2)

Publication Number Publication Date
CN116883607A CN116883607A (en) 2023-10-13
CN116883607B 2023-12-05

Family

ID=88255401

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311142381.2A Active CN116883607B (en) 2023-09-06 2023-09-06 Virtual reality scene generation system based on radiation transmission

Country Status (1)

Country Link
CN (1) CN116883607B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117093817B (en) * 2023-10-20 2023-12-22 中国空气动力研究与发展中心计算空气动力研究所 Radiation transfer factor correction method for non-closed radiation heat exchange system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6791540B1 (en) * 1999-06-11 2004-09-14 Canon Kabushiki Kaisha Image processing apparatus
CN114863066A (en) * 2022-04-22 2022-08-05 贝塔通科技(北京)有限公司 Method and system for generating augmented reality scene presenting real object occlusion relation
WO2022222077A1 (en) * 2021-04-21 2022-10-27 浙江大学 Indoor scene virtual roaming method based on reflection decomposition

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8217368B2 (en) * 2010-11-05 2012-07-10 Ronald Everett Meyers System and method for determining three-dimensional information from photoemission intensity data
US10417824B2 (en) * 2014-03-25 2019-09-17 Apple Inc. Method and system for representing a virtual object in a view of a real environment
WO2017161039A1 (en) * 2016-03-15 2017-09-21 Magic Leap, Inc. Direct light compensation technique for augmented reality system
WO2019140414A1 (en) * 2018-01-14 2019-07-18 Light Field Lab, Inc. Systems and methods for rendering data from a 3d environment
US20230206538A1 (en) * 2020-05-07 2023-06-29 Ecole Polytechnique Federale De Lausanne (Epfl) Differentiable inverse rendering based on radiative backpropagation

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6791540B1 (en) * 1999-06-11 2004-09-14 Canon Kabushiki Kaisha Image processing apparatus
WO2022222077A1 (en) * 2021-04-21 2022-10-27 浙江大学 Indoor scene virtual roaming method based on reflection decomposition
CN114863066A (en) * 2022-04-22 2022-08-05 贝塔通科技(北京)有限公司 Method and system for generating augmented reality scene presenting real object occlusion relation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A Survey on Virtual Reality; Zhao QinPing; Science in China Series F: Information Sciences; 52(03); pp. 348-400 *
Research on modeling and virtual reality simulation of plant canopy illumination; Wang Haopeng et al.; System Sciences and Comprehensive Studies in Agriculture; Vol. 26, No. 01; pp. 68-74 *

Also Published As

Publication number Publication date
CN116883607A (en) 2023-10-13

Similar Documents

Publication Publication Date Title
Šoltészová et al. Chromatic shadows for improved perception
EP2951785B1 (en) Method and system for efficient modeling of specular reflection
CN116883607B (en) Virtual reality scene generation system based on radiation transmission
EP2419885A1 (en) Method for adding shadows to objects in computer graphics
Wyman Interactive image-space refraction of nearby geometry
CN112396684A (en) Ray tracing method, ray tracing device and machine-readable storage medium
CN112446943A (en) Image rendering method and device and computer readable storage medium
Hu et al. Realistic, real‐time rendering of ocean waves
Grosch Differential Photon Mapping-Consistent Augmentation of Photographs with Correction of all Light Paths.
Thompson et al. Real-time mixed reality rendering for underwater 360 videos
Gotanda Beyond a simple physically based Blinn-Phong model in real-time
Miyazaki et al. A fast rendering method of clouds using shadow-view slices
US10403033B2 (en) Preserving scene lighting effects across viewing perspectives
Pai An imitation of realistic subsurface scattering texture for physically based rendering workflow
Ross COSC 3P98: Ray Tracing Basics
Lee et al. Improved shading model in spectral-based ray tracing method
Antinozzi An overview of modern global illumination
Lorig Advanced image synthesis—shading
Domon et al. Real-time Rendering of Translucent Material by Contrast-Reversing Procedure
Holst Real-time rendering of subsurface scattering and skin
Frühstück et al. Caustics, Light Shafts, God Rays
Lee et al. Ray Tracing Method Based on Spectral Distribution for Reproducing Realistic Images
CN116977540A (en) Volume cloud rendering method and device, electronic equipment and storage medium
Peschel et al. Plausible visualization of the dynamic digital factory with massive amounts of lights
Xu et al. Interactive Reflection Simulation via Physical Shading Model and Fast Environment Mapping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant