CN115512034A - Virtual operation point light source real-time sampling method and device - Google Patents


Info

Publication number
CN115512034A
CN115512034A (application CN202210937518.2A)
Authority
CN
China
Prior art keywords
illumination
scene
virtual
light source
point
Prior art date
Legal status: Pending
Application number
CN202210937518.2A
Other languages
Chinese (zh)
Inventor
Wang Na (王娜)
Current Assignee
Fujian Polytechnic Normal University
Original Assignee
Fujian Polytechnic Normal University
Priority date
Filing date
Publication date
Application filed by Fujian Polytechnic Normal University filed Critical Fujian Polytechnic Normal University
Priority to CN202210937518.2A
Publication of CN115512034A
Legal status: Pending

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T15/00 3D [Three Dimensional] image rendering › G06T15/50 Lighting effects › G06T15/506 Illumination models
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T15/00 3D [Three Dimensional] image rendering › G06T15/50 Lighting effects › G06T15/60 Shadow generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The invention provides a method and device for real-time sampling of a point light source in virtual surgery. The method acquires the reflective shadow map information and the illumination information of a scene separately; samples the reflective shadow map information to generate virtual point light sources and performs interpolation reconstruction to obtain a complete depth map; calculates the overall illumination and shadow rendering of the scene from the scene's illumination information to obtain all the illumination; and optimizes the simulation of the virtual surgery scene's illumination from the complete depth map and all the illumination, thereby improving both the realism of the virtual surgery scene and the efficiency of the illumination calculation.

Description

Virtual operation point light source real-time sampling method and device
Technical Field
The invention relates to the technical field of virtual reality, and in particular to a method and device for real-time sampling of a point light source in virtual surgery.
Background
Foreign research on virtual reality applications started earlier, and corresponding research has been carried out on illumination techniques in virtual reality. In 2004, K. Dmitriev et al. proposed, at the ACM Symposium on Virtual Reality Software & Technology, "A CAVE system for interactive modeling of global illumination in car interior", which uses a precomputed radiance algorithm to simulate the global illumination of an enclosed environment; the method supports dynamic light sources and viewpoints but requires the scene to be static. In 2007, J. Mortensen et al. achieved global illumination simulation of a CAVE scene by constructing a Virtual Light Field (VLF) on the GPU, reducing the complexity of the scene's illumination calculation, but the algorithm does not support dynamic scenes and light sources. In 2010, Roger Hoang et al. proposed extending global illumination to virtual reality at the International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 2010), building virtual reality scene illumination with GPU-based photon mapping and supporting dynamic scenes and light sources. J. Happa et al. proposed "The Virtual Reconstruction and Daylight Illumination of the Panagia Angeloktisti" at VAST 2009: The 10th International Symposium on Virtual Reality, Archaeology and Cultural Heritage, capturing the change of illumination over one day and adopting image-based lighting, interpolating high-dynamic-range environment maps and illumination data acquired on site, to achieve the virtual reproduction of certain historical environments. P. Lensing et al. proposed "Instant indirect illumination for dynamic mixed reality scenes" at the IEEE International Symposium on Mixed and Augmented Reality, a global illumination algorithm in image space based on the reflective shadow map algorithm that, combined with an RGB-D camera, computes one bounce of indirect illumination for a virtual scene, thereby meeting the system's requirement for realism.
A great deal of research on the illumination of virtual reality scenes has also been carried out in China, and it has been applied in many fields. Liu Ying, in research on radiosity illumination models for indoor virtual roaming, adopted a projection method on the basis of the radiosity algorithm to calculate the form factors between micro surface elements and patches, and proposed a double-simplification algorithm for mesh models to improve the efficiency of the radiosity algorithm, applying it to a virtual indoor roaming scene. Liu Jun, in work on the application of high-dynamic-range illumination maps in virtual reality, applied high-dynamic-range illumination maps to virtual reality to approximate the effect of global illumination and further improve the rendering quality of the scene, but the method is only suitable for static scenes and occasions with fixed light sources. Wu Wenzhen et al., in research on illumination model algorithms and their application in an oil field virtual reality system, accelerated the ray tracing algorithm by adding bounding volumes to complex objects in the scene and deciding whether to perform ray-object intersection calculations by testing whether a ray intersects the bounding volume, and applied it to oil field virtual reality. Wang Lichuan, in research on real-time global illumination rendering technology in virtual reality systems, used an indoor light source technique and an object-based area light source technique, combined them with a translucent shadow technique, and introduced an ambient illumination technique on this basis to enhance indirect shadows, thereby realizing the simulation of realistic indoor illumination. Zhang Guilian, in research on indoor real-time global illumination technology, calculated the global illumination of the indoor main light source with a GPU-accelerated instant radiosity algorithm, and calculated the direct illumination of non-main light sources with precomputed light maps, realizing real-time indoor global illumination simulation under a dynamic main light source and viewpoint.
However, the above methods yield poor realism for the virtual surgery scene, and their illumination calculation efficiency is low.
Disclosure of Invention
Technical problem to be solved
In order to solve the above problems in the prior art, the invention provides a method and device for real-time sampling of a point light source in virtual surgery that can improve the realism of the virtual surgery scene and the efficiency of the illumination calculation.
(II) technical scheme
In order to achieve the purpose, the invention adopts a technical scheme that:
a virtual surgery point light source real-time sampling method comprises the following steps:
s1, respectively acquiring reflection shadow map information and illumination information of a scene;
s2, sampling according to the reflection shadow map information to generate a virtual point light source, and carrying out interpolation reconstruction to obtain a complete depth map;
s3, performing overall illumination and shadow rendering calculation of the scene according to the illumination information of the scene to obtain all illumination;
and S4, optimizing and simulating the illumination of the virtual operation scene according to the complete depth map and all the illumination.
In order to achieve the purpose, the invention adopts another technical scheme as follows:
A device for real-time sampling of a point light source in virtual surgery, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor; when executing the program, the processor implements the following steps:
s1, respectively acquiring reflection shadow map information and illumination information of a scene;
s2, sampling according to the reflection shadow map information to generate a virtual point light source, and carrying out interpolation reconstruction to obtain a complete depth map;
s3, performing overall illumination and shadow rendering calculation of the scene according to the illumination information of the scene to obtain all illumination;
and S4, optimizing and simulating the illumination of the virtual operation scene according to the complete depth map and all the illumination.
(III) advantageous effects
The invention has the following beneficial effects: the reflective shadow map information and the illumination information of the scene are acquired separately; virtual point light sources are generated by sampling the reflective shadow map information, and interpolation reconstruction is performed to obtain a complete depth map; the overall illumination and shadow rendering of the scene are calculated from the scene's illumination information to obtain all the illumination; and the illumination of the virtual surgery scene is simulated in an optimized manner from the complete depth map and all the illumination, thereby improving both the realism of the virtual surgery scene and the efficiency of the illumination calculation.
Drawings
FIG. 1 is a flow chart of a method for real-time sampling of a point light source in virtual surgery according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the overall structure of a device for real-time sampling of a point light source in virtual surgery according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the Ping-Pong algorithm;
FIG. 4 is a schematic diagram of the sampling process at the Ping stage;
FIG. 5 is a schematic diagram of the Pong stage interpolation process;
fig. 6 is a schematic view of an illumination model.
[ description of reference ]
1: a device for real-time sampling of a point light source in virtual surgery;
2: a memory;
3: a processor.
Detailed Description
For the purpose of better explaining the present invention and to facilitate understanding, the present invention will be described in detail by way of specific embodiments with reference to the accompanying drawings.
Example one
Referring to fig. 1, a method for real-time sampling of a point light source in virtual surgery includes the steps of:
s1, respectively acquiring reflection shadow map information and illumination information of a scene;
in this embodiment, step S1 specifically includes:
s11, acquiring reflection shadow map information of a scene by taking each light source as a visual angle, and storing the reflection shadow map information into a G-Buffer;
and S12, acquiring the illumination information of the scene by taking the camera as a visual angle, and storing the illumination information into the G-Buffer.
S2, sampling according to the reflection shadow map information to generate a virtual point light source, and carrying out interpolation reconstruction to obtain a complete depth map;
in this embodiment, step S2 specifically includes:
s21, generating a virtual point light source by adopting Halton sequence sampling according to the reflection shadow map information;
s22, generating a corresponding shadow map for the virtual point light source by adopting parabolic mapping;
in this embodiment, step S22 specifically includes:
If there is an orthographic camera in the scene facing a reflective paraboloid, where the paraboloid is parameterized as:

$$f(x,y)=\frac{1}{2}-\frac{1}{2}\left(x^{2}+y^{2}\right),\qquad x^{2}+y^{2}\le 1$$

the camera can acquire the depth information of the scene for the hemispherical region centered at (0,0,0) and oriented toward the camera direction (0,0,1). To implement the space-to-plane mapping with the paraboloid, a point P = (x, y, f(x, y)) is found on the paraboloid that reflects the incident direction $\vec{d}_i$ into the direction $d_0=(0,0,1)$; from the above equation, the normal vector at point P can be expressed as:

$$\vec{n}_P=(x,\,y,\,1)^{T}$$

For the paraboloid to produce this reflection exactly, it suffices to compute the half vector

$$\vec{h}=\vec{d}_i+\vec{d}_0$$

which, multiplied by a scaling factor so that its z component equals 1, takes the same value as $\vec{n}_P$. Combining the half vector with the above formula, the plane mapping of $\vec{d}_i$ is represented by the following function:

$$(x,y)=\left(\frac{h_x}{h_z},\;\frac{h_y}{h_z}\right)$$

Through this derivation, the parabolic mapping yields a parametric representation of the hemispherical space in the two-dimensional coordinates (x, y), and depth values are read with these two-dimensional coordinates. Therefore, during shadow mapping, the parabolic mapping can directly replace perspective projection. When acquiring the depth values of the scene, the depth value of a pixel can be represented by the distance from the point on the scene surface to the center point (0,0,0) of the paraboloid, thereby obtaining all the information relevant to shadow mapping.

Let (0,0,0) denote the center point of the coordinate system in which the virtual point light source is located, P a scene point in homogeneous coordinates, $M_{light}$ a transformation matrix, and $M_{model}$ a model matrix. To generate the parabolic shadow map of a virtual point light source, the coordinates of the scene point are first converted into the coordinate system in which the virtual point light source is located, denoted P', and then normalized by dividing by the homogeneous coordinate value $P'_{\omega}$ of the scene point:

$$P'=M_{light}\,M_{model}\,P$$

$$P'=P'/P'_{\omega}$$

Since the illumination range of the virtual point light source covers the hemispherical region whose normal vector is (0,0,1), pixel-by-pixel culling is performed during the generation of the parabolic shadow map to discard irrelevant pixels. An α value is calculated for each scene point, where $P'_z$ represents the Z coordinate value of the scene point and $Z_{scale}$ represents a user-defined constant; the α value is obtained by multiplying the Z coordinate value of the scene point by the user-defined scaling constant and then adding 0.5, and pixels whose α value is greater than 0.5 are culled:

$$\alpha=P'_z\cdot Z_{scale}+0.5$$

$lenth_{P'}$ represents the distance from P' to the center point (0,0,0) of the coordinate system in which the virtual point light source is located, and P' is normalized by this distance:

$$lenth_{P'}=\lVert P'\rVert$$

$$P'=P'/lenth_{P'}$$

In this process, because of the limited precision of the depth buffer, scaling is performed and the offset value $d_0=(0,0,1)$ is added:

$$P'=P'+d_0$$

The scene point is then mapped onto the paraboloid by updating its $P'_x$ and $P'_y$ coordinate values:

$$P'_x=P'_x/P'_z,\qquad P'_y=P'_y/P'_z$$

Meanwhile, in order to prevent the self-shadowing phenomenon, the distance $lenth_{P'}$ is rescaled into the depth range and an offset value $Z_{bias}$ is added to the Z component, recorded as $P'_z$:

$$P'_z=\frac{lenth_{P'}-Z_{near}}{Z_{far}-Z_{near}}+Z_{bias}$$

where, owing to the precision of the depth buffer, the Z component would otherwise carry an error; $Z_{near}$ denotes the minimum value of the Z component and $Z_{far}$ denotes the maximum value of the Z component.
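To make the coordinate pipeline above concrete, the following is a minimal sketch, in Python/NumPy, of the parabolic shadow-map coordinate computation for one scene point. The function name, matrix conventions, and default constants are illustrative assumptions rather than the patent's reference implementation:

```python
import numpy as np

def paraboloid_coords(P_world, M_light, M_model,
                      z_near, z_far, z_scale=1.0, z_bias=0.001):
    """Map a scene point to paraboloid shadow-map coordinates.

    P_world: homogeneous point (4,); M_light, M_model: 4x4 matrices.
    Returns (x, y, depth), or None if the point is culled (alpha > 0.5).
    """
    # Transform into the light's coordinate system and de-homogenize.
    P = M_light @ M_model @ np.asarray(P_world, dtype=float)
    P = P / P[3]

    # Pixel-by-pixel culling, as in the text: alpha > 0.5 is discarded.
    alpha = P[2] * z_scale + 0.5
    if alpha > 0.5:
        return None

    # Normalize by the distance to the light's center point (0,0,0).
    length = np.linalg.norm(P[:3])
    P[:3] /= length

    # Add the offset d0 = (0, 0, 1): the half-vector construction.
    P[2] += 1.0

    # Project onto the paraboloid's 2D parameter plane.
    x = P[0] / P[2]
    y = P[1] / P[2]

    # Distance-based depth with a bias against self-shadowing.
    depth = (length - z_near) / (z_far - z_near) + z_bias
    return x, y, depth
```

A GPU implementation would evaluate the same arithmetic per vertex or per pixel in a shader; the sketch only fixes the order of operations described above.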
And S23, carrying out interpolation reconstruction on the missing depth value by adopting an image space Ping-Pong algorithm according to the shadow map to obtain a complete depth map.
S3, performing overall illumination and shadow rendering calculation of the scene according to the illumination information of the scene to obtain all illumination;
in this embodiment, step S3 specifically includes:
s31, performing cross sampling according to the illumination information of the scene to generate a secondary cache;
in this embodiment, step S31 specifically includes:
the cross sampling is performed by sampling the illumination information of the scene in a cross sampling mode of n × m size to generate a plurality of sub-caches of n × m size, and then illumination is calculated for each sub-cache by adopting a delayed rendering technology in sequence, so that the pixel-by-pixel illumination calculation of multiple light sources can be converted into one illumination calculation for each sub-cache, and the illumination calculation efficiency is improved.
After cross sampling, the pixel at coordinates (x, y) becomes the pixel located at coordinates (s, t) in sub-buffer $G_{i,j}$, where:
i=mod(x,n)
j=mod(y,m)
s=x/n
t=y/m
where $G_{i,j}$ denotes the sub-buffer whose horizontal index is i and whose vertical index is j, and the divisions are integer divisions.
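As a quick illustration of this index arithmetic, a sketch (with `//` denoting integer division; the tuple layout is an assumption):

```python
def cross_sample_index(x, y, n, m):
    """Map a G-Buffer pixel (x, y) to its sub-buffer (i, j) and the
    pixel position (s, t) inside it, under n x m cross sampling."""
    i, j = x % n, y % m      # which of the n*m sub-buffers
    s, t = x // n, y // m    # position inside that sub-buffer
    return (i, j), (s, t)

# Example: with 2 x 2 interleaving, pixel (5, 3) lands in
# sub-buffer (1, 1) at position (2, 1).
assert cross_sample_index(5, 3, 2, 2) == ((1, 1), (2, 1))
```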
And S32, performing overall illumination and shadow rendering calculation of the scene by adopting delayed rendering according to the secondary cache to obtain all illumination.
In this embodiment, step S32 specifically includes:
First, the diffuse reflection component of direct illumination is calculated with the Lambert illumination model. Denote the diffuse reflected light by $L_{lambert}$; then:

$$L_{lambert}=k_d I_l\cos\theta$$

where $I_l$ is the intensity of the incident light emitted by the light source, $k_d$ is the diffuse reflectance of the object surface, and θ is the angle between the incident light and the normal vector of the object surface. If the normal vector at a scene surface point x is $n_p$ and the unit vector from point x to the light source is L, the above equation can be expressed in vector form as follows:

$$L_{lambert}=k_d I_l\max\{0,\langle n_p\mid L\rangle\}$$

where ⟨·|·⟩ denotes the dot product.

Secondly, according to the law of reflection, reflected light and incident light are distributed symmetrically on both sides of the surface normal direction. When the object surface is a pure mirror, the incident light is reflected in a single direction in strict accordance with the law of reflection. Denoting the unit reflection direction by R:

$$R=2n_p\langle n_p\mid L\rangle-L$$

Since the surface of a typical object actually consists of many micro-facets with different orientations, the specularly reflected light of the object is distributed around the ideal specular reflection direction R. According to the Phong illumination model, denoting the reflected light brightness by $L_{specular}$:

$$L_{specular}=I_l W(\theta)\cos^{n}\alpha$$

where W(θ) is the specular reflectance of the object surface, a function of the incident angle θ and the wavelength of the incident light, generally taken as a constant $k_s$ with $0\le k_s\le 1$; n is the specular highlight exponent, which characterizes how tightly the specularly reflected light converges in space; and α is the angle between the unit view vector V and the unit specular reflection vector R at a visible point on the scene surface.

Since the Phong illumination model is a purely geometric illumination model, the reflected light intensity is related only to the viewing angle α; thus:

$$\cos\alpha=\max\{0,\langle V\mid R\rangle\}$$

$$L_{specular}=I_l k_s\left(\max\{0,\langle V\mid R\rangle\}\right)^{n}$$

Thus, the direct illumination component $L_{direct}$ of the virtual surgery scene can be expressed as:

$$L_{direct}=L_{lambert}+L_{specular}$$
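A minimal sketch of this direct-illumination evaluation (Python/NumPy; the helper name and parameter packaging are assumptions, and all direction vectors are taken to be unit length):

```python
import numpy as np

def direct_illumination(I_l, k_d, k_s, n_exp, n_p, L, V):
    """Lambert diffuse plus Phong specular at one surface point.

    I_l: incident light intensity; k_d, k_s: diffuse/specular reflectance;
    n_exp: specular highlight exponent; n_p: unit surface normal;
    L: unit vector toward the light; V: unit vector toward the viewer.
    """
    # L_lambert = k_d * I_l * max(0, <n_p|L>)
    lambert = k_d * I_l * max(0.0, float(np.dot(n_p, L)))

    # Ideal mirror direction: R = 2 * n_p * <n_p|L> - L
    R = 2.0 * n_p * np.dot(n_p, L) - L

    # L_specular = I_l * k_s * max(0, <V|R>)^n
    specular = I_l * k_s * max(0.0, float(np.dot(V, R))) ** n_exp

    # L_direct = L_lambert + L_specular
    return lambert + specular
```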
Next, the indirect illumination of the virtual surgery scene is calculated. Denote the depth value of a scene point acquired from the camera viewpoint by $d_p$, its world-space coordinate by $x_p$, its surface normal vector by $n_p$, and its reflected radiant flux by $\phi_p$, where $\phi_p$ determines the brightness of the virtual point light source and $n_p$ its radiation characteristic.

For a virtual point light source p in the scene, its radiant intensity in the direction ω is:

$$I_p(\omega)=\phi_p\max\{0,\langle n_p\mid\omega\rangle\}$$

The illumination $E_p$ contributed by the virtual point light source to a point x with normal vector n in the scene can be expressed as:

$$E_p(x,n)=\phi_p\,\frac{\max\{0,\langle n_p\mid x-x_p\rangle\}\;\max\{0,\langle n\mid x_p-x\rangle\}}{\lVert x-x_p\rVert^{4}}$$

The indirect illumination E(x, n) that scene point x receives from all the virtual point light sources can be expressed as:

$$E(x,n)=\sum_{p}E_p(x,n)$$

In summary, denoting the global illumination of the virtual surgery scene by $L_{global}$:

$$L_{global}=L_{direct}+E(x,n)$$

Finally, denoting the global illumination of light sources 1 to 4 by $L_{global1}$, $L_{global2}$, $L_{global3}$ and $L_{global4}$ respectively, all the illumination $L_{scene}$ acquired by the scene can be expressed as:

$$L_{scene}=L_{global1}+L_{global2}+L_{global3}+L_{global4}$$

where $L_{global1}$ to $L_{global4}$ correspond to the 4 light sources placed in a cross-symmetric manner directly above the scene origin as the center.
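Per shaded pixel, the indirect term is a gather over the sampled virtual point light sources. A sketch under the assumption that the $E_p$ formula above (the reflective-shadow-map form) is evaluated directly, with a small clamp added against the singularity at zero distance:

```python
import numpy as np

def indirect_illumination(x, n, vpl_pos, vpl_normal, vpl_flux):
    """Sum E_p(x, n) over all virtual point light sources.

    x, n: shaded point and its unit normal (3-vectors);
    vpl_*: sequences of VPL positions, unit normals, and fluxes phi_p.
    """
    E = 0.0
    for x_p, n_p, phi_p in zip(vpl_pos, vpl_normal, vpl_flux):
        d = x - x_p
        r2 = float(np.dot(d, d)) + 1e-6          # ||x - x_p||^2, clamped
        # E_p = phi_p * max(0,<n_p|x-x_p>) * max(0,<n|x_p-x>) / ||x-x_p||^4
        E += phi_p * max(0.0, float(np.dot(n_p, d))) \
                   * max(0.0, float(np.dot(n, -d))) / (r2 * r2)
    return E
```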
And S4, optimizing and simulating the illumination of the virtual operation scene according to the complete depth map and all the illumination.
In this embodiment, step S4 specifically includes:
and performing anti-aliasing treatment according to the complete depth map and all the illumination, and performing optimized simulation on the illumination of the virtual operation scene.
Example two
The difference between this embodiment and the first embodiment is that this embodiment further illustrates, in combination with a specific application scenario, how the virtual surgery point light source real-time sampling method of the invention is implemented:
step1, respectively taking each light source as a visual angle to acquire reflection shadow map information of a scene and storing the reflection shadow map information in a G-Buffer, and taking a camera as the visual angle to acquire illumination information of the scene and also storing the illumination information in the G-Buffer.
Specifically, the material, world-space coordinates and normal-vector information of the scene points are obtained with the multiple-render-target technique of OpenGL, and the obtained illumination information is stored in the G-Buffer with the render-to-texture technique. Meanwhile, using the programmable rendering pipeline of OpenGL and its multiple-render-target capability, the reflective shadow map information of the scene can be obtained by rendering the scene only once, and the obtained information is likewise stored in the G-Buffer with the render-to-texture technique.
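For readers unfamiliar with the multiple-render-target setup this relies on, here is a minimal PyOpenGL sketch of creating such a G-Buffer; the number of targets, the texture formats, and the surrounding context creation are assumptions, not the patent's implementation:

```python
from OpenGL.GL import *  # assumes a current OpenGL 3.3+ context exists

def create_gbuffer(width, height, num_targets=3):
    """Create an FBO with one float texture per render target
    (e.g. world position, normal, flux for a reflective shadow map)."""
    fbo = glGenFramebuffers(1)
    glBindFramebuffer(GL_FRAMEBUFFER, fbo)

    textures = glGenTextures(num_targets)
    attachments = []
    for i in range(num_targets):
        glBindTexture(GL_TEXTURE_2D, textures[i])
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height,
                     0, GL_RGBA, GL_FLOAT, None)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                               GL_TEXTURE_2D, textures[i], 0)
        attachments.append(GL_COLOR_ATTACHMENT0 + i)

    # One scene pass writes all targets at once (multiple render targets).
    glDrawBuffers(num_targets, attachments)
    assert glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE
    glBindFramebuffer(GL_FRAMEBUFFER, 0)
    return fbo, textures
```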
Step 2: Virtual point light sources are generated by Halton-sequence sampling of the reflective shadow map information; a corresponding shadow map is then generated for the virtual point light sources with parabolic mapping, and the missing depth values are reconstructed by interpolation with the image-space Ping-Pong algorithm according to the shadow map, giving a complete depth map. Meanwhile, the scene illumination information stored in the G-Buffer is cross-sampled to generate sub-buffers, on the basis of which the overall illumination and shadow rendering of the scene are calculated with deferred rendering to obtain all the illumination.
Step 3: Anti-aliasing is applied to the illumination result stored in the texture, and the optimized simulation of the virtual surgery scene illumination is finally achieved by accumulating the illumination of the 4 light sources.
Step 2 consists of two parallel sub-steps, called Step 2.1 and Step 2.2.
Step 2.1 is divided into three sub-steps, called Step 2.1.1, Step 2.1.2 and Step 2.1.3.
Step 2.1.1: The generation of the Halton sequence can be described as follows. First, a prime number is selected as the base of the Halton sequence. Taking base 2 as an example, repeatedly halving the (0,1) interval with base 2 yields the sequence 1/2, 1/4, 3/4, 1/8, 5/8, 3/8, 7/8, 1/16, 9/16, .... Sampling with the Halton sequence generates the sampling coordinates of the virtual point light sources, thereby realizing the sampling of the reflective shadow map. Each light source generates 100 virtual point light sources by sampling; the two coordinate values of each sampling point are generated with the prime numbers 2 and 3 as bases, stored as a texture, and encoded respectively as the R and G components of an RGB color.
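A minimal sketch of the standard radical-inverse construction of the Halton sequence (the count of 100 samples per light source follows the text above; the function name is illustrative):

```python
def halton(index, base):
    """Radical inverse of `index` in the given prime base (index >= 1)."""
    result, f = 0.0, 1.0 / base
    while index > 0:
        result += f * (index % base)
        index //= base
        f /= base
    return result

# 2D sampling coordinates for 100 virtual point light sources per light,
# using the prime bases 2 and 3 as in the text; the first base-2 values
# are 1/2, 1/4, 3/4, 1/8, 5/8, ...
samples = [(halton(i, 2), halton(i, 3)) for i in range(1, 101)]
```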
Step 2.1.2: The indirect illumination radiation range of each virtual point light source in the invention is the hemispherical region pointed to by the normal direction of the virtual point light source. Therefore, when the traditional shadow mapping algorithm is used to obtain the scene depth map, its field of view is insufficient. Suppose there is an orthographic camera in the scene facing a reflective paraboloid, where the paraboloid is parameterized as follows:

$$f(x,y)=\frac{1}{2}-\frac{1}{2}\left(x^{2}+y^{2}\right),\qquad x^{2}+y^{2}\le 1$$

The camera can acquire the depth information of the scene for the hemispherical region centered at (0,0,0) and oriented toward the camera direction (0,0,1).

To implement the space-to-plane mapping with the paraboloid, a point P = (x, y, f(x, y)) is found on the paraboloid that reflects the incident direction $\vec{d}_i$ into the direction $d_0=(0,0,1)$; from the above equation it follows that the normal vector at point P can be expressed as:

$$\vec{n}_P=(x,\,y,\,1)^{T}$$

For the paraboloid to produce this reflection exactly, it suffices to compute the half vector

$$\vec{h}=\vec{d}_i+\vec{d}_0$$

which, multiplied by a scaling factor so that its z component equals 1, takes the same value as $\vec{n}_P$. Combining the half vector with the above formula, the plane mapping of $\vec{d}_i$ is represented by the following function:

$$(x,y)=\left(\frac{h_x}{h_z},\;\frac{h_y}{h_z}\right)$$

Through this derivation, the parabolic mapping yields a parametric representation of the hemispherical space in the two-dimensional coordinates (x, y), and depth values are read with these two-dimensional coordinates. Therefore, during shadow mapping, the parabolic mapping can directly replace perspective projection. When acquiring the depth values of the scene, the depth value of a pixel can be represented by the distance from the point on the scene surface to the center point (0,0,0) of the paraboloid, thereby obtaining all the information relevant to shadow mapping.

Let (0,0,0) denote the center point of the coordinate system in which the virtual point light source is located, P a scene point in homogeneous coordinates, $M_{light}$ a transformation matrix, and $M_{model}$ a model matrix. To generate the parabolic shadow map of a virtual point light source, the coordinates of the scene point are first converted into the coordinate system in which the virtual point light source is located, denoted P', and then normalized by dividing by the homogeneous coordinate value $P'_{\omega}$ of the scene point:

$$P'=M_{light}\,M_{model}\,P$$

$$P'=P'/P'_{\omega}$$

Since the illumination range of the virtual point light source covers the hemispherical region whose normal vector is (0,0,1), pixel-by-pixel culling is performed during the generation of the parabolic shadow map to discard irrelevant pixels. An α value is calculated for each scene point, where $P'_z$ represents the Z coordinate value of the scene point and $Z_{scale}$ represents a user-defined constant; the α value is obtained by multiplying the Z coordinate value of the scene point by the user-defined scaling constant and then adding 0.5, and pixels whose α value is greater than 0.5 are culled:

$$\alpha=P'_z\cdot Z_{scale}+0.5$$

$lenth_{P'}$ represents the distance from P' to the center point (0,0,0) of the coordinate system in which the virtual point light source is located, and P' is normalized by this distance:

$$lenth_{P'}=\lVert P'\rVert$$

$$P'=P'/lenth_{P'}$$

In this process, because of the limited precision of the depth buffer, scaling is performed and the offset value $d_0=(0,0,1)$ is added:

$$P'=P'+d_0$$

The scene point is then mapped onto the paraboloid by updating its $P'_x$ and $P'_y$ coordinate values:

$$P'_x=P'_x/P'_z,\qquad P'_y=P'_y/P'_z$$

Meanwhile, in order to prevent the self-shadowing phenomenon, the distance $lenth_{P'}$ is rescaled into the depth range and an offset value $Z_{bias}$ is added to the Z component, recorded as $P'_z$:

$$P'_z=\frac{lenth_{P'}-Z_{near}}{Z_{far}-Z_{near}}+Z_{bias}$$

where, owing to the precision of the depth buffer, the Z component would otherwise carry an error; $Z_{near}$ denotes the minimum value of the Z component and $Z_{far}$ denotes the maximum value of the Z component.
Step 2.1.3: The Ping-Pong algorithm is divided into two stages, Ping and Pong.
The Ping stage performs the Pull operation of the image and the Pong stage performs the Push operation. The algorithm needs two texture objects, which alternately hold the hierarchy images generated by the Pull/Push operations and supply the specified hierarchy image as input to the next Pull/Push operation. Suppose the generated original shadow map is $A_1$, stored in texture A. Another texture B is created; the result $B_1$ of performing the first Pull operation on $A_1$ is saved to a blank area of texture B; then the hierarchy image $B_1$ in texture B is input to a second Pull operation, whose result $A_2$ is saved to another blank area of texture A; then $A_2$ is input to a third Pull operation, whose result $B_2$ is saved to another blank area of texture B; and so on in sequence, as shown in fig. 3.
After all operations of the Ping stage have been performed, the hierarchy images missing from each texture are copied over from the other, so that after the copy both texture maps hold all the hierarchy images, as shown in fig. 3. Then the Push operation starts. First, the two lowest-resolution hierarchy images in texture B, $B_3$ and $B_4$, undergo a Push operation, and the generated image is saved over the $A_3$ image of texture A; texture A then uses the $A_3$ hierarchy image together with the higher-resolution hierarchy image $A_2$ in a Push operation whose output overwrites the $B_2$ image of texture B. Processing continues in this way, finally producing a complete depth map.
Corresponding to the operations shown in fig. 3, and taking two Ping/Pong passes as an example, the change of the shadow map during the Ping stage is shown in fig. 4.
The change of the shadow map during the Pong stage, corresponding to its Ping stage, is shown in fig. 5. As can be seen from fig. 5, after two Ping/Pong passes many of the missing depth values in the shadow map have been reconstructed; therefore, a complete depth map can be obtained by increasing the number of Ping/Pong passes on this basis.
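A compact sketch of the Pull/Push reconstruction on a 2D depth array (Python/NumPy). It stores the hierarchy as a list of arrays rather than two ping-pong textures, treats 0 as a missing depth value, and assumes power-of-two resolutions — all simplifying assumptions relative to the texture-based scheme above:

```python
import numpy as np

def pull(img):
    """Halve resolution; each coarse texel averages its valid fine texels."""
    h, w = img.shape
    out = np.zeros((h // 2, w // 2))
    for y in range(h // 2):
        for x in range(w // 2):
            block = img[2 * y:2 * y + 2, 2 * x:2 * x + 2]
            valid = block[block > 0]
            if valid.size:
                out[y, x] = valid.mean()
    return out

def push(coarse, fine):
    """Fill missing fine texels from the next-coarser hierarchy level."""
    out = fine.copy()
    for y in range(fine.shape[0]):
        for x in range(fine.shape[1]):
            if out[y, x] == 0:
                out[y, x] = coarse[y // 2, x // 2]
    return out

def reconstruct(depth):
    levels = [depth]
    while levels[-1].shape[0] > 1:               # Ping: build the pyramid
        levels.append(pull(levels[-1]))
    for lv in range(len(levels) - 2, -1, -1):    # Pong: push back down
        levels[lv] = push(levels[lv + 1], levels[lv])
    return levels[0]
```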
Step 2.2 is divided into two sub-steps, called Step 2.2.1 and Step 2.2.2.
Step 2.2.1: When the deferred rendering technique is used for illumination calculation, illumination must be calculated pixel by pixel for every screen pixel, which brings a high fill rate when there are multiple light sources in the scene. Cross sampling samples the illumination information of the scene in an interleaved n × m pattern, splitting it into n·m sub-buffers, and then calculates illumination for each sub-buffer with the deferred rendering technique, so that the pixel-by-pixel illumination calculation for multiple light sources is converted into one illumination calculation per sub-buffer, which improves the efficiency of the illumination calculation.
After cross sampling, the pixel in the G-Buffer at coordinates (x, y) becomes the pixel located at coordinates (s, t) in sub-buffer $G_{i,j}$, where:
i=mod(x,n)
j=mod(y,m)
s=x/n
t=y/m
where $G_{i,j}$ denotes the sub-buffer whose horizontal index is i and whose vertical index is j, and the divisions are integer divisions.
Step 2.2.2: The diffuse reflection component of direct illumination is calculated with the Lambert illumination model, which describes the reflection behavior of an ideal diffuse surface under the illumination of a light source. According to Lambert's cosine law, the intensity of the diffusely reflected light produced by an ideal diffuse surface is proportional to the cosine of the angle between the incident light and the surface normal direction and is independent of the viewing angle. Denote the diffuse reflected light by $L_{lambert}$; then:

$$L_{lambert}=k_d I_l\cos\theta$$

where $I_l$ is the intensity of the incident light emitted by the light source, $k_d$ is the diffuse reflectance of the object surface, and θ is the angle between the incident light and the normal vector of the object surface. If the normal vector at a scene surface point x is $n_p$ and the unit vector from point x to the light source is L, the above equation can be expressed in vector form as follows:

$$L_{lambert}=k_d I_l\max\{0,\langle n_p\mid L\rangle\}$$

where ⟨·|·⟩ denotes the dot product.

Secondly, according to the law of reflection, reflected light and incident light are distributed symmetrically on both sides of the surface normal direction. When the object surface is a pure mirror, the incident light is reflected in a single direction in strict accordance with the law of reflection. Denoting the unit reflection direction by R:

$$R=2n_p\langle n_p\mid L\rangle-L$$

Since the surface of a typical object actually consists of many micro-facets with different orientations, the specularly reflected light of the object is distributed around the ideal specular reflection direction R. According to the Phong illumination model, denoting the reflected light brightness by $L_{specular}$:

$$L_{specular}=I_l W(\theta)\cos^{n}\alpha$$

where W(θ) is the specular reflectance of the object surface, a function of the incident angle θ and the wavelength of the incident light, generally taken as a constant $k_s$ with $0\le k_s\le 1$; n is the specular highlight exponent, which characterizes how tightly the specularly reflected light converges in space; and α is the angle between the unit view vector V and the unit specular reflection vector R at a visible point on the scene surface.

Since the Phong illumination model is a purely geometric illumination model, the reflected light intensity is related only to the viewing angle α; thus:

$$\cos\alpha=\max\{0,\langle V\mid R\rangle\}$$

$$L_{specular}=I_l k_s\left(\max\{0,\langle V\mid R\rangle\}\right)^{n}$$

Thus, the direct illumination component $L_{direct}$ of the virtual surgery scene can be expressed as:

$$L_{direct}=L_{lambert}+L_{specular}$$

After the sampling of the virtual point light sources has been realized, the indirect illumination of the virtual surgery scene can be calculated. Let the depth value of a scene point acquired from the camera viewpoint be $d_p$, its world-space coordinate $x_p$, its surface normal vector $n_p$, and its reflected radiant flux $\phi_p$, where $\phi_p$ determines the brightness of the virtual point light source and $n_p$ its radiation characteristic.

For a virtual point light source p in the scene, its radiant intensity in the direction ω is:

$$I_p(\omega)=\phi_p\max\{0,\langle n_p\mid\omega\rangle\}$$

The illumination $E_p$ contributed by the virtual point light source to a point x with normal vector n in the scene can be expressed as:

$$E_p(x,n)=\phi_p\,\frac{\max\{0,\langle n_p\mid x-x_p\rangle\}\;\max\{0,\langle n\mid x_p-x\rangle\}}{\lVert x-x_p\rVert^{4}}$$

The indirect illumination E(x, n) that scene point x receives from all the virtual point light sources can be expressed as:

$$E(x,n)=\sum_{p}E_p(x,n)$$

In summary, denoting the global illumination of the virtual surgery scene by $L_{global}$:

$$L_{global}=L_{direct}+E(x,n)$$

Finally, denoting the global illumination of light sources 1 to 4 by $L_{global1}$, $L_{global2}$, $L_{global3}$ and $L_{global4}$ respectively, all the illumination $L_{scene}$ acquired by the scene can be expressed as:

$$L_{scene}=L_{global1}+L_{global2}+L_{global3}+L_{global4}$$

where $L_{global1}$ to $L_{global4}$ correspond to the 4 light sources placed in a cross-symmetric manner directly above the scene origin as the center.
The invention realizes the illumination simulation of the virtual surgery scene in a multi-light-source global illumination mode. However, the computational cost of global illumination is enormous, and simulating the lighting with a large number of light sources aggravates the problem. Although the computing power of computers has improved greatly in recent years, it is currently not feasible to compute global illumination for tens of light sources, as in a multi-aperture surgical shadowless lamp, in order to simulate the illumination of the virtual surgery scene. Therefore, the invention designs the virtual surgery scene illumination model based on the following criteria:
1. since the amount of global illumination computation increases rapidly with the number of scene light sources, the number of light sources in the scene must be strictly controlled in order to ensure the interactive features of the virtual surgical system.
2. Since the operation illumination requires illumination to maintain high color rendering, the illumination intensity of each light source should be limited within a certain range, otherwise the scene is excessively illuminated to cause color difference of the organ tissues.
3. Because the operation illumination requires that the illumination should have the characteristic of uniform distribution, all the light sources should be symmetrically distributed on the same plane with the central point between the light sources as the origin on the premise of ensuring that the illumination intensity of each light source is the same.
4. Because the number of light sources in a scene must be limited, the illumination radiation angle of each light source in the scene is required to be large enough to ensure that 360-degree illumination coverage of the operating area can be achieved with the scene having the least light sources.
5. In order to simulate the illumination of the virtual surgery scene with the fewest light sources, the distance between the light sources and the inclination angle at which they illuminate the scene should be set appropriately, on the premise that the light sources remain mutually symmetric, so that the shadows produced by the individual light sources do not overlap one another; otherwise, at the same illumination intensity, mutually overlapping shadows would be weakened or eliminated by the illumination of the other light sources.
In summary, the invention places 4 light sources in a cross-symmetric manner directly above the scene origin as the center, and sets the distance between the light sources and the inclination angle of the illuminated scene according to the scale of the virtual surgery scene, as the illumination simulation model of the virtual surgery scene; a schematic diagram of the illumination simulation model is shown in fig. 6.
and Step3, performing anti-aliasing treatment according to the complete depth map and all the illumination, and performing optimized simulation on the illumination of the virtual operation scene.
To accelerate the illumination calculation, the invention adopts the cross-sampling deferred rendering technique. In order to make the final rendering effect approach that of rendering without cross sampling, consistency information about adjacent pixels must be extracted from the illumination result, and anti-aliasing is applied to the illumination result with Gaussian filtering on the basis of this information. First, a discontinuity threshold is set. Then the normal-vector buffer of the scene is read to generate a blur parameter that decides whether a point in the scene is a discontinuity point: for a point x, a blur value is generated by accumulating, over a specified number of surrounding pixels, the distances between the normal vectors at those pixels and the normal vector at x. If the blur parameter is larger than the set threshold, the pixel is a discontinuity point, and the result of the determination is stored in a discontinuity buffer. Let the blur parameter at the point (u, v) in the scene be $Factor_{blur}(u,v)$; then:

$$Factor_{blur}(u,v)=\sum_{i}\left\lVert n(u,v)-n\bigl((u,v)+offset[i]\bigr)\right\rVert$$

where offset[ ] is a lookup table of the offsets of the surrounding pixels, and n(·) is the value read from the normal-vector buffer.
Gaussian filtering is employed to eliminate the jaggedness of the scene. To speed up the operation, the Gaussian filtering is separated into an x-direction pass and a y-direction pass performed in succession. Before Gaussian filtering, the discontinuity buffer is read to determine whether the current pixel is a discontinuity point:
1. If the pixel is not a discontinuity point, a specified number of adjacent pixels are taken from the positive and negative directions of the x axis around the pixel's position for filtering. If an adjacent pixel found in one direction of the x axis is itself a discontinuity point, no further pixels are taken in that direction.
2. If the pixel is a discontinuity point, the values of the pixels $(x_0+1, y_0)$, $(x_0, y_0+1)$ and $(x_0+1, y_0+1)$ are unrelated to the pixel $(x_0, y_0)$; therefore, pixels are searched separately along the x-axis and y-axis directions for filtering until another discontinuity point is encountered.
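A sketch of the discontinuity test and one edge-stopping Gaussian pass (Python/NumPy; the kernel radius, threshold, grayscale illumination buffer, and offset table are assumptions):

```python
import numpy as np

def discontinuity_buffer(normals, offsets, threshold):
    """Mark pixels whose accumulated normal distance exceeds the threshold.

    normals: (H, W, 3) normal-vector buffer; offsets: list of (dy, dx).
    """
    h, w, _ = normals.shape
    disc = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            factor = 0.0
            for dy, dx in offsets:                 # Factor_blur(u, v)
                yy = min(max(y + dy, 0), h - 1)
                xx = min(max(x + dx, 0), w - 1)
                factor += np.linalg.norm(normals[y, x] - normals[yy, xx])
            disc[y, x] = factor > threshold
    return disc

def blur_x(color, disc, radius=2, sigma=1.0):
    """One Gaussian pass along x that stops searching in a direction once
    a discontinuity point is met; the y pass is symmetric.
    color: (H, W) illumination buffer."""
    taps = np.exp(-0.5 * (np.arange(radius + 1) / sigma) ** 2)
    h, w = color.shape
    out = color.copy()
    for y in range(h):
        for x in range(w):
            if disc[y, x]:
                continue            # discontinuity points handled separately
            acc, wsum = taps[0] * color[y, x], taps[0]
            for sign in (-1, 1):    # walk outward in both x directions
                for r in range(1, radius + 1):
                    xx = x + sign * r
                    if xx < 0 or xx >= w or disc[y, xx]:
                        break       # stop at image border or discontinuity
                    acc += taps[r] * color[y, xx]
                    wsum += taps[r]
            out[y, x] = acc / wsum
    return out
```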
EXAMPLE III
Referring to fig. 2, a virtual surgery point light source real-time sampling device 1 includes a memory 2, a processor 3, and a computer program stored on the memory 2 and executable on the processor 3; when executing the program, the processor 3 implements the steps of the first embodiment.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent changes made by using the contents of the present specification and the drawings, or directly or indirectly applied in the related technical fields, are included in the scope of the present invention.

Claims (9)

1. A real-time sampling method for a point light source in a virtual operation is characterized by comprising the following steps:
s1, respectively acquiring reflection shadow map information and illumination information of a scene;
s2, sampling according to the reflection shadow map information to generate a virtual point light source, and carrying out interpolation reconstruction to obtain a complete depth map;
s3, performing overall illumination and shadow rendering calculation on the scene according to the illumination information of the scene to obtain all illumination;
and S4, optimizing and simulating the illumination of the virtual operation scene according to the complete depth map and all the illumination.
2. The virtual-operation point light source real-time sampling method according to claim 1, wherein the step S1 specifically comprises:
s11, acquiring reflection shadow map information of a scene by taking each light source as a visual angle, and storing the reflection shadow map information into a G-Buffer;
and S12, acquiring the illumination information of the scene by taking the camera as a visual angle, and storing the illumination information into the G-Buffer.
3. The virtual-operation point light source real-time sampling method according to claim 1, wherein the step S2 specifically comprises:
s21, generating a virtual point light source by adopting Halton sequence sampling according to the reflection shadow map information;
s22, generating a corresponding shadow map for the virtual point light source by adopting parabolic mapping;
and S23, carrying out interpolation reconstruction on the missing depth value by adopting an image space Ping-Pong algorithm according to the shadow map to obtain a complete depth map.
4. The virtual-operation point light source real-time sampling method according to claim 3, wherein the step S22 specifically comprises:
if there is an orthographic camera in the scene facing a reflective paraboloid, where the paraboloid is parameterized as:

$$f(x,y)=\frac{1}{2}-\frac{1}{2}\left(x^{2}+y^{2}\right),\qquad x^{2}+y^{2}\le 1$$

the camera can acquire the depth information of the scene for the hemispherical region centered at (0,0,0) and oriented toward the camera direction (0,0,1);

a point P = (x, y, f(x, y)) is found on the paraboloid that reflects the incident direction $\vec{d}_i$ into the direction $d_0=(0,0,1)$; from the above equation, the normal vector at point P can be expressed as:

$$\vec{n}_P=(x,\,y,\,1)^{T}$$

for the paraboloid to produce this reflection exactly, it suffices to compute the half vector

$$\vec{h}=\vec{d}_i+\vec{d}_0$$

which, multiplied by a scaling factor so that its z component equals 1, takes the same value as $\vec{n}_P$; combining the half vector with the above formula, the plane mapping of $\vec{d}_i$ is represented by the following function:

$$(x,y)=\left(\frac{h_x}{h_z},\;\frac{h_y}{h_z}\right)$$

let (0,0,0) denote the center point of the coordinate system in which the virtual point light source is located, P a scene point in homogeneous coordinates, $M_{light}$ a transformation matrix, and $M_{model}$ a model matrix; the coordinates of the scene point are first converted into the coordinate system in which the virtual point light source is located, denoted P', and then normalized by dividing by the homogeneous coordinate value $P'_{\omega}$ of the scene point:

$$P'=M_{light}\,M_{model}\,P$$

$$P'=P'/P'_{\omega}$$

in order to generate the parabolic shadow map of the virtual point light source, whose illumination range covers the hemispherical region with normal vector (0,0,1), pixel-by-pixel culling is performed during the generation of the parabolic shadow map to discard irrelevant pixels, specifically by calculating an α value, where $P'_z$ represents the Z coordinate value of the scene point and $Z_{scale}$ represents a user-defined constant; the α value is obtained by multiplying the Z coordinate value of the scene point by the user-defined scaling constant and then adding 0.5, and pixels whose α value is greater than 0.5 are culled:

$$\alpha=P'_z\cdot Z_{scale}+0.5$$

$lenth_{P'}$ represents the distance from P' to the center point (0,0,0) of the coordinate system in which the virtual point light source is located, and P' is normalized by this distance:

$$lenth_{P'}=\lVert P'\rVert$$

$$P'=P'/lenth_{P'}$$

in this process, the scaled P' has the offset value $d_0=(0,0,1)$ added to it:

$$P'=P'+d_0$$

the scene point is mapped onto the paraboloid by updating its $P'_x$ and $P'_y$ coordinate values:

$$P'_x=P'_x/P'_z,\qquad P'_y=P'_y/P'_z$$

meanwhile, the distance $lenth_{P'}$ is rescaled into the depth range and an offset value $Z_{bias}$ is added to the Z component to obtain $P'_z$:

$$P'_z=\frac{lenth_{P'}-Z_{near}}{Z_{far}-Z_{near}}+Z_{bias}$$

wherein $Z_{near}$ denotes the minimum value of the Z component and $Z_{far}$ denotes the maximum value of the Z component.
5. The virtual-operation point light source real-time sampling method according to claim 1, wherein the step S3 specifically comprises:
s31, performing cross sampling according to the illumination information of the scene to generate a secondary cache;
and S32, performing overall illumination and shadow rendering calculation of the scene by adopting delayed rendering according to the secondary cache to obtain all illumination.
6. The virtual-operation point light source real-time sampling method according to claim 5, wherein the step S31 specifically comprises:
the cross sampling samples the illumination information of the scene in an interleaved n × m pattern to generate n·m sub-buffers, and then calculates illumination for each sub-buffer in turn with the deferred rendering technique;
wherein, after cross sampling, the pixel at coordinates (x, y) becomes the pixel located at coordinates (s, t) in sub-buffer $G_{i,j}$, where:
i=mod(x,n)
j=mod(y,m)
s=x/n
t=y/m
where $G_{i,j}$ denotes the sub-buffer whose horizontal index is i and whose vertical index is j, and the divisions are integer divisions.
7. The virtual-operation point light source real-time sampling method according to claim 5, wherein the step S32 is specifically as follows:
calculating the diffuse reflection component of direct illumination with the Lambert illumination model, denoting the diffuse reflected light by $L_{lambert}$:

$$L_{lambert}=k_d I_l\cos\theta$$

where $I_l$ is the intensity of the incident light emitted by the light source, $k_d$ is the diffuse reflectance of the object surface, and θ is the angle between the incident light and the normal vector of the object surface; if the normal vector at a scene surface point x is $n_p$ and the unit vector from point x to the light source is L, the above equation can be expressed in vector form as follows:

$$L_{lambert}=k_d I_l\max\{0,\langle n_p\mid L\rangle\}$$

where ⟨·|·⟩ denotes the dot product;

secondly, according to the law of reflection, reflected light and incident light are distributed symmetrically on both sides of the surface normal direction; when the object surface is a pure mirror, the incident light is reflected in a single direction in strict accordance with the law of reflection; denoting the unit reflection direction by R:

$$R=2n_p\langle n_p\mid L\rangle-L$$

since the surface of a typical object actually consists of many micro-facets with different orientations, the specularly reflected light of the object is distributed around the ideal specular reflection direction R; according to the Phong illumination model, denoting the reflected light brightness by $L_{specular}$:

$$L_{specular}=I_l W(\theta)\cos^{n}\alpha$$

where W(θ) is the specular reflectance of the object surface, a function of the incident angle θ and the wavelength of the incident light, generally taken as a constant $k_s$ with $0\le k_s\le 1$; n is the specular highlight exponent, which characterizes how tightly the specularly reflected light converges in space; and α is the angle between the unit view vector V and the unit specular reflection vector R at a visible point on the scene surface;

since the Phong illumination model is a purely geometric illumination model, the reflected light intensity is related only to the viewing angle α; thus:

$$\cos\alpha=\max\{0,\langle V\mid R\rangle\}$$

$$L_{specular}=I_l k_s\left(\max\{0,\langle V\mid R\rangle\}\right)^{n}$$

thus, the direct illumination component $L_{direct}$ of the virtual surgery scene can be expressed as:

$$L_{direct}=L_{lambert}+L_{specular}$$

performing the indirect illumination calculation of the virtual surgery scene: let the depth value of a scene point acquired from the camera viewpoint be $d_p$, its world-space coordinate $x_p$, its surface normal vector $n_p$, and its reflected radiant flux $\phi_p$, where $\phi_p$ determines the brightness of the virtual point light source and $n_p$ its radiation characteristic;

for a virtual point light source p in the scene, its radiant intensity in the direction ω is:

$$I_p(\omega)=\phi_p\max\{0,\langle n_p\mid\omega\rangle\}$$

the illumination $E_p$ contributed by the virtual point light source to a point x with normal vector n in the scene can be expressed as:

$$E_p(x,n)=\phi_p\,\frac{\max\{0,\langle n_p\mid x-x_p\rangle\}\;\max\{0,\langle n\mid x_p-x\rangle\}}{\lVert x-x_p\rVert^{4}}$$

the indirect illumination E(x, n) received by scene point x from all the virtual point light sources can be expressed as:

$$E(x,n)=\sum_{p}E_p(x,n)$$

in summary, denoting the global illumination of the virtual surgery scene by $L_{global}$:

$$L_{global}=L_{direct}+E(x,n)$$

finally, denoting the global illumination of light sources 1 to 4 by $L_{global1}$, $L_{global2}$, $L_{global3}$ and $L_{global4}$ respectively, all the illumination $L_{scene}$ acquired by the scene can be expressed as:

$$L_{scene}=L_{global1}+L_{global2}+L_{global3}+L_{global4}$$

where $L_{global1}$ to $L_{global4}$ correspond to the 4 light sources placed in a cross-symmetric manner directly above the scene origin as the center.
8. The virtual-operation point light source real-time sampling method according to claim 1, wherein the step S4 specifically comprises:
and performing anti-aliasing treatment according to the complete depth map and all the illumination, and performing optimized simulation on the illumination of the virtual operation scene.
9. A virtual intraoperative point light source real-time sampling device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to perform the steps of:
s1, respectively acquiring reflection shadow map information and illumination information of a scene;
s2, sampling according to the reflection shadow map information to generate a virtual point light source, and carrying out interpolation reconstruction to obtain a complete depth map;
s3, performing overall illumination and shadow rendering calculation of the scene according to the illumination information of the scene to obtain all illumination;
and S4, optimizing and simulating the illumination of the virtual operation scene according to the complete depth map and all the illumination.
CN202210937518.2A 2022-08-05 2022-08-05 Virtual operation point light source real-time sampling method and device Pending CN115512034A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210937518.2A CN115512034A (en) 2022-08-05 2022-08-05 Virtual operation point light source real-time sampling method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210937518.2A CN115512034A (en) 2022-08-05 2022-08-05 Virtual operation point light source real-time sampling method and device

Publications (1)

Publication Number Publication Date
CN115512034A (en) 2022-12-23

Family

ID=84501904

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210937518.2A Pending CN115512034A (en) 2022-08-05 2022-08-05 Virtual operation point light source real-time sampling method and device

Country Status (1)

Country Link
CN (1) CN115512034A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination