CN107909647B - Realistic virtual 3D scene light field projection image drawing method based on spatial multiplexing - Google Patents


Info

Publication number
CN107909647B
CN107909647B (application CN201711175280.XA)
Authority
CN
China
Prior art keywords
variable
data structure
virtual camera
light source
point
Prior art date
Legal status
Active
Application number
CN201711175280.XA
Other languages
Chinese (zh)
Other versions
CN107909647A (en)
Inventor
陈纯毅
杨华民
蒋振刚
姜会林
Current Assignee
Jilin Kasite Technology Co.,Ltd.
Original Assignee
Changchun University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Changchun University of Science and Technology filed Critical Changchun University of Science and Technology
Priority to CN201711175280.XA
Publication of CN107909647A
Application granted
Publication of CN107909647B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The invention discloses a realistic virtual 3D scene light field projection image drawing method based on spatial multiplexing, characterized as follows: when each virtual 3D scene picture shot by the virtual camera array is drawn, the drawing of the 3D scene picture is accelerated by spatially multiplexing illumination calculation results. For the indirect illumination of the virtual 3D scene under a surface light source, the method uses the photon mapping technique to spatially multiplex the indirect illumination calculation results across the pictures shot by the individual virtual cameras. For the direct illumination of the virtual 3D scene under the surface light source, the method spatially multiplexes the visibility results computed for the sampling points on the surface light source.

Description

Realistic virtual 3D scene light field projection image drawing method based on spatial multiplexing
Technical Field
The invention relates to a realistic virtual 3D scene light field projection image drawing method based on spatial multiplexing, and belongs to the technical field of 3D scene drawing and display.
Background
Currently, true three-dimensional display technology is receiving wide attention. Light field three-dimensional display is a new kind of true three-dimensional display technology that has emerged in recent years. The multilayer LCD three-dimensional display system introduced in the paper "Design of a 3D display system and algorithm based on a liquid-crystal multilayer screen" (Chinese Journal of Liquid Crystals and Displays, 2017, Vol. 32, No. 4) is one concrete implementation of light field three-dimensional display technology. At present there are mainly two methods for acquiring the three-dimensional data shown by a light field three-dimensional display system. One is 360-degree multi-view image acquisition of an actual 3D scene, which uses a CCD camera array distributed on a circle around the scene to capture its different sides from multiple viewpoints and obtain the corresponding images. The other is to shoot a virtual 3D scene with a virtual camera array to obtain the light field projection images of the virtual 3D scene. When light field three-dimensional display technology is applied to film and entertainment, the second method allows the displayed three-dimensional content to be created with software as needed, and therefore offers greater flexibility. The generation of light field projection images of a virtual 3D scene has been discussed in the literature, for example in the paper cited above, in the doctoral dissertation "Research on the mechanism and implementation technology of horizontal light field three-dimensional display" (Zhejiang University, 2014), and in the master's thesis "Research on near-eye three-dimensional display based on multilayer liquid crystal" (Zhejiang University, 2016).
For a near-eye light field display system, viewpoint sampling is generally performed over the whole circular pupil region; each viewpoint sample corresponds to the viewpoint position of one virtual camera, all virtual cameras together constitute the virtual camera array, and the viewport orientation, field angle and resolution of each virtual camera must be determined. A concrete implementation of this process is given in "Research on near-eye three-dimensional display based on multilayer liquid crystal". Once the viewpoint position, viewport orientation, field angle and resolution of every virtual camera in the array have been determined, the virtual 3D scene picture shot by each virtual camera can be drawn with three-dimensional scene rendering techniques; each such picture is one light field projection image of the virtual 3D scene. To give the virtual 3D scene pictures good realism, the global illumination effects of the virtual 3D scene must be rendered. For a complex virtual 3D scene, rendering a picture while accounting for global illumination is a very time-consuming task. When generating the light field projection images of a realistic virtual 3D scene, if an independent scene rendering pass is performed for each camera in the virtual camera array, the total time grows in proportion to the number of virtual cameras. The individual light field projection images in a series produced for near-eye light field display differ only slightly from one another and usually remain strongly similar. This similarity provides the physical basis for reducing rendering time through spatial multiplexing when drawing realistic virtual 3D scene pictures.
After light emitted by the light source reaches a 3D scene point, it is scattered by that point and transports illumination in other directions. The luminance of light scattered by a 3D scene point directly into the virtual camera can be divided into luminance originating from direct illumination and luminance originating from indirect illumination. The luminance from direct illumination can be estimated with a Monte Carlo integration method. This requires generating a certain number of sampling points on the surface light source (as shown in fig. 1) and computing the visibility between each sampling point and the visible scene point. The Monte Carlo direct illumination estimation technique is described in detail in a paper published in ACM Transactions on Graphics, 1996, Vol. 15, No. 1, pp. 1-36. The Photon Mapping technique is often used to compute the luminance from indirect illumination: a photon map is first created with photon tracing, and the visible scene points are then found with ray casting. For each visible scene point of diffuse reflection type, the photons near that point can be searched in the photon map and used to compute the luminance from indirect illumination scattered by the point into the virtual camera. However, computing this luminance directly from the photons in the photon map produces significant low-frequency noise in the rendered picture. The solution to this problem is the Final Gathering technique.
The Final Gathering technique is discussed in many sources, for example the book Physically Based Rendering: From Theory to Implementation (2nd Edition) by M. Pharr and G. Humphreys, published by Morgan Kaufmann in 2010, and the master's thesis "Research and implementation of a photon mapping algorithm based on RenderMan" (Shandong University, 2014). Ray casting is a common technique in three-dimensional graphics rendering; its key steps are to emit, from the viewpoint position of the virtual camera, a ray through the center point of each pixel on the virtual pixel plane and to compute the intersection point of that ray with the geometric object of the 3D scene closest to the viewpoint position. As shown in fig. 2, point E is the viewpoint position of the virtual camera, rectangle ABCD is the virtual pixel plane, the line connecting point E and the center point G of rectangle ABCD is perpendicular to the plane of ABCD, and the length of segment EG may be taken as 1; the vector pointing from E to G corresponds to the viewport orientation of the virtual camera; K is the midpoint of segment BC, H the midpoint of AD, R the midpoint of AB, and S the midpoint of CD; the angle between EH and EK corresponds to the horizontal field angle of the virtual camera, and the angle between ER and ES corresponds to its vertical field angle; each small square in rectangle ABCD represents one pixel on the virtual pixel plane. In three-dimensional graphics rendering, a kd-tree spatial data structure is commonly used to organize a data set so that elements satisfying specific conditions can be found quickly according to a primary key value; the kd-tree is described in detail in Computer Graphics: Principles and Practice (3rd Edition), published by Pearson Education in 2014.
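The viewing geometry of fig. 2 (eye point E, pixel plane ABCD at unit distance, horizontal and vertical field angles) can be sketched as a routine that returns the ray direction through a pixel center. This is an illustrative sketch with helper names of my own choosing, not code from the patent; it assumes a conventional right-handed camera basis.

```python
import math

def pixel_ray(eye, forward, up, fov_h, fov_v, n_rows, n_cols, row, col):
    """Direction of the ray from viewpoint E through the center of pixel
    (row, col) on the virtual pixel plane, placed at distance |EG| = 1.
    fov_h / fov_v are the full field angles in radians."""
    def norm(v):
        l = math.sqrt(sum(c * c for c in v))
        return tuple(c / l for c in v)
    def cross(a, b):
        return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
    f = norm(forward)
    right = norm(cross(f, up))
    u = cross(right, f)
    half_w = math.tan(fov_h / 2.0)   # half-extent of the plane since |EG| = 1
    half_h = math.tan(fov_v / 2.0)
    # pixel-center coordinates on the plane, origin at G
    x = (2.0 * (col + 0.5) / n_cols - 1.0) * half_w
    y = (1.0 - 2.0 * (row + 0.5) / n_rows) * half_h
    d = tuple(f[i] + x * right[i] + y * u[i] for i in range(3))
    return norm(d)
```

For a 3x3 pixel plane, the ray through the center pixel (1, 1) coincides with the viewport orientation, as the EG construction in the text requires.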
Disclosure of Invention
The invention aims to provide a realistic virtual 3D scene light field projection image drawing method based on spatial multiplexing, which draws the virtual 3D scene pictures shot by a virtual camera array containing N_camr x N_camc virtual cameras and thereby provides three-dimensional data for a near-eye light field display application system.
The technical scheme of the invention is realized as follows: a realistic virtual 3D scene light field projection image drawing method based on spatial multiplexing is characterized in that: firstly, a photon tracking technology is used for creating a photon map, then visible scene points corresponding to all virtual cameras in a virtual camera array are calculated, all the visible scene points are stored in a list, and then the global illumination values of all the visible scene points in the list are calculated, and the specific implementation steps are as follows:
providing a data structure TVSPT for storing data related to visual scene points; the data structure TVSPT comprises eight member variables, namely a position vsPos where a visual scene point is located, a normal vector vsNrm on the surface of a geometric object where a visual scene point is located, a virtual camera number nCam corresponding to the visual scene point, a line number vnRow of a pixel on a virtual pixel plane of a virtual camera corresponding to the visual scene point, a column number vnCol of a pixel on a virtual pixel plane of a virtual camera corresponding to the visual scene point, luminance vsL of light scattered from the visual scene point into the corresponding virtual camera, a light source sampling point position vsQ corresponding to the visual scene point, and light source visibility vsV corresponding to the visual scene point;
providing a data structure TALSPT for storing relevant data of a light source sampling point; the data structure TALSPT comprises two member variables of a light source sampling point position lsPos and a light source visibility lsV;
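The two record types above can be sketched as plain data classes. This is a minimal illustration; the patent does not fix concrete field types, so tuples for positions and vectors are an assumption.

```python
from dataclasses import dataclass

@dataclass
class TVSPT:
    """Per-visible-scene-point record (field names follow the patent text)."""
    vsPos: tuple = (0.0, 0.0, 0.0)   # position of the visible scene point
    vsNrm: tuple = (0.0, 0.0, 1.0)   # surface normal at that position
    nCam: int = 0                    # index of the corresponding virtual camera
    vnRow: int = 0                   # pixel row on the virtual pixel plane
    vnCol: int = 0                   # pixel column on the virtual pixel plane
    vsL: float = 0.0                 # luminance scattered toward the camera
    vsQ: tuple = (0.0, 0.0, 0.0)     # sampled light source position
    vsV: int = 0                     # light source visibility (0 or 1)

@dataclass
class TALSPT:
    """Per-light-source-sample record."""
    lsPos: tuple = (0.0, 0.0, 0.0)   # light source sample position
    lsV: int = 0                     # visibility of that sample (0 or 1)
```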
1) Create the photon map using the photon tracing technique, as follows:
First, create a photon map PMap containing no photon records in the memory of the computer; emit N_pt photons from the surface light source into the three-dimensional scene using the photon tracing technique, and track the process in which each of these N_pt photons propagates in the 3D scene and is scattered as it collides with geometric objects. For each photon A002, while tracking its propagation and scattering in the 3D scene, starting from the second collision of photon A002 with a geometric object of the 3D scene, add one photon record to the photon map PMap at every collision; each photon record comprises the collision position PPos of the photon with the geometric object of the 3D scene, the normalized incident direction vector PVi of the photon at the collision position PPos, and the incident power PW of the photon at the collision position PPos, which has three components.
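The rule "record only from the second collision onward" can be sketched for a single photon as below. The intersection and scattering routines are stand-ins (`scatter_fn` returning the next collision or None is an assumption for illustration, not the patent's interface).

```python
def trace_photon(photon_power, first_hit, scatter_fn, max_bounces=5):
    """Follow one photon A002 and return its photon-map records.
    Records are added only from the second collision onward, as step 1)
    specifies; the first (direct) hit is skipped.  `first_hit` is the initial
    collision dict, `scatter_fn(hit)` yields the next collision or None."""
    pmap = []
    hit = first_hit
    bounce = 1
    while hit is not None and bounce <= max_bounces:
        if bounce >= 2:                       # skip the first collision
            pmap.append({"PPos": hit["pos"],       # collision position
                         "PVi": hit["dir_in"],     # normalized incident dir
                         "PW": photon_power})      # 3-component power
        hit = scatter_fn(hit)
        bounce += 1
    return pmap
```

With a chain of four collisions, the first is skipped and the remaining three are recorded.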
2) Compute the visible scene point corresponding to each pixel on the virtual pixel plane of each virtual camera in the virtual camera array containing N_camr x N_camc virtual cameras, as follows:
step 201: creating a list Ltvspt in a memory of the computer, wherein each element of the list Ltvspt is used for storing a variable of a data structure TVSPT type, and the list Ltvspt is enabled to be empty;
step 202: for the inclusion of Ncamr×NcamcEach virtual camera a003 in the virtual camera array of virtual cameras operates as follows:
according to the viewpoint position, viewport orientation, field angle and camera resolution parameters of the virtual camera A003, emit from the viewpoint position of the virtual camera A003 a ray A004 passing through the center point of each pixel on the virtual pixel plane of the virtual camera A003 using the ray casting technique; the rays A004 correspond one to one to the pixels on the virtual pixel plane of the virtual camera A003; for the ray A004 corresponding to each pixel on the virtual pixel plane of the virtual camera A003, the following operations are performed:
judging whether the ray A004 is intersected with the geometric objects of the 3D scene, if the ray A004 is intersected with the geometric objects of the 3D scene, further executing the following two substeps:
step 202-1: compute the intersection point A005 of the ray A004 with the geometric object of the 3D scene closest to the viewpoint position of the virtual camera A003; the intersection point A005 is a visible scene point. Create a variable A006 of the data structure TVSPT type, each variable A006 corresponding to a unique ray A004; assign the position vsPos member variable of the visible scene point of the variable A006 the position of the intersection point A005; assign the normal vector vsNrm member variable of the geometric object surface at the position of the visible scene point of the variable A006 the geometric object surface normal vector at the intersection point A005; assign the virtual camera number nCam member variable corresponding to the visible scene point of the variable A006 the number of the virtual camera A003 in the virtual camera array; assign the row number vnRow member variable of the pixel on the virtual pixel plane of the virtual camera corresponding to the visible scene point of the variable A006 the row number of the pixel on the virtual pixel plane of the virtual camera A003 corresponding to the ray A004 corresponding to the variable A006; assign the column number vnCol member variable of the pixel on the virtual pixel plane of the virtual camera corresponding to the visible scene point of the variable A006 the column number of the pixel on the virtual pixel plane of the virtual camera A003 corresponding to the ray A004 corresponding to the variable A006; and assign the luminance vsL member variable of light scattered from the visible scene point into the corresponding virtual camera of the variable A006 the value 0;
step 202-2: adding the variable a006 to the list Ltvspt;
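Step 2 as a whole can be sketched as a triple loop over cameras and pixels that fills the list Ltvspt. The `intersect` callback stands in for the ray-casting routine (its signature is an assumption for illustration); records are plain dicts here rather than full TVSPT variables.

```python
def build_visible_points(cameras, intersect, n_rows, n_cols):
    """Sketch of steps 201-202: for every camera and every pixel, cast the
    ray and, on a hit, append a TVSPT-like record to Ltvspt.
    `intersect(cam, row, col)` returns (position, normal) or None."""
    Ltvspt = []
    for cam_id, cam in enumerate(cameras):
        for r in range(n_rows):
            for c in range(n_cols):
                hit = intersect(cam, r, c)
                if hit is None:            # ray misses the scene: no record
                    continue
                pos, nrm = hit
                Ltvspt.append({"vsPos": pos, "vsNrm": nrm, "nCam": cam_id,
                               "vnRow": r, "vnCol": c, "vsL": 0.0})
    return Ltvspt
```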
3) calculating the brightness of the light scattered by each visual scene point and entering the corresponding virtual camera, wherein the specific method comprises the following steps:
step 301: for each element B001 in the list Ltvspt, the following operations are performed:
generating a random light source sampling point B002 on the surface light source according to uniform distribution; assigning vsQ member variables of the light source sampling point position corresponding to the visual scene point of the TVSPT type variable of the data structure stored in the element B001 as the position of a light source sampling point B002; judging whether a line segment B003 from the position of a light source sampling point B002 to the position represented by a vsPos member variable of the position of a visible scene point of a data structure TVSPT type variable stored in an element B001 intersects with a geometric object of a 3D scene, if so, assigning a light source visibility vsV member variable corresponding to the visible scene point of the data structure TVSPT type variable stored in the element B001 to be 0, otherwise, assigning a light source visibility vsV member variable corresponding to the visible scene point of the data structure TVSPT type variable stored in the element B001 to be 1;
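Step 301 can be sketched as one uniform sample on a rectangular area light followed by a shadow-segment test. The corner-plus-two-edge-vectors parameterisation of the light and the `occluded` callback are assumptions for illustration; the patent only requires a uniform sample and a segment/scene intersection test.

```python
import random

def sample_visibility(vs_pos, light_corner, light_u, light_v, occluded):
    """One iteration of step 301 for a visible scene point at vs_pos:
    draw a uniform sample B002 on the rectangular light (corner + edge
    vectors light_u, light_v) and set visibility 0 if the segment B003
    from the sample to vs_pos is blocked, else 1."""
    s, t = random.random(), random.random()
    q = tuple(light_corner[i] + s * light_u[i] + t * light_v[i] for i in range(3))
    v = 0 if occluded(q, vs_pos) else 1
    return q, v          # become vsQ and vsV of the TVSPT record
```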
step 302: taking the value of a vsPos member variable at the position of a visual scene point of a variable of a data structure TVSPT type as a primary key value, and storing variables of the data structure TVSPT type stored by all elements in a list Ltvspt in a kd tree space data structure C001;
step 303: for each element B001 in the list Ltvspt, the following sub-steps are performed:
step 303-1: create a list C002 in the computer memory, each element of which stores a variable of the data structure TVSPT type, and let the list C002 be empty; find in the kd-tree spatial data structure C001 all variables of the data structure TVSPT type that satisfy the condition COND1, and add these found variables of the data structure TVSPT type to the list C002; the condition COND1 is: the distance from the position represented by the vsPos member variable of the variable of the data structure TVSPT type stored in the kd-tree spatial data structure C001 to the position represented by the vsPos member variable of the variable of the data structure TVSPT type stored in the element B001 is less than T_d;
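The query of step 303-1 gathers all records whose vsPos lies within T_d of the reference record. A linear scan is used below purely for illustration; in the patent this query is served by the kd-tree C001, which a production implementation would keep.

```python
import math

def neighbours_within(Ltvspt, ref, T_d):
    """Sketch of the COND1 range query: return the records (list C002)
    whose vsPos is closer than T_d to the reference record's vsPos."""
    def dist(a, b):
        return math.sqrt(sum((a[i] - b[i]) ** 2 for i in range(3)))
    return [e for e in Ltvspt if dist(e["vsPos"], ref["vsPos"]) < T_d]
```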
Step 303-2: for each element C003 in list C002, the following is performed:
step 303-2-1: let V_s denote the vector represented by the normal vector vsNrm member variable of the geometric object surface at the position of the visible scene point of the variable of the data structure TVSPT type stored by the element C003, and let V_r denote the vector represented by the normal vector vsNrm member variable of the geometric object surface at the position of the visible scene point of the variable of the data structure TVSPT type stored by the element B001; if (V_s.V_r)/(|V_s|*|V_r|) is less than T_v, delete the element C003 from the list C002 and go to step 303-2-2, where |V_s| denotes the length of V_s and |V_r| denotes the length of V_r; let P_1 denote the position represented by the vsPos member variable of the position of the visible scene point of the variable of the data structure TVSPT type stored by the element C003, let P_2 denote the position represented by the vsPos member variable of the position of the visible scene point of the variable of the data structure TVSPT type stored by the element B001, and let Q_l denote the position represented by the light source sampling point position vsQ member variable corresponding to the visible scene point of the variable of the data structure TVSPT type stored by the element C003; let V_l1 denote the vector pointing from Q_l to P_1 and V_l2 the vector pointing from Q_l to P_2; if (V_l1.V_l2)/(|V_l1|*|V_l2|) is less than T_l, delete the element C003 from the list C002, where |V_l1| denotes the length of V_l1 and |V_l2| denotes the length of V_l2;
step 303-2-2: the operation for element C003 ends;
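The two rejection tests of step 303-2-1 (normal similarity against T_v, light-direction similarity against T_l) can be written as a single predicate; a candidate that fails either cosine test is dropped from C002. Records are plain dicts here for illustration.

```python
import math

def passes_reuse_filters(cand, ref, T_v, T_l):
    """Keep candidate C003 only if (a) its surface normal is close enough
    to the reference normal and (b) its light source sample sees both
    points from a similar direction.  Dicts carry vsPos / vsNrm / vsQ."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def norm(a): return math.sqrt(dot(a, a))
    Vs, Vr = cand["vsNrm"], ref["vsNrm"]
    if dot(Vs, Vr) / (norm(Vs) * norm(Vr)) < T_v:
        return False                       # normals diverge too much
    Ql, P1, P2 = cand["vsQ"], cand["vsPos"], ref["vsPos"]
    Vl1 = tuple(P1[i] - Ql[i] for i in range(3))
    Vl2 = tuple(P2[i] - Ql[i] for i in range(3))
    if dot(Vl1, Vl2) / (norm(Vl1) * norm(Vl2)) < T_l:
        return False                       # light directions diverge too much
    return True
```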
step 303-3: creating a list C004 in a memory of the computer, wherein each element of the list C004 stores a variable of a data structure TALSPT type, and the list C004 is made to be empty; for each element C005 in list C002, the following is performed:
create a variable C006 of the data structure TALSPT type; assign the value of the light source sampling point position vsQ member variable corresponding to the visible scene point of the variable of the data structure TVSPT type stored in the element C005 to the light source sampling point position lsPos member variable of the variable C006, and assign the value of the light source visibility vsV member variable corresponding to the visible scene point of the variable of the data structure TVSPT type stored in the element C005 to the light source visibility lsV member variable of the variable C006; add the variable C006 to the list C004;
step 303-4: let N_C004 denote the number of elements of the list C004; if N_C004 is less than N_als, generate N_als - N_C004 random light source sampling points C007 on the surface light source according to a uniform distribution, and at the same time create in the memory of the computer N_als - N_C004 variables C008 of the data structure TALSPT type, the N_als - N_C004 random light source sampling points C007 and the N_als - N_C004 variables C008 corresponding to one another; assign the position of each random light source sampling point C007 to the light source sampling point position lsPos member variable of the corresponding variable C008; judge whether the line segment from each random light source sampling point C007 to the position represented by the vsPos member variable of the position of the visible scene point of the variable of the data structure TVSPT type stored in the element B001 intersects a geometric object of the 3D scene; if it intersects, assign the light source visibility lsV member variable of the variable C008 corresponding to that random light source sampling point C007 the value 0, otherwise assign it the value 1; then add all the variables C008 to the list C004;
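Step 303-4 simply tops the sample list up to N_als entries with fresh, directly tested samples. In this sketch `draw_sample()` and `visible(pos)` are stand-ins for the uniform light-surface sampler and the segment/scene intersection test.

```python
def top_up_samples(C004, N_als, draw_sample, visible):
    """If fewer than N_als reusable samples survived the filters, draw new
    uniform samples on the area light and test their visibility directly,
    appending TALSPT-like records until the list holds N_als entries."""
    while len(C004) < N_als:
        q = draw_sample()
        C004.append({"lsPos": q, "lsV": 1 if visible(q) else 0})
    return C004
```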
step 303-5: let VSPOINT denote the position represented by the vsPos member variable of the position of the visible scene point of the variable of the data structure TVSPT type stored in the element B001, and let NCam denote the number represented by the virtual camera number nCam member variable corresponding to the visible scene point of the variable of the data structure TVSPT type stored in the element B001; using the final gathering technique, according to the photon map PMap, the value of the vsPos member variable of the position of the visible scene point of the variable of the data structure TVSPT type stored in the element B001, and the value of the normal vector vsNrm member variable of the geometric object surface at that position, compute the luminance D001 of the light that is emitted by the surface light source, scattered via other 3D scene points to the VSPOINT position, and then enters the NCam-th virtual camera through the VSPOINT position; determine the light source sampling points needed for the Monte Carlo direct illumination estimate from the values of the light source sampling point position lsPos member variables of all variables of the data structure TALSPT type stored in the list C004, take the values of the light source visibility lsV member variables of all variables of the data structure TALSPT type stored in the list C004 as approximate visibilities of the corresponding light source sampling points from the VSPOINT position, and use the Monte Carlo direct illumination estimation technique to compute the luminance D002 of the light emitted by the surface light source and directly scattered through the VSPOINT position into the NCam-th virtual camera; assign the sum of the luminance D001 and the luminance D002 to the luminance vsL member variable of light scattered from the visible scene point into the corresponding virtual camera of the variable of the data structure TVSPT type stored by the element B001;
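Step 303-5 combines an indirect term D001 (final gathering over the photon map) with a direct term D002 (a Monte Carlo average over the samples in C004, with lsV zeroing occluded samples). A minimal sketch of the direct estimator follows; `radiance_term(pos)` is an assumed stand-in for the full emitted-radiance, BRDF and geometry factor of a real estimator, which the patent names but does not spell out.

```python
def direct_luminance(samples, radiance_term):
    """Monte Carlo estimate of the direct term D002: average the
    per-sample contribution over the TALSPT-like records in C004,
    using lsV as the (possibly reused) visibility of each sample."""
    if not samples:
        return 0.0
    total = sum(s["lsV"] * radiance_term(s["lsPos"]) for s in samples)
    return total / len(samples)
```

The final pixel luminance vsL would then be D001 plus this value.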
4) Generate the light field projection images from the elements in the list Ltvspt, as follows:
for the inclusion of Ncamr×NcamcEach virtual camera a003 in the virtual camera array of virtual cameras operates as follows:
step 401: create in the memory of the computer a two-dimensional array ILLUMIN with N_pixr rows and N_pixc columns, where N_pixr is the number of pixel rows on the virtual pixel plane of the virtual camera A003 and N_pixc is the number of pixel columns on the virtual pixel plane of the virtual camera A003; the elements of the array ILLUMIN correspond one to one to the pixels on the virtual pixel plane of the virtual camera A003; the array ILLUMIN is used to store the luminance of the light scattered into the virtual camera A003 by the visible scene points corresponding to the pixels on the virtual pixel plane of the virtual camera A003; assign each element of the array ILLUMIN the value 0; compute the number IDCam of the virtual camera A003 in the virtual camera array; create a list D003 in the memory of the computer and let the list D003 be empty; put into the list D003 all elements D004 in the list Ltvspt that satisfy the condition COND2; the condition COND2 is: the virtual camera number nCam corresponding to the visible scene point of the variable of the data structure TVSPT type stored by the element D004 equals the number IDCam; for each element D005 of the list D003, the following is done:
let IdR denote the row number represented by the row number vnRow member variable of the pixel on the virtual pixel plane of the virtual camera corresponding to the visible scene point of the variable of the data structure TVSPT type stored by the element D005, and let IdC denote the column number represented by the column number vnCol member variable of the pixel on the virtual pixel plane of the virtual camera corresponding to the visible scene point of the variable of the data structure TVSPT type stored by the element D005; assign the value of the luminance vsL member variable of light scattered from the visible scene point into the corresponding virtual camera of the variable of the data structure TVSPT type stored by the element D005 to the element in row IdR and column IdC of the array ILLUMIN;
step 402: convert the luminance values stored in the elements of the array ILLUMIN into the pixel color values of the picture image obtained when the virtual camera A003 shoots the 3D scene, and store these pixel color values in the image file corresponding to the virtual camera A003; this image file stores one light field projection image.
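Steps 401 and 402 scatter each record's vsL back into the pixel it came from. The sketch below builds ILLUMIN for one camera; the luminance-to-color conversion of step 402 is left out, since the patent does not specify the tone-mapping used.

```python
def assemble_image(Ltvspt, cam_id, n_rows, n_cols):
    """Build the ILLUMIN array for one camera: zero-initialise it, select
    the records whose nCam matches (condition COND2), and write each
    record's vsL into the pixel (vnRow, vnCol) it corresponds to."""
    ILLUMIN = [[0.0] * n_cols for _ in range(n_rows)]
    for e in Ltvspt:
        if e["nCam"] == cam_id:                 # condition COND2
            ILLUMIN[e["vnRow"]][e["vnCol"]] = e["vsL"]
    return ILLUMIN
```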
The advantage of the method is that illumination calculation results can be spatially multiplexed while drawing the realistic virtual 3D scene light field projection images, thereby speeding up the drawing of the light field projection images shot by the virtual camera array.
Drawings
Fig. 1 is a schematic diagram of a three-dimensional scene illuminated by a surface light source.
Fig. 2 is a schematic diagram of the virtual pixel plane.
Detailed Description
In order that the features and advantages of the method may be understood more clearly, the method is further described below in connection with a specific embodiment. This embodiment considers a virtual 3D scene of a plaster statue placed in an enclosed room with a surface light source on the ceiling; all surfaces of the geometric objects in the 3D scene are diffusely reflecting. The computer system uses an Intel(R) Xeon(R) CPU E3-1225 v3 @ 3.20GHz, 8GB Kingston DDR3 1333 memory, a Buffalo HD-CE 1.5TU2 disk, and an NVIDIA Quadro K2000 graphics card; the operating system is Windows 7, and the software programming tool is VC++ 2010.
Firstly, a photon tracking technology is used for creating a photon map, then visible scene points corresponding to all virtual cameras in a virtual camera array are calculated, all the visible scene points are stored in a list, and then the global illumination values of all the visible scene points in the list are calculated, and the specific implementation steps are as follows:
providing a data structure TVSPT for storing data related to visual scene points; the data structure TVSPT comprises eight member variables, namely a position vsPos where a visual scene point is located, a normal vector vsNrm on the surface of a geometric object where a visual scene point is located, a virtual camera number nCam corresponding to the visual scene point, a line number vnRow of a pixel on a virtual pixel plane of a virtual camera corresponding to the visual scene point, a column number vnCol of a pixel on a virtual pixel plane of a virtual camera corresponding to the visual scene point, luminance vsL of light scattered from the visual scene point into the corresponding virtual camera, a light source sampling point position vsQ corresponding to the visual scene point, and light source visibility vsV corresponding to the visual scene point;
providing a data structure TALSPT for storing relevant data of a light source sampling point; the data structure TALSPT comprises two member variables of a light source sampling point position lsPos and a light source visibility lsV;
1) Create the photon map using the photon tracing technique, as follows:
First, create a photon map PMap containing no photon records in the memory of the computer; emit N_pt photons from the surface light source into the three-dimensional scene using the photon tracing technique, and track the process in which each of these N_pt photons propagates in the 3D scene and is scattered as it collides with geometric objects. For each photon A002, while tracking its propagation and scattering in the 3D scene, starting from the second collision of photon A002 with a geometric object of the 3D scene, add one photon record to the photon map PMap at every collision; each photon record comprises the collision position PPos of the photon with the geometric object of the 3D scene, the normalized incident direction vector PVi of the photon at the collision position PPos, and the incident power PW of the photon at the collision position PPos, which has three components.
2) Compute the visible scene point corresponding to each pixel on the virtual pixel plane of each virtual camera in the virtual camera array containing N_camr x N_camc virtual cameras, as follows:
step 201: creating a list Ltvspt in a memory of the computer, wherein each element of the list Ltvspt is used for storing a variable of a data structure TVSPT type, and the list Ltvspt is enabled to be empty;
step 202: for each virtual camera A003 in the virtual camera array containing N_camr × N_camc virtual cameras, the following operations are performed:
according to the viewpoint position, viewport orientation, viewing angle and camera resolution parameters of the virtual camera A003, a ray A004 passing through the centre point of each pixel on the virtual pixel plane of the virtual camera A003 is emitted from the viewpoint position of the virtual camera A003 by using the ray casting technique; the rays A004 correspond one-to-one to the pixels on the virtual pixel plane of the virtual camera A003; for the ray A004 corresponding to each pixel on the virtual pixel plane of the virtual camera A003, the following operations are performed:
judging whether the ray A004 intersects any geometric object of the 3D scene; if the ray A004 intersects a geometric object of the 3D scene, the following two substeps are further executed:
step 202-1: calculating the intersection point A005 of the ray A004 with the geometric object of the 3D scene that is closest to the viewpoint position of the virtual camera A003; the intersection point A005 is a visible scene point; creating a variable A006 of the data structure TVSPT type, the variable A006 corresponding to the unique ray A004; assigning the vsPos member variable (position of the visible scene point) of the variable A006 the position of the intersection point A005; assigning the vsNrm member variable (normal vector of the geometric object surface at the visible scene point) of the variable A006 the surface normal vector of the geometric object at the intersection point A005; assigning the nCam member variable (number of the corresponding virtual camera) of the variable A006 the number of the virtual camera A003 in the virtual camera array; assigning the vnRow member variable (row number of the corresponding pixel on the virtual pixel plane) of the variable A006 the row number of the pixel on the virtual pixel plane of the virtual camera A003 corresponding to the ray A004 that corresponds to the variable A006; assigning the vnCol member variable (column number of the corresponding pixel on the virtual pixel plane) of the variable A006 the column number of the pixel on the virtual pixel plane of the virtual camera A003 corresponding to the ray A004 that corresponds to the variable A006; and assigning the vsL member variable (luminance of light scattered from the visible scene point into the corresponding virtual camera) of the variable A006 the value 0;
step 202-2: adding the variable A006 to the list Ltvspt;
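Steps 201 to 202-2 can be sketched for a single camera as follows (a hedged Python illustration; the pinhole camera looking down -z and the single plane z = 0 stand in for the real scene geometry, and `render_visible_points` is our name, not the patent's):

```python
def render_visible_points(eye, rows, cols, cam_id):
    """Cast one ray through each pixel centre of a unit image plane one
    unit in front of `eye`; the nearest hit (here: the plane z = 0)
    becomes a visible-scene-point record with vsL initialised to 0."""
    ltvspt = []
    for r in range(rows):
        for c in range(cols):
            # direction through the pixel centre, camera looking along -z
            dx = (c + 0.5) / cols - 0.5
            dy = 0.5 - (r + 0.5) / rows
            d = (dx, dy, -1.0)
            if d[2] >= 0:           # a ray parallel to the plane has no hit
                continue
            t = -eye[2] / d[2]      # intersect the plane z = 0
            hit = (eye[0] + t * d[0], eye[1] + t * d[1], 0.0)
            ltvspt.append({"vsPos": hit, "vsNrm": (0.0, 0.0, 1.0),
                           "nCam": cam_id, "vnRow": r, "vnCol": c,
                           "vsL": 0.0})
    return ltvspt
```

In the full method this loop runs once per camera in the N_camr × N_camc array, and every record ends up in the single shared list Ltvspt, which is what allows the later neighbourhood reuse across cameras.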
3) calculating the brightness of the light scattered by each visual scene point and entering the corresponding virtual camera, wherein the specific method comprises the following steps:
step 301: for each element B001 in the list Ltvspt, the following operations are performed:
generating a random light source sampling point B002 on the surface light source according to uniform distribution; assigning vsQ member variables of the light source sampling point position corresponding to the visual scene point of the TVSPT type variable of the data structure stored in the element B001 as the position of a light source sampling point B002; judging whether a line segment B003 from the position of a light source sampling point B002 to the position represented by a vsPos member variable of the position of a visible scene point of a data structure TVSPT type variable stored in an element B001 intersects with a geometric object of a 3D scene, if so, assigning a light source visibility vsV member variable corresponding to the visible scene point of the data structure TVSPT type variable stored in the element B001 to be 0, otherwise, assigning a light source visibility vsV member variable corresponding to the visible scene point of the data structure TVSPT type variable stored in the element B001 to be 1;
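Step 301's per-point light sampling and occlusion test might be sketched like this (our toy geometry: a square area light at y = 2 and a single spherical occluder; `sample_light_visibility` is a hypothetical name):

```python
import random

def sample_light_visibility(vs_pos, sphere_c, sphere_r, rng):
    """Draw one uniform sample B002 on a square light at y=2, then test
    whether the segment vs_pos -> B002 passes through the sphere;
    return (vsQ, vsV) with vsV = 0 if the segment is blocked."""
    q = (rng.uniform(-0.5, 0.5), 2.0, rng.uniform(-0.5, 0.5))  # sample B002
    # segment/sphere test: solve |f + t*d|^2 = r^2 for t in (0, 1)
    d = tuple(b - a for a, b in zip(vs_pos, q))
    f = tuple(a - c for a, c in zip(vs_pos, sphere_c))
    A = sum(x * x for x in d)
    B = 2.0 * sum(x * y for x, y in zip(f, d))
    C = sum(x * x for x in f) - sphere_r ** 2
    disc = B * B - 4.0 * A * C
    blocked = False
    if disc >= 0.0:
        s = disc ** 0.5
        for t in ((-B - s) / (2.0 * A), (-B + s) / (2.0 * A)):
            if 0.0 < t < 1.0:
                blocked = True
    return q, (0 if blocked else 1)
```

The real method runs this once per element B001 of Ltvspt against the full scene geometry; the single-sample visibility stored here is exactly what steps 303-1 to 303-4 later reuse from neighbouring points instead of recomputing.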
step 302: taking the value of the vsPos member variable (position of the visible scene point) of a variable of the data structure TVSPT type as the primary key, storing the variables of the data structure TVSPT type stored by all elements in the list Ltvspt in a kd-tree spatial data structure C001;
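A minimal 3-d tree keyed on vsPos, as a stand-in for the structure C001 of step 302 (illustrative only; `build` and `radius_query` are our names, and a production implementation would use a library kd-tree):

```python
def build(points, depth=0):
    """Build a kd-tree node (point, split_axis, left, right) by median
    split, cycling the axis with depth."""
    if not points:
        return None
    axis = depth % 3
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid], axis,
            build(points[:mid], depth + 1),
            build(points[mid + 1:], depth + 1))

def radius_query(node, center, r, out):
    """Collect every stored point strictly closer than r to center
    (condition COND1 of step 303-1)."""
    if node is None:
        return out
    p, axis, left, right = node
    if sum((a - b) ** 2 for a, b in zip(p, center)) < r * r:
        out.append(p)
    # descend only into half-spaces the query ball can reach
    if center[axis] - r < p[axis]:
        radius_query(left, center, r, out)
    if center[axis] + r > p[axis]:
        radius_query(right, center, r, out)
    return out
```

The point of the kd-tree is that step 303-1's fixed-radius query (radius T_d) runs once per element of Ltvspt, so a linear scan over all visible scene points of all cameras would be quadratic overall.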
step 303: for each element B001 in the list Ltvspt, the following sub-steps are performed:
step 303-1: creating a list C002 in the memory of the computer, each element of the list C002 storing a variable of the data structure TVSPT type, and making the list C002 empty; finding in the kd-tree spatial data structure C001 all variables of the data structure TVSPT type that satisfy the condition COND1, and adding these found variables of the data structure TVSPT type to the list C002; condition COND1 is: the distance from the position represented by the vsPos member variable of a variable of the data structure TVSPT type stored in the kd-tree spatial data structure C001 to the position represented by the vsPos member variable of the variable of the data structure TVSPT type stored in the element B001 is less than T_d;
Step 303-2: for each element C003 in list C002, the following is performed:
step 303-2-1: let V_s represent the vector represented by the vsNrm member variable (normal vector of the geometric object surface at the visible scene point) of the variable of the data structure TVSPT type stored by the element C003, and let V_r represent the vector represented by the vsNrm member variable of the variable of the data structure TVSPT type stored by the element B001; if (V_s · V_r)/(|V_s| · |V_r|) is less than T_v, then the element C003 is deleted from the list C002 and Step 303-2-2 is executed directly, where |V_s| represents the length of V_s and |V_r| represents the length of V_r; let P_1 represent the position represented by the vsPos member variable of the variable of the data structure TVSPT type stored by the element C003, let P_2 represent the position represented by the vsPos member variable of the variable of the data structure TVSPT type stored by the element B001, and let Q_l represent the position represented by the vsQ member variable (light source sampling point position corresponding to the visible scene point) of the variable of the data structure TVSPT type stored by the element C003; let V_l1 represent the vector pointing from Q_l to P_1, and let V_l2 represent the vector pointing from Q_l to P_2; if (V_l1 · V_l2)/(|V_l1| · |V_l2|) is less than T_l, then the element C003 is deleted from the list C002, where |V_l1| represents the length of V_l1 and |V_l2| represents the length of V_l2;
step 303-2-2: the operation for element C003 ends;
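The two rejection tests of step 303-2-1 reduce to cosine comparisons; the following sketch uses the embodiment's thresholds T_v = T_l = 0.92, and `keep_neighbour` is our name, not the patent's:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def keep_neighbour(ns, nr, p1, p2, ql, t_v=0.92, t_l=0.92):
    """Return True if neighbour C003 may lend its visibility sample to
    B001: its normal ns must align with B001's normal nr, and the
    directions from the light sample ql to both points must align."""
    # normal agreement: cos of the angle between the two surface normals
    if dot(ns, nr) / (norm(ns) * norm(nr)) < t_v:
        return False
    # light-direction agreement: cos of the angle between ql->p1 and ql->p2
    v1 = tuple(a - b for a, b in zip(p1, ql))
    v2 = tuple(a - b for a, b in zip(p2, ql))
    return dot(v1, v2) / (norm(v1) * norm(v2)) >= t_l
```

Both tests guard the same approximation: a neighbour's stored visibility is only trustworthy when the two points see the light sample from nearly the same direction and lie on similarly oriented surfaces.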
step 303-3: creating a list C004 in a memory of the computer, wherein each element of the list C004 stores a variable of a data structure TALSPT type, and the list C004 is made to be empty; for each element C005 in list C002, the following is performed:
creating a variable C006 of the data structure TALSPT type; assigning the lsPos member variable (light source sampling point position) of the variable C006 the value of the vsQ member variable (light source sampling point position corresponding to the visible scene point) of the variable of the data structure TVSPT type stored in the element C005; assigning the lsV member variable (light source visibility) of the variable C006 the value of the vsV member variable (light source visibility corresponding to the visible scene point) of the variable of the data structure TVSPT type stored in the element C005; adding the variable C006 to the list C004;
step 303-4: let N_C004 represent the number of elements of the list C004; if N_C004 is less than N_als, then N_als - N_C004 random light source sampling points C007 are generated on the surface light source according to a uniform distribution, and at the same time N_als - N_C004 variables C008 of the data structure TALSPT type are created in the memory of the computer; the N_als - N_C004 random light source sampling points C007 and the N_als - N_C004 variables C008 of the data structure TALSPT type correspond to one another one-to-one, and the position of each random light source sampling point C007 is assigned to the lsPos member variable (light source sampling point position) of the corresponding variable C008; judging whether the line segment from each random light source sampling point C007 to the position represented by the vsPos member variable of the variable of the data structure TVSPT type stored in the element B001 intersects any geometric object of the 3D scene; if it does, the lsV member variable (light source visibility) of the variable C008 corresponding to that random light source sampling point C007 is assigned 0, otherwise it is assigned 1; all the variables C008 are added to the list C004;
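Step 303-4's top-up logic can be sketched as follows (illustrative; `sample_fn` and `visibility_fn` are our placeholders for the uniform light sampler and the segment/scene intersection test, and the dictionaries stand in for TALSPT variables):

```python
def pad_light_samples(c004, n_als, sample_fn, visibility_fn):
    """Reuse the visibility records gathered from the neighbourhood
    (list C004); if fewer than n_als remain after filtering, top the
    list up with freshly sampled and freshly tested points C007/C008."""
    out = list(c004)
    while len(out) < n_als:
        q = sample_fn()                              # new sample C007 on the light
        out.append({"lsPos": q, "lsV": visibility_fn(q)})  # new TALSPT record C008
    return out
```

This is where the spatial multiplexing pays off: only the shortfall below N_als triggers new shadow-ray tests, while every reused record costs nothing.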
step 303-5: let VSPOINT represent the position represented by the vsPos member variable of the variable of the data structure TVSPT type stored in the element B001, and let NCam represent the number represented by the nCam member variable (number of the corresponding virtual camera) of the variable of the data structure TVSPT type stored in the element B001; according to the photon map PMap, the value of the vsPos member variable of the variable of the data structure TVSPT type stored by the element B001, and the value of its vsNrm member variable (normal vector of the geometric object surface at the visible scene point), calculating, by using the final gathering technique, the luminance D001 of the light that is emitted by the area light source, scattered to the position VSPOINT via other 3D scene points, and then scattered at the position VSPOINT into virtual camera No. NCam; determining the light source sampling points required for estimating the Monte Carlo direct illumination value according to the values of the lsPos member variables of all variables of the data structure TALSPT type stored in the list C004, taking the values of the lsV member variables of all variables of the data structure TALSPT type stored in the list C004 as approximate values of the visibility from the corresponding light source sampling points to the position VSPOINT, and calculating, by using the Monte Carlo direct illumination estimation technique, the luminance D002 of the light that is emitted by the surface light source and directly scattered at the position VSPOINT into virtual camera No. NCam; assigning the sum of the luminance D001 and the luminance D002 to the vsL member variable (luminance of light scattered from the visible scene point into the corresponding virtual camera) of the variable of the data structure TVSPT type stored by the element B001;
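The Monte Carlo direct-light part of step 303-5 averages visibility-weighted contributions over the light samples; a simplified sketch follows (`radiance_fn` is our placeholder for the BRDF/geometry term, which the patent does not spell out):

```python
def direct_luminance(samples, radiance_fn):
    """Estimate D002: average, over the TALSPT samples in C004, the
    unoccluded contribution of each light sample, zeroed out when the
    stored (approximate) visibility lsV is 0."""
    if not samples:
        return 0.0
    total = sum(s["lsV"] * radiance_fn(s["lsPos"]) for s in samples)
    return total / len(samples)
```

The final per-point luminance is then vsL = D001 + D002, with D001 coming from final gathering over the photon map; only the D002 term consumes the reused visibility values.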
4) from the elements in the list Ltvspt, a light field projection image is generated, in a specific way as follows:
for each virtual camera A003 in the virtual camera array containing N_camr × N_camc virtual cameras, the following operations are performed:
step 401: creating in the memory of the computer a two-dimensional array ILLUMIN with N_pixr rows and N_pixc columns of elements, where N_pixr is the number of pixel rows on the virtual pixel plane of the virtual camera A003 and N_pixc is the number of pixel columns on the virtual pixel plane of the virtual camera A003; the elements of the array ILLUMIN correspond one-to-one to the pixels on the virtual pixel plane of the virtual camera A003; the array ILLUMIN is used for storing the luminance of the light scattered into the virtual camera A003 by the visible scene points corresponding to the pixels on the virtual pixel plane of the virtual camera A003; assigning each element of the array ILLUMIN the value 0; calculating the number IDCam of the virtual camera A003 in the virtual camera array; creating a list D003 in the memory of the computer and making the list D003 empty; putting all elements D004 in the list Ltvspt that satisfy the condition COND2 into the list D003; condition COND2 is: the nCam member variable (number of the corresponding virtual camera) of the variable of the data structure TVSPT type stored by the element D004 is equal to the number IDCam; for each element D005 of the list D003, the following is done:
let IdR represent the row number represented by the vnRow member variable (row number of the corresponding pixel on the virtual pixel plane) of the variable of the data structure TVSPT type stored by the element D005, and let IdC represent the column number represented by the vnCol member variable (column number of the corresponding pixel on the virtual pixel plane) of the variable of the data structure TVSPT type stored by the element D005; assigning the value of the vsL member variable (luminance of light scattered from the visible scene point into the corresponding virtual camera) of the variable of the data structure TVSPT type stored by the element D005 to the element in row IdR and column IdC of the array ILLUMIN;
step 402: converting the luminance value stored in each element of the array ILLUMIN into the pixel colour value of the picture obtained by the virtual camera A003 photographing the 3D scene, and storing the pixel colour values in an image file corresponding to the virtual camera A003; the image file stores one light field projection image.
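Step 4) can be sketched as follows (our illustration; the clamp in `to_pixel` stands in for the unspecified luminance-to-colour conversion of step 402, and dictionaries stand in for TVSPT variables):

```python
def build_illumin(ltvspt, cam_id, n_rows, n_cols):
    """Scatter the per-point luminances belonging to one camera (condition
    COND2: record's nCam equals cam_id) into an n_rows x n_cols array."""
    illumin = [[0.0] * n_cols for _ in range(n_rows)]
    for rec in ltvspt:
        if rec["nCam"] == cam_id:          # COND2 filter
            illumin[rec["vnRow"]][rec["vnCol"]] = rec["vsL"]
    return illumin

def to_pixel(lum, scale=255.0):
    """Map a luminance to an 8-bit channel value by scale-and-clamp
    (a simplification of the real tone conversion)."""
    return min(255, max(0, int(lum * scale)))
```

Running this once per camera yields the N_camr × N_camc light field projection images that the display side consumes.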
In this embodiment, N_pt = 1000, N_camr = 5, N_camc = 5, T_v = 0.92, T_l = 0.92, N_als = 20; T_d takes the value of one twentieth of the radius of the smallest sphere that just encloses all geometric objects of the 3D scene; the number of pixel rows on the virtual pixel plane of all virtual cameras is 1080, and the number of pixel columns on the virtual pixel plane of all virtual cameras is 1440.

Claims (1)

1. A realistic virtual 3D scene light field projection image drawing method based on spatial multiplexing, characterized in that: firstly, a photon map is created by using the photon tracing technique; then the visible scene points corresponding to each virtual camera in the virtual camera array are calculated and all the visible scene points are stored in a list; finally, the global illumination values of all the visible scene points in the list are calculated; the specific implementation steps are as follows:
providing a data structure TVSPT for storing data related to visual scene points; the data structure TVSPT comprises eight member variables, namely a position vsPos where a visual scene point is located, a normal vector vsNrm on the surface of a geometric object where a visual scene point is located, a virtual camera number nCam corresponding to the visual scene point, a line number vnRow of a pixel on a virtual pixel plane of a virtual camera corresponding to the visual scene point, a column number vnCol of a pixel on a virtual pixel plane of a virtual camera corresponding to the visual scene point, luminance vsL of light scattered from the visual scene point into the corresponding virtual camera, a light source sampling point position vsQ corresponding to the visual scene point, and light source visibility vsV corresponding to the visual scene point;
providing a data structure TALSPT for storing relevant data of a light source sampling point; the data structure TALSPT comprises two member variables of a light source sampling point position lsPos and a light source visibility lsV;
1) creating a photon map by using the photon tracing technique, the specific method being as follows:
firstly, creating a photon map PMap which does not contain any photon record in a memory of a computer; emitting N_pt photons from the surface light source into the 3D scene by using the photon tracing technique, and tracking the process in which each of these N_pt photons collides with geometric objects and is scattered as it propagates in the 3D scene; for each photon A002, in the process of tracking how the photon A002 collides with geometric objects and is scattered as it propagates in the 3D scene, starting from the second collision of the photon A002 with a geometric object of the 3D scene, one photon record is added to the photon map PMap at every collision; each photon record comprises three components: the collision position PPos of the photon with the geometric object of the 3D scene, the normalized incident direction vector PVi of the photon at the collision position PPos, and the incident power PW of the photon at the collision position PPos;
2) calculating the visible scene point corresponding to each pixel on the virtual pixel plane of each virtual camera in the virtual camera array containing N_camr × N_camc virtual cameras, the specific steps being as follows:
step 201: creating a list Ltvspt in a memory of the computer, wherein each element of the list Ltvspt is used for storing a variable of a data structure TVSPT type, and the list Ltvspt is enabled to be empty;
step 202: for each virtual camera A003 in the virtual camera array containing N_camr × N_camc virtual cameras, the following operations are performed:
according to the viewpoint position, viewport orientation, viewing angle and camera resolution parameters of the virtual camera A003, a ray A004 passing through the centre point of each pixel on the virtual pixel plane of the virtual camera A003 is emitted from the viewpoint position of the virtual camera A003 by using the ray casting technique; the rays A004 correspond one-to-one to the pixels on the virtual pixel plane of the virtual camera A003; for the ray A004 corresponding to each pixel on the virtual pixel plane of the virtual camera A003, the following operations are performed:
judging whether the ray A004 intersects any geometric object of the 3D scene; if the ray A004 intersects a geometric object of the 3D scene, the following two substeps are further executed:
step 202-1: calculating the intersection point A005 of the ray A004 with the geometric object of the 3D scene that is closest to the viewpoint position of the virtual camera A003; the intersection point A005 is a visible scene point; creating a variable A006 of the data structure TVSPT type, the variable A006 corresponding to the unique ray A004; assigning the vsPos member variable (position of the visible scene point) of the variable A006 the position of the intersection point A005; assigning the vsNrm member variable (normal vector of the geometric object surface at the visible scene point) of the variable A006 the surface normal vector of the geometric object at the intersection point A005; assigning the nCam member variable (number of the corresponding virtual camera) of the variable A006 the number of the virtual camera A003 in the virtual camera array; assigning the vnRow member variable (row number of the corresponding pixel on the virtual pixel plane) of the variable A006 the row number of the pixel on the virtual pixel plane of the virtual camera A003 corresponding to the ray A004 that corresponds to the variable A006; assigning the vnCol member variable (column number of the corresponding pixel on the virtual pixel plane) of the variable A006 the column number of the pixel on the virtual pixel plane of the virtual camera A003 corresponding to the ray A004 that corresponds to the variable A006; and assigning the vsL member variable (luminance of light scattered from the visible scene point into the corresponding virtual camera) of the variable A006 the value 0;
step 202-2: adding the variable A006 to the list Ltvspt;
3) calculating the brightness of the light scattered by each visual scene point and entering the corresponding virtual camera, wherein the specific method comprises the following steps:
step 301: for each element B001 in the list Ltvspt, the following operations are performed:
generating a random light source sampling point B002 on the surface light source according to uniform distribution; assigning vsQ member variables of the light source sampling point position corresponding to the visual scene point of the TVSPT type variable of the data structure stored in the element B001 as the position of a light source sampling point B002; judging whether a line segment B003 from the position of a light source sampling point B002 to the position represented by a vsPos member variable of the position of a visible scene point of a data structure TVSPT type variable stored in an element B001 intersects with a geometric object of a 3D scene, if so, assigning a light source visibility vsV member variable corresponding to the visible scene point of the data structure TVSPT type variable stored in the element B001 to be 0, otherwise, assigning a light source visibility vsV member variable corresponding to the visible scene point of the data structure TVSPT type variable stored in the element B001 to be 1;
step 302: taking the value of the vsPos member variable (position of the visible scene point) of a variable of the data structure TVSPT type as the primary key, storing the variables of the data structure TVSPT type stored by all elements in the list Ltvspt in a kd-tree spatial data structure C001;
step 303: for each element B001 in the list Ltvspt, the following sub-steps are performed:
step 303-1: creating a list C002 in the memory of the computer, each element of the list C002 storing a variable of the data structure TVSPT type, and making the list C002 empty; finding in the kd-tree spatial data structure C001 all variables of the data structure TVSPT type that satisfy the condition COND1, and adding these found variables of the data structure TVSPT type to the list C002; condition COND1 is: the distance from the position represented by the vsPos member variable of a variable of the data structure TVSPT type stored in the kd-tree spatial data structure C001 to the position represented by the vsPos member variable of the variable of the data structure TVSPT type stored in the element B001 is less than T_d;
Step 303-2: for each element C003 in list C002, the following is performed:
step 303-2-1: let V_s represent the vector represented by the vsNrm member variable (normal vector of the geometric object surface at the visible scene point) of the variable of the data structure TVSPT type stored by the element C003, and let V_r represent the vector represented by the vsNrm member variable of the variable of the data structure TVSPT type stored by the element B001; if (V_s · V_r)/(|V_s| · |V_r|) is less than T_v, then the element C003 is deleted from the list C002 and Step 303-2-2 is executed directly, where |V_s| represents the length of V_s and |V_r| represents the length of V_r; let P_1 represent the position represented by the vsPos member variable of the variable of the data structure TVSPT type stored by the element C003, let P_2 represent the position represented by the vsPos member variable of the variable of the data structure TVSPT type stored by the element B001, and let Q_l represent the position represented by the vsQ member variable (light source sampling point position corresponding to the visible scene point) of the variable of the data structure TVSPT type stored by the element C003; let V_l1 represent the vector pointing from Q_l to P_1, and let V_l2 represent the vector pointing from Q_l to P_2; if (V_l1 · V_l2)/(|V_l1| · |V_l2|) is less than T_l, then the element C003 is deleted from the list C002, where |V_l1| represents the length of V_l1 and |V_l2| represents the length of V_l2;
step 303-2-2: the operation for element C003 ends;
step 303-3: creating a list C004 in a memory of the computer, wherein each element of the list C004 stores a variable of a data structure TALSPT type, and the list C004 is made to be empty; for each element C005 in list C002, the following is performed:
creating a variable C006 of the data structure TALSPT type; assigning the lsPos member variable (light source sampling point position) of the variable C006 the value of the vsQ member variable (light source sampling point position corresponding to the visible scene point) of the variable of the data structure TVSPT type stored in the element C005; assigning the lsV member variable (light source visibility) of the variable C006 the value of the vsV member variable (light source visibility corresponding to the visible scene point) of the variable of the data structure TVSPT type stored in the element C005; adding the variable C006 to the list C004;
step 303-4: let N_C004 represent the number of elements of the list C004; if N_C004 is less than N_als, then N_als - N_C004 random light source sampling points C007 are generated on the surface light source according to a uniform distribution, and at the same time N_als - N_C004 variables C008 of the data structure TALSPT type are created in the memory of the computer; the N_als - N_C004 random light source sampling points C007 and the N_als - N_C004 variables C008 of the data structure TALSPT type correspond to one another one-to-one, and the position of each random light source sampling point C007 is assigned to the lsPos member variable (light source sampling point position) of the corresponding variable C008; judging whether the line segment from each random light source sampling point C007 to the position represented by the vsPos member variable of the variable of the data structure TVSPT type stored in the element B001 intersects any geometric object of the 3D scene; if it does, the lsV member variable (light source visibility) of the variable C008 corresponding to that random light source sampling point C007 is assigned 0, otherwise it is assigned 1; all the variables C008 are added to the list C004;
step 303-5: let VSPOINT represent the position represented by the vsPos member variable of the variable of the data structure TVSPT type stored in the element B001, and let NCam represent the number represented by the nCam member variable (number of the corresponding virtual camera) of the variable of the data structure TVSPT type stored in the element B001; according to the photon map PMap, the value of the vsPos member variable of the variable of the data structure TVSPT type stored by the element B001, and the value of its vsNrm member variable (normal vector of the geometric object surface at the visible scene point), calculating, by using the final gathering technique, the luminance D001 of the light that is emitted by the area light source, scattered to the position VSPOINT via other 3D scene points, and then scattered at the position VSPOINT into virtual camera No. NCam; determining the light source sampling points required for estimating the Monte Carlo direct illumination value according to the values of the lsPos member variables of all variables of the data structure TALSPT type stored in the list C004, taking the values of the lsV member variables of all variables of the data structure TALSPT type stored in the list C004 as approximate values of the visibility from the corresponding light source sampling points to the position VSPOINT, and calculating, by using the Monte Carlo direct illumination estimation technique, the luminance D002 of the light that is emitted by the surface light source and directly scattered at the position VSPOINT into virtual camera No. NCam; assigning the sum of the luminance D001 and the luminance D002 to the vsL member variable (luminance of light scattered from the visible scene point into the corresponding virtual camera) of the variable of the data structure TVSPT type stored by the element B001;
4) from the elements in the list Ltvspt, a light field projection image is generated, in a specific way as follows:
for each virtual camera A003 in the virtual camera array containing N_camr × N_camc virtual cameras, the following operations are performed:
step 401: creating in the memory of the computer a two-dimensional array ILLUMIN with N_pixr rows and N_pixc columns of elements, where N_pixr is the number of pixel rows on the virtual pixel plane of the virtual camera A003 and N_pixc is the number of pixel columns on the virtual pixel plane of the virtual camera A003; the elements of the array ILLUMIN correspond one-to-one to the pixels on the virtual pixel plane of the virtual camera A003; the array ILLUMIN is used for storing the luminance of the light scattered into the virtual camera A003 by the visible scene points corresponding to the pixels on the virtual pixel plane of the virtual camera A003; assigning each element of the array ILLUMIN the value 0; calculating the number IDCam of the virtual camera A003 in the virtual camera array; creating a list D003 in the memory of the computer and making the list D003 empty; putting all elements D004 in the list Ltvspt that satisfy the condition COND2 into the list D003; condition COND2 is: the nCam member variable (number of the corresponding virtual camera) of the variable of the data structure TVSPT type stored by the element D004 is equal to the number IDCam; for each element D005 of the list D003, the following is done:
let IdR represent the row number represented by the vnRow member variable (row number of the corresponding pixel on the virtual pixel plane) of the variable of the data structure TVSPT type stored by the element D005, and let IdC represent the column number represented by the vnCol member variable (column number of the corresponding pixel on the virtual pixel plane) of the variable of the data structure TVSPT type stored by the element D005; assigning the value of the vsL member variable (luminance of light scattered from the visible scene point into the corresponding virtual camera) of the variable of the data structure TVSPT type stored by the element D005 to the element in row IdR and column IdC of the array ILLUMIN;
step 402: converting the luminance value stored in each element of the array ILLUMIN into the pixel colour value of the picture obtained by the virtual camera A003 photographing the 3D scene, and storing the pixel colour values in an image file corresponding to the virtual camera A003; the image file stores one light field projection image.
CN201711175280.XA 2017-11-22 2017-11-22 Realistic virtual 3D scene light field projection image drawing method based on spatial multiplexing Active CN107909647B (en)


Publications (2)

Publication Number Publication Date
CN107909647A (en) 2018-04-13
CN107909647B (en) 2020-09-15

Family

ID=61847316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711175280.XA Active CN107909647B (en) 2017-11-22 2017-11-22 Realistic virtual 3D scene light field projection image drawing method based on spatial multiplexing

Country Status (1)

Country Link
CN (1) CN107909647B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109493412B (en) * 2018-11-07 2022-10-21 长春理工大学 Oversampling ray tracing method for multiplexing scene point light source visibility
CN110675482B (en) * 2019-08-28 2023-05-19 长春理工大学 Spherical fibonacci pixel lattice panoramic picture rendering and displaying method of virtual three-dimensional scene
CN110751714B (en) * 2019-10-18 2022-09-06 长春理工大学 Indirect illumination multiplexing method based on object discrimination in three-dimensional scene rendering
CN113724309A (en) * 2021-08-27 2021-11-30 杭州海康威视数字技术股份有限公司 Image generation method, device, equipment and storage medium
US20230281918A1 (en) * 2022-03-04 2023-09-07 Bidstack Group PLC Viewability testing in the presence of fine-scale occluders

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101982838A (en) * 2010-11-02 2011-03-02 Changchun University of Science and Technology 3D virtual set ray tracing method for accelerating back light source irradiation
CN102074038A (en) * 2010-12-28 2011-05-25 Changchun University of Science and Technology Method for drawing surface caustic effect of 3D virtual scene generated by smooth surface refraction
US8041463B2 * 2006-05-09 2011-10-18 Advanced Liquid Logic, Inc. Modular droplet actuator drive
CN104700448A (en) * 2015-03-23 2015-06-10 Shandong University Gradient-based adaptive photon mapping optimization algorithm
CN106251393A (en) * 2016-07-14 2016-12-21 Shandong University Progressive photon mapping optimization method based on sample elimination
CN106471392A (en) * 2014-07-04 2017-03-01 Shimadzu Corporation Image reconstruction processing method
CN107274474A (en) * 2017-07-03 2017-10-20 Changchun University of Science and Technology Indirect illumination multiplexing method in three-dimensional scene stereoscopic picture rendering


Also Published As

Publication number Publication date
CN107909647A (en) 2018-04-13

Similar Documents

Publication Publication Date Title
CN107909647B (en) Realistic virtual 3D scene light field projection image drawing method based on spatial multiplexing
US11423599B2 (en) Multi-view processing unit systems and methods
Weier et al. Foveated real‐time ray tracing for head‐mounted displays
JP6260924B2 (en) Image rendering of laser scan data
Jones et al. Interpolating vertical parallax for an autostereoscopic three-dimensional projector array
CN107274474B (en) Indirect illumination multiplexing method in three-dimensional scene stereoscopic picture rendering
Pfeiffer et al. Model-based real-time visualization of realistic three-dimensional heat maps for mobile eye tracking and eye tracking in virtual reality
CN109493413B (en) Three-dimensional scene global illumination effect drawing method based on self-adaptive virtual point light source sampling
CN104103092A (en) Real-time dynamic shadowing realization method based on projector lamp
CN102243768A (en) Method for drawing stereo picture of three-dimensional virtual scene
US20130147785A1 (en) Three-dimensional texture reprojection
Matsubara et al. Light field display simulation for light field quality assessment
Alpaslan et al. Small form factor full parallax tiled light field display
Chen et al. Real-time lens based rendering algorithm for super-multiview integral photography without image resampling
CN106780704B (en) Approximate rendering method for direct lighting effects of three-dimensional scenes based on visibility reuse
JP5252703B2 (en) 3D image display device, 3D image display method, and 3D image display program
CN107346558B (en) Method for accelerating direct illumination effect drawing of three-dimensional scene by utilizing surface light source visibility space correlation
JP2014505954A (en) Estimation method of concealment in virtual environment
CN112802170A (en) Illumination image generation method, apparatus, device, and medium
CN107909639B (en) Self-adaptive 3D scene drawing method of light source visibility multiplexing range
JP4987890B2 (en) Stereoscopic image rendering apparatus, stereoscopic image rendering method, stereoscopic image rendering program
Burnett 61‐1: Invited Paper: Light‐field Display Architecture and the Challenge of Synthetic Light‐field Radiance Image Rendering
CN114332356A (en) Virtual and real picture combining method and device
Xing et al. A real-time super multiview rendering pipeline for wide viewing-angle and high-resolution 3D displays based on a hybrid rendering technique
De Sorbier et al. Depth camera based system for auto-stereoscopic displays

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220325

Address after: 130000 room 504c-1, 4th floor, high tech entrepreneurship incubation Industrial Park, No. 1357, Jinhu Road, high tech Industrial Development Zone, Changchun City, Jilin Province

Patentee after: Jilin Kasite Technology Co.,Ltd.

Address before: 130022 No. 7089 Satellite Road, Jilin, Changchun

Patentee before: Changchun University of Science and Technology