CN107909639B - Self-adaptive 3D scene drawing method of light source visibility multiplexing range - Google Patents

Self-adaptive 3D scene drawing method of light source visibility multiplexing range

Info

Publication number
CN107909639B
Authority
CN
China
Prior art keywords
light source
source sampling
scene
list
point
Prior art date
Legal status
Active
Application number
CN201711103516.9A
Other languages
Chinese (zh)
Other versions
CN107909639A (en)
Inventor
陈纯毅 (Chen Chunyi)
杨华民 (Yang Huamin)
蒋振刚 (Jiang Zhengang)
李华 (Li Hua)
Current Assignee
Changchun University of Science and Technology
Original Assignee
Changchun University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Changchun University of Science and Technology filed Critical Changchun University of Science and Technology
Priority to CN201711103516.9A
Publication of CN107909639A
Application granted
Publication of CN107909639B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/06 Ray-tracing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/506 Illumination models

Abstract

The invention relates to a 3D scene drawing method that adaptively controls the light source visibility multiplexing range according to position. The method adaptively determines a visibility multiplexing range matched to the scene points to be drawn in different visible scene regions, enhances the accuracy of light source visibility multiplexing, and thereby improves the drawing quality of the direct illumination effect of the 3D scene.

Description

Self-adaptive 3D scene drawing method of light source visibility multiplexing range
Technical Field
The invention relates to a 3D scene drawing method that adaptively controls the light source visibility multiplexing range according to position, and belongs to the technical field of realistic 3D scene drawing.
Background
Drawing a 3D scene illuminated by a surface light source is more expensive than drawing one illuminated by a point light source. When drawing the 3D scene, a direct illumination value produced by the surface light source must be computed for each visible scene point. A Monte Carlo integration method is generally used to estimate this value; it requires generating a certain number of sampling points on the surface light source and computing the visibility between each sampling point and the visible scene point. The Monte Carlo direct illumination value estimation technique is described in detail in the paper published in ACM Transactions on Graphics, 1996, volume 15, issue 1, pages 1-36. To reduce the Monte Carlo integration error, a large number of sampling points must often be generated on the surface light source, and testing the visibility between each sampling point and each visible scene point is a major computational overhead. How to reduce this overhead is a research focus of 3D scene drawing.

Notably, there is typically some spatial correlation between the visibilities of one surface light source with respect to two adjacent visible scene points. By fully exploiting this spatial correlation, the light source visibility results already computed for nearby visible scene points can be multiplexed when estimating the direct illumination value of a visible scene point with the Monte Carlo integration method, thereby reducing the computational overhead. In fact, the angle between the lines connecting two adjacent visible scene points to the same light source sampling point is not the same in different visible scene regions. As shown in FIG. 1, the angle subtended at light source sampling point q by the two adjacent visible scene points V1 and V2 is larger than the angle subtended at q by the two adjacent visible scene points V3 and V4. This difference in angle affects how strongly the light source visibilities of adjacent visible scene points are spatially correlated; the same issue arises for pairs of adjacent visible scene points in different regions. Clearly, a larger angle implies a larger difference in the directions of the light transmission paths, so the light source visibilities of the adjacent visible scene points are generally less correlated. Multiplexing light source visibility results between uncorrelated visible scene points reduces the drawing quality of the direct illumination effect of the 3D scene. Therefore, when multiplexing the light source visibility of nearby visible scene points, the size of the multiplexing range should be controlled adaptively according to the position of the visible scene point to be drawn, so as to improve the multiplexing accuracy. Aiming at the problem of drawing the direct illumination effect of a 3D scene illuminated by a surface light source, the invention adaptively controls the light source visibility multiplexing range according to the position of the visible scene point to be drawn, thereby improving the drawing quality of the direct illumination effect of the 3D scene.
Disclosure of Invention
The invention aims to provide a 3D scene drawing method that adaptively controls the light source visibility multiplexing range according to position. By adapting the multiplexing range to the position of the visible scene point to be drawn, the method enhances the accuracy of light source visibility multiplexing and thereby improves the drawing quality of the direct illumination effect of the 3D scene.
The technical scheme of the invention is realized as follows. The 3D scene drawing method for adaptively controlling the light source visibility multiplexing range according to position comprises: placing a virtual camera at the viewpoint position and drawing the 3D scene with the ray casting technique according to the camera observation parameters; for each ray A001 emitted from the viewpoint through a pixel on the virtual pixel plane, the rays A001 corresponding one-to-one with the pixels on the virtual pixel plane, judging whether ray A001 intersects the geometric objects of the 3D scene, and if it does, further computing the intersection point A002 closest to the viewpoint, the intersection point A002 being a visible scene point; at the same time, randomly generating NUM uniformly distributed light source sampling points A003 on the surface light source, recording the spatial positions of the NUM light source sampling points A003, and computing the visibility VIS between each light source sampling point A003 and the intersection point A002; for each intersection point A002, multiplexing the surface light source sampling points and visibility results of neighboring visible scene points that lie within a specific region, so as to accelerate the computation of the approximate direct illumination value produced by the surface light source at intersection point A002. The concrete steps are as follows:
A data structure LSPD is provided for storing data related to the surface light source sampling points; the data structure LSPD contains two member variables: the spatial position of the light source sampling point and the visibility of the light source sampling point.
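For concreteness, the LSPD record and the per-pixel list storage might be declared as follows in C++ (a minimal sketch; the type and member names are illustrative assumptions, since the patent only prescribes the two member variables):

    #include <vector>

    // One surface light source sampling point: its spatial position and its
    // 0/1 visibility with respect to the visible scene point that generated it.
    struct Vec3 { float x, y, z; };
    struct LSPD {
        Vec3 pos;      // spatial position member variable of the sampling point
        int  visible;  // visibility member variable: 1 = unoccluded, 0 = occluded
    };

    // Array LS: M x N elements, one list A004 of LSPD records per pixel.
    using ListA004 = std::vector<LSPD>;
    // std::vector<std::vector<ListA004>> LS(M, std::vector<ListA004>(N));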
1) Sample the surface light source and compute the visibility between the surface light source sampling points and the visible scene points; the concrete steps are:
Step 101: place a virtual camera at the viewpoint position and draw the 3D scene with the ray casting technique according to the camera observation parameters; for each ray A001 emitted from the viewpoint through a pixel on the virtual pixel plane, the rays A001 corresponding one-to-one with the pixels on the virtual pixel plane, judge whether ray A001 intersects the geometric objects of the 3D scene; if it does, further compute the intersection point A002 of ray A001 with the scene geometry that is closest to the viewpoint; the intersection point A002 is a visible scene point, and each visible scene point corresponds to a unique pixel on the virtual pixel plane;
Step 102: create an array LS containing M rows and N columns of elements, where M is the number of pixel rows on the virtual pixel plane and N is the number of pixel columns on the virtual pixel plane; each element of the array LS stores a list A004, and each element of a list A004 stores one variable of data structure LSPD type; make the list A004 stored in each element of the array LS empty; the elements of the array LS correspond one-to-one with the pixels on the virtual pixel plane;
Step 103: for each visible scene point PV, perform the following operations:
Randomly generate NUM uniformly distributed light source sampling points A003 on the surface light source; create NUM variables of data structure LSPD type in computer memory, corresponding one-to-one with the NUM light source sampling points A003; assign the spatial position of each light source sampling point A003 to the light source sampling point spatial position member variable of its corresponding LSPD variable; for each light source sampling point A003, judge whether the line segment from the spatial position of light source sampling point A003 to the visible scene point PV intersects the geometric objects of the 3D scene; if it does, set the light source sampling point visibility member variable of the corresponding LSPD variable to 0, otherwise set it to 1; add the NUM LSPD variables corresponding to the NUM light source sampling points A003 to the list A004 stored in the element of the array LS corresponding to the pixel on the virtual pixel plane corresponding to the visible scene point PV.
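A minimal sketch of Step 103 in C++, reusing the LSPD, Vec3, and ListA004 declarations above; sampleAreaLight() and segmentHitsScene() are assumed helper functions (uniform sampling of the surface light source and a segment/scene occlusion test) that the patent does not name:

    // Assumed helpers, not specified by the patent:
    Vec3 sampleAreaLight();                               // uniform sample on the light
    bool segmentHitsScene(const Vec3& a, const Vec3& b);  // does segment a-b hit geometry?

    // Step 103 sketch: sample the light NUM times for one visible scene
    // point PV and record each sample's position and 0/1 visibility.
    ListA004 sampleLightForPoint(const Vec3& PV, int NUM) {
        ListA004 a004(NUM);
        for (int i = 0; i < NUM; ++i) {
            a004[i].pos = sampleAreaLight();  // light source sampling point A003
            a004[i].visible = segmentHitsScene(a004[i].pos, PV) ? 0 : 1;
        }
        return a004;  // stored in the LS element of PV's pixel
    }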
2) Compute the approximate direct illumination value of each visible scene point and draw the direct illumination effect of the 3D scene; the concrete steps are:
Step 201: create an array ILU containing M rows and N columns of elements, where M is the number of pixel rows on the virtual pixel plane and N is the number of pixel columns on the virtual pixel plane; the elements of the array ILU correspond one-to-one with the pixels on the virtual pixel plane, and each element of the array ILU stores the direct illumination value of the visible scene point corresponding to its pixel; set the value of each element of the array ILU to the illumination value corresponding to the background color; place all visible scene points obtained in Step 101 into a list B001, where each element of the list B001 stores one visible scene point and the number of elements of the list B001 equals the number of visible scene points obtained in Step 101;
Step 202: for each visible scene point EV, perform the following operations:
Step 202-1: let PA0 denote the visible scene point EV; let α1 denote the vector pointing from the center point of the surface light source to PA0; create a list LREUSE, each element of which stores a visible scene point adjacent to PA0 that is suitable for participating in light source visibility multiplexing; make the list LREUSE empty;
Step 202-2: for each element B002 in the list B001, perform the following operations:
Let α2 denote the vector pointing from the center point of the surface light source to the visible scene point stored in element B002; if (α1·α2)/(|α1|·|α2|) is greater than Ta, add the visible scene point stored in element B002 to the list LREUSE; |α1| denotes the modulus of the vector α1, |α2| denotes the modulus of the vector α2, and Ta denotes a direction difference threshold referenced to the center point of the surface light source;
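The test in Step 202-2 is a cosine (direction) comparison against Ta; a sketch in C++, with dot(), length(), and operator- written out as assumed small helpers for the Vec3 type above:

    #include <cmath>

    // Assumed small vector helpers:
    float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    Vec3  operator-(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
    float length(const Vec3& a) { return std::sqrt(dot(a, a)); }

    // Step 202-2 sketch: B002's scene point joins LREUSE when the directions
    // from the light source center to PA0 and to it agree within acos(Ta).
    bool passesDirectionTest(const Vec3& lightCenter, const Vec3& PA0,
                             const Vec3& candidate, float Ta) {
        Vec3 a1 = PA0 - lightCenter;        // alpha1
        Vec3 a2 = candidate - lightCenter;  // alpha2
        return dot(a1, a2) / (length(a1) * length(a2)) > Ta;
    }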
Step 202-3: create a list LRULS for storing variables of data structure LSPD type; make the list LRULS empty;
Step 202-4: for each element B003 in the list LREUSE, perform the following operations:
Let LA1 denote the list A004 stored in the element of the array LS corresponding to the pixel on the virtual pixel plane corresponding to the visible scene point stored in element B003; let PA1 denote the visible scene point stored in element B003; for each variable B004 of data structure LSPD type stored in LA1, perform the following operations:
Let β1 denote the vector pointing from the spatial position given by the light source sampling point spatial position member variable of variable B004 toward PA0; let β2 denote the vector pointing from that spatial position toward PA1; let n1 denote the normalized surface normal vector of the geometric object at the position of PA0, and n2 the normalized surface normal vector of the geometric object at the position of PA1; if (β1·β2)/(|β1|·|β2|) is greater than Tb, the distance from PA0 to PA1 is less than Td, and (n1·n2) is greater than Tn, add variable B004 to the list LRULS; |β1| denotes the modulus of the vector β1 and |β2| denotes the modulus of the vector β2; Tb denotes a direction difference threshold referenced to the light source sampling point, Td denotes a distance difference threshold, and Tn denotes a normal vector difference threshold;
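Step 202-4's three-part filter can be sketched as one predicate, reusing the Vec3 helpers from the previous sketch (n1 and n2 are the normalized normals at PA0 and PA1):

    // Step 202-4 sketch: reuse sample B004 of neighbor PA1 for PA0 only if
    // the connecting directions, the point distance, and the normals agree.
    bool reusableForPA0(const LSPD& b004, const Vec3& PA0, const Vec3& PA1,
                        const Vec3& n1, const Vec3& n2,
                        float Tb, float Td, float Tn) {
        Vec3 b1 = PA0 - b004.pos;  // beta1: sampling point -> PA0
        Vec3 b2 = PA1 - b004.pos;  // beta2: sampling point -> PA1
        return dot(b1, b2) / (length(b1) * length(b2)) > Tb  // direction test
            && length(PA1 - PA0) < Td                        // distance test
            && dot(n1, n2) > Tn;                             // normal test
    }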
Step 202-5: let nsi be the number of variables of data structure LSPD type stored in the list LRULS; if nsi is greater than or equal to Nst, go to Step 202-6; otherwise, randomly generate Nst - nsi uniformly distributed light source sampling points C001 on the surface light source, create Nst - nsi variables of data structure LSPD type in computer memory corresponding one-to-one with the Nst - nsi light source sampling points C001, and assign the spatial position of each light source sampling point C001 to the light source sampling point spatial position member variable of its corresponding LSPD variable; for each light source sampling point C001, judge whether the line segment from the spatial position of light source sampling point C001 to PA0 intersects the geometric objects of the 3D scene; if it does, set the light source sampling point visibility member variable of the corresponding LSPD variable to 0, otherwise set it to 1; add the Nst - nsi LSPD variables corresponding to the Nst - nsi light source sampling points C001 to the list LRULS; Nst denotes the minimum number of light source sampling points required to estimate the approximate direct illumination value of a visible scene point;
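A sketch of the top-up in Step 202-5, reusing the assumed helpers above:

    // Step 202-5 sketch: if fewer than Nst reusable samples were found,
    // generate fresh ones traced directly against PA0 until LRULS holds Nst.
    void topUpToNst(std::vector<LSPD>& LRULS, const Vec3& PA0, int Nst) {
        for (int nsi = static_cast<int>(LRULS.size()); nsi < Nst; ++nsi) {
            LSPD s;
            s.pos = sampleAreaLight();                         // sampling point C001
            s.visible = segmentHitsScene(s.pos, PA0) ? 0 : 1;  // fresh occlusion test
            LRULS.push_back(s);
        }
    }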
Step 202-6: determine the light source sampling points required for estimating the Monte Carlo direct illumination value of PA0 from the values of the light source sampling point spatial position member variables of all LSPD variables stored in the list LRULS; use the values of the light source sampling point visibility member variables of all LSPD variables stored in the list LRULS as approximate visibilities of the corresponding light source sampling points with respect to PA0; and compute the approximate direct illumination value C002 of PA0 with the Monte Carlo direct illumination value estimation technique;
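Step 202-6 is a standard Monte Carlo area-light estimate in which the stored 0/1 visibilities stand in for fresh shadow rays. A sketch under common assumptions (uniform light sampling, so the pdf is 1/lightArea; unshadowedRadiance() is an assumed helper evaluating emitted radiance times BRDF times the cosine/distance geometry term for one sample, ignoring occlusion):

    // Assumed helper: contribution of sampling point q to scene point PA0
    // without the visibility factor.
    float unshadowedRadiance(const Vec3& q, const Vec3& PA0);

    // Step 202-6 sketch: average the visible samples' contributions, weighted
    // by the uniform-sampling factor lightArea / sampleCount. LRULS holds at
    // least Nst samples after Step 202-5, so the division is safe.
    float estimateDirectC002(const std::vector<LSPD>& LRULS,
                             const Vec3& PA0, float lightArea) {
        float sum = 0.0f;
        for (const LSPD& s : LRULS)
            if (s.visible == 1)  // reused visibility approximation as 0/1 weight
                sum += unshadowedRadiance(s.pos, PA0);
        return lightArea * sum / static_cast<float>(LRULS.size());
    }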
Step 202-7: compute the row number irow and the column number jcol of the pixel on the virtual pixel plane corresponding to PA0; assign the approximate direct illumination value C002 to the element in row irow, column jcol of the array ILU;
Step 203: convert the direct illumination value stored in each element of the array ILU into a 3D scene image pixel color value, and display the 3D scene image on the display.
The advantage of the method is that, when multiplexing the light source sampling point visibility of neighboring visible scene points, it takes into account the size of the angle between the lines connecting the visible scene point to be drawn and its neighboring visible scene point to a given light source sampling point, and controls the visibility multiplexing range by setting an angle threshold, instead of controlling it simply by the distance between visible scene points. The angles between the lines connecting two adjacent visible scene points at different positions in the visible scene region to a given light source sampling point are generally not the same, so the degree of correlation of the light source visibilities of two adjacent visible scene points may differ between visible scene regions; by the same reasoning, the problem also exists for two adjacent visible scene points in different regions. The invention can adaptively determine a visibility multiplexing range matched to the scene point to be drawn in different visible scene regions, thereby overcoming these problems and enhancing the accuracy of light source visibility multiplexing.
Drawings
Fig. 1 is a schematic diagram of a three-dimensional scene illuminated by a surface light source.
Detailed Description
In order that the features and advantages of the method may be more clearly understood, the method is further described below in connection with a specific embodiment. This embodiment considers a bedroom 3D scene with a rectangular surface light source on the ceiling of the bedroom. The computer system uses an Intel(R) Xeon(R) CPU E3-1225 v3 @ 3.20GHz, Kingston 8GB DDR3 1333 memory, a Buffalo HD-CE 1.5TU2 disk, and an NVIDIA Quadro K2000 graphics card; the operating system is Windows 7, and the software programming tool is VC++ 2010.
Place a virtual camera at the viewpoint position and draw the 3D scene with the ray casting technique according to the camera observation parameters; for each ray A001 emitted from the viewpoint through a pixel on the virtual pixel plane, the rays A001 corresponding one-to-one with the pixels on the virtual pixel plane, judge whether ray A001 intersects the geometric objects of the 3D scene; if it does, further compute the intersection point A002 closest to the viewpoint, the intersection point A002 being a visible scene point; at the same time, randomly generate NUM uniformly distributed light source sampling points A003 on the surface light source, record the spatial positions of the NUM light source sampling points A003, and compute the visibility VIS between each light source sampling point A003 and the intersection point A002; for each intersection point A002, multiplex the surface light source sampling points and visibility results of neighboring visible scene points that lie within a specific region, so as to accelerate the computation of the approximate direct illumination value produced by the surface light source at intersection point A002. The concrete steps are as follows:
A data structure LSPD is provided for storing data related to the surface light source sampling points; the data structure LSPD contains two member variables: the spatial position of the light source sampling point and the visibility of the light source sampling point.
1) Sample the surface light source and compute the visibility between the surface light source sampling points and the visible scene points; the concrete steps are:
Step 101: place a virtual camera at the viewpoint position and draw the 3D scene with the ray casting technique according to the camera observation parameters; for each ray A001 emitted from the viewpoint through a pixel on the virtual pixel plane, the rays A001 corresponding one-to-one with the pixels on the virtual pixel plane, judge whether ray A001 intersects the geometric objects of the 3D scene; if it does, further compute the intersection point A002 of ray A001 with the scene geometry that is closest to the viewpoint; the intersection point A002 is a visible scene point, and each visible scene point corresponds to a unique pixel on the virtual pixel plane;
Step 102: create an array LS containing M rows and N columns of elements, where M is the number of pixel rows on the virtual pixel plane and N is the number of pixel columns on the virtual pixel plane; each element of the array LS stores a list A004, and each element of a list A004 stores one variable of data structure LSPD type; make the list A004 stored in each element of the array LS empty; the elements of the array LS correspond one-to-one with the pixels on the virtual pixel plane;
Step 103: for each visible scene point PV, perform the following operations:
Randomly generate NUM uniformly distributed light source sampling points A003 on the surface light source; create NUM variables of data structure LSPD type in computer memory, corresponding one-to-one with the NUM light source sampling points A003; assign the spatial position of each light source sampling point A003 to the light source sampling point spatial position member variable of its corresponding LSPD variable; for each light source sampling point A003, judge whether the line segment from the spatial position of light source sampling point A003 to the visible scene point PV intersects the geometric objects of the 3D scene; if it does, set the light source sampling point visibility member variable of the corresponding LSPD variable to 0, otherwise set it to 1; add the NUM LSPD variables corresponding to the NUM light source sampling points A003 to the list A004 stored in the element of the array LS corresponding to the pixel on the virtual pixel plane corresponding to the visible scene point PV.
2) Compute the approximate direct illumination value of each visible scene point and draw the direct illumination effect of the 3D scene; the concrete steps are:
Step 201: create an array ILU containing M rows and N columns of elements, where M is the number of pixel rows on the virtual pixel plane and N is the number of pixel columns on the virtual pixel plane; the elements of the array ILU correspond one-to-one with the pixels on the virtual pixel plane, and each element of the array ILU stores the direct illumination value of the visible scene point corresponding to its pixel; set the value of each element of the array ILU to the illumination value corresponding to the background color; place all visible scene points obtained in Step 101 into a list B001, where each element of the list B001 stores one visible scene point and the number of elements of the list B001 equals the number of visible scene points obtained in Step 101;
Step 202: for each visible scene point EV, perform the following operations:
Step 202-1: let PA0 denote the visible scene point EV; let α1 denote the vector pointing from the center point of the surface light source to PA0; create a list LREUSE, each element of which stores a visible scene point adjacent to PA0 that is suitable for participating in light source visibility multiplexing; make the list LREUSE empty;
Step 202-2: for each element B002 in the list B001, perform the following operations:
Let α2 denote the vector pointing from the center point of the surface light source to the visible scene point stored in element B002; if (α1·α2)/(|α1|·|α2|) is greater than Ta, add the visible scene point stored in element B002 to the list LREUSE; |α1| denotes the modulus of the vector α1, |α2| denotes the modulus of the vector α2, and Ta denotes a direction difference threshold referenced to the center point of the surface light source;
Step 202-3: create a list LRULS for storing variables of data structure LSPD type; make the list LRULS empty;
Step 202-4: for each element B003 in the list LREUSE, perform the following operations:
Let LA1 denote the list A004 stored in the element of the array LS corresponding to the pixel on the virtual pixel plane corresponding to the visible scene point stored in element B003; let PA1 denote the visible scene point stored in element B003; for each variable B004 of data structure LSPD type stored in LA1, perform the following operations:
Let β1 denote the vector pointing from the spatial position given by the light source sampling point spatial position member variable of variable B004 toward PA0; let β2 denote the vector pointing from that spatial position toward PA1; let n1 denote the normalized surface normal vector of the geometric object at the position of PA0, and n2 the normalized surface normal vector of the geometric object at the position of PA1; if (β1·β2)/(|β1|·|β2|) is greater than Tb, the distance from PA0 to PA1 is less than Td, and (n1·n2) is greater than Tn, add variable B004 to the list LRULS; |β1| denotes the modulus of the vector β1 and |β2| denotes the modulus of the vector β2; Tb denotes a direction difference threshold referenced to the light source sampling point, Td denotes a distance difference threshold, and Tn denotes a normal vector difference threshold;
Step 202-5: let nsi be the number of variables of data structure LSPD type stored in the list LRULS; if nsi is greater than or equal to Nst, go to Step 202-6; otherwise, randomly generate Nst - nsi uniformly distributed light source sampling points C001 on the surface light source, create Nst - nsi variables of data structure LSPD type in computer memory corresponding one-to-one with the Nst - nsi light source sampling points C001, and assign the spatial position of each light source sampling point C001 to the light source sampling point spatial position member variable of its corresponding LSPD variable; for each light source sampling point C001, judge whether the line segment from the spatial position of light source sampling point C001 to PA0 intersects the geometric objects of the 3D scene; if it does, set the light source sampling point visibility member variable of the corresponding LSPD variable to 0, otherwise set it to 1; add the Nst - nsi LSPD variables corresponding to the Nst - nsi light source sampling points C001 to the list LRULS; Nst denotes the minimum number of light source sampling points required to estimate the approximate direct illumination value of a visible scene point;
Step 202-6: determine the light source sampling points required for estimating the Monte Carlo direct illumination value of PA0 from the values of the light source sampling point spatial position member variables of all LSPD variables stored in the list LRULS; use the values of the light source sampling point visibility member variables of all LSPD variables stored in the list LRULS as approximate visibilities of the corresponding light source sampling points with respect to PA0; and compute the approximate direct illumination value C002 of PA0 with the Monte Carlo direct illumination value estimation technique;
Step 202-7: compute the row number irow and the column number jcol of the pixel on the virtual pixel plane corresponding to PA0; assign the approximate direct illumination value C002 to the element in row irow, column jcol of the array ILU;
Step 203: convert the direct illumination value stored in each element of the array ILU into a 3D scene image pixel color value, and display the 3D scene image on the display.
In this embodiment, NUM = 8, M = 1024, N = 768, Ta = 0.92, Tb = 0.92, Tn = 0.9, Td is one twentieth of the radius of the smallest sphere that just encloses all geometric objects of the 3D scene, and Nst = 15.
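The embodiment's parameter choices, written out as C++ constants (a sketch; R, the radius of the scene's bounding sphere, is assumed to be computed elsewhere):

    // Parameters used in this embodiment.
    const int   NUM = 8;           // light source samples per visible scene point
    const int   M = 1024, N = 768; // pixel rows and columns of the virtual pixel plane
    const float Ta = 0.92f;        // direction threshold, light source center reference
    const float Tb = 0.92f;        // direction threshold, sampling point reference
    const float Tn = 0.9f;         // normal vector difference threshold
    const int   Nst = 15;          // minimum sample count per scene point
    // const float Td = R / 20.0f; // distance threshold: 1/20 of bounding sphere radius R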

Claims (1)

1. A 3D scene drawing method for adaptively controlling the light source visibility multiplexing range according to position, characterized by comprising the following steps: placing a virtual camera at the viewpoint position and drawing the 3D scene with the ray casting technique according to the camera observation parameters; for each ray A001 emitted from the viewpoint through a pixel on the virtual pixel plane, the rays A001 corresponding one-to-one with the pixels on the virtual pixel plane, judging whether ray A001 intersects the geometric objects of the 3D scene, and if it does, further computing the intersection point A002 closest to the viewpoint, the intersection point A002 being a visible scene point; at the same time, randomly generating NUM uniformly distributed light source sampling points A003 on the surface light source, recording the spatial positions of the NUM light source sampling points A003, and computing the visibility VIS between each light source sampling point A003 and the intersection point A002; for each intersection point A002, multiplexing the surface light source sampling points and visibility results of neighboring visible scene points that lie within a specific region, so as to accelerate the computation of the approximate direct illumination value produced by the surface light source at intersection point A002; the concrete steps are as follows:
providing a data structure LSPD for storing data related to the surface light source sampling points, the data structure LSPD containing two member variables: the spatial position of the light source sampling point and the visibility of the light source sampling point;
1) sample the surface light source and compute the visibility between the surface light source sampling points and the visible scene points; the concrete steps are:
Step 101: place a virtual camera at the viewpoint position and draw the 3D scene with the ray casting technique according to the camera observation parameters; for each ray A001 emitted from the viewpoint through a pixel on the virtual pixel plane, the rays A001 corresponding one-to-one with the pixels on the virtual pixel plane, judge whether ray A001 intersects the geometric objects of the 3D scene; if it does, further compute the intersection point A002 of ray A001 with the scene geometry that is closest to the viewpoint; the intersection point A002 is a visible scene point, and each visible scene point corresponds to a unique pixel on the virtual pixel plane;
Step 102: create an array LS containing M rows and N columns of elements, where M is the number of pixel rows on the virtual pixel plane and N is the number of pixel columns on the virtual pixel plane; each element of the array LS stores a list A004, and each element of a list A004 stores one variable of data structure LSPD type; make the list A004 stored in each element of the array LS empty; the elements of the array LS correspond one-to-one with the pixels on the virtual pixel plane;
Step 103: for each visible scene point EV, perform the following operations:
randomly generate NUM uniformly distributed light source sampling points A003 on the surface light source; create NUM variables of data structure LSPD type in computer memory, corresponding one-to-one with the NUM light source sampling points A003; assign the spatial position of each light source sampling point A003 to the light source sampling point spatial position member variable of its corresponding LSPD variable; for each light source sampling point A003, judge whether the line segment from the spatial position of light source sampling point A003 to the visible scene point EV intersects the geometric objects of the 3D scene; if it does, set the light source sampling point visibility member variable of the corresponding LSPD variable to 0, otherwise set it to 1; add the NUM LSPD variables corresponding to the NUM light source sampling points A003 to the list A004 stored in the element of the array LS corresponding to the pixel on the virtual pixel plane corresponding to the visible scene point EV;
2) compute the approximate direct illumination value of each visible scene point and draw the direct illumination effect of the 3D scene; the concrete steps are:
Step 201: create an array ILU containing M rows and N columns of elements, where M is the number of pixel rows on the virtual pixel plane and N is the number of pixel columns on the virtual pixel plane; the elements of the array ILU correspond one-to-one with the pixels on the virtual pixel plane, and each element of the array ILU stores the direct illumination value of the visible scene point corresponding to its pixel; set the value of each element of the array ILU to the illumination value corresponding to the background color; place all visible scene points obtained in Step 101 into a list B001, where each element of the list B001 stores one visible scene point and the number of elements of the list B001 equals the number of visible scene points obtained in Step 101;
Step 202: for each visible scene point EV, perform the following operations:
Step 202-1: let PA0 denote the visible scene point EV; let α1 denote the vector pointing from the center point of the surface light source to PA0; create a list LREUSE, each element of which stores a visible scene point adjacent to PA0 that is suitable for participating in light source visibility multiplexing; make the list LREUSE empty;
Step 202-2: for each element B002 in the list B001, perform the following operations:
Let α2 denote the vector pointing from the center point of the surface light source to the visible scene point stored in element B002; if (α1·α2)/(|α1|·|α2|) is greater than Ta, add the visible scene point stored in element B002 to the list LREUSE; |α1| denotes the modulus of the vector α1, |α2| denotes the modulus of the vector α2, and Ta denotes a direction difference threshold referenced to the center point of the surface light source;
Step 202-3: create a list LRULS for storing variables of data structure LSPD type; make the list LRULS empty;
Step 202-4: for each element B003 in the list LREUSE, perform the following operations:
Let LA1 denote the list A004 stored in the element of the array LS corresponding to the pixel on the virtual pixel plane corresponding to the visible scene point stored in element B003; let PA1 denote the visible scene point stored in element B003; for each variable B004 of data structure LSPD type stored in LA1, perform the following operations:
Let β1 denote the vector pointing from the spatial position given by the light source sampling point spatial position member variable of variable B004 toward PA0; let β2 denote the vector pointing from that spatial position toward PA1; let n1 denote the normalized surface normal vector of the geometric object at the position of PA0, and n2 the normalized surface normal vector of the geometric object at the position of PA1; if (β1·β2)/(|β1|·|β2|) is greater than Tb, the distance from PA0 to PA1 is less than Td, and (n1·n2) is greater than Tn, add variable B004 to the list LRULS; |β1| denotes the modulus of the vector β1 and |β2| denotes the modulus of the vector β2; Tb denotes a direction difference threshold referenced to the light source sampling point, Td denotes a distance difference threshold, and Tn denotes a normal vector difference threshold;
Step 202-5: let nsi be the number of variables of data structure LSPD type stored in the list LRULS; if nsi is greater than or equal to Nst, go to Step 202-6; otherwise, randomly generate Nst - nsi uniformly distributed light source sampling points C001 on the surface light source, create Nst - nsi variables of data structure LSPD type in computer memory corresponding one-to-one with the Nst - nsi light source sampling points C001, and assign the spatial position of each light source sampling point C001 to the light source sampling point spatial position member variable of its corresponding LSPD variable; for each light source sampling point C001, judge whether the line segment from the spatial position of light source sampling point C001 to PA0 intersects the geometric objects of the 3D scene; if it does, set the light source sampling point visibility member variable of the corresponding LSPD variable to 0, otherwise set it to 1; add the Nst - nsi LSPD variables corresponding to the Nst - nsi light source sampling points C001 to the list LRULS; Nst denotes the minimum number of light source sampling points required to estimate the approximate direct illumination value of a visible scene point;
Step 202-6: determine the light source sampling points required for estimating the Monte Carlo direct illumination value of PA0 from the values of the light source sampling point spatial position member variables of all LSPD variables stored in the list LRULS; use the values of the light source sampling point visibility member variables of all LSPD variables stored in the list LRULS as approximate visibilities of the corresponding light source sampling points with respect to PA0; and compute the approximate direct illumination value C002 of PA0 with the Monte Carlo direct illumination value estimation technique;
Step 202-7: compute the row number irow and the column number jcol of the pixel on the virtual pixel plane corresponding to PA0; assign the approximate direct illumination value C002 to the element in row irow, column jcol of the array ILU;
Step 203: convert the direct illumination value stored in each element of the array ILU into a 3D scene image pixel color value, and display the 3D scene image on the display.
CN201711103516.9A 2017-11-10 2017-11-10 Self-adaptive 3D scene drawing method of light source visibility multiplexing range Active CN107909639B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711103516.9A CN107909639B (en) 2017-11-10 2017-11-10 Self-adaptive 3D scene drawing method of light source visibility multiplexing range

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711103516.9A CN107909639B (en) 2017-11-10 2017-11-10 Self-adaptive 3D scene drawing method of light source visibility multiplexing range

Publications (2)

Publication Number Publication Date
CN107909639A (en) 2018-04-13
CN107909639B (en) 2021-02-19

Family

ID=61844605

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711103516.9A Active CN107909639B (en) 2017-11-10 2017-11-10 Self-adaptive 3D scene drawing method of light source visibility multiplexing range

Country Status (1)

Country Link
CN (1) CN107909639B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109493412B * 2018-11-07 2022-10-21 Changchun University of Science and Technology Oversampling ray tracing method for multiplexing scene point light source visibility
US20230281918A1 (en) * 2022-03-04 2023-09-07 Bidstack Group PLC Viewability testing in the presence of fine-scale occluders

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160171752A1 * 2009-06-26 2016-06-16 Intel Corporation Optimized Stereoscopic Visualization
CN102831634A * 2012-08-16 2012-12-19 Beihang University Efficient and accurate general soft shadow generation method
CN104200513A * 2014-08-08 2014-12-10 Zhejiang University of Media and Communications Multi-light-source rendering method based on matrix row-column sampling
CN104346831A * 2014-11-01 2015-02-11 Changchun University of Science and Technology Method for approximately drawing soft shadows of a three-dimensional scene
CN105335995A * 2015-10-28 2016-02-17 Huawei Technologies Co., Ltd. Multi-light-source global illumination rendering method and apparatus
CN106780704A * 2016-12-07 2017-05-31 Changchun University of Science and Technology Approximate drawing method for direct illumination effects of three-dimensional scenes based on visibility reuse

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Aner Ben-Artzi et al., "Efficient Shadows from Sampled Environment Maps", Columbia University Tech Report CUCS-025-04, 2013-12-31, full text. *

Also Published As

Publication number Publication date
CN107909639A (en) 2018-04-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant