CN109493409B - Virtual three-dimensional scene stereo picture drawing method based on left-right eye space multiplexing - Google Patents
Virtual three-dimensional scene stereo picture drawing method based on left-right eye space multiplexing
- Publication number
- CN109493409B (application CN201811305231.8A)
- Authority
- CN
- China
- Prior art keywords
- eye camera
- dimensional
- virtual
- eye
- dimensional array
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/02—Non-photorealistic rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/06—Ray-tracing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/61—Scene description
Abstract
The invention discloses a virtual three-dimensional scene stereo picture drawing method based on left-right eye spatial multiplexing. When the left-eye image is drawn, if a visible scene point of the left-eye camera does not lie on a specular reflection surface, the visible scene point is projected onto the virtual pixel plane of the right-eye camera, and its illumination value is used as an approximate illumination value for the corresponding right-eye pixel. When the right-eye image is drawn, visible scene points that have already obtained approximate illumination values through this correspondence operation are not processed again, which reduces the amount of drawing computation. The method can significantly improve the drawing speed of stereo pictures of virtual three-dimensional scenes.
Description
Technical Field
The invention belongs to the technical field of virtual three-dimensional scene drawing, and relates to a method for drawing a three-dimensional picture of a virtual three-dimensional scene based on left-right eye spatial multiplexing.
Background
In recent years, virtual reality (VR) technology has attracted wide attention. In general, a VR application needs to display a virtual three-dimensional scene in front of the user in real time, in the form of image frames, so that the user can visually perceive the virtual three-dimensional scene. High-immersion VR application systems often need to present stereoscopic pictures to the user. The currently prevalent form is the stereo picture based on binocular parallax: each stereo picture comprises two sub-pictures corresponding to the left and right eyes. When a stereo picture is played, the user's left eye sees only the sub-picture corresponding to the left eye (hereinafter the left-eye image), and the right eye sees only the sub-picture corresponding to the right eye (hereinafter the right-eye image). Drawing a stereo picture of a virtual three-dimensional scene therefore requires drawing both sub-pictures, so its time overhead is larger than that of drawing a non-stereo picture. Traditionally, the left-eye and right-eye images are drawn independently, so the time overhead of drawing a stereo picture is about twice that of drawing a non-stereo picture. Analysis of virtual three-dimensional scene stereo pictures shows that although the two sub-pictures differ slightly, they generally look very similar. This means that results obtained while drawing the left-eye image can be multiplexed when drawing the right-eye image, thereby reducing the overall time overhead of drawing the whole stereo picture.
Of course, for a three-dimensional geometric object with a specular reflection surface, the light reflected off that surface has strong spatial directivity, so it is generally difficult to multiplex the illumination results of visible scene points on specular reflection surfaces between the left-eye and right-eye images. A visible scene point, as used herein, is the intersection point, closest to the camera position, between a ray emitted from the camera position through a pixel on the camera's virtual pixel plane and the three-dimensional geometric objects of the virtual three-dimensional scene. The invention provides a virtual three-dimensional scene stereo picture drawing method based on left-right eye spatial multiplexing, which accelerates drawing by multiplexing, during right-eye image drawing, the illumination results already computed for the left-eye camera's non-specular visible scene points.
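The notion of a visible scene point can be illustrated with a minimal ray-intersection sketch. Here the scene geometry is stood in for by spheres, which is an assumption made purely for illustration; the function name nearest_sphere_hit is hypothetical and not part of the patent:

```python
import numpy as np

def nearest_sphere_hit(origin, direction, spheres):
    """Return the visible scene point: the intersection between a camera
    ray and the scene geometry that is closest to the camera position.
    `spheres` is a list of (center, radius) pairs standing in for the
    scene's three-dimensional geometric objects."""
    direction = direction / np.linalg.norm(direction)
    best_t, best_point = np.inf, None
    for center, radius in spheres:
        oc = origin - center
        b = 2.0 * np.dot(direction, oc)
        c = np.dot(oc, oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0.0:
            continue  # ray misses this object entirely
        t = (-b - np.sqrt(disc)) / 2.0
        if 1e-6 < t < best_t:  # keep the hit nearest the camera
            best_t, best_point = t, origin + t * direction
    return best_point

# A ray shot along +z hits the nearer of two spheres at [0, 0, 4].
hit = nearest_sphere_hit(np.array([0.0, 0.0, 0.0]),
                         np.array([0.0, 0.0, 1.0]),
                         [(np.array([0.0, 0.0, 5.0]), 1.0),
                          (np.array([0.0, 0.0, 9.0]), 1.0)])
```

The farther sphere also intersects the ray, but only the intersection closest to the camera position qualifies as the visible scene point.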
Disclosure of Invention
The aim of the invention is to provide a method for drawing stereo pictures of virtual three-dimensional scenes based on left-right eye spatial multiplexing, so as to accelerate the drawing of such stereo pictures.
The technical scheme of the method is realized as follows. The virtual three-dimensional scene stereo picture drawing method based on left-right eye spatial multiplexing is characterized in that: first, eight parameters are set according to the stereo picture drawing requirement: the positions of the left-eye camera and the right-eye camera, the horizontal field angle, the vertical field angle, the straight-ahead direction, the upward direction, the horizontal direction, the number of pixel rows M on the virtual pixel plane, and the number of pixel columns N on the virtual pixel plane. As shown in FIG. 1, the straight-ahead direction of the left-eye camera is direction vector w1, the upward direction of the left-eye camera is direction vector v1, and the horizontal direction of the left-eye camera is direction vector u1; the straight-ahead direction of the right-eye camera is direction vector w2, the upward direction of the right-eye camera is direction vector v2, and the horizontal direction of the right-eye camera is direction vector u2. Direction vectors u1, v1, w1 determine the camera coordinate system u1-v1-w1 of the left-eye camera; direction vectors u2, v2, w2 determine the camera coordinate system u2-v2-w2 of the right-eye camera. The vertex coordinates of the three-dimensional geometric objects are described in the world coordinate system x-y-z. The number of pixel rows M on the virtual pixel plane is the same for the left-eye and right-eye cameras, and so is the number of pixel columns N. The horizontal field angles of the left-eye and right-eye cameras take the same value θu, and the vertical field angles take the same value θv.
The numbering of the row and column numbers of the pixels on the virtual pixel plane of the left-eye camera is shown in fig. 2, and the numbering of the row and column numbers of the pixels on the virtual pixel plane of the right-eye camera is shown in fig. 3. Then, the following steps are performed:
step 101: creating a two-dimensional array ILL comprising M rows and N columns of elements in a memory of a computer system; assigning each element of the two-dimensional array ILL to 0; elements of the two-dimensional array ILL correspond to pixels on a virtual pixel plane of the left-eye camera one to one; creating a two-dimensional array TagR containing M rows and N columns of elements in a memory of a computer system; assigning each element of the two-dimensional array TagR to 0; elements of the two-dimensional array TagR correspond to pixels on a virtual pixel plane of the right-eye camera one by one; creating a two-dimensional array ILR comprising M rows and N columns of elements in a memory of a computer system; assigning each element of the two-dimensional array ILR to 0; elements of the two-dimensional array ILR correspond to pixels on a virtual pixel plane of the right-eye camera one by one;
step 102: emitting a light ray A001 which passes through the center position of each pixel on the virtual pixel plane of the left-eye camera from the position of the left-eye camera, wherein the light rays A001 correspond to the pixels on the virtual pixel plane of the left-eye camera one by one; for each ray a001, the following operations are performed:
step 102-1: calculate the intersection point A002, closest to the left-eye camera position, between ray A001 and the three-dimensional geometric objects of the virtual three-dimensional scene, and compute the illumination value A003 of intersection point A002 using ray tracing; calculate the row number iRow and the column number iCol of the pixel corresponding to ray A001 on the virtual pixel plane of the left-eye camera, and assign the element in row iRow and column iCol of the two-dimensional array ILL the illumination value A003;
step 102-2: if the three-dimensional geometric object surface where the intersection point A002 is located is a specular reflection surface, turning to Step102-3, otherwise, performing the following operation:
① judge whether any intersection point, other than the two end points of the line segment SEG whose end points are intersection point A002 and the position of the right-eye camera, exists between SEG and the three-dimensional geometric objects of the virtual three-dimensional scene; if so, go to Step 102-3; otherwise, from the three-dimensional coordinate of intersection point A002 in the world coordinate system x-y-z, calculate its three-dimensional coordinate Pcr in the camera coordinate system u2-v2-w2; the coordinate component of Pcr along the u2 axis is PcrU2, the coordinate component along the v2 axis is PcrV2, and the coordinate component along the w2 axis is PcrW2;
② if |PcrU2/PcrW2| is greater than tan(θu/2) or |PcrV2/PcrW2| is greater than tan(θv/2), where |x| denotes the absolute value of x, go to Step 102-3; otherwise compute nCol = ⌊N·(PcrU2/PcrW2 + tan(θu/2)) / (2·tan(θu/2))⌋ + 1 and nRow = ⌊M·(PcrV2/PcrW2 + tan(θv/2)) / (2·tan(θv/2))⌋ + 1, where ⌊x⌋ denotes rounding x down; if nCol > N, let nCol = N; if nRow > M, let nRow = M;
③ let nt equal the value of the element in row nRow and column nCol of the two-dimensional array TagR, let Ibt equal the value of the element in row nRow and column nCol of the two-dimensional array ILR, and let IA equal the illumination value A003; assign the element in row nRow and column nCol of the two-dimensional array ILR the value Ibt×nt/(nt+1) + IA/(nt+1), and assign the element in row nRow and column nCol of the two-dimensional array TagR the value nt+1;
Step 102-3: the operation for ray a001 ends;
step 103: for each pixel a004 on the virtual pixel plane of the right-eye camera, the following operations are performed:
step 103-1: calculate the row number irow and column number icol of pixel a004 on the virtual pixel plane of the right eye camera; if the values of the irow row and the icol column elements of the two-dimensional array TagR are not 0, turning to Step103-2, otherwise, emitting a ray A005 passing through the center position of the pixel A004 from the position of the right-eye camera, calculating an intersection point A006 nearest to the position of the right-eye camera between the ray A005 and the three-dimensional geometric object of the virtual three-dimensional scene, and calculating an illumination value A007 of the intersection point A006 by using a ray tracing technology; assigning the irow row and the icol column elements of the two-dimensional array ILR as illumination values A007;
step 103-2: the operation for pixel a004 ends;
step 104: convert the illumination values stored in the two-dimensional array ILL and the two-dimensional array ILR into the pixel color values of the left-eye image and the right-eye image, respectively, and output the pixel color values to the stereoscopic display device for display.
The invention provides a method for drawing stereo pictures of virtual three-dimensional scenes based on left-right eye spatial multiplexing. When the left-eye image is drawn, if a visible scene point of the left-eye camera does not lie on a specular reflection surface, the visible scene point is projected onto the virtual pixel plane of the right-eye camera, and its illumination value is used as an approximate illumination value for the corresponding right-eye pixel. Furthermore, the method efficiently handles the case in which multiple visible scene points of the left-eye camera correspond to the same pixel of the right-eye camera, without first buffering the illumination values of all the scene points involved and then performing an extra merging pass: the illumination values are folded into a running average as they arrive. The method also correctly handles the occlusion problem that arises in the scene-point correspondence because the left-eye and right-eye cameras are at different positions. When the right-eye image is drawn, visible scene points that have already obtained approximate illumination values through the correspondence operation are not processed again. The method can significantly improve the drawing speed of stereo pictures of virtual three-dimensional scenes.
Drawings
Fig. 1 is a schematic diagram of a virtual three-dimensional scene stereo picture shot by using left and right eye cameras.
FIG. 2 is a schematic diagram of pixel matrix and row-column numbering on a virtual pixel plane of a left-eye camera.
Fig. 3 is a schematic diagram of pixel matrix and row-column numbering on the virtual pixel plane of the right-eye camera.
Detailed Description
In order that the features and advantages of the method may be more clearly understood, the method is further described below in connection with a specific embodiment. In this embodiment, consider the following virtual room scene: a table and a chair are placed in the room, and a mirror hangs on one wall. Apart from the mirror, which is a specular reflection object, all other three-dimensional geometric objects are diffuse reflectors. The computer system uses an Intel(R) Xeon(R) CPU E3-1225 v3 @ 3.20GHz, Kingston 8GB DDR3 1333 memory, and a Buffalo HD-CE1.5TU2 hard disk; the operating system is Windows 7, and the software development tool is VC++ 2010.
The technical scheme of the method is realized as follows. The virtual three-dimensional scene stereo picture drawing method based on left-right eye spatial multiplexing is characterized in that: firstly, eight parameters are set according to the stereo picture drawing requirement: the positions of the left-eye camera and the right-eye camera, the horizontal field angle, the vertical field angle, the straight-ahead direction, the upward direction, the horizontal direction, the number of pixel rows M on the virtual pixel plane, and the number of pixel columns N on the virtual pixel plane. As shown in FIG. 1, the straight-ahead direction of the left-eye camera is direction vector w1, the upward direction of the left-eye camera is direction vector v1, and the horizontal direction of the left-eye camera is direction vector u1; the straight-ahead direction of the right-eye camera is direction vector w2, the upward direction of the right-eye camera is direction vector v2, and the horizontal direction of the right-eye camera is direction vector u2. Direction vectors u1, v1, w1 determine the camera coordinate system u1-v1-w1 of the left-eye camera; direction vectors u2, v2, w2 determine the camera coordinate system u2-v2-w2 of the right-eye camera. The vertex coordinates of the three-dimensional geometric objects are described in the world coordinate system x-y-z. The number of pixel rows M on the virtual pixel plane is the same for the left-eye and right-eye cameras, and so is the number of pixel columns N. The horizontal field angles of the left-eye and right-eye cameras take the same value θu, and the vertical field angles take the same value θv.
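The conversion from world coordinates to a camera coordinate system, needed later to obtain Pcr in Step 102-2, amounts to projecting the offset from the camera position onto the basis vectors u2, v2, w2. Below is a minimal sketch, assuming the basis is orthonormal; the function name world_to_camera and the example vectors are illustrative, not taken from the patent:

```python
import numpy as np

def world_to_camera(p_world, cam_pos, u, v, w):
    """Express a world-space point in the camera coordinate system u-v-w
    anchored at cam_pos.  Assumes u, v, w are orthonormal direction
    vectors, matching the camera setup described above."""
    d = p_world - cam_pos
    return np.array([np.dot(d, u), np.dot(d, v), np.dot(d, w)])

# Right-eye camera at (0, 0, 1) looking along +z with the usual axes.
u2 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
w2 = np.array([0.0, 0.0, 1.0])
pcr = world_to_camera(np.array([2.0, 1.0, 7.0]),
                      np.array([0.0, 0.0, 1.0]), u2, v2, w2)
# pcr[0] is PcrU2, pcr[1] is PcrV2, pcr[2] is PcrW2
```

With an orthonormal basis this dot-product projection is exactly the inverse of the camera's rigid transform, so no matrix inversion is needed.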
The numbering of the row and column numbers of the pixels on the virtual pixel plane of the left-eye camera is shown in fig. 2, and the numbering of the row and column numbers of the pixels on the virtual pixel plane of the right-eye camera is shown in fig. 3. Then, the following steps are performed:
step 101: creating a two-dimensional array ILL comprising M rows and N columns of elements in a memory of a computer system; assigning each element of the two-dimensional array ILL to 0; elements of the two-dimensional array ILL correspond to pixels on a virtual pixel plane of the left-eye camera one to one; creating a two-dimensional array TagR containing M rows and N columns of elements in a memory of a computer system; assigning each element of the two-dimensional array TagR to 0; elements of the two-dimensional array TagR correspond to pixels on a virtual pixel plane of the right-eye camera one by one; creating a two-dimensional array ILR comprising M rows and N columns of elements in a memory of a computer system; assigning each element of the two-dimensional array ILR to 0; elements of the two-dimensional array ILR correspond to pixels on a virtual pixel plane of the right-eye camera one by one;
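The array initialisation of step 101 can be sketched as follows. This is a minimal illustration using NumPy; the patent does not prescribe any particular language or library, and the variable names simply mirror the arrays in the text:

```python
import numpy as np

M, N = 1024, 768  # pixel rows and columns used in this embodiment

# Step 101: three M-row, N-column arrays, all zero-initialised.
ILL = np.zeros((M, N))              # illumination values for the left-eye image
TagR = np.zeros((M, N), dtype=int)  # count of left-eye points mapped to each right-eye pixel
ILR = np.zeros((M, N))              # illumination values for the right-eye image
```

Element [i-1, j-1] of each array corresponds to the pixel in row i and column j of the matching virtual pixel plane, since the patent numbers rows and columns from 1.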
step 102: emitting a light ray A001 passing through the center position of each pixel on the virtual pixel plane of the left-eye camera from the position of the left-eye camera, wherein the light ray A001 corresponds to the pixels on the virtual pixel plane of the left-eye camera one by one; for each ray a001, the following operations are performed:
step 102-1: calculate the intersection point A002, closest to the left-eye camera position, between ray A001 and the three-dimensional geometric objects of the virtual three-dimensional scene, and compute the illumination value A003 of intersection point A002 using ray tracing; calculate the row number iRow and the column number iCol of the pixel corresponding to ray A001 on the virtual pixel plane of the left-eye camera, and assign the element in row iRow and column iCol of the two-dimensional array ILL the illumination value A003;
step 102-2: if the three-dimensional geometric object surface where the intersection point A002 is located is a specular reflection surface, turning to Step102-3, otherwise, performing the following operation:
① judge whether any intersection point, other than the two end points of the line segment SEG whose end points are intersection point A002 and the position of the right-eye camera, exists between SEG and the three-dimensional geometric objects of the virtual three-dimensional scene; if so, go to Step 102-3; otherwise, from the three-dimensional coordinate of intersection point A002 in the world coordinate system x-y-z, calculate its three-dimensional coordinate Pcr in the camera coordinate system u2-v2-w2; the coordinate component of Pcr along the u2 axis is PcrU2, the coordinate component along the v2 axis is PcrV2, and the coordinate component along the w2 axis is PcrW2;
② if |PcrU2/PcrW2| is greater than tan(θu/2) or |PcrV2/PcrW2| is greater than tan(θv/2), where |x| denotes the absolute value of x, go to Step 102-3; otherwise compute nCol = ⌊N·(PcrU2/PcrW2 + tan(θu/2)) / (2·tan(θu/2))⌋ + 1 and nRow = ⌊M·(PcrV2/PcrW2 + tan(θv/2)) / (2·tan(θv/2))⌋ + 1, where ⌊x⌋ denotes rounding x down; if nCol > N, let nCol = N; if nRow > M, let nRow = M;
③ let nt equal the value of the element in row nRow and column nCol of the two-dimensional array TagR, let Ibt equal the value of the element in row nRow and column nCol of the two-dimensional array ILR, and let IA equal the illumination value A003; assign the element in row nRow and column nCol of the two-dimensional array ILR the value Ibt×nt/(nt+1) + IA/(nt+1), and assign the element in row nRow and column nCol of the two-dimensional array TagR the value nt+1;
Step 102-3: the operation for ray a001 ends;
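The projection and averaging of Step 102-2 can be sketched as follows. This is an illustrative Python sketch only: the pixel-index mapping is a standard perspective formula assumed here for illustration (the patent defines the pixel numbering via figures not reproduced in the text), and the helper name splat_to_right_eye is hypothetical:

```python
import math

def splat_to_right_eye(pcr_u2, pcr_v2, pcr_w2, ill_a003,
                       theta_u, theta_v, M, N, tag_r, il_r):
    """Map a left-eye visible scene point (already expressed in right-eye
    camera coordinates Pcr) to a right-eye pixel and fold its illumination
    value A003 into a running average, as in steps 2 and 3 of Step 102-2."""
    ru, rv = pcr_u2 / pcr_w2, pcr_v2 / pcr_w2
    tu, tv = math.tan(theta_u / 2), math.tan(theta_v / 2)
    if abs(ru) > tu or abs(rv) > tv:
        return  # point falls outside the right-eye field of view
    n_col = min(math.floor(N * (ru + tu) / (2 * tu)) + 1, N)
    n_row = min(math.floor(M * (rv + tv) / (2 * tv)) + 1, M)
    n_t = tag_r[n_row - 1][n_col - 1]
    i_bt = il_r[n_row - 1][n_col - 1]
    # incremental mean: several left-eye points may land on one pixel
    il_r[n_row - 1][n_col - 1] = i_bt * n_t / (n_t + 1) + ill_a003 / (n_t + 1)
    tag_r[n_row - 1][n_col - 1] = n_t + 1

# Two left-eye points that land on the same right-eye pixel are averaged.
M = N = 4
tag_r = [[0] * N for _ in range(M)]
il_r = [[0.0] * N for _ in range(M)]
splat_to_right_eye(0.0, 0.0, 1.0, 2.0, math.pi / 2, math.pi / 2, M, N, tag_r, il_r)
splat_to_right_eye(0.0, 0.0, 1.0, 4.0, math.pi / 2, math.pi / 2, M, N, tag_r, il_r)
# tag_r[2][2] is now 2 and il_r[2][2] holds the running average 3.0
```

The incremental-mean update is what lets the method merge multiple contributions to one right-eye pixel without buffering them and merging in a separate pass.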
step 103: for each pixel a004 on the virtual pixel plane of the right-eye camera, the following operations are performed:
step 103-1: calculate the row number irow and column number icol of pixel a004 on the virtual pixel plane of the right eye camera; if the values of the irow row and the icol column elements of the two-dimensional array TagR are not 0, turning to Step103-2, otherwise, emitting a ray A005 passing through the center position of the pixel A004 from the position of the right-eye camera, calculating an intersection point A006 nearest to the position of the right-eye camera between the ray A005 and the three-dimensional geometric object of the virtual three-dimensional scene, and calculating an illumination value A007 of the intersection point A006 by using a ray tracing technology; assigning the irow row and the icol column elements of the two-dimensional array ILR as illumination values A007;
step 103-2: the operation for pixel a004 ends;
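The right-eye pass of step 103, which skips every pixel already covered by left-eye multiplexing, can be sketched as follows. The trace callback standing in for the ray-tracing computation of illumination value A007 is a hypothetical placeholder:

```python
def render_right_eye(M, N, tag_r, il_r, trace):
    """Step 103: only pixels that received no multiplexed illumination
    value during the left-eye pass (TagR element still 0) are traced
    from the right-eye camera; trace(irow, icol) stands in for the
    ray-tracing computation of the illumination value."""
    traced = 0
    for irow in range(1, M + 1):
        for icol in range(1, N + 1):
            if tag_r[irow - 1][icol - 1] != 0:
                continue  # approximate value already multiplexed from the left eye
            il_r[irow - 1][icol - 1] = trace(irow, icol)
            traced += 1
    return traced  # number of rays actually shot for the right eye

# Half the pixels already received multiplexed values from the left eye.
tag_r = [[1, 0], [0, 1]]
il_r = [[2.0, 0.0], [0.0, 2.0]]
shot = render_right_eye(2, 2, tag_r, il_r, lambda r, c: 5.0)
# shot == 2: only the two untagged pixels needed right-eye rays
```

The savings of the method show up directly in the returned count: every nonzero TagR element is one right-eye ray that never has to be traced.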
step 104: convert the illumination values stored in the two-dimensional array ILL and the two-dimensional array ILR into the pixel color values of the left-eye image and the right-eye image, respectively, and output the pixel color values to the stereoscopic display device for display.
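Step 104 leaves the illumination-to-color conversion unspecified; a minimal clamp-and-scale sketch is shown below. The exposure parameter and the linear mapping are assumptions for illustration, not part of the patent:

```python
def to_color(ill, exposure=1.0):
    """Map a scalar illumination value to an 8-bit channel value by
    scaling and clamping to [0, 255].  A simple stand-in for whatever
    conversion the display pipeline actually uses."""
    return max(0, min(255, int(round(255.0 * exposure * ill))))

channel = to_color(1.0)  # full illumination maps to channel value 255
```

Illumination values above 1.0 or below 0.0 are clamped rather than wrapped, so out-of-range ray-tracing results cannot corrupt neighbouring channels.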
In this embodiment, M = 1024, N = 768, θu = π/3 rad, and θv = π/3 rad.
Claims (1)
1. The virtual three-dimensional scene stereo picture drawing method based on left-right eye spatial multiplexing is characterized in that: firstly, eight parameters are set according to the stereo picture drawing requirement: the positions of the left-eye camera and the right-eye camera, the horizontal field angle, the vertical field angle, the straight-ahead direction, the upward direction, the horizontal direction, the number of pixel rows M on the virtual pixel plane, and the number of pixel columns N on the virtual pixel plane; the straight-ahead direction of the left-eye camera is direction vector w1, the upward direction of the left-eye camera is direction vector v1, and the horizontal direction of the left-eye camera is direction vector u1; the straight-ahead direction of the right-eye camera is direction vector w2, the upward direction of the right-eye camera is direction vector v2, and the horizontal direction of the right-eye camera is direction vector u2; direction vectors u1, v1, w1 determine the camera coordinate system u1-v1-w1 of the left-eye camera; direction vectors u2, v2, w2 determine the camera coordinate system u2-v2-w2 of the right-eye camera; the vertex coordinates of the three-dimensional geometric objects are described in the world coordinate system x-y-z; the number of pixel rows M on the virtual pixel plane is the same for the left-eye and right-eye cameras, and so is the number of pixel columns N; the horizontal field angles of the left-eye and right-eye cameras take the same value θu, and the vertical field angles take the same value θv; then, the following steps are performed:
step 101: creating a two-dimensional array ILL comprising M rows and N columns of elements in a memory of a computer system; assigning each element of the two-dimensional array ILL to 0; elements of the two-dimensional array ILL correspond to pixels on a virtual pixel plane of the left-eye camera one to one; creating a two-dimensional array TagR comprising M rows and N columns of elements in a memory of a computer system; assigning each element of the two-dimensional array TagR to 0; elements of the two-dimensional array TagR correspond to pixels on a virtual pixel plane of the right-eye camera one by one; creating a two-dimensional array ILR comprising M rows and N columns of elements in a memory of a computer system; assigning each element of the two-dimensional array ILR to 0; elements of the two-dimensional array ILR correspond to pixels on a virtual pixel plane of the right-eye camera one by one;
step 102: emitting a light ray A001 passing through the center position of each pixel on the virtual pixel plane of the left-eye camera from the position of the left-eye camera, wherein the light ray A001 corresponds to the pixels on the virtual pixel plane of the left-eye camera one by one; for each ray a001, the following operations are performed:
step 102-1: calculate the intersection point A002, closest to the left-eye camera position, between ray A001 and the three-dimensional geometric objects of the virtual three-dimensional scene, and compute the illumination value A003 of intersection point A002 using ray tracing; calculate the row number iRow and the column number iCol of the pixel corresponding to ray A001 on the virtual pixel plane of the left-eye camera, and assign the element in row iRow and column iCol of the two-dimensional array ILL the illumination value A003;
step 102-2: if the three-dimensional geometric object surface where the intersection point A002 is located is a specular reflection surface, turning to Step102-3, otherwise, performing the following operation:
① judge whether any intersection point, other than the two end points of the line segment SEG whose end points are intersection point A002 and the position of the right-eye camera, exists between SEG and the three-dimensional geometric objects of the virtual three-dimensional scene; if so, go to Step 102-3; otherwise, from the three-dimensional coordinate of intersection point A002 in the world coordinate system x-y-z, calculate its three-dimensional coordinate Pcr in the camera coordinate system u2-v2-w2; the coordinate component of Pcr along the u2 axis is PcrU2, the coordinate component along the v2 axis is PcrV2, and the coordinate component along the w2 axis is PcrW2;
② if |PcrU2/PcrW2| is greater than tan(θu/2) or |PcrV2/PcrW2| is greater than tan(θv/2), where |x| denotes the absolute value of x, go to Step 102-3; otherwise compute nCol = ⌊N·(PcrU2/PcrW2 + tan(θu/2)) / (2·tan(θu/2))⌋ + 1 and nRow = ⌊M·(PcrV2/PcrW2 + tan(θv/2)) / (2·tan(θv/2))⌋ + 1, where ⌊x⌋ denotes rounding x down; if nCol > N, let nCol = N; if nRow > M, let nRow = M;
③ let nt equal the value of the element in row nRow and column nCol of the two-dimensional array TagR, let Ibt equal the value of the element in row nRow and column nCol of the two-dimensional array ILR, and let IA equal the illumination value A003; assign the element in row nRow and column nCol of the two-dimensional array ILR the value Ibt×nt/(nt+1) + IA/(nt+1), and assign the element in row nRow and column nCol of the two-dimensional array TagR the value nt+1;
Step 102-3: the operation for ray a001 ends;
step 103: for each pixel a004 on the virtual pixel plane of the right-eye camera, the following operations are performed:
step 103-1: calculate the row number irow and column number icol of pixel a004 on the virtual pixel plane of the right eye camera; if the values of the irow row and the icol column elements of the two-dimensional array TagR are not 0, turning to Step103-2, otherwise, emitting a ray A005 passing through the center position of the pixel A004 from the position of the right-eye camera, calculating an intersection point A006 nearest to the position of the right-eye camera between the ray A005 and the three-dimensional geometric object of the virtual three-dimensional scene, and calculating an illumination value A007 of the intersection point A006 by using a ray tracing technology; assigning the irow row and the icol column elements of the two-dimensional array ILR as illumination values A007;
step 103-2: the operation for pixel a004 ends;
step 104: convert the illumination values stored in the two-dimensional array ILL and the two-dimensional array ILR into the pixel color values of the left-eye image and the right-eye image, respectively, and output the pixel color values to the stereoscopic display device for display.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811305231.8A CN109493409B (en) | 2018-11-05 | 2018-11-05 | Virtual three-dimensional scene stereo picture drawing method based on left-right eye space multiplexing |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811305231.8A CN109493409B (en) | 2018-11-05 | 2018-11-05 | Virtual three-dimensional scene stereo picture drawing method based on left-right eye space multiplexing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109493409A CN109493409A (en) | 2019-03-19 |
CN109493409B true CN109493409B (en) | 2022-08-23 |
Family
ID=65693851
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811305231.8A Active CN109493409B (en) | 2018-11-05 | 2018-11-05 | Virtual three-dimensional scene stereo picture drawing method based on left-right eye space multiplexing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109493409B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110675482B (en) * | 2019-08-28 | 2023-05-19 | 长春理工大学 | Spherical fibonacci pixel lattice panoramic picture rendering and displaying method of virtual three-dimensional scene |
CN110717968B (en) * | 2019-10-11 | 2023-04-07 | 长春理工大学 | Computing resource request driven self-adaptive cloud rendering method for three-dimensional scene |
CN110728742B (en) * | 2019-10-11 | 2022-08-23 | 长春理工大学 | Three-dimensional scene animation rendering indirect illumination interframe multiplexing method based on visual importance |
CN110728743B (en) * | 2019-10-11 | 2022-09-06 | 长春理工大学 | VR three-dimensional scene three-dimensional picture generation method combining cloud global illumination rendering |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6011581A (en) * | 1992-11-16 | 2000-01-04 | Reveo, Inc. | Intelligent method and system for producing and displaying stereoscopically-multiplexed images of three-dimensional objects for use in realistic stereoscopic viewing thereof in interactive virtual reality display environments |
CN102243768A (en) * | 2011-06-17 | 2011-11-16 | 长春理工大学 | Method for drawing stereo picture of three-dimensional virtual scene |
CN102306401A (en) * | 2011-08-08 | 2012-01-04 | 长春理工大学 | Left/right-eye three-dimensional picture drawing method for three-dimensional (3D) virtual scene containing fuzzy reflection effect |
CN106576161A (en) * | 2014-06-25 | 2017-04-19 | 夏普株式会社 | Variable barrier pitch correction |
CN107274474A (en) * | 2017-07-03 | 2017-10-20 | 长春理工大学 | Indirect light during three-dimensional scenic stereoscopic picture plane is drawn shines multiplexing method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102004044111B4 (en) * | 2004-09-08 | 2015-05-07 | Seereal Technologies Gmbh | Method and device for coding and reconstructing computer-generated video holograms |
2018-11-05: application CN201811305231.8A filed in China; granted as patent CN109493409B (status: Active)
Non-Patent Citations (3)
Title |
---|
A GPU Sub-pixel Algorithm for Autostereoscopic Virtual Reality; Robert L. Kooima et al.; ResearchGate; 2007-01-31; pp. 1-7 *
Research on Reconstruction and Evaluation Methods of Three-Dimensional Display Information; Jiang Hao; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2013-08-15 (No. 8); pp. I135-189 *
Binocular Vision Calibration Algorithm for Large Fields of View Based on Pose Constraints; Zhang Chao; Acta Optica Sinica; 2016-01-10; Vol. 36, No. 1; pp. 205-214 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109493409B (en) | Virtual three-dimensional scene stereo picture drawing method based on left-right eye space multiplexing | |
US9986227B2 (en) | Tracked automultiscopic 3D tabletop display | |
US9848184B2 (en) | Stereoscopic display system using light field type data | |
US8471898B2 (en) | Medial axis decomposition of 2D objects to synthesize binocular depth | |
US20140306954A1 (en) | Image display apparatus and method for displaying image | |
AU2018249563B2 (en) | System, method and software for producing virtual three dimensional images that appear to project forward of or above an electronic display | |
US9681122B2 (en) | Modifying displayed images in the coupled zone of a stereoscopic display based on user comfort | |
US20160021363A1 (en) | Enhancing the Coupled Zone of a Stereoscopic Display | |
CN103392342A (en) | Method and device for adjusting viewing area, and device for displaying three-dimensional video signal | |
WO2012140397A2 (en) | Three-dimensional display system | |
CN114371779A (en) | Visual enhancement method for sight depth guidance | |
KR100764382B1 (en) | Apparatus for image mapping in computer-generated integral imaging system and method thereof | |
US10957106B2 (en) | Image display system, image display device, control method therefor, and program | |
CN102063735B (en) | Method and device for manufacturing three-dimensional image source by changing viewpoint angles | |
CN111327886B (en) | 3D light field rendering method and device | |
Zhang et al. | An interactive multiview 3D display system | |
JP2014241015A (en) | Image processing device, method and program, and stereoscopic image display device | |
Lee et al. | Eye tracking based glasses-free 3D display by dynamic light field rendering | |
KR101831978B1 (en) | Generation method of elemental image contents for display system with rotated lenticular sheet | |
CN111050145A (en) | Multi-screen fusion imaging method, intelligent device and system | |
Thatte et al. | Real-World Virtual Reality With Head-Motion Parallax | |
Matsuki et al. | Considerations on binocular mismatching in observation-based diminished reality | |
NZ757902B2 (en) | System, method and software for producing virtual three dimensional images that appear to project forward of or above an electronic display | |
Chotrov | Methods for Generating and Displaying Stereo Images on VR Systems using Quad-Buffered Graphics Adapters | |
ASL et al. | A new implementation of head-coupled perspective for virtual architecture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||