CN113516748A - Real-time rendering method and device for integrated imaging light field display


Info

Publication number
CN113516748A
Authority
CN
China
Prior art keywords
target
reconstruction
display panel
reconstruction plane
plane
Prior art date
Legal status
Granted
Application number
CN202110875812.0A
Other languages
Chinese (zh)
Other versions
CN113516748B (en)
Inventor
秦宗
万权震
邱钰清
戴睿佳
杨文超
杨柏儒
Current Assignee
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date
Filing date
Publication date
Application filed by Sun Yat Sen University
Priority to CN202110875812.0A
Publication of CN113516748A
Application granted
Publication of CN113516748B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51 Indexing; Data structures therefor; Storage structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The application discloses a real-time rendering method and device for integrated imaging light field display, comprising the following steps: determining target reconstruction planes and their reconstruction ranges according to a preset target reconstruction space and a target image to be rendered; for each target reconstruction plane, acquiring a texture map of the target image on that plane; acquiring, from the voxel information of the texture map of each target reconstruction plane and a pre-calculated first index matrix, a second index matrix containing the correspondence between each voxel in the texture map of each target reconstruction plane and the two-dimensional pixels on the display panel; and finally acquiring a unit image array for the display panel from the voxel information of the texture maps and the second index matrix. The whole calculation process is simple and fast, does not depend on strong hardware computing power, reduces the amount of calculation, and shortens the time to render a single picture, so that the image rendering speed reaches video rate.

Description

Real-time rendering method and device for integrated imaging light field display
Technical Field
The application relates to the technical fields of three-dimensional display and image rendering, and in particular to a real-time rendering method and device for integrated imaging light field display.
Background
Virtual Reality (VR) and Augmented Reality (AR) three-dimensional display technologies are widely used in fields such as the military, medical treatment, entertainment, and education. The traditional realization method exploits binocular parallax and produces a three-dimensional effect through binocular convergence. However, because the light emitted from the screen lacks depth information, the focus of the eyes does not match the perceived depth, causing Vergence-Accommodation Conflict (VAC); this brings visual fatigue to the viewer, and vertigo and discomfort after long-term viewing.
In order to realize true 3D display free of VAC, various realization methods have been invented; by imaging principle, they can be roughly classified into integrated imaging light field display, holographic display, volumetric 3D display, adaptive zoom, and the like. Integrated imaging light field display has become a focus of next-generation three-dimensional display technology owing to advantages such as simple, easily realized hardware, a light and thin form factor, and continuously adjustable depth.
Integrated imaging is an autostereoscopic (autostereoscopic) and multi-view (multiscopic) three-dimensional imaging technique that captures and reproduces the light field by using an array of apertures or microlenses, also known as fly-eye lenses, typically without the aid of a large integrated objective or viewing lens. In capture mode, a film or detector is coupled to an array of microlenses, each of which allows an image of the subject to be acquired from the perspective of the lens position. In the rendering mode, each microlens allows each viewing eye to see only the relevant microimage region containing the portion of the object visible through the space from that region.
Currently, referring to fig. 1, an integrated imaging light field display system generally includes a high-resolution display panel and a microlens Array, a two-dimensional Image displayed on the display panel is called an Element Image Array (EIA), and different parts of the EIA are projected to different orientations in a three-dimensional space through the microlens Array to form a three-dimensional Image.
To generate the EIA, mapping is generally performed by tracing rays from a viewpoint and computing the image information carried by the ray corresponding to each pixel on the EIA, that is, the corresponding luminance and chrominance. Such algorithms are computationally heavy and slow to respond, and cannot meet the requirement of real-time human-computer interaction.
The prior art proposes various improvements from the hardware side, the algorithm side, or both, for example: simulating the propagation of a limited number of rays based on Monte Carlo ray sampling and a foveated method to achieve fast rendering; using multiple refractions and reflections in the lens to reconstruct long-distance, continuous-depth three-dimensional images; or extracting a multi-view map with different sampling and ray tracing algorithms, such as VVR (Viewpoint Vector Rendering), and reconstructing the three-dimensional light field from the multi-view map.
However, existing image rendering technologies for virtual reality and augmented reality depend too strongly on hardware computing power, must be combined with high-cost computing techniques, and are difficult to implement in wearable equipment; moreover, their unit rendering time (the time to render a single picture) is too long to achieve real-time rendering.
Disclosure of Invention
In view of this, the present application provides a real-time rendering method and apparatus for integrated imaging light field display, so as to reduce the amount of calculation and achieve real-time rendering without relying on strong hardware computing power.
In order to achieve the above object, a first aspect of the present application provides a real-time rendering method for integrated imaging light field display, including:
determining at least two target reconstruction planes and a reconstruction range of the target reconstruction planes according to a preset target reconstruction space and a target image to be rendered;
aiming at each target reconstruction plane, acquiring a texture map of a target image to be rendered on the target reconstruction plane according to the parameters of the display panel, the reconstruction range of the target reconstruction plane and the geometric relationship among the display panel, the micro lens array and the target reconstruction plane;
acquiring a second index matrix according to the volume pixel information of the texture map of each target reconstruction plane and a preset first index matrix; the first index matrix comprises the corresponding relation between each voxel of each possible reconstruction plane in the reconstruction range and the two-dimensional pixel on the display panel; the second index matrix comprises the corresponding relation between each voxel in the texture map of each target reconstruction plane and the two-dimensional pixel on the display panel;
and acquiring a unit image array of the display panel according to the volume pixel information of the texture map of each target reconstruction plane and the second index matrix.
Preferably, the calculation method of the first index matrix includes:
determining a plurality of possible reconstruction planes and a reconstruction range of the reconstruction planes according to a preset target reconstruction space;
and for each possible reconstruction plane, calculating that each voxel of the reconstruction plane in the reconstruction range corresponds to a two-dimensional pixel on the display panel according to the parameters of the micro lens array and the geometric relationship among the display panel, the micro lens array and the reconstruction plane to obtain a first index matrix.
Preferably, the calculating, according to the parameters of the microlens array and in combination with the geometric relationship among the display panel, the microlens array, and the reconstruction plane, each voxel of the reconstruction plane within the reconstruction range thereof corresponding to a two-dimensional pixel on the display panel includes:
and aiming at each lens in the micro-lens array, performing ray tracing by a Gaussian formula according to the focal length of the lens and the geometric relationship among the lens, the display panel and the reconstruction plane to obtain a two-dimensional pixel of each voxel of the reconstruction plane in the reconstruction range corresponding to the display panel.
Preferably, the process of obtaining a texture map of the target image to be rendered on the target reconstruction plane according to the parameters of the display panel, the reconstruction range of the target reconstruction plane, and the geometric relationship among the display panel, the microlens array, and the target reconstruction plane includes:
acquiring the sampling frequency of a target reconstruction plane according to the parameters of the display panel, the reconstruction range of the target reconstruction plane and the geometric relationship among the display panel, the micro lens array and the target reconstruction plane;
and according to the sampling frequency and the reconstruction range of the target reconstruction plane, performing down-sampling on the target image to be rendered to obtain a texture map of the target reconstruction plane.
Preferably, the process of obtaining the sampling frequency of the target reconstruction plane according to the parameters of the display panel, the reconstruction range of the target reconstruction plane, and the geometric relationship among the display panel, the microlens array, and the target reconstruction plane includes:
determining the size of a voxel of a target reconstruction plane according to the parameters of the display panel and the geometric relationship among the display panel, the micro-lens array and the target reconstruction plane;
and determining the sampling frequency of the target reconstruction plane according to the reconstruction range of the target reconstruction plane and by combining the size of the volume pixel of the target reconstruction plane.
Preferably, the process of determining the size of the voxel of the target reconstruction plane according to the parameters of the display panel and the geometric relationship among the display panel, the microlens array and the target reconstruction plane includes:
the size of the voxels of the target reconstruction plane is determined by the following formula:
$L_{con\_imgl\_pix\_i} = L_{per\_pixel} \cdot \frac{L_i}{r}$
wherein $L_{con\_imgl\_pix\_i}$ is the size of a voxel of the target reconstruction plane, $L_{per\_pixel}$ is the size of a single pixel on the display panel, $r$ is the distance from the microlens array to the display panel, and $L_i$ is the distance from the target reconstruction plane to the display panel.
Preferably, the process of determining the sampling frequency of the target reconstruction plane according to the reconstruction range of the target reconstruction plane and the size of the volume pixel of the target reconstruction plane includes:
determining a sampling frequency of the target reconstruction plane by:
$f_{pixel\_i} = \frac{L_{con\_imgl}}{L_{con\_imgl\_pix\_i}}$
wherein $f_{pixel\_i}$ is the sampling frequency of the target reconstruction plane, $L_{con\_imgl}$ is the size of the reconstruction range of the target reconstruction plane, and $L_{con\_imgl\_pix\_i}$ is the size of a voxel of the target reconstruction plane.
Preferably, the process of obtaining the second index matrix according to the voxel information of the texture map of each target reconstruction plane and a preset first index matrix includes:
and traversing each volume pixel in the texture map aiming at the texture map of each target reconstruction plane, and extracting a two-dimensional pixel corresponding to the volume pixel from the first index matrix to generate the second index matrix.
Preferably, the process of obtaining a unit image array of a display panel according to the volume pixel information of the texture map of each target reconstruction plane and the second index matrix includes:
acquiring two-dimensional pixels on the display panel corresponding to the volume pixels of the texture map of each target reconstruction plane according to the second index matrix;
traversing the volume pixels of the texture map aiming at the texture map of each target reconstruction plane, acquiring the color information of the volume pixels, and setting the color information of the two-dimensional pixels on the display panel corresponding to the volume pixels to be consistent with the color information of the volume pixels, so as to obtain a unit image array of the display panel.
The second aspect of the present application provides a real-time rendering apparatus for integrated imaging light field display, comprising:
the reconstruction region acquisition unit is used for determining at least two target reconstruction planes and the reconstruction range of the target reconstruction planes according to a preset target reconstruction space and a target image to be rendered;
the texture map obtaining unit is used for obtaining a texture map of a target image to be rendered on each target reconstruction plane according to the parameters of the display panel, the reconstruction range of the target reconstruction plane and the geometric relationship among the display panel, the micro lens array and the target reconstruction plane;
the second index matrix obtaining unit is used for obtaining a second index matrix according to the volume pixel information of the texture map of each target reconstruction plane and a preset first index matrix; the first index matrix comprises the corresponding relation between each voxel of each possible reconstruction plane in the reconstruction range and the two-dimensional pixel on the display panel; the second index matrix comprises the corresponding relation between each voxel in the texture map of each target reconstruction plane and the two-dimensional pixel on the display panel;
and the unit image array obtaining unit is used for obtaining the unit image array of the display panel according to the volume pixel information of the texture map of each target reconstruction plane and the second index matrix.
As can be seen from the foregoing technical solutions, in the embodiment of the present application, a first index matrix is pre-calculated, where the first index matrix includes a correspondence relationship between each voxel of each possible reconstruction plane within a reconstruction range of the reconstruction plane and a two-dimensional pixel on a display panel.
And then determining at least two target reconstruction planes of the target image and a reconstruction range corresponding to the target reconstruction planes according to a preset target reconstruction space and the target image to be rendered.
And then, aiming at each target reconstruction plane, acquiring a texture map of the target image to be rendered on the target reconstruction plane according to the parameters of the display panel, the reconstruction range of the target reconstruction plane and the geometric relationship among the display panel, the micro lens array and the target reconstruction plane.
And acquiring a second index matrix containing the volume pixel information of the texture map according to the volume pixel information in the texture map of each target reconstruction plane and the first index matrix. The second index matrix comprises the corresponding relation between the volume pixels in the texture map of each target reconstruction plane and the pixels on the display panel. And finally, acquiring a unit image array of the display panel according to the volume pixel information of the texture map of each target reconstruction plane and the second index matrix.
In this application, the first index matrix, containing the position correspondence information, and the second index matrix, containing the texture map pixel correspondence information, are calculated separately. For any target image that does not exceed the target reconstruction space and whose target reconstruction planes are among the possible reconstruction planes, only the pre-calculated first index matrix needs to be called, and the second index matrix can be conveniently derived from it. That is, over the whole rendering process for all pictures, the first index matrix needs to be calculated only once, in advance; repeated calculation is avoided, the amount of computation is reduced, and calculation time is saved for each single rendered picture.
For the second index matrix, for target images with different contents only the texture maps on each target reconstruction plane need to be computed, after which the corresponding second index matrix is derived from the pre-calculated first index matrix; the computation involved has low complexity. Finally, the unit image array of the display panel is obtained from the voxel information of the texture maps of the target reconstruction planes and the second index matrix. The whole calculation process is simple and fast, does not depend on strong hardware computing power, reduces the amount of calculation, and shortens the time to render a single picture, so that the image rendering speed reaches video rate.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 illustrates a schematic diagram of an integrated imaging light field display system as disclosed in embodiments of the present application;
FIG. 2 is a schematic diagram of a real-time rendering method for integrated imaging light field display disclosed in an embodiment of the present application;
FIG. 3 is a schematic diagram of an integrated imaging light field display with 6 possible reconstruction planes as disclosed in an embodiment of the present application;
fig. 4 is a schematic diagram of a real-time rendering apparatus for integrated imaging light field display according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 2, a real-time rendering method for integrated imaging light field display according to an embodiment of the present application may include the following steps:
and step S100, determining a target reconstruction plane and a reconstruction range thereof.
Specifically, at least two target reconstruction planes and a reconstruction range of the target reconstruction planes are determined according to a preset target reconstruction space and a target image to be rendered.
The target reconstruction space is a preset space, and generally can be determined according to the position relationship between the display panel and the microlens array and the range needing projection, so as to reconstruct an image to be rendered in the target reconstruction space.
The target image to be rendered is a three-dimensional image, which may be a depth image, and contains the information of each pixel point (hereinafter referred to as a voxel) on reconstruction planes at different depths. It will be appreciated that any three-dimensional image necessarily contains at least two reconstruction planes. The voxels of the target image to be rendered on a reconstruction plane constitute the texture map of that reconstruction plane.
In general, the reconstruction plane may be disposed parallel to the plane of the display panel; according to the target reconstruction space, the reconstruction range of the reconstruction plane can be further determined.
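As an illustration of how a depth image decomposes into per-plane texture maps, the following Python sketch assigns each pixel of an RGB-D image to the reconstruction plane whose depth is nearest; the nearest-plane rule and all names are assumptions for illustration, not taken from the patent text.
```python
import numpy as np

def slice_by_depth(rgb, depth, plane_depths):
    """Split an RGB-D target image into one texture map per reconstruction plane.

    Assumes (as an illustration) that each pixel belongs to the plane whose
    depth is nearest to its own depth value. rgb: H x W x 3, depth: H x W,
    plane_depths in the same units as depth.
    """
    plane_depths = np.asarray(plane_depths, dtype=float)
    nearest = np.abs(depth[..., None] - plane_depths).argmin(axis=-1)
    texture_maps = []
    for i in range(len(plane_depths)):
        tex = np.zeros_like(rgb)
        mask = nearest == i
        tex[mask] = rgb[mask]        # voxels belonging to plane i keep their color
        texture_maps.append(tex)
    return texture_maps
```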
Step S200: acquiring a texture map of the target image to be rendered on a target reconstruction plane.
Specifically, for each target reconstruction plane, a texture map of the target image to be rendered on the target reconstruction plane is obtained according to the parameters of the display panel, the reconstruction range of the target reconstruction plane, and the geometric relationship among the display panel, the microlens array and the target reconstruction plane.
For example, from the parameters of the display panel, the resolution of the image displayed on the display panel can be obtained; from the reconstruction range of the target reconstruction plane, the size of the image reconstructed on that plane can be obtained; and from the geometric relationship among the display panel, the microlens array and the target reconstruction plane, the ratio between the displayed image and the reconstructed image can be obtained. Combining these with the target image to be rendered yields the specific voxel information of the image reconstructed on the target reconstruction plane, that is, the texture map of the target image to be rendered on that plane.
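A minimal sketch of the down-sampling this implies, assuming a nearest-neighbour resampling to the plane's sampling frequency (the patent does not prescribe the resampling kernel):
```python
import numpy as np

def downsample_texture(tex, f_x, f_y):
    """Nearest-neighbour down-sampling of one plane's texture map to the
    plane's sampling frequency (f_x by f_y voxels). A minimal sketch; a real
    implementation could equally call an image library's resize."""
    h, w = tex.shape[:2]
    rows = np.linspace(0, h - 1, f_y).round().astype(int)
    cols = np.linspace(0, w - 1, f_x).round().astype(int)
    return tex[np.ix_(rows, cols)]
```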
Step S300: acquiring the correspondence between each voxel in the texture map of the target reconstruction plane and the two-dimensional pixels on the display panel.
Specifically, a second index matrix is obtained according to the voxel information of the texture map of each target reconstruction plane and a preset first index matrix.
The first index matrix comprises the corresponding relation between each volume pixel of each possible reconstruction plane in the reconstruction range and the two-dimensional pixel on the display panel.
The second index matrix comprises the corresponding relation between each volume pixel in the texture map of each target reconstruction plane and the two-dimensional pixel on the display panel.
Specifically, each voxel on the reconstruction plane corresponds to a plurality of two-dimensional pixels on the display panel under the action of the microlens array, and the specific correspondence between the voxel and the two-dimensional pixels is determined by the position of the voxel on the reconstruction plane, the lens parameters in the microlens array, and the relative positional relationship between the reconstruction plane and the microlens array, the display panel.
Since the first index matrix includes the corresponding relationship between each voxel of each possible reconstruction plane in the reconstruction range thereof and the two-dimensional pixel on the display panel, it means that the corresponding relationship between each voxel of each target reconstruction plane in the reconstruction range thereof and the two-dimensional pixel on the display panel can be known through the first index matrix. And then, by combining the position information of the volume pixels in the texture map of each target reconstruction plane, the corresponding relation between each volume pixel in the texture map of each target reconstruction plane and the two-dimensional pixel on the display panel can be obtained.
Step S400: acquiring a unit image array of the display panel.
Specifically, a unit image array of the display panel is obtained according to the volume pixel information of the texture map of each target reconstruction plane and the second index matrix.
For example, according to the color information of each voxel in the texture map of each target reconstruction plane, and then by combining the correspondence between the voxel and the two-dimensional pixel on the display panel, the color information of the corresponding two-dimensional pixel on the display panel can be obtained, thereby obtaining the unit image array of the display panel.
In this application, the first index matrix, containing the position correspondence information, and the second index matrix, containing the texture map pixel correspondence information, are calculated separately. For any target image that does not exceed the target reconstruction space and whose target reconstruction planes are among the possible reconstruction planes, only the pre-calculated first index matrix needs to be called, and the second index matrix can be conveniently derived from it. That is, over the whole rendering process for all pictures, the first index matrix needs to be calculated only once, in advance; repeated calculation is avoided, the amount of computation is reduced, and calculation time is saved for each single rendered picture.
For the second index matrix, for target images with different contents only the texture maps on each target reconstruction plane need to be computed, after which the corresponding second index matrix is derived from the pre-calculated first index matrix; the computation involved has low complexity. Finally, the unit image array of the display panel is obtained from the voxel information of the texture maps of the target reconstruction planes and the second index matrix. The whole calculation process is simple and fast, does not depend on strong hardware computing power, reduces the amount of calculation, and shortens the time to render a single picture, so that the image rendering speed reaches video rate.
In some embodiments of the present application, for the pre-calculated first index matrix, a specific calculation method may include:
a1, determining a plurality of possible reconstruction planes and the reconstruction range of the reconstruction planes according to a preset target reconstruction space;
and A2, calculating that each voxel of the reconstruction plane in the reconstruction range corresponds to a two-dimensional pixel on the display panel according to the parameters of the microlens array and the geometric relationship among the display panel, the microlens array and the reconstruction plane to obtain a first index matrix.
The first index matrix is calculated in advance from the several possible reconstruction planes and their reconstruction ranges, and then stored. When other three-dimensional images are rendered, the data of the first index matrix can be called directly, saving this calculation step.
In some embodiments of the present application, the process of calculating that each voxel of the reconstruction plane in the reconstruction range corresponds to a two-dimensional pixel on the display panel according to the parameters of the microlens array and the geometric relationship among the display panel, the microlens array and the reconstruction plane by using a2 as described above may include:
and aiming at each lens in the micro-lens array, performing fiber tracking through a Gaussian formula according to the focal length of the lens and the geometric relationship among the lens, the display panel and the reconstruction plane to obtain a two-dimensional pixel of each voxel of the reconstruction plane in the reconstruction range corresponding to the display panel.
Among them, the gaussian formula can be expressed as:
$\frac{n'}{v} - \frac{n}{u} = \frac{n' - n}{r}$    (1)
where n and n' are the refractive indices of the media on the object side and the image side of the lens, respectively, u is the distance from the two-dimensional pixel on the display panel to the optical center of the lens in the microlens array, v is the distance from the voxel in the reconstruction plane to the optical center of the lens, and r is the radius of curvature of the lens.
The calculation process mainly obtains a plurality of two-dimensional pixels corresponding to the volume pixels according to the imaging law of the lens, and is low in complexity and easy to implement.
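As a rough sketch of the per-lens correspondence calculation: the code below extends the straight chief ray from a voxel through each lens optical centre to the panel, a pinhole simplification of the Gaussian-formula tracing of equation (1); the function name, argument layout and pixel-grid convention are all illustrative assumptions.
```python
import numpy as np

def voxel_to_pixels(voxel_xy, L_i, lens_centers, r, pixel_pitch, panel_res):
    """Candidate panel pixels for one voxel, one per lens (chief-ray sketch).

    voxel_xy     -- (x, y) of the voxel on the reconstruction plane, mm
    L_i          -- distance from the reconstruction plane to the panel, mm
    lens_centers -- N x 2 array of lens optical centres in the panel x-y frame
    r            -- distance from the microlens array to the panel, mm
    panel_res    -- (pixels_x, pixels_y) of the display panel
    """
    v = np.asarray(voxel_xy, dtype=float)
    # extend the straight line voxel -> lens centre onward to the panel plane
    hits = lens_centers + (lens_centers - v) * r / (L_i - r)
    cols = np.round(hits[:, 0] / pixel_pitch).astype(int)
    rows = np.round(hits[:, 1] / pixel_pitch).astype(int)
    keep = (cols >= 0) & (cols < panel_res[0]) & (rows >= 0) & (rows < panel_res[1])
    return list(zip(rows[keep].tolist(), cols[keep].tolist()))
```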
In some embodiments of the present application, the step S200 of obtaining a texture map of the target image to be rendered on the target reconstruction plane according to the parameters of the display panel, the reconstruction range of the target reconstruction plane, and the geometric relationship among the display panel, the microlens array, and the target reconstruction plane may include:
b1, acquiring the sampling frequency of the target reconstruction plane according to the parameters of the display panel, the reconstruction range of the target reconstruction plane and the geometric relationship among the display panel, the micro-lens array and the target reconstruction plane;
and B2, according to the sampling frequency and the reconstruction range of the target reconstruction plane, performing down-sampling on the target image to be rendered to obtain a texture map of the target reconstruction plane.
The above calculation process is in effect down-sampling the target image to be rendered onto each target reconstruction plane, that is, reducing the three-dimensional target image to two-dimensional images on the several target reconstruction planes. Obtaining the texture map of the target image on each target reconstruction plane prepares the data used in step S300 to calculate the corresponding two-dimensional pixels on the display panel.
In some embodiments of the present application, the process of obtaining the sampling frequency of the target reconstruction plane according to the parameters of the display panel and the reconstruction range of the target reconstruction plane and the geometric relationship among the display panel, the microlens array, and the target reconstruction plane by B1 may include:
c1, determining the size of the volume pixel of the target reconstruction plane according to the parameters of the display panel and the geometric relationship among the display panel, the micro lens array and the target reconstruction plane;
and C2, determining the sampling frequency of the target reconstruction plane according to the reconstruction range of the target reconstruction plane and the size of the volume pixel of the target reconstruction plane.
Wherein the parameter of the display panel may be a resolution, and the sampling frequency is equal to a resolution of the texture map of the target reconstruction plane. The sampling frequency of the target reconstruction plane is determined through the above calculation process so as to calculate the volume pixel information of the texture map.
In some embodiments of the present application, the determining, in C1, the size of the voxel of the target reconstruction plane according to the parameters of the display panel and the geometric relationship among the display panel, the microlens array, and the target reconstruction plane may include:
the size of the voxels of the target reconstruction plane is determined by the following formula:
$L_{con\_imgl\_pix\_i} = L_{per\_pixel} \cdot \frac{L_i}{r}$    (2)
wherein $L_{con\_imgl\_pix\_i}$ is the size of a voxel of the target reconstruction plane, $L_{per\_pixel}$ is the size of a single pixel on the display panel, $r$ is the distance from the microlens array to the display panel, and $L_i$ is the distance from the target reconstruction plane to the display panel.
In some embodiments of the present application, the above-mentioned process of determining the sampling frequency of the target reconstruction plane by the C2 according to the reconstruction range of the target reconstruction plane and the size of the voxel of the target reconstruction plane may include:
determining a sampling frequency of the target reconstruction plane by:
$f_{pixel\_i} = \frac{L_{con\_imgl}}{L_{con\_imgl\_pix\_i}}$    (3)
wherein $f_{pixel\_i}$ is the sampling frequency of the target reconstruction plane, $L_{con\_imgl}$ is the size of the reconstruction range of the target reconstruction plane, and $L_{con\_imgl\_pix\_i}$ is the size of a voxel of the target reconstruction plane.
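A small numeric sketch of equations (2) and (3) as reconstructed above; the ~8 µm pixel size, 3 mm gap and 160 mm range below are illustrative values only:
```python
def voxel_size(L_per_pixel, r, L_i):
    """Voxel size on the plane at depth L_i (equation (2) as reconstructed)."""
    return L_per_pixel * L_i / r

def sampling_frequency(L_con_imgl, L_per_pixel, r, L_i):
    """Voxel count across the reconstruction range: range / voxel size (eq. (3))."""
    return L_con_imgl / voxel_size(L_per_pixel, r, L_i)

# Illustrative numbers only (mm units): 8 um panel pixel, 3 mm lens-panel gap.
print(sampling_frequency(L_con_imgl=160.0, L_per_pixel=0.008, r=3.0, L_i=200.0))
```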
In some embodiments of the present application, the step S300 of obtaining the second index matrix according to the volume pixel information of the texture map of each target reconstruction plane and the first index matrix may include:
and traversing each volume pixel in the texture map aiming at the texture map of each target reconstruction plane, and extracting a two-dimensional pixel corresponding to the volume pixel from the first index matrix to generate a second index matrix.
The first index matrix contains the correspondences, within the reconstruction range, between all voxels of every possible reconstruction plane and the two-dimensional pixels of the display panel; in practice, however, the target reconstruction planes are usually only a subset of all possible reconstruction planes, and the texture map on a plane does not necessarily cover the whole reconstruction area. This calculation therefore extracts from the first index matrix only the voxel-to-pixel correspondences relevant to the texture maps, and uses them as the second index matrix.
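A minimal sketch of this extraction step, assuming a dictionary layout for both index matrices (the patent does not fix a storage format):
```python
import numpy as np

def build_second_index(first_index, texture_maps):
    """Extract the per-frame second index matrix from the pre-computed first
    index matrix. Illustrative layout:
    first_index[d][(row, col)] -> list of panel pixels for voxel (row, col)
    texture_maps[d]            -> H x W x 3 texture map of target plane d
    """
    second_index = {}
    for d, tex in texture_maps.items():
        table = first_index[d]
        occupied = np.argwhere(tex.any(axis=-1))   # voxels carrying color
        second_index[d] = {(int(y), int(x)): table[(int(y), int(x))]
                           for y, x in occupied}
    return second_index
```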
In some embodiments of the present application, the step S400 of obtaining a unit image array of a display panel according to the volume pixel information of the texture map of each target reconstruction plane and the second index matrix may include:
d1, acquiring two-dimensional pixels on the display panel corresponding to the volume pixels of the texture map of each target reconstruction plane according to the second index matrix;
and D2, traversing the volume pixels of the texture map according to the texture map of each target reconstruction plane, acquiring the color information of the volume pixels, and setting the color information of the two-dimensional pixels on the display panel corresponding to the volume pixels to be consistent with the color information of the volume pixels to obtain the unit image array of the display panel.
Wherein the color information includes at least one of chrominance information and luminance information.
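And a matching sketch of the color-mapping traversal that produces the unit image array, under the same illustrative layout as above:
```python
import numpy as np

def render_eia(second_index, texture_maps, panel_h, panel_w):
    """Compose the element image array by copying each voxel's color to all
    of its pre-computed panel pixels."""
    eia = np.zeros((panel_h, panel_w, 3), dtype=np.uint8)
    for d, mapping in second_index.items():
        tex = texture_maps[d]
        for (y, x), pixels in mapping.items():
            for row, col in pixels:
                eia[row, col] = tex[y, x]   # chrominance/luminance copied as-is
    return eia
```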
So far, embodiments of various variations of the real-time rendering method for integrated imaging light field display of the present application have been introduced in principle. For the sake of understanding, the following describes specific embodiments of the real-time rendering method for integrated imaging light field display according to the embodiments of the present application with two specific examples.
Example one:
referring to fig. 3, the integrated imaging light field is composed of a display panel and a microlens array.
Wherein the optical device parameters are as follows:
the display panel is rectangular, the length of the diagonal is 0.7 inches, and the number of pixels is 1920 × 1080 (the number of long-edge pixels × the number of wide-edge pixels);
the size of the microlens array must be not smaller than the size of the display panel, and in this example, a microlens array having a size of 160mm x 80mm is used, and the diameter of the individual microlens in the microlens array is 1mm, and the focal length is 4 mm.
The positional parameters of the light field are as follows:
the distance r from the micro lens array to the display panel is 3 mm;
It is assumed that the possible reconstruction planes are 6 depth reconstruction planes, from reconstruction plane 1 to reconstruction plane 6, with the 6 depths being L1 = 200 mm, L2 = 500 mm, L3 = 800 mm, L4 = 1000 mm, L5 = 1500 mm and L6 = 2000 mm.
First, a spatial information preprocessing process is performed to pre-calculate a first index matrix.
According to the geometric relation between the micro display, the lens and the reconstruction virtual plane in the integrated light field, ray tracing is carried out according to a Gaussian formula, the position information of each voxel in the reconstruction plane 1 corresponding to a plurality of two-dimensional pixels in the display panel is obtained through calculation, and the position information is stored as a matrix so as to generate a first index matrix.
And repeating the process, traversing the voxels of each reconstruction plane respectively aiming at the reconstruction planes 2 to 6, calculating the position information of the voxels corresponding to a plurality of two-dimensional pixels in the display panel, updating the position information to the first index matrix, and finally obtaining the complete first index matrix.
Then, calculation of a specific target image to be rendered is started.
Accordingly, it is assumed that the target image to be rendered includes depths of 200mm, 500mm, 800mm, 1000mm, 1500mm, and 2000mm, respectively, that is, the depths of the target reconstruction plane of the target image to be rendered are 200mm, 500mm, 800mm, 1000mm, 1500mm, and 2000mm, respectively.
The sampling frequency of each depth plane is calculated according to equation (3) as f1 = 360 × 203, f2 = 346 × 195, f3 = 343 × 193, f4 = 343 × 193, f5 = 341 × 192 and f6 = 339 × 191.
And performing down-sampling on the target image to be rendered by using the sampling frequency to obtain a texture map of each target reconstruction plane.
And then carrying out a real-time color information processing process. And generating a specific spatial position information corresponding matrix corresponding to the texture map, namely a second index matrix, by calling the first index matrix according to the texture map of the target image to be rendered on each target reconstruction plane. And traversing the volume pixels of the texture map of a target reconstruction plane, and mapping the chroma information and the brightness information one by one to obtain a complete depth EIA map of the texture map of the target reconstruction plane. And combining the depth EIA maps of all the target reconstruction planes to obtain a final output image.
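As a small worked illustration of this example's fixed geometry, using voxel-size formula (2) as reconstructed above and an assumed ~8 µm panel pixel (not stated in the example):
```python
# Units: mm. The depths and the 3 mm gap come from this example; the pixel
# size and the voxel-size formula are the assumptions noted above.
plane_depths = [200, 500, 800, 1000, 1500, 2000]   # L1 .. L6
r = 3.0                                            # microlens array to panel
L_per_pixel = 0.008                                # assumed panel pixel size

for L_i in plane_depths:
    voxel = L_per_pixel * L_i / r                  # voxel size grows with depth
    print(f"plane at {L_i} mm: voxel ~ {voxel:.2f} mm")
```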
The rendering method provided by the embodiment of the application only relates to EIA image reconstruction, is based on a display panel and a micro-lens array which are necessary for integrated light field display, does not need to add any other optical and electrical devices, can be combined with any light path constructed on the basis of the integrated light field display, does not need to change the operation amount, and has extremely high universality; in addition, the calculation process does not depend on hardware performance, the universality is strong, the stability is good, the combination with the current commercial virtual reality and augmented reality display equipment is easy, and the market prospect is wide.
Example two:
this example is presented for an integrated imaging light-field display with a quasi-continuous reconstructed depth plane. The integrated imaging light field consists of a display panel and a micro-thick lens array.
Wherein the optical device parameters are as follows:
the display panel is rectangular, the length of the diagonal is 0.7 inches, and the number of pixels is 1920 × 1080 (the number of long-edge pixels × the number of wide-edge pixels);
the size of the microlens array must also be no smaller than the size of the display panel, in this example a microlens array with a size of 160mm x 80mm is used, and the individual microlenses in the microlens array have a diameter of 1mm and a focal length of 2.9 mm.
The positional parameters of the light field are as follows:
the distance r from the micro lens array to the micro display is 3 mm;
It is assumed that the possible reconstruction planes are 10 depth reconstruction planes, from reconstruction plane 1 to reconstruction plane 10, with the 10 depths being L1 = 200 mm, L2 = 300 mm, L3 = 500 mm, L4 = 700 mm, L5 = 900 mm, L6 = 1200 mm, L7 = 1500 mm, L8 = 2000 mm, L9 = 5000 mm and L10 = 10000 mm.
First, a spatial information preprocessing process is performed to pre-calculate a first index matrix.
According to the geometric relation between the micro display, the lens and the reconstruction virtual plane in the integrated light field, ray tracing is carried out according to a Gaussian formula under a thick lens model, the position information of each voxel in the reconstruction plane 1 corresponding to a plurality of two-dimensional pixels in the display panel is obtained through calculation, and the position information is stored as a matrix so as to generate a first index matrix.
And repeating the process, traversing the voxels of each reconstruction plane respectively aiming at the reconstruction planes 2 to 10, calculating the position information of the voxels corresponding to a plurality of two-dimensional pixels in the display panel, updating the position information to the first index matrix, and finally obtaining the complete first index matrix.
Then, calculation of a specific target image to be rendered is started.
Accordingly, it is assumed that the target image to be rendered includes depths of 200 mm, 300 mm, 500 mm, 700 mm, 900 mm, 1200 mm, 1500 mm, 2000 mm, 5000 mm and 10000 mm; that is, the depths of the target reconstruction planes of the target image to be rendered are 200 mm, 300 mm, 500 mm, 700 mm, 900 mm, 1200 mm, 1500 mm, 2000 mm, 5000 mm and 10000 mm, respectively.
The sampling frequency of each depth plane is calculated according to equation (3), and the target image to be rendered is down-sampled at these sampling frequencies to obtain the texture map of each target reconstruction plane.
And then carrying out a real-time color information processing process. And generating a specific spatial position information corresponding matrix corresponding to the texture map, namely a second index matrix, by calling the first index matrix according to the texture map of the target image to be rendered on each target reconstruction plane. And traversing the volume pixels of the texture map of a target reconstruction plane, and mapping the chroma information and the brightness information one by one to obtain a complete depth EIA map of the texture map of the target reconstruction plane. And combining the depth EIA maps of all the target reconstruction planes to obtain a final output image.
The real-time rendering method for integrated imaging light field display provided by the embodiments of the present application needs no measures, such as high-performance hardware computing power or cloud computing, to raise the per-operation speed, and does not affect hardware-determined performance such as resolution and viewing angle. The ray tracing process is optimized by pre-storing ray information: the spatial position information of the voxels in each possible reconstruction plane is pre-computed, stored, and invoked by table lookup. The fixed voxel spatial distribution information and the 3D image information that changes with the input image are processed separately; after down-sampling, the target image is projected directly to the pre-stored positions for output. Repeated calculation of fixed information is thereby avoided, the amount of computation is compressed, the computational complexity is greatly reduced, and the image rendering speed can reach video rate.
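A toy Python sketch of this amortisation argument; the table, sizes and random content are illustrative only:
```python
import time
import numpy as np

# Build the (here tiny, identity-like) index table once, then render 30
# frames of changing content using table lookups and copies only.
rng = np.random.default_rng(0)
table = {(y, x): [(y, x)] for y in range(100) for x in range(100)}

start = time.perf_counter()
for _ in range(30):
    tex = rng.integers(0, 256, (100, 100, 3), dtype=np.uint8)
    eia = np.zeros((100, 100, 3), dtype=np.uint8)
    for (y, x), pixels in table.items():
        for row, col in pixels:
            eia[row, col] = tex[y, x]          # per-frame cost: lookups only
print(f"~{(time.perf_counter() - start) / 30 * 1e3:.1f} ms per toy frame")
```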
The following describes the real-time rendering apparatus for integrated imaging light field display provided in the embodiment of the present application, and the real-time rendering apparatus for integrated imaging light field display described below and the real-time rendering method for integrated imaging light field display described above may be referred to correspondingly.
Referring to fig. 4, a real-time rendering apparatus for integrated imaging light field display provided in an embodiment of the present application may include:
a reconstruction region obtaining unit 10, configured to determine at least two target reconstruction planes and a reconstruction range of the target reconstruction planes according to a preset target reconstruction space and a target image to be rendered;
the texture map obtaining unit 20 is configured to obtain, for each target reconstruction plane, a texture map of a target image to be rendered on the target reconstruction plane according to parameters of the display panel, a reconstruction range of the target reconstruction plane, and a geometric relationship among the display panel, the microlens array, and the target reconstruction plane;
a second index matrix obtaining unit 30, configured to obtain a second index matrix according to the voxel information of the texture map of each target reconstruction plane and a preset first index matrix; the first index matrix comprises the corresponding relation between each voxel of each possible reconstruction plane in the reconstruction range and the two-dimensional pixel on the display panel; the second index matrix comprises the corresponding relation between each voxel in the texture map of each target reconstruction plane and the two-dimensional pixel on the display panel;
and the unit image array obtaining unit 40 is configured to obtain a unit image array of the display panel according to the volume pixel information of the texture map of each target reconstruction plane and the second index matrix.
In summary:
In this application, the first index matrix, containing the position correspondence information, and the second index matrix, containing the texture map pixel correspondence information, are calculated separately. For any target image that does not exceed the target reconstruction space and whose target reconstruction planes are among the possible reconstruction planes, only the pre-calculated first index matrix needs to be called, and the second index matrix can be conveniently derived from it. That is, over the whole rendering process for all pictures, the first index matrix needs to be calculated only once, in advance; repeated calculation is avoided, the amount of computation is reduced, and calculation time is saved for each single rendered picture.
For the second index matrix, for target images with different contents only the texture maps on each target reconstruction plane need to be computed, after which the corresponding second index matrix is derived from the pre-calculated first index matrix; the computation involved has low complexity. Finally, the unit image array of the display panel is obtained from the voxel information of the texture maps of the target reconstruction planes and the second index matrix. The whole calculation process is simple and fast, does not depend on strong hardware computing power, reduces the amount of calculation, and shortens the time to render a single picture, so that the image rendering speed reaches video rate.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, the embodiments may be combined as needed, and the same and similar parts may be referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A real-time rendering method for integrated imaging light field display, comprising:
determining at least two target reconstruction planes and a reconstruction range of the target reconstruction planes according to a preset target reconstruction space and a target image to be rendered;
aiming at each target reconstruction plane, acquiring a texture map of a target image to be rendered on the target reconstruction plane according to the parameters of the display panel, the reconstruction range of the target reconstruction plane and the geometric relationship among the display panel, the micro lens array and the target reconstruction plane;
acquiring a second index matrix according to the volume pixel information of the texture map of each target reconstruction plane and a preset first index matrix; the first index matrix comprises the corresponding relation between each voxel of each possible reconstruction plane in the reconstruction range and the two-dimensional pixel on the display panel; the second index matrix comprises the corresponding relation between each voxel in the texture map of each target reconstruction plane and the two-dimensional pixel on the display panel;
and acquiring a unit image array of the display panel according to the volume pixel information of the texture map of each target reconstruction plane and the second index matrix.
2. The method of claim 1, wherein the calculating of the first index matrix comprises:
determining a plurality of possible reconstruction planes and a reconstruction range of the reconstruction planes according to a preset target reconstruction space;
and for each possible reconstruction plane, calculating that each voxel of the reconstruction plane in the reconstruction range corresponds to a two-dimensional pixel on the display panel according to the parameters of the micro lens array and the geometric relationship among the display panel, the micro lens array and the reconstruction plane to obtain a first index matrix.
3. The method according to claim 2, wherein the process of calculating, according to the parameters of the microlens array and the geometric relationship among the display panel, the microlens array and the reconstruction plane, the two-dimensional pixels on the display panel corresponding to each voxel of the reconstruction plane within its reconstruction range comprises:
and aiming at each lens in the micro-lens array, performing ray tracing by a Gaussian formula according to the focal length of the lens and the geometric relationship among the lens, the display panel and the reconstruction plane to obtain a two-dimensional pixel of each voxel of the reconstruction plane in the reconstruction range corresponding to the display panel.
4. The method according to claim 1, wherein the process of obtaining the texture map of the target image to be rendered in the target reconstruction plane according to the parameters of the display panel and the reconstruction range of the target reconstruction plane, and the geometric relationship among the display panel, the microlens array and the target reconstruction plane comprises:
acquiring the sampling frequency of a target reconstruction plane according to the parameters of the display panel, the reconstruction range of the target reconstruction plane and the geometric relationship among the display panel, the micro lens array and the target reconstruction plane;
and according to the sampling frequency and the reconstruction range of the target reconstruction plane, performing down-sampling on the target image to be rendered to obtain a texture map of the target reconstruction plane.
5. The method according to claim 4, wherein acquiring the sampling frequency of the target reconstruction plane comprises:
determining the voxel size of the target reconstruction plane according to the parameters of the display panel and the geometric relationship among the display panel, the microlens array and the target reconstruction plane;
and determining the sampling frequency of the target reconstruction plane according to its reconstruction range combined with its voxel size.
6. The method according to claim 5, wherein determining the voxel size of the target reconstruction plane comprises:
determining the voxel size of the target reconstruction plane by the following formula:
$$L_{con\_imgl\_pix\_i} = \frac{L_i}{r}\, L_{per\_pixel}$$

wherein $L_{con\_imgl\_pix\_i}$ is the voxel size of the target reconstruction plane, $L_{per\_pixel}$ is the size of a single pixel on the display panel, $r$ is the distance from the microlens array to the display panel, and $L_i$ is the distance from the target reconstruction plane to the display panel.
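(An editorial illustration with assumed numbers, not values from the patent: a 0.05 mm display pixel with r = 3 mm and a plane at L_i = 30 mm gives a voxel of 0.05 × 30 / 3 = 0.5 mm; voxel size grows linearly with the plane's distance from the panel.)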
7. The method according to claim 5, wherein determining the sampling frequency of the target reconstruction plane comprises:
determining the sampling frequency of the target reconstruction plane by the following formula:
$$f_{pixel\_i} = \frac{L_{con\_imgl}}{L_{con\_imgl\_pix\_i}}$$

wherein $f_{pixel\_i}$ is the sampling frequency of the target reconstruction plane, $L_{con\_imgl}$ is the size of the reconstruction range of the target reconstruction plane, and $L_{con\_imgl\_pix\_i}$ is the voxel size of the target reconstruction plane.
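Both formulas reduce to one-liners; a sketch with illustrative names and the numbers from the example above (all units assumed consistent, e.g. millimetres):

```python
def voxel_size(pix_size, gap_r, plane_L):
    """L_con_imgl_pix_i = L_per_pixel * L_i / r."""
    return pix_size * plane_L / gap_r

def sampling_frequency(range_size, pix_size, gap_r, plane_L):
    """f_pixel_i = L_con_imgl / L_con_imgl_pix_i (samples across the range)."""
    return int(range_size / voxel_size(pix_size, gap_r, plane_L))

# A 50 mm reconstruction range at L_i = 30 mm, r = 3 mm, 0.05 mm pixels:
# 0.5 mm voxels, hence 100 samples across the range.
assert sampling_frequency(50.0, 0.05, 3.0, 30.0) == 100
```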
8. The method according to claim 1, wherein obtaining the second index matrix according to the voxel information of the texture map of each target reconstruction plane and the preset first index matrix comprises:
for the texture map of each target reconstruction plane, traversing each voxel in the texture map and extracting from the first index matrix the two-dimensional pixel corresponding to that voxel, to generate the second index matrix.
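Since the first index matrix already covers every candidate plane, this traversal amounts to a per-plane lookup. A sketch under assumed data layouts (dictionaries keyed by plane id; all names are illustrative):

```python
def second_index_matrix(texture_maps, first_index):
    """Keep only the voxel -> pixel entries for voxels that actually occur
    in a target plane's texture map.

    texture_maps -- {plane_id: (n_r, n_c, 3) color array}
    first_index  -- {plane_id: pixel-index array covering at least n_r x n_c voxels}
    """
    second = {}
    for pid, tex in texture_maps.items():
        n_r, n_c = tex.shape[:2]
        second[pid] = first_index[pid][:n_r, :n_c]  # entries for voxels in use
    return second
```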
9. The method according to claim 1, wherein obtaining the unit image array of the display panel according to the voxel information of the texture map of each target reconstruction plane and the second index matrix comprises:
acquiring, according to the second index matrix, the two-dimensional pixels on the display panel corresponding to the voxels of the texture map of each target reconstruction plane;
and, for the texture map of each target reconstruction plane, traversing its voxels, acquiring the color information of each voxel, and setting the color information of the corresponding two-dimensional pixels on the display panel to match that of the voxel, thereby obtaining the unit image array of the display panel.
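The final pass scatters each voxel's color to the pixels named by the second index matrix. A sketch assuming the index stores (row, col) pixel coordinates per lens, with (-1, -1) marking rays that missed the panel; names and shapes are illustrative:

```python
import numpy as np

def render_unit_images(texture_maps, second_index, panel_shape):
    """Write every voxel's color to all display pixels it maps to, producing
    the unit (elemental) image array shown on the panel."""
    panel = np.zeros((panel_shape[0], panel_shape[1], 3), dtype=np.uint8)
    for pid, tex in texture_maps.items():
        idx = second_index[pid]              # (n_r, n_c, n_lens, 2) pixel coords
        for vr in range(tex.shape[0]):
            for vc in range(tex.shape[1]):
                for pr, pc in idx[vr, vc]:
                    if pr >= 0 and pc >= 0:  # skip rays that missed the panel
                        panel[pr, pc] = tex[vr, vc]
    return panel
```

Iterating the planes from far to near would let nearer voxels overwrite farther ones, a simple occlusion rule; the claims do not state an ordering, so treat this as one possible design choice.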
10. A real-time rendering apparatus for integrated imaging light field display, comprising:
a reconstruction region acquisition unit, configured to determine at least two target reconstruction planes and their reconstruction ranges according to a preset target reconstruction space and a target image to be rendered;
a texture map acquisition unit, configured to obtain a texture map of the target image to be rendered on each target reconstruction plane according to the parameters of the display panel, the reconstruction range of the target reconstruction plane, and the geometric relationship among the display panel, the microlens array and the target reconstruction plane;
a second index matrix acquisition unit, configured to obtain a second index matrix according to the voxel information of the texture map of each target reconstruction plane and a preset first index matrix, wherein the first index matrix records the correspondence between each voxel of every possible reconstruction plane within its reconstruction range and the two-dimensional pixels on the display panel, and the second index matrix records the correspondence between each voxel in the texture map of each target reconstruction plane and the two-dimensional pixels on the display panel;
and a unit image array acquisition unit, configured to obtain the unit image array of the display panel according to the voxel information of the texture map of each target reconstruction plane and the second index matrix.
CN202110875812.0A 2021-07-30 Real-time rendering method and device for integrated imaging light field display Active CN113516748B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110875812.0A CN113516748B (en) 2021-07-30 Real-time rendering method and device for integrated imaging light field display

Publications (2)

Publication Number Publication Date
CN113516748A true CN113516748A (en) 2021-10-19
CN113516748B CN113516748B (en) 2024-05-28


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106154567A (en) * 2016-07-18 2016-11-23 Beijing University of Posts and Telecommunications Imaging method and device of a three-dimensional light field display system
US20190124313A1 (en) * 2017-10-19 2019-04-25 Intel Corporation Three dimensional glasses free light field display using eye location
WO2021030430A1 (en) * 2019-08-12 2021-02-18 Arizona Board Of Regents On Behalf Of The University Of Arizona Optical design and optimization techniques for 3d light field displays
US20210146247A1 (en) * 2019-09-11 2021-05-20 Tencent Technology (Shenzhen) Company Limited Image rendering method and apparatus, device and storage medium
CN112967370A (en) * 2021-03-03 2021-06-15 Beijing University of Posts and Telecommunications Three-dimensional light field reconstruction method and device and storage medium


Non-Patent Citations (1)

Title
TIAN Feng; XIA Xue; WANG He: "Application of true three-dimensional display in medical education and simulation", Chinese Journal of Liquid Crystals and Displays, No. 04, p. 535 *

Similar Documents

Publication Publication Date Title
US9443338B2 (en) Techniques for producing baseline stereo parameters for stereoscopic computer animation
KR101675961B1 (en) Apparatus and Method for Rendering Subpixel Adaptively
CN105611278B Image processing method and system, and display device, for preventing dizziness in naked-eye 3D viewing
US20130050187A1 Method and Apparatus for Generating Multiple Image Views for a Multiview Autostereoscopic Display Device
CN108769664B (en) Naked eye 3D display method, device, equipment and medium based on human eye tracking
EP3001681B1 (en) Device, method and computer program for 3d rendering
JP6585938B2 (en) Stereoscopic image depth conversion apparatus and program thereof
Javidi et al. Breakthroughs in photonics 2014: recent advances in 3-D integral imaging sensing and display
JP3032414B2 (en) Image processing method and image processing apparatus
CN108287609B (en) Image drawing method for AR glasses
CN111079673A (en) Near-infrared face recognition method based on naked eye three-dimension
Kuo et al. Perspective-Correct VR Passthrough Without Reprojection
CN113516748B (en) Real-time rendering method and device for integrated imaging light field display
CN108012139B Image generation method and device applied to realistic near-eye display
CN113516748A (en) Real-time rendering method and device for integrated imaging light field display
Jang et al. Focused augmented mirror based on human visual perception
Hansen et al. Light field rendering for head mounted displays using pixel reprojection
TWI572899B (en) Augmented reality imaging method and system
CN114637391A (en) VR content processing method and equipment based on light field
Lee et al. Eye tracking based glasses-free 3D display by dynamic light field rendering
JPH08116556A (en) Image processing method and device
Martínez‐Corral et al. Three‐Dimensional Integral Imaging and Display
Takaki Next-generation 3D display and related 3D technologies
US20230239456A1 (en) Display system with machine learning (ml) based stereoscopic view synthesis over a wide field of view
Jin et al. Intermediate view synthesis for multi-view 3D displays using belief propagation-based stereo matching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant