CN113556529B - High-resolution light field image display method, device, equipment and medium - Google Patents


Info

Publication number: CN113556529B (application CN202110873615.5A, China)
Other versions: CN113556529A
Original language: Chinese (zh)
Prior art keywords: pixel, sub, depth, reconstruction, value
Inventors: 秦宗, 杨文超, 程云帆, 邹国伟, 龚又又, 吴梓毅, 杨柏儒
Original and current assignee: Sun Yat Sen University
Legal status: Active (granted)

Classifications

    • H04N13/302: Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • G02B30/27: Optical systems or apparatus for producing 3D effects of the autostereoscopic type involving lenticular arrays
    • G02B30/52: Optical systems or apparatus for producing 3D effects, the 3D volume being constructed from a stack or sequence of 2D planes (depth sampling systems)
    • H04N13/122: Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H04N13/332: Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/398: Synchronisation thereof; Control thereof

Abstract

The invention discloses a high-resolution light field image display method, device, equipment and medium, applied to a display terminal comprising a lens array and a microdisplay. The method comprises the following steps: acquiring the display parameters of the microdisplay, the lens diameter of the lens array and the current exit pupil distance, and determining the reconstruction plane depth; determining, based on the reconstruction plane depth and/or the lens diameter, the sub-pixel coordinates at different pixel positions of the same image for a reconstructed volume pixel; when an image to be displayed is received, extracting the reconstruction sub-pixels corresponding to the image to be displayed from those sub-pixel coordinates, constructing the reconstructed volume pixels on the reconstruction depth plane corresponding to the reconstruction plane depth, and generating the target display image. Because the volume pixels are synthesized from reconstruction sub-pixels contributed by different pixels on the microdisplay, the synthesized volume pixel is only one third the size of one obtained with existing volume pixel synthesis, significantly improving the display resolution of the microdisplay and the corresponding space-bandwidth product.

Description

High-resolution light field image display method, device, equipment and medium
Technical Field
The invention relates to the technical field of light field display, in particular to a high-resolution light field image display method, device, equipment and medium.
Background
With the development of technology, display resolution keeps rising. However, what the user sees is usually still a flat two-dimensional display that lacks depth information. Common technologies for realizing three-dimensional display include binocular parallax, volumetric three-dimensional display, holographic display and integrated imaging light field display. Integrated imaging light field display has the advantages of freedom from vergence-accommodation conflict, simple hardware and a light, thin form factor, and has wide application scenarios, but its resolution is low because of its low space-bandwidth product and voxel utilization rate.
For this reason, the prior art proposes various approaches: subdividing the sub-pixels of a two-dimensional display, for example through the holes of a sub-pixel encoding template, to match pixels of different resolutions and thereby improve display resolution; dividing image pixels with a variable-focus lens array; controlling a light-redirecting device to switch between different light-redirecting states according to the gaze direction of the human eye; delivering different images to different locations on the retina via a beam-redirecting assembly to display a high-resolution image; or providing a high-resolution two-dimensional image by sub-pixel rendering through the reordering and recombination of adjacent pixels.
These prior-art schemes generally need to combine time-division multiplexing or add a light modulation device to increase resolution, yet the resolution gain is very limited because of the constraints of the various dynamic devices, and the complexity of the system increases the cost of use.
Disclosure of Invention
The invention provides a high-resolution light field image display method, device, equipment and medium, solving the technical problems that existing schemes for improving the display resolution of integrated imaging light fields must additionally add dynamic devices such as light modulators, achieve only a limited resolution improvement, and increase the cost of use through system complexity.
The invention provides a high-resolution light field image display method, which is applied to a display terminal, wherein the display terminal comprises a lens array and a micro display, and the method comprises the following steps:
acquiring display parameters of the micro display, the lens diameter of the lens array and the current exit pupil distance;
determining the depth of a reconstruction plane according to the display parameters, the lens diameter and the current exit pupil distance;
determining sub-pixel coordinates corresponding to reconstructed volume pixels based on the reconstructed plane depth and/or the lens diameter;
when an image to be displayed is received, acquiring a reconstruction sub-pixel corresponding to the image to be displayed from the sub-pixel coordinate;
and constructing the reconstructed body pixel on the reconstructed depth plane corresponding to the depth of the reconstructed plane by adopting the reconstructed sub-pixel corresponding to the image to be displayed, and generating a target display image.
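Taken together, the five steps above can be sketched as the following Python outline. It is a minimal illustration under stated assumptions: all names, the unit conventions, the closed-form depth formula and the pixel-unit conversion are assumptions of this sketch, not code published in the patent.

```python
# Hypothetical sketch of the five claimed steps; not code from the patent.
PHOTOPIC = 250.0  # mm, assumed comfortable near-viewing distance of the eye

def plane_depths(p, g, D, j, exit_pupil, ks=range(1, 8)):
    """Steps 1-2: candidate reconstruction plane depths, bounded below by L_min."""
    l_min = PHOTOPIC - exit_pupil                    # minimum depth value
    return [j * D * g / (k * p)                      # assumed zero-error form
            for k in ks if j * D * g / (k * p) >= l_min]

def voxel_abscissas(x, depth, g, D, p, on_centerline):
    """Step 3: the three sub-pixel abscissas fused into one volume pixel."""
    r = depth * D / (g + depth) / p                  # lateral offset in pixel units
    return (x - r, x, x + r) if on_centerline else (x, x - r, x - 2 * r)

def build_target_image(image, voxel_coords):
    """Steps 4-5: extract one reconstruction sub-pixel per coordinate and fuse."""
    return [tuple(image.get(round(xc), 0) for xc in xs) for xs in voxel_coords]
```

For example, with p = 7.8 um (0.0078 mm), g = 5 mm, D = 0.98 mm and j = 3, `plane_depths` yields seven candidate planes between roughly 270 mm and 1900 mm.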
Optionally, the step of determining the depth of the reconstruction plane according to the display parameters, the lens diameter, and the current exit pupil distance includes:
determining the minimum depth value of the depth of the reconstruction plane according to the difference value between the preset eye photopic distance and the current exit pupil distance;
substituting the pixel size, the lens distance and the lens diameter into a preset zero ray sampling error formula to obtain a plurality of candidate reconstructed plane depths;
selecting the candidate reconstruction plane depth greater than or equal to the depth minimum as a reconstruction plane depth;
wherein the lens distance is a distance between the microdisplay and the lens array.
Optionally, the zero ray sampling error formula is:
L_R = (j · D · g) / (K_n · p), with L_R ≥ L_min
wherein L_R is the reconstruction plane depth, g is the lens distance, p is the pixel size, D is the lens diameter, L_min is the minimum depth value, j is the number of sub-pixel arrangement cycles, K_n is a preset positive adjustment integer, and n ≥ 1.
Optionally, the sub-pixel coordinates comprise first sub-pixel coordinates or second sub-pixel coordinates, and the step of determining the sub-pixel coordinates corresponding to the reconstructed volume pixel based on the reconstruction plane depth and/or the lens diameter comprises:
determining the first sub-pixel coordinate based on the reconstruction plane depth, the lens distance, and the lens diameter when the mapping of the reconstruction volume pixel is at a lateral centerline of the microdisplay;
determining the second sub-pixel coordinate based on the reconstruction plane depth and the lens distance when the mapping of the reconstruction volume pixel is not at a lateral median line of the microdisplay.
Optionally, the first sub-pixel coordinates comprise a first sub-pixel ordinate and a plurality of first sub-pixel abscissas; said step of determining said first sub-pixel coordinate based on said reconstruction plane depth, said lens distance and said lens diameter when said mapping of said reconstruction volume pixel is at a lateral median line of said microdisplay, comprising:
when the mapping of the reconstructed volume pixel is on a transverse middle line of the micro display, acquiring a transverse pixel value of the reconstructed volume pixel mapped on the micro display;
calculating a first sum of the lens distance and the reconstruction plane depth;
calculating a first product of the reconstructed plane depth and the lens diameter;
calculating a first ratio of the first multiplication value to the first sum value;
calculating a second sum and a first difference of the first ratio and the horizontal pixel value, respectively;
determining the horizontal pixel value, the first difference value and the second sum value as the first sub-pixel horizontal coordinate, respectively;
and determining a longitudinal pixel value of the reconstructed pixel mapped on the micro display as the first sub-pixel longitudinal coordinate.
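The sub-steps above can be written out directly. In this sketch, converting the similar-triangle offset from length units to pixel units by dividing by the pixel size p is an assumption; the patent itself only states the sums and differences:

```python
def first_sub_pixel_coords(x, y, depth, g, D, p):
    """Centerline case: returns three abscissas and one shared ordinate."""
    first_sum = g + depth                    # lens distance + reconstruction depth
    first_product = depth * D                # reconstruction depth * lens diameter
    first_ratio = first_product / first_sum  # similar-triangle lateral offset
    offset = first_ratio / p                 # assumed conversion to pixel units
    first_difference = x - offset
    second_sum = x + offset
    return (first_difference, x, second_sum), y  # ordinate: same sub-pixel row
```

The three abscissas are symmetric about the transverse pixel value x, matching the centerline geometry described above.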
Optionally, the second sub-pixel coordinates comprise a second sub-pixel ordinate and a plurality of second sub-pixel abscissas; the step of determining the second sub-pixel coordinate based on the reconstruction plane depth and the lens distance when the mapping of the reconstruction volume pixel is not at the lateral centerline of the microdisplay comprises:
when the mapping of the reconstructed body pixel is not located on the transverse middle line of the microdisplay, acquiring a transverse pixel value of the reconstructed body pixel mapped on the microdisplay;
calculating a second sum of the lens distance and the reconstruction plane depth;
calculating a second product of the reconstructed plane depth and the lens diameter;
calculating a second ratio of the second multiplication value to the second sum value;
determining the horizontal coordinate of the second sub-pixel according to the second ratio and the horizontal pixel value;
determining a vertical pixel value of the reconstructed volume pixel mapped on the microdisplay as the second sub-pixel vertical coordinate.
Optionally, the step of determining the abscissa of the second sub-pixel according to the second ratio and the horizontal pixel value includes:
calculating a second difference of the lateral pixel value and the second ratio;
calculating a third difference between the second difference and the second ratio;
and determining the transverse pixel value, the second difference value and the third difference value as the second sub-pixel abscissa respectively.
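The off-centerline branch can be sketched in the same way. As before, dividing by the pixel size p to obtain an offset in pixel units is an assumption of this sketch, not stated in the patent:

```python
def second_sub_pixel_coords(x, y, depth, g, D, p):
    """Off-centerline case: abscissas are x, x - offset and x - 2*offset."""
    second_sum = g + depth                         # lens distance + reconstruction depth
    second_product = depth * D                     # reconstruction depth * lens diameter
    second_ratio = second_product / second_sum / p # offset in assumed pixel units
    second_difference = x - second_ratio           # transverse pixel value - ratio
    third_difference = second_difference - second_ratio
    return (x, second_difference, third_difference), y
```

Here the three abscissas step away from x in one direction, rather than straddling it as in the centerline case.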
A second aspect of the present invention provides a high resolution light field image display apparatus applied to a display terminal including a lens array and a microdisplay, the apparatus comprising:
the parameter acquisition module is used for acquiring display parameters of the micro display, the lens diameter of the lens array and the current exit pupil distance;
the reconstruction plane depth determining module is used for determining the depth of a reconstruction plane according to the display parameters, the diameter of the lens and the current exit pupil distance;
a sub-pixel coordinate determination module for determining sub-pixel coordinates corresponding to the reconstruction volume pixels based on the depth of the reconstruction plane and/or the lens diameter;
the device comprises a sub-pixel acquisition module, a reconstruction module and a display module, wherein the sub-pixel acquisition module is used for extracting a reconstruction sub-pixel corresponding to an image to be displayed from the sub-pixel coordinates when the image to be displayed is received;
and the reconstructed body pixel construction module is used for constructing the reconstructed body pixel on the reconstructed depth plane corresponding to the depth of the reconstructed plane by adopting the reconstructed sub-pixel corresponding to the image to be displayed so as to generate a target display image.
A third aspect of the present invention provides an electronic device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to carry out the steps of the high resolution light field image display method according to the first aspect of the present invention.
A fourth aspect of the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a high resolution light field image display method according to the first aspect of the present invention.
According to the technical scheme, the invention has the following advantages:
after the user selects a display terminal, the reconstruction plane depth is determined based on the display parameters of the microdisplay, the lens diameter of the lens array and the current exit pupil distance, and the sub-pixel coordinates required for synthesizing a reconstructed volume pixel are determined based on the reconstruction plane depth and/or the lens diameter; if an image to be displayed is received, the corresponding reconstruction sub-pixels can be obtained according to the sub-pixel coordinates of the image on the microdisplay, and the reconstruction sub-pixels are finally used to complete the synthesis of the reconstructed volume pixels on the reconstruction plane corresponding to the reconstruction plane depth, generating a high-resolution target display image. Because the volume pixels are synthesized from reconstruction sub-pixels contributed by different pixels on the microdisplay, the synthesized volume pixel is only one third the size of one obtained with existing volume pixel synthesis, significantly improving the display resolution of the microdisplay and the corresponding space-bandwidth product.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart illustrating steps of a high-resolution light field image displaying method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating steps of a high resolution light field image displaying method according to a second embodiment of the present invention;
fig. 3 is a schematic diagram illustrating association between a reconstructed volume pixel and a first sub-pixel coordinate according to a second embodiment of the present invention;
fig. 4 is a schematic diagram illustrating association between a reconstructed pixel and a second sub-pixel coordinate according to a second embodiment of the present invention;
fig. 5 is a schematic diagram illustrating a reconstructed pixel according to a second embodiment of the present invention;
FIG. 6 is a schematic diagram of a reconstructed volume pixel in an alternative embodiment of the invention;
FIG. 7 is a schematic diagram of a reconstructed volume pixel according to another embodiment of the invention;
fig. 8 is a block diagram of a high-resolution light field image display apparatus according to a third embodiment of the present invention.
Detailed Description
The embodiments of the invention provide a high-resolution light field image display method, device, equipment and medium, aiming to solve the technical problems that existing schemes for improving the display resolution of integrated imaging light fields additionally add dynamic devices such as light modulators, achieve only a limited resolution improvement, and increase the cost of use through system complexity.
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating a high-resolution light field image display method according to an embodiment of the present invention.
The invention provides a high-resolution light field image display method, which is applied to a display terminal, wherein the display terminal comprises a lens array and a micro display, and the method comprises the following steps:
step 101, acquiring display parameters of a microdisplay, the lens diameter of a lens array and the current exit pupil distance;
the display terminal is terminal equipment with a display screen, such as VR (virtual reality) glasses, computers, mobile phones and other equipment, and comprises a micro display and a lens array, wherein the lens array is arranged at the outer side of the micro display, and a certain lens distance exists between the micro display and the lens array; the lens array includes a plurality of microlenses each having the same lens diameter.
In the embodiment of the invention, in order to obtain the data basis of the subsequent high-resolution integrated light field display, after a user selects a display terminal, the display parameters of the microdisplay, the lens diameter of the lens array matched with the microdisplay and the current exit pupil distance can be obtained.
The display parameters refer to the inherent parameters of the microdisplay, such as the displayed pixel size, the distance from the microlens to the microdisplay, and the like, and the exit pupil distance refers to the distance from the last vertex of the optical system to the intersection point of the exit pupil plane and the optical axis.
102, determining the depth of a reconstruction plane according to the display parameters, the diameter of a lens and the current exit pupil distance;
after the display parameters, the lens diameter and the current exit pupil distance are obtained, the minimum reconstruction plane depth that can be constructed at the current moment can be determined in combination with the photopic distance of the eye; this minimum is then used as a constraint to determine a plurality of reconstruction plane depths, so that when a video is subsequently played, voxels can be reconstructed on reconstruction planes at different depths according to the front-to-back distances of different objects.
103, determining the sub-pixel coordinates corresponding to the reconstructed volume pixels based on the depth of the reconstructed plane and/or the diameter of the lens;
in an embodiment of the present invention, after the depth of the reconstruction plane is determined, the sub-pixel coordinates corresponding to the reconstructed volume pixels on each reconstruction plane may be determined based on the depth of the reconstruction plane and the lens diameter.
104, when an image to be displayed is received, extracting a reconstruction sub-pixel corresponding to the image to be displayed from the sub-pixel coordinate;
after determining each sub-pixel coordinate corresponding to the reconstructed voxel, if the image to be displayed is received, the required reconstructed sub-pixel can be directly obtained according to the corresponding sub-pixel coordinate of the image to be displayed on the micro display, so as to be used for constructing the voxel subsequently.
And 105, constructing a reconstructed body pixel on a reconstructed depth plane corresponding to the depth of the reconstructed plane by using the reconstructed sub-pixel corresponding to the image to be displayed, and generating a target display image.
In a specific implementation, after each reconstruction sub-pixel of the image to be displayed is obtained, the reconstruction of the volume pixel can be performed on the reconstruction depth plane corresponding to the depth of the reconstruction plane, so that the target display image with high resolution and three-dimensional display is generated.
In the embodiment of the invention, after the user selects a display terminal, the reconstruction plane depth can be determined based on the display parameters of the microdisplay, the lens diameter of the lens array and the current exit pupil distance, and the sub-pixel coordinates required for synthesizing a reconstructed volume pixel are determined based on the reconstruction plane depth and/or the lens diameter. When an image to be displayed is received, the corresponding reconstruction sub-pixels can be obtained according to the sub-pixel coordinates of the image on the microdisplay, and the reconstruction sub-pixels are then used on the reconstruction plane corresponding to the reconstruction plane depth to complete the synthesis of the reconstructed volume pixels, generating a high-resolution target display image. Because the volume pixels are synthesized from reconstruction sub-pixels contributed by different pixels on the microdisplay, the synthesized volume pixel is only one third the size of one obtained with existing volume pixel synthesis, significantly improving the display resolution of the microdisplay and the corresponding space-bandwidth product.
Referring to fig. 2, fig. 2 is a flowchart illustrating a high resolution light field image display method according to a second embodiment of the present invention.
The invention provides a high-resolution light field image display method, which is applied to a display terminal, wherein the display terminal comprises a lens array and a micro display, display parameters comprise the number of sub-pixel arrangement cycles, the pixel size and the lens distance, and the method comprises the following steps:
step 201, acquiring display parameters of a microdisplay, the lens diameter of a lens array and a current exit pupil distance;
the display terminal is terminal equipment with a display screen, such as VR (virtual reality) glasses, computers, mobile phones and other equipment, and comprises a micro display and a lens array, wherein the lens array is arranged at the outer side of the micro display, and a certain lens distance exists between the micro display and the lens array; each microlens within the lens array has the same lens diameter.
In the embodiment of the invention, in order to obtain the data basis of the subsequent high-resolution integrated light field display, after a user selects the display terminal, the display parameters of the microdisplay, the lens diameter of the lens array matched with the microdisplay and the current exit pupil distance can be obtained.
The display parameters refer to the inherent parameters of the microdisplay, such as the displayed pixel size and the distance from the microlens to the microdisplay, the current exit pupil distance refers to the distance from the vertex of the last surface of the optical system to the intersection point of the exit pupil plane and the optical axis, and the lens distance refers to the distance between the microdisplay and the lens array.
It should be noted that the lens array may include, but is not limited to, a microlens array, a pinhole array, a superlens array, and the like; the specific type of the lens array is not limited by the embodiments of the present application.
Step 202, determining a depth minimum value of a reconstruction plane depth according to a difference value between a preset eye photopic distance and a current exit pupil distance;
the photopic distance of the human eye refers to the working distance that the eye is most convenient and accustomed to under appropriate lighting conditions. The distance of a small object close to the human eye is most suitable for normal human eyes to observe, is about 25cm, and the human eye has less tension in adjusting function and can observe for a long time without fatigue.
After the current exit pupil distance is obtained, the minimum depth value of the depth of the reconstruction plane can be determined by combining the preset eye photopic distance and the difference value thereof.
The predetermined eye photopic distance may be set to 25cm, 20cm, and the like, which is not limited in this embodiment of the present invention.
Step 203, substituting the pixel size, the lens distance and the lens diameter into a preset zero ray sampling error formula to obtain a plurality of candidate reconstructed plane depths;
in a specific implementation, when the chief rays passing through a microlens reach the pixels on the display, not every chief ray passes through the center of a pixel, so the fused volume pixel carries a certain error, which is called the ray sampling error.
In the embodiment of the present invention, the reconstructed plane depth with the ray sampling error of the sub-pixel being zero can be calculated by the following zero ray sampling error formula through the similar triangle relationship.
Further, the zero ray sampling error formula is:
L_R = (j · D · g) / (K_n · p), with L_R ≥ L_min
wherein L_R is the reconstruction plane depth, g is the lens distance, p is the pixel size, D is the lens diameter, L_min is the minimum depth value, j is the number of sub-pixel arrangement cycles, K_n is a preset positive adjustment integer, and n ≥ 1.
In a specific implementation, after the specification of the display terminal is selected, the lens diameter D of each microlens in the lens array, the lens distance g and the pixel size displayed by the microdisplay are all fixed; in the above formula, the reconstruction plane depth varies only with the adjustment positive integer K_n. A plurality of reconstruction plane depths can therefore be calculated from the respective preset adjustment positive integers, so as to provide the basis for voxel reconstruction on reconstruction planes for objects at different depths.
The number of sub-pixel arrangement cycles depends on the arrangement of the pixels displayed on the microdisplay. For example, if the sub-pixels are arranged as vertical RGB stripes, j can be set to 3; if the pixels are arranged in an RGBW layout whose arrangement period is two, j can be set to 2; and so on, the setting of j varying with the number of sub-pixel arrangement cycles in the relevant direction.
Step 204, selecting the candidate reconstruction plane depth which is greater than or equal to the minimum depth value as the reconstruction plane depth;
in the embodiment of the invention, after the minimum depth value is obtained by calculating the eye distance and the current exit pupil distance, in order to further improve the comfort of the user, the candidate reconstruction plane depth which is greater than or equal to the minimum depth value can be selected from the candidate reconstruction plane depths to serve as the reconstruction plane depth.
For example, in the case where p is 7.8um, g is 5mm, and D is 0.98mm, the ray sampling errors of voxels in several depth planes, such as 2000mm, 958mm, 472mm, 376mm, and 266mm, can be found at the minimum depth value.
Step 205, determining the sub-pixel coordinates corresponding to the reconstructed volume pixels based on the depth of the reconstructed plane and/or the diameter of the lens;
optionally, the sub-pixel coordinates comprise the first sub-pixel coordinates or the second sub-pixel coordinates, and step 205 may comprise the following sub-steps S1-S2:
s1, when the mapping of the reconstructed volume pixel is on the transverse median line of the micro display, determining a first sub-pixel coordinate based on the depth of the reconstructed plane, the lens distance and the lens diameter;
in one example of the present invention, the first sub-pixel coordinate includes a first sub-pixel ordinate and a plurality of first sub-pixel abscissas, and the step S1 may include the sub-steps of:
when the mapping of the reconstructed volume pixel is on the transverse middle line of the micro display, acquiring a transverse pixel value of the reconstructed volume pixel mapped on the micro display;
calculating a first sum of the lens distance and the depth of the reconstruction plane;
calculating a first product of the depth of the reconstruction plane and the diameter of the lens;
calculating a first ratio of the first multiplication value to the first sum value;
respectively calculating a second sum value and a first difference value of the first ratio and the transverse pixel value;
determining the horizontal pixel value, the first difference value and the second sum value as a first sub-pixel horizontal coordinate respectively;
and determining a longitudinal pixel value of the reconstructed volume pixel mapped on the micro display as a first sub-pixel longitudinal coordinate.
In this embodiment, when the mapping of the reconstructed voxel on the microdisplay lies on the horizontal centerline, that is, the reconstructed voxel is on the centerline of the microdisplay, the horizontal pixel value of the voxel on the microdisplay can be obtained. The first sum of the lens distance and the reconstruction plane depth, and the first product of the reconstruction plane depth and the lens diameter, are then calculated; based on the principle of similar triangles, the first ratio of the first product to the first sum is calculated, which determines the difference between the target abscissa and the horizontal pixel value. Finally, the first difference and the second sum of the first ratio and the horizontal pixel value, together with the horizontal pixel value itself, are respectively taken as the first sub-pixel abscissas; and because the sub-pixels are all in the same row, the vertical pixel value of the reconstructed voxel mapped on the microdisplay can be determined as the first sub-pixel ordinate.
It is worth mentioning that in the RGB arrangement, the first sub-pixel value abscissa represents the specific pixel abscissa of R, G, B sub-pixels, respectively, so that the vertical pixel value of the reconstructed volume pixel mapped on the microdisplay can be used as the first sub-pixel ordinate of the three sub-pixels.
Referring to fig. 3, fig. 3 is a schematic diagram illustrating a relationship between a reconstructed volume pixel and a first sub-pixel coordinate according to the present invention.
In the embodiment of the invention, because the voxel 1 to be reconstructed is positioned on the transverse middle line of the micro display, the transverse pixel value of the voxel vertically mapped to the micro display can be firstly obtained to obtain the first subpixel abscissa of the first subpixel G (namely, the oblique line identification part); further, the first sub-pixel abscissas of the other first sub-pixels R (i.e., the blank marked portion) and B (the solid black marked portion), that is, the above-described first difference value and second sum value are determined based on the relation of the similar triangles, respectively.
For example, in a microdisplay with 1920 × 1080 pixels, the first sub-pixel coordinates on the microdisplay corresponding to reconstructed voxel 1 calculated according to the method are: r1 provided by the pixel at i = 540, j = 1087; g1 by the pixel at i = 540, j = 960; and b1 by the pixel at i = 540, j = 834.
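The sub-steps of S1 can be sketched as follows. This is an illustrative sketch only: millimetre units, a round-to-nearest-pixel convention, and the function name are all assumptions, and the resulting offsets differ by one or two pixels from the worked values in the text, whose rounding convention is not stated.

```python
def first_subpixel_coords(i, j, reconstruction_depth, lens_distance, lens_diameter, pixel_size):
    """Sub-pixel coordinates for a voxel whose mapping lies on the
    microdisplay's horizontal centerline.  (i, j) is the mapped pixel
    (ordinate, abscissa); the offset is the first ratio
    L_R * D / (g + L_R), converted to pixel units via the pixel size p."""
    first_sum = lens_distance + reconstruction_depth          # first sum value
    first_product = reconstruction_depth * lens_diameter      # first product value
    offset = round(first_product / (first_sum * pixel_size))  # first ratio, in pixels
    # Abscissas: the mapped column itself (G), the first difference (B),
    # and the second sum (R); the ordinate i is shared by all three.
    return {'g': (i, j), 'b': (i, j - offset), 'r': (i, j + offset)}

# Parameters from the description: p = 7.8 um, g = 5 mm, D = 0.98 mm, L_R = 2000 mm
coords = first_subpixel_coords(540, 960, 2000.0, 5.0, 0.98, 0.0078)
```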
S2, determining second sub-pixel coordinates based on the reconstruction plane depth and the lens distance when the mapping of the reconstructed voxel is not in the lateral median line of the microdisplay.
In another example of the present invention, the second sub-pixel coordinates include a second sub-pixel ordinate and a plurality of second sub-pixel abscissas, and the step S2 may include the sub-steps of:
when the mapping of the reconstructed body pixel is not located on the transverse median line of the micro display, acquiring a transverse pixel value of the reconstructed body pixel mapped on the micro display;
calculating a second sum of the lens distance and the reconstructed plane depth;
calculating a second product of the depth of the reconstruction plane and the diameter of the lens;
calculating a second ratio of the second multiplied value to the second summed value;
determining a second sub-pixel abscissa according to the second ratio and the transverse pixel value;
and determining the longitudinal pixel value of the reconstructed body pixel mapped on the micro display as a second sub-pixel longitudinal coordinate.
In an optional embodiment of the present invention, if the mapping of the reconstructed pixel is not located on the transverse centerline of the microdisplay, the longitudinal pixel value of the reconstructed pixel mapped on the microdisplay may be determined as the longitudinal coordinate of the second subpixel; acquiring a transverse pixel value of a reconstructed body pixel mapped on the micro display; calculating a second sum of the lens distance and the reconstructed plane depth; calculating a second product of the depth of the reconstruction plane and the diameter of the lens; calculating a second ratio of the second multiplication value to the second sum value based on the relation of the similar triangles; and determining a second sub-pixel abscissa according to the second ratio and the transverse pixel value.
Further, the step of determining the abscissa of the second sub-pixel based on the second ratio and the horizontal pixel value may comprise the sub-steps of:
calculating a second difference between the horizontal pixel value and the second ratio;
calculating a third difference between the second difference and the second ratio;
and determining the transverse pixel value, the second difference value and the third difference value as a second sub-pixel abscissa respectively.
Referring to fig. 4, fig. 4 is a schematic diagram illustrating a relationship between a reconstructed volume pixel and a second sub-pixel coordinate according to an embodiment of the invention.
In the embodiment of the present invention, since the voxel 2 to be reconstructed is located on the left side of the transverse centerline of the microdisplay, the second subpixel abscissa of the second subpixel G (i.e. the diagonal mark part) may be determined based on the obtained transverse pixel value at this time; further calculating a second difference value of the lateral pixel value and the second ratio value, thereby determining a second sub-pixel abscissa of a second sub-pixel B (solid black identification portion); a third difference of the second difference and the second ratio is calculated to determine a second sub-pixel abscissa of the second sub-pixel R (i.e., the blank identification portion).
Similarly, for the reconstructed volume pixel 3, the horizontal pixel value, the second difference value, and the third difference value can be directly determined as the horizontal coordinates of the second sub-pixel, respectively, due to the symmetric relationship with the volume pixel 2.
For example, in a microdisplay with 1920 × 1080 pixels, the pixel coordinates on the microdisplay corresponding to reconstructed voxel 2 calculated according to the method are: r2 provided by the pixel at i = 540, j = 834; g2 by the pixel at i = 540, j = 1087; and b2 by the pixel at i = 540, j = 960. The pixel coordinates on the microdisplay corresponding to reconstructed voxel 3 are: g3 provided by the pixel at i = 540, j = 834; r3 by the pixel at i = 540, j = 960; and b3 by the pixel at i = 540, j = 1087.
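The off-centerline case of S2 can be sketched in the same style. As before, this is an illustrative sketch under assumed millimetre units, a round-to-nearest-pixel convention and hypothetical naming; the channel-to-abscissa assignment follows the voxel 2 description (G at the mapped column, then B at the second difference, then R at the third difference).

```python
def second_subpixel_coords(i, j, reconstruction_depth, lens_distance, lens_diameter, pixel_size):
    """Sub-pixel coordinates for a voxel whose mapping is NOT on the
    horizontal centerline: the abscissas are the mapped column, the
    second difference (column minus the second ratio), and the third
    difference (the second difference minus the ratio again)."""
    ratio = round(reconstruction_depth * lens_diameter /
                  ((lens_distance + reconstruction_depth) * pixel_size))  # second ratio, in pixels
    second_difference = j - ratio
    third_difference = second_difference - ratio
    return {'g': (i, j), 'b': (i, second_difference), 'r': (i, third_difference)}

# Same assumed parameters as the centerline sketch: L_R = 2000 mm, g = 5 mm,
# D = 0.98 mm, p = 7.8 um; mapped column taken from the voxel 2 example.
coords = second_subpixel_coords(540, 1087, 2000.0, 5.0, 0.98, 0.0078)
```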
Step 206, when receiving the image to be displayed, extracting a reconstruction sub-pixel corresponding to the image to be displayed from the sub-pixel coordinate;
when the image to be displayed is received, the reconstruction sub-pixels needed by the image to be displayed can be respectively obtained from the sub-pixel coordinates.
For example, for reconstructed voxel 1, the reconstruction sub-pixel b1 may be obtained from sub-pixel coordinate A (540, 834), the reconstruction sub-pixel g1 from B (540, 960), and the reconstruction sub-pixel r1 from C (540, 1087), thus providing the reconstruction data basis for voxel 1.
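This extraction step is a simple channel lookup at the stored sub-pixel coordinates. A minimal sketch, assuming the image to be displayed is indexed as image[row][col] -> (r, g, b) tuples and reusing the hypothetical coordinates A, B, C from the example:

```python
# Hypothetical sub-pixel coordinate table for reconstructed voxel 1,
# mirroring the example: b1 from A(540, 834), g1 from B(540, 960),
# r1 from C(540, 1087).
voxel1_coords = {'b': (540, 834), 'g': (540, 960), 'r': (540, 1087)}

def extract_reconstruction_subpixels(image, coords):
    """Pull, for each channel, the value that will contribute to one
    reconstructed voxel from an image indexed as image[row][col] -> (r, g, b)."""
    channel_index = {'r': 0, 'g': 1, 'b': 2}
    return {ch: image[i][j][channel_index[ch]] for ch, (i, j) in coords.items()}
```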
And step 207, constructing a reconstructed body pixel on a reconstructed depth plane corresponding to the depth of the reconstructed plane by using the reconstructed sub-pixel corresponding to the image to be displayed, and generating a target display image.
Referring to fig. 5, fig. 5 is a schematic diagram illustrating a reconstructed voxel according to an embodiment of the invention.
If the pixel arrangement of the microdisplay is a striped RGB arrangement, the pixel size p of the microdisplay is 7.8 μm, the diameter D of a single microlens in a 15 × 15 cm lens array (capable of covering the microdisplay) is 0.98 mm, and the distance g from the lens array to the display is 5 mm. The microdisplay has 1920 × 1080 pixels; the horizontal coordinate of a display pixel is denoted i and the vertical coordinate j, and the first pixel at the corner of the display is i = 1, j = 1. When the preset positive integer K_n is 126, the reconstruction plane depth L_R is 2000 mm. The pixel coordinates on the microdisplay corresponding to reconstructed voxel 1 are: r1 provided by the pixel at i = 540, j = 1086; g1 by the pixel at i = 540, j = 960; and b1 by the pixel at i = 540, j = 834. The pixel coordinates on the microdisplay corresponding to reconstructed voxel 2 are: r2 provided by the pixel at i = 540, j = 834; g2 by the pixel at i = 540, j = 1086; and b2 by the pixel at i = 540, j = 960. The display resolution is thus increased by a factor of 3 compared with a conventional integral imaging light field display.
Alternatively, when the depth of the reconstruction plane changes, execution of step 205 may be skipped to re-determine the corresponding sub-pixel coordinates.
Referring to fig. 6, fig. 6 is a schematic diagram illustrating a reconstructed voxel according to an alternative embodiment of the invention.
In an embodiment of the invention, the microdisplay pixels are arranged in an RGBW pattern, and the integral imaging three-dimensional light field display has 5 reconstruction depth planes. The microdisplay has a pixel size of 7.8 μm; the lens array size is 15 × 15 cm (capable of covering the microdisplay) with a single microlens diameter of 0.98 mm, and the distance g from the lens array to the display is 5 mm. The microdisplay has 1920 × 1080 pixels; the horizontal coordinate of a microdisplay pixel is denoted i and the vertical coordinate j, and the first pixel of the microdisplay is i = 1, j = 1.
When K_n is 125, the reconstruction plane depth L_R is 1602 mm, and the reconstructed voxel 1 on the reconstruction plane at 1602 mm has zero sampling error. When L_R is 1602 mm, the pixel coordinates on the microdisplay corresponding to reconstructed voxel 1 are: r1 provided by the pixel at i = 540, j = 1086; g1 by the pixel at i = 540, j = 960; b1 by the pixel at i = 540, j = 834; and w1 by the pixel at i = 540, j = 1212. Similarly, the voxels reconstructed on the 700 mm, 447 mm, 327 mm and 258 mm depth planes also have zero sampling error, and the microdisplay pixel coordinates corresponding to all voxels on these 5 depth planes can be calculated. Compared with a conventional light field display, the resolution is thus increased by a factor of 2 in each direction, i.e. by a factor of 4 for the light field display as a whole.
Referring to fig. 7, fig. 7 is a schematic diagram illustrating a reconstructed volume pixel according to another embodiment of the invention.
In an embodiment of the invention, the pixels of the microdisplay are arranged in a triangular pattern, and the integral imaging three-dimensional light field display has 5 reconstruction depth planes. The microdisplay has a pixel size of 7.8 μm; the lens array size is 15 × 15 cm (capable of covering the microdisplay) with a single microlens diameter of 0.98 mm, and the lens-array-to-display distance g is 5 mm.
Wherein light respectively emitted from R, G, B sub-pixels of the triangular arrangement of pixels on the microdisplay enters the human eye through the center of the microlens, and a single voxel formed by the reverse extensions of these sub-pixels has zero sampling error. The size of the volume pixel formed by the method is 1/3 of a triangular pixel on a micro display, namely, the resolution can be remarkably improved compared with that of the traditional integrated imaging light field display.
In the embodiment of the invention, after a user selects a display terminal, the reconstruction plane depth can be determined based on the display parameters of the microdisplay, the lens diameter of the lens array and the current exit pupil distance, and the sub-pixel coordinates needed to synthesize the reconstructed voxel are determined based on the reconstruction plane depth and/or the lens diameter. If an image to be displayed is received, the corresponding reconstruction sub-pixels can be obtained according to the sub-pixel coordinates of the image to be displayed on the microdisplay, and finally the reconstruction sub-pixels are used to complete the synthesis of the reconstructed voxel on the reconstruction plane corresponding to the reconstruction plane depth, thereby generating a high-resolution target display image. In this way, voxels are synthesized from reconstruction sub-pixels provided by different pixels on the microdisplay; compared with existing voxel synthesis, the voxel size is only one third, and the display resolution and the corresponding space-bandwidth product of the microdisplay are significantly improved.
Referring to fig. 8, fig. 8 is a block diagram illustrating a high-resolution light field image display apparatus according to a third embodiment of the present invention.
The embodiment of the invention provides a high-resolution light field image display device, which is applied to a display terminal, wherein the display terminal comprises a lens array and a micro display, and the device comprises:
a parameter obtaining module 801, configured to obtain display parameters of a microdisplay, a lens diameter of a lens array, and a current exit pupil distance;
a reconstruction plane depth determining module 802, configured to determine a reconstruction plane depth according to the display parameter, the lens diameter, and the current exit pupil distance;
a sub-pixel coordinate determining module 803, configured to determine sub-pixel coordinates corresponding to the reconstructed volume pixels based on the depth of the reconstruction plane and/or the lens diameter;
a sub-pixel obtaining module 804, configured to extract a reconstructed sub-pixel corresponding to the image to be displayed from the sub-pixel coordinates when the image to be displayed is received;
and a reconstructed volume pixel constructing module 805 configured to construct a reconstructed volume pixel on a reconstructed depth plane corresponding to the depth of the reconstructed plane by using the reconstructed sub-pixel corresponding to the image to be displayed, so as to generate a target display image.
Optionally, the display parameters include a sub-pixel arrangement cycle number, a pixel size, and a lens distance, and the reconstruction plane depth determining module 802 includes:
the depth minimum value determining submodule is used for determining the depth minimum value of the depth of the reconstruction plane according to the difference value of the preset eye photopic distance and the current exit pupil distance;
the candidate reconstruction plane depth calculation operator module is used for substituting the pixel size, the lens distance and the lens diameter into a preset zero ray sampling error formula to obtain a plurality of candidate reconstruction plane depths;
a reconstruction plane depth selection submodule for selecting a candidate reconstruction plane depth greater than or equal to the minimum depth value as a reconstruction plane depth;
wherein the lens distance is the distance between the microdisplay and the lens array.
Optionally, the zero ray sampling error formula is:
(the zero ray sampling error formula is given as an equation image in the original publication and is not reproduced here)

wherein L_R is the reconstruction plane depth, g is the lens distance, p is the pixel size, D is the lens diameter, L_min is the minimum depth value, j is the number of sub-pixel arrangement cycles, K_n is a preset trim positive integer, and n ≥ 1.
Optionally, the sub-pixel coordinates include first sub-pixel coordinates or second sub-pixel coordinates, and the sub-pixel coordinate determining module 803 includes:
a first sub-pixel coordinate determination sub-module for determining a first sub-pixel coordinate based on the reconstruction plane depth, the lens distance and the lens diameter when the mapping of the reconstruction volume pixel is at the transverse median line of the microdisplay;
a second sub-pixel coordinate determination sub-module to determine a second sub-pixel coordinate based on the reconstruction plane depth and the lens distance when the mapping of the reconstruction volume pixel is not at the lateral median line of the microdisplay.
Optionally, the first sub-pixel coordinates comprise a first sub-pixel ordinate and a plurality of first sub-pixel abscissas; a first sub-pixel coordinate determination sub-module comprising:
the first transverse pixel value acquisition unit is used for acquiring the transverse pixel value of the reconstructed volume pixel mapped on the micro display when the mapping of the reconstructed volume pixel is on the transverse middle line of the micro display;
a first sum value calculation unit for calculating a first sum value of the lens distance and the depth of the reconstruction plane;
a first multiplication value calculation unit for calculating a first multiplication value of the reconstruction plane depth and the lens diameter;
a first ratio calculation unit for calculating a first ratio of the first multiplication value to the first sum value;
a sum and difference calculation unit for calculating a second sum and a first difference of the first ratio and the lateral pixel value, respectively;
a first sub-pixel abscissa determining unit for determining the horizontal pixel value, the first difference value, and the second sum value as first sub-pixel abscissas, respectively;
and the first sub-pixel longitudinal coordinate determining unit is used for determining a longitudinal pixel value of the reconstructed pixel mapped on the micro display as a first sub-pixel longitudinal coordinate.
Optionally, the second sub-pixel coordinate comprises a second sub-pixel ordinate and a plurality of second sub-pixel abscissas; a second sub-pixel coordinate determination sub-module comprising:
the second transverse pixel value acquisition unit is used for acquiring the transverse pixel value of the reconstructed volume pixel mapped on the micro display when the mapping of the reconstructed volume pixel is not positioned on the transverse median line of the micro display;
a second sum value calculation unit for calculating a second sum value of the lens distance and the reconstructed plane depth;
a second multiplication value calculation unit for calculating a second multiplication value of the depth of the reconstruction plane and the diameter of the lens;
a second ratio calculation unit for calculating a second ratio of the second multiplied value to the second summed value;
the second sub-pixel horizontal coordinate calculating unit is used for determining a second sub-pixel horizontal coordinate according to the second ratio and the transverse pixel value;
and the second sub-pixel ordinate determining unit is used for determining the longitudinal pixel value of the reconstructed volume pixel mapped on the micro display as a second sub-pixel ordinate.
Optionally, the second sub-pixel abscissa calculating unit is specifically configured to:
calculating a second difference between the horizontal pixel value and the second ratio;
calculating a third difference between the second difference and the second ratio;
and determining the transverse pixel value, the second difference value and the third difference value as a second sub-pixel abscissa respectively.
An embodiment of the present invention further provides an electronic device, which includes a memory and a processor, where the memory stores a computer program, and when the computer program is executed by the processor, the processor executes the steps of the high resolution light field image display method according to any embodiment of the present invention.
Embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the high resolution light field image display method according to any embodiment of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described devices, modules, sub-modules, and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, and various media capable of storing program codes.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (6)

1. A high-resolution light field image display method is applied to a display terminal, wherein the display terminal comprises a lens array and a micro display, and the method comprises the following steps:
acquiring display parameters of the microdisplay, the lens diameter of the lens array and the current exit pupil distance;
determining the depth of a reconstruction plane according to the display parameters, the lens diameter and the current exit pupil distance;
determining sub-pixel coordinates corresponding to reconstructed volume pixels based on the reconstructed plane depth and/or the lens diameter;
when an image to be displayed is received, acquiring a reconstruction sub-pixel corresponding to the image to be displayed from the sub-pixel coordinate;
adopting the reconstruction sub-pixel corresponding to the image to be displayed to construct the reconstruction volume pixel on the reconstruction depth plane corresponding to the depth of the reconstruction plane, and generating a target display image;
the display parameters comprise the number of subpixel arrangement cycles, the pixel size and the lens distance, and the step of determining the depth of the reconstruction plane according to the display parameters, the lens diameter and the current exit pupil distance comprises the following steps:
determining the minimum depth value of the depth of the reconstruction plane according to the difference value between the preset eye photopic distance and the current exit pupil distance;
substituting the pixel size, the lens distance and the lens diameter into a preset zero ray sampling error formula to obtain a plurality of candidate reconstructed plane depths;
selecting the candidate reconstruction plane depth greater than or equal to the depth minimum as a reconstruction plane depth;
wherein the lens distance is a distance between the microdisplay and the lens array;
the sub-pixel coordinates comprise first sub-pixel coordinates or second sub-pixel coordinates, and the step of determining sub-pixel coordinates of a reconstructed volume pixel based on the reconstructed plane depth and/or the lens diameter comprises:
determining the first sub-pixel coordinate based on the reconstruction plane depth, the lens distance, and the lens diameter when the mapping of the reconstruction volume pixel is at a lateral centerline of the microdisplay;
determining the second sub-pixel coordinate based on the reconstruction plane depth and the lens distance when the mapping of the reconstruction volume pixel is not at a lateral centerline of the microdisplay;
the first sub-pixel coordinate comprises a first sub-pixel ordinate and a plurality of first sub-pixel abscissas; said step of determining said first sub-pixel coordinate based on said reconstruction plane depth, said lens distance and said lens diameter when said mapping of said reconstruction volume pixel is at a lateral median line of said microdisplay, comprising:
when the mapping of the reconstructed volume pixel is on a transverse middle line of the micro display, acquiring a transverse pixel coordinate value of the reconstructed volume pixel mapped on the micro display;
calculating a first sum of the lens distance and the reconstruction plane depth;
calculating a first product of the reconstructed plane depth and the lens diameter;
calculating a first ratio of the first multiplication value to the first sum value;
respectively calculating a second sum and a first difference of the first ratio and the horizontal pixel coordinate value;
determining the horizontal pixel coordinate value, the first difference value and the second sum value as the first sub-pixel horizontal coordinate, respectively;
determining a longitudinal pixel coordinate value of the reconstructed volume pixel mapped on the micro display as the first sub-pixel longitudinal coordinate;
the second sub-pixel coordinates comprise a second sub-pixel ordinate and a plurality of second sub-pixel abscissas; said step of determining said second sub-pixel coordinate based on said reconstructed plane depth and said lens distance when said reconstructed volume pixel map is not in a lateral centerline of said microdisplay, comprising:
when the mapping of the reconstructed volume pixel is not located on the transverse middle line of the micro display, acquiring a transverse pixel coordinate value of the reconstructed volume pixel mapped on the micro display;
calculating a second sum of the lens distance and the reconstruction plane depth;
calculating a second product of the reconstructed plane depth and the lens diameter;
calculating a second ratio of the second multiplication value to the second sum value;
determining the horizontal coordinate of the second sub-pixel according to the second ratio and the horizontal pixel coordinate value;
and determining the longitudinal pixel coordinate value of the reconstructed volume pixel mapped on the micro display as the second sub-pixel longitudinal coordinate.
2. The method of claim 1, wherein the zero ray sampling error formula is:
(the zero ray sampling error formula is given as equation images in the original publication and is not reproduced here)

wherein L_R is the reconstruction plane depth, g is the lens distance, p is the pixel size, D is the lens diameter, L_min is the minimum depth value, j is the number of sub-pixel arrangement cycles, K_n is a preset trim positive integer, and n ≥ 1.
3. the method of claim 1, wherein said step of determining said second sub-pixel abscissa based on said second ratio and said lateral pixel coordinate value comprises:
calculating a second difference value between the transverse pixel coordinate value and the second ratio;
calculating a third difference between the second difference and the second ratio;
and respectively determining the transverse pixel coordinate value, the second difference value and the third difference value as the second sub-pixel transverse coordinate.
4. A high-resolution light field image display device, applied to a display terminal comprising a lens array and a microdisplay, the device comprising:
a parameter acquisition module, configured to acquire display parameters of the microdisplay, a lens diameter of the lens array, and a current exit pupil distance;
a reconstruction plane depth determination module, configured to determine a reconstruction plane depth according to the display parameters, the lens diameter, and the current exit pupil distance;
a sub-pixel coordinate determination module, configured to determine sub-pixel coordinates corresponding to a reconstruction volume pixel based on the reconstruction plane depth and/or the lens diameter;
a sub-pixel acquisition module, configured to extract, when an image to be displayed is received, reconstruction sub-pixels corresponding to the image to be displayed at the sub-pixel coordinates;
a reconstruction volume pixel construction module, configured to construct the reconstruction volume pixel on a reconstruction depth plane corresponding to the reconstruction plane depth by using the reconstruction sub-pixels corresponding to the image to be displayed, and to generate a target display image;
wherein the display parameters include a number of sub-pixel arrangement periods, a pixel size, and a lens distance, and the reconstruction plane depth determination module comprises:
a depth minimum determination submodule, configured to determine a depth minimum of the reconstruction plane depth according to a difference between a preset eye photopic distance and the current exit pupil distance;
a candidate reconstruction plane depth calculation submodule, configured to substitute the pixel size, the lens distance, and the lens diameter into a preset zero-ray-sampling-error formula to obtain a plurality of candidate reconstruction plane depths;
a reconstruction plane depth selection submodule, configured to select a candidate reconstruction plane depth greater than or equal to the depth minimum as the reconstruction plane depth;
wherein the lens distance is the distance between the microdisplay and the lens array;
the sub-pixel coordinates include first sub-pixel coordinates or second sub-pixel coordinates, and the sub-pixel coordinate determination module comprises:
a first sub-pixel coordinate determination submodule, configured to determine the first sub-pixel coordinates based on the reconstruction plane depth, the lens distance, and the lens diameter when the mapping of the reconstruction volume pixel is on a lateral centerline of the microdisplay;
a second sub-pixel coordinate determination submodule, configured to determine the second sub-pixel coordinates based on the reconstruction plane depth and the lens distance when the mapping of the reconstruction volume pixel is not on the lateral centerline of the microdisplay;
the first sub-pixel coordinates include a first sub-pixel ordinate and a plurality of first sub-pixel abscissas, and the first sub-pixel coordinate determination submodule comprises:
a first lateral pixel coordinate value acquisition unit, configured to acquire, when the mapping of the reconstruction volume pixel is on the lateral centerline of the microdisplay, a lateral pixel coordinate value of the reconstruction volume pixel mapped on the microdisplay;
a first sum calculation unit, configured to calculate a first sum of the lens distance and the reconstruction plane depth;
a first product calculation unit, configured to calculate a first product of the reconstruction plane depth and the lens diameter;
a first ratio calculation unit, configured to calculate a first ratio of the first product to the first sum;
a sum-and-difference calculation unit, configured to calculate a second sum and a first difference of the first ratio and the lateral pixel coordinate value, respectively;
a first sub-pixel abscissa determination unit, configured to determine the lateral pixel coordinate value, the first difference, and the second sum, respectively, as the first sub-pixel abscissas;
a first sub-pixel ordinate determination unit, configured to determine a longitudinal pixel coordinate value of the reconstruction volume pixel mapped on the microdisplay as the first sub-pixel ordinate;
the second sub-pixel coordinates include a second sub-pixel ordinate and a plurality of second sub-pixel abscissas, and the second sub-pixel coordinate determination submodule comprises:
a second lateral pixel coordinate value acquisition unit, configured to acquire, when the mapping of the reconstruction volume pixel is not on the lateral centerline of the microdisplay, a lateral pixel coordinate value of the reconstruction volume pixel mapped on the microdisplay;
a second sum calculation unit, configured to calculate a second sum of the lens distance and the reconstruction plane depth;
a second product calculation unit, configured to calculate a second product of the reconstruction plane depth and the lens diameter;
a second ratio calculation unit, configured to calculate a second ratio of the second product to the second sum;
a second sub-pixel abscissa calculation unit, configured to determine the second sub-pixel abscissas according to the second ratio and the lateral pixel coordinate value;
and a second sub-pixel ordinate determination unit, configured to determine a longitudinal pixel coordinate value of the reconstruction volume pixel mapped on the microdisplay as the second sub-pixel ordinate.
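The first-coordinate path above (first sum, first product, first ratio, then sum and difference) reduces to a short computation: the ratio of depth times lens diameter to lens distance plus depth gives a symmetric offset around the mapped lateral coordinate. A minimal sketch of that centerline case; all names, argument order, and units are illustrative assumptions, not the patent's notation:

```python
def first_subpixel_coordinates(x, y, depth, lens_distance, lens_diameter):
    """Centerline case: three abscissas sharing one ordinate.

    first sum     = lens_distance + depth
    first product = depth * lens_diameter
    first ratio   = first product / first sum
    abscissas     = [x, x - ratio, x + ratio]
                    (the coordinate itself, the first difference, the second sum)
    """
    first_sum = lens_distance + depth
    first_product = depth * lens_diameter
    ratio = first_product / first_sum
    return [x, x - ratio, x + ratio], y
```

Compared with the off-centerline case of claim 3, the only difference is symmetry: here the two extra sub-pixels straddle the mapped coordinate, whereas off the centerline both lie on the same side of it.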
5. An electronic device, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the high-resolution light field image display method according to any one of claims 1 to 3.
6. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the high-resolution light field image display method according to any one of claims 1 to 3.
CN202110873615.5A 2021-07-30 2021-07-30 High-resolution light field image display method, device, equipment and medium Active CN113556529B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110873615.5A CN113556529B (en) 2021-07-30 2021-07-30 High-resolution light field image display method, device, equipment and medium


Publications (2)

Publication Number Publication Date
CN113556529A CN113556529A (en) 2021-10-26
CN113556529B true CN113556529B (en) 2022-07-19

Family

ID=78133397

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110873615.5A Active CN113556529B (en) 2021-07-30 2021-07-30 High-resolution light field image display method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN113556529B (en)

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102006010971A1 (en) * 2005-03-09 2006-09-21 Newsight Gmbh Autostereoscopic viewing method e.g. for images, involves having arrays providing defined propagation directions for light which emerge from one of arrays through one array from light source and oriented to array of transparent elements
US8290358B1 (en) * 2007-06-25 2012-10-16 Adobe Systems Incorporated Methods and apparatus for light-field imaging
JP5618943B2 (en) * 2011-08-19 2014-11-05 キヤノン株式会社 Image processing method, imaging apparatus, image processing apparatus, and image processing program
US9874749B2 (en) * 2013-11-27 2018-01-23 Magic Leap, Inc. Virtual and augmented reality systems and methods
JP6470530B2 (en) * 2013-12-06 2019-02-13 キヤノン株式会社 Image processing apparatus, image processing method, program, and recording medium
US20160241797A1 (en) * 2015-02-17 2016-08-18 Canon Kabushiki Kaisha Devices, systems, and methods for single-shot high-resolution multispectral image acquisition
EP3422722A1 (en) * 2017-06-30 2019-01-02 Thomson Licensing Method for encoding a matrix of image views obtained from data acquired by a plenoptic camera
CN110632767B (en) * 2019-10-30 2022-05-24 京东方科技集团股份有限公司 Display device and display method thereof
CN111624784B (en) * 2020-06-23 2022-10-18 京东方科技集团股份有限公司 Light field display device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant