CN115220240A - Method for generating stereoscopic image data adapting to eye position and display system - Google Patents



Publication number
CN115220240A
Authority
CN
China
Prior art keywords
stereoscopic image
image
unit
image data
stereoscopic
Prior art date
Legal status
Granted
Application number
CN202110415722.3A
Other languages
Chinese (zh)
Other versions
CN115220240B (en)
Inventor
杨钧翔
丁志宏
张凯杰
侯昕佑
施智维
陈韦安
陈冠宇
Current Assignee
Mirage Start Co ltd
Original Assignee
Mirage Start Co ltd
Priority date
Filing date
Publication date
Application filed by Mirage Start Co ltd filed Critical Mirage Start Co ltd
Priority to CN202110415722.3A priority Critical patent/CN115220240B/en
Publication of CN115220240A publication Critical patent/CN115220240A/en
Application granted granted Critical
Publication of CN115220240B publication Critical patent/CN115220240B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical



Classifications

    • G — PHYSICS
    • G02 — OPTICS
    • G02B — OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/20Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
    • G02B30/26Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type
    • G02B30/27Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type involving lenticular arrays

Abstract

The application provides a method and a display system for generating stereoscopic image data adapted to the eye position, with the aim of presenting a stereoscopic image in a stereoscopic space. In the method, stereoscopic image data describing the three-dimensional spatial information of the stereoscopic image is first obtained, and the position of the user's eyes is detected. A visible range is formed from the eye position and the ray-tracing information between the eyes and each lens unit in the multi-optical-component module; the visible range corresponding to each lens unit may cover a portion of the stereoscopic image. Combined with the physical characteristics of the display panel and the multi-optical-component module, the unit images to be displayed on the display panel are calculated. The plurality of unit images forms an integrated image recording the stereoscopic image data adapted to the eye position, and the integrated image is used to reproduce the stereoscopic image.

Description

Method for generating stereoscopic image data adaptive to eye position and display system
Technical Field
The present application relates to display technology for stereoscopic images, and more particularly to a method and a display system for generating stereoscopic image data adapted to the eye position, in which the stereoscopic image data is adjusted according to the detected eye position.
Background
Conventional methods for displaying stereoscopic images mostly exploit the parallax produced when the two eyes view the same object: the two eyes are shown two still images with a slight difference, or two different image streams are played in sequence. The stereoscopic image is usually viewed through special glasses, such as red-blue (anaglyph) glasses, polarized glasses, or shutter glasses, so that the binocular parallax is fused in the brain into a stereoscopic effect with depth.
Another approach is a stereoscopic display device in which an optical assembly inside the display lets the viewer's two eyes receive images with parallax at specific viewing angles without special glasses, likewise producing a stereoscopic effect with depth.
Prior-art techniques for presenting a viewable stereoscopic image do not take the user's viewing position into account, and may even require the user to move to a fixed viewing position; the prior art offers no solution to this problem.
Disclosure of Invention
To improve upon prior-art stereoscopic image display methods that do not consider the eye position, the present application discloses a method for generating stereoscopic image data adapted to the eye position, and a display system.
According to an embodiment, the display system comprises a multi-optical-component module composed of a plurality of lens units and used to present a stereoscopic image; a display panel for displaying an integrated image, where the integrated image is composed of a plurality of unit images and, once displayed, presents a stereoscopic image through the multi-optical-component module; a display driving unit for driving the display panel to display the integrated image; and an image processing unit, which performs the method for generating stereoscopic image data adapted to the eye position so as to generate the integrated image, processing the stereoscopic image data describing the three-dimensional spatial information of the stereoscopic image into an integrated image corresponding to the eye position.
The integrated image is formed as follows: obtain the stereoscopic image data describing the stereoscopic image, detect the position of the user's eyes, and obtain the physical characteristics of the display panel and the multi-optical-component module in the display system. A visible range is then formed from the eye position and the ray-tracing information between the eyes and each lens unit in the multi-optical-component module. A unit image is recorded for the portion of the stereoscopic image covered by the visible range of each lens unit, taking into account the physical characteristics of the display panel and the multi-optical-component module; the unit images corresponding to the plurality of lens units are generated at the display pixel positions of the display panel, and together they form the integrated image recording the stereoscopic image data adapted to the eye position.
Preferably, the stereoscopic image data further records color information describing the stereoscopic image, and the three-dimensional spatial information includes the coordinate values and chrominance values of each pixel displayed in the stereoscopic space. The physical characteristics of the display panel and the multi-optical-component module include the coordinates, size, and refractive index of each lens unit in the module, and the spatial relationship between each lens unit and the display panel.
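The stereoscopic image data described above — per-pixel coordinates in the stereoscopic space plus a chrominance value — can be sketched as a simple point record. The patent does not specify a data layout; the names and the packed-RGB encoding below are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class StereoPoint:
    # three-dimensional spatial information: position in the stereoscopic space
    x: float
    y: float
    z: float
    # color information: chrominance value (packed RGB here for simplicity)
    chroma: int

# stereoscopic image data = the set of points describing the stereoscopic image
stereo_image = [
    StereoPoint(0.0, 0.0, 30.0, 0xFF0000),
    StereoPoint(1.0, 0.0, 30.0, 0x00FF00),
]
```

Each point can then be tested against the visible range of a lens unit to decide which unit images it contributes to.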
Furthermore, the unit images corresponding to different lens units have different positions and sizes because their ray traces differ; the resulting integrated image is displayed by the display panel and projected by the multi-optical-component module to reproduce a stereoscopic image adapted to the eye position.
Furthermore, when the stereoscopic image data describes a dynamic stereoscopic image, the method for generating eye-position-adapted stereoscopic image data is repeated to generate a sequence of consecutive integrated images adapted to the eye position, so that the dynamic stereoscopic image is reproduced through the multi-optical-component module.
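The per-frame repetition for dynamic stereoscopic images can be sketched as a loop that re-detects the eye position before rendering each integrated image; `detect_eye` and `render_integrated` stand in for the detection and generation steps described above and are hypothetical names:

```python
def integrated_image_stream(frames, detect_eye, render_integrated):
    """Repeat the eye-position-adapted generation method once per frame
    of a dynamic stereoscopic image, yielding consecutive integrated images."""
    for frame in frames:
        eye = detect_eye()  # re-detect each frame: the user may have moved
        yield render_integrated(frame, eye)
```

Because the eye position is sampled every frame, a user who changes viewing position mid-sequence receives integrated images adapted to the new position from the next frame onward.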
For a better understanding of the features and technical content of the present invention, reference should be made to the following detailed description and accompanying drawings, which are provided for purposes of illustration and description only and are not intended to limit the invention.
Drawings
FIG. 1 is a schematic diagram of an embodiment of a display device in a stereoscopic display system;
FIG. 2 is a schematic diagram of a display apparatus in a stereoscopic display system;
FIG. 3 is a block diagram of an embodiment of a circuit of a stereoscopic display system;
FIG. 4 is a diagram illustrating an embodiment of forming a cell image by ray tracing according to an eye position in a method for generating stereoscopic image data adapted to the eye position;
FIGS. 5A and 5B are diagrams illustrating a second embodiment of forming a unit image by ray tracing in a method for generating stereoscopic image data adapted to an eye position;
FIG. 6 is a flowchart illustrating an embodiment of forming an integrated image in a method for generating eye-position adaptive stereo image data; and
FIG. 7 is a flowchart illustrating an embodiment of a method for generating stereoscopic image data adapted to an eye position.
Detailed Description
The application discloses a method for generating stereoscopic image data adapted to the eye position, and a display system. The disclosed method is suitable for a display device equipped with a multi-optical-component module for displaying stereoscopic images; an embodiment of such a display device is shown schematically in fig. 1.
This figure shows the structure of a display device in the stereoscopic image display system. The display can be a liquid crystal display (LCD) panel with a backlight module (not shown), without excluding other display types with a backlight, or an organic light-emitting diode (OLED) panel with self-luminous pixels. The display image 11 shown on the display panel 1 is an integrated image (integral image) generated by the method of fig. 6. The integrated image is a planar image composed of a plurality of unit images (elemental images); in the illustrated embodiment each unit image corresponds to a lens group of the multi-optical-component module 2, and the correspondence between unit images on the display image 11 and lens group positions may be one-to-one, one-to-many, or many-to-one.
The multi-optical-component module 2 comprises a base 21 and a lens portion 22. Each optical component of the lens portion 22 can be a lens group composed of one or more convex and concave lenses, and the optical components together form a lens matrix. The system presents stereoscopic images through the multi-optical-component module 2; under this technical concept, the position and angle of the viewing position 5 relative to the display device affect how the integrated image and the unit images are formed. Besides general image processing, the image processing unit 12 of the display device executes the method for forming an integrated image: it adjusts the reference image, computes the unit images, and forms the integrated image according to the viewing position 5, the position at which the stereoscopic image is displayed, the physical characteristics of the lens groups in the multi-optical-component module 2, and the spatial relationships between the components. According to this eye-position-adapted embodiment, if the user changes the viewing position 5, the system can adaptively provide appropriate viewing content at the new viewing position 5.
The display device shown in the figure can be an electronic device with a flat screen, such as a mobile phone, tablet, or computer. The display panel 1 is arranged in the lower layer and displays a flat image that has not yet been optically recombined, chiefly the integrated image; the multi-optical-component module 2 is arranged in the upper layer, regulates the light field by controlling the emission angles of the light, and redistributes and recombines the originally flat image. In this embodiment, the integrated image is redistributed and recombined by the multi-optical-component module 2 into the displayed stereoscopic image.
The multi-optical-component module 2 is composed of a plurality of lens groups arranged in an array, forming the lens portion 22. Its physical characteristics, such as the refractive index and transmittance determined by the material and lens curvature, together with the number and arrangement of lens groups in the lens matrix and their registration with the display panel 1, determine properties of the displayed content such as the height, viewing-angle range, and sharpness of the stereoscopic image.
According to various embodiments, each lens group may be a single lens, a lens array, a lenticular lens, a Fresnel lens, a pinhole, a pinhole array, a light barrier, or a point light source used for imaging. The display device for displaying stereoscopic images may use a single lens set or an array of two or more lens sets; the images are shown on the display panel and imaged at a default position through the lenses.
Fig. 2 is another schematic diagram of an embodiment of a display device in a stereoscopic display system, in which a display panel 1 of a stereoscopic display device displays an integrated image composed of unit images, and the stereoscopic image is reproduced through a multi-optical module 2.
In the illustrated embodiment, as in fig. 1, the user sees a floating stereoscopic "3D" image from the viewing position 5. This stereoscopic image originates from the display image 11 shown on the display panel 1, an integrated image formed from a plurality of unit images, each unit image corresponding to a single optical component, i.e. a lens group, of the multi-optical-component module 2.
Since each lens group on the lens portion 22 occupies a different position, when the multi-optical-component module 2 projects a floating stereoscopic image to be seen from a given viewing position 5, each lens group projects its image toward a set spatial position and must therefore project a different image; that is, the unit images corresponding to the individual optical components differ from one another.
For example, when a floating stereoscopic image is to be projected, an optical component to the left of the projected image should project a unit image whose projection angle is biased toward the left side of the stereoscopic image; an optical component to the right of the image projects a unit image whose projection angle is biased toward the right side; and an optical component below the stereoscopic image weights its unit image toward upward projection, projecting the unit image to the human eye as a real image. The floating stereoscopic image here is displayed at a distance from the display plane, appearing to float in the air; another embodiment does not exclude the effect of sinking below the display plane.
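The position-dependent bias described above can be quantified as the direction from each lens toward a point of the floating image. A 2-D sketch (hypothetical helper; coordinates in panel units, image height above the panel):

```python
import math

def projection_angle_deg(lens_x, image_x, image_height):
    """Angle, in degrees from the panel normal, at which a lens at
    lens_x must project toward a floating image point at
    (image_x, image_height). Positive means tilted toward +x, so a
    lens left of the image projects rightward, and vice versa."""
    return math.degrees(math.atan2(image_x - lens_x, image_height))
```

For a lens 10 units left of an image floating 50 units above the panel, the required tilt is about 11.3 degrees toward the image; the mirror-image lens on the right needs the opposite tilt, which is why their unit images must differ.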
The stereoscopic image display system described in the above embodiments can be implemented by a circuit system, and the embodiment can refer to the circuit block diagram shown in fig. 3.
The stereoscopic image display system implements the method for generating eye-position-adapted stereoscopic image data in hardware and software. The hardware is a display device comprising mutually connected circuit units as shown in the figure. Its main assembly is a multi-optical-component module 301 composed of a plurality of single optical components for presenting a stereoscopic image; as in the embodiments of fig. 1 and fig. 2, each optical component is a lens group, which may consist of one or more convex and concave lenses, the optical components together forming a lens matrix. The display system includes a display unit 303 with a display panel for displaying an integrated image, which is projected through the multi-optical-component module 301 to present a stereoscopic image.
The display system includes a display driving unit 305, which may be the driving circuit of the display panel and generates image control signals to drive the display unit 303 to display the integrated image. It also includes an image processing unit 307, which may be an image processing integrated circuit such as an image signal processor (ISP), or a module implemented in software, for generating the stereoscopic image; and a storage unit 311 connected to store stereoscopic image data received through an input interface unit 309. The storage unit 311 is a system memory that temporarily holds image data, system operation instructions, and calculation instructions; it provides the instruction set and related image data for the calculations and also serves as a buffer for files generated during system operation.
The image processing unit 307 is further electrically connected to an eye detection unit 313 for detecting the position of the user's eyes. The eye detection unit 313 may be a circuit module, or an independent device disposed near the display unit 303 and the multi-optical-component module 301, that acquires the user's face image with a camera (hardware) and locates the eyes from facial features by image processing (software). The eye position may refer to one or both of the user's eyes, is described by a set of coordinates, and may be transformed into the same coordinate system as the stereoscopic display device. After the eye position is obtained, the image processing unit 307 obtains the stereoscopic image data adapted to that eye position from the storage unit 311. Conversely, when the system records image data of the stereoscopic space, it first obtains the user's eye position in software or through the eye detection unit 313, then calculates the unit image corresponding to each lens unit from the eye position, the stereoscopic image to be displayed, and the information of each lens unit in the multi-optical-component module 301, and stores the result in the storage unit 311.
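Transforming the detected eye position from the camera's coordinate system into the display device's coordinate system, as mentioned above, is a rigid transform. A minimal sketch, assuming a known 3x3 rotation `R` and offset `t` from camera calibration (both hypothetical here):

```python
def camera_to_display(eye_cam, R, t):
    """p_display = R @ p_cam + t, with R a 3x3 nested list and t a
    3-vector, mapping an eye position detected in camera coordinates
    into the stereoscopic display device's coordinate system."""
    return tuple(
        sum(R[i][j] * eye_cam[j] for j in range(3)) + t[i]
        for i in range(3)
    )
```

With the eye position in display coordinates, the ray tracing toward each lens unit can be done directly in the coordinate system that also describes the lens matrix and the floating stereoscopic image.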
The image processing unit 307, input interface unit 309, storage unit 311, and eye detection unit 313 together implement the operation circuit 300 for generating eye-position-adapted stereoscopic image data. The operation circuit 300 is connected to an external stereoscopic image source 30 through the input interface unit 309; when the image processing unit 307 runs the generation method, it first receives, through the input interface unit 309, stereoscopic image data describing the three-dimensional spatial information of a stereoscopic image. The stereoscopic image source 30 may be stereoscopic image data drawn by specific software and hardware, recording information such as the stereoscopic coordinates and chromaticity of the image, i.e. its color information and three-dimensional spatial information; in another embodiment the source may be a two-dimensional planar image together with a depth map.
A spatial relative relationship is then established from the stereoscopic image data. In practice this relationship is embodied in a reference image, which reflects the stereoscopic image to be finally displayed and is set according to the stereoscopic image the user wishes to present. The system then calculates the unit image corresponding to each optical component from the eye position, the reference image, and the physical information of the multi-optical-component module 301; the unit images corresponding to the components form an integrated image for the display unit 303, which is driven and displayed by the display driving unit 305, and the stereoscopic image is then presented through the multi-optical-component module 301.
The physical information of the multi-optical-component module mainly concerns the physical characteristics of each optical component, such as each lens unit forming the lens array: its optical characteristics (e.g. refractive index), its arrangement position, and the spacing between adjacent lens units. It further includes at least the spatial relationship between the spatial position at which the stereoscopic image is projected and each optical component, such as the distance and relative angle of the stereoscopic image from each lens group, and the spatial relationship between each optical component and the display panel of the display unit 303, such as their separation.
These spatial relationships can be expressed in a common spatial coordinate system: the distance and relative angle between the stereoscopic image and each optical component follow from the spatial coordinates of the image and the relative coordinates of each component, from which the relative positions among the optical components and their distances to the display panel are also obtained. The spatial relationship may further include the relative position of each optical component on the multi-optical-component module and the matching of its distance to the display panel with the pixel size. Given these spatial relationships, the stereoscopic image data to be displayed is input to the generation method, which sets the oblique display angle of the stereoscopic image according to the user's viewing position, performs ray tracing from the eye position, computes the unit images, and forms an integrated image adapted to the eye position.
Fig. 4 is a diagram illustrating an embodiment of forming a unit image in a ray tracing manner according to an eye position in a method for generating stereoscopic image data adapted to the eye position, wherein the description of forming the unit image and the integrated image can be referred to the flowchart of fig. 6.
The illustration schematically shows a user (eyes 42) viewing a stereoscopic image 410 above a stereoscopic image display device: the image shown on the display panel 40 is projected into the stereoscopic space through the array of lens groups in the multi-optical-component module, presenting the floating stereoscopic image 410 (here the stereoscopic characters "3D"). The method first determines the stereoscopic image to be displayed, for example by obtaining stereoscopic image data from a stereoscopic image source (step S601, fig. 6), and establishes a corresponding reference image, i.e. the stereoscopic image 410 in the schematic of fig. 4 (step S603, fig. 6). The stereoscopic image information of the reference image may include color information and three-dimensional spatial information describing the stereoscopic image 410, each image pixel in the space carrying a set of coordinate values (x, y, z) and a chrominance value; the data may also take the form of a planar image and its depth map. An eye position is then set, or the real user's eye position is obtained by detection, together with the physical information between the eye position and the multi-optical-component module (step S605). The physical information of the multi-optical components includes the size and characteristics of the optical components (e.g. lens groups), such as the set coordinates, size, and refractive index of each single lens group and of the lens matrix, the spatial position onto which the components project, the spatial relationship (e.g. distance) between each optical component and the display panel, and the spatial relationship between the projection position and each optical component.
When the stereoscopic image is reproduced, the coordinate values (x, y, z) describing it can be matched with the depth value (z value) of each pixel of the planar image recorded in the depth map, and the chrominance value added, so that each part of the image is displayed at the correct spatial position and with the correct color. The system then establishes the reference image from the received stereoscopic image data, the user's eye position, and the projection position of the stereoscopic image; the reference image reflects the three-dimensional coordinate values and chrominance values of the displayed image. In one embodiment the input original stereoscopic image is transformed into the reference image by a coordinate transformation, with a set of transformation parameters calculated according to a coordinate transformation algorithm.
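The planar-image-plus-depth-map representation described above can be expanded into the per-pixel coordinate and chrominance values that the reference image needs. A minimal sketch, assuming a uniform pixel pitch (the mapping of pixel indices to spatial units is an illustrative assumption):

```python
def expand_depth_map(plane_image, depth_map, pitch=1.0):
    """Combine a planar image and its depth map into stereoscopic image
    data: one (x, y, z, chroma) tuple per pixel, where z comes from the
    depth map and chroma from the planar image."""
    points = []
    for row, (colors, depths) in enumerate(zip(plane_image, depth_map)):
        for col, (chroma, z) in enumerate(zip(colors, depths)):
            points.append((col * pitch, row * pitch, z, chroma))
    return points
```

The resulting point list is the same shape of data as stereoscopic image data delivered directly in coordinate-plus-chrominance form, so the rest of the pipeline need not distinguish the two source formats.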
Once the eye position is obtained, ray-tracing information between the eye position and each lens unit is established. As shown in the figure, the ray-tracing information connects the eye 42, each lens unit (411, 412), and the corresponding unit image (401, 402). According to an embodiment, the edge position of the eye 42 and the edge positions of each lens unit in the multi-optical-component module form a visible range (RoV), and each visible range can cover part of the stereoscopic image 410. Taking fig. 4 as an example, ray tracing between the eye 42 and the first lens unit 411 forms a first visible range 421, and ray tracing between the eye 42 and the second lens unit 412 forms a second visible range 422; in the same way the eye 42 forms a visible range with every lens unit in the module. Each visible range covers part of the stereoscopic image 410, from which a plurality of unit images (401, 402) of different content and size are calculated.
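In a 2-D cross-section, the visible-range construction above amounts to extending the two rays from the eye through the edges of one lens unit onto the plane of the floating image; the interval they cut out is that lens's RoV. A sketch with illustrative coordinates (eye and lens edges as (x, z) points, panel at z = 0, image plane at z = z_image):

```python
def visible_range(eye, lens_left_edge, lens_right_edge, z_image):
    """Extend the rays from the eye through the two edges of one lens
    unit to the stereo-image plane z = z_image; the covered x-interval
    is that lens unit's visible range (RoV)."""
    ex, ez = eye

    def intersect(lx, lz):
        t = (z_image - ez) / (lz - ez)  # ray parameter at the image plane
        return ex + (lx - ex) * t

    a = intersect(*lens_left_edge)
    b = intersect(*lens_right_edge)
    return (min(a, b), max(a, b))
```

For an eye directly above a 2-unit-wide lens, with the image plane a tenth of the way down from the eye to the lens, the covered interval is slightly narrower than the lens itself; lenses off to the side produce shifted, asymmetric intervals, which is why each lens unit sees a different part of the stereoscopic image.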
In this example, from the obtained first visible range 421 and second visible range 422, and considering the optical characteristics of the corresponding first lens unit 411 and second lens unit 412, such as lens thickness, area, surface curvature, and refractive index, together with the distance from the display panel, the corresponding first unit image 401 and second unit image 402 are calculated. The image information of the three-dimensional space is then recorded from the ray-tracing information of the lens units and the reference image, mapping the part of the stereoscopic image 410 covered by each visible range onto display pixel positions of the display panel (step S607); specifically, the pixel information (three-dimensional coordinates and chrominance values) of the stereoscopic image 410 in the stereoscopic space is recorded in memory as the pixel values of the display panel that the multi-optical-component module will project.
In an embodiment, a coordinate transformation function is established between the original stereoscopic image information and the reference image, and the algorithm converts the reference image into the unit image corresponding to each optical component (e.g. each lens) according to the hardware characteristics, such as the physical information of each lens unit and the display panel, and the coordinate transformation function; the unit images corresponding to the lens units have different positions and sizes because their ray traces differ (step S609). The unit images corresponding to the lens units of the multi-optical-component module are formed on the display panel and constitute an integrated image recording the eye-position-adapted stereoscopic image 410 (step S611), which can then be reproduced according to the position of the eye 42.
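Steps S607 to S611 above can be sketched, in a deliberately simplified 1-D form, as sampling the slice of the reference image covered by each lens's visible range into a fixed-width unit image (flipped, since each lens projects a real image) and then tiling the unit images into the integrated image. Names and the 1-D simplification are illustrative, not the patent's actual algorithm:

```python
def render_unit_image(reference, lo, hi, n_pixels):
    """Resample the span [lo, hi) (normalized 0..1 across the reference
    image) into n_pixels samples, flipped for real-image projection."""
    width = len(reference)
    unit = []
    for i in range(n_pixels):
        u = lo + (hi - lo) * (i + 0.5) / n_pixels  # sample center
        idx = min(width - 1, max(0, int(u * width)))
        unit.append(reference[idx])
    return unit[::-1]

def assemble_integrated_image(unit_images):
    """Tile the per-lens unit images side by side (1-D simplification)."""
    integrated = []
    for unit in unit_images:
        integrated.extend(unit)
    return integrated
```

Because each lens's (lo, hi) span depends on the eye position, moving the eye changes every unit image and therefore the whole integrated image, which is the adaptation the method is named for.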
Fig. 4 also shows that lens units at different positions correspond to unit images of different position, size, and content. When the position of the eye 42 changes, the ray-tracing result changes, and the method of fig. 6 therefore produces a different final integrated image. FIGS. 5A and 5B next illustrate another embodiment of forming a unit image by ray tracing.
Figs. 5A and 5B schematically show a coordinate diagram whose X (or Y) and Z axes are referenced to the stereoscopic image display system. The lens unit 50 near the origin (0) represents a single lens of the multi-optical-component module, or a lens group of the display apparatus of fig. 1 or 2, and serves here to describe the optical paths formed with the edges of the eye 52. Note that the eye 52 may represent one or both of the user's eyes; two sets of integrated images may be calculated, one for each eye.
In fig. 5A, the visible range (RoV) formed by each lens edge relative to the position of the eye 52 is determined from the eye position and the position of each lens unit (here the lens unit 50), i.e. the range between the first ray 501 and the second ray 502 in the figure. The stereoscopic image data (not shown here; e.g. a stereoscopic image floating above the display panel, described in stereoscopic coordinates) is then introduced, and the part of the stereoscopic image covered by each visible range is determined from the visible range and the stereoscopic image data, taking into account the physical characteristics of the lens unit 50 (such as refractive index and lens thickness). Considering the spatial relationships among the eye position, the display panel, and each lens unit, the display position and display range of each unit image are then determined: the image unit 500 corresponding to the lens unit 50 is formed at the dotted line representing the display panel (Z-axis coordinate 0). Fig. 5B further illustrates in coordinates the visible range formed by the first ray 501 and the second ray 502 between the lens unit 50 and the eye 52. In the same way, by the principle of ray tracing, a visible range can be established between the eye 52 and every other lens unit of the multi-optical-component module, and each corresponding unit image generated from the stereoscopic coordinate range of the reference image covered by that visible range and the chrominance values of the corresponding pixels.
It should be noted that generating the reference image involves the correspondence between display panel positions and the optical elements (which may be one-to-one, one-to-many, or many-to-one) and, in general, does not depend on the viewing position of the user. However, since the user may view the stereoscopic image from an oblique direction, the method for generating stereoscopic image data adapted to the eye position provided by this disclosure applies to a multi-optical module comprising lens groups arranged in an array: the calculated unit images change with the eye position; the light passing through the multi-optical module is considered to be transmitted out and re-converged into the stereoscopic image, together with the physical information of a stereoscopic image floating or sinking above, below, in front of, or behind the display device; and the reference image is set in the algorithm in cooperation with display at different angles, so that the unit images, and the integrated image finally generated, also differ accordingly.
Next, as shown in the flowchart of Fig. 7, the images are formed on the display device. The relative positions of the stereoscopic image and the display device are not limited, and the inventive concept can also be applied to embodiments in which two (or more) sets of display devices display one or more stereoscopic images.
As in the previous embodiment, after the eye position is detected from the face and eye features and the stereoscopic image to be displayed is confirmed in step S701, the integrated image recording the unit images corresponding to the respective lens units can be obtained from the memory in the display system in step S703. In step S705, the integrated image is input to a display driving unit in the display system, i.e., the driving circuit that drives the display unit to display the image, and the integrated image is displayed. In step S707, the corresponding unit images are projected through the multi-optical module onto each optical assembly, and a stereoscopic image adapted to the eye position is projected at a spatial position relative to the multi-optical module.
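The flow of steps S701 to S707 can be sketched as a short pipeline. The four callables stand in for the eye-tracking, memory-lookup, display-driving, and projection stages; they are hypothetical stand-ins for illustration, not the patent's API:

```python
def display_flow(detect_eyes, fetch_integrated_image, drive_display, project):
    """Sketch of Fig. 7: detect the eye position (S701), fetch the
    integrated image recording the unit images (S703), drive the
    display unit with it (S705), and project it through the
    multi-optical module into a stereoscopic image (S707)."""
    eye_position = detect_eyes()                       # S701
    integrated = fetch_integrated_image(eye_position)  # S703
    drive_display(integrated)                          # S705
    return project(integrated)                         # S707

# Minimal stand-ins that exercise the flow.
displayed = []
stereo = display_flow(
    detect_eyes=lambda: (0.0, 0.0, 300.0),
    fetch_integrated_image=lambda eye: {"eye": eye, "unit_images": ["u0", "u1"]},
    drive_display=displayed.append,
    project=lambda img: "stereo image from %d unit images" % len(img["unit_images"]),
)
```

Structuring the stages as interchangeable callables mirrors the separation in the patent between the eye-tracking unit, the memory, the display driving unit, and the multi-optical module.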
In the imaging process, the stereoscopic image can be displayed above, below, in front of, or behind a display plane formed by the multiple optical components in the display device. If the stereoscopic image data relates to a dynamic stereoscopic image, a plurality of consecutive reference images reflecting the spatial relative relationship, together with the unit images adapted to the eye position obtained by calculation, must be established in the memory in advance; a plurality of integrated images is then output, and the dynamic stereoscopic image is reproduced through the multi-optical component module by the flow described in Fig. 7.
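For the dynamic case, the precomputation amounts to mapping every consecutive reference image to its own integrated image for the current eye position before playback. The helper below is a minimal sketch; `compute_integrated` is an assumed per-frame function, not something the patent defines:

```python
def precompute_dynamic_frames(reference_images, eye_position, compute_integrated):
    """Build, ahead of time, one integrated image per consecutive
    reference image so that playback only has to stream the stored
    results to the display driving unit."""
    return [compute_integrated(ref, eye_position) for ref in reference_images]

# Placeholder per-frame computation used only to exercise the loop.
frames = precompute_dynamic_frames(
    reference_images=["ref0", "ref1", "ref2"],
    eye_position=(0.0, 0.0, 300.0),
    compute_integrated=lambda ref, eye: (ref, eye),
)
```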
In summary, the method and display system for generating stereoscopic image data adapted to the eye position described in the above embodiments differ from conventional stereoscopic image display methods, which do not consider the eye position. The system provided by the present application includes an eye-tracking means for detecting the eye position of a user and an image processing means capable of correcting the unit images according to that eye position; the method can therefore provide stereoscopic image data adapted to the detected viewing position of the user, solving the problem of degraded stereoscopic image viewing caused by an unfavorable viewing position.
The above disclosure is merely a preferred embodiment of the present invention and is not intended to limit the scope of the claims; all equivalent technical changes made using the contents of the present specification and drawings are therefore included within the scope of the claims.

Claims (12)

1. A method for generating stereoscopic image data adapted to a position of an eye, the method comprising:
acquiring stereoscopic image data, in which three-dimensional spatial information describing a stereoscopic image is recorded;
obtaining a position of eyes of a user, and physical characteristics of a display panel and a multi-optical component module in a display system;
forming a visible range according to the light ray tracing information between the eye position and each lens unit in the multi-optical component module; and
calculating a unit image displayed on the display panel according to the part of the stereoscopic image covered by the visible range corresponding to each lens unit and the physical characteristics of the display panel and the multi-optical component module to form a plurality of unit images corresponding to the plurality of lens units in the multi-optical component module, wherein the plurality of unit images form an integrated image for recording stereoscopic image data adapting to the eye position.
2. The method as claimed in claim 1, wherein the stereoscopic image data further records color information describing the stereoscopic image, and the three-dimensional spatial information includes coordinate values and chrominance values of each pixel displayed in a stereoscopic space.
3. The method as claimed in claim 1, wherein the physical properties of the display panel and the multi-optical module include coordinates and lens specifications of each lens unit in the multi-optical module, and a spatial relationship between each lens unit and the display panel.
4. The method as claimed in claim 1, wherein the unit images corresponding to different lens units have different positions and sizes due to different light traces, and the integrated image is displayed on the display panel and projected by the multi-optical module to reproduce the stereoscopic image adapted to the eye position.
5. The method of claim 4, wherein when the stereoscopic image data is a dynamic stereoscopic image, the method for generating the eye-position-adaptive stereoscopic image data is repeated to generate a plurality of consecutive integrated images suitable for the eye position for reproducing the dynamic stereoscopic image through the multi-optical module.
6. The method as claimed in any one of claims 1 to 5, wherein when the stereoscopic image data describing the stereoscopic image is obtained, a spatial relationship is established according to the stereoscopic image data, and a reference image is established to reflect the spatial relationship, the reference image being configured by a user to reflect the stereoscopic image finally displayed by the display system.
7. A display system, the system comprising:
a multi-optical module composed of multiple lens units for displaying a three-dimensional image;
a display panel for displaying an integrated image, the integrated image being composed of a plurality of unit images, the stereoscopic image being presented by the multi-optical component module;
a display driving unit for driving the display unit to display the integrated image; and
an image processing unit for processing a stereoscopic image data describing three-dimensional spatial information of the stereoscopic image to form the integrated image adapted to the eye position, wherein the method for forming the integrated image comprises:
obtaining the position of eyes of a user and the physical characteristics of the display panel and the multi-optical component module;
forming a visible range according to the light ray tracing information between the eye position and each lens unit in the multi-optical component module; and
calculating a unit image displayed on the display panel according to the part of the stereoscopic image covered by the visible range corresponding to each lens unit and the physical characteristics of the display panel and the multi-optical component module to form a plurality of unit images corresponding to a plurality of lens units in the multi-optical component module, wherein the plurality of unit images form the integrated image for recording stereoscopic image data suitable for the eye position.
8. The display system of claim 7, wherein the stereoscopic image data further describes color information describing the stereoscopic image, and the three-dimensional spatial information includes coordinate values and chromaticity values for displaying each pixel in a stereoscopic space.
9. The display system of claim 7, wherein the physical characteristics of the display panel and the multi-optical module include coordinates and lens specifications of each lens unit in the multi-optical module and a spatial relationship between each lens unit and the display panel.
10. The display system as claimed in claim 7, wherein the unit images corresponding to different lens units have different positions and sizes due to different light traces, and the integrated image is displayed on the display panel and projected by the multi-optical module to reproduce the stereoscopic image adapted to the eye position.
11. The display system of claim 10, wherein when the stereoscopic image data is a dynamic stereoscopic image, the method for generating the eye-position-adaptive stereoscopic image data is repeated to generate a plurality of consecutive integrated images adapted to the eye position for reproducing the dynamic stereoscopic image through the multi-optical module.
12. The display system according to any one of claims 7 to 11, wherein when the stereoscopic image data describing the stereoscopic image is obtained, a spatial relative relationship is established according to the stereoscopic image data, and a reference image is established to reflect the spatial relative relationship, the reference image being configured by a user to reflect the stereoscopic image finally displayed by the display system.
CN202110415722.3A 2021-04-19 2021-04-19 Method for generating stereoscopic image data adapting to eye positions and display system Active CN115220240B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110415722.3A CN115220240B (en) 2021-04-19 2021-04-19 Method for generating stereoscopic image data adapting to eye positions and display system


Publications (2)

Publication Number Publication Date
CN115220240A true CN115220240A (en) 2022-10-21
CN115220240B CN115220240B (en) 2023-11-21

Family

ID=83605017

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110415722.3A Active CN115220240B (en) 2021-04-19 2021-04-19 Method for generating stereoscopic image data adapting to eye positions and display system

Country Status (1)

Country Link
CN (1) CN115220240B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1175308A (en) * 1994-12-13 1998-03-04 齐诺技术有限公司 Tracking system for stereoscopic display systems
CN1894976A (en) * 2003-12-18 2007-01-10 视真技术有限公司 Multi-user autostereoscopic display with position tracking
US20080246753A1 (en) * 2005-02-25 2008-10-09 Seereal Technologies Gmbh Method and Device for Tracking Sweet Spots
US20100123952A1 (en) * 2008-11-18 2010-05-20 Industrial Technology Research Institute Stereoscopic image display apparatus
TW201021545A (en) * 2008-11-18 2010-06-01 Ind Tech Res Inst Stereoscopic image displaying apparatus
CN103384854A (en) * 2010-12-22 2013-11-06 视瑞尔技术公司 Combined light modulation device for tracking users
KR101741227B1 (en) * 2016-02-25 2017-05-29 주식회사 다이프로 Auto stereoscopic image display device

Also Published As

Publication number Publication date
CN115220240B (en) 2023-11-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant