CN111757088A - Naked eye stereoscopic display system with lossless resolution

Info

Publication number: CN111757088A
Application number: CN201910247546.XA
Authority: CN (China)
Prior art keywords: pixels, display, pixel, view, grating
Other languages: Chinese (zh)
Inventor: 刁鸿浩
Original and current assignee: Individual
Legal status: Pending
Priority applications: CN201910247546.XA; PCT/CN2020/078937; PCT/CN2020/078938; PCT/CN2020/078942

Classifications

    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/302: Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/31: Autostereoscopic image reproducers using parallax barriers
    • H04N13/106: Processing image signals
    • H04N13/117: Transformation of image signals corresponding to virtual viewpoints, the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
    • H04N13/194: Transmission of image signals
    • H04N13/346: Image reproducers using prisms or semi-transparent mirrors
    • H04N13/368: Image reproducers using viewer tracking for two or more viewers

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention relates to a multi-view autostereoscopic display comprising a display screen with a display panel and a grating, a video signal interface for receiving a 3D video signal, and one or more 3D video processing units. The display panel comprises a plurality of rows and a plurality of columns of pixels and defines a plurality of pixel groups, each pixel group being made up of at least 3 pixels and arranged corresponding to the multiple viewpoints. The one or more 3D video processing units are configured to generate, based on an image of the 3D video signal, a plurality of images corresponding to all viewpoints or to predetermined viewpoints, and to render the corresponding pixels in each pixel group from the generated images. In some embodiments, the mutual arrangement positions of the plurality of pixel groups are adjusted or determined based on optical relationship data of the pixels and the grating and/or correspondence data between the pixels of the display panel and the viewpoints. The invention also provides a naked eye stereoscopic display system, a display method for the naked eye stereoscopic display, and a pixel group arrangement method for the naked eye stereoscopic display.

Description

Naked eye stereoscopic display system with lossless resolution
Technical Field
The invention relates to the field of stereoscopic images, in particular to naked eye stereoscopic display technology. More specifically, the present invention relates to an autostereoscopic (3D) display system.
Background
Stereoscopic imaging is one of the hottest technologies in the video industry and is driving the shift from flat display to stereoscopic display. Stereoscopic display technology is a key part of the stereoscopic image industry and is mainly classified into two types: glasses-type stereoscopic display and naked eye stereoscopic display. Naked eye stereoscopic display is a technology in which a viewer can view a stereoscopic image on a display without wearing glasses. Compared with glasses-type stereoscopic display, naked eye stereoscopic display is an autostereoscopic technology and reduces the constraints on the viewer.
In general, autostereoscopic display is viewpoint-based: a sequence of parallax images (frames) is formed at different positions in space so that a pair of stereoscopic images having a parallax relationship can enter the left and right eyes of a person, respectively, giving the viewer a stereoscopic impression. For a conventional multi-view autostereoscopic (3D) display with, for example, N viewpoints, the multiple viewpoints in space must each be served by independent pixels on the display panel. Since the total resolution of the display panel is fixed, the per-view resolution drops sharply; for example, the column resolution is reduced to 1/N of the original resolution. Depending on the pixel arrangement of the multi-view display, the horizontal and vertical resolutions may even be reduced by different factors.
If an N-viewpoint 3D display device is to provide high definition, for example N times the resolution of a 2D display device, maintaining high-definition display multiplies the transmission bandwidth required between the terminal and the display by a factor of N, resulting in an excessive amount of signal transmission. Moreover, pixel-level rendering of such N-fold high-resolution images severely occupies the computing resources of the terminal or of the display itself, causing a significant performance degradation.
This background is only for convenience in understanding the relevant art in this field and is not to be taken as an admission of prior art.
Disclosure of Invention
Embodiments of the invention aim to provide a multi-view naked eye stereoscopic display and a display method thereof that overcome or alleviate the resolution reduction problem of naked eye stereoscopic display without occupying excessive transmission bandwidth or rendering computation resources.
Furthermore, the present inventors have recognized that, due to the mounting, material, or alignment of the grating, the pixels of the display screen viewed from a viewpoint in space may not correspond to the "ideal" pixels (or vice versa). Some embodiments of the present invention also provide a completely new technical solution to this problem.
In one aspect, a multi-view autostereoscopic display is provided, which includes a display screen having a display panel and a grating, a video signal interface for receiving a 3D video signal, and one or more 3D video processing units, wherein the display panel includes a plurality of rows and a plurality of columns of pixels and defines a plurality of pixel groups, each pixel group being composed of at least 3 pixels and arranged corresponding to the multiple viewpoints, and wherein the one or more 3D video processing units are configured to generate a plurality of images corresponding to all viewpoints or to predetermined viewpoints based on an image of the 3D video signal and to render the corresponding pixels in each pixel group from the generated images.
In the solution of embodiments of the invention, the images corresponding to all viewpoints or to predetermined viewpoints are generated from the images of the received 3D video signal in a "resolution-lossless" manner, i.e. the images are generated, and the pixels rendered, "point-to-point" from the images of the original 3D video signal according to the required (all or predetermined) viewpoints. This advantageously overcomes the resolution degradation problem of the prior art. In embodiments of the invention, "resolution-lossless" or "point-to-point" rendering explicitly means that the image corresponding to a single viewpoint has the same resolution as the image (frame) of the received 3D video signal, and that the pixels corresponding to each viewpoint in each pixel group (or the pixels determined according to the pixel-viewpoint correspondence) correspond substantially point-by-point to the generated image (and thus to the received image). "Resolution-lossless" or "point-to-point" rendering also covers embodiments in which the received 3D video signal is first interpolated or otherwise increased in resolution, an image for each viewpoint is then generated "resolution-losslessly" from the interpolated or resolution-increased image, and the pixels corresponding to each viewpoint in each pixel group (or the pixels determined from the pixel-viewpoint correspondence) are rendered accordingly.
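Purely as an illustration, the following minimal sketch (Python; the array shapes, names, and the single-row pixel group layout are assumptions for the example, not the patented implementation) shows one way such point-to-point writing can be expressed:

```python
import numpy as np

# Minimal sketch of "resolution-lossless" (point-to-point) rendering,
# assuming a pixel group of num_views pixels per image point, laid out
# as a single row of columns. Shapes and names are illustrative.
def render_point_to_point(view_images, num_views):
    """view_images: list of num_views arrays, each (H, W, 3), i.e. one
    generated image per viewpoint at the SAME resolution as the received
    3D video signal."""
    H, W, _ = view_images[0].shape
    # The panel holds num_views pixels per image point: W * num_views columns.
    panel = np.zeros((H, W * num_views, 3), dtype=view_images[0].dtype)
    for v in range(num_views):
        # Pixel v of every pixel group is written point by point from the
        # image generated for viewpoint v: no sub-sampling, no resolution loss.
        panel[:, v::num_views, :] = view_images[v]
    return panel
```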
In one embodiment, the plurality of pixel groups have mutual arrangement positions adjusted or determined based on optical relationship data of pixels and gratings and/or correspondence data of pixels of the display panel and viewpoints.
In one embodiment, the grating comprises a cylindrical prism grating, and the optical relationship between the pixels and the grating comprises the alignment relationship between the pixels and the cylindrical prism grating and/or the refraction state of the cylindrical prism grating relative to the corresponding pixels.
In one embodiment, the grating comprises a front and/or rear parallax barrier grating, the parallax barrier grating comprises a light blocking portion and a light transmitting portion, and the optical relationship of the pixels to the grating comprises an alignment relationship of the pixels to the respective light transmitting portions of the parallax barrier grating.
In one embodiment, the correspondence of pixels to viewpoints is calculated or determined based on the optical relationship of pixels to gratings.
In one embodiment, the correspondence of pixels to viewpoints is determined by measuring at each viewpoint position.
In one embodiment, the multi-view autostereoscopic display further comprises a memory storing the optical relationship data and/or the correspondence data of pixels to views, and the one or more 3D video processing units are configured to read data in the memory.
In one embodiment, the received 3D video signal includes a received depth image and a received color image, and the generated images include a generated depth image and a generated color image.
In one embodiment, the received 3D video signal includes a received depth image and a received color image, and the generated images include a generated first parallax image and a generated second parallax image.
In one embodiment, the received 3D video signal includes received first and second parallax images, and the generated images include generated first and second parallax images.
In one embodiment, the received 3D video signal includes received first and second parallax images, and the generated images include a generated depth image and a generated color image.
In one embodiment, a plurality of 3D video processing units are provided in the multi-view autostereoscopic display, each 3D video processing unit being allocated a plurality of rows or columns of pixels and rendering those respective rows or columns. In one embodiment, the plurality of 3D video processing units may be arranged in sequence and render their respective rows or columns of pixels. For example, assuming that four 3D video processing units are provided and the display panel has M columns of pixels in total, each 3D video processing unit is allocated its own M/4 columns of pixels, in order from left to right or from right to left.
In some embodiments of the present invention, pixel driving and rendering of the display panel are progressive (line by line).
In some preferred embodiments of the present invention, allocating the 3D video processing units to respective sets of pixel columns works particularly well in combination with progressive scanning and effectively reduces the required computational bandwidth.
In one embodiment, the one or more 3D video processing units are FPGA or ASIC chips or chipsets.
In one embodiment, the 3D video signal is a one-way signal, and the one or more 3D video processing units are configured to generate a plurality of images corresponding to all viewpoints based on the one-way 3D video signal and render all pixels in each pixel group.
In one embodiment, the 3D video signal is a multi-channel signal, wherein the number of viewpoints is N, the number of channels is M, and N ≥ M; the one or more 3D video processing units are configured to generate N images corresponding to all viewpoints and render all pixels in each pixel group, each generated image being generated based on one of the M channel signals.
In one embodiment, the multi-view autostereoscopic display further comprises an eye tracking device or eye tracking data interface for acquiring eye tracking data.
In one embodiment, the one or more 3D video processing units are configured to generate a plurality of images corresponding to predetermined viewpoints based on the images of the 3D video signal and to render corresponding pixels in respective pixel groups from the generated plurality of images, the predetermined viewpoints being determined by real-time eye tracking data of a viewer.
In one embodiment, the one or more 3D video processing units are configured to, when each eye of the viewer is located at a single viewpoint, generate an image corresponding to the single viewpoint based on an image of the 3D video signal and render a pixel corresponding to the single viewpoint in each pixel group.
In one embodiment, the one or more 3D video processing units are configured to also generate images corresponding to viewpoints adjacent to the single viewpoint and also to render pixels corresponding to the adjacent viewpoints in each pixel group.
In one embodiment, the one or more 3D video processing units are configured to, when an eye of the viewer is located between two viewpoints, generate images corresponding to the two viewpoints based on the images of the 3D video signal and render the pixels corresponding to the two viewpoints in each pixel group.
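As an illustration of how the predetermined viewpoints might be selected from real-time eye tracking data, consider the following sketch; the mapping of an eye position to a fractional viewpoint coordinate, the tolerance eps, and the inclusion of adjacent viewpoints are all assumptions for the example, not the patent's algorithm:

```python
# Minimal sketch of choosing which views to generate from eye tracking data.
def views_to_render(eye_positions, num_views=12, eps=0.1):
    """eye_positions: fractional viewpoint coordinates of the tracked eyes,
    e.g. 3.0 (exactly at the fourth viewpoint, zero-based) or 6.5 (between
    the seventh and eighth viewpoints)."""
    needed = set()
    for pos in eye_positions:
        nearest = round(pos)
        if abs(pos - nearest) < eps:
            # Eye sits at a single viewpoint; optionally also render the
            # adjacent viewpoints to tolerate small head motion.
            needed.add(int(nearest))
            needed.update({max(0, int(nearest) - 1),
                           min(num_views - 1, int(nearest) + 1)})
        else:
            # Eye sits between two viewpoints: render both of them.
            needed.update({int(pos), min(num_views - 1, int(pos) + 1)})
    return sorted(needed)

# views_to_render([3.0, 6.5]) -> [2, 3, 4, 6, 7]
```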
In one embodiment, where the 3D video signal is a single-channel signal, the one or more 3D video processing units are configured to, when there are multiple viewers, generate the plurality of images and render the corresponding pixels in each pixel group for the viewpoints corresponding to the eyes of each viewer, all based on the single-channel signal.
In one embodiment, where the 3D video signal is a multi-channel signal, the one or more 3D video processing units are configured to, when there are multiple viewers, generate the plurality of images based on different 3D video signals and render the corresponding pixels in each pixel group for the viewpoints corresponding to the eyes of at least some of the viewers.
In one embodiment, the display panel is a self-emissive display panel configured such that non-rendered pixels do not emit light. Preferably, the display panel is a MICRO-LED display panel.
In another aspect, a multi-view autostereoscopic display is provided, including a display screen having a display panel and a grating, a video signal interface, and one or more 3D video processing units, where the display panel includes a plurality of rows and columns of pixels and defines a plurality of pixel groups, each pixel group being composed of at least 3 pixels and arranged corresponding to the multiple viewpoints, where the plurality of pixel groups have irregular mutual arrangement positions that are adjusted or determined based on the optical relationship between the pixels and the grating and/or the correspondence data between the pixels of the display panel and the viewpoints, and the one or more 3D video processing units are configured to render the corresponding pixels in each pixel group.
Compared with the conventional approach of improving precision to overcome alignment, installation, and material errors, the method and device of embodiments of the invention provide simple, highly reliable, high-definition naked eye stereoscopic display without loss of resolution, simply by adjusting the arrangement of the pixel groups.
In one embodiment, the grating comprises a cylindrical prism grating, and the optical relationship between the pixels and the grating comprises the alignment relationship between the pixels and the cylindrical prism grating and/or the refraction state of the cylindrical prism grating relative to the corresponding pixels.
In one embodiment, the grating comprises a front and/or rear parallax barrier grating, the parallax barrier grating comprises a light blocking portion and a light transmitting portion, and the optical relationship of the pixels to the grating comprises an alignment relationship of the pixels to the respective light transmitting portions of the parallax barrier grating.
In one embodiment, the correspondence of pixels to viewpoints is calculated or determined based on the optical relationship of pixels to gratings.
In one embodiment, the correspondence of pixels to viewpoints is determined by measuring at each viewpoint position.
In one embodiment, the multi-view autostereoscopic display further includes a memory storing the optical relationship data and/or the correspondence data between pixels and views, and the one or more 3D video processing units are configured to read data in the memory.
In still another aspect, a multi-view autostereoscopic display is provided, which includes a display screen having a display panel and a grating, and a memory, wherein the display panel includes a plurality of rows and columns of pixels, and the memory stores optical relationship data between each pixel of the display panel and the grating and/or correspondence data between each pixel of the display panel and the viewpoints. With the aid of the stored data, the autostereoscopic display according to the invention, in particular a "resolution-lossless" autostereoscopic display, can be realized.
Compared with the conventional approach of improving precision to overcome alignment, installation, and material errors, embodiments of the invention provide simple, highly reliable, high-definition naked eye stereoscopic display without loss of resolution.
In one embodiment, the grating comprises a cylindrical prism grating, and the optical relationship between the pixels and the grating comprises the alignment relationship between the pixels and the cylindrical prism grating and/or the refraction state of the cylindrical prism grating relative to the corresponding pixels.
In one embodiment, the grating comprises a front and/or rear parallax barrier grating, the parallax barrier grating comprises a light blocking portion and a light transmitting portion, and the optical relationship of the pixels to the grating comprises an alignment relationship of the pixels to the respective light transmitting portions of the parallax barrier grating.
In one embodiment, the correspondence of pixels to viewpoints is calculated or determined based on the optical relationship of pixels to gratings.
In one embodiment, the correspondence of pixels to viewpoints is determined by measuring at each viewpoint position.
In one embodiment, the multi-view autostereoscopic display further includes a video signal interface for receiving a 3D video signal and one or more 3D video processing units, wherein the one or more 3D video processing units are configured to generate a plurality of 3D video images corresponding to some or all of the viewpoints based on the received video signal, and are further configured to read the alignment relationship data between each pixel of the display panel and the grating and/or the correspondence data between each pixel of the display panel and the viewpoints and, based on these data, render the pixels corresponding to some or all of the viewpoints.
In another aspect, an autostereoscopic display system is provided, which includes a processor unit and a multi-view autostereoscopic display according to an embodiment of the present invention, where the processor unit is communicatively connected with the multi-view autostereoscopic display.
In one embodiment, the autostereoscopic display system is configured as a smart television having the processor unit.
In one embodiment, the autostereoscopic display system is a smart cellular phone, a tablet computer, a personal computer, or a wearable device.
In one embodiment, the autostereoscopic display system comprises a set top box or a screen-projectable cellular phone or a tablet computer as the processor unit and a digital television as a multi-view autostereoscopic display connected with the set top box, the cellular phone or the tablet computer in a wired or wireless mode.
In one embodiment, the autostereoscopic display system is configured as an intelligent home system or a part thereof, wherein the processor unit comprises an intelligent gateway or a central controller of the intelligent home system, and the intelligent home system further comprises an eye tracking device for obtaining eye tracking data.
In one embodiment, the autostereoscopic display system is configured as, or part of, an entertainment interaction system. Preferably, the entertainment interaction system is configured to be suitable for use by multiple persons and to generate a multi-channel 3D video signal for transmission to the autostereoscopic display system based on the multiple users.
In still another aspect, a display method for a multi-view autostereoscopic display is provided. The display includes a display screen having a display panel and a grating, wherein the display panel includes a plurality of rows and a plurality of columns of pixels. The method comprises the following steps: defining a plurality of pixel groups, each pixel group being composed of at least 3 pixels and arranged corresponding to the multiple viewpoints; receiving a 3D video signal; generating a plurality of images corresponding to all viewpoints or to predetermined viewpoints based on the images of the received 3D video signal; and rendering the corresponding pixels in each pixel group according to the generated images.
In one embodiment, the step of defining a plurality of pixel groups comprises: the mutual arrangement positions of the plurality of pixel groups are adjusted or determined based on the optical relationship data of the pixels and the gratings and/or the corresponding relationship data of the pixels and the viewpoints of the display panel.
In one embodiment, the display method further comprises the following step: real-time eye tracking data of a viewer is received or read. The generating step includes determining the predetermined viewpoints based on the viewer's real-time eye tracking data. The rendering step includes rendering the pixels corresponding to the predetermined viewpoints in each pixel group.
In another aspect, a display method for a multi-view autostereoscopic display is provided. The display includes a display screen having a display panel and a grating, wherein the display panel includes a plurality of rows and a plurality of columns of pixels. The method comprises the following steps: acquiring optical relationship data between each pixel of the display panel and the grating and/or correspondence data between each pixel of the display panel and the viewpoints; receiving a 3D video signal; generating a plurality of images corresponding to all viewpoints or to predetermined viewpoints based on the images of the received 3D video signal; and rendering the corresponding pixels according to the generated images, wherein the pixels to be rendered are determined based on the acquired optical relationship data and/or the correspondence data between each pixel and the viewpoints.
In one embodiment, the step of acquiring data comprises measuring as the optical relationship data alignment data of each pixel with the grating and/or a refractive state of the cylindrical prism grating relative to each pixel.
In one embodiment, the step of acquiring data comprises calculating or determining a correspondence of pixels to viewpoints based on an optical relationship of pixels to gratings or by measuring at each viewpoint position.
In another aspect, a pixel group arrangement method for a multi-view autostereoscopic display is provided, which includes the following steps: providing a display screen having a display panel and a grating, wherein the display panel comprises a plurality of rows and a plurality of columns of pixels; acquiring optical relationship data between each pixel of the display panel and the grating and/or correspondence data between each pixel of the display panel and the viewpoints; and defining a plurality of pixel groups based on the acquired optical relationship data and/or pixel-viewpoint correspondence data, each pixel group being composed of at least 3 pixels and arranged corresponding to the multiple viewpoints. The defined pixel groups are used for multi-view autostereoscopic display on the display.
Preferred features of the invention are described in part below and in part will be apparent from the description.
Drawings
Embodiments of the present disclosure are described in detail below with reference to the attached drawing figures, wherein:
fig. 1A illustrates a schematic structural diagram of a multi-view autostereoscopic display according to an embodiment of the present invention.
Fig. 1B illustrates a structural diagram of a multi-view autostereoscopic display according to an embodiment of the present invention.
Fig. 1C illustrates a structural diagram of a multi-view autostereoscopic display according to an embodiment of the present invention.
Fig. 2 shows a schematic diagram of the structure of the pixels in the display panel corresponding to the view points in the embodiment shown in fig. 1A-C.
Fig. 3 schematically shows a schematic diagram of generating images for respective viewpoints from images (frames) of a received 3D video signal in the embodiment shown in fig. 1A-C.
Fig. 4 illustrates a schematic structural diagram of a single 3D video processing unit of a multi-view autostereoscopic display according to an embodiment of the present invention.
Fig. 5 illustrates a schematic structural diagram of a plurality of 3D video processing units of a multi-view autostereoscopic display according to an embodiment of the present invention.
Fig. 6 schematically shows a schematic diagram of generating images corresponding to respective viewpoints from images (frames) of a received 3D video signal in the embodiment shown in fig. 1.
Fig. 7A shows a schematic structural diagram of a multi-view autostereoscopic display according to an embodiment of the present invention, and schematically shows that the correspondence between the pixels in some pixel groups and the viewpoints deviates from the ideal.
Fig. 7B shows a schematic structural diagram of the multi-view autostereoscopic display of the embodiment of fig. 7A, and schematically presents the adjusted correspondence between pixels and viewpoints.
Fig. 8 shows a schematic structural diagram of a multi-view autostereoscopic display according to an embodiment of the present invention, and schematically presents a correspondence relationship between each pixel of a display panel and a view point.
Fig. 9 shows a partial structural diagram of a multi-view autostereoscopic display according to an embodiment of the present invention, which employs a cylindrical prism grating.
Fig. 10 illustrates a partial structural diagram of a multi-view autostereoscopic display according to an embodiment of the present invention, which employs a cylindrical prism grating.
Fig. 11 illustrates a partial structural diagram of a multi-view autostereoscopic display according to an embodiment of the present invention, which employs a parallax barrier grating.
Fig. 12 shows a schematic structural diagram of a multi-view autostereoscopic display using real-time eye tracking data according to an embodiment of the invention, wherein each eye corresponds to one view.
Fig. 13 shows a schematic structural diagram of a multi-view autostereoscopic display using real-time eye tracking data according to an embodiment of the invention, wherein each eye corresponds to a view.
Fig. 14 shows a schematic structural diagram of a multi-view autostereoscopic display using real-time eye tracking data according to an embodiment of the invention, wherein each eye is located between two views.
Fig. 15 illustrates a schematic structural diagram of a multi-view autostereoscopic display using real-time eye tracking data according to an embodiment of the present invention, in which an eye generates motion.
Fig. 16 shows a schematic configuration of a multi-view autostereoscopic display using real-time eye tracking data according to an embodiment of the invention, with multiple viewers.
Fig. 17 schematically shows a schematic diagram of generating images corresponding to predetermined viewpoints from images (frames) of two received 3D video signals in the embodiment shown in fig. 16.
Fig. 18 schematically illustrates that the multi-view autostereoscopic display system according to the embodiment of the present invention is configured as a cellular phone or a part thereof.
Fig. 19 schematically illustrates a digital television in which the multi-view autostereoscopic display system according to the embodiment of the present invention is configured to be connected to a set top box.
Fig. 20 schematically illustrates that the multi-view autostereoscopic display system according to the embodiment of the present invention is configured as a smart home system or a part thereof.
Fig. 21 schematically shows that the multi-view autostereoscopic display system according to an embodiment of the invention is configured as an entertainment interaction system or a part thereof.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the following detailed description and accompanying drawings. The exemplary embodiments and descriptions of the present invention are provided to explain the present invention, but not to limit the present invention.
Definitions
Herein, "autostereoscopic (3D) display" refers to a technology in which a viewer can observe a stereoscopic display image on a flat display without wearing glasses for stereoscopic display, and includes, but is not limited to, "parallax barrier", "lenticular lens", and "directional backlight" technologies.
Herein, "grating" has its broadest interpretation in the art, including but not limited to "parallax barrier" gratings and "lenticular" gratings.
Herein, "multi-viewpoint" has a conventional meaning in the art, meaning a sequence (frames) of parallax images formed at different positions (viewpoints) in space. In this context, multi-view shall mean at least 3 views.
Referring to fig. 1A, in an embodiment of the present invention, an autostereoscopic display system may include a processor unit and a multi-view autostereoscopic display, the processor unit being communicatively connected to the multi-view autostereoscopic display. In some embodiments herein, the processor unit comprises processing/transmitting/forwarding/controlling means for transmitting the 3D video signal to the autostereoscopic display; these may be means that both generate and transmit the 3D video signal, or means that forward the received 3D video signal to the display, with or without processing it. In some embodiments, the processor unit may be included in, or referred to as, a processing terminal or terminals.
The multi-view autostereoscopic display may include a display screen having a display panel and a grating (not identified), a video signal interface for receiving a 3D video signal, and a 3D video processing unit. Referring to FIG. 2, in the illustrated embodiment, the display may have 12 viewpoints (V1-V12), but it is contemplated that it may have more or fewer viewpoints.
In an embodiment of the present invention, the display may further optionally include a timing controller and/or a display driving chip, which may be integrally provided with the 3D video processing unit or independently provided.
In some embodiments of the present invention, the display may also optionally include a memory to store desired data. Some display embodiments of the present invention incorporating memory will be further described below.
With continued reference to FIG. 1A, the display panel may include a plurality of rows and a plurality of columns of pixels and define a plurality of pixel groups. In the illustrated embodiment, only two exemplary pixel groups, PG(1,1) and PG(x,y), are shown for illustrative purposes; each pixel group is arranged corresponding to the multiple viewpoints and has its own 12 pixels (P1-P12). The pixels in a pixel group being arranged in a single row and multiple columns is an illustrative embodiment; other arrangements are contemplated, such as a single column and multiple rows, or multiple rows and multiple columns. As an illustrative notation, PG(x,y) schematically denotes the pixel group in the x-th row and y-th column of pixel groups.
The display of this embodiment is described with reference to fig. 1A and 2 in combination. As mentioned above, the display may have 12 viewpoints V1-V12; at each viewpoint (spatial position), the viewer's eye sees the corresponding pixel in each pixel group of the display panel and thus sees a different rendered picture. The two different pictures seen by the viewer's two eyes at different viewpoints form a parallax, and a stereoscopic picture is synthesized in the brain.
In an embodiment of the invention, the one or more 3D video processing units are configured to generate images for display and render pixels such that a plurality of images corresponding to all viewpoints are generated based on the images of the 3D video signal and corresponding pixels in each pixel group are rendered from the generated plurality of images.
Correspondingly, the embodiment of the invention also provides a display method of the multi-view naked eye stereoscopic display, which comprises the following steps: defining a plurality of pixel groups, each pixel group being composed of at least 3 pixels and being disposed corresponding to a multi-viewpoint; receiving a 3D video signal; generating a plurality of images corresponding to all viewpoints or a predetermined viewpoint based on the images of the received 3D video signal; and rendering corresponding pixels in each pixel group according to the generated plurality of images. In the illustrated embodiment, image generation and pixel rendering are performed corresponding to all viewpoints (12).
The processing of the 3D video processing unit in the particular embodiment shown is described with combined reference to fig. 1A, 2 and 3. The 3D video signal S1 received by the video signal interface consists of image frames containing two components: a color image and a depth map. The 3D video processing unit takes the color information and depth information of the received 3D video signal S1 as input and generates 12 pictures at the viewing angles corresponding to viewpoints V1-V12. The content of each generated image is then written into the pixels viewed from the corresponding viewpoint.
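By way of a hedged illustration only, the following sketch shows one simple depth-image-based view generation of this kind; the horizontal shift model, the max_disparity parameter, and the omission of hole filling are assumptions of the sketch, not the patent's algorithm:

```python
import numpy as np

# Minimal DIBR-style sketch: generate one full-resolution image per
# viewpoint from a color image and a depth map by depth-dependent
# horizontal warping. Hole filling is omitted for brevity.
def generate_views(color, depth, num_views=12, max_disparity=8):
    """color: (H, W, 3) uint8; depth: (H, W) float in [0, 1].
    Returns num_views images, each at the source resolution."""
    H, W, _ = color.shape
    cols = np.arange(W)
    views = []
    for v in range(num_views):
        # Signed view offset, centered so the middle viewpoints shift least.
        offset = (v - (num_views - 1) / 2.0) / (num_views - 1)
        view = np.zeros_like(color)
        for y in range(H):
            # Per-pixel horizontal shift derived from depth (warping step).
            shift = np.round(offset * max_disparity * depth[y]).astype(int)
            target = np.clip(cols + shift, 0, W - 1)
            view[y, target] = color[y, cols]
        views.append(view)
    return views
```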
Therefore, when the eyes of the viewer watch at different viewpoints V1-V12, the rendered pictures at different angles can be seen, and parallax is generated to form a stereoscopic effect of 3D display.
In some embodiments, the 12 generated pictures are generated with "lossless" resolution, in particular equal resolution, with respect to the corresponding image frames of the received 3D video signal, and in these embodiments, the correspondingly written pixels also correspond substantially point-by-point to the resolution of the generated image (and thus of the image of the received 3D video signal).
In some embodiments, a process of increasing (multiplying) the resolution of the received 3D video signal, such as an interpolation process, sometimes called preprocessing, may also be performed. As an illustrative example, a 2-fold resolution interpolation may be performed for both the color image and the depth image. The "resolution-lossless" and/or "point-to-point" rendering of embodiments of the invention can then be combined with this to obtain a new embodiment. It will be appreciated that processes that are "resolution-lossless" and/or "point-to-point", whether by themselves or in conjunction with interpolation or other resolution-increasing processes, fall within the scope of the "resolution-lossless" and/or "point-to-point" rendering described herein. Picture generation combined with a resolution increase for a corresponding view may sometimes also be referred to herein as resolution-increase (multiplication) generation.
In some embodiments of the invention, an additional (pre-)processor may be provided to perform the resolution increase (multiplication) or interpolation; alternatively, this may be performed by the one or more 3D video processing units. Both fall within the scope of the invention.
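A minimal sketch of this optional pre-processing, assuming simple nearest-neighbor replication (the patent does not fix an interpolation method), might look as follows; generate_views refers to the hypothetical view-generation helper sketched earlier:

```python
import numpy as np

# Minimal sketch of the optional resolution-increase pre-processing:
# 2x upsampling of the received color and depth images before
# "resolution-lossless" view generation. Nearest-neighbor replication
# is used here only for brevity; the interpolation method is an open choice.
def upsample_2x(image):
    image = np.repeat(image, 2, axis=0)  # double the rows
    image = np.repeat(image, 2, axis=1)  # double the columns
    return image

# Usage (hypothetical):
# views = generate_views(upsample_2x(color), upsample_2x(depth))
```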
In some embodiments of the present invention, the display system or display may include an eye tracking device or may read eye tracking data.
Referring to fig. 1B, in an embodiment of the present invention, an autostereoscopic display system may include a processor unit and a multi-view autostereoscopic display, the processor unit being communicatively connected to the multi-view autostereoscopic display. In the illustrated embodiment, the display may be integrated with an eye tracking device that is directly communicatively coupled to the 3D video processing unit. In some embodiments, not shown, the display may be provided with a memory for storing the eye tracking data, and the 3D video processing unit is connected to the memory and reads the eye tracking data. Preferably, the eye tracking data is real-time data. In the embodiment shown, the eye tracking device may be in the form of a dual camera, for example. In other embodiments of the present invention, other forms of eye tracking devices may be used, such as a single camera, a combination of eye tracking and depth cameras, and other sensing devices or combinations thereof that can be used to determine the position of the viewer's eyes. In some embodiments of the invention, the eye tracking device may have other functions or be shared with other functions or components. For example, in one embodiment, a front facing camera of a cellular telephone may be used as an eye tracking device in a display system configured as a cellular telephone.
Alternatively, the display may further include an eye tracking data interface, and the 3D processing unit may read real-time eye tracking data via the eye tracking data transmission interface.
In one embodiment referring to fig. 1C, an autostereoscopic display system may include a processor unit and a multi-view autostereoscopic display, the processor unit being communicatively coupled to the multi-view autostereoscopic display. In the illustrated embodiment, the autostereoscopic display system may further comprise an eye tracking device, for example in the form of a dual camera, in communication with the processor unit. Furthermore, the display may include an eye tracking data interface, and the 3D processing unit may be communicatively connected to the processor unit via the eye tracking data transmission interface to read real-time eye tracking data.
In some embodiments, not shown, the processor unit may not be equipped with or connected to the eye tracking device, but may itself read the eye tracking data in real time. Alternatively, the 3D processing unit may obtain real-time eye tracking data from other sources via the eye tracking data interface. All of these fall within the scope of the invention.
Embodiments of the present invention in which an eye tracking apparatus is provided may be combined with the above-described embodiments to obtain further embodiments. For example, a "resolution-lossless" embodiment of the present invention may be used in conjunction with conventional use of eye tracking devices or data to arrive at a new embodiment. Further improvements may also be made using the eye tracking device or data to obtain preferred embodiments, as further described below.
In one embodiment, the generated image content is written (rendered) to the pixels of the display panel line by line. This greatly reduces the rendering computation load.
In one embodiment of the present invention, the progressive writing (rendering) process is performed as follows: the information of the corresponding points in each generated image is read and written line by line into the pixels of the display panel.
In another embodiment of the present invention, the method further includes synthesizing the plurality of generated images into a synthesized image, and reading information of corresponding points in the synthesized image and writing the information line by line into pixels of the display panel.
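A minimal sketch of this row-by-row writing, reading the corresponding points of each generated image (in the composite-image variant, all rows would simply be precomputed first), follows; the single-row pixel group layout and all names are assumptions:

```python
import numpy as np

# Minimal sketch of progressive (row-by-row) writing: the panel row is
# composited from row `row` of each generated view image, so only one row
# per view needs to be resident while driving the panel.
def compose_panel_row(views, row):
    num_views = len(views)
    W = views[0].shape[1]
    panel_row = np.empty((W * num_views, 3), dtype=views[0].dtype)
    for v, img in enumerate(views):
        panel_row[v::num_views] = img[row]  # pixel v of every pixel group
    return panel_row

def scan_out(views, H):
    for row in range(H):
        yield compose_panel_row(views, row)  # fed to the row driver in order
```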
Referring now to FIG. 4, another embodiment of a display according to the present invention is shown. In the illustrated embodiment, only a single 3D video processing unit is provided, which simultaneously handles the image generation corresponding to the multiple viewpoints and the rendering of the corresponding pixels in the pixel groups.
In some embodiments of the invention, multiple 3D video processing units may be provided that process image generation and pixel rendering in parallel, serial, or a combination of serial and parallel.
Referring to fig. 5, a preferred embodiment of a display comprising a plurality of 3D video processing units according to the present invention is shown. In this embodiment, a plurality of 3D video processing units, i.e., a group of 3D video processing units, are provided. More preferably, the parallel 3D video processing units are arranged in sequence, each corresponding to its own set of pixel columns, so that each 3D video processing unit can process its pixel rendering in parallel; that is, each 3D video processing unit handles the rendering of its respective pixel columns. As exemplarily shown in fig. 5, when the display panel has a total of M columns of pixels and 4 parallel 3D video processing units are provided, the first 3D video processing unit processes the first M/4 columns of pixels, the second unit the second M/4 columns, the third unit the third M/4 columns, and the fourth unit the fourth M/4 columns.
This 3D video processing unit group simplifies the structure and greatly accelerates processing. In particular, this embodiment lends itself to combination with the aforementioned line-by-line writing (rendering) embodiment, in which each generated image is read separately, to obtain a further preferred embodiment. Taking the embodiment shown in fig. 5 as an example, with progressive scanning the first to fourth 3D video processing units sequentially process and render their respective M/4 columns of pixels of the first row; once the first 3D video processing unit finishes and the other units take their turn, it gains sufficient time to prepare its corresponding M/4 columns of the next row (e.g., the first M/4 columns of the second row). This largely overcomes the severe shortage of rendering computation power that can arise in conventional structures.
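The following sketch illustrates this column allocation and the pipelining benefit under progressive scanning; the unit count of four and the even M/4 split follow the example above, while everything else (the names, the render_unit callback) is assumed:

```python
# Minimal sketch of allocating panel columns to parallel 3D video
# processing units and scanning progressively.
def column_ranges(total_columns, num_units=4):
    """Split M panel columns into contiguous per-unit ranges, left to right."""
    per_unit = total_columns // num_units  # assumes an even split
    return [(u * per_unit, (u + 1) * per_unit) for u in range(num_units)]

def render_progressive(num_rows, total_columns, render_unit, num_units=4):
    ranges = column_ranges(total_columns, num_units)
    for row in range(num_rows):
        # Units handle their column slice of each row in turn; while unit
        # u+1 renders, unit u can already prepare its slice of the next row.
        for unit, (start, stop) in enumerate(ranges):
            render_unit(unit, row, start, stop)
```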
It will be appreciated by those skilled in the art that the embodiments shown in the figures are merely illustrative and that there may be more or fewer 3D video processing units, or that there may be other ways that the 3D video processing units (groups) may allocate and process the rows and columns of pixels in parallel, which is within the scope of the present invention.
Referring to fig. 1A-1C, fig. 2 and fig. 6, an autostereoscopic display system according to another embodiment of the invention is shown, which may include a processor unit and a multi-view autostereoscopic display, the processor unit being communicatively connected to the multi-view autostereoscopic display. In the illustrated embodiment, the autostereoscopic display system may further comprise an eye tracking device, for example in the form of a dual camera, in communication with the processor unit.
The multi-view autostereoscopic display may include a display screen having a display panel and a grating (not labeled), a video signal interface for receiving a 3D video signal, and a 3D video processing unit. Referring to FIG. 2, in the illustrated embodiment, the display may have 12 viewpoints (V1-V12), but it is contemplated that it may have more or fewer viewpoints. In the illustrated embodiment, the display may also include an eye tracking data interface, through which the 3D processing unit can be communicatively connected with the processor unit to read real-time eye tracking data. In an embodiment of the present invention, the display may further optionally include a timing controller and/or a display driving chip, which may be provided integrally with the 3D video processing unit or separately. In some embodiments of the present invention, the display may be integrated with an eye tracking device that is directly communicatively coupled to the 3D video processing unit.
The display panel may include a plurality of rows and a plurality of columns of pixels and define a plurality of pixel groups. In the illustrated embodiment, only two exemplary pixel groups, PG(1,1) and PG(x,y), are shown for illustrative purposes; each pixel group is arranged corresponding to the multiple viewpoints and has its own 12 pixels (P1-P12).
The display of this embodiment is described with reference to fig. 1 and 2 in combination. As mentioned above, the display may have 12 viewpoints V1-V12; at each viewpoint (spatial position), the viewer's eye sees the corresponding pixel in each pixel group of the display panel and thus sees a different rendered picture. The two different pictures seen by the viewer's two eyes at different viewpoints form a parallax, and a stereoscopic picture is synthesized in the brain.
The processing of the 3D video processing unit in the particular embodiment shown is described with combined reference to fig. 1-2 and 6. The 3D video signal S1 received by the video signal interface consists of image frames containing two components: left and right parallax color images. The 3D video processing unit takes the left and right parallax color images of the received 3D video signal S1 as input and generates intermediate image information I1 from them. In a specific embodiment, on the one hand, a depth image is synthesized from the left and right parallax color images; on the other hand, a color image of the center viewpoint is generated by means of one or both of the left and right parallax color images. The intermediate image information I1, i.e. the depth image information and the center-viewpoint color image information, is then used as input to generate 12 pictures at the viewing angles corresponding to viewpoints V1-V12. The content of each generated image is then written into the corresponding pixels in each pixel group viewed from the corresponding viewpoint.
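A heavily hedged sketch of this intermediate step is given below; estimate_depth is a hypothetical helper (disparity/depth estimation itself is not specified here), and averaging the two parallax images is only a crude stand-in for center-view synthesis:

```python
# Minimal sketch of the intermediate-image step for left/right parallax
# input: synthesize a depth map plus a center-viewpoint color image, then
# reuse the per-viewpoint generation sketched earlier.
def to_intermediate(left, right, estimate_depth):
    depth = estimate_depth(left, right)  # hypothetical depth synthesis helper
    # Crude stand-in for center-view generation (an assumption, not the
    # patent's method): average of the two parallax color images.
    center = ((left.astype(float) + right.astype(float)) / 2.0).astype(left.dtype)
    return center, depth

# Usage (hypothetical):
# center, depth = to_intermediate(left, right, estimate_depth)
# views = generate_views(center, depth)
```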
Therefore, when the eyes of the viewer watch at different viewpoints V1-V12, the rendered pictures at different angles can be seen, and parallax is generated to form a stereoscopic effect of 3D display.
In some embodiments, the 12 generated pictures are generated with "lossless" resolution, in particular equal resolution, with respect to the corresponding image frames of the received 3D video signal, and in these embodiments, the correspondingly written pixels also correspond substantially point-by-point to the resolution of the generated image (and thus of the image of the received 3D video signal).
In some embodiments, a process of increasing (multiplying) the resolution of the received 3D video signal, such as an interpolation process, sometimes called preprocessing, may also be performed. As an illustrative example, a 2-fold resolution interpolation may be performed for both the left-eye and right-eye parallax images; the image conversion described above may be performed before this processing. The "resolution-lossless" and/or "point-to-point" rendering of embodiments of the invention can then be combined with this to obtain a new embodiment, which falls within the scope of the invention. It will be appreciated that processes that are "resolution-lossless" and/or "point-to-point", whether by themselves or in conjunction with interpolation or other resolution-increasing processes, fall within the scope of the "resolution-lossless" and/or "point-to-point" rendering described herein. Picture generation combined with a resolution increase for a corresponding view may sometimes also be referred to herein as resolution-increase (multiplication) generation.
In some embodiments of the invention, an additional (pre-)processor may be provided to perform the resolution increase (multiplication) or interpolation; alternatively, this may be performed by the one or more 3D video processing units. Both fall within the scope of the invention.
Referring to fig. 7A and 7B, a autostereoscopic display system and a display thereof according to another embodiment of the present invention are illustrated.
Although not shown in detail, the display panel of the autostereoscopic display according to this embodiment of the present invention has a plurality of rows and a plurality of columns of pixels. For "multi-view" display, the rows and columns of pixels are divided into groups in a manner corresponding to the multiple viewpoints. For example, in the illustrated embodiment, each pixel group includes a row of 12 pixels corresponding to the 12 viewpoints. In a conventional configuration, the pixel groups are arranged regularly with respect to each other. For example, for pixel groups each consisting of a single row and multiple columns of pixels, the pixel groups of the same row, e.g., PG(1,i) (i ≥ 1), are arranged sequentially end to end, and the pixel groups of the same column, e.g., PG(j,1) (j ≥ 1), are arranged in vertical alignment. The pixels in a pixel group being arranged in a single row and multiple columns is an illustrative embodiment; other arrangements are contemplated, such as a single column and multiple rows, or multiple rows and multiple columns. In conventional arrangements, these other forms of pixel groups PG are likewise regularly arranged with respect to each other.
Ideally, the corresponding pixels in the regularly arranged pixel groups are correctly displayed in the corresponding view points. However, the present inventors have appreciated that there may be a problem that pixels of the display viewed from a viewpoint in space do not correspond to "ideal" pixels (or vice versa) due to the mounting, material or alignment of the gratings.
As exemplarily shown in FIG. 7A, the display panel is shown having a plurality of regularly distributed pixel groups PG, including PG(1,1) and PG(x,y). In the illustrated embodiment, the corresponding pixels in pixel group PG(1,1) are correctly displayed at the corresponding viewpoints V1-V12, respectively. However, the pixels of pixel group PG(x,y), which should "theoretically" be displayed at the corresponding viewpoints V1-V12, are actually displayed at viewpoints V1'-V12', respectively (in the illustrated embodiment, V1' corresponds to V3).
As exemplarily shown with reference to fig. 7B, in the exemplary embodiment shown, the multi-view autostereoscopic display is configured with pixel groups having irregular mutual arrangement positions, i.e. adjusted with respect to a "regular" arrangement of pixel groups. This adjustment is made or determined based on the correspondence between the pixels of the display panel and the viewpoints. In the illustrated embodiment, the pixel group PG'(x,y) is adjusted or determined based on the pixel-viewpoint correspondence such that, compared with the "regularly" arranged pixel group PG(x,y), it is shifted two pixels to the left in the drawing plane. As a result, the pixels in the adjusted, "irregularly" arranged pixel group PG'(x,y) are correctly displayed at the corresponding viewpoints V1-V12.
Although in the illustrated embodiment a lateral (row) adjustment of groups of pixels of a single row and multiple columns of pixels is provided, other directional adjustments are conceivable, such as a vertical (column) adjustment or a combined lateral and vertical adjustment. Furthermore, lateral, vertical and/or combined adjustment of pixel groups of other pixel arrangements is also conceivable.
In the illustrated embodiment, the adjustment of the "irregular" pixel groups described above is directly adjusted based on the correspondence of the pixels to the viewpoints. In some embodiments, the "irregular" correspondence of the pixels to the viewpoints is determined based on the optical relationship, e.g., alignment, refraction, of the pixels to the gratings. Thus, in some embodiments, an "irregular" group of pixels may be adjusted or determined based on the optical relationship of the pixels to the grating. In other embodiments, the "irregular" or actual alignment of pixels to viewpoints may be determined by direct measurement.
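As an illustration of such an adjustment, the sketch below shifts a pixel group's start position from a measured pixel-viewpoint correspondence; in the figure's example the group that should serve V1 actually serves V3, so the group moves two pixels to the left. All names and the example numbers are illustrative:

```python
# Minimal sketch of adjusting a pixel group's start column based on the
# measured viewpoint of its first pixel (e.g. from per-viewpoint
# measurement or derived from the optical pixel-grating data).
def adjusted_group_start(regular_start, measured_view_of_first_pixel,
                         intended_first_view=1):
    offset = measured_view_of_first_pixel - intended_first_view
    return regular_start - offset  # V1' == V3 means a shift of two pixels left

# Example: adjusted_group_start(1200, measured_view_of_first_pixel=3) -> 1198,
# i.e. the adjusted group begins two pixel columns earlier than the regular grid.
```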
In some embodiments, the optical relationship data and/or the alignment-relationship data may be stored in a memory to be read when processed by the 3D video processing unit. In an alternative embodiment, a data interface communicating with the 3D video processing unit may be provided, so that the 3D video processing unit reads the optical relationship data and/or the alignment-relationship data via the data interface. In alternative embodiments, the optical relationship data and/or the alignment-relationship data may be written directly into the 3D video processing unit or form part of its algorithm. All of these fall within the scope of the invention.
The processing of the 3D video processing unit of the illustrated embodiment, and thus the display behavior of the display, is described with combined reference to FIGS. 1-3, 6 and 7A-7B. The 3D video signal S1 received by the video signal interface is an image frame containing two pieces of content: left and right parallax color images. The 3D video processing unit takes the left and right parallax color images of the received 3D video signal S1 as input and thereby generates intermediate image information I1. In a specific embodiment, on the one hand a depth image is synthesized from the left and right parallax color images; on the other hand, a color image of the central viewpoint is generated by means of one or both of the left and right parallax color images. Then, taking the intermediate image information I1, i.e., the depth image information and the color image information of the central viewpoint, as input, 12 pictures are rendered at the viewing angles corresponding to the viewpoints V1-V12. The content of each generated picture is then written into the corresponding pixels of the pixel groups seen from each viewpoint, each pixel group being a pixel group that is adjusted or determined based on optical data or pixel-viewpoint alignment relationships and preferably arranged irregularly. For example, in the illustrated embodiment, the pixel groups comprise the "regularly" arranged PG1,1 and the adjusted PG'x,y.
Therefore, when the eyes of the viewer watch from the different viewpoints V1 ... V12, they see pictures rendered at different angles, and parallax is generated to form the stereoscopic effect of 3D display.
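A minimal runnable sketch of the write-back step described above follows (Python; the rendering itself is replaced by constant-value toy pictures, and the panel size, group layout and shift table are invented for illustration — overlaps at group borders are resolved simply by write order in this toy):

```python
import numpy as np

NUM_VIEWS, GROUPS_PER_ROW, ROWS = 12, 3, 2
W = NUM_VIEWS * GROUPS_PER_ROW

# Toy stand-ins for the 12 rendered pictures (each at full panel resolution).
views = [np.full((ROWS, W), v) for v in range(1, NUM_VIEWS + 1)]

# Per-group lateral shift: 0 for "regular" groups; a group like PGx,y that is
# seen two viewpoints late is shifted two pixels to the left (-2).
shift = {(0, 1): -2}

panel = np.zeros((ROWS, W), dtype=int)
for r in range(ROWS):
    for g in range(GROUPS_PER_ROW):
        base = g * NUM_VIEWS + shift.get((r, g), 0)
        for v in range(NUM_VIEWS):
            col = base + v
            if 0 <= col < W:
                panel[r, col] = views[v][r, col]  # picture of viewpoint v+1
```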
The embodiments illustrated in FIGS. 7A-7B describe adjusting a pixel group based on optical data and/or pixel-viewpoint alignment relationships so that the corresponding pixels in the pixel group are correctly rendered by the 3D video processing unit. However, it is contemplated that methods that use optical data and/or pixel-viewpoint alignment relationships, directly or indirectly, to determine the pixel that is correctly displayed at the corresponding viewpoint, and methods of rendering that pixel, fall within the scope and equivalents of the present invention, whether or not pixel groups and their adjustment are intentionally defined.
Referring to FIG. 8, a naked eye stereoscopic display system and a display thereof according to another embodiment of the present invention are illustrated. In the illustrated embodiment, the display panel of the autostereoscopic display has a plurality of rows and a plurality of columns of pixels. In the embodiment shown in FIG. 8, the display stores, or can read, data of the viewpoint corresponding to each pixel of the display panel. For example, as exemplarily shown in FIG. 8, pixel P1,b1 corresponds to viewpoint V8, pixel Pam,bn corresponds to viewpoint V6, and pixel Paz,bz corresponds to viewpoint V12.
In the embodiment shown in FIG. 8, direct correspondence data of each pixel with a viewpoint is shown. However, it is contemplated that in some embodiments, optical data, such as grating-to-pixel alignment data and/or grating refraction data, or other indirect data from which the correspondence of pixels to viewpoints can be determined, may be employed. In some embodiments, the optical data and/or the alignment-relationship data may be stored in a memory to be read when processed by the 3D video processing unit. In an alternative embodiment, a data interface communicating with the 3D video processing unit may be provided, so that the 3D video processing unit reads the optical data and/or the alignment-relationship data via the data interface. In alternative embodiments, the optical data and/or the alignment-relationship data may be written directly into the 3D video processing unit or form part of its algorithm. In a preferred embodiment, the pixel-viewpoint correspondence data may take the form of a look-up table. All of these fall within the scope of the invention.
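As a hedged illustration of the look-up-table form mentioned above, the following sketch allocates a per-pixel viewpoint table (Python; the panel dimensions follow the (1920x12)x1200 example given later in this document, while the individual entries and coordinates are invented placeholders in the spirit of FIG. 8):

```python
import numpy as np

ROWS, COLS = 1200, 1920 * 12
# One viewpoint number (1..12) per physical pixel of the display panel.
view_lut = np.zeros((ROWS, COLS), dtype=np.uint8)

view_lut[0, 0] = 8          # e.g., P1,b1   -> viewpoint V8
view_lut[600, 11520] = 6    # e.g., Pam,bn  -> viewpoint V6 (made-up indices)
view_lut[1199, 23039] = 12  # e.g., Paz,bz  -> viewpoint V12
```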
The processing of the 3D video processing unit, and thus the display behavior of the display, of the illustrated embodiment is described with combined reference to FIGS. 1-3, 6 and 8. The 3D video signal S1 received by the video signal interface is an image frame containing two pieces of content: left and right parallax color images. The 3D video processing unit takes the left and right parallax color images of the received 3D video signal S1 as input and thereby generates intermediate image information I1. In a specific embodiment, on the one hand a depth image is synthesized from the left and right parallax color images; on the other hand, a color image of the central viewpoint is generated by means of one or both of the left and right parallax color images. Then, taking the intermediate image information I1, i.e., the depth image information and the color image information of the central viewpoint, as input, 12 pictures are rendered at the viewing angles corresponding to the viewpoints V1-V12. The content of each generated picture is then written into each pixel seen from the corresponding viewpoint, according to the pixel-viewpoint correspondence.
Therefore, when the eyes of the viewer watch from the different viewpoints V1-V12, they see pictures rendered at different angles, and parallax is generated to form the stereoscopic effect of 3D display.
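The per-pixel write-back can be illustrated with the following self-contained toy (Python; the look-up table is randomly generated and the "rendered pictures" are constant-valued stand-ins, so the final assertion simply confirms that every panel pixel shows the picture of its assigned viewpoint):

```python
import numpy as np

NUM_VIEWS, ROWS, COLS = 12, 4, 24
rng = np.random.default_rng(0)

# Invented per-pixel viewpoint map standing in for measured correspondence data.
view_lut = rng.integers(1, NUM_VIEWS + 1, size=(ROWS, COLS))

# Toy rendered pictures: the picture for viewpoint v is filled with value v.
views = np.stack([np.full((ROWS, COLS), v) for v in range(1, NUM_VIEWS + 1)])

# Each panel pixel takes its value from the picture of its assigned viewpoint.
panel = np.take_along_axis(views, (view_lut - 1)[None, ...], axis=0)[0]
assert (panel == view_lut).all()
```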
In some embodiments, the grating of the display is a cylindrical prism (lenticular) grating. FIG. 9 illustrates one embodiment of a cylindrical prism grating.
In the embodiment of the cylindrical prism grating shown in FIG. 9, the pixel group adjustment shown in FIGS. 7A-7B or the pixel-viewpoint correspondence shown in FIG. 8 can be adopted accordingly.
Referring specifically to FIG. 9, in the illustrated embodiment, each obliquely disposed cylindrical prism covers substantially 12 pixels per row. As an example, the display of this embodiment also has 12 viewpoints, and the pixel groups of the display panel each have a single row and multiple columns of pixels corresponding to the 12 viewpoints. Referring to FIGS. 9 and 7A-7B in combination, in the lenticular display of the illustrated embodiment, the pixels Pa1,b1-Pa1,b4 of the "regularly" arranged pixel group at the top of the illustrated cylindrical prism may have the correct correspondence to viewpoints V1-V4. However, the four pixels of the "regularly" arranged pixel group at the bottom of the cylindrical prism are not aligned with the correct viewpoints V1-V4, but with the corresponding viewpoints V1'-V4'. To this end, the pixel group may be adjusted by shifting it one pixel to the left in the figure so that it is displayed at the correct viewpoints V1-V4, with the remaining pixels of the pixel group likewise shifted one pixel to the left; e.g., the pixel "theoretically" corresponding to viewpoint V4' shown in FIG. 9 then corresponds to viewpoint V5.
With combined reference to FIGS. 9 and 8, the embodiment shown in FIG. 9 is equally applicable to directly utilizing optical (refraction) data and/or the "irregular" pixel-viewpoint correspondence of each pixel of the display panel. For example, pixel-viewpoint correspondence data may be stored, recorded or read indicating that the four pixels located at the bottom of the cylindrical prism correspond to viewpoints V2, V3, V4 and V5, respectively.
Although not wishing to be bound by theory, the "misalignment" of pixel groups or pixels may be caused by misalignment of the cylindrical prisms with the pixels and/or by the refractive state of the cylindrical prisms. On the left side of the prism, FIG. 9 illustrates the theoretical alignment position in a dotted line and the actual alignment deviation in a solid line.
With combined reference to FIGS. 9 and 10, the cylindrical prisms in the illustrated embodiment are disposed obliquely to the pixels, for example to eliminate moiré. Consequently, there are pixels that are "shared" across the boundaries of the cylindrical prisms (e.g., the pixel corresponding to viewpoint V1 described above). In some configurations, a fixed corresponding viewpoint is specified for each of these "shared" pixels. However, in some preferred embodiments of the present invention, a "dynamic" correspondence may be provided for these "shared" pixels, based on fine-tuning of the pixel group or of the pixel-viewpoint correspondence, i.e., so-called viewpoint-shared pixels.
Referring to FIG. 10, for a pixel row Pam,bn-Pam,bn+i (i ≥ 1), the "shared" pixel conventionally corresponding to viewpoint V12 may, for example, be rendered in accordance with the image of viewpoint V1 when viewpoint V12 is not being rendered.
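A minimal sketch of this "dynamic" fallback follows (Python; the function name and the choice of V1 as the fallback for V12 mirror the example above and are otherwise assumptions of this sketch):

```python
def choose_view(nominal, rendered, fallback):
    """Return the viewpoint a 'shared' pixel should show, or None if neither
    its nominal viewpoint nor its fallback is being rendered."""
    if nominal in rendered:
        return nominal
    if fallback in rendered:
        return fallback
    return None

# The boundary pixel nominally on V12 falls back to V1 when V12 is unused.
assert choose_view(12, rendered={1, 4, 8}, fallback=1) == 1
assert choose_view(12, rendered={12, 4}, fallback=1) == 12
```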
Those skilled in the art will appreciate that the fine-tuning or "dynamic" correspondence of the embodiment shown in FIG. 10 may be applied to other types of gratings, and that further preferred embodiments may be obtained in combination with embodiments that acquire real-time eye tracking data.
Referring to FIG. 11, a schematic diagram of a partial structure of a parallax-barrier-type display is shown. The parallax barrier grating 100 includes a light-shielding portion 102 and a light-transmitting portion 104. In the parallax barrier embodiment shown in FIG. 11, the pixel group adjustment shown in FIGS. 7A-7B or the pixel-viewpoint correspondence shown in FIG. 8 may likewise be adopted.
Although not wishing to be bound by theory, the "misalignment" of pixel groups or pixels may here be caused by misalignment of the light-transmitting portion 104 of the parallax barrier grating with the pixels.
In the illustrated embodiment, the parallax barrier grating 100 is a front grating, but a rear grating, or both front and rear gratings, are also conceivable.
With combined reference to FIGS. 1B-1C and FIG. 12, in one embodiment of the present invention, an autostereoscopic display system is provided, which may include a processor unit and a multi-view autostereoscopic display, the processor unit being communicatively connected to the multi-view autostereoscopic display. In the illustrated embodiment, the autostereoscopic display system may further comprise an eye tracking device, for example in the form of a dual camera, communicating with the processor unit. As an alternative embodiment, the eye tracking device may be provided in the display, or the system or display may merely have a transmission interface capable of receiving real-time eye tracking data.
With continued reference to FIGS. 1B-1C, the multi-view autostereoscopic display may include a display screen having a display panel and a grating (not labeled), a video signal interface for receiving a 3D video signal, and a 3D video processing unit. Referring to FIG. 2, in the illustrated embodiment, the display may have 12 viewpoints (V1-V12), but it is contemplated that it may have more or fewer viewpoints. In an embodiment of the present invention, the display may further optionally include a timing controller and/or a display driving chip, which may be provided integrally with the 3D video processing unit or provided independently. In some embodiments of the present invention, the display may be integrated with an eye tracking device that is directly communicatively coupled to the 3D video processing unit.
With continued reference to FIGS. 1B-1C, the display panel may include a plurality of rows and columns of pixels and define a plurality of pixel groups. In the illustrated embodiment, only two exemplary pixel groups PG1,1 and PGx,y are shown for illustrative purposes; each pixel group is arranged corresponding to the multiple viewpoints and has a respective 12 pixels (P1-P12). The single-row, multi-column arrangement of pixels within a pixel group is an illustrative embodiment; other arrangements are contemplated, such as a single column of multiple rows, or multiple rows and multiple columns. As an illustrative description only, the aforementioned PGx,y may schematically denote the pixel group in the x-th row and y-th column of pixel groups.
The display behavior of this embodiment is described with combined reference to FIGS. 1B-1C and FIG. 12. As mentioned above, the display may have 12 viewpoints V1-V12; at each viewpoint (spatial position), a viewer's eye sees the display of the corresponding pixel in each pixel group of the display panel, and thus sees a differently rendered picture. The two different pictures seen by the viewer's two eyes at different viewpoints form parallax, and a stereoscopic image is synthesized in the brain.
In the embodiment shown in FIG. 12, the one or more 3D video processing units are configured to generate images for display and render pixels such that a plurality of images corresponding to predetermined viewpoints are generated based on the images of the 3D video signal, and the pixels corresponding to the predetermined viewpoints in each pixel group are rendered in accordance with the generated plurality of images. In the illustrated embodiment, the predetermined viewpoints are determined based on real-time eye tracking data. More specifically, when it is detected that the eyeballs (left and right eyes) of the viewer are at predetermined viewpoints (spatial positions), images for those viewpoints are generated and the pixels of the pixel groups corresponding to those viewpoints are rendered. Specifically, in the embodiment shown in FIG. 12, it is detected that the first eyeball (e.g., the right eye) is located at viewpoint V4, and the second eyeball (e.g., the left eye) is located at viewpoint V8.
Correspondingly, an embodiment of the invention also provides a display method for a multi-view naked eye stereoscopic display, comprising the following steps: defining a plurality of pixel groups, each pixel group consisting of at least 3 pixels and being arranged corresponding to the multiple viewpoints; receiving a 3D video signal; generating a plurality of images corresponding to predetermined viewpoints, such as viewpoints V4 and V8, based on the images of the received 3D video signal; and rendering the corresponding pixels in each pixel group according to the generated plurality of images. In the illustrated embodiment, image generation and pixel rendering are performed for the predetermined viewpoints (V4 and V8).
The processing of the 3D video processing unit in the particular embodiment shown is described with combined reference to FIGS. 1B-1C and FIG. 12. The 3D video signal S1 received by the video signal interface is an image frame containing two pieces of content: a color image and a depth image. The 3D video processing unit takes the image information and the depth information of the received 3D video signal S1 as input and, based on the real-time eyeball data, renders 2 pictures at the viewing angles corresponding to the viewpoints V4 and V8 where the eyeballs are located. The content of each generated picture is then written into the pixel of each pixel group (e.g., PG1,1 and PGx,y) seen from the corresponding viewpoint (e.g., the 4th and 8th pixels).
Thus, the viewer's eyes at viewpoints V4 and V8 see pictures rendered at different angles, and parallax is generated to form the stereoscopic effect of 3D display.
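A toy sketch of rendering only the tracked viewpoints follows (Python; the pictures are reduced to placeholder values, and a single pixel row of three groups stands in for the panel — only the 4th and 8th pixel of each group is written):

```python
import numpy as np

NUM_VIEWS, GROUPS = 12, 3
row = np.zeros(NUM_VIEWS * GROUPS, dtype=int)

# Toy pictures for the two tracked viewpoints only (values are placeholders).
tracked = {4: 40, 8: 80}

for g in range(GROUPS):
    for v, value in tracked.items():
        row[g * NUM_VIEWS + (v - 1)] = value  # 4th and 8th pixel of each group

assert (row != 0).sum() == 2 * GROUPS         # only 2 of 12 pixels per group
```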
In some embodiments of the present invention, the aforementioned "resolution-lossless" embodiments may be combined with eye tracking to obtain new embodiments. For example, as described in connection with FIGS. 1B-1C and FIG. 12, the pictures generated above for viewpoints V4 and V8 are generated with a resolution that is "lossless" with respect to, and in particular equal to, that of the corresponding image frames of the received 3D video signal; in these embodiments, the pixels written accordingly also correspond substantially point-by-point to the resolution of the generated pictures (and thus of the images of the received 3D video signal).
In some embodiments, a process of increasing (multiplying) the resolution of the received 3D video signal, such as an interpolation process, or so-called preprocessing, may also be performed. As an illustrative example, a 2x line-resolution interpolation may be performed on both the color image and the depth image. The "resolution-lossless" and/or "point-to-point rendering" processing according to embodiments of the invention may then be combined with this to obtain new embodiments, e.g., to generate pictures for viewpoints V4 and V8 with resolutions corresponding to the 2x-interpolated images. It will be appreciated that "resolution-lossless" and/or "point-to-point rendering" processes, whether combined with interpolation or other resolution-increasing processes or taken by themselves, fall within the scope of the "resolution-lossless" and/or "point-to-point rendering" processing described herein. Picture generation combined with a resolution increase for a corresponding viewpoint may sometimes also be referred to herein as resolution-increased (multiplied) generation.
In some embodiments of the invention, an additional (pre-)processor may be provided to perform the resolution increase (multiplication) or interpolation; it may also be performed by the one or more 3D video processing units. Both fall within the scope of the invention.
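As a hedged illustration, the following sketch doubles the line resolution of a color and a depth image by simple neighbor averaging (Python; the document does not fix the interpolation method, so linear interpolation is an assumption of this sketch):

```python
import numpy as np

def upsample_rows_2x(img: np.ndarray) -> np.ndarray:
    """Double the line count; interior new lines average their two neighbors,
    the last line is duplicated."""
    up = np.repeat(img, 2, axis=0).astype(float)
    up[1:-1:2] = (img[:-1] + img[1:]) / 2.0
    return up

color = np.arange(12.0).reshape(4, 3)
depth = np.linspace(0.0, 1.0, 12).reshape(4, 3)
assert upsample_rows_2x(color).shape == (8, 3)
assert upsample_rows_2x(depth).shape == (8, 3)
```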
Furthermore, it will be appreciated that the described embodiments of rendering for predetermined viewpoints (fewer than all viewpoints) using real-time eye tracking data may be combined with, or substitute features of, many of the previously described embodiments to yield new embodiments. In particular, this embodiment may be combined with the features relating to optical data/pixel-viewpoint alignment data to obtain a new embodiment. This embodiment can also be adapted, without explicitly grouping pixels, to obtain a new embodiment.
Continuing with the embodiment shown in FIG. 13, it is generally similar to the embodiment shown in FIG. 12. The difference is that the predetermined viewpoints further comprise viewpoints adjacent to the viewpoint where the eyeball is located. For example, in the embodiment shown in FIG. 13, the predetermined viewpoints for which images are to be generated may further include viewpoints V3 and V5 and viewpoints V7 and V9, and the pixels corresponding to these viewpoints in the pixel groups are further rendered. In some embodiments, only the adjacent viewpoint on one side may be used as an additional predetermined viewpoint.
In some embodiments, only the pixels as described in FIG. 12 or 13 may be rendered, with the remaining pixels left unrendered. Preferably, for a liquid crystal display, the unrendered pixels may remain white or retain the color of the previous image frame. This reduces the computational load as much as possible.
Referring to FIGS. 12 and 13, a preferred embodiment of the present invention is described wherein the display comprises a self-luminous display panel, preferably a MICRO-LED display panel. In some embodiments of the present invention, the self-luminous display panel, such as a MICRO-LED display panel, is configured such that pixels that are not rendered do not emit light. This can greatly reduce the power consumed by the display screen, especially for multi-view ultra-high-definition displays.
With combined reference to FIGS. 1B-1C and FIG. 14, in one embodiment of the present invention, an autostereoscopic display system is provided, which may include a processor unit and a multi-view autostereoscopic display, the processor unit being communicatively coupled to the multi-view autostereoscopic display. In the illustrated embodiment, the autostereoscopic display system may further comprise an eye tracking device, for example in the form of a dual camera, communicating with the processor unit. As an alternative embodiment, the eye tracking device may be provided in the display, or the system or display may merely have a transmission interface capable of receiving real-time eye tracking data.
With continued reference to FIG. 1, the multi-view autostereoscopic display may comprise a display screen having a display panel and a grating (not labeled), a video signal interface for receiving a 3D video signal, and a 3D video processing unit. Referring to FIG. 2, in the illustrated embodiment, the display may have 12 viewpoints (V1-V12), but it is contemplated that it may have more or fewer viewpoints. In an embodiment of the present invention, the display may further optionally include a timing controller and/or a display driving chip, which may be provided integrally with the 3D video processing unit or provided independently. In some embodiments of the present invention, the display may be integrated with an eye tracking device that is directly communicatively coupled to the 3D video processing unit.
With continued reference to FIG. 1, the display panel may include a plurality of rows and columns of pixels and define a plurality of pixel groups. In the illustrated embodiment, only two exemplary pixel groups PG1,1 and PGx,y are shown for illustrative purposes; each pixel group is arranged corresponding to the multiple viewpoints and has a respective 12 pixels (P1-P12). The single-row, multi-column arrangement of pixels within a pixel group is an illustrative embodiment; other arrangements are contemplated, such as a single column of multiple rows, or multiple rows and multiple columns. As an illustrative description only, the aforementioned PGx,y may schematically denote the pixel group in the x-th row and y-th column of pixel groups.
The display behavior of this embodiment is described with combined reference to FIGS. 1 and 14. As mentioned above, the display may have 12 viewpoints V1-V12; at each viewpoint (spatial position), a viewer's eye sees the display of the corresponding pixel in each pixel group of the display panel, and thus sees a differently rendered picture. The two different pictures seen by the viewer's two eyes at different viewpoints form parallax, and a stereoscopic image is synthesized in the brain.
In the embodiment shown in FIG. 14, the one or more 3D video processing units are configured to generate images for display and render pixels such that a plurality of images corresponding to predetermined viewpoints are generated based on the images of the 3D video signal, and the pixels corresponding to the predetermined viewpoints in each pixel group are rendered in accordance with the generated plurality of images. In the illustrated embodiment, the predetermined viewpoints are determined based on real-time eye tracking data. More specifically, when it is detected that the eyeballs (left and right eyes) of the viewer lie between adjacent viewpoints, images for those adjacent viewpoints are generated and the pixels of the pixel groups corresponding to those viewpoints are rendered. Specifically, in the embodiment shown in FIG. 14, it is detected that the first eyeball (e.g., the right eye) is located between viewpoints V4 and V5, and the second eyeball (e.g., the left eye) is located between viewpoints V8 and V9. Accordingly, four images corresponding to viewpoints V4, V5 and V8, V9 may be generated, and the pixels corresponding to these four viewpoints in the pixel groups rendered.
Correspondingly, an embodiment of the invention also provides a display method for a multi-view naked eye stereoscopic display, comprising the following steps: defining a plurality of pixel groups, each pixel group consisting of at least 3 pixels and being arranged corresponding to the multiple viewpoints; receiving a 3D video signal; generating a plurality of images corresponding to predetermined viewpoints, such as viewpoints V4, V5 and V8, V9, based on the images of the received 3D video signal; and rendering the corresponding pixels in each pixel group according to the generated plurality of images.
The processing of the 3D video processing unit in the particular embodiment shown is described with combined reference to FIGS. 1 and 14. The 3D video signal S1 received by the video signal interface is an image frame containing two pieces of content: a color image and a depth image. The 3D video processing unit takes the image information and depth information of the received 3D video signal S1 as input and, based on the real-time eyeball data, renders 4 pictures at the viewing angles corresponding to the viewpoints V4, V5, V8 and V9 associated with the eyeballs. The content of each generated picture is then written into the pixels of each pixel group (e.g., PG1,1 and PGx,y) seen from the corresponding viewpoints (e.g., the 4th, 5th, 8th and 9th pixels).
Thus, the viewer's eyes positioned between viewpoints V4 and V5 and between viewpoints V8 and V9 see pictures rendered at different angles, and parallax is generated to form the stereoscopic effect of 3D display.
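A minimal sketch of selecting the predetermined viewpoints from a tracked eye position follows (Python; the fractional "viewpoint units" coordinate is an assumption of this sketch, not of the embodiments):

```python
def views_for_eye(eye_pos: float, num_views: int = 12) -> set[int]:
    """eye_pos in viewpoint units: 4.0 = exactly at V4, 4.5 = between V4/V5."""
    lo = int(eye_pos)
    if eye_pos == lo:
        return {lo}                      # FIG. 12 case: eye on a viewpoint
    return {lo, min(lo + 1, num_views)}  # FIG. 14 case: render both neighbors

assert views_for_eye(4.5) == {4, 5}
assert views_for_eye(8.0) == {8}
assert views_for_eye(4.5) | views_for_eye(8.5) == {4, 5, 8, 9}
```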
It will be appreciated that the described embodiments of rendering for predetermined viewpoints (fewer than all viewpoints) using real-time eye tracking data may be combined with, or substitute features of, many of the previously described embodiments to yield new embodiments. In particular, this embodiment may be combined with the features relating to optical data/pixel-viewpoint alignment data to obtain a new embodiment. This embodiment can also be adapted, without explicitly grouping pixels, to obtain a new embodiment.
With combined reference to FIGS. 1B-1C and FIG. 14, in another embodiment of the invention, an autostereoscopic display system is provided, which may comprise a processor unit and a multi-view autostereoscopic display. The difference of this embodiment is that the 3D video signal S1 received by the video signal interface is an image frame containing left and right parallax color image content. The 3D video processing unit thus takes the image frames of the received 3D video signal S1 containing left and right parallax color image content as input and, based on the real-time eyeball data, generates left-eye or right-eye parallax color images according to the eyeballs detected by the real-time eye tracking data. For example, for the viewpoints V4 and V5 where the right eye is located, two pictures are rendered based on the right parallax color image content of the 3D video signal S1; for the viewpoints V8 and V9 where the left eye is located, two pictures are rendered based on the left parallax color image content of the 3D video signal S1. The content of each generated picture is then written into the pixels of each pixel group (e.g., PG1,1 and PGx,y) seen from the corresponding viewpoints (e.g., the 4th, 5th, 8th and 9th pixels).
Thus, the viewer's eyes positioned between viewpoints V4 and V5 and between viewpoints V8 and V9 see pictures rendered at different angles, and parallax is generated to form the stereoscopic effect of 3D display.
With combined reference to FIGS. 1 and 15, in one embodiment of the present invention, an autostereoscopic display system may include a processor unit and a multi-view autostereoscopic display, the processor unit being communicatively connected to the multi-view autostereoscopic display. In the illustrated embodiment, the autostereoscopic display system may further comprise an eye tracking device, for example in the form of a dual camera, communicating with the processor unit. As an alternative embodiment, the eye tracking device may be provided in the display, or the system or display may merely have a transmission interface capable of receiving real-time eye tracking data.
With continued reference to FIG. 1, the multi-view autostereoscopic display may comprise a display screen having a display panel and a grating (not labeled), a video signal interface for receiving a 3D video signal, and a 3D video processing unit. Referring to FIG. 2, in the illustrated embodiment, the display may have 12 viewpoints (V1-V12), but it is contemplated that it may have more or fewer viewpoints. In an embodiment of the present invention, the display may further optionally include a timing controller and/or a display driving chip, which may be provided integrally with the 3D video processing unit or provided independently. In some embodiments of the present invention, the display may be integrated with an eye tracking device that is directly communicatively coupled to the 3D video processing unit.
With continued reference to FIG. 1, the display panel may include a plurality of rows and columns of pixels and define a plurality of pixel groups. In the illustrated embodiment, only two exemplary pixel groups PG1,1 and PGx,y are shown for illustrative purposes; each pixel group is arranged corresponding to the multiple viewpoints and has a respective 12 pixels (P1-P12). The single-row, multi-column arrangement of pixels within a pixel group is an illustrative embodiment; other arrangements are contemplated, such as a single column of multiple rows, or multiple rows and multiple columns. As an illustrative description only, the aforementioned PGx,y may schematically denote the pixel group in the x-th row and y-th column of pixel groups.
The display behavior of this embodiment is described with combined reference to FIGS. 1 and 15. As mentioned above, the display may have 12 viewpoints V1-V12; at each viewpoint (spatial position), a viewer's eye sees the display of the corresponding pixel in each pixel group of the display panel, and thus sees a differently rendered picture. The two different pictures seen by the viewer's two eyes at different viewpoints form parallax, and a stereoscopic image is synthesized in the brain.
In the embodiment shown in FIG. 15, the one or more 3D video processing units are configured to generate images for display and render pixels such that a plurality of images corresponding to predetermined viewpoints are generated based on the images of the 3D video signal, and the pixels corresponding to the predetermined viewpoints in each pixel group are rendered in accordance with the generated plurality of images. In the illustrated embodiment, the predetermined viewpoints are determined based on real-time eye tracking data. More specifically, when it is detected that the eyeballs (left and right eyes) of the viewer are at predetermined viewpoints (spatial positions), images for those viewpoints are generated and the pixels of the pixel groups corresponding to those viewpoints are rendered. Specifically, in the embodiment shown in FIG. 15, it is detected that the first eyeball (e.g., the right eye Er) is located at viewpoint V4, and the second eyeball (e.g., the left eye El) is located at viewpoint V8.
With continued reference to FIG. 15, when the real-time eye tracking data indicate that the viewer's eyes are moving, a plurality of images corresponding to the new predetermined viewpoints may be generated based on the next image (frame) of the 3D video signal, and the pixels corresponding to those viewpoints in each pixel group rendered according to the generated plurality of images. Specifically, in the embodiment shown in FIG. 15, it is then detected that the first eyeball (e.g., the right eye Er) has moved to viewpoint V6 and the second eyeball (e.g., the left eye El) to viewpoint V10. In the illustrated embodiment, the predetermined viewpoints may also be changed based on the real-time eye tracking data by means of a timing controller provided in the display.
Correspondingly, an embodiment of the invention also provides a display method for a multi-view naked eye stereoscopic display, comprising the following steps: defining a plurality of pixel groups, each pixel group consisting of at least 3 pixels and being arranged corresponding to the multiple viewpoints; receiving a 3D video signal; generating a plurality of images corresponding to predetermined viewpoints based on the images of the received 3D video signal; and rendering the corresponding pixels in each pixel group according to the generated plurality of images. The method further comprises the steps of adjusting the predetermined viewpoints based on the real-time eye tracking data and generating images and rendering pixels for the new predetermined viewpoints. In the illustrated embodiment, image generation and pixel rendering are performed for the currently predetermined viewpoints V4, V8 or V6, V10 based on the real-time eye tracking data.
The processing of the 3D video processing unit in the particular embodiment shown is described with combined reference to FIGS. 1 and 15. The 3D video signal S1 received by the video signal interface is an image frame containing left and right parallax color image content. Based on the real-time eyeball data, left-eye or right-eye parallax color images are generated according to the eyeballs detected by the real-time eye tracking data.
For example, at a first instant, for the viewpoint V4 where the right eye is located, a picture is rendered based on the right parallax color image content of the 3D video signal S1; for the viewpoint V8 where the left eye is located, a picture is rendered based on the left parallax color image content of the 3D video signal S1. The content of each generated picture is then written into the pixel of each pixel group (e.g., PG1,1 and PGx,y) seen from the corresponding viewpoint (e.g., the 4th and 8th pixels).
At a second instant, for the viewpoint V6 where the right eye is located, a picture is rendered based on the right parallax color image content of the 3D video signal S1; for the viewpoint V10 where the left eye is located, a picture is rendered based on the left parallax color image content of the 3D video signal S1. The content of each generated picture is then written into the pixel of each pixel group (e.g., PG1,1 and PGx,y) seen from the corresponding viewpoint (e.g., the 6th and 10th pixels).
Therefore, even while in motion, the viewer's eyes still see, in real time, pictures rendered at different angles, and parallax is generated to form the stereoscopic effect of 3D display.
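The frame-to-frame change of written pixels can be illustrated as follows (Python; a single pixel row and the 12-pixel group size are toy assumptions of this sketch):

```python
def written_pixels(eye_views, groups, group_size=12):
    """Panel columns written for the given per-eye viewpoints (one row)."""
    return sorted(g * group_size + (v - 1)
                  for g in range(groups) for v in eye_views)

assert written_pixels((4, 8), groups=1) == [3, 7]    # first instant
assert written_pixels((6, 10), groups=1) == [5, 9]   # after the eyes move
```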
With combined reference to FIGS. 1 and 16, in one embodiment of the present invention, an autostereoscopic display system may include a processor unit and a multi-view autostereoscopic display, the processor unit being communicatively coupled to the multi-view autostereoscopic display. In the illustrated embodiment, the autostereoscopic display system may further comprise an eye tracking device, for example in the form of a dual camera, communicating with the processor unit. As an alternative embodiment, the eye tracking device may be provided in the display, or the system or display may merely have a transmission interface capable of receiving real-time eye tracking data.
With continued reference to FIG. 1, the multi-view autostereoscopic display may comprise a display screen having a display panel and a grating (not labeled), a video signal interface for receiving a 3D video signal, and a 3D video processing unit. Referring to FIG. 2, in the illustrated embodiment, the display may have 12 viewpoints (V1-V12), but it is contemplated that it may have more or fewer viewpoints. In an embodiment of the present invention, the display may further optionally include a timing controller and/or a display driving chip, which may be provided integrally with the 3D video processing unit or provided independently. In some embodiments of the present invention, the display may be integrated with an eye tracking device that is directly communicatively coupled to the 3D video processing unit.
With continued reference to FIG. 1, the display panel may include a plurality of rows and columns of pixels and define a plurality of pixel groups. In the illustrated embodiment, only two exemplary pixel groups PG1,1 and PGx,y are shown for illustrative purposes; each pixel group is arranged corresponding to the multiple viewpoints and has a respective 12 pixels (P1-P12). The single-row, multi-column arrangement of pixels within a pixel group is an illustrative embodiment; other arrangements are contemplated, such as a single column of multiple rows, or multiple rows and multiple columns. As an illustrative description only, the aforementioned PGx,y may schematically denote the pixel group in the x-th row and y-th column of pixel groups.
The display behavior of this embodiment is described with combined reference to FIGS. 1 and 16. As mentioned above, the display may have 12 viewpoints V1-V12; at each viewpoint (spatial position), a viewer's eye sees the display of the corresponding pixel in each pixel group of the display panel, and thus sees a differently rendered picture. The two different pictures seen by the viewer's two eyes at different viewpoints form parallax, and a stereoscopic image is synthesized in the brain.
In the embodiment shown in FIG. 16, the one or more 3D video processing units are configured to generate images for display and render pixels such that a plurality of images corresponding to predetermined viewpoints are generated based on the images of the 3D video signal, and the pixels corresponding to the predetermined viewpoints in each pixel group are rendered in accordance with the generated plurality of images. In the illustrated embodiment, there are a plurality of viewers, for example two. Based on the positions of the eyeballs of the different viewers, images are rendered for the corresponding viewpoints and written into the corresponding pixels of the pixel groups.
Correspondingly, an embodiment of the invention also provides a display method for a multi-view naked eye stereoscopic display, comprising the following steps: defining a plurality of pixel groups, each pixel group consisting of at least 3 pixels and being arranged corresponding to the multiple viewpoints; receiving a 3D video signal; generating a plurality of images corresponding to predetermined viewpoints, such as viewpoints V4 and V6 corresponding to the eyes of a first user and viewpoints V8 and V10 corresponding to the eyes of a second user, based on the images of the received 3D video signal; and rendering the corresponding pixels in each pixel group according to the generated plurality of images.
The processing of the 3D video processing unit in the particular embodiment shown is described with combined reference to FIGS. 1 and 16. The 3D video signal S1 received by the video signal interface is an image frame containing both a color image and a depth image. The 3D video processing unit takes the image information and depth information of the received 3D video signal S1 as input and, based on the real-time eyeball data, renders 4 pictures at the viewing angles corresponding to viewpoints V4 and V6 for the eyes of the first user and viewpoints V8 and V10 for the eyes of the second user. The content of each generated picture is then written into the pixels of each pixel group (e.g., PG1,1 and PGx,y) seen from the corresponding viewpoints (e.g., the 4th, 6th, 8th and 10th pixels).
Therefore, each viewer sees the rendered pictures corresponding to his or her own viewing angle, and parallax is generated to form the stereoscopic effect of 3D display.
With combined reference to FIGS. 1 and 16, in another embodiment of the invention, an autostereoscopic display system is provided, which may comprise a processor unit and a multi-view autostereoscopic display. The difference of this embodiment is that the 3D video signal S1 received by the video signal interface is an image frame containing left and right parallax color image content. The 3D video processing unit thus takes the image frames of the received 3D video signal S1 containing left and right parallax color image content as input and, based on the real-time eyeball data, generates left-eye or right-eye parallax color images according to the eyeballs detected by the real-time eye tracking data. For example, for the viewpoint V4 where the right eye of the first user is located and the viewpoint V8 where the right eye of the second user is located, two pictures are rendered based on the right parallax color image content of the 3D video signal S1; for the viewpoint V6 where the left eye of the first user is located and the viewpoint V10 where the left eye of the second user is located, two pictures are rendered based on the left parallax color image content of the 3D video signal S1. The content of each generated picture is then written into the pixels of each pixel group (e.g., PG1,1 and PGx,y) seen from the corresponding viewpoints (e.g., the 4th, 6th, 8th and 10th pixels).
Therefore, each viewer sees the rendered pictures corresponding to his or her own viewing angle, and parallax is generated to form the stereoscopic effect of 3D display.
With combined reference to FIGS. 16 and 17, in another embodiment of the invention, an autostereoscopic display system is provided, which may comprise a processor unit and a multi-view autostereoscopic display. The display is configured to receive multiple signal inputs; in the embodiment shown in FIG. 17, two paths: S1 (left and right parallax images) and S2 (color image and depth image).
With continued reference to FIG. 16, for example, a first user (User1) desires to see the left/right parallax signal S1, while a second user (User2) desires to see the color-plus-depth signal S2. Thus, according to the positions (viewpoints V4 and V6) of the eyeballs (Er and El) of the first user and the positions (viewpoints V8 and V10) of the eyeballs (Er and El) of the second user, the 3D video processing unit generates rendered pictures for the corresponding viewpoints and writes the content of each generated picture into the pixels of each pixel group (e.g., PG1,1 and PGx,y) seen from the corresponding viewpoints (e.g., the 4th, 6th, 8th and 10th pixels).
Therefore, each viewer sees the rendered pictures corresponding to his or her own viewing angle, parallax is generated to form the stereoscopic effect of 3D display, and different users can watch different video content.
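A sketch of routing each tracked user to its chosen input signal follows (Python; the user-to-signal assignment and viewpoint numbers reproduce the example of FIG. 17, while the data structure itself is an invention of this sketch):

```python
users = {
    "User1": {"signal": "S1", "views": (4, 6)},   # left/right parallax input
    "User2": {"signal": "S2", "views": (8, 10)},  # color-plus-depth input
}

plan = {}  # viewpoint -> source signal used to render its picture
for name, u in users.items():
    for v in u["views"]:
        plan[v] = u["signal"]

assert plan == {4: "S1", 6: "S1", 8: "S2", 10: "S2"}
```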
In some embodiments of the present invention, the above-described embodiments may have specific implementations. For example, for a naked eye 3D display with 12 viewpoints, the video signal interface receives a 1920x1200-resolution MIPI signal, which is converted into a mini-LVDS signal after entering the timing controller. The conventional approach is to distribute the output signals to a plurality of display driver chips for the panel. In this regard, in one embodiment of the present invention, a 3D video processing unit (or group of units) in the form of an FPGA or ASIC is provided upstream of the display driving chips.
The resolution of the display screen is (1920x12)x1200, and the signal received by the interface is processed to achieve lossless extension of the resolution for each viewpoint, i.e., 12 times the resolution of the received video.
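The resolution arithmetic can be checked directly (Python; a trivial sanity check, not an implementation):

```python
views, sig_w, sig_h = 12, 1920, 1200
panel_w, panel_h = sig_w * views, sig_h            # 23040 x 1200 panel pixels
assert panel_w * panel_h == views * sig_w * sig_h  # 12x the input pixel count
```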
In some embodiments of the present invention, the video signal interface may take multiple forms, including but not limited to a High Definition Multimedia Interface (HDMI), e.g., of version 1.2 or version 2.0, or a wireless interface such as WiFi, Bluetooth or a cellular network.
In some embodiments of the invention, the display, or the display system and display method, may incorporate other image processing techniques, such as color adjustment of the video signal, including color space rotation (Color Tint) adjustment and color gain (Color Gain) adjustment, and brightness adjustment, including contrast (Contrast) adjustment, drive gain (Drive Gain) adjustment and GAMMA curve adjustment.
In some embodiments of the invention, implementations of display systems according to the invention are described. In one embodiment, as shown in FIG. 18, the display system 1800 is, or is configured as part of, a cellular telephone. In some embodiments, the processor unit of the display system may be provided by, or integrated in, a processor of the cellular telephone, such as an application processor (AP). In some embodiments, the eye tracking device may comprise or be configured as a camera of the cellular telephone, in particular a front-facing camera. In some preferred embodiments, the eye tracking device according to the invention may comprise or be constructed as a front-facing camera combined with a structured-light camera.
In some embodiments, the display system may be configured as a tablet, personal computer, or wearable device having a processor unit.
In one embodiment of the invention, the autostereoscopic display may be a digital television (smart or non-smart). In some embodiments of the invention, as shown in FIG. 19, the display system 1900 may be configured as the autostereoscopic display 1904 with a set-top box 1902, or a screen-casting cell phone or tablet, connected to it, in which the processor unit is included.
In an alternative embodiment, the naked eye stereoscopic display is a smart television and is integrated with the processor unit.
In some embodiments of the invention, the autostereoscopic display system is configured as a smart home system or a part thereof. In the embodiment shown in FIG. 20, the smart home system 2000 (or autostereoscopic display system) may include an intelligent gateway 2002 or central controller including or integrated with a processor unit, an autostereoscopic display 2004, and an eye tracking device, such as a dual camera 2006, for acquiring eye tracking data. By way of example, the eye tracking device may take other forms, including a single camera, or a combination of a camera and a depth-of-field camera, and the like. In the illustrated embodiment, the display and the eye tracking device are both wirelessly connected to the smart gateway or central controller, such as via a WiFi connection. Other forms of connection are conceivable.
In some embodiments of the invention, the autostereoscopic display system is configured as, or part of, an entertainment interaction system.
FIG. 21 shows an autostereoscopic display system according to a preferred embodiment of the present invention, configured as, or as part of, an entertainment interaction system 2100. The entertainment interaction system 2100 comprises an autostereoscopic display 2104 and an eye tracking device, such as a dual camera 2106, for obtaining eye tracking data; the processor unit is not shown. The entertainment interaction system 2100 is configured for use by multiple users, in the depicted embodiment two users. In the illustrated embodiment, the autostereoscopic display 2104 of the entertainment interaction system 2100 generates images based on the eye tracking data of the eye tracking device, such as the dual camera 2106, and writes the pixels corresponding to the viewpoints.
In a more preferred embodiment, the entertainment interaction system 2100 may also incorporate the multiple-signal-input embodiments to obtain a new embodiment. For example, in one embodiment, based on user interaction (e.g., based on data detected by the eye tracking device or another sensor), the processor unit generates multiple, e.g., two, personalized video signals accordingly, and displays them using the display and display method according to embodiments of the present invention.
The entertainment interaction system according to the embodiment of the invention can thus provide users with an extremely high degree of freedom and interactivity.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by various possible entities. A typical implementation entity is a computer or a processor or other component thereof. In particular, the computer may be, for example, a personal computer, a laptop computer, a vehicle-mounted human-computer interaction device, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, a smart television, an Internet of Things system, a smart home, an industrial computer, a single-chip microcomputer system, or a combination of these devices. In a typical configuration, a computer may include one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM).
The methods, programs, systems, apparatuses, etc., in embodiments of the present invention may be performed or implemented in a single computer or in multiple networked computers, or may be practiced in distributed computing environments. In such distributed computing environments, tasks are performed by remote processing devices that are linked through a communications network.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects.
Those skilled in the art will appreciate that the functional blocks/units or controllers and the associated method steps set forth in the above embodiments may be implemented in software, hardware, or a combination of software and hardware. For example, they may be implemented purely by computer-readable program code, or the method steps may be logically programmed so that a controller performs the same functions, partly or wholly, in hardware, including but not limited to logic gates, switches, application-specific integrated circuits, programmable logic controllers (e.g., FPGAs) and embedded microcontrollers.
In some embodiments of the invention, the components of the apparatus are described in the form of functional modules/units. It is contemplated that the various functional modules/units may be implemented in one or more "combined" functional modules/units and/or one or more software and/or hardware components. It is also conceivable that a single functional module/unit is implemented by a plurality of sub-functional modules or combinations of sub-units and/or by a plurality of software and/or hardware components. The division into functional modules/units may be only a logical division of functions; in particular implementations, multiple modules/units may be combined or may be integrated into another system. Furthermore, the connection of the modules, units, devices, systems and their components described herein includes direct or indirect connections, encompassing possible electrical, mechanical and communication connections, including in particular wired or wireless connections between various interfaces, including but not limited to HDMI, Thunderbolt, USB, WiFi and cellular networks.
In the embodiments of the present invention, the technical features, the flowcharts and/or the block diagrams of the methods, the programs may be applied to corresponding apparatuses, devices, systems and modules, units and components thereof. Conversely, various embodiments and features of apparatuses, devices, systems and modules, units, components thereof may be applied to methods, programs according to embodiments of the present invention. For example, the computer program instructions may be loaded onto a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, having corresponding functions or features, which implement one or more of the procedures of the flowcharts and/or one or more blocks of the block diagrams.
Methods, programs, and computer program instructions according to embodiments of the present invention may be stored in a computer-readable memory or medium that can direct a computer or other programmable data processing apparatus to function in a particular manner. Embodiments of the invention also relate to a readable memory or medium having stored thereon methods, programs, instructions that may implement embodiments of the invention.
Storage media include articles of manufacture that are permanent and non-permanent, removable and non-removable, and that may implement any method or technology for storage of information. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
Unless specifically stated otherwise, the actions or steps of a method, program or process described in accordance with an embodiment of the present invention need not be performed in a particular order and still achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
While various embodiments of the invention have been described herein, the description of the various embodiments is not intended to be exhaustive or to limit the invention to the precise forms disclosed, and features and components that are the same or similar to one another may be omitted for clarity and conciseness. As used herein, "one embodiment," "some embodiments," "examples," "specific examples," or "some examples" are intended to apply to at least one embodiment or example, but not to all embodiments, in accordance with the present invention. And the above terms are not necessarily meant to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics of the various embodiments may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
As used herein, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes those elements but does not exclude the presence of other elements not expressly listed. For purposes of this disclosure and unless specifically stated otherwise, "a" means "one or more." To the extent that the term "includes" or "including" is used in this specification or the claims, it is intended to be inclusive in a manner similar to the term "comprising" as that term is interpreted when employed as a transitional word. Furthermore, to the extent that the term "or" is used (e.g., A or B), it means "A or B or both." When the applicant intends to indicate "only A or B but not both," the phrase "only A or B but not both" will be used. Thus, use of the term "or" is inclusive and not exclusive. See Bryan A. Garner, A Dictionary of Modern Legal Usage 624 (2d ed. 1995).
Exemplary systems and methods of the present invention have been particularly shown and described with reference to the foregoing embodiments, which are merely illustrative of the best modes for carrying out the systems and methods. It will be appreciated by those skilled in the art that various changes in the embodiments of the systems and methods described herein may be made in practicing the systems and/or methods without departing from the spirit and scope of the invention as defined in the appended claims. It is intended that the following claims define the scope of the system and method and that the system and method within the scope of these claims and their equivalents be covered thereby. The above description of the present system and method should be understood to include all new and non-obvious combinations of elements described herein, and claims may be presented in this or a later application to any new and non-obvious combination of elements. Moreover, the foregoing embodiments are illustrative, and no single feature or element is essential to all possible combinations that may be claimed in this or a later application.

Claims (46)

1. A multi-view autostereoscopic display, comprising a display screen having a display panel and a grating, a video signal interface for receiving a 3D video signal, and one or more 3D video processing units, wherein the display panel comprises a plurality of rows and a plurality of columns of pixels and defines a plurality of pixel groups, each pixel group being composed of at least 3 pixels and arranged in correspondence with the multiple viewpoints, and wherein the one or more 3D video processing units are configured to generate a plurality of images corresponding to all viewpoints or to a predetermined viewpoint based on an image of the 3D video signal and to render the corresponding pixels in each pixel group from the generated plurality of images.
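By way of non-limiting illustration of the pixel-group rendering recited in claim 1 (and not as part of the claims), the following minimal Python sketch assumes a 6-viewpoint display whose pixel groups are laid out horizontally with one pixel per viewpoint; the names, sizes, and interleaving layout are assumptions, not the claimed implementation.

    import numpy as np

    N_VIEWS = 6                        # assumed viewpoint count (>= 3)
    GROUP_ROWS, GROUP_COLS = 270, 480  # assumed grid of pixel groups

    def render(view_images):
        """Fill pixel v of every group from view image v, so every
        viewpoint keeps the full per-view resolution."""
        panel = np.zeros((GROUP_ROWS, GROUP_COLS * N_VIEWS, 3), np.uint8)
        for v, img in enumerate(view_images):
            panel[:, v::N_VIEWS, :] = img
        return panel

    # one flat test image per viewpoint
    views = [np.full((GROUP_ROWS, GROUP_COLS, 3), 40 * v, np.uint8)
             for v in range(N_VIEWS)]
    panel = render(views)
    print(panel.shape)  # (270, 2880, 3)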
2. The multi-view autostereoscopic display according to claim 1, wherein the mutual arrangement positions of the plurality of pixel groups are adjusted or determined based on optical relationship data of the pixels and the grating and/or correspondence data between the pixels of the display panel and the viewpoints.
3. The multi-view autostereoscopic display according to claim 2, wherein the grating comprises a cylindrical prism grating, the optical relationship of the pixels to the grating comprising an alignment relationship of the pixels to the cylindrical prism grating and/or a refraction state of the cylindrical prism grating relative to the corresponding pixels.
4. The multi-view autostereoscopic display according to claim 2, wherein the grating comprises a front and/or rear parallax barrier grating comprising light blocking portions and light transmitting portions, the optical relationship of the pixels to the grating comprising an alignment relationship of the pixels with the corresponding light transmitting portions of the parallax barrier grating.
5. The multi-view autostereoscopic display of claim 2, wherein the correspondence of pixels to viewpoints is calculated or determined based on the optical relationship of the pixels to the grating.
6. The multi-view autostereoscopic display of claim 2, wherein the correspondence of pixels to viewpoints is determined by measurement at each viewpoint position.
7. The multi-view autostereoscopic display according to claim 2, further comprising a memory storing the optical relationship data and/or the pixel-to-viewpoint correspondence data, the one or more 3D video processing units being configured to read the data in the memory.
8. The multi-view autostereoscopic display according to claim 1, wherein the received 3D video signal comprises a received depth image and rendered color image, and the generated images comprise a generated depth image and rendered color image.
9. The multi-view autostereoscopic display of claim 1, wherein the received 3D video signal comprises a received depth image and rendered color image, and the generated images comprise a generated first parallax image and a generated second parallax image.
10. The multi-view autostereoscopic display according to claim 1, wherein the received 3D video signal comprises received first and second parallax images, and the generated images comprise generated first and second parallax images.
11. The multi-view autostereoscopic display according to claim 1, wherein the received 3D video signal comprises received first and second parallax images, and the generated images comprise a generated depth image and rendered color image.
12. The multi-view autostereoscopic display of claim 1, wherein a plurality of 3D video processing units are provided, each 3D video processing unit being assigned a plurality of rows or columns of pixels and configured to render its assigned rows or columns of pixels.
13. The multi-view autostereoscopic display according to claim 1, wherein the one or more 3D video processing units are FPGA or ASIC chips or chip sets.
14. The multi-view autostereoscopic display according to any one of claims 1 to 13, wherein the 3D video signal is a single-channel signal, the one or more 3D video processing units being configured to generate a plurality of images corresponding to all viewpoints based on the single-channel 3D video signal and to render all pixels in each pixel group.
15. The multi-view autostereoscopic display according to any one of claims 1 to 13, wherein the 3D video signal is a multi-channel signal, the number of viewpoints being N and the number of signal channels being M, with N ≥ M, the one or more 3D video processing units being configured to generate N images corresponding to all viewpoints and to render all pixels in each pixel group, each generated image being generated based on one of the M channels.
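A small illustrative sketch of the channel-to-viewpoint assignment in claim 15, where an M-channel signal feeds N ≥ M viewpoints; the even round-robin spread below is only an assumption, since the claim merely requires each generated image to be based on one of the M channels.

    def channel_for_view(v, n_views, m_channels):
        # spread the M source channels evenly across the N viewpoints
        return v * m_channels // n_views

    # e.g. N = 6 viewpoints fed by M = 2 channels
    print([channel_for_view(v, 6, 2) for v in range(6)])  # [0, 0, 0, 1, 1, 1]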
16. The multi-view autostereoscopic display according to any one of claims 1 to 13, further comprising an eye tracking device or an eye tracking data interface for acquiring eye tracking data.
17. The multi-view autostereoscopic display of claim 16, wherein the one or more 3D video processing units are configured to generate a plurality of images corresponding to predetermined viewpoints based on images of the 3D video signal and to render the corresponding pixels in the respective pixel groups from the generated plurality of images, the predetermined viewpoints being determined by real-time eye tracking data of a viewer.
18. The multi-view autostereoscopic display of claim 17, wherein the one or more 3D video processing units are configured to, when each eye of the viewer is located at a single viewpoint, generate an image corresponding to that single viewpoint based on an image of the 3D video signal and render the pixel corresponding to that single viewpoint in each pixel group.
19. The multi-view autostereoscopic display of claim 18, wherein the one or more 3D video processing units are further configured to generate images corresponding to viewpoints adjacent to the single viewpoint and to render the pixels in each pixel group corresponding to those adjacent viewpoints.
20. The multi-view autostereoscopic display of claim 17, wherein the one or more 3D video processing units are configured to, when each eye of the viewer is positioned between two viewpoints, generate images corresponding to those two viewpoints based on images of the 3D video signal and render the pixels corresponding to the two viewpoints in each pixel group.
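The view selection of claims 17 to 20 can be pictured with the following sketch, in which the tracked eye position has already been converted to a fractional viewpoint index (an assumed preprocessing step not recited here): an eye sitting on a viewpoint yields that single viewpoint (claim 18), while an eye between two viewpoints yields both (claim 20). All names are illustrative.

    def views_to_render(eye_view_index, n_views, eps=1e-3):
        nearest = round(eye_view_index)
        if abs(eye_view_index - nearest) < eps and 0 <= nearest < n_views:
            return [nearest]             # eye located at a single viewpoint
        lo = int(eye_view_index)         # eye positioned between two viewpoints
        return [v for v in (lo, lo + 1) if 0 <= v < n_views]

    print(views_to_render(3.0, 6))  # [3]
    print(views_to_render(3.4, 6))  # [3, 4]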
21. The multi-view autostereoscopic display according to claim 17, wherein the 3D video signal is a single-channel signal, and the one or more 3D video processing units are configured to, when there are a plurality of viewers, generate the plurality of images based on the single-channel signal and render the corresponding pixels in each pixel group for the viewpoints corresponding to the eye positions of each viewer.
22. The multi-view autostereoscopic display according to claim 17, wherein the 3D video signal is a multi-channel signal, and the one or more 3D video processing units are configured to, when there are a plurality of viewers, generate the plurality of images based on different 3D video signals and render the corresponding pixels in each pixel group for the viewpoints corresponding to the eye positions of at least some of the viewers.
23. The multi-view autostereoscopic display according to any one of claims 17 to 22, wherein the display panel is a self-emissive display panel configured such that pixels that are not rendered do not emit light.
24. The multi-view autostereoscopic display according to claim 23, wherein the display panel is a Micro-LED display panel.
25. A multi-view autostereoscopic display, comprising a display screen having a display panel and a grating, a video signal interface for receiving a 3D video signal, and one or more 3D video processing units, wherein the display panel comprises a plurality of rows and a plurality of columns of pixels and defines a plurality of pixel groups, each pixel group being composed of at least 3 pixels and arranged in correspondence with the multiple viewpoints, wherein the plurality of pixel groups have irregular mutual arrangement positions adjusted or determined based on the optical relationship between the pixels and the grating and/or the correspondence data between the pixels of the display panel and the viewpoints, and wherein the one or more 3D video processing units are configured to render the corresponding pixels in each pixel group.
26. The multi-view autostereoscopic display of claim 25, wherein the grating comprises a cylindrical prism grating, the optical relationship of the pixels to the grating comprising an alignment relationship of the pixels to the cylindrical prism grating and/or a refraction state of the cylindrical prism grating relative to the corresponding pixels.
27. The multi-view autostereoscopic display according to claim 25, wherein the grating comprises a front and/or rear parallax barrier grating comprising light blocking portions and light transmitting portions, the optical relationship of the pixels to the grating comprising an alignment relationship of the pixels with the corresponding light transmitting portions of the parallax barrier grating.
28. The multi-view autostereoscopic display of claim 25, wherein the correspondence of pixels to viewpoints is calculated or determined based on the optical relationship of the pixels to the grating.
29. The multi-view autostereoscopic display of claim 25, wherein the correspondence of pixels to viewpoints is determined by measurement at each viewpoint position.
30. The multi-view autostereoscopic display according to any of claims 25 to 29, further comprising a memory storing the optical relationship data and/or pixel-to-viewpoint correspondence data, the one or more 3D video processing units configured to read data in the memory.
31. A multi-view autostereoscopic display, comprising a display screen and a memory, wherein the display screen has a display panel and a grating, the display panel comprises a plurality of rows and a plurality of columns of pixels, and the memory stores optical relationship data of each pixel of the display panel with the grating and/or correspondence data of each pixel of the display panel with a viewpoint.
32. The multi-view autostereoscopic display of claim 31, wherein the grating comprises a cylindrical prism grating, the optical relationship of the pixels to the grating comprising an alignment relationship of the pixels to the cylindrical prism grating and/or a refraction state of the cylindrical prism grating relative to the corresponding pixels.
33. The multi-view autostereoscopic display according to claim 31, wherein the grating comprises a front and/or rear parallax barrier grating comprising light blocking portions and light transmitting portions, the optical relationship of the pixels to the grating comprising an alignment relationship of the pixels with the corresponding light transmitting portions of the parallax barrier grating.
34. The multi-view autostereoscopic display of claim 31, wherein the correspondence of pixels to viewpoints is calculated or determined based on the optical relationship of the pixels to the grating.
35. The multi-view autostereoscopic display of claim 31, wherein the correspondence of pixels to viewpoints is determined by measurement at each viewpoint position.
36. The multi-view autostereoscopic display according to any one of claims 31 to 35, further comprising a video signal interface for receiving a 3D video signal and one or more 3D video processing units, wherein the one or more 3D video processing units are configured to generate a plurality of images corresponding to some or all of the viewpoints based on the received video signal, and are further configured to read the optical relationship data of each pixel of the display panel with the grating and/or the correspondence data of each pixel of the display panel with the viewpoints and to render the pixels corresponding to those viewpoints based on the data.
37. An autostereoscopic display system, comprising a processor unit and a multi-view autostereoscopic display according to any one of claims 1 to 36, the processor unit being communicatively connected to the multi-view autostereoscopic display.
38. The autostereoscopic display system of claim 37, wherein the autostereoscopic display system is configured as a smart television having the processor unit; or the autostereoscopic display system is a smart cellular phone, tablet computer, personal computer, or wearable device; or the autostereoscopic display system comprises a set-top box, or a cellular phone or tablet computer capable of screen casting, serving as the processor unit, and a digital television serving as the multi-view autostereoscopic display, wherein the digital television is in wired or wireless connection with the set-top box, cellular phone, or tablet computer; or the autostereoscopic display system is constructed as a smart home system or a part thereof, wherein the processor unit comprises a smart gateway or a central controller of the smart home system, and the smart home system further comprises an eye tracking device for acquiring eye tracking data; or the autostereoscopic display system is configured as an entertainment interaction system or a part thereof.
39. The autostereoscopic display system of claim 38, wherein the entertainment interaction system is configured for use by multiple viewers and to generate, based on the multiple users, a multi-channel 3D video signal for transmission to the multi-view autostereoscopic display.
40. A display method for a multi-view autostereoscopic display, the display comprising a display screen having a display panel and a grating, wherein the display panel comprises a plurality of rows and a plurality of columns of pixels, the method comprising the following steps:
defining a plurality of pixel groups, each pixel group being composed of at least 3 pixels and arranged in correspondence with the multiple viewpoints;
receiving a 3D video signal;
generating a plurality of images corresponding to all viewpoints or a predetermined viewpoint based on the images of the received 3D video signal;
and rendering corresponding pixels in each pixel group according to the generated plurality of images.
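An illustrative end-to-end pipeline for the steps of claim 40, offered only as a sketch: the stubbed signal source, the naive constant-shift view synthesis, and the interleaving layout are all assumptions standing in for whatever the 3D video processing units actually do.

    import numpy as np

    N_VIEWS, H, W = 6, 270, 480  # assumed viewpoint count and per-view size

    def receive_3d_signal():
        # stub for the video signal interface: a color frame plus a depth map
        return np.zeros((H, W, 3), np.uint8), np.zeros((H, W), np.float32)

    def generate_views(color, depth):
        # placeholder view synthesis: a constant horizontal shift per view,
        # ignoring depth (real depth-based rendering would shift per pixel)
        center = (N_VIEWS - 1) / 2
        return [np.roll(color, int(2 * (v - center)), axis=1)
                for v in range(N_VIEWS)]

    def render(views):
        # interleave: pixel v of each group takes its color from view image v
        panel = np.zeros((H, W * N_VIEWS, 3), np.uint8)
        for v, img in enumerate(views):
            panel[:, v::N_VIEWS, :] = img
        return panel

    color, depth = receive_3d_signal()            # step: receive the 3D video signal
    panel = render(generate_views(color, depth))  # steps: generate, then render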
41. The display method according to claim 40, wherein the step of defining the plurality of pixel groups comprises: adjusting or determining the mutual arrangement positions of the plurality of pixel groups based on the optical relationship data of the pixels and the grating and/or the correspondence data between the pixels of the display panel and the viewpoints.
42. The display method according to claim 40 or 41, further comprising the step of receiving or reading real-time eye tracking data of a viewer; wherein the generating step comprises determining the predetermined viewpoint based on the real-time eye tracking data of the viewer, and the rendering step comprises rendering the pixels corresponding to the predetermined viewpoint in each pixel group.
43. A display method for a multi-view autostereoscopic display, the display comprising a display screen having a display panel and a grating, wherein the display panel comprises a plurality of rows and a plurality of columns of pixels, the method comprising the following steps:
acquiring optical relationship data of each pixel of the display panel with the grating and/or correspondence data of each pixel of the display panel with a viewpoint;
receiving a 3D video signal;
generating a plurality of images corresponding to all viewpoints or a predetermined viewpoint based on the images of the received 3D video signal;
rendering corresponding pixels from the generated plurality of images,
wherein the pixels to be rendered are determined based on the acquired optical relationship data and/or the correspondence data of each pixel with a viewpoint.
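A sketch of the lookup-driven rendering in claim 43: stored correspondence data, here a fabricated per-pixel viewpoint map, decides which generated image each physical pixel samples. All sizes and the map's contents are illustrative assumptions, not the claimed data format.

    import numpy as np

    H, W, N_VIEWS = 270, 2880, 6

    # correspondence data: the viewpoint index of every physical pixel
    # (a perfectly regular panel; a measured map may deviate per pixel)
    view_map = np.tile(np.arange(W) % N_VIEWS, (H, 1))

    def render_from_map(view_images, view_map):
        panel = np.zeros(view_map.shape + (3,), np.uint8)
        for v, img in enumerate(view_images):
            # stretch each view image to panel width, then copy only the
            # pixels that the map assigns to viewpoint v
            wide = np.repeat(img, view_map.shape[1] // img.shape[1], axis=1)
            mask = view_map == v
            panel[mask] = wide[mask]
        return panel

    views = [np.full((H, W // N_VIEWS, 3), 40 * v, np.uint8)
             for v in range(N_VIEWS)]
    panel = render_from_map(views, view_map)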
44. The display method according to claim 43, wherein the step of acquiring data comprises measuring, as the optical relationship data, alignment data of each pixel with the grating and/or the refraction state of the cylindrical prism grating relative to each pixel.
45. The display method according to claim 43, wherein the step of acquiring data comprises calculating or determining the correspondence of pixels to viewpoints based on the optical relationship of the pixels to the grating, or determining the correspondence of pixels to viewpoints by measurement at each viewpoint position.
46. A pixel group arrangement method for a multi-view autostereoscopic display, comprising the following steps:
providing a display screen having a display panel and a grating, wherein the display panel comprises a plurality of rows and a plurality of columns of pixels;
acquiring optical relationship data of each pixel of the display panel with the grating and/or correspondence data of each pixel of the display panel with a viewpoint;
defining, based on the acquired optical relationship data and/or the correspondence data of each pixel with a viewpoint, a plurality of pixel groups, each of which is composed of at least 3 pixels and arranged in correspondence with the multiple viewpoints;
wherein the defined plurality of pixel groups are used for multi-view autostereoscopic display by the display.
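Finally, a sketch of the arrangement method of claim 46: synthetic correspondence data with a small drift (standing in for real grating alignment tolerances) is walked once, collecting one pixel per viewpoint into each group, which naturally yields the irregular group positions of claim 25. All numbers here are illustrative assumptions.

    import numpy as np

    N_VIEWS, W = 6, 2880

    # step: acquire correspondence data for one pixel row (a synthetic
    # drift every 960 pixels imitates grating misalignment)
    measured_view = (np.arange(W) + np.arange(W) // 960) % N_VIEWS

    # step: define pixel groups by collecting one pixel per viewpoint
    groups, current = [], {}
    for x in range(W):
        current[int(measured_view[x])] = x
        if len(current) == N_VIEWS:
            groups.append([current[v] for v in range(N_VIEWS)])
            current = {}

    print(len(groups), "pixel groups defined for the display")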
CN201910247546.XA 2019-03-29 2019-03-29 Naked eye stereoscopic display system with lossless resolution Pending CN111757088A (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201910247546.XA CN111757088A (en) 2019-03-29 2019-03-29 Naked eye stereoscopic display system with lossless resolution
PCT/CN2020/078937 WO2020199887A1 (en) 2019-03-29 2020-03-12 Multi-view naked-eye stereoscopic display, display system, and pixel group arrangement method
PCT/CN2020/078938 WO2020199888A1 (en) 2019-03-29 2020-03-12 Multi-view naked-eye stereoscopic display, display system, and display method
PCT/CN2020/078942 WO2020199889A1 (en) 2019-03-29 2020-03-12 Multi-view naked-eye stereoscopic display, display system, and display method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910247546.XA CN111757088A (en) 2019-03-29 2019-03-29 Naked eye stereoscopic display system with lossless resolution

Publications (1)

Publication Number Publication Date
CN111757088A (en) 2020-10-09

Family

ID=72664669

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910247546.XA Pending CN111757088A (en) 2019-03-29 2019-03-29 Naked eye stereoscopic display system with lossless resolution

Country Status (2)

Country Link
CN (1) CN111757088A (en)
WO (3) WO2020199887A1 (en)

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6064424A (en) * 1996-02-23 2000-05-16 U.S. Philips Corporation Autostereoscopic display apparatus
CN101573987B (en) * 2007-01-03 2011-11-16 皇家飞利浦电子股份有限公司 A display device
US20110038043A1 (en) * 2009-08-17 2011-02-17 Industrial Technology Research Institute Segmented lenticular array used in autostereoscopic display apparatus
EP2553934A4 (en) * 2010-04-01 2015-04-15 Intel Corp A multi-core processor supporting real-time 3d image rendering on an autostereoscopic display
TWI420152B (en) * 2011-04-26 2013-12-21 Unique Instr Co Ltd A Method of Multi - view Three - dimensional Image Display
CN102802004B (en) * 2012-08-15 2016-06-01 上海易维视科技有限公司 Bore hole 3D module
KR102153605B1 (en) * 2013-11-27 2020-09-09 삼성디스플레이 주식회사 Three dimensional image display device
KR101975246B1 (en) * 2014-10-10 2019-05-07 삼성전자주식회사 Multi view image display apparatus and contorl method thereof
CN104506843A (en) * 2014-12-10 2015-04-08 深圳市奥拓电子股份有限公司 Multi-viewpoint LED (Light Emitting Diode) free stereoscopic display device
KR102185130B1 (en) * 2015-05-21 2020-12-01 삼성전자주식회사 Multi view image display apparatus and contorl method thereof
CN104849870B (en) * 2015-06-12 2018-01-09 京东方科技集团股份有限公司 Display panel and display device
KR102121389B1 (en) * 2015-10-16 2020-06-10 삼성전자주식회사 Glassless 3d display apparatus and contorl method thereof
KR102174258B1 (en) * 2015-11-06 2020-11-04 삼성전자주식회사 Glassless 3d display apparatus and contorl method thereof
CN106131542A (en) * 2016-08-26 2016-11-16 广州市朗辰电子科技有限公司 The device that a kind of bore hole 3D based on both full-pixel light splitting shows
KR102597593B1 (en) * 2016-11-30 2023-11-01 엘지디스플레이 주식회사 Autostereoscopic 3-Dimensional Display

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022247645A1 (en) * 2021-05-25 2022-12-01 北京芯海视界三维科技有限公司 Timing controller and display device
CN113012621A (en) * 2021-05-25 2021-06-22 北京芯海视界三维科技有限公司 Time schedule controller and display device
CN113010020A (en) * 2021-05-25 2021-06-22 北京芯海视界三维科技有限公司 Time schedule controller and display device
CN113012636A (en) * 2021-05-25 2021-06-22 北京芯海视界三维科技有限公司 Time schedule controller and display device
TWI802414B (en) * 2021-05-25 2023-05-11 大陸商北京芯海視界三維科技有限公司 Timing controller and display device
WO2022247646A1 (en) * 2021-05-25 2022-12-01 北京芯海视界三维科技有限公司 Timing controllers and display device
WO2022247647A1 (en) * 2021-05-25 2022-12-01 北京芯海视界三维科技有限公司 Timing controller and display device
WO2022246791A1 (en) * 2021-05-28 2022-12-01 京东方科技集团股份有限公司 Multi-viewpoint image processing system and method
CN114513650A (en) * 2022-01-27 2022-05-17 北京芯海视界三维科技有限公司 Image display processing method and image display processing device
CN115278198A (en) * 2022-07-29 2022-11-01 北京芯海视界三维科技有限公司 Processing apparatus and display device
CN115278197A (en) * 2022-07-29 2022-11-01 北京芯海视界三维科技有限公司 Processing apparatus and display device
CN115278201A (en) * 2022-07-29 2022-11-01 北京芯海视界三维科技有限公司 Processing apparatus and display device
CN115278200A (en) * 2022-07-29 2022-11-01 北京芯海视界三维科技有限公司 Processing apparatus and display device

Also Published As

Publication number Publication date
WO2020199887A1 (en) 2020-10-08
WO2020199889A1 (en) 2020-10-08
WO2020199888A1 (en) 2020-10-08

Similar Documents

Publication Publication Date Title
CN111757088A (en) Naked eye stereoscopic display system with lossless resolution
US9924153B2 (en) Parallel scaling engine for multi-view 3DTV display and method thereof
US9083963B2 (en) Method and device for the creation of pseudo-holographic images
CN103873844B (en) Multiple views automatic stereoscopic display device and control the method for its viewing ratio
US9081195B2 (en) Three-dimensional image display apparatus and three-dimensional image processing method
EP1882368B1 (en) Cost effective rendering for 3d displays
US20120113219A1 (en) Image conversion apparatus and display apparatus and methods using the same
CN103988504A (en) Image processing apparatus and method for subpixel rendering
US9154765B2 (en) Image processing device and method, and stereoscopic image display device
US8723920B1 (en) Encoding process for multidimensional display
US20170111633A1 (en) 3d display apparatus and control method thereof
US20160295200A1 (en) Generaton of images for an autostereoscopic multi-view display
WO2012172766A1 (en) Image processing device and method thereof, and program
US10939092B2 (en) Multiview image display apparatus and multiview image display method thereof
CN102238396A (en) Image converting method, imaging method and system of stereoscopic vision
CN101626517B (en) Method for synthesizing stereo image from parallax image in a real-time manner
US8693767B2 (en) Method and device for generating partial views and/or a stereoscopic image master from a 2D-view for stereoscopic playback
CN216086864U (en) Multi-view naked eye stereoscopic display and naked eye stereoscopic display system
US20120163700A1 (en) Image processing device and image processing method
US20120081513A1 (en) Multiple Parallax Image Receiver Apparatus
US20140204175A1 (en) Image conversion method and module for naked-eye 3d display
US20120163702A1 (en) Image processing apparatus and image processing method
TWI499279B (en) Image processing apparatus and method thereof
US20130021324A1 (en) Method for improving three-dimensional display quality
JP2012203050A (en) Stereoscopic display device and signal processor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination