WO2020199888A1 - Multi-viewpoint naked-eye stereoscopic display, display system and display method - Google Patents


Info

Publication number
WO2020199888A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
pixels
eye
naked
viewpoint
Prior art date
Application number
PCT/CN2020/078938
Other languages
English (en)
French (fr)
Inventor
刁鸿浩
黄玲溪
Original Assignee
北京芯海视界三维科技有限公司
视觉技术创投私人有限公司
刁鸿浩
Priority date
Filing date
Publication date
Application filed by 北京芯海视界三维科技有限公司, 视觉技术创投私人有限公司 and 刁鸿浩
Publication of WO2020199888A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/302 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/31 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays, using parallax barriers
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/111 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N13/117 Transformation of image signals corresponding to virtual viewpoints, the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
    • H04N13/194 Transmission of image signals
    • H04N13/346 Image reproducers using prisms or semi-transparent mirrors
    • H04N13/366 Image reproducers using viewer tracking
    • H04N13/368 Image reproducers using viewer tracking for two or more viewers

Definitions

  • This application relates to the field of stereo display technology, for example, to a multi-viewpoint naked eye stereo display, a display system, and a display method.
  • Stereoscopic display technologies mainly include glasses-type stereoscopic display and naked-eye stereoscopic display technologies.
  • Naked-eye stereoscopic display technology allows users to view a stereoscopic picture without wearing glasses; compared with glasses-type stereoscopic display, it imposes fewer constraints on the user.
  • Naked-eye stereoscopic display is viewpoint-based: a sequence of parallax images (frames) is formed at different positions in space, so that stereoscopic image pairs with a parallax relationship enter a person's left and right eyes, giving the user a stereoscopic impression.
  • In a 3D multi-viewpoint autostereoscopic display, because the resolution of the display panel is a fixed value, the resolution drops significantly during 3D display; for example, with N viewpoints the row/column resolution is reduced to 1/N of the original.
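As a minimal illustration of this 1/N penalty (the panel size and viewpoint count below are assumed example values, not taken from the patent):

```python
# Hypothetical illustration of the 1/N resolution drop described above.
# Panel dimensions and viewpoint count are assumed example values.
def effective_3d_resolution(panel_cols, panel_rows, n_viewpoints):
    """Columns seen per viewpoint when N viewpoints share each row of pixels."""
    return panel_cols // n_viewpoints, panel_rows

cols, rows = effective_3d_resolution(3840, 2160, 12)
print(cols, rows)  # each viewpoint keeps only 3840/12 = 320 columns
```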
  • the embodiments of the present disclosure provide a multi-viewpoint naked-eye stereoscopic display, a display system, and a display method, so as to alleviate the problem that the resolution of the naked-eye stereoscopic display drops significantly.
  • the multi-view naked-eye stereoscopic display includes a display screen with a display panel and a grating, a video signal interface configured to receive 3D video signals, and one or more 3D video processing units;
  • the display panel includes multiple rows and multiple columns of pixels and defines multiple pixel groups.
  • Each pixel group in the multiple pixel groups is composed of at least 3 pixels and corresponds to a multi-viewpoint arrangement.
  • The multiple pixel groups have irregular mutual arrangement positions; the arrangement positions are adjusted or determined based on the optical relationship between the pixels and the grating and/or the corresponding relationship between the pixels and the viewpoints, and the one or more 3D video processing units are configured to render the corresponding pixels in each pixel group.
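The pixel-group bookkeeping described above can be sketched as follows; the data layout, names, and the single-row group shape are all illustrative assumptions, not the patent's implementation:

```python
# Hypothetical sketch of pixel groups: each group maps a viewpoint index
# (1-based, V1..V12) to the (row, col) pixel that the grating steers toward
# that viewpoint. Irregular groups simply store measured or adjusted
# positions instead of a regular stride.
from dataclasses import dataclass

@dataclass
class PixelGroup:
    pixel_for_viewpoint: dict  # viewpoint index -> (row, col) on the panel

def make_regular_group(row, start_col, n_viewpoints=12):
    """A regularly arranged single-row group: V1..Vn in consecutive columns."""
    return PixelGroup({v: (row, start_col + v - 1)
                       for v in range(1, n_viewpoints + 1)})

def adjust_group(group, col_shift):
    """Shift a group's positions, e.g. per measured pixel-viewpoint data."""
    return PixelGroup({v: (r, c + col_shift)
                       for v, (r, c) in group.pixel_for_viewpoint.items()})

g = make_regular_group(0, 100)
print(g.pixel_for_viewpoint[1])                    # (0, 100)
print(adjust_group(g, -2).pixel_for_viewpoint[3])  # (0, 100)
```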
  • the multi-view naked-eye stereoscopic display system provided by the embodiments of the present disclosure includes a processor unit and the above-mentioned multi-view naked-eye stereo display; wherein the processor unit is communicatively connected with the multi-view naked-eye stereo display.
  • the multi-view naked-eye stereo display includes a display screen having a display panel and a grating, and the display panel includes multiple rows and multiple columns of pixels;
  • The display method of the above-mentioned multi-view naked-eye stereo display includes:
  • the corresponding pixels are rendered according to the generated multiple images; wherein the rendered pixels are determined based on the optical relationship between the pixel and the grating and/or the corresponding relationship between the pixel and the viewpoint.
  • the multi-view naked-eye stereoscopic display, display system, and display method provided by the embodiments of the present disclosure can achieve the following technical effects:
  • FIG. 1A shows a schematic structural diagram of a multi-view naked-eye stereoscopic display according to an embodiment of the present disclosure
  • FIG. 1B shows a schematic structural diagram of a multi-view naked-eye stereo display according to an embodiment of the present disclosure
  • FIG. 1C shows a schematic structural diagram of a multi-view naked-eye stereoscopic display according to an embodiment of the present disclosure
  • FIG. 2 shows a schematic structural diagram of pixels and viewpoints in the display panel shown in FIGS. 1A-C;
  • Fig. 3 schematically shows a diagram of generating images corresponding to the viewpoints from the images (frames) of the received 3D video signal in Figs. 1A-1C;
  • FIG. 4 shows a schematic structural diagram of a single 3D video processing unit of a multi-view naked-eye stereoscopic display according to an embodiment of the present disclosure
  • Fig. 5 shows a schematic structural diagram of multiple 3D video processing units of a multi-view naked-eye stereoscopic display according to an embodiment of the present disclosure
  • FIG. 6 schematically shows a schematic diagram of generating images corresponding to various viewpoints from images (frames) of received 3D video signals in FIG. 1;
  • Fig. 7A shows a schematic structural diagram of a multi-view naked-eye stereoscopic display according to an embodiment of the present disclosure
  • FIG. 7B shows a schematic structural diagram of the multi-view naked-eye stereo display in FIG. 7A;
  • FIG. 8 shows a schematic structural diagram of a multi-view naked-eye stereoscopic display according to an embodiment of the present disclosure
  • FIG. 9 shows a partial structural schematic diagram of a multi-view naked-eye stereoscopic display adopting a cylindrical prism grating according to an embodiment of the present disclosure
  • FIG. 10 shows a partial structural schematic diagram of a multi-view naked-eye stereoscopic display adopting a cylindrical prism grating according to an embodiment of the present disclosure
  • FIG. 11 shows a partial structural diagram of a multi-view naked-eye stereoscopic display using a parallax barrier grating according to an embodiment of the present disclosure
  • FIG. 12 shows a schematic structural diagram of a multi-viewpoint naked-eye stereoscopic display using eye displacement data according to an embodiment of the present disclosure
  • FIG. 13 shows a schematic structural diagram of a multi-viewpoint naked-eye stereo display using eye displacement data according to an embodiment of the present disclosure
  • FIG. 14 shows a schematic structural diagram of a multi-viewpoint naked-eye stereoscopic display using eye displacement data according to an embodiment of the present disclosure
  • FIG. 15 shows a schematic structural diagram of a multi-view naked-eye stereoscopic display using eye displacement data according to an embodiment of the present disclosure
  • FIG. 16 shows a schematic structural diagram of a multi-view naked-eye stereoscopic display using eye displacement data according to an embodiment of the present disclosure
  • FIG. 17 schematically shows a schematic diagram of generating an image corresponding to a specific viewpoint from images (frames) of two received 3D video signals in FIG. 16;
  • FIG. 18 schematically shows a schematic diagram of a multi-view naked-eye stereoscopic display system configured as a cellular phone or a part thereof according to an embodiment of the present disclosure
  • FIG. 19 schematically shows a principle diagram of a multi-view naked-eye stereoscopic display system configured as a digital TV connected to a set-top box according to an embodiment of the present disclosure
  • FIG. 20 schematically shows a schematic diagram of a multi-view naked-eye stereoscopic display system constructed as a smart home system or a part thereof according to an embodiment of the present disclosure
  • Fig. 21 schematically shows a schematic diagram of the principle of a multi-view naked-eye stereoscopic display system constructed as an entertainment interactive system or a part thereof according to an embodiment of the present disclosure.
  • the multi-view naked-eye stereoscopic display includes a display screen with a display panel and a grating, a video signal interface configured to receive 3D video signals, and one or more 3D video processing units;
  • the display panel includes multiple rows and multiple columns of pixels and defines multiple pixel groups.
  • Each pixel group in the multiple pixel groups is composed of at least 3 pixels and corresponds to a multi-viewpoint arrangement.
  • The multiple pixel groups have irregular mutual arrangement positions; the arrangement positions are adjusted or determined based on the optical relationship between the pixels and the grating and/or the corresponding relationship between the pixels and the viewpoints, and the one or more 3D video processing units are configured to render the corresponding pixels in each pixel group.
  • the grating may include a lenticular prism grating
  • the optical relationship may include the alignment relationship between the pixel and the lenticular prism grating and/or the refraction state of the lenticular prism grating relative to the corresponding pixel.
  • the grating may include a front and/or rear parallax barrier grating
  • the parallax barrier grating may include a light-shielding part and a light-transmitting part
  • The optical relationship may include the alignment relationship between the pixel and the corresponding light-transmitting part of the parallax barrier grating.
  • the optical relationship may be the alignment data obtained by measuring the pixels and the grating in the display panel.
  • the optical relationship may be the refraction state of the grating relative to the pixels in the display panel.
  • the correspondence relationship may be calculated or determined based on the optical relationship.
  • the correspondence relationship may be determined by measuring at each of the multiple viewpoints.
  • the multi-viewpoint auto-stereoscopic display may further include a memory storing optical relationships and/or corresponding relationships.
  • one or more 3D video processing units may be configured to obtain data in the memory.
  • the multi-view naked-eye stereoscopic display system provided by the embodiments of the present disclosure includes a processor unit and the above-mentioned multi-view naked-eye stereo display; wherein the processor unit is communicatively connected with the multi-view naked-eye stereo display.
  • the naked-eye stereoscopic display system may include:
  • a smart cellular phone, tablet computer, personal computer or wearable device; or
  • a set-top box as a processor unit or a cellular phone or tablet computer capable of screen projection, and a digital TV as a multi-view naked-eye stereo display connected to the set-top box, cellular phone or tablet computer by wire or wireless; or
  • a smart home system or a part thereof wherein the processor unit includes a smart gateway or a central controller of the smart home system, and the smart home system further includes an eye displacement sensor configured to obtain eye displacement data; or
  • an entertainment interactive system or a part thereof, which may be configured to be suitable for use by multiple users and to generate multiple 3D video signals based on the multiple users for transmission to the multi-view naked-eye stereo display.
  • the multi-view naked-eye stereo display includes a display screen having a display panel and a grating, and the display panel includes multiple rows and multiple columns of pixels; the method includes:
  • the corresponding pixels are rendered according to the generated multiple images; wherein the rendered pixels are determined based on the optical relationship between the pixel and the grating and/or the corresponding relationship between the pixel and the viewpoint.
  • the optical relationship can be determined as follows:
  • the refraction state of the grating with respect to the pixels in the display panel is taken as the optical relationship.
  • the correspondence relationship may be determined in the following manner:
  • an embodiment of the present disclosure provides a naked-eye stereoscopic display system, which may include a processor unit and a multi-viewpoint naked-eye stereoscopic display, and the processor unit is in communication connection with the multi-viewpoint naked-eye stereoscopic display.
  • The processor unit includes a processing/sending/forwarding/control device configured to send a 3D video signal to the naked-eye stereoscopic display. It may be a device that both generates and sends 3D video signals, a device that processes a received 3D video signal before sending it on, or a device that forwards a received 3D video signal to the display without processing it.
  • the processor unit may be included in or referred to as a processing terminal or terminal.
  • the multi-view naked-eye stereoscopic display may include a display screen with a display panel and a grating (not identified), a video signal interface configured to receive a 3D video signal, and a 3D video processing unit.
  • the display may have 12 viewpoints (V1-V12).
  • the display may have more or fewer viewpoints.
  • the display may also optionally include a timing controller and/or a display driver chip, which may be integrated with the 3D video processing unit or independently.
  • the display may also optionally include a memory to store required data.
  • the display panel may include multiple rows and multiple columns of pixels and define multiple pixel groups.
  • Two exemplary pixel groups PG1,1 and PGx,y are shown; each pixel group corresponds to the multi-viewpoint setting and has its own 12 pixels (P1-P12).
  • the pixels in the pixel group are arranged in a single row and multiple columns.
  • the pixels in the pixel group may have other arrangements, such as: single column and multiple rows or multiple rows and multiple columns.
  • The aforementioned PGx,y may represent the pixel group in the x-th row and y-th column.
  • The display can have 12 viewpoints V1-V12; at each viewpoint (spatial position), the user's eyes see the display of the corresponding pixel in each pixel group of the display panel, and thus see a different rendered picture. The two different pictures seen by the user's eyes at different viewpoints form a parallax, and a three-dimensional image is synthesized in the brain.
  • One or more 3D video processing units are configured to generate, based on the image of the 3D video signal, multiple images corresponding to all viewpoints, and to render the corresponding pixels in each pixel group based on the generated images.
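The generate-then-render flow described above might be sketched as follows; `generate_view` is a stand-in placeholder, since the actual view synthesis depends on the 3D signal format, and all names here are illustrative assumptions:

```python
# Illustrative sketch (not the patent's implementation): generate one image
# per viewpoint from a 3D frame, then write pixel (x, y) of each generated
# view image into that viewpoint's pixel inside pixel group (x, y).
def generate_view(frame, v):
    # Placeholder "view synthesis": shift brightness per view, purely
    # illustrative; a real implementation would use depth/parallax data.
    return [[(px + v) % 256 for px in row] for row in frame]

def render_frame(frame, n_viewpoints, groups, panel):
    """frame: source image; groups[y][x]: dict viewpoint -> (row, col)."""
    # 1) generate one image per viewpoint
    images = {v: generate_view(frame, v) for v in range(1, n_viewpoints + 1)}
    # 2) point-by-point: group (x, y) takes pixel (x, y) of each view image
    for y, group_row in enumerate(groups):
        for x, group in enumerate(group_row):
            for v, (pr, pc) in group.items():
                panel[pr][pc] = images[v][y][x]
    return panel

panel = render_frame([[10]], 2, [[{1: (0, 0), 2: (0, 1)}]], [[0, 0]])
print(panel)  # [[11, 12]]
```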
  • The embodiment of the present disclosure provides a method for arranging pixel groups of a multi-view naked-eye stereoscopic display, including: providing a display screen with a display panel and a grating, wherein the display panel includes multiple rows and multiple columns of pixels; acquiring the optical relationship between the pixels in the display panel and the grating and/or the corresponding relationship between the pixels in the display panel and the viewpoints; and defining a plurality of pixel groups based on the acquired optical relationship and/or pixel-viewpoint correspondence, each pixel group being composed of at least 3 pixels and corresponding to the multi-viewpoint setting.
  • the defined multiple pixel groups can support the multi-view naked-eye stereo display of the display.
  • The embodiment of the present disclosure provides a display method of a multi-view naked-eye stereo display, including: defining a plurality of pixel groups, each composed of at least 3 pixels and corresponding to the multi-viewpoint setting; receiving a 3D video signal; generating, based on the image of the received 3D video signal, multiple images corresponding to all viewpoints or to specific viewpoints; and rendering the corresponding pixels in each pixel group according to the generated images.
  • image generation and pixel rendering are performed corresponding to all viewpoints (12).
  • The 3D video signal S1 received by the video signal interface is an image frame containing two parts: a color image and a depth image.
  • The 3D video processing unit takes the image information and depth information of the received 3D video signal S1 as input and renders 12 pictures according to the viewing angles corresponding to viewpoints V1-V12; the content of each generated image is then written into the pixels corresponding to each viewpoint.
  • The 12 images generated above may be generated at the same resolution as the corresponding image frames of the received 3D video signal, and the correspondingly written pixels correspond essentially point by point to the resolution of the generated images (and thus to the resolution of the images of the received 3D video signal).
  • In some embodiments, resolution-increasing processing (for example, multiplication) may be performed; for example, two-fold line-resolution interpolation may be performed on both the color image and the depth image.
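A minimal sketch of two-fold line (row) interpolation, applied alike to the color and depth images; midpoint averaging is an assumption here, as the text only specifies two-fold line-resolution interpolation:

```python
# Assumed scheme: double the number of rows by inserting the average of each
# pair of adjacent rows (the last inserted row repeats the final row).
def interpolate_rows_2x(image):
    out = []
    for i, row in enumerate(image):
        out.append(row)
        nxt = image[i + 1] if i + 1 < len(image) else row  # repeat last row
        out.append([(a + b) // 2 for a, b in zip(row, nxt)])
    return out

color = [[0, 10], [20, 30]]
print(interpolate_rows_2x(color))  # [[0, 10], [10, 20], [20, 30], [20, 30]]
```

The same function would be applied unchanged to the depth image, keeping the two channels aligned row for row.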
  • The resolution-increasing processing (for example, multiplication) may also be combined with the point-to-point rendering described in the embodiments of the present disclosure to obtain new embodiments.
  • Herein, generating images for the corresponding viewpoints at increased resolution may sometimes be referred to as resolution-increased (for example, multiplied) generation.
  • Another (pre-)processor may be provided to perform the resolution increase (for example, multiplication) or interpolation, or one or more 3D video processing units may perform the resolution increase (for example, multiplication) or interpolation.
  • the display system or display may include an eye displacement sensor or readable eye displacement data.
  • an embodiment of the present disclosure provides a naked-eye stereoscopic display system, which may include a processor unit and a multi-viewpoint naked-eye stereoscopic display, and the processor unit is in communication connection with the multi-viewpoint naked-eye stereoscopic display.
  • the display may be integrated with an eye displacement sensor, which is communicatively connected with the 3D video processing unit.
  • the display may be provided with a memory to store eye displacement data, and the 3D video processing unit is connected to the memory and reads the eye displacement data.
  • the eye displacement data may be real-time data.
  • the eye displacement sensor may be in the form of a dual camera, for example.
  • Other forms of eye displacement sensors may also be used, such as a single camera, a combination of an eye displacement camera and a depth camera, or other sensing devices that can determine the position of the user's eyes, or a combination thereof.
  • the eye displacement sensor may have other functions or be shared with other functions or components.
  • a display system configured as a cellular phone can use a front camera built into the cellular phone as an eye displacement sensor.
  • the display may include an eye displacement data interface
  • the 3D processing unit may use the eye displacement data interface to read the eye displacement data
  • an embodiment of the present disclosure provides a naked-eye stereoscopic display system, which may include a processor unit and a multi-viewpoint naked-eye stereoscopic display, and the processor unit is communicatively connected with the multi-viewpoint naked-eye stereoscopic display.
  • the naked-eye stereoscopic display system may further include, for example, an eye displacement sensor in the form of a dual camera, which is communicatively connected with the processor unit.
  • the display may include an eye displacement data interface, and the 3D processing unit may communicate with the processor unit via the eye displacement data interface to read the eye displacement data.
  • the processor unit may not be equipped with or connected to the eye displacement sensor, but directly read the eye displacement data.
  • the 3D processing unit may obtain eye displacement data from other sources through the eye displacement data interface.
  • the embodiments related to setting the eye displacement sensor can be combined with other embodiments.
  • The embodiment of point-to-point rendering can be combined with the use of eye displacement sensors or eye displacement data to obtain new embodiments; eye displacement sensors or data may also be used to obtain other embodiments.
  • the generated image content is written (rendered) to the pixels of the display panel by writing (rendering) line by line. This greatly reduces the pressure of rendering calculations.
  • the line-by-line writing (rendering) process is performed in the following manner: the information of the corresponding points in each generated image is read and written line by line into the pixels of the display panel.
  • the method further includes synthesizing a plurality of generated images into a composite image, and reading the information of corresponding points in the composite image and writing it into the pixels of the display panel line by line.
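The composite-image variant might look like the following sketch, assuming single-row pixel groups with viewpoints in consecutive columns (an illustrative layout, not mandated by the text):

```python
# Hypothetical sketch: interleave the generated view images column-wise into
# composite lines laid out like the panel, so rendering can stream the
# composite into the display panel line by line.
def composite_rows(images, n_viewpoints):
    """images: dict viewpoint -> 2D image; yields one panel line per row,
    laid out as [v1[y][0], ..., vN[y][0], v1[y][1], ...]."""
    height = len(images[1])
    width = len(images[1][0])
    for y in range(height):
        line = []
        for x in range(width):
            for v in range(1, n_viewpoints + 1):
                line.append(images[v][y][x])
        yield line  # written into the panel's row y

imgs = {1: [[1, 2]], 2: [[10, 20]]}
print(list(composite_rows(imgs, 2)))  # [[1, 10, 2, 20]]
```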
  • a single 3D video processing unit is provided in FIG. 4, and the single 3D video processing unit simultaneously processes image generation corresponding to multiple viewpoints and rendering of corresponding multiple pixels in the pixel group.
  • a plurality of 3D video processing units may be provided, which are combined to process image generation and pixel rendering in parallel, serial or serial-parallel.
  • A plurality of 3D video processing units are provided in the multi-view naked-eye stereoscopic display; each 3D video processing unit is allocated multiple rows or columns of pixels and renders its respective rows or columns of pixels.
  • A plurality of 3D video processing units may be sequentially allocated and may render their respective rows or columns of pixels. For example, if 4 3D video processing units are provided and the display panel has a total of M columns of pixels, each 3D video processing unit is allocated its own M/4 columns of pixels, for example from left to right or from right to left.
  • each 3D video processing unit can process pixel rendering in parallel.
  • each 3D video processing unit can correspondingly process the rendering of its respective pixels (columns).
  • As shown exemplarily in FIG. 5, when the display panel has a total of M columns of pixels and 4 parallel 3D video processing units (groups) are provided, the first 3D video processing unit processes the first M/4 columns of pixels, the second processes the second M/4 columns, the third processes the third M/4 columns, and the fourth processes the fourth M/4 columns.
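The even split of M columns across the processing units can be sketched as follows (an assumed allocation scheme with example numbers):

```python
# Illustrative allocation: M panel columns split evenly across the 3D video
# processing units, left to right, as half-open (start, end) column ranges.
def allocate_columns(m_cols, n_units):
    per = m_cols // n_units
    return [(u * per, (u + 1) * per) for u in range(n_units)]

print(allocate_columns(3840, 4))
# [(0, 960), (960, 1920), (1920, 2880), (2880, 3840)]
```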
  • Such an arrangement of 3D video processing units simplifies the structure and greatly speeds up processing, and it can be combined with the aforementioned embodiment of separately reading each generated image for line-by-line writing (rendering) to obtain other embodiments.
  • For example, the first to fourth 3D video processing units can sequentially process and render their respective M/4 columns of pixels in the first row; when the first 3D video processing unit has finished rendering its M/4 columns of pixels in the first row, it gains sufficient time to prepare to process its corresponding M/4 columns of pixels in the next row (such as the second row). This can effectively improve the rendering computing power.
  • In other embodiments, there may be more or fewer 3D video processing units, or the 3D video processing units (groups) may be allocated in other ways and process the multiple rows and columns of pixels in parallel.
  • The pixel driving and rendering of the display panel use progressive scanning. Combining the per-unit allocation of multiple columns of pixels described above with progressive scanning effectively reduces the required calculation bandwidth.
  • the naked-eye stereoscopic display system may include a processor unit and a multi-viewpoint naked-eye stereoscopic display, and the processor unit is communicatively connected with the multi-viewpoint naked-eye stereoscopic display.
  • the naked-eye stereoscopic display system may further include, for example, an eye displacement sensor in the form of a dual camera, which is communicatively connected with the processor unit.
  • the multi-view naked-eye stereoscopic display may include a display screen with a display panel and a grating (not identified), a video signal interface configured to receive a 3D video signal, and a 3D video processing unit.
  • the display may have 12 viewpoints (V1-V12).
  • the display may have more or fewer viewpoints.
  • the display may further include an eye displacement data interface.
  • the 3D processing unit can communicate with the processor unit via the eye displacement data interface to read the eye displacement data.
  • the display may also optionally include a timing controller and/or a display driver chip, which may be integrated with the 3D video processing unit or independently.
  • the display may integrate an eye displacement sensor, which is communicatively connected with the 3D video processing unit.
  • the display panel may include multiple rows and multiple columns of pixels and define multiple pixel groups.
  • Two illustrative pixel groups PG1,1 and PGx,y are shown; each pixel group corresponds to the multi-viewpoint setting and has its own 12 pixels (P1-P12).
  • the display of the display is described.
  • the display can have 12 viewpoints V1-V12, and the user's eyes can see the display of the corresponding pixel points in each pixel group in the display panel at each viewpoint (spatial position), and then see different renderings Picture.
  • the two different images seen by the user's eyes at different points of view form a parallax, and a three-dimensional image is synthesized in the brain
  • The 3D video signal S1 received by the video signal interface is an image frame containing two parts: left and right parallax color images.
  • the 3D video processing unit takes as input the left and right parallax color images of the received 3D video signal S1, and thereby generates intermediate image information I1.
  • the left and right parallax color images are used to synthesize the depth image.
  • the color image of the center point is generated by means of one or both of the above-mentioned left and right parallax color images.
  • The 12 images generated above may be generated at the same resolution as the corresponding image frames of the received 3D video signal, and the correspondingly written pixels correspond essentially point by point to the resolution of the generated images (and thus to the resolution of the images of the received 3D video signal).
  • In some embodiments, resolution-increasing processing (for example, multiplication) may be performed; for example, two-fold line-resolution interpolation may be performed on the left-eye and right-eye parallax images.
  • The resolution-increasing processing (for example, multiplication) may also be combined with the point-to-point rendering described in the embodiments of the present disclosure to obtain new embodiments.
  • Image conversion processing can be performed as described above before this processing. Herein, generating images for the corresponding viewpoints at increased resolution may sometimes be referred to as resolution-increased (for example, multiplied) generation.
  • Another (pre-)processor may be provided to perform the resolution increase (for example, multiplication) or interpolation, or one or more 3D video processing units may perform the resolution increase (for example, multiplication) or interpolation.
  • The images are generated from the image of the received 3D video signal corresponding to all viewpoints or to specific viewpoints; that is, images are generated point-to-point from the image of the original 3D video signal according to the required (all or specific) viewpoints and the pixels are rendered locally, alleviating the problem of significant resolution degradation.
  • The image corresponding to a single viewpoint has the same resolution as the image (frame) of the received 3D video signal, and the pixels in each pixel group corresponding to each viewpoint (or the pixels determined according to the pixel-viewpoint correspondence) correspond essentially point by point to the generated image (and thus to the received image).
  • The received 3D video signal may be interpolated or its resolution increased in other ways; the images for each viewpoint are then generated from the interpolated or resolution-increased image, and the pixels corresponding to each viewpoint in each pixel group (or the pixels determined according to the pixel-viewpoint correspondence) are rendered accordingly.
  • FIGS. 7A and 7B illustrate a naked-eye stereoscopic display system and a display thereof according to an embodiment of the present disclosure.
  • the display panel of the naked-eye stereo display has multiple rows and multiple columns of pixels.
  • pixels in multiple rows and multiple columns are divided into multiple groups in a manner corresponding to multiple viewpoints.
  • each pixel group includes a row of 12 pixels corresponding to 12 viewpoints.
  • the pixel groups are arranged with each other in a regular manner.
  • the pixel groups are arranged in sequence in the same row; for example, the pixel groups PG1,i (i ≥ 1) in the same row are arranged end to end in sequence, and the pixel groups in the same column are aligned.
  • for example, the pixel groups PGj,1 (j ≥ 1) in the same column are vertically aligned.
  • the pixels in the pixel group are arranged in a single row and multiple columns.
  • the pixels in the pixel group may have other arrangements, such as: single column and multiple rows or multiple rows and multiple columns.
  • other types of pixel groups PG are still regularly arranged with each other.
  • the display panel shown has a plurality of pixel groups PG regularly distributed, including PG1,1 and PGx,y.
  • the corresponding pixels in the pixel group PG1,1 are correctly displayed at the corresponding viewpoints V1-V12, respectively.
  • the pixels of the pixel group PGx,y that should be displayed at the corresponding viewpoints V1-V12 are actually displayed at the viewpoints V1'-V12', respectively.
  • V1' corresponds to V3.
  • the multi-view naked-eye stereoscopic display is configured with pixel groups whose mutual arrangement positions are irregular, that is, adjusted relative to regularly arranged pixel groups. Such adjustment is made or determined based on the correspondence between the pixels of the display panel and the viewpoints. In some embodiments, based on the correspondence between pixels and viewpoints, the pixel group PG'x,y is adjusted or determined such that, compared to the regularly arranged pixel group PGx,y, it is shifted two pixels to the left in the figure. Thus, the pixels in the adjusted, irregularly arranged pixel group PG'x,y are correctly displayed at the corresponding viewpoints V1-V12.
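The two-pixel shift of PG'x,y can be pictured with a small sketch: given the viewpoint actually seen through the first pixel of a regularly placed group, compute the leftward shift that realigns the group. The linear column-to-viewpoint model and the function name are assumptions for illustration only.

```python
# Hypothetical sketch of the pixel-group adjustment described above.

def left_shift_for_group(first_pixel_viewpoint, num_viewpoints=12):
    """If the first pixel of a regularly placed pixel group is actually seen
    at viewpoint `first_pixel_viewpoint` instead of V1, return how many
    pixels the group must be shifted to the left so that its first pixel is
    seen at V1 (assuming a linear column-to-viewpoint mapping)."""
    return (first_pixel_viewpoint - 1) % num_viewpoints

# PGx,y: V1' corresponds to V3, so the adjusted group PG'x,y is shifted
# two pixels to the left; PG1,1 is already aligned and needs no shift.
shift = left_shift_for_group(3)
```

In practice the measured pixel-viewpoint correspondence (see below) would supply `first_pixel_viewpoint` for each group.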
  • the above-mentioned irregular pixel groups are adjusted based on the correspondence between pixels and viewpoints.
  • the irregular correspondence between the pixels and the viewpoint is determined based on the optical relationship between the pixel and the grating, for example, based on the alignment relationship between the pixel and the grating, and the refraction relationship. Therefore, in some embodiments, the irregular pixel group can be adjusted or determined based on the optical relationship between the pixels and the grating. In some embodiments, the irregular or actual alignment relationship between pixels and viewpoints may be determined by measurement.
  • the optical relationship between the pixel and the grating may be embodied as the optical relationship data between the pixel and the grating.
  • the above-mentioned optical relationship and/or alignment relationship data may be stored in a memory for reading during processing by the 3D video processing unit.
  • the memory may store the optical relationship data between the pixels in the display panel and the grating and/or the corresponding relationship between the pixels in the display panel and the viewpoint. With the help of the stored data, the naked-eye stereoscopic display of the embodiment of the present disclosure can be realized.
  • a data interface for communicating with the 3D video processing unit may be provided, so that the 3D video processing unit can read the optical relationship data and/or the alignment relationship data through the data interface.
  • the optical relationship data and/or the alignment relationship data may be written into the 3D video processing unit or as part of its algorithm.
  • the 3D video signal S1 received by the video signal interface is an image frame containing two contents: left and right parallax color images.
  • the 3D video processing unit takes as input the left and right parallax color images of the received 3D video signal S1, and thereby generates intermediate image information I1.
  • the left and right parallax color images are used to synthesize the depth image.
  • the color image of the center point is generated by means of one or both of the above-mentioned left and right parallax color images.
  • the intermediate image information I1, that is, the depth image information and the color image information of the center point, is taken as input
  • 12 images are rendered according to the viewing angles corresponding to the viewpoints of V1-V12.
  • the content of each generated image is written into the corresponding pixels in each pixel group seen by each viewpoint, where each pixel group is adjusted or determined based on the optical relationship or the pixel-viewpoint alignment relationship.
  • the pixel groups include the regularly arranged PG1,1 and the adjusted, irregularly arranged PG'x,y.
  • the pixel group is adjusted based on the optical relationship and/or the pixel-viewpoint alignment relationship to support the 3D video processing unit to correctly render the corresponding pixels in the pixel group.
  • the optical relationship and/or the pixel-viewpoint alignment relationship can be used, directly or indirectly, to determine the pixel that is correctly displayed at the corresponding viewpoint and to render that pixel.
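The pipeline just described (left/right parallax images → intermediate information I1 → one rendered image per viewpoint V1-V12) can be sketched as below. The depth and center-point computations here are trivial stand-ins for illustration, not the disclosed algorithms, and all names are assumptions.

```python
# Hypothetical sketch of the processing pipeline described above.

def make_intermediate(left, right):
    """From left/right parallax images (flat pixel lists), form the
    intermediate information I1: a depth image and a center-point color
    image. Both computations are simplistic stand-ins."""
    depth = [abs(l - r) for l, r in zip(left, right)]      # stand-in depth
    center = [(l + r) / 2 for l, r in zip(left, right)]    # stand-in center
    return depth, center

def render_viewpoints(depth, center, num_viewpoints=12):
    """Stand-in renderer: the real 3D video processing unit re-projects the
    center image using depth for each viewing angle; here every viewpoint
    simply receives a copy of the center image."""
    return {v: list(center) for v in range(1, num_viewpoints + 1)}

depth, center = make_intermediate([10, 20], [20, 20])
views = render_viewpoints(depth, center)  # one image per viewpoint V1-V12
```

Each entry of `views` would then be written to the pixel of the matching viewpoint in every (adjusted) pixel group.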
  • FIG. 8 shows a naked-eye stereoscopic display system and a display thereof according to an embodiment of the present disclosure.
  • the display panel of the auto-stereoscopic display has multiple rows and multiple columns of pixels.
  • the display stores or can read data of the viewpoint corresponding to each pixel of the display panel.
  • FIG. 8 exemplarily shows that the pixel Pa1,b1 corresponds to the viewpoint V8, the pixel Pam,bn corresponds to the viewpoint V6, and the pixel Paz,bz corresponds to the viewpoint V12.
  • FIG. 8 shows the correspondence between each pixel and the viewpoint.
  • optical relationship data that can be used to determine the correspondence between pixels and viewpoints, such as grating-to-pixel alignment data and/or grating refraction data, or other indirect data, may be used.
  • the above-mentioned optical relationship data and/or alignment relationship data may be stored in a memory for reading during processing by the 3D video processing unit.
  • a data interface for communicating with the 3D video processing unit may be provided, so that the 3D video processing unit can read the optical relationship data and/or the alignment relationship data through the data interface.
  • the optical relationship data and/or the alignment relationship data may be written into the 3D video processing unit or as part of its algorithm.
  • the pixel-viewpoint correspondence may be in the form of a lookup table.
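The lookup-table form mentioned above might be realized as below: a mapping from each pixel's position to the viewpoint that actually sees it, as could be measured and stored in memory for the 3D video processing unit to read. The concrete indices for Pam,bn and Paz,bz are hypothetical, since m, n and z are not specified in the source.

```python
# Hypothetical sketch of a pixel-viewpoint lookup table.

lookup = {
    (1, 1): 8,    # pixel Pa1,b1 is seen from viewpoint V8 (per FIG. 8)
    (5, 7): 6,    # pixel Pam,bn (illustrative indices) -> viewpoint V6
    (9, 12): 12,  # pixel Paz,bz (illustrative indices) -> viewpoint V12
}

def viewpoint_of(pixel):
    """Return the viewpoint that actually sees the pixel at (row, column)."""
    return lookup[pixel]
```

The renderer can then write, into each pixel, the image content of whatever viewpoint the table reports, instead of assuming a regular grouping.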
  • the 3D video signal S1 received by the video signal interface is an image frame containing two contents: left and right parallax color images.
  • the 3D video processing unit takes as input the left and right parallax color images of the received 3D video signal S1, and thereby generates intermediate image information I1.
  • the left and right parallax color images are used to synthesize the depth image.
  • the color image of the center point is generated by means of one or both of the above-mentioned left and right parallax color images.
  • the correspondence between pixels and viewpoints may be embodied as correspondence data between pixels and viewpoints.
  • the grating of the display is a cylindrical prism grating.
  • features such as the pixel group adjustment shown in FIGS. 7A-7B or the pixel-viewpoint alignment relationship shown in FIG. 8 can be used.
  • each row of obliquely arranged cylindrical prisms covers roughly 12 pixels.
  • the display also has 12 viewpoints
  • the pixel group of the display panel has a single row and multiple columns of pixels corresponding to the 12 viewpoints.
  • referring to FIGS. 9 and 7A-7B, in a display provided with a lenticular prism grating, the pixels Pa1,b1-Pa1,b4 in the regularly arranged pixel group at the top of the lenticular prism can correspond correctly to the viewpoints V1-V4.
  • the four pixels in the regularly arranged pixel group at the bottom of the cylindrical prism are not aligned with the correct viewpoints V1-V4, but correspond to the viewpoints V1'-V4' instead.
  • the pixel group can be adjusted by shifting one pixel to the left in the illustration so as to display at the correct viewpoints V1-V4, with the remaining pixels in the pixel group likewise shifted one pixel to the left; for example, in the theoretical arrangement shown in FIG. 9, the pixel corresponding to the viewpoint V4' corresponds to the viewpoint V5.
  • the embodiment shown in FIG. 9 is also applicable to using optical (deviation) data and/or irregular pixel-viewpoint alignment for pixels in the display panel.
  • the following pixel-viewpoint correspondence data can be stored, recorded or read.
  • the four pixels at the bottom of the cylindrical prism correspond to viewpoints V2, V3, V4, and V5, respectively.
  • the misalignment of the pixel group or the pixel may be caused by the alignment deviation of the cylindrical prism and the pixel and/or the refraction state of the cylindrical prism.
  • FIG. 9 exemplarily shows, with dashed and solid lines, the theoretical alignment position and the actual alignment deviation on the left side of the prism.
  • the lenticular prism can be arranged obliquely to the pixels, which can eliminate moiré; as a result, there are shared pixels at the boundaries between the columnar prisms (for example, pixels corresponding to the aforementioned viewpoint V1). In some configurations, corresponding viewpoints are specified for these shared pixels. In some embodiments, however, pixel-group fine-tuning or a dynamic pixel-viewpoint correspondence based on these shared pixels may be provided, or pixels may be shared between viewpoints.
  • for example, a shared pixel conventionally corresponding to the viewpoint V12 can be rendered according to the image of the viewpoint V1 when the viewpoint V12 is not rendered.
  • the fine-tuning or dynamic relationship of the embodiment shown in FIG. 10 can be applied to other types of gratings, and can also be combined with the embodiment of obtaining eye displacement data to obtain other embodiments.
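The shared-pixel fallback just described might be decided as in the sketch below: a boundary pixel nominally belonging to V12 is driven from the V1 image whenever V12 itself is not being rendered. The function name and the fallback policy are assumptions for illustration.

```python
# Hypothetical sketch of the dynamic assignment of a shared boundary pixel.

def shared_pixel_source(rendered_viewpoints, nominal=12, neighbor=1):
    """Return which viewpoint's image should drive a boundary pixel shared
    between `nominal` and `neighbor`, given the set of viewpoints actually
    being rendered this frame; None if neither is rendered."""
    if nominal in rendered_viewpoints:
        return nominal
    if neighbor in rendered_viewpoints:
        return neighbor
    return None  # neither viewpoint rendered: leave the pixel untouched

src = shared_pixel_source({1, 4, 8})  # V12 not rendered -> fall back to V1
```

With eye tracking (described later), `rendered_viewpoints` would be only the viewpoints where eyes are detected, so such fallbacks arise naturally.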
  • the parallax barrier grating 100 includes a light-shielding part 102 and a light-transmitting part 104.
  • with the parallax barrier grating 100, features such as the pixel group adjustment shown in FIGS. 7A-7B or the pixel-viewpoint alignment relationship shown in FIG. 8 can be adopted.
  • the misalignment of the pixel group or the pixel may be caused by the misalignment between the transparent portion 104 of the parallax barrier grating and the pixel.
  • the parallax barrier grating 100 is a front grating.
  • alternatively, a rear grating may be provided, or front and rear gratings may be provided at the same time.
  • a naked-eye stereoscopic display system which may include a processor unit and a multi-viewpoint naked-eye stereoscopic display, and the processor unit is communicatively connected with the multi-viewpoint naked-eye stereoscopic display.
  • the naked-eye stereoscopic display system may further include, for example, an eye displacement sensor in the form of a dual camera, which is communicatively connected with the processor unit.
  • the eye displacement sensor may be provided in the display, or the system or display may have a transmission interface that can receive eye displacement data.
  • the multi-view naked-eye stereoscopic display may include a display screen with a display panel and a grating (not identified), a video signal interface configured to receive a 3D video signal, and a 3D video processing unit.
  • the display may have 12 viewpoints (V1-V12).
  • the display may have more or fewer viewpoints.
  • the display may also optionally include a timing controller and/or a display driver chip, which may be integrated with the 3D video processing unit or independently.
  • the display may integrate an eye displacement sensor, which is communicatively connected with the 3D video processing unit.
  • the display panel may include multiple rows and multiple columns of pixels and define multiple pixel groups.
  • two illustrative pixel groups PG1,1 and PGx,y are shown, each pixel group corresponds to a multi-viewpoint setting, and each has its own 12 pixels (P1-P12).
  • the pixels in the pixel group are arranged in a single row and multiple columns.
  • the pixels in the pixel group may have other arrangements, such as: single column and multiple rows or multiple rows and multiple columns.
  • the aforementioned PGx,y may represent the pixel group in the xth row and yth column.
  • the display operation of the display is described below.
  • the display can have 12 viewpoints V1-V12; at each viewpoint (spatial position), the user's eyes see the display of the corresponding pixels in each pixel group of the display panel, and thus see differently rendered pictures.
  • the two different images seen by the user's eyes at different viewpoints form a parallax, and a three-dimensional image is synthesized in the brain.
  • one or more 3D video processing units are configured to generate images and render pixels for display in such a way that multiple images corresponding to specific viewpoints are generated based on the images of the 3D video signal, and the pixels corresponding to the specific viewpoint in each pixel group are rendered based on each of the generated multiple images.
  • the specific viewpoint is determined based on eye displacement data.
  • for each of the user's eyes (left eye and right eye), an image for the corresponding viewpoint is generated, and the pixels in the pixel group corresponding to that viewpoint are rendered.
  • the first eye is, for example, the right eye, and the second eye is, for example, the left eye.
  • an embodiment of the present disclosure provides a display method of a multi-view naked-eye stereoscopic display, including: defining a plurality of pixel groups, each pixel group being composed of at least 3 pixels and corresponding to the multi-viewpoint setting; receiving a 3D video signal; generating, based on the images of the received 3D video signal, multiple images corresponding to specific viewpoints (such as viewpoints V4 and V8); and rendering the corresponding pixels in each pixel group according to the generated multiple images.
  • image generation and pixel rendering are performed corresponding to specific viewpoints (V4 and V8).
  • the 3D video signal S1 received by the video signal interface is an image frame containing two contents: a color image and a depth-of-field image.
  • the 3D video processing unit takes the image information and depth information of the received 3D video signal S1 as input and, based on the eye displacement data, renders two pictures according to the viewing angles corresponding to the viewpoints V4 and V8 where the eyes are located. Then, the content of each generated image is written into the pixels corresponding to the respective viewpoints (such as the 4th and 8th pixels) in each pixel group (such as PG1,1 and PGx,y).
  • the eyes of the user located at the viewpoints V4 and V8 can see the rendered images at different angles and generate parallax to form a stereoscopic effect of 3D display.
  • the generated pictures corresponding to the viewpoints V4 and V8 may be generated at the same resolution as the corresponding image frames of the received 3D video signal.
  • the correspondingly written pixels correspond essentially point by point to the resolution of the generated images (and thus to the resolution of the images of the received 3D video signal).
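The point-to-point write in this example (only the pixels for viewpoints V4 and V8 in each group receive content) can be sketched as follows; the function name and the flat representation of a 12-pixel group are assumptions for illustration.

```python
# Hypothetical sketch: render only the pixels of the specific viewpoints
# where the eyes are located, leaving the rest of each group untouched.

def render_group(group, eye_viewpoints, images):
    """group: list of 12 pixel values for one pixel group.
    eye_viewpoints: the specific viewpoints, e.g. [4, 8].
    images: dict viewpoint -> pixel value for this group's position.
    Returns a copy of the group with only those pixels updated."""
    out = list(group)
    for v in eye_viewpoints:
        out[v - 1] = images[v]  # viewpoints are 1-based
    return out

pg = [0] * 12
updated = render_group(pg, [4, 8], {4: "R", 8: "L"})
# only the 4th and 8th pixels of the group change
```

The same call would be applied to every pixel group (PG1,1 ... PGx,y) of the panel for each frame.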
  • resolution increase (for example, multiplication)
  • two-fold line resolution interpolation may be performed on both the color image and the depth image.
  • the processing of increasing the resolution can also be combined with the point-to-point rendering described in the embodiments of the present disclosure to obtain a new embodiment, for example, to obtain an image whose resolution corresponds to 2-times interpolation.
  • the pictures corresponding to the viewpoints V4 and V8 are then generated. Image generation for corresponding viewpoints combined with increased resolution may sometimes be referred to as generation with increased resolution (for example, multiplication).
  • another (pre-)processor may be provided to perform the resolution increase (for example, multiplication) or interpolation, or one or more 3D video processing units may perform the resolution increase (for example, multiplication) or interpolation.
  • the embodiment of rendering specific viewpoints (not all viewpoints) using eye displacement data can be combined with the aforementioned embodiments, or replaced by some features to obtain new embodiments.
  • this embodiment can be combined with features related to optical relationship data/pixel-viewpoint correspondence data to obtain a new embodiment.
  • this embodiment can be modified without explicitly grouping pixels to obtain a new embodiment.
  • the specific viewpoint also includes the viewpoint adjacent to the viewpoint where the eye is located.
  • the specific viewpoints from which the image is to be generated may also include viewpoints V3 and V5, and viewpoints V7 and V9, and then render the pixels corresponding to these viewpoints in the pixel group.
  • a single-sided adjacent viewpoint may be used as the specific viewpoint.
  • the pixels described in FIG. 12 or 13 may be rendered, and the remaining pixels are not rendered.
  • pixels that are not rendered can be left showing white light or left with the color of the previous image frame; this reduces the computational load as much as possible.
  • the display includes a self-luminous display panel, such as a MICRO LED display panel.
  • the self-luminous display panel such as the MICRO LED display panel, is configured such that pixels that are not rendered do not emit light. This can greatly save the power consumed by the display.
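On a self-luminous panel, the power saving described above amounts to driving every unrendered pixel dark, as in this sketch; the names and the dict-based frame representation are assumptions for illustration.

```python
# Hypothetical sketch: on a self-luminous (e.g. MICRO LED) panel, pixels
# that receive no rendered content in a frame are simply driven off.

OFF = 0  # no drive current -> the pixel emits no light

def drive_frame(num_pixels, rendered):
    """rendered: dict pixel_index -> value for pixels that were rendered.
    Every pixel absent from `rendered` is driven OFF."""
    return [rendered.get(i, OFF) for i in range(num_pixels)]

frame = drive_frame(12, {3: 200, 7: 180})
lit = sum(1 for p in frame if p != OFF)  # only 2 of the 12 pixels emit light
```

With only the eye viewpoints rendered, most of the panel stays dark each frame, which is the source of the power saving.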
  • a naked-eye stereoscopic display system which may include a processor unit and a multi-viewpoint naked-eye stereoscopic display, and the processor unit is in communication connection with the multi-viewpoint naked-eye stereoscopic display.
  • the naked-eye stereoscopic display system may further include, for example, an eye displacement sensor in the form of a dual camera, which is communicatively connected with the processor unit.
  • the eye displacement sensor may be provided in the display, or the system or display may have a transmission interface that can receive eye displacement data.
  • the multi-view naked-eye stereoscopic display may include a display screen with a display panel and a grating (not identified), a video signal interface configured to receive a 3D video signal, and a 3D video processing unit.
  • the display may have 12 viewpoints (V1-V12).
  • the display may have more or fewer viewpoints.
  • the display may also optionally include a timing controller and/or a display driver chip, which may be integrated with the 3D video processing unit or independently.
  • the display may integrate an eye displacement sensor, which is communicatively connected with the 3D video processing unit.
  • the display panel may include multiple rows and multiple columns of pixels and define multiple pixel groups.
  • two illustrative pixel groups PG1,1 and PGx,y are shown, each pixel group corresponds to a multi-viewpoint setting, and each has its own 12 pixels (P1-P12).
  • the pixels in the pixel group are arranged in a single row and multiple columns.
  • the pixels in the pixel group may have other arrangements, such as: single column and multiple rows or multiple rows and multiple columns.
  • the aforementioned PGx,y may represent the pixel group in the xth row and yth column.
  • the display operation of the display is described below.
  • the display can have 12 viewpoints V1-V12; at each viewpoint (spatial position), the user's eyes see the display of the corresponding pixels in each pixel group of the display panel, and thus see differently rendered pictures.
  • the two different images seen by the user's eyes at different viewpoints form a parallax, and a three-dimensional image is synthesized in the brain.
  • one or more 3D video processing units are configured to generate images and render pixels for display in such a way that multiple images corresponding to specific viewpoints are generated based on the images of the 3D video signal, and the pixels corresponding to the specific viewpoint in each pixel group are rendered based on each of the generated multiple images.
  • the specific viewpoint is determined based on eye displacement data.
  • an embodiment of the present disclosure provides a display method of a multi-view naked-eye stereoscopic display, including: defining a plurality of pixel groups, each pixel group being composed of at least 3 pixels and corresponding to the multi-viewpoint setting; receiving a 3D video signal; generating, based on the images of the received 3D video signal, multiple images corresponding to specific viewpoints (such as viewpoints V4, V5 and V8, V9); and rendering the corresponding pixels in each pixel group according to the generated multiple images.
  • the 3D video signal S1 received by the video signal interface is an image frame containing two contents: a color image and a depth-of-field image.
  • the 3D video processing unit takes the image information and depth information of the received 3D video signal S1 as input and, based on the eye displacement data, renders 4 pictures according to the viewing angles corresponding to the viewpoints V4, V5, V8 and V9 where the eyes are located. Then, the content of each generated image is written into the pixels seen by the corresponding viewpoints (such as the 4th, 5th, 8th and 9th pixels) in each pixel group (such as PG1,1 and PGx,y).
  • the eyes of the user located between the viewpoints V4 and V5 and between V8 and V9 can see the rendered images at different angles and generate parallax to form a 3D display stereoscopic effect.
  • the embodiment of rendering specific viewpoints (not all viewpoints) using eye displacement data can be combined with the aforementioned embodiments, or replaced by some features to obtain new embodiments.
  • this embodiment can be combined with features related to optical relationship data/pixel-viewpoint correspondence data to obtain a new embodiment.
  • this embodiment can be modified without explicitly grouping pixels to obtain a new embodiment.
  • embodiments of the present disclosure provide a naked-eye stereoscopic display system, which may include a processor unit and a multi-viewpoint naked-eye stereoscopic display.
  • the 3D video signal S1 received by the video signal interface is an image frame containing left and right parallax color image content.
  • the 3D video processing unit takes as input the image frames of the received 3D video signal S1 that contain the left and right parallax color image content; based on the eye displacement data, the left-eye or right-eye parallax color image is used correspondingly, according to which eye is detected.
  • for the viewpoints V4 and V5 where the right eye is located, two pictures are rendered based on the right parallax color image content of the 3D video signal S1.
  • for the viewpoints V8 and V9 where the left eye is located, two pictures are rendered based on the left parallax color image content of the 3D video signal S1. Then, the content of each generated image is written into the pixels seen by the corresponding viewpoints (such as the 4th, 5th, 8th and 9th pixels) in each pixel group (such as PG1,1 and PGx,y).
  • the eyes of the user located between the viewpoints V4 and V5 and between V8 and V9 can see the rendered images at different angles and generate parallax to form a 3D display stereoscopic effect.
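In this embodiment each eye sits between two viewpoints, so a pair of pictures per eye is rendered from the matching parallax content of S1. The sketch below records which parallax image ("R" or "L") feeds each of the four specific viewpoints; the function name and the pair representation are assumptions for illustration.

```python
# Hypothetical sketch: map each specific viewpoint to the parallax image
# of S1 ("R" for right, "L" for left) that its picture is rendered from.

def viewpoint_sources(right_eye_pair, left_eye_pair):
    """right_eye_pair / left_eye_pair: the two viewpoints flanking each eye.
    Returns dict viewpoint -> source parallax image."""
    sources = {}
    for v in right_eye_pair:
        sources[v] = "R"
    for v in left_eye_pair:
        sources[v] = "L"
    return sources

src = viewpoint_sources((4, 5), (8, 9))
# the 4th and 5th pixels of each group come from the right parallax image,
# the 8th and 9th from the left parallax image
```

Writing the four rendered pictures into the 4th, 5th, 8th and 9th pixels of each group then follows the same point-to-point scheme as before.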
  • an embodiment of the present disclosure provides a naked-eye stereoscopic display system, which may include a processor unit and a multi-viewpoint naked-eye stereoscopic display, and the processor unit is in communication connection with the multi-viewpoint naked-eye stereoscopic display.
  • the naked-eye stereoscopic display system may further include, for example, an eye displacement sensor in the form of a dual camera, which is communicatively connected with the processor unit.
  • the eye displacement sensor may be provided in the display, or the system or display may have a transmission interface that can receive eye displacement data.
  • the multi-view naked-eye stereoscopic display may include a display screen with a display panel and a grating (not identified), a video signal interface configured to receive a 3D video signal, and a 3D video processing unit.
  • the display may have 12 viewpoints (V1-V12).
  • the display may have more or fewer viewpoints.
  • the display may also optionally include a timing controller and/or a display driver chip, which may be integrated with the 3D video processing unit or independently.
  • the display may integrate an eye displacement sensor, which is communicatively connected with the 3D video processing unit.
  • the display panel may include multiple rows and multiple columns of pixels and define multiple pixel groups.
  • two illustrative pixel groups PG1,1 and PGx,y are shown, each pixel group corresponds to a multi-viewpoint setting, and each has its own 12 pixels (P1-P12).
  • the pixels in the pixel group are arranged in a single row and multiple columns.
  • the pixels in the pixel group may have other arrangements, such as: single column and multiple rows or multiple rows and multiple columns.
  • the aforementioned PGx,y may represent the pixel group in the xth row and yth column.
  • the display operation of the display is described below.
  • the display can have 12 viewpoints V1-V12; at each viewpoint (spatial position), the user's eyes see the display of the corresponding pixels in each pixel group of the display panel, and thus see differently rendered pictures.
  • the two different images seen by the user's eyes at different viewpoints form a parallax, and a three-dimensional image is synthesized in the brain.
  • one or more 3D video processing units are configured to generate images and render pixels for display in such a way that multiple images corresponding to specific viewpoints are generated based on the images of the 3D video signal, and the pixels corresponding to the specific viewpoint in each pixel group are rendered based on each of the generated multiple images.
  • the specific viewpoint is determined based on eye displacement data.
  • for each of the user's eyes (left eye and right eye), an image for the corresponding viewpoint is generated, and the pixels in the pixel group corresponding to that viewpoint are rendered.
  • the first eye is, for example, the right eye Er, and the second eye is, for example, the left eye El.
  • when the eye displacement data indicates that the user's eyes are moving, multiple images corresponding to the new specific viewpoints can be generated, and the pixels corresponding to the specific viewpoints in each pixel group are rendered based on the generated multiple images.
  • the first eye is, for example, the right eye Er, and the second eye is, for example, the left eye El.
  • the specific viewpoint can be changed based on the eye displacement data by means of the timing controller of the display.
  • an embodiment of the present disclosure provides a display method of a multi-view naked-eye stereoscopic display, including: defining a plurality of pixel groups, each pixel group being composed of at least 3 pixels and corresponding to the multi-viewpoint setting; receiving a 3D video signal; generating, based on the images of the received 3D video signal, multiple images corresponding to specific viewpoints; and rendering the corresponding pixels in each pixel group according to the generated multiple images.
  • image generation and pixel rendering are performed corresponding to the current specific viewpoints V4, V8 or V6, V10.
  • the 3D video signal S1 received by the video signal interface is an image frame containing left and right parallax color image content; based on the eye displacement data, the left-eye or right-eye parallax color image is used correspondingly, according to which eye is detected.
  • a picture is rendered based on the right parallax color image content of the 3D video signal S1.
  • a picture is rendered based on the left parallax color image content of the 3D video signal S1.
  • the content of the generated corresponding image is written into pixels (such as the 4th and 8th pixels) corresponding to the corresponding viewpoints in each pixel group (such as PG1,1 and PGx,y).
  • a picture is rendered based on the right parallax color image content of the 3D video signal S1.
  • a picture is rendered based on the left parallax color image content of the 3D video signal S1.
  • the content of the generated corresponding image is written into the pixels (such as the 6th and 10th pixels) corresponding to the corresponding viewpoints in each pixel group (such as PG1,1 and PGx,y).
  • the eyes of the user in the motion state can still see the rendered images at different angles, and the parallax is generated to form a 3D display stereo effect.
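The moving-eye case above (the specific viewpoints change from V4, V8 to V6, V10 as the sensor reports new positions) can be sketched as a simple update of the tracked viewpoint pair; the function name and the uniform-displacement model are assumptions for illustration.

```python
# Hypothetical sketch: update the specific viewpoints as the eye
# displacement sensor reports that the user's eyes have moved.

def update_viewpoints(current, displacement):
    """current: (right_eye_viewpoint, left_eye_viewpoint).
    displacement: number of viewpoints the eyes have moved; positive moves
    both eyes toward higher-numbered viewpoints."""
    return tuple(v + displacement for v in current)

before = (4, 8)            # right eye at V4, left eye at V8
after = update_viewpoints(before, 2)  # eyes moved over by two viewpoints
# subsequent frames are rendered for, and written to, V6 and V10
```

Each new frame then renders and writes only the pixels of the updated viewpoint pair, exactly as in the stationary case.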
  • an embodiment of the present disclosure provides a naked-eye stereoscopic display system, which may include a processor unit and a multi-viewpoint naked-eye stereoscopic display, and the processor unit is communicatively connected to the multi-viewpoint naked-eye stereoscopic display.
  • the naked-eye stereoscopic display system may further include, for example, an eye displacement sensor in the form of a dual camera, which is communicatively connected with the processor unit.
  • the eye displacement sensor may be provided in the display, or the system or display may have a transmission interface that can receive eye displacement data.
  • the multi-view naked-eye stereoscopic display may include a display screen with a display panel and a grating (not identified), a video signal interface configured to receive a 3D video signal, and a 3D video processing unit.
  • the display may have 12 viewpoints (V1-V12).
  • the display may have more or fewer viewpoints.
  • the display may also optionally include a timing controller and/or a display driver chip, which may be integrated with the 3D video processing unit or independently.
  • the display may integrate an eye displacement sensor, which is communicatively connected with the 3D video processing unit.
  • the display panel may include multiple rows and multiple columns of pixels and define multiple pixel groups.
  • two illustrative pixel groups PG1,1 and PGx,y are shown, each pixel group corresponds to a multi-viewpoint setting, and each has its own 12 pixels (P1-P12).
  • the pixels in the pixel group are arranged in a single row and multiple columns.
  • the pixels in the pixel group may have other arrangements, such as a single column and multiple rows or multiple rows and multiple columns.
  • the aforementioned PGx,y may represent the pixel group in the xth row and yth column.
  • the display operation of the display is described below.
  • the display can have 12 viewpoints V1-V12; at each viewpoint (spatial position), the user's eyes see the display of the corresponding pixels in each pixel group of the display panel, and thus see differently rendered pictures.
  • the two different images seen by the user's eyes at different viewpoints form a parallax, and a three-dimensional image is synthesized in the brain.
  • one or more 3D video processing units are configured to generate images and render pixels for display in such a way that multiple images corresponding to specific viewpoints are generated based on the image of the 3D video signal, and the pixels corresponding to each specific viewpoint in each pixel group are rendered according to the generated images. In some embodiments, there are multiple users, such as two; based on the positions of the eyes of the different users, an image is rendered for each corresponding viewpoint and written into the corresponding pixel in the pixel group.
  • an embodiment of the present disclosure provides a display method of a multi-viewpoint naked-eye stereoscopic display, including: defining multiple pixel groups, each composed of at least 3 pixels and corresponding to the multi-viewpoint arrangement; receiving a 3D video signal; generating, based on the image of the received 3D video signal, multiple images corresponding to specific viewpoints (such as the viewpoints V4 and V6 corresponding to the left and right eyes of the first user and the viewpoints V8 and V10 corresponding to the left and right eyes of the second user); and rendering the corresponding pixels in each pixel group according to the generated images.
  • the 3D video signal S1 received by the video signal interface is an image frame containing both a color image and a depth image.
  • the 3D video processing unit takes the image information and depth information of the received 3D video signal S1 as input and, based on the eye displacement data, renders 4 pictures according to the corresponding viewing angles for the viewpoints V4 and V6 corresponding to the left and right eyes of the first user and the viewpoints V8 and V10 corresponding to the left and right eyes of the second user.
  • an embodiment of the present disclosure provides a naked-eye stereoscopic display system, which may include a processor unit and a multi-viewpoint naked-eye stereoscopic display.
  • the 3D video signal S1 received by the video signal interface is an image frame containing left and right parallax color image content.
  • the 3D video processing unit takes the image frames of the received 3D video signal S1 that contain the left and right parallax color image content as input. Based on the eye displacement data, a left-eye or right-eye parallax color image is generated for each eye detected from the eye displacement data.
  • for the viewpoint V4 where the right eye of the first user is located and the viewpoint V8 where the right eye of the second user is located, two pictures are rendered based on the right parallax color image content of the 3D video signal S1.
  • for the viewpoint V6 where the left eye of the first user is located and the viewpoint V10 where the left eye of the second user is located, two pictures are rendered based on the left parallax color image content of the 3D video signal S1.
  • the content of each generated image is written into the pixels in each pixel group (such as PG1,1 and PGx,y) that are seen from the corresponding viewpoints (such as the 4th, 6th, 8th, and 10th pixels).
  • an embodiment of the present disclosure provides a naked-eye stereoscopic display system, which may include a processor unit and a multi-viewpoint naked-eye stereoscopic display.
  • the display is configured to receive multiple signal inputs, for example, two signal inputs including S1 (left and right parallax image) and S2 (color image and depth image).
  • the first user wants to see the left and right parallax signal S1
  • the second user wants to see the color-plus-depth signal S2.
  • the 3D video processing unit, based on the positions of the left and right eyes (Er and El) of the first user (viewpoints V4 and V6) and the positions of the left and right eyes (Er and El) of the second user (viewpoints V8 and V10), renders images for the corresponding viewpoints and writes them into the corresponding pixels in each pixel group (such as the 4th, 6th, 8th, and 10th pixels).
  • each user can watch the rendered pictures corresponding to their own viewing angles, producing parallax to form a stereoscopic 3D display effect, and different users can watch different video content.
  • the embodiments of the present disclosure may have different implementation solutions.
  • the video signal interface receives a MIPI signal with a resolution of 1920x1200, and the signal is converted into a mini-LVDS signal after entering the timing controller.
  • the traditional processing method is to provide multiple display driver chips for the signal output of the screen.
  • a 3D video processing unit (or unit group) in the form of FPGA or ASIC is provided before the display driver chip.
  • the resolution of the display screen is 1920x12x1200, and the signal received by the interface is processed to complete the resolution expansion for each viewpoint, that is, a 12x resolution expansion of the received video.
  • the video signal interface may have multiple implementation forms, including but not limited to, for example, DisplayPort version 1.2 (DP 1.2), High Definition Multimedia Interface version 2.0 (HDMI 2.0), V-by-One, etc., or wireless interfaces such as WiFi, Bluetooth, cellular networks, etc.
  • the display or display system and display method can be combined with other image processing technologies, for example: color adjustment of the video signal, including color space rotation (Color Tint) adjustment and color gain (Color Gain) adjustment; and brightness adjustment, including contrast adjustment, drive gain adjustment, and gamma (GAMMA) curve adjustment.
  • the display system 1800 is a cellular phone or is configured as part of a cellular phone.
  • the processing unit of the display system may be provided by or integrated in a processor of a cellular phone, such as an application processor (AP).
  • the eye displacement sensor may include or be configured as a camera of a cellular phone, for example, a front camera.
  • the eye displacement sensor may include or be configured as a front camera combined with a structured light camera.
  • the display system may be configured as a tablet computer, a personal computer, or a wearable device with a processor unit.
  • the naked-eye stereoscopic display may be a digital television (smart or non-smart).
  • the display system 1900 may be configured as a naked-eye stereoscopic display 1904 connected to a set-top box 1902 or a screen-casting cellular phone or tablet computer, with the processor unit contained in the set-top box, screen-casting cellular phone, or tablet computer.
  • the naked-eye stereoscopic display is a smart TV and is integrated with a processor unit.
  • the naked-eye stereoscopic display system is configured as a smart home system or a part thereof.
  • the smart home system 2000 may include a smart gateway 2002 or central controller including or integrated with a processor unit, a naked-eye stereoscopic display 2004, and an eye displacement sensor for obtaining eye displacement data, such as a dual camera 2006.
  • the eye displacement sensor may take other forms, such as a single camera, a combination of a camera and a depth-of-field camera, and so on.
  • both the display and the eye displacement sensor can communicate with the smart gateway or the central controller, for example, wirelessly connect via WiFi.
  • the display and the eye displacement sensor can also be connected wirelessly or wiredly with the smart gateway or the central controller in other ways.
  • the naked-eye stereoscopic display system is configured as an entertainment interactive system or a part thereof.
  • FIG. 21 shows a naked-eye stereoscopic display system according to an embodiment of the present disclosure, which is configured as an entertainment interactive system 2100 or a part thereof.
  • the entertainment interactive system 2100 includes a naked-eye stereoscopic display 2104 and an eye displacement sensor for obtaining eye displacement data, such as a dual camera 2106 (the processor unit is not shown).
  • the entertainment interactive system 2100 is configured to be suitable for use by multiple people, for example, two or more users.
  • the naked-eye stereoscopic display 2104 of the entertainment interactive system 2100 generates images based on the eye displacement data from the eye displacement sensor, such as the dual camera 2106, and writes the images into the pixels corresponding to the viewpoints.
  • the entertainment interactive system 2100 may also be combined with embodiments of multiple signal inputs to obtain new embodiments.
  • based on the user's interaction (for example, based on the data detected by the eye displacement sensor or other sensors), the processing unit accordingly generates multiple (such as two) personalized video signals, which can be displayed using the display and the display method described in the embodiments of the present disclosure.
  • the entertainment interactive system can provide users with a high degree of freedom and interaction.
  • the multi-view naked-eye stereoscopic display, display system, and display method provided by the embodiments of the present disclosure can alleviate the problem of a significant drop in resolution of the naked-eye stereoscopic display.
  • the embodiment of the present disclosure provides a computer-readable storage medium that stores computer-executable instructions, and the computer-executable instructions are configured to execute the above-mentioned display method of the multi-view naked-eye stereoscopic display.
  • the embodiments of the present disclosure provide a computer program product, including a computer program stored on a computer-readable storage medium, the computer program including program instructions which, when executed by a computer, cause the computer to execute the above-mentioned display method of the multi-view naked-eye stereoscopic display.
  • the aforementioned computer-readable storage medium may be a transitory computer-readable storage medium or a non-transitory computer-readable storage medium.
  • the computer-readable storage medium and computer program product provided by the embodiments of the present disclosure can alleviate the problem of a significant drop in resolution of naked-eye stereoscopic display.
  • the technical solutions of the embodiments of the present disclosure can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes one or more instructions to enable a computer device (which can be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the method of the embodiments of the present disclosure.
  • the aforementioned storage medium may be a non-transitory storage medium, including: a USB flash drive, a mobile hard disk, read-only memory (ROM), random access memory (RAM), a magnetic disk or an optical disk, etc.
  • the first element can be called the second element, and similarly, the second element can be called the first element, as long as all occurrences of the "first element" are renamed consistently and all occurrences of the "second element" are renamed consistently.
  • the first element and the second element are both elements, but they may not be the same element.
  • the terms used in this application are only used to describe the embodiments and are not used to limit the claims. As used in the description of the embodiments and claims, unless the context clearly indicates otherwise, the singular forms "a" (a), "an" (an) and "the" (the) are intended to also include the plural forms.
  • the term “and/or” as used in this application refers to any and all possible combinations of one or more of the associated lists.
  • the term "comprise" and its variants "comprises" and/or "comprising" and the like refer to the existence of the stated features, wholes, steps, operations, elements, and/or components, without excluding the existence or addition of one or more other features, wholes, steps, operations, elements, components and/or groups of these. Without further restrictions, an element defined by the phrase "including a..." does not exclude the existence of other identical elements in the process, method, or device that includes that element.
  • each embodiment focuses on the differences from other embodiments, and the same or similar parts between the various embodiments can be referred to each other.
  • for the methods, products, etc. disclosed in the embodiments, if they correspond to the method parts disclosed in the embodiments, refer to the descriptions in the method parts for the relevant points.
  • the disclosed methods and products may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of units may be only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features can be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection between devices or units through some interfaces, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units can be selected to implement this embodiment according to actual needs.
  • the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • each block in the flowchart or block diagram may represent a module, program segment, or part of the code, and the above-mentioned module, program segment, or part of the code contains one or more executable instructions for realizing the specified logic function.
  • the functions marked in the blocks may also occur in a different order from the order marked in the drawings. For example, two consecutive blocks may actually be executed in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved.
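The signal-processing chain described in the bullets above — a 1920x1200 input frame fanned out across 12 viewpoints so that each input column becomes a 12-pixel viewpoint group — can be illustrated with a toy mapping. This is a minimal sketch, not the patent's FPGA/ASIC implementation; the function name and tuple-valued pixels are illustrative.

```python
# A minimal sketch (not the patent's implementation) of the 12x resolution
# expansion described above: every input column is fanned out into a
# 12-pixel viewpoint group, giving a 1920*12-wide panel row per input row.
# Frames are plain nested lists; pixels are (viewpoint, column) tags.

VIEWPOINTS = 12

def expand_row(input_row, views):
    """Fan one input row out to len(input_row) pixel groups.

    `views` is a list of VIEWPOINTS per-viewpoint rows (each the same
    length as `input_row`); output column group g holds, at offset v,
    the pixel that viewpoint v should see at source column g.
    """
    out = []
    for col in range(len(input_row)):
        for v in range(VIEWPOINTS):
            out.append(views[v][col])
    return out

# Tiny demo: a 4-pixel input row and 12 synthetic viewpoint rows.
views = [[(v, c) for c in range(4)] for v in range(VIEWPOINTS)]
panel_row = expand_row(views[0], views)
assert len(panel_row) == 4 * VIEWPOINTS      # 12x column expansion
assert panel_row[0] == (0, 0)                # viewpoint V1, column 0
assert panel_row[4 * 12 - 1] == (11, 3)      # viewpoint V12, last column
```

The same index arithmetic (panel column = source column x 12 + viewpoint) is what makes the 1920x12x1200 panel carry full input resolution for every viewpoint.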

Abstract

This application relates to the technical field of stereoscopic display and discloses a multi-viewpoint naked-eye stereoscopic display, including a display screen with a display panel and a grating, a video signal interface configured to receive a 3D video signal, and one or more 3D video processing units; wherein the display panel includes multiple rows and columns of pixels and defines multiple pixel groups, each pixel group among the multiple pixel groups is composed of at least 3 pixels and corresponds to the multi-viewpoint arrangement, the irregular mutual arrangement positions of the multiple pixel groups are adjusted or determined based on the optical relationship between the pixels and the grating and/or the correspondence between the pixels and the viewpoints, and the one or more 3D video processing units are configured to render the corresponding pixels in each pixel group. The multi-viewpoint naked-eye stereoscopic display disclosed in this application can alleviate the problem of a significant drop in resolution in naked-eye stereoscopic display. This application also discloses a multi-viewpoint naked-eye stereoscopic display system and a display method of the multi-viewpoint naked-eye stereoscopic display.

Description

Multi-viewpoint naked-eye stereoscopic display, display system, and display method
This application claims priority to the Chinese patent application filed with the China Patent Office on March 29, 2019, with application number 201910247546.X and invention title "A resolution-lossless naked-eye stereoscopic display system", the entire content of which is incorporated herein by reference.
Technical Field
This application relates to the technical field of stereoscopic display, and for example to a multi-viewpoint naked-eye stereoscopic display, display system, and display method.
Background
At present, stereoscopic display technology mainly includes glasses-type and naked-eye (glasses-free) stereoscopic display technology. Naked-eye stereoscopic display technology allows the user to view a stereoscopic display picture directly, without wearing glasses. Compared with glasses-type stereoscopic display, naked-eye stereoscopic display reduces the constraints on the user.
Generally, naked-eye stereoscopic display is viewpoint-based: a sequence of parallax images (frames) is formed at different positions in space, so that a pair of stereoscopic images with a parallax relationship can enter the viewer's left and right eyes respectively, giving the user a sense of depth. A traditional multi-viewpoint naked-eye stereoscopic (3D) display with, for example, N viewpoints needs multiple independent pixels on the display panel to project to the multiple viewpoints in space.
In the process of implementing the embodiments of the present disclosure, at least the following problem was found in the related art:
Since the total resolution of the display panel of a traditional multi-viewpoint naked-eye stereoscopic display is fixed, the resolution drops significantly during 3D display; for example, the row/column resolution drops to 1/N of the original resolution.
Summary
In order to provide a basic understanding of some aspects of the disclosed embodiments, a brief summary is given below. This summary is not a general review, nor is it intended to identify key/important constituent elements or delineate the protection scope of these embodiments; it serves as a preamble to the detailed description that follows.
The embodiments of the present disclosure provide a multi-viewpoint naked-eye stereoscopic display, display system, and display method, to alleviate the problem of a significant drop in resolution in naked-eye stereoscopic display.
The multi-viewpoint naked-eye stereoscopic display provided by the embodiments of the present disclosure includes a display screen with a display panel and a grating, a video signal interface configured to receive a 3D video signal, and one or more 3D video processing units;
wherein the display panel includes multiple rows and columns of pixels and defines multiple pixel groups, each of which is composed of at least 3 pixels and corresponds to the multi-viewpoint arrangement; the irregular mutual arrangement positions of the multiple pixel groups are adjusted or determined based on the optical relationship between the pixels and the grating and/or the correspondence between the pixels and the viewpoints; and the one or more 3D video processing units are configured to render the corresponding pixels in each pixel group.
The multi-viewpoint naked-eye stereoscopic display system provided by the embodiments of the present disclosure includes a processor unit and the above multi-viewpoint naked-eye stereoscopic display, wherein the processor unit is communicatively connected with the multi-viewpoint naked-eye stereoscopic display.
In the display method of the multi-viewpoint naked-eye stereoscopic display provided by the embodiments of the present disclosure, the multi-viewpoint naked-eye stereoscopic display includes a display screen with a display panel and a grating, and the display panel includes multiple rows and columns of pixels; the display method includes:
receiving a 3D video signal;
generating, based on the image of the received 3D video signal, multiple images corresponding to all viewpoints or specific viewpoints among the multiple viewpoints;
rendering the corresponding pixels according to the generated multiple images, wherein the rendered pixels are determined based on the optical relationship between the pixels and the grating and/or the correspondence between the pixels and the viewpoints.
The multi-viewpoint naked-eye stereoscopic display, display system, and display method provided by the embodiments of the present disclosure can achieve the following technical effect:
alleviating the problem of a significant drop in resolution in naked-eye stereoscopic display.
The above general description and the following description are exemplary and explanatory only and are not intended to limit this application.
Brief Description of the Drawings
One or more embodiments are illustrated by the corresponding drawings; these exemplary illustrations and drawings do not constitute a limitation on the embodiments. Elements with the same reference numerals in the drawings are shown as similar elements. The drawings are not drawn to scale, and in which:
FIG. 1A shows a schematic structural diagram of a multi-viewpoint naked-eye stereoscopic display according to an embodiment of the present disclosure;
FIG. 1B shows a schematic structural diagram of a multi-viewpoint naked-eye stereoscopic display according to an embodiment of the present disclosure;
FIG. 1C shows a schematic structural diagram of a multi-viewpoint naked-eye stereoscopic display according to an embodiment of the present disclosure;
FIG. 2 shows a schematic structural diagram of the correspondence between pixels and viewpoints in the display panel shown in FIGS. 1A-C;
FIG. 3 schematically shows the generation of images corresponding to each viewpoint from the image (frame) of the received 3D video signal in FIGS. 1A-C;
FIG. 4 shows a schematic structural diagram of a single 3D video processing unit of a multi-viewpoint naked-eye stereoscopic display according to an embodiment of the present disclosure;
FIG. 5 shows a schematic structural diagram of multiple 3D video processing units of a multi-viewpoint naked-eye stereoscopic display according to an embodiment of the present disclosure;
FIG. 6 schematically shows the generation of images corresponding to each viewpoint from the image (frame) of the received 3D video signal in FIG. 1;
FIG. 7A shows a schematic structural diagram of a multi-viewpoint naked-eye stereoscopic display according to an embodiment of the present disclosure;
FIG. 7B shows a schematic structural diagram of the multi-viewpoint naked-eye stereoscopic display in FIG. 7A;
FIG. 8 shows a schematic structural diagram of a multi-viewpoint naked-eye stereoscopic display according to an embodiment of the present disclosure;
FIG. 9 shows a partial schematic structural diagram of a multi-viewpoint naked-eye stereoscopic display using a lenticular prism grating according to an embodiment of the present disclosure;
FIG. 10 shows a partial schematic structural diagram of a multi-viewpoint naked-eye stereoscopic display using a lenticular prism grating according to an embodiment of the present disclosure;
FIG. 11 shows a partial schematic structural diagram of a multi-viewpoint naked-eye stereoscopic display using a parallax barrier grating according to an embodiment of the present disclosure;
FIG. 12 shows a schematic structural diagram of a multi-viewpoint naked-eye stereoscopic display using eye displacement data according to an embodiment of the present disclosure;
FIG. 13 shows a schematic structural diagram of a multi-viewpoint naked-eye stereoscopic display using eye displacement data according to an embodiment of the present disclosure;
FIG. 14 shows a schematic structural diagram of a multi-viewpoint naked-eye stereoscopic display using eye displacement data according to an embodiment of the present disclosure;
FIG. 15 shows a schematic structural diagram of a multi-viewpoint naked-eye stereoscopic display using eye displacement data according to an embodiment of the present disclosure;
FIG. 16 shows a schematic structural diagram of a multi-viewpoint naked-eye stereoscopic display using eye displacement data according to an embodiment of the present disclosure;
FIG. 17 schematically shows the generation of images corresponding to specific viewpoints from the images (frames) of the two received channels of 3D video signals in FIG. 16;
FIG. 18 schematically shows the principle of a multi-viewpoint naked-eye stereoscopic display system configured as a cellular phone or a part thereof according to an embodiment of the present disclosure;
FIG. 19 schematically shows the principle of a multi-viewpoint naked-eye stereoscopic display system configured as a digital television connected to a set-top box according to an embodiment of the present disclosure;
FIG. 20 schematically shows the principle of a multi-viewpoint naked-eye stereoscopic display system configured as a smart home system or a part thereof according to an embodiment of the present disclosure;
FIG. 21 schematically shows the principle of a multi-viewpoint naked-eye stereoscopic display system configured as an entertainment interactive system or a part thereof according to an embodiment of the present disclosure.
Detailed Description
In order to understand the features and technical content of the embodiments of the present disclosure in more detail, the implementation of the embodiments of the present disclosure is described below in detail with reference to the accompanying drawings. The attached drawings are for reference and illustration only and are not used to limit the embodiments of the present disclosure. In the following technical description, for convenience of explanation, numerous details are provided for a sufficient understanding of the disclosed embodiments. However, one or more embodiments can still be implemented without these details. In other cases, well-known structures and devices may be shown in simplified form in order to simplify the drawings.
The multi-viewpoint naked-eye stereoscopic display provided by the embodiments of the present disclosure includes a display screen with a display panel and a grating, a video signal interface configured to receive a 3D video signal, and one or more 3D video processing units;
wherein the display panel includes multiple rows and columns of pixels and defines multiple pixel groups, each of which is composed of at least 3 pixels and corresponds to the multi-viewpoint arrangement; the irregular mutual arrangement positions of the multiple pixel groups are adjusted or determined based on the optical relationship between the pixels and the grating and/or the correspondence between the pixels and the viewpoints; and the one or more 3D video processing units are configured to render the corresponding pixels in each pixel group.
In some embodiments, the grating may include a lenticular prism grating, and the optical relationship may include the alignment relationship between the pixels and the lenticular prism grating and/or the refraction state of the lenticular prism grating relative to the corresponding pixels.
In some embodiments, the grating may include a front and/or rear parallax barrier grating; the parallax barrier grating may include light-blocking portions and light-transmitting portions, and the optical relationship may include the alignment relationship between the pixels and the corresponding light-transmitting portions of the parallax barrier grating.
In some embodiments, the optical relationship may be alignment data obtained by measuring the pixels in the display panel and the grating. Optionally, the optical relationship may be the refraction state of the grating relative to the pixels in the display panel.
In some embodiments, the correspondence may be calculated or determined based on the optical relationship. Optionally, the correspondence may be determined by measurement at each of the multiple viewpoints.
In some embodiments, the multi-viewpoint naked-eye stereoscopic display may further include a memory storing the optical relationship and/or the correspondence. Optionally, the one or more 3D video processing units may be configured to obtain the data in the memory.
The multi-viewpoint naked-eye stereoscopic display system provided by the embodiments of the present disclosure includes a processor unit and the above multi-viewpoint naked-eye stereoscopic display, wherein the processor unit is communicatively connected with the multi-viewpoint naked-eye stereoscopic display.
In some embodiments, the naked-eye stereoscopic display system may include:
a smart TV with a processor unit; or
a smart cellular phone, tablet computer, personal computer, or wearable device; or
a set-top box or a screen-casting cellular phone or tablet computer serving as the processor unit, and a digital television serving as the multi-viewpoint naked-eye stereoscopic display, connected to the set-top box, cellular phone, or tablet computer by wire or wirelessly; or
a smart home system or a part thereof, wherein the processor unit includes a smart gateway or central controller of the smart home system, and the smart home system further includes an eye displacement sensor configured to obtain eye displacement data; or
an entertainment interactive system or a part thereof.
In some embodiments, the entertainment interactive system may be configured to be suitable for use by multiple people and to generate multiple channels of 3D video signals based on multiple users for transmission to the multi-viewpoint naked-eye stereoscopic display.
In the display method of the multi-viewpoint naked-eye stereoscopic display provided by the embodiments of the present disclosure, the multi-viewpoint naked-eye stereoscopic display includes a display screen with a display panel and a grating, and the display panel includes multiple rows and columns of pixels; the method includes:
receiving a 3D video signal;
generating, based on the image of the received 3D video signal, multiple images corresponding to all viewpoints or specific viewpoints among the multiple viewpoints;
rendering the corresponding pixels according to the generated multiple images, wherein the rendered pixels are determined based on the optical relationship between the pixels and the grating and/or the correspondence between the pixels and the viewpoints.
In some embodiments, the optical relationship may be determined in the following ways:
measuring the alignment data between the pixels in the display panel and the grating, and taking the measured alignment data as the optical relationship; or
taking the refraction state of the grating relative to the pixels in the display panel as the optical relationship.
In some embodiments, the correspondence may be determined in the following ways:
calculating or determining the correspondence based on the optical relationship; or
measuring at each of the multiple viewpoints to determine the correspondence.
Referring to FIG. 1A, an embodiment of the present disclosure provides a naked-eye stereoscopic display system, which may include a processor unit and a multi-viewpoint naked-eye stereoscopic display, the processor unit being communicatively connected with the multi-viewpoint naked-eye stereoscopic display. In some embodiments, the processor unit includes a processing/sending/forwarding/control device configured to send the 3D video signal to the naked-eye stereoscopic display; it may be a device that both generates and sends the 3D video signal, or a device that forwards a received 3D video signal to the display, with or without processing it. In some embodiments, the processor unit may be included in, or referred to as, a processing terminal or terminal.
The multi-viewpoint naked-eye stereoscopic display may include a display screen with a display panel and a grating (not labeled), a video signal interface configured to receive a 3D video signal, and a 3D video processing unit. Referring to FIG. 2, in some embodiments, the display may have 12 viewpoints (V1-V12). Optionally, the display may have more or fewer viewpoints.
In some embodiments, the display may also optionally include a timing controller and/or a display driver chip, which may be integrated with the 3D video processing unit or provided independently.
In some embodiments, the display may also optionally include a memory for storing required data.
Continuing to refer to FIG. 1A, the display panel may include multiple rows and columns of pixels and define multiple pixel groups. In some embodiments, two illustrative pixel groups PG1,1 and PGx,y are shown; each pixel group corresponds to the multi-viewpoint arrangement and has its own 12 pixels (P1-P12). In some embodiments, the pixels in the pixel group are arranged in a single row and multiple columns. Optionally, the pixels in the pixel group may have other arrangements, such as a single column and multiple rows, or multiple rows and multiple columns. Optionally, the aforementioned PGx,y may represent the pixel group in the Xth row and Yth column.
Referring to FIGS. 1A and 2 together, the display operation of the display is described. As mentioned above, the display can have 12 viewpoints V1-V12; at each viewpoint (spatial position), the user's eye can see the display of the corresponding pixel in each pixel group of the display panel, and thus see a different rendered picture. The two different pictures seen by the user's two eyes at different viewpoints form a parallax, and a stereoscopic picture is synthesized in the brain.
In some embodiments, one or more 3D video processing units are configured to generate images for display and render pixels in such a way that multiple images corresponding to all viewpoints are generated based on the image of the 3D video signal, and the corresponding pixels in each pixel group are rendered according to the generated multiple images.
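The all-viewpoint rendering rule just described — generate one image per viewpoint, then write pixel v of every pixel group from viewpoint image v — can be sketched as follows. This is a minimal sketch under assumed data layouts; the names `render_viewpoint` and `render_panel` are illustrative, and the per-viewpoint view synthesis is replaced by a stand-in that merely tags pixels.

```python
# A minimal sketch of the pixel-group rendering rule: pixel v of every
# pixel group PG[r][c] is written from viewpoint image v at (r, c).

VIEWPOINTS = 12

def render_viewpoint(frame, v, rows, cols):
    # Stand-in for real view synthesis from color+depth: each
    # per-viewpoint image just tags pixels with the viewpoint index.
    return [[(v, frame[r][c]) for c in range(cols)] for r in range(rows)]

def render_panel(frame):
    rows, cols = len(frame), len(frame[0])
    images = [render_viewpoint(frame, v, rows, cols) for v in range(VIEWPOINTS)]
    # Each pixel group holds one pixel per viewpoint (a single row of 12).
    return [[[images[v][r][c] for v in range(VIEWPOINTS)]
             for c in range(cols)] for r in range(rows)]

panel = render_panel([[10, 20], [30, 40]])
assert len(panel[0][0]) == VIEWPOINTS
assert panel[0][1][3] == (3, 20)    # group (0,1), pixel for viewpoint V4
assert panel[1][0][11] == (11, 30)  # group (1,0), pixel for viewpoint V12
```

Because each pixel group mirrors one source-image position across all 12 viewpoints, every viewpoint retains the source frame's full resolution.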
An embodiment of the present disclosure provides a pixel group arrangement method for a multi-viewpoint naked-eye stereoscopic display, including: providing a display screen with a display panel and a grating, wherein the display panel includes multiple rows and columns of pixels; obtaining the optical relationship between the pixels in the display panel and the grating and/or the correspondence between the pixels in the display panel and the viewpoints; and defining multiple pixel groups based on the obtained optical relationship and/or the correspondence between each pixel and the viewpoints, each pixel group being composed of at least 3 pixels and corresponding to the multi-viewpoint arrangement. The defined pixel groups can support the multi-viewpoint naked-eye stereoscopic display of the display.
An embodiment of the present disclosure provides a display method of a multi-viewpoint naked-eye stereoscopic display, including: defining multiple pixel groups, each composed of at least 3 pixels and corresponding to the multi-viewpoint arrangement; receiving a 3D video signal; generating, based on the image of the received 3D video signal, multiple images corresponding to all viewpoints or specific viewpoints; and rendering the corresponding pixels in each pixel group according to the generated multiple images. In some embodiments, image generation and pixel rendering are performed for all viewpoints (12).
Referring to FIGS. 1A, 2, and 3 together, the processing of the 3D video processing unit is described. The 3D video signal S1 received by the video signal interface is an image frame containing both a color image and a depth image. Accordingly, the 3D video processing unit takes the image information and depth information of the received 3D video signal S1 as input and renders 12 pictures according to the viewing angles corresponding to the viewpoints V1-V12. Then, the content of each generated image is written into the pixels seen from the corresponding viewpoint.
Thus, when the user's eyes view at different viewpoints V1-V12, they can see rendered pictures from different angles, producing parallax to form the stereoscopic effect of 3D display.
In some embodiments, the 12 generated pictures may be generated with the same resolution as the corresponding image frame of the received 3D video signal. Optionally, the pixels written correspond essentially point by point to the resolution of the generated image (and thus to the resolution of the image of the received 3D video signal).
In some embodiments, resolution-increasing (e.g., multiplying) processing of the received 3D video signal, such as interpolation, may also be performed, which may be referred to as preprocessing. In some embodiments, 2x row-resolution interpolation may be performed on both the color image and the depth image. In some embodiments, the resolution-increasing (e.g., multiplying) processing may also be combined with the point-to-point rendering described in the embodiments of the present disclosure to obtain new embodiments. Herein, the generation of images for the corresponding viewpoints combined with resolution increase may sometimes also be referred to as resolution-increased (e.g., multiplied) generation.
In some embodiments, an additional (pre)processor may be provided to perform the resolution increase (e.g., multiplication) or interpolation, or the resolution increase or interpolation may be performed by one or more 3D video processing units.
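The 2x row-resolution interpolation mentioned above can be sketched with a simple averaging filter. The patent does not fix a particular filter, so element-wise averaging of neighbouring rows is an assumption made purely for illustration.

```python
# A small sketch of 2x row-resolution interpolation (hypothetical
# preprocessing; the filter choice is an assumption). Each new row is
# the element-wise average of its two neighbours, clamped at the edge.

def interpolate_rows_2x(image):
    out = []
    for i, row in enumerate(image):
        out.append(row)
        nxt = image[i + 1] if i + 1 < len(image) else row  # clamp at edge
        out.append([(a + b) / 2 for a, b in zip(row, nxt)])
    return out

doubled = interpolate_rows_2x([[0, 0], [2, 4]])
assert len(doubled) == 4              # row count doubled
assert doubled[1] == [1.0, 2.0]       # average of rows 0 and 1
assert doubled[3] == [2.0, 4.0]       # edge row averaged with itself
```

The same routine would be applied to both the color image and the depth image before per-viewpoint generation.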
In some embodiments, the display system or display may include an eye displacement sensor or may be able to read eye displacement data.
Referring to FIG. 1B, an embodiment of the present disclosure provides a naked-eye stereoscopic display system, which may include a processor unit and a multi-viewpoint naked-eye stereoscopic display, the processor unit being communicatively connected with the display. In some embodiments, the display may integrate an eye displacement sensor, which is communicatively connected with the 3D video processing unit. In some embodiments, the display may be provided with a memory for storing eye displacement data, and the 3D video processing unit is connected with the memory and reads the eye displacement data. In some embodiments, the eye displacement data may be real-time data. In some embodiments, the eye displacement sensor may, for example, take the form of a dual camera. In some embodiments, other forms of eye displacement sensor may be used, such as a single camera, a combination of an eye displacement camera and a depth-of-field camera, and other sensing devices, or combinations thereof, that can be used to determine the position of the user's eyes. In some embodiments, the eye displacement sensor may serve other purposes or be shared with other functions or components. Optionally, in a display system configured as a cellular phone, the phone's own front camera may be used as the eye displacement sensor.
In some embodiments, the display may include an eye displacement data interface, by means of which the 3D processing unit may read the eye displacement data.
Referring to FIG. 1C, an embodiment of the present disclosure provides a naked-eye stereoscopic display system, which may include a processor unit and a multi-viewpoint naked-eye stereoscopic display, the processor unit being communicatively connected with the display. In some embodiments, the naked-eye stereoscopic display system may further include an eye displacement sensor, for example in the form of a dual camera, communicatively connected with the processor unit. Furthermore, the display may include an eye displacement data interface, and the 3D processing unit may be communicatively connected with the processor unit via the eye displacement data interface to read the eye displacement data.
In some embodiments, the processor unit may not be equipped with or connected to an eye displacement sensor but may read the eye displacement data directly. Alternatively, the 3D processing unit may obtain eye displacement data from other sources via the eye displacement data interface.
In the embodiments of the present disclosure, the embodiments related to the eye displacement sensor may be combined with other embodiments. For example, the point-to-point rendering embodiments may be combined with the use of the eye displacement sensor or its data to obtain new embodiments. Other embodiments may likewise be obtained using the eye displacement sensor or its data.
In some embodiments, the generated image content is written (rendered) into the pixels of the display panel by row-by-row writing (rendering). This greatly reduces the pressure of rendering computation.
In some embodiments, the row-by-row writing (rendering) process is performed as follows: the information of the corresponding points in each generated image is read separately and written row by row into the pixels of the display panel.
In some embodiments, the method further includes compositing the multiple generated images into a composite image, reading the information of the corresponding points in the composite image, and writing it row by row into the pixels of the display panel.
FIG. 4 provides a single 3D video processing unit, which simultaneously handles the generation of images corresponding to multiple viewpoints and the rendering of the corresponding multiple pixels in the pixel groups.
In some embodiments, multiple 3D video processing units may be provided, which handle image generation and pixel rendering in parallel, in series, or in a series-parallel combination. In one embodiment, multiple 3D video processing units are provided in the multi-viewpoint naked-eye stereoscopic display, and each 3D video processing unit is assigned multiple rows or columns of pixels and renders its own rows or columns. In one embodiment, the multiple 3D video processing units may be assigned their respective rows or columns of pixels in sequence. For example, assuming 4 3D video processing units are provided and the display panel has M columns of pixels in total, each 3D video processing unit is assigned its own M/4 columns of pixels in order, for example from left to right or from right to left.
Referring to FIG. 5, multiple 3D video processing units, i.e., a 3D video processing unit group, are provided. The multiple parallel 3D video processing units are arranged in parallel in sequence, each corresponding to its own multiple columns of pixels. Thus, each 3D video processing unit can handle pixel rendering in parallel; that is, each 3D video processing unit can handle the rendering of its own pixel columns. As exemplarily shown in FIG. 5, when the display panel has M columns of pixels in total and 4 parallel 3D video processing units (a group) are provided, the first 3D video processing unit handles the first M/4 columns of pixels, the second handles the second M/4 columns, the third handles the third M/4 columns, and the fourth handles the fourth M/4 columns.
Such an arrangement of 3D video processing units (groups) simplifies the structure and greatly speeds up the processing, and it can be combined with the aforementioned embodiments of reading each generated image separately for row-by-row writing (rendering) to obtain other embodiments. Taking the embodiment shown in FIG. 5 as an example, in progressive scanning, the first to fourth units may process and render the respective M/4 columns of the first row in sequence; for example, after the first 3D video processing unit finishes its processing, while the other video processing units are processing in turn, the first unit has ample time to prepare the corresponding M/4 columns of the next row (e.g., the second row), such as the first M/4 columns of the second row. This can effectively improve rendering computation capability.
In some embodiments, there may be more or fewer 3D video processing units, or the 3D video processing units (groups) may be assigned in other ways and process the rows and columns of pixels in parallel.
In some embodiments, the pixel driving and rendering of the display panel is progressive-scanned. In some embodiments, the above 3D video processing units, each assigned multiple columns of pixels, combined with progressive scanning, effectively reduce the computation bandwidth.
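The column partition described above — M pixel columns split evenly among 4 parallel processing units, assigned in order — can be sketched as follows. The helper name `assign_columns` is illustrative, and the sketch assumes M is divisible by the unit count.

```python
# A sketch of the even column partition among parallel 3D video
# processing units: unit u is assigned columns [u*M/4, (u+1)*M/4).

def assign_columns(total_cols, units):
    per_unit = total_cols // units
    return [range(u * per_unit, (u + 1) * per_unit) for u in range(units)]

spans = assign_columns(1920, 4)
assert list(spans[0])[:2] == [0, 1]  # first unit starts at column 0
assert spans[1][0] == 480            # second unit takes the next block
assert spans[3][-1] == 1919          # fourth unit ends at the last column
```

With progressive scanning, each unit renders its block of one row and then has the rest of the row period to prepare the same block of the next row, which is the pipelining benefit noted above.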
Referring to FIGS. 1A-1C, 2, and 6, in some embodiments, the naked-eye stereoscopic display system may include a processor unit and a multi-viewpoint naked-eye stereoscopic display, the processor unit being communicatively connected with the display. In some embodiments, the naked-eye stereoscopic display system may further include an eye displacement sensor, for example in the form of a dual camera, communicatively connected with the processor unit.
The multi-viewpoint naked-eye stereoscopic display may include a display screen with a display panel and a grating (not labeled), a video signal interface configured to receive a 3D video signal, and a 3D video processing unit. Referring to FIG. 2, in some embodiments, the display may have 12 viewpoints (V1-V12). Optionally, the display may have more or fewer viewpoints. In some embodiments, the display may also include an eye displacement data interface, via which the 3D processing unit may be communicatively connected with the processor unit to read eye displacement data. In some embodiments, the display may also optionally include a timing controller and/or a display driver chip, which may be integrated with the 3D video processing unit or provided independently. In some embodiments, the display may integrate an eye displacement sensor, communicatively connected with the 3D video processing unit.
The display panel may include multiple rows and columns of pixels and define multiple pixel groups. In some embodiments, two illustrative pixel groups PG1,1 and PGx,y are shown; each pixel group corresponds to the multi-viewpoint arrangement and has its own 12 pixels (P1-P12).
Referring to FIGS. 1 and 2 together, the display operation of the display is described. As mentioned above, the display can have 12 viewpoints V1-V12; at each viewpoint (spatial position), the user's eye can see the display of the corresponding pixel in each pixel group of the display panel, and thus see a different rendered picture. The two different pictures seen by the user's two eyes at different viewpoints form a parallax, and a stereoscopic picture is synthesized in the brain.
Referring to FIGS. 1-2 and 6 together, the processing of the 3D video processing unit is described. The 3D video signal S1 received by the video signal interface is an image frame containing two left and right parallax color images. The 3D video processing unit takes the left and right parallax color images of the received 3D video signal S1 as input and generates intermediate image information I1 from them. In some embodiments, on the one hand, a depth image is synthesized from the left and right parallax color images; on the other hand, a color image of the center point is generated by means of one or both of the left and right parallax color images. Then, with this intermediate image information I1, i.e., the depth image information and the center-point color image information, as input, 12 pictures are rendered according to the viewing angles corresponding to the viewpoints V1-V12. Then, the content of each generated image is written into the corresponding pixels of each pixel group seen from the corresponding viewpoint.
Thus, when the user's eyes view at different viewpoints V1-V12, they can see rendered pictures from different angles, producing parallax to form the stereoscopic effect of 3D display.
In some embodiments, the 12 generated pictures may be generated with the same resolution as the corresponding image frame of the received 3D video signal. Optionally, the pixels written correspond essentially point by point to the resolution of the generated image (and thus to the resolution of the image of the received 3D video signal).
In some embodiments, resolution-increasing (e.g., multiplying) processing of the received 3D video signal, such as interpolation, may also be performed, which may be referred to as preprocessing. In some embodiments, 2x row-resolution interpolation may be performed on both the left-eye and right-eye parallax images. In some embodiments, the resolution-increasing (e.g., multiplying) processing may also be combined with the point-to-point rendering described in the embodiments of the present disclosure to obtain new embodiments. Optionally, image conversion processing may be performed before processing, as described above. Herein, the generation of images for the corresponding viewpoints combined with resolution increase may sometimes also be referred to as resolution-increased (e.g., multiplied) generation.
In some embodiments, an additional (pre)processor may be provided to perform the resolution increase (e.g., multiplication) or interpolation, or the resolution increase or interpolation may be performed by one or more 3D video processing units.
In some embodiments, the generated images are generated from the image of the received 3D video signal corresponding to all viewpoints or specific viewpoints; that is, according to the required (all or specific) viewpoints, images are generated and pixels rendered point to point from the image of the original 3D video signal, alleviating the problem of a significant drop in resolution. In some embodiments, the image corresponding to a single viewpoint has the same resolution as the image (frame) of the received 3D video signal, and the pixels corresponding to each viewpoint in each pixel group (or the pixels determined by the pixel-viewpoint correspondence) correspond essentially point by point to the generated image (and thus to the received image). In some embodiments, during point-to-point rendering, the received 3D video signal may be interpolated or its resolution increased in other ways, and images for each viewpoint are then generated corresponding to the interpolated or resolution-increased image, with the pixels corresponding to each viewpoint in each pixel group (or the pixels determined by the pixel-viewpoint correspondence) rendered accordingly.
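The intermediate image information I1 described above — a depth map plus a center-point color image derived from the left/right parallax pair — can be sketched in toy form. Real depth-from-stereo requires disparity estimation; here the disparity map is given directly, and depth is modeled as inversely proportional to disparity, both assumptions made only for illustration.

```python
# A toy sketch of building the intermediate image I1 from a left/right
# parallax pair: a centre colour image (here a simple average) plus a
# depth map (here depth ~ 1/disparity, an assumed, illustrative model).

def make_intermediate(left, right, disparity):
    centre = [[(l + r) / 2 for l, r in zip(lr, rr)]   # centre-point colour
              for lr, rr in zip(left, right)]
    depth = [[1.0 / d if d else float('inf') for d in row]  # toy depth model
             for row in disparity]
    return {"color": centre, "depth": depth}

i1 = make_intermediate([[100, 200]], [[110, 190]], [[2, 4]])
assert i1["color"][0] == [105.0, 195.0]
assert i1["depth"][0] == [0.5, 0.25]
```

The color-plus-depth pair produced here is the same input format as the S1 signal in the color+depth embodiments, which is why the two signal formats can share the downstream per-viewpoint rendering stage.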
Referring to FIGS. 7A and 7B, which show a naked-eye stereoscopic display system and its display according to an embodiment of the present disclosure.
The display panel of the naked-eye stereoscopic display has multiple rows and columns of pixels. To achieve multi-viewpoint display, the rows and columns of pixels are divided into multiple groups corresponding to the multiple viewpoints. In some embodiments, each pixel group includes one row of 12 pixels corresponding to 12 viewpoints. In a conventional configuration, the pixel groups are arranged regularly relative to one another. For example, in pixel groups formed of a single row of multiple columns of pixels, the pixel groups in the same row are arranged in sequence, e.g., the pixel groups PG1,i (i≥1) in the same row are arranged end to end; the pixel groups in the same column are aligned, e.g., the pixel groups PGj,1 (j≥1) in the same column are vertically aligned. In some embodiments, the pixels in a pixel group are arranged in a single row and multiple columns. Optionally, the pixels in the pixel group may have other arrangements, such as a single column and multiple rows, or multiple rows and multiple columns. In a conventional configuration, pixel groups PG of other forms are still arranged regularly relative to one another.
As exemplarily shown in FIG. 7A, the display panel shown has multiple regularly distributed pixel groups PG, including PG1,1 and PGx,y. In some embodiments, the corresponding pixels in the pixel group PG1,1 are displayed correctly at the corresponding viewpoints V1-V12. However, the pixels of the pixel group PGx,y that should theoretically be displayed at the corresponding viewpoints V1-V12 are actually displayed at viewpoints V1'-V12' respectively. (In some embodiments, V1' corresponds to V3.)
Referring to FIG. 7B, in some embodiments, the multi-viewpoint naked-eye stereoscopic display is configured with pixel groups having irregular mutual arrangement positions, i.e., adjusted relative to the regularly arranged pixel groups. Such adjustment is adjusted or determined based on the correspondence between the pixels of the display panel and the viewpoints. In some embodiments, based on the pixel-viewpoint correspondence, the pixel group PG'x,y is adjusted or determined so as to be shifted two pixels toward the left of the figure compared with the regularly arranged pixel group PGx,y. Thus, the pixels in the adjusted, irregularly arranged pixel group PG'x,y are displayed correctly at the corresponding viewpoints V1-V12.
In addition to the horizontal (row) adjustment of pixel groups formed of a single row and multiple columns, adjustments in other directions may also be performed, such as vertical (column) adjustment or combined horizontal-vertical adjustment. In some embodiments, horizontal, vertical, and/or combined adjustment of pixel groups with other pixel arrangements may also be performed.
In some embodiments, the above adjustment of the irregular pixel groups is based on the correspondence between pixels and viewpoints. In some embodiments, the irregular correspondence between the pixels and the viewpoints is determined based on the optical relationship between the pixels and the grating, for example based on the alignment relationship and refraction relationship between the pixels and the grating. Therefore, in some embodiments, the irregular pixel groups may be adjusted or determined based on the optical relationship between the pixels and the grating. In some embodiments, the irregular or actual alignment relationship between pixels and viewpoints may be determined by measurement.
In some embodiments, the optical relationship between pixels and grating may be embodied as optical relationship data between the pixels and the grating.
In some embodiments, the above optical relationship data and/or alignment data may be stored in a memory for the 3D video processing unit to read during processing.
In some embodiments, the memory may store the optical relationship data between the pixels of the display panel and the grating and/or the correspondence between the pixels of the display panel and the viewpoints. By means of the stored data, the naked-eye stereoscopic display of the embodiments of the present disclosure can be realized.
In some embodiments, a data interface communicating with the 3D video processing unit may be provided, so that the 3D video processing unit reads the optical relationship data and/or alignment data via the interface. In some embodiments, the optical relationship data and/or alignment data may be written into the 3D video processing unit or be part of its algorithm.
Referring to FIGS. 1-3, 6, and 7A-7B together, the processing of the 3D video processing unit, and thus the display operation of the display, is described. The 3D video signal S1 received by the video signal interface is an image frame containing two left and right parallax color images. The 3D video processing unit takes the left and right parallax color images of the received 3D video signal S1 as input and generates intermediate image information I1 from them. In some embodiments, on the one hand, a depth image is synthesized from the left and right parallax color images; on the other hand, a color image of the center point is generated by means of one or both of the left and right parallax color images. Then, with this intermediate image information I1, i.e., the depth image information and the center-point color image information, as input, 12 pictures are rendered according to the viewing angles corresponding to the viewpoints V1-V12. Then, the content of each generated image is written into the corresponding pixels of each pixel group seen from the corresponding viewpoint, wherein each pixel group is an irregularly arranged pixel group adjusted or determined based on the optical relationship or the pixel-viewpoint alignment relationship. Optionally, the pixel groups include the regularly arranged PG1,1 and the adjusted PG'x,y.
Thus, when the user's eyes view at different viewpoints V1...V12, they can see rendered pictures from different angles, producing parallax to form the stereoscopic effect of 3D display.
The embodiments shown in FIGS. 7A-7B describe adjusting the pixel groups based on the optical relationship and/or the pixel-viewpoint alignment relationship to support the 3D video processing unit in correctly rendering the corresponding pixels in the pixel groups. Optionally, regardless of whether pixel groups and their adjustment are defined, the optical relationship and/or pixel-viewpoint alignment relationship may be used directly or indirectly to determine the pixels that display correctly at the corresponding viewpoints, and those pixels may then be rendered.
Compared with conventionally improving precision to overcome alignment errors, installation errors, and material errors, an efficient and highly reliable naked-eye stereoscopic display can be provided by adjusting the arrangement of the pixel groups.
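The FIG. 7B adjustment — a group observed to land two viewpoints away from its theoretical position is shifted two pixels to compensate — amounts to a simple offset correction on the group's starting column. The following sketch assumes the offset comes from measurement or from the optical-relationship data; the helper name is illustrative.

```python
# A sketch of the pixel-group adjustment of FIG. 7B: if viewpoint V1 of a
# regularly placed group is observed at V3 (an offset of +2 viewpoints),
# shifting the group's start 2 pixels to the left realigns it.

def adjust_group_start(regular_start, measured_offset):
    # Shifting left by `measured_offset` pixels compensates an observed
    # shift of the same magnitude toward higher-numbered viewpoints.
    return regular_start - measured_offset

# Regular group PGx,y starts at column 36; V1 observed at V3 (+2), so
# the adjusted group PG'x,y starts two pixels to the left.
assert adjust_group_start(36, 2) == 34
assert adjust_group_start(36, 0) == 36   # well-aligned groups stay put
```

Storing one such offset per group (or deriving it from the grating alignment data) is cheap compared with tightening the manufacturing tolerances that cause the misalignment.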
Referring to FIG. 8, which shows a naked-eye stereoscopic display system and its display according to an embodiment of the present disclosure. In some embodiments, the display panel of the naked-eye stereoscopic display has multiple rows and columns of pixels. In some embodiments, the display stores, or can read, the data of the viewpoint corresponding to each pixel of the display panel. For example, as exemplarily shown in FIG. 8, pixel P1,b1 corresponds to viewpoint V8, pixel Pam,bn corresponds to viewpoint V6, and pixel Paz,bz corresponds to viewpoint V12.
FIG. 8 shows the correspondence between each pixel and the viewpoints. In some embodiments, optical relationship data that can be used to determine the pixel-viewpoint correspondence may be used, such as grating-pixel alignment data and/or grating refraction data, or other indirect data. In some embodiments, the above optical relationship data and/or alignment data may be stored in a memory for the 3D video processing unit to read during processing. In some embodiments, a data interface communicating with the 3D video processing unit may be provided so that the unit reads the optical relationship data and/or alignment data via the interface. In some embodiments, the optical relationship data and/or alignment data may be written into the 3D video processing unit or be part of its algorithm. In some embodiments, the pixel-viewpoint correspondence may take the form of a lookup table.
Referring to FIGS. 1-3, 6, and 8 together, the processing of the 3D video processing unit, and thus the display operation of the display, is described. The 3D video signal S1 received by the video signal interface is an image frame containing two left and right parallax color images. The 3D video processing unit takes the left and right parallax color images of the received 3D video signal S1 as input and generates intermediate image information I1 from them. In some embodiments, on the one hand, a depth image is synthesized from the left and right parallax color images; on the other hand, a color image of the center point is generated by means of one or both of the left and right parallax color images. Then, with this intermediate image information I1, i.e., the depth image information and the center-point color image information, as input, 12 pictures are rendered according to the viewing angles corresponding to the viewpoints V1-V12. Then, the content of each generated image is written, according to the pixel-viewpoint correspondence, into the pixels seen from the corresponding viewpoint.
In some embodiments, the correspondence between pixels and viewpoints may be embodied as correspondence data between the pixels and the viewpoints.
Thus, when the user's eyes view at different viewpoints V1-V12, they can see rendered pictures from different angles, producing parallax to form the stereoscopic effect of 3D display.
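The lookup-table form of the pixel-viewpoint correspondence just mentioned can be sketched as follows: each panel pixel carries the viewpoint it actually serves, and the renderer writes that viewpoint's image value into it. The table contents and names below are illustrative, not measured values from the patent.

```python
# A sketch of rendering through a pixel-to-viewpoint lookup table, as in
# FIG. 8: the table maps (row, col) -> 0-based viewpoint index, and each
# pixel is filled from the image of the viewpoint it actually serves.

def render_by_lookup(view_images, pixel_to_view):
    # view_images[v] maps (row, col) -> the value viewpoint v shows there.
    return {pix: view_images[v][pix] for pix, v in pixel_to_view.items()}

views = [{(0, 0): f"V{v+1}@00", (0, 1): f"V{v+1}@01"} for v in range(12)]
table = {(0, 0): 7, (0, 1): 5}          # e.g. P(0,0) -> V8, P(0,1) -> V6
out = render_by_lookup(views, table)
assert out[(0, 0)] == "V8@00"
assert out[(0, 1)] == "V6@01"
```

Because the table absorbs all alignment irregularity, this formulation works with or without explicit pixel groups, matching the remark above that the correspondence can be used directly.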
Referring to FIG. 9, in some embodiments, the grating of the display is a lenticular prism grating. In some embodiments, features such as the pixel group adjustment shown in FIGS. 7A-7B or the pixel-viewpoint alignment relationship shown in FIG. 8 may be used.
In FIG. 9, each obliquely arranged lenticular prism roughly covers 12 pixels per row. In some embodiments, the display likewise has 12 viewpoints, and the pixel groups of the display panel have a single row of multiple columns of pixels corresponding to the 12 viewpoints. Referring to FIGS. 9 and 7A-7B together, in a display provided with a lenticular prism grating, the pixels Pa1,b1-Pa1,b4 in the regularly arranged pixel group at the top of the lenticular prism may correspond correctly to viewpoints V1-V4. However, the four pixels in the regularly arranged pixel group at the bottom of the lenticular prism are not aligned with the correct viewpoints V1-V4 but with viewpoints V1'-V4'. For this reason, the pixel group may be adjusted by shifting it one pixel to the left in the figure so as to display at the correct viewpoints V1-V4; the remaining viewpoints in the pixel group may likewise be shifted one pixel to the left. For example, the pixel shown in FIG. 9 that theoretically corresponds to viewpoint V4' corresponds to viewpoint V5.
Referring to FIGS. 9 and 8 together, the embodiment shown in FIG. 9 is likewise applicable to using optical (deviation) data and/or the irregular pixel-viewpoint alignment relationship for the pixels in the display panel. For example, the following pixel-viewpoint correspondence data may be stored, recorded, or read: the four pixels at the bottom of the lenticular prism correspond to viewpoints V2, V3, V4, and V5 respectively.
Without wishing to be bound by theory, the misalignment of pixel groups or pixels may be caused by the alignment deviation between the lenticular prisms and the pixels and/or by the refraction state of the lenticular prisms. FIG. 9 exemplarily shows, with dashed and solid lines, the theoretical alignment position and the actual alignment deviation on the left side of the prism.
Referring to FIGS. 9 and 10 together, the lenticular prisms may be arranged obliquely to the pixels, which can eliminate moire patterns. Consequently, there are shared pixels located between the boundaries of the lenticular prisms (such as the pixel corresponding to viewpoint V1 mentioned above). In some configurations, each of these shared pixels is assigned its corresponding viewpoint. However, in some embodiments, pixel group fine-tuning based on these shared pixels, or a dynamic pixel-viewpoint correspondence, or so-called viewpoint-shared pixels, may be provided.
Referring to FIG. 10, for the pixel row Pam,bn-Pam,bn+i (i≥1), a shared pixel conventionally corresponding to viewpoint V12 may, for example, be rendered according to the image of viewpoint V1 when viewpoint V12 is not being rendered.
The fine-tuning or dynamic relationship of the embodiment shown in FIG. 10 may be applied to other types of gratings and may also be combined with the embodiments of obtaining eye displacement data to obtain other embodiments.
Referring to FIG. 11, a partial schematic structural diagram of a parallax barrier display is shown. The parallax barrier grating 100 includes light-blocking portions 102 and light-transmitting portions 104. In the parallax barrier grating embodiment shown in FIG. 11, features such as the pixel group adjustment shown in FIGS. 7A-7B or the pixel-viewpoint alignment relationship shown in FIG. 8 may be used.
Without wishing to be bound by theory, the misalignment of pixel groups or pixels may be caused by the alignment deviation between the light-transmitting portions 104 of the parallax barrier grating and the pixels.
In some embodiments, the parallax barrier grating 100 is a front grating. Optionally, a rear grating may be provided, or front and rear gratings may be provided simultaneously.
Referring to FIGS. 1B-1C and 12 together, an embodiment of the present disclosure provides a naked-eye stereoscopic display system, which may include a processor unit and a multi-viewpoint naked-eye stereoscopic display, the processor unit being communicatively connected with the display. In some embodiments, the naked-eye stereoscopic display system may further include an eye displacement sensor, for example in the form of a dual camera, communicatively connected with the processor unit. In some embodiments, the eye displacement sensor may be provided in the display, or the system or display may have a transmission interface that can receive eye displacement data.
Continuing to refer to FIGS. 1B-1C, the multi-viewpoint naked-eye stereoscopic display may include a display screen with a display panel and a grating (not labeled), a video signal interface configured to receive a 3D video signal, and a 3D video processing unit. Referring to FIG. 2, in some embodiments, the display may have 12 viewpoints (V1-V12). Optionally, the display may have more or fewer viewpoints. In some embodiments, the display may also optionally include a timing controller and/or a display driver chip, which may be integrated with the 3D video processing unit or provided independently. In some embodiments, the display may integrate an eye displacement sensor, communicatively connected with the 3D video processing unit.
Continuing to refer to FIGS. 1B-1C, the display panel may include multiple rows and columns of pixels and define multiple pixel groups. In some embodiments, two illustrative pixel groups PG1,1 and PGx,y are shown; each pixel group corresponds to the multi-viewpoint arrangement and has its own 12 pixels (P1-P12). In some embodiments, the pixels in the pixel group are arranged in a single row and multiple columns. Optionally, the pixels in the pixel group may have other arrangements, such as a single column and multiple rows, or multiple rows and multiple columns. Optionally, the aforementioned PGx,y may represent the pixel group in the Xth row and Yth column.
Referring to FIGS. 1B-1C and 12 together, the display operation of the display is described. As mentioned above, the display can have 12 viewpoints V1-V12; at each viewpoint (spatial position), the user's eye can see the display of the corresponding pixel in each pixel group of the display panel, and thus see a different rendered picture. The two different pictures seen by the user's two eyes at different viewpoints form a parallax, and a stereoscopic picture is synthesized in the brain.
In FIG. 12, one or more 3D video processing units are configured to generate images for display and render pixels in such a way that multiple images corresponding to specific viewpoints are generated based on the image of the 3D video signal, and the pixels corresponding to the specific viewpoints in each pixel group are rendered according to the generated images. In some embodiments, the specific viewpoints are determined based on eye displacement data. When the user's eyes (left eye and right eye) are detected at specific viewpoints (spatial positions), images for the corresponding viewpoints are generated, and the pixels in the pixel groups corresponding to those viewpoints are rendered. In FIG. 12, the first eye (e.g., the right eye) is detected at viewpoint V4, and the second eye (e.g., the left eye) at viewpoint V8.
An embodiment of the present disclosure provides a display method of a multi-viewpoint naked-eye stereoscopic display, including: defining multiple pixel groups, each composed of at least 3 pixels and corresponding to the multi-viewpoint arrangement; receiving a 3D video signal; generating, based on the image of the received 3D video signal, multiple images corresponding to specific viewpoints (such as viewpoints V4 and V8); and rendering the corresponding pixels in each pixel group according to the generated images. In some embodiments, image generation and pixel rendering are performed for the specific viewpoints (V4 and V8).
Referring to FIGS. 1B-1C and 12 together, the processing of the 3D video processing unit is described. The 3D video signal S1 received by the video signal interface is an image frame containing both a color image and a depth image. Accordingly, the 3D video processing unit takes the image information and depth information of the received 3D video signal S1 as input and, based on the eye displacement data, renders 2 pictures for the viewpoints V4 and V8 where the eyes are located, according to the corresponding viewing angles. Then, the content of the generated images is written into the pixels (such as the 4th and 8th pixels) in each pixel group (such as PG1,1 and PGx,y) seen from the corresponding viewpoints.
Thus, the user's eyes located at viewpoints V4 and V8 can see rendered pictures from different angles, producing parallax to form the stereoscopic effect of 3D display.
Referring to FIGS. 1B-1C and 12 together, in some embodiments, the generated pictures corresponding to viewpoints V4 and V8 may be generated with the same resolution as the corresponding image frame of the received 3D video signal. In some embodiments, the pixels written correspond essentially point by point to the resolution of the generated image (and thus to the resolution of the image of the received 3D video signal).
In some embodiments, resolution-increasing (e.g., multiplying) processing of the received 3D video signal, such as interpolation, may also be performed, which may be referred to as preprocessing. In some embodiments, 2x row-resolution interpolation may be performed on both the color image and the depth image. In some embodiments, the resolution-increasing (e.g., multiplying) processing may be combined with the point-to-point rendering described in the embodiments of the present disclosure to obtain new embodiments, for example, obtaining generated pictures for the corresponding viewpoints V4 and V8 whose resolution corresponds to the 2x-interpolated image. The generation of images for the corresponding viewpoints combined with resolution increase may sometimes also be referred to as resolution-increased (e.g., multiplied) generation.
In some embodiments, an additional (pre)processor may be provided to perform the resolution increase (e.g., multiplication) or interpolation, or the resolution increase or interpolation may be performed by one or more 3D video processing units.
In addition, the embodiments of rendering specific viewpoints (rather than all viewpoints) using eye displacement data may be combined with the foregoing embodiments, or substitute some of their features, to obtain new embodiments. For example, this embodiment may be combined with features related to the optical relationship data / pixel-viewpoint correspondence data to obtain new embodiments. Also, this embodiment may be adapted so that the pixels need not be explicitly grouped, to obtain new embodiments.
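The eye-tracked rendering just described — only the viewpoints where the two eyes are detected are generated and written, while the other pixels of each group are left untouched — can be sketched as follows. The function name and data shapes are illustrative assumptions.

```python
# A sketch of eye-tracked specific-viewpoint rendering: with eyes
# detected at V4 and V8 (0-based indices 3 and 7), only those two of the
# 12 pixels in each group are written; the other 10 are not rendered,
# which is the computational (and, for self-emissive panels, power)
# saving this embodiment targets.

VIEWPOINTS = 12

def render_for_eyes(group, eye_views, images):
    # `group` is a 12-pixel list; write only the eye viewpoints into it.
    for v in eye_views:
        group[v] = images[v]
    return group

group = [None] * VIEWPOINTS
images = {3: "right-eye picture", 7: "left-eye picture"}
render_for_eyes(group, [3, 7], images)
assert group[3] == "right-eye picture"
assert group[7] == "left-eye picture"
assert group.count(None) == VIEWPOINTS - 2   # 10 pixels not rendered
```

On a MICRO LED panel the `None` pixels would simply not emit light; on an LCD they may stay white or retain the previous frame, as noted below.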
Continuing with the embodiment shown in FIG. 13, which is generally similar to the embodiment shown in FIG. 12. The main difference is that the specific viewpoints also include the viewpoints adjacent to those where the eyes are located. For example, in the embodiment shown in FIG. 13, the specific viewpoints for which images are to be generated may also include viewpoints V3 and V5, as well as viewpoints V7 and V9, and the pixels corresponding to these viewpoints in the pixel groups are then also rendered. In some embodiments, only the adjacent viewpoint on one side may be taken as a specific viewpoint.
In some embodiments, for example, the pixels described in FIG. 12 or 13 may be rendered while the remaining pixels are not rendered. For a liquid crystal display, the non-rendered pixels may be left white or retain the color of the previous image frame. This minimizes the computational load as far as possible.
Referring to FIGS. 12 and 13, in some embodiments, the display includes a self-emissive display panel, such as a MICRO LED display panel. In some embodiments, the self-emissive display panel, such as the MICRO LED display panel, is configured so that non-rendered pixels do not emit light. This can greatly save the power consumed by the display screen.
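The adjacent-viewpoint case of the FIG. 14 embodiment described below — an eye detected between two viewpoints causes both neighbours to be rendered — can be sketched by modelling the eye position as a continuous viewpoint coordinate (an assumption for this demo; the sensor actually reports spatial eye positions).

```python
# A sketch of viewpoint selection when an eye sits between viewpoints:
# exactly on a viewpoint -> render that one; between two -> render both.

import math

def viewpoints_for_eye(position):
    """Return the viewpoint index/indices to render for one eye.

    `position` is a continuous viewpoint coordinate, e.g. 4.5 means the
    eye is midway between V4 and V5.
    """
    lo, hi = math.floor(position), math.ceil(position)
    return [lo] if lo == hi else [lo, hi]

assert viewpoints_for_eye(4.0) == [4]      # eye exactly at V4
assert viewpoints_for_eye(4.5) == [4, 5]   # eye between V4 and V5
assert viewpoints_for_eye(8.3) == [8, 9]   # eye between V8 and V9
```

Combining the two eyes' selections gives the four viewpoints (V4, V5, V8, V9) rendered in the FIG. 14 example.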
结合参考图1B-1C和图14,本公开实施例提供了一种裸眼立体显示系统,其可包括处理器单元和多视点裸眼立体显示器,处理器单元与多视点裸眼立体显示器通信连接。在一些实施例中,该裸眼立体显示系统还可包括例如呈双摄像头形式的眼部位移传感器,其与处理器单元通信连接。在一些实施例中,该眼部位移传感器可以设置在显示器中或者该系统或显示器具有可接收眼部位移数据的传输接口。
继续参考图1,该多视点裸眼立体显示器可包括具有显示面板和光栅(未标识)的显示屏、被配置为接收3D视频信号的视频信号接口和3D视频处理单元。参考图2,在一些实施例中,该显示器可具有12个视点(V1-V12)。可选地,该显示器可以具有更多或更少个视点。在一些实施例中,显示器还可以选择性地包括时序控制器和/或显示驱动芯片,其可与3D视频处理单元集成设置或独立设置。在一些实施例中,显示器可以集成眼部位移传感器,其与3D视频处理单元可通信连接。
继续参考图1,显示面板可包括多行多列像素并且限定出多个像素组。在一些实施例中,示出了两个示意性的像素组PG1,1和PGx,y,各像素组对应于多视点设置,分别具有各自的12个像素(P1-P12)。在一些实施例中,该像素组中的像素是以单行多列形式排布的。可选地,该像素组中的像素可以具有其他的排布形式,例如:单列多行或多行多列等。可选地,前述PGx,y可表示在第X行、第Y列的像素组。
结合参考图1和图14,描述了显示器的显示。如前所述,该显示器可以具有12个视点V1-V12,用户的眼睛在每个视点(空间位置)可看到显示面板中各像素组中相应像素点的显示,并进而看到不同的渲染的画面。用户的双眼在不同的视点看到的两个不同画面形成视差,在大脑中合成立体的画面
图14中,一个或多个3D视频处理单元被配置为如此地生成用于显示的图像和渲染像素,即,基于3D视频信号的图像生成对应于特定视点的多个图像并依据所生成的多个图像渲染各像素组中与特定视点对应的像素。在一些实施例中,该特定视点是基于眼部位移数据确定的。当检测到用户的眼部(左眼和右眼)处于相邻的视点处,则生成针对相邻的视点的图像,并渲染与相应的视点相对应的像素组中的像素。图12中,检测到第一眼部(如右眼)位于视点V4和V5之间,而第二眼部(如左眼)位于视点V8和V9之间。由此,可以生成对应于视点V4、V5和V8、V9的四个图像,并渲染像素组中这四个视点对 应的像素。
本公开实施例提供了一种多视点裸眼立体显示器的显示方法,包括:定义多个像素组,各像素组由至少3个像素构成且对应于多视点设置;接收3D视频信号;基于接收到的3D视频信号的图像生成对应于特定视点(如视点V4、V5和V8、V9)的多个图像;依据所生成的多个图像渲染各像素组中对应的像素。
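上述显示方法的各步骤可以串联为如下Python示意流程。其中的函数名和"渲染"方式均为假设，仅用于体现步骤顺序，并非真实的视角画面合成算法：

```python
def display_method(frame, specific_viewpoints, num_viewpoints=12, num_groups=4):
    """按显示方法的步骤组织的示意流程(假设实现)。

    1) 定义多个像素组；2) 接收3D视频信号帧；
    3) 针对特定视点生成多幅图像；4) 渲染各像素组中对应的像素。
    frame: 示意的帧标识；specific_viewpoints: 特定视点序号集合。
    """
    # 步骤1: 定义像素组，每组含 num_viewpoints 个像素，初始未渲染(None)
    groups = [[None] * num_viewpoints for _ in range(num_groups)]

    def render_view(v):
        # 步骤3(示意): 真实实现会依据色彩+景深按视点角度合成画面
        return [frame * 100 + v] * num_groups

    images = {v: render_view(v) for v in specific_viewpoints}

    # 步骤4: 将各视点图像写入各像素组中对应序号的像素
    for gi, group in enumerate(groups):
        for v, img in images.items():
            group[v - 1] = img[gi]
    return groups
```

例如特定视点为V4、V5、V8、V9时，每个像素组中仅这四个像素被写入。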
结合参考图1和图14，描述了3D视频处理单元的处理。视频信号接口接收到的3D视频信号S1为含色彩图像和景深图像两幅内容的图像帧。由此，该3D视频处理单元将所接收到的3D视频信号S1的图像信息和景深信息作为输入，基于眼部位移数据，将眼部所在的视点V4、V5、V8和V9按对应的观看角度渲染出4幅画面。然后，将所生成的相应的图像的内容写入到各像素组(如PG1,1和PGx,y)中相应视点对应看到的像素(如第4、5个和第8、9个像素)中。
从而位于视点V4、V5之间和V8、V9之间的用户的眼睛能看到不同角度的渲染画面,产生视差,以形成3D显示的立体效果。
利用眼部位移数据渲染特定视点(非全部视点)的实施例可以与前述的实施例相结合、或者被一些特征所替代以获得新的实施例。例如:该实施例可以与光学关系数据/像素-视点对应关系数据相关的特征相结合获得新的实施例。以及,该实施例可以改造而无需明确地对像素分组以获得新的实施例。
结合参考图1B-1C和图14，本公开实施例提供了一种裸眼立体显示系统，其可包括处理器单元和多视点裸眼立体显示器。在一些实施例中，视频信号接口接收到的3D视频信号S1为含左右视差色彩图像内容的图像帧。由此，该3D视频处理单元将所接收到的3D视频信号S1的含左右视差色彩图像内容的图像帧作为输入。基于眼部位移数据，按照眼部位移数据检测到的眼部，对应地生成左眼或右眼视差色彩图像。例如，针对右眼所在的视点V4和V5，基于3D视频信号S1的右视差色彩图像内容渲染出两幅画面。针对左眼所在的视点V8和V9，基于3D视频信号S1的左视差色彩图像内容渲染出两幅画面。然后，将所生成的相应的图像的内容写入到各像素组(如PG1,1和PGx,y)中相应视点对应看到的像素(如第4、5个和第8、9个像素)中。
从而位于视点V4、V5之间和V8、V9之间的用户的眼睛能看到不同角度的渲染画面,产生视差,以形成3D显示的立体效果。
结合参考图1和图15，本公开实施例提供了一种裸眼立体显示系统，其可包括处理器单元和多视点裸眼立体显示器，处理器单元与多视点裸眼立体显示器通信连接。在一些实施例中，该裸眼立体显示系统还可包括例如呈双摄像头形式的眼部位移传感器，其与处理器单元通信连接。在一些实施例中，该眼部位移传感器可以设置在显示器中或者该系统或显示器具有可接收眼部位移数据的传输接口。
继续参考图1,该多视点裸眼立体显示器可包括具有显示面板和光栅(未标识)的显示屏、被配置为接收3D视频信号的视频信号接口和3D视频处理单元。参考图2,在一些实施例中,该显示器可具有12个视点(V1-V12)。可选地,该显示器可以具有更多或更少个视点。在一些实施例中,显示器还可以选择性地包括时序控制器和/或显示驱动芯片,其可与3D视频处理单元集成设置或独立设置。在一些实施例中,显示器可以集成眼部位移传感器,其与3D视频处理单元可通信连接。
继续参考图1,显示面板可包括多行多列像素并且限定出多个像素组。在一些实施例中,示出了两个示意性的像素组PG1,1和PGx,y,各像素组对应于多视点设置,分别具有各自的12个像素(P1-P12)。在一些实施例中,该像素组中的像素是以单行多列形式排布的。可选地,该像素组中的像素可以具有其他的排布形式,例如:单列多行或多行多列等。可选地,前述PGx,y可表示在第X行、第Y列的像素组。
结合参考图1和图15，描述了显示器的显示。如前所述，该显示器可以具有12个视点V1-V12，用户的眼睛在每个视点(空间位置)可看到显示面板中各像素组中相应像素点的显示，并进而看到不同的渲染的画面。用户的双眼在不同的视点看到的两个不同画面形成视差，在大脑中合成立体的画面。
图15中,一个或多个3D视频处理单元被配置为如此地生成用于显示的图像和渲染像素,即,基于3D视频信号的图像生成对应于特定视点的多个图像并依据所生成的多个图像渲染各像素组中与特定视点对应的像素。在一些实施例中,该特定视点是基于眼部位移数据确定的。当检测到用户的眼部(左眼和右眼)处于特定视点(空间位置)处,则生成针对相应视点的图像,并渲染与相应的视点相对应的像素组中的像素。图15中,检测到第一眼部(如右眼Er)位于视点V4处,而第二眼部(如左眼El)位于视点V8处。
继续参考图15,当眼部位移数据表明,用户的眼部发生运动时,则可基于3D视频信号的下一图像(帧),生成对应于新的特定视点的多个图像并依据所生成的多个图像渲染各像素组中与特定视点对应的像素。图15中,当前检测到第一眼部(如右眼Er)移动至视点V6处,而第二眼部(如左眼El)位于视点V10处。在一些实施例中,还可以基于显示器所具有的时序控制器来基于眼部位移数据,改变特定视点。
本公开实施例提供了一种多视点裸眼立体显示器的显示方法，包括：定义多个像素组，各像素组由至少3个像素构成且对应于多视点设置；接收3D视频信号；基于接收到的3D视频信号的图像生成对应于特定视点的多个图像；依据所生成的多个图像渲染各像素组中对应的像素。在一些实施例中，还可以基于眼部位移数据，调整特定视点，并基于新的特定视点生成图像和渲染像素。在一些实施例中，基于眼部位移数据，对应于当前的特定视点V4、V8或V6、V10来进行图像生成和像素渲染。
结合参考图1和图15,描述了3D视频处理单元的处理。视频信号接口接收到的3D视频信号S1为含左右视差色彩图像内容的图像帧。基于眼部位移数据,按照眼部位移数据检测到的眼部,对应地生成左眼或右眼视差色彩图像。
例如,在第一时间,针对右眼所在的视点V4,基于3D视频信号S1的右视差色彩图像内容渲染出一幅画面。针对左眼所在的视点V8,基于3D视频信号S1的左视差色彩图像内容渲染出一幅画面。然后,将所生成的相应的图像的内容写入到各像素组(如PG1,1和PGx,y)中相应视点对应看到的像素(如第4个和第8个像素)中。
在第二时间,针对右眼所在的视点V6,基于3D视频信号S1的右视差色彩图像内容渲染出一幅画面。针对左眼所在的视点V10,基于3D视频信号S1的左视差色彩图像内容渲染出一幅画面。然后,将所生成的相应的图像的内容写入到各像素组(如PG1,1和PGx,y)中相应视点对应看到的像素(如第6个和第10个像素)中。
从而处于运动状态的用户的眼睛仍能看到不同角度的渲染画面,产生视差,以形成3D显示的立体效果。
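眼部运动时逐帧更新特定视点的逻辑可以示意如下。这是Python草图，其中眼部位移数据的数据结构为假设：

```python
def viewpoints_for_eyes(eyes):
    """根据眼部位移数据(双眼当前所处的视点序号)确定该帧的特定视点。

    eyes: {'right': 右眼视点序号, 'left': 左眼视点序号}。
    """
    return {eyes['right'], eyes['left']}


def track_and_render(per_frame_eyes):
    """逐帧根据最新的眼部位移数据切换要渲染的特定视点(示意)。

    per_frame_eyes: 各帧的眼部位移数据列表；返回各帧的特定视点列表。
    """
    return [sorted(viewpoints_for_eyes(e)) for e in per_frame_eyes]
```

例如第一时间双眼位于V4、V8，第二时间移动至V6、V10，则前后两帧分别渲染这两组视点对应的像素。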
结合参考图1和图16,本公开实施例提供了一种裸眼立体显示系统,其可包括处理器单元和多视点裸眼立体显示器,处理器单元与多视点裸眼立体显示器通信连接。在一些实施例中,该裸眼立体显示系统还可包括例如呈双摄像头形式的眼部位移传感器,其与处理器单元通信连接。在一些实施例中,该眼部位移传感器可以设置在显示器中或者该系统或显示器具有可接收眼部位移数据的传输接口。
继续参考图1,该多视点裸眼立体显示器可包括具有显示面板和光栅(未标识)的显示屏、被配置为接收3D视频信号的视频信号接口和3D视频处理单元。参考图2,在一些实施例中,该显示器可具有12个视点(V1-V12)。可选地,该显示器可以具有更多或更少个视点。在一些实施例中,显示器还可以选择性地包括时序控制器和/或显示驱动芯片,其可与3D视频处理单元集成设置或独立设置。在一些实施例中,显示器可以集成眼部位移传感器,其与3D视频处理单元可通信连接。
继续参考图1，显示面板可包括多行多列像素并且限定出多个像素组。在一些实施例中，示出了两个示意性的像素组PG1,1和PGx,y，各像素组对应于多视点设置，分别具有各自的12个像素(P1-P12)。在一些实施例中，该像素组中的像素是以单行多列形式排布的。可选地，该像素组中的像素可以具有其他的排布形式，如单列多行或多行多列等。可选地，前述PGx,y可表示在第X行、第Y列的像素组。
结合参考图1和图16，描述了显示器的显示。如前所述，该显示器可以具有12个视点V1-V12，用户的眼睛在每个视点(空间位置)可看到显示面板中各像素组中相应像素点的显示，并进而看到不同的渲染的画面。用户的双眼在不同的视点看到的两个不同画面形成视差，在大脑中合成立体的画面。
图16中,一个或多个3D视频处理单元被配置为如此地生成用于显示的图像和渲染像素,即,基于3D视频信号的图像生成对应于特定视点的多个图像并依据所生成的多个图像渲染各像素组中与特定视点对应的像素。在一些实施例中,用户有多位,如两位。基于不同用户眼部所在的位置,针对对应视点渲染图像并写入像素组中对应的像素。
本公开实施例提供了一种多视点裸眼立体显示器的显示方法,包括:定义多个像素组,各像素组由至少3个像素构成且对应于多视点设置;接收3D视频信号;基于接收到的3D视频信号的图像生成对应于特定视点(如第一用户左右眼对应的视点V4、V6和第二用户左右眼对应的视点V8、V10)的多个图像;依据所生成的多个图像渲染各像素组中对应的像素。
结合参考图1和图16，描述了3D视频处理单元的处理。视频信号接口接收到的3D视频信号S1为含色彩图像和景深图像两幅内容的图像帧。由此，该3D视频处理单元将所接收到的3D视频信号S1的图像信息和景深信息作为输入，基于眼部位移数据，将第一用户左右眼对应的视点V4、V6和第二用户左右眼对应的视点V8、V10按对应的观看角度渲染出4幅画面。然后，将所生成的相应的图像的内容写入到各像素组(如PG1,1和PGx,y)中相应视点对应看到的像素(如第4、6个和第8、10个像素)中。
从而每个人可以观看对应自己观察角度的渲染图像,产生视差,以形成3D显示的立体效果。
结合参考图1和图16，本公开实施例提供了一种裸眼立体显示系统，其可包括处理器单元和多视点裸眼立体显示器。在一些实施例中，视频信号接口接收到的3D视频信号S1为含左右视差色彩图像内容的图像帧。由此，该3D视频处理单元将所接收到的3D视频信号S1的含左右视差色彩图像内容的图像帧作为输入。基于眼部位移数据，按照眼部位移数据检测到的眼部，对应地生成左眼或右眼视差色彩图像。例如，针对第一用户右眼所在的视点V4和第二用户右眼所在的视点V8，基于3D视频信号S1的右视差色彩图像内容渲染出两幅画面。针对第一用户左眼所在的视点V6和第二用户左眼所在的视点V10，基于3D视频信号S1的左视差色彩图像内容渲染出两幅画面。然后，将所生成的相应的图像的内容写入到各像素组(如PG1,1和PGx,y)中相应视点对应看到的像素(如第4、6个和第8、10个像素)中。
从而每个人可以观看对应自己观察角度的渲染图像,产生视差,以形成3D显示的立体效果。
结合参考图16和图17,本公开实施例提供了一种裸眼立体显示系统,其可包括处理器单元和多视点裸眼立体显示器。该显示器被配置为可接收多路信号输入,例如:包含S1(左右视差图像)和S2(色彩图像和景深图像)在内的两路信号输入。
继续参考图16，例如第一用户(User1)希望看到左右视差信号S1，而第二用户(User2)希望看到色彩和景深信号S2。由此，3D视频处理单元根据第一用户的左右眼(Er和El)所在的位置(视点V4和V6)和第二用户的左右眼(Er和El)所在的位置(视点V8和V10)，分别生成对应于视点的渲染图像并将所生成的相应的图像的内容写入到各像素组(如PG1,1和PGx,y)中相应视点对应看到的像素(如第4、6个和第8、10个像素)中。
从而每个人可以观看对应自己观察角度的渲染图像,产生视差,以形成3D显示的立体效果,且不同的使用者可以观看不同的视频内容。
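多路信号输入、按用户分配内容的过程可以示意如下。这是Python草图，其中用户偏好与眼部视点映射的数据结构均为假设：

```python
def route_streams(user_prefs, eye_viewpoints):
    """按用户偏好的信号源，把内容分配到该用户双眼所在的视点(示意)。

    user_prefs: {用户标识: 信号名}，如 {'User1': 'S1', 'User2': 'S2'}。
    eye_viewpoints: {用户标识: (右眼视点序号, 左眼视点序号)}。
    返回 {视点序号: (信号名, 'right'/'left')} 的渲染分配表。
    """
    plan = {}
    for user, signal in user_prefs.items():
        r, l = eye_viewpoints[user]
        plan[r] = (signal, 'right')  # 右眼视点显示该信号的右眼画面
        plan[l] = (signal, 'left')   # 左眼视点显示该信号的左眼画面
    return plan
```

由此，不同用户的双眼视点上可以渲染来自不同路信号的内容。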
本公开实施例可以具有不同的实现方案。例如针对具有12个视点的裸眼3D显示器，视频信号接口接收1920x1200分辨率的MIPI信号，信号进入时序控制器后转为mini-LVDS信号。传统的处理方式为分别给到多个显示驱动芯片用于屏的信号输出。对此，在一些实施例中，在显示驱动芯片之前设置呈FPGA、ASIC形式的3D视频处理单元(或单元组)。
显示屏的分辨率为1920x12x1200。对接口接收到的信号进行处理，以完成针对各视点的分辨率扩展，即相对于接收视频的12倍的分辨率扩展。
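该分辨率扩展关系可以用一个简单的Python算式表达(仅为算术示意)：

```python
def expanded_resolution(base_w, base_h, num_viewpoints):
    """各视点分辨率扩展后的显示屏分辨率(示意)。

    水平方向为接收视频宽度的视点数倍，垂直方向不变。
    """
    return base_w * num_viewpoints, base_h
```

例如 expanded_resolution(1920, 1200, 12) 得到 (23040, 1200)，即1920x12x1200。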
在一些实施例中,视频信号接口可以有多重实现形式,包括但不限于,如版本为1.2的高清数字显示接口(Display Port,DP1.2)、版本为2.0的高清晰度多媒体接口(High Definition Multimedia Interface,HDMI 2.0)、高清数字显示接口(V-by-One)等或无线接口,如WiFi、蓝牙、蜂窝网络等。
在一些实施例中,显示器或显示系统以及显示方法可以结合其他图像处理技术:如对视频信号进行颜色调整,包括颜色空间旋转(Color Tint)调整和颜色增益(Color Gain)调整;亮度调整,包括对比度(Contrast)调整、驱动增益(Drive Gain)调整、以及伽玛GAMMA曲线调整。
以下描述了显示系统的实施方式。在一些实施例中，如图18所示，显示系统1800为蜂窝电话或者构造为蜂窝电话的一部分。在一些实施例中，显示系统的处理单元可以由蜂窝电话的处理器、例如应用处理器(AP)提供或集成在其中。在一些实施例中，眼部位移传感器可以包括或构造为蜂窝电话的摄像头，例如:前置摄像头。在一些实施例中，眼部位移传感器可以包括或构造为前置摄像头结合结构光摄像头。
在一些实施例中,显示系统可构造为具有处理器单元的平板电脑、个人计算机或可穿戴设备。
在一些实施例中,裸眼立体显示器可以为数字电视(智能或非智能的)。在一些实施例中,如图19所示,显示系统1900可以构造为连接有机顶盒1902或投屏蜂窝电话或平板电脑的裸眼立体显示器1904,处理器单元被包含在机顶盒或投屏蜂窝电话或平板电脑中。
在一些实施例中,裸眼立体显示器为智能电视并集成有处理器单元。
在一些实施例中,裸眼立体显示系统构造为智能家居系统或其一部分。图20中,智能家居系统2000(或裸眼立体显示系统)可包括包含或集成有处理器单元的智能网关2002或中央控制器、裸眼立体显示器2004以及获取眼部位移数据的眼部位移传感器,如双摄像头2006。作为举例,眼部位移传感器可以为其他形式,例如包括单摄像头、摄像头与景深摄像头的结合等。在一些实施例中,显示器和眼部位移传感器均可与智能网关或中央控制器通信连接,例如:通过WiFi等方式无线连接。可选地,显示器和眼部位移传感器也可与智能网关或中央控制器以其他方式无线连接或有线连接。
在一些实施例中,裸眼立体显示系统构造为娱乐互动系统或其一部分。
图21示出了根据本公开实施例的裸眼立体显示系统，其构造为娱乐互动系统2100或其一部分。该娱乐互动系统2100包括裸眼立体显示器2104以及获取眼部位移数据的眼部位移传感器，如双摄像头2106，处理器单元未示出。娱乐互动系统2100被配置为适用于多人使用，例如：适用于两位或更多的用户使用。在一些实施例中，娱乐互动系统2100的裸眼立体显示器2104例如基于眼部位移传感器(如双摄像头2106)的眼部位移数据生成图像并写入对应于视点的像素。
在一些实施例中,该娱乐互动系统2100还可以结合多路信号输入的实施例以获得新的实施例。例如,在一些实施例中,基于使用者的互动(例如基于眼部位移传感器或其他传感器检测到的数据),处理单元相应地生成多路(如两路)个性化的视频信号,并可利用本公开实施例所述的显示器及其显示方法显示。
根据本公开实施例的娱乐互动系统可为使用者提供极高的自由度和互动程度。
本公开实施例提供的多视点裸眼立体显示器、显示系统及显示方法,能够减轻裸眼立体显示的分辨率显著下降的问题。
本公开实施例提供了一种计算机可读存储介质，存储有计算机可执行指令，该计算机可执行指令设置为执行上述的多视点裸眼立体显示器的显示方法。
本公开实施例提供了一种计算机程序产品,包括存储在计算机可读存储介质上的计算机程序,该计算机程序包括程序指令,当该程序指令被计算机执行时,使上述计算机执行上述的多视点裸眼立体显示器的显示方法。
上述的计算机可读存储介质可以是暂态计算机可读存储介质,也可以是非暂态计算机可读存储介质。
本公开实施例提供的计算机可读存储介质和计算机程序产品,能够减轻裸眼立体显示的分辨率显著下降的问题。
本公开实施例的技术方案可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括一个或多个指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开实施例的方法的全部或部分步骤。而前述的存储介质可以是非暂态存储介质,包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等多种可以存储程序代码的介质,也可以是暂态存储介质。
以上描述和附图充分地示出了本公开的实施例,以使本领域技术人员能够实践它们。其他实施例可以包括结构的、逻辑的、电气的、过程的以及其他的改变。实施例仅代表可能的变化。除非明确要求,否则单独的部件和功能是可选的,并且操作的顺序可以变化。一些实施例的部分和特征可以被包括在或替换其他实施例的部分和特征。本公开实施例的范围包括权利要求书的整个范围,以及权利要求书的所有可获得的等同物。当用于本申请中时,虽然术语“第一”、“第二”等可能会在本申请中使用以描述各元件,但这些元件不应受到这些术语的限制。这些术语仅用于将一个元件与另一个元件区别开。比如,在不改变描述的含义的情况下,第一元件可以叫做第二元件,并且同样地,第二元件可以叫做第一元件,只要所有出现的“第一元件”一致重命名并且所有出现的“第二元件”一致重命名即可。第一元件和第二元件都是元件,但可以不是相同的元件。而且,本申请中使用的用词仅用于描述实施例并且不用于限制权利要求。如在实施例以及权利要求的描述中使用的,除非上下文清楚地表明,否则单数形式的“一个”(a)、“一个”(an)和“所述”(the)旨在同样包括复数形式。类似地,如在本申请中所使用的术语“和/或”是指包含一个或一个以上相关联的列出的任何以及所有可能的组合。另外,当用于本申请中时,术语“包括”(comprise)及其变型“包括”(comprises)和/或包括(comprising)等指陈述的特征、整体、步骤、操作、元素,和/或组件的存在,但不排除一个或一个以上其它特征、整体、步骤、操作、元素、组件和/或这些的分组的存在或添加。在没有更多限制的情况下,由语句“包括一个…” 限定的要素,并不排除在包括该要素的过程、方法或者设备中还存在另外的相同要素。本文中,每个实施例重点说明的可以是与其他实施例的不同之处,各个实施例之间相同相似部分可以互相参见。对于实施例公开的方法、产品等而言,如果其与实施例公开的方法部分相对应,那么相关之处可以参见方法部分的描述。
本领域技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,可以取决于技术方案的特定应用和设计约束条件。本领域技术人员可以对每个特定的应用来使用不同方法以实现所描述的功能,但是这种实现不应认为超出本公开实施例的范围。本领域技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
本文所披露的实施例中,所揭露的方法、产品(包括但不限于装置、设备等),可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,单元的划分,可以仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另外,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例。另外,在本公开实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
附图中的流程图和框图显示了根据本公开实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。流程图或框图中的每个方框可以代表一个模块、程序段或代码的一部分，上述模块、程序段或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。在有些作为替换的实现中，方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如，两个连续的方框实际上可以基本并行地执行，它们有时也可以按相反的顺序执行，这可以依所涉及的功能而定。在附图中的流程图和框图所对应的描述中，不同的方框所对应的操作或步骤也可以以不同于描述中所披露的顺序发生，有时不同的操作或步骤之间不存在特定的顺序。例如，两个连续的操作或步骤实际上可以基本并行地执行，它们有时也可以按相反的顺序执行，这可以依所涉及的功能而定。框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合，可以用执行规定的功能或动作的专用的基于硬件的系统来实现，或者可以用专用硬件与计算机指令的组合来实现。

Claims (12)

  1. 一种多视点裸眼立体显示器,包括具有显示面板和光栅的显示屏、被配置为接收3D视频信号的视频信号接口、以及一个或多个3D视频处理单元;
    其中,所述显示面板包括多行多列像素并且限定出多个像素组,所述多个像素组中的各像素组由至少3个像素构成且对应于多视点设置,所述多个像素组具有的非规则的相互排布位置是基于像素与光栅的光学关系和/或像素与视点的对应关系调整或确定的,所述一个或多个3D视频处理单元被配置为可渲染各像素组中对应的像素。
  2. 根据权利要求1所述的多视点裸眼立体显示器,其中,所述光栅包括柱状棱镜光栅,所述光学关系包括像素与所述柱状棱镜光栅的对位关系和/或所述柱状棱镜光栅相对于相应像素的折射状态。
  3. 根据权利要求1所述的多视点裸眼立体显示器,其中,所述光栅包括前置和/或后置的视差屏障光栅,所述视差屏障光栅包括遮光部和透光部,所述光学关系包括像素与所述视差屏障光栅的相应透光部的对位关系。
  4. 根据权利要求1所述的多视点裸眼立体显示器,其中,
    所述光学关系是测量所述显示面板中的像素与所述光栅所得到的对位数据;或
    所述光学关系是所述光栅相对于所述显示面板中的像素的折射状态。
  5. 根据权利要求1所述的多视点裸眼立体显示器,其中,
    所述对应关系是基于所述光学关系计算或确定的;或
    所述对应关系是通过在所述多视点中的各视点进行测量确定的。
  6. 根据权利要求1至5任一项所述的多视点裸眼立体显示器,其中,
    所述多视点裸眼立体显示器还包括存储有所述光学关系和/或对应关系的存储器;
    所述一个或多个3D视频处理单元被配置为可获取所述存储器中的数据。
  7. 一种多视点裸眼立体显示系统,包括处理器单元和根据权利要求1至6任一项所述的多视点裸眼立体显示器;其中,所述处理器单元与所述多视点裸眼立体显示器通信连接。
  8. 根据权利要求7所述的裸眼立体显示系统,其中,所述裸眼立体显示系统包括:
    具有所述处理器单元的智能电视;或
    智能蜂窝电话、平板电脑、个人计算机或可穿戴设备;或
    作为所述处理器单元的机顶盒或可投屏的蜂窝电话或平板电脑,和与所述机顶盒、蜂窝电话或平板电脑有线或无线连接的作为多视点裸眼立体显示器的数字电视;或
    智能家居系统或其一部分，其中所述处理器单元包括所述智能家居系统的智能网关或中央控制器，所述智能家居系统还包括被配置为可获取眼部位移数据的眼部位移传感器；或
    娱乐互动系统或其一部分。
  9. 根据权利要求8所述的裸眼立体显示系统,其中,所述娱乐互动系统被配置为适用于多人使用并基于多个用户生成多路3D视频信号以便传送至所述多视点裸眼立体显示器。
  10. 一种多视点裸眼立体显示器的显示方法,其中,所述多视点裸眼立体显示器包括具有显示面板和光栅的显示屏,所述显示面板包括多行多列像素;所述方法包括:
    接收3D视频信号;
    基于接收到的3D视频信号的图像生成对应于所述多视点中的全部视点或特定视点的多个图像;
    依据生成的所述多个图像渲染对应的像素;其中,被渲染的所述像素是基于像素与光栅的光学关系和/或像素与视点的对应关系确定的。
  11. 根据权利要求10所述的显示方法,其中,所述光学关系以如下方式确定:
    测量所述显示面板中的像素与所述光栅的对位数据,将测量得到的对位数据作为所述光学关系;或
    将所述光栅相对于所述显示面板中的像素的折射状态作为所述光学关系。
  12. 根据权利要求10或11所述的显示方法,其中,所述对应关系以如下方式确定:
    基于所述光学关系计算或确定所述对应关系;或
    在所述多视点中的各视点进行测量以确定所述对应关系。
PCT/CN2020/078938 2019-03-29 2020-03-12 多视点裸眼立体显示器、显示系统及显示方法 WO2020199888A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910247546.XA CN111757088A (zh) 2019-03-29 2019-03-29 一种分辨率无损的裸眼立体显示系统
CN201910247546.X 2019-03-29

Publications (1)

Publication Number Publication Date
WO2020199888A1 true WO2020199888A1 (zh) 2020-10-08





