WO2021110032A1 - Multi-viewpoint 3D display device and 3D image display method - Google Patents

Multi-viewpoint 3D display device and 3D image display method

Info

Publication number
WO2021110032A1
Authority
WO
WIPO (PCT)
Prior art keywords
sub
viewpoint
pixels
display screen
user
Prior art date
Application number
PCT/CN2020/133325
Other languages
English (en)
French (fr)
Inventor
刁鸿浩
黄玲溪
Original Assignee
北京芯海视界三维科技有限公司
视觉技术创投私人有限公司
Priority date
Filing date
Publication date
Application filed by 北京芯海视界三维科技有限公司 and 视觉技术创投私人有限公司
Priority to US17/780,502 (published as US20230007226A1)
Priority to EP20895445.3A (published as EP4068765A4)
Publication of WO2021110032A1


Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
            • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
              • H04N 13/106 Processing image signals
                • H04N 13/111 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
                  • H04N 13/117 Transformation of image signals corresponding to virtual viewpoints, the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
                • H04N 13/15 Processing image signals for colour aspects of image signals
            • H04N 13/20 Image signal generators
              • H04N 13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
                • H04N 13/279 Image signal generators from 3D object models, the virtual viewpoint locations being selected by the viewers or determined by tracking
            • H04N 13/30 Image reproducers
              • H04N 13/302 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
              • H04N 13/324 Colour aspects
              • H04N 13/349 Multi-view displays for displaying three or more geometrical viewpoints without viewer tracking
                • H04N 13/351 Multi-view displays for displaying three or more geometrical viewpoints without viewer tracking, for displaying simultaneously
              • H04N 13/356 Image reproducers having separate monoscopic and stereoscopic modes
                • H04N 13/359 Switching between monoscopic and stereoscopic modes
              • H04N 13/366 Image reproducers using viewer tracking
                • H04N 13/368 Image reproducers using viewer tracking for two or more viewers
                • H04N 13/373 Image reproducers using viewer tracking for tracking forward-backward translational head movements, i.e. longitudinal movements
                • H04N 13/383 Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
              • H04N 13/398 Synchronisation thereof; Control thereof
    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
            • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
              • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
                • G06V 40/161 Detection; Localisation; Normalisation

Definitions

  • This application relates to 3D display technology, for example, to a multi-viewpoint 3D display device and a 3D image display method.
  • 3D display technology uses multiple independent pixels of a display panel to project multiple viewpoints in space.
  • The traditional projection mode is fixed and is not suited to situations where viewing conditions change, for example when the user moves away from or toward the display panel.
  • The embodiments of the present disclosure provide a multi-viewpoint 3D display device, a 3D image display method, a computer-readable storage medium, and a computer program product, to address the problems of a single, inflexible multi-viewpoint projection mode and of transmission.
  • In some embodiments, a multi-viewpoint 3D display method is provided, including: obtaining the distance between the user and the multi-viewpoint 3D display screen; and, in response to a change in the distance, dynamically rendering sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen based on a 3D signal.
  • Dynamically rendering the sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen based on the 3D signal includes: in response to the distance between the user's eyes and the multi-viewpoint 3D display screen increasing, dynamically rendering the sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen in a manner that moves them closer to each other.
  • The multi-viewpoint 3D display method further includes: switching to 2D display when the sub-pixels to be rendered are the same sub-pixel of a composite sub-pixel.
  • Dynamically rendering the sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen based on the 3D signal includes: in response to the distance between the user's eyes and the multi-viewpoint 3D display screen decreasing, dynamically rendering the sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen in a manner that moves them away from each other.
  • the multi-view 3D display method further includes: switching to 2D display when the sub-pixel to be rendered exceeds the outermost sub-pixel of the composite sub-pixel.
  • the multi-view 3D display method further includes: in response to the distance between the user's eyes and the multi-view 3D display screen being less than the first distance threshold, switching to 2D display.
  • The multi-viewpoint 3D display method further includes: in response to the distance between the user's eyes and the multi-viewpoint 3D display screen being greater than a second distance threshold, switching to 2D display; wherein the second distance threshold is greater than the first distance threshold.
  • The multi-viewpoint 3D display method further includes: detecting at least two users to obtain position information of the at least two users; determining a priority user based on the position information of the at least two users; and rendering the sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen based on the 3D signal according to the viewpoints where the priority user's eyes are located.
  • Determining the priority user based on the position information of the at least two users includes: ranking the priorities of the at least two users based on the distances between the faces of the at least two users and the multi-viewpoint 3D display screen, and determining the priority user according to the ranking result.
  • Rendering the sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen based on the 3D signal according to the viewpoints where the priority user's eyes are located includes: obtaining the viewpoints where the eyes of each of the at least two users are located; and, in response to a conflict between the viewpoints of the eyes of the priority user and those of other users, rendering, based on the 3D signal, the sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen that correspond to the viewpoints where the priority user's eyes are located.
  • In some embodiments, a multi-viewpoint 3D display device is provided, including: a multi-viewpoint 3D display screen including multiple composite pixels, each of the multiple composite pixels including multiple composite sub-pixels, and each of the multiple composite sub-pixels including multiple sub-pixels corresponding to multiple viewpoints; an eye positioning device configured to obtain the distance between a user and the multi-viewpoint 3D display screen; and a 3D processing device configured to, in response to a change in the distance, trigger the multi-viewpoint 3D display screen to dynamically render the sub-pixels of the multiple composite sub-pixels based on a 3D signal.
  • The 3D processing device is configured to, in response to the distance between the user and the multi-viewpoint 3D display screen increasing, trigger the multi-viewpoint 3D display screen to dynamically render the sub-pixels of the composite sub-pixels in a manner that moves them closer to each other.
  • the 3D processing device is configured to trigger the multi-view 3D display screen to switch to 2D display when the sub-pixel to be rendered is the same sub-pixel in the composite sub-pixel.
  • The 3D processing device is configured to, in response to the distance between the user's eyes and the multi-viewpoint 3D display screen decreasing, trigger the multi-viewpoint 3D display screen to dynamically render the sub-pixels of the composite sub-pixels in a manner that moves them away from each other.
  • the 3D processing device is configured to trigger the multi-view 3D display screen to switch to 2D display when the sub-pixel to be rendered exceeds the outermost sub-pixel of the composite sub-pixel.
  • the 3D processing device is configured to trigger the switching of the multi-view 3D display screen to 2D display when the distance between the user's eyes and the multi-view 3D display screen is less than the first distance threshold.
  • The 3D processing device is configured to trigger the multi-viewpoint 3D display screen to switch to 2D display when the distance between the user's eyes and the multi-viewpoint 3D display screen is greater than a second distance threshold; wherein the second distance threshold is greater than the first distance threshold.
  • The multi-viewpoint 3D display device includes: a face detection device configured to detect at least two users to obtain position information of the at least two users; a priority logic circuit configured to determine a priority user based on the position information of the at least two users; and a 3D processing device configured to render the sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen based on the 3D signal according to the viewpoints where the priority user's eyes are located.
  • the priority logic circuit is configured to rank the priorities of the at least two users based on the distance between the faces of the at least two users and the multi-view 3D display screen, and determine the priority users according to the ranking results.
  • The multi-viewpoint 3D display device further includes: an eye positioning device configured to obtain the viewpoints where the eyes of each of the at least two users are located; and a 3D processing device configured to, in response to a conflict between the viewpoints of the eyes of the priority user and those of other users, trigger the multi-viewpoint 3D display screen, based on the 3D signal, to render the sub-pixels of the composite sub-pixels that correspond to the viewpoints where the priority user's eyes are located.
  • In some embodiments, a multi-viewpoint 3D display device is provided, including: a processor; and a memory storing program instructions; wherein the processor is configured to perform the method described above when executing the program instructions.
  • the computer-readable storage medium provided by the embodiment of the present disclosure stores computer-executable instructions, and the above-mentioned computer-executable instructions are configured to execute the above-mentioned 3D image display method.
  • The computer program product provided by the embodiments of the present disclosure includes a computer program stored on a computer-readable storage medium; the computer program includes program instructions that, when executed by a computer, cause the computer to perform the above 3D image display method.
  • The 3D image display method for a multi-viewpoint 3D display screen and the multi-viewpoint 3D display device provided by the embodiments of the present disclosure, as well as the computer-readable storage medium and the computer program product, can achieve the following technical effects:
  • Eye positioning data is acquired in real time by the eye positioning device, so the projection of the multiple viewpoints can be adjusted promptly according to viewing conditions, realizing highly flexible 3D display.
  • FIGS. 1A to 1C are schematic diagrams of a multi-view 3D display device according to an embodiment of the present disclosure
  • FIG. 2 shows images of a 3D video signal according to an embodiment of the present disclosure;
  • FIGS. 3A to 3B are schematic diagrams of dynamically rendering sub-pixels according to embodiments of the present disclosure;
  • FIGS. 4A to 4C show sub-pixel rendering when the viewpoint positions of the eyes of multiple users conflict, according to embodiments of the present disclosure;
  • FIG. 5 shows a 3D image display method for a multi-viewpoint 3D display screen according to an embodiment of the present disclosure;
  • FIG. 6 is a schematic diagram of a multi-view 3D display device according to an embodiment of the present disclosure.
  • 100: multi-viewpoint 3D display device; 110: multi-viewpoint 3D display screen; 120: processor; 121: register; 130: 3D processing device; 131: buffer; 140: 3D signal interface; 150: eye positioning device; 160: eye positioning data interface; 300: multi-viewpoint 3D display device; 310: memory; 320: processor; 330: bus; 340: communication interface; 400: composite pixel; 410: red composite sub-pixel; 420: green composite sub-pixel; 430: blue composite sub-pixel; 601: one of the images of the 3D video signal; 602: one of the images of the 3D video signal.
  • According to embodiments of the present disclosure, a multi-viewpoint 3D display device is provided that defines a plurality of viewpoints and includes a multi-viewpoint 3D display screen (for example, a multi-viewpoint naked-eye 3D display screen), a video signal interface, an eye positioning device, and a 3D processing device.
  • the multi-view 3D display screen includes a plurality of composite pixels, each of which includes a plurality of composite sub-pixels, and each composite sub-pixel is composed of a plurality of sub-pixels corresponding to the number of viewpoints of the multi-view 3D display device.
  • the video signal interface is configured to receive an image of a 3D video signal.
  • the eye positioning device is configured to obtain eye positioning data.
  • the 3D processing device is configured to respond to changes in the distance between the user and the multi-viewpoint 3D display screen and trigger the multi-viewpoint 3D display screen to dynamically render the sub-pixels in the composite sub-pixel based on the 3D signal.
  • each composite sub-pixel is composed of multiple sub-pixels of the same color corresponding to the number of viewpoints of the multi-view 3D display device.
  • the sub-pixels in each composite sub-pixel have a one-to-one correspondence with the viewpoints of the multi-view 3D display device.
  • the multi-viewpoint 3D display device has at least 3 viewpoints, and each composite sub-pixel has at least 3 sub-pixels correspondingly.
  • the 3D signal is an image of a 3D video signal.
  • the 3D processing device is communicatively connected with the multi-view 3D display screen. In some embodiments, the 3D processing device is communicatively connected with the driving device of the multi-view 3D display screen.
  • FIG. 1A shows a multi-view 3D display device 100 according to an embodiment of the present disclosure.
  • the multi-viewpoint 3D display device 100 includes a multi-viewpoint 3D display screen 110, a 3D processing device 130, a 3D signal interface 140 that receives a 3D signal such as an image of a 3D video signal, and a processor 120.
  • The multi-viewpoint 3D display screen 110 may include m columns and n rows (m×n) of composite pixels 400 and thus defines a display resolution of m×n.
  • The display resolution m×n may be a resolution at or above Full HD (FHD), including but not limited to 1920×1080, 1920×1200, 2048×1280, 2560×1440, 3840×2160, etc.
  • FIG. 1A schematically shows one composite pixel 400 of the m×n composite pixels, including a red composite sub-pixel 410 composed of i=6 red sub-pixels R, a green composite sub-pixel 420 composed of i=6 green sub-pixels G, and a blue composite sub-pixel 430 composed of i=6 blue sub-pixels B.
  • the multi-viewpoint 3D display device 100 has 6 viewpoints (V1-V6).
  • each composite pixel is square. All the composite sub-pixels in each composite pixel may be arranged in parallel to each other. The i sub-pixels in each composite sub-pixel may be arranged in rows.
  • In embodiments of the present disclosure, each composite sub-pixel has sub-pixels corresponding to the viewpoints.
  • The multiple sub-pixels of each composite sub-pixel are arranged in a row in the lateral direction of the multi-viewpoint 3D display screen, and the sub-pixels in the row have the same color. Since the multiple viewpoints of the 3D display device are arranged roughly along the lateral direction of the multi-viewpoint 3D display screen, when the user moves so that the eyes fall at different viewpoints, different sub-pixels of each composite sub-pixel corresponding to the respective viewpoints need to be rendered dynamically. Because the same-color sub-pixels of each composite sub-pixel are arranged in a row, the cross-color problem caused by persistence of vision can be avoided.
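  • As an illustration of this layout, the composite-pixel structure can be sketched in code. The following is a minimal, hedged Python sketch, not taken from this application; NUM_VIEWPOINTS and the class and method names are assumptions chosen for the i=6, V1-V6 example of FIG. 1A:

      from dataclasses import dataclass, field
      from typing import List

      NUM_VIEWPOINTS = 6  # i = 6 viewpoints V1..V6, as in FIG. 1A

      @dataclass
      class CompositeSubPixel:
          color: str  # "R", "G" or "B"; all i sub-pixels share this color
          # one brightness value per viewpoint; index 0 corresponds to V1
          subpixels: List[float] = field(default_factory=lambda: [0.0] * NUM_VIEWPOINTS)

          def render(self, viewpoint: int, value: float) -> None:
              """Light the sub-pixel corresponding to a viewpoint (1-based V1..V6)."""
              self.subpixels[viewpoint - 1] = value

      @dataclass
      class CompositePixel:
          red: CompositeSubPixel = field(default_factory=lambda: CompositeSubPixel("R"))
          green: CompositeSubPixel = field(default_factory=lambda: CompositeSubPixel("G"))
          blue: CompositeSubPixel = field(default_factory=lambda: CompositeSubPixel("B"))

      # A screen of m columns x n rows of composite pixels defines an m x n
      # display resolution (kept tiny here; an FHD screen would use 1920 x 1080).
      m, n = 4, 3
      screen = [[CompositePixel() for _ in range(m)] for _ in range(n)]
      screen[0][0].red.render(viewpoint=2, value=0.8)
      print(screen[0][0].red.subpixels)  # [0.0, 0.8, 0.0, 0.0, 0.0, 0.0]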
  • the 3D processing device is an FPGA or ASIC chip or FPGA or ASIC chipset. As shown in the embodiment shown in FIG. 1A, the 3D processing device 130 may also optionally include a buffer 131 to buffer the image of the received 3D video signal.
  • the multi-view 3D display apparatus 100 may further include a processor 120 communicatively connected to the 3D processing device 130 through the 3D signal interface 140.
  • the processor 120 is included in a computer or an intelligent terminal, such as a mobile terminal, or as a processor unit.
  • the 3D signal interface 140 is an internal interface connecting the processor 120 and the 3D processing device 130.
  • a multi-view 3D display device 100 may be, for example, a mobile terminal, and the video signal interface 140 may be a MIPI, a mini-MIPI interface, an LVDS interface, a min-LVDS interface, or a DisplayPort interface.
  • the processor 120 of the multi-view 3D display device 100 may further include a register 121.
  • the register 121 can be configured to temporarily store instructions, data, and addresses.
  • the multi-viewpoint 3D display device further includes an eye positioning device or an eye positioning data interface configured to obtain eye positioning data.
  • the multi-viewpoint 3D display device 100 includes an eye positioning device 150 communicatively connected to the 3D processing device 130, so that the 3D processing device 130 can directly receive eye positioning data.
  • the eye positioning device (not shown) may be directly connected to the processor 120, for example, and the 3D processing device 130 obtains eye positioning data from the processor 120 via the eye positioning data interface 160.
  • The eye positioning device can also be connected to both the processor and the 3D processing device, so that on the one hand the 3D processing device 130 can obtain eye positioning data directly from the eye positioning device, and on the other hand other information obtained by the eye positioning device can be processed by the processing unit.
  • the eye positioning data includes eye space position information indicating the user's eye space position.
  • The eye spatial position information can be expressed in the form of three-dimensional coordinates, for example including distance information between the user's eyes/face and the multi-viewpoint 3D display screen or the eye positioning device (that is, depth information of the user's eyes/face), lateral position information of the user's eyes/face on the multi-viewpoint 3D display screen or eye positioning device, and vertical position information of the user's eyes/face on the multi-viewpoint 3D display screen or eye positioning device. The eye spatial position can also be expressed in the form of two-dimensional coordinates containing any two of the distance information, the lateral position information, and the vertical position information.
  • the eye positioning data may also include the viewpoint (viewpoint position) where the user's eyes (for example, both eyes) are located, the user's perspective, and the like.
  • The eye positioning device includes an eye locator configured to capture a user image (for example, an image of the user's face), an eye positioning image processor configured to determine the eye spatial position based on the captured user image, and an eye positioning data interface configured to transmit the eye spatial position information.
  • the eye space position information indicates the eye space position.
  • The eye locator includes a first camera configured to capture a first image and a second camera configured to capture a second image, and the eye positioning image processor is configured to recognize the presence of eyes based on at least one of the first image and the second image and to determine the eye spatial position based on the recognized eyes.
  • The eye locator includes at least one camera configured to capture at least one image and a depth detector configured to obtain depth information of the user's eyes, and the eye positioning image processor is configured to recognize the presence of eyes based on the captured at least one image and to determine the eye spatial position based on the recognized eyes and the eye depth information.
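  • To make the two-camera variant concrete: with two calibrated cameras, the eye depth can be recovered from the disparity between the two images by standard stereo triangulation. The sketch below is illustrative only; the baseline and focal-length values and the function name are assumptions, not parameters disclosed in this application:

      def eye_depth_from_stereo(x_first: float, x_second: float,
                                baseline_mm: float = 60.0,
                                focal_length_px: float = 1000.0) -> float:
          """Estimate eye depth (mm) from the horizontal pixel position of the
          same eye in the first and second camera images; standard stereo
          triangulation: depth = baseline * focal_length / disparity."""
          disparity = x_first - x_second
          if disparity <= 0:
              raise ValueError("eye not resolved by both cameras")
          return baseline_mm * focal_length_px / disparity

      # Example: a disparity of 20 px with these parameters puts the eye ~3 m away.
      print(eye_depth_from_stereo(510.0, 490.0))  # 3000.0 (mm)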
  • the multi-viewpoint 3D display device 100 may define multiple viewpoints.
  • the user's eyes can see the display of the corresponding sub-pixel in the composite sub-pixel of each composite pixel 400 in the multi-viewpoint 3D display screen 110 at the spatial position corresponding to the viewpoint.
  • the two different images seen by the user's eyes at different points of view form a parallax, and a 3D image is synthesized in the brain.
  • The 3D processing device 130 receives images of, for example, a decompressed 3D video signal from the processor 120 through, for example, the 3D signal interface 140 serving as an internal interface.
  • Each image may be, or may consist of, two images or a composite image.
  • the two images or composite images may be different types of images and may be in various arrangements.
  • The image of the 3D video signal is, or consists of, two side-by-side images 601 and 602.
  • the two images may be a left-eye parallax image and a right-eye parallax image, respectively.
  • the two images may be a rendered color image and a depth image, respectively.
  • The image of the 3D video signal is an interleaved composite image. In some embodiments, the composite image may be an interleaved left-eye and right-eye parallax composite image, or an interleaved rendered color and depth-of-field composite image.
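  • For illustration, the two arrangements can be unpacked as follows. This is a hedged sketch in which frames are modelled as nested lists of pixels; the function names are assumptions:

      def split_side_by_side(frame):
          """Return the (601, 602) halves of a side-by-side 3D frame."""
          half = len(frame[0]) // 2
          return [row[:half] for row in frame], [row[half:] for row in frame]

      def deinterleave_rows(frame):
          """Return the two images of a row-interleaved composite frame."""
          return frame[0::2], frame[1::2]

      # Toy 4x2 side-by-side frame: '1' pixels belong to image 601, '2' to 602.
      frame = [[1, 1, 2, 2],
               [1, 1, 2, 2]]
      left, right = split_side_by_side(frame)
      print(left, right)  # [[1, 1], [1, 1]] [[2, 2], [2, 2]]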
  • At least one 3D processing device 130 triggers the multi-view 3D display screen to render at least one sub-pixel in each composite sub-pixel based on one of the two images. Based on the other of the two images, the multi-viewpoint 3D display screen is triggered to render at least another sub-pixel in each composite sub-pixel.
  • At least one 3D processing device triggers the multi-viewpoint 3D display screen to render at least two sub-pixels in each composite sub-pixel based on the composite image.
  • The multi-viewpoint 3D display screen is triggered to render at least one sub-pixel of the composite sub-pixel based on the first (partial) image in the composite image, and to render at least one other sub-pixel of the composite sub-pixel based on the second (partial) image.
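  • The rendering rule just described can be sketched as follows (illustrative only; the function and parameter names are assumptions): one sub-pixel of the composite sub-pixel is lit per eye, taking its value from the image assigned to that eye.

      def render_composite_subpixel(subpixel_row, left_value, right_value,
                                    left_viewpoint, right_viewpoint):
          """subpixel_row: list of i per-viewpoint sub-pixel values (1-based)."""
          subpixel_row[left_viewpoint - 1] = left_value    # from image 601
          subpixel_row[right_viewpoint - 1] = right_value  # from image 602

      red_row = [0.0] * 6  # one red composite sub-pixel, viewpoints V1..V6
      render_composite_subpixel(red_row, 0.8, 0.6, left_viewpoint=2, right_viewpoint=4)
      print(red_row)       # [0.0, 0.8, 0.0, 0.6, 0.0, 0.0] -> R2 and R4 are lit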
  • the 3D processing device can trigger the multi-viewpoint 3D display screen to dynamically render the relevant sub-pixels in each composite sub-pixel in the multi-viewpoint 3D display screen based on real-time eye positioning data to adapt to changes in viewing conditions.
  • The dynamic rendering of the relevant sub-pixels of each composite sub-pixel may cover rendering the relevant sub-pixels of all composite sub-pixels over substantially the entire display screen, or cover cases in which the sub-pixels of each composite sub-pixel and the viewpoints are misaligned, or both.
  • FIG. 3A schematically shows an embodiment of dynamic rendering by the 3D processing device, taking the dynamic rendering of the red sub-pixels R in the red composite sub-pixel as an example; the dynamic rendering of the sub-pixels in composite sub-pixels of other colors follows by analogy.
  • As shown in FIG. 3A, the eye positioning device judges, based on the detected spatial position information (data) of the user's eyes or face, that the distance between the user and the multi-viewpoint 3D display screen has increased; or the 3D processing device makes this judgment based on the spatial position information (data) of the user's eyes or face detected by the eye positioning device.
  • In response, the 3D processing device triggers the multi-viewpoint 3D display screen to dynamically render the red sub-pixels of the red composite sub-pixels, for example at least two red sub-pixels, in a manner that moves them closer to each other.
  • As shown in FIG. 3A, in response to the increase in the distance between the user and the multi-viewpoint 3D display screen, the 3D processing device triggers the multi-viewpoint 3D display screen to dynamically adjust from rendering the red sub-pixels R1 and R5 related to the initial viewpoint positions of the user's eyes to rendering the red sub-pixels R2 and R4 related to the subsequent viewpoint positions of the user's eyes.
  • The dynamic rendering in which sub-pixels move closer to each other is shown by the dashed arrows in FIG. 3A. The red sub-pixels R2 and R4 are closer to each other than the red sub-pixels R1 and R5.
  • the rendered sub-pixel in the composite sub-pixel is the sub-pixel corresponding to the viewpoint position of the eye determined by the eye positioning data.
  • As the distance continues to increase, the 3D processing device triggers the multi-viewpoint 3D display screen to continue dynamically rendering the sub-pixels of the composite sub-pixels, for example at least two sub-pixels, in a manner that moves them closer to each other.
  • In this way, the rendered sub-pixels of a composite sub-pixel move closer together until they overlap, or are about to overlap, at the same sub-pixel; that is, the same sub-pixel of the composite sub-pixel is rendered or is about to be rendered.
  • In the embodiment shown in FIG. 3A, the red sub-pixels to be rendered next continue to approach each other relative to the red sub-pixels R2 and R4, until they coincide at the red sub-pixel R3. That is, corresponding to the viewpoint positions of the user's eyes, when the user moves away from the multi-viewpoint 3D display screen and reaches a certain distance, the same sub-pixel R3 may be rendered for forming both the left-eye parallax image and the right-eye parallax image.
  • In this case, the 3D processing device triggers the multi-viewpoint 3D display screen to switch to 2D display.
  • The dynamic rendering in which sub-pixels move closer together until they overlap, or are about to overlap, may be determined by the change in viewpoint caused by the change in distance. In some embodiments, it can also be determined simply by detecting the change in distance, for example where a correspondence is defined between the distance and the sub-pixels to be dynamically rendered, or where the display switches to 2D outside or within a predetermined distance threshold, as described in the embodiments below.
  • FIG. 3B schematically shows another embodiment of dynamic rendering by the 3D processing device, taking the dynamic rendering of the red sub-pixels R in the red composite sub-pixel as an example; the dynamic rendering of the sub-pixels in composite sub-pixels of other colors follows by analogy.
  • As shown in FIG. 3B, the eye positioning device judges, based on the detected spatial position information (data) of the user's eyes or face, that the distance between the user and the multi-viewpoint 3D display screen has decreased; or the 3D processing device makes this judgment based on the spatial position information (data) of the user's eyes or face detected by the eye positioning device.
  • In response, the 3D processing device triggers the multi-viewpoint 3D display screen to dynamically render the red sub-pixels of the red composite sub-pixels, for example at least two red sub-pixels, in a manner that moves them away from each other.
  • As shown in FIG. 3B, in response to the decrease in the distance, the 3D processing device triggers the multi-viewpoint 3D display screen to dynamically adjust from rendering the red sub-pixels R2 and R4 related to the initial viewpoint positions of the user's eyes to rendering the red sub-pixels R1 and R5 related to the subsequent viewpoint positions of the user's eyes.
  • The dynamic rendering in which the sub-pixels move away from each other is shown by the dashed arrows in FIG. 3B. The red sub-pixels R1 and R5 are farther from each other than the red sub-pixels R2 and R4.
  • the rendered sub-pixel in the composite sub-pixel is the sub-pixel corresponding to the viewpoint position of the eye determined by the eye positioning data.
  • As the distance continues to decrease, the 3D processing device triggers the multi-viewpoint 3D display screen to continue dynamically rendering the sub-pixels of the composite sub-pixels, for example at least two sub-pixels, in a manner that moves them away from each other.
  • In this way, the rendered sub-pixels of a composite sub-pixel move apart until they eventually lie, or are about to lie, beyond the outermost sub-pixels of the corresponding composite sub-pixel.
  • In the embodiment shown in FIG. 3B, the red sub-pixels to be rendered next continue to move away from each other relative to the red sub-pixels R1 and R5, until they would lie beyond the sub-pixels R1 and R6 at the outermost ends of the red composite sub-pixel. That is, corresponding to the viewpoint positions of the user's eyes, when the user approaches the multi-viewpoint 3D display screen and reaches a certain distance, there may be a situation in which no sub-pixel corresponding to the user's current viewpoint is found in at least some of the composite sub-pixels.
  • In this case, the 3D processing device triggers the multi-viewpoint 3D display screen to switch to 2D display.
  • the dynamic rendering of sub-pixels that are far from each other beyond or about to exceed the outermost sub-pixel of the corresponding composite sub-pixel may be determined by the change in viewpoint caused by the change in distance.
  • Exceeding the outermost sub-pixels of the corresponding composite sub-pixel may occur on one side, for example beyond the outermost sub-pixel R1 or R6 of the red composite sub-pixel shown in FIG. 3B, or on both sides, that is, beyond both outermost sub-pixels R1 and R6 of the red composite sub-pixel shown in FIG. 3B. When either of these two situations occurs, the 3D processing device plays the image of the 3D video signal in 2D.
  • the multi-viewpoint 3D display device defines a first distance threshold.
  • The threshold may be preset when the multi-viewpoint 3D display device leaves the factory.
  • When the distance between the user's eyes and the multi-viewpoint 3D display screen is less than the first distance threshold, the 3D processing device plays the image of the 3D video signal in 2D.
  • the multi-viewpoint 3D display device defines a second distance threshold, and the second distance threshold is greater than the first distance threshold.
  • This threshold may likewise be preset when the multi-viewpoint 3D display device leaves the factory.
  • When the distance between the user's eyes and the multi-viewpoint 3D display screen is greater than the second distance threshold, the 3D processing device plays the image of the 3D video signal in 2D.
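  • Pulling the switching rules of FIGS. 3A to 3B and the two thresholds together, the 3D/2D decision can be sketched as below. All names and threshold values are assumptions for illustration, not values from this application:

      NUM_VIEWPOINTS = 6
      FIRST_DISTANCE_THRESHOLD_MM = 300.0     # illustrative factory presets
      SECOND_DISTANCE_THRESHOLD_MM = 5000.0

      def choose_display_mode(left_vp: int, right_vp: int, distance_mm: float) -> str:
          """Return '3D' while the current viewpoint pair is still renderable, else '2D'."""
          if distance_mm < FIRST_DISTANCE_THRESHOLD_MM:
              return "2D"   # user closer than the first distance threshold
          if distance_mm > SECOND_DISTANCE_THRESHOLD_MM:
              return "2D"   # user farther than the second distance threshold
          if left_vp == right_vp:
              return "2D"   # sub-pixels converged onto the same one (e.g. R3)
          if not (1 <= left_vp <= NUM_VIEWPOINTS and 1 <= right_vp <= NUM_VIEWPOINTS):
              return "2D"   # beyond the outermost sub-pixels (e.g. R1/R6)
          return "3D"

      print(choose_display_mode(2, 4, 1500.0))  # 3D: render e.g. R2 and R4
      print(choose_display_mode(3, 3, 4000.0))  # 2D: both eyes at the same viewpoint
      print(choose_display_mode(0, 7, 1500.0))  # 2D: viewpoints left the composite sub-pixel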
  • the multi-viewpoint 3D display device may further include a position detection device that detects the user's position information.
  • the user's position information includes, for example, the user's spatial position, the distance between the user and the multi-viewpoint 3D display screen, and the like.
  • the position detection device may be, for example, a face detection device that obtains position information of the user's face.
  • The position information of the user's face may include, for example, spatial position information of the user's face relative to the multi-viewpoint 3D display screen, such as the distance between the user's face and the multi-viewpoint 3D display screen, the viewing angle of the user's face relative to the multi-viewpoint 3D display screen, and the like.
  • the face detection device may have a visual recognition function, such as a face recognition function, and may detect user facial information (such as facial features), for example, detect facial information of all users in front of a multi-view 3D display screen.
  • the face detection device can be connected to or integrated into the eye positioning device, and can also be connected to the 3D processing device to transmit the detected face information.
  • The face detection device can be provided as a stand-alone device, can be integrated with the eye positioning device (for example, integrated together with the eye positioning device in the processor), or can be integrated into other components or units with similar functions in the multi-viewpoint 3D display device.
  • the multi-view 3D display device may further include a priority logic circuit.
  • the priority logic circuit determines the priority user or ranks the priority of the user based on the location information of the at least two users (for example, the location information of the user's face).
  • The priority logic circuit may determine the priority user among the at least two users, or rank the priorities of the at least two users, based on the respective position information of the at least two users (such as the respective face position information of the two users) acquired by the position detection device (such as the face detection device).
  • The priority user, or the priority order of the users, may be determined or ranked based on the detected distances between the at least two users and the multi-viewpoint 3D display screen, for example by detecting the distances between the faces or eyes of the at least two users and the multi-viewpoint 3D display screen.
  • The eye positioning device can detect the viewpoint positions of the eyes of the at least two users in real time, and the 3D processing device, in response to a conflict between the eye viewpoint positions of the priority user (or of the user with the higher priority among the at least two users) and those of other users, renders, based on the image of the 3D video signal, the sub-pixels of each composite sub-pixel that correspond to the viewpoint positions of the eyes of the priority user or of the user with the highest priority.
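  • A minimal sketch of this priority logic (the data shapes and names are assumptions): users are ranked by face distance, and on a viewpoint conflict the sub-pixels are rendered for the priority user's viewpoints:

      from typing import List, Tuple

      # (user_id, face_distance_mm, left_viewpoint, right_viewpoint)
      User = Tuple[str, float, int, int]

      def pick_priority_user(users: List[User]) -> User:
          """The face closest to the multi-viewpoint 3D display screen wins."""
          return min(users, key=lambda u: u[1])

      def viewpoints_conflict(a: User, b: User) -> bool:
          return bool({a[2], a[3]} & {b[2], b[3]})

      users = [("a", 1200.0, 2, 4), ("b", 2500.0, 4, 6)]
      priority = pick_priority_user(users)  # user a: closest face
      for other in users:
          if other is not priority and viewpoints_conflict(priority, other):
              # as in FIGS. 4A-4B: the shared viewpoint shows the priority
              # user's image; render R2 and R4 for user a's eyes
              print("conflict with", other[0], "- render for", priority[0])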
  • FIG. 4A schematically shows an embodiment in which the 3D processing device renders sub-pixels based on the determined priority user or high-priority user, taking the rendering of the red sub-pixels R in the red composite sub-pixel as an example; the rendering of the sub-pixels in composite sub-pixels of other colors follows by analogy.
  • The face detection device detects that the distance between the face of user a and the multi-viewpoint 3D display screen is smaller than the distance between the face of user b and the multi-viewpoint 3D display screen, and the priority determination unit determines user a as the priority user or ranks user a with high priority.
  • the eye positioning device detects the position of the viewpoint of the eyes of the user a and the user b in real time.
  • The 3D processing device generates images of the viewpoints corresponding to the eyes of user a based on the image of the 3D video signal, renders the red sub-pixels R2 and R4 of the red composite sub-pixel corresponding to the viewpoint positions of user a's eyes, and plays a 3D effect to user a.
  • The 3D processing device may also render, based on the image of the viewpoint corresponding to the left eye of user b (which is the same as the image of the viewpoint corresponding to the right eye of user a), the red sub-pixel R6 of the red composite sub-pixel corresponding to the viewpoint position of the right eye of user b. Both eyes of user b then see the same image, and the 3D processing device plays a 2D effect to user b.
  • FIG. 4B schematically shows another embodiment in which the 3D processing device renders sub-pixels based on the determined priority user or high-priority user, taking the rendering of the red sub-pixels R in the red composite sub-pixel as an example; the rendering of the sub-pixels in other composite sub-pixels follows by analogy.
  • The face detection device detects that the distance between the face of user a and the multi-viewpoint 3D display screen is smaller than the distance between the face of user b and the multi-viewpoint 3D display screen, and the priority determination unit determines user a as the priority user or ranks user a with high priority.
  • the eye positioning device detects the position of the viewpoint of the eyes of the user a and the user b in real time.
  • The 3D processing device generates images of the viewpoints corresponding to the eyes of user a based on the image of the 3D video signal, renders the red sub-pixels R2 and R4 of the red composite sub-pixel corresponding to the viewpoint positions of user a's eyes, and plays a 3D effect to user a.
  • The 3D processing device may also generate, based on the image of the 3D video signal, an image of the viewpoint corresponding to the right eye of user b (user b's left eye being at the same viewpoint as the right eye of user a), and render the red sub-pixel R6 of the red composite sub-pixel corresponding to the viewpoint position of the right eye of user b. User b thus sees different images with both eyes, and the 3D processing device plays a 3D effect to user b.
  • FIG. 4C schematically shows another embodiment in which the 3D processing device renders sub-pixels based on the determined priority user or high-priority user, taking the rendering of the red sub-pixels R in the red composite sub-pixel as an example; the rendering of the sub-pixels in other composite sub-pixels follows by analogy.
  • The face detection device detects that the distance between the face of user a and the multi-viewpoint 3D display screen is smaller than the distance between the face of user b and the multi-viewpoint 3D display screen, and the priority determination unit determines user a as the priority user or ranks user a with high priority.
  • the eye positioning device detects the position of the viewpoint of the eyes of the user a and the user b in real time.
  • The viewpoint position of the left eye of user a conflicts with the viewpoint position of the left eye of user b, and the viewpoint position of the right eye of user a conflicts with the viewpoint position of the right eye of user b.
  • The 3D processing device generates images of the viewpoints corresponding to the eyes of user a based on the image of the 3D video signal, renders the red sub-pixels R2 and R4 of the red composite sub-pixel corresponding to the viewpoint positions of user a's eyes, and plays a 3D effect to user a.
  • User b can see the same 3D effect at the same time.
  • Embodiments of the present disclosure also provide a 3D image display method for the above multi-viewpoint 3D display screen.
  • The multi-viewpoint 3D display method includes:
  • S300: in response to a change in the distance between the user's eyes or face and the multi-viewpoint 3D display screen, dynamically rendering the sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen based on the 3D signal.
  • the 3D signal includes an image of a 3D video signal.
  • Dynamically rendering the sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen based on the 3D signal includes: in response to the distance between the user's eyes or face and the multi-viewpoint 3D display screen increasing, dynamically rendering the sub-pixels of the composite sub-pixels in a manner that moves them closer to each other.
  • Dynamically rendering the sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen based on the 3D signal further includes: while dynamically rendering the sub-pixels of the composite sub-pixels in a manner that moves them closer to each other, switching to 2D display when the same sub-pixel of the composite sub-pixels is to be rendered.
  • Dynamically rendering the sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen based on the 3D signal includes: in response to the distance between the user's eyes or face and the multi-viewpoint 3D display screen decreasing, dynamically rendering the sub-pixels of each composite sub-pixel in a manner that moves them away from each other.
  • Dynamically rendering the sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen based on the 3D signal further includes: while dynamically rendering the sub-pixels of the composite sub-pixels in a manner that moves them away from each other, switching to 2D display when the sub-pixels to be rendered lie beyond the outermost sub-pixels of the composite sub-pixel.
  • A multi-viewpoint 3D display device including a multi-viewpoint 3D display screen defines a first distance threshold. The multi-viewpoint 3D display method further includes: switching to 2D display in response to the distance between the user's eyes or face and the multi-viewpoint 3D display screen being less than the first distance threshold.
  • The multi-viewpoint 3D display device defines a second distance threshold, the second distance threshold being greater than the first distance threshold. The multi-viewpoint 3D display method further includes: switching to 2D display in response to the distance between the user's eyes or face and the multi-viewpoint 3D display screen being greater than the second distance threshold.
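  • Taken together, the steps above imply a per-frame flow: obtain the latest eye positioning data, recompute the eyes' viewpoints, then either dynamically render the matching sub-pixels or switch to 2D. The sketch below reuses choose_display_mode from the earlier sketch; all other names are illustrative assumptions:

      def display_frame(frame, read_eye_positioning, render_2d, render_3d):
          """One iteration of the display loop for one image of the 3D signal."""
          distance_mm, left_vp, right_vp = read_eye_positioning()
          if choose_display_mode(left_vp, right_vp, distance_mm) == "2D":
              render_2d(frame)                     # play the 3D signal in 2D
          else:
              render_3d(frame, left_vp, right_vp)  # light sub-pixels for both viewpoints

      # Toy wiring: eyes at viewpoints V2/V4, 1.5 m from the screen -> 3D rendering.
      display_frame(
          frame="frame-0",
          read_eye_positioning=lambda: (1500.0, 2, 4),
          render_2d=lambda f: print("2D", f),
          render_3d=lambda f, l, r: print("3D", f, "viewpoints", l, r),
      )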
  • the display method further includes:
  • At least two users are detected to obtain respective location information of the at least two users. For example, detecting the position information of the respective faces or eyes of at least two users, such as the distance between the respective faces or eyes of the at least two users and the multi-viewpoint 3D display screen.
  • The priority user is determined, or the priorities of the at least two users are ranked, based on the position information of the at least two users. For example, the spatial position information (spatial coordinates) of the faces or eyes of each of the at least two users is acquired, the distances between the faces or eyes of the at least two users and the multi-viewpoint 3D display screen are calculated from the spatial position information, and the calculated distances are compared; the user closest to the multi-viewpoint 3D display screen is taken as the priority user or the user with high priority, or the user whose distance lies between the first threshold and the second threshold of the multi-viewpoint 3D display device is taken as the priority user or the user with high priority.
  • Determining the priority user, or ranking the priorities of the at least two users, is based on the distances of the faces of the at least two users from the multi-viewpoint 3D display screen. For example, the user whose face is at a smaller distance from the multi-viewpoint 3D display screen is determined as the priority user or the user with higher priority.
  • The rendering of the sub-pixels of each composite sub-pixel based on the image of the 3D video signal is performed according to the viewpoints at which the eyes of the priority user or high-priority user are located.
  • In some embodiments, the viewpoint position of one eye of the priority user or high-priority user conflicts with the viewpoint position of one eye of another user, and the conflicting eye of the priority user and the conflicting eye of the other user are opposite eyes; for example, the viewpoint position of the left eye of the priority user conflicts with the viewpoint position of the right eye of the other user, or the viewpoint position of the right eye of the priority user conflicts with the viewpoint position of the left eye of the other user.
  • In this case, images of the viewpoints corresponding to the eyes of the priority user or high-priority user are generated based on the image of the 3D video signal, the sub-pixels of each composite sub-pixel corresponding to the viewpoint positions of the priority user's eyes are rendered, and a 3D effect is played to the priority user or high-priority user.
  • When the images of the viewpoints corresponding to the non-conflicting eyes of the other users differ from the images of the viewpoints corresponding to their conflicting eyes, the other users also see a 3D effect.
  • When the viewpoint positions of both eyes of the priority user or high-priority user conflict with the viewpoint positions of both eyes of other users, images of the viewpoints corresponding to the eyes of the priority user are generated based on the image of the 3D video signal, the sub-pixels of each composite sub-pixel corresponding to the viewpoint positions of the priority user's eyes are rendered, and the same 3D effect is played to the priority user and to the other users whose binocular viewpoint positions conflict.
  • the multi-viewpoint 3D display device 300 includes a processor 320 and a memory 310.
  • the multi-view 3D display device 300 may further include a communication interface 340 and a bus 330.
  • the processor 320, the communication interface 340, and the memory 310 communicate with each other through the bus 330.
  • the communication interface 340 may be configured to transmit information.
  • the processor 320 may call the logic instructions in the memory 310 to execute the 3D image display method of the foregoing embodiment.
  • The aforementioned logic instructions in the memory 310 can be implemented in the form of software functional units and, when sold or used as an independent product, can be stored in a computer-readable storage medium.
  • the memory 310 can be used to store software programs and computer-executable programs, such as program instructions/modules corresponding to the methods in the embodiments of the present disclosure.
  • the processor 320 executes functional applications and data processing by running the program instructions/modules stored in the memory 310, that is, implements the 3D image display method in the foregoing embodiment.
  • the memory 310 may include a storage program area and a storage data area.
  • the storage program area may store an operating system and an application program required by at least one function; the storage data area may store data created according to the use of the terminal device and the like.
  • the memory 310 may include a high-speed random access memory, and may also include a non-volatile memory.
  • the computer-readable storage medium provided by the embodiment of the present disclosure stores computer-executable instructions, and the above-mentioned computer-executable instructions are configured to execute the above-mentioned 3D image display method.
  • The computer program product provided by the embodiments of the present disclosure includes a computer program stored on a computer-readable storage medium; the computer program includes program instructions that, when executed by a computer, cause the computer to perform the above 3D image display method.
  • the technical solutions of the embodiments of the present disclosure can be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes one or more instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods of the embodiments of the present disclosure.
  • The aforementioned storage medium can be a non-transitory storage medium, including a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk or an optical disk, or other media that can store program code, or it can be a transitory storage medium.
  • the disclosed methods and products can be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • The division into units may be only a logical functional division; in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units. Some or all of the units may be selected according to actual needs to implement the embodiment.
  • the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • Each block in the flowchart or block diagram may represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more executable instructions for realizing the specified logical function. In some alternative implementations, the functions noted in the blocks may occur in an order different from that marked in the drawings; for example, two consecutive blocks can actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved.

Abstract

This application relates to the field of 3D display technology and discloses a multi-viewpoint 3D display method, including: obtaining the distance between a user and a multi-viewpoint 3D display screen; and, in response to a change in the distance, dynamically rendering sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen based on a 3D signal. The method enables flexible multi-viewpoint projection. This application also discloses a multi-viewpoint 3D display device, a computer-readable storage medium, and a computer program product.

Description

Multi-viewpoint 3D display device and 3D image display method
This application claims priority to the Chinese patent application No. 201911231146.6, filed with the China National Intellectual Property Administration on December 5, 2019 and entitled "多视点裸眼3D显示设备和3D图像显示方法" ("Multi-viewpoint naked-eye 3D display device and 3D image display method"), the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to 3D display technology, for example to a multi-viewpoint 3D display device and a 3D image display method.
Background
At present, 3D display technology uses multiple independent pixels of a display panel to project multiple viewpoints in space.
In the course of implementing the embodiments of the present disclosure, at least the following problem was found in the related art: the traditional projection mode is fixed and is not suited to situations where viewing conditions change, for example when the user moves away from or toward the display panel.
This background is provided only to facilitate understanding of the related art and is not to be taken as an admission of prior art.
Summary
To provide a basic understanding of some aspects of the disclosed embodiments, a brief summary is given below. This summary is not an extensive overview, nor is it intended to identify key or critical elements or to delimit the scope of these embodiments; it serves as a prelude to the detailed description that follows.
The embodiments of the present disclosure provide a multi-viewpoint 3D display device, a 3D image display method, a computer-readable storage medium, and a computer program product, to address the problems of a single, inflexible multi-viewpoint projection mode and of transmission.
In some embodiments, a multi-viewpoint 3D display method is disclosed, including: obtaining the distance between a user and a multi-viewpoint 3D display screen; and, in response to a change in the distance, dynamically rendering sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen based on a 3D signal.
In some embodiments, dynamically rendering the sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen based on the 3D signal includes: in response to the distance between the user's eyes and the multi-viewpoint 3D display screen increasing, dynamically rendering the sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen in a manner that moves them closer to each other.
In some embodiments, the multi-viewpoint 3D display method further includes: switching to 2D display when the sub-pixels to be rendered are the same sub-pixel of a composite sub-pixel.
In some embodiments, dynamically rendering the sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen based on the 3D signal includes: in response to the distance between the user's eyes and the multi-viewpoint 3D display screen decreasing, dynamically rendering the sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen in a manner that moves them away from each other.
In some embodiments, the multi-viewpoint 3D display method further includes: switching to 2D display when a sub-pixel to be rendered lies beyond the outermost sub-pixel of the composite sub-pixel.
In some embodiments, the multi-viewpoint 3D display method further includes: switching to 2D display in response to the distance between the user's eyes and the multi-viewpoint 3D display screen being less than a first distance threshold.
In some embodiments, the multi-viewpoint 3D display method further includes: switching to 2D display in response to the distance between the user's eyes and the multi-viewpoint 3D display screen being greater than a second distance threshold, where the second distance threshold is greater than the first distance threshold.
In some embodiments, the multi-viewpoint 3D display method further includes: detecting at least two users to obtain position information of the at least two users; determining a priority user based on the position information of the at least two users; and rendering the sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen based on the 3D signal according to the viewpoints where the priority user's eyes are located.
In some embodiments, determining the priority user based on the position information of the at least two users includes: ranking the priorities of the at least two users based on the distances between the faces of the at least two users and the multi-viewpoint 3D display screen, and determining the priority user according to the ranking result.
In some embodiments, rendering the sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen based on the 3D signal according to the viewpoints where the priority user's eyes are located includes: obtaining the viewpoints where the eyes of each of the at least two users are located; and, in response to a conflict between the viewpoints of the eyes of the priority user and of other users, rendering, based on the 3D signal, the sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen that correspond to the viewpoints where the priority user's eyes are located.
In some embodiments, a multi-viewpoint 3D display device is disclosed, including: a multi-viewpoint 3D display screen including multiple composite pixels, each of the multiple composite pixels including multiple composite sub-pixels, and each of the multiple composite sub-pixels including multiple sub-pixels corresponding to multiple viewpoints; an eye positioning device configured to obtain the distance between a user and the multi-viewpoint 3D display screen; and a 3D processing device configured to, in response to a change in the distance, trigger the multi-viewpoint 3D display screen to dynamically render the sub-pixels of the multiple composite sub-pixels based on a 3D signal.
In some embodiments, the 3D processing device is configured to, in response to the distance between the user and the multi-viewpoint 3D display screen increasing, trigger the multi-viewpoint 3D display screen to dynamically render the sub-pixels of the composite sub-pixels in a manner that moves them closer to each other.
In some embodiments, the 3D processing device is configured to trigger the multi-viewpoint 3D display screen to switch to 2D display when the sub-pixels to be rendered are the same sub-pixel of a composite sub-pixel.
In some embodiments, the 3D processing device is configured to, in response to the distance between the user's eyes and the multi-viewpoint 3D display screen decreasing, trigger the multi-viewpoint 3D display screen to dynamically render the sub-pixels of the composite sub-pixels in a manner that moves them away from each other.
In some embodiments, the 3D processing device is configured to trigger the multi-viewpoint 3D display screen to switch to 2D display when a sub-pixel to be rendered lies beyond the outermost sub-pixel of the composite sub-pixel.
In some embodiments, the 3D processing device is configured to trigger the multi-viewpoint 3D display screen to switch to 2D display when the distance between the user's eyes and the multi-viewpoint 3D display screen is less than a first distance threshold.
In some embodiments, the 3D processing device is configured to trigger the multi-viewpoint 3D display screen to switch to 2D display when the distance between the user's eyes and the multi-viewpoint 3D display screen is greater than a second distance threshold, where the second distance threshold is greater than the first distance threshold.
In some embodiments, the multi-viewpoint 3D display device includes: a face detection device configured to detect at least two users to obtain position information of the at least two users; and a priority logic circuit configured to determine a priority user based on the position information of the at least two users; the 3D processing device is configured to render the sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen based on the 3D signal according to the viewpoints where the priority user's eyes are located.
In some embodiments, the priority logic circuit is configured to rank the priorities of the at least two users based on the distances between the faces of the at least two users and the multi-viewpoint 3D display screen, and to determine the priority user according to the ranking result.
In some embodiments, the multi-viewpoint 3D display device further includes: an eye positioning device configured to obtain the viewpoints where the eyes of each of the at least two users are located; and a 3D processing device configured to, in response to a conflict between the viewpoints of the eyes of the priority user and of other users, trigger the multi-viewpoint 3D display screen, based on the 3D signal, to render the sub-pixels of the composite sub-pixels that correspond to the viewpoints where the priority user's eyes are located.
In some embodiments, a multi-viewpoint 3D display device is disclosed, including: a processor; and a memory storing program instructions; wherein the processor is configured to perform the method described above when executing the program instructions.
The computer-readable storage medium provided by the embodiments of the present disclosure stores computer-executable instructions configured to perform the above 3D image display method.
The computer program product provided by the embodiments of the present disclosure includes a computer program stored on a computer-readable storage medium; the computer program includes program instructions that, when executed by a computer, cause the computer to perform the above 3D image display method.
The 3D image display method for a multi-viewpoint 3D display screen and the multi-viewpoint 3D display device provided by the embodiments of the present disclosure, as well as the computer-readable storage medium and the computer program product, can achieve the following technical effects:
Eye positioning data is obtained in real time by the eye positioning device, so that the projection of the multiple viewpoints can be adjusted promptly according to viewing conditions, achieving highly flexible 3D display.
The general description above and the description below are exemplary and explanatory only and are not intended to limit this application.
Brief Description of the Drawings
One or more embodiments are illustrated by way of example with reference to the corresponding drawings; these illustrations and the drawings do not limit the embodiments. Elements with the same reference numerals in the drawings are shown as similar elements, the drawings are not drawn to scale, and wherein:
图1A至图1C是根据本公开实施例的多视点3D显示设备的示意图;
图2是根据本公开实施例的3D视频信号的图像;
图3A至图3B是根据本公开实施例的动态渲染子像素的示意图;
图4A至图4C是根据本公开实施例的多个用户眼部所处视点位置冲突情况下的子像素渲染;
图5是根据本公开的实施例的用于多视点3D显示屏的3D图像显示方法;
图6是根据本公开的实施例的多视点3D显示设备的示意图。
附图标记:
100:多视点3D显示设备;110:多视点3D显示屏;120:处理器;121:寄存器;130:3D处理装置;131:缓存器;140:3D信号接口;150:眼部定位装置;160:眼部定位数据接口;300:多视点3D显示设备;310:存储器;320:处理器;330:总线;340:通信接口;400:复合像素;410:红色复合子像素;420:绿色复合子像素;430:蓝色复合子像素;601:3D视频信号的图像之一;602:3D视频信号的图像之一。
Detailed Description
For a more detailed understanding of the features and technical content of the embodiments of the present disclosure, the implementation of the embodiments of the present disclosure is described in detail below with reference to the accompanying drawings, which are for reference and illustration only and are not intended to limit the embodiments of the present disclosure.
According to embodiments of the present disclosure, a multi-viewpoint 3D display device is provided, which defines a plurality of viewpoints and comprises a multi-viewpoint 3D display screen (for example: a multi-viewpoint naked-eye 3D display screen), a video signal interface, an eye positioning apparatus, and a 3D processing device. The multi-viewpoint 3D display screen comprises a plurality of composite pixels, each composite pixel comprises a plurality of composite sub-pixels, and each composite sub-pixel is composed of a plurality of sub-pixels corresponding in number to the viewpoints of the multi-viewpoint 3D display device. The video signal interface is configured to receive images of a 3D video signal. The eye positioning apparatus is configured to obtain eye positioning data. The 3D processing device is configured to, in response to a change in the distance between the user and the multi-viewpoint 3D display screen, trigger the multi-viewpoint 3D display screen to dynamically render the sub-pixels in the composite sub-pixels based on the 3D signal.
In some embodiments, each composite sub-pixel is composed of a plurality of same-color sub-pixels corresponding in number to the viewpoints of the multi-viewpoint 3D display device.
In some embodiments, the sub-pixels in each composite sub-pixel are in one-to-one correspondence with the viewpoints of the multi-viewpoint 3D display device.
In some embodiments, the multi-viewpoint 3D display device has at least 3 viewpoints, and each composite sub-pixel accordingly has at least 3 sub-pixels.
In some embodiments, the 3D signal is an image of a 3D video signal.
In some embodiments, the 3D processing device is communicatively connected to the multi-viewpoint 3D display screen. In some embodiments, the 3D processing device is communicatively connected to a driving device of the multi-viewpoint 3D display screen.
Fig. 1A shows a multi-viewpoint 3D display device 100 according to an embodiment of the present disclosure. As shown in Fig. 1A, the multi-viewpoint 3D display device 100 comprises a multi-viewpoint 3D display screen 110, a 3D processing device 130, a 3D signal interface 140 for receiving a 3D signal such as images of a 3D video signal, and a processor 120.
The multi-viewpoint 3D display screen 110 may comprise m columns and n rows (m×n) of composite pixels 400 and thus defines a display resolution of m×n. In some embodiments, the m×n display resolution may be a resolution of full high definition (FHD) or above, including but not limited to: 1920×1080, 1920×1200, 2048×1280, 2560×1440, 3840×2160, and the like.
Fig. 1A schematically shows one composite pixel 400 of the m×n composite pixels, comprising a red composite sub-pixel 410 composed of i=6 red sub-pixels R, a green composite sub-pixel 420 composed of i=6 green sub-pixels G, and a blue composite sub-pixel 430 composed of i=6 blue sub-pixels B. The multi-viewpoint 3D display device 100 has 6 viewpoints (V1-V6).
In some embodiments, each composite pixel is square. All the composite sub-pixels in each composite pixel may be arranged parallel to one another. The i sub-pixels in each composite sub-pixel may be arranged in a row.
In the embodiments of the present disclosure, each composite sub-pixel has respective sub-pixels corresponding to the viewpoints. The plurality of sub-pixels of each composite sub-pixel are arranged in a row in the transverse direction of the multi-viewpoint 3D display screen, and the sub-pixels in the row are of the same color. Since the plurality of viewpoints of the 3D display device are arranged roughly along the transverse direction of the multi-viewpoint 3D display screen, when the user moves so that the eyes are located at different viewpoints, the different sub-pixels corresponding to the respective viewpoints in each composite sub-pixel need to be dynamically rendered accordingly. Since the same-color sub-pixels in each composite sub-pixel are arranged in a row, cross-color problems caused by persistence of vision can be avoided. Furthermore, due to refraction by the grating, part of the currently displayed sub-pixel may be visible at an adjacent viewpoint position; with the same-color, same-row arrangement, even if part of the currently displayed sub-pixel is seen, no color mixing occurs.
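As an illustration of the composite-pixel structure just described, the following Python sketch models one composite pixel with i=6 same-color sub-pixels per composite sub-pixel and a one-to-one viewpoint mapping; the class and field names are hypothetical, chosen only for this example and not taken from the disclosure.

```python
from dataclasses import dataclass, field
from typing import List

NUM_VIEWPOINTS = 6  # i = 6 viewpoints V1-V6, as in the Fig. 1A example

@dataclass
class CompositeSubpixel:
    """A row of same-color sub-pixels, one per viewpoint (1:1 mapping)."""
    color: str                       # 'R', 'G' or 'B'
    levels: List[int] = field(default_factory=lambda: [0] * NUM_VIEWPOINTS)

    def render(self, viewpoint: int, level: int) -> None:
        """Light the sub-pixel that corresponds to a 1-based viewpoint."""
        self.levels[viewpoint - 1] = level

@dataclass
class CompositePixel:
    """One composite pixel = one red, one green and one blue composite
    sub-pixel; an m x n grid of these defines the display resolution."""
    r: CompositeSubpixel = field(default_factory=lambda: CompositeSubpixel('R'))
    g: CompositeSubpixel = field(default_factory=lambda: CompositeSubpixel('G'))
    b: CompositeSubpixel = field(default_factory=lambda: CompositeSubpixel('B'))

pixel = CompositePixel()
pixel.r.render(viewpoint=2, level=255)  # light R2 for an eye at viewpoint V2
print(pixel.r.levels)                   # [0, 255, 0, 0, 0, 0]
```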
In some embodiments, the 3D processing device is an FPGA or ASIC chip, or an FPGA or ASIC chipset. As in the embodiment shown in Fig. 1A, the 3D processing device 130 may optionally further comprise a buffer 131 for buffering the received images of the 3D video signal.
The multi-viewpoint 3D display device 100 may further comprise a processor 120 communicatively connected to the 3D processing device 130 via the 3D signal interface 140. In some embodiments, the processor 120 is included in, or serves as a processor unit of, a computer or an intelligent terminal, such as a mobile terminal.
In some embodiments, the 3D signal interface 140 is an internal interface connecting the processor 120 with the 3D processing device 130. Such a multi-viewpoint 3D display device 100 may be, for example, a mobile terminal, and the video signal interface 140 may be a MIPI, mini-MIPI, LVDS, min-LVDS, or Display Port interface.
In some embodiments, as shown in Fig. 1A, the processor 120 of the multi-viewpoint 3D display device 100 may further comprise a register 121. The register 121 may be configured to temporarily store instructions, data, and addresses.
In some embodiments, the multi-viewpoint 3D display device further comprises an eye positioning apparatus or an eye positioning data interface configured to obtain eye positioning data. For example, in the embodiment shown in Fig. 1B, the multi-viewpoint 3D display device 100 comprises an eye positioning apparatus 150 communicatively connected to the 3D processing device 130, whereby the 3D processing device 130 can directly receive eye positioning data. In the embodiment shown in Fig. 1C, an eye positioning apparatus (not shown) may, for example, be directly connected to the processor 120, while the 3D processing device 130 obtains eye positioning data from the processor 120 via an eye positioning data interface 160. In other embodiments, the eye positioning apparatus may be connected to both the processor and the 3D processing device, so that, on the one hand, the 3D processing device 130 can obtain eye positioning data directly from the eye positioning apparatus and, on the other hand, other information obtained by the eye positioning apparatus can be processed by the processing unit.
In some embodiments, the eye positioning data comprises eye spatial position information indicating the spatial position of the user's eyes. The eye spatial position information may be expressed in the form of three-dimensional coordinates, for example including distance information between the user's eyes/face and the multi-viewpoint 3D display screen or the eye positioning apparatus (i.e., depth information of the user's eyes/face), position information of the viewing eyes/face in the transverse direction of the multi-viewpoint 3D display screen or the eye positioning apparatus, and position information of the user's eyes/face in the vertical direction of the multi-viewpoint 3D display screen or the eye positioning apparatus. The eye spatial position may also be expressed in the form of two-dimensional coordinates containing any two of the distance information, the transverse position information, and the vertical position information. The eye positioning data may further comprise the viewpoints (viewpoint positions) at which the user's eyes (e.g., both eyes) are located, the user's viewing angle, and the like.
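To make the use of the eye spatial position information concrete, here is a minimal sketch that maps one eye's transverse offset and depth to the nearest viewpoint index. The 30° viewing cone and the even angular spacing of the viewpoints are illustrative assumptions, not parameters given in the disclosure, and the function name is hypothetical.

```python
import math

NUM_VIEWPOINTS = 6
VIEW_CONE_DEG = 30.0  # hypothetical total angular width spanned by V1-V6

def viewpoint_from_eye_position(x_mm: float, z_mm: float) -> int:
    """Map an eye's transverse offset x and depth z (both relative to the
    center of the screen / eye positioning apparatus) to the nearest
    viewpoint; even angular spacing is an illustrative assumption."""
    angle = math.degrees(math.atan2(x_mm, z_mm))   # signed viewing angle
    half = VIEW_CONE_DEG / 2.0
    angle = max(-half, min(half, angle))           # clamp into the cone
    step = VIEW_CONE_DEG / (NUM_VIEWPOINTS - 1)    # angle between viewpoints
    return int(round((angle + half) / step)) + 1   # 1-based index V1..V6

# e.g. a left eye 32.5 mm left of center at 600 mm depth
print(viewpoint_from_eye_position(x_mm=-32.5, z_mm=600.0))
```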
In some embodiments, the eye positioning apparatus comprises an eye positioner configured to capture an image of the user (e.g., an image of the user's face), an eye positioning image processor configured to determine the eye spatial position based on the captured user image, and an eye positioning data interface configured to transmit the eye spatial position information. The eye spatial position information indicates the eye spatial position.
In some embodiments, the eye positioner comprises a first camera configured to capture a first image and a second camera configured to capture a second image, and the eye positioning image processor is configured to identify the presence of eyes based on at least one of the first image and the second image and to determine the eye spatial position based on the identified eyes.
In some embodiments, the eye positioner comprises at least one camera configured to capture at least one image and a depth detector configured to obtain eye depth information of the user, and the eye positioning image processor is configured to identify the presence of eyes based on the captured at least one image and to determine the eye spatial position based on the identified eyes and the eye depth information.
The transmission and display of images of a 3D video signal within the multi-viewpoint 3D display device 100 according to embodiments of the present disclosure are described below with reference to Fig. 2. As described above, the multi-viewpoint 3D display device 100 may define a plurality of viewpoints. The user's eyes, at the spatial positions corresponding to the viewpoints, can see the display of the corresponding sub-pixels in the composite sub-pixels of each composite pixel 400 in the multi-viewpoint 3D display screen 110. The two different images seen by the user's two eyes at different viewpoints form parallax, which is synthesized into a 3D image in the brain.
In some embodiments of the present disclosure, the 3D processing device 130 receives, for example, images of a decompressed 3D video signal from the processor 120 via the 3D signal interface 140, for example as an internal interface. Each image may be, or be composed of, two images or a composite image.
In some embodiments, the two images or the composite image may be of different types and may be arranged in various forms.
As shown in Fig. 2, the images of the 3D video signal are, or are composed of, two side-by-side images 601, 602. In some embodiments, the two images may respectively be a left-eye parallax image and a right-eye parallax image. In some embodiments, the two images may respectively be a rendered color image and a depth image.
In some embodiments, the images of the 3D video signal are an interleaved composite image. In some embodiments, the composite image may be an interleaved left-eye and right-eye parallax composite image, or an interleaved rendered color and depth composite image.
In some embodiments, after receiving the two images 601, 602 of the 3D video signal, at least one 3D processing device 130 triggers the multi-viewpoint 3D display screen to render at least one sub-pixel in each composite sub-pixel based on one of the two images, and triggers the multi-viewpoint 3D display screen to render at least one other sub-pixel in each composite sub-pixel based on the other of the two images.
In other embodiments, after receiving the composite image, at least one 3D processing device triggers the multi-viewpoint 3D display screen to render at least two sub-pixels in each composite sub-pixel based on the composite image. For example, at least one sub-pixel in the composite sub-pixels is rendered based on a first image (portion) of the composite image, and at least one other sub-pixel in the composite sub-pixels is rendered based on a second image (portion).
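The rendering of the two parallax images into sub-pixels might look like the following sketch, in which each composite sub-pixel is modeled as a list of six intensity levels indexed by viewpoint; the data layout and function names are assumptions made for illustration, not the disclosed implementation.

```python
NUM_VIEWPOINTS = 6

def blank_screen(rows: int, cols: int):
    """Each composite pixel holds one R, G and B composite sub-pixel,
    modeled here as six intensity levels indexed by viewpoint."""
    return [[{c: [0] * NUM_VIEWPOINTS for c in 'RGB'} for _ in range(cols)]
            for _ in range(rows)]

def render_frame(left_img, right_img, screen, left_vp: int, right_vp: int):
    """Drive the sub-pixel at the left-eye viewpoint from the left parallax
    image and the sub-pixel at the right-eye viewpoint from the right
    parallax image (viewpoint indices are 1-based)."""
    for r, row in enumerate(left_img):
        for c, left_rgb in enumerate(row):
            right_rgb = right_img[r][c]
            for channel, lv, rv in zip('RGB', left_rgb, right_rgb):
                sub = screen[r][c][channel]
                sub[left_vp - 1] = lv
                sub[right_vp - 1] = rv

screen = blank_screen(2, 2)                  # toy 2x2 composite-pixel screen
left = [[(255, 0, 0), (255, 0, 0)], [(255, 0, 0), (255, 0, 0)]]
right = [[(0, 0, 255), (0, 0, 255)], [(0, 0, 255), (0, 0, 255)]]
render_frame(left, right, screen, left_vp=2, right_vp=4)
print(screen[0][0]['R'])   # [0, 255, 0, 0, 0, 0] -> R2 lit from the left image
```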
Based on real-time eye positioning data, the 3D processing device can trigger the multi-viewpoint 3D display screen to dynamically render the relevant sub-pixels in each composite sub-pixel of the multi-viewpoint 3D display screen, so as to adapt to changes in the viewing conditions. In the embodiments of the present disclosure, dynamically rendering the relevant sub-pixels in each composite sub-pixel covers rendering the relevant sub-pixels in all composite sub-pixels of substantially the entire display screen, or covers the case in which there is a process error between the sub-pixels of each composite sub-pixel and the viewpoints, or covers both.
Fig. 3A schematically shows one embodiment of dynamic rendering by the 3D processing device, taking the dynamic rendering of the red sub-pixels R in a red composite sub-pixel as an example; the dynamic rendering of sub-pixels in composite sub-pixels of other colors follows by analogy. As shown in Fig. 3A, when the user moves in the direction indicated by the solid arrow, away from the multi-viewpoint 3D display screen, the eye positioning apparatus determines, based on the detected spatial position information (data) of the user's eyes or face, that the distance between the user and the multi-viewpoint 3D display screen has increased, or the 3D processing device determines, based on the spatial position information (data) of the user's eyes or face detected by the eye positioning apparatus, that the distance between the user and the multi-viewpoint 3D display screen has increased. In response to the increase of the distance, the 3D processing device triggers the multi-viewpoint 3D display screen to dynamically render the red sub-pixels in the red composite sub-pixel, for example at least two red sub-pixels, in such a way that they move closer to each other. As shown in Fig. 3A, in response to the increase of the distance between the user and the multi-viewpoint 3D display screen, the 3D processing device triggers the multi-viewpoint 3D display screen to switch dynamically from rendering the red sub-pixels R1, R5 associated with the initial viewpoint positions of the user's two eyes to rendering the red sub-pixels R2, R4 associated with the subsequent viewpoint positions of the user's two eyes. The dynamic rendering in which the sub-pixels move closer to each other is indicated by the dashed arrows in Fig. 3A. The red sub-pixels R2, R4 are closer to each other than the red sub-pixels R1, R5. The rendered sub-pixels in a composite sub-pixel are the sub-pixels corresponding to the viewpoint positions of the eyes as determined from the eye positioning data.
When the user continues to move away from the multi-viewpoint 3D display screen, so that the distance between the user and the multi-viewpoint 3D display screen keeps increasing, the 3D processing device triggers the multi-viewpoint 3D display screen to continue dynamically rendering the sub-pixels in each composite sub-pixel, for example at least two sub-pixels, in such a way that they move closer to each other. This can result in the rendered sub-pixels in a composite sub-pixel approaching each other until they coincide at the same sub-pixel, or being about to do so; that is, the same sub-pixel in the composite sub-pixel is rendered or is about to be rendered. For example, still referring to Fig. 3A, as the distance between the user and the multi-viewpoint 3D display screen keeps increasing, the red sub-pixels to be rendered next will continue to approach each other relative to the red sub-pixels R2, R4 until they coincide at the red sub-pixel R3. That is, corresponding to the viewpoint positions of the user's two eyes, when the user moves away from the multi-viewpoint 3D display screen to a certain distance, the same sub-pixel R3 may have to be rendered to form both the left-eye parallax image and the right-eye parallax image. In the case of dynamically rendering the sub-pixels in the composite sub-pixels such that they move closer to each other, when the same sub-pixel in a composite sub-pixel is to be rendered, the 3D processing device triggers the multi-viewpoint 3D display screen to switch to 2D display. In other words, in some embodiments of the present disclosure, whether the dynamically rendered sub-pixels are approaching each other or about to coincide may be determined from the viewpoint change caused by the distance change. In some embodiments, it may be determined simply by detecting the change in distance, for example where there is a correspondence between the distance and the dynamically rendered sub-pixels, e.g., switching to 2D outside or inside a predetermined distance threshold, as described in some embodiments below.
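A minimal sketch of the convergence rule just described, assuming the eye positioning data has already been reduced to two 1-based viewpoint indices; the function name is hypothetical, and the switch-to-2D condition fires when both eyes map to the same sub-pixel, as in the R3 example of Fig. 3A.

```python
def mode_for_converging_eyes(left_vp: int, right_vp: int) -> str:
    """As the user moves away, successive eye positioning samples map the
    two eyes to viewpoints that approach each other (R1/R5 -> R2/R4 in
    Fig. 3A); once both eyes fall on the same sub-pixel, render in 2D."""
    return '2D' if left_vp == right_vp else '3D'

print(mode_for_converging_eyes(1, 5))  # 3D, rendered at R1/R5
print(mode_for_converging_eyes(2, 4))  # 3D, rendered at R2/R4
print(mode_for_converging_eyes(3, 3))  # eyes coincide at R3 -> 2D
```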
Fig. 3B schematically shows another embodiment of dynamic rendering by the 3D processing device, again taking the dynamic rendering of the red sub-pixels R in a red composite sub-pixel as an example; the dynamic rendering of sub-pixels in composite sub-pixels of other colors follows by analogy. As shown in Fig. 3B, when the user moves in the direction indicated by the solid arrow, toward the multi-viewpoint 3D display screen, the eye positioning apparatus determines, based on the detected spatial position information (data) of the user's eyes or face, that the distance between the user and the multi-viewpoint 3D display screen has decreased, or the 3D processing device determines, based on the spatial position information (data) of the user's eyes or face detected by the eye positioning apparatus, that the distance between the user and the multi-viewpoint 3D display screen has decreased. In response to the decrease of the distance, the 3D processing device triggers the multi-viewpoint 3D display screen to dynamically render the red sub-pixels in the red composite sub-pixel, for example at least two red sub-pixels, in such a way that they move away from each other. As shown in Fig. 3B, in response to the decrease of the distance between the user and the multi-viewpoint 3D display screen, the 3D processing device triggers the multi-viewpoint 3D display screen to switch dynamically from rendering the red sub-pixels R2, R4 associated with the initial viewpoint positions of the user's two eyes to rendering the red sub-pixels R1, R5 associated with the subsequent viewpoint positions of the user's two eyes. The dynamic rendering in which the sub-pixels move away from each other is indicated by the dashed arrows in Fig. 3B. The red sub-pixels R1, R5 are farther apart from each other than the red sub-pixels R2, R4. The rendered sub-pixels in a composite sub-pixel are the sub-pixels corresponding to the viewpoint positions of the eyes as determined from the eye positioning data.
When the user continues to approach the multi-viewpoint 3D display screen, so that the distance between the user and the multi-viewpoint 3D display screen keeps decreasing, the 3D processing device triggers the multi-viewpoint 3D display screen to continue dynamically rendering the sub-pixels in each composite sub-pixel, for example at least two sub-pixels, in such a way that they move away from each other. This can result in the rendered sub-pixels in a composite sub-pixel moving apart until they finally fall, or are about to fall, beyond the outermost sub-pixels of the corresponding composite sub-pixel. For example, still referring to Fig. 3B, as the distance between the user and the multi-viewpoint 3D display screen keeps decreasing, the red sub-pixels to be rendered next will continue to move apart relative to the red sub-pixels R1, R5 until they are about to pass beyond the outermost sub-pixels R1, R6 of the red composite sub-pixel. That is, corresponding to the viewpoint positions of the user's two eyes, when the user approaches the multi-viewpoint 3D display screen to a certain distance, there may be no sub-pixel in at least some of the composite sub-pixels that corresponds to the user's current viewpoint. In the case of dynamically rendering the sub-pixels in the composite sub-pixels such that they move away from each other, when the rendering would go beyond at least one outermost sub-pixel of a composite sub-pixel, the 3D processing device triggers the multi-viewpoint 3D display screen to switch to 2D display. Similarly, in some embodiments of the present disclosure, whether the dynamically rendered sub-pixels have moved apart beyond, or are about to move beyond, the outermost sub-pixels of the corresponding composite sub-pixel may be determined from the viewpoint change caused by the distance change. In some embodiments, it may be determined simply by detecting the change in distance, for example where there is a correspondence between the distance and the dynamically rendered sub-pixels, e.g., switching to 2D outside or inside a predetermined distance threshold, as described in some embodiments below.
It is conceivable that exceeding the outermost sub-pixel of the corresponding composite sub-pixel may involve exceeding on one side, for example beyond the outermost sub-pixel R1 or R6 of the red composite sub-pixel shown in Fig. 3B, or exceeding on both sides, for example beyond the outermost sub-pixels R1 and R6 of the red composite sub-pixel shown in Fig. 3B. When either of these two cases occurs, the 3D processing device plays the images of the 3D video signal in 2D form.
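The complementary divergence case can be sketched the same way: with six viewpoints, the screen would fall back to 2D as soon as a required sub-pixel lies beyond the outermost sub-pixel on either side (the R1/R6 boundary of Fig. 3B). The boundary test below is an illustrative reading of that rule, with a hypothetical function name.

```python
NUM_VIEWPOINTS = 6

def mode_for_diverging_eyes(left_vp: int, right_vp: int) -> str:
    """As the user approaches, the required sub-pixels move apart
    (R2/R4 -> R1/R5 in Fig. 3B); once a required sub-pixel would lie
    beyond R1 or R6 -- on one side or on both -- render in 2D."""
    out_left = left_vp < 1
    out_right = right_vp > NUM_VIEWPOINTS
    return '2D' if out_left or out_right else '3D'

print(mode_for_diverging_eyes(2, 4))  # 3D, rendered at R2/R4
print(mode_for_diverging_eyes(1, 5))  # 3D, rendered at R1/R5
print(mode_for_diverging_eyes(0, 6))  # beyond the outermost sub-pixel -> 2D
```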
In some embodiments, the multi-viewpoint 3D display device defines a first distance threshold. The threshold may be preset when the multi-viewpoint 3D display device leaves the factory. When the distance between the user and the multi-viewpoint 3D display screen is less than the first distance threshold, the 3D processing device plays the images of the 3D video signal in 2D form.
In some embodiments, the multi-viewpoint 3D display device defines a second distance threshold, the second distance threshold being greater than the first distance threshold. The threshold may be preset when the multi-viewpoint 3D display device leaves the factory. When the distance between the user and the multi-viewpoint 3D display screen is greater than the second distance threshold, the 3D processing device plays the images of the 3D video signal in 2D form.
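Taken together, the two factory-preset thresholds define a distance window for 3D playback. A sketch follows; the concrete millimeter values are invented for illustration and would in practice be preset at the factory.

```python
FIRST_DISTANCE_THRESHOLD_MM = 400.0    # invented value; preset at the factory
SECOND_DISTANCE_THRESHOLD_MM = 1500.0  # invented value; must exceed the first

def display_mode(distance_mm: float) -> str:
    """Play the images of the 3D video signal in 2D form outside the
    preset distance window, and in 3D inside it."""
    if distance_mm < FIRST_DISTANCE_THRESHOLD_MM:
        return '2D'  # too close: sub-pixels would leave the composite row
    if distance_mm > SECOND_DISTANCE_THRESHOLD_MM:
        return '2D'  # too far: left and right sub-pixels would coincide
    return '3D'

for d in (300.0, 800.0, 2000.0):
    print(d, display_mode(d))
```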
In some embodiments, the multi-viewpoint 3D display device may further comprise a position detection apparatus for detecting position information of the user, the position information of the user including, for example, the spatial position of the user and the distance between the user and the multi-viewpoint 3D display screen. The position detection apparatus may, for example, be a face detection apparatus for obtaining position information of the user's face. The face position information may, for example, include spatial position information of the user's face relative to the multi-viewpoint 3D display screen, such as the distance between the user's face and the multi-viewpoint 3D display screen and the viewing angle of the user's face relative to the multi-viewpoint 3D display screen. The face detection apparatus may have a visual recognition function, for example a face recognition function, and may detect the user's face information (such as facial features), for example detecting the face information of all users in front of the multi-viewpoint 3D display screen. The face detection apparatus may be connected to, or integrated into, the eye positioning apparatus, or may be connected to the 3D processing device, so as to transmit the detected face information. The face detection apparatus may be provided as an independent apparatus, may be integrated with the eye positioning apparatus, for example integrated together with the eye positioning apparatus in the processor, or may be integrated into other components or units of the multi-viewpoint 3D display device having similar functions.
In some embodiments, the multi-viewpoint 3D display device may further comprise a priority logic circuit. The priority logic circuit determines a priority user, or sorts the priorities of the users, based on the position information of at least two users (for example, the face position information of the users). The priority logic circuit may determine a priority user among the at least two users, or sort the priorities of the at least two users, based on the respective position information of the at least two users (for example, the respective face position information of the two users) obtained by the position detection apparatus (for example, the face detection apparatus). The priority user, or the priority order of the users, may be determined or ranked based on the distances between the detected at least two users and the multi-viewpoint 3D display screen, for example by detecting the distances between the faces or eyes of the at least two users and the multi-viewpoint 3D display screen. A sketch of this ranking follows this paragraph.
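The priority ranking performed by the priority logic circuit can be sketched as a simple sort over the face-to-screen distances reported by the face detection apparatus; the dictionary-based interface and function name are assumptions made for this example.

```python
def rank_users(face_distances_mm):
    """Sort detected users by face-to-screen distance, nearest first, and
    take the nearest one as the priority user; the argument maps a user
    id to the distance reported by the face detection apparatus."""
    ranking = sorted(face_distances_mm, key=face_distances_mm.get)
    return ranking[0], ranking

priority, ranking = rank_users({'a': 550.0, 'b': 900.0})
print(priority, ranking)   # user a is nearer, so user a is the priority user
```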
The eye positioning apparatus can detect in real time the viewpoint positions at which the eyes of each of the at least two users are located. In response to a conflict between the viewpoint positions of the eyes of the priority user, or of the user with the higher priority among the at least two users, and those of the other users, the 3D processing device renders, based on the images of the 3D video signal, the sub-pixels in each composite sub-pixel corresponding to the viewpoint positions at which the two eyes of the priority user, or of the user with the higher priority, are located.
Fig. 4A schematically shows one embodiment in which the 3D processing device renders sub-pixels based on the determined priority user or the user with the higher priority, taking the rendering of the red sub-pixels R in a red composite sub-pixel as an example; the rendering of sub-pixels in composite sub-pixels of other colors follows by analogy. The face detection apparatus detects that the distance between the face of user a and the multi-viewpoint 3D display screen is smaller than the distance between the face of user b and the multi-viewpoint 3D display screen, and the priority determination unit determines user a as the priority user or ranks user a at the higher priority. The eye positioning apparatus detects in real time the viewpoint positions of the eyes of user a and user b. The viewpoint position of user a's right eye conflicts with the viewpoint position of user b's left eye. The 3D processing device generates the images of the viewpoints corresponding to user a's two eyes based on the images of the 3D video signal and renders the red sub-pixels R2, R4 in the red composite sub-pixel corresponding to the viewpoint positions of user a's two eyes, playing a 3D effect to user a. The 3D processing device may also render, based on the image of the viewpoint corresponding to user b's left eye (which is the same as the image of the viewpoint corresponding to user a's right eye), the red sub-pixel R6 in the red composite sub-pixel corresponding to the viewpoint position of user b's right eye. User b's two eyes see the same image, and the 3D processing device plays a 2D effect to user b.
Fig. 4B schematically shows another embodiment in which the 3D processing device renders sub-pixels based on the determined priority user or the user with the higher priority, taking the rendering of the sub-pixels R in a red composite sub-pixel as an example; the rendering of sub-pixels in other composite sub-pixels follows by analogy. The face detection apparatus detects that the distance between the face of user a and the multi-viewpoint 3D display screen is smaller than the distance of the face of user b relative to the multi-viewpoint 3D display screen, and the priority determination unit determines user a as the priority user or ranks user a at the higher priority. The eye positioning apparatus detects in real time the viewpoint positions of the eyes of user a and user b. The viewpoint position of user a's left eye conflicts with the viewpoint position of user b's left eye. The 3D processing device generates the images of the viewpoints corresponding to user a's two eyes based on the images of the 3D video signal and renders the red sub-pixels R2, R4 in the red composite sub-pixel corresponding to the viewpoint positions of user a's two eyes, playing a 3D effect to user a. The 3D processing device may also generate, based on the images of the 3D video signal, the image of the viewpoint corresponding to user b's right eye (which is the same as the image of the viewpoint corresponding to user a's right eye) and render the red sub-pixel R6 in the red composite sub-pixel corresponding to the viewpoint position of user b's right eye. User b's two eyes see different images, and the 3D processing device plays a 3D effect to user b.
Fig. 4C schematically shows yet another embodiment in which the 3D processing device renders sub-pixels based on the determined priority user or the user with the higher priority, taking the rendering of the sub-pixels R in a red composite sub-pixel as an example; the rendering of sub-pixels in other composite sub-pixels follows by analogy. The face detection apparatus detects that the distance between the face of user a and the multi-viewpoint 3D display screen is smaller than the distance between the face of user b and the multi-viewpoint 3D display screen, and the priority determination unit determines user a as the priority user or ranks user a at the higher priority. The eye positioning apparatus detects in real time the viewpoint positions of the eyes of user a and user b. The viewpoint position of user a's left eye conflicts with that of user b's left eye, and the viewpoint position of user a's right eye conflicts with that of user b's right eye. The 3D processing device generates the images of the viewpoints corresponding to user a's two eyes based on the images of the 3D video signal and renders the red sub-pixels R2, R4 in the red composite sub-pixel corresponding to the viewpoint positions of user a's two eyes, playing a 3D effect to user a. User b can see the same 3D effect at the same time.
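The three conflict cases of Figs. 4A to 4C can be summarized as the following decision sketch, with each user's eyes reduced to a pair of viewpoint indices; the function name and return strings merely label the outcomes, and the ordering of the checks is an illustrative reading of the disclosure rather than the disclosed implementation.

```python
def resolve_conflict(prio_eyes, other_eyes):
    """Decide what the non-priority user sees when eye viewpoints collide.
    prio_eyes / other_eyes are (left_vp, right_vp) pairs; the priority
    user always receives the sub-pixels for both of their viewpoints."""
    pl, pr = prio_eyes
    ol, o_r = other_eyes
    if (pl, pr) == (ol, o_r):
        # Fig. 4C: both eyes collide -> the other user shares the 3D effect
        return 'both-eye conflict: shared 3D'
    if pr == ol or pl == o_r:
        # Fig. 4A: opposite eyes collide -> the other user's free eye is fed
        # the same viewpoint image, so the other user sees a 2D effect
        return 'opposite-eye conflict: other user sees 2D'
    if pl == ol or pr == o_r:
        # Fig. 4B: same-side eyes collide -> the other user's free eye gets
        # its own viewpoint image, so the other user still sees 3D
        return 'same-eye conflict: other user sees 3D'
    return 'no conflict: render both users independently'

print(resolve_conflict((2, 4), (4, 6)))   # Fig. 4A
print(resolve_conflict((2, 4), (2, 6)))   # Fig. 4B
print(resolve_conflict((2, 4), (2, 4)))   # Fig. 4C
```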
Embodiments of the present disclosure also provide a 3D image display method for the multi-viewpoint 3D display screen described above. As shown in Fig. 5, the multi-viewpoint 3D display method comprises:
S10, obtaining the distance between a user and the multi-viewpoint 3D display screen; and
S20, in response to a change in the distance between the user and the multi-viewpoint 3D display screen, dynamically rendering sub-pixels in composite sub-pixels of the multi-viewpoint 3D display screen based on a 3D signal.
In some embodiments, the multi-viewpoint 3D display method comprises:
S100, transmitting a 3D signal;
S200, obtaining the distance between the user's eyes or face and the multi-viewpoint 3D display screen; and
S300, in response to a change in the distance between the user's eyes or face and the multi-viewpoint 3D display screen, dynamically rendering sub-pixels in composite sub-pixels of the multi-viewpoint 3D display screen based on the 3D signal.
In some embodiments, the 3D signal comprises images of a 3D video signal.
In some embodiments, dynamically rendering the sub-pixels in the composite sub-pixels of the multi-viewpoint 3D display screen based on the 3D signal comprises: in response to the distance between the user's eyes or face and the multi-viewpoint 3D display screen increasing, dynamically rendering the sub-pixels in the composite sub-pixels such that they move closer to each other.
In some embodiments, dynamically rendering the sub-pixels in the composite sub-pixels of the multi-viewpoint 3D display screen based on the 3D signal further comprises: dynamically rendering the sub-pixels in the composite sub-pixels such that they move closer to each other, and switching to 2D display when the same sub-pixel in a composite sub-pixel is rendered.
In some embodiments, dynamically rendering the sub-pixels in the composite sub-pixels of the multi-viewpoint 3D display screen based on the 3D signal comprises: in response to the distance between the user's eyes or face and the multi-viewpoint 3D display screen decreasing, dynamically rendering the sub-pixels in each composite sub-pixel such that they move away from each other.
In some embodiments, dynamically rendering the sub-pixels in the composite sub-pixels of the multi-viewpoint 3D display screen based on the 3D signal further comprises: dynamically rendering the sub-pixels in the composite sub-pixels such that they move away from each other, and switching to 2D display when the outermost sub-pixel of the composite sub-pixel is exceeded.
In some embodiments, a multi-viewpoint 3D display device comprising the multi-viewpoint 3D display screen defines a first distance threshold, and the multi-viewpoint 3D display method further comprises: in response to the distance between the user's eyes or face and the multi-viewpoint 3D display screen being less than the first distance threshold, switching to 2D display.
In some embodiments, the multi-viewpoint 3D display device defines a second distance threshold, the second distance threshold being greater than the first distance threshold, and the multi-viewpoint 3D display method further comprises: in response to the distance between the user's eyes or face and the multi-viewpoint 3D display screen being greater than the second distance threshold, switching to 2D display.
In some embodiments, the display method further comprises:
detecting at least two users to obtain the respective position information of the at least two users, for example detecting the respective face or eye position information of the at least two users, such as the distances between the faces or eyes of each of the at least two users and the multi-viewpoint 3D display screen; and
determining a priority user based on the position information of the at least two users, or sorting the priorities of the at least two users. For example, the respective face or eye spatial position information (spatial coordinates) of the at least two users is obtained, the distances between the faces or eyes of each of the at least two users and the multi-viewpoint 3D display screen are computed based on the spatial position information, and the computed distances are compared, so that the user closest to the multi-viewpoint 3D display screen is taken as the priority user or the user with the higher priority, or the user located between the first threshold and the second threshold of the multi-viewpoint 3D display device is taken as the priority user or the user with the higher priority.
In some embodiments, determining the priority user, or sorting the priorities of the at least two users, is based on the distances of the respective faces of the at least two users relative to the multi-viewpoint 3D display screen. For example, the user whose face is at the smaller distance relative to the multi-viewpoint 3D display screen is determined as the priority user or the user with the higher priority.
In some embodiments, in response to a conflict between the viewpoint positions of the eyes of the priority user, or of the user with the higher priority, and those of other users, the sub-pixels in each composite sub-pixel corresponding to the viewpoint positions at which the two eyes of the priority user, or of the user with the higher priority, are located are rendered based on the images of the 3D video signal.
When the viewpoint position of one of the eyes of the priority user or higher-priority user conflicts with the viewpoint position of one of the eyes of another user, and the conflicting eye of the priority user or higher-priority user is opposite to the conflicting eye of the other user — for example, the viewpoint position of the left eye of the priority user or higher-priority user conflicts with the viewpoint position of the right eye of the other user, or the viewpoint position of the right eye of the priority user or higher-priority user conflicts with the viewpoint position of the left eye of the other user — the images of the viewpoints corresponding to the two eyes of the priority user or higher-priority user are generated based on the images of the 3D video signal, and the sub-pixels in each composite sub-pixel corresponding to the viewpoint positions of the two eyes of the priority user or higher-priority user are rendered, playing a 3D effect to the priority user or higher-priority user. The sub-pixels in each composite sub-pixel corresponding to the viewpoint position of the non-conflicting eye of the other user may also be rendered based on the image of the viewpoint corresponding to the conflicting eye of the other user. The conflicting eye and the non-conflicting eye of the other user then see the same image, and the other user sees a 2D effect.
When the viewpoint position of one of the eyes of the priority user or higher-priority user conflicts with the viewpoint position of one of the eyes of another user, and the conflicting eyes of the priority user or higher-priority user and of the other user are not opposite — for example, the viewpoint position of the left eye of the priority user or higher-priority user conflicts with the viewpoint position of the left eye of the other user, or the viewpoint position of the right eye of the priority user or higher-priority user conflicts with the viewpoint position of the right eye of the other user — the images of the viewpoints corresponding to the two eyes of the priority user or higher-priority user are generated based on the images of the 3D video signal, and the sub-pixels in each composite sub-pixel corresponding to the viewpoint positions of the two eyes of the priority user or higher-priority user are rendered, playing a 3D effect to the priority user or higher-priority user. The image of the viewpoint corresponding to the non-conflicting eye of the other user may also be generated based on the images of the 3D video signal, and the sub-pixels in each composite sub-pixel corresponding to the viewpoint position of the non-conflicting eye of the other user may be rendered. The image of the viewpoint corresponding to the non-conflicting eye of the other user differs from the image of the viewpoint corresponding to the conflicting eye, so the other user sees a 3D effect.
When the viewpoint positions of both eyes of the priority user or higher-priority user conflict with the viewpoint positions of both eyes of another user, the images of the viewpoints corresponding to the two eyes of the priority user or higher-priority user are generated based on the images of the 3D video signal, and the sub-pixels in each composite sub-pixel corresponding to the viewpoint positions of the two eyes of the priority user or higher-priority user are rendered, playing a 3D effect jointly to the priority user or higher-priority user and to the other users whose two-eye viewpoint positions conflict.
Embodiments of the present disclosure provide a multi-viewpoint 3D display device 300. Referring to Fig. 6, the multi-viewpoint 3D display device 300 comprises a processor 320 and a memory 310. In some embodiments, the multi-viewpoint 3D display device 300 may further comprise a communication interface 340 and a bus 330. The processor 320, the communication interface 340, and the memory 310 communicate with one another via the bus 330. The communication interface 340 may be configured to transmit information. The processor 320 can invoke logic instructions in the memory 310 to execute the 3D image display method of the above embodiments.
Furthermore, the above logic instructions in the memory 310 may be implemented in the form of software functional units and, when sold or used as an independent product, may be stored in a computer-readable storage medium.
As a computer-readable storage medium, the memory 310 may be used to store software programs and computer-executable programs, such as the program instructions/modules corresponding to the methods in the embodiments of the present disclosure. The processor 320 executes functional applications and performs data processing by running the program instructions/modules stored in the memory 310, i.e., implements the 3D image display method in the above embodiments.
The memory 310 may comprise a program storage area and a data storage area, wherein the program storage area may store an operating system and application programs required for at least one function, and the data storage area may store data created according to the use of the terminal device, and the like. In addition, the memory 310 may comprise a high-speed random access memory and may also comprise a non-volatile memory.
The computer-readable storage medium provided by the embodiments of the present disclosure stores computer-executable instructions, the computer-executable instructions being configured to execute the 3D image display method described above.
The computer program product provided by the embodiments of the present disclosure comprises a computer program stored on a computer-readable storage medium; the computer program comprises program instructions which, when executed by a computer, cause the computer to execute the 3D image display method described above.
The technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product. The computer software product is stored in a storage medium and comprises one or more instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods of the embodiments of the present disclosure. The aforementioned storage medium may be a non-transitory storage medium, including various media capable of storing program code such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc, and may also be a transitory storage medium.
The above description and the drawings sufficiently illustrate the embodiments of the present disclosure to enable those skilled in the art to practice them. Other embodiments may include structural, logical, electrical, procedural, and other changes. Unless expressly required, individual components and functions are optional, and the order of operations may vary. Portions and features of some embodiments may be included in, or substituted for, those of other embodiments. The scope of the embodiments of the present disclosure includes the full scope of the claims, as well as all available equivalents of the claims. The terms used in this application are used only to describe the embodiments and not to limit the claims. As used in this application, the term "comprise" and the like denote the presence of at least one of the stated features but do not exclude the presence of other features.
Those skilled in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Those skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the embodiments of the present disclosure.
In the embodiments disclosed herein, the disclosed methods and products (including but not limited to apparatuses and devices) may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division of units may be only a division by logical function, and there may be other ways of division in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units. Some or all of the units may be selected according to actual needs to implement the present embodiments. In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
The flowcharts and block diagrams in the drawings show the possible architectures, functions, and operations of systems, methods, and computer program products according to embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code, which contains one or more executable instructions for implementing the specified logical function. In some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. In the descriptions corresponding to the flowcharts and block diagrams in the drawings, the operations or steps corresponding to different blocks may also occur in an order different from that disclosed in the description, and sometimes there is no specific order between different operations or steps. For example, two consecutive operations or steps may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. Each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a special-purpose hardware-based system that performs the specified functions or actions, or by a combination of special-purpose hardware and computer instructions.

Claims (23)

  1. A multi-viewpoint 3D display method, comprising:
    obtaining the distance between a user and a multi-viewpoint 3D display screen; and
    in response to a change in the distance, dynamically rendering sub-pixels in composite sub-pixels of the multi-viewpoint 3D display screen based on a 3D signal.
  2. The multi-viewpoint 3D display method according to claim 1, wherein dynamically rendering the sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen based on the 3D signal comprises:
    in response to the distance between the user's eyes and the multi-viewpoint 3D display screen increasing, dynamically rendering the sub-pixels in the composite sub-pixels of the multi-viewpoint 3D display screen such that they move closer to each other.
  3. The multi-viewpoint 3D display method according to claim 2, further comprising:
    switching to 2D display when the sub-pixels to be rendered are the same sub-pixel in the composite sub-pixel.
  4. The multi-viewpoint 3D display method according to claim 1, wherein dynamically rendering the sub-pixels of the composite sub-pixels in the multi-viewpoint 3D display screen based on the 3D signal comprises:
    in response to the distance between the user's eyes and the multi-viewpoint 3D display screen decreasing, dynamically rendering the sub-pixels in the composite sub-pixels of the multi-viewpoint 3D display screen such that they move away from each other.
  5. The multi-viewpoint 3D display method according to claim 4, further comprising:
    switching to 2D display when a sub-pixel to be rendered falls beyond the outermost sub-pixel of the composite sub-pixel.
  6. The multi-viewpoint 3D display method according to claim 1, 2 or 4, further comprising:
    in response to the distance between the user's eyes and the multi-viewpoint 3D display screen being less than a first distance threshold, switching to 2D display.
  7. The multi-viewpoint 3D display method according to claim 6, further comprising:
    in response to the distance between the user's eyes and the multi-viewpoint 3D display screen being greater than a second distance threshold, switching to 2D display;
    wherein the second distance threshold is greater than the first distance threshold.
  8. The multi-viewpoint 3D display method according to any one of claims 1 to 5, further comprising:
    detecting at least two users to obtain position information of the at least two users;
    determining a priority user based on the position information of the at least two users; and
    rendering, based on the 3D signal, sub-pixels in the composite sub-pixels of the multi-viewpoint 3D display screen according to the viewpoints at which the priority user's eyes are located.
  9. The multi-viewpoint 3D display method according to claim 8, wherein determining a priority user based on the position information of the at least two users comprises:
    sorting the priorities of the at least two users based on the distances between the faces of the at least two users and the multi-viewpoint 3D display screen, and determining the priority user according to the sorting result.
  10. The multi-viewpoint 3D display method according to claim 8, wherein rendering, based on the 3D signal, sub-pixels in the composite sub-pixels of the multi-viewpoint 3D display screen according to the viewpoints at which the priority user's eyes are located comprises:
    obtaining the viewpoints at which the eyes of each of the at least two users are located; and
    in response to a conflict between the viewpoints at which the eyes of the priority user and of other users are located, rendering, based on the 3D signal, the sub-pixels in the composite sub-pixels of the multi-viewpoint 3D display screen that correspond to the viewpoints at which the priority user's eyes are located.
  11. A multi-viewpoint 3D display device, comprising:
    a multi-viewpoint 3D display screen comprising a plurality of composite pixels, wherein each of the plurality of composite pixels comprises a plurality of composite sub-pixels, and each of the plurality of composite sub-pixels comprises a plurality of sub-pixels corresponding to a plurality of viewpoints;
    an eye positioning apparatus configured to obtain the distance between a user and the multi-viewpoint 3D display screen; and
    a 3D processing device configured to, in response to a change in the distance, trigger the multi-viewpoint 3D display screen to dynamically render the sub-pixels of the plurality of composite sub-pixels based on a 3D signal.
  12. The multi-viewpoint 3D display device according to claim 11, wherein the 3D processing device is configured to, in response to the distance between the user and the multi-viewpoint 3D display screen increasing, trigger the multi-viewpoint 3D display screen to dynamically render the sub-pixels in the composite sub-pixels such that they move closer to each other.
  13. The multi-viewpoint 3D display device according to claim 12, wherein the 3D processing device is configured to trigger the multi-viewpoint 3D display screen to switch to 2D display when the sub-pixels to be rendered are the same sub-pixel in the composite sub-pixel.
  14. The multi-viewpoint 3D display device according to claim 11, wherein the 3D processing device is configured to, in response to the distance between the user's eyes and the multi-viewpoint 3D display screen decreasing, trigger the multi-viewpoint 3D display screen to dynamically render the sub-pixels in the composite sub-pixels such that they move away from each other.
  15. The multi-viewpoint 3D display device according to claim 14, wherein the 3D processing device is configured to trigger the multi-viewpoint 3D display screen to switch to 2D display when a sub-pixel to be rendered falls beyond the outermost sub-pixel of the composite sub-pixel.
  16. The multi-viewpoint 3D display device according to claim 11, 12 or 14, wherein the 3D processing device is configured to trigger the multi-viewpoint 3D display screen to switch to 2D display when the distance between the user's eyes and the multi-viewpoint 3D display screen is less than a first distance threshold.
  17. The multi-viewpoint 3D display device according to claim 16, wherein
    the 3D processing device is configured to trigger the multi-viewpoint 3D display screen to switch to 2D display when the distance between the user's eyes and the multi-viewpoint 3D display screen is greater than a second distance threshold;
    wherein the second distance threshold is greater than the first distance threshold.
  18. The multi-viewpoint 3D display device according to any one of claims 11 to 15, further comprising a face detection apparatus and a priority logic circuit; wherein
    the face detection apparatus is configured to detect at least two users to obtain position information of the at least two users;
    the priority logic circuit is configured to determine a priority user based on the position information of the at least two users; and
    the 3D processing device is configured to render, based on the 3D signal, sub-pixels in the composite sub-pixels of the multi-viewpoint 3D display screen according to the viewpoints at which the priority user's eyes are located.
  19. The multi-viewpoint 3D display device according to claim 18, wherein
    the priority logic circuit is configured to sort the priorities of the at least two users based on the distances between the faces of the at least two users and the multi-viewpoint 3D display screen, and to determine the priority user according to the sorting result.
  20. The multi-viewpoint 3D display device according to claim 18, further comprising: an eye positioning apparatus configured to obtain the viewpoints at which the eyes of each of the at least two users are located;
    the 3D processing device being configured to, in response to a conflict between the viewpoints at which the eyes of the priority user and of other users are located, trigger the multi-viewpoint 3D display screen, based on the 3D signal, to render the sub-pixels in the composite sub-pixels that correspond to the viewpoints at which the priority user's eyes are located.
  21. A multi-viewpoint 3D display device, comprising:
    a processor; and
    a memory storing program instructions;
    wherein the processor is configured to execute the method according to any one of claims 1 to 10 when executing the program instructions.
  22. A computer-readable storage medium storing computer-executable instructions, wherein the computer-executable instructions are configured to execute the method according to any one of claims 1 to 10.
  23. A computer program product comprising a computer program stored on a computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to execute the method according to any one of claims 1 to 10.
PCT/CN2020/133325 2019-12-05 2020-12-02 Multi-viewpoint 3D display device and 3D image display method WO2021110032A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/780,502 US20230007226A1 (en) 2019-12-05 2020-12-02 Multi-viewpoint 3d display device and 3d image display method
EP20895445.3A EP4068765A4 (en) 2019-12-05 2020-12-02 MULTIPLE VIEWPOINT 3D DISPLAY DEVICE AND 3D IMAGE DISPLAY METHOD

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911231146.6A 2019-12-05 Multi-viewpoint naked-eye 3D display device and 3D image display method
CN201911231146.6 2019-12-05

Publications (1)

Publication Number Publication Date
WO2021110032A1 true WO2021110032A1 (zh) 2021-06-10

Family

ID=76160815

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/133325 WO2021110032A1 (zh) 2019-12-05 2020-12-02 多视点3d显示设备和3d图像显示方法

Country Status (5)

Country Link
US (1) US20230007226A1 (zh)
EP (1) EP4068765A4 (zh)
CN (1) CN112929634A (zh)
TW (1) TW202123686A (zh)
WO (1) WO2021110032A1 (zh)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012044130A2 * 2010-10-01 2012-04-05 삼성전자 주식회사 3D display apparatus using a barrier and driving method thereof
US20140035907A1 * 2012-07-31 2014-02-06 Nlt Technologies, Ltd. Stereoscopic image display device, image processing device, and stereoscopic image processing method
CN103873844A * 2012-12-18 2014-06-18 乐金显示有限公司 Multi-view autostereoscopic display and method for controlling its optimal viewing distance
CN105049832A * 2014-04-24 2015-11-11 Nlt科技股份有限公司 Stereoscopic image display device, stereoscopic image display method, and stereoscopic image display program
US20160139797A1 * 2014-11-14 2016-05-19 Samsung Electronics Co., Ltd. Display apparatus and control method thereof
CN105911712A * 2016-06-30 2016-08-31 北京邮电大学 Multi-viewpoint LCD naked-eye 3D display method and apparatus
CN107167926A * 2017-06-22 2017-09-15 上海玮舟微电子科技有限公司 Naked-eye 3D display method and apparatus
CN211128025U * 2019-12-05 2020-07-28 北京芯海视界三维科技有限公司 Multi-viewpoint naked-eye 3D display screen and multi-viewpoint naked-eye 3D display device
CN211128024U * 2019-12-05 2020-07-28 北京芯海视界三维科技有限公司 3D display device
CN211791828U * 2019-12-05 2020-10-27 北京芯海视界三维科技有限公司 3D display device

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100548056C * 2008-04-30 2009-10-07 北京超多维科技有限公司 Sensing-type 2D-3D autostereoscopic display apparatus
JP5494284B2 * 2010-06-24 2014-05-14 ソニー株式会社 Stereoscopic display device and control method of stereoscopic display device
JP6048819B2 * 2011-05-10 2016-12-21 パナソニックIpマネジメント株式会社 Display device, display method, integrated circuit, and program
JP5100875B1 * 2011-08-31 2012-12-19 株式会社東芝 Viewing zone adjustment device, video processing device, and viewing zone adjustment method
CN103293692B * 2013-06-19 2016-01-13 青岛海信电器股份有限公司 Naked-eye stereoscopic image display control method and apparatus
KR102140080B1 * 2013-09-27 2020-07-31 삼성전자주식회사 Multi-view image display apparatus and control method
US20150195502A1 * 2014-01-06 2015-07-09 Innolux Corporation Display device and controlling method thereof
GB201709199D0 * 2017-06-09 2017-07-26 Delamont Dean Lindsay IR mixed reality and augmented reality gaming system
JP2019102935A * 2017-11-30 2019-06-24 シャープ株式会社 Display device, electronic mirror, control method of display device, and display control program
CA3021636A1 (en) * 2018-10-22 2020-04-22 Evolution Optiks Limited Light field display, adjusted pixel rendering method therefor, and vision correction system and method using same
US11353699B2 (en) * 2018-03-09 2022-06-07 Evolution Optiks Limited Vision correction system and method, light field display and light field shaping layer and alignment therefor
US10761604B2 (en) * 2018-10-22 2020-09-01 Evolution Optiks Limited Light field vision testing device, adjusted pixel rendering method therefor, and vision testing system and method using same
CA3040939A1 (en) * 2019-04-23 2020-10-23 Evolution Optiks Limited Light field display and vibrating light field shaping layer therefor, and adjusted pixel rendering method therefor, and vision correction system and method using same

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012044130A2 * 2010-10-01 2012-04-05 삼성전자 주식회사 3D display apparatus using a barrier and driving method thereof
US20140035907A1 * 2012-07-31 2014-02-06 Nlt Technologies, Ltd. Stereoscopic image display device, image processing device, and stereoscopic image processing method
CN103873844A * 2012-12-18 2014-06-18 乐金显示有限公司 Multi-view autostereoscopic display and method for controlling its optimal viewing distance
CN105049832A * 2014-04-24 2015-11-11 Nlt科技股份有限公司 Stereoscopic image display device, stereoscopic image display method, and stereoscopic image display program
US20160139797A1 * 2014-11-14 2016-05-19 Samsung Electronics Co., Ltd. Display apparatus and control method thereof
CN105911712A * 2016-06-30 2016-08-31 北京邮电大学 Multi-viewpoint LCD naked-eye 3D display method and apparatus
CN107167926A * 2017-06-22 2017-09-15 上海玮舟微电子科技有限公司 Naked-eye 3D display method and apparatus
CN211128025U * 2019-12-05 2020-07-28 北京芯海视界三维科技有限公司 Multi-viewpoint naked-eye 3D display screen and multi-viewpoint naked-eye 3D display device
CN211128024U * 2019-12-05 2020-07-28 北京芯海视界三维科技有限公司 3D display device
CN211791828U * 2019-12-05 2020-10-27 北京芯海视界三维科技有限公司 3D display device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4068765A4

Also Published As

Publication number Publication date
EP4068765A4 (en) 2023-12-20
US20230007226A1 (en) 2023-01-05
EP4068765A1 (en) 2022-10-05
TW202123686A (zh) 2021-06-16
CN112929634A (zh) 2021-06-08

Similar Documents

Publication Publication Date Title
KR101166248B1 Method of analyzing received image data, computer-readable storage medium, view mode analyzing unit, and display device
JP6443654B2 Stereoscopic image display device, terminal device, stereoscopic image display method, and program therefor
JP5849811B2 Method for generating video data for naked-eye stereoscopic viewing
CN211128024U 3D display device
WO2021110035A1 Eye positioning apparatus and method, and 3D display device, method and terminal
TWI788739B 3D display device and 3D image display method
JP2002092656A Stereoscopic image display device and method for displaying image data
TWI772997B Multi-viewpoint 3D display screen and multi-viewpoint 3D display device
WO2021110031A1 Multi-viewpoint 3D display apparatus, display method, and display screen correction method
US20130106843A1 Information processing apparatus, display control method, and program
CN102630027B Naked-eye 3D display method and apparatus
CN112929638B Eye positioning method and apparatus, and multi-viewpoint naked-eye 3D display method and device
WO2021110032A1 Multi-viewpoint 3D display device and 3D image display method
KR101228916B1 Apparatus and method for displaying three-dimensional images in multi-vision
TWI825367B Method for realizing floating touch, 3D display device and 3D terminal
TWI499279B Image processing apparatus and method thereof
WO2012165132A1 Naked-eye stereoscopic display device, viewpoint adjustment method, and method for generating video data for naked-eye stereoscopic viewing
WO2013031864A1 Display device
TWI826033B Image display method and 3D display system
CN214756700U 3D display device
JP6604493B2 Stereoscopic image display device and terminal device
WO2023092595A1 Method and apparatus for processing three-dimensional image data, device, and medium
WO2024093893A1 Spatial reality display method, spatial reality display system, and non-volatile computer-readable storage medium
TW202416709A Image display method and 3D display system
CN112929631A Method and device for displaying bullet screens in 3D video, and 3D display device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20895445

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020895445

Country of ref document: EP

Effective date: 20220630