SUMMARY OF THE UTILITY MODEL
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview, nor is it intended to identify key or critical elements or to delineate the scope of such embodiments; rather, it serves as a prelude to the more detailed description that is presented later.
Embodiments of the utility model provide a multi-viewpoint naked-eye 3D display screen and a multi-viewpoint naked-eye 3D display device, so as to simultaneously solve the problems that users at some surrounding positions cannot observe the 3D effect and that the amount of computation for transmission and rendering is large.
In some embodiments of the utility model, a multi-viewpoint naked-eye 3D display screen is provided, including: a display panel having a plurality of composite pixels, each of the plurality of composite pixels including a plurality of composite sub-pixels, each of the plurality of composite sub-pixels including a plurality of sub-pixels in an array; and a plurality of spherical gratings covering the plurality of composite sub-pixels.
In some embodiments, each composite subpixel is square.
In some embodiments, each of the plurality of sub-pixels is square.
In some embodiments, the plurality of subpixels is in an i × j array, where j ≧ 2 and i ≧ 2.
In some embodiments, each of the plurality of subpixels has an aspect ratio of i/j.
In some embodiments, i ≧ 3, j ≧ 3.
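The relation between the array dimensions and the sub-pixel aspect ratio can be illustrated with a short sketch (the helper function and the reading of "aspect ratio" as height-to-width are assumptions for illustration, not part of the specification): if a square composite sub-pixel of side S is divided into i columns and j rows, each sub-pixel has width S/i and height S/j, so its height-to-width ratio is i/j.

```python
def subpixel_dims(side, i, j):
    """Dimensions of one sub-pixel when a square composite sub-pixel of
    side `side` is divided into an i-column by j-row array of sub-pixels."""
    width = side / i
    height = side / j
    aspect = height / width   # simplifies to i / j
    return width, height, aspect

# A 6 x 3 array inside a square composite sub-pixel of side 3.0:
w, h, a = subpixel_dims(3.0, 6, 3)
# w = 0.5, h = 1.0, a = 2.0 = i / j
```

When i equals j, each sub-pixel is itself square with aspect ratio 1.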
In some embodiments, the plurality of composite subpixels have different colors, and the plurality of composite subpixels having different colors are alternately arranged.
In some embodiments, the plurality of composite subpixels having different colors are arranged in a triangle.
In some embodiments, at least one of the plurality of spherical gratings is a spherical grating or an ellipsoidal grating.
In some embodiments, at least one spherical grating of the plurality of spherical gratings further comprises at least one side surface.
In some embodiments of the utility model, a multi-viewpoint naked-eye 3D display device is provided, including: the multi-viewpoint naked-eye 3D display screen as described above; and a 3D processing device configured to render sub-pixels of the plurality of composite sub-pixels in the multi-viewpoint naked-eye 3D display screen.
In some embodiments, each composite subpixel comprises a plurality of subpixels in an i × j array, wherein the plurality of subpixels of the i × j array correspond to i first direction viewpoints and j second direction viewpoints of the multi-viewpoint naked eye 3D display device.
In some embodiments, the multi-view naked eye 3D display device further comprises: an eye tracking data acquisition device configured to acquire eye tracking data.
In some embodiments, the eye-tracking data acquisition device is configured to acquire a lateral position of the user's eyes to determine a first directional viewpoint at which the user's eyes are located.
In some embodiments, the 3D processing device is configured to render the sub-pixels of the plurality of sub-pixels in the array corresponding to the first direction viewpoint based on the first direction viewpoint at which the user's eyes are located.
In some embodiments, the eye-tracking data obtaining means is configured to obtain at least one of a depth position and a height position of the user's eyes to determine the second directional viewpoint at which the user's eyes are located.
In some embodiments, the 3D processing apparatus is configured to render the sub-pixels of the plurality of sub-pixels in the array corresponding to the second directional viewpoint based on the second directional viewpoint at which the user's eyes are located.
The multi-viewpoint naked-eye 3D display screen and multi-viewpoint naked-eye 3D display device provided by the embodiments of the utility model can achieve the following technical effects:
the display resolution of the multi-viewpoint naked-eye 3D display screen is defined in terms of composite pixels, and this composite-pixel resolution is taken as the governing factor for transmission and display; the amount of computation for transmission and rendering is thereby reduced while a high-definition display effect is ensured, realizing high-quality naked-eye 3D display.
The foregoing general description and the following description are exemplary and explanatory only and are not restrictive of the invention.
Detailed Description
In order to understand the features and technical content of the embodiments of the present invention in more detail, the embodiments are described below in conjunction with the accompanying drawings. The accompanying drawings are provided for reference only and are not intended to limit the embodiments of the present invention.
Herein, "naked eye three-dimensional (3D) display" relates to a technique in which a user can observe a 3D display image on a flat display without wearing glasses for 3D display.
In this context, "multi-view" has its conventional meaning in the art, meaning that different images displayed by different pixels or sub-pixels of the display screen can be viewed at different positions (viewpoints) in space. In this context, multi-view shall mean at least 3 views.
In this context, a conventional "pixel" means the smallest display unit, in terms of resolution, of a 2D display or of a display operating in 2D mode.
However, in some embodiments herein, the term "composite pixel" when applied to multi-view technology in the field of naked eye 3D display refers to the smallest unit of display when a naked eye 3D display provides multi-view display, but does not exclude that a single composite pixel for multi-view technology may comprise or appear as a plurality of 2D display pixels. Herein, unless specifically stated as a composite pixel or 3D pixel for "3D display" or "multi-view" applications, a pixel will refer to the smallest unit of display in 2D display. Likewise, when describing a "composite subpixel" for multi-view, naked eye 3D display, it will refer to a composite subpixel of a single color present in the composite pixel when the naked eye 3D display provides multi-view display. Herein, a sub-pixel in a "composite sub-pixel" will refer to the smallest display unit of a single color, which tends to correspond to a viewpoint.
According to an embodiment of the present invention, there is provided a multi-view naked eye 3D display screen, which may be applied to a multi-view naked eye 3D display device. The multi-view naked eye 3D display screen includes a display panel and a plurality of spherical gratings. The display panel has a plurality of composite pixels, each composite pixel includes a plurality of composite sub-pixels, and each composite sub-pixel is composed of an i × j array of sub-pixels, where i is greater than or equal to 2 and j is greater than or equal to 2. The plurality of spherical gratings cover the plurality of composite sub-pixels. In the i × j array of sub-pixels, i corresponds to the first-direction viewpoints (e.g., row viewpoints, also referred to as transverse viewpoints) of the multi-view naked eye 3D display device, and j corresponds to the second-direction viewpoints (e.g., column viewpoints, also referred to as height or depth viewpoints) of the multi-view naked eye 3D display device.
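The screen structure described above can be modelled with a minimal sketch (all class and method names here are hypothetical, introduced only for illustration): a composite pixel holds one composite sub-pixel per color, and each composite sub-pixel holds an i × j array of same-color sub-pixels whose columns index first-direction viewpoints and whose rows index second-direction viewpoints.

```python
from dataclasses import dataclass

@dataclass
class CompositeSubpixel:
    color: str   # single color, e.g. 'R', 'G' or 'B'
    i: int       # columns, i >= 2: first-direction (row/lateral) viewpoints
    j: int       # rows,    j >= 2: second-direction (height/depth) viewpoints

    def subpixel_for(self, vi, vj):
        """(column, row) index of the sub-pixel serving viewpoint (vi, vj)."""
        if not (1 <= vi <= self.i and 1 <= vj <= self.j):
            raise ValueError("viewpoint outside the i x j viewpoint grid")
        return (vi, vj)

@dataclass
class CompositePixel:
    red: CompositeSubpixel
    green: CompositeSubpixel
    blue: CompositeSubpixel

# One composite pixel for a device with 6 x 3 viewpoints:
cp = CompositePixel(CompositeSubpixel('R', 6, 3),
                    CompositeSubpixel('G', 6, 3),
                    CompositeSubpixel('B', 6, 3))
# cp.red.subpixel_for(2, 1) -> (2, 1)
```

The one-to-one indexing reflects that each sub-pixel in a composite sub-pixel serves exactly one viewpoint.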
In some embodiments, the spherical grating in the multi-view naked eye 3D display screen is in a one-to-one correspondence relationship with the composite sub-pixels.
In some embodiments, i ≧ 3, j ≧ 3.
Figs. 1 to 3 show a multi-view naked-eye 3D display screen 110 according to an embodiment of the present invention. The multi-view naked-eye 3D display screen 110 includes a display panel 111 and a plurality of spherical gratings 190 covering the display panel 111. The display panel 111 has a plurality of composite pixels 400, and each composite pixel 400 includes a plurality of composite sub-pixels. In the illustrated embodiment, each composite pixel 400 includes three composite sub-pixels of different colors: a red composite sub-pixel 410, a green composite sub-pixel 420, and a blue composite sub-pixel 430. The red composite sub-pixel 410 is composed of i columns and j rows (an i × j array) of red sub-pixels R, the green composite sub-pixel 420 is composed of i columns and j rows of green sub-pixels G, and the blue composite sub-pixel 430 is composed of i columns and j rows of blue sub-pixels B. Fig. 1 shows, as an example, the red composite sub-pixel 410 composed of an i × j array of red sub-pixels R.
In the i × j array of same-color sub-pixels of each composite sub-pixel, the aspect ratio of each sub-pixel is equal to i/j.
As shown in figs. 1 and 2, in the i × j array of red sub-pixels R of the red composite sub-pixel 410, i is 6 and j is 3; in the i × j array of green sub-pixels G of the green composite sub-pixel 420, i is 6 and j is 3; and in the i × j array of blue sub-pixels B of the blue composite sub-pixel 430, i is 6 and j is 3. The 6 × 3 array of same-color sub-pixels in the composite sub-pixel of each color corresponds to the 6 row viewpoints and 3 column viewpoints of the multi-viewpoint naked eye 3D display device.
For example, each sub-pixel in the i × j array of same-color sub-pixels may have a square shape, in which case the aspect ratio i/j of each sub-pixel is 1.
In some embodiments, the composite sub-pixels of different colors are alternately arranged in the display panel, and the plurality of composite sub-pixels of each composite pixel are arranged in a triangle.
As shown in fig. 2, the red, green and blue composite subpixels 410, 420 and 430 in composite pixel 400 are arranged in a triangle. In the lateral direction of the display panel 111, the red, green and blue composite subpixels 410, 420 and 430 are alternately arranged. The composite pixels 400 are arranged in a staggered manner.
In some embodiments, the display panel 111 of the multi-view naked-eye 3D display screen 110 may include m columns and n rows (i.e., an m × n array) of composite pixels and thus define a display resolution of m × n. In some embodiments, the display resolution m × n may be a resolution at or above Full High Definition (FHD), including but not limited to 1920 × 1080, 1920 × 1200, 2048 × 1280, 2560 × 1440, 3840 × 2160, and the like.
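The computational saving of reasoning about the composite-pixel resolution rather than the physical sub-pixel grid can be shown with back-of-the-envelope arithmetic (the 1920 × 1080 and 6 × 3 figures are taken from the examples in this description; the comparison itself is illustrative):

```python
m, n = 1920, 1080   # composite pixels: the display resolution used for transmission
colors = 3          # one red, one green, one blue composite sub-pixel per composite pixel
i, j = 6, 3         # same-color sub-pixels per composite sub-pixel

composite_resolution = m * n                  # units transmission/rendering reasons about
physical_subpixels = m * n * colors * i * j   # sub-pixels physically on the panel
ratio = physical_subpixels // composite_resolution
# composite_resolution = 2_073_600; ratio = 54: the panel carries 54x more
# physical sub-pixels than the composite-pixel resolution used for transmission.
```

Transmitting and rendering against the m × n composite resolution, and only touching the sub-pixels actually needed per viewpoint, is what keeps the computation bounded.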
In an embodiment of the present invention, each composite sub-pixel has, for each viewpoint, a corresponding sub-pixel. The multiple sub-pixels of each composite sub-pixel are arranged in an array on the multi-view naked eye 3D display screen, and the sub-pixels in the array are all of the same color. Since the plurality of viewpoints of the 3D display device are arranged substantially along the lateral and longitudinal directions of the multi-viewpoint naked-eye 3D display screen, when the user moves forward, backward, left, or right, the user's eyes fall at viewpoints in different directions, and the sub-pixels corresponding to those viewpoints in each composite sub-pixel need to be dynamically rendered. Because the same-color sub-pixels in each composite sub-pixel are arranged in an array, the color-crosstalk problem caused by persistence of vision can be avoided. Further, refraction by the grating may make part of a currently displayed sub-pixel visible at an adjacent viewpoint position; since the neighboring sub-pixels in the array are of the same color, no color-mixing problem arises even if part of the currently displayed sub-pixel is seen.
In some embodiments, a plurality of spherical gratings are arranged on the surface of the display panel and each cover one composite sub-pixel. Each spherical grating of the plurality of spherical gratings may, for example, comprise a spherical surface to form a spherical grating. In other embodiments, each of the plurality of spherical gratings comprises an ellipsoidal surface to form an ellipsoidal grating. In other embodiments, the spherical grating comprises a spherical surface and a side cross-section. In other embodiments, the spherical grating comprises an elliptical sphere and a side cross-section.
Fig. 3 shows an example of a spherical grating. As shown, one spherical grating 190 corresponds to one composite subpixel, such as red composite subpixel 410. The spherical grating 190 includes a base plane 193, for example, square, a spherical surface 192 opposite the base plane 193, and a side section 191 connecting between the spherical surface 192 and the base plane 193.
Fig. 4 shows another example of a spherical grating. As shown, one spherical grating 190 corresponds to one composite subpixel, such as green composite subpixel 420. The spherical grating 190 includes a base plane 193 having, for example, a circular shape and a spherical surface 192 connecting the base plane 193.
In other embodiments, the bottom plane of the spherical grating may have other shapes, such as hexagonal, triangular, etc.
In some embodiments, the spherical surface side of the spherical grating is provided with another refractive layer having a refractive index different from that of the spherical grating, a surface of the another refractive layer facing the spherical grating is a concave surface and is fitted with the spherical surface of the spherical grating in a concave-convex fit manner, and a surface facing away from the spherical grating is a plane, for example, a plane parallel to a bottom plane of the spherical grating.
The multi-viewpoint naked-eye 3D display screen 110 according to embodiments of the utility model can be used in a multi-viewpoint naked-eye 3D display device. According to an embodiment of the utility model, the multi-viewpoint naked-eye 3D display device includes a multi-viewpoint naked-eye 3D display screen, a video signal interface, and a 3D processing device. The video signal interface is configured to receive video frames of a 3D video signal. The 3D processing device is configured to render the relevant sub-pixels of each composite sub-pixel based on the received video frames of the 3D video signal.
Fig. 5A shows a multi-view naked eye 3D display device 100 according to an embodiment of the present invention. As shown in fig. 5A, the multi-view naked-eye 3D display device 100 includes a multi-view naked-eye 3D display screen 110, a 3D processing apparatus 130, and a 3D signal interface (e.g., a video signal interface 140) configured to receive 3D content such as a 3D video signal.
In some embodiments, the 3D video signal comprises video frames.
In some embodiments, the 3D processing device is an FPGA or ASIC chip or an FPGA or ASIC chipset. In some embodiments, the multi-view naked-eye 3D display device 100 may also be provided with more than one 3D processing means 130 that process the rendering of the sub-pixels of each composite sub-pixel of each composite pixel of the multi-view naked-eye 3D display screen 110 in parallel, in series or in a combination of series and parallel. Those skilled in the art will appreciate that there may be other ways for the more than one 3D processing device to distribute and process the multi-row and multi-column composite pixels or composite sub-pixels of the multi-view naked eye 3D display screen 110 in parallel, which falls within the scope of the embodiments of the present invention. In some embodiments, the 3D processing device 130 may also optionally include a buffer 131 to buffer the received video frames.
In some embodiments, the 3D processing device is in communication with a multi-view naked eye 3D display screen. In some embodiments, the 3D processing means is communicatively connected with the driving means of the multi-view naked eye 3D display screen.
Referring to fig. 5A, the multi-view naked-eye 3D display apparatus 100 may further include a processor 120 communicatively connected to the 3D processing device 130 through a video signal interface 140. In some embodiments, the processor is included in a computer or a smart terminal, such as a mobile terminal. Alternatively, the processor may be a processor unit of a computer or an intelligent terminal. It is contemplated that in some embodiments, the processor 120 may be disposed outside the multi-view naked eye 3D display device 100, for example, the multi-view naked eye 3D display device 100 may be a multi-view naked eye 3D display with 3D processing means, such as a non-smart naked eye 3D television.
Such a 3D display device 100 may be, for example, a mobile terminal, and the 3D signal interface 140 may be a MIPI, mini-MIPI, LVDS, mini-LVDS, or DisplayPort interface.
In some embodiments, as shown in fig. 5A, the processor 120 of the multi-view naked-eye 3D display apparatus 100 may further include a register 121. The register 121 may be configured to temporarily store instructions, data, and addresses. In some embodiments, the register 121 may be configured to receive information about the display requirements of the multi-view naked-eye 3D display screen 110. In some embodiments, the multi-view naked eye 3D display apparatus 100 may further include a codec configured to decompress and decode the compressed 3D video signal and transmit the decompressed 3D video signal to the 3D processing device 130 via the 3D signal interface 140.
In some embodiments, the i × j array homochromatic subpixels of each composite subpixel of the multi-view naked eye 3D display screen 110 correspond to i first-direction viewpoints and j second-direction viewpoints of the multi-view naked eye 3D display device.
As shown in FIG. 6, the correspondence of a red composite subpixel 410, composed of an i × j array of red subpixels R, with the i first-direction viewpoints and j second-direction viewpoints of a multi-viewpoint naked eye 3D display device is shown. The first red sub-pixel from the left of the first row in the i × j red sub-pixel array has the coordinates Ri1j1, the second red sub-pixel from the left of the first row has the coordinates Ri2j1, and so on; the sixth red sub-pixel from the left of the third row has the coordinates Ri6j3. Correspondingly, the first red sub-pixel Ri1j1 from the left of the first row in the i × j red sub-pixel array corresponds to the viewpoint Vi1j1, the second red sub-pixel Ri2j1 from the left of the first row corresponds to the viewpoint Vi2j1, and so on; the sixth red sub-pixel Ri6j3 from the left of the third row corresponds to the viewpoint Vi6j3. The correspondence between the composite sub-pixels of other colors and the viewpoints can be inferred by analogy from the correspondence between the red composite sub-pixel and the viewpoints.
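The one-to-one correspondence between sub-pixels and viewpoints in Fig. 6 can be enumerated programmatically (a sketch; the string names simply mirror the Ri/Vi notation of the figure):

```python
def subpixel_name(a, b):
    """Name of the a-th (from left) sub-pixel in the b-th row, e.g. 'Ri6j3'."""
    return f"Ri{a}j{b}"

def viewpoint_name(a, b):
    """Name of the corresponding viewpoint, e.g. 'Vi6j3'."""
    return f"Vi{a}j{b}"

# Full correspondence for the 6 x 3 red composite sub-pixel 410:
correspondence = {subpixel_name(a, b): viewpoint_name(a, b)
                  for b in range(1, 4) for a in range(1, 7)}
# correspondence['Ri2j1'] == 'Vi2j1'; 18 sub-pixel/viewpoint pairs in total
```

The same mapping applies to the green and blue composite sub-pixels by analogy.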
Transmission and display of a 3D video signal within a multi-view naked-eye 3D display device according to an embodiment of the present invention are described below with reference to fig. 7A to 7E. In the illustrated embodiment, the multi-view naked eye 3D display device may define a plurality of views, for example, i first-direction views and j second-direction views. The user's eyes can see the display of the corresponding sub-pixel in the composite sub-pixel of each composite pixel in the display panel at each viewpoint (spatial location). Two different images seen by the two eyes of the user at different viewpoints form parallax, and a 3D image is synthesized in the brain.
In some embodiments of the present invention, the 3D processing device 130 receives video frames, e.g., as decompressed 3D video signals, from the processor 120 through the video signal interface 140, e.g., as an internal interface. Each video frame may contain or consist of two images or a composite image.
In some embodiments, the two images or composite image may include different types of images and may be in various arrangements.
In the embodiment shown in fig. 7A, a video frame of a 3D video signal comprises or consists of two images 601, 602 in a side-by-side format. In some embodiments, the two images may be a left eye parallax image and a right eye parallax image, respectively. In some embodiments, the two images may be a rendered color image and a depth image, respectively.
In the embodiment shown in fig. 7B, a video frame of the 3D video signal comprises or consists of two images 601, 602 in top-bottom format. In some embodiments, the two images may be a left eye parallax image and a right eye parallax image, respectively. In some embodiments, the two images may be a rendered color image and a depth image, respectively.
In the embodiment shown in fig. 7C, the video frame of the 3D video signal contains a composite image 603 in a left-right interlaced format. In some embodiments, the composite image may be left-eye and right-eye parallax composite images interleaved left and right. In some embodiments, the composite image may be a left-right interleaved rendered color image and depth image.
In the embodiment shown in fig. 7D, the video frames of the 3D video signal contain a composite image 603 in a top-bottom interleaved format. In some embodiments, the composite image may be a left-eye and right-eye parallax composite image interleaved up and down. In some embodiments, the composite image may be a rendered color image and depth image interleaved up and down.
In the embodiment shown in fig. 7E, the video frames of the 3D video signal contain composite images 603 interleaved in a checkerboard fashion. In some embodiments, the composite image may be a left-eye and right-eye parallax composite image interleaved in a checkerboard fashion. In some embodiments, the composite image may be a rendered color image and a depth image interleaved in a checkerboard fashion.
It will be appreciated by those skilled in the art that the embodiments shown in the figures are merely illustrative, and that the two images or composite image contained in a video frame of a 3D video signal may comprise other types of images and may take other arrangements, which fall within the scope of the embodiments of the present invention.
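The splitting of the frame formats above can be sketched with a few pure-Python helpers (frames are modelled as lists of pixel rows; the function names are hypothetical):

```python
def split_side_by_side(frame):
    """Fig. 7A: split one frame into the left and right half-width images."""
    half = len(frame[0]) // 2
    return [row[:half] for row in frame], [row[half:] for row in frame]

def split_top_bottom(frame):
    """Fig. 7B: split one frame into the top and bottom half-height images."""
    half = len(frame) // 2
    return frame[:half], frame[half:]

def split_row_interlaced(frame):
    """Fig. 7D: de-interleave a top-bottom interlaced composite image;
    even rows form one image, odd rows the other."""
    return frame[0::2], frame[1::2]

# A toy 2 x 4 frame in side-by-side format:
frame = [[1, 2, 3, 4],
         [5, 6, 7, 8]]
left, right = split_side_by_side(frame)
# left == [[1, 2], [5, 6]], right == [[3, 4], [7, 8]]
```

Left-right interlaced and checkerboard formats (figs. 7C and 7E) would be de-interleaved analogously, by column parity or by (row + column) parity respectively.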
In some embodiments, the at least one 3D processing device 130, upon receiving a video frame comprising the two images 601, 602, renders at least one sub-pixel in each composite sub-pixel based on one of the two images and at least another sub-pixel in each composite sub-pixel based on the other of the two images.
In some embodiments, the at least one 3D processing device 130, upon receiving the video frame comprising the composite image, renders at least two of each of the composite sub-pixels based on the composite image. For example, at least one sub-pixel is rendered from a first image (portion) of the composite image and at least another sub-pixel is rendered from a second image (portion).
In some embodiments, this is, for example, dynamic rendering based on real-time eye-tracking data.
In some embodiments, the multi-view naked eye 3D display device further comprises eye tracking data acquisition means, such as eye tracking means or an eye tracking data interface, configured to acquire eye tracking data. In some embodiments, the eye-tracking data includes spatial location information of the user's eyes, such as the spacing of the user's eyes or face relative to the multi-view naked eye 3D display screen or eye-tracking device (i.e., the depth of the user's eyes/face), the position of the user's eyes or face in the vertical direction of the multi-view naked eye 3D display screen, the position of the user's eyes or face in the horizontal direction of the multi-view naked eye 3D display screen, the viewpoint locations of the user's eyes, the user's viewing angle, and so on.
In the embodiment shown in fig. 5B, the multi-view naked eye 3D display apparatus 100 includes a human eye tracking device 150 communicatively connected to the 3D processing device 130, so that the 3D processing device 130 can directly receive human eye tracking data.
In some embodiments, an eye tracking device includes an eye tracking unit configured to capture an image of a user (e.g., an image of a face of the user), an eye tracking image signal processor configured to determine a spatial position of an eye based on the captured image of the user, and an eye tracking data interface configured to transmit spatial position information of the eye for the spatial position of the eye.
In some embodiments, the eye-tracking unit includes a first camera configured to capture a first image and a second camera configured to capture a second image, and the eye-tracking image signal processor is configured to identify the presence of a human eye based on at least one of the first image and the second image and to determine the eye-point location based on the spatial location of the human eye present in the first image and the second image.
In some embodiments, the eye-tracking unit includes at least one camera configured to capture at least one image and at least one depth acquisition device configured to acquire depth information of at least both eyes of the user, and the eye-tracking image signal processor is configured to identify the presence of the human eye based on the captured at least one image and determine the viewpoint position of the human eye based on the position of the human eye present in the at least one image and the depth information of both eyes of the user.
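One possible (purely illustrative) way to turn a tracked eye position into viewpoint indices is uniform binning of the viewing zone; the ranges and the linear mapping below are assumptions for the sketch, not part of the specification:

```python
def viewpoint_index(pos, lo, hi, n):
    """Map a position in [lo, hi) to one of n equal viewpoint bins (1-based),
    clamping positions just outside the viewing zone to the nearest bin."""
    if hi <= lo:
        raise ValueError("empty range")
    k = 1 + int((pos - lo) / (hi - lo) * n)
    return min(n, max(1, k))

# Lateral position -> first-direction viewpoint (i = 6 over a 0..60 cm zone),
# depth position   -> second-direction viewpoint (j = 3 over 40..100 cm):
vi = viewpoint_index(25.0, 0.0, 60.0, 6)    # lateral 25 cm -> viewpoint 3
vj = viewpoint_index(55.0, 40.0, 100.0, 3)  # depth 55 cm  -> viewpoint 1
```

The height position could be binned the same way when the second-direction viewpoint is determined from height rather than (or in addition to) depth.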
In the embodiment shown in fig. 5C, an eye tracking device (not shown) may be connected to the processor 120 directly, for example, while the 3D processing device 130 obtains eye tracking data from the processor 120 via the eye tracking data interface 160. In other embodiments, the eye tracking device may be connected to both the processor and the 3D processing device, such that the 3D processing device 130 may obtain the eye tracking data directly from the eye tracking device on the one hand, and other information that may be obtained by the eye tracking device on the other hand may be processed by the processor.
The 3D processing device renders, based on the first-direction viewpoint at which the user's eyes are located, the sub-pixels corresponding to that viewpoint among the i × j array of same-color sub-pixels of each composite sub-pixel.
In some embodiments, the human eye tracking device obtains the depth position of the user's eyes in real time to determine the second-direction viewpoint at which the user's eyes are located, or obtains the height position and the depth position of the user's eyes in real time to determine the second-direction viewpoint at which the user's eyes are located. The 3D processing device renders, based on the second-direction viewpoint at which the user's eyes are located, the sub-pixels corresponding to that viewpoint among the i × j array of same-color sub-pixels of each composite sub-pixel.
Referring to fig. 6, an example of dynamically rendering corresponding sub-pixels in a composite sub-pixel based on real-time eye-tracking data in a multi-view naked eye 3D display device is shown. A red composite sub-pixel 410 is shown, composed of an i × j array of red sub-pixels R, where i = 6 corresponds to the 6 row-direction viewpoints of the multi-view naked eye 3D display device and j = 3 corresponds to its 3 column-direction viewpoints. The real-time eye-tracking data may, for example, be acquired in real time by the eye-tracking apparatus. When the eye-tracking apparatus obtains that the two eyes of a user are at the viewpoints Vi1j1 and Vi2j1, images for the viewpoints at which the user's eyes are located are generated based on the video frame of the 3D video signal, and the two red sub-pixels Ri1j1 and Ri2j1 of the i × j array of red sub-pixels R of the red composite sub-pixel 410 corresponding to the viewpoints Vi1j1 and Vi2j1 are rendered. When the eye-tracking apparatus obtains that the two eyes of another user are at the viewpoints Vi3j2 and Vi4j2, images for those viewpoints are generated based on the video frame of the 3D video signal, and the two red sub-pixels Ri3j2 and Ri4j2 corresponding to the viewpoints Vi3j2 and Vi4j2 are rendered. When the eye-tracking apparatus obtains that the two eyes of yet another user are at the viewpoints Vi5j3 and Vi6j3, images for those viewpoints are generated based on the video frame of the 3D video signal, and the two red sub-pixels Ri5j3 and Ri6j3 corresponding to the viewpoints Vi5j3 and Vi6j3 are rendered.
Thus, users at different row positions (horizontal positions) and column positions (including depth positions and height positions) in front of the display panel can see appropriate 3D images.
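The sparse rendering illustrated above, where only the sub-pixels whose viewpoints are occupied by tracked eyes are rendered, can be sketched as follows (the names mirror Fig. 6; the function itself is a hypothetical illustration):

```python
def subpixels_to_render(tracked_eye_viewpoints):
    """Return the red sub-pixels of composite sub-pixel 410 that must be
    rendered; sub-pixels of unoccupied viewpoints can be skipped."""
    return {f"Ri{a}j{b}" for (a, b) in tracked_eye_viewpoints}

# The three users of the Fig. 6 example (two eyes each):
eyes = [(1, 1), (2, 1), (3, 2), (4, 2), (5, 3), (6, 3)]
needed = subpixels_to_render(eyes)
# 6 of the 18 red sub-pixels are rendered; the other 12 are skipped
```

This per-viewpoint selectivity, applied to every composite sub-pixel, is what lets the rendering workload track the number of viewers rather than the full sub-pixel count.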
The multi-viewpoint naked-eye 3D display device according to embodiments of the utility model can be used in video playback equipment and may be embodied, for example, as a mobile terminal (such as a mobile phone or a tablet computer), a television, a mobile television, a computer, a cinema viewing system, or a home theater system.
The above description and drawings sufficiently illustrate embodiments of the invention to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes. Individual components and functions are optional unless explicitly required, and the sequence of operations may vary. Portions and features of some embodiments may be included in or substituted for those of others. The scope of embodiments of the present invention includes the full ambit of the claims, as well as all available equivalents of the claims. The words used in the specification are words of description rather than limitation, and are used in the specification to describe the embodiments and not to limit the claims. The term "comprising" or the like, when used in the present application, refers to the presence of at least one of the stated features, but does not exclude the presence of other features.
Those of skill in the art would appreciate that the elements and algorithm steps of each of the examples described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.