WO2021110033A1 - 3D display device, method and terminal - Google Patents

3D display device, method and terminal

Info

Publication number
WO2021110033A1
Authority
WO
WIPO (PCT)
Prior art keywords: pixels, sub-pixels, viewpoint, user, composite
Prior art date
Application number
PCT/CN2020/133327
Other languages
English (en)
French (fr)
Inventor
刁鸿浩
黄玲溪
Original Assignee
北京芯海视界三维科技有限公司
视觉技术创投私人有限公司
Priority date
Filing date
Publication date
Application filed by 北京芯海视界三维科技有限公司 and 视觉技术创投私人有限公司
Priority to US17/779,648 (published as US20220408077A1)
Priority to EP20896638.2A (published as EP4068772A4)
Publication of WO2021110033A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30: Image reproducers
    • H04N13/366: Image reproducers using viewer tracking
    • H04N13/368: Image reproducers using viewer tracking for two or more viewers
    • H04N13/302: Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/324: Colour aspects
    • H04N13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106: Processing image signals
    • H04N13/111: Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N13/117: Transformation of image signals corresponding to virtual viewpoints, the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
    • H04N13/122: Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues

Definitions

  • This application relates to 3D display technology, for example, to 3D display devices, 3D display methods, and 3D display terminals.
  • in conventional solutions, the content to be displayed is usually presented in a preset, fixed display orientation, resulting in a single display mode with low flexibility, which degrades the display effect.
  • the embodiments of the present application intend to provide a 3D display device, a 3D display method, a 3D display terminal, a computer-readable storage medium, and a computer program product to improve the flexibility of 3D display.
  • a 3D display device is provided, including: a multi-viewpoint 3D display screen including a plurality of composite pixels, where each of the plurality of composite pixels includes a plurality of composite sub-pixels and each composite sub-pixel is composed of multiple sub-pixels corresponding to multiple viewpoints; an eye positioning device configured to obtain the spatial position of the user's eyes; and a 3D processing device configured to determine the viewpoint from the spatial position of the user's eyes and, based on the received 3D signal, render the sub-pixels corresponding to that viewpoint among the multiple composite sub-pixels.
  • in this way, the 3D display can be adjusted in real time according to the viewing situation, realizing a highly flexible 3D display and providing users with a good viewing experience.
  • the display resolution of the multi-viewpoint 3D display screen is defined in terms of composite pixels; taking this composite-pixel resolution as the reference during transmission and display effectively reduces the amount of transmission and rendering computation while still maintaining an excellent display effect, enabling high-quality 3D display.
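To make the composite-pixel structure and the viewpoint-driven rendering described above more concrete, here is a minimal, illustrative Python sketch. It is not the patent's implementation: the class names, the choice of i = 6 viewpoints, and the frame layout are assumptions made purely for illustration.

```python
from dataclasses import dataclass
from typing import List

NUM_VIEWPOINTS = 6  # assumed i = 6 viewpoints, matching the V1-V6 example later in the text

@dataclass
class CompositeSubPixel:
    """One composite sub-pixel: i same-color sub-pixels, one per viewpoint."""
    color: str            # 'R', 'G' or 'B'
    values: List[int]     # values[v] drives the sub-pixel for viewpoint v

@dataclass
class CompositePixel:
    """One composite pixel: one composite sub-pixel per color."""
    subpixels: List[CompositeSubPixel]

def make_screen(m: int, n: int) -> List[List[CompositePixel]]:
    """Build an n-row by m-column array of composite pixels (display resolution m x n)."""
    return [[CompositePixel([CompositeSubPixel(c, [0] * NUM_VIEWPOINTS)
                             for c in ("R", "G", "B")])
             for _ in range(m)]
            for _ in range(n)]

def render_for_viewpoints(screen, left_image, right_image, left_vp, right_vp):
    """Render only the sub-pixels of the viewpoints where the user's eyes are located.

    left_image/right_image: [row][col] -> (R, G, B) values of the two parallax images.
    left_vp/right_vp: viewpoint indices determined from the eyes' spatial positions.
    """
    for row, pixel_row in enumerate(screen):
        for col, pixel in enumerate(pixel_row):
            for ci, subpixel in enumerate(pixel.subpixels):
                # Only the sub-pixels of the two occupied viewpoints are written;
                # sub-pixels of the other viewpoints stay unrendered (dark).
                subpixel.values[right_vp] = right_image[row][col][ci]
                subpixel.values[left_vp] = left_image[row][col][ci]

# Example: a 4 x 3 screen, right eye at viewpoint V2 (index 1), left eye at V5 (index 4).
screen = make_screen(m=4, n=3)
left = [[(10, 20, 30)] * 4 for _ in range(3)]
right = [[(40, 50, 60)] * 4 for _ in range(3)]
render_for_viewpoints(screen, left, right, left_vp=4, right_vp=1)
```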
  • the eye positioning device is configured to obtain the spatial position of the eye of at least one user.
  • the 3D processing device is configured to, in response to one eye of each of the at least one user being located at a single viewpoint, or both eyes each being located at a single viewpoint, render the sub-pixels corresponding to the single viewpoint among the multiple composite sub-pixels.
  • an accurate display is realized for the viewpoint positions of the user's eyes.
  • the 3D processing device is further configured to render at least one sub-pixel adjacent to a sub-pixel corresponding to a single viewpoint.
  • the display brightness is enhanced by additionally rendering one or two sub-pixels adjacent to the sub-pixel corresponding to the single viewpoint where the eye is located, so that the display adapts to a strong-light environment; it is also possible to calculate the user's offset or movement trend from the eye positioning data and render the sub-pixels corresponding to the viewpoint position the user is likely to move to, so as to actively or dynamically adapt to the viewing situation and obtain an excellent viewing experience, as in the sketch below.
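As a rough illustration of the adjacent-sub-pixel and movement-trend ideas above, the following sketch decides which viewpoints to light for one eye; the thresholds and the notion of a per-frame viewpoint "velocity" are assumptions, not values from the disclosure.

```python
def viewpoints_to_render(current_vp, num_viewpoints=6,
                         bright_environment=False, velocity=0.0):
    """Decide which viewpoints to render for one eye (illustrative policy).

    current_vp: viewpoint index where the eye currently sits.
    bright_environment: if True, also light neighbours to raise brightness.
    velocity: estimated viewpoint drift per frame from eye positioning data
              (positive = moving toward higher viewpoint indices).
    """
    targets = {current_vp}
    if bright_environment:
        # Add one or two adjacent sub-pixels to enhance display brightness.
        for neighbour in (current_vp - 1, current_vp + 1):
            if 0 <= neighbour < num_viewpoints:
                targets.add(neighbour)
    if abs(velocity) > 0.5:
        # Pre-render the viewpoint the user is likely to move to next.
        predicted = current_vp + (1 if velocity > 0 else -1)
        if 0 <= predicted < num_viewpoints:
            targets.add(predicted)
    return sorted(targets)

print(viewpoints_to_render(1, bright_environment=True))   # [0, 1, 2]
print(viewpoints_to_render(4, velocity=0.8))               # [4, 5]
```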
  • the 3D processing device is configured to, in response to one eye of each of the at least one user being located between two viewpoints, or both eyes each being located between two viewpoints, render the sub-pixels corresponding to the two viewpoints among the multiple composite sub-pixels.
  • a clear display effect can thus be achieved even when the user's eyes straddle viewpoints; it is also possible to calculate the user's offset or movement trend from the eye positioning data and render the sub-pixels corresponding to the viewpoint position the user is likely to move to, or to the viewpoint positions passed during the movement, so as to actively or dynamically adapt to the viewing situation and obtain an excellent viewing experience, as in the sketch below.
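The following sketch illustrates one plausible way to map a continuous eye position to either a single viewpoint or the two viewpoints it straddles; the tolerance and the viewpoint-unit convention are assumptions for illustration only.

```python
def occupied_viewpoints(eye_position, viewpoint_pitch=1.0, num_viewpoints=6,
                        single_vp_tolerance=0.25):
    """Map a continuous eye position (in viewpoint units) to viewpoint indices.

    Returns one index when the eye is close enough to a single viewpoint,
    or the two indices it lies between when it straddles viewpoints.
    """
    exact = eye_position / viewpoint_pitch          # e.g. 1.6 means between V2 and V3
    nearest = round(exact)
    if abs(exact - nearest) <= single_vp_tolerance and 0 <= nearest < num_viewpoints:
        return [nearest]                            # render sub-pixels of one viewpoint
    lower = max(0, min(num_viewpoints - 2, int(exact)))
    return [lower, lower + 1]                       # render sub-pixels of both viewpoints

print(occupied_viewpoints(1.1))   # [1]      -> single viewpoint
print(occupied_viewpoints(1.6))   # [1, 2]   -> straddling two viewpoints
```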
  • the 3D display device further includes a face detection device configured to detect face information of at least one user.
  • the user's identity can be identified by detecting the user's facial information. This is advantageous, for example, when the user's eyes or face have already been detected once and the user's interpupillary distance or other biometric information is known: the viewpoint positions of the eyes can then be calculated faster using the known information, further improving the speed of face recognition or eye positioning (see the caching sketch below).
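A hypothetical caching scheme along these lines is sketched below; the cache keying by a face ID, the fixed eye-to-eye direction, and all numbers are illustrative assumptions.

```python
class UserBiometricsCache:
    """Cache interpupillary distance (IPD) per recognized face (illustrative)."""

    def __init__(self):
        self._ipd_mm = {}               # face_id -> interpupillary distance in mm

    def remember(self, face_id, left_eye_xyz, right_eye_xyz):
        """Store the IPD measured once from a full eye-positioning pass."""
        ipd = sum((l - r) ** 2 for l, r in zip(left_eye_xyz, right_eye_xyz)) ** 0.5
        self._ipd_mm[face_id] = ipd

    def estimate_other_eye(self, face_id, known_eye_xyz, direction=(1.0, 0.0, 0.0)):
        """If only one eye is located this frame, estimate the other one from the
        cached IPD instead of re-running full detection."""
        ipd = self._ipd_mm.get(face_id)
        if ipd is None:
            return None                 # unknown user: fall back to full detection
        return tuple(c + d * ipd for c, d in zip(known_eye_xyz, direction))

cache = UserBiometricsCache()
cache.remember("user-A", (0.0, 0.0, 500.0), (63.0, 0.0, 500.0))
print(cache.estimate_other_eye("user-A", (10.0, 0.0, 500.0)))  # ~(73.0, 0.0, 500.0)
```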
  • the eye positioning device is configured to obtain the spatial positions of the respective eyes of at least two users.
  • the 3D processing device is an FPGA or ASIC chip or FPGA or ASIC chipset.
  • each of the multiple composite sub-pixels includes multiple sub-pixels arranged in rows or columns.
  • a 3D display method is provided, including: obtaining the spatial position of the user's eyes; determining the viewpoint from the spatial position of the user's eyes; and rendering, based on a 3D signal, the sub-pixels corresponding to the viewpoint among the multiple composite sub-pixels in a multi-viewpoint 3D display screen; where the multi-viewpoint 3D display screen includes multiple composite pixels, each composite pixel includes multiple composite sub-pixels, and each composite sub-pixel is composed of a plurality of sub-pixels corresponding to a plurality of viewpoints.
  • acquiring the spatial position of the user's eyes and determining the viewpoint from the spatial position of the user's eyes includes: acquiring the spatial position of at least one user's eyes; and determining, from the spatial position of the at least one user's eyes, the viewpoint at which the eyes of each of the at least one user are located.
  • rendering, based on the 3D signal, the sub-pixels corresponding to the viewpoints among the multiple composite sub-pixels in the multi-viewpoint 3D display screen includes: in response to one eye of each of the at least one user being located at a single viewpoint, or both eyes each being located at a single viewpoint, rendering the sub-pixels corresponding to the single viewpoint among the multiple composite sub-pixels.
  • rendering, based on the 3D signal, the sub-pixels corresponding to the viewpoints among the multiple composite sub-pixels in the multi-viewpoint 3D display screen includes: in response to one eye of each of the at least one user being located at a single viewpoint, or both eyes each being located at a single viewpoint, rendering the sub-pixel corresponding to the single viewpoint among the multiple composite sub-pixels and at least one sub-pixel adjacent to the sub-pixel corresponding to the single viewpoint.
  • rendering, based on the 3D signal, the sub-pixels corresponding to the viewpoints among the multiple composite sub-pixels in the multi-viewpoint 3D display screen includes: in response to one eye of each of the at least one user being located between two viewpoints, or both eyes each being located between two viewpoints, rendering the sub-pixels corresponding to the two viewpoints among the multiple composite sub-pixels.
  • the 3D display method further includes: detecting facial information of at least one user.
  • detecting the facial information of at least one user includes: detecting the facial information of at least two users.
  • a 3D display terminal is provided, which includes a processor, a memory storing program instructions, and a multi-viewpoint 3D display screen.
  • the multi-viewpoint 3D display screen includes multiple composite pixels; each composite pixel includes multiple composite sub-pixels, and each composite sub-pixel is composed of multiple sub-pixels corresponding to multiple viewpoints.
  • the processor is configured to execute the program instructions to perform the method of any one of claims 10 to 16.
  • the 3D display terminal is a smart TV, a smart cell phone, a tablet computer, a personal computer, or a wearable device.
  • the computer-readable storage medium provided by the embodiment of the present disclosure stores computer-executable instructions, and the computer-executable instructions are configured to execute the above-mentioned 3D display method.
  • the computer program product provided by the embodiments of the present disclosure includes a computer program stored on a computer-readable storage medium.
  • the above-mentioned computer program includes program instructions.
  • when the program instructions are executed by a computer, the computer executes the above-mentioned 3D display method.
  • the 3D display device, 3D display method, 3D display terminal, computer-readable storage medium, and computer program product provided by the embodiments of the present disclosure can achieve the technical effects described above. The accompanying drawings are briefly described as follows:
  • FIG. 1A and 1B are schematic structural diagrams of a 3D display device according to an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of the hardware structure of a 3D display device according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of the software structure of the 3D display device shown in FIG. 2;
  • FIGS. 4A and 4B are schematic diagrams of composite pixels according to an embodiment of the present disclosure.
  • FIGS. 5A and 5B are schematic diagrams of rendering performed in response to the user's viewpoint positions according to an embodiment of the present disclosure, in which the user's eyes are each located at a single viewpoint;
  • FIG. 5C is a schematic diagram of rendering performed in response to the user's viewpoint positions according to an embodiment of the present disclosure, in which one of the user's eyes straddles viewpoints and the other is located at a single viewpoint;
  • FIG. 5D is a schematic diagram of rendering performed in response to the user's viewpoint positions according to an embodiment of the present disclosure, in which the user's viewpoint positions move;
  • FIG. 5E is a schematic diagram of rendering performed in response to a user's viewpoint position according to an embodiment of the present disclosure, where there are two users;
  • FIG. 6 is a schematic diagram of the steps of a 3D display method according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic diagram of the steps of a 3D display method according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of the steps of a 3D display method according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram of the steps of a 3D display method according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram of the steps of a 3D display method according to an embodiment of the present disclosure.
  • FIG. 11 is a schematic diagram of the steps of a 3D display method according to an embodiment of the present disclosure.
  • FIG. 12 is a schematic diagram of the steps of a 3D display method according to an embodiment of the present disclosure.
  • Fig. 13 is a schematic structural diagram of a 3D display terminal according to an embodiment of the present disclosure.
  • a 3D display device is provided, including: a multi-viewpoint 3D display screen (for example, a multi-viewpoint naked-eye 3D display screen) including m × n composite pixels; a video signal interface configured to receive images of a 3D signal; a 3D processing device; and an eye positioning device configured to obtain eye positioning data in real time; where each composite pixel includes a plurality of composite sub-pixels, and each composite sub-pixel is composed of i sub-pixels of the same color corresponding to i viewpoints, with i ≥ 3; and where the 3D processing device is configured to render, according to the eye positioning data and based on images of the 3D signal, the sub-pixels in each composite sub-pixel determined by the eye positioning data.
  • in this way, the 3D display can be adjusted in real time according to the viewing situation, realizing a highly flexible 3D display and providing users with a good viewing experience.
  • the display resolution of the multi-viewpoint 3D display screen is defined in terms of composite pixels; taking this composite-pixel resolution as the reference during transmission and display effectively reduces the amount of transmission and rendering computation while still maintaining an excellent display effect, enabling high-quality 3D display.
  • the eye positioning device is configured to detect the viewpoint positions of the eyes of at least one user in real time.
  • the 3D processing device is configured to, in response to one eye of each user being located at a single viewpoint, or both eyes each being located at a single viewpoint, render the sub-pixel corresponding to the single viewpoint in each composite sub-pixel.
  • an accurate display is realized for the viewpoint positions of the user's eyes.
  • the 3D processing device is configured to also render one or two sub-pixels adjacent to the sub-pixel corresponding to the single viewpoint.
  • the display brightness is enhanced by additionally rendering one or two sub-pixels adjacent to the sub-pixel corresponding to the single viewpoint where the eye is located, so that the display adapts to a strong-light environment; it is also possible to calculate the user's offset or movement trend from the eye positioning data and render the sub-pixels corresponding to the viewpoint position the user is likely to move to, so as to actively or dynamically adapt to the viewing situation and obtain an excellent viewing experience.
  • the 3D processing device is configured to, in response to one eye of each user being located between two viewpoints (straddling viewpoints), or both eyes straddling viewpoints, render the sub-pixels in each composite sub-pixel corresponding to the straddled viewpoints.
  • a clear display effect can thus be achieved even when the user's eyes straddle viewpoints; it is also possible to calculate the user's offset or movement trend from the eye positioning data and render the sub-pixels corresponding to the viewpoint position the user is likely to move to, or to the viewpoint positions passed during the movement, so as to actively or dynamically adapt to the viewing situation and obtain an excellent viewing experience.
  • the 3D display device further includes a face detection device configured to detect facial information of at least one user.
  • the user's identity can be identified by detecting the user's facial information. This is advantageous, for example, when the user's eyes or face have already been detected once and the user's interpupillary distance or other biometric information is known: the viewpoint positions of the eyes can then be calculated faster using the known information, further improving the speed of face recognition or eye positioning.
  • the eye positioning device is configured to obtain the viewpoint positions of the respective eyes of at least two users in real time.
  • the 3D processing device is an FPGA or ASIC chip or FPGA or ASIC chipset.
  • each composite sub-pixel includes multiple sub-pixels in a single row or a single column.
  • a 3D display method for a multi-viewpoint 3D display screen is provided; the multi-viewpoint 3D display screen includes m × n composite pixels, each composite pixel includes multiple composite sub-pixels, and each composite sub-pixel consists of i sub-pixels of the same color corresponding to i viewpoints, where i ≥ 3. The 3D display method includes: transmitting images of a 3D signal; obtaining eye positioning data in real time; and, according to the eye positioning data, rendering, based on the images of the 3D signal, the sub-pixels in each composite sub-pixel determined by the eye positioning data.
  • real-time acquisition of eye positioning data includes real-time detection of the viewpoint positions of the eyes of at least one user.
  • the rendering step includes: in response to one eye of each user being located at a single viewpoint, or both eyes each being located at a single viewpoint, rendering the sub-pixels corresponding to the single viewpoint in each composite sub-pixel.
  • the rendering step includes: in response to one eye of each user being located at a single viewpoint, or both eyes each being located at a single viewpoint, rendering, in each composite sub-pixel, the sub-pixel corresponding to the single viewpoint and one or two sub-pixels adjacent to it.
  • the rendering step includes: in response to one eye of each user straddling viewpoints, or both eyes straddling viewpoints, rendering the sub-pixels in each composite sub-pixel corresponding to the straddled viewpoints.
  • the 3D display method further includes: detecting facial information of at least one user.
  • detecting the facial information of at least one user includes detecting the facial information of at least two users.
  • Fig. 1A shows a schematic structural diagram of a 3D display device according to an embodiment of the present disclosure.
  • as shown in FIG. 1A, a 3D display device 100 is provided, which includes a multi-viewpoint 3D display screen 110, a 3D processing device 130, a video signal interface 140 configured to receive images of a 3D signal, and an eye positioning device 150.
  • the multi-view 3D display screen 110 may include a display panel and a grating (not labeled) covering the display panel.
  • the multi-view 3D display screen 110 includes m × n composite pixels and thus defines a display resolution of m × n.
  • the multi-view 3D display screen 110 includes m columns and n rows of composite pixels and thus defines a display resolution of m × n.
  • the display resolution of m × n may be a resolution above Full HD (FHD), including but not limited to 1920 × 1080, 1920 × 1200, 2048 × 1280, 2560 × 1440, 3840 × 2160, etc.
  • the 3D processing device is communicatively connected with the multi-view 3D display screen.
  • the 3D processing device is communicatively connected with the driving device of the multi-view 3D display screen.
  • each composite pixel includes a plurality of composite sub-pixels, and each composite sub-pixel is composed of i sub-pixels of the same color corresponding to i viewpoints, where i ≥ 3.
  • in the illustrated embodiments, for example, i = 6.
  • the three composite sub-pixels respectively correspond to three colors, namely red (R), green (G) and blue (B).
  • the composite sub-pixels 410, 420, 430 in the composite pixel 400 are arranged in a single column.
  • the composite sub-pixels 410, 420, 430 include sub-pixels 411, 421, and 431, respectively, each arranged in a single row.
  • the composite sub-pixels in the composite pixel are arranged differently or the sub-pixels in the composite sub-pixel are arranged differently.
  • the composite sub-pixels 440, 450, and 460 in the composite pixel 400 are arranged in a single row.
  • the composite sub-pixels 440, 450, and 460 include sub-pixels 441, 451, and 461, respectively, each arranged in a single row.
  • the 3D display device 100 is provided with a single 3D processing device 130.
  • a single 3D processing device 130 processes the rendering of each composite sub-pixel of each composite pixel of the 3D display screen 110 at the same time.
  • the 3D display device 100 may be provided with, for example, two, three or more 3D processing devices 130, which process, in parallel, serially, or in a serial-parallel combination, the rendering of each composite sub-pixel of each composite pixel of the 3D display screen 110.
  • the 3D processing devices can also be allocated in other ways and process the multiple rows and multiple columns of composite pixels or composite sub-pixels of the 3D display screen 110 in parallel, which falls within the scope of the embodiments of the present disclosure.
  • the 3D processing device 130 may further include a buffer 131 to buffer the received image.
  • the 3D processing device is an FPGA or ASIC chip or FPGA or ASIC chipset.
  • the 3D display device 100 further includes a processor 101 communicatively connected to the 3D processing device 130 through the video signal interface 140.
  • the processor 101 is included in a computer or a smart terminal, such as a mobile terminal, or as a processor unit thereof.
  • the processor 101 may be arranged outside the 3D display device.
  • the 3D display device may be a multi-viewpoint 3D display with a 3D processing device, such as a non-smart 3D TV or a mobile TV in public transportation facilities.
  • an exemplary embodiment of a 3D display device includes a processor internally.
  • the video signal interface 140 is configured as an internal interface that connects the processor 101 and the 3D processing device 130.
  • this structure can be further clarified with reference to the 3D display device 200 implemented as a mobile terminal shown in FIGS. 2 and 3.
  • the video signal interface 140, as the internal interface of the 3D display device 200, may be a MIPI interface, a mini-MIPI interface, an LVDS interface, a mini-LVDS interface, or a DisplayPort interface.
  • the processor 101 of the 3D display device 100 may further include a register 122.
  • the register 122 can be used to temporarily store instructions, data, and addresses.
  • the 3D display device 100 further includes an eye positioning device 150 configured to obtain eye positioning data in real time, so that the 3D processing device 130 can render the corresponding sub-pixels in the composite pixel (composite sub-pixel) according to the eye positioning data.
  • the eye positioning device 150 is communicatively connected to the 3D processing device 130, so that the 3D processing device 130 can directly receive eye positioning data.
  • an eye positioning data interface (not shown) is also provided; the eye positioning device can be directly connected to the processor of the 3D display device, and the 3D processing device obtains the eye positioning data from the processor via the eye positioning data interface.
  • the eye positioning device can be connected to the processor and the 3D processing device at the same time.
  • on the one hand, the 3D processing device can directly obtain eye positioning data from the eye positioning device; on the other hand, other information obtained by the eye positioning device can be processed by the processor.
  • the eye positioning device 150 is configured to obtain eye positioning data in real time, and the 3D processing device renders, based on the images of the 3D signal, the sub-pixels in each composite sub-pixel determined by the eye positioning data obtained in real time.
  • the eye positioning device may include two black and white cameras, an eye positioning image processor, and an eye positioning data interface.
  • the two black-and-white cameras can capture images of the user's face at high speed (in real time); the eye positioning image processor can recognize the user's eyes in the two images and calculate the actual spatial positions of the eyes, and the eye positioning data interface can transmit these spatial positions. A minimal triangulation sketch is given below.
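The disclosure does not spell out the positioning math, but one common way to obtain spatial eye positions from two cameras is stereo triangulation of the detected eye pixels; the sketch below assumes a rectified pinhole stereo pair and made-up calibration values, so it is illustrative rather than the patent's algorithm.

```python
def triangulate_depth(x_left_px, x_right_px, focal_px, baseline_mm):
    """Depth from horizontal disparity for a rectified stereo pair (pinhole model)."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at infinity or bad match")
    return focal_px * baseline_mm / disparity      # depth Z in mm

def eye_spatial_position(x_left_px, y_left_px, x_right_px,
                         focal_px=1200.0, baseline_mm=60.0,
                         cx=640.0, cy=360.0):
    """Back-project one eye's pixel coordinates to camera-space (X, Y, Z) in mm.

    (x_left_px, y_left_px): eye location in the left camera image.
    x_right_px: eye location (same row) in the right camera image.
    focal_px, baseline_mm, cx, cy: assumed calibration values for illustration.
    """
    z = triangulate_depth(x_left_px, x_right_px, focal_px, baseline_mm)
    x = (x_left_px - cx) * z / focal_px
    y = (y_left_px - cy) * z / focal_px
    return (x, y, z)

# Example: the left camera sees the eye at (700, 350), the right camera at (664, 350).
print(eye_spatial_position(700.0, 350.0, 664.0))   # roughly 2000 mm in front of the cameras
```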
  • the 3D processing device is configured to determine the viewpoint from the spatial position of the eye.
  • determining the viewpoint based on the spatial position of the eye may also be implemented by the eye positioning image processor of the eye positioning device.
  • FIG. 1B shows a schematic structural diagram of a 3D display device according to an embodiment of the present disclosure.
  • the 3D display device 100 further includes a face detection device 158.
  • the face detection device 158 has a visual recognition function, such as a face recognition function, and is configured to detect the facial information of at least one user.
  • the face detection device 158 may be connected to the eye positioning device 150, and may also be connected to the 3D processing device 130 to transmit the detected face information.
  • the face detection device 158 may be provided as an independent device, or integrated in the eye positioning device 150, or integrated in the processor 101 of the 3D display device 100, or integrated in other parts of the 3D display device that have similar functions.
  • the face detection device detects the facial information of the two users, and the eye positioning device obtains the viewpoint positions of the eyes of the two users in real time.
  • the 3D processing device renders the sub-pixels in each composite sub-pixel based on the image of the 3D signal according to the viewpoint positions of the respective eyes of the two users.
  • when the face detection device and the eye positioning device detect a conflict between the viewpoint positions of the eyes of more than one user, such as two users, for example when the left eye of one user and the right eye of the other user are located at the same viewpoint position, a two-dimensional (2D) display is presented to these users through the multi-viewpoint 3D display screen; a sketch of this fallback follows.
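A minimal sketch of such a conflict check and 2D fallback is shown below; the data layout and function names are assumptions for illustration.

```python
def has_viewpoint_conflict(users):
    """users: list of (left_vp, right_vp) tuples, one per detected user.

    A conflict exists when two users need different images at the same viewpoint,
    e.g. one user's left eye and another user's right eye share a viewpoint index.
    """
    demands = {}                            # viewpoint -> set of roles demanded there
    for left_vp, right_vp in users:
        demands.setdefault(left_vp, set()).add("left")
        demands.setdefault(right_vp, set()).add("right")
    return any(len(roles) > 1 for roles in demands.values())

def choose_display_mode(users):
    """Fall back to 2D when the users' viewpoint demands cannot all be satisfied."""
    return "2D" if has_viewpoint_conflict(users) else "3D"

print(choose_display_mode([(1, 3), (4, 5)]))   # no shared viewpoint -> "3D"
print(choose_display_mode([(2, 4), (0, 2)]))   # viewpoint 2 needed as left and right -> "2D"
```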
  • the multi-viewpoint 3D display screen 110 can define six viewpoints V1-V6, and the user's eyes can see the composite pixels in the display panel of the multi-viewpoint 3D display screen 110 at each viewpoint (spatial position).
  • the two different images seen by the user's eyes at different points of view form a parallax, and a 3D image is synthesized in the brain.
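As an illustration of how a determined eye position might be mapped to one of the six viewpoints V1-V6, the sketch below assumes evenly spaced viewpoints at a design viewing distance; the spacing, distance, and scaling model are assumptions, not the patent's optics.

```python
def viewpoint_index(eye_x_mm, eye_z_mm,
                    num_viewpoints=6,
                    viewpoint_spacing_mm=32.5,      # assumed spacing at the design distance
                    design_distance_mm=500.0):
    """Map an eye's lateral position to the index of the nearest viewpoint (0-based).

    The lateral position is first scaled to the design viewing distance, since the
    viewpoint fan widens with distance from the screen; this is an illustrative
    model only.
    """
    x_at_design_plane = eye_x_mm * design_distance_mm / eye_z_mm
    # Viewpoint centers sit at regular spacings around the screen axis.
    offset = x_at_design_plane / viewpoint_spacing_mm + (num_viewpoints - 1) / 2.0
    index = round(offset)
    return max(0, min(num_viewpoints - 1, index))

print(viewpoint_index(-30.0, 500.0))   # -> 2 (slightly left of the screen axis)
print(viewpoint_index(+30.0, 500.0))   # -> 3 (slightly right of the screen axis)
```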
  • the 3D processing device 130 receives the image of the decompressed 3D signal from the processor 101 through the video signal interface 140 as an internal interface.
  • the image of the 3D signal may be two images with m × n (signal) resolution or a composite image with 2m × n or m × 2n (signal) resolution.
  • the two images or the composite image may include different types of images and may be in various arrangements.
  • two images with m × n (signal) resolution may be in a side-by-side format or a top-and-bottom format.
  • the two images can be a left-eye parallax image and a right-eye parallax image, respectively, or a rendered color image and a depth-of-field image, respectively.
  • a composite image with 2m × n or m × 2n (signal) resolution may be in a left-right interleaved format, a top-bottom interleaved format, or a checkerboard format.
  • the composite image can be an interleaved left-eye and right-eye parallax composite image, or an interleaved rendered-color and depth-of-field composite image.
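The sketch below shows how a processing step might split a few of these packings into the two per-view images using NumPy; the packing names and array shapes are assumptions chosen for illustration.

```python
import numpy as np

def split_3d_frame(frame: np.ndarray, packing: str):
    """Split one packed frame into (left_view, right_view), each of shape (n, m, 3).

    packing: 'side_by_side'       -> frame is (n, 2m, 3)
             'top_and_bottom'     -> frame is (2n, m, 3)
             'column_interleaved' -> frame is (n, 2m, 3), columns alternate L, R, L, R, ...
    """
    if packing == "side_by_side":
        half = frame.shape[1] // 2
        return frame[:, :half], frame[:, half:]
    if packing == "top_and_bottom":
        half = frame.shape[0] // 2
        return frame[:half, :], frame[half:, :]
    if packing == "column_interleaved":
        return frame[:, 0::2], frame[:, 1::2]
    raise ValueError(f"unsupported packing: {packing}")

# Example: a composite 2m x n frame with m = 4, n = 3.
frame = np.arange(3 * 8 * 3, dtype=np.uint8).reshape(3, 8, 3)
left, right = split_3d_frame(frame, "side_by_side")
print(left.shape, right.shape)   # (3, 4, 3) (3, 4, 3)
```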
  • the 3D processing device 130 receives, through the video signal interface 140, two images of the 3D signal with m × n (signal) resolution; that is, the (signal) resolution of each image is consistent with the display resolution m × n provided by the composite pixels of the multi-viewpoint 3D display screen 110 divided according to viewpoints.
  • the 3D processing device 130 receives, through the video signal interface 140, a composite image of the 3D signal with 2m × n or m × 2n (signal) resolution; that is, half of the (signal) resolution of the composite image is consistent with the display resolution m × n provided by the composite pixels of the multi-viewpoint 3D display screen 110 divided according to viewpoints.
  • on the one hand, since the viewpoint information has nothing to do with the transmission process, 3D display can be achieved with a small amount of processing computation and no loss of resolution; on the other hand, since the composite pixels (composite sub-pixels) correspond to the viewpoint arrangement, the rendering of the display screen can be realized in a "point-to-point" manner, which greatly reduces the amount of computation, as in the sketch below.
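A rough sketch of such point-to-point rendering is given below: each occupied viewpoint's image is copied directly into its plane of a per-viewpoint buffer, with no resampling; the buffer layout is an assumption for illustration.

```python
import numpy as np

def point_to_point_render(left_view, right_view, left_vp, right_vp, num_viewpoints=6):
    """Fill a per-viewpoint frame buffer by direct copy ('point-to-point').

    left_view/right_view: (n, m, 3) images, matching the m x n composite-pixel grid.
    Returns a buffer of shape (num_viewpoints, n, m, 3) in which only the two
    occupied viewpoints are written; unwritten viewpoints stay dark (zero).
    """
    n, m, channels = left_view.shape
    buffer = np.zeros((num_viewpoints, n, m, channels), dtype=left_view.dtype)
    buffer[left_vp] = left_view       # one copy per occupied viewpoint, no resampling
    buffer[right_vp] = right_view
    return buffer

left = np.full((3, 4, 3), 100, dtype=np.uint8)
right = np.full((3, 4, 3), 200, dtype=np.uint8)
buf = point_to_point_render(left, right, left_vp=4, right_vp=1)
print(buf.shape)                      # (6, 3, 4, 3); only planes 1 and 4 are lit
```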
  • in contrast, the transmission and display of images or videos on conventional 3D displays are still based on 2D display panels, which involves not only the problems of reduced resolution and dramatically increased rendering computation, but also the problems of multiple format adjustments and image or video display adaptation.
  • the register 122 of the processor 101 may be used to receive information about the display requirements of the multi-view 3D display screen 110.
  • the information is typically independent of the i viewpoints and related to the m × n display resolution of the multi-viewpoint 3D display screen 110, so that the processor 101 can send to the multi-viewpoint 3D display screen 110 images of a 3D signal that meet its display requirements.
  • the information may be, for example, a data packet used to initially establish video transmission.
  • when transmitting images of the 3D signal, the processor 101 does not need to consider information related to the i viewpoints (i ≥ 3) of the multi-viewpoint 3D display screen 110; instead, the processor 101 can transmit to the multi-viewpoint 3D display screen 110 images of a 3D signal that meet its requirements by relying on the information related to the m × n resolution of the multi-viewpoint 3D display screen 110 received by the register 122.
  • the 3D display device 100 may further include a codec configured to decompress and decode the compressed 3D signal and send the decompressed 3D signal to the 3D processing device 130 via the video signal interface 140.
  • the processor 101 of the 3D display device 100 reads images of the 3D signal from the memory, or receives them from outside the 3D display device 100, for example through an external interface, and then transmits the read or received images of the 3D signal to the 3D processing device 130 via the video signal interface 140.
  • the 3D display device 100 further includes a format adjuster (not shown), which is, for example, integrated in the processor 101, configured as a codec or as part of a GPU, for preprocessing images of the 3D signal so that the two images contained therein have a (signal) resolution of m × n or the composite image contained therein has a (signal) resolution of 2m × n or m × 2n.
  • the 3D processing device 130 is configured to render a sub-pixel corresponding to a single viewpoint in each composite sub-pixel in response to one of the eyes of each user being located in a single viewpoint or both eyes are located in a single viewpoint.
  • the user's right eye is at the second viewpoint V2, and the left eye is at the fifth viewpoint V5.
  • the sub-pixels in each composite sub-pixel corresponding to the two viewpoints V2 and V5 are rendered based on the images of the 3D signal.
  • the user's eyes see two different images at these two viewpoints to form a parallax, and a 3D image is synthesized in the brain.
  • the 3D processing device 130 is configured to, in response to one of the eyes of each user being located in a single viewpoint or both eyes being located in a single viewpoint, respectively, rendering the sub-pixels corresponding to the single viewpoint in each composite sub-pixel, and also rendering One or two adjacent sub-pixels of sub-pixels corresponding to a single viewpoint.
  • the user's right eye is at the second viewpoint V2, and the left eye is at the fifth viewpoint V5.
  • the sub-pixels in each composite sub-pixel corresponding to the two viewpoints V2 and V5 are rendered based on the images of the 3D signal.
  • sub-pixels corresponding to the viewpoint V1 adjacent to the viewpoint V2 and sub-pixels corresponding to the viewpoint V4 adjacent to the viewpoint V5 are also rendered.
  • sub-pixels corresponding to two viewpoints adjacent to one of the two viewpoints V2 and V5 or respectively adjacent to the two viewpoints may also be rendered.
  • the multi-view 3D display screen may include a self-luminous display panel, such as a MICRO-LED display panel.
  • the self-luminous display panel, such as the MICRO-LED display panel, is configured such that sub-pixels that are not rendered do not emit light; for multi-viewpoint ultra-high-definition displays, this can greatly reduce the power consumed by the display.
  • the 3D processing device 130 is configured to, in response to one of the eyes of each user straddling the viewpoint or the eyes straddling the viewpoint, render the sub-pixels in each composite sub-pixel corresponding to the straddled viewpoint.
  • the user's right eye spans two viewpoints V1 and V2, and the left eye is at the fifth viewpoint V5.
  • based on the images of the 3D signal, the sub-pixels in each composite sub-pixel corresponding to the two straddled viewpoints V1 and V2, as well as the sub-pixels corresponding to the single viewpoint V5, are rendered. Therefore, the user's eyes, located between viewpoints V1 and V2 and at viewpoint V5 respectively, can see the rendered images from different angles, generating parallax to form the 3D effect of the 3D display.
  • the 3D processing device 130 is configured to, in response to the movement of the viewpoint position of one or both of the user's eyes, render the sub-pixel corresponding to the viewpoint position following the movement of the user's eyes in each composite sub-pixel.
  • the user's right eye moves from viewpoint V1 to viewpoint V3 and the left eye moves from viewpoint V4 to viewpoint V6; the viewpoints corresponding to the rendered sub-pixels in each composite sub-pixel change accordingly from V1 and V4 to V3 and V6. Therefore, the eyes of a user in motion can still see the rendered images from different angles in real time, generating parallax to form the 3D effect of the 3D display; a sketch of such viewpoint-following rendering is given below.
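The sketch below illustrates viewpoint-following rendering in this spirit: re-rendering is triggered only when the eye positioning data indicates that the occupied viewpoints have changed; the sampling format and callback interface are assumptions.

```python
def follow_moving_eyes(frames_of_eye_data, render):
    """Re-render whenever the occupied viewpoints change between frames.

    frames_of_eye_data: iterable of (left_vp, right_vp) per eye-positioning sample.
    render: callback taking (left_vp, right_vp), e.g. a point-to-point renderer.
    """
    previous = None
    for left_vp, right_vp in frames_of_eye_data:
        current = (left_vp, right_vp)
        if current != previous:          # eyes moved to new viewpoints
            render(*current)
            previous = current

# Example: the left eye drifts V4 -> V6 while the right eye drifts V1 -> V3 (0-based indices).
samples = [(3, 0), (3, 0), (4, 1), (5, 2), (5, 2)]   # (left_vp, right_vp) per sample
follow_moving_eyes(samples, lambda l, r: print(f"render left=V{l+1} right=V{r+1}"))
```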
  • the 3D processing device 130 is configured to, in response to the viewpoint positions of the respective eyes of the at least two users, render the sub-pixels in each composite sub-pixel corresponding to the viewpoint positions of the respective eyes of the at least two users.
  • the eyes of user 1 are at viewpoints V1 and V3, respectively, and the eyes of user 2 are at viewpoints V4 and V6, respectively.
  • the composite sub-pixels are rendered to correspond to these four viewpoints.
  • each user can watch the rendered image corresponding to his own viewing angle and generate parallax to form a 3D effect of 3D display.
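A sketch of rendering for several users at once follows; the per-viewpoint image dictionary and the user records are illustrative assumptions (a conflict check such as the one sketched earlier would normally run first).

```python
import numpy as np

def render_for_users(view_images, users, num_viewpoints=6):
    """Render sub-pixels for every user's occupied viewpoints.

    view_images: dict viewpoint_index -> (n, m, 3) image prepared for that viewpoint.
    users: list of dicts like {"left_vp": 0, "right_vp": 2}, one per detected user.
    """
    shape = next(iter(view_images.values())).shape
    buffer = np.zeros((num_viewpoints,) + shape, dtype=np.uint8)
    for user in users:
        for vp in (user["left_vp"], user["right_vp"]):
            buffer[vp] = view_images[vp]     # each user sees their own viewing angle
    return buffer

# User 1 at viewpoints V1 and V3, user 2 at viewpoints V4 and V6 (0-based indices).
views = {vp: np.full((3, 4, 3), 40 * (vp + 1), dtype=np.uint8) for vp in range(6)}
buf = render_for_users(views, [{"left_vp": 0, "right_vp": 2},
                               {"left_vp": 3, "right_vp": 5}])
print([int(buf[vp].max()) for vp in range(6)])   # only planes 0, 2, 3, 5 are lit
```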
  • the 3D display device may be a 3D display device including a processor.
  • the 3D display device may be configured as a smart cell phone, tablet computer, smart TV, wearable device, in-vehicle device, notebook computer, ultra mobile personal computer (UMPC), netbook, personal digital assistant (PDA), etc.
  • FIG. 2 shows a schematic diagram of the hardware structure of a 3D display device 200 implemented as a mobile terminal, such as a smart cell phone or a tablet computer.
  • the 3D display device 200 may include a processor 201, an external storage interface 202, an (internal) memory 203, a universal serial bus (USB) interface 204, a charging management module 205, a power management module 206, a battery 207, a mobile communication module 208, a wireless communication module 210, antennas 209 and 211, an audio module 212, a speaker 213, a receiver 214, a microphone 215, an earphone interface 216, a button 217, a motor 218, an indicator 219, a subscriber identity module (SIM) card interface 220, a camera 221, a multi-viewpoint 3D display screen 110, a 3D processing device 130, a video signal interface 140, an eye positioning device 150, a face detection device 158, a sensor module 230, etc.
  • the sensor module 230 may include proximity light sensor 2301, ambient light sensor 2302, pressure sensor 2303, air pressure sensor 2304, magnetic sensor 2305, gravity sensor 2306, gyroscope sensor 2307, acceleration sensor 2308, distance sensor 2309, temperature sensor 2310, fingerprint sensor 2311, touch sensor 2312, bone conduction sensor 2313, etc.
  • the structure illustrated in the embodiments of the present disclosure does not constitute a specific limitation on the 3D display device 200.
  • the 3D display device 200 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 201 may include one or more processing units.
  • the processor 201 may include an application processor (AP), a modem processor, a baseband processor, a register 222, a graphics processing unit (GPU) 223, an image signal processor (ISP), a controller, a memory, a codec 224, a digital signal processor (DSP), a neural network processor (NPU), etc.
  • different processing units may be independent devices, or may be integrated in one or more processors.
  • the processor 201 may also be provided with a high-speed cache, configured to store instructions or data that the processor 201 has just used or uses cyclically; when the processor 201 needs to use the instructions or data again, they can be called directly from this memory.
  • the processor 201 may include one or more interfaces.
  • Interfaces can include integrated circuit (I2C) interface, integrated circuit built-in audio (I2S) interface, pulse code modulation (PCM) interface, universal asynchronous receiver transmitter (UART) interface, mobile industry processor interface (MIPI), universal input and output (GPIO) interface, user identification module (SIM) interface, universal serial bus (USB) interface, etc.
  • the I2C interface is a two-way synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 201 may include multiple sets of I2C buses.
  • the processor 201 can communicate with the touch sensor 2312, the charger, the flash, the camera 221, the eye positioning device 150, the face detection device 158 and the like respectively through different I2C bus interfaces.
  • the MIPI interface may be configured to connect the processor 201 and the multi-view 3D display screen 110.
  • the MIPI interface can also be configured to connect peripheral devices such as the camera 221, the eye positioning device 150, and the face detection device 158.
  • the wireless communication function of the 3D display device 200 can be realized by the antennas 209 and 211, the mobile communication module 208, the wireless communication module 210, the modem processor or the baseband processor.
  • the antennas 209, 211 are configured to transmit and receive electromagnetic wave signals.
  • Each antenna in the 3D display device 200 may be configured to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the mobile communication module 208 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the 3D display device 200.
  • at least part of the functional modules of the mobile communication module 208 may be provided in the processor 201.
  • at least part of the functional modules of the mobile communication module 208 and at least part of the modules of the processor 201 may be provided in the same device.
  • the wireless communication module 210 can provide applications on the 3D display device 200 including wireless local area network (WLAN), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication technology (NFC), infrared technology (IR) and other wireless communication solutions.
  • the wireless communication module 210 may be one or more devices integrating at least one communication processing module.
  • the antenna 209 of the 3D display device 200 is coupled with the mobile communication module 208, and the antenna 211 is coupled with the wireless communication module 210, so that the 3D display device 200 can communicate with the network and other devices through wireless communication technology.
  • wireless communication technologies may include at least one of Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Time Division Synchronous Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), BT, GNSS, WLAN, NFC, FM, or IR technologies.
  • the external interface configured to receive 3D signals may include a USB interface 204, a mobile communication module 208, a wireless communication module 209, or a combination thereof.
  • other feasible configurations are also conceivable as interfaces for receiving 3D signals, such as the interfaces mentioned above.
  • the memory 203 may be configured to store computer executable program code, and the executable program code includes instructions.
  • the processor 201 executes various functional applications and data processing of the 3D display device 200 by running instructions stored in the memory 203.
  • the external memory interface 202 may be configured to connect to an external memory card, such as a Micro SD card, so as to expand the storage capacity of the 3D display device 200.
  • the external memory card communicates with the processor 201 through the external memory interface 202 to realize the data storage function.
  • the memory of the 3D display device may include (internal) memory 203, an external memory card connected to external memory interface 202, or a combination thereof.
  • the video signal interface may also adopt different internal interface connection modes or combinations of the above-mentioned embodiments.
  • the camera 221 may capture images or videos.
  • the 3D display device 200 implements the display function through the video signal interface 140, the 3D processing device 130, the multi-view 3D display screen 110, and the application processor.
  • the 3D display device 200 may include a GPU 223, for example, configured to process 3D video images in the processor 201, and may also process 2D video images.
  • the 3D display device 200 further includes a codec 224 configured to compress or decompress digital video, for example, a 3D signal.
  • the video signal interface 140 is configured to output a 3D signal processed by the GPU or the codec 224 or both, such as an image of a decompressed 3D signal, to the 3D processing device 130.
  • the GPU or codec 224 is integrated with a format adjuster.
  • the multi-viewpoint 3D display screen 110 is configured to display 3D images or videos or the like.
  • the multi-view 3D display 110 includes a display panel.
  • the display panel can use a liquid crystal display (LCD), organic light-emitting diode (OLED), active-matrix organic light-emitting diode (AMOLED), flexible light-emitting diode (FLED), Mini-LED, Micro-LED, Micro-OLED, quantum dot light-emitting diode (QLED), etc.
  • the 3D display device 200 may further include an eye positioning device 150 or an eye positioning data interface configured to obtain eye positioning data in real time, so that the 3D processing device 130 can render the corresponding sub-pixels in the composite pixels (composite sub-pixels) based on the eye positioning data.
  • the eye positioning device 150 is communicatively connected to the 3D processing device 130, and may also be communicatively connected to the processor 201, for example, bypassed to the processor 201.
  • the eye positioning device 150 may be connected to the processor 201 and the 3D processing device 130 at the same time.
  • the 3D display device 200 further includes a face detection device 158.
  • the face detection device 158 has a visual recognition function, such as a face recognition function, and is configured to detect facial information of at least one user.
  • the face detection device 158 may be connected to the eye positioning device 150, or may be connected to the 3D processing device 130 to transmit the detected face information.
  • the 3D display device 200 can implement audio functions through an audio module 212, a speaker 213, a receiver 214, a microphone 215, a headphone interface 216, an application processor, and the like.
  • the button 217 includes a power button, a volume button, and so on.
  • the button 217 may be a mechanical button. It can also be a touch button.
  • the 3D display device 200 may receive key input, and generate key signal input related to user settings and function control of the 3D display device 200.
  • the motor 218 can generate vibration prompts.
  • the motor 218 may be configured as an incoming call vibration notification, or may be configured as a touch vibration feedback.
  • the SIM card interface 220 is configured to connect to a SIM card.
  • the 3D display device 200 adopts an eSIM, that is, an embedded SIM card.
  • the ambient light sensor 2302 is configured to sense ambient light conditions. For example, the brightness of the display screen can be adjusted accordingly.
  • when the eyes of the user are located at a single viewpoint and the ambient light sensor 2302 detects that the brightness of the external environment is high, the 3D processing device 130 renders not only the sub-pixels corresponding to the single viewpoint in each composite sub-pixel but also one or two sub-pixels adjacent to them, to enhance the display brightness and adapt to the strong-light environment; a sketch of such a policy follows.
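An illustrative brightness policy driven by the ambient light reading is sketched below; the lux thresholds are invented for the example and are not taken from the disclosure.

```python
def extra_adjacent_subpixels(ambient_lux, boost_threshold_lux=5000,
                             strong_threshold_lux=20000):
    """Decide how many adjacent sub-pixels to render in addition to the target one.

    The thresholds are illustrative; the idea is that brighter surroundings call
    for more lit sub-pixels per composite sub-pixel to keep the 3D image visible.
    """
    if ambient_lux >= strong_threshold_lux:
        return 2          # render both neighbours of the viewpoint's sub-pixel
    if ambient_lux >= boost_threshold_lux:
        return 1          # render one neighbour
    return 0              # normal indoor light: render only the target sub-pixel

print(extra_adjacent_subpixels(300))      # 0
print(extra_adjacent_subpixels(8000))     # 1
print(extra_adjacent_subpixels(50000))    # 2
```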
  • the pressure sensor 2303 is configured to sense a pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 2303 may be provided on the multi-view 3D display screen 110, which falls within the scope of the embodiments of the present disclosure.
  • the air pressure sensor 2304 is configured to measure air pressure.
  • the magnetic sensor 2305 includes a Hall sensor.
  • the gravity sensor 2306 is a sensor that converts motion or gravity into electrical signals, and is mainly configured to measure parameters such as tilt angle, inertial force, impact, and vibration.
  • the gyro sensor 2307 may be configured to determine the movement posture of the 3D display device 200.
  • the acceleration sensor 2308 can detect the magnitude of the acceleration of the 3D display device 200 in various directions (generally three axes).
  • the distance sensor 2309 can be configured to measure distance
  • the temperature sensor 2310 may be configured to detect temperature.
  • the fingerprint sensor 2311 is configured to collect fingerprints.
  • the touch sensor 2312 may be disposed in the multi-viewpoint 3D display screen 110, and the touch screen is composed of the touch sensor 2312 and the multi-viewpoint 3D display screen 110, which is also called a “touch screen”.
  • the bone conduction sensor 2313 can acquire vibration signals.
  • the charging management module 205 is configured to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the power management module 206 is configured to connect the battery 207, the charging management module 205 and the processor 201.
  • the software system of the 3D display device 200 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment shown in the present disclosure exemplifies the software structure of the 3D display device 200 by taking an Android system with a layered architecture as an example.
  • the embodiments of the present disclosure can be implemented in different software systems, such as operating systems.
  • FIG. 3 is a schematic diagram of the software structure of the 3D display device 200 according to an embodiment of the present disclosure.
  • the layered architecture divides the software into several layers. Communication between layers through software interface.
  • the Android system is divided into four layers, from top to bottom, the application layer 310, the framework layer 320, the core class library and runtime (Runtime) 330, and the kernel layer 340, respectively.
  • the application layer 310 may include a series of application packages. As shown in Figure 3, the application package can include applications such as Bluetooth, WLAN, navigation, music, camera, calendar, call, video, gallery, map, short message, etc. For example, a 3D video display method can be implemented in a video application.
  • the framework layer 320 provides an application programming interface (API) and a programming framework for applications in the application layer.
  • the framework layer includes some predefined functions. For example, in some embodiments of the present disclosure, the function or algorithm for recognizing the collected 3D video image and the algorithm for processing the image may be included in the framework layer.
  • the framework layer 320 may include a resource manager, a phone manager, a content manager, a notification manager, a window manager, a view system, an installation package manager, and the like.
  • Android Runtime includes core libraries and virtual machines. Android Runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions to be called by the Java language, and the other part is the core library of Android.
  • the application layer and the framework layer run in a virtual machine.
  • the virtual machine executes the java files of the application layer and the framework layer as binary files.
  • the virtual machine is configured to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the core class library can include multiple functional modules. For example: 3D graphics processing library (for example: OpenGL ES), surface manager, image processing library, media library, graphics engine (for example: SGL), etc.
  • the kernel layer 340 is a layer between hardware and software.
  • the kernel layer includes at least camera driver, audio and video interface, call interface, Wifi interface, sensor driver, power management, GPS interface.
  • a 3D display device implemented as a mobile terminal with the structure shown in FIG. 2 and FIG. 3 is taken as an example to describe embodiments of 3D video transmission and display in the 3D display device; however, it is conceivable that in other embodiments more or fewer features can be included or changes can be made to the features.
  • a 3D display device 200 such as a mobile terminal, for example a smart cell phone or a tablet computer, uses the mobile communication module 208 and antenna 209, or the wireless communication module 210 and antenna 211, as external interfaces to receive, for example, compressed 3D signals from a network such as a cellular network, a WLAN network, or Bluetooth.
  • the compressed 3D signal is image-processed by the GPU 223 and encoded, decoded and decompressed by the codec 224; the decompressed 3D signal is then sent to the 3D processing device 130 through the video signal interface 140, such as a MIPI or mini-MIPI interface, serving as an internal interface. The images of the decompressed 3D signal include the two images or the composite image of the embodiments of the present disclosure. Furthermore, the 3D processing device 130 renders the sub-pixels in the composite sub-pixels of the display screen accordingly, thereby realizing 3D video playback.
  • alternatively, the 3D display device 200 reads the compressed 3D signal stored in the (internal) memory 203, or in an external memory card through the external memory interface 202, and implements 3D video playback through corresponding processing, transmission and rendering.
  • the aforementioned 3D video playback is implemented in a video application in the Android system application layer 310.
  • the embodiments of the present disclosure also provide a 3D display method for a multi-viewpoint 3D display screen.
  • the multi-viewpoint 3D display screen includes m×n composite pixels.
  • each composite pixel includes multiple composite sub-pixels, and each composite sub-pixel is composed of i sub-pixels of the same color corresponding to i viewpoints, where i ≥ 3 (an illustrative data-model sketch follows below).
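To make the composite-pixel terminology concrete, the following Python sketch models a grid of composite pixels, each holding one red, one green, and one blue composite sub-pixel with i same-color sub-pixels (one per viewpoint). All class and function names are hypothetical; the disclosure does not prescribe any particular data structure.

```python
# Hypothetical data model for an m x n multi-viewpoint 3D display screen.
# Each composite pixel holds three composite sub-pixels (R, G, B); each
# composite sub-pixel holds i same-color sub-pixels, one per viewpoint V1..Vi.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CompositeSubPixel:
    color: str                                              # "R", "G" or "B"
    subpixels: List[float] = field(default_factory=list)    # one value per viewpoint

@dataclass
class CompositePixel:
    csps: List[CompositeSubPixel]                            # one composite sub-pixel per color

def make_display(m: int, n: int, i: int) -> List[List[CompositePixel]]:
    """Build n rows by m columns of composite pixels with i viewpoints (i >= 3)."""
    return [[CompositePixel([CompositeSubPixel(c, [0.0] * i) for c in "RGB"])
             for _ in range(m)]
            for _ in range(n)]

# A small example grid; a real screen would have the full m x n display resolution.
display = make_display(m=8, n=4, i=6)
```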
  • the 3D display method includes:
  • S601 Obtain the spatial position of the user's eyes;
  • S602 Determine the viewpoint from the spatial position of the user's eyes;
  • S603 Render, based on the 3D signal, the sub-pixels corresponding to the viewpoint among the multiple composite sub-pixels in the multi-viewpoint 3D display screen.
  • the 3D display method includes:
  • S701 Obtain the spatial position of the eyes of at least one user
  • S702 Determine the viewpoint at which the eyes of each user of the at least one user are located from the spatial positions of the eyes of the at least one user;
  • S703 Render, based on the 3D signal, the sub-pixels corresponding to the viewpoints among the multiple composite sub-pixels in the multi-viewpoint 3D display screen (a viewpoint-determination sketch follows below).
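A hedged sketch of steps S701–S702 follows. The disclosure only states that the viewpoint is determined from the spatial position of the eyes; the linear mapping from lateral eye offset and viewing distance to a viewpoint index used here is purely an illustrative assumption.

```python
# Illustrative mapping from an eye's spatial position to a viewpoint (S701-S702).
# The linear relation between viewing angle and viewpoint index is assumed here;
# the disclosure does not specify the geometry.
import math

NUM_VIEWPOINTS = 6                    # i = 6, as in the illustrated embodiments

def eye_to_viewpoint(eye_x_mm: float, eye_z_mm: float,
                     viewpoint_pitch_deg: float = 5.0) -> float:
    """Return a fractional viewpoint position from the lateral offset eye_x_mm
    and viewing distance eye_z_mm (both relative to the screen centre)."""
    angle_deg = math.degrees(math.atan2(eye_x_mm, eye_z_mm))
    return (NUM_VIEWPOINTS + 1) / 2 + angle_deg / viewpoint_pitch_deg

def to_single_viewpoint(fractional_vp: float, tolerance: float = 0.25):
    """Round to a single viewpoint, or return None if the eye straddles two."""
    nearest = round(fractional_vp)
    if abs(fractional_vp - nearest) <= tolerance and 1 <= nearest <= NUM_VIEWPOINTS:
        return nearest
    return None

right_eye_vp = to_single_viewpoint(eye_to_viewpoint(32.0, 600.0))    # e.g. V4
left_eye_vp = to_single_viewpoint(eye_to_viewpoint(-32.0, 600.0))    # e.g. V3
```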
  • the 3D display method includes:
  • S802 Determine the viewpoint at which the eyes of each user of the at least one user are located from the spatial positions of the eyes of the at least one user;
  • S803 In response to one of the eyes of each user of the at least one user being located at a single viewpoint, or both eyes each being located at a single viewpoint, rendering the sub-pixels corresponding to the single viewpoint among the multiple composite sub-pixels (a rendering sketch follows below).
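The single-viewpoint case of S803 can be illustrated as follows, reusing the hypothetical data model sketched above: the left-eye image is written into the sub-pixels of the left eye's viewpoint and the right-eye image into those of the right eye's viewpoint, while the remaining sub-pixels stay unrendered (and, on a self-emissive panel, unlit). The function and argument names are illustrative assumptions.

```python
# Illustrative rendering for S803: each eye sits at a single viewpoint, so only
# the sub-pixels of those two viewpoints are written; all other sub-pixels are
# left unrendered. `display` follows the hypothetical CompositePixel model
# sketched earlier; left_img / right_img are (n, m, 3) arrays in R, G, B order.
def render_single_viewpoints(display, left_img, right_img,
                             left_vp: int, right_vp: int) -> None:
    for row_idx, row in enumerate(display):
        for col_idx, composite_pixel in enumerate(row):
            for color_idx, csp in enumerate(composite_pixel.csps):
                csp.subpixels[left_vp - 1] = float(left_img[row_idx, col_idx, color_idx])
                csp.subpixels[right_vp - 1] = float(right_img[row_idx, col_idx, color_idx])
```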
  • the 3D display method includes:
  • S901 Obtain the spatial position of the eyes of at least one user
  • S902 Determine the viewpoint at which the eyes of each user of the at least one user are located from the spatial positions of the eyes of the at least one user;
  • S903 In response to one of the eyes of each user of the at least one user being located at a single viewpoint, or both eyes each being located at a single viewpoint, rendering, among the multiple composite sub-pixels, the sub-pixels corresponding to the single viewpoint and at least one sub-pixel adjacent to the sub-pixels corresponding to the single viewpoint (a sketch follows below).
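For S903, a minimal sketch of additionally rendering one or both sub-pixels adjacent to the eye's viewpoint is given below, for example to raise brightness under strong ambient light or to pre-render a viewpoint the user appears to be moving towards. The neighbour weight of 0.5 is an illustrative assumption, not a value from the disclosure.

```python
# Illustrative rendering for S903: besides the sub-pixel of the viewpoint where
# the eye sits, one or both adjacent sub-pixels are also rendered.
def render_with_neighbours(csp, viewpoint: int, value: float,
                           neighbour_weight: float = 0.5) -> None:
    """csp: a composite sub-pixel whose `subpixels` list has one entry per viewpoint."""
    num_viewpoints = len(csp.subpixels)
    csp.subpixels[viewpoint - 1] = value
    for neighbour in (viewpoint - 1, viewpoint + 1):       # adjacent viewpoints
        if 1 <= neighbour <= num_viewpoints:
            csp.subpixels[neighbour - 1] = max(csp.subpixels[neighbour - 1],
                                               neighbour_weight * value)
```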
  • the 3D display method includes:
  • S1001 Obtain the spatial position of the eyes of at least one user
  • S1002 Determine the viewpoint at which the eyes of each user of the at least one user are located from the spatial positions of the eyes of the at least one user;
  • S1003 In response to one of the eyes of each user of the at least one user being located between two viewpoints, or both eyes each being located between two viewpoints, rendering the sub-pixels corresponding to the two viewpoints among the multiple composite sub-pixels (a sketch follows below).
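For S1003, the sketch below renders the sub-pixels of both straddled viewpoints when an eye falls between two viewpoints; representing the eye position as a fractional viewpoint index (as in the earlier viewpoint-determination sketch) is an assumption for illustration.

```python
# Illustrative rendering for S1003: an eye located between two viewpoints is
# served by rendering the sub-pixels of both straddled viewpoints with the same
# eye-image value.
def render_straddling(csp, fractional_vp: float, value: float) -> None:
    num_viewpoints = len(csp.subpixels)
    lower = max(1, min(int(fractional_vp), num_viewpoints - 1))   # e.g. 1.6 -> V1 and V2
    csp.subpixels[lower - 1] = value        # lower straddled viewpoint
    csp.subpixels[lower] = value            # upper straddled viewpoint
```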
  • the 3D display method includes:
  • S1101 Detect face information of at least one user
  • S1102 Obtain the spatial position of the eyes of at least one user
  • S1103 Determine the viewpoint at which the eyes of each user of the at least one user are located from the spatial positions of the eyes of the at least one user;
  • S1104 Rendering, based on the 3D signal, sub-pixels corresponding to the viewpoints of each user's eyes among the multiple composite sub-pixels in the multi-viewpoint 3D display screen.
  • the 3D display method includes:
  • S1201 Detect face information of at least two users
  • S1202 Obtain the spatial positions of the eyes of at least two users
  • S1203 Determine the viewpoint at which the eyes of each user of the at least two users are located from the spatial positions of the eyes of the at least two users;
  • S1204 Rendering, based on the 3D signal, the sub-pixels corresponding to the viewpoints at which each user's eyes are located, among the multiple composite sub-pixels in the multi-viewpoint 3D display screen (a two-user sketch follows below).
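For S1201–S1204 with two users, the sketch below renders the four viewpoints at which the two users' eyes are located, reusing the hypothetical `render_single_viewpoints` helper from the earlier sketch; how per-user image pairs are obtained from the 3D signal is outside the scope of this illustration. (The description also notes that if the viewpoints of different users' eyes conflict, a 2D display can be presented instead.)

```python
# Illustrative rendering for S1201-S1204 with two (or more) tracked users: the
# sub-pixels of every viewpoint occupied by some user's eye are rendered, so
# each user sees a parallax pair matched to their own position.
def render_multiple_users(display, users) -> None:
    """users: iterable of dicts with keys 'left_vp', 'right_vp', 'left_img', 'right_img'."""
    for user in users:
        render_single_viewpoints(display,
                                 user["left_img"], user["right_img"],
                                 user["left_vp"], user["right_vp"])

# Example: user 1 at viewpoints V1/V3, user 2 at V4/V6 (cf. the two-user embodiment).
```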
  • the embodiment of the present disclosure provides a 3D display terminal 1300.
  • referring to FIG. 13, the 3D display terminal includes a processor 1310 and a memory 1311, and may further include a communication interface 1312 and a bus 1313; the processor 1310, the communication interface 1312, and the memory 1311 communicate with each other through the bus 1313.
  • the communication interface 1312 may be configured to transmit information.
  • the processor 1310 may call logic instructions in the memory 1311 to execute the 3D display method of the foregoing embodiment.
  • in addition, the above-mentioned logic instructions in the memory 1311 may be implemented in the form of a software functional unit and, when sold or used as an independent product, may be stored in a computer-readable storage medium.
  • the memory 1311 can be configured to store software programs and computer-executable programs, such as program instructions/modules corresponding to the methods in the embodiments of the present disclosure.
  • the processor 1310 executes functional applications and data processing by running the program instructions/modules stored in the memory 1311, that is, implements the 3D display method in the foregoing method embodiment.
  • the memory 1311 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the terminal device, and the like.
  • the memory 1311 may include a high-speed random access memory, and may also include a non-volatile memory.
  • the computer-readable storage medium provided by the embodiment of the present disclosure stores computer-executable instructions, and the computer-executable instructions are configured to execute the above-mentioned 3D display method.
  • the computer program product provided by the embodiments of the present disclosure includes a computer program stored on a computer-readable storage medium.
  • the above-mentioned computer program includes program instructions.
  • when the program instructions are executed by a computer, the computer is caused to execute the above-mentioned 3D display method.
  • the technical solutions of the embodiments of the present disclosure can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes one or more instructions to enable a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods of the embodiments of the present disclosure.
  • the aforementioned storage medium may be a non-transitory storage medium, including: a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk or an optical disk, and other media that can store program codes, or it may be a transitory storage medium.
  • a typical implementation entity is a computer or its processor or other components.
  • the computer can be, for example, a personal computer, a laptop computer, a vehicle-mounted human-computer interaction device, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, a smart TV, an IoT system, a smart home, an industrial computer, a single-chip microcomputer system, or a combination of these devices.
  • the computer may include one or more processors (CPU), input/output interfaces, network interfaces, and memory.
  • the memory may include non-permanent memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM).
  • the methods, programs, systems, equipment, devices, etc. in the embodiments of the present application can be executed or implemented in a single or multiple networked computers, and can also be practiced in a distributed computing environment.
  • in these distributed computing environments, tasks are performed by remote processing devices that are connected through a communication network.
  • the embodiments of this specification may be provided as a method, a system, a device, or a computer program product. Therefore, the embodiments of this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
  • the components of the device are described in the form of functional modules/units. It is conceivable that multiple functional modules/units may be implemented as one or more "combined" functional modules/units and/or in one or more pieces of software and/or hardware. It is also conceivable that a single functional module/unit may be implemented by a combination of multiple sub-functional modules or sub-units and/or by multiple pieces of software and/or hardware. The division into functional modules/units may be merely a logical functional division; in a specific implementation, multiple modules/units may be combined or integrated into another system or device.
  • connections of the modules, units, devices, systems, and equipment and their components mentioned herein include direct or indirect connections, covering feasible electrical, mechanical, and communication connections, in particular wired or wireless connections between various interfaces, including but not limited to HDMI, radar, USB, WiFi, and cellular networks.
  • the technical features, flowcharts and/or block diagrams of the methods and programs can be applied to corresponding devices, equipment, systems and their modules, units, and components.
  • the various embodiments and features of the device, equipment, system and its modules, units, and components can be applied to the methods and programs according to the embodiments of the present application.
  • computer program instructions can be loaded into the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing equipment to produce a machine that implements the functions or features corresponding to one or more processes in a flowchart and/or one or more blocks in a block diagram.
  • the methods and programs according to the embodiments of the present application may be stored, in the form of computer program instructions or programs, in a computer-readable memory or medium that can direct a computer or other programmable data processing equipment to work in a specific manner.
  • the embodiments of the present application also relate to a readable memory or medium storing the methods, programs, and instructions that can implement the embodiments of the present application.
  • Storage media include permanent and non-permanent, removable and non-removable items that can be used to store information by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • Examples of storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette tape, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be configured to store information accessible by computing devices.

Abstract

This application relates to 3D display technology and discloses a 3D display device, comprising: a multi-viewpoint 3D display screen comprising multiple composite pixels, wherein each of the multiple composite pixels comprises multiple composite sub-pixels, and each of the multiple composite sub-pixels is composed of multiple sub-pixels corresponding to multiple viewpoints; an eye positioning apparatus configured to obtain a spatial position of a user's eyes; and a 3D processing apparatus configured to determine a viewpoint from the spatial position of the user's eyes and to render, based on a received 3D signal, the sub-pixels corresponding to the viewpoint among the multiple composite sub-pixels. The above 3D display device can improve the flexibility of 3D display. This application further discloses a 3D display method, a 3D display terminal, a computer-readable storage medium, and a computer program product.

Description

3D显示设备、方法及终端
本申请要求在2019年12月05日提交中国知识产权局、申请号为201911231290.X、发明名称为“3D显示设备、方法及终端”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及3D显示技术,例如涉及3D显示设备、3D显示方法及3D显示终端。
背景技术
目前,在进行3D显示时,通常向预设的固定显示朝向显示待显示内容,导致显示方式单一、灵活性低,影响显示效果。
发明内容
为了对披露的实施例的一些方面有基本的理解,下面给出了实施例的概括,其不是要确定关键/重要组成元素或描绘发明的保护范围,而是作为后面的详细说明的序言。
本申请的实施例意图提供3D显示设备、3D显示方法和3D显示终端、计算机可读存储介质、计算机程序产品,以提高3D显示的灵活性。
在一个方案中,提供一种3D显示设备,包括:多视点3D显示屏,包括多个复合像素,多个复合像素中的每个复合像素包括多个复合子像素,多个复合子像素中的每个复合子像素由对应于多个视点的多个子像素构成;眼部定位装置,被配置为获取用户眼部的空间位置;3D处理装置,被配置为由用户眼部的空间位置确定视点,并基于接收到的3D信号渲染多个复合子像素中与视点对应的子像素。
在本公开实施例中,通过利用眼部定位装置实时获取眼部定位数据,能够根据观看情况及时调整3D显示,从而实现灵活度高的3D显示,为用户提供良好的观看体验。而且,以复合像素的方式定义多视点3D显示屏的显示分辨率,因此在传输和显示时均以由复合像素定义的显示分辨率为考量因素,这能够有效实现传输和渲染计算量的减少,且仍具有优良的显示效果。这能实现高质量的3D显示。
在一些实施例中,眼部定位装置被配置为获取至少一个用户的眼部的空间位置。
在一些实施例中,3D处理装置被配置为,响应于至少一个用户中每个用户的双眼之一位于单个视点或双眼分别位于单个视点,渲染多个复合子像素中与单个视点对应的子像素。
在本实施例中,实现针对用户的双眼各自所在的视点位置呈现精确的显示。
在一些实施例中,3D处理装置还被配置为:渲染与单个视点对应的子像素相邻的至少一个子像素。
在本实施例中,通过额外地渲染与眼部所在的单个视点对应的子像素的相邻的一个或两个子像素增强显示亮度,从而使显示效果适应强光环境;也可能的是,根据眼部定位数据计算出用户的偏移或移动趋势,据此渲染用户有可能移动到的视点位置对应的子像素,从而主动地、或者说动态地适应观看情况,以获得优良的观看体验。
在一些实施例中,3D处理装置被配置为,响应于至少一个用户中每个用户的双眼之一位于两个视点之间或双眼分别位于两个视点之间,渲染多个复合子像素中与两个视点对应的子像素。
在本实施例中,能够实现的是,即使在用户的眼部横跨视点的情况下,也能实现清晰的显示效果;也可能的是,根据眼部定位数据计算出用户的偏移或移动趋势,据此渲染用户有可能移动到的视点位置对应的子像素,或者据此渲染在用户移动过程中经过的视点位置对应的子像素,从而主动地、或者说动态地适应观看情况,以获得优良的观看体验。
在一些实施例中,3D显示设备还包括脸部检测装置,被配置为检测至少一个用户的脸部信息。
在本公开实施例中,通过检测用户的脸部信息,可以识别出用户的身份。这例如在如下情况中是有利的,即,曾经对用户的双眼或脸部已进行过检测,已知用户的瞳距或其他生物特征信息,那么双眼所在的视点位置能够利用已知信息更快地计算出来,进一步提升脸部识别速度或眼部定位速度。
在一些实施例中,眼部定位装置被配置为获取至少两个用户的各自的双眼的空间位置。
在本公开实施例中,通过实时获取至少两个用户各自的双眼视点位置,能够分别向至少两个用户提供精确的、定制化的、必要时有差异的3D显示,使每个用户都能获得优良的观看体验。
在一些实施例中,3D处理装置为FPGA或ASIC芯片或FPGA或ASIC芯片组。
在一些实施例中,多个复合子像素中的每个复合子像素包括按行排列或按列排列的多个子像素。
在另一方案中,提供了一种3D显示方法,包括:获取用户眼部的空间位置;由用户眼部的空间位置确定视点;基于3D信号渲染多视点3D显示屏中的多个复合子像素中与视点对应的子像素;其中,多视点3D显示屏包括多个复合像素,多个复合像素中的每个复合像素包括多个复合子像素,多个复合子像素中的每个复合子像素由对应于多个视点的 多个子像素构成。
在一些实施例中,获取用户眼部的空间位置和由用户眼部的空间位置确定视点包括:获取至少一个用户的眼部的空间位置;由至少一个用户的眼部的空间位置确定至少一个用户中每个用户的眼部所在的视点。
在一些实施例中,基于3D信号渲染多视点3D显示屏中的多个复合子像素中与视点对应的子像素包括:响应于至少一个用户中每个用户的双眼之一位于单个视点或双眼分别位于单个视点,渲染多个复合子像素中与单个视点对应的子像素。
在一些实施例中,基于3D信号渲染多视点3D显示屏中的多个复合子像素中与视点对应的子像素包括:响应于至少一个用户中每个用户的双眼之一位于单个视点或双眼分别位于单个视点,渲染多个复合子像素中与单个视点对应的子像素以及与单个视点对应的子像素相邻的至少一个子像素。
在一些实施例中,基于3D信号渲染多视点3D显示屏中的多个复合子像素中与视点对应的子像素包括:响应于至少一个用户中每个用户的双眼之一位于两个视点之间或双眼分别位于两个视点之间,渲染多个复合子像素中与两个视点对应的子像素。
在一些实施例中,3D显示方法还包括:检测至少一个用户的脸部信息。
在一些实施例中,检测至少一个用户的脸部信息包括:检测至少两个用户的脸部信息。
在另一方案中,提供了3D显示终端,包括处理器和存储有程序指令的存储器,还包括多视点3D显示屏,多视点3D显示屏包括多个复合像素,多个复合像素中的每个复合像素包括多个复合子像素,多个复合子像素中的每个复合子像素由对应于多个视点的多个子像素构成,处理器被配置为在执行程序指令时,执行如权利要求10至16任一项的方法。
在一些实施例中,3D显示终端为智能电视、智能蜂窝电话、平板电脑、个人计算机或可穿戴设备。
本公开实施例提供的计算机可读存储介质,存储有计算机可执行指令,上述计算机可执行指令设置为执行上述的3D显示方法。
本公开实施例提供的计算机程序产品,包括存储在计算机可读存储介质上的计算机程序,上述计算机程序包括程序指令,当该程序指令被计算机执行时,使上述计算机执行上述的3D显示方法。
本公开实施例提供的3D显示设备、3D显示方法及3D显示终端、计算机可读存储介质、计算机程序产品,可以实现以下技术效果:
提高3D显示的灵活性。
以上的总体描述和下文中的描述仅是示例性和解释性的,不用于限制本申请。
附图说明
一个或多个实施例通过与之对应的附图进行示例性说明,这些示例性说明和附图并不构成对实施例的限定,附图中具有相同参考数字标号的元件示为类似的元件,附图不构成比例限制,并且其中:
图1A和图1B是根据本公开实施例的3D显示设备的结构示意图;
图2是根据本公开实施例的3D显示设备的硬件结构示意图;
图3是图2所示的3D显示设备的软件结构示意图;
图4A和图4B是根据本公开实施例的复合像素的示意图;
图5A和图5B根据本公开实施例的响应于用户的视点位置执行的渲染的示意图,其中用户的双眼分别位于单个视点中;
图5C是根据本公开实施例的响应于用户的视点位置执行的渲染的示意图,其中用户的双眼之一横跨视点,另一个位于单个视点中;
图5D是根据本公开实施例的响应于用户的视点位置执行的渲染的示意图,其中用户的视点位置发生移动;
图5E是根据本公开实施例的响应于用户的视点位置执行的渲染的示意图,其中有两个用户;
图6是根据本公开实施例的3D显示方法的步骤示意图;
图7是根据本公开实施例的3D显示方法的步骤示意图;
图8是根据本公开实施例的3D显示方法的步骤示意图;
图9是根据本公开实施例的3D显示方法的步骤示意图;
图10是根据本公开实施例的3D显示方法的步骤示意图;
图11是根据本公开实施例的3D显示方法的步骤示意图;
图12是根据本公开实施例的3D显示方法的步骤示意图;
图13根据本公开实施例的3D显示终端的结构示意图。
附图标记:
100:3D显示设备;101:处理器;122:寄存器;110:多视点3D显示屏;130:3D处理装置;131:缓存器;140:视频信号接口;150:眼部定位装置;158:脸部检测装置;CP:复合像素;CSP:复合子像素;200:3D显示设备;201:处理器;202:外部存储器接口;203:存储器;204:USB接口;205:充电管理模块;206:电源管理模块;207:电池;208:移动通信模块;209:天线;210:无线通信模块;211:天线;212:音频模块;213: 扬声器;214:受话器;215:麦克风;216:耳机接口;217:按键;218:马达;219:指示器;220:SIM卡接口;221:摄像装置;222:寄存器;223:GPU;224:编解码器;230:传感器模块;2301:接近光传感器;2302:环境光传感器;2303:压力传感器;2304:气压传感器;2305:磁传感器;2306:重力传感器;2307:陀螺仪传感器;2308:加速度传感器;2309:距离传感器;2310:温度传感器;2311:指纹传感器;2312:触摸传感器;2313:骨传导传感器;310:应用程序层;320:框架层;330:核心类库和运行时(Runtime);340:内核层;400:复合像素;410、420、430:成单列布置的复合子像素;411、421、431:成单行布置的子像素;440、450、460:成单行布置的复合子像素;441、451、461:成单列布置的子像素;1300:3D显示终端;1310:处理器;1311:存储器;1312:通信接口;1313:总线。
具体实施方式
为了能够更加详尽地了解本公开实施例的特点与技术内容,下面结合附图对本公开实施例的实现进行详细阐述,所附附图仅供参考说明之用,并非用来限定本公开实施例。
在一个方案中,提供一种3D显示设备,包括:多视点3D显示屏(例如:多视点裸眼3D显示屏),包括m×n个复合像素;配置为接收3D信号的图像的视频信号接口;3D处理装置;配置为实时获取眼部定位数据的眼部定位装置;其中,每个复合像素包括多个复合子像素,各复合子像素由对应于i个视点的i个同色子像素构成,其中i≥3;其中,3D处理装置配置为,根据眼部定位数据,基于3D信号的图像渲染各复合子像素中由眼部定位数据确定的子像素。
在本公开实施例中,通过利用眼部定位装置实时获取眼部定位数据,能够根据观看情况及时调整3D显示,从而实现灵活度高的3D显示,为用户提供良好的观看体验。而且,以复合像素的方式定义多视点3D显示屏的显示分辨率,因此在传输和显示时均以由复合像素定义的显示分辨率为考量因素,这能够有效实现传输和渲染计算量的减少,且仍具有优良的显示效果。这能实现高质量的3D显示。
在一些实施例中,眼部定位装置配置为实时检测至少一个用户的眼部所在的视点位置。
在一些实施例中,3D处理装置配置为,响应于每个用户的双眼之一位于单个视点中或双眼分别位于单个视点中,渲染各复合子像素中与单个视点对应的子像素。
在本实施例中,实现针对用户的双眼各自所在的视点位置呈现精确的显示。
在一些实施例中,3D处理装置配置为还渲染与单个视点对应的子像素的相邻的一个或两个子像素。
在本实施例中,通过额外地渲染与眼部所在的单个视点对应的子像素的相邻的一个或两个子像素增强显示亮度,从而使显示效果适应强光环境;也可能的是,根据眼部定位数据计算出用户的偏移或移动趋势,据此渲染用户有可能移动到的视点位置对应的子像素,从而主动地、或者说动态地适应观看情况,以获得优良的观看体验。
在一些实施例中,3D处理装置配置为,响应于每个用户的双眼之一位于两个视点之间(横跨视点)或双眼横跨视点,渲染各复合子像素中与被横跨的视点对应的子像素。
在本实施例中,能够实现的是,即使在用户的眼部横跨视点的情况下,也能实现清晰的显示效果;也可能的是,根据眼部定位数据计算出用户的偏移或移动趋势,据此渲染用户有可能移动到的视点位置对应的子像素,或者据此渲染在用户移动过程中经过的视点位置对应的子像素,从而主动地、或者说动态地适应观看情况,以获得优良的观看体验。
在一些实施例中,3D显示设备还包括配置为检测至少一个用户的脸部信息的脸部检测装置。
在本公开实施例中,通过检测用户的脸部信息,可以识别出用户的身份。这例如在如下情况中是有利的,即,曾经对用户的双眼或脸部已进行过检测,已知用户的瞳距或其他生物特征信息,那么双眼所在的视点位置能够利用已知信息更快地计算出来,进一步提升脸部识别速度或眼部定位速度。
在一些实施例中,眼部定位装置配置为实时获取至少两个用户各自的双眼所在的视点位置。
在本公开实施例中,通过实时获取至少两个用户各自的双眼视点位置,能够分别向至少两个用户提供精确的、定制化的、必要时有差异的3D显示,使每个用户都能获得优良的观看体验。
在一些实施例中,3D处理装置为FPGA或ASIC芯片或FPGA或ASIC芯片组。
在一些实施例中,各复合子像素包括单行或单列的多个子像素。
在另一方案中,提供了一种用于多视点3D显示屏的3D显示方法,多视点3D显示屏包括m×n个复合像素,每个复合像素包括多个复合子像素,各复合子像素由对应于i个视点的i个同色子像素构成,其中i≥3,3D显示方法包括:传输3D信号的图像;实时获取眼部定位数据;根据眼部定位数据,基于3D信号的图像渲染各复合子像素中由眼部定位数据确定的子像素。
在一些实施例中,实时获取眼部定位数据包括:实时检测至少一个用户的双眼所在的视点位置。
在一些实施例中,渲染步骤包括:响应于每个用户的双眼之一位于单个视点中或双眼 分别位于单个视点中,渲染各复合子像素中与单个视点对应的子像素。
在一些实施例中,渲染步骤包括:响应于每个用户的双眼之一位于单个视点中或双眼分别位于单个视点中,渲染各复合子像素中与单个视点对应的子像素以及与单个视点对应的子像素相邻的一个或两个子像素。
在一些实施例中,渲染步骤包括:响应于每个用户的双眼之一横跨视点或双眼横跨视点,渲染各复合子像素中与被横跨的视点对应的子像素。
在一些实施例中,3D显示方法还包括:检测至少一个用户的脸部信息。
在一些实施例中,检测至少一个用户的脸部信息包括检测至少两个用户的脸部信息。
图1A示出了根据本公开实施例的3D显示设备的结构示意图。参考图1A,提供了一种3D显示设备100,多视点3D显示屏110、3D处理装置130、配置为接收3D信号的图像的视频信号接口140和眼部定位装置150。
多视点3D显示屏110可包括显示面板和覆盖在显示面板上的光栅(未标识)。在图1A所示的实施例中,多视点3D显示屏110包括m×n个复合像素并因此限定出m×n的显示分辨率。如图1A所示,多视点3D显示屏110包括m列n行个复合像素并因此限定出m×n的显示分辨率。
在一些实施例中,m×n的显示分辨率可以为全高清(FHD)以上的分辨率,包括但不限于,1920×1080、1920×1200、2048×1280、2560×1440、3840×2160等。
在一些实施例中,3D处理装置与多视点3D显示屏通信连接。
在一些实施例中,3D处理装置与多视点3D显示屏的驱动装置通信连接。
在一些实施例中,每个复合像素包括多个复合子像素,各复合子像素由对应于i个视点的i个同色子像素构成,i≥3。在图1A所示的实施例中,i=6,但可以想到i为其他数值。在所示的实施例中,多视点3D显示屏可相应地具有i(i=6)个视点(V1-V6),但可以想到可以相应地具有更多或更少个视点。
结合参考图1A和图4A,在所示的实施例中,每个复合像素包括三个复合子像素,并且每个复合子像素由对应于6视点(i=6)的6个同色子像素构成。三个复合子像素分别对应于三种颜色,即红(R)、绿(G)和蓝(B)。
如图4A所示,复合像素400中的三个复合子像素410、420、430成单列布置。各复合子像素410、420、430包括成单行布置的子像素411、421、431。但可以想到,复合像素中的复合子像素不同排布方式或复合子像素中的子像素的不同排布形式。
如图4B所示,复合像素400中的三个复合子像素440、450、460成单行布置。各复 合子像素440、450、460包括成单列形式的子像素441、451、461。
在一些实施例中,例如图1A和图1B所示,3D显示设备100设置有单个3D处理装置130。单个3D处理装置130同时处理对3D显示屏110的各复合像素的各复合子像素的渲染。
在另一些未示出的实施例中,3D显示设备100可设置有例如两个、三个或更多个3D处理装置130,它们并行、串行或串并行结合地处理对3D显示屏110的各复合像素的各复合子像素的渲染。
本领域技术人员将明白,两个、三个或更多个3D处理装置可以有其他的方式分配且并行处理3D显示屏110的多行多列复合像素或复合子像素,这落入本公开实施例的范围内。
在一些实施例中,3D处理装置130还可以包括缓存器131,以便缓存所接收到的图像。
在一些实施例中,3D处理装置为FPGA或ASIC芯片或FPGA或ASIC芯片组。
继续参考图1A,3D显示设备100还包括通过视频信号接口140通信连接至3D处理装置130的处理器101。在一些实施例中,处理器101被包括在计算机或智能终端、如移动终端中或作为其处理器单元。但是可以想到,在另一些实施例中,处理器101可以设置在3D显示设备的外部,例如3D显示设备可以为带有3D处理装置的多视点3D显示器,例如非智能的3D电视,例如设置在公共交通设施中的移动电视。
为简单起见,在下文中,3D显示设备的示例性实施例内部包括处理器。进而,视频信号接口140构造为连接处理器101和3D处理装置130的内部接口,参考图2和图3所示的以移动终端方式实施的3D显示设备200可更明确这种结构。在一些实施例中,作为3D显示设备200的内部接口的视频信号接口140可以为MIPI、mini-MIPI接口、LVDS接口、min-LVDS接口或Display Port接口。在一些实施例中,如图1A所示,3D显示设备100的处理器101还可包括寄存器122。寄存器122可用与暂存指令、数据和地址。
继续参考图1A,3D显示设备100还包括配置为实时获取眼部定位数据的眼部定位装置150,从而3D处理装置130可以根据眼部定位数据渲染复合像素(复合子像素)中的相应子像素。如图1A所示,眼部定位装置150通信连接至3D处理装置130,由此3D处理装置130可以直接接收眼部定位数据。在另一些实施例中,还设置有眼部定位数据接口(未示出),眼部定位装置可以直接连接3D显示设备的处理器,3D处理装置经由眼部定位数据接口从处理器获得眼部定位数据。在另一些实施例中,眼部定位装置可同时连接处理器和3D处理装置,在这种情况下,一方面3D处理装置可以直接从眼部定位装置获取眼部定位数据,另一方面可以使得眼部定位装置获取的其他信息可以被处理器处理。
在图1A所示的实施例中,眼部定位装置150配置为实时获取眼部定位数据,3D处理装置基于3D信号的图像渲染各复合子像素中由实时获取的眼部定位数据确定的子像素。
示例性地而非限制性地,眼部定位装置可以包括两个黑白摄像头、眼部定位图像处理器和眼部定位数据接口。在这种情况下,通过两个黑白摄像头能够高速度地(实时地)拍摄用户脸部图像,通过眼部定位图像处理器能够识别用户的双眼并计算双眼分别所在的实际空间位置,通过眼部定位数据接口能够传输得到的双眼分别所在的实际空间位置。
在一些实施例中,3D处理装置被配置为由眼部的空间位置确定视点。可选地,由眼部的空间位置确定视点也可由眼部定位装置的眼部定位图像处理器实现。
图1B示出了根据本公开实施例的3D显示设备的结构示意图。参考图1B,在如图1A所提供的3D显示设备的基础上,3D显示设备100还包括脸部检测装置158,脸部检测装置158具有视觉识别功能、例如脸部识别功能并且配置为检测至少一个用户的脸部信息。脸部检测装置158可以连接至眼部定位装置150,也可以连接至3D处理装置130,以传输检测到的脸部信息。示例性地而非限制性地,脸部检测装置158可以作为独立装置设置,也可以集成在眼部定位装置150内,也可以集成在3D显示设备100的处理器101内,也可以集成在3D显示设备中具有类似功能的其他部分内。
在一些实施例中,在一个以上用户、例如两个用户的情况下,脸部检测装置检测这两个用户的脸部信息,并且眼部定位装置实时获取这两个用户各自的双眼所在的视点位置。3D处理装置根据两个用户各自的双眼所在的视点位置,基于3D信号的图像渲染各复合子像素中的子像素。
在一些实施例中,在脸部检测装置和眼部定位装置检测到一个以上用户、例如两个用户各自的双眼所在的视点位置发生冲突时,这种情况例如为一个用户的左眼和另一个用户的右眼位于同一视点位置,通过多视点3D显示屏向这些用户呈现二维(2D)显示。
在本公开实施例中,多视点3D显示屏110可以限定出6个视点V1-V6,用户的眼睛在各视点(空间位置)可看到多视点3D显示屏110的显示面板中各复合像素的复合子像素中相应的子像素的显示。用户的双眼在不同的视点看到的两个不同画面形成视差,在大脑中合成3D的画面。
示例性地,3D处理装置130通过作为内部接口的视频信号接口140从处理器101接收解压缩的3D信号的图像。3D信号的图像可以为具有m×n(信号)分辨率的两幅图像或者为具有2m×n或m×2n(信号)分辨率的复合图像。
在一些实施例中,两幅图像或复合图像可以包括不同类型的图像以及可以呈各种排布形式。示例性地,具有m×n(信号)分辨率的两幅图像可以呈并列格式或上下格式。这两 幅图像可以分别为左眼视差图像和右眼视差图像,也可以分别为渲染色彩图像和景深图像。示例性地,具有2m×n或m×2n(信号)分辨率的复合图像可以呈左右交织格式、上下交织格式或棋盘格式。复合图像可以为交织的左眼和右眼视差复合图像,也可以为交织的渲染色彩和景深复合图像。
本领域技术人员将明白,上述图像类型以及排布形式仅是示意性的,3D信号的图像可以包括其他类型的图像以及可以呈其他排布形式,这落入本公开实施例的范围内。
示例性地而非限制地,3D处理装置130通过视频信号接口140接收到3D信号的具有m×n(信号)分辨率的两幅图像,亦即各图像的(信号)分辨率m×n与多视点3D显示屏110的按照视点划分的复合像素所提供的显示分辨率m×n一致。
示例性地而非限制地,3D处理装置130通过视频信号接口140接收到3D信号的具有2m×n或m×2n(信号)分辨率的复合图像,亦即复合图像的(信号)分辨率的一半与多视点3D显示屏110的按照视点划分的复合像素所提供的显示分辨率m×n一致。
在这种情况下,一方面,由于视点信息与传输过程无关,这能够实现处理计算量小且分辨率不受损失的3D显示;另一方面,由于复合像素(复合子像素)对应于视点设置,显示屏的渲染能够以“点对点”的方式实现,大大降低了计算量。相比之下,常规的3D显示器的图像或视频的传输和显示仍以2D显示面板为基础,不仅存在分辨率下降和渲染计算量剧增的问题,还可能存在多次格式调整和图像或视频显示适配的问题。
在一些实施例中,处理器101的寄存器122可用于接收有关多视点3D显示屏110的显示要求的信息,信息典型地为与i个视点无关且与多视点3D显示屏110的m×n显示分辨率相关的信息,以便处理器101向多视点3D显示屏110发送符合其显示要求的3D信号的图像。信息例如可以为用于初始建立视频传输发送的数据包。
因此,在传输3D信号的图像时,处理器101无需考虑与多视点3D显示屏110的i个视点相关的信息(i≥3)。而是,处理器101凭借寄存器122接收到的与多视点3D显示屏110的m×n分辨率相关的信息就能够向多视点3D显示屏110发送符合其要求的3D信号的图像。
在一些实施例中,3D显示设备100还可以包括编解码器,配置为对压缩的3D信号解压缩和编解码并将解压缩的3D信号经视频信号接口140发送至3D处理装置130。
在一些实施例中,3D显示设备100的处理器101从存储器读取或从3D显示设备100以外、例如通过外部接口接收3D信号的图像,然后经由视频信号接口140将读取到的或接收到的3D信号的图像传输到3D处理装置130。
在一些实施例中,3D显示设备100还包括格式调整器(未示出),其例如集成在处理 器101中,构造为编解码器或者作为GPU的一部分,用于预处理3D信号的图像,以使其包含的两幅图像具有m×n的(信号)分辨率或者使其包含的复合图像具有2m×n或m×2n的(信号)分辨率。
在一些实施例中,3D处理装置130配置为,响应于每个用户的双眼之一位于单个视点中或双眼分别位于单个视点中,渲染各复合子像素中与单个视点对应的子像素。
参考图5A,在所示实施例中,用户的右眼处于第2视点V2,左眼处于第5视点V5,基于3D信号的图像渲染复合子像素中与这两个视点V2和V5相对应的子像素。用户的双眼在这两个视点看到两个不同画面形成视差,在大脑中合成3D的画面。
在一些实施例中,3D处理装置130配置为,响应于每个用户的双眼之一位于单个视点中或双眼分别位于单个视点中,渲染各复合子像素中与单个视点对应的子像素,还渲染与单个视点对应的子像素的相邻的一个或两个子像素。
参考图5B,在所示实施例中,用户的右眼处于第2视点V2,左眼处于第5视点V5,基于3D信号的图像渲染复合子像素中与这两个视点V2和V5相对应的子像素,还渲染与视点V2相邻的视点V1相对应的子像素以及与视点V5相邻的视点V4相对应的子像素。在另一些未示出的实施例中,也可渲染与这两个视点V2和V5之一相邻或与两个视点分别相邻的两个视点对应的子像素。
在一些实施例中,多视点3D显示屏可包括自发光显示面板,例如为MICRO-LED显示面板。在一些实施例中,自发光显示面板、如MICRO-LED显示面板配置为未被渲染的子像素不发光。对于多视点的超高清显示器而言,这能够极大节省显示屏所耗的功率。
在一些实施例中,3D处理装置130配置为,响应于每个用户的双眼之一横跨视点或双眼横跨视点,渲染各复合子像素中与被横跨的视点对应的子像素。
参考图5C,在所示实施例中,用户的右眼横跨两个视点V1和V2,左眼处于第5视点V5,基于3D信号的图像渲染复合子像素中与被横跨的两个视点V1和V2对应的子像素,以及渲染与单个视点V5对应的子像素。从而位于视点V1、V2之间和V5的用户的双眼能看到不同角度的渲染画面,产生视差,以形成3D显示的3D效果。
在一些实施例中,3D处理装置130配置为,响应于用户的双眼之一或两者所在的视点位置发生移动,渲染各复合子像素中跟随用户的双眼经移动的视点位置对应的子像素。
参考图5D,在所示实施例中,用户的右眼从视点V1移动至视点V3,左眼从视点V4移动至视点V6,则复合子像素中被渲染的子像素对应的视点相应地从V1和V4变为V3和V6。从而处于运动状态的用户的眼部仍能实时看到不同角度的渲染画面,产生视差,以形成3D显示的3D效果。
在一些实施例中,3D处理装置130配置为,响应于至少两个用户各自的双眼所在视点位置,渲染各复合子像素中与至少两个用户各自的双眼所在视点位置对应的子像素。
参考图5E,在所示实施例中,存在两个用户,用户1的双眼分别处于视点V1和V3,用户2的双眼分别处于视点V4和V6,则渲染各复合子像素中与这四个视点位置对应的子像素。从而每个用户可以观看对应自己观察角度的渲染图像,产生视差,以形成3D显示的3D效果。
如前所述,本公开实施例提供的3D显示设备可以是包含处理器的3D显示设备。在一些实施例中,3D显示设备可构造为智能蜂窝电话、平板电脑、智能电视、可穿戴设备、车载设备、笔记本电脑、超级移动个人计算机(UMPC)、上网本、个人数字助理(PDA)等。
示例性的,图2示出了实施为移动终端、如智能蜂窝电话或平板电脑的3D显示设备200的硬件结构示意图。3D显示设备200可以包括处理器201,外部存储接口202,(内部)存储器203,通用串行总线(USB)接口204,充电管理模块205,电源管理模块206,电池207,移动通信模块208,无线通信模块210,天线209、211,音频模块212,扬声器213,受话器214,麦克风215,耳机接口216,按键217,马达218,指示器219,用户标识模块(SIM)卡接口220,摄像装置221,多视点3D显示屏110,3D处理装置130,视频信号接口140,眼部定位装置150,脸部检测装置158以及传感器模块230等。传感器模块230可以包括接近光传感器2301,环境光传感器2302,压力传感器2303,气压传感器2304,磁传感器2305,重力传感器2306,陀螺仪传感器2307,加速度传感器2308,距离传感器2309,温度传感器2310,指纹传感器2311,触摸传感器2312,骨传导传感器2313等。
可以理解的是,本公开实施例示意的结构并不构成对3D显示设备200的具体限定。在另一些实施例中,3D显示设备200可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器201可以包括一个或一个以上处理单元,例如:处理器201可以包括应用处理器(AP),调制解调处理器,基带处理器,寄存器222,图形处理器(GPU)223,图像信号处理器(ISP),控制器,存储器,编解码器224,数字信号处理器(DSP),基带处理器、神经网络处理器(NPU)等或它们的组合。其中,不同的处理单元可以是独立的器件,也可以集成在一个或一个以上处理器中。
处理器201中还可以设置有高速缓存器,配置为保存处理器201刚用过或循环使用的指令或数据。在处理器201要再次使用指令或数据时,可从存储器中直接调用。
在一些实施例中,处理器201可以包括一个或一个以上接口。接口可以包括集成电路 (I2C)接口、集成电路内置音频(I2S)接口、脉冲编码调制(PCM)接口、通用异步收发传输器(UART)接口、移动产业处理器接口(MIPI)、通用输入输出(GPIO)接口、用户标识模块(SIM)接口、通用串行总线(USB)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(SDA)和一根串行时钟线(SCL)。在一些实施例中,处理器201可以包含多组I2C总线。处理器201可以通过不同的I2C总线接口分别通信连接触摸传感器2312,充电器,闪光灯,摄像装置221、眼部定位装置150、脸部检测装置158等。
在图2所示的实施例中,MIPI接口可以被配置为连接处理器201与多视点3D显示屏110。此外,MIPI接口还可被配置为连接如摄像装置221、眼部定位装置150、脸部检测装置158等外围器件。
可以理解的是,本公开实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对3D显示设备200的结构限定。
3D显示设备200的无线通信功能可以通过天线209、211,移动通信模块208,无线通信模块210,调制解调处理器或基带处理器等实现。
天线209、211被配置为发射和接收电磁波信号。3D显示设备200中的每个天线可被配置为覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。
移动通信模块208可以提供应用在3D显示设备200上的包括2G/3G/4G/5G等无线通信的解决方案。在一些实施例中,移动通信模块208的至少部分功能模块可以被设置于处理器201中。在一些实施例中,移动通信模块208的至少部分功能模块可以与处理器201的至少部分模块被设置在同一个器件中。
无线通信模块210可以提供应用在3D显示设备200上的包括无线局域网(WLAN),蓝牙(BT),全球导航卫星系统(GNSS),调频(FM),近距离无线通信技术(NFC),红外技术(IR)等无线通信的解决方案。无线通信模块210可以是集成至少一个通信处理模块的一个或一个以上器件。
在一些实施例中,3D显示设备200的天线209和移动通信模块208耦合,天线211和无线通信模块210耦合,使得3D显示设备200可以通过无线通信技术与网络以及其他设备通信。无线通信技术可以包括全球移动通讯系统(GSM),通用分组无线服务(GPRS),码分多址接入(CDMA),宽带码分多址(WCDMA),时分码分多址(TD-SCDMA),长期演进(LTE),BT,GNSS,WLAN,NFC,FM,或IR技术等中至少一项。
在一些实施例中,配置为接收3D信号的外部接口可以包括USB接口204、移动通信模块208、无线通信模块209或其组合。此外,还可以想到其他可行的配置为接收3D信号 的接口,例如上问题到的接口。
存储器203可以配置为存储计算机可执行程序代码,可执行程序代码包括指令。处理器201通过运行存储在存储器203的指令,从而执行3D显示设备200的各种功能应用以及数据处理。
外部存储器接口202可以配置为连接外部存储卡,例如Micro SD卡,实现扩展3D显示设备200的存储能力。外部存储卡通过外部存储器接口202与处理器201通信,实现数据存储功能。
在一些实施例中,3D显示设备的存储器可以包括(内部)存储器203、外部存储器接口202连接的外部存储卡或其组合。在本公开另一些实施例中,视频信号接口也可以采用上述实施例中不同的内部接口连接方式或其组合。
在本公开的实施例中,摄像装置221可以采集图像或视频。
在一些实施例中,3D显示设备200通过视频信号接口140、3D处理装置130、多视点3D显示屏110,以及应用处理器等实现显示功能。
在一些实施例中,3D显示设备200可包括GPU 223,例如配置为在处理器201内对3D视频图像进行处理,也可以对2D视频图像进行处理。
在一些实施例中,3D显示设备200还包括编解码器224,配置为对数字视频、例如对3D信号压缩或解压缩。
在一些实施例中,视频信号接口140被配置为将经GPU或编解码器224或两者处理的3D信号、例如解压缩的3D信号的图像输出至3D处理装置130。
在一些实施例中,GPU或编解码器224集成有格式调整器。
多视点3D显示屏110被配置为显示3D图像或视频等。多视点3D显示屏110包括显示面板。显示面板可以采用液晶显示屏(LCD),有机发光二极管(OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(AMOLED),柔性发光二极管(FLED),Mini-LED,Micro-LED,Micro-OLED,量子点发光二极管(QLED)等。
在一些实施例中,3D显示设备200还可包括配置为实时获取眼部定位数据的眼部定位装置150或眼部定位数据接口,从而3D处理装置130可以基于眼部定位数据渲染复合像素(复合子像素)中的相应子像素。示例性地,眼部定位装置150通信连接至3D处理装置130,也可以通信连接至处理器201,例如旁路连接至处理器201。示例性地,眼部定位装置150可同时连接处理器201和3D处理装置130。
在一些实施例中,3D显示设备200还包括脸部检测装置158。脸部检测装置158具有视觉识别功能、例如脸部识别功能并且配置为检测至少一个用户的脸部信息。脸部检测装 置158可以连接至眼部定位装置150,也可以连接至3D处理装置130,以传输检测到的脸部信息。
3D显示设备200可以通过音频模块212,扬声器213,受话器214,麦克风215,耳机接口216,以及应用处理器等实现音频功能。
按键217包括开机键,音量键等。按键217可以是机械按键。也可以是触摸式按键。3D显示设备200可以接收按键输入,产生与3D显示设备200的用户设置以及功能控制有关的键信号输入。
马达218可以产生振动提示。马达218可以被配置为来电振动提示,也可以被配置为触摸振动反馈。
SIM卡接口220被配置为连接SIM卡。在一些实施例中,3D显示设备200采用eSIM,即:嵌入式SIM卡。
环境光传感器2302被配置为感知周围光线情况。例如,显示屏的亮度可据此调节。示例性地,在用户的双眼分别位于单个视点中时,当环境光传感器2302检测到外界环境亮度较高时,3D处理装置130除了渲染各复合子像素中与单个视点对应的子像素,还渲染与单个视点对应的子像素的相邻的一个或两个子像素,以增强显示亮度,适应强光环境。
压力传感器2303被配置为感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器2303可以设置于多视点3D显示屏110,这落入本公开实施例的范围内。
气压传感器2304被配置为测量气压。
磁传感器2305包括霍尔传感器。
重力传感器2306是将运动或重力转换为电信号的传感器,主要被配置为倾斜角、惯性力、冲击及震动等参数的测量。
陀螺仪传感器2307可以被配置为确定3D显示设备200的运动姿态。
加速度传感器2308可检测3D显示设备200在各个方向上(一般为三轴)加速度的大小。
距离传感器2309可被配置为测量距离
温度传感器2310可被配置为检测温度。
指纹传感器2311被配置为采集指纹。
触摸传感器2312可以设置于多视点3D显示屏110中,由触摸传感器2312与多视点3D显示屏110组成触摸屏,也称“触控屏”。
骨传导传感器2313可以获取振动信号。
充电管理模块205被配置为从充电器接收充电输入。其中,充电器可以是无线充电器, 也可以是有线充电器。
电源管理模块206被配置为连接电池207,充电管理模块205与处理器201。
3D显示设备200的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本公开所示的实施例以分层架构的安卓系统为例,示例性说明3D显示设备200的软件结构。但可以想到,本公开的实施例可以在不同的软件系统、如操作系统中实施。
图3是本公开实施例的3D显示设备200的软件结构示意图。分层架构将软件分成若干个层。层与层之间通过软件接口通信。在一些实施例中,将安卓系统分为四层,从上至下分别为应用程序层310,框架层320,核心类库和运行时(Runtime)330,以及内核层340。
应用程序层310可以包括一系列应用程序包。如图3所示,应用程序包可以包括蓝牙,WLAN,导航,音乐,相机,日历,通话,视频,图库,地图,短信息等应用程序。例如可以在视频应用程序中实施3D视频显示方法。
框架层320为应用程序层的应用程序提供应用编程接口(API)和编程框架。框架层包括一些预先定义的函数。例如,在本公开的一些实施例中,对所采集的3D视频图像进行识别的函数或者算法以及处理图像的算法等可以包括在框架层。
如图3所示,框架层320可以包括资源管理器、电话管理器、内容管理器、通知管理器、窗口管理器,视图系统,安装包管理器等。
安卓Runtime(运行时)包括核心库和虚拟机。安卓Runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言要调用的功能函数,另一部分是安卓的核心库。
应用程序层和框架层运行在虚拟机中。虚拟机将应用程序层和框架层的java文件执行为二进制文件。虚拟机被配置为执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
核心类库可以包括多个功能模块。例如:3D图形处理库(例如:OpenGL ES),表面管理器,图像处理库,媒体库,图形引擎(例如:SGL)等。
内核层340是硬件和软件之间的层。内核层至少包含摄像头驱动,音视频接口,通话接口,Wifi接口,传感器驱动,电源管理,GPS接口。
在此,以具有图2和图3所示结构的作为移动终端的3D显示设备为例,描述3D显示设备中的3D视频传输和显示的实施例;但是,可以想到,在另一些实施例中可以包括更多或更少的特征或对其中的特征进行改变。
在一些实施例中,例如为移动终端、如智能蜂窝电话或平板电脑的3D显示设备200例如借助作为外部接口的移动通信模块208及天线209或者无线通信模块210及天线211从网络、如蜂窝网络、WLAN网络、蓝牙接收例如压缩的3D信号,压缩的3D信号例如经GPU 223进行图像处理、编解码器224编解码和解压缩,然后例如经作为内部接口的视频信号接口140、如MIPI接口或mini-MIPI接口将解压缩的3D信号发送至3D处理装置130,解压缩的3D信号的图像包括本公开实施例的两幅图像或复合图像。进而,3D处理装置130相应地渲染显示屏的复合子像素中的子像素,由此实现3D视频播放。
在另一些实施例中,3D显示设备200读取(内部)存储器203或通过外部存储器接口202读取外部存储卡中存储的压缩的3D信号,并经相应的处理、传输和渲染来实现3D视频播放。
在一些实施例中,上文提到的3D视频的播放是在安卓系统应用程序层310中的视频应用程序中实施的。
本公开实施例还提供一种用于多视点3D显示屏的3D显示方法,多视点3D显示屏包括m×n个复合像素,每个复合像素包括多个复合子像素,各复合子像素由对应于i个视点的i个同色子像素构成,其中i≥3。
参考图6,在一些实施例中,3D显示方法包括:
S601:获取用户双眼的空间位置;
S602:由用户双眼的空间位置确定视点;
S603:基于3D信号渲染多视点3D显示屏中的多个复合子像素中与视点对应的子像素。
参考图7,在一些实施例中,3D显示方法包括:
S701:获取至少一个用户的双眼的空间位置;
S702:由至少一个用户的双眼的空间位置确定至少一个用户中每个用户的双眼所在的视点;
S703:基于3D信号渲染多视点3D显示屏中的多个复合子像素中与视点对应的子像素。
参考图8,在一些实施例中,3D显示方法包括:
S801:获取至少一个用户的双眼的空间位置;
S802:由至少一个用户的双眼的空间位置确定至少一个用户中每个用户的双眼所在的视点;
S803:响应于至少一个用户中每个用户的双眼之一位于单个视点或双眼分别位于单个 视点,渲染多个复合子像素中与单个视点对应的子像素。
参考图9,在一些实施例中,3D显示方法包括:
S901:获取至少一个用户的双眼的空间位置;
S902:由至少一个用户的双眼的空间位置确定至少一个用户中每个用户的双眼所在的视点;
S903:响应于至少一个用户中每个用户的双眼之一位于单个视点或双眼分别位于单个视点,渲染多个复合子像素中与单个视点对应的子像素以及与单个视点对应的子像素相邻的至少一个子像素。
参考图10,在一些实施例中,3D显示方法包括:
S1001:获取至少一个用户的双眼的空间位置;
S1002:由至少一个用户的双眼的空间位置确定至少一个用户中每个用户的双眼所在的视点;
S1003:响应于至少一个用户中每个用户的双眼之一位于两个视点之间或双眼分别位于两个视点之间,渲染多个复合子像素中与两个视点对应的子像素。
参考图11,在一些实施例中,3D显示方法包括:
S1101:检测至少一个用户的脸部信息;
S1102:获取至少一个用户的双眼的空间位置;
S1103:由至少一个用户的双眼的空间位置确定至少一个用户中每个用户的双眼所在的视点;
S1104:基于3D信号渲染多视点3D显示屏中的多个复合子像素中与每个用户的双眼所在的视点对应的子像素。
参考图12,在一些实施例中,3D显示方法包括:
S1201:检测至少两个用户的脸部信息;
S1202:获取至少两个用户的双眼的空间位置;
S1203:由至少两个用户的双眼的空间位置确定至少两个用户中每个用户的双眼所在的视点;
S1204:基于3D信号渲染多视点3D显示屏中的多个复合子像素中与每个用户的双眼所在的视点对应的子像素。
本公开实施例提供一种3D显示终端1300,参考图13,3D显示终端:
处理器1310和存储器1311,还可以包括通信接口1312和总线1313。其中,处理器1310、通信接口1312、存储器1311以通过总线1313完成相互间的通信。通信接口1313 可以被配置为传输信息。处理器1310可以调用存储器1311中的逻辑指令,以执行上述实施例的3D显示方法。
此外,上文提到的存储器1311中的逻辑指令可以通过软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。
存储器1311作为一种计算机可读存储介质,可被配置为存储软件程序、计算机可执行程序,如本公开实施例中的方法对应的程序指令/模块。处理器1310通过运行存储在存储器1311中的程序指令/模块,从而执行功能应用以及数据处理,即实现上述方法实施例中的3D显示方法。
存储器1311可包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序;存储数据区可存储根据终端设备的使用所创建的数据等。此外,存储器1311可以包括高速随机存取存储器,还可以包括非易失性存储器。
本公开实施例提供的计算机可读存储介质,存储有计算机可执行指令,上述计算机可执行指令设置为执行上述的3D显示方法。
本公开实施例提供的计算机程序产品,包括存储在计算机可读存储介质上的计算机程序,上述计算机程序包括程序指令,当该程序指令被计算机执行时,使上述计算机执行上述的3D显示方法。
本公开实施例的技术方案可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括一个或多个指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开实施例的方法的全部或部分步骤。而前述的存储介质可以是非暂态存储介质,包括:U盘、移动硬盘、只读存储器、随机存取存储器、磁碟或者光盘等多种可以存储程序代码的介质,也可以是暂态存储介质。
上述实施例阐明的系统、设备、装置、模块或单元,可以由各种可能的实体来来实现。一种典型的实现实体为计算机或其处理器或其他部件。计算机例如可以为个人计算机、膝上型计算机、车载人机交互设备、蜂窝电话、相机电话、智能电话、个人数字助理、媒体播放器、导航设备、电子邮件设备、游戏控制台、平板电脑、可穿戴设备、智能电视、物联网系统、智能家居、工业计算机、单片机系统或者这些设备中的组合。在一个典型的配置中,计算机可包括一个或多个处理器(CPU)、输入/输出接口、网络接口和内存。内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(RAM)和/或非易失性内存等形式,如只读存储器(ROM)或闪存(flash RAM)。
在本申请的实施例的方法、程序、系统、设备、装置等,可以在单个或多个连网的计算机中执行或实现,也可以在分布式计算环境中实践。在本说明书实施例中,在这些分布 式计算环境中,由通过通信网络而被连接的远程处理设备来执行任务。
本领域技术人员应明白,本说明书的实施例可提供为方法、系统、设备或计算机程序产品。因此,本说明书实施例可采用完全硬件实施例、完全软件实施例或结合软件和硬件方面的实施例的形式。
本领域技术人员可想到,上述实施例阐明的功能模块/单元或控制器以及相关方法步骤的实现,可以用软件、硬件和软/硬件结合的方式实现。例如,可以以纯计算机可读程序代码方式实现,也可以部分或全部通过将方法步骤进行逻辑编程来使得控制器以硬件来实现相同功能,包括但不限于逻辑门、开关、专用集成电路、可编程逻辑控制器(如FPGA)和嵌入微控制器。
在本申请的一些实施例中,以功能模块/单元的形式来描述装置的部件。可以想到,多个功能模块/单元一个或多个“组合”功能模块/单元和/或一个或多个软件和/或硬件中实现。也可以想到,单个功能模块/单元由多个子功能模块或子单元的组合和/或多个软件和/或硬件实现。功能模块/单元的划分,可以仅为一种逻辑功能划分,在具体的实现方式中,多个模块/单元可以结合或者可以集成到另一个系统、设备。此外,本文提到的模块、单元、装置、系统、设备及其部件的连接包括直接或间接的连接,涵盖可行的电的、机械的、通信的连接,尤其包括各种接口间的有线或无线连接,包括但不限于HDMI、雷达、USB、WiFi、蜂窝网络。
在本申请的实施例中,方法、程序的技术特征、流程图和/或方框图可以应用到相应的装置、设备、系统及其模块、单元、部件中。反过来,装置、设备、系统及其模块、单元、部件的各实施例和特征可以应用至根据本申请实施例的方法、程序中。例如,计算机程序指令可装载到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,其具有实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中相应的功能或特征。
根据本申请实施例的方法、程序可以以计算机程序指令或程序的方式存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读的存储器或介质中。本申请实施例也涉及存储有可实施本申请实施例的方法、程序、指令的可读存储器或介质。
存储介质包括永久性和非永久性、可移动和非可移动的可以由任何方法或技术来实现信息存储的物品。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。存储介质的例子包括,但不限于相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器 (CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带,磁带磁磁盘存储或其他磁性存储设备或任何其他非传输介质,可被配置为存储可以被计算设备访问的信息。
除非明确指出,根据本申请实施例记载的方法、程序的动作或步骤并不必须按照特定的顺序来执行并且仍然可以实现期望的结果。在某些实施方式中,多任务处理和并行处理也是可以的或者可能是有利的。
在本文中,各实施例的具体特征、结构、材料或者特点可以在任一个或多个实施例或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。
已参考上述实施例具体示出并描述了本申请的示例性系统、设备及方法,其仅为实施本系统、设备及方法的示例。本领域的技术人员可以理解的是可以在实施本系统、设备及/或方法时对这里描述的系统、设备及方法的实施例做各种改变而不脱离界定在所附权利要求中的本申请的精神及范围。所附权利要求意在界定本系统、设备及方法的范围,故落入这些权利要求中及与其等同的系统、设备及方法可被涵盖。

Claims (20)

  1. A 3D display device, comprising:
    a multi-viewpoint 3D display screen comprising multiple composite pixels, wherein each of the multiple composite pixels comprises multiple composite sub-pixels, and each of the multiple composite sub-pixels is composed of multiple sub-pixels corresponding to multiple viewpoints;
    an eye positioning apparatus configured to obtain a spatial position of a user's eyes; and
    a 3D processing apparatus configured to determine a viewpoint from the spatial position of the user's eyes and to render, based on a received 3D signal, sub-pixels corresponding to the viewpoint among the multiple composite sub-pixels.
  2. The 3D display device according to claim 1, wherein the eye positioning apparatus is configured to obtain spatial positions of the eyes of at least one user.
  3. The 3D display device according to claim 2, wherein the 3D processing apparatus is configured to, in response to one of the eyes of each user of the at least one user being located at a single viewpoint, or both eyes each being located at a single viewpoint, render the sub-pixels corresponding to the single viewpoint among the multiple composite sub-pixels.
  4. The 3D display device according to claim 3, wherein the 3D processing apparatus is further configured to:
    render at least one sub-pixel adjacent to the sub-pixels corresponding to the single viewpoint.
  5. The 3D display device according to claim 2, wherein the 3D processing apparatus is configured to, in response to one of the eyes of each user of the at least one user being located between two viewpoints, or both eyes each being located between two viewpoints, render the sub-pixels corresponding to the two viewpoints among the multiple composite sub-pixels.
  6. The 3D display device according to any one of claims 2 to 5, further comprising a face detection apparatus configured to detect face information of the at least one user.
  7. The 3D display device according to claim 6, wherein the eye positioning apparatus is configured to obtain spatial positions of the respective eyes of at least two users.
  8. The 3D display device according to any one of claims 1 to 5, wherein the 3D processing apparatus is an FPGA or ASIC chip, or an FPGA or ASIC chipset.
  9. The 3D display device according to any one of claims 1 to 5, wherein each of the multiple composite sub-pixels comprises multiple sub-pixels arranged in a row or in a column.
  10. A 3D display method, comprising:
    obtaining a spatial position of a user's eyes;
    determining a viewpoint from the spatial position of the user's eyes; and
    rendering, based on a 3D signal, sub-pixels corresponding to the viewpoint among multiple composite sub-pixels in a multi-viewpoint 3D display screen;
    wherein the multi-viewpoint 3D display screen comprises multiple composite pixels, each of the multiple composite pixels comprises multiple composite sub-pixels, and each of the multiple composite sub-pixels is composed of multiple sub-pixels corresponding to multiple viewpoints.
  11. The method according to claim 10, wherein obtaining the spatial position of the user's eyes and determining the viewpoint from the spatial position of the user's eyes comprise:
    obtaining spatial positions of the eyes of at least one user; and
    determining, from the spatial positions of the eyes of the at least one user, the viewpoint at which the eyes of each user of the at least one user are located.
  12. The method according to claim 11, wherein rendering, based on the 3D signal, the sub-pixels corresponding to the viewpoint among the multiple composite sub-pixels in the multi-viewpoint 3D display screen comprises:
    in response to one of the eyes of each user of the at least one user being located at a single viewpoint, or both eyes each being located at a single viewpoint, rendering the sub-pixels corresponding to the single viewpoint among the multiple composite sub-pixels.
  13. The method according to claim 11, wherein rendering, based on the 3D signal, the sub-pixels corresponding to the viewpoint among the multiple composite sub-pixels in the multi-viewpoint 3D display screen comprises:
    in response to one of the eyes of each user of the at least one user being located at a single viewpoint, or both eyes each being located at a single viewpoint, rendering, among the multiple composite sub-pixels, the sub-pixels corresponding to the single viewpoint and at least one sub-pixel adjacent to the sub-pixels corresponding to the single viewpoint.
  14. The method according to claim 11, wherein rendering, based on the 3D signal, the sub-pixels corresponding to the viewpoint among the multiple composite sub-pixels in the multi-viewpoint 3D display screen comprises:
    in response to one of the eyes of each user of the at least one user being located between two viewpoints, or both eyes each being located between two viewpoints, rendering the sub-pixels corresponding to the two viewpoints among the multiple composite sub-pixels.
  15. The method according to any one of claims 10 to 14, further comprising:
    detecting face information of the at least one user.
  16. The method according to claim 15, wherein detecting face information of the at least one user comprises: detecting face information of at least two users.
  17. A 3D display terminal, comprising a processor and a memory storing program instructions, and further comprising a multi-viewpoint 3D display screen, wherein the multi-viewpoint 3D display screen comprises multiple composite pixels, each of the multiple composite pixels comprises multiple composite sub-pixels, and each of the multiple composite sub-pixels is composed of multiple sub-pixels corresponding to multiple viewpoints, and the processor is configured to execute, when executing the program instructions, the method according to any one of claims 10 to 16.
  18. The 3D display terminal according to claim 17, wherein the 3D display terminal is a smart TV, a smart cellular phone, a tablet computer, a personal computer, or a wearable device.
  19. A computer-readable storage medium storing computer-executable instructions, wherein the computer-executable instructions are configured to execute the method according to any one of claims 10 to 16.
  20. A computer program product, comprising a computer program stored on a computer-readable storage medium, wherein the computer program comprises program instructions, and when the program instructions are executed by a computer, the computer is caused to execute the method according to any one of claims 10 to 16.
PCT/CN2020/133327 2019-12-05 2020-12-02 3d显示设备、方法及终端 WO2021110033A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/779,648 US20220408077A1 (en) 2019-12-05 2020-12-02 3d display device, method and terminal
EP20896638.2A EP4068772A4 (en) 2019-12-05 2020-12-02 3D DISPLAY DEVICE, METHOD AND TERMINAL

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911231290.XA CN112929643B (zh) 2019-12-05 2019-12-05 3d显示设备、方法及终端
CN201911231290.X 2019-12-05

Publications (1)

Publication Number Publication Date
WO2021110033A1 true WO2021110033A1 (zh) 2021-06-10

Family

ID=76160749

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/133327 WO2021110033A1 (zh) 2019-12-05 2020-12-02 3d显示设备、方法及终端

Country Status (5)

Country Link
US (1) US20220408077A1 (zh)
EP (1) EP4068772A4 (zh)
CN (1) CN112929643B (zh)
TW (1) TWI782361B (zh)
WO (1) WO2021110033A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114079765A (zh) * 2021-11-17 2022-02-22 京东方科技集团股份有限公司 图像显示方法、装置及系统
CN114581634A (zh) * 2022-03-01 2022-06-03 江苏蓝创文化科技有限公司 透视点自动追踪可变的裸眼3d立体全息互动体验系统

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070058034A1 (en) * 2005-09-12 2007-03-15 Shunichi Numazaki Stereoscopic image display device, stereoscopic display program, and stereoscopic display method
US20110037830A1 (en) * 2008-04-24 2011-02-17 Nokia Corporation Plug and play multiplexer for any stereoscopic viewing device
CN103988504A (zh) * 2011-11-30 2014-08-13 三星电子株式会社 用于子像素渲染的图像处理设备和方法
CN104079919A (zh) * 2009-11-04 2014-10-01 三星电子株式会社 使用主动亚像素渲染的高密度多视点图像显示系统及方法
CN104769660A (zh) * 2012-10-01 2015-07-08 视瑞尔技术公司 用于相干光的相位调制的可控设备
CN108885377A (zh) * 2018-06-14 2018-11-23 京东方科技集团股份有限公司 显示设备及其驱动方法
CN109495734A (zh) * 2017-09-12 2019-03-19 三星电子株式会社 用于自动立体三维显示器的图像处理方法和设备
CN109561294A (zh) * 2017-09-25 2019-04-02 三星电子株式会社 用于渲染图像的方法和设备

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006229725A (ja) * 2005-02-18 2006-08-31 Konica Minolta Photo Imaging Inc 画像生成システム及び画像生成方法
CN101621706B (zh) * 2009-07-21 2010-12-08 四川大学 一种减小柱面光栅自由立体显示器图像串扰的方法
JP2012108316A (ja) * 2010-11-17 2012-06-07 Sony Corp 立体表示装置
US9363504B2 (en) * 2011-06-23 2016-06-07 Lg Electronics Inc. Apparatus and method for displaying 3-dimensional image
KR20130093369A (ko) * 2012-02-14 2013-08-22 삼성디스플레이 주식회사 표시 장치 및 이를 이용한 입체 영상 표시 방법
CN104536578B (zh) * 2015-01-13 2018-02-16 京东方科技集团股份有限公司 裸眼3d显示装置的控制方法及装置、裸眼3d显示装置
KR101688400B1 (ko) * 2015-06-04 2016-12-22 한국과학기술연구원 입체 영상 표시 장치 및 입체 영상 표시 장치의 설계 방법
CN105372823B (zh) * 2015-12-08 2017-10-27 上海天马微电子有限公司 立体显示装置
JP7094266B2 (ja) * 2016-08-04 2022-07-01 ドルビー ラボラトリーズ ライセンシング コーポレイション 単一深度追跡型の遠近調節-両眼転導ソリューション
CN108307187B (zh) * 2016-09-28 2024-01-12 擎中科技(上海)有限公司 裸眼3d显示设备及其显示方法
US10078228B2 (en) * 2016-09-29 2018-09-18 Jeremy Paul Willden Three-dimensional imaging system
KR102564479B1 (ko) * 2016-11-22 2023-08-07 삼성전자주식회사 사용자의 눈을 위한 3d 렌더링 방법 및 장치
US20180357981A1 (en) * 2017-06-13 2018-12-13 Misapplied Sciences, Inc. Coordinated multi-view display experiences
CN109104603B (zh) * 2018-09-25 2020-11-03 张家港康得新光电材料有限公司 一种视点补偿方法、装置、电子设备和存储介质

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070058034A1 (en) * 2005-09-12 2007-03-15 Shunichi Numazaki Stereoscopic image display device, stereoscopic display program, and stereoscopic display method
US20110037830A1 (en) * 2008-04-24 2011-02-17 Nokia Corporation Plug and play multiplexer for any stereoscopic viewing device
CN104079919A (zh) * 2009-11-04 2014-10-01 三星电子株式会社 使用主动亚像素渲染的高密度多视点图像显示系统及方法
CN103988504A (zh) * 2011-11-30 2014-08-13 三星电子株式会社 用于子像素渲染的图像处理设备和方法
CN104769660A (zh) * 2012-10-01 2015-07-08 视瑞尔技术公司 用于相干光的相位调制的可控设备
CN109495734A (zh) * 2017-09-12 2019-03-19 三星电子株式会社 用于自动立体三维显示器的图像处理方法和设备
CN109561294A (zh) * 2017-09-25 2019-04-02 三星电子株式会社 用于渲染图像的方法和设备
CN108885377A (zh) * 2018-06-14 2018-11-23 京东方科技集团股份有限公司 显示设备及其驱动方法

Also Published As

Publication number Publication date
TWI782361B (zh) 2022-11-01
EP4068772A4 (en) 2023-08-23
US20220408077A1 (en) 2022-12-22
EP4068772A1 (en) 2022-10-05
CN112929643A (zh) 2021-06-08
TW202137758A (zh) 2021-10-01
CN112929643B (zh) 2022-06-28

Similar Documents

Publication Publication Date Title
TWI746302B (zh) 多視點3d顯示屏、多視點3d顯示終端
TWI818211B (zh) 眼部定位裝置、方法及3d顯示裝置、方法
CN110557626B (zh) 一种图像显示的方法及电子设备
CN112929647A (zh) 3d显示设备、方法和终端
WO2021110033A1 (zh) 3d显示设备、方法及终端
CN112584125A (zh) 三维图像显示设备及其显示方法
CN211791829U (zh) 3d显示设备
CN211791828U (zh) 3d显示设备
WO2021110027A1 (zh) 实现3d图像显示的方法、3d显示设备
CN211128026U (zh) 多视点裸眼3d显示屏、多视点裸眼3d显示终端
WO2021110040A1 (zh) 多视点3d显示屏、3d显示终端
CN211528831U (zh) 多视点裸眼3d显示屏、裸眼3d显示终端
CN211930763U (zh) 3d显示设备
US20220417494A1 (en) Method for realizing 3d image display, and 3d display device
CN112929645A (zh) 3d显示设备、系统和方法及3d视频数据通信方法
WO2021110037A1 (zh) 实现3d图像显示的方法、3d显示设备
TWI840636B (zh) 實現3d圖像顯示的方法、3d顯示設備
KR20200019028A (ko) 모바일 디바이스 및 그 제어 방법
CN112929641B (zh) 3d图像显示方法、3d显示设备
CN116841350A (zh) 一种3d显示方法以及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20896638

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020896638

Country of ref document: EP

Effective date: 20220628

NENP Non-entry into the national phase

Ref country code: DE