WO2021110026A1 - Method for realizing 3D image display, and 3D display device - Google Patents

Method for realizing 3D image display, and 3D display device

Info

Publication number
WO2021110026A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
posture
display device
image
orientation
Prior art date
Application number
PCT/CN2020/133317
Other languages
English (en)
French (fr)
Inventor
刁鸿浩
黄玲溪
Original Assignee
视觉技术创投私人有限公司
北京芯海视界三维科技有限公司
Priority date
Filing date
Publication date
Application filed by 视觉技术创投私人有限公司 and 北京芯海视界三维科技有限公司
Priority to US17/781,377 (published as US20220417494A1)
Priority to EP20896949.3A (published as EP4068780A4)
Publication of WO2021110026A1

Classifications

    • H04N 13/398 — Stereoscopic or multi-view video systems; image reproducers; synchronisation or control thereof
    • H04N 13/302 — Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N 13/366 — Image reproducers using viewer tracking
    • H04N 13/383 — Image reproducers using viewer tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • G09G 3/003 — Control arrangements or circuits for visual indicators other than cathode-ray tubes, using specific devices to produce spatial visual effects
    • G09G 5/38 — Control arrangements or circuits characterised by the display of a graphic pattern, with means for controlling the display position
    • G09G 2340/0464 — Aspects of display data processing; changes in size, position or resolution of an image; positioning
    • G09G 2340/0492 — Aspects of display data processing; change of orientation of the displayed image, e.g. upside-down, mirrored
    • G09G 2354/00 — Aspects of interface with display user

Definitions

  • This application relates to the field of 3D display technology, for example, to a method for realizing 3D image display and a 3D display device.
  • 3D display devices achieve 3D display effects by means of a grating that refracts light from the pixels.
  • the related art has at least the following problem: a display device configured to present a suitable 3D effect in one posture does not have the ability to display a suitable picture in another posture.
  • the embodiments of the present disclosure provide a method for realizing 3D image display, a 3D display device, a computer-readable storage medium, and a computer program product, to solve the technical problem that an electronic device cannot display a suitable picture after its posture changes.
  • a method for realizing 3D image display includes: detecting a posture change of the 3D display device; and, when a posture change of the 3D display device is detected, adjusting the displayed image to a display dimension different from the display dimension before the posture change, and adjusting the display orientation of the displayed image so that it keeps the initial display orientation it had before the posture change of the 3D display device.
  • in some embodiments, detecting the posture change of the 3D display device includes: detecting the rotational angular velocity of the 3D display device and determining the posture change according to that angular velocity; adjusting the display orientation of the displayed image includes: rotating the image within the plane in which it lies, so that the image maintains the initial display orientation it had before the posture change of the 3D display device.
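As a rough illustration of this detection step, the following Python sketch integrates a gyroscope's angular velocity about the screen-normal axis and reports a posture change when the accumulated angle crosses into a different posture band. All names, thresholds, and the sensor interface are assumptions made for illustration, not taken from the disclosure.

```python
LANDSCAPE, PORTRAIT, OBLIQUE = "landscape", "portrait", "oblique"  # hypothetical labels

def classify_posture(roll_deg: float) -> str:
    """Map an accumulated in-plane rotation angle (degrees) to a posture band."""
    a = roll_deg % 180.0
    if a < 15.0 or a > 165.0:          # near 0 or 180 degrees
        return LANDSCAPE
    if 75.0 < a < 105.0:               # near 90 degrees
        return PORTRAIT
    return OBLIQUE

class PostureDetector:
    """Integrates gyroscope angular velocity to detect posture changes."""

    def __init__(self, initial: str = LANDSCAPE) -> None:
        self.roll_deg = 0.0
        self.posture = initial

    def on_gyro_sample(self, omega_deg_per_s: float, dt_s: float):
        """Feed one angular-velocity sample; return (old, new) on a change."""
        self.roll_deg += omega_deg_per_s * dt_s     # integrate angular velocity
        new_posture = classify_posture(self.roll_deg)
        if new_posture != self.posture:
            old, self.posture = self.posture, new_posture
            return old, new_posture
        return None
```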
  • the posture of the 3D display device includes at least one of the following: a horizontal screen display posture, a vertical screen display posture, and an oblique screen display posture.
  • in some embodiments, the first posture of the 3D display device before the posture change is any one of the horizontal screen display posture, the vertical screen display posture, and the oblique screen display posture, and the second posture of the 3D display device after the posture change is any one of those postures that is different from the first posture; adjusting the display orientation of the displayed image includes: rotating the image to keep it at the initial display orientation corresponding to the first posture.
  • adjusting the display orientation of the displayed image further includes: displaying the image in a full-screen display manner.
  • adjusting the display orientation of the displayed image includes: rotating the display orientation of the image in the plane where the image is located, so that the image remains within the initial display orientation range; wherein the initial display orientation range includes the initial display orientation.
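The orientation-keeping step in these embodiments amounts to counter-rotating the image within its own plane by the device's rotation. A minimal sketch follows, assuming the device roll is known in degrees; the tolerance parameter models the "initial display orientation range" variant and is an illustrative assumption.

```python
def image_rotation_deg(device_roll_deg: float, tolerance_deg: float = 0.0) -> float:
    """Counter-rotation applied to the image within its own plane.

    With tolerance_deg == 0 the image is held exactly at the initial display
    orientation; a positive tolerance merely keeps it within an initial
    display orientation range around that orientation.
    """
    compensation = (-device_roll_deg) % 360.0
    if tolerance_deg > 0.0:
        step = 2.0 * tolerance_deg
        # snap to the nearest allowed orientation inside the tolerance band
        compensation = (round(compensation / step) * step) % 360.0
    return compensation
```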
  • the method for implementing 3D image display further includes: adjusting the display orientation of the displayed image according to the viewing orientation of the user, so that the display orientation of the image is consistent with the viewing orientation of the user.
  • in some embodiments, the viewing orientation of the user includes any one of a horizontal viewing orientation, a vertical viewing orientation, and an oblique viewing orientation; the method for realizing 3D image display further includes: determining the user's viewing orientation according to the obtained eye positioning data.
  • adjusting the display orientation of the displayed image includes: rendering sub-pixels in the multi-view 3D display screen of the 3D display device based on the adjusted display orientation of the image.
  • adjusting the displayed image to a display dimension different from the display dimension before the posture of the 3D display device is changed includes adjusting the displayed image to a 3D image.
  • in some embodiments, adjusting the displayed image to a 3D image includes: in response to a change in the posture of the 3D display device, rendering, according to the 3D image to be played, the corresponding sub-pixels among the multiple composite sub-pixels in the multi-viewpoint 3D display screen of the 3D display device.
  • adjusting the displayed image to a display dimension that is different from the display dimension before the posture of the 3D display device is changed includes adjusting the displayed image to a 2D image.
  • in some embodiments, adjusting the displayed image to a 2D image includes: in response to a change in the posture of the 3D display device, rendering, according to the 2D image to be played, at least one sub-pixel in each composite sub-pixel in the multi-viewpoint 3D display screen of the 3D display device.
  • in some embodiments, rendering at least one sub-pixel in each composite sub-pixel according to the 2D image to be played includes: rendering, based on the eye positioning data and according to the 2D image to be played, the corresponding sub-pixel in each composite sub-pixel in the multi-viewpoint 3D display screen of the 3D display device.
  • a 3D display device including: a processor; and a memory storing program instructions; wherein the processor is configured to execute the above-mentioned method when executing the above-mentioned program instructions.
  • the embodiments of the present disclosure provide a 3D display device including: a posture detection device configured to detect a posture change of the 3D display device; and a 3D processing device configured, based on the detected posture change of the 3D display device, to adjust the displayed image to a display dimension different from the display dimension before the posture change, and to adjust the display orientation of the displayed image so that it keeps the initial display orientation it had before the posture change of the 3D display device.
  • in some embodiments, the posture detection device is configured to detect the rotational angular velocity of the 3D display device and to determine the posture change according to that angular velocity; the 3D processing device is configured to rotate the display orientation of the image within the plane in which the displayed image lies, so that the image keeps the initial display orientation it had before the posture change of the 3D display device.
  • the posture of the 3D display device includes at least one of the following: a horizontal screen display posture, a vertical screen display posture, and an oblique screen display posture.
  • in some embodiments, the first posture of the 3D display device before the posture change is any one of the horizontal screen display posture, the vertical screen display posture, and the oblique screen display posture, and the second posture of the 3D display device after the posture change is any one of those postures that is different from the first posture; the 3D processing device is configured to rotate the displayed image to keep it at the initial display orientation corresponding to the first posture.
  • the 3D processing device is configured to display the adjusted image in a full-screen display mode when any one of the first posture and the second posture is an oblique screen display posture.
  • the 3D processing device is configured to rotate the display orientation of the image in the plane where the displayed image is located, so as to keep the image within the initial display orientation range; wherein the initial display orientation range includes the initial display orientation.
  • the 3D processing device is configured to adjust the display orientation of the displayed image according to the viewing orientation of the user, so that the display orientation of the image is consistent with the viewing orientation of the user.
  • in some embodiments, the user's viewing orientation includes any one of a horizontal viewing orientation, a vertical viewing orientation, and an oblique viewing orientation; the 3D display device further includes an eye positioning device, or an eye positioning data interface, configured to obtain eye positioning data, and the 3D processing device is configured to determine the user's viewing orientation according to the obtained eye positioning data.
  • the 3D processing apparatus is configured to render the sub-pixels in the multi-view 3D display screen of the 3D display device based on the display orientation of the adjusted image.
  • in some embodiments, the 3D processing device is configured to render, in response to a change in the posture of the 3D display device, the corresponding sub-pixels among the multiple composite sub-pixels in the multi-viewpoint 3D display screen of the 3D display device according to the 3D image to be played.
  • in some embodiments, the 3D processing device is configured to render, in response to a change in the posture of the 3D display device, at least one sub-pixel in each composite sub-pixel in the multi-viewpoint 3D display screen of the 3D display device according to the 2D image to be played.
  • the 3D processing device is configured to render the corresponding sub-pixel in each composite sub-pixel in the multi-view 3D display screen of the 3D display device based on the eye positioning data according to the 2D image to be played.
  • the computer-readable storage medium provided by the embodiment of the present disclosure stores computer-executable instructions, and the above-mentioned computer-executable instructions are configured to execute the above-mentioned method for implementing 3D image display.
  • the computer program product provided by the embodiments of the present disclosure includes a computer program stored on a computer-readable storage medium; the computer program includes program instructions that, when executed by a computer, cause the computer to execute the above method for realizing 3D image display.
  • the method for realizing 3D image display, the 3D display device, the computer-readable storage medium, and the computer program product provided by the embodiments of the present disclosure can achieve the following technical effects:
  • the electronic device can provide good 3D or 2D display in different postures, and the posture conversion will not affect the user experience.
  • the display resolution of the multi-viewpoint 3D display screen is defined in terms of composite pixels, and this composite-pixel resolution is the quantity considered during transmission and display; this reduces the amount of transmission and rendering computation while ensuring a high-definition display effect, thereby achieving high-quality 3D display.
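A back-of-the-envelope illustration of this effect (the 6-viewpoint figure matches the example used later in the description; the m and n values are assumed): the quantity that transmission and rendering are sized by scales with the composite-pixel resolution m x n, not with the full physical sub-pixel count.

```python
m, n = 1920, 1080        # assumed display resolution in composite pixels
viewpoints = 6           # i same-color sub-pixels per composite sub-pixel
colors = 3               # red, green, and blue composite sub-pixels

physical_subpixels = m * n * colors * viewpoints   # what the panel contains
transmitted_pixels = m * n                         # what transmission is sized by
print(physical_subpixels, transmitted_pixels)      # 37324800 vs 2073600
```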
  • FIGS. 1A to 1C are schematic structural diagrams of a 3D display device according to embodiments of the present disclosure;
  • FIG. 2 is a schematic diagram of the hardware structure of a 3D display device according to an embodiment of the present disclosure;
  • FIG. 3 is a schematic diagram of the software structure of a 3D display device according to an embodiment of the present disclosure;
  • FIGS. 4A and 4B are schematic diagrams of the format and content of images contained in video frames of a 3D video signal according to embodiments of the present disclosure;
  • FIG. 5A is a schematic front view of a 3D display device in a first posture according to an embodiment of the present disclosure;
  • FIG. 5B is a schematic front view of a 3D display device in a second posture according to an embodiment of the present disclosure;
  • FIGS. 6A and 6B are schematic diagrams of a 3D display device rendering sub-pixels in a first posture according to embodiments of the present disclosure;
  • FIGS. 7A to 7D are schematic diagrams of a 3D display device rendering sub-pixels in a second posture according to embodiments of the present disclosure;
  • FIG. 8 is a flowchart of switching between displaying a 3D image and a 2D image in a 3D display device according to an embodiment of the present disclosure;
  • FIG. 9 is a schematic structural diagram of a 3D display device according to an embodiment of the present disclosure.
  • An embodiment according to the present disclosure provides a 3D display device including a multi-viewpoint 3D display screen (for example, a multi-viewpoint naked eye 3D display screen).
  • the multi-viewpoint 3D display screen includes multiple composite pixels; each of the multiple composite pixels includes multiple composite sub-pixels, and each composite sub-pixel comprises multiple sub-pixels corresponding to the multiple viewpoints of the 3D display device in one posture.
  • the 3D display device may include a posture detection device and a 3D processing device.
  • the multi-viewpoint 3D display screen includes a plurality of composite pixels, a first posture playback area corresponding to the first posture of the 3D display device, and a second posture playback area corresponding to the second posture of the 3D display device.
  • Each composite pixel includes multiple composite sub-pixels, each composite sub-pixel is composed of multiple same-color sub-pixels, and the multiple same-color sub-pixels of each composite sub-pixel correspond to multiple viewpoints in the first posture of the 3D display device.
  • the posture detection device is configured to detect the posture of the 3D display device.
  • the 3D signal interface is configured to receive 3D signals; the 3D processing device is configured to process the 3D signals, so as to play 3D images from the 3D signal in the first posture playback area and to play 2D images from the 3D signal in the second posture playback area.
  • the 3D processing device is communicatively connected with the multi-view 3D display screen. In some embodiments, the 3D processing device is communicatively connected with the driving device of the multi-view 3D display screen.
  • the posture detection device is communicatively connected with the 3D processing device.
  • the posture of the 3D display device includes at least one of the following: a horizontal screen display posture, a vertical screen display posture, and an oblique screen display posture.
  • the first posture of the 3D display device before the posture change is any one of the horizontal screen display posture, the vertical screen display posture, and the oblique screen display posture, and the second posture of the 3D display device after the posture change is any one of those postures that is different from the first posture.
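A minimal dispatch sketch of this behavior (and of the flow later shown in FIG. 8) follows. Here `device`, its attributes, and the `play_3d`/`play_2d` methods are hypothetical stand-ins for the posture detection device, the 3D signal interface, and the 3D processing device; none of these names come from the disclosure.

```python
FIRST_POSTURE, SECOND_POSTURE = "first", "second"   # hypothetical labels

def on_posture(device, posture: str) -> None:
    """Play 3D in the first-posture playback area, 2D in the second."""
    frame = device.signal_interface.next_frame()    # video frame of the 3D signal
    if posture == FIRST_POSTURE:                    # e.g. horizontal screen posture
        device.processor_3d.play_3d(frame, device.first_posture_area)
    else:                                           # e.g. vertical screen posture
        device.processor_3d.play_2d(frame, device.second_posture_area)
```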
  • FIG. 1A shows a 3D display device 100 according to an embodiment of the present disclosure.
  • the 3D display device 100 includes a multi-viewpoint 3D display screen 110, at least one 3D processing device 130, a 3D signal interface (such as a video signal interface 140) configured to receive 3D signals such as video frames of a 3D video signal, a processor 120, and a posture detection device 180.
  • the multi-view 3D display screen 110 may include a display panel and a grating (not shown) covering the display panel.
  • the multi-view 3D display screen 110 may include m columns and n rows (m ⁇ n) of composite pixels 400 and thus defines a display resolution of m ⁇ n.
  • each composite pixel includes a plurality of composite sub-pixels.
  • each composite pixel 400 includes three composite sub-pixels 410, 420, and 430.
  • the three composite sub-pixels correspond to three colors, namely, the red composite sub-pixel 410, the green composite sub-pixel 420, and the blue composite sub-pixel 430, respectively.
  • Each composite sub-pixel is composed of i sub-pixels of the same color corresponding to i viewpoints, i ⁇ 3.
  • the 3D display device 100 may correspondingly have 6 viewpoints V1-V6.
  • the red composite sub-pixel 410 has 6 red sub-pixels R
  • the green composite sub-pixel 420 has 6 green sub-pixels G
  • the blue composite sub-pixel 430 has 6 blue sub-pixels B.
  • in other embodiments, i may take other values greater than or less than 6.
  • in some embodiments, the sub-pixels of each composite sub-pixel 410, 420, 430 are arranged in a row, for example in a single row, and the rows of sub-pixels of the composite sub-pixels 410, 420, 430 are parallel to one another.
  • the composite sub-pixels in the composite pixel have other different arrangements or the sub-pixels in the composite sub-pixels have other different arrangements.
  • the sub-pixels in each composite sub-pixel are arranged in columns, for example, in a single column.
  • the sub-pixels in each composite sub-pixel are arranged in an array.
  • each composite sub-pixel has a sub-pixel corresponding to each viewpoint.
  • in some embodiments, the multiple sub-pixels of each composite sub-pixel are arranged in rows along the lateral direction of the multi-viewpoint 3D display screen, and the multiple sub-pixels in each row have the same color. Since the multiple viewpoints of the 3D display device are arranged roughly along the horizontal direction of the multi-viewpoint 3D display screen, when the user moves so that the eyes fall on different viewpoints, different sub-pixels corresponding to the respective viewpoints in each composite sub-pixel need to be rendered dynamically. Because the same-color sub-pixels in each composite sub-pixel are arranged in a row, the cross-color problem caused by persistence of vision can be avoided.
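The pixel hierarchy just described can be sketched as a data structure. This is an illustrative Python model, with i = 6 viewpoints as in the running example and the sub-pixels of each composite sub-pixel stored as one same-color row indexed by viewpoint; all type and method names are assumptions. The rendering sketches later in this document reuse these two types.

```python
from dataclasses import dataclass, field
from typing import List

VIEWPOINTS = 6  # i = 6 in the running example; in general i >= 3

@dataclass
class CompositeSubPixel:
    color: str                                   # "R", "G" or "B"
    # one same-color physical sub-pixel per viewpoint, laid out in a row
    levels: List[int] = field(default_factory=lambda: [0] * VIEWPOINTS)

    def render(self, viewpoint: int, level: int) -> None:
        # light the same-color sub-pixel assigned to that viewpoint
        self.levels[viewpoint] = level

@dataclass
class CompositePixel:
    red: CompositeSubPixel = field(default_factory=lambda: CompositeSubPixel("R"))
    green: CompositeSubPixel = field(default_factory=lambda: CompositeSubPixel("G"))
    blue: CompositeSubPixel = field(default_factory=lambda: CompositeSubPixel("B"))

def make_screen(m: int, n: int) -> List[List[CompositePixel]]:
    """A multi-viewpoint 3D display screen of m columns x n rows of
    composite pixels, i.e. a display resolution of m x n."""
    return [[CompositePixel() for _ in range(m)] for _ in range(n)]
```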
  • the 3D display device 100 may be provided with a single 3D processing device 130.
  • a single 3D processing device 130 processes the rendering of sub-pixels in each composite sub-pixel of the 3D display screen 110 at the same time.
  • the 3D display device may be provided with at least two 3D processing devices. At least two 3D processing devices process the rendering of the sub-pixels in each composite sub-pixel of the multi-view 3D display screen in parallel, serial or a combination of serial and parallel.
  • the 3D processing device 130 may also optionally include a buffer 131 to buffer the received video frames.
  • the 3D processing device is an FPGA or ASIC chip or FPGA or ASIC chipset.
  • the 3D display device 100 may further include a processor 120 communicatively connected to the 3D processing device 130 through the video signal interface 140.
  • the processor 120 is included in a computer or a smart terminal, such as a mobile terminal, or as a processor device.
  • an exemplary embodiment of the 3D display device 100 includes a processor 120 internally.
  • the video signal interface 140 is correspondingly configured as an internal interface connecting the processor 120 and the 3D processing device 130.
  • Such a 3D display device 100 may be, for example, a mobile terminal, and the video signal interface 140, as an internal interface of the 3D display device 100, may be a MIPI, mini-MIPI, LVDS, mini-LVDS, or DisplayPort interface.
  • the processor 120 of the 3D display device 100 may further include a register 121.
  • the register 121 can be configured to temporarily store instructions, data, and addresses.
  • the posture detection device 180 is communicatively connected with the processor 120.
  • the posture detection device 180 includes a gravity sensor.
  • the posture detection device 180 includes a gyroscope sensor.
  • the posture detection device 180 includes both a gravity sensor and a gyroscope sensor.
  • the 3D display device further includes an eye positioning device or an eye positioning data interface configured to obtain eye positioning data.
  • the 3D display device 100 further includes an eye positioning device 150 communicatively connected to the 3D processing device 130, so that the 3D processing device 130 can directly receive eye positioning data.
  • the eye positioning device (not shown) may be directly connected to the processor 120, for example, and the 3D processing device 130 obtains eye positioning data from the processor 120 via the eye positioning data interface 160.
  • the eye positioning device can be connected to the processor and the 3D processing device at the same time, so that on the one hand the 3D processing device 130 can obtain eye positioning data directly from the eye positioning device, and on the other hand the other information obtained by the eye positioning device can be processed by the processor.
  • FIG. 2 shows a schematic diagram of the hardware structure of a 3D display device 200 implemented as a mobile terminal, such as a smartphone or a tablet computer.
  • the 3D display device 200 may include a processor 201, an external memory interface 211, an (internal) memory 210, a universal serial bus (USB) interface 213, a charging management module 214, a power management module 215, a battery 216, a mobile communication module 240, a wireless communication module 242, antennas 239 and 241, an audio module 234, a speaker 235, a receiver 236, a microphone 237, an earphone interface 238, buttons 209, a motor 208, an indicator 207, a subscriber identification module (SIM) card interface 221, a multi-view 3D display screen 202, a 3D processing device 203, a 3D signal interface (such as a video signal interface 204), a camera 206, a sensor module 220, an eye positioning device 205, and so on.
  • the sensor module 220 may include a proximity light sensor 221, an ambient light sensor 222, a pressure sensor 223, an air pressure sensor 224, a magnetic sensor 225, a gravity sensor 226, a gyroscope sensor 227, an acceleration sensor 228, a distance sensor 229, a temperature sensor 230, a fingerprint sensor 231, a touch sensor 232, a bone conduction sensor 233, and the like.
  • the structure illustrated in the embodiments of the present disclosure does not constitute a specific limitation on the 3D display device 200.
  • the 3D display device 200 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 201 may include one or more processing units.
  • the processor 201 may include one of the following or a combination of at least two of the following: application processor (AP), modem processor, baseband processor, graphics processor (GPU), image signal processor (ISP), controller, memory, video codec, digital signal processor (DSP), baseband processor, neural network processor (NPU), etc.
  • different processing units may be independent devices, and in some embodiments, different processing units may be integrated in one or more processors.
  • the processor 201 may also be provided with a cache memory configured to store instructions or data that the processor 201 has just used or used cyclically. If the processor 201 needs to use the instructions or data again, they can be called directly from this cache.
  • the processor 201 may include one or more interfaces.
  • Interfaces can include integrated circuit (I2C) interface, integrated circuit built-in audio (I2S) interface, pulse code modulation (PCM) interface, universal asynchronous receiver transmitter (UART) interface, mobile industry processor interface (MIPI), universal input and output (GPIO) interface, user identification module (SIM) interface, universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 201 may include multiple sets of I2C buses. The processor 201 can communicate with the touch sensor, the charger, the flashlight, the camera device, the eye positioning device, etc., respectively, through different I2C bus interfaces.
  • Both I2S interface and PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus may be a two-way communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is used to connect the processor 201 and the wireless communication module 242.
  • the MIPI interface can be used to connect the processor 201 and the multi-view 3D display 202.
  • the MIPI interface can also be used to connect peripheral devices such as the camera 206 and the eye positioning device 205.
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface may be used to connect the processor 201 and the camera 206, the multi-view 3D display 202, the wireless communication module 242, the audio module 234, the sensor module 220, and so on.
  • the USB interface 213 is an interface that complies with the USB standard specification, and may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 213 can be used to connect a charger to charge the 3D display device 200, and can also be used to transfer data between the 3D display device 200 and peripheral devices. It can also be used to connect headphones and play audio through the headphones.
  • the wireless communication function of the 3D display device 200 may be implemented by the antennas 241 and 239, the mobile communication module 240, the wireless communication module 242, the modem processor, or the baseband processor.
  • the antennas 241 and 239 are configured to transmit and receive electromagnetic wave signals.
  • Each antenna in the 3D display device 200 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the mobile communication module 240 may provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the 3D display device 200.
  • the mobile communication module 240 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like.
  • the mobile communication module 240 can receive electromagnetic waves by the antenna 239, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 240 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation by the antenna 239.
  • at least part of the functional modules of the mobile communication module 240 may be provided in the processor 201.
  • at least part of the functional modules of the mobile communication module 240 and at least part of the modules of the processor 201 may be provided in the same device.
  • the wireless communication module 242 can provide wireless communication solutions applied to the 3D display device 200, including wireless local area network (WLAN), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near-field communication (NFC), and infrared (IR).
  • the wireless communication module 242 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 242 receives electromagnetic waves via the antenna 241, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 201.
  • the wireless communication module 242 may also receive a signal to be sent from the processor 201, perform frequency modulation, amplify, and convert it into electromagnetic waves to radiate through the antenna 241.
  • the antenna 239 of the 3D display device 200 is coupled with the mobile communication module 240, and the antenna 241 is coupled with the wireless communication module 242, so that the 3D display device 200 can communicate with the network and other devices through wireless communication technology.
  • Wireless communication technologies can include at least one of Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Time-Division Synchronous Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), BT, GNSS, WLAN, NFC, FM, or IR technology.
  • GNSS may include at least one of the Global Positioning System (GPS), the Global Navigation Satellite System (GLONASS), the BeiDou Navigation Satellite System (BDS), the Quasi-Zenith Satellite System (QZSS), or a Satellite-Based Augmentation System (SBAS).
  • the external interface for receiving 3D video signals may include a USB interface 213, a mobile communication module 240, a wireless communication module 242, or any combination thereof.
  • other feasible interfaces for receiving 3D video signals are also conceivable, such as the aforementioned interfaces.
  • the memory 210 may be used to store computer executable program code, and the executable program code includes instructions.
  • the processor 201 executes various functional applications and data processing of the 3D display device 200 by running instructions stored in the memory 210.
  • the memory 210 may include a program storage area and a data storage area. Among them, the storage program area can store an operating system, an application program (such as a sound playback function, an image playback function, etc.) required by at least one function, and the like.
  • the data storage area can store data (such as audio data, phone book) created during the use of the 3D display device 200 and the like.
  • the memory 210 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
  • the external memory interface 212 may be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the 3D display device 200.
  • the external memory card communicates with the processor 201 through the external memory interface 212 to realize the data storage function.
  • the memory of the 3D display device may include (internal) memory 210, an external memory card connected to external memory interface 212, or a combination thereof.
  • the video signal interface may also adopt different internal interface connection modes or combinations of the above-mentioned embodiments.
  • the camera 206 may capture images or videos.
  • the 3D display device 200 implements the display function through the video signal interface 204, the 3D processing device 203, the multi-view 3D display screen 202, and the application processor.
  • the 3D display device 200 may include a GPU 218, for example, used in the processor 201 to process 3D video images, and may also process 2D video images.
  • the 3D display device 200 further includes a video codec 219 configured to compress or decompress digital video.
  • the video signal interface 204 is configured to output a 3D video signal processed by the GPU 218 or the codec 219 or both, such as a video frame of a decompressed 3D video signal, to the 3D processing device 203.
  • the GPU 218 or the codec 219 is integrated with a format adjuster.
  • the multi-view 3D display screen 202 is used to display three-dimensional (3D) images, videos, and the like.
  • the multi-view 3D display screen 202 includes a display panel.
  • the display panel can adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum-dot light-emitting diode (QLED), or the like.
  • the eye positioning device 205 is communicatively connected to the 3D processing device 203, so that the 3D processing device 203 can render the corresponding sub-pixels in the composite pixel (composite sub-pixel) based on the eye positioning data.
  • the eye positioning device 205 may also be connected to the processor 201, for example in a bypass connection.
  • the 3D display device 200 can implement audio functions through an audio module 234, a speaker 235, a receiver 236, a microphone 237, a headphone interface 238, an application processor, and the like.
  • the audio module 234 is configured to convert digital audio information into an analog audio signal for output, and is also configured to convert an analog audio input into a digital audio signal.
  • the audio module 234 may also be configured to encode and decode audio signals.
  • the audio module 234 may be provided in the processor 201, or part of the functional modules of the audio module 234 may be provided in the processor 201.
  • the speaker 235 is configured to convert audio electrical signals into sound signals.
  • the 3D display device 200 can listen to music through the speaker 235, or listen to a hands-free call.
  • the receiver 236, also called an "earpiece", is used to convert audio electrical signals into sound signals. When the 3D display device 200 answers a call or a voice message, the voice can be heard by bringing the receiver 236 close to the ear.
  • the microphone 237 is configured to convert a sound signal into an electric signal.
  • the earphone interface 238 is configured to connect a wired earphone.
  • the earphone interface 238 may be a USB interface, a 3.5 mm Open Mobile Terminal Platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • the button 209 includes a power-on button, a volume button, and so on.
  • the button 209 may be a mechanical button or a touch button.
  • the 3D display device 200 may receive key input, and generate key signal input related to user settings and function control of the 3D display device 200.
  • the motor 208 can generate vibration prompts.
  • the motor 208 can be configured to vibrate to prompt an incoming call, or can be configured to vibrate to feedback touches.
  • the SIM card interface 211 is configured to connect to a SIM card.
  • the 3D display device 200 uses an embedded SIM card (eSIM).
  • the pressure sensor 223 is configured to sense a pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 223 may be disposed on the multi-view 3D display screen 202, which falls within the scope of the embodiments of the present disclosure.
  • the air pressure sensor 224 is used to measure air pressure.
  • the 3D display device 200 calculates the altitude based on the air pressure value measured by the air pressure sensor 224 to assist positioning and navigation.
  • the magnetic sensor 225 includes a Hall sensor.
  • the gravity sensor 226, as a posture detection device, can convert motion or gravity into electrical signals, and is configured to measure parameters such as tilt angle, inertial force, impact, and vibration.
  • the gyro sensor 227 is configured as a posture detection device to determine the motion posture of the 3D display device 200.
  • with the aid of the gravity sensor 226 or the gyroscope sensor 227, it can be detected whether the 3D display device 200 is in the first posture or in a second posture different from the first posture.
  • the acceleration sensor 228 can detect the magnitude of the acceleration of the 3D display device 200 in various directions (generally three axes).
  • Distance sensor 229 can be configured to measure distance
  • the temperature sensor 230 may be configured to detect temperature.
  • the fingerprint sensor 231 may be configured to collect fingerprints.
  • the 3D display device 200 can use the collected fingerprint characteristics to implement fingerprint unlocking, access application lock, fingerprint photography, fingerprint answering calls, and so on.
  • the touch sensor 232 may be arranged in the multi-view 3D display screen 202, and the touch sensor 232 and the multi-view 3D display screen 202 form a touch-sensitive screen, also called a "touch screen".
  • the bone conduction sensor 233 can acquire vibration signals.
  • the charging management module 214 is configured to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 214 may receive the charging input of the wired charger through the USB interface 213.
  • the charging management module 214 may receive the wireless charging input through the wireless charging coil of the 3D display device 200.
  • the power management module 215 is configured to connect the battery 216 and the charging management module 214 to the processor 201.
  • the power management module 215 receives input from at least one of the battery 216 or the charging management module 214, and supplies power to the processor 201, the memory 210, the external memory, the multi-view 3D display 202, the camera 206, and the wireless communication module 242.
  • the power management module 215 and the charging management module 214 may also be provided in the same device.
  • the software system of the 3D display device 200 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiments shown in the present disclosure exemplify the software structure of the 3D display device 200 by taking an Android system with a layered architecture as an example.
  • the embodiments of the present disclosure can be implemented in different software systems, such as operating systems.
  • FIG. 3 is a schematic diagram of the software structure of a 3D display device 200 according to an embodiment of the present disclosure.
  • the layered architecture divides the software into several layers, which communicate with one another through software interfaces.
  • the Android system is divided into four layers, from top to bottom, the application layer 510, the framework layer 520, the core class library and runtime (Runtime) 530, and the kernel layer 540, respectively.
  • the application layer 510 may include a series of application packages. As shown in Figure 3, the application package may include applications such as Bluetooth, WLAN, navigation, music, camera, calendar, call, video, gallery, map, and SMS.
  • the 3D video display method according to the embodiment of the present disclosure may be implemented in a video application program, for example.
  • the framework layer 520 provides an application programming interface (API) and a programming framework for applications in the application layer.
  • the framework layer includes some predefined functions. For example, in some embodiments of the present disclosure, the function or algorithm for recognizing the collected 3D video image and the algorithm for processing the image may be included in the framework layer.
  • the framework layer 520 may include a resource manager, a phone manager, a content manager, a notification manager, a window manager, a view system, and an installation package manager.
  • Android Runtime includes core libraries and virtual machines. Android Runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the framework layer run in a virtual machine.
  • the virtual machine executes the java files of the application layer and the framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the core class library can include multiple functional modules. For example: 3D graphics processing library (for example: OpenGL ES), surface manager, image processing library, media library and graphics engine (for example: SGL), etc.
  • the kernel layer 540 is a layer between hardware and software.
  • the kernel layer includes at least a camera driver, an audio and video interface, a call interface, a Wi-Fi interface, a sensor driver, power management, and a GPS interface.
  • a 3D display device as a mobile terminal having the structure shown in FIG. 2 and FIG. 3 is taken as an example to describe an embodiment of 3D video transmission and display in the 3D display device. It is conceivable that more or fewer features may be included or changes may be made to the features in other embodiments.
  • in some embodiments, the 3D display device 200 receives, for example, a compressed 3D video signal from a network, such as a cellular network, a WLAN network, or Bluetooth, using the mobile communication module 240 and the antenna 239 or the wireless communication module 242 and the antenna 241 as an external interface. The compressed 3D video signal is, for example, image-processed by the GPU 218 and encoded, decoded, and decompressed by the codec 219. The decompressed 3D video signal is then sent, for example through the video signal interface 204 as an internal interface, such as a MIPI or mini-MIPI interface, to the at least one 3D processing device 203.
  • the video frame of the decompressed 3D video signal includes two images or a composite image of the embodiment of the present disclosure. Furthermore, the 3D processing device 203 renders the sub-pixels in the composite sub-pixels of the multi-view 3D display screen 202 accordingly, thereby realizing 3D video playback.
  • in some embodiments, the 3D display device 200 reads the (internal) memory 210 or reads, through the external memory interface 212, a compressed 3D video signal stored in the external memory card, and implements 3D video playback through corresponding processing, transmission, and rendering.
  • the playback of the aforementioned 3D video is implemented in a video application in the Android system application layer 510.
  • in some embodiments, the multi-viewpoint 3D display screen 110 can define six viewpoints V1-V6; at each viewpoint (spatial position), the user's eye can see the display of the corresponding sub-pixel in the composite sub-pixels of each composite pixel in the display panel of the multi-viewpoint 3D display screen 110. The two different images seen by the user's two eyes at different viewpoints form a parallax, and the brain synthesizes them into a 3D image.
  • the 3D processing device 130 receives video frames of, for example, a decompressed 3D video signal from the processor 120 through, for example, the video signal interface 140 as an internal interface.
  • Each video frame can contain two images or a composite image, or consist of them.
  • the two images or the composite image may include different types of images and may be in various arrangements.
  • the video frame of the 3D video signal includes or consists of two parallel images 601 and 602.
  • the two images may be a left-eye parallax image and a right-eye parallax image, respectively.
  • the two images may be a rendered color image and a depth image, respectively.
  • the video frames of the 3D video signal include interlaced composite images.
  • the composite image may be an interlaced left-eye and right-eye parallax composite image, or an interlaced rendered color and depth composite image.
  • in some embodiments, after the at least one 3D video processing device 130 receives a video frame including the two images 601 and 602, it renders at least one sub-pixel in each composite sub-pixel based on one of the two images, and renders at least one other sub-pixel in each composite sub-pixel based on the other image.
  • similarly, in some embodiments, after receiving a video frame including a composite image, the at least one 3D video processing device renders at least two sub-pixels in each composite sub-pixel based on the composite image; for example, at least one sub-pixel is rendered according to the first (partial) image in the composite image, and at least one other sub-pixel is rendered according to the second (partial) image.
  • the rendering of the sub-pixels in the composite sub-pixel is, for example, dynamic rendering based on eye positioning data.
  • in some embodiments, after the at least one 3D video processing device 130 receives a video frame containing the two images 601 and 602 as a left-eye parallax image and a right-eye parallax image, it renders based on only one of the two images, so that each composite pixel plays a 2D image.
  • each composite pixel is rendered based on the image 601.
  • each composite pixel may also be rendered based on the image 602.
  • the two images 601 and 602 are a rendered color image and a depth image, respectively, and the 2D image to be played is generated by the rendered color image and the depth image.
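To make the frame handling concrete, here is a hedged sketch that unpacks a video frame carrying two parallel images, such as the left- and right-eye parallax images 601 and 602, and hands one image to each eye's viewpoint. The side-by-side layout, the array shapes, and all function names are assumptions for illustration.

```python
import numpy as np

def split_side_by_side(frame: np.ndarray):
    """frame: H x 2W x 3 array holding two parallel images -> (left, right)."""
    height, double_width, _ = frame.shape
    width = double_width // 2
    return frame[:, :width], frame[:, width:]

def images_for_eyes(frame: np.ndarray, left_view: int, right_view: int) -> dict:
    """Map the two unpacked images to the viewpoints the two eyes occupy."""
    left_img, right_img = split_side_by_side(frame)
    return {left_view: left_img, right_view: right_img}
```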
  • the 3D display device 100 further includes a format adjuster (not shown), which is, for example, integrated in the processor 120 and configured as a codec or as a part of a GPU.
  • the format adjuster is configured to preprocess the video frames of the 3D video signal, so that the 3D image or 2D image to be played is adapted to the resolution required by the display or the device.
  • the 3D display device has two postures, and is adapted to the two postures to define two play areas.
  • the 3D display device 700 is, for example, a mobile terminal.
  • FIG. 5A shows a schematic front view of the 3D display device 700 in the first posture.
  • the first posture is, for example, the horizontal screen display posture of the 3D display device 700, and the 3D display device 700 is adapted to the first posture to define a first posture playback area 720 in the multi-viewpoint 3D display screen 730.
  • FIG. 5B shows a schematic front view of the 3D display device 700 in the second posture.
  • the second posture is, for example, the vertical screen display posture of the 3D display device 700, and the 3D display device 700 is adapted to the second posture to define a second posture play area 740 in the multi-viewpoint 3D display screen 730.
  • the 3D display device 700 may have a built-in posture detection device and a built-in 3D processing device.
  • the posture detection device is, for example, a gravity sensor or a gyroscope sensor, and is configured to detect the posture of the 3D display device 700, or the switching of its posture, or both.
  • the 3D processing device is configured to process the video frames of a 3D signal, such as a 3D video signal, so as to play the 3D image from the 3D signal in the first posture playback area 720 when the 3D display device is in the horizontal screen posture, and to play the 2D image from the 3D signal in the second posture playback area 740 when the 3D display device is in the vertical screen posture.
  • the 3D display device 700 is provided with an eye positioning device 710, and the eye positioning device 710 is configured to obtain eye positioning data.
  • in some embodiments, when the 3D display device is in the first posture or is switched from the second posture to the first posture, the 3D processing device renders, based on the eye positioning data and according to the 3D image to be played, the corresponding sub-pixels in the composite sub-pixels in the first posture playback area.
  • the eye positioning data includes, for example, the spatial position information of the user's eyes, and the 3D processing device can obtain the viewpoint positions of the user's eyes based on this spatial position information. The corresponding sub-pixels rendered in the first posture playback area are the sub-pixels corresponding to the viewpoint positions of the user's eyes.
  • the correspondence between viewpoints and eye spatial positions, and the correspondence between sub-pixels and viewpoints, may each be stored in the 3D processing device in the form of a correspondence table; alternatively, the 3D processing device may receive or acquire the correspondence table between viewpoints and eye spatial positions and the correspondence table between sub-pixels and viewpoints.
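In sketch form, the two tables are plain lookups; every key and value below is illustrative only and not taken from the disclosure.

```python
# quantized lateral eye position -> viewpoint index (V1..V6 as 0..5)
eye_position_to_viewpoint = {-3: 0, -2: 1, -1: 2, 1: 3, 2: 4, 3: 5}

# (viewpoint, composite sub-pixel color) -> physical sub-pixel index within
# the composite sub-pixel; the identity layout mirrors FIG. 6A's numbering
subpixel_index = {(v, c): v for v in range(6) for c in "RGB"}

def viewpoints_of_eyes(left_offset: int, right_offset: int):
    """Resolve each eye's spatial position to the viewpoint it occupies."""
    return (eye_position_to_viewpoint[left_offset],
            eye_position_to_viewpoint[right_offset])
```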
  • the 3D display device may have 6 viewpoints V1-V6 corresponding to the first posture, and each composite pixel 400 in the multi-viewpoint 3D display screen of the 3D display device may have a red composite sub-pixel 410, a green composite sub-pixel 420, and a blue composite sub-pixel 430.
  • Each composite sub-pixel has 6 sub-pixels corresponding to 6 viewpoints. For clarity, only the corresponding relationship between one composite pixel 400 and six viewpoints is shown in FIG. 6A.
  • when the eye positioning device detects that each of the user's eyes is at one viewpoint, for example the left eye at viewpoint V2 and the right eye at viewpoint V4, images for these two viewpoints are generated based on the video frame of the 3D video signal, and the corresponding sub-pixels of each composite sub-pixel corresponding to the two viewpoints are rendered in the first posture playback area.
  • in the composite sub-pixels 410, 420, and 430, the sub-pixels R2, G2, and B2 corresponding to viewpoint V2 and the sub-pixels R4, G4, and B4 corresponding to viewpoint V4 are rendered.
  • the 3D display device may have 6 viewpoints V1-V6 corresponding to the first posture, and each composite pixel 400 in the multi-viewpoint 3D display screen of the 3D display device may have a red composite sub-pixel 410, a green composite sub-pixel 420, and a blue composite sub-pixel 430.
  • Each composite sub-pixel has 6 sub-pixels corresponding to 6 viewpoints. For clarity, only the corresponding relationship between one composite pixel 400 and six viewpoints is shown in FIG. 6B.
  • when the eye positioning device detects that each of the user's eyes involves two adjacent viewpoints, for example the left eye involving viewpoints V2 and V3 and the right eye involving viewpoints V4 and V5,
  • images of the two viewpoints involved by each eye are generated based on the video frame of the 3D video signal, and the sub-pixels of each composite sub-pixel corresponding to these four viewpoints are rendered in the first playback area.
  • in this case, the rendered sub-pixels of the composite sub-pixels 410, 420, and 430 are the sub-pixels R2, R3, G2, G3, B2, B3 corresponding to viewpoints V2 and V3, and the sub-pixels R4, R5, G4, G5, B4, B5 corresponding to viewpoints V4 and V5.
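  • the selection logic behind FIG. 6A and FIG. 6B can be summarized by the short sketch below; it is a hedged illustration (viewpoints numbered 1 to 6, sub-pixel indices 0-based), not the disclosure's prescribed code.

```python
# Hypothetical sketch of the 3D-mode sub-pixel selection of FIG. 6A/6B.
# Eye positioning yields, per eye, one viewpoint or two adjacent viewpoints.

def subpixels_to_render(left_eye_viewpoints, right_eye_viewpoints):
    """Return the 0-based sub-pixel indices to render in every composite
    sub-pixel of the first posture playback area."""
    viewpoints = set(left_eye_viewpoints) | set(right_eye_viewpoints)
    return sorted(v - 1 for v in viewpoints)

# FIG. 6A: left eye at V2, right eye at V4 -> R2/G2/B2 and R4/G4/B4.
assert subpixels_to_render([2], [4]) == [1, 3]

# FIG. 6B: left eye spans V2 and V3, right eye spans V4 and V5 -> four
# sub-pixels per composite sub-pixel.
assert subpixels_to_render([2, 3], [4, 5]) == [1, 2, 3, 4]
```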
  • when the 3D display device is in the second posture or is switched from the first posture to the second posture, the 3D processing device is configured to render, in the second posture playback area, at least one sub-pixel in each composite sub-pixel of each composite pixel according to the 2D image to be played.
  • thereby, the 3D display device plays the 2D image from the 3D signal to the user in the second posture.
  • the 3D display device may have 6 viewpoints V1-V6 (not shown) corresponding to the first posture, and each composite pixel 400 in the multi-viewpoint 3D display screen of the 3D display device may have a red composite sub-pixel 410, a green composite sub-pixel 420, and a blue composite sub-pixel 430.
  • each composite sub-pixel has 6 sub-pixels corresponding to the 6 viewpoints. For clarity, only one composite pixel 400 is shown in FIG. 7A.
  • when the 3D display device is in the second posture or is switched from the first posture to the second posture, an image is generated based on the video frame of the 3D video signal, and all sub-pixels of each composite sub-pixel are rendered in the second playback area. Thus, the 3D display device plays the 2D image from the 3D signal in the second posture.
  • the 3D display device may have 6 viewpoints V1-V6 (not shown) corresponding to the first posture, and each composite pixel 400 in the multi-viewpoint 3D display screen of the 3D display device may have a red composite sub-pixel 410, a green composite sub-pixel 420, and a blue composite sub-pixel 430.
  • each composite sub-pixel has 6 sub-pixels corresponding to the 6 viewpoints. For clarity, only one composite pixel 400 is shown in FIG. 7B.
  • when the 3D display device is in the second posture or is switched from the first posture to the second posture, an image is generated based on the video frame of the 3D video signal, and one sub-pixel of each composite sub-pixel is rendered in the second playback area.
  • in the illustrated embodiment, the rendered sub-pixels are R6 of the red composite sub-pixel 410, G6 of the green composite sub-pixel 420, and B6 of the blue composite sub-pixel 430.
  • thereby, the 3D display device plays the 2D image from the 3D signal in the second posture. It is conceivable that in other embodiments one or more other sub-pixels in each composite sub-pixel may be selected for rendering.
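  • the two 2D playback variants of FIG. 7A and FIG. 7B can be sketched as follows; the screen representation (a dict of composite pixels, each holding three lists of six sub-pixel values) is an illustrative assumption.

```python
# Hypothetical sketch of the 2D playback modes of FIG. 7A (all sub-pixels)
# and FIG. 7B (a single sub-pixel per composite sub-pixel).

def render_2d_all(screen, frame):
    """FIG. 7A: light every sub-pixel of every composite sub-pixel."""
    for (row, col), composite_pixel in screen.items():
        for channel, composite_subpixel in enumerate(composite_pixel):
            for i in range(len(composite_subpixel)):
                composite_subpixel[i] = frame[row][col][channel]

def render_2d_single(screen, frame, chosen=5):
    """FIG. 7B: light one sub-pixel per composite sub-pixel; index 5
    corresponds to R6/G6/B6 in the illustrated embodiment."""
    for (row, col), composite_pixel in screen.items():
        for channel, composite_subpixel in enumerate(composite_pixel):
            composite_subpixel[chosen] = frame[row][col][channel]
```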
  • when the 3D display device is in the second posture or is switched from the first posture to the second posture, the 3D processing device renders, based on real-time eye positioning data, the corresponding sub-pixels in each composite sub-pixel within the second posture playback area according to the 2D image to be played.
  • the 3D display device may have 6 viewpoints V1-V6 corresponding to the first posture, and each composite pixel 400 in the multi-viewpoint 3D display screen of the 3D display device may have a red composite sub-pixel 410, a green composite sub-pixel 420, and a blue composite sub-pixel 430.
  • each composite sub-pixel has 6 sub-pixels corresponding to the 6 viewpoints. For clarity, only the correspondence between one composite pixel 400 and the 6 viewpoints is shown in FIG. 7C.
  • the eye positioning device is used to detect the positions of the user's eyes relative to the viewpoints defined for the first posture.
  • in the embodiment shown in FIG. 7C, the user's eyes are at a single viewpoint in terms of the first posture, for example viewpoint V3.
  • after the real-time eye positioning data is acquired, an image of the single viewpoint at which the user's eyes are located is generated based on the video frames of the 3D video signal, and the sub-pixels of each composite sub-pixel corresponding to that single viewpoint are rendered in the second playback area.
  • in this case, the rendered sub-pixels of the composite sub-pixels 410, 420, and 430 are the sub-pixels R3, G3, and B3 corresponding to viewpoint V3 in the first posture.
  • thereby, the 3D display device in the second posture plays the 2D image from the 3D signal to the user located at viewpoint V3.
  • the 3D display device may have 6 viewpoints V1-V6 corresponding to the first posture, and each composite pixel 400 in the multi-viewpoint 3D display screen of the 3D display device may have a red composite sub-pixel 410, a green composite sub-pixel 420, and a blue composite sub-pixel 430.
  • each composite sub-pixel has 6 sub-pixels corresponding to the 6 viewpoints. For clarity, only the correspondence between one composite pixel 400 and the 6 viewpoints is shown in FIG. 7D.
  • the eye positioning device is used to detect the positions of the user's eyes relative to the viewpoints defined for the first posture.
  • in the embodiment shown in FIG. 7D, the user's eyes involve two viewpoints in terms of the first posture, for example viewpoints V3 and V4.
  • after the real-time eye positioning data is acquired, images of the two viewpoints involved by the user's eyes are generated based on the video frames of the 3D video signal, and the sub-pixels of each composite sub-pixel corresponding to these two viewpoints are rendered in the second playback area.
  • in this case, the rendered sub-pixels of the composite sub-pixels 410, 420, and 430 are the sub-pixels R3, R4, G3, G4, B3, and B4 corresponding to viewpoints V3 and V4 in the first posture.
  • thereby, the 3D display device in the second posture plays the 2D image from the 3D signal to the user involving viewpoints V3 and V4.
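  • a hedged sketch of this eye-tracked 2D mode (FIG. 7C and FIG. 7D) follows; it reuses the screen representation assumed above and treats the tracked first-posture viewpoints as a plain list.

```python
# Hypothetical sketch of eye-tracked 2D playback: only the sub-pixels that
# match the first-posture viewpoints occupied by the user's eyes are lit.

def render_2d_tracked(screen, frame, eye_viewpoints):
    """eye_viewpoints: 1-based first-posture viewpoints occupied by the
    eyes, e.g. [3] for FIG. 7C or [3, 4] for FIG. 7D."""
    indices = sorted(v - 1 for v in set(eye_viewpoints))
    for (row, col), composite_pixel in screen.items():
        for channel, composite_subpixel in enumerate(composite_pixel):
            for i in indices:
                composite_subpixel[i] = frame[row][col][channel]
```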
  • the 3D display device further includes a format adjuster (not shown) configured to adjust the format of the 3D signal, for example by preprocessing the video frames of the 3D video signal, so as to be suitable for playing the 2D image in the second posture playback area.
  • for example, when the resolution of the 3D signal does not match the display resolution of the second posture playback area, the format adjuster preprocesses the resolution of the 3D signal to adapt it to the display resolution of the second posture playback area.
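  • as a minimal illustration of this resolution preprocessing, the sketch below rescales a decoded frame to the playback area's display resolution; the use of Pillow and of a bilinear filter are assumptions for the example only.

```python
# Hypothetical sketch of the format adjuster's resolution preprocessing.
from PIL import Image

def adapt_to_playback_area(frame: Image.Image,
                           area_size: tuple[int, int]) -> Image.Image:
    """Rescale the frame only if it does not match the display resolution
    of the target (e.g. second posture) playback area."""
    if frame.size != area_size:
        frame = frame.resize(area_size, Image.BILINEAR)
    return frame
```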
  • An embodiment of the present disclosure provides a method for implementing 3D image display with the above-described 3D display device.
  • the method for implementing 3D image display includes:
  • detecting the posture of the 3D display device, for example detecting which posture the 3D display device is in (such as the first posture or the second posture), or detecting a posture change of the 3D display device, or detecting both the posture of the 3D display device and its change;
  • when a change in the posture of the 3D display device is detected, adjusting the displayed image to a display dimension different from the display dimension before the posture change, and adjusting the display orientation of the displayed image so that the display orientation of the displayed image is maintained at the initial display orientation from before the posture change of the 3D display device.
  • the display dimensions of the 3D display device include a 2D display dimension and a 3D display dimension.
  • the 3D display device plays a 3D image when in the first posture, and plays a 2D image when in the second posture.
  • the method for implementing 3D image display includes:
  • S100, detecting a posture change of the 3D display device; and
  • S200, when a change in the posture of the 3D display device is detected, adjusting the displayed image to a display dimension different from the display dimension before the posture change, and adjusting the display orientation of the displayed image so that it is maintained at the initial display orientation from before the posture change.
  • step S200 may include: when a posture change of the 3D display device is detected, adjusting the display dimension of the displayed image so that the display dimension after the posture change differs from the display dimension before the posture change (for example, a 3D image is displayed before the posture change and a 2D image after it, or vice versa), and adjusting the display orientation of the displayed image so that it is kept at the initial display orientation from before the posture change of the 3D display device. In this way, the displayed image can always be adapted to the user's viewing orientation.
  • detecting the posture or posture change of the 3D display device can be completed by a posture detection device.
  • the step of adjusting the display dimension of the displayed image, so that the display dimension after the posture change differs from the display dimension before the posture change, and of adjusting the display orientation of the displayed image, so that it is maintained at the initial display orientation from before the posture change of the 3D display device, can be completed by the 3D processing device.
  • detecting the posture change of the 3D display device includes detecting the rotation angular velocity of the 3D display device, and determining the posture change of the 3D display device according to the rotation angular velocity.
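  • one way to realize this, sketched below under assumed values for the sampling period and switching threshold, is to integrate the gyroscope's angular velocity and report a posture switch once the accumulated rotation crosses the threshold; this is an illustration, not the disclosure's mandated algorithm.

```python
# Hypothetical sketch: determining a posture change from the rotational
# angular velocity reported by a gyroscope sensor.

THRESHOLD_DEG = 45.0  # assumed angle at which the posture is considered changed

class PostureTracker:
    def __init__(self):
        self.angle_deg = 0.0
        self.posture = "first"  # e.g. the horizontal screen display posture

    def feed(self, angular_velocity_dps: float, dt_s: float) -> str:
        """Integrate one gyroscope sample (degrees per second) over dt_s."""
        self.angle_deg += angular_velocity_dps * dt_s
        if abs(self.angle_deg) > THRESHOLD_DEG:
            self.posture = "second" if self.posture == "first" else "first"
            self.angle_deg = 0.0  # re-arm for the next posture change
        return self.posture
```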
  • adjusting the display orientation of the displayed image includes rotating the display orientation of the image in the plane where the image is located, so that the image maintains the initial display orientation from before the posture change of the 3D display device.
  • the posture of the 3D display device includes at least one of the following: a horizontal screen display posture, a vertical screen display posture, and an oblique screen display posture.
  • the first posture of the 3D display device before the posture change includes any one of the horizontal screen display posture, the vertical screen display posture, and the oblique screen display posture;
  • the second posture of the 3D display device after the posture change includes any one of the horizontal screen display posture, the vertical screen display posture, and the oblique screen display posture that is different from the first posture.
  • adjusting the display orientation of the displayed image includes rotating the image to keep the image in the initial display orientation corresponding to the first posture. In this way, for the user, no matter how the posture of the 3D display device is adjusted, the display orientation of the 3D image they see is the same.
  • when either one of the first posture and the second posture is the oblique screen display posture, adjusting the display orientation of the displayed image further includes: displaying the image in a full-screen display mode.
  • adjusting the display orientation of the displayed image includes: rotating the display orientation of the image in the plane where the image is located, so that the image remains within the initial display orientation range, where the initial display orientation range includes the initial display orientation. In this way, the display orientation of the displayed 3D image can be fine-tuned according to the user's movement, as illustrated by the sketch below.
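  • a brief sketch of both orientation rules follows: exact counter-rotation keeps the image at the initial display orientation, while clamping keeps it within the initial display orientation range; the tolerance value is an illustrative assumption.

```python
# Hypothetical sketch of keeping the displayed image at, or near, its
# initial display orientation while the device rotates in the image plane.

def counter_rotation(device_rotation_deg: float) -> float:
    """Exact counter-rotation: the image stays at its initial orientation."""
    return -device_rotation_deg % 360.0

def clamp_to_range(desired_deg: float, initial_deg: float = 0.0,
                   tolerance_deg: float = 10.0) -> float:
    """Fine-tuned orientation kept within the initial display orientation
    range [initial - tolerance, initial + tolerance]."""
    low, high = initial_deg - tolerance_deg, initial_deg + tolerance_deg
    return max(low, min(high, desired_deg))
```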
  • the display orientation of the displayed image is adjusted according to the viewing orientation of the user, so that the display orientation of the image is consistent with the viewing orientation of the user.
  • the viewing orientation of the user may include any one of a horizontal viewing orientation, a vertical viewing orientation, and an oblique viewing orientation; eye positioning may be performed on the user, and the viewing orientation of the user determined according to the obtained eye positioning data.
  • adjusting the display orientation of the displayed image includes: rendering the sub-pixels in the multi-viewpoint 3D display screen of the 3D display device based on the adjusted display orientation of the image (that is, the display orientation after the posture change of the 3D display device).
  • adjusting the displayed image to be a 3D image includes: in response to a change in the posture of the 3D display device, rendering a corresponding sub-pixel in each composite sub-pixel according to the 3D image to be played.
  • adjusting the displayed image to a 2D image includes: in response to a change in the posture of the 3D display device, rendering at least one sub-pixel in each composite sub-pixel according to the 2D image to be played.
  • rendering at least one sub-pixel in each composite sub-pixel according to the 2D image to be played includes: rendering a corresponding sub-pixel in each composite sub-pixel according to the 2D image to be played based on eye positioning data.
  • the above adjustment of the display orientation of the 3D image and the rendering of the sub-pixels can be completed by a 3D processing device.
  • the method for implementing 3D image display further includes acquiring a 3D signal.
  • the "posture" of the 3D display device is equivalent to the "orientation" of the 3D display device.
  • the method for implementing 3D image display further includes switching to playing the 3D image from the 3D signal in the 3D display device in response to a posture transition of the 3D display device. This may include: in response to a signal that the 3D display device is transitioning to the first posture or is in the first posture, playing the 3D image from the 3D signal in the first posture playback area defined by the multi-viewpoint 3D display screen.
  • the method for implementing 3D image display further includes switching to playing the 2D image from the 3D signal in the 3D display device in response to a posture transition of the 3D display device. This may include: in response to a signal that the 3D display device is transitioning to the second posture or is in the second posture, playing the 2D image from the 3D signal in the second posture playback area of the multi-viewpoint 3D display screen.
  • playing the 3D image from the 3D signal in the first posture playback area defined by the multi-viewpoint 3D display screen of the 3D display device includes: in response to a signal that the 3D display device is switched from the second posture to the first posture, switching from playing 2D images to playing 3D images.
  • playing the 2D image from the 3D signal in the second posture playback area defined by the multi-viewpoint 3D display screen of the 3D display device includes: in response to a signal that the 3D display device is switched from the first posture to the second posture, switching from playing 3D images to playing 2D images.
  • the 3D signal is a 3D video, such as a video frame of a 3D video.
  • the 3D signal includes a left-eye parallax image and a right-eye parallax image.
  • the 3D signal includes rendered color images and depth images.
  • the 2D image to be played is selected from one of the left-eye parallax image and the right-eye parallax image.
  • alternatively, the 2D image to be played is generated from the rendered color image and the depth image.
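  • the sketch below shows one way to derive the 2D image to be played from the two signal layouts just listed; the side-by-side frame layout and the NumPy representation are assumptions for illustration.

```python
# Hypothetical sketch: obtaining the 2D image from a side-by-side frame.
import numpy as np

def split_side_by_side(frame: np.ndarray):
    """Split an H x 2W x 3 frame into its left and right halves."""
    half = frame.shape[1] // 2
    return frame[:, :half], frame[:, half:]

def to_2d_image(frame: np.ndarray, layout: str) -> np.ndarray:
    if layout == "left_right":      # left-eye / right-eye parallax images
        left, _right = split_side_by_side(frame)
        return left                 # either parallax image may be selected
    if layout == "color_depth":     # rendered color image + depth image
        color, _depth = split_side_by_side(frame)
        return color                # depth is not needed for flat 2D playback
    raise ValueError(f"unknown layout: {layout}")
```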
  • the method for switching and displaying a 3D image and a 2D image in a 3D display device further includes: acquiring real-time eye positioning data in response to a signal that the 3D display device is in the first posture.
  • playing a 3D image from a 3D signal includes: based on real-time eye positioning data, rendering a corresponding sub-pixel in each composite sub-pixel in the first posture playback area according to the 3D image to be played.
  • when the 3D display device is in the first posture or is switched from the second posture to the first posture, and the real-time eye positioning data indicates that each of the user's eyes corresponds to one viewpoint of the 3D display device, the sub-pixel corresponding to the viewpoint of each eye is rendered in each composite sub-pixel in the first posture playback area according to the 3D image to be played.
  • when the 3D display device is in the first posture or is switched from the second posture to the first posture, and the real-time eye positioning data indicates that each of the user's eyes corresponds to two adjacent viewpoints of the 3D display device, the sub-pixels corresponding to the two viewpoints of each eye are rendered in each composite sub-pixel in the first posture playback area according to the 3D image to be played.
  • playing a 2D image from a 3D signal includes: rendering at least one sub-pixel of each composite sub-pixel in the second posture playback area according to the 2D image to be played.
  • for example, all sub-pixels in the composite sub-pixels of each composite pixel are rendered in the second posture playback area according to the 2D image to be played.
  • as another example, one or more sub-pixels in the composite sub-pixels of each composite pixel are rendered in the second posture playback area according to the 2D image to be played.
  • the method for switching between displaying 3D images and 2D images in a 3D display device further includes: in response to a signal that the 3D display device is switched from the first posture to the second posture, or that the 3D display device is in the second posture, acquiring real-time eye positioning data.
  • in this case, acquiring real-time eye positioning data includes: acquiring the real-time position of the viewpoint, corresponding to the first posture, at which the eyes are located.
  • playing a 2D image from a 3D signal includes: based on real-time eye positioning data, according to the 2D image to be played, rendering a corresponding sub-pixel in each composite sub-pixel in the second posture playback area.
  • when the 3D display device is in the second posture or is switched from the first posture to the second posture, and the real-time eye positioning data indicates that both of the user's eyes correspond to the same viewpoint as defined for the first posture of the 3D display device,
  • the sub-pixel corresponding to this viewpoint in each composite sub-pixel is rendered in the second posture playback area according to the 2D image to be played.
  • when the 3D display device is in the second posture or is switched from the first posture to the second posture, and the real-time eye positioning data indicates that the user's eyes correspond to two adjacent viewpoints as defined for the first posture of the 3D display device,
  • the sub-pixels corresponding to these two viewpoints in each composite sub-pixel are rendered in the second posture playback area according to the 2D image to be played.
  • the signal that the 3D display device is in the first posture, the signal that it is in the second posture, the signal of switching from the first posture to the second posture, and the signal of switching from the second posture to the first posture are acquired by the posture detection device.
  • the posture detection device is, for example, a gravity sensor or a gyroscope sensor.
  • playing the 2D image from the 3D signal further includes: adjusting the format of the 3D signal to be suitable for playing the 2D image in the second posture playing area.
  • Adjusting the format of the 3D signal can be implemented by, for example, a format adjuster.
  • the first posture is the horizontal orientation of the 3D display device, and the second posture is the vertical orientation of the 3D display device.
  • the 3D display device 300 includes a processor 320 and a memory 310.
  • the 3D display device 300 may further include a communication interface 340 and a bus 330.
  • the processor 320, the communication interface 340, and the memory 310 communicate with each other through the bus 330.
  • the communication interface 340 may be configured to transmit information.
  • the processor 320 may call logical instructions in the memory 310 to execute the method of switching and displaying a 3D image and a 2D image in a 3D display device of the above-mentioned embodiment.
  • the aforementioned logic instructions in the memory 310 can be implemented in the form of software functional units and, when sold or used as an independent product, can be stored in a computer-readable storage medium.
  • the memory 310 can be used to store software programs and computer-executable programs, such as program instructions/modules corresponding to the methods in the embodiments of the present disclosure.
  • the processor 320 executes functional applications and data processing by running program instructions/modules stored in the memory 310, that is, implements the method of switching and displaying 3D images and 2D images in a 3D display device in the foregoing method embodiment.
  • the memory 310 may include a storage program area and a storage data area.
  • the storage program area may store an operating system and an application program required by at least one function; the storage data area may store data created according to the use of the terminal device and the like.
  • the memory 310 may include a high-speed random access memory, and may also include a non-volatile memory.
  • the computer-readable storage medium provided by the embodiment of the present disclosure stores computer-executable instructions, and the above-mentioned computer-executable instructions are configured to execute the above-mentioned method for implementing 3D image display.
  • the computer program product provided by the embodiment of the present disclosure includes a computer program stored on a computer-readable storage medium.
  • the computer program includes program instructions.
  • when the program instructions are executed by a computer, the computer executes the above-mentioned method for implementing 3D image display.
  • the technical solutions of the embodiments of the present disclosure can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes one or more instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present disclosure.
  • the aforementioned storage medium may be a non-transitory storage medium, including a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, an optical disk, or other media that can store program code, or it may be a transitory storage medium.
  • the above description and drawings sufficiently illustrate the embodiments of the present disclosure to enable those skilled in the art to practice them. Other embodiments may include structural, logical, electrical, process and other changes; the embodiments represent only possible variations. Unless explicitly required, individual components and functions are optional, and the order of operations may vary. Portions and features of some embodiments may be included in, or substituted for, those of other embodiments. The scope of the embodiments of the present disclosure includes the entire scope of the claims and all available equivalents of the claims. The terms used in this application are used only to describe the embodiments and are not intended to limit the claims; the term "include" and the like denote the presence of at least one of the stated features, integers, steps, operations, elements or components, but do not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components or groups thereof. Each embodiment focuses on its differences from the other embodiments, and for the same or similar parts the various embodiments can be referred to one another.
  • for the methods, products and the like disclosed in the embodiments, where they correspond to the method parts disclosed in the embodiments, the relevant parts can be found in the description of the method parts.
  • the disclosed methods and products can be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of units may be only a division by logical function, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to implement this embodiment.
  • the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • each block in the flowchart or block diagram may represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more executable instructions for realizing the specified logical function.
  • the functions noted in the blocks may also occur in an order different from the order noted in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. Each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.

Abstract

This application relates to 3D display technology and discloses a method for realizing 3D image display, comprising: detecting a posture change of a 3D display device; and, when a change in the posture of the 3D display device is detected, adjusting the displayed image to a display dimension different from the display dimension before the posture change, and adjusting the display orientation of the displayed image so that the display orientation of the displayed image is maintained at the initial display orientation from before the posture change of the 3D display device. The present disclosure solves the problem that an electronic device cannot display a suitable picture after its posture is adjusted. The application also discloses a 3D display device, a computer-readable storage medium, and a computer program product.

Description

Method for realizing 3D image display, and 3D display device
This application claims priority to the Chinese patent application No. 201911231156.X, entitled "Method for realizing 3D image display, and 3D display device" and filed with the China National Intellectual Property Administration on December 5, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of 3D display, for example to a method for realizing 3D image display and a 3D display device.
Background
At present, 3D display devices achieve a 3D display effect by refracting pixels through a grating.
In the course of implementing the embodiments of the present disclosure, at least the following problem was found in the related art: the display device is configured to show a suitable 3D effect in one posture, and lacks the capability to show a suitable picture in another posture.
Summary
To provide a basic understanding of some aspects of the disclosed embodiments, a brief summary is given below. This summary is not a general review, nor is it intended to identify key or critical elements or to delineate the protection scope of these embodiments; it serves as a preamble to the detailed description that follows.
The embodiments of the present disclosure provide a method for realizing 3D image display, a 3D display device, a computer-readable storage medium and a computer program product, so as to solve the technical problem that an electronic device cannot display a suitable picture after its posture is adjusted.
In some embodiments, a method for realizing 3D image display is provided, comprising: detecting a posture change of a 3D display device; and, when a change in the posture of the 3D display device is detected, adjusting the displayed image to a display dimension different from the display dimension before the posture change, and adjusting the display orientation of the displayed image so that the display orientation of the displayed image is maintained at the initial display orientation from before the posture change.
In some embodiments, detecting a posture change of the 3D display device comprises: detecting the rotational angular velocity of the 3D display device, and determining the posture change of the 3D display device according to the rotational angular velocity; adjusting the display orientation of the displayed image comprises: rotating the display orientation of the image in the plane in which the image lies, so that the image is maintained at the initial display orientation from before the posture change of the 3D display device.
In some embodiments, the posture of the 3D display device comprises at least one of the following: a horizontal screen display posture, a vertical screen display posture, and an oblique screen display posture.
In some embodiments, the first posture of the 3D display device before the posture change comprises any one of the horizontal screen display posture, the vertical screen display posture and the oblique screen display posture; the second posture of the 3D display device after the posture change comprises any one of the horizontal screen display posture, the vertical screen display posture and the oblique screen display posture that is different from the first posture; adjusting the display orientation of the displayed image comprises: rotating the image so that the image is maintained at the initial display orientation corresponding to the first posture.
In some embodiments, when either one of the first posture and the second posture is the oblique screen display posture, adjusting the display orientation of the displayed image further comprises: displaying the image in a full-screen display mode.
In some embodiments, adjusting the display orientation of the displayed image comprises: rotating the display orientation of the image in the plane in which the image lies, so that the image remains within an initial display orientation range; wherein the initial display orientation range includes the initial display orientation.
In some embodiments, the method for realizing 3D image display further comprises: adjusting the display orientation of the displayed image according to the viewing orientation of the user, so that the display orientation of the image is consistent with the viewing orientation of the user.
In some embodiments, the viewing orientation of the user comprises any one of: a horizontal viewing orientation, a vertical viewing orientation and an oblique viewing orientation; the method further comprises: performing eye positioning on the user, and determining the viewing orientation of the user according to the obtained eye positioning data.
In some embodiments, adjusting the display orientation of the displayed image comprises: rendering sub-pixels in the multi-viewpoint 3D display screen of the 3D display device based on the adjusted display orientation of the image.
In some embodiments, adjusting the displayed image to a display dimension different from the display dimension before the posture change of the 3D display device comprises adjusting the displayed image to a 3D image. In some embodiments, adjusting the displayed image to a 3D image comprises: in response to the posture change of the 3D display device, rendering the corresponding sub-pixels among the multiple composite sub-pixels in the multi-viewpoint 3D display screen of the 3D display device according to the 3D image to be played.
In some embodiments, adjusting the displayed image to a display dimension different from the display dimension before the posture change of the 3D display device comprises adjusting the displayed image to a 2D image. In some embodiments, adjusting the displayed image to a 2D image comprises: in response to the posture change of the 3D display device, rendering at least one sub-pixel in each composite sub-pixel in the multi-viewpoint 3D display screen of the 3D display device according to the 2D image to be played.
In some embodiments, rendering at least one sub-pixel in each composite sub-pixel according to the 2D image to be played comprises: based on eye positioning data, rendering the corresponding sub-pixel in each composite sub-pixel in the multi-viewpoint 3D display screen of the 3D display device according to the 2D image to be played.
In some embodiments, a 3D display device is provided, comprising: a processor; and a memory storing program instructions; wherein the processor is configured to execute the method described above when executing the program instructions.
In some embodiments, a 3D display device is provided, comprising: a posture detection device configured to detect a posture change of the 3D display device; and a 3D processing device configured to, based on the detected posture change of the 3D display device, adjust the displayed image to a display dimension different from the display dimension before the posture change, and adjust the display orientation of the displayed image so that the displayed image is maintained at the initial display orientation from before the posture change.
In some embodiments, the posture detection device is configured to detect the rotational angular velocity of the 3D display device and determine the posture change of the 3D display device according to the rotational angular velocity; the 3D processing device is configured to rotate the display orientation of the image in the plane in which the displayed image lies, so that the image is maintained at the initial display orientation from before the posture change of the 3D display device.
In some embodiments, the posture of the 3D display device comprises at least one of the following: a horizontal screen display posture, a vertical screen display posture, and an oblique screen display posture.
In some embodiments, the first posture of the 3D display device before the posture change comprises any one of the horizontal screen display posture, the vertical screen display posture and the oblique screen display posture; the second posture after the posture change comprises any one of these display postures that is different from the first posture; the 3D processing device is configured to rotate the displayed image so that the image is maintained at the initial display orientation corresponding to the first posture.
In some embodiments, the 3D processing device is configured to display the adjusted image in a full-screen display mode when either one of the first posture and the second posture is the oblique screen display posture.
In some embodiments, the 3D processing device is configured to rotate the display orientation of the image in the plane in which the displayed image lies, so that the image remains within an initial display orientation range; wherein the initial display orientation range includes the initial display orientation.
In some embodiments, the 3D processing device is configured to adjust the display orientation of the displayed image according to the viewing orientation of the user, so that the display orientation of the image is consistent with the viewing orientation of the user.
In some embodiments, the viewing orientation of the user comprises any one of: a horizontal viewing orientation, a vertical viewing orientation and an oblique viewing orientation; the 3D display device further comprises an eye positioning device or an eye positioning data interface configured to obtain eye positioning data; the 3D processing device is configured to determine the viewing orientation of the user according to the obtained eye positioning data.
In some embodiments, the 3D processing device is configured to render sub-pixels in the multi-viewpoint 3D display screen of the 3D display device based on the adjusted display orientation of the image.
In some embodiments, the 3D processing device is configured to, in response to the posture change of the 3D display device, render the corresponding sub-pixels among the multiple composite sub-pixels in the multi-viewpoint 3D display screen of the 3D display device according to the 3D image to be played.
In some embodiments, the 3D processing device is configured to, in response to the posture change of the 3D display device, render at least one sub-pixel in each composite sub-pixel in the multi-viewpoint 3D display screen of the 3D display device according to the 2D image to be played.
In some embodiments, the 3D processing device is configured to, based on eye positioning data, render the corresponding sub-pixel in each composite sub-pixel in the multi-viewpoint 3D display screen of the 3D display device according to the 2D image to be played.
The computer-readable storage medium provided by the embodiments of the present disclosure stores computer-executable instructions, and the computer-executable instructions are configured to execute the above method for realizing 3D image display.
The computer program product provided by the embodiments of the present disclosure comprises a computer program stored on a computer-readable storage medium; the computer program comprises program instructions which, when executed by a computer, cause the computer to execute the above method for realizing 3D image display.
The method for realizing 3D image display, the 3D display device, the computer-readable storage medium and the computer program product provided by the embodiments of the present disclosure can achieve the following technical effects: the electronic device can provide good 3D or 2D display in different postures, and posture switching does not impair the user experience. In addition, the display resolution of the multi-viewpoint 3D display screen is defined in terms of composite pixels; both transmission and display take the display resolution defined by the composite pixels as the consideration, which reduces the amount of computation for transmission and rendering while ensuring a high-definition display effect, realizing high-quality 3D display.
The above general description and the following description are exemplary and explanatory only, and are not intended to limit the present application.
Brief Description of the Drawings
One or more embodiments are illustrated by way of example with reference to the corresponding drawings; these exemplary illustrations and the drawings do not constitute a limitation of the embodiments. Elements with the same reference numerals in the drawings are shown as similar elements, the drawings are not drawn to scale, and in the drawings:
FIG. 1A to FIG. 1C are schematic structural diagrams of a 3D display device according to embodiments of the present disclosure;
FIG. 2 is a schematic diagram of the hardware structure of a 3D display device according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of the software structure of a 3D display device according to an embodiment of the present disclosure;
FIG. 4A and FIG. 4B are schematic diagrams of the format and content of the images contained in video frames of a 3D video signal according to embodiments of the present disclosure;
FIG. 5A is a schematic front view of a 3D display device in the first posture according to an embodiment of the present disclosure;
FIG. 5B is a schematic front view of a 3D display device in the second posture according to an embodiment of the present disclosure;
FIG. 6A and FIG. 6B are schematic diagrams of a 3D display device rendering sub-pixels in the first posture according to embodiments of the present disclosure;
FIG. 7A to FIG. 7D are schematic diagrams of a 3D display device rendering sub-pixels in the second posture according to embodiments of the present disclosure;
FIG. 8 is a flowchart of switching between displaying a 3D image and a 2D image in a 3D display device according to an embodiment of the present disclosure; and
FIG. 9 is a schematic structural diagram of a 3D display device according to an embodiment of the present disclosure.
Reference numerals:
100: 3D display device; 110: multi-viewpoint 3D display screen; 120: processor; 121: register; 130: 3D processing device; 131: buffer; 140: video signal interface; 150: eye positioning device; 160: eye positioning data interface; 180: posture detection device; 200: 3D display device; 201: processor; 202: multi-viewpoint 3D display screen; 203: 3D processing device; 204: video signal interface; 205: eye positioning device; 206: camera device; 207: indicator; 208: motor; 209: keys; 210: memory; 211: subscriber identity module (SIM) card interface; 212: external memory interface; 213: universal serial bus interface; 214: charging management module; 215: power management module; 216: battery; 217: register; 218: GPU; 219: codec; 220: sensor module; 221: proximity light sensor; 222: ambient light sensor; 223: pressure sensor; 224: barometric pressure sensor; 225: magnetic sensor; 226: gravity sensor; 227: gyroscope sensor; 228: acceleration sensor; 229: distance sensor; 230: temperature sensor; 231: fingerprint sensor; 232: touch sensor; 233: bone conduction sensor; 234: audio module; 235: speaker; 236: receiver; 237: microphone; 238: headphone jack; 239: antenna; 240: mobile communication module; 241: antenna; 242: wireless communication module; 300: 3D display device; 310: memory; 320: processor; 330: bus; 340: communication interface; 400: composite pixel; 410: red composite sub-pixel; 420: green composite sub-pixel; 430: blue composite sub-pixel; 510: application layer; 520: framework layer; 530: core class libraries and runtime; 540: kernel layer; 601: one of the two images contained in a video frame of a 3D video signal; 602: one of the two images contained in a video frame of a 3D video signal; 700: 3D display device; 710: eye positioning device; 720: first posture playback area; 730: multi-viewpoint 3D display screen; 740: second posture playback area.
Detailed Description
For a more detailed understanding of the features and technical content of the embodiments of the present disclosure, the implementation of the embodiments of the present disclosure is described in detail below with reference to the accompanying drawings, which are for reference and illustration only and are not intended to limit the embodiments of the present disclosure.
An embodiment of the present disclosure provides a 3D display device comprising a multi-viewpoint 3D display screen (for example, a multi-viewpoint naked-eye 3D display screen). Optionally, the multi-viewpoint 3D display screen comprises multiple composite pixels, each of the multiple composite pixels comprises multiple composite sub-pixels, and each of the multiple composite sub-pixels comprises multiple sub-pixels corresponding to multiple viewpoints in one posture of the 3D display device.
In some embodiments, the 3D display device may comprise a posture detection device and a 3D processing device.
In some embodiments, the multi-viewpoint 3D display screen comprises multiple composite pixels, a first posture playback area corresponding to the first posture of the 3D display device, and a second posture playback area corresponding to the second posture of the 3D display device. Each composite pixel comprises multiple composite sub-pixels, each composite sub-pixel consists of multiple same-color sub-pixels, and the multiple same-color sub-pixels of each composite sub-pixel correspond to the multiple viewpoints in the first posture of the 3D display device. The posture detection device is configured to detect the posture of the 3D display device. A 3D signal interface is configured to receive a 3D signal; the 3D processing device is configured to process the 3D signal so as to play a 3D image from the 3D signal in the first posture playback area and a 2D image from the 3D signal in the second posture playback area.
In some embodiments, the 3D processing device is communicatively connected with the multi-viewpoint 3D display screen. In some embodiments, the 3D processing device is communicatively connected with the driving device of the multi-viewpoint 3D display screen.
In some embodiments, the posture detection device is communicatively connected with the 3D processing device.
In some embodiments, the posture of the 3D display device comprises at least one of the following: a horizontal screen display posture, a vertical screen display posture, and an oblique screen display posture.
In some embodiments, the first posture of the 3D display device before a posture change comprises any one of the horizontal screen display posture, the vertical screen display posture and the oblique screen display posture, and the second posture of the 3D display device after the posture change comprises any one of these display postures that is different from the first posture.
FIG. 1A shows a 3D display device 100 according to an embodiment of the present disclosure. As shown in FIG. 1A, the 3D display device 100 comprises a multi-viewpoint 3D display screen 110, at least one 3D processing device 130, a 3D signal interface (such as a video signal interface 140) configured to receive a 3D signal such as video frames of a 3D video signal, a processor 120, and a posture detection device 180.
In some embodiments, the multi-viewpoint 3D display screen 110 may comprise a display panel and a grating (not shown) covering the display panel. In the embodiment shown in FIG. 1A, the multi-viewpoint 3D display screen 110 may comprise m columns and n rows (m×n) of composite pixels 400, thereby defining a display resolution of m×n.
In some embodiments, each composite pixel comprises multiple composite sub-pixels. In the embodiment shown in FIG. 1A, each composite pixel 400 comprises three composite sub-pixels 410, 420, 430 corresponding to three colors: a red composite sub-pixel 410, a green composite sub-pixel 420, and a blue composite sub-pixel 430.
Each composite sub-pixel consists of i same-color sub-pixels corresponding to i viewpoints, where i≥3. In the embodiment shown in FIG. 1A, i=6: each composite sub-pixel has 6 same-color sub-pixels, and the 3D display device 100 accordingly has 6 viewpoints V1-V6. As shown in FIG. 1A, the red composite sub-pixel 410 has 6 red sub-pixels R, the green composite sub-pixel 420 has 6 green sub-pixels G, and the blue composite sub-pixel 430 has 6 blue sub-pixels B. Other values of i, larger or smaller than 6, are conceivable in other embodiments.
In the embodiment shown in FIG. 1A, the sub-pixels R, G, B of the composite sub-pixels 410, 420, 430 are each arranged in a row, for example a single row, and the rows of composite sub-pixels 410, 420, 430 are parallel to one another. However, it is conceivable that the composite sub-pixels within a composite pixel, or the sub-pixels within a composite sub-pixel, have other arrangements. In some embodiments, the sub-pixels of each composite sub-pixel are arranged in a column, for example a single column. In some embodiments, the sub-pixels of each composite sub-pixel are arranged in an array.
In the embodiments of the present disclosure, each composite sub-pixel has corresponding sub-pixels for the viewpoints. The multiple sub-pixels of each composite sub-pixel are arranged in a row in the transverse direction of the multi-viewpoint 3D display screen, and the sub-pixels in the row share the same color. Since the multiple viewpoints of the 3D display device are arranged roughly along the transverse direction of the multi-viewpoint 3D display screen, when the user moves and the eyes arrive at different viewpoints, the different sub-pixels of each composite sub-pixel corresponding to the respective viewpoints need to be rendered dynamically. Because the same-color sub-pixels of each composite sub-pixel are arranged in a row, cross-color problems caused by persistence of vision can be avoided. Moreover, due to refraction by the grating, part of a currently displayed sub-pixel may be visible at an adjacent viewpoint position; with the same-color, same-row arrangement, no color mixing occurs even if part of the currently displayed sub-pixel is seen.
In some embodiments, as shown in FIG. 1A, the 3D display device 100 may be provided with a single 3D processing device 130, which simultaneously processes the rendering of the sub-pixels in the composite sub-pixels of the 3D display screen 110. In some embodiments, the 3D display device may be provided with at least two 3D processing devices, which process the rendering of the sub-pixels in the composite sub-pixels of the multi-viewpoint 3D display screen in parallel, serially, or in a combination of both. Those skilled in the art will understand that the at least two 3D processing devices may be assigned in other ways to process the multiple rows and columns of composite pixels or composite sub-pixels of the multi-viewpoint 3D display screen in parallel, which falls within the scope of the embodiments of the present disclosure.
In some embodiments, the 3D processing device 130 may optionally comprise a buffer 131 for buffering the received video frames.
In some embodiments, the 3D processing device is an FPGA or ASIC chip, or an FPGA or ASIC chipset.
Referring to FIG. 1A, the 3D display device 100 may further comprise the processor 120 communicatively connected to the 3D processing device 130 via the video signal interface 140. In some embodiments, the processor 120 is included in a computer or a smart terminal, such as a mobile terminal, or serves as its processor device.
For simplicity, in the following the exemplary embodiment of the 3D display device 100 includes the processor 120 internally. The video signal interface 140 is accordingly constructed as an internal interface connecting the processor 120 and the 3D processing device 130. Such a 3D display device 100 may be, for example, a mobile terminal, and the video signal interface 140 as an internal interface of the 3D display device 100 may be a MIPI, mini-MIPI, LVDS, min-LVDS or Display Port interface.
In some embodiments, as shown in FIG. 1A, the processor 120 of the 3D display device 100 may further comprise a register 121, which may be configured to temporarily store instructions, data and addresses.
In some embodiments, the posture detection device 180 is communicatively connected with the processor 120. In some embodiments, the posture detection device 180 comprises a gravity sensor; in other embodiments it comprises a gyroscope sensor; in yet other embodiments it comprises both a gravity sensor and a gyroscope sensor.
In some embodiments, the 3D display device further comprises an eye positioning device or an eye positioning data interface configured to obtain eye positioning data.
For example, in the embodiment shown in FIG. 1B, the 3D display device 100 further comprises an eye positioning device 150 communicatively connected to the 3D processing device 130, so that the 3D processing device 130 can receive eye positioning data directly. In the embodiment shown in FIG. 1C, an eye positioning device (not shown) may, for example, be connected directly to the processor 120, while the 3D processing device 130 obtains the eye positioning data from the processor 120 via an eye positioning data interface 160. In other embodiments, the eye positioning device may be connected to both the processor and the 3D processing device, so that on the one hand the 3D processing device 130 can obtain eye positioning data directly from the eye positioning device, and on the other hand other information acquired by the eye positioning device can be processed by the processor.
By way of example, FIG. 2 shows a schematic hardware structure of a 3D display device 200 implemented as a mobile terminal, such as a smart cellular phone or a tablet. In the illustrated embodiment, the 3D display device 200 may comprise a processor 201, an external memory interface 212, an (internal) memory 210, a universal serial bus (USB) interface 213, a charging management module 214, a power management module 215, a battery 216, a mobile communication module 240, a wireless communication module 242, antennas 239 and 241, an audio module 234, a speaker 235, a receiver 236, a microphone 237, a headphone jack 238, keys 209, a motor 208, an indicator 207, a subscriber identity module (SIM) card interface 221, a multi-viewpoint 3D display screen 202, a 3D processing device 203, a 3D signal interface (such as a video signal interface 204), a camera device 206, an eye positioning device 205, and a sensor module 220, among others.
In some embodiments, the sensor module 220 may comprise a proximity light sensor 221, an ambient light sensor 222, a pressure sensor 223, a barometric pressure sensor 224, a magnetic sensor 225, a gravity sensor 226, a gyroscope sensor 227, an acceleration sensor 228, a distance sensor 229, a temperature sensor 230, a fingerprint sensor 231, a touch sensor 232, a bone conduction sensor 233, and so on.
It can be understood that the structure illustrated in the embodiments of the present disclosure does not constitute a specific limitation of the 3D display device 200. In other embodiments of the present disclosure, the 3D display device 200 may comprise more or fewer components than illustrated, combine certain components, split certain components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 201 may comprise one or more processing units. In some embodiments, the processor 201 may comprise one of, or a combination of at least two of, the following: an application processor (AP), a modem processor, a baseband processor, a graphics processor (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a neural network processor (NPU), and so on. In some embodiments, different processing units may be independent devices; in some embodiments, different processing units may be integrated in one or more processors.
In some embodiments, the processor 201 may further be provided with a cache for storing instructions or data that the processor 201 has just used or uses cyclically, so that they can be called directly from the cache when the processor 201 needs them again.
In some embodiments, the processor 201 may comprise one or more interfaces, which may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, a universal serial bus (USB) interface, and so on.
The I2C interface is a bidirectional synchronous serial bus comprising one serial data line (SDA) and one serial clock line (SCL). In some embodiments, the processor 201 may contain multiple sets of I2C buses, and may be communicatively connected to the touch sensor, the charger, the flash, the camera device, the eye positioning device, etc. via different I2C bus interfaces.
Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus that converts the data to be transmitted between serial communication and parallel communication. In some embodiments, the UART interface is used to connect the processor 201 and the wireless communication module 242.
In the embodiment shown in FIG. 2, the MIPI interface may be used to connect the processor 201 with the multi-viewpoint 3D display screen 202. In addition, the MIPI interface may also be used to connect peripheral devices such as the camera device 206 and the eye positioning device 205.
The GPIO interface may be configured by software, either as a control signal or as a data signal. In some embodiments, the GPIO interface may be used to connect the processor 201 with the camera device 206, the multi-viewpoint 3D display screen 202, the wireless communication module 242, the audio module 234, the sensor module 220, and so on.
The USB interface 213 is an interface conforming to the USB standard, and may be a Mini USB interface, a Micro USB interface, a USB Type-C interface, etc. The USB interface 213 may be used to connect a charger to charge the 3D display device 200, to transfer data between the 3D display device 200 and peripheral devices, or to connect headphones and play audio through them.
It can be understood that the interface connection relationships between the modules illustrated in the embodiments of the present disclosure are only illustrative and do not constitute a structural limitation of the 3D display device 200.
The wireless communication function of the 3D display device 200 may be implemented by the antennas 241 and 239, the mobile communication module 240, the wireless communication module 242, the modem processor or the baseband processor, and so on.
The antennas 241 and 239 are configured to transmit and receive electromagnetic wave signals. Each antenna in the 3D display device 200 may be used to cover a single or multiple communication frequency bands, and different antennas may also be multiplexed to improve antenna utilization.
The mobile communication module 240 may provide solutions for wireless communication, including 2G/3G/4G/5G, applied on the 3D display device 200, and may comprise at least one filter, switch, power amplifier, low-noise amplifier (LNA), and the like. The mobile communication module 240 may receive electromagnetic waves through the antenna 239, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation; it may also amplify signals modulated by the modem processor and radiate them as electromagnetic waves through the antenna 239. In some embodiments, at least some of the functional modules of the mobile communication module 240 may be arranged in the processor 201; in some embodiments, at least some of them may be arranged in the same device as at least some modules of the processor 201.
The wireless communication module 242 may provide solutions for wireless communication applied on the 3D display device 200, including wireless local area network (WLAN), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 242 may be one or more devices integrating at least one communication processing module; it receives electromagnetic waves via the antenna 241, frequency-modulates and filters the signals, and sends the processed signals to the processor 201; it may also receive signals to be transmitted from the processor 201, frequency-modulate and amplify them, and radiate them as electromagnetic waves through the antenna 241.
In some embodiments, the antenna 239 of the 3D display device 200 is coupled with the mobile communication module 240, and the antenna 241 is coupled with the wireless communication module 242, so that the 3D display device 200 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include at least one of the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long-term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and IR. The GNSS may include at least one of the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and the satellite-based augmentation system (SBAS).
In some embodiments, the external interface for receiving the 3D video signal may comprise the USB interface 213, the mobile communication module 240, the wireless communication module 242, or any combination thereof. Other feasible interfaces for receiving 3D video signals, such as the interfaces mentioned above, are also conceivable.
The memory 210 may be used to store computer-executable program code, and the executable program code includes instructions. By running the instructions stored in the memory 210, the processor 201 executes the various functional applications and data processing of the 3D display device 200. The memory 210 may comprise a program storage area and a data storage area: the program storage area may store an operating system and application programs required by at least one function (such as a sound playing function and an image playing function); the data storage area may store data created during the use of the 3D display device 200 (such as audio data and a phone book). The memory 210 may comprise high-speed random access memory and may also comprise non-volatile memory, for example at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
The external memory interface 212 may be used to connect an external memory card, for example a Micro SD card, to expand the storage capacity of the 3D display device 200. The external memory card communicates with the processor 201 through the external memory interface 212 to implement the data storage function.
In some embodiments, the memory of the 3D display device may comprise the (internal) memory 210, an external memory card connected through the external memory interface 212, or a combination thereof. In other embodiments of the present disclosure, the video signal interface may also adopt internal interface connection modes, or combinations thereof, different from those in the above embodiments.
In the embodiments of the present disclosure, the camera device 206 may capture images or video.
In some embodiments, the 3D display device 200 implements the display function through the video signal interface 204, the 3D processing device 203, the multi-viewpoint 3D display screen 202, the application processor, and so on.
In some embodiments, the 3D display device 200 may comprise a GPU 218, for example within the processor 201, configured to process 3D video images as well as 2D video images.
In some embodiments, the 3D display device 200 further comprises a video codec 219 configured to compress or decompress digital video.
In some embodiments, the video signal interface 204 is configured to output video frames of a 3D video signal processed by the GPU 218 or the codec 219 or both, for example of a decompressed 3D video signal, to the 3D processing device 203.
In some embodiments, a format adjuster is integrated in the GPU 218 or the codec 219.
The multi-viewpoint 3D display screen 202 is used to display three-dimensional (3D) images or video, and comprises a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum-dot light-emitting diode (QLED), and the like.
In some embodiments, the eye positioning device 205 is communicatively connected to the 3D processing device 203, so that the 3D processing device 203 can render the corresponding sub-pixels in the composite pixels (composite sub-pixels) based on eye positioning data. In some embodiments, the eye positioning device 205 may also be connected to the processor 201, for example in a bypass connection.
The 3D display device 200 may implement audio functions, such as music playing and recording, through the audio module 234, the speaker 235, the receiver 236, the microphone 237, the headphone jack 238, the application processor, and so on. The audio module 234 is configured to convert digital audio information into an analog audio signal output, and to convert an analog audio input into a digital audio signal; it may also be configured to encode and decode audio signals. In some embodiments, the audio module 234, or some of its functional modules, may be arranged in the processor 201. The speaker 235 is configured to convert audio electrical signals into sound signals; the 3D display device 200 can play music, or hands-free calls, through the speaker 235. The receiver 236, also called an "earpiece", is used to convert audio electrical signals into sound signals; when the 3D display device 200 answers a call or a voice message, the receiver 236 can be held close to the ear. The microphone 237 is configured to convert sound signals into electrical signals. The headphone jack 238 is configured to connect wired headphones, and may be a USB interface, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The keys 209 comprise a power key, volume keys, and the like, and may be mechanical keys or touch keys. The 3D display device 200 can receive key input and generate key signal input related to the user settings and function control of the 3D display device 200.
The motor 208 may generate vibration prompts; it may be configured to vibrate for incoming calls or as touch feedback.
The SIM card interface 211 is configured to connect a SIM card. In some embodiments, the 3D display device 200 adopts an embedded SIM card (eSIM).
The pressure sensor 223 is configured to sense pressure signals and may convert them into electrical signals. In some embodiments, the pressure sensor 223 may be arranged on the multi-viewpoint 3D display screen 202, which falls within the scope of the embodiments of the present disclosure.
The barometric pressure sensor 224 is used to measure air pressure. In some embodiments, the 3D display device 200 calculates the altitude from the pressure value measured by the barometric pressure sensor 224, to assist positioning and navigation.
The magnetic sensor 225 comprises a Hall sensor.
The gravity sensor 226, as a posture detection device, can convert motion or gravity into electrical signals, and is configured to measure parameters such as the tilt angle, inertial force, impact and vibration.
The gyroscope sensor 227, as a posture detection device, is configured to determine the motion posture of the 3D display device 200.
By means of the gravity sensor 226 or the gyroscope sensor 227, it can be detected whether the 3D display device 200 is in the first posture or in a second posture different from the first posture.
The acceleration sensor 228 can detect the magnitude of the acceleration of the 3D display device 200 in various directions (generally along three axes).
The distance sensor 229 may be configured to measure distance.
The temperature sensor 230 may be configured to detect temperature.
The fingerprint sensor 231 may be configured to collect fingerprints. The 3D display device 200 can use the collected fingerprint characteristics for fingerprint unlocking, application lock access, fingerprint photographing, fingerprint call answering, and so on.
The touch sensor 232 may be arranged in the multi-viewpoint 3D display screen 202; the touch sensor 232 and the multi-viewpoint 3D display screen 202 form a touch screen, also called a "touch panel".
The bone conduction sensor 233 can acquire vibration signals.
The charging management module 214 is configured to receive charging input from a charger. In some embodiments, the charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 214 may receive the charging input of a wired charger through the USB interface 213. In some wireless charging embodiments, the charging management module 214 may receive wireless charging input through a wireless charging coil of the 3D display device 200.
The power management module 215 is configured to connect the battery 216 and the charging management module 214 to the processor 201. The power management module 215 receives input from at least one of the battery 216 and the charging management module 214, and supplies power to the processor 201, the memory 210, the external memory, the multi-viewpoint 3D display screen 202, the camera device 206, the wireless communication module 242, and so on. In other embodiments, the power management module 215 and the charging management module 214 may also be arranged in the same device.
The software system of the 3D display device 200 may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture. The embodiments shown in the present disclosure take an Android system with a layered architecture as an example to illustrate the software structure of the 3D display device 200. It is conceivable, however, that the embodiments of the present disclosure can be implemented in different software systems, i.e. operating systems.
FIG. 3 is a schematic diagram of the software structure of the 3D display device 200 according to an embodiment of the present disclosure. The layered architecture divides the software into several layers, which communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, from top to bottom: an application layer 510, a framework layer 520, core class libraries and runtime 530, and a kernel layer 540.
The application layer 510 may comprise a series of application packages. As shown in FIG. 3, the application packages may include applications such as Bluetooth, WLAN, navigation, music, camera, calendar, calls, video, gallery, maps, and SMS. The 3D video display method according to the embodiments of the present disclosure may be implemented, for example, in a video application.
The framework layer 520 provides application programming interfaces (APIs) and programming frameworks for the applications in the application layer, and comprises some predefined functions. For example, in some embodiments of the present disclosure, the functions or algorithms for recognizing captured 3D video images and the algorithms for processing images may be included in the framework layer.
As shown in FIG. 3, the framework layer 520 may comprise a resource manager, a telephony manager, a content manager, a notification manager, a window manager, a view system, an installation-package manager, and so on.
The Android Runtime comprises a core library and a virtual machine, and is responsible for the scheduling and management of the Android system.
The core library contains two parts: the functional functions to be called by the Java language, and the Android core library.
The application layer and the framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the framework layer as binary files, and performs functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The core class libraries may comprise multiple functional modules, for example a 3D graphics processing library (e.g. OpenGL ES), a surface manager, an image processing library, a media library, and a graphics engine (e.g. SGL).
The kernel layer 540 is the layer between hardware and software. The kernel layer contains at least camera drivers, audio and video interfaces, a call interface, a Wi-Fi interface, sensor drivers, power management, and a GPS interface.
Here, taking as an example the 3D display device as a mobile terminal with the structures shown in FIG. 2 and FIG. 3, an embodiment of 3D video transmission and display in the 3D display device is described. It is conceivable that other embodiments may include more or fewer features or change the features therein.
In some embodiments, the 3D display device 200, for example a mobile terminal such as a smart cellular phone or a tablet, receives a compressed 3D video signal from a network, such as a cellular network, a WLAN network or Bluetooth, for example by means of the mobile communication module 240 and the antenna 239 or the wireless communication module 242 and the antenna 241 acting as external interfaces. The compressed 3D video signal undergoes, for example, image processing by the GPU 218 and encoding/decoding and decompression by the codec 219, and the decompressed 3D video signal is then sent, for example via the video signal interface 204 acting as an internal interface, such as a MIPI or mini-MIPI interface, to the at least one 3D processing device 203. The video frames of the decompressed 3D video signal comprise the two images, or the composite image, of the embodiments of the present disclosure. The 3D processing device 203 then renders the sub-pixels in the composite sub-pixels of the multi-viewpoint 3D display screen 202 accordingly, thereby realizing 3D video playback.
In other embodiments, the 3D display device 200 reads the compressed 3D video signal stored in the (internal) memory 210, or in an external memory card through the external memory interface 212, and realizes 3D video playback through the corresponding processing, transmission and rendering.
In some embodiments, the playback of the above 3D video is implemented in a video application in the Android application layer 510.
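A hedged sketch of the playback path just described (compressed signal in, codec stage, frames handed to the 3D processing device for sub-pixel rendering) is given below; the function names stand in for the hardware stages and are assumptions for illustration, not an API of the device.

```python
# Hypothetical sketch of the 3D video playback pipeline described above.

def playback_pipeline(packets, decode, render_subpixels):
    """packets: compressed 3D video packets from the network or storage;
    decode: the GPU/codec stage producing decompressed video frames;
    render_subpixels: the 3D processing device's per-frame rendering of
    the sub-pixels on the multi-viewpoint 3D display screen."""
    for packet in packets:
        frame = decode(packet)  # a frame holds two images or a composite image
        render_subpixels(frame)
```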
The transmission and display of a 3D video signal within a 3D display device according to embodiments of the present disclosure is described below with reference to FIG. 1A, FIG. 4A and FIG. 4B. In the illustrated embodiment, the multi-viewpoint 3D display screen 110 may define 6 viewpoints V1-V6; at each viewpoint (spatial position), the user's eye can see the display of the corresponding sub-pixel in the composite sub-pixels of each composite pixel in the display panel of the multi-viewpoint 3D display screen 110. The two different pictures seen by the user's two eyes at different viewpoints form a parallax, which is composed into a 3D picture in the brain.
In some embodiments of the present disclosure, the 3D processing device 130 receives video frames, for example of a decompressed 3D video signal, from the processor 120 via the video signal interface 140, for example as an internal interface. Each video frame may contain, or consist of, two images or a composite image.
In some embodiments, the two images or the composite image may include different types of images and may be arranged in various forms.
As shown in FIG. 4A, a video frame of the 3D video signal contains, or consists of, two images 601, 602 arranged side by side. In some embodiments, the two images may be a left-eye parallax image and a right-eye parallax image, respectively. In some embodiments, the two images may be a rendered color image and a depth image, respectively.
In some embodiments, a video frame of the 3D video signal contains an interleaved composite image. In some embodiments, the composite image may be an interleaved left-eye and right-eye parallax composite image, or an interleaved rendered color and depth composite image.
Those skilled in the art will understand that the embodiments shown in the drawings are only schematic; the two images or the composite image contained in a video frame of the 3D video signal may include other types of images and may be arranged in other forms, which falls within the scope of the embodiments of the present disclosure.
In some embodiments, after receiving a video frame comprising the two images 601, 602, the at least one 3D video processing device 130 renders at least one sub-pixel in each composite sub-pixel based on one of the two images, and at least another sub-pixel in each composite sub-pixel based on the other of the two images.
In other embodiments, after receiving a video frame comprising a composite image, the at least one 3D video processing device renders at least two sub-pixels in each composite sub-pixel based on the composite image, for example rendering at least one sub-pixel according to a first image (portion) of the composite image and at least another sub-pixel according to a second image (portion).
In some embodiments, the rendering of the sub-pixels in the composite sub-pixels is, for example, dynamic rendering performed based on eye positioning data.
Referring to FIG. 4B, in some embodiments, after receiving a video frame with two images 601, 602 that are a left-eye parallax image and a right-eye parallax image respectively, the at least one 3D video processing device 130 renders each composite pixel based on one of the two images, so as to play a 2D image. In the embodiment shown in FIG. 4B, each composite pixel is rendered based on the image 601; in other embodiments, each composite pixel may also be rendered based on the image 602.
In some embodiments, the two images 601, 602 are a rendered color image and a depth image respectively, and the 2D image to be played is generated from the rendered color image and the depth image.
In some embodiments, the 3D display device 100 further comprises a format adjuster (not shown), for example integrated in the processor 120 and constructed as a codec or as part of the GPU. The format adjuster is configured to preprocess the video frames of the 3D video signal, so that the played 3D image or 2D image is adapted to the resolution required by the display or the device.
According to the embodiments of the present disclosure, the 3D display device has two postures and, adapted to the two postures, defines two playback areas. Referring to FIG. 5A and FIG. 5B, in the illustrated embodiment the 3D display device 700 is, for example, a mobile terminal. FIG. 5A shows a schematic front view of the 3D display device 700 in the first posture. As shown, the first posture is, for example, the horizontal screen display posture of the 3D display device 700; adapted to the first posture, the 3D display device 700 defines a first posture playback area 720 in the multi-viewpoint 3D display screen 730. In the first posture of the 3D display device 700, the multiple (same-color) sub-pixels of the composite sub-pixels of each composite pixel in the display screen 730 correspond to the multiple viewpoints of the 3D display device 700. FIG. 5B shows a schematic front view of the 3D display device 700 in the second posture. As shown, the second posture is, for example, the vertical screen display posture of the 3D display device 700; adapted to the second posture, the 3D display device 700 defines a second posture playback area 740 in the multi-viewpoint 3D display screen 730.

Claims (27)

  1. A method for realizing 3D image display, comprising: detecting a posture change of a 3D display device; and, when a change in the posture of the 3D display device is detected, adjusting the displayed image to a display dimension different from the display dimension before the posture change of the 3D display device, and adjusting the display orientation of the displayed image, so that the display orientation of the displayed image is maintained at the initial display orientation from before the posture change of the 3D display device.
  2. The method according to claim 1, wherein detecting a posture change of the 3D display device comprises: detecting a rotational angular velocity of the 3D display device, and determining the posture change of the 3D display device according to the rotational angular velocity; and adjusting the display orientation of the displayed image comprises: rotating the display orientation of the image in the plane in which the image lies, so that the image is maintained at the initial display orientation from before the posture change of the 3D display device.
  3. The method according to claim 2, wherein the posture of the 3D display device comprises at least one of the following: a horizontal screen display posture, a vertical screen display posture, and an oblique screen display posture.
  4. The method according to claim 3, wherein the first posture of the 3D display device before the posture change comprises any one of the horizontal screen display posture, the vertical screen display posture and the oblique screen display posture; the second posture of the 3D display device after the posture change comprises any one of the horizontal screen display posture, the vertical screen display posture and the oblique screen display posture that is different from the first posture; and adjusting the display orientation of the displayed image comprises: rotating the image so that the image is maintained at the initial display orientation corresponding to the first posture.
  5. The method according to claim 4, wherein, when either one of the first posture and the second posture is the oblique screen display posture, adjusting the display orientation of the displayed image further comprises: displaying the image in a full-screen display mode.
  6. The method according to claim 2, wherein adjusting the display orientation of the displayed image comprises: rotating the display orientation of the image in the plane in which the image lies, so that the image remains within an initial display orientation range; wherein the initial display orientation range includes the initial display orientation.
  7. The method according to any one of claims 1 to 6, further comprising: adjusting the display orientation of the displayed image according to a viewing orientation of a user, so that the display orientation of the image is consistent with the viewing orientation of the user.
  8. The method according to claim 7, wherein the viewing orientation of the user comprises any one of: a horizontal viewing orientation, a vertical viewing orientation and an oblique viewing orientation; and the method further comprises: performing eye positioning on the user, and determining the viewing orientation of the user according to the obtained eye positioning data.
  9. The method according to any one of claims 1 to 8, wherein adjusting the display orientation of the displayed image comprises: rendering sub-pixels in a multi-viewpoint 3D display screen of the 3D display device based on the adjusted display orientation of the image.
  10. The method according to any one of claims 1 to 8, wherein adjusting the displayed image to a display dimension different from the display dimension before the posture change of the 3D display device comprises adjusting the displayed image to a 3D image, comprising: in response to the posture change of the 3D display device, rendering corresponding sub-pixels among multiple composite sub-pixels in the multi-viewpoint 3D display screen of the 3D display device according to the 3D image to be played.
  11. The method according to any one of claims 1 to 8, wherein adjusting the displayed image to a display dimension different from the display dimension before the posture change of the 3D display device comprises adjusting the displayed image to a 2D image, comprising: in response to the posture change of the 3D display device, rendering at least one sub-pixel in each composite sub-pixel in the multi-viewpoint 3D display screen of the 3D display device according to the 2D image to be played.
  12. The method according to claim 11, wherein rendering at least one sub-pixel in each composite sub-pixel according to the 2D image to be played comprises: based on eye positioning data, rendering the corresponding sub-pixel in each composite sub-pixel according to the 2D image to be played.
  13. A 3D display device, comprising: a processor; and a memory storing program instructions; wherein the processor is configured to execute the method according to any one of claims 1 to 12 when executing the program instructions.
  14. A 3D display device, comprising: a posture detection device configured to detect a posture change of the 3D display device; and a 3D processing device configured to, based on the detected posture change of the 3D display device, adjust the displayed image to a display dimension different from the display dimension before the posture change of the 3D display device, and adjust the display orientation of the displayed image, so that the displayed image is maintained at the initial display orientation from before the posture change of the 3D display device.
  15. The 3D display device according to claim 14, wherein the posture detection device is configured to detect a rotational angular velocity of the 3D display device, and determine the posture change of the 3D display device according to the rotational angular velocity; and the 3D processing device is configured to rotate the display orientation of the image in the plane in which the displayed image lies, so that the image is maintained at the initial display orientation from before the posture change of the 3D display device.
  16. The 3D display device according to claim 15, wherein the posture of the 3D display device comprises at least one of the following: a horizontal screen display posture, a vertical screen display posture, and an oblique screen display posture.
  17. The 3D display device according to claim 16, wherein the first posture of the 3D display device before the posture change comprises any one of the horizontal screen display posture, the vertical screen display posture and the oblique screen display posture; the second posture of the 3D display device after the posture change comprises any one of the horizontal screen display posture, the vertical screen display posture and the oblique screen display posture that is different from the first posture; and the 3D processing device is configured to rotate the displayed image so that the image is maintained at the initial display orientation corresponding to the first posture.
  18. The 3D display device according to claim 17, wherein the 3D processing device is configured to display the adjusted image in a full-screen display mode when either one of the first posture and the second posture is the oblique screen display posture.
  19. The 3D display device according to claim 15, wherein the 3D processing device is configured to rotate the display orientation of the image in the plane in which the displayed image lies, so that the image remains within an initial display orientation range; wherein the initial display orientation range includes the initial display orientation.
  20. The 3D display device according to any one of claims 14 to 19, wherein the 3D processing device is configured to adjust the display orientation of the displayed image according to a viewing orientation of a user, so that the display orientation of the image is consistent with the viewing orientation of the user.
  21. The 3D display device according to claim 20, wherein the viewing orientation of the user comprises any one of: a horizontal viewing orientation, a vertical viewing orientation and an oblique viewing orientation; the 3D display device further comprises an eye positioning device or an eye positioning data interface configured to obtain eye positioning data; and the 3D processing device is configured to determine the viewing orientation of the user according to the obtained eye positioning data.
  22. The 3D display device according to any one of claims 14 to 21, wherein the 3D processing device is configured to render sub-pixels in a multi-viewpoint 3D display screen of the 3D display device based on the adjusted display orientation of the image.
  23. The 3D display device according to any one of claims 14 to 21, wherein the 3D processing device is configured to, in response to the posture change of the 3D display device, render corresponding sub-pixels among multiple composite sub-pixels in the multi-viewpoint 3D display screen of the 3D display device according to the 3D image to be played.
  24. The 3D display device according to any one of claims 14 to 21, wherein the 3D processing device is configured to, in response to the posture change of the 3D display device, render at least one sub-pixel in each composite sub-pixel in the multi-viewpoint 3D display screen of the 3D display device according to the 2D image to be played.
  25. The 3D display device according to claim 24, wherein the 3D processing device is configured to, based on eye positioning data, render the corresponding sub-pixel in each composite sub-pixel in the multi-viewpoint 3D display screen of the 3D display device according to the 2D image to be played.
  26. A computer-readable storage medium storing computer-executable instructions, wherein the computer-executable instructions are configured to execute the method according to any one of claims 1 to 12.
  27. A computer program product, comprising a computer program stored on a computer-readable storage medium, wherein the computer program comprises program instructions which, when executed by a computer, cause the computer to execute the method according to any one of claims 1 to 12.
PCT/CN2020/133317 2019-12-05 2020-12-02 Method for realizing 3D image display, and 3D display device WO2021110026A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/781,377 US20220417494A1 (en) 2019-12-05 2020-12-02 Method for realizing 3d image display, and 3d display device
EP20896949.3A EP4068780A4 (en) 2019-12-05 2020-12-02 METHOD FOR REALIZING A 3D IMAGE DISPLAY AND 3D DISPLAY DEVICE

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911231156.XA 2019-12-05 Method for realizing 3D image display, and 3D display device
CN201911231156.X 2019-12-05

Publications (1)

Publication Number Publication Date
WO2021110026A1 true WO2021110026A1 (zh) 2021-06-10

Family

ID=76160819

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/133317 2019-12-05 2020-12-02 Method for realizing 3D image display, and 3D display device

Country Status (5)

Country Link
US (1) US20220417494A1 (zh)
EP (1) EP4068780A4 (zh)
CN (1) CN112929637A (zh)
TW (1) TW202137759A (zh)
WO (1) WO2021110026A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114079765A (zh) * 2021-11-17 2022-02-22 Image display method, apparatus and system


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105959676B (zh) * 2016-05-31 2018-09-25 Naked-eye 3D display system capable of horizontal and vertical display
CN108989785B (zh) * 2018-08-22 2020-07-24 Naked-eye 3D display method, apparatus, terminal and medium based on human-eye tracking
CN110072099A (zh) * 2019-03-21 2019-07-30 Naked-eye 3D video pixel arrangement structure and arrangement method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012060412A1 (ja) * 2010-11-02 2012-05-10 Video display device
CN103718148A (zh) * 2013-01-24 2014-04-09 Screen display mode determination method and terminal device
CN104661011A (zh) * 2014-11-26 2015-05-27 Stereoscopic image display method and handheld terminal
CN108076208A (zh) * 2016-11-15 2018-05-25 Display processing method and apparatus, and terminal
CN106990544A (zh) * 2017-04-17 2017-07-28 Display panel and stereoscopic display device
CN107145269A (zh) * 2017-04-19 2017-09-08 Data rotation method and apparatus
CN109547650A (zh) * 2019-02-02 2019-03-29 Method and apparatus for controlling image rotation, and electronic device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4068780A4

Also Published As

Publication number Publication date
EP4068780A4 (en) 2023-12-20
EP4068780A1 (en) 2022-10-05
CN112929637A (zh) 2021-06-08
TW202137759A (zh) 2021-10-01
US20220417494A1 (en) 2022-12-29

Similar Documents

Publication Publication Date Title
TWI818211B (zh) Eye positioning apparatus and method, and 3D display apparatus and method
TWI746302B (zh) Multi-viewpoint 3D display screen and multi-viewpoint 3D display terminal
CN112584125A (zh) Three-dimensional image display device and display method thereof
CN113596319A (zh) Picture-in-picture based image processing method, device, storage medium and program product
CN211791829U (zh) 3D display device
US20240105114A1 (en) Always On Display Method and Mobile Device
WO2021110027A1 (zh) Method for realizing 3D image display, and 3D display device
WO2021110033A1 (zh) 3D display device, method and terminal
WO2021110026A1 (zh) Method for realizing 3D image display, and 3D display device
US20230350629A1 (en) Double-Channel Screen Mirroring Method and Electronic Device
CN211128026U (zh) Multi-viewpoint naked-eye 3D display screen and multi-viewpoint naked-eye 3D display terminal
CN211528831U (zh) Multi-viewpoint naked-eye 3D display screen and naked-eye 3D display terminal
WO2021110040A1 (zh) Multi-viewpoint 3D display screen and 3D display terminal
CN112541861A (zh) Image processing method, apparatus, device and computer storage medium
CN211930763U (zh) 3D display device
CN113923351A (zh) Exit method for multi-channel video shooting, device, storage medium and program product
WO2021110037A1 (zh) Method for realizing 3D image display, and 3D display device
CN112929645A (zh) 3D display device, system and method, and 3D video data communication method
TWI840636B (zh) Method for realizing 3D image display, and 3D display device
CN112929641A (zh) 3D image display method and 3D display device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20896949

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020896949

Country of ref document: EP

Effective date: 20220630