WO2022160795A1 - Method and apparatus for display mode conversion based on light field display - Google Patents

Method and apparatus for display mode conversion based on light field display

Info

Publication number
WO2022160795A1
Authority
WO
WIPO (PCT)
Prior art keywords
sub
image
display
light field
images
Prior art date
Application number
PCT/CN2021/125396
Other languages
English (en)
Chinese (zh)
Inventor
卢文正
刘晟
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2022160795A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951: Computational photography systems, e.g. light-field imaging systems, by using two or more images to influence resolution, frame rate or aspect ratio
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects

Definitions

  • the present application relates to the field of display technology, and in particular, to a method and a device for converting a display mode based on light field display.
  • Display devices are one of the important ways in which the digital world interacts with the physical world.
  • display technology has been continuously innovated: from black and white to color, from thick to thin, from low resolution to high definition, from high radiation to low energy consumption.
  • various display technologies have shown a development trend from two-dimensional (2D) to three-dimensional (3D) display content.
  • the naked-eye three-dimensional display technology has been sought after and favored because it requires no additional auxiliary equipment to be worn and its display content is closer to the real world.
  • Glasses-free 3D display technology has developed tremendously from 2010 to 2017, and major manufacturers in the market have launched traditional glasses-free 3D TVs, computers, mobile phones and other products based on binocular parallax.
  • the traditional naked-eye 3D display technology based on binocular parallax was adopted in the market.
  • the realization principle of binocular parallax 3D display was to let the two eyes see two different parallax pictures, and to realize a 3D effect through the human brain's depth perception of binocular parallax.
  • the key of the technical solution is to project the two parallax pictures on the display screen to a person's left eye and right eye respectively, so that the left eye and the right eye can each see only one corresponding picture at the same time (as shown in FIG. 1).
  • the display pixels are divided into two parts, corresponding to the viewer's left eye and right eye respectively, and the image resolution seen by a single eye is 1/2 of the display screen resolution.
  • due to the shortcomings of the naked-eye 3D display technology based on binocular parallax, such as a single viewing angle, susceptibility to crosstalk and ghosting, and viewing fatigue, commercial products based on this technology have not been recognized by consumers, and the market response has been relatively bleak.
  • the naked-eye 3D light field display technology has gradually matured, which can give viewers a better naked-eye 3D viewing experience.
  • the naked-eye 3D light field display technology enables viewers to form a naked-eye 3D experience with no crosstalk, no ghosting, and high comfort in different viewing window positions by forming multiple three-dimensional viewing angles in the space.
  • the light field display technology is in the early stage of development and is the key technology for the next generation of naked-eye 3D display in the future.
  • the realization principle of light field 3D display is to project multiple different 2D pictures within a continuous angle range, and the content displayed in each viewing angle is a set of continuously changing parallax pictures.
  • the viewing positions where these perspectives are located are called windows.
  • the viewer watches the display screen through different windows, and will see different continuous parallax pictures (as shown in FIG. 2).
  • the architecture of a light field display consists of a flat panel display and an optical structure layer.
  • the design of the optical structure layer directly affects the distribution of the window.
  • the size and position of the viewing windows are adapted to the human eyes, so that a person's left and right eyes fall into different viewing windows, thereby forming a dual depth experience of binocular parallax plus motion parallax.
  • the more viewing windows are designed, the more continuous parallax pictures there are, the larger the field of view (FOV), and the better the viewing effect.
  • the disadvantage of the traditional 2D content display method under such an architecture is its low resolution: the resolution of the two-dimensional content that can be displayed is the same as the display resolution within a single window in the three-dimensional mode, i.e., 1/N of the screen resolution, where N is the number of windows.
  • for example, if the total screen resolution is 8K (7680 × 4320) and the total number of windows is 64 (an 8 × 8 window distribution), the displayed two-dimensional picture resolution is only 960 × 540; with a 16 × 16 window distribution (256 windows), the displayed 2D picture resolution is only 480 × 270. Therefore, this 2D/3D mode conversion loses too much resolution to meet human viewing needs (see the sketch below).
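  • To illustrate the 1/N relationship above, the following is a minimal sketch (my own illustration, not part of the patent) that computes the two-dimensional resolution available in a single window for a given window grid:

```python
def per_window_resolution(screen_w: int, screen_h: int,
                          grid_w: int, grid_h: int) -> tuple[int, int]:
    """2D resolution visible in one viewing window, assuming the screen's
    pixels are divided evenly among a grid_w x grid_h array of windows."""
    return screen_w // grid_w, screen_h // grid_h

# 8K screen with an 8 x 8 window distribution (64 windows):
print(per_window_resolution(7680, 4320, 8, 8))    # (960, 540)
# 16 x 16 window distribution (256 windows):
print(per_window_resolution(7680, 4320, 16, 16))  # (480, 270)
```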
  • the embodiments of the present application provide a light field display-based display mode conversion method and conversion device, which are used to solve the prior-art problem that the resolution is too low when converting to a two-dimensional display mode in a multi-view three-dimensional light field display architecture.
  • an embodiment of the present application provides a display mode conversion method based on light field display, the method comprising:
  • a first sub-image and a second sub-image are acquired based on the first image; wherein the resolutions of the first sub-image and the second sub-image are the same and lower than the resolution of the first image, the resolution content information provided by the first sub-image and the second sub-image is different, and the first image, the first sub-image and the second sub-image are used to display the same target object;
  • the first sub-image and the second sub-image are displayed alternately, in sequence, in the display windows of the light field.
  • the collected high-definition two-dimensional image is converted as the first image, so that what the audience sees is still a high-definition two-dimensional image, with no significant reduction of resolution in terms of look and feel, which provides better compatibility of two-dimensional display content for the 3D light field display architecture.
  • the entire conversion process requires no dynamic adjustment of optical devices or optical path design in hardware to achieve the resolution improvement in the 2D display mode, and is widely applicable to all multi-view naked-eye 3D light field display architectures.
  • the step of acquiring a first sub-image and a second sub-image based on the first image includes the following steps:
  • in the first image, a plurality of sub-regions are divided in units of n adjacent pixels, and each sub-region includes n pixels;
  • a certain pixel is selected from the n pixels in the sub-region, and the selected pixel is displayed in the sub-region in place of the other, unselected pixels, so that the sub-region is sampled into a sub-region that displays only the selected pixel;
  • each of the sub-regions is sampled in this way, and all the sampled sub-regions together form a sub-image whose resolution is 1/n of that of the first image, where n ≥ 2;
  • two sub-images are selected from the plurality of sub-images as the first sub-image and the second sub-image (see the sketch below).
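  • A minimal sketch of the partition-sample-recombine step above (my own illustration, assuming n = 4 with 2 × 2 sub-regions; function and variable names are hypothetical, not from the patent):

```python
import numpy as np

def extract_sub_images(first_image: np.ndarray) -> list[np.ndarray]:
    """Partition the image into 2 x 2 sub-regions (n = 4); for each of the
    four pixel positions, sampling that pixel from every sub-region and
    recombining yields one sub-image at 1/4 of the original resolution."""
    h = first_image.shape[0] - first_image.shape[0] % 2
    w = first_image.shape[1] - first_image.shape[1] % 2
    return [first_image[dy:h:2, dx:w:2] for dy in (0, 1) for dx in (0, 1)]

# A 4K frame (3840 x 2160) yields four 1080P sub-images; two of them,
# e.g. the two diagonal pixel positions, provide different resolution
# content information and serve as the first and second sub-image.
frame = np.random.randint(0, 256, (2160, 3840, 3), dtype=np.uint8)
subs = extract_sub_images(frame)
first_sub, second_sub = subs[0], subs[3]
print(first_sub.shape, second_sub.shape)  # (1080, 1920, 3) (1080, 1920, 3)
```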
  • through this partitioning, sampling and recombination, the first image is converted into a first sub-image and a second sub-image that display different parts of the resolution content information of the first image.
  • the resolutions of the first sub-image and the second sub-image are lower than that of the first image; the two sub-images redistribute the resolution content information of the first image into the low-resolution first sub-image and second sub-image, so that the first sub-image and the second sub-image undergo image fusion in the viewer's eyes, achieving the purpose of improving the perceived resolution of the two-dimensional image.
  • the number of the sub-images is two, and the resolution content information of the first sub-image and the second sub-image is complementary.
  • the resolution content information displayed by the first sub-image and the second sub-image can thus be fused, in the viewer's eyes, into a fused image whose display content is closer to the first image, restoring the resolution content information and resolution of the first image to the greatest extent.
  • the number of the sub-images selected is twice the number of the display windows visible in the monocular field of view.
  • in this way, the number of display windows observed by each of the viewer's eyes is the same, which ensures that the converted first image observed at each window position is the same.
  • the step of displaying the first sub-image and the second sub-image alternately in the display windows of the light field further includes the following steps:
  • the first sub-image and the second sub-image are respectively matched and displayed in the display windows of a window group, and the sub-images matched to two adjacent display windows are different.
  • when the viewer's eyes simultaneously see the two groups of first sub-images and second sub-images with different resolution content information, the viewer perceives a fused image with improved resolution. There is no one-to-one correspondence between the viewer's eyes and the first sub-image and the second sub-image, and the resolution of the fused image viewed at different positions is improved compared with the resolution of the unprocessed two-dimensional image.
  • the number of display windows included in each window group is the same as the number of acquired sub-images.
  • the sum of the display contents of all sub-images displayed in each window group equals the display content of the first image (a matching sketch follows below).
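  • A minimal sketch of the window-matching rule above (my own illustration; the alternating assignment is assumed from the requirement that adjacent windows match different sub-images):

```python
def assign_to_windows(num_windows: int, group: list) -> list:
    """Match the sub-images of a window group to a sequence of display
    windows so that two adjacent windows never show the same sub-image."""
    k = len(group)  # windows per window group, e.g. 2
    return [group[i % k] for i in range(num_windows)]

# Eight linearly arranged windows alternating the two sub-images:
print(assign_to_windows(8, ["first_sub", "second_sub"]))
# ['first_sub', 'second_sub', 'first_sub', 'second_sub', ...]
```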
  • the display windows are linearly arranged in the light field.
  • the solution provided by this embodiment is aimed at application scenarios of terminal devices such as personal computers, monitors, and televisions.
  • the plurality of display windows are arranged in an array in the light field.
  • the solution provided by this embodiment is aimed at application scenarios of terminal devices such as mobile phones, tablet computers, and watches.
  • an embodiment of the present application provides a display mode conversion device based on light field display, the conversion device comprising: an acquisition module, a sampling module, and a display module;
  • the acquisition module is used to acquire the first image
  • the sampling module is configured to obtain a first sub-image and a second sub-image based on the first image; wherein the resolutions of the first sub-image and the second sub-image are the same and lower than the resolution of the first image, the resolution content information provided by the first sub-image and the second sub-image is different, and the first image, the first sub-image and the second sub-image are used to display the same target object;
  • the display module is configured to display the first sub-image and the second sub-image alternately, in sequence, in the display windows of the light field.
  • the sampling module converts the high-definition two-dimensional image collected by the acquisition module as the first image, so that what the audience sees is still a high-definition two-dimensional image, with no significant reduction of resolution in terms of look and feel, which provides better compatibility of two-dimensional display content for the 3D light field display architecture.
  • the entire conversion process requires no dynamic adjustment of optical devices or optical path design in hardware to achieve the resolution improvement in the 2D display mode, and can be widely applied to all multi-view naked-eye 3D light field display architectures.
  • the sampling module includes a partition unit, a sampling unit, a conversion unit, a combination unit and a selection unit;
  • the partition unit is configured to divide the first image into a plurality of sub-regions in units of n adjacent pixels, each sub-region including n pixels;
  • the sampling unit is used to select a certain pixel among the n pixels of a sub-region and display the selected pixel in the sub-region in place of the other, unselected pixels, so that the sub-region is sampled into a sub-region that displays only the selected pixel;
  • the conversion unit is configured to sample each of the sub-regions in this way; all the sampled sub-regions form a sub-image, and the resolution of the sub-image is 1/n of that of the first image, where n ≥ 2;
  • the combination unit is configured to form a plurality of the sub-images according to the different pixels selected and displayed in each sub-region, the resolution content information provided by each sub-image being different;
  • the selection unit is configured to select at least two sub-images from the plurality of sub-images as the first sub-image and the second sub-image (a structural sketch follows below).
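  • A skeletal sketch of how the device's modules and units described above might map onto code (my own structural illustration with n = 4; class and method names are hypothetical, not from the patent):

```python
import numpy as np

class SamplingModule:
    """Combines the partition, sampling, conversion, combination and
    selection units for n = 4 (2 x 2 sub-regions)."""

    def get_sub_images(self, first_image: np.ndarray):
        h = first_image.shape[0] - first_image.shape[0] % 2
        w = first_image.shape[1] - first_image.shape[1] % 2
        # partition + sample + convert + combine: one sub-image per position
        subs = [first_image[dy:h:2, dx:w:2] for dy in (0, 1) for dx in (0, 1)]
        # select: two sub-images with different resolution content information
        return subs[0], subs[3]

class DisplayModule:
    """Grouping and matching units: windows form groups of two, and
    adjacent windows are matched to different sub-images."""

    def layout(self, num_windows: int, first_sub, second_sub):
        return [first_sub if i % 2 == 0 else second_sub
                for i in range(num_windows)]
```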
  • the first image is converted, through the partition unit, the sampling unit, the conversion unit and the combination unit, into a first sub-image and a second sub-image that display different parts of the resolution content information of the first image.
  • these two sub-images have lower resolutions than the first image and redistribute the resolution content information of the first image into the two low-resolution sub-images, so that the first sub-image and the second sub-image undergo image fusion in the viewer's eyes, achieving the purpose of improving the perceived resolution of the two-dimensional image.
  • the number of the sub-images formed by the combination unit is two, and the resolution content information of the first sub-image and the second sub-image is complementary.
  • the resolution content information displayed by the first sub-image and the second sub-image, formed by the combination unit from the sub-regions, can be fused in the viewer's eyes into a fused image whose display content is closer to the first image, restoring the resolution content information and resolution of the first image to the greatest extent.
  • the number of the sub-images selected by the selection unit is twice the number of the display windows visible in the monocular field of view.
  • in this way, the number of display windows observed by each of the viewer's eyes is the same, which ensures that the converted first image observed at each window position is the same.
  • the display module includes a grouping unit and a matching unit;
  • the grouping unit is configured to divide the display windows in the light field into a plurality of window groups, each of which has two display windows;
  • the matching unit is configured to match and display the first sub-image and the second sub-image respectively in the display windows of each window group, the sub-images matched to two adjacent display windows being different.
  • when the viewer's eyes simultaneously see the two groups of first sub-images and second sub-images with different resolution content information, the viewer perceives a fused image with improved resolution. There is no one-to-one correspondence between the viewer's eyes and the first sub-image and the second sub-image, and the resolution of the fused image viewed at different positions is improved compared with the resolution of the unprocessed two-dimensional image.
  • the number of display windows included in each window group is the same as the number of acquired sub-images.
  • the sum of the display contents of all sub-images displayed in each window group equals the display content of the first image.
  • the display windows are linearly arranged in the light field.
  • the solution provided by this embodiment is aimed at application scenarios of terminal devices such as personal computers, monitors, and televisions.
  • the plurality of display windows are arranged in an array in the light field.
  • the solution provided by this embodiment is aimed at application scenarios of terminal devices such as mobile phones, tablet computers, and watches.
  • an embodiment of the present application provides a terminal device, including a memory and a processor: the memory is used to store a computer program; the processor is used to execute the computer program stored in the memory, so that the terminal device executes the method described in the first aspect.
  • an embodiment of the present application provides a computer-readable storage medium, including a program or an instruction, and when the program or the instruction is run on a computer, the method according to the first aspect is executed.
  • the light field display-based display mode conversion method and conversion device disclosed in the embodiments of the present application can improve, under the three-dimensional light field display architecture, the resolution of displayed two-dimensional content, so that the resolution of the two-dimensional image seen by the audience is increased by at least a factor of 2, thereby providing better compatibility of two-dimensional display content for the three-dimensional light field display architecture.
  • the entire conversion process requires no dynamic adjustment of optical devices or optical path design in hardware to achieve the resolution improvement in the two-dimensional display mode, and can be widely applied to all multi-view naked-eye three-dimensional light field display architectures.
  • FIG. 1 is a schematic structural diagram of a terminal device provided in Embodiment 1 of the present application;
  • FIG. 2 is a schematic diagram of the overall flow of the conversion method provided in Embodiment 2 of the present application;
  • FIG. 3 is a schematic flowchart of Step 200 in the conversion method provided in Embodiment 2 of the present application;
  • FIG. 4 is a specific flowchart of Step 300 in the conversion method provided in Embodiment 2 of the present application;
  • FIG. 5a is a schematic diagram of selecting a pixel point A in a sub-region of 2 × 2 pixels in Step 202 of the conversion method provided in Embodiment 2 of the present application;
  • FIG. 5b is a schematic diagram of selecting a pixel point A' in a sub-region of 2 × 2 pixels in Step 203 of the conversion method provided in Embodiment 2 of the present application;
  • FIG. 6 is a schematic diagram of matching the first sub-image and the second sub-image to linearly arranged display windows in Step 302 of the conversion method provided in Embodiment 2 of the present application;
  • FIG. 7 is a schematic diagram of fusing the first sub-image and the second sub-image in the conversion method provided in Embodiment 2 of the present application;
  • FIG. 8 shows the content presentation mode of the three-dimensional light field when the display windows are linearly arranged in the conversion method provided in Embodiment 2 of the present application;
  • FIG. 9 shows the content presentation mode of the three-dimensional light field when the display windows are arranged in an array in the conversion method provided in Embodiment 2 of the present application;
  • FIG. 10 is a schematic structural diagram of the conversion device provided in Embodiment 3 of the present application;
  • FIG. 11 is a schematic structural diagram of the sampling module in the conversion device provided in Embodiment 3 of the present application;
  • FIG. 12 is a schematic structural diagram of the display module in the conversion device provided in Embodiment 3 of the present application.
  • the terminal device may be a mobile phone (also known as a smart terminal device), a tablet computer (tablet personal computer), a personal digital assistant (PDA), an e-book reader, a virtual reality interactive device, or the like.
  • the terminal device can be connected to various types of communication systems, such as a long term evolution (LTE) system, the future fifth generation (5G) system with new radio access technology (NR), and future communication systems such as 6G systems; it can also be connected to wireless local area networks (WLAN), etc.
  • an intelligent terminal device is used as an example for description.
  • the terminal device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
  • the structures illustrated in the embodiments of the present invention do not constitute a specific limitation on the terminal device 100 .
  • the terminal device 100 may include more or fewer components than those shown in the drawings, or combine some components, or separate some components, or arrange the components differently.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in the processor 110 is a cache memory. This memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instruction or data again, it can call it directly from this memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and thereby increases system efficiency.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may contain multiple sets of I2C buses.
  • the processor 110 can be respectively coupled to the touch sensor 180K, the charger, the flash, the camera 193 and the like through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate with each other through the I2C bus interface, so as to realize the touch function of the terminal device 100 .
  • the I2S interface can be used for audio communication.
  • the processor 110 may contain multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 .
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through the Bluetooth headset.
  • the PCM interface can also be used for audio communications, sampling, quantizing and encoding analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is typically used to connect the processor 110 with the wireless communication module 160 .
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 110 communicates with the camera 193 through a CSI interface, so as to realize the shooting function of the terminal device 100 .
  • the processor 110 communicates with the display screen 194 through the DSI interface to implement the display function of the terminal device 100 .
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface may be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like.
  • the GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the terminal device 100, and can also be used to transmit data between the terminal device 100 and peripheral devices. It can also be used to connect headphones to play audio through the headphones. This interface can also be used to connect other terminal devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the terminal device 100 .
  • the terminal device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130 .
  • the charging management module 140 may receive wireless charging input through the wireless charging coil of the terminal device 100 . While the charging management module 140 charges the battery 142 , it can also supply power to the terminal device through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the terminal device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in terminal device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide a wireless communication solution including 2G/3G/4G/5G, etc. applied on the terminal device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or videos through the display screen 194 .
  • the modem processor may be a separate device.
  • the modem processor may be independent of the processor 110, and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 may provide wireless communication solutions applied on the terminal device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, perform frequency modulation and amplification on it, and convert it into electromagnetic waves for radiation via the antenna 2.
  • the antenna 1 of the terminal device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the terminal device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division-synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
  • the terminal device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the display screen 194 is used to display images, videos, etc., wherein the display screen 194 includes a display panel, and the display screen may specifically include a folding screen, a special-shaped screen, etc.
  • the display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like.
  • the terminal device 100 may include one or N display screens 194 , where N is a positive integer greater than one.
  • the terminal device 100 can realize the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194 and the application processor.
  • the ISP is used to process the data fed back by the camera 193 .
  • when shooting, the shutter is opened and light is transmitted through the lens to the camera photosensitive element; the photosensitive element converts the light signal into an electrical signal and transmits it to the ISP for processing, which converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • an optical image of the object is generated through the lens and projected onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV.
  • the terminal device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • the digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the terminal device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy, and the like.
  • Video codecs are used to compress or decompress digital video.
  • the terminal device 100 may support one or more video codecs.
  • the terminal device 100 can play or record videos in various encoding formats, for example, moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the terminal device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the terminal device 100 .
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function, for example, saving files such as music and videos in the external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the terminal device 100 and the like.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the processor 110 executes various functional applications and data processing of the terminal device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • the terminal device 100 may implement audio functions, such as music playback and recording, through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like.
  • the audio module 170 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal. Audio module 170 may also be used to encode and decode audio signals. In one embodiment, the audio module 170 may be provided in the processor 110 , or some functional modules of the audio module 170 may be provided in the processor 110 .
  • The speaker 170A, also referred to as a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the terminal device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals.
  • when the terminal device 100 answers a call or receives a voice message, the voice can be heard by placing the receiver 170B close to the ear.
  • the microphone 170C, also called a "mic", is used to convert sound signals into electrical signals.
  • when making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C.
  • the terminal device 100 may be provided with at least one microphone 170C.
  • the terminal device 100 may be provided with two microphones 170C, which may implement a noise reduction function in addition to collecting sound signals.
  • the terminal device 100 may also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the earphone jack 170D is used to connect wired earphones.
  • the earphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • the pressure sensor 180A may be provided on the display screen 194 .
  • the capacitive pressure sensor may be comprised of at least two parallel plates of conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
  • the terminal device 100 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 194, the terminal device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the terminal device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations that act on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than the first pressure threshold acts on the short message application icon, the instruction for viewing the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, the instruction to create a new short message is executed.
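  • A toy sketch of the pressure-threshold dispatch just described (my own illustration; the threshold value and action names are hypothetical, not from the patent):

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # hypothetical normalized pressure value

def handle_touch_on_message_icon(pressure: float) -> str:
    """Dispatch on touch intensity as described: a light press views the
    short message, a press at or above the threshold creates a new one."""
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"
    return "create_new_short_message"

print(handle_touch_on_message_icon(0.2))  # view_short_message
print(handle_touch_on_message_icon(0.8))  # create_new_short_message
```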
  • the gyro sensor 180B may be used to determine the motion attitude of the terminal device 100 .
  • the angular velocity of the terminal device 100 about three axes (i.e., the x, y, and z axes) can be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the gyro sensor 180B detects the shaking angle of the terminal device 100, calculates the distance to be compensated by the lens module according to the angle, and allows the lens to offset the shaking of the terminal device 100 through reverse motion to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenarios.
  • the air pressure sensor 180C is used to measure air pressure.
  • the terminal device 100 calculates the altitude by using the air pressure value measured by the air pressure sensor 180C to assist in positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the terminal device 100 can detect the opening and closing of the flip holster using the magnetic sensor 180D.
  • the terminal device 100 can detect the opening and closing of the flip according to the magnetic sensor 180D. Further, according to the detected opening and closing state of the leather case or the opening and closing state of the flip cover, characteristics such as automatic unlocking of the flip cover are set.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the terminal device 100 in various directions (generally three axes).
  • the magnitude and direction of gravity can be detected when the terminal device 100 is stationary. The sensor can also be used to identify the posture of the terminal device, and is applied to horizontal/vertical screen switching, pedometers, and the like.
  • the distance sensor 180F is used to measure distance; the terminal device 100 can measure distance by infrared or laser. In one embodiment, when shooting a scene, the terminal device 100 can use the distance sensor 180F to measure the distance to achieve fast focusing.
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the terminal device 100 emits infrared light to the outside through the light emitting diode.
  • the terminal device 100 detects infrared reflected light from nearby objects using a photodiode. When sufficient reflected light is detected, it can be determined that there is an object near the terminal device 100 . When insufficient reflected light is detected, the terminal device 100 may determine that there is no object near the terminal device 100 .
  • the terminal device 100 can use the proximity light sensor 180G to detect that the user holds the terminal device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the terminal device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the terminal device 100 is in a pocket, so as to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the terminal device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, accessing application locks, taking photos with fingerprints, answering incoming calls with fingerprints, and the like.
  • the temperature sensor 180J is used to detect the temperature.
  • the terminal device 100 uses the temperature detected by the temperature sensor 180J to execute the temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold value, the terminal device 100 reduces the performance of the processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
  • the terminal device 100 when the temperature is lower than another threshold, the terminal device 100 heats the battery 142 to avoid abnormal shutdown of the terminal device 100 caused by the low temperature.
  • in some other embodiments, when the temperature is below still another threshold, the terminal device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
  • The touch sensor 180K is also called a "touch device".
  • the touch sensor 180K may be disposed on the display screen 194 , and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided through display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the terminal device 100 , which is different from the position where the display screen 194 is located.
  • the touch screen composed of the touch sensor 180K and the display screen 194 may be located in a side area or a folding area of the terminal device 100, so as to determine the position and gesture touched by the user when the user's hand touches the touch screen. For example, when holding the terminal device, the user can tap any position on the touch screen with a thumb; the touch sensor 180K detects the tap operation and transmits it to the processor, and the processor determines, according to the tap operation, that it is used to wake up the screen.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone of the human vocal part.
  • the bone conduction sensor 180M can also contact the pulse of the human body and receive the blood pressure beating signal.
  • the bone conduction sensor 180M can also be disposed in the earphone, and combined with the bone conduction earphone.
  • the audio module 170 can parse out a voice signal based on the vibration signal of the vocal-part vibrating bone obtained by the bone conduction sensor 180M, so as to realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 180M, and realize the function of heart rate detection.
  • the keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys.
  • the terminal device 100 may receive key input and generate key signal input related to user settings and function control of the terminal device 100 .
  • Motor 191 can generate vibrating cues.
  • the motor 191 can be used for vibrating alerts for incoming calls, and can also be used for touch vibration feedback.
  • touch operations acting on different applications can correspond to different vibration feedback effects.
  • the motor 191 can also correspond to different vibration feedback effects for touch operations on different areas of the display screen 194 .
  • different application scenarios (for example, time reminders, receiving messages, alarm clocks, games, etc.) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 may be an indicator light, which can be used to indicate the charging state and changes in battery level, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be brought into contact with or separated from the terminal device 100 by inserting it into or pulling it out of the SIM card interface 195.
  • the terminal device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card and so on. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the plurality of cards may be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 is also compatible with external memory cards.
  • the terminal device 100 interacts with the network through the SIM card to realize functions such as calls and data communication.
  • the terminal device 100 adopts an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the terminal device 100 and cannot be separated from the terminal device 100 .
  • the touch display screen of the terminal device may include multiple touch display areas.
  • the folding screen of the terminal device includes a folding area in the folded state, and the folding area can also realize a touch response.
  • operations on a specific touch display area of a terminal device are relatively limited, and there are no operations designed specifically for a specific touch display area. Based on this, an embodiment of the present application provides a gesture interaction method.
  • the terminal device can obtain an input event of the touch response area and, in response to the input event, trigger the terminal device to execute the operation instruction corresponding to the input event, so as to implement gesture operations on the side area or folding area of the terminal device and improve the control experience of the terminal device.
  • the memory is used to store a computer program
  • the processor is used to execute the computer program stored in the memory, so that the terminal device executes the method described in Embodiment 2 of the present application.
  • the method for converting a display mode based on light field display provided in Embodiment 2 of the present application is used to overcome the problem that the resolution of two-dimensional display content is too low when a two-dimensional image is displayed in a three-dimensional light field display architecture.
  • the conversion method improves the resolution of the two-dimensional image seen by the audience in the three-dimensional light field by performing the steps of partitioning 12, sampling, recombining and matching the two-dimensional image. Specifically include the following steps:
  • Step 100: Obtain the first image 10.
  • In Step 100, a high-definition two-dimensional image to be displayed in the three-dimensional light field is collected.
  • Step 200: Acquire the first sub-image 13 and the second sub-image 14 based on the first image 10.
  • In Step 200, the high-definition two-dimensional image collected in Step 100 is taken as the first image 10 and converted, so that what the viewer sees is still a high-definition two-dimensional image, without a significant reduction of resolution in terms of look and feel.
  • In this embodiment, each image obtained by converting the first image 10 has a resolution of 1080P.
  • The resolutions of the first sub-image 13 and the second sub-image 14 are the same and lower than the resolution of the first image 10; the resolution content information provided by the first sub-image 13 and the second sub-image 14 is different; and the first image 10, the first sub-image 13, and the second sub-image 14 all display the same target object.
  • The distinguishing content information of the first sub-image 13 and the second sub-image 14 refers to the picture shape, outline, and definition displayed by the first sub-image 13 and the second sub-image 14.
  • For example, the resolution of the first image 10 is 4K, and the resolutions of the first sub-image 13 and the second sub-image 14 are both 1080P. Although the resolutions of the first sub-image 13 and the second sub-image 14 are lower than that of the first image 10, the target object displayed by the first sub-image 13 and the second sub-image 14 is the target object displayed by the first image 10; from the viewer's perspective, however, the resolution content information of the first sub-image 13 and the second sub-image 14 is more blurred than that of the first image 10.
  • The distinguishing content information displayed in the first sub-image 13 and the second sub-image 14 is, in each case, only a part of the first image 10: the resolution content information forming the first sub-image 13 and the second sub-image 14 at each corresponding position all comes from the first image 10. The resolution content information displayed by the first sub-image 13 and by the second sub-image 14 enters the viewer's two eyes separately, and the first sub-image 13 and the second sub-image 14 are merged by the viewer's eyes.
  • In Step 200, the first sub-image 13 and the second sub-image 14 are acquired from the first image 10 by means of partitioning, sampling, and recombination, as shown in FIG. 3. This specifically includes the following steps:
  • Step 201: In the first image 10, divide a plurality of sub-regions 11 in units of n adjacent pixels, each sub-region 11 including n pixels.
  • In Step 201, all the pixels in the first image 10 are partitioned.
  • For example, the resolution of the first image 10 is a × b; that is, the first image 10 has a × b pixels arranged in an array.
  • The first image 10 is divided into several sub-regions 11 by grouping every n adjacent pixels; these sub-regions 11 each include n pixels and have the same shape.
  • Because the first image 10 is merely divided in this way, the resolution content information and the resolution displayed by the first image 10 do not change.
  • Step 202: Select a certain pixel point A from the n pixel points of a sub-region 11 and display the selected pixel point A in the sub-region 11, so that the sub-region 11 is sampled to form a partition 12 in which only the selected pixel point A is displayed.
  • In Step 202, a single sub-region 11 of the partitioned first image 10 is sampled.
  • In a sub-region 11, the contents displayed by the n pixels are different.
  • One pixel A is selected among these pixels; by displaying pixel A alone, the other n-1 pixels are not displayed.
  • This completes the sampling of a single sub-region 11, so that the sub-region 11 is sampled into a partition 12 that displays only the selected pixel point A.
  • Step 203: Sample each sub-region 11 to form a partition 12; all the partitions 12 form a sub-image, and the resolution of the sub-image is 1/n of that of the first image 10, where n ≥ 2.
  • In Step 203, the remaining sub-regions 11 of the partitioned first image 10 are sampled in the same way.
  • The position, among the n pixel points, of the pixel selected in each sub-region 11 should be the same as the position of the pixel A selected in Step 202.
  • After all the sub-regions 11 are sampled to form partitions 12, these partitions 12 display only the pixel points A, and a sub-image is recombined from the content displayed by these pixel points A. Since this sub-image displays only 1/n of the pixels of the first image 10, its resolution is only 1/n of that of the first image 10.
  • Step 204: Form a plurality of sub-images according to the different pixels displayed in each sub-region 11; the resolution content information provided by each sub-image is different.
  • In Step 204, the sampling and recombination of Step 202 and Step 203 are repeated, selecting pixels at different positions each time, so that the pixel positions displayed by the different sub-images are staggered within the sub-region 11; the number of sub-images finally formed is determined by the actual application scenario. Since each sub-image selects different pixels during sampling and recombination, the resolution content information the sub-images provide naturally also differs, as the sketch below illustrates.
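  • Purely as an illustration and not part of the patent text, Steps 201 to 204 can be sketched in Python with NumPy. The function name extract_sub_images, the array representation, and the block parameter (a square sub-region of block × block pixels, so n = block²) are our assumptions:

```python
import numpy as np

def extract_sub_images(first_image: np.ndarray, block: int = 2) -> list[np.ndarray]:
    """Split an image into block*block complementary low-resolution sub-images.

    Each sub-region of block x block adjacent pixels contributes exactly one
    pixel to each sub-image, so every sub-image carries 1/(block*block) of
    the original pixel count (the patent's 1/n with n = block**2).
    """
    h, w = first_image.shape[:2]
    # Crop so that height and width are exact multiples of the block size.
    first_image = first_image[: h - h % block, : w - w % block]
    sub_images = []
    for dy in range(block):
        for dx in range(block):
            # Keep the pixel at offset (dy, dx) inside every sub-region.
            sub_images.append(first_image[dy::block, dx::block])
    return sub_images

# Example: a 4K frame (2160 x 3840) yields four 1080 x 1920 sub-images.
frame = np.zeros((2160, 3840, 3), dtype=np.uint8)
subs = extract_sub_images(frame, block=2)
assert subs[0].shape == (1080, 1920, 3)
```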
  • When the sub-images include only the first sub-image 13 and the second sub-image 14, the resolutions of the first sub-image 13 and the second sub-image 14 are the same, but their resolution content information is different and can form a complementary relationship.
  • This is because, when the first image 10 is converted into two sub-images, the pixel positions selected in the two sampling passes are shifted from each other by one pixel position. The resolution content information displayed by the first sub-image 13 and the second sub-image 14 can therefore be fused, in the viewer's eyes, into a fusion image 18 whose display content is closer to the first image 10; the complementary resolution content information of the first sub-image 13 and the second sub-image 14 restores the resolution content information and the resolution of the first image 10 to the greatest extent, as the recombination sketch below illustrates.
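  • As a consistency check on the complementarity claim (again our sketch, not the patent's), interleaving the sub-images digitally inverts the sampling above. The patent relies on the viewer's eyes and brain to perform an analogous fusion optically; note that with only two of the four sub-images of a 2 × 2 partition the restoration is necessarily approximate, since exact interleaving needs all block² sub-images:

```python
import numpy as np

def recombine(sub_images: list[np.ndarray], block: int = 2) -> np.ndarray:
    """Interleave block*block complementary sub-images back into one image.

    Digital inverse of extract_sub_images above; the viewer-side fusion into
    the fusion image 18 is idealized here as exact pixel interleaving.
    """
    sh, sw = sub_images[0].shape[:2]
    full = np.zeros((sh * block, sw * block) + sub_images[0].shape[2:],
                    dtype=sub_images[0].dtype)
    i = 0
    for dy in range(block):
        for dx in range(block):
            full[dy::block, dx::block] = sub_images[i]
            i += 1
    return full
```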
  • Step 205: Select two sub-images from the plurality of sub-images as the first sub-image 13 and the second sub-image 14.
  • In Step 205, viewers in different application scenarios have different requirements for the resolution of the two-dimensional image they want to see. According to these requirements, the number of sub-images to be arranged in the display windows 15 of the three-dimensional light field is selected; this number corresponds to the number of display windows 15 observable by the viewer.
  • The number of selected sub-images determines the magnification of the resolution of the first image 10. Compared with the resolution of a two-dimensional image in a three-dimensional light field in the prior art, when two sub-images are selected in Step 205, the resolution of the two-dimensional image finally seen by the viewer can be increased by a factor of 2; the bookkeeping below makes this concrete.
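  • The resolution bookkeeping can be stated compactly; the symbols a, b, n, and k below are our notation, not the patent's:

```latex
\[
\text{pixels per sub-image} \;=\; \frac{a\,b}{n},
\qquad
\text{claimed perceived gain} \;=\; k \ \ (\text{number of selected sub-images}).
\]
\[
\text{Example: } a \times b = 3840 \times 2160 \ (\text{4K}),\; n = 4
\;\Rightarrow\; 1920 \times 1080 \ (\text{1080P per sub-image}),\qquad k = 2 .
\]
```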
  • In Step 200, the first image 10 is thus converted into a first sub-image 13 and a second sub-image 14 that display different parts of the resolution content information of the first image 10, two sub-images whose resolutions are lower than that of the first image 10. Redistributing the resolution content information of the first image 10 into the low-resolution first sub-image 13 and second sub-image 14 allows the two sub-images to be fused in the viewer's eyes, achieving the purpose of improving the resolution of the two-dimensional image.
  • Step 300: Display the first sub-image 13 and the second sub-image 14 in sequence, at intervals, in the display windows 15 of the light field.
  • In Step 300, when the display mode of the three-dimensional light field is changed to the two-dimensional display mode, the positions of the display windows 15 viewable by the two eyes differ, because the fields of view of the viewer's two eyes differ. If the viewer is to see the complete content displayed by the first image 10, one of the viewer's eyes needs to see the first sub-image 13 while the other eye needs to see the second sub-image 14. The pictures displayed in different display windows 15 therefore differ, and the pictures displayed in adjacent display windows 15 carry different resolution content information.
  • In Step 300, to enable more first sub-images 13 and second sub-images 14 to be observed within the fields of view of the viewer's eyes, the display windows 15 in the three-dimensional light field are first arranged, and the first sub-image 13 and the second sub-image 14 are then matched to the corresponding display windows 15, as shown in FIG. 4. This specifically includes the following steps:
  • Step 301: Divide the display windows 15 in the light field into a plurality of window groups, each window group having two display windows 15.
  • In Step 301, since the viewing distance of the viewer's eyes and the fields of view of the two eyes differ, the number of display windows 15 needs to be allocated according to the number of sub-images. The number of selected sub-images is usually twice the number of display windows 15 visible within the viewer's monocular field of view, and the two eyes observe the same number of display windows 15, ensuring that the converted first image 10 observed at each window position 17 is the same.
  • The monocular field of view refers to the entire frontal area viewable by the viewer's left eye or right eye alone.
  • To arrange the display windows 15 for the first sub-image 13 and the second sub-image 14, the windows are grouped in pairs: two display windows 15 form a window group, and a plurality of window groups are arranged in sequence.
  • Step 302: Match and display the first sub-image 13 and the second sub-image 14 in the display windows 15 within each window group; the sub-images matched to two adjacent display windows 15 are different.
  • In Step 302, the two display windows 15 in each window group display the first sub-image 13 and the second sub-image 14 respectively. In this way, one eye of the viewer sees only the first sub-image 13 and the other eye sees only the second sub-image 14.
  • The two adjacent display windows 15 belonging to two adjacent window groups also match different sub-images: if one of the display windows 15 matches the first sub-image 13, the other display window 15 matches the second sub-image 14, and if one of the display windows 15 matches the second sub-image 14, the other display window 15 matches the first sub-image 13.
  • Step 300 uses the display window 15 to refer to the viewing position or angle corresponding to each piece of angle information of the three-dimensional light field; the position of a display window 15 is determined by the display structure of the three-dimensional light field.
  • When the viewer's two eyes simultaneously see two groups of the first sub-image 13 and the second sub-image 14 with different resolution content information, the viewer can perceive the fusion image 18 with improved resolution.
  • There is no need for a one-to-one correspondence between the viewer's eyes and the first sub-image 13 and the second sub-image 14, and the resolution of the fusion image 18 viewed at different positions is improved relative to that of a two-dimensional image not processed by the above steps.
  • For example, the acquired first image 10 (a high-definition image with 4K resolution) is partitioned at intervals of two pixels in each direction through Step 201 and divided into several sub-regions 11 of 2 × 2 pixels.
  • Step 202 then selects one of the four pixels of a 2 × 2 sub-region 11 (pixel point A) and displays pixel point A across the whole 2 × 2 sub-region 11.
  • Taking one image as an example, referring to FIG. 5, a Cartesian coordinate system is established over the pixel points of the first image 10. The pixel with coordinates (1,1) is displayed in the sub-region 11 consisting of positions (1,1), (1,2), (2,1), and (2,2); after each sub-region 11 is sampled through Step 203, the first sub-image 13 is formed.
  • Step 202 and Step 203 are then executed again, selecting another pixel point A' displaced from pixel point A by one pixel: the pixel with coordinates (2,2) is displayed in the sub-region 11 consisting of positions (2,2), (2,3), (3,2), and (3,3), finally forming the second sub-image 14.
  • In Step 301, the display windows 15 in the three-dimensional light field are linearly arranged and divided into 12 window groups; through Step 302, the first sub-image 13 and the second sub-image 14 are matched into the display windows in sequence according to the 12 window groups.
  • Step 302 thus presents the arrangement 'ABABAB' in sequence. As shown in FIG. 6, the display windows 15 numbered 1, 3, 5, ..., 21, 23 present the first sub-image 13, and the display windows 15 numbered 2, 4, 6, ..., 22, 24 present the second sub-image 14; a sketch of this matching follows.
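  • A minimal sketch of this matching (our illustration; window numbering starts at 1, as in FIG. 6, and the name match_windows is ours), which also generalizes to the 'ABCD' arrangement used later when four sub-images are selected:

```python
def match_windows(num_windows: int, sub_images: list) -> dict[int, object]:
    """Assign sub-images to linearly arranged display windows in a
    round-robin pattern: 'ABAB...' for two sub-images, 'ABCD...' for four.
    """
    k = len(sub_images)
    return {w: sub_images[(w - 1) % k] for w in range(1, num_windows + 1)}

# 24 windows and two sub-images reproduce the arrangement of FIG. 6:
layout = match_windows(24, ["A", "B"])
assert layout[1] == "A" and layout[2] == "B" and layout[24] == "B"
```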
  • When viewing the screen, the viewer's left eye and right eye fall into two different fields of view at the same time; that is, the left eye receives only display windows 15 presenting the first sub-image 13, while the right eye receives only display windows 15 presenting the second sub-image 14.
  • The human brain fuses the received images through visual neural processing, merging the two low-resolution images, the first sub-image 13 and the second sub-image 14, into a higher-resolution fusion image 18, as shown in FIG. 7.
  • The conversion method performed through the above steps changes the content displayed in the display windows 15 in the light field: the original two-dimensional image is replaced by the first sub-image 13 and the second sub-image 14 obtained in this Embodiment 2 from the converted first image 10.
  • The resolutions of the first sub-image 13 and the second sub-image 14 seen by the viewer in the display windows 15 are lower than that of the first image 10, and their resolution content information is complementary, so that the resolution of two-dimensional display content can be improved under the three-dimensional light field display architecture: the resolution of the two-dimensional image seen by the viewer can be increased by at least 2 times, thereby providing the three-dimensional light field display architecture with better compatibility with two-dimensional image display content.
  • The whole conversion process does not need to dynamically adjust the optical devices or the optical path design in hardware to achieve the resolution improvement in the two-dimensional display mode, and it can be widely applied to all multi-view naked-eye three-dimensional light field display architectures.
  • The arrangement of the display windows 15 in the light field may differ according to the actual application scenario.
  • In one application scenario, the display windows 15 of the three-dimensional light field display structure can be arranged linearly in the light field, and the sub-images are arranged in the display windows 15 as a linearly spaced arrangement of the first sub-image 13 and the second sub-image 14, as shown in FIG. 8.
  • In another application scenario, the display windows 15 of the three-dimensional light field display structure can be arranged in an array in the light field: in the two-dimensional plane, these display windows 15 form a square matrix in the horizontal and vertical directions, and the sub-images are arranged in the display windows 15 in a checkerboard-like spaced arrangement of the first sub-image 13 and the second sub-image 14, as shown in FIG. 9. In this application scenario, whether the viewer looks at the screen vertically or horizontally, the technical effect of increased resolution in the two-dimensional mode is still obtained; a sketch of such a checkerboard assignment follows.
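  • A sketch of such a checkerboard assignment (our illustration; rows and cols stand for the array of display windows 15):

```python
def checkerboard_layout(rows: int, cols: int) -> list[list[str]]:
    """Assign two sub-images to an array of display windows so that
    horizontally and vertically adjacent windows always differ."""
    return [["A" if (r + c) % 2 == 0 else "B" for c in range(cols)]
            for r in range(rows)]

for row in checkerboard_layout(4, 6):
    print(" ".join(row))
# A B A B A B
# B A B A B A
# A B A B A B
# B A B A B A
```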
  • When the conversion method is executed up to Step 205, in some application scenarios, since the pictures displayed by the windows in the three-dimensional light field have a certain intensity distribution and overlap each other, one eye can observe 2 or more display windows 15 at the same time, and the two eyes can observe 4 or more display windows 15 at the same time, when the display windows 15 are densely distributed. The number of sub-images selected from the plurality of sub-images for display in the display windows 15 is then correspondingly more than 2, and the number of acquired sub-images is greater than 2; when Step 301 is executed, however, the number of display windows 15 included in a window group always equals the number of acquired sub-images.
  • The sum of the display contents of all the sub-images displayed in each window group is the display content of the first image 10.
  • With four sub-images, the resolution of the two-dimensional image can be increased by up to 4 times, and the arrangement can be either a linear 'ABCDABCD' interval arrangement or a checkerboard arrangement with staggered intervals.
  • With six sub-images, the resolution improvement of the two-dimensional image can be up to 6 times, and so on.
  • In some application scenarios, the sub-images in the display windows 15 may not be arranged at intervals.
  • For example, the sub-images in the display windows 15 are arranged in the manner 'AABBAABB'; a sketch of this arrangement follows. In this case, only the structure of the optical structure layer 16 in front of the display windows 15 needs to be adjusted, so that one eye of the viewer can see only the first sub-image 13 and the other eye can see only the second sub-image 14.
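  • A sketch of this pair-blocked arrangement (our illustration; the optical structure layer 16 itself is not modeled):

```python
def paired_layout(num_windows: int) -> str:
    """'AABBAABB...' arrangement: consecutive pairs of display windows show
    the same sub-image; the optical structure layer 16 must then route each
    pair to one eye."""
    return "".join("A" if (w // 2) % 2 == 0 else "B" for w in range(num_windows))

assert paired_layout(8) == "AABBAABB"
```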
  • Through the conversion method of Embodiment 2, the resolution in the two-dimensional display mode of the three-dimensional light field can be effectively improved, by 2 times or more compared with the prior art.
  • The conversion method of Embodiment 2 does not need to modify the device structure at the hardware level; by adjusting the displayed image through software, it realizes the resolution improvement in the two-dimensional display mode of the three-dimensional light field and is widely applicable to different three-dimensional light field display architectures.
  • The conversion method of Embodiment 2 can also be applied to a virtual reality or augmented reality head-mounted display device, so that the first sub-image 13 and the second sub-image 14 are displayed in the left-eye lens and the right-eye lens respectively; when finally viewing the two-dimensional image content, the viewer obtains a resolution improvement of the two-dimensional image and thus sees a clearer picture.
  • Embodiment 3 of the present application provides a conversion device for a display mode based on light field display.
  • The conversion device includes an acquisition module 20, a sampling module 30, and a display module 40.
  • The conversion device overcomes the problem that the resolution of two-dimensional display content is too low when a two-dimensional image is displayed in the display structure of a three-dimensional light field.
  • The acquisition module 20 is used for acquiring the first image 10.
  • The high-definition two-dimensional image displayed in the three-dimensional light field is collected as the first image 10 and converted, so that it still appears as a high-definition two-dimensional image when viewed by the audience.
  • For example, the acquisition module 20 acquires a first image 10 with a resolution of 4K; each image finally presented to the audience after conversion has a resolution of 1080P.
  • The sampling module 30 is configured to acquire the first sub-image 13 and the second sub-image 14 based on the first image 10.
  • The resolutions of the first sub-image 13 and the second sub-image 14 are the same and lower than the resolution of the first image 10; the resolution content information provided by the first sub-image 13 and the second sub-image 14 is different; and the first image 10, the first sub-image 13, and the second sub-image 14 all display the same target object.
  • For example, the resolution of the first image 10 is 4K, and the resolutions of the first sub-image 13 and the second sub-image 14 are both 1080P. Although the resolutions of the first sub-image 13 and the second sub-image 14 are lower than that of the first image 10, the target object they display is the target object displayed by the first image 10; from the viewer's perspective, their resolution content information is more blurred than that of the first image 10.
  • The distinguishing content information displayed in the first sub-image 13 and the second sub-image 14 is, in each case, only a part of the first image 10: the resolution content information forming the first sub-image 13 and the second sub-image 14 at each corresponding position all comes from the first image 10. The resolution content information displayed by the first sub-image 13 and by the second sub-image 14 enters the viewer's two eyes separately, and the two sub-images are merged by the viewer's eyes.
  • The sampling module 30 obtains the first sub-image 13 and the second sub-image 14 from the first image 10 by means of partitioning, sampling, and recombination; to this end, the sampling module 30 includes a partition unit 31, a sampling unit 32, a conversion unit 33, a combination unit 34, and a selection unit 35.
  • The partition unit 31 is configured to divide the first image 10 into a plurality of sub-regions 11 in units of n adjacent pixels, each sub-region 11 including n pixels.
  • The partition unit 31 partitions all the pixels in the first image 10.
  • For example, the resolution of the first image 10 is a × b; that is, the first image 10 has a × b pixels arranged in an array.
  • The first image 10 is divided into several sub-regions 11 by grouping every n adjacent pixels; these sub-regions 11 each include n pixels and have the same shape.
  • Because the first image 10 is merely divided into sub-regions 11 of identical shape, the resolution content information and the resolution displayed by the first image 10 do not change.
  • The sampling unit 32 is used to select a certain pixel point from the n pixel points of a sub-region 11 and display the selected pixel point in the sub-region 11 in place of the other, unselected pixel points, so that the sub-region 11 is sampled to form a partition 12 displaying only the selected pixel point.
  • The sampling unit 32 samples a single sub-region 11 of the partitioned first image 10. In a sub-region 11, the contents displayed by the n pixels are different. One pixel A is selected among these pixels; by displaying pixel A alone, the other n-1 pixels are not displayed. This completes the sampling of a single sub-region 11, so that the sub-region 11 is sampled into a partition 12 that displays only the selected pixel point A.
  • The conversion unit 33 is configured to sample each sub-region 11 to form a partition 12; all the partitions 12 form a sub-image, and the resolution of the sub-image is 1/n of that of the first image 10, where n ≥ 2.
  • The conversion unit 33 samples the remaining sub-regions 11 of the partitioned first image 10 in the same way. The position, among the n pixel points, of the pixel selected in each sub-region 11 should be the same as the position of the pixel A selected by the sampling unit 32.
  • After all the sub-regions 11 are sampled to form partitions 12, these partitions 12 display only the pixel points A, and a sub-image is recombined from the content displayed by these pixel points A. Since this sub-image displays only 1/n of the pixels of the first image 10, its resolution is only 1/n of that of the first image 10.
  • The combination unit 34 is configured to compose a plurality of sub-images according to the different pixels selected and displayed in each sub-region 11; the resolution content information provided by each sub-image is different.
  • The sampling and recombination performed by the sampling unit 32 and the conversion unit 33 are repeated, selecting pixels at different positions each time, so that the pixel positions displayed by the different sub-images are staggered within the sub-region 11; the number of sub-images finally formed is determined by the actual application scenario. Since each sub-image selects different pixels during sampling and recombination, the resolution content information the sub-images provide naturally also differs.
  • When the sub-images include only the first sub-image 13 and the second sub-image 14, the resolutions of the first sub-image 13 and the second sub-image 14 are the same, but their resolution content information is different and can form a complementary relationship.
  • This is because, when the first image 10 is converted into two sub-images, the pixel positions selected in the sampling unit 32 and the conversion unit 33 are shifted from each other by one pixel position. The resolution content information displayed by the first sub-image 13 and the second sub-image 14 so formed can be fused, in the viewer's eyes, into a fusion image 18 whose display content is closer to the first image 10; at this point the distinguishing content information of the first sub-image 13 and the second sub-image 14 forms a complementary relationship.
  • The selection unit 35 is configured to select at least two sub-images from the plurality of sub-images as the first sub-image 13 and the second sub-image 14.
  • Viewers in different application scenarios have different requirements for the resolution of the two-dimensional image they want to see; according to these requirements, the number of sub-images to be arranged is selected, this number corresponding to the number of display windows 15 visible to the viewer.
  • The number of sub-images selected by the selection unit 35 determines the magnification of the resolution of the first image 10. Compared with the resolution of a two-dimensional image in a three-dimensional light field in the prior art, when the selection unit 35 selects two sub-images, the resolution of the two-dimensional image finally seen by the viewer can be increased by a factor of 2.
  • The first image 10 is thus converted into a first sub-image 13 and a second sub-image 14 that display different parts of the resolution content information of the first image 10, two sub-images whose resolutions are lower than that of the first image 10. Redistributing the resolution content information of the first image 10 into the low-resolution first sub-image 13 and second sub-image 14 allows the two sub-images to be fused in the viewer's eyes, achieving the purpose of improving the resolution of the two-dimensional image; a sketch of how the modules chain together is given below.
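  • How the modules chain together can be sketched as follows; the class and method names are our decomposition, not literal patent code, and the diagonal pixel offsets mirror the FIG. 5 example:

```python
import numpy as np

class ConversionDevice:
    """Illustrative wiring of acquisition module 20, sampling module 30
    (with selection unit 35), and display module 40."""

    def __init__(self, block: int = 2, k: int = 2):
        self.block = block  # sub-regions of block x block pixels (n = block**2)
        self.k = k          # sub-images kept by selection unit 35 (k <= block here)

    def acquire(self, source: np.ndarray) -> np.ndarray:
        # Acquisition module 20: take the source frame as the first image 10.
        return source

    def sample(self, first_image: np.ndarray) -> list[np.ndarray]:
        # Sampling module 30: diagonal offsets (0,0), (1,1), ... mirror the
        # FIG. 5 example where pixel A' is displaced diagonally from pixel A.
        return [first_image[d::self.block, d::self.block] for d in range(self.k)]

    def display(self, subs: list[np.ndarray], num_windows: int) -> dict[int, np.ndarray]:
        # Display module 40: round-robin matching into the display windows 15.
        return {w: subs[(w - 1) % len(subs)] for w in range(1, num_windows + 1)}

    def convert(self, source: np.ndarray, num_windows: int) -> dict[int, np.ndarray]:
        return self.display(self.sample(self.acquire(source)), num_windows)

# Example: a 4K frame into 24 windows, alternating two 1080P sub-images.
device = ConversionDevice(block=2, k=2)
windows = device.convert(np.zeros((2160, 3840, 3), dtype=np.uint8), 24)
assert windows[1].shape == (1080, 1920, 3)
```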
  • The display module 40 is configured to display the first sub-image 13 and the second sub-image 14 in sequence, at intervals, in the display windows 15 of the light field.
  • Because the fields of view of the viewer's two eyes differ, the positions of the display windows 15 viewable by the two eyes in the same three-dimensional light field also differ.
  • If the viewer is to see the complete content displayed by the first image 10, the pictures displayed in different display windows 15 must differ, and the pictures displayed in adjacent display windows 15 carry different resolution content information.
  • The display module 40 therefore needs first to arrange the display windows 15 in the three-dimensional light field and then match the first sub-image 13 and the second sub-image 14 into the corresponding display windows 15; the display module 40 thus specifically includes a grouping unit 41 and a matching unit 42.
  • The grouping unit 41 is used to divide the display windows 15 in the light field into a plurality of window groups, each window group having two display windows 15. Since the viewing distance of the viewer's eyes and the fields of view of the two eyes differ, the grouping unit 41 needs to allocate the number of display windows 15 according to the number of sub-images.
  • The number of sub-images selected is usually twice the number of display windows 15 visible within the viewer's monocular field of view.
  • The monocular field of view refers to the entire frontal area viewable by the viewer's left eye or right eye alone.
  • To arrange the display windows 15 for the first sub-image 13 and the second sub-image 14, the grouping unit 41 groups the windows in pairs: two display windows 15 form a window group, and a plurality of window groups are arranged in sequence.
  • The matching unit 42 is configured to match the first sub-image 13 and the second sub-image 14 to the display windows 15 within each window group; the sub-images matched to two adjacent display windows 15 are different.
  • The matching unit 42 matches the two display windows 15 in each window group to the first sub-image 13 and the second sub-image 14 respectively. In this way, one eye of the viewer sees only the first sub-image 13 and the other eye sees only the second sub-image 14.
  • The two adjacent display windows 15 belonging to two adjacent window groups are also matched with different sub-images: if the matching unit 42 matches one of the display windows 15 with the first sub-image 13, the other display window 15 matches the second sub-image 14, and if the matching unit 42 matches one of the display windows 15 with the second sub-image 14, the other display window 15 matches the first sub-image 13.
  • The display module 40 uses the display window 15 to refer to the viewing position or angle corresponding to each piece of angle information of the three-dimensional light field; the position of a display window 15 is determined by the display structure of the three-dimensional light field.
  • When the viewer's two eyes simultaneously see two groups of the first sub-image 13 and the second sub-image 14 with different resolution content information, the viewer can perceive the fusion image 18 with improved resolution.
  • There is no need for a one-to-one correspondence between the viewer's eyes and the first sub-image 13 and the second sub-image 14, and the resolution of the fusion image 18 viewed at different positions is improved relative to that of a two-dimensional image not processed by the above steps.
  • By using the conversion device, the content displayed in the display windows 15 in the light field is changed: the original two-dimensional image is replaced by the first sub-image 13 and the second sub-image 14 obtained in this Embodiment 3 from the converted first image 10.
  • The resolutions of the first sub-image 13 and the second sub-image 14 seen by the viewer in the display windows 15 are lower than that of the first image 10, and their resolution content information is complementary, so that the resolution of two-dimensional display content can be improved under the three-dimensional light field display architecture: the resolution of the two-dimensional image seen by the viewer can be increased by at least 2 times, thereby providing the three-dimensional light field display architecture with better compatibility with two-dimensional image display content.
  • The whole conversion process does not need to dynamically adjust the optical devices or the optical path design in hardware to achieve the resolution improvement in the two-dimensional display mode, and it can be widely applied to all multi-view naked-eye three-dimensional light field display architectures.
  • In one application scenario, the display windows 15 of the three-dimensional light field display structure can be arranged linearly in the light field, and the sub-images are arranged in the display windows 15 as a linearly spaced arrangement of the first sub-image 13 and the second sub-image 14.
  • In another application scenario, the display windows 15 of the three-dimensional light field display structure can be arranged in an array in the light field: in the two-dimensional plane, these display windows 15 form a square matrix in the horizontal and vertical directions, and the sub-images are arranged in the display windows 15 in a checkerboard-like spaced arrangement of the first sub-image 13 and the second sub-image 14.
  • When the selection unit 35 is used to select sub-images, in some application scenarios, since the pictures displayed by the windows in the three-dimensional light field have a certain intensity distribution and overlap each other, one eye can observe two or more display windows 15 at the same time, and the two eyes can observe four or more display windows 15 at the same time, when the display windows 15 are densely distributed.
  • The number of sub-images selected for display in the display windows 15 is then correspondingly more than 2, and the number of acquired sub-images is greater than 2.
  • When the grouping unit 41 is used, the number of display windows 15 included in a window group always equals the number of acquired sub-images.
  • By displaying the two-dimensional content of four low-resolution sub-images with different resolution content information and arranging the four sub-images in the display windows 15 in a continuous presentation such as 'ABCDABCD', the resolution of the two-dimensional image can be increased by up to 4 times; the arrangement can be either a linear 'ABCDABCD' interval arrangement or a checkerboard arrangement with staggered intervals.
  • With six sub-images, the conversion device of Embodiment 3 can achieve a resolution improvement of the two-dimensional image of up to 6 times, and so on.
  • In some application scenarios, the sub-images in the display windows 15 may not be arranged at intervals.
  • For example, the sub-images in the display windows 15 are arranged in the manner 'AABBAABB'. In this case, only the structure of the optical structure layer 16 in front of the display windows 15 needs to be adjusted, so that one eye of the viewer can see only the first sub-image 13 and the other eye can see only the second sub-image 14.
  • The conversion device of Embodiment 3 can effectively improve the resolution in the two-dimensional display mode of the three-dimensional light field, by 2 times or more compared with the prior art.
  • The conversion device of Embodiment 3 does not need to modify the device structure at the hardware level; by adjusting the displayed image through software, it realizes the resolution improvement in the two-dimensional display mode of the three-dimensional light field and is widely applicable to different three-dimensional light field display architectures.
  • The conversion device of Embodiment 3 can also be applied to a virtual reality or augmented reality head-mounted display device, so that the first sub-image 13 and the second sub-image 14 are displayed in the left-eye lens and the right-eye lens respectively; when finally viewing the two-dimensional image content, the viewer obtains a resolution improvement of the two-dimensional image and thus sees a clearer picture.
  • Embodiment 4 of the present application provides a computer-readable storage medium including a program or instructions; when the program or instructions run on a computer, the conversion method disclosed in Embodiment 2 of the present application is executed.
  • All or part of the above-mentioned embodiments may be implemented by software, hardware, firmware, or any combination thereof.
  • When software is used for the implementation, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present application are generated.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means.
  • The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device, such as a server or data center, that integrates one or more available media.
  • The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., high-density digital video discs (DVDs)), or semiconductor media (e.g., solid-state drives (SSDs)), and the like.
  • The light field display-based display mode conversion method and conversion device disclosed in the embodiments of the present application can improve, under the three-dimensional light field display structure, the resolution of the display content of a two-dimensional image, so that the resolution of the two-dimensional image seen by the audience can be increased by at least 2 times, thereby providing the three-dimensional light field display architecture with better compatibility with two-dimensional image display content.
  • The whole conversion process does not need to dynamically adjust the optical devices or the optical path design in hardware to achieve the resolution improvement in the two-dimensional display mode, and it can be widely applied to all multi-view naked-eye three-dimensional light field display architectures.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

Disclosed are a display mode conversion method and apparatus based on light field display. The resolution of the display content of a two-dimensional image can be improved under a three-dimensional light field display architecture, so that the resolution of the two-dimensional image seen by a viewer can be increased by at least two times, thereby providing the three-dimensional light field display architecture with better compatibility with two-dimensional image display content. During the whole conversion process, there is no need to achieve the resolution improvement in the two-dimensional display mode by means of dynamic adjustment and control of optical devices, optical path design, and the like in hardware, and the present invention can be widely applied to all multi-view naked-eye three-dimensional light field display architectures.
PCT/CN2021/125396 2021-01-29 2021-10-21 Procédé et appareil de conversion de mode d'affichage basés sur un affichage de champ lumineux WO2022160795A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110127079.4 2021-01-29
CN202110127079.4A CN114827440A (zh) 2021-01-29 2021-01-29 基于光场显示的显示模式的转换方法及转换装置

Publications (1)

Publication Number Publication Date
WO2022160795A1 true WO2022160795A1 (fr) 2022-08-04

Family

ID=82526794

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/125396 WO2022160795A1 (fr) 2021-01-29 2021-10-21 Procédé et appareil de conversion de mode d'affichage basés sur un affichage de champ lumineux

Country Status (2)

Country Link
CN (1) CN114827440A (fr)
WO (1) WO2022160795A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0717373A2 (fr) * 1994-12-15 1996-06-19 Sanyo Electric Co. Ltd Méthode pour convertir des images bi-dimensionnelles en images tri-dimensionnelles dans un jeu vidéo
CN1706201A (zh) * 2002-11-25 2005-12-07 三洋电机株式会社 立体视觉用途图像提供方法和立体图像显示设备
CN102868893A (zh) * 2011-07-05 2013-01-09 天马微电子股份有限公司 一种裸眼3d图像的形成方法、装置及3d显示系统
US9131209B1 (en) * 2014-10-27 2015-09-08 Can Demirba{hacek over (g)} Method for automated realtime conversion of 2D RGB images and video to red-cyan stereoscopic anaglyph 3D
CN111147848A (zh) * 2019-12-30 2020-05-12 清华大学深圳国际研究生院 一种基于内容自适应的光场视频编码方法

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2405543A (en) * 2003-08-30 2005-03-02 Sharp Kk Multiple view directional display having means for imaging parallax optic or display.
JP4974984B2 (ja) * 2008-09-11 2012-07-11 三菱電機株式会社 映像記録装置及び方法
KR20110044573A (ko) * 2009-10-23 2011-04-29 삼성전자주식회사 디스플레이장치 및 그 영상표시방법
CN102914892B (zh) * 2012-11-19 2015-12-23 中航华东光电有限公司 一种液晶狭缝光栅的驱动方法

Also Published As

Publication number Publication date
CN114827440A (zh) 2022-07-29

Similar Documents

Publication Publication Date Title
US20220247857A1 (en) Full-screen display method for mobile terminal and device
WO2020192458A1 (fr) Procédé d'affichage d'image et dispositif de visiocasque
KR20210130206A (ko) 헤드 마운트 디스플레이를 위한 이미지 디스플레이 방법 및 장치
CN113475057B (zh) 一种录像帧率的控制方法及相关装置
WO2020093988A1 (fr) Procédé de traitement d'image et dispositif électronique
WO2022262313A1 (fr) Procédé de traitement d'image à base d'incrustation d'image, dispositif, support de stockage, et produit de programme
WO2022143128A1 (fr) Procédé et appareil d'appel vidéo basés sur un avatar, et terminal
WO2021052342A1 (fr) Procédé de réglage de couleur de trame pour un appareil électronique et dispositif
WO2022017205A1 (fr) Procédé d'affichage de multiples fenêtres et dispositif électronique
KR20220101693A (ko) 에너지 효율적인 디스플레이 처리 방법 및 디바이스
WO2022095744A1 (fr) Procédé de commande d'affichage vr, dispositif électronique et support de stockage lisible par ordinateur
WO2023005298A1 (fr) Procédé et appareil de masquage de contenu d'image basés sur de multiples caméras
CN111103975B (zh) 显示方法、电子设备及系统
WO2022166624A1 (fr) Procédé d'affichage sur écran et appareil associé
CN105227828B (zh) 拍摄装置和方法
CN113805983B (zh) 调整窗口刷新率的方法及电子设备
CN113573120A (zh) 音频的处理方法及电子设备
WO2022160795A1 (fr) Procédé et appareil de conversion de mode d'affichage basés sur un affichage de champ lumineux
CN113923351B (zh) 多路视频拍摄的退出方法、设备和存储介质
WO2022135195A1 (fr) Procédé et appareil permettant d'afficher une interface de réalité virtuelle, dispositif, et support de stockage lisible
WO2022062985A1 (fr) Procédé et appareil d'ajout d'effet spécial dans une vidéo et dispositif terminal
WO2022033344A1 (fr) Procédé de stabilisation vidéo, dispositif de terminal et support de stockage lisible par ordinateur
CN113596320B (zh) 视频拍摄变速录制方法、设备、存储介质
WO2023030067A1 (fr) Procédé de commande à distance, dispositif de commande à distance et dispositif commandé
CN113810595B (zh) 视频拍摄的编码方法、设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21922386

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21922386

Country of ref document: EP

Kind code of ref document: A1