CN112235562A - 3D display terminal, controller and image processing method

Publication number: CN112235562A (application CN202011082903.0A; granted as CN112235562B)
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: movement distance, camera, video source, format, controller
Inventors: 任子健, 史东平, 吴连朋, 王宝云
Assignee: Qingdao Hisense Media Network Technology Co Ltd
Legal status: Granted; active

Classifications

    • H04N13/398 — Stereoscopic video systems; image reproducers; synchronisation or control thereof
    • H04N13/106 — Stereoscopic video systems; processing of stereoscopic or multi-view image signals; processing image signals
    • H04N13/344 — Stereoscopic video systems; image reproducers; displays for viewing with the aid of head-mounted left-right displays [HMD]

Abstract

The embodiment of the application discloses a 3D display terminal, a controller and an image processing method. The 3D display terminal comprises a rendering camera, a pixel shader and a controller. The 3D display terminal disclosed in this embodiment can use the controller to calculate the left movement distance and the right movement distance of the current camera, select the corresponding camera parameters according to the magnitude relationship between the two distances, and calculate U1V1 based on the camera parameters and U0V0, so that the pixel shader samples according to U1V1. Finally, the pixel shader presents the sampled picture to the corresponding eye.

Description

3D display terminal, controller and image processing method
Technical Field
The application relates to the technical field of display equipment, in particular to a 3D display terminal, a controller and an image processing method.
Background
3D (three-dimensional) video is produced by capturing two views of a scene with a camera device that, like a pair of human eyes, uses two lenses. The two views are then played back synchronously, so that two slightly different images are displayed in different areas of a screen and the viewer's left and right eyes each see one of the two areas; this deceives the brain and produces a stereoscopic effect.
3D stereoscopic video has existed for many years, from early red-blue (anaglyph) playback and shutter-type 3D televisions to the polarized 3D projection used in cinemas in recent years. All of these traditional 3D systems superimpose the different left-eye and right-eye video contents on the same screen, and then use glasses based on various principles to separate the contents on the screen and project the picture for each eye into the corresponding eye.
In recent years, VR (Virtual Reality) technology has developed rapidly, and 3D stereoscopic video has gained a new display medium: VR glasses. Unlike the traditional approach, VR glasses are closed displays with two screens corresponding to the viewer's two eyes (or a single screen split into two halves, one per eye), rather than projecting the pictures for the two eyes onto the same screen. How to automatically determine which eye the currently rendered screen corresponds to, and how to automatically select the picture for that eye from the synthesized 3D video source for output and display, is a technical problem that urgently needs to be solved.
Disclosure of Invention
The application aims to provide a 3D display terminal, a controller and an image processing method, so as to solve the problems in the prior art.
A first aspect of an embodiment of the present application shows a 3D display terminal, including:
a rendering camera for rendering of an image, comprising: a left camera and a right camera;
a pixel shader for rendering of an image;
a controller configured to: if the video source format is a 3D format, calculating the left movement distance and the right movement distance of the current camera, wherein the current camera is the rendering camera currently performing sampling;
if the left movement distance is smaller than the right movement distance, calculating U1V1 according to the left camera parameters and U0V0; if the right movement distance is smaller than the left movement distance, calculating U1V1 according to the right camera parameters and U0V0, wherein U0V0 is the UV coordinate of the currently processed vertex, and U1V1 is the converted UV coordinate;
outputting the U1V1 to the pixel shader, so that the pixel shader samples according to U1V1.
A second aspect of the embodiment of the present application shows an image processing method, which is applied to a 3D display terminal shown in the embodiment of the present application, and includes:
if the video source format is a 3D format, calculating the left movement distance and the right movement distance of the current camera, wherein the current camera is the rendering camera currently performing sampling;
if the left movement distance is smaller than the right movement distance, calculating U1V1 according to the left camera parameters and U0V0; if the right movement distance is smaller than the left movement distance, calculating U1V1 according to the right camera parameters and U0V0, wherein U0V0 is the UV coordinate of the currently processed vertex, and U1V1 is the converted UV coordinate;
outputting the U1V1 to the pixel shader, so that the pixel shader samples according to U1V1.
A third aspect of embodiments of the present application shows a controller configured to:
if the video source format is a 3D format, calculating the left movement distance and the right movement distance of the current camera, wherein the current camera is the rendering camera currently performing sampling;
if the left movement distance is smaller than the right movement distance, calculating U1V1 according to the left camera parameters and U0V0; if the right movement distance is smaller than the left movement distance, calculating U1V1 according to the right camera parameters and U0V0, wherein U0V0 is the UV coordinate of the currently processed vertex, and U1V1 is the converted UV coordinate;
outputting the U1V1 to the pixel shader, so that the pixel shader samples according to U1V1.
According to the technical solutions above, the embodiment of the application discloses a 3D display terminal, a controller and an image processing method, wherein the 3D display terminal comprises: a rendering camera for rendering of an image, comprising a left camera and a right camera; a pixel shader for rendering of an image; and a controller configured to: if the video source format is a 3D format, calculate the left movement distance and the right movement distance of the current camera, wherein the current camera is the rendering camera currently performing sampling; if the left movement distance is smaller than the right movement distance, calculate U1V1 according to the left camera parameters and U0V0; if the right movement distance is smaller than the left movement distance, calculate U1V1 according to the right camera parameters and U0V0, wherein U0V0 is the UV coordinate of the currently processed vertex and U1V1 is the converted UV coordinate; and output the U1V1 to the pixel shader, so that the pixel shader samples according to U1V1. The 3D display terminal disclosed in the embodiment of the present application can use the controller to calculate the left movement distance and the right movement distance of the current camera, select the corresponding camera parameters according to the magnitude relationship between the two distances, and calculate U1V1 based on the camera parameters and U0V0, so that the pixel shader samples according to U1V1. Finally, the pixel shader presents the sampled picture to the corresponding eye.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic view of an operation scenario between a display device and a control apparatus according to an embodiment of the present application;
fig. 2 is a block diagram illustrating a hardware configuration of the control apparatus 100 in fig. 1 according to an embodiment of the present disclosure;
fig. 3 is a block diagram illustrating a hardware configuration of the display device 200 in fig. 1 according to an embodiment of the present disclosure;
FIG. 4 is a block diagram illustrating an architectural configuration of an operating system in a memory of the display device 200 according to an embodiment of the present application;
fig. 5 is a schematic structural diagram illustrating a 3D display terminal according to a possible embodiment;
FIG. 6 is a flowchart illustrating operation of a 3D display terminal according to one possible embodiment;
FIG. 7A is an image rendered by a 3D video source according to one possible embodiment;
FIG. 7B is an image rendered by a 3D video source according to one possible embodiment;
FIG. 8 is a flow chart illustrating the calculation of left and right movement distances of a current camera according to one possible embodiment;
FIG. 9 is a schematic diagram of an observation matrix shown in accordance with a possible embodiment;
fig. 10 is a schematic diagram of an observation matrix shown according to a possible embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The 3D display terminal involved in the embodiments of the present application may be a display device, VR glasses, or another terminal used to play a 3D video source. The hardware structure of a 3D display terminal is described below, taking a display device as an example.
Fig. 1 is a schematic diagram illustrating an operation scenario between a display device and a control apparatus. As shown in fig. 1, the control apparatus 100 and the display device 200 may communicate with each other in a wired or wireless manner.
Among them, the control apparatus 100 is configured to control the display device 200; it may receive an operation instruction input by a user and convert it into an instruction that the display device 200 can recognize and respond to, serving as an intermediary for interaction between the user and the display device 200. For example: the user operates the channel up/down keys on the control device 100, and the display device 200 responds to the channel up/down operation.
The control device 100 may be a remote controller 100A, which controls the display apparatus 200 wirelessly or by wire through infrared protocol communication, Bluetooth protocol communication, or other short-distance communication methods. The user may input user instructions through keys on the remote controller, voice input, control panel input, and the like, to control the display apparatus 200. For example: the user can input corresponding control commands through the volume up/down keys, channel control keys, up/down/left/right movement keys, voice input key, menu key, power on/off key, and so on, to control the display device 200.
The control device 100 may also be an intelligent device, such as a mobile display device 100B, a tablet computer, a notebook computer, a remote control handle, and the like. For example, the display device 200 is controlled using an application program running on the smart device. The application program can provide various controls for a User through an intuitive User Interface (UI) on a screen associated with the smart device through configuration.
For example, the mobile display device 100B may install a software application matching the display device 200, establish connection communication through a network communication protocol, and thereby achieve one-to-one control operation and data communication. For example: a control instruction protocol may be established between the mobile display device 100B and the display device 200, so that operating the various function keys or virtual buttons of the user interface provided on the mobile display device 100B implements the functions of the physical keys arranged on the remote control 100A. The audio and video content displayed on the mobile display device 100B may also be transmitted to the display device 200 to achieve a synchronous display function.
The display apparatus 200 may provide a smart network television function that combines a broadcast receiving function with a computer support function. The display device may be implemented as a digital television, a web television, an Internet Protocol Television (IPTV), or the like.
The display device 200 may be a liquid crystal display, an organic light emitting display, a projection device, or a head-mounted display device. The specific display device type, size, resolution, etc. are not limited.
The display apparatus 200 also performs data communication with the server 300 through various communication means. Here, the display apparatus 200 may be allowed to be communicatively connected through a Local Area Network (LAN), a Wireless Local Area Network (WLAN), and other networks. The server 300 may provide various contents and interactions to the display apparatus 200. By way of example, the display device 200 may send and receive information such as: receiving Electronic Program Guide (EPG) data, receiving software Program updates, or accessing a remotely stored digital media library. The servers 300 may be a group or groups of servers, and may be one or more types of servers. Other web service contents such as a video on demand and an advertisement service are provided through the server 300.
Fig. 2 is a block diagram illustrating the configuration of the control device 100. As shown in fig. 2, the control device 100 includes a controller 110, a memory 120, a communicator 130, a user input interface 140, an output interface 150, and a power supply 160.
The controller 110 includes a RAM (Random Access Memory) 111, a ROM (Read-Only Memory) 112, a processor 113, a communication interface, and a communication bus. The controller 110 is used to control the operation of the control device 100, as well as the internal components of the communication cooperation, external and internal data processing functions.
Illustratively, when an interaction of a user pressing a key disposed on the remote controller 100A or an interaction of touching a touch panel disposed on the remote controller 100A is detected, the controller 110 may control to generate a signal corresponding to the detected interaction and transmit the signal to the display device 200.
And a memory 120 for storing various operation programs, data and applications for driving and controlling the control apparatus 100 under the control of the controller 110. The memory 120 may store various control signal commands input by a user.
The communicator 130 enables communication of control signals and data signals with the display apparatus 200 under the control of the controller 110. For example: the control apparatus 100 transmits a control signal (e.g., a touch signal or a button signal) to the display device 200 via the communicator 130, and the control apparatus 100 may receive a signal transmitted by the display device 200 via the communicator 130. The communicator 130 may include an infrared module 131 (infrared signal interface), a radio frequency signal interface 132, and a Bluetooth module 133. For example: when the infrared signal interface is used, a user input instruction needs to be converted into an infrared control signal according to an infrared control protocol, which is then sent to the display device 200 through the infrared sending module. For another example: when the radio frequency signal interface is used, a user input instruction needs to be converted into a digital signal, which is then modulated according to the radio frequency control signal modulation protocol and transmitted to the display device 200 through the radio frequency transmitting terminal.
The user input interface 140 may include at least one of a microphone 141, a touch pad 142, a sensor 143, a key 144, and the like, so that a user can input a user instruction regarding controlling the display apparatus 200 to the control apparatus 100 through voice, touch, gesture, press, and the like.
The output interface 150 outputs a user instruction received by the user input interface 140 to the display apparatus 200, or outputs an image or voice signal received by the display apparatus 200. Here, the output interface 150 may include an LED interface 151, a vibration interface 152 generating vibration, a sound output interface 153 outputting sound, a display 154 outputting an image, and the like. For example, the remote controller 100A may receive an output signal such as audio, video, or data from the output interface 150, and display the output signal in the form of an image on the display 154, in the form of audio on the sound output interface 153, or in the form of vibration on the vibration interface 152.
The power supply 160 provides operating power support for each element of the control device 100 under the control of the controller 110; it may take the form of a battery and associated control circuitry.
A hardware configuration block diagram of the display device 200 is exemplarily shown in fig. 3. As shown in fig. 3, the display apparatus 200 may include a tuner demodulator 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a memory 260, a user interface 265, a video processor 270, a display 275, an audio processor 280, an audio output interface 285, and a power supply 290.
The tuner demodulator 210 receives the broadcast television signal in a wired or wireless manner, may perform modulation and demodulation processing such as amplification, mixing, and resonance, and is configured to demodulate, from a plurality of wireless or wired broadcast television signals, an audio/video signal carried in a frequency of a television channel selected by a user, and additional information (e.g., EPG data).
Under the control of the controller 250, the tuner demodulator 210 responds to the frequency of the television channel selected by the user and to the television signal carried by that frequency.
The tuner demodulator 210 can receive a television signal in various ways according to the broadcasting system of the television signal, such as: terrestrial broadcasting, cable broadcasting, satellite broadcasting, internet broadcasting, or the like; and according to different modulation types, a digital modulation mode or an analog modulation mode can be adopted; and can demodulate the analog signal and the digital signal according to the different kinds of the received television signals.
In other exemplary embodiments, the tuning demodulator 210 may also be in an external device, such as an external set-top box. In this way, the set-top box outputs a television signal after modulation and demodulation, and inputs the television signal into the display apparatus 200 through the external device interface 240.
The communicator 220 is a component for communicating with an external device or an external server according to various communication protocol types. For example, the display apparatus 200 may transmit content data to an external apparatus connected via the communicator 220, or browse and download content data from an external apparatus connected via the communicator 220. The communicator 220 may include a network communication protocol module or a near field communication protocol module, such as a WIFI module 221, a Bluetooth module 222, and a wired Ethernet module 223, so that the communicator 220 may, under the control of the controller 250, receive control signals from the control device 100 in the form of WIFI signals, Bluetooth signals, radio frequency signals, and the like.
The detector 230 is a component of the display apparatus 200 for collecting signals of an external environment or interaction with the outside. The detector 230 may include a sound collector 231, such as a microphone, which may be used to receive a user's sound, such as a voice signal of a control instruction of the user to control the display device 200; alternatively, ambient sounds may be collected that identify the type of ambient scene, enabling the display device 200 to adapt to ambient noise.
In some other exemplary embodiments, the detector 230, which may further include an image collector 232, such as a camera, a video camera, etc., may be configured to collect external environment scenes to adaptively change the display parameters of the display device 200; and the function of acquiring the attribute of the user or interacting gestures with the user so as to realize the interaction between the display equipment and the user.
In some other exemplary embodiments, the detector 230 may further include a light receiver (not shown) for collecting the intensity of the ambient light to adapt to the display parameter variation of the display device 200.
In some other exemplary embodiments, the detector 230 may further include a temperature sensor (not shown), such as by sensing an ambient temperature, and the display device 200 may adaptively adjust a display color temperature of the image. For example, when the temperature is higher, the display apparatus 200 may be adjusted to display a color temperature of an image that is cooler; when the temperature is lower, the display device 200 may be adjusted to display a warmer color temperature of the image.
The external device interface 240 is a component enabling the controller 250 to control data transmission between the display apparatus 200 and external devices. The external device interface 240 may be connected to external apparatuses such as a set-top box, a game device, or a notebook computer in a wired/wireless manner, and may receive data from the external apparatus such as a video signal (e.g., moving images), an audio signal (e.g., music), and additional information (e.g., EPG data).
The external device interface 240 may include: one or more of an HDMI (High Definition Multimedia Interface) terminal 241, a CVBS (Composite Video Blanking and Sync) terminal 242, a Component (analog or digital) terminal 243, a USB (Universal Serial Bus) terminal 244, a Component (Component) terminal (not shown), a red, green, blue (RGB) terminal (not shown), and the like.
The controller 250 controls the operation of the display device 200 and responds to the operation of the user by running various software control programs (such as an operating system and various application programs) stored on the memory 260.
As shown in fig. 3, the controller 250 includes a RAM (random access memory) 251, a ROM (read only memory) 252, an image processor 253, a CPU processor 254, a communication interface 255, and a communication bus 256. Among them, the RAM251, the ROM252, the image processor 253, the CPU processor 254, and the communication interface 255 are connected by a communication bus 256.
The ROM252 stores various system boot instructions. When the display apparatus 200 starts power-on upon receiving the power-on signal, the CPU processor 254 executes a system boot instruction in the ROM252, copies the operating system stored in the memory 260 to the RAM251, and starts running the boot operating system. After the start of the operating system is completed, the CPU processor 254 copies the various application programs in the memory 260 to the RAM251 and then starts running and starting the various application programs.
An image processor 253 for generating various graphic objects such as icons, operation menus, and user input instruction display graphics, etc. The image processor 253 may include an operator for performing an operation by receiving various interactive instructions input by a user, thereby displaying various objects according to display attributes; and a renderer for generating various objects based on the operator and displaying the rendered result on the display 275.
A CPU processor 254 for executing operating system and application program instructions stored in memory 260. And according to the received user input instruction, processing of various application programs, data and contents is executed so as to finally display and play various audio-video contents.
In some example embodiments, the CPU processor 254 may comprise a plurality of processors. The plurality of processors may include one main processor and a plurality of or one sub-processor. A main processor for performing some initialization operations of the display apparatus 200 in the display apparatus preload mode and/or operations of displaying a screen in the normal mode. A plurality of or one sub-processor for performing an operation in a state of a standby mode or the like of the display apparatus.
The communication interface 255 may include a first interface, a second interface, and an nth interface. These interfaces may be network interfaces that are connected to external devices via a network.
The controller 250 may control the overall operation of the display apparatus 200. For example: in response to receiving a User input command for selecting a GUI (Graphical User Interface) object displayed on the display 275, the controller 250 may perform an operation related to the object selected by the User input command.
Where the object may be any one of the selectable objects, such as a hyperlink or an icon. The operation related to the selected object is, for example, an operation of displaying a link to a hyperlink page, document, image, or the like, or an operation of executing a program corresponding to the object. The user input command for selecting the GUI object may be a command input through various input means (e.g., a mouse, a keyboard, a touch panel, etc.) connected to the display apparatus 200 or a voice command corresponding to a voice spoken by the user.
A memory 260 for storing various types of data, software programs, or applications for driving and controlling the operation of the display device 200. The memory 260 may include volatile and/or nonvolatile memory. And the term "memory" includes the memory 260, the RAM251 and the ROM252 of the controller 250, or a memory card in the display device 200.
In some embodiments, the memory 260 is specifically used for storing an operating program for driving the controller 250 of the display device 200; storing various application programs built in the display apparatus 200 and downloaded by a user from an external apparatus; data such as visual effect images for configuring various GUIs provided by the display 275, various objects related to the GUIs, and selectors for selecting GUI objects are stored.
In some embodiments, memory 260 is specifically configured to store drivers for tuner demodulator 210, communicator 220, detector 230, external device interface 240, video processor 270, display 275, audio processor 280, etc., and related data, such as external data (e.g., audio-visual data) received from the external device interface or user data (e.g., key information, voice information, touch information, etc.) received by the user interface.
In some embodiments, memory 260 specifically stores software and/or programs representing an Operating System (OS), which may include, for example: a kernel, middleware, an Application Programming Interface (API), and/or an Application program. Illustratively, the kernel may control or manage system resources, as well as functions implemented by other programs (e.g., the middleware, APIs, or applications); at the same time, the kernel may provide an interface to allow middleware, APIs, or applications to access the controller to enable control or management of system resources.
In some embodiments, the display device 200 may have no control apparatus and instead receive input operations through its own control input component. A block diagram of the architectural configuration of the operating system in the memory of the display device 200 is illustrated in fig. 4. The operating system architecture comprises, from top to bottom, an application layer, a middleware layer and a kernel layer.
Applications built into the system and non-system applications both belong to the application layer, which is responsible for direct interaction with the user. The application layer may include a plurality of applications, such as a setup application, a post application, a media center application, and the like. These applications may be implemented as Web applications that execute based on a WebKit engine, and in particular may be developed and executed based on HTML5, Cascading Style Sheets (CSS), and JavaScript.
Here, HTML (HyperText Markup Language) is the standard markup language for creating web pages. It describes web pages by means of markup tags, which are used to describe text, graphics, animation, sound, tables, links, and so on; a browser reads an HTML document, interprets the content of the tags in the document, and displays the content in the form of a web page.
CSS (Cascading Style Sheets) is a computer language used to express the style of HTML documents, and may be used to define style structures such as fonts, colors, and positions. A CSS style can be stored directly in an HTML web page or in a separate style file, so that the styles in the web page can be controlled.
JavaScript is a language suitable for web page programming; it can be inserted into an HTML page and is interpreted and executed by a browser. The interaction logic of a Web application is implemented in JavaScript. JavaScript can wrap a JavaScript extension interface through the browser to communicate with the kernel layer.
the middleware layer may provide some standardized interfaces to support the operation of various environments and systems. For example, the middleware layer may be implemented as Multimedia and Hypermedia Experts Group (MHEG) middleware related to data broadcasting, DLNA (Digital Living Network Alliance) middleware of middleware related to communication with an external device, middleware providing a browser environment in which each application program in the display device operates, and the like.
The kernel layer provides core system services, such as: file management, memory management, process management, network management, system security authority management and the like. The kernel layer may be implemented as a kernel based on various operating systems, for example, a kernel based on the Linux operating system.
The kernel layer also provides communication between system software and hardware, and provides device driver services for various hardware, such as: the display driver is provided for the display, the camera driver is provided for the camera, the key driver is provided for the remote controller, the WIFI driver is provided for the WIFI module, the audio driver is provided for the audio output interface, the Power Management driver is provided for the Power Management (PM) module, and the like.
In some embodiments, the display device may use software systems of other architectures.
A user interface 265 receives various user interactions. Specifically, it is used to transmit an input signal of a user to the controller 250 or transmit an output signal from the controller 250 to the user. For example, the remote controller 100A may transmit an input signal, such as a power switch signal, a channel selection signal, a volume adjustment signal, etc., input by the user to the user interface 265, and then the input signal is transferred to the controller 250 through the user interface 265; alternatively, the remote controller 100A may receive an output signal such as audio, video, or data output from the user interface 265 via the controller 250, and display the received output signal or output the received output signal in audio or vibration form.
In some embodiments, a user may enter user commands on a Graphical User Interface (GUI) displayed on the display 275, and the user interface 265 receives the user input commands through the GUI. Specifically, the user interface 265 may receive user input commands for controlling the position of a selector in the GUI to select different objects or items.
Alternatively, the user may input a user command by inputting a specific sound or gesture, and the user interface 265 receives the user input command by recognizing the sound or gesture through the sensor.
The video processor 270 is configured to receive an external video signal, and perform video data processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image synthesis according to a standard codec protocol of the input signal, so as to obtain a video signal that is directly displayed or played on the display 275.
Illustratively, the video processor 270 includes a demultiplexing module, a video decoding module, an image synthesizing module, a frame rate conversion module, a display formatting module, and the like.
The demultiplexing module is configured to demultiplex an input audio/video data stream; for example, for an input MPEG-2 stream (a compression standard for digital storage media moving images and audio), the demultiplexing module separates it into a video signal and an audio signal.
And the video decoding module is used for processing the video signal after demultiplexing, including decoding, scaling and the like.
The image synthesis module is used to superimpose and mix the GUI signal, input by the user or generated by the graphics generator, with the scaled video image, so as to generate an image signal for display.
The frame rate conversion module is configured to convert the frame rate of an input video, for example converting a 60Hz input into a 120Hz or 240Hz output, usually by means of frame interpolation.
And a display formatting module for converting the signal output by the frame rate conversion module into a signal conforming to a display format of a display, such as converting the format of the signal output by the frame rate conversion module to output an RGB data signal.
A display 275 for receiving the image signal from the video processor 270 and displaying video content, images and the menu manipulation interface. The displayed video content may come from the broadcast signal received by the tuner-demodulator 210, or from video content input through the communicator 220 or the external device interface 240. The display 275 also displays a user manipulation interface (UI) generated in the display apparatus 200 and used to control the display apparatus 200.
And, the display 275 may include a display screen assembly for presenting a picture and a driving assembly for driving the display of an image. Alternatively, a projection device and projection screen may be included, provided display 275 is a projection display.
The audio processor 280 is configured to receive an external audio signal, decompress and decode the received audio signal according to a standard codec protocol of the input signal, and perform audio data processing such as noise reduction, digital-to-analog conversion, and amplification processing to obtain an audio signal that can be played by the speaker 286.
Illustratively, audio processor 280 may support various audio formats. Such as MPEG-2, MPEG-4, high-level audio coding (AAC), high-efficiency AAC (HE-AAC), and the like.
The audio output interface 285 is used for receiving the audio signal output by the audio processor 280 under the control of the controller 250. The audio output interface 285 may include a speaker 286, or an external sound output terminal 287 such as an earphone output terminal for outputting to a sound-producing device of an external apparatus.
In other exemplary embodiments, video processor 270 may comprise one or more chips. Audio processor 280 may also comprise one or more chips.
And, in other exemplary embodiments, the video processor 270 and the audio processor 280 may be separate chips or may be integrated with the controller 250 in one or more chips.
And a power supply 290 for supplying power supply support to the display apparatus 200 from the power input from the external power source under the control of the controller 250. The power supply 290 may be a built-in power supply circuit installed inside the display apparatus 200 or may be a power supply installed outside the display apparatus 200.
In some embodiments, components in the display apparatus 200 or the control apparatus 100 may be added or removed as needed.
For the same or similar parts among the various embodiments in this specification, reference may be made to one another.
In recent years, VR (Virtual Reality) technology has developed rapidly, and 3D stereoscopic video has gained a new display medium: VR glasses. Unlike the traditional approach, VR glasses are closed displays with two screens corresponding to the viewer's two eyes (or a single screen split into two halves, one per eye), rather than projecting the pictures for the two eyes onto the same screen. Therefore, it is necessary to automatically determine which eye the currently rendered screen corresponds to, and to automatically select the picture for that eye from the synthesized 3D video source for output and display.
In order to automatically select the picture of the corresponding eye from a synthesized 3D video source for output and display, an embodiment of the present application provides a 3D display terminal; refer to fig. 5 and fig. 6. Fig. 5 is a schematic diagram of a 3D display terminal according to a feasible embodiment; it can be seen that the 3D display terminal includes at least a rendering camera 1, a controller 2 (corresponding to the controller 250 described above) and a pixel shader 3. Fig. 6 is a flowchart illustrating the operation of the 3D display terminal according to a possible embodiment.
The controller is configured to perform the steps of:
S101, receiving a video source;
the video source in this application is a video signal, which may include: a network video signal, a pre-downloaded signal, or a wired video signal transmitted through a USB interface. A live television application program can be installed on the controller and can provide live television from different signal sources. A video-on-demand application may be installed on the controller and can provide video signals from different storage sources; unlike live television applications, video-on-demand provides video playback from a storage source. A media center application program can be installed on the controller and can receive video signals sent by a server. It should be noted that this embodiment only describes several video sources by way of example; in practical applications the video source may be, but is not limited to, the above forms.
S102, judging the format of the video source;
the video source shown in the embodiment of the present application includes a 3D format video source and a 2D format video source.
The 3D video source is characterized by being divided into a left-eye signal and a right-eye signal. The left-eye signal and the right-eye signal may be arranged side by side within one frame; for example, fig. 7A shows an image rendered from a 3D video source according to a possible embodiment, where the left image corresponds to the left-eye signal and the right image corresponds to the right-eye signal. It is also possible that, within one frame, the upper half is the left-eye signal and the lower half is the right-eye signal; for example, fig. 7B shows an image rendered from a 3D video source according to a possible embodiment, where the upper image corresponds to the left-eye signal and the lower image corresponds to the right-eye signal. The embodiments of the present application only describe two 3D formats by way of example; in practical applications, a 3D-format video may also be in other formats.
There are various ways to determine the format of the video source:
For example, the server may configure different identifiers for 2D video sources and 3D video sources in advance. When the server issues a video source, it issues the identifier value together with it. The controller can then determine the format of the received video source by reading the identifier. For instance, the server may add a first identifier to a 2D video source and a second identifier to a 3D video source in advance; when the server receives a play request sent by the 3D display terminal, it sends the video source carrying the identifier to the controller, and the controller determines the format of the received video source by reading the identifier.
For another example, the server may send a notification before issuing a video source, the notification being used to inform the controller whether the video source to be sent is a 3D-format video source or a 2D-format video source. For instance, the server may send a 2D video source notification to the controller in advance, and the controller, in response to the notification, determines that the video source received subsequently is a video source in 2D format.
The embodiment of the present application merely describes two implementation manners of determining the format of the video source by way of example, and in the process of practical application, the implementation manner of determining the format of the video source may be, but is not limited to, the two manners described above.
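As an illustration of the identifier-based approach described above, the following minimal sketch (not taken from the patent itself; the field name formatId and the identifier values are assumptions) shows how a controller might map a server-supplied identifier to a video source format:

    enum class VideoFormat { Format2D, Format3D };

    struct VideoSource {
        // identifier configured by the server and delivered with the video source;
        // the concrete values (1 = 2D, 2 = 3D) are assumptions for this sketch
        int formatId;
    };

    VideoFormat DetectFormat(const VideoSource& source) {
        return (source.formatId == 2) ? VideoFormat::Format3D
                                      : VideoFormat::Format2D;
    }

The notification-based approach would work analogously, with the controller caching the most recently received format notification instead of reading a per-source identifier.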
If the video source format is a 3D format, step S10311 is executed to calculate the left movement distance and the right movement distance of the current camera, where the current camera is the rendering camera currently performing sampling;
in a feasible embodiment, the left movement distance and the right movement distance of the current camera can be calculated in the following manner. Specifically, referring to fig. 8, fig. 8 is a flowchart illustrating a calculation process of the left movement distance and the right movement distance of the current camera according to a feasible embodiment, wherein the controller is further configured to:
S10311A reads a reference coordinate and an observation matrix, wherein the reference coordinate is a coordinate of the middle camera;
the 3D display device in this application includes three cameras: a Left camera C_Left, a Middle camera C_Middle, and a Right camera C_Right. In the embodiment of the present application, the world coordinate of C_Middle is set to (0.0, 0.0, 0.0) by way of example; this camera is located between the other two cameras and is used to assist in determining whether the currently rendered camera is the left camera or the right camera. It is not used for rendering display and therefore does not generate a rendered frame. The world coordinate of C_Left is set to (-0.05, 0.0, 0.0) by way of example; this camera is used for rendering and displaying the left-eye picture. The world coordinate of the third camera, C_Right, is set to (0.05, 0.0, 0.0) by way of example; this camera is used for rendering and displaying the right-eye picture. C_Left and C_Right are located on either side of C_Middle and correspond to their respective rendering processes; each frame executes the rendering processes of these two cameras;
the controller may read an observation matrix and reference coordinates, the reference coordinates being coordinates of the intermediate camera, the observation matrix being known in the art.
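For concreteness, the camera layout described above can be sketched as follows (an illustrative sketch only; the Vec3 type and the producesFrames flag are assumptions, while the coordinate values follow the example values in the text):

    struct Vec3 { float x, y, z; };

    struct RenderCamera {
        Vec3 worldPosition;
        bool producesFrames;   // C_Middle only assists the left/right decision
    };

    // example world coordinates from the text
    RenderCamera C_Left   {{-0.05f, 0.0f, 0.0f}, true};   // renders the left-eye picture
    RenderCamera C_Middle {{ 0.00f, 0.0f, 0.0f}, false};  // reference only, no rendered frame
    RenderCamera C_Right  {{ 0.05f, 0.0f, 0.0f}, true};   // renders the right-eye picture

    // the reference coordinate read in step S10311A is the middle camera's position
    Vec3 WorldMiddlePosition = C_Middle.worldPosition;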
S10311B reading the offset vector of the current camera in the preset direction in the observation matrix;
the controller may derive the offset vector of the current camera in a preset direction based on the observation matrix.
For example: the first three elements of row a of the observation matrix are the right offset vector of the current camera; in a possible embodiment, a is 1. The controller may obtain the offset vector of the current camera in the right direction by reading the first three elements of the a-th row of the observation matrix. For example, fig. 9 is a schematic diagram of an observation matrix according to a possible embodiment, where the first three elements in row 1 of the observation matrix are the right offset vector of the current camera; the controller reads the first three elements of row 1 of the observation matrix to obtain the offset vector (0, 1, 1) of the current camera in the right direction.
For another example: in a possible embodiment, the offset vector of the current camera in the left direction is the negative of the offset vector of the current camera in the right direction; specifically, continuing with fig. 9, the offset vector of the current camera in the left direction is obtained as (0, -1, -1).
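The extraction of the offset vectors can be illustrated with the following sketch, which assumes a row-major 4x4 observation (view) matrix and a = 1, i.e. the first row holds the right vector (both assumptions made for illustration):

    struct Vec3 { float x, y, z; };
    struct Mat4 { float m[4][4]; };   // row-major observation (view) matrix

    // first three elements of row 1 (index 0) are the right offset vector
    Vec3 RightOffset(const Mat4& view) {
        return { view.m[0][0], view.m[0][1], view.m[0][2] };
    }

    // the left offset vector is the negation of the right offset vector
    Vec3 LeftOffset(const Mat4& view) {
        Vec3 r = RightOffset(view);
        return { -r.x, -r.y, -r.z };
    }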
S10311C calculates a right movement distance and a left movement distance of the current camera according to the reference coordinates and the offset vector.
The manner of calculating the movement distances of the current camera may be as follows. The controller is further configured to calculate the right movement distance and the left movement distance of the current camera according to the following formulas: DisRight = Distance(WorldMiddlePosition + WorldCameraRight, WorldCameraPosition); DisLeft = Distance(WorldMiddlePosition - WorldCameraRight, WorldCameraPosition); where DisRight is the right movement distance of the current camera, DisLeft is the left movement distance of the current camera, WorldMiddlePosition is the reference coordinate, WorldCameraRight is the offset vector of the current camera in the right direction, and WorldCameraPosition is the world coordinate of the current camera. In this application, Distance(x, y) denotes the distance between point x and point y.
The right and left movement distances of the current camera are described below with reference to specific examples.
In one possible embodiment, WorldMiddlePosition is (A, B, C); the controller reads the observation matrix, which can be referred to in fig. 10, and the WorldCameraRight is obtained as (0, E, F);
DisRight=Distance[(A,B,C)+(0,E,F),(A,B,C)];
in one possible embodiment, WorldMiddlePosition is (A, B, C); the controller reads the observation matrix, which can be referred to in fig. 10, and the WorldCameraRight is obtained as (0, E, F);
DisLeft=Distance[(A,B,C)-(0,E,F),(A,B,C)]。
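A small sketch of this distance computation is given below (illustrative only; Distance is taken to be the Euclidean distance between two points, and the Vec3 helpers are assumptions):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3 Add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    Vec3 Sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

    // Distance(x, y): distance between point x and point y
    float Distance(Vec3 x, Vec3 y) {
        Vec3 d = Sub(x, y);
        return std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    }

    // DisRight = Distance(WorldMiddlePosition + WorldCameraRight, WorldCameraPosition)
    // DisLeft  = Distance(WorldMiddlePosition - WorldCameraRight, WorldCameraPosition)
    void MoveDistances(Vec3 WorldMiddlePosition, Vec3 WorldCameraRight, Vec3 WorldCameraPosition,
                       float& DisRight, float& DisLeft) {
        DisRight = Distance(Add(WorldMiddlePosition, WorldCameraRight), WorldCameraPosition);
        DisLeft  = Distance(Sub(WorldMiddlePosition, WorldCameraRight), WorldCameraPosition);
    }

For the left camera, the point WorldMiddlePosition - WorldCameraRight is displaced toward the left camera, so DisLeft comes out smaller than DisRight; for the right camera the opposite holds.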
The manner of calculating the movement distances of the current camera may also be as follows. The controller is further configured to calculate the right movement distance and the left movement distance of the current camera according to the following formulas: DisRight = Distance(WorldMiddlePosition - WorldCameraLeft, WorldCameraPosition);
DisLeft = Distance(WorldMiddlePosition + WorldCameraLeft, WorldCameraPosition); where DisRight is the right movement distance of the current camera, DisLeft is the left movement distance of the current camera, WorldMiddlePosition is the reference coordinate, and WorldCameraLeft is the offset vector of the current camera in the left direction.
The right and left movement distances of the current camera are described below with reference to specific examples.
In one possible embodiment, WorldMiddlePosition is (A, B, C); the controller reads the observation matrix as shown in FIG. 10, which results in WorldCameraLeft values of (0, -E, -F);
DisRight=Distance[(A,B,C)-(0,-E,-F),(A,B,C)];
in one possible embodiment, WorldMiddlePosition is (A, B, C); the controller reads the observation matrix, which can be seen in fig. 10, and WorldCameraLeft is obtained as (0, -E, -F);
DisLeft=Distance[(A,B,C)+(0,-E,-F),(A,B,C)]。
If the left movement distance is less than the right movement distance, step S10312 is executed to calculate U1V1 according to the left camera parameters and U0V0;
if the right movement distance is less than the left movement distance, step S10313 is executed to calculate U1V1 according to the right camera parameters and U0V0, where U0V0 is the UV coordinate of the currently processed vertex, and U1V1 is the converted UV coordinate;
S10314: output the U1V1 to the pixel shader, so that the pixel shader samples according to U1V1.
In the embodiment of the application, if DisLeft is smaller than DisRight, the current rendering camera is the left camera C_Left, and U1V1 is calculated according to the left camera parameters and U0V0; if DisRight is smaller than DisLeft, the current rendering camera is the right camera C_Right, and U1V1 is calculated according to the right camera parameters and U0V0.
Specifically, the controller may calculate U1V1 according to the following formulas. The camera parameters may include: a factor U_Scale by which U0V0 is scaled in the U direction when sampling the 3D video source, a factor V_Scale by which U0V0 is scaled in the V direction when sampling the 3D video source, an offset distance U_Offset applied to U0V0 in the U direction when sampling the 3D video source, and an offset distance V_Offset applied to U0V0 in the V direction when sampling the 3D video source. U_Scale may include U_Scale_L and U_Scale_R, V_Scale may include V_Scale_L and V_Scale_R, U_Offset may include U_Offset_L and U_Offset_R, and V_Offset may include V_Offset_L and V_Offset_R. In the present application, U_Scale_L, V_Scale_L, U_Offset_L and V_Offset_L are referred to as the left camera parameters, and U_Scale_R, V_Scale_R, U_Offset_R and V_Offset_R are referred to as the right camera parameters.
The calculation process of U1V1 may be: if the left movement distance is less than the right movement distance, U1 = U_Scale_L × U0 + U_Offset_L and V1 = V_Scale_L × V0 + V_Offset_L.
If the right movement distance is less than the left movement distance, U1 = U_Scale_R × U0 + U_Offset_R and V1 = V_Scale_R × V0 + V_Offset_R.
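The selection between the left and right camera parameters and the resulting UV conversion can be summarized in the following sketch (illustrative only; the CameraParams and UV types are assumptions):

    struct UV { float u, v; };
    struct CameraParams { float U_Scale, V_Scale, U_Offset, V_Offset; };

    // DisLeft smaller  -> current camera is C_Left,  use the left camera parameters
    // DisRight smaller -> current camera is C_Right, use the right camera parameters
    UV ConvertUV(UV uv0, float DisLeft, float DisRight,
                 const CameraParams& leftParams, const CameraParams& rightParams) {
        const CameraParams& p = (DisLeft < DisRight) ? leftParams : rightParams;
        return { p.U_Scale * uv0.u + p.U_Offset,     // U1
                 p.V_Scale * uv0.v + p.V_Offset };   // V1, both handed to the pixel shader
    }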
In some feasible embodiments, the 3D display device may process both top-bottom-format 3D video sources and left-right-format 3D video sources. The camera parameters for the top-bottom format and the left-right format differ to a certain extent, so the controller needs to determine in advance whether the 3D video source is in the top-bottom format or the left-right format;
there are various ways to determine whether the 3D video source is a top-bottom-format 3D video source or a left-right-format 3D video source:
For example, the server may configure different identifiers in advance for top-bottom-format 3D video sources and left-right-format 3D video sources. When the server issues the video source, it issues the identifier value together with it. The controller can determine whether the received 3D video source is in the top-bottom format or the left-right format by reading the identifier.
For another example, the server may send a notification before issuing the video source, the notification being used to inform the controller whether the video source to be sent is a top-bottom-format 3D video source or a left-right-format 3D video source.
The embodiment of the present application merely describes two implementation manners of determining the format of the video source by way of example, and in the process of practical application, the implementation manner of determining the format of the video source may be, but is not limited to, the two manners described above.
If the left movement distance is less than the right movement distance, the controller is further configured to: if the video source is a left-right-format 3D video source, calculate U1V1 according to the first parameters and U0V0; if the video source is a top-bottom-format 3D video source, calculate U1V1 according to the second parameters and U0V0, where the first parameters are different from the second parameters.
The first parameters include: U_Scale_L_LR = 0.5, V_Scale_L_LR = 1.0, U_Offset_L_LR = 0.0, V_Offset_L_LR = 0.0;
the second parameters include: U_Scale_L_UD = 1.0, V_Scale_L_UD = 0.5, U_Offset_L_UD = 0.0, V_Offset_L_UD = 0.5.
If the right movement distance is less than the left movement distance, the controller is further configured to: if the video source is a left-right-format 3D video source, calculate U1V1 according to the third parameters and U0V0; if the video source is a top-bottom-format 3D video source, calculate U1V1 according to the fourth parameters and U0V0, where the third parameters are different from the fourth parameters.
The third parameters include: U_Scale_R_LR = 0.5, V_Scale_R_LR = 1.0, U_Offset_R_LR = 0.5, V_Offset_R_LR = 0.0;
the fourth parameters include: U_Scale_R_UD = 1.0, V_Scale_R_UD = 0.5, U_Offset_R_UD = 0.0, V_Offset_R_UD = 0.0.
When the video source format is Top_Bottom_3D and the current rendering camera is C_Left: U_Scale_L_UD = 1.0, V_Scale_L_UD = 0.5, U_Offset_L_UD = 0.0, V_Offset_L_UD = 0.5; then U1 = U and V1 = 0.5V + 0.5.
When the video source format is Top_Bottom_3D and the current rendering camera is C_Right: U_Scale_R_UD = 1.0, V_Scale_R_UD = 0.5, U_Offset_R_UD = 0.0, V_Offset_R_UD = 0.0; then U1 = U and V1 = 0.5V.
When the video source format is Left_Right_3D and the current rendering camera is C_Left: U_Scale_L_LR = 0.5, V_Scale_L_LR = 1.0, U_Offset_L_LR = 0.0, V_Offset_L_LR = 0.0; then U1 = 0.5U and V1 = V.
When the video source format is Left_Right_3D and the current rendering camera is C_Right: U_Scale_R_LR = 0.5, V_Scale_R_LR = 1.0, U_Offset_R_LR = 0.5, V_Offset_R_LR = 0.0; then U1 = 0.5U + 0.5 and V1 = V. Finally, U1 and V1 are output, and sampling and rendering are completed in the pixel shader.
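The four combinations above can be collected into a small parameter table and checked directly. The sketch below only restates the values already listed; the table and function names are illustrative assumptions of this sketch rather than part of the application.

    # (U_Scale, V_Scale, U_Offset, V_Offset) per (video source format, rendering camera).
    CAMERA_PARAMS = {
        ("Left_Right_3D", "C_Left"):  (0.5, 1.0, 0.0, 0.0),  # first parameter
        ("Top_Bottom_3D", "C_Left"):  (1.0, 0.5, 0.0, 0.5),  # second parameter
        ("Left_Right_3D", "C_Right"): (0.5, 1.0, 0.5, 0.0),  # third parameter
        ("Top_Bottom_3D", "C_Right"): (1.0, 0.5, 0.0, 0.0),  # fourth parameter
    }

    def transform_uv(fmt, camera, u0, v0):
        # Convert the vertex UV (U0V0) into the sampling UV (U1V1).
        u_scale, v_scale, u_offset, v_offset = CAMERA_PARAMS[(fmt, camera)]
        return u_scale * u0 + u_offset, v_scale * v0 + v_offset

    # Top-bottom source, left camera: V0 in [0, 1] maps to V1 in [0.5, 1].
    print(transform_uv("Top_Bottom_3D", "C_Left", 0.25, 0.5))   # (0.25, 0.75)
    # Left-right source, right camera: U0 in [0, 1] maps to U1 in [0.5, 1].
    print(transform_uv("Left_Right_3D", "C_Right", 0.25, 0.5))  # (0.625, 0.5)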
If the video source format is a 2D format, step S10321 is executed: the UV coordinates of the currently processed vertex are output to the pixel shader.
If the right movement distance is larger than the left movement distance, the rendered image is presented to the left eye;
if the right movement distance is smaller than the left movement distance, the rendered image is presented to the right eye.
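The eye-selection rule above is simple enough to state as a two-line sketch; the function name is illustrative only.

    def target_eye(dis_left, dis_right):
        # Present to the left eye when the right movement distance is larger,
        # otherwise to the right eye.
        return "left eye" if dis_right > dis_left else "right eye"

    print(target_eye(0.0325, 0.0975))  # left eye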
The embodiments of the present application disclose a 3D display terminal, a controller and an image processing method, wherein the 3D display terminal comprises: a rendering camera, a pixel shader, and a controller. The 3D display terminal disclosed in this embodiment may calculate the left movement distance and the right movement distance of the current camera by using the controller, select the corresponding camera parameters according to the magnitude relationship between the left movement distance and the right movement distance of the current camera, and calculate U1V1 based on the camera parameters and U0V0, so that the pixel shader performs sampling according to the U1V1. Finally, the pixel shader presents the picture obtained after sampling to the corresponding eye.
A second aspect of the embodiments of the present application discloses an image processing method, applied to the 3D display terminal described in the embodiments of the present application, the method including:
if the video source format is a 3D format, calculating the left movement distance and the right movement distance of the current camera, wherein the current camera is a current sampling rendering camera;
if the left movement distance is smaller than the right movement distance, calculating U1V1 according to the left camera parameters and U0V 0; if the right movement distance is smaller than the left movement distance, calculating U1V1 according to the right camera parameters and U0V0, wherein U0V0 is the UV coordinate of the currently processed vertex, and U1V1 is the converted UV coordinate;
outputting the U1V1 to the pixel shader such that the pixel shader samples according to the U1V 1.
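Putting the steps of the method together, a compact end-to-end sketch might look as follows. It reuses the Distance-based movement-distance formulas that appear later in the claims and the parameter values listed above; the vector values and function names are assumptions for illustration, not the application's implementation.

    import math

    PARAMS = {  # (U_Scale, V_Scale, U_Offset, V_Offset), values as listed above
        ("Left_Right_3D", "C_Left"):  (0.5, 1.0, 0.0, 0.0),
        ("Left_Right_3D", "C_Right"): (0.5, 1.0, 0.5, 0.0),
        ("Top_Bottom_3D", "C_Left"):  (1.0, 0.5, 0.0, 0.5),
        ("Top_Bottom_3D", "C_Right"): (1.0, 0.5, 0.0, 0.0),
    }

    def process_vertex_uv(fmt, world_middle, world_camera_right, world_camera_pos, u0, v0):
        # 2D sources pass the vertex UV through unchanged.
        if fmt == "2D":
            return u0, v0
        # Left/right movement distances of the current rendering camera, computed from
        # the middle-camera reference coordinate and the right-direction offset vector
        # (see the Distance formulas later in the claims).
        dis_right = math.dist([m + r for m, r in zip(world_middle, world_camera_right)],
                              world_camera_pos)
        dis_left = math.dist([m - r for m, r in zip(world_middle, world_camera_right)],
                             world_camera_pos)
        camera = "C_Left" if dis_left < dis_right else "C_Right"
        u_scale, v_scale, u_offset, v_offset = PARAMS[(fmt, camera)]
        return u_scale * u0 + u_offset, v_scale * v0 + v_offset  # U1V1 for the pixel shader

    # A camera sitting slightly to the left of the middle camera samples the left view.
    print(process_vertex_uv("Top_Bottom_3D", (0.0, 0.0, 0.0), (0.065, 0.0, 0.0),
                            (-0.0325, 0.0, 0.0), 0.5, 0.5))  # (0.5, 0.75)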
A third aspect of embodiments of the present application discloses a controller configured to:
if the video source format is a 3D format, calculating the left movement distance and the right movement distance of the current camera, wherein the current camera is a current sampling rendering camera;
if the left movement distance is smaller than the right movement distance, calculating U1V1 according to the left camera parameters and U0V 0; if the right movement distance is smaller than the left movement distance, calculating U1V1 according to the right camera parameters and U0V0, wherein U0V0 is the UV coordinate of the currently processed vertex, and U1V1 is the converted UV coordinate;
outputting the U1V1 to the pixel shader, so that the pixel shader samples according to the U1V1.
The embodiments of the present application further provide a chip, which is connected with a memory or includes a memory, and is configured to read and execute a software program stored in the memory so as to perform the method provided by the embodiments of the present application.
The embodiments of the present application also provide a computer program product comprising one or more computer program instructions. When the computer program instructions are loaded and executed by a computer, the processes or functions according to the above embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. When run on a computer, the computer program instructions cause the computer to perform the method provided by the embodiments of the present application.
The embodiments also provide a computer-readable storage medium that can store computer program instructions; when the program instructions are executed, all the steps of the image processing method of the above embodiments of the present application can be implemented. The computer-readable storage medium includes a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), and the like.
In the above embodiments, all or part of the functions may be implemented by software, hardware, firmware, or any combination thereof. When implemented in software, the embodiments may be implemented in whole or in part in the form of a computer program product, which is not limited herein.
Those skilled in the art will also appreciate that the various illustrative logical blocks and steps described herein may be implemented in electronic hardware, computer software, or a combination of both. Whether such functionality is implemented as hardware or software depends on the particular application and the design requirements of the overall system. Those skilled in the art may implement the described functions in various ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The various illustrative logical units and circuits described in this application may be implemented or operated through the design of a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in this application may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may be located in a UE. In the alternative, the processor and the storage medium may reside in different components in the UE.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the processes do not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the inherent logic, and should not constitute any limitation to the implementation process of the present application.

Claims (10)

1. A 3D display terminal, comprising:
a rendering camera for rendering of an image, comprising: a left camera and a right camera;
a pixel shader for rendering of an image;
a controller configured to: if the video source format is a 3D format, calculating the left movement distance and the right movement distance of the current camera, wherein the current camera is a current sampling rendering camera;
if the left movement distance is smaller than the right movement distance, calculating U1V1 according to the left camera parameters and U0V 0; if the right movement distance is smaller than the left movement distance, calculating U1V1 according to the right camera parameters and U0V0, wherein U0V0 is the UV coordinate of the currently processed vertex, and U1V1 is the converted UV coordinate;
outputting the U1V1 to the pixel shader such that the pixel shader samples according to the U1V 1.
2. The 3D display terminal of claim 1, wherein if the video source format is a 2D format, the controller is further configured to: and outputting the UV coordinates of the currently processed vertex to the pixel shader.
3. The 3D display terminal of claim 1, wherein the controller is further configured to:
reading U_Scale, V_Scale, U_Offset and V_Offset, wherein U_Scale is the multiple by which U0V0 is scaled in the U direction when sampled on the 3D video source, V_Scale is the multiple by which U0V0 is scaled in the V direction when sampled on the 3D video source, U_Offset is the distance by which U0V0 is moved in the U direction when sampled on the 3D video source, and V_Offset is the distance by which U0V0 is moved in the V direction when sampled on the 3D video source;
calculating U1V1 according to the following formula, wherein the U1V1 is a UV coordinate of a display image;
U1=U_Scale*U+U_Offset;
V1=V_Scale*V+V_Offset.
4. The 3D display terminal of claim 1 or 2, further comprising a middle camera disposed between the left camera and the right camera;
if the video source format is a 3D format, the controller is further configured to:
reading a reference coordinate and an observation matrix, wherein the reference coordinate is the coordinate of the middle camera;
reading an offset vector of the current camera in a preset direction from the observation matrix;
and calculating the right movement distance and the left movement distance of the current camera according to the reference coordinate and the offset vector.
5. The 3D display terminal of claim 4, wherein the controller is further configured to:
calculating the right movement distance and the left movement distance of the current camera according to the following formula;
DisRight=Distance(WorldMiddlePosition+WorldCameraRight,WorldCameraPosition);
DisLeft=Distance(WorldMiddlePosition-WorldCameraRight,WorldCameraPosition); wherein DisRight is the right movement distance of the current camera, DisLeft is the left movement distance of the current camera, WorldMiddlePosition is the reference coordinate, and WorldCameraRight is the offset vector of the current camera in the right direction.
6. The 3D display terminal of claim 4, wherein the controller is further configured to:
calculating the left movement distance and the right movement distance of the current camera according to the following formula;
DisRight=Distance(WorldMiddlePosition-WorldCameraLeft,WorldCameraPosition);
DisLeft=Distance(WorldMiddlePosition+WorldCameraLeft,WorldCameraPosition); wherein DisRight is the right movement distance of the current camera, DisLeft is the left movement distance of the current camera, WorldMiddlePosition is the reference coordinate, and WorldCameraLeft is the offset vector of the current camera in the left direction.
7. The 3D display terminal according to any one of claims 1-3, wherein the video source comprises a 3D video source in a left-right format and a 3D video source in a top-bottom format; if the left movement distance is less than the right movement distance, the controller is further configured to:
if the video source is a left-right format 3D video source, calculating U1V1 according to the first parameter and U0V 0;
if the video source is a 3D video source in a top-bottom format, calculating U1V1 according to the second parameter and U0V0, wherein the first parameter is different from the second parameter.
8. The 3D display terminal according to any one of claims 1-3, wherein the video source comprises a 3D video source in a left-right format and a 3D video source in a top-bottom format; if the right movement distance is less than the left movement distance, the controller is further configured to:
if the video source is a left-right format 3D video source, calculating U1V1 according to the third parameter and U0V 0;
if the video source is a 3D video source in a top-bottom format, calculating U1V1 according to the fourth parameter and U0V0, wherein the third parameter is different from the fourth parameter.
9. An image processing method applied to the 3D display terminal according to any one of claims 1 to 8, comprising:
if the video source format is a 3D format, calculating the left movement distance and the right movement distance of the current camera, wherein the current camera is a current sampling rendering camera;
if the left movement distance is smaller than the right movement distance, calculating U1V1 according to the left camera parameters and U0V 0; if the right movement distance is smaller than the left movement distance, calculating U1V1 according to the right camera parameters and U0V0, wherein U0V0 is the UV coordinate of the currently processed vertex, and U1V1 is the converted UV coordinate;
outputting the U1V1 to the pixel shader such that the pixel shader samples according to the U1V 1.
10. A controller, configured to:
if the video source format is a 3D format, calculating the left movement distance and the right movement distance of the current camera, wherein the current camera is a current sampling rendering camera;
if the left movement distance is smaller than the right movement distance, calculating U1V1 according to the left camera parameters and U0V 0; if the right movement distance is smaller than the left movement distance, calculating U1V1 according to the right camera parameters and U0V0, wherein U0V0 is the UV coordinate of the currently processed vertex, and U1V1 is the converted UV coordinate;
outputting the U1V1 to the pixel shader such that the pixel shader samples according to the U1V 1.
CN202011082903.0A 2020-10-12 2020-10-12 3D display terminal, controller and image processing method Active CN112235562B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011082903.0A CN112235562B (en) 2020-10-12 2020-10-12 3D display terminal, controller and image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011082903.0A CN112235562B (en) 2020-10-12 2020-10-12 3D display terminal, controller and image processing method

Publications (2)

Publication Number Publication Date
CN112235562A true CN112235562A (en) 2021-01-15
CN112235562B CN112235562B (en) 2023-09-15

Family

ID=74113301

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011082903.0A Active CN112235562B (en) 2020-10-12 2020-10-12 3D display terminal, controller and image processing method

Country Status (1)

Country Link
CN (1) CN112235562B (en)


Patent Citations (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1909676A (en) * 2005-08-05 2007-02-07 三星Sdi株式会社 3d graphics processor and autostereoscopic display device using the same
KR20070017785A (en) * 2005-08-08 2007-02-13 (주) 시선커뮤니티 3 dimensional solid rendering method
US20080309660A1 (en) * 2007-06-12 2008-12-18 Microsoft Corporation Three dimensional rendering of display information
US20090231329A1 (en) * 2008-03-14 2009-09-17 Vasanth Swaminathan Processing Graphics Data For A Stereoscopic Display
CN101488230A (en) * 2009-02-24 2009-07-22 南京师范大学 VirtualEarth oriented ture three-dimensional stereo display method
CN101511034A (en) * 2009-02-24 2009-08-19 南京师范大学 Truly three-dimensional stereo display method facing Skyline
JP2011253554A (en) * 2011-08-02 2011-12-15 Td Vision Corporation S.A. De C.V. Three-dimensional video game system
US20140327613A1 (en) * 2011-12-14 2014-11-06 Universita' Degli Studidi Genova Improved three-dimensional stereoscopic rendering of virtual objects for a moving observer
US20150049001A1 (en) * 2013-08-19 2015-02-19 Qualcomm Incorporated Enabling remote screen sharing in optical see-through head mounted display with augmented reality
CN103996215A (en) * 2013-11-05 2014-08-20 深圳市云立方信息科技有限公司 Method and apparatus for realizing conversion from virtual view to three-dimensional view
CN105282532A (en) * 2014-06-03 2016-01-27 天津拓视科技有限公司 3D display method and device
CN109361913A (en) * 2015-05-18 2019-02-19 韩国电子通信研究院 For providing the method and apparatus of 3-D image for head-mounted display
CN106375637A (en) * 2015-07-21 2017-02-01 Lg电子株式会社 Mobile terminal and method for controlling the same
CN105469405A (en) * 2015-11-26 2016-04-06 清华大学 Visual ranging-based simultaneous localization and map construction method
US20170330365A1 (en) * 2016-05-10 2017-11-16 Jaunt Inc. Virtual reality resource scheduling of processes in a cloud-based virtual reality processing system
CN107302694A (en) * 2017-05-22 2017-10-27 歌尔科技有限公司 Method, equipment and the virtual reality device of scene are presented by virtual reality device
CN107562185A (en) * 2017-07-14 2018-01-09 西安电子科技大学 It is a kind of based on the light field display system and implementation method of wearing VR equipment
CN109598796A (en) * 2017-09-30 2019-04-09 深圳超多维科技有限公司 Real scene is subjected to the method and apparatus that 3D merges display with dummy object
CN107846584A (en) * 2017-11-02 2018-03-27 中国电子科技集团公司第二十八研究所 The adaptive desktop synchronized projection method of virtual reality based on scene management development library
CN109874002A (en) * 2017-12-04 2019-06-11 深圳市冠旭电子股份有限公司 VR intelligence helmet and VR image display system
CN108093235A (en) * 2017-12-22 2018-05-29 威创集团股份有限公司 The 3D rendering processing method and system of tiled display equipment
CN110930489A (en) * 2018-08-29 2020-03-27 英特尔公司 Real-time system and method for rendering stereoscopic panoramic images
CN109525830A (en) * 2018-11-28 2019-03-26 浙江未来技术研究院(嘉兴) A kind of dimensional video collecting system
CN109640070A (en) * 2018-12-29 2019-04-16 上海曼恒数字技术股份有限公司 A kind of stereo display method, device, equipment and storage medium
CN109510975A (en) * 2019-01-21 2019-03-22 恒信东方文化股份有限公司 A kind of extracting method of video image, equipment and system
CN110620917A (en) * 2019-10-22 2019-12-27 上海第二工业大学 Virtual reality cross-screen stereoscopic display method
CN111754614A (en) * 2020-06-30 2020-10-09 平安国际智慧城市科技股份有限公司 Video rendering method and device based on VR (virtual reality), electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
席小霞; 宋文爱; 邱子璇; 史磊: "基于RGB-D值的三维图像重建系统研究" [Research on a 3D image reconstruction system based on RGB-D values], 测试技术学报 (Journal of Test and Measurement Technology), no. 05 *

Also Published As

Publication number Publication date
CN112235562B (en) 2023-09-15

Similar Documents

Publication Publication Date Title
CN111200746B (en) Method for awakening display equipment in standby state and display equipment
WO2021147299A1 (en) Content display method and display device
CN111479152A (en) Display device
CN111427643A (en) Display device and display method of operation guide based on display device
CN112055256B (en) Image processing method and display device for panoramic image
CN112565839A (en) Display method and display device of screen projection image
CN111246309A (en) Method for displaying channel list in display device and display device
CN111107428A (en) Method for playing two-way media stream data and display equipment
CN111601144B (en) Streaming media file playing method and display equipment
CN111414216A (en) Display device and display method of operation guide based on display device
CN112565861A (en) Display device
CN111212293A (en) Image processing method and display device
CN111045557A (en) Moving method of focus object and display device
WO2021232914A1 (en) Display method and display device
CN113347413A (en) Window position detection method and display device
CN111669638B (en) Video rotation playing method and display device
CN111857502A (en) Image display method and display equipment
CN113115092A (en) Display device and detail page display method
CN111885415B (en) Audio data rapid output method and display device
CN111314739B (en) Image processing method, server and display device
CN111405329B (en) Display device and control method for EPG user interface display
CN113115093B (en) Display device and detail page display method
CN111277911B (en) Image processing method of panoramic video, display device and server
CN112235562B (en) 3D display terminal, controller and image processing method
CN111601147A (en) Content display method and display equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant