WO2012007867A1 - Signaling for multiview 3d video - Google Patents

Signaling for multiview 3d video

Info

Publication number
WO2012007867A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
data
video
view
multiple views
Application number
PCT/IB2011/052938
Other languages
French (fr)
Inventor
Philip Steven Newton
Bart Kroon
Reinier Bernardus Maria Klein Gunnewiek
Original Assignee
Koninklijke Philips Electronics N.V.
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Publication of WO2012007867A1 publication Critical patent/WO2012007867A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/349Multi-view displays for displaying three or more geometrical viewpoints without viewer tracking
    • H04N13/351Multi-view displays for displaying three or more geometrical viewpoints without viewer tracking for displaying simultaneously
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/111Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/172Processing image signals image signals comprising non-image signal components, e.g. headers or format information
    • H04N13/178Metadata, e.g. disparity information

Definitions

  • the invention relates to a video processing device for processing three dimensional [3D] video information, the 3D video information comprising 3D video data and auxiliary data, the device comprising
  • a video processor for processing the 3D video information and generating a 3D display signal, the 3D display signal representing the 3D video data and the auxiliary data according to a display format, and
  • a display interface for interfacing with a 3D display device for transferring the 3D display signal.
  • the invention further relates to a display device for displaying 3D video information, the 3D video information comprising 3D video data and auxiliary data, the device comprising
  • a 3D display for displaying multiple views, a pair of different views being arranged to be perceived by respective eyes of a viewer,
  • a display processor for providing a display control signal representing the multiple views to the 3D display based on the 3D display signal.
  • the invention further relates to a 3D display signal, method and computer program for transferring 3D video information via an interface between a video processing device and a display device.
  • the invention relates to the field of 3D video rendering on auto stereoscopic displays based on generating multiple views; different views being perceived by the respective eyes of a viewer.
  • a 3D video processing device like a BD player or set top box may be coupled to a 3D display device like a TV set or monitor for transferring the 3D video data via a display signal on a suitable interface, preferably a high-speed digital interface like HDMI.
  • auxiliary information like subtitles, graphics or menus, or a further video signal, may be combined with the main video data to be displayed.
  • Video data defines the content of the main video to be displayed.
  • Auxiliary data defines any other data that may be displayed in combination with the main video data, such as graphical data or subtitles.
  • the auxiliary data is combined with the main data for display in overlay on the 3D video data, e.g. at a depth that is in front of any object of the main video.
  • the 3D display device receives a 3D display signal via the interface and provides different images to the respective eyes of a viewer to create a 3D effect.
  • the display device may be a stereoscopic device, e.g. for a viewer wearing shutter glasses that pass left and right views displayed sequentially to the respective left and right eye of a viewer.
  • the display device may also be an auto stereoscopic display that generates multiple views, e.g. 9 views; different views being perceived by the respective eyes of a viewer not wearing glasses.
  • the invention is focused on the specific type of 3D displays, usually called auto-stereoscopic displays, which provide multiple images in a spatial distribution so that a viewer does not need to wear glasses.
  • the spatial arrangement comprises multiple views, usually at least 5, and pairs of the different views are arranged to be perceived by respective eyes of a viewer, when correctly positioned with respect to said spatial distribution, for generating a 3D effect.
  • the 3D video processing device generates a display signal that is transferred to the display device.
  • the display signal provides a left and a right view.
  • auto stereoscopic devices need to generate the multiple views based on the input from the display signal, which is not trivial.
  • the video processing device as described in the opening paragraph is arranged for receiving view data including view mask data from the 3D display device, the view mask data defining a pixel arrangement of multiple views to be displayed by the 3D display device, and the video processor is arranged for generating the multiple views according to the view mask data, and for including the multiple views in the display signal, the display format being different from the input format.
  • the display device as described in the opening paragraph is arranged for transferring view data including view mask data to the video processing device, the view mask data defining a pixel arrangement of the multiple views, and the display processor is arranged for providing the display control signal based on retrieving, from the display signal, the multiple views according to the view mask data.
  • a 3D display signal is provided for transferring 3D video information via an interface between a video processing device and a display device, the 3D video information comprising 3D video data and auxiliary data,
  • the display device comprising a 3D display for displaying multiple views, a pair of different views being arranged to be perceived by respective eyes of a viewer,
  • the 3D display signal comprising
  • view data including view mask data to be transferred from the display device to the video processing device, the view mask data defining a pixel arrangement of the multiple views, and
  • a method for transferring 3D video information via an interface between a video processing device and a display device, the 3D video information comprising 3D video data and auxiliary data,
  • the display device comprising a 3D display for displaying multiple views, a pair of different views being arranged to be perceived by respective eyes of a viewer,
  • the method comprising transferring view data including view mask data from the display device to the video processing device, the view mask data defining a pixel arrangement of the multiple views
  • a computer program product for transferring 3D video information via an interface between a video processing device and a display device, which program is operative to cause a processor to perform the method as defined above.
  • the view data defines properties of the display and the viewing configuration
  • the view mask data included in the view data defines the configuration and properties of the multiple views as generated by the 3D display device, in particular the arrangement and type of pixels on a display panel that is conjugated with a lenticular lens or barrier optical element, which optical element guides the output light of the pixels such that multiple views are shown in a spatial distribution.
  • auto-stereoscopic displays are known.
  • the view data including the view mask data is output by the 3D display device and received by the video processing device.
  • the video processing device is now aware of the specific configuration and requirements of the 3D display and now generates the multiple views.
  • the combining of the auxiliary data with the main video is performed in the video processing device, which does have the original, full and non-occluded main video data.
  • the multiple views, including the auxiliary information are generated in the video processing device.
  • in the 3D display device there is no need for recovering depth or disparity data.
  • the invention is also based on the following recognition.
  • auxiliary information like subtitles or menus may already be overlayed over the main video.
  • some parts of the main video are occluded by the auxiliary information, and must be interpolated or estimated when generating other views.
  • the inventors have seen that the main video is fully available in the video processing device. They have proposed to transfer mask view data that defines the properties of the respective multiple views to the video processing device for enabling that device to generate the multiple views, and subsequently overlaying the auxiliary data without causing artifacts or requiring estimating occluded areas.
  • the mask view data may be substantially different for different displays.
  • the inventors have proposed to define a standardized format that accommodates transferring all relevant parameters of mask view data.
  • the video processing device, receiving the mask view data, is provided with processing functions controlled by the mask view data to generate the respective views required for the specific 3D display device coupled to the display interface.
  • the set of views is transferred in the display signal, e.g. in separate frames or in a single frame, the views being interleaved corresponding to the final arrangement of pixels in the display device.
  • processing power available in the video processing device can be used while less processing power is needed in the 3D display device.
  • the display interface is arranged for said receiving the view data including view mask data from the 3D display device via the 3D display signal.
  • the 3D display device directly transfers the relevant mask view data to the video processing device, via the same interface that transfers the multiple views to the 3D display device.
  • the display interface is a High Definition Multimedia Interface [HDMI] arranged for said receiving the view data including view mask data from the 3D display device via Enhanced Extended Display Identification Data [E-EDID]. This has the advantage that the HDMI standard is extended for enabling generating and transferring multiple views.
  • HDMI High Definition Multimedia Interface
  • the view mask data comprises at least one of
  • the view mask data comprises a pixel processing definition
  • the video processor is arranged for executing the pixel processing definition for generating the multiple views.
  • the processing definition defines how the pixels of the multiple views have to be generated.
  • the view data comprises at least one of:
  • the video processor is arranged for adapting the multiple views based on the parameters.
  • User parameters, e.g. a preferred depth range or depth limit, or display parameters, like properties of the views dependent on constructional elements of the display, are transferred to the video processing device and enable the multiple views to be adapted, e.g. filtered or adjusted to a minimal depth. This has the advantage that the quality of the 3D video is adapted to user preferences and/or viewing conditions.
  • Figure 1 shows a system for processing 3D video information
  • Figure 2 shows a 3D display providing multiple views
  • Figure 3 shows a lenticular screen
  • Figure 4 shows generating multiple views
  • Figure 5 shows a view mask of a display
  • Figure 6 shows a view rendering process
  • Figure 7 shows a 3D player model
  • Figure 8 shows a system architecture using view mask data
  • Figure 9 shows a 3D View Mask Data Block
  • Figure 10 shows a view description
  • Figure 11 shows view mask data of sub-pixels
  • Figure 12 shows a sub pixel structure
  • Figure 13 shows lens configuration data
  • Figure 1 shows a system for processing three dimensional (3D) video information.
  • the 3D video information includes 3D video data, also called main video data, and auxiliary data, such as subtitles, graphics and other additional visual information.
  • a 3D video processing device 100 is coupled to a 3D display device 120 for transferring a 3D display signal 110.
  • the 3D video processing device has input means for receiving the 3D video data according to an input format, including an input unit 101 for retrieving the 3D video data, e.g. a video disc player, media player or a set top box.
  • the input means may include an optical disc unit 103 for retrieving video and auxiliary information from an optical record carrier 105 like a DVD or Blu-ray Disc (BD).
  • the input means may include a network interface unit 102 for coupling to a network 104, for example the internet or a broadcast network.
  • Video data may be retrieved from a broadcaster, remote media server or website.
  • the 3D video processing device may also be a satellite receiver, or a media server directly providing the display signals, i.e. any video device that outputs a 3D display signal to be coupled to a display device.
  • the device may be provided with user control elements for setting user preferences, e.g. rendering parameters of 3D video.
  • the 3D video processing device has an image processor 106 coupled to the input unit 101 for processing the video information for generating a 3D display signal 110 to be transferred via a display interface unit 107 to the display device.
  • the auxiliary data may be added to the video data, e.g. overlaying subtitles on the main video.
  • the video processor 106 is arranged for including the video information in the 3D display signal 110 to be transferred to the 3D display device 120.
  • the 3D display device 120 is for displaying 3D video information.
  • the device has a 3D display 123 receiving 3D display control signals for displaying the video information by generating multiple views, for example a lenticular LCD.
  • the 3D display is further elucidated with reference to Figures 2-4.
  • the device has a display interface unit 121 for receiving the 3D display signal 110 including the 3D video information transferred from the 3D video processing device 100.
  • the device has a display processor 122 coupled to the interface 121.
  • the transferred video data is processed in the display processor 122 for generating the 3D display control signals for rendering the 3D video information on a 3D display 123 based on the 3D video data.
  • the display device 120 may be any type of stereoscopic display that provides multiple views, and has a display depth range indicated by arrow 124.
  • the display device may be provided with user control elements for setting display parameters of the display, such as contrast, color or depth parameters.
  • the input unit 101 is arranged for retrieving video data from a source.
  • Auxiliary data may be generated in the device, e.g. menus or buttons, or may also be received from an external source, e.g. via the internet, or may be provided by the source together with the main video data, e.g. subtitles in various languages, one of which may be selected by the user.
  • the video processor 106 is arranged for processing the 3D video information, as follows.
  • the video processor processes the 3D video information and generates the 3D display signal.
  • the 3D display signal represents the 3D video data and the auxiliary data according to a display format, e.g. HDMI.
  • the display interface 107 interfaces with the 3D display device 120 for transferring the 3D display signal.
  • the video processing device 100 is arranged for receiving view data including view mask data from the 3D display device, e.g. dynamically when coupled to the display device.
  • the view mask data defines a pixel arrangement of the multiple views, as discussed below in detail. For example at least part of the view data may be transferred via the display interface.
  • the display processor 122 is arranged for providing a display control signal representing the multiple views to the 3D display based on the 3D display signal as received on the interface 121.
  • the display device is arranged for transferring the view data including view mask data to the video processing device.
  • the view mask data may be stored in a memory, e.g. provided during production of the 3D display device.
  • the display processor, or a further controller may transfer the view mask data via the interface, i.e. in the direction towards the video processing device.
  • the display processor is arranged for providing the display control signal based on retrieving, from the display signal, the multiple views according to the view mask data.
  • the display interface is arranged for said receiving the view data including view mask data from the 3D display device via the 3D display signal.
  • the view data may be included by the 3D display device in a bi-directional 3D display signal as transferred over a suitable high speed digital video interface, e.g. in a HDMI signal using the well known HDMI interface (e.g. see "High Definition Multimedia Interface Specification Version 1.3a" of Nov 10 2006), in particular section 8.3 on the Enhanced Extended Display Identification Data, the E-EDID data structure, extended to define the view data as defined below.
  • the display interface is a High Definition Multimedia Interface [HDMI] arranged for said receiving the view data including view mask data from the 3D display device via Enhanced Extended Display Identification Data [E-EDID].
  • HDMI High Definition Multimedia Interface
  • E-EDID Enhanced Extended Display Identification Data
  • view data is transferred via a separate path, e.g. via a local network or the internet.
  • the manufacturer of the display device may provide at least part of the view data via a website, a software update, a device property table, via an optical disc or a USB memory device, etc.
  • the view data includes the view mask data, which defines a pixel arrangement of multiple views to be displayed by the 3D display device.
  • the video processor is arranged for generating the multiple views according to the view mask data. Any auxiliary data to be combined is overlayed on the main video data. Subsequently the multiple views are included in the display signal.
  • the display format of the 3D display signal is different from the input format, in particular with respect to the number of said multiple views.
  • the number of views usually is two, i.e. a left and a right view.
  • the number of views is determined by the 3D display device, e.g. 7 or 9, as elucidated now.
  • Figure 2 shows a 3D display providing multiple views.
  • One horizontal scan line of a display panel 21 is schematically shown and provides multiple views 23 as indicated by seven diverging arrows.
  • the views are generated at a suitable viewing distance in front of the display, e.g. 2 meters in front of a TV set.
  • a pair of different views is to be perceived by the respective eyes of a viewer 22.
  • In each view, a perceived pixel is generated by a respective pixel 24 of the panel, a sequence of seven pixels corresponding to one pixel in each of the seven views.
  • different sub-pixels are required to provide at least three colors for rendering 3D video in color.
  • the sequence of seven pixels 24 is repeated along the scan line, and optical elements, like lenses or barriers, are located in front of the display panel to guide the light emitted from the respective pixels to the respective different views.
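A minimal sketch of this pixel-to-view assignment, assuming the unslanted seven-view arrangement of Figure 2 (real panels differ, which is exactly what the view mask data must describe):

```python
# Hypothetical 7-view panel with an unslanted lens: each pixel on a scan
# line feeds exactly one view, and the assignment repeats every 7 pixels.
NUM_VIEWS = 7

def view_of_pixel(x: int) -> int:
    """Return the view index (0..6) that pixel column x contributes to."""
    return x % NUM_VIEWS

# Two periods of the repeating pattern along one scan line:
print([view_of_pixel(x) for x in range(2 * NUM_VIEWS)])
# -> [0, 1, 2, 3, 4, 5, 6, 0, 1, 2, 3, 4, 5, 6]
```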
  • Figure 3 shows a lenticular screen.
  • the Figure shows a display panel 31, e.g. an LCD panel, having a repetitive pattern of 6 pixels constituting a period of a 6-view 3D display.
  • a lenticular lens 32 is mounted in front of the display panel for generating light bundle 33 towards multiple views 34.
  • the pixels in the panel are numbered 1,2, ..6, and the views are numbered correspondingly.
  • One eye of a viewer 35 perceives the third view, the other eye the fourth view.
  • the lenticular display is a parallax 3D display, capable of showing multiple (usually eight or nine) images for different horizontal viewing directions. This way, the viewer can experience motion parallax and stereoscopic cues. Both effects exist because the eyes perceive different views, and by moving the head horizontally the perceived views change.
  • the lenticular lenses accommodate that for a specific viewing angle, the viewer only sees a subset of the subpixels of the underlying LCD. More specific, if the appropriate values are set to the associated pixels for the various viewing directions, the viewer will see different images from different viewing directions. This enables the possibility to render stereoscopic images.
  • Various types of multiview displays are known as such.
  • a basic type has vertical sheets of lenses such that horizontal resolution is sacrificed for views. To balance the resolution loss in the vertical and horizontal directions, slanted lenses have been developed.
  • a third type is a fractional view display where the pitch of the lens is a non-integral number of times wider than the (sub-)pixel width. Hence, which pixel contributes to which view is configuration-specific, and corresponding display control signals for the respective views have to be generated based on an available video input signal, as the sketch below illustrates.
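A hedged sketch of such a configuration-specific assignment, generalizing the simple modulo mapping to a slanted, possibly fractional lens pitch. The formula is illustrative of common lenticular-rendering practice, not the patent's definition; the actual mapping is precisely what the view mask data must convey:

```python
def view_of_subpixel(x: int, y: int, num_views: int, lens_pitch: float,
                     slant: float, x_offset: float = 0.0) -> int:
    """Assign sub-pixel column x on row y to a view, for a slanted and
    possibly fractional-pitch lenticular.

    lens_pitch: lens width in sub-pixel pitches (may be non-integral, e.g. 4.5)
    slant:      horizontal lens shift per row, in sub-pixels
    """
    # Phase of this sub-pixel under its lens, normalized to [0, 1).
    phase = ((x - x_offset - y * slant) % lens_pitch) / lens_pitch
    return int(phase * num_views) % num_views

# One row of a hypothetical 9-view display with pitch 4.5 and slant 1/3:
print([view_of_subpixel(x, 0, 9, 4.5, 1 / 3) for x in range(9)])
```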
  • Figure 4 shows generating multiple views. Processing for generating multiple views 44 is schematically indicated.
  • An input signal 47 is received in an input unit for a demultiplexing step 40.
  • the de-multiplexing step retrieves the respective video data 41, in the example being a 2D frame and depth map Z, and optionally further auxiliary data, audio data, etc., from the input signal.
  • control parameters 43 may be retrieved also, e.g. from a header or data message included in the input signal, to adjust the rendering of the video information.
  • the video information is processed in a processor, which performs a rendering process 42 for generating the nine multiple views 44, each view being the same scene viewed from a slightly different position.
  • the multiple views are interweaved, i.e. formatted as required to control the 3D display panel, e.g. a panel as discussed above with reference to Figures 2 and 3. It is to be noted that, traditionally, the processing is performed in a display processor directly coupled to the display panel.
  • the processing is performed in the video processing device, where the final step of interweaving produces the output 3D display signal according to a display format, e.g. HDMI.
  • the views may be transferred sequentially, or a single interweaved frame may be transferred having the pixel data corresponding to the physical location of the pixels in the 3D display device, as defined by the view mask data.
  • Figure 5 shows a view mask of a display.
  • the Figure schematically shows a display panel 51 having sub-pixel columns 52 R-G-B for the respective primary colors red, green, and blue. In practice, more or different colors may be used.
  • a slanted lenticular lens 53 is provided on the panel. Due to the lenticular lens, and viewed from a specific direction, only some pixels are visible for a viewer, constituting a view 54 as shown in the right half of the figure. The respective pixels that are visible are highlighted on the panel 51 by bold rectangles 55.
  • assigning each view to its subset of subpixels of the underlying matrix display is called interleaving.
  • When a subpixel is lit, it illuminates the entire lens above the subpixel, as seen from its corresponding viewing direction, as shown in the right half of the Figure.
  • the location of the respective pixels of a single view is called a view mask.
  • the view mask can be defined by a set of view mask data, e.g. the position of each pixel of a respective view relative to a starting point.
  • the pattern of the view mask is repetitive, and the view mask data only needs to define a single period of the pattern, i.e. only a small portion of the total screen.
  • 3D multiview displays having a non-repetitive pattern, or a very complex pattern, are possible.
  • a view mask may be provided for the entire screen size.
  • the view mask data may also include parameters of the lens, like the width or angle of slanting. Further examples of view mask data are given below.
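Because only one period of a repetitive view mask needs to be transferred, the receiving device can expand it to the full panel. A small sketch under that assumption (the 1x6 period is a hypothetical example):

```python
import numpy as np

def tile_view_mask(period: np.ndarray, height: int, width: int) -> np.ndarray:
    """Expand one transmitted period of a periodic view mask (one view index
    per sub-pixel) to the full panel size."""
    ph, pw = period.shape
    reps = (-(-height // ph), -(-width // pw))  # ceiling division
    return np.tile(period, reps)[:height, :width]

period = np.array([[0, 1, 2, 3, 4, 5]])   # one period of a 6-view panel
print(tile_view_mask(period, 2, 9))
```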
  • Figure 6 shows a view rendering process. It is noted that the total process traditionally is performed in a multiview 3D display device.
  • the 3D data content 61 is available as a stream of data containing the video information according to a video format, such as L+R (Left + Right), 2D+depth (2 dimensional data and depth map), Multiview Depth (MVD; see C. L.
  • From the content, views are rendered in step 62 (Render views) by morphing, in such a way that image features make a horizontal translation that depends on the feature depth and the view. If not taken into account in step 62 already, the views have to be filtered in step 63 (Anti-aliasing).
  • Next, the views are interleaved in step 64 (Interleaving) to form one frame of the native resolution of the screen.
  • additional filtering may be applied in step 65 (Cross-talk), for instance to reduce cross-talk between views.
  • Finally, 3D display control signals are generated and the frame is displayed on the screen in step 66 (Displaying).
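The chain of steps 62-66 can be summarized as follows; the stubs are illustrative placeholders (the step numbers are the Figure's, the function names and bodies are not):

```python
# Illustrative stubs for the rendering chain of Figure 6.
def render_view(content, v):  return f"{content}/view{v}"  # step 62: morph per view
def anti_alias(view):         return view                  # step 63: filter views
def interleave(views, mask):  return tuple(views)          # step 64: one native frame
def crosstalk_filter(frame):  return frame                 # step 65: optional filtering
def display(frame):           print("displaying", frame)   # step 66

def render_pipeline(content, num_views, view_mask):
    views = [render_view(content, v) for v in range(num_views)]
    views = [anti_alias(v) for v in views]
    frame = interleave(views, view_mask)
    display(crosstalk_filter(frame))

render_pipeline("scene", 9, view_mask=None)
```

In the architecture proposed by the invention, steps 62-64 move to the video processing device, while the display-specific steps 65-66 remain in the display, as discussed with Figure 8.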
  • Figure 7 shows a 3D player model.
  • In traditional systems such a player is coupled to a multiview 3D display as described with reference to Figure 6.
  • the model may be applied to DVB set-top boxes, DVD-players, Blu-ray Disc (BD) players and other similar equipment.
  • Video information 71 is available from an input stream, e.g. from a BD or DVB.
  • the video content is on a first plane 72 of the available 3D planes.
  • Other planes are dedicated to auxiliary data like graphics 73 such as subtitles, Picture in Picture (PIP) and the player onscreen display (OSD) 74.
  • the process of merging the planes before sending them to the screen is called compositing 75.
  • a player or PC graphics card has more 3D data available than fits in the format transmitted to the screen.
  • when 3D video content is overlaid with a semi-transparent menu, artifacts are likely with image + depth, image + depth + occlusion, or stereo + depth as a format.
  • Other problems are related to Z-buffer to depth conversion.
  • in 3D graphics a Z-buffer is used to hold the Z value of objects for the removal of hidden surfaces.
  • the values in this buffer depend on the camera projection set by the application and have to be converted to be suitable for use as "depth" values.
  • the value of the Z buffer is often not reliable as it depends heavily on the way the game has been programmed.
  • the above problems are solved by moving the task of rendering views from the multiview 3D display device to the video processing device, such as the disc player, a PC, or a separate device like a 3D receiver between the player and the display.
  • this requires view mask data, which preferably is standardized such that players and displays of different brands can co-operate.
  • Via the view mask data the display device is able to describe its physical configuration in such a way that the player does not have to be aware of display specifics when generating the multiple views, and the display functions in an uncompromised manner based on the multiple views provided in the display signal on the input interface of the 3D display.
  • CEA Consumer Electronics Association
  • the view mask parameters are sent to the playback device that based on these parameters calculates the correct views and mapping for the display.
  • the view mask data comprises a pixel processing definition
  • the video processor in the video processing device is arranged for executing the pixel processing definition for generating the multiple views.
  • the processing definition is a pixel shader specific to the display, which is provided to the playback device.
  • the pixel shader is then executed on the video processor of the playback device to create the rendered output.
  • a pixel shader is a computation kernel function that computes color and other attributes of each pixel. Additional view mask data defining the computation kernel function is to be transmitted between the TV and the rendering/playback device.
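A Python callable can stand in for such a kernel to make the idea concrete; the mapping inside is hypothetical, and a real system would ship an actual shader program rather than Python:

```python
def example_kernel(x, y, views):
    """Compute the output value of sub-pixel (x, y) from the rendered views;
    the view selection below is a made-up, display-specific mapping."""
    v = (x + 2 * y) % len(views)
    return views[v][y][x]

def run_kernel(kernel, views, height, width):
    """Execute the display-supplied kernel for every output sub-pixel."""
    return [[kernel(x, y, views) for x in range(width)] for y in range(height)]
```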
  • Figure 8 shows a system architecture using view mask data.
  • a 3D player 78 is coupled to a 3D display 79 for transferring a display signal 77.
  • the 3D display transfers view data including view mask data 80 to the 3D player from a property data memory.
  • the display data function is called EDID 87.
  • In the player 3D data 81 is used to render multiple views 82 based on the view mask data 80.
  • the views may be further filtered 83 (Anti-aliasing) and interleaved 84 as described above, which functions may now be performed in the video processing device based on the view mask data provided.
  • the 3D display device 79 may perform Cross-talk filtering 85, and performs displaying 86. It is noted that the anti-aliasing and view rendering steps may be moved to the player but display-specific filtering (for instance to reduce cross-talk) may remain in the display.
  • the player and display are connected by a link (such as HDMI).
  • Every view has an associated view mask, which is a binary color image that per sub-pixel indicates if it belongs to the view or to another one, e.g. by a binary value (1) or (0).
  • a more efficient way to store a view mask for a many-view display would be as an image where the value of each sub pixel is the view index, e.g. an ordinal value (0...N-1) with N the number of views (typically 9 but could be many; experiments already have 45). Interleaving is performed by copying a view image only for those sub-pixels that belong to the view.
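A sketch of this index-mask interleaving (numpy; the shapes and values are hypothetical):

```python
import numpy as np

def interleave_views(views: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """views: (N, H, W) rendered views; mask: (H, W) view index per sub-pixel.
    Each view image is copied only for the sub-pixels that belong to it."""
    out = np.empty(mask.shape, views.dtype)
    for v in range(views.shape[0]):
        sel = mask == v
        out[sel] = views[v][sel]
    return out

# Tiny example: 3 views on a 2x6 panel region with a 1x3 mask period.
views = np.stack([np.full((2, 6), 10 * v) for v in range(3)])
mask = np.tile(np.array([[0, 1, 2]]), (2, 2))
print(interleave_views(views, mask))
```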
  • the structure of the view mask is periodic and relates to the type of lenticular display. To describe the entire view mask, it suffices to supply:
  • the black-matrix size, i.e. the structure of the sub-pixels
  • the view mask data comprises
  • the distance between sub-pixel units on a scan line corresponds to the lens pitch.
  • the lens pitch as such, i.e. the width of the micro lenses expressed in relation to the sub-pixel pitch (e.g. 4.5), may be indicated in this parameter.
  • the order of RGB components is also used for sub-pixel smoothing of fonts.
  • the view mask data may include lenticular view configuration metadata.
  • the 3D display device may send its lenticular configuration to the player (or PC).
  • the view mask may be defined by including a limited number of parameters, as discussed in the previous section.
  • an alternative is to include all view masks in full in the view mask data.
  • as the view mask is commonly periodic, only a fraction, namely one period, may be sent.
  • the view mask data comprises a frame having a value per sub-pixel which encodes the view number that the subpixel represents. Additionally the view mask data may include, per view number, the orientation of the viewing cone, which in combination with a reported physical display size and optimal viewing distance provides enough information to correctly render 3D data.
  • the view mask data includes at least one of
  • the view mask can be added to the E-EDID specification by defining a new CEA Data Block. For example, one of the Extended Tag Codes reserved for video-related blocks (i.e. 6) for a "3D View Mask Data Block" may be used.
  • Figure 9 shows a 3D View Mask Data Block.
  • the figure shows a table of the data block.
  • the block is formatted according to the CEA-861-E standard, as indicated for various fields in the Table.
  • a few fields (bytes 0, 1, 32, 33, 64, 65) are for indicating the new Extended Tag Code and indicate the type of data in the data block.
  • the field of byte 2 defines the number of views.
  • the fields of bytes 3-4 define the size (height and width) of a period of the view mask, i.e. the repetitive pattern therein.
  • the parameters in fields 5-6 provide an example of view data that may be relevant for rendering the multiple views.
  • Part of the data block has a variable size based on the number of views and the size of the view mask.
  • Fields 7-31 and 34-63 are defined by the Tables according to Figures 10 and 11.
  • fields 66-76 and 77-85 are defined by the Tables according to Figures 12 and 13 respectively.
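A hedged parsing sketch of such a data block, using only the fields named above; every byte offset here is an assumption based on this description, not a published CEA-861-E allocation:

```python
def parse_view_mask_block(block: bytes) -> dict:
    """Decode a hypothetical '3D View Mask Data Block' (cf. Figure 9)."""
    return {
        "extended_tag": block[0:2],   # bytes 0-1: identify the data block type
        "num_views": block[2],        # byte 2: number of views (e.g. 9)
        "mask_height": block[3],      # bytes 3-4: size of one view mask period
        "mask_width": block[4],
        "render_params": block[5:7],  # bytes 5-6: further rendering parameters
        "variable_part": block[7:],   # view descriptions, per-sub-pixel mask,
                                      # sub-pixel structure and lens data
    }
```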
  • Figure 10 shows a view description.
  • a table 91 shows a description of a view, i.e. a length parameter and a view offset at optimal viewing distance for the centre pixel. The table may be repeated for every view.
  • Figure 11 shows view mask data of sub-pixels.
  • a table 92 shows the view number of a set of sub-pixels for the respective colors. The values for each sub-pixel provide the view mask.
  • Figure 12 shows a sub pixel structure.
  • the sub pixel structure is also called black matrix.
  • a table 93 shows parameters that define the pixel structure.
  • the pixel structure parameter may be an identifier to a table stored in the "rendering" device that provides a mapping between the identifier and the pixel structure.
  • the Pixel layout parameter may be an identifier to a table stored in the "rendering" device that provides a mapping between the identifier and the pixel layout, i.e. RGB or BGR, V-RGB etc.
  • Figure 13 shows lens configuration data.
  • the lens configuration data may be included in the view mask data.
  • a table 94 shows lens parameters.
  • the lens type may be an identifier 0-255 to a table stored in the "rendering" device that provides a mapping between the lens type and the type of "lens" used (i.e. barrier, lenticular, micro lens arrays etc.).
  • the lens parameter is an identifier 0-255 to a table stored in the "rendering" device that provides a mapping between the lens parameter identifier and specific lens characteristics (shape, angle of view etc.). The value depends on the value of lens type.
  • the view data is extended to include depth parameters indicative of depth capabilities of the 3D display.
  • the video processor is arranged for adapting the multiple views based on the depth parameters. By applying the view data the video processor is enabled to adjust the multiple views to the depth capabilities.
  • user preference settings metadata may be included in the view data. People may have different preferences for depth in 3D video. Some like a lot of depth whereas others like a subtle amount of depth. The same could hold for, for example, the crispiness of the depth and the zeroplane.
  • the view data is extended to send the user parameters indicative of settings for 3D viewing. By applying the view data the video processor is enabled to adjust the multiple views to the user preferences.
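For instance, a minimal sketch of adapting the views to a user preference by scaling and clamping disparity; the linear gain and the default values are illustrative assumptions:

```python
def adapt_disparity(disparity: float, user_gain: float = 0.7,
                    depth_limit: float = 20.0) -> float:
    """Scale per-pixel disparity (in sub-pixels) by the user's preferred
    amount of depth and clamp it to a reported depth limit."""
    return max(-depth_limit, min(depth_limit, disparity * user_gain))
```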
  • the video processing device is arranged for including depth metadata in the display signal towards the 3D display device.
  • the depth metadata may be a parameter indicating the minimum depth of the current video information, or a depth map indicative of the depths occurring in various parts of the screen.
  • the depth metadata relates to the combined main and auxiliary data as processed in the video processing device. The depth metadata enables the 3D display to position in depth further auxiliary data, like a menu or button, in front of any other data present in the 3D video information.
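A minimal sketch of how a display could use such metadata to place a menu in front of the video; the normalized-depth convention (smaller is closer to the viewer) is an assumption, not the patent's definition:

```python
def menu_depth(min_video_depth: float, margin: float = 0.05) -> float:
    """Position further auxiliary data, like a menu or button, just in
    front of the nearest object reported by the depth metadata."""
    return min_video_depth - margin
```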
  • the invention may be implemented in hardware and/or software, using programmable components.
  • a method for implementing the invention has the steps corresponding to the functions defined for the system as described with reference to Figure 1.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Library & Information Science (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

A video processing device (100) is for processing three dimensional [3D] video information and is coupled to an auto-stereoscopic 3D display device (120), e.g. a TV having a lenticular 3D display (123). The video processing device has an input unit (101) for receiving the 3D video data and a video processor (106) for generating a 3D display signal (110) representing the 3D video data and overlayed auxiliary data. The video processing device receives view data including view mask data from the 3D display device, the view mask data defining a pixel arrangement of multiple views to be displayed by the 3D display device. The video processor generates the multiple views according to the view mask data, and includes the multiple views in the display signal. Advantageously the auxiliary data is combined with the main data when generating the multiple views, which avoids artifacts.

Description

Signaling for multiview 3D video
FIELD OF THE INVENTION
The invention relates to a video processing device for processing three dimensional [3D] video information, the 3D video information comprising 3D video data and auxiliary data, the device comprising
- input means for receiving the 3D video data according to an input format,
- a video processor for processing the 3D video information and generating a 3D display signal, the 3D display signal representing the 3D video data and the auxiliary data according to a display format, and
- a display interface for interfacing with a 3D display device for transferring the 3D display signal.
The invention further relates to a display device for displaying 3D video information, the 3D video information comprising 3D video data and auxiliary data, the device comprising
- an interface for interfacing with a video processing device for transferring a 3D display signal representing the 3D video data and the auxiliary data according to a display format,
- a 3D display for displaying multiple views, a pair of different views being arranged to be perceived by respective eyes of a viewer,
- a display processor for providing a display control signal representing the multiple views to the 3D display based on the 3D display signal.
The invention further relates to a 3D display signal, method and computer program for transferring 3D video information via an interface between a video processing device and a display device.
The invention relates to the field of 3D video rendering on auto stereoscopic displays based on generating multiple views; different views being perceived by the respective eyes of a viewer.
BACKGROUND OF THE INVENTION
A 3D video processing device like a BD player or set top box may be coupled to a 3D display device like a TV set or monitor for transferring the 3D video data via a display signal on a suitable interface, preferably a high-speed digital interface like HDMI.
In addition to the main 3D video, auxiliary information like subtitles, graphics or menus, or a further video signal, may be combined with the main video data to be displayed. Video data defines the content of the main video to be displayed. Auxiliary data defines any other data that may be displayed in combination with the main video data, such as graphical data or subtitles. The auxiliary data is combined with the main data for display in overlay on the 3D video data, e.g. at a depth that is in front of any object of the main video.
The 3D display device receives a 3D display signal via the interface and provides different images to the respective eyes of a viewer to create a 3D effect. The display device may be a stereoscopic device, e.g. for a viewer wearing shutter glasses that pass left and right views displayed sequentially to the respective left and right eye of a viewer.
However, the display device may also be an auto stereoscopic display that generates multiple views, e.g. 9 views; different views being perceived by the respective eyes of a viewer not wearing glasses.
The invention is focused on the specific type of 3D displays, usually called auto-stereoscopic displays, which provide multiple images in a spatial distribution so that a viewer does not need to wear glasses. The spatial arrangement comprises multiple views, usually at least 5, and pairs of the different views are arranged to be perceived by respective eyes of a viewer, when correctly positioned with respect to said spatial distribution, for generating a 3D effect.
The article "Integrating 3D Point Clouds with Multi-viewpoint Video; by Feng Chen, Irene Cheng and Anup Basu; Dept. of Computing Sc., Univ. of Alberta, Canada, IEEE 3DTV-CON 2009" describes combining 3D main video and graphical objects. One of the key problems in such a system is to re-construct depth information of a captured scene to enable said combining. The main video usually provides only two views. In most methods proposed to solve this problem, recovering the depth information Z is converted to estimating the disparity d, which is inversely correlated to the depth.
The 3D video processing device generates a display signal that is transferred to the display device. Commonly the display signal provides a left and a right view. Hence auto stereoscopic devices need to generate the multiple views based on the input from the display signal, which is not trivial.
SUMMARY OF THE INVENTION
For generating multiple views based on a stereoscopic input signal depth information has to be regenerated. In particular when auxiliary information like graphical objects has been combined with the main video some video information may be occluded.
It is an object of the invention to provide a system for displaying 3D video information including auxiliary data on a multiview display which avoids difficulties in generating the multiple views and artifacts.
For this purpose, according to a first aspect of the invention, the video processing device as described in the opening paragraph is arranged for receiving view data including view mask data from the 3D display device, the view mask data defining a pixel arrangement of multiple views to be displayed by the 3D display device, and the video processor is arranged for generating the multiple views according to the view mask data, and for including the multiple views in the display signal, the display format being different from the input format.
For this purpose, according to a further aspect of the invention, the display device as described in the opening paragraph is arranged for transferring view data including view mask data to the video processing device, the view mask data defining a pixel arrangement of the multiple views, and the display processor is arranged for providing the display control signal based on retrieving, from the display signal, the multiple views according to the view mask data.
Also, a 3D display signal is provided for transferring 3D video information via an interface between a video processing device and a display device, the 3D video information comprising 3D video data and auxiliary data,
- the 3D display signal representing the 3D video data and the auxiliary data according to a display format,
- the display device comprising a 3D display for displaying multiple views, a pair of different views being arranged to be perceived by respective eyes of a viewer,
the 3D display signal comprising
- view data including view mask data to be transferred from the display device to the video processing device, the view mask data defining a pixel arrangement of the multiple views, and
- the multiple views according to the view mask data to be transferred from the video processing device to the display device.
Also, a method is provided for transferring 3D video information via an interface between a video processing device and a display device, the 3D video information comprising 3D video data and auxiliary data,
- the display device comprising a 3D display for displaying multiple views, a pair of different views being arranged to be perceived by respective eyes of a viewer,
the method comprising
- processing the 3D video information and generating a 3D display signal, the 3D display signal representing the 3D video data and the auxiliary data according to a display format,
- transferring the 3D display signal via the interface to the display device,
which method comprises
- transferring view data including view mask data from the display device to the video processing device, the view mask data defining a pixel arrangement of the multiple views, and
- including, in the 3D display signal, the multiple views according to the view mask data.
Also, a computer program product is provided for transferring 3D video information via an interface between a video processing device and a display device, which program is operative to cause a processor to perform the method as defined above.
The above features have the following effect. The view data defines properties of the display and the viewing configuration, and the view mask data included in the view data defines the configuration and properties of the multiple views as generated by the 3D display device, in particular the arrangement and type of pixels on a display panel that is conjugated with a lenticular lens or barrier optical element, which optical element guides the output light of the pixels such that multiple views are shown in a spatial distribution. As such, auto-stereoscopic displays are known.
The view data including the view mask data is output by the 3D display device and received by the video processing device. The video processing device is now aware of the specific configuration and requirements of the 3D display and now generates the multiple views. In particular, the combining of the auxiliary data with the main video is performed in the video processing device, which does have the original, full and non-occluded main video data. Hence the multiple views, including the auxiliary information, are generated in the video processing device. Advantageously, in the 3D display device, there is no need for recovering depth or disparity data. In particular, there are no occluded areas which occurred in the prior art due to first overlaying the auxiliary data on the main video, and subsequently recovering depth to generate multiple views.
The invention is also based on the following recognition. Traditionally auto-stereoscopic displays have to generate a multitude of views, e.g. 9, based on a display signal having a left and a right view. In particular, auxiliary information like subtitles or menus may already be overlayed over the main video. When generating the additional views, some parts of the main video are occluded by the auxiliary information, and must be interpolated or estimated when generating other views. The inventors have seen that the main video is fully available in the video processing device. They have proposed to transfer mask view data that defines the properties of the respective multiple views to the video processing device for enabling that device to generate the multiple views, and subsequently overlaying the auxiliary data without causing artifacts or requiring estimating occluded areas.
It is to be noted that the mask view data, as such, may be substantially different for different displays. Traditionally, such mask view data was only used internally in the display device, e.g. when designing the display processor or embedded software. The inventors have proposed to define a standardized format that accommodates transferring all relevant parameters of mask view data. The video processing device, receiving the mask view data, is provided with processing functions controlled by the mask view data to generate the respective views required for the specific 3D display device coupled to the display interface. The set of views is transferred in the display signal, e.g. in separate frames or in a single frame, the views being interleaved corresponding to the final arrangement of pixels in the display device. Advantageously, processing power available in the video processing device can be used while less processing power is needed in the 3D display device.
In an embodiment of the video processing device, the display interface is arranged for said receiving the view data including view mask data from the 3D display device via the 3D display signal. This has the advantage that the 3D display device directly transfers the relevant mask view data to the video processing device, via the same interface that transfers the multiple views to the 3D display device. In a further embodiment the display interface is a High Definition Multimedia Interface [HDMI] arranged for said receiving the view data including view mask data from the 3D display device via Enhanced Extended Display Identification Data [E-EDID]. This has the advantage that the HDMI standard is extended for enabling generating and transferring multiple views.
In an embodiment of the system, the view mask data comprises at least one of
- pixel structure data indicative of a location of pixels of respective views;
- a display type indicator indicative of the arrangement of a lenticular display;
- multiview data indicative of properties of the multiple views;
- mask period data indicative of properties of a repetitive pattern of pixels assigned to respective views;
- sub-pixel data indicative of a structure of sub-pixels for respective colors;
- lens data indicative of the arrangement of a lens configured on the pixels of the display.
This has the advantage that, using a suitable combination of the above data elements, the properties of a large variety of 3D displays are definable; a possible container for these elements is sketched below.
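As a hedged Python sketch, the field names and types below are illustrative assumptions, not the proposed signaling format:

```python
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class ViewMaskData:
    pixel_structure: int                  # location of pixels of respective views
    display_type: int                     # arrangement of the lenticular display
    num_views: int                        # properties of the multiple views
    mask_period: Sequence[Sequence[int]]  # repetitive pattern of view indices
    subpixel_layout: str                  # sub-pixel structure, e.g. "RGB" or "BGR"
    lens_pitch: Optional[float] = None    # lens width in sub-pixel pitches
    lens_slant: Optional[float] = None    # slant of the lens over the pixels
```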
In an embodiment of the system, the view mask data comprises a pixel processing definition, and the video processor is arranged for executing the pixel processing definition for generating the multiple views. The processing definition defines how the pixels of the multiple views have to be generated. By providing and executing such code a very flexible system for generating multiple views is made available. This has the advantage that displays having a pixel arrangement or properties that do not match a predefined set of parameters of view mask data can still be accommodated.
In an embodiment of the system, the view data comprises at least one of:
- user parameters indicative of settings for 3D viewing;
- display parameters indicative of capabilities of the 3D display;
and the video processor is arranged for adapting the multiple views based on the parameters. User parameters, e.g. a preferred depth range or depth limit, or display parameters, like properties of the views dependent on constructional elements of the display, are transferred to the video processing device and enable the multiple views to be adapted, e.g. filtered or adjusted to a minimal depth. This has the advantage that the quality of the 3D video is adapted to user preferences and/or viewing conditions.
Further preferred embodiments of the devices and method according to the invention are given in the appended claims, disclosure of which is incorporated herein by reference. Features defined in dependent claims for a particular method or device correspondingly apply to other devices or methods.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects of the invention will be apparent from and elucidated further with reference to the embodiments described by way of example in the following description and with reference to the accompanying drawings, in which
Figure 1 shows a system for processing 3D video information,
Figure 2 shows a 3D display providing multiple views,
Figure 3 shows a lenticular screen,
Figure 4 shows generating multiple views,
Figure 5 shows a view mask of a display,
Figure 6 shows a view rendering process,
Figure 7 shows a 3D player model,
Figure 8 shows a system architecture using view mask data,
Figure 9 shows a 3D View Mask Data Block,
Figure 10 shows a view description,
Figure 11 shows view mask data of sub-pixels,
Figure 12 shows a sub pixel structure, and
Figure 13 shows lens configuration data.
The figures are purely diagrammatic and not drawn to scale. In the Figures, elements which correspond to elements already described have the same reference numerals.
DETAILED DESCRIPTION OF EMBODIMENTS
Figure 1 shows a system for processing three dimensional (3D) video information. The 3D video information includes 3D video data, also called main video data, and auxiliary data, such as subtitles, graphics and other additional visual information. A 3D video processing device 100 is coupled to a 3D display device 120 for transferring a 3D display signal 110.
The 3D video processing device has input means for receiving the 3D video data according to an input format, including an input unit 101 for retrieving the 3D video data, e.g. a video disc player, media player or a set top box. For example the input means may include an optical disc unit 103 for retrieving video and auxiliary information from an optical record carrier 105 like a DVD or Blu-ray Disc (BD). In an embodiment the input means may include a network interface unit 102 for coupling to a network 104, for example the internet or a broadcast network. Video data may be retrieved from a broadcaster, remote media server or website. The 3D video processing device may also be a satellite receiver, or a media server directly providing the display signals, i.e. any video device that outputs a 3D display signal to be coupled to a display device. The device may be provided with user control elements for setting user preferences, e.g. rendering parameters of 3D video.
The 3D video processing device has an image processor 106 coupled to the input unit 101 for processing the video information for generating a 3D display signal 110 to be transferred via a display interface unit 107 to the display device. The auxiliary data may be added to the video data, e.g. overlaying subtitles on the main video. The video processor 106 is arranged for including the video information in the 3D display signal 110 to be transferred to the 3D display device 120.
The 3D display device 120 is for displaying 3D video information. The device has a 3D display 123 receiving 3D display control signals for displaying the video information by generating multiple views, for example a lenticular LCD. The 3D display is further elucidated with reference to Figures 2-4. The device has a display interface unit 121 for receiving the 3D display signal 110 including the 3D video information transferred from the 3D video processing device 100. The device has a display processor 122 coupled to the interface 121. The transferred video data is processed in the display processor 122 for generating the 3D display control signals for rendering the 3D video information on a 3D display 123 based on the 3D video data. The display device 120 may be any type of stereoscopic display that provides multiple views, and has a display depth range indicated by arrow 124. The display device may be provided with user control elements for setting display parameters of the display, such as contrast, color or depth parameters.
The input unit 101 is arranged for retrieving video data from a source.
Auxiliary data may be generated in the device, e.g. menus or buttons, or may also be received from an external source, e.g. via the internet, or may be provided by the source together with the main video data, e.g. subtitles in various languages, one of which may be selected by the user.
The video processor 106 is arranged for processing the 3D video information, as follows. The video processor processes the 3D video information and generates the 3D display signal. The 3D display signal represents the 3D video data and the auxiliary data according to a display format, e.g. HDMI. The display interface 107 interfaces with the 3D display device 120 for transferring the 3D display signal. The video processing device 100 is arranged for receiving view data including view mask data from the 3D display device, e.g. dynamically when coupled to the display device. The view mask data defines a pixel arrangement of the multiple views, as discussed below in detail. For example at least part of the view data may be transferred via the display interface.
The display processor 122 is arranged for providing a display control signal representing the multiple views to the 3D display based on the 3D display signal as received on the interface 121. The display device is arranged for transferring the view data including view mask data to the video processing device. The view mask data may be stored in a memory, e.g. provided during production of the 3D display device. The display processor, or a further controller, may transfer the view mask data via the interface, i.e. in the direction towards the video processing device. The display processor is arranged for providing the display control signal based on retrieving, from the display signal, the multiple views according to the view mask data.
In an embodiment of the video processing device, the display interface is arranged for said receiving the view data including view mask data from the 3D display device via the 3D display signal. The view data may be included by the 3D display device in a bi-directional 3D display signal as transferred over a suitable high-speed digital video interface, e.g. in an HDMI signal using the well-known HDMI interface (see "High Definition Multimedia Interface Specification Version 1.3a" of Nov 10, 2006, in particular section 8.3 on Enhanced Extended Display Identification Data), the E-EDID data structure being extended to define the view data as defined below. Hence in a further embodiment the display interface is a High Definition Multimedia Interface [HDMI] arranged for said receiving the view data including view mask data from the 3D display device via Enhanced Extended Display Identification Data [E-EDID]. Specific examples are described with reference to Figures 9-13.
In an embodiment view data is transferred via a separate path, e.g. via a local network or the internet. For example, the manufacturer of the display device may provide at least part of the view data via a website, a software update, a device property table, via an optical disc or a USB memory device, etc.
The view data includes the view mask data, which defines a pixel arrangement of multiple views to be displayed by the 3D display device. The video processor is arranged for generating the multiple views according to the view mask data. Any auxiliary data to be combined is overlaid on the main video data. Subsequently the multiple views are included in the display signal.
It is noted that the display format of the 3D display signal is different from the input format, in particular with respect to the number of said multiple views. In the input format the number of views is usually two, i.e. a left and a right view. In the output format the number of views is determined by the 3D display device, e.g. 7 or 9, as elucidated below.
Figure 2 shows a 3D display providing multiple views. One horizontal scan line of a display panel 21 is schematically shown and provides multiple views 23 as indicated by seven diverging arrows. The views are generated at a suitable viewing distance in front of the display, e.g. 2 meters in front of a TV set. A pair of different views is to be perceived by the respective eyes of a viewer 22. In each view, a perceived pixel is generated by a respective pixel 24 of the panel, a sequence of seven pixels corresponding to one pixel in each of the seven views. It is noted that, in practice, different sub-pixels are required to provide at least three colors for rendering 3D video in color. The sequence of seven pixels 24 is repeated along the scan line, and optical elements, like lenses or barriers, are located in front of the display panel to guide the light emitted from the respective pixels to the respective different views.
Figure 3 shows a lenticular screen. The Figure shows a display panel 31, e.g. an LCD panel, having a repetitive pattern of 6 pixels constituting a period of a 6-view 3D display. A lenticular lens 32 is mounted in front of the display panel for generating light bundles 33 towards multiple views 34. The pixels in the panel are numbered 1, 2, ..., 6, and the views are numbered correspondingly. One eye of a viewer 35 perceives the third view, the other eye the fourth view.
The lenticular display is a parallax 3D display, capable of showing multiple images (usually eight or nine) for different horizontal viewing directions. This way, the viewer can experience motion parallax and stereoscopic cues. Both effects exist because the eyes perceive different views, and by moving the head horizontally the perceived views change.
The lenticular lenses ensure that, for a specific viewing angle, the viewer only sees a subset of the sub-pixels of the underlying LCD. More specifically, if the appropriate values are set for the associated pixels for the various viewing directions, the viewer will see different images from different viewing directions. This makes it possible to render stereoscopic images. Various types of multiview displays are known as such. A basic type has vertical sheets of lenses, such that horizontal resolution is sacrificed for views. To balance the resolution loss between the vertical and horizontal directions, slanted lenses have been developed. A third type is a fractional view display, where the pitch of the lens is a non-integral multiple of the (sub-)pixel width. Hence, which pixel contributes to which view is configuration-specific, and corresponding display control signals for the respective views have to be generated based on an available video input signal.
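By way of illustration (a sketch that is not part of the embodiments; the parameter names are assumptions), the configuration-specific mapping from a panel sub-pixel to a view can be expressed as a function of the lens pitch, the slant, and a horizontal offset:

```python
def view_index(col, row, n_views, lens_pitch, slant, x_offset=0.0):
    """Ordinal view (0..n_views-1) that sub-pixel (col, row) contributes to."""
    # Horizontal phase of the sub-pixel under one lens period; the slant
    # shifts the phase when moving down one scan line.
    phase = (col - x_offset + slant * row) % lens_pitch
    # For a fractional view display, lens_pitch is a non-integral number
    # of sub-pixel widths, so the result is not a plain column modulo.
    return int(phase * n_views / lens_pitch) % n_views
```

With slant = 0 and lens_pitch equal to n_views this reduces to col % n_views, i.e. the basic vertical configuration, while a non-integral lens_pitch yields the fractional view case.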
Figure 4 shows generating multiple views. Processing for generating multiple views 44 is schematically indicated. An input signal 47 is received in an input unit for a de-multiplexing step 40. The de-multiplexing step retrieves the respective video data 41, in the example being a 2D frame and a depth map Z, and optionally further auxiliary data, audio data, etc. from the input signal. In particular, control parameters 43 may also be retrieved, e.g. from a header or data message included in the input signal, to adjust the rendering of the video information. The video information is processed in a processor, which performs a rendering process 42 for generating the nine multiple views 44, each view being the same scene viewed from a slightly different position. In a further processing step 45 the multiple views are interweaved, i.e. formatted as required to control the 3D display panel, e.g. a panel as discussed above with reference to Figures 2 and 3. It is to be noted that, traditionally, this processing is performed in a display processor directly coupled to the display panel.
However, in the system according to the invention, the processing is performed in the video processing device, where the final step of interweaving produces the output 3D display signal according to a display format, e.g. HDMI. For example, the views may be transferred sequentially, or a single interweaved frame may be transferred having the pixel data corresponding to the physical location of the pixels in the 3D display device, as defined by the view mask data.
Figure 5 shows a view mask of a display. The Figure schematically shows a display panel 51 having sub-pixel columns 52 R-G-B for the respective primary colors red, green, and blue. In practice, more or different colors may be used. A slanted lenticular lens 53 is provided on the panel. Due to the lenticular lens, and viewed from a specific direction, only some pixels are visible for a viewer, constituting a view 54 as shown in the right half of the figure. The respective pixels that are visible are highlighted on the panel 51 by bold rectangles 55.
For this nine-view display, nine different subsets of pixels can be identified, for nine viewing directions. Hence, we can display nine different images simultaneously. The process of drawing these nine images, each to its associated subset of sub-pixels of the underlying matrix display, is called interleaving. When a sub-pixel is lit, it illuminates the entire lens above the sub-pixel, seen from its corresponding viewing direction as shown in the right half of the Figure.
The location of the respective pixels of a single view is called a view mask. The view mask can be defined by a set of view mask data, e.g. the position of each pixel of a respective view relative to a starting point. Usually the pattern of the view mask is repetitive, and the view mask data only needs to define a single period of the pattern, i.e. only a small portion of the total screen. However, 3D multiview displays having a non-repetitive pattern, or a very complex pattern, are also possible. For such displays a view mask may be provided for the entire screen size. The view mask data may also include parameters of the lens, like the width or angle of slanting. Further examples of view mask data are given below.
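A minimal container for such view mask data might look as follows (an illustrative sketch; the actual fields and their encoding are given by the data blocks of Figures 9-13):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ViewMaskData:
    n_views: int                        # number of views, e.g. 7 or 9
    period_w: int                       # width of one period, in sub-pixels
    period_h: int                       # height of one period, in scan lines
    # One period of the repetitive pattern: a view index per sub-pixel,
    # period_h rows of period_w entries (row-major).
    period: List[List[int]] = field(default_factory=list)
    lens_pitch: Optional[float] = None  # lens width relative to sub-pixel pitch
    slant: Optional[float] = None       # horizontal shift per scan line
```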
Figure 6 shows a view rendering process. It is noted that traditionally the total process is performed in a multiview 3D display device. The 3D data content 61 is available as a stream of data containing the video information according to a video format, such as L+R (Left + Right), 2D+depth (2-dimensional data and depth map), Multiview Depth (MVD; see C. L. Zitnick et al., "High-Quality Video View Interpolation Using a Layered Representation", ACM SIGGRAPH and ACM Trans. on Graphics, Los Angeles, CA, USA, August 2004), or via a program that interfaces with a suitable API such as OpenGL (see http://www.opengl.org) or DirectX (see http://msdn.microsoft.com/en-us/directx/default.aspx). From the content, views are rendered in step 62 Render-views by morphing in such a way that image features make a horizontal translation that depends on the feature depth and the view. If not taken into account in step 62 already, the views have to be filtered in step 63 Anti-aliasing, e.g. smoothed to prevent aliasing. After this, the views are interleaved in step 64 Interleaving to form one frame of the native resolution of the screen. Before displaying, additional filtering may be applied in step 65 Cross-talk, for instance to reduce cross-talk between views. Finally the 3D display control signals are generated and the frame is displayed on the screen in step 66 Displaying.
Figure 7 shows a 3D player model. In traditional systems such a player is coupled to a multiview 3D display as described with Figure 6. The model may be applied to DVB set-top boxes, DVD players, Blu-ray Disc (BD) players and other similar equipment. Video information 71 is available from an input stream, e.g. from a BD or DVB. In the model the video content is on a first plane 72 of the available 3D planes. Other planes are dedicated to auxiliary data like graphics 73, such as subtitles and Picture in Picture (PIP), and the player on-screen display (OSD) 74. The process of merging the planes before sending them to the screen is called compositing 75. This is a straightforward operation for L+R or multiview formats, where all the views to be provided on the output for both the video and the graphics are present in the player. However, for intermediate formats such as 2D+depth or MVD, and for other formats like OpenGL, compositing requires occlusion tests or rules to determine which content should be in an occlusion layer as required by such formats.
Commonly a player or PC graphics card has more 3D data available than fits in the format transmitted to the screen. For instance, if 3D video content is overlaid with a semi-transparent menu, artifacts are likely with image+depth, image+depth+occlusion, or stereo+depth as a format. Other problems are related to Z-buffer to depth conversion. In 3D graphics a Z-buffer is used to hold the Z value of objects for the removal of hidden surfaces. The values in this buffer depend on the camera projection set by the application and have to be converted to be suitable for use as "depth" values. The value of the Z-buffer is often not reliable as it depends heavily on the way the game has been programmed. These problems cannot be solved by sending all 3D data to the screen, because this creates bandwidth problems and makes displays more expensive.
The above problems are solved by moving the task of rendering views from the multiview 3D display device to the video processing device, such as the disc player, a PC, or a separate device like a 3D receiver between the player and the display. The result is that maximum display quality is maintained because all available 3D data is used in the rendering process, and the bandwidth problem is solved because only frames of the screen's native resolution have to be sent over a link such as HDMI or DisplayPort.
Achieving this solution requires taking into account that many different displays exist, and data defining the multiview 3D display must be provided. This data is called view mask data, which preferably is standardized such that players and displays of different brands can co-operate. Via the view mask data the display device is able to describe its physical configuration in such a way that the player does not have to be aware of display specifics when generating the multiple views, and the display functions in an uncompromised manner based on the multiple views provided in the display signal on the input interface of the 3D display.
In a preferred embodiment we propose to extend the current E-EDID specification by adding view mask data containing parameters on the lens or view mapping configuration in a new data block. The new data block may be made compliant with the Consumer Electronics Association (CEA) standard "CEA-861-E A DTV Profile for Uncompressed High Speed Digital Interfaces, March 2008". The view mask parameters are sent to the playback device, which, based on these parameters, calculates the correct views and mapping for the display.
Alternatively, in an embodiment, we propose to define a new data block that allows the display to send information on the processing as required. Thereto the view mask data comprises a pixel processing definition, and the video processor in the video processing device is arranged for executing the pixel processing definition for generating the multiple views. For example, the processing definition is a pixel shader specific to the display, which is sent to the playback device. The pixel shader is then executed on the video processor of the playback device to create the rendered output. It is noted that a pixel shader is a computation kernel function that computes color and other attributes of each pixel. Additional view mask data defining the computation kernel function is to be transmitted between the TV and the rendering/playback device. The advantage of this solution is that it supports display-specific post-processing/filtering; an illustrative sketch of executing such a kernel is given after the description of Figure 8 below.
Figure 8 shows a system architecture using view mask data. A 3D player 78 is coupled to a 3D display 79 for transferring a display signal 77. The 3D display transfers view data including view mask data 80 to the 3D player from a property data memory; according to HDMI this display data function is called EDID 87. In the player, 3D data 81 is used to render multiple views 82 based on the view mask data 80. The views may be further filtered 83 (Anti-aliasing) and interleaved 84 as described above, which functions may now be performed in the video processing device based on the view mask data provided. The 3D display device 79 may perform Cross-talk filtering 85, and performs displaying 86. It is noted that the anti-aliasing and view rendering steps may be moved to the player, but display-specific filtering (for instance to reduce cross-talk) may remain in the display. The player and display are connected by a link (such as HDMI).
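As a sketch of the pixel processing definition mentioned above (assuming the kernel has been received and is available as a callable; its transport and validation are left open here):

```python
def run_pixel_kernel(kernel, views, width, height):
    # 'kernel' stands in for the display-supplied computation kernel:
    # it returns the value of one output sub-pixel, given the rendered
    # views and the sub-pixel position on the panel.
    return [[kernel(views, x, y) for x in range(width)] for y in range(height)]

# Example kernel: plain interleaving for a basic vertical multiview panel,
# where sub-pixel column x belongs to view x modulo the number of views.
def example_kernel(views, x, y):
    return views[x % len(views)][y][x]
```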
The player is able to query the view mask data from the 3D display device that describes the view mask of the screen and other parameters relevant for the processing of the multiple views. In an embodiment every view has an associated view mask, which is a binary color image that indicates per sub-pixel whether it belongs to the view or to another one, e.g. by a binary value (1) or (0). Alternatively, as each sub-pixel belongs to exactly one view, a more efficient way to store a view mask for a many-view display would be as an image where the value of each sub-pixel is the view index, e.g. an ordinal value (0...N-1) with N the number of views (typically 9, but it could be many; experiments already have 45). Interleaving is performed by copying a view image only for those sub-pixels that belong to the view.
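A sketch of such interleaving, assuming the view mask is available as a full-resolution array of ordinal view indices (NumPy is used for brevity):

```python
import numpy as np

def interleave(views, view_mask):
    """Compose one native-resolution frame from N per-view images.

    views: sequence of N arrays of shape (H, W), one value per sub-pixel.
    view_mask: (H, W) integer array of ordinal view indices (0..N-1),
    telling which view each sub-pixel of the panel belongs to.
    """
    frame = np.empty_like(views[0])
    for n, view in enumerate(views):
        sel = view_mask == n      # sub-pixels belonging to view n
        frame[sel] = view[sel]    # copy only those sub-pixels
    return frame
```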
For various types of lenticular display, e.g. basic vertical, slanted and fractional, the structure of the view mask is periodic and relates to the type of lenticular display. To describe the entire view mask, it suffices to supply:
- the position and color (xref, yref, cref) of one sub-pixel that is in the view;
- the distance (Δx) in sub-pixel units (p) between two sub-pixels on the same scan line;
- the black-matrix size, i.e. the structure of the sub-pixels;
- the horizontal shift or slant (s) when moving down one scan line; and
- the order of the RGB components.
Hence, in an embodiment the view mask data comprises
- sub-pixel data of a reference sub-pixel in a respective view;
- a distance between sub-pixel units on a scan line;
- black matrix data indicative of a structure of the sub-pixels;
- an order of sub-pixel colors in the sub-pixel unit;
- a slant indicative of a difference in position in neighboring scan lines.
For example, the distance between sub-pixel units on a scan line corresponds to the lens pitch. The lens pitch as such, i.e. the width of the micro lenses expressed in relation to the sub-pixel pitch (e.g. 4.5), may be indicated in this parameter.
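Under the assumption of a strictly periodic mask, these few parameters suffice to enumerate the sub-pixel positions of any view. A sketch (black-matrix handling and color order are omitted, the reference sub-pixel is assumed to lie on scan line 0, and positions are rounded to the nearest sub-pixel):

```python
def subpixels_of_view(v, n_views, xref, dx, slant, width, height):
    # Enumerate the panel positions whose sub-pixels belong to view v,
    # relative to the reference sub-pixel at column xref: step by the
    # period dx along each scan line, and shift by the slant per line.
    positions = []
    for y in range(height):
        x = (xref + v * dx / n_views - slant * y) % dx
        while x < width:
            positions.append((int(round(x)), y))
            x += dx
    return positions
```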
The order of RGB components is also used for sub-pixel smoothing of fonts. Furthermore, the view mask data may include lenticular view configuration metadata. The 3D display device may send its lenticular configuration to the player (or PC).
The view mask may be defined by including a limited number of parameters, as discussed in the previous section. To be more future-proof, an alternative is to include all view masks in full in the view mask data. However, as the view mask is commonly periodic, only a fraction, namely one period, may be sent.
As future lenticular screens may have many views, sending the separate view masks requires substantial bandwidth. In an embodiment the view mask data comprises a frame having a value per sub-pixel which encodes the view number that the subpixel represents. Additionally the view mask data may include, per view number, the orientation of the viewing cone, which in combination with a reported physical display size and optimal viewing distance provides enough information to correctly render 3D data.
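A sketch of reconstructing the full-screen mask at the receiving side when only one period is transferred (NumPy used for brevity; the exact tiling rules would follow the standardized definition):

```python
import numpy as np

def full_mask(period, screen_h, screen_w):
    # 'period' holds one period of the view mask (a view index per
    # sub-pixel); tiling it over the panel and cropping to the native
    # resolution reconstructs the full-screen mask.
    reps_y = -(-screen_h // period.shape[0])  # ceiling division
    reps_x = -(-screen_w // period.shape[1])
    return np.tile(period, (reps_y, reps_x))[:screen_h, :screen_w]
```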
In an embodiment, the view mask data includes at least one of
- pixel structure data indicative of a location of pixels of respective views;
- a display type indicator indicative of the arrangement of a lenticular display;
- multiview data indicative of properties of the multiple views;
- mask period data indicative of properties of a repetitive pattern of pixels assigned to respective views;
- sub-pixel data indicative of a structure of sub-pixels for respective colors;
- lens data indicative of the arrangement of a lens configured on the pixels of the display.
Examples of the above view mask data are discussed below with reference to Figures 9-13. The view mask can be added to the E-EDID specification by defining a new CEA Data Block. For example, one of the Extended Tag Codes reserved for video-related blocks (i.e. 6) may be used for a "3D View Mask Data Block".
Figure 9 shows a 3D View Mask Data Block. The figure shows a table of the data block. The block is formatted according to the CEA-861-E standard, as indicated for various fields in the Table. A few fields (bytes 0, 1, 32, 33, 64, 65) indicate the new Extended Tag Code and the type of data in the data block. The field of byte 2 defines the number of views. The fields of bytes 3-4 define the size (height and width) of a period of the view mask, i.e. of the repetitive pattern therein. The parameters in fields 5-6 provide an example of view data that may be relevant for rendering the multiple views. Part of the data block has a variable size based on the number of views and the size of the view mask. Fields 7-31 and 34-63 are defined by the Tables according to Figures 10 and 11. Finally, fields 66-76 and 77-85 are defined by the Tables according to Figures 12 and 13, respectively.
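A hedged sketch of reading the fixed fields of such a data block (the byte offsets follow the example table of Figure 9; the tag value and exact layout are illustrative, not normative):

```python
VIEW_MASK_TAG = 6  # example Extended Tag Code for the 3D View Mask Data Block

def parse_view_mask_block(block: bytes):
    # Fixed header of the example data block of Figure 9: the first bytes
    # carry the CEA data block header and Extended Tag Code, byte 2 the
    # number of views, and bytes 3 and 4 the height and width of one
    # period of the view mask.
    if block[1] != VIEW_MASK_TAG:
        raise ValueError("not a 3D View Mask Data Block")
    n_views = block[2]
    period_h, period_w = block[3], block[4]
    # Fields from byte 5 onward hold further view parameters and the
    # variable-size per-view and per-sub-pixel tables of Figures 10-13.
    return n_views, period_h, period_w
```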
Figure 10 shows a view description. A table 91 shows a description of a view, i.e. a length parameter and a view offset at the optimal viewing distance for the centre pixel. The table may be repeated for every view.
Figure 11 shows view mask data of sub-pixels. A table 92 shows the view number of a set of sub-pixels for the respective colors. The values for each sub-pixel provide the view mask.
Figure 12 shows a sub-pixel structure. The sub-pixel structure is also called the black matrix. A table 93 shows parameters that define the pixel structure. The pixel structure parameter may be an identifier referring to a table stored in the "rendering" device that provides a mapping between the identifier and the pixel structure. The pixel layout parameter may be an identifier referring to a table stored in the "rendering" device that provides a mapping between the identifier and the pixel layout, i.e. RGB, BGR, V-RGB, etc.
Figure 13 shows lens configuration data. The lens configuration data may be included in the view mask data. A table 94 shows lens parameters. The lens type may be an identifier 0-255 referring to a table stored in the "rendering" device that provides a mapping between the lens type and the type of "lens" used (e.g. barrier, lenticular, micro-lens arrays, etc.). The lens parameter is an identifier 0-255 referring to a table stored in the "rendering" device that provides a mapping between the lens parameter identifier and specific lens characteristics (shape, angle of view, etc.). The value depends on the value of the lens type.
It is noted that depth capabilities of displays may differ between designs, e.g. due to the thickness of a glass plate in front of the pixels, laminated or glued lenses, etc. Therefore, in an embodiment, the view data is extended to include depth parameters indicative of depth capabilities of the 3D display. The video processor is arranged for adapting the multiple views based on the depth parameters. By applying the view data the video processor is enabled to adjust the multiple views to the depth capabilities.
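For instance (a sketch; the parameter names are assumptions, the embodiment only specifies that the views are adapted based on the depth parameters), the disparities used during view rendering may be compressed into the range the display reports:

```python
def adapt_disparity(disparity, display_max, content_max):
    # Scale the disparities used during view rendering so that the
    # largest disparity in the content stays within what the 3D display
    # reports it can reproduce without artifacts.
    if content_max <= display_max:
        return disparity                # already within capabilities
    return disparity * (display_max / content_max)
```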
In an embodiment, user preference settings metadata may be included in the view data. People may have different preferences for depth in 3D video. Some like a lot of depth whereas others like a subtle amount of depth. The same could hold, for example, for the crispness of the depth and the zero plane. The view data is extended to include user parameters indicative of settings for 3D viewing. By applying the view data the video processor is enabled to adjust the multiple views to the user preferences.
In an embodiment, the video processing device is arranged for including depth metadata in the display signal towards the 3D display device. The depth metadata may be a parameter indicating the minimum depth of the current video information, or a depth map indicative of the depths occurring in various parts of the screen. It is to be noted that the depth metadata relates to the combined main and auxiliary data as processed in the video processing device. The depth metadata enables the 3D display to position in depth further auxiliary data, like a menu or button, in front of any other data present in the 3D video information.
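A minimal sketch of how the 3D display could use such depth metadata (the margin value and the convention that smaller values are closer to the viewer are illustrative assumptions):

```python
def osd_placement_depth(min_depth, margin=0.05):
    # Position display-generated auxiliary data (e.g. a menu or button)
    # just in front of the nearest element reported by the depth
    # metadata, so it is never intersected by the video content.
    return min_depth - margin
```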
It is to be noted that the invention may be implemented in hardware and/or software, using programmable components. A method for implementing the invention has the steps corresponding to the functions defined for the system as described with reference to Figure 1.
It will be appreciated that the above description for clarity has described embodiments of the invention with reference to different functional units and processors. However, it will be apparent that any suitable distribution of functionality between different functional units or processors may be used without deviating from the invention. For example, functionality illustrated to be performed by separate units, processors or controllers may be performed by the same processor or controller. Hence, references to specific functional units are only to be seen as references to suitable means for providing the described functionality rather than indicative of a strict logical or physical structure or organization. The invention can be implemented in any suitable form including hardware, software, firmware or any combination of these.
It is noted, that in this document the word 'comprising' does not exclude the presence of other elements or steps than those listed and the word 'a' or 'an' preceding an element does not exclude the presence of a plurality of such elements, that any reference signs do not limit the scope of the claims, that the invention may be implemented by means of both hardware and software, and that several 'means' or 'units' may be represented by the same item of hardware or software, and a processor may fulfill the function of one or more units, possibly in cooperation with hardware elements. Further, the invention is not limited to the embodiments, and the invention lies in each and every novel feature or combination of features described above or recited in mutually different dependent claims.

Claims

CLAIMS:
1. Video processing device for processing three dimensional [3D] video information, the 3D video information comprising 3D video data and auxiliary data, the device (100) comprising
- input means (101,102,103) for receiving the 3D video data according to an input format,
- a video processor (106) for processing the 3D video information and generating a 3D display signal, the 3D display signal representing the 3D video data and the auxiliary data according to a display format,
- a display interface (107) for interfacing with a 3D display device (120) for transferring the 3D display signal (110),
which video processing device is arranged for receiving view data including view mask data from the 3D display device, the view mask data defining a pixel arrangement of multiple views to be displayed by the 3D display device, and
the video processor is arranged for generating the multiple views according to the view mask data, and for including the multiple views in the display signal, the display format being different from the input format.
2. Video processing device as claimed in claim 1, wherein the display interface (107) is arranged for said receiving the view data including view mask data from the 3D display device (120) via the 3D display signal (110).
3. Video processing device as claimed in claim 2, wherein the display interface (107) is a High Definition Multimedia Interface [HDMI] arranged for said receiving the view data including view mask data from the 3D display device via Enhanced Extended Display Identification Data [E-EDID].
4. Video processing device as claimed in claim 1, wherein the view mask data comprises at least one of
- pixel structure data indicative of a location of pixels of respective views;
- a display type indicator indicative of the arrangement of a lenticular display;
- multiview data indicative of properties of the multiple views;
- mask period data indicative of properties of a repetitive pattern of pixels assigned to respective views;
- sub-pixel data indicative of a structure of sub-pixels for respective colors;
- lens data indicative of the arrangement of a lens configured on the pixels of the display.
5. Video processing device as claimed in claim 1, wherein the view mask data comprises
- sub-pixel data of a reference sub-pixel in a respective view;
- a distance between sub-pixel units on a scanline;
- a size of a black matrix;
- an order of sub-pixel colors in the sub-pixel unit;
- a slant indicative of a difference in position in neighboring scanlines.
6. Video processing device as claimed in claim 1, wherein the view mask data comprises a pixel processing definition, and the video processor (106) is arranged for executing the pixel processing definition for generating the multiple views.
7. Video processing device as claimed in claim 1, wherein the view data comprises at least one of:
- user parameters indicative of settings for 3D viewing;
- depth parameters indicative of depth capabilities of the 3D display;
and the video processor is arranged for adapting the multiple views based on the parameters.
8. Display device for displaying three dimensional [3D] video information, the 3D video information comprising 3D video data and auxiliary data,
the device (120) comprising
- an interface (121) for interfacing with a video processing device (100) for transferring a 3D display signal (110) representing the 3D video data and the auxiliary data according to a display format,
- a 3D display (123) for displaying multiple views, a pair of different views being arranged to be perceived by respective eyes of a viewer,
- a display processor (122) for providing a display control signal representing the multiple views to the 3D display based on the 3D display signal, the display device being arranged for transferring view data including view mask data to the video processing device, the view mask data defining a pixel arrangement of the multiple views, and
the display processor is arranged for providing the display control signal based on retrieving, from the display signal, the multiple views according to the view mask data.
9. 3D display signal for transferring three dimensional [3D] video information via an interface between a video processing device and a display device, the 3D video information comprising 3D video data and auxiliary data,
- the 3D display signal representing the 3D video data and the auxiliary data according to a display format,
- the display device comprising a 3D display for displaying multiple views, a pair of different views being arranged to be perceived by respective eyes of a viewer,
the 3D display signal comprising
- view data including view mask data to be transferred from the display device to the video processing device, the view mask data defining a pixel arrangement of the multiple views, and
- the multiple views according to the view mask data to be transferred from the video processing device to the display device.
10. Method for transferring three dimensional [3D] video information via an interface between a video processing device and a display device, the 3D video information comprising 3D video data and auxiliary data,
- the display device comprising a 3D display for displaying multiple views, a pair of different views being arranged to be perceived by respective eyes of a viewer,
the method comprising
- processing the 3D video information and generating a 3D display signal, the 3D display signal representing the 3D video data and the auxiliary data according to a display format,
- transferring the 3D display signal via the interface to the display device,
which method comprises
- transferring view data including view mask data from the display device to the video processing device, the view mask data defining a pixel arrangement of the multiple views, and
- including, in the 3D display signal, the multiple views according to the view mask data.
11. Method for processing three dimensional [3D] video information, the 3D video information comprising 3D video data and auxiliary data,
the method comprising
- receiving the 3D video data according to an input format,
- processing the 3D video information and generating a 3D display signal, the 3D display signal representing the 3D video data and the auxiliary data according to a display format,
- interfacing with a 3D display device (120) for transferring the 3D display signal (110), which step of receiving is arranged for receiving view data including view mask data from the 3D display device, the view mask data defining a pixel arrangement of multiple views to be displayed by the 3D display device, and
which step of generating is arranged for generating the multiple views according to the view mask data, and for including the multiple views in the display signal, the display format being different from the input format.
12. Method for displaying three dimensional [3D] video information, the 3D video information comprising 3D video data and auxiliary data,
the method comprising
- interfacing with a video processing device (100) for transferring a 3D display signal (110) representing the 3D video data and the auxiliary data according to a display format,
- displaying multiple views, a pair of different views being arranged to be perceived by respective eyes of a viewer,
- providing a display control signal representing the multiple views to the 3D display based on the 3D display signal,
the step of interfacing being arranged for transferring view data including view mask data to the video processing device, the view mask data defining a pixel arrangement of the multiple views, and
the step of providing is arranged for providing the display control signal based on retrieving, from the display signal, the multiple views according to the view mask data.
13. Computer program product for processing three dimensional [3D] video information, which program is operative to cause a processor to perform the method as claimed in any one of the claims 10, 11 or 12.
PCT/IB2011/052938 2010-07-12 2011-07-04 Signaling for multiview 3d video WO2012007867A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP10169210 2010-07-12
EP10169210.1 2010-07-12

Publications (1)

Publication Number Publication Date
WO2012007867A1

Family

ID=44583206

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2011/052938 WO2012007867A1 (en) 2010-07-12 2011-07-04 Signaling for multiview 3d video

Country Status (2)

Country Link
TW (1) TW201215102A (en)
WO (1) WO2012007867A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110428354B (en) * 2019-06-25 2023-04-07 福建华佳彩有限公司 Panel sampling method, storage medium and computer
WO2022225977A1 (en) * 2021-04-19 2022-10-27 Looking Glass Factory, Inc. System and method for displaying a three-dimensional image

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007072289A2 (en) * 2005-12-20 2007-06-28 Koninklijke Philips Electronics N.V. Autostereoscopic display device
WO2009130542A1 (en) * 2008-04-24 2009-10-29 Nokia Corporation Plug and play multiplexer for any stereoscopic viewing device
WO2010058354A1 (en) * 2008-11-24 2010-05-27 Koninklijke Philips Electronics N.V. 3d video reproduction matching the output format to the 3d processing ability of a display

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
C. L. ZITNICK ET AL.: "High-Quality Video View Interpolation Using a Layered Representation", ACM SIGGRAPH AND ACM TRANS. ON GRAPHICS, August 2004 (2004-08-01)
CEA-861-E A DTV PROFILE FOR UNCOMPRESSED HIGH SPEED DIGITAL INTERFACES, March 2008 (2008-03-01)
FENG CHEN, IRENE CHENG, ANUP BASU: "Integrating 3D Point Clouds with Multi-viewpoint Video", DEPT. OF COMPUTING SC., 2009
HIGH DEFINITION MULTIMEDIA INTERFACE SPECIFICATION VERSION 1.3A, 10 November 2006 (2006-11-10)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10165251B2 (en) 2012-06-14 2018-12-25 Dolby Laboratories Licensing Corporation Frame compatible depth map delivery formats for stereoscopic and auto-stereoscopic displays
EP2862357B1 (en) * 2012-06-14 2018-03-28 Dolby Laboratories Licensing Corporation Frame compatible depth map delivery formats for stereoscopic and auto-stereoscopic displays
EP2914003A4 (en) * 2012-10-25 2016-11-02 Lg Electronics Inc Method and apparatus for processing edge violation phenomenon in multi-view 3dtv service
US9578300B2 (en) 2012-10-25 2017-02-21 Lg Electronics Inc. Method and apparatus for processing edge violation phenomenon in multi-view 3DTV service
US9596446B2 (en) 2013-02-06 2017-03-14 Koninklijke Philips N.V. Method of encoding a video data signal for use with a multi-view stereoscopic display device
WO2014122012A1 (en) * 2013-02-06 2014-08-14 Koninklijke Philips N.V. System for generating intermediate view images
WO2014121860A1 (en) * 2013-02-06 2014-08-14 Koninklijke Philips N.V. System for generating an intermediate view image
EP2954674B1 (en) 2013-02-06 2017-03-08 Koninklijke Philips N.V. System for generating an intermediate view image
EP2765775A1 (en) * 2013-02-06 2014-08-13 Koninklijke Philips N.V. System for generating intermediate view images
CN104982033B (en) * 2013-02-06 2017-11-24 皇家飞利浦有限公司 System for generating medial view image
RU2640645C2 (en) * 2013-02-06 2018-01-10 Конинклейке Филипс Н.В. System for generating intermediate image
CN104982033A (en) * 2013-02-06 2015-10-14 皇家飞利浦有限公司 System for generating intermediate view images
US9967537B2 (en) 2013-02-06 2018-05-08 Koninklijke Philips N.V. System for generating intermediate view images
EP2765774A1 (en) * 2013-02-06 2014-08-13 Koninklijke Philips N.V. System for generating an intermediate view image
US10212532B1 (en) 2017-12-13 2019-02-19 At&T Intellectual Property I, L.P. Immersive media with media device
US10812923B2 (en) 2017-12-13 2020-10-20 At&T Intellectual Property I, L.P. Immersive media with media device
US11212633B2 (en) 2017-12-13 2021-12-28 At&T Intellectual Property I, L.P. Immersive media with media device
US11632642B2 (en) 2017-12-13 2023-04-18 At&T Intellectual Property I, L.P. Immersive media with media device

Also Published As

Publication number Publication date
TW201215102A (en) 2012-04-01

Similar Documents

Publication Publication Date Title
JP5809064B2 (en) Transfer of 3D image data
US8422801B2 (en) Image encoding method for stereoscopic rendering
WO2012007867A1 (en) Signaling for multiview 3d video
EP2235685B1 (en) Image processor for overlaying a graphics object
US20110298795A1 (en) Transferring of 3d viewer metadata
US20120069154A1 (en) Transferring of 3d image data
US20110293240A1 (en) Method and system for transmitting over a video interface and for compositing 3d video and 3d overlays
EP2299724A2 (en) Video processing system and video processing method
US11381800B2 (en) Transferring of three-dimensional image data
CN103442241A (en) 3D displaying method and 3D displaying device
US9197883B2 (en) Display apparatus and control method thereof
JP6085626B2 (en) Transfer of 3D image data
JP2012205285A (en) Video signal processing apparatus and video signal processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11744075

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11744075

Country of ref document: EP

Kind code of ref document: A1