WO2014042299A1 - Method and apparatus of controlling a content on 3-dimensional display - Google Patents

Method and apparatus of controlling a content on 3-dimensional display

Info

Publication number
WO2014042299A1
Authority
WO
WIPO (PCT)
Prior art keywords
window
depth value
objects
depth
display
Prior art date
Application number
PCT/KR2012/007377
Other languages
French (fr)
Inventor
Soonbo HAN
Donghyun Kang
Hyungseok JANG
Sangjo Park
Dongyoung Lee
Original Assignee
Lg Electronics Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lg Electronics Inc. filed Critical Lg Electronics Inc.
Priority to PCT/KR2012/007377 priority Critical patent/WO2014042299A1/en
Priority to KR1020157006999A priority patent/KR101691839B1/en
Publication of WO2014042299A1 publication Critical patent/WO2014042299A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/293Generating mixed stereoscopic images; Generating mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/128Adjusting depth or disparity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/156Mixing image signals

Definitions

  • the present invention relates to a method and apparatus for controlling content on a 3-dimensional (3D) display and, more particularly, to a method and apparatus for controlling content output by adjusting depths of a window and an object displayed on a 3D display.
  • 3D images provide a 3D effect using stereoscopic visual principles of human eyes. Since a person feels perspective according to binocular parallax caused by a distance between eyes, which is approximately 65mm, the 3D effect and perspective of a 3D image can be obtained by providing the 3D image such that left and right eyes respectively see 2D images.
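The binocular geometry described above can be sketched numerically. The following is an illustrative calculation, not part of the patent: by similar triangles, a point drawn with a horizontal screen disparity appears displaced from the screen plane. Sign conventions vary between systems; this sketch assumes positive (uncrossed) disparity pushes the fused point behind the screen. All names are hypothetical.

```python
EYE_SEPARATION_MM = 65.0  # approximate interocular distance cited above

def perceived_depth_mm(disparity_mm: float, viewing_distance_mm: float,
                       eye_separation_mm: float = EYE_SEPARATION_MM) -> float:
    """Distance of the fused point behind (+) or in front of (-) the screen."""
    if disparity_mm >= eye_separation_mm:
        # Rays become parallel or divergent and cannot be fused.
        raise ValueError("disparity at or beyond eye separation cannot be fused")
    # Similar triangles: z / (D + z) = p / e  =>  z = D * p / (e - p)
    return viewing_distance_mm * disparity_mm / (eye_separation_mm - disparity_mm)

# Zero disparity: the point lies exactly on the screen plane.
assert perceived_depth_mm(0.0, 2000.0) == 0.0
```

Under this convention a disparity of half the eye separation places the point a full viewing distance behind the screen, which is why broadcast systems bound the usable disparity range.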
  • 3D image display methods include a stereoscopic scheme, a volumetric scheme, a holographic scheme, etc.
  • a left view image seen by a left eye of a user and a right view image seen by a right eye of the user are provided such that the user can recognize a 3D effect by respectively viewing the left view image and the right view image through his left and right eyes using polarizing eyeglasses or a display device.
  • a 3D service considers a 3D effect and presence to be important. Accordingly, content may be seen differently according to how depths as well as locations and sizes of objects are controlled.
  • An object of the present invention is to provide a method and apparatus for controlling display of an additional window and/or object on a 3D display at the request of a user when one or more windows or objects have been displayed on the 3D display.
  • a method of controlling content on a 3D (3-Dimensional) display includes displaying a first window including one or more 3D objects, and displaying a second window according to a first 3D object included in the first window, wherein the second window is displayed with a depth value larger than a depth value of the first 3D object or a depth value of a 3D object having a maximum depth value in the first window, or with a basic depth value.
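The depth rule for the second window can be sketched as follows. This is a minimal, hypothetical reading of the claim, with illustrative names and an illustrative margin: the second window is placed above the selected object (or above the deepest object in the first window), or at the basic depth value.

```python
BASIC_DEPTH = 0  # the "basic depth value"; the text later sets it to zero

def second_window_depth(object_depths, selected_depth,
                        use_basic=False, margin=1):
    """Depth for a second window opened from an object in the first window."""
    if use_basic:
        return BASIC_DEPTH
    # Larger than the selected object's depth or the window's maximum depth,
    # whichever policy applies; the margin is an illustrative offset.
    return max(selected_depth, max(object_depths)) + margin

# FIG. 3 example: objects at depths 5 and 10; a pop-up from the depth-10
# object ends up in front of both.
assert second_window_depth([5, 10], selected_depth=10) == 11
```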
  • depth values of all 3D objects included in the first window may be changed to values smaller than the basic depth value.
  • the depth value of each of the 3D objects may be linearly changed at the ratio of the depth value of each 3D object to the basic depth value.
  • alternatively, all the depth values of the 3D objects included in the first window may be changed to a predetermined depth value on the basis of the basic depth value, irrespective of the previous depth values of the 3D objects.
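The wording of the two adjustments above is terse, so the following sketch is one possible reading with hypothetical names: when the second window appears, every object in the first window is pushed behind the basic depth, either proportionally (preserving relative depth order) or to a single predetermined depth.

```python
def rescale_behind(depths, basic_depth=0, factor=0.5, gap=1):
    """Compress depths by `factor`, then shift so all lie below basic_depth."""
    scaled = [d * factor for d in depths]
    shift = basic_depth - gap - max(scaled)
    return [s + shift for s in scaled]

def flatten(depths, basic_depth=0, gap=1):
    """Ignore previous depths; put every object at one predetermined depth."""
    return [basic_depth - gap] * len(depths)

out = rescale_behind([5, 10])
assert all(d < 0 for d in out)    # everything ends up behind the basic depth
assert out[1] - out[0] == 2.5     # relative depth ordering is preserved
assert flatten([5, 10]) == [-1, -1]
```

The `factor` and `gap` values are illustrative; the patent only requires that the results lie below the basic depth value.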
  • the basic depth value may be zero (0).
  • the 3D objects may include at least one of a video clip, an image, an input field, link information and content-editable attributes.
  • the second window may output at least one of a pop-up window, a virtual keyboard, a dialog box and a web notification message according to the 3D objects.
  • the virtual keyboard may have a depth value at which the display location of the virtual keyboard corresponds to touched points on the virtual keyboard for touch input of a user.
  • the depth value of the second window may be adjusted to be the basic depth value or a predetermined depth value.
  • the method may further include extracting a maximum depth value in the first window when a predetermined 3D object included in the first window is selected, and determining the depth value of the second window on the basis of the extracted maximum depth value.
  • the method may further include, when a predetermined 3D object included in the first window is selected, extracting depth values of one or more 3D objects adjacent to the selected 3D object, and determining the depth value of the second window on the basis of a depth value of a 3D object overlapping with the second window, from among the 3D objects adjacent to the selected 3D object.
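A hedged sketch of this adjacency variant: only the neighbours that the second window would actually cover need to constrain its depth. The rectangle representation and all names are hypothetical.

```python
def overlaps(a, b):
    """Axis-aligned rectangle overlap test; rectangles are (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def popup_depth(popup_rect, neighbours, margin=1):
    """neighbours: list of (rect, depth) for objects near the selected one.
    The pop-up's depth is based only on neighbours it would cover."""
    covered = [d for rect, d in neighbours if overlaps(popup_rect, rect)]
    if not covered:
        return margin  # nothing underneath constrains the pop-up
    return max(covered) + margin

# The pop-up covers the object at depth 10 but not the far-away object at
# depth 20, so only the covered object determines its depth.
assert popup_depth((0, 0, 4, 4),
                   [((2, 2, 4, 4), 10), ((100, 100, 4, 4), 20)]) == 11
```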
  • a method of controlling content on a 3D display includes displaying a plurality of windows including one or more 3D objects, when a predetermined 3D object is selected, extracting information about a window including the selected 3D object, displaying the window including the selected 3D object as a top window on the basis of the extracted window information, and displaying a second window according to the selected 3D object with a depth value larger than a depth value of the 3D object in the window.
  • a 3D display includes a receiver for receiving a signal corresponding to a 3D object, a decoder for decoding the received signal corresponding to the 3D object, a display unit for configuring and displaying the 3D object and a window for the 3D object, and a controller for controlling display of a first window including one or more 3D objects and a second window according to a first 3D object included in the first window, wherein the controller controls the second window to be displayed with a depth value larger than one of a depth value of the first 3D object or a depth value of a 3D object having a maximum depth value in the first window, or with a basic depth value.
  • an additional window and/or object can be easily displayed such that a user can easily recognize the additional window and/or object.
  • FIG. 1 illustrates components of a 3D display according to an embodiment of the present invention
  • FIG. 2 illustrates a method of outputting OSD on a 3D display
  • FIG. 3 illustrates a first window including objects having different depths
  • FIG. 4 illustrates an exemplary second window popped up on the first window shown in FIG. 3;
  • FIG. 5 illustrates another exemplary second window popped up on the first window shown in FIG. 3;
  • FIG. 6 illustrates another exemplary second window popped up on the first window shown in FIG. 3;
  • FIG. 7 illustrates exemplary objects capable of generating an additional object or window in a window
  • FIGS. 8 to 10 illustrate various embodiments of the additional object or window generated according to the objects shown in FIG. 7;
  • FIG. 11 illustrates an exemplary processing method when a virtual keyboard is provided
  • FIG. 12 is a flowchart illustrating a method for controlling content on a 3D web browser according to an embodiment of the present invention.
  • FIG. 13 is a flowchart illustrating a method for controlling content on a 3D web browser according to another embodiment of the present invention.
  • a method and apparatus for controlling the locations and/or depth ranges of a window and an object on a 3D display, which considers a 3D effect and presence to be important, are described because content displayed on the 3D display can give different impressions to a user according to how the depth of the content is controlled.
  • the present invention provides a method and apparatus for controlling display of an additional window and/or additional object on the 3D display even when one or more windows or objects are being displayed on the 3D display.
  • 3D display represents a digital device including a display capable of displaying 3D content.
  • the 3D display may be a display device that simply outputs content processed in a set-top box (STB) or may be integrated with the STB.
  • Examples of the 3D display include a stand device such as a 3D TV and a mobile device such as a smartphone, a tablet PC, a notebook PC, etc.
  • 3D web browser is provided on the 3D display or displayed at the request of a user.
  • Window/object is an overlay and may be used as a predetermined unit of 3D content provided on the 3D web browser. For example, if an information unit is an object, one or more objects can be included in a window, and this window can be regarded as a container in the disclosure. Furthermore, the window and/or object may induce an additional window or object, that is, a new window or object, according to user choice and characteristics of the window and/or object. The present invention is not limited to the terms window and object.
  • minimum and maximum disparity (disparity_min and/or disparity_max) values of video content may be transmitted as information necessary to display an overlay.
  • When an overlay such as an on-screen display (OSD), a web browser or the like needs to be displayed while the video content is broadcast, a receiver sets an appropriate depth of the overlay and displays the overlay at the set depth on the basis of the transmitted display information so as to provide a 3D effect to a user.
  • An overlay includes an embedded overlay (or open overlay) including a graphical image that is not video data but included in a video signal and transmitted, such as a broadcasting station logo, an open caption, etc.
  • This embedded overlay is data that must be output when the receiver decodes a video stream, distinguished from a closed caption and a subtitle. That is, the embedded overlay is a caption or graphic (e.g. sports game score, entertainment program caption, newsflash, etc.) embedded in content, and the closed caption/graphic may be a caption/graphic transmitted through a separate stream.
  • a variable such as an interface necessary to check and provide link information and additional information, may be additionally needed even when one or more objects are displayed in a window.
  • depth control in the 3D service may largely affect the 3D effect. That is, if the 3D service is not appropriately arranged or depth control is not properly performed, a user may be confused or the 3D effect may be deteriorated.
  • FIG. 1 illustrates exemplary components of a 3D display according to an embodiment of the present invention.
  • An exemplary 3D display may include a receiver for receiving a signal corresponding to a 3D object, a decoder for decoding the 3D object, a display for configuring and displaying the 3D object and a window for the 3D object, and a controller for controlling display of a first window including one or more 3D objects and a second window according to a first 3D object included in the first window such that the second window is displayed with a depth value larger than a depth value of one of the first 3D object and a 3D object having a maximum depth value in the first window, or with a basic depth value.
  • exemplary components of the 3D display may include a receiving unit 1010, a decoder 1020, a demultiplexer 1030, an SI processor 1040, a 3D video decoder 1050, a primary video decoder 1052, a secondary video decoder 1054, a 3D graphic engine 1060, a left view mixer 1070, a right view mixer 1080, and a 3D output formatter 1090.
  • the receiving unit 1010 tunes a channel on which 3D content is transmitted through an RF channel to receive a 3D signal.
  • the receiving unit 1010 may receive an Ethernet frame or IP datagram through a network instead of the RF channel.
  • the receiving unit 1010 may be implemented as individual components for respectively processing the Ethernet frame and the IP datagram.
  • the receiving unit 1010 may further include a component necessary to process content included in the IP datagram, or a component which will be described below may be modified to process the content included in the IP datagram.
  • the decoder 1020 decodes the 3D signal received through the receiving unit 1010.
  • the decoder 1020 may be called a VSB decoder or an OFDM decoder according to decoding schemes, or may be implemented as individual components for respective decoding schemes.
  • the demultiplexer 1030 demultiplexes a transport stream packet included in the 3D signal using a packet identifier to divide the transport stream packet into an audio signal, a video signal and data and transmits the audio signal, video signal and data to corresponding components.
  • the SI processor 1040 processes a signaling signal received through the demultiplexer 1030.
  • SI represents system information, service information or signaling information and may be program specific information/program and system information protocol/digital video broadcast-service information (PSI/PSIP/DVB-SI) according to schemes.
  • the SI processor 1040 may temporarily store processed SI data in connection with an internal or external database.
  • the 3D video decoder 1050 processes video data demultiplexed by the demultiplexer 1030.
  • the 3D video decoder 1050 may include the primary video decoder 1052 and/or the secondary video decoder 1054 according to the 3D signal transmission scheme to decode 3D video data.
  • the primary video decoder 1052 decodes primary video data. For example, if MVC coding is applied to video data, primary video data may be a base or enhanced layer signal. Alternatively, the primary video decoder 1052 may decode left view video data.
  • the secondary video decoder 1054 decodes secondary video data. For example, if MVC coding is applied to video data, secondary video data may be an enhanced or base layer signal. Alternatively, the secondary video decoder 1054 may decode right view video data.
  • the 3D graphic engine 1060 processes a graphical output such as OSD. For example, the 3D graphic engine 1060 determines a depth at which OSD will be displayed at a specific service, program, event and/or scene level by analyzing 3D depth information and controls display of the OSD. In accordance with the present invention, the 3D graphic engine 1060 may display a web browser, determine 3D depths of one or more windows and objects and display the windows and objects on the web browser.
  • the left view mixer 1070 processes a left view that forms a 3D video image and the right view mixer 1080 processes a right view that forms the 3D video image.
  • the 3D output formatter 1090 processes the left view and the right view constituting the 3D video image such that the left view and the right view can be displayed on a screen.
  • the 3D output formatter 1090 may process OSD display such that OSD can be displayed with the left view and the right view.
  • the 3D display may be aware of the above-mentioned 3D scene-level depth information included in SI, for example, the PMT (Program Map Table), VCT (Virtual Channel Table), SDT (Service Description Table) and/or EIT (Event Information Table), etc., and of scene-level or time-period based depth range information about a currently viewed program, channel, service, etc. in the case of a 3D broadcast service.
  • the receiver can use the above-described video depth range information when a window and/or an object, an interaction or message of a user, graphic according to execution of an alarm function, and OSD are displayed on a web browser according to the present invention.
  • the receiver may control appropriate display of each overlay by determining a depth range and a depth degree within a corresponding range using the disparity_min and disparity_max values.
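One plausible form of this control, sketched with hypothetical names: the receiver clamps an overlay's requested disparity into the signalled [disparity_min, disparity_max] range so the overlay never exceeds the depth budget transmitted with the video content.

```python
def overlay_disparity(requested, disparity_min, disparity_max):
    """Pick a disparity for an OSD/web-browser overlay that stays inside
    the range transmitted with the video content."""
    return max(disparity_min, min(requested, disparity_max))

# A request beyond the signalled maximum is pulled back into range;
# an in-range request passes through unchanged.
assert overlay_disparity(50, disparity_min=-20, disparity_max=30) == 30
assert overlay_disparity(0, disparity_min=-20, disparity_max=30) == 0
```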
  • FIG. 2 illustrates a method of outputting OSD on a 3D display.
  • FIG. 2 shows a first window 2010 on the assumption that the first window 2010 is displayed on a web browser in the 3D display.
  • the first window 2010 includes a first object 2020 displayed at a first depth in area A and a second object 2030 displayed at a second depth in area B.
  • the first depth and the second depth may be equal to each other.
  • the 3D display needs to know at least one of depth_min and depth_max values and disparity_min and disparity_max values to determine a depth range or a disparity range and an appropriate depth in a corresponding range in order to provide an object at a predetermined depth on the first window 2010 in the web browser.
  • an OSD 2020 and a web browser 2030 may be respectively displayed on area A and area B of the 3D display screen 2010.
  • the OSD image 2020 and the web browser 2030 displayed on the areas may have the same depth.
  • a window and/or an object based on the OSD image 2020 and the web browser 2030 may or may not be restricted by the depth ranges or disparity ranges of the OSD image 2020 and the web browser 2030.
  • At least one of the above-mentioned depth_min, depth_max, disparity_min, disparity_max, depth_range and disparity_range values may be transmitted from a transmitter, for example.
  • the at least one of the values may be transmitted in the form of a table and/or a descriptor at a system level or transmitted through an SEI message at a video level.
  • this information is referred to when a window and/or an object are displayed on the 3D display, and a new range may be defined using corresponding information as necessary and applied to and used for the 3D display.
  • FIGS. 3 to 6 illustrate display of an additional window such as a pop-up window on a 3D web browser on which at least one window has been displayed.
  • FIG. 3 illustrates a first window including objects having different depths.
  • a first window 310 is displayed on a screen and object A 320 and object B 330 belonging to the first window 310 respectively have predetermined depths on the screen. It is assumed that both object A 320 and object B 330 are 3D objects.
  • FIG. 3(b) shows implementation of object A and object B displayed on the screen, shown in FIG. 3(a).
  • object A 320 has a depth value of 5
  • object B 330 has a depth value of 10.
  • the 3D web browser service can provide an additional window and/or object automatically or by accessing the provided window or object.
  • FIGS. 4 to 6 illustrate exemplary second windows provided in this case.
  • FIG. 4 illustrates an exemplary second window popped up on the first window shown in FIG. 3.
  • a second window 430 which is provided in the form of a pop-up window when object B in the first window is accessed, has a depth value of 0, which is different from those of object A and object B.
  • an area in which the second window 430 overlaps object A and object B may exist due to the size of the second window 430.
  • This overlap is caused by the size and location of the additional window, for example, and thus may be minimized by adjusting the size and location of the additional window.
  • Since the additional window or object is provided according to a previously selected window or object and may be information in which the user is interested or important information, it is desirable in terms of information transmission or user satisfaction that the additional window or object have a depth equal to or larger than that of the previously selected window or object.
  • FIG. 5 illustrates another exemplary second window popped up on the first window shown in FIG. 3.
  • the additional second window may have a depth value larger than those of all objects in the first window.
  • While the second window 540 has the same size and location as those of the second window 430 shown in FIG. 4(b), it has a depth value of 15, which is larger than the depth values of object A and object B in the first window.
  • FIG. 6 illustrates another exemplary second window popped up on the first window shown in FIG. 3.
  • object A and object B included in the first window respectively have depth values of -5 and 0.
  • the second window 640 has the same depth value as that of object B, which has the largest depth value in the first window, and thus correct information about the second window may be provided even when an area in which the second window and objects A and B overlap is generated.
  • the present invention can display the first window including the selected object as a top window and output a pop-up window according to the selected object, that is, the second window such that the second window has a depth value equal to or larger than that of the first window.
  • Control of depth values of an additional window and/or object has been described. In this case, it is possible to control 3D content more correctly by adding control of locations and/or sizes of the additional window and/or object to control of the depth values thereof.
  • the depth values of a previously selected window and/or object may be decreased to values smaller than their current depth values while the depth values, locations and/or sizes of the additional window and/or object are maintained.
  • the previously selected window and/or object can be controlled to have minimum depth values within the depth_max value of the aforementioned depth range or the disparity_max value of the disparity range.
  • the above two methods may be combined. For example, the depth values of the previously selected window and/or object are decreased and the depth values of the additional window and/or object are increased. In this case, it is possible to further consider size and location factors.
  • the aforementioned depth control schemes may be implemented in the 3D display, or information about the depth control schemes may be predefined by a transmitter and provided to the 3D display.
  • the 3D display can newly define locations, sizes and depths of the window and/or object on a web browser on the basis of the information transmitted from the transmitter.
  • the 3D display may newly configure a window and/or an object with reference to the information transmitted from the transmitter without being restricted by the ranges determined by the transmitter. This may be performed only when an additional window and/or object are provided. That is, while the first window is provided according to information transmitted from the transmitter, the second window can be newly configured such that information about the second window can be correctly transmitted.
  • the size and/or location of the second window are controlled if correct information about the second window can be provided, in consideration of factors with respect to its size and/or location, even when the second window is not newly configured.
  • the depth of at least one of the previously selected window and the additional window may be newly set if a problem may be generated in information recognition or it is difficult to change the size and/or location of the second window.
  • FIGS. 7 to 10 illustrate exemplary objects capable of generating an additional object or window in a window.
  • FIG. 7 illustrates an additional object in a window or an object capable of generating a window.
  • a video clip 720, an image 730, an input field 740, a link field 750, and a content-editable attribute 760 may be displayed in a first window 710 on a web browser.
  • the video clip 720, image 730, input field 740, link field 750, and content-editable attribute 760 are respectively displayed in first, second, third, fourth and fifth areas.
  • FIGS. 8 to 10 illustrate various embodiments of an additional object or window displayed according to FIG. 7.
  • an additional window and/or object may be provided.
  • If one area shown in FIG. 7 is accessed, at least one of a pop-up window 810, a virtual keyboard 820, a modal dialog box 830 and a web notification box 840 may be provided as a second window or object.
  • FIG. 9 illustrates an example of controlling the depth of a previously selected object instead of an additional window.
  • While the virtual keyboard 820 shown in FIG. 8 can be provided for the input field 740 in the third area, the link field 750 in the fourth area and the content-editable attribute 760 in the fifth area, shown in FIG. 7, for example, it is possible to increase the depth of an object to a value larger than those of other objects, in consideration of the size or location of the object, when a pointer, for example, is located at a selected or related area such that the user can easily select the object. In this case, it is possible to additionally adjust the size and location of the object. Alternatively, objects other than selected objects may be removed from the corresponding window and only the selected objects may be displayed in consideration of at least one of the depths, sizes and locations of the selected objects, as shown in FIG. 9.
  • FIG. 10 illustrates various schemes of providing a virtual keyboard.
  • a selected object and a virtual keyboard may be newly configured on a window and displayed in front of the window. Alternatively, only the virtual keyboard may be displayed at the bottom of the window, as shown in FIG. 10(b), or provided as shown in FIG. 10(c).
  • the virtual keyboard may have a fixed depth for user convenience.
  • the virtual keyboard can have a fixed depth value of 0 to increase a recognition rate. This is because the recognition rate can increase when a display location of the virtual keyboard and touched points on the virtual keyboard are on the same plane.
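The touch-alignment rationale above can be sketched as follows. This is an illustrative model, not the patent's implementation: with the keyboard at the basic depth (0, the screen plane), a touch on the panel lands at the same (x, y) as the rendered key, so no parallax correction is needed. All names are hypothetical.

```python
def key_hit(touch_xy, key_rects, keyboard_depth=0):
    """Return the key whose on-screen rectangle contains the touch point.
    Assumes keyboard_depth == 0; at any other depth the left/right views
    are shifted and the raw touch point would need parallax correction."""
    if keyboard_depth != 0:
        raise ValueError("parallax correction required for non-zero depth")
    x, y = touch_xy
    for key, (kx, ky, kw, kh) in key_rects.items():
        if kx <= x < kx + kw and ky <= y < ky + kh:
            return key
    return None

# A touch at (15, 5) falls inside the "w" key's rectangle.
assert key_hit((15, 5), {"q": (0, 0, 10, 10), "w": (10, 0, 10, 10)}) == "w"
```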
  • FIG. 11 illustrates an exemplary processing method when a virtual keyboard is provided.
  • the basic principle of the processing method illustrated in FIG. 11 is to reduce the depth of a displayed 3D object when the virtual keyboard is displayed such that the user interface can be clearly seen.
  • depth D’(VK) of the virtual keyboard may be arbitrarily defined.
  • D’(VK) may be defined as 0 because a recognition rate can increase when a display location of the virtual keyboard and touched points on the virtual keyboard are on the same plane, as described above.
  • Depth D’(IF) of an interface may be larger or smaller than D(IF).
  • For D’(IF), it is important to clearly expose the interface IF to the user. Accordingly, D’(VK) may be larger or smaller than D’(IF).
  • the virtual keyboard or interface may be a basis object S of depth control.
  • D’(p) may correspond to D’(p’).
  • FIGS. 11(a) and 11(b) show a case in which D’(VK) is larger than D’(IF).
  • object S is the virtual keyboard and D’(p) is smaller than D’(VK).
  • object S is the interface and D’(p) is smaller than D’(IF).
  • FIGS. 11(c) and 11(d) show a case in which D’(VK) is smaller than D’(IF).
  • object S corresponds to the virtual keyboard and D’(p) is smaller than D’(IF).
  • object S corresponds to the interface and D’(p) is smaller than D’(VK).
  • To control the depths of other objects, a method of decreasing D(p) to a value smaller than D(S), making D’(p) equal to D’(S) and maintaining the depths of the other objects; a method of making D’(p) correspond to a value obtained by subtracting D(S) from D(p); and a method of making D’(p) satisfy D’(p’) if D(p) is D(p’) can be used.
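The remapping options above are stated tersely, so the following is one consistent reading with hypothetical names, where S is the basis object (virtual keyboard or interface), D(x) is a depth before the keyboard appears and D'(x) the depth afterwards.

```python
def clamp_to_basis(d_p, d_s_new):
    """First option: an object deeper than the basis is pulled back to the
    basis depth; every other object keeps its depth."""
    return min(d_p, d_s_new)

def shift_by_basis(d_p, d_s_old):
    """Second option: D'(p) = D(p) - D(S), shifting each object back by the
    basis depth."""
    return d_p - d_s_old

# Both mappings are deterministic, so D(p) == D(p') implies D'(p) == D'(p'),
# which is the consistency condition named last in the text.
assert clamp_to_basis(12, d_s_new=8) == 8   # pulled back to the basis
assert clamp_to_basis(5, d_s_new=8) == 5    # shallower object unchanged
assert shift_by_basis(12, d_s_old=8) == 4
```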
  • FIG. 12 is a flowchart illustrating a method for controlling content on a 3D display according to an embodiment of the present invention.
  • the 3D display displays a first window including one or more 3D objects through a 3D web browser (S1010).
  • the 3D display displays a second window according to a first 3D object included in the first window (S1020).
  • the second window may be displayed with a depth value, which is greater than a depth value of one of the first 3D object and a 3D object having a maximum depth value in the first window, or with a basic depth value.
  • FIG. 13 is a flowchart illustrating a method for controlling content on a 3D display according to another embodiment of the present invention.
  • the 3D display displays a plurality of windows including one or more 3D objects (S2010).
  • the window including the selected 3D object is displayed as a top window on the basis of the extracted window information (S2030).
  • a second window according to the selected 3D object is displayed with a depth value greater than a depth value of the 3D object in the corresponding window (S2040).
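The steps S2020 to S2040 can be sketched end to end. This is an illustrative model with hypothetical data structures: select an object, raise the window containing it to the top of the stack, then open the second window above every object in that window.

```python
def handle_selection(windows, selected_obj, margin=1):
    """windows: list of dicts {"objects": {name: depth}}, index 0 = bottom.
    Returns (reordered windows, depth for the second window)."""
    # S2020: extract information about the window containing the selection
    win = next(w for w in windows if selected_obj in w["objects"])
    # S2030: display that window as the top window
    windows = [w for w in windows if w is not win] + [win]
    # S2040: second window deeper ("closer") than the objects in that window
    return windows, max(win["objects"].values()) + margin

wins = [{"objects": {"a": 5}}, {"objects": {"b": 10, "c": 3}}]
reordered, depth = handle_selection(wins, "a")
assert reordered[-1]["objects"] == {"a": 5}  # selected window raised to top
assert depth == 6                            # above the depth-5 object
```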
  • depth values of all the 3D objects included in the first window may be changed to depth values smaller than the basic depth value.
  • the depth value of each of the 3D objects may be linearly changed at the ratio of the depth value of each 3D object to the basic depth value, or all the depth values of the 3D objects may be changed to a predetermined depth value on the basis of the basic depth value irrespective of previous depth values of the 3D objects.
  • the basic depth value may be 0.
  • the second window, that is, an additional window, may output at least one of a pop-up window, a virtual keyboard, a dialog box and a web notification message according to 3D objects, as shown in FIG. 7.
  • the virtual keyboard may have a depth value (e.g. the basic depth value, that is, 0) at which the display location of the virtual keyboard corresponds to touched points on the virtual keyboard for touch input of the user.
  • the depth value of the second window may be adjusted to be the basic depth value or a predetermined depth value.
  • a maximum depth value in the first window may be extracted and the depth value of the second window may be determined on the basis of the maximum depth value.
  • depth values of one or more 3D objects adjacent to the selected 3D object may be extracted and the depth value of the second window may be determined on the basis of a depth value of a 3D object that overlaps with the second window, from among the 3D objects adjacent to the selected 3D object.
  • the present invention is partially or wholly applied to a digital broadcast system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method and apparatus for controlling content on a 3D display are provided. The method of controlling content on a 3D display includes displaying a first window including one or more 3D objects, and displaying a second window according to a first 3D object included in the first window, wherein the second window is displayed with a depth value larger than a depth value of the first 3D object or a depth value of a 3D object having a maximum depth value in the first window, or with a basic depth value.

Description

METHOD AND APPARATUS OF CONTROLLING A CONTENT ON 3-DIMENSIONAL DISPLAY
The present invention relates to a method and apparatus for controlling content on a 3-dimensional (3D) display and, more particularly, to a method and apparatus for controlling content output by adjusting depths of a window and an object displayed on a 3D display.
With the propagation of 3DTV, supply of 3D content is increasingly accelerated.
3D images provide a 3D effect using stereoscopic visual principles of human eyes. Since a person feels perspective according to binocular parallax caused by a distance between eyes, which is approximately 65mm, the 3D effect and perspective of a 3D image can be obtained by providing the 3D image such that left and right eyes respectively see 2D images.
3D image display methods include a stereoscopic scheme, a volumetric scheme, a holographic scheme, etc. According to the stereoscopic scheme, a left view image seen by a left eye of a user and a right view image seen by a right eye of the user are provided such that the user can recognize a 3D effect by respectively viewing the left view image and the right view image through his left and right eyes using polarizing eyeglasses or a display device.
Distinguished from the conventional 2D (2-Dimensional) service, a 3D service considers a 3D effect and presence to be important. Accordingly, content may be seen differently according to how depths as well as locations and sizes of objects are controlled.
An object of the present invention is to provide a method and apparatus for controlling display of an additional window and/or object on a 3D display at the request of a user when one or more windows or objects have been displayed on the 3D display.
According to one aspect of the present invention, a method of controlling content on a 3D (3-Dimensional) display includes displaying a first window including one or more 3D objects, and displaying a second window according to a first 3D object included in the first window, wherein the second window is displayed with a depth value larger than a depth value of the first 3D object or a depth value of a 3D object having a maximum depth value in the first window, or with a basic depth value.
When the second window is displayed with the basic depth value, depth values of all 3D objects included in the first window may be changed to values smaller than the basic depth value.
When the depth values of all the 3D objects included in the first window are changed, the depth value of each of the 3D objects may be linearly changed at the ratio of the depth value of each 3D object to the basic depth value.
When the depth values of all the 3D objects included in the first window are changed, all the depth values of the 3D objects included in the first window may be changed to a predetermined depth value on the basis of the basic depth value irrespective of previous depth values of the 3D objects.
The basic depth value may be zero (0).
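The two rescaling schemes described above can be sketched as follows. This is a minimal illustrative sketch, not part of the disclosure: the function names, the plain-integer depth representation (screen plane at 0, positive values toward the viewer), and the offset-based reading of the "linear" scheme are all assumptions.

```python
def rescale_linearly(depths, basic_depth=0, margin=1):
    """Scheme 1 (one plausible reading): shift every object depth below
    basic_depth by a uniform amount, preserving relative ordering and
    spacing between the objects."""
    max_d = max(depths)
    if max_d < basic_depth:
        return list(depths)                 # already behind the basic plane
    offset = max_d - basic_depth + margin   # push the front-most object back
    return [d - offset for d in depths]


def flatten_depths(depths, basic_depth=0, fixed=-5):
    """Scheme 2: set every object to the same predetermined depth behind
    basic_depth, irrespective of its previous depth."""
    assert fixed < basic_depth
    return [fixed for _ in depths]
```

With a basic depth of 0, both schemes leave every first-window object behind the plane on which the second window is displayed.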
The 3D objects may include at least one of a video clip, an image, an input field, link information and content-editable attributes.
The second window may output at least one of a pop-up window, a virtual keyboard, a dialog box and a web notification message according to the 3D objects.
When the second window is provided as the virtual keyboard, the virtual keyboard may have a depth value at which the display location of the virtual keyboard corresponds to touched points on the virtual keyboard for touch input of a user.
If the depth value of a selected 3D object in the first window is negative, the depth value of the second window may be adjusted to be the basic depth value or a predetermined depth value.
The method may further include extracting a maximum depth value in the first window when a predetermined 3D object included in the first window is selected, and determining the depth value of the second window on the basis of the extracted maximum depth value.
The method may further includes, when a predetermined 3D object included in the first window is selected, extracting depth values of one or more 3D objects adjacent to the selected 3D object, and determining the depth value of the second window on the basis of a depth value of a 3D object overlapping with the second window, from among the 3D objects adjacent to the selected 3D object.
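The two placement strategies above (maximum depth in the window, or only the neighbors the second window actually covers) can be sketched as follows. The rectangle data model and the `step` margin are hypothetical; the disclosure fixes only the depth comparisons, not a data structure.

```python
def depth_from_maximum(object_depths, step=5):
    """Strategy 1: place the second window in front of the front-most
    3D object in the first window."""
    return max(object_depths) + step


def rects_overlap(a, b):
    """Axis-aligned overlap test for (x, y, w, h) rectangles."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah


def depth_from_neighbors(neighbors, window_rect, step=5):
    """Strategy 2: among objects adjacent to the selected object, consider
    only those the second window would overlap, and go in front of them."""
    covered = [d for rect, d in neighbors if rects_overlap(rect, window_rect)]
    return max(covered) + step if covered else step
```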
According to another aspect of the present invention, a method of controlling content on a 3D display includes displaying a plurality of windows including one or more 3D objects, when a predetermined 3D object is selected, extracting information about a window including the selected 3D object, displaying the window including the selected 3D object as a top window on the basis of the extracted window information, and displaying a second window according to the selected 3D object with a depth value larger than a depth value of the 3D object in the window.
According to another aspect of the present invention, a 3D display includes a receiver for receiving a signal corresponding to a 3D object, a decoder for decoding the received signal corresponding to the 3D object, a display unit for configuring and displaying the 3D object and a window for the 3D object, and a controller for controlling display of a first window including one or more 3D objects and a second window according to a first 3D object included in the first window, wherein the controller controls the second window to be displayed with a depth value larger than one of a depth value of the first 3D object or a depth value of a 3D object having a maximum depth value in the first window, with or a basic depth value.
According to embodiments of the present invention, when one or more windows or objects are displayed on a 3D display, an additional window and/or object can be easily displayed such that a user can easily recognize the additional window and/or object.
Therefore, user convenience and product satisfaction can be improved so as to promote demands for purchasing products related to the 3D service.
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principle of the invention. In the drawings:
FIG. 1 illustrates components of a 3D display according to an embodiment of the present invention;
FIG. 2 illustrates a method of outputting OSD on a 3D display;
FIG. 3 illustrates a first window including objects having different depths;
FIG. 4 illustrates an exemplary second window popped up on the first window shown in FIG. 3;
FIG. 5 illustrates another exemplary second window popped up on the first window shown in FIG. 3;
FIG. 6 illustrates another exemplary second window popped up on the first window shown in FIG. 3;
FIG. 7 illustrates exemplary objects capable of generating an additional object or window in a window;
FIGS. 8 to 10 illustrate various embodiments of the additional object or window generated according to the objects shown in FIG. 7;
FIG. 11 illustrates an exemplary processing method when a virtual keyboard is provided;
FIG. 12 is a flowchart illustrating a method for controlling content on a 3D web browser according to an embodiment of the present invention; and
FIG. 13 is a flowchart illustrating a method for controlling content on a 3D web browser according to another embodiment of the present invention.
Embodiments of the present invention are described with reference to the accompanying drawings. However, it will be apparent to those skilled in the art that the technical spirit of the present invention is not limited to the embodiments set forth herein.
Although most terms used in the present invention have been selected from general ones widely used in the art, some terms have been arbitrarily selected by the applicant and their meanings are explained in detail in the following description as needed. Thus, the present invention should be understood with the intended meanings of the terms rather than their simple names or meanings.
A description will be given of a method and apparatus for controlling content on a 3D (3-Dimensional) display according to the embodiments of the present invention.
In the disclosure, a method and apparatus for controlling locations and/or depth ranges of a window and an object, for example, on a 3D display that considers a 3D effect and presence to be important are described because content displayed on the 3D display can provide different feelings to a user according to how the depth of the content is controlled.
To achieve this, the present invention provides a method and apparatus for controlling display of an additional window and/or additional object on the 3D display even when one or more windows or objects are being displayed on the 3D display.
In the disclosure, “3D display” represents a digital device including a display capable of displaying 3D content. The 3D display may be a display device that simply outputs content processed in a set-top box (STB) or may be integrated with the STB. Examples of the 3D display include a stand device such as a 3D TV and a mobile device such as a smartphone, a tablet PC, a notebook PC, etc.
In addition, “3D web browser” is provided on the 3D display or displayed at the request of a user.
“Window/object” is an overlay and may be used as a predetermined unit of 3D content provided on the 3D web browser. For example, if an information unit is an object, one or more objects can be included in a window, and this window can be regarded as a container in the disclosure. Furthermore, the window and/or object may induce an additional window or object, that is, a new window or object, according to user choice and the characteristics of the window and/or object. The present invention is not limited to the terms window and object.
While the 3D web browser is provided on the 3D display in the following description, the present invention is not limited thereto and the scope of the present invention is determined by claims.
In 3DTV broadcast, minimum and maximum disparity (disparity_min and/or disparity_max) values of video content may be transmitted as information necessary to display an overlay. When an overlay such as an on-screen display (OSD), a web browser or the like needs to be displayed while the video content is broadcast, a receiver sets an appropriate depth of the overlay and displays the overlay at the set depth on the basis of the transmitted display information so as to provide a 3D effect to a user.
An overlay includes an embedded overlay (or open overlay) including a graphical image that is not video data but is included in a video signal and transmitted, such as a broadcasting station logo, an open caption, etc. This embedded overlay is data that must be output when the receiver decodes a video stream, distinguished from a closed caption and a subtitle. That is, the embedded overlay is a caption or graphic (e.g. sports game score, entertainment program caption, newsflash, etc.) embedded in content, and the closed caption/graphic may be a caption/graphic transmitted through a separate stream.
When a 3D service is provided through a web browser, there are many variables, as compared to a case in which the conventional 3D broadcast service is provided. For example, in the case of a 3D web browser service, a variable, such as an interface necessary to check and provide link information and additional information, may be additionally needed even when one or more objects are displayed in a window.
In this case, depth control in the 3D service may largely affect the 3D effect. That is, if the 3D service is not appropriately arranged or depth control is not properly performed, a user may be confused or the 3D effect may be deteriorated.
Accordingly, when the 3D web browser service is provided, it is necessary to appropriately arrange an overlay such as an additional window or object and/or to determine a depth of the overlay according to situations. Embodiments of the present invention will now be described in detail.
FIG. 1 illustrates exemplary components of a 3D display according to an embodiment of the present invention.
An exemplary 3D display according to an embodiment of the present invention may include a receiver for receiving a signal corresponding to a 3D object, a decoder for decoding the 3D object, a display for configuring and displaying the 3D object and a window for the 3D object, and a controller for controlling display of a first window including one or more 3D objects and a second window according to a first 3D object included in the first window such that the second window is displayed with a depth value larger than a depth value of one of the first 3D object and a 3D object having a maximum depth value in the first window, or with a basic depth value.
Referring to FIG. 1, exemplary components of the 3D display may include a receiving unit 1010, a decoder 1020, a demultiplexer 1030, an SI processor 1040, a 3D video decoder 1050, a primary video decoder 1052, a secondary video decoder 1054, a 3D graphic engine 1060, a left view mixer 1070, a right view mixer 1080, and a 3D output formatter 1090.
The receiving unit 1010 tunes a channel on which 3D content is transmitted through an RF channel to receive a 3D signal. Here, the receiving unit 1010 may receive an Ethernet frame or IP datagram through a network instead of the RF channel. The receiving unit 1010 may be implemented as individual components for respectively processing the Ethernet frame and the IP datagram. In this case, particularly when the receiving unit 1010 receives the IP datagram, it may further include a component necessary to process content included in the IP datagram, or a component described below may be modified to process the content included in the IP datagram.
The decoder 1020 decodes the 3D signal received through the receiving unit 1010. The decoder 1020 may be called a VSB decoder or an OFDM decoder according to decoding schemes, or may be implemented as individual components for respective decoding schemes.
The demultiplexer 1030 demultiplexes a transport stream packet included in the 3D signal using a packet identifier to divide the transport stream packet into an audio signal, a video signal and data and transmits the audio signal, video signal and data to corresponding components.
The SI processor 1040 processes a signaling signal received through the demultiplexer 1030. Here, SI represents system information, service information or signaling information and may be program specific information/program and system information protocol/digital video broadcast-service information (PSI/PSIP/DVB-SI) according to schemes. The SI processor 1040 may temporarily store processed SI data in connection with an internal or external database.
The 3D video decoder 1050 processes video data demultiplexed by the demultiplexer 1030. For example, the 3D video decoder 1050 may include the primary video decoder 1052 and/or the secondary video decoder 1054 according to the 3D signal transmission scheme to decode 3D video data.
The primary video decoder 1052 decodes primary video data. For example, if MVC coding is applied to video data, primary video data may be a base or enhanced layer signal. Alternatively, the primary video decoder 1052 may decode left view video data.
The secondary video decoder 1054 decodes secondary video data. For example, if MVC coding is applied to video data, secondary video data may be an enhanced or base layer signal. Alternatively, the secondary video decoder 1054 may decode right view video data.
The 3D graphic engine 1060 processes a graphical output such as OSD. For example, the 3D graphic engine 1060 determines a depth at which OSD will be displayed at a specific service, program, event and/or scene level by analyzing 3D depth information and controls display of the OSD. In accordance with the present invention, the 3D graphic engine 1060 may display a web browser, determine 3D depths of one or more windows and objects and display the windows and objects on the web browser.
The left view mixer 1070 processes a left view that forms a 3D video image and the right view mixer 1080 processes a right view that forms the 3D video image.
The 3D output formatter 1090 processes the left view and the right view constituting the 3D video image such that the left view and the right view can be displayed on a screen. Here, the 3D output formatter 1090 may process OSD display such that OSD can be displayed with the left view and the right view.
The 3D display may be aware of the above-mentioned 3D scene-level depth information included in SI, for example, PMT (Program Map Table), VCT (Virtual Channel Table) and/or SDT (Service Description Table), EIT (Event Information Table), etc. and scene level or time-period based depth range information about a currently viewed program, channel, service, etc. in the case of 3D broadcast service.
The receiver can use the above-described video depth range information when a window and/or an object, a user interaction or message, a graphic according to execution of an alarm function, and OSD are displayed on a web browser according to the present invention. Here, the receiver may control appropriate display of each overlay by determining a depth range and a depth degree within a corresponding range using disparity_min and disparity_max values.
FIG. 2 illustrates a method of outputting OSD on a 3D display.
FIG. 2 shows a first window 2010 on the assumption that the first window 2010 is displayed on a web browser in the 3D display.
The first window 2010 includes a first object 2020 displayed at a first depth in area A and a second object 2030 displayed at a second depth in area B. Here, the first depth and the second depth may be equal to each other.
Meanwhile, the 3D display needs to know at least one of the depth_min and depth_max values and the disparity_min and disparity_max values to determine a depth range or a disparity range and an appropriate depth within a corresponding range in order to provide an object at a predetermined depth on the first window 2010 in the web browser.
Alternatively, an OSD 2020 and a web browser 2030 may be respectively displayed on area A and area B of the 3D display screen 2010.
The OSD image 2020 and the web browser 2030 displayed on the areas may have the same depth. In this case, a window and/or an object based on the OSD image 2020 and the web browser 2030 may or may not be restricted by the depth ranges or disparity ranges of the OSD image 2020 and the web browser 2030.
At least one of the above-mentioned depth_min, depth_max, disparity_min, disparity_max, depth_range and disparity_range values may be transmitted from a transmitter, for example. In this case, the at least one of the values may be transmitted in the form of a table and/or a descriptor at a system level or transmitted through an SEI message at a video level.
Furthermore, this information is referred to when a window and/or an object are displayed on the 3D display, and a new range may be defined using corresponding information as necessary and applied to and used for the 3D display.
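One way a receiver might refer to the signaled range when placing a window or object is to clamp the requested disparity into it. The clamping behavior below is an assumption about receiver policy, not mandated by the text, which only says the transmitted information is referred to.

```python
def clamp_overlay_disparity(requested, disparity_min, disparity_max):
    """Keep an overlay's disparity inside the range signaled by the
    transmitter (via a system-level table/descriptor or a video-level
    SEI message)."""
    return max(disparity_min, min(requested, disparity_max))
```

A receiver that defines its own range for an additional window, as described above, would simply substitute new bounds for the signaled ones.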
FIGS. 3 to 6 illustrate display of an additional window such as a pop-up window on a 3D web browser on which at least one window has been displayed.
FIG. 3 illustrates a first window including objects having different depths.
Referring to FIG. 3(a), a first window 310 is displayed on a screen and object A 320 and object B 330 belonging to the first window 310 respectively have predetermined depths on the screen. It is assumed that both object A 320 and object B 330 are 3D objects.
FIG. 3(b) shows implementation of object A and object B displayed on the screen, shown in FIG. 3(a). Referring to FIG. 3(b), object A 320 has a depth value of 5 and object B 330 has a depth value of 10.
As described above, when a window or an object is provided, the 3D web browser service can provide an additional window and/or object automatically or by accessing the provided window or object.
It is assumed that an additional window is displayed in order to provide additional information when object B 330 is selected by a user.
FIGS. 4 to 6 illustrate exemplary second windows provided in this case.
FIG. 4 illustrates an exemplary second window popped up on the first window shown in FIG. 3.
Referring to FIG. 4(a), a second window 430, which is provided in the form of a pop-up window when object B in the first window is accessed, has a depth value of 0, which is different from those of object A and object B.
Here, an area in which the second window 430 and object A and object B overlap may exist in the second window 430 due to the size of the second window 430.
It can be easily seen from FIG. 4(b) that the second window 430 and objects A and B overlap.
This overlap is caused by the size and location of an additional window, for example, and thus may be minimized by adjusting the size and location of the additional window. However, since the additional window or object is provided according to a previously selected window or object and may be information in which the user is interested or important information, it is desirable that the additional window or object have a depth equal to or larger than that of the previously selected window or object in terms of information transmission or user satisfaction.
FIG. 5 illustrates another exemplary second window popped up on the first window shown in FIG. 3.
The additional second window may have a depth value larger than those of all objects in the first window.
Referring to FIG. 5, while the second window 540 has the same size and location as those of the second window 430 shown in FIG. 4(b), it has a depth value of 15, which is larger than the depth values of object A and object B in the first window.
Accordingly, it is possible to provide correct information about the second window 540 to the user irrespective of the location and size of the second window even when an area in which the second window 540 and objects A and B overlap is generated.
FIG. 6 illustrates another exemplary second window popped up on the first window shown in FIG. 3.
In FIG. 6, object A and object B included in the first window respectively have depth values of -5 and 0.
The second window 640 has the same depth value as that of object B, which has the largest depth value in the first window, and thus correct information about the second window may be provided even when an area in which the second window and objects A and B overlap is generated.
When a predetermined object included in the first window is selected, the present invention can display the first window including the selected object as a top window and output a pop-up window according to the selected object, that is, the second window such that the second window has a depth value equal to or larger than that of the first window.
Control of depth values of an additional window and/or object has been described. In this case, it is possible to control 3D content more correctly by adding control of locations and/or sizes of the additional window and/or object to control of the depth values thereof.
Alternatively, it may be possible to control the depth values of a previously selected window and/or object to be smaller than current depth values while maintaining the depth values, locations and/or sizes of the additional window and/or object. For example, the previously selected window and/or object can be controlled to have minimum depth values within depth_max value in the aforementioned depth range or disparity_max value in the disparity range.
Otherwise, the above two methods may be combined. For example, the depth values of the previously selected window and/or object are decreased and the depth values of the additional window and/or object are increased. In this case, it is possible to further consider size and location factors.
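The combined method above, in which the previously selected objects are pushed back while the additional window is pulled forward, could be sketched as follows. The even split of the required adjustment between the two sides is an assumption; the text only requires that both adjustments occur.

```python
def combine_adjust(object_depths, popup_depth, separation=10):
    """Lower existing object depths and raise the pop-up depth until the
    pop-up leads the front-most object by at least `separation`."""
    front = max(object_depths)
    gap_needed = separation - (popup_depth - front)
    if gap_needed <= 0:
        return list(object_depths), popup_depth   # already far enough apart
    # split the required gap between lowering objects and raising the pop-up
    lower = gap_needed // 2
    raise_by = gap_needed - lower
    return [d - lower for d in object_depths], popup_depth + raise_by
```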
The aforementioned depth control schemes may be implemented in the 3D display, or information about the depth control schemes may be predefined by a transmitter and provided to the 3D display.
For example, even when the transmitter defines sizes, locations and depth information or disparity information about a window and/or object or determines ranges thereof, the 3D display can newly define locations, sizes and depths of the window and/or object on a web browser on the basis of the information transmitted from the transmitter.
The 3D display may newly configure a window and/or an object with reference to the information transmitted from the transmitter without being restricted by the ranges determined by the transmitter. This may be performed only when an additional window and/or object are provided. That is, while the first window is provided according to information transmitted from the transmitter, the second window can be newly configured such that information about the second window can be correctly transmitted.
In the latter case, it is determined whether there is an area in which the second window and another window and/or object overlap on the basis of sizes and/or locations, and only depth information may be controlled when the overlap area does not exist.
When the overlap area exists, the size and/or location of the second window are controlled if correct information about the second window can be provided even when the second window is not newly configured in consideration of factors with respect to the size and/or location thereof.
However, when the overlap area exists and the size and/or location of the second window are controlled, the depth of at least one of the previously selected window and the additional window may be newly set if a problem may be generated in information recognition or it is difficult to change the size and/or location of the second window.
In addition, it may be possible to simultaneously control all windows and objects, that is, simultaneously adjust depths of all windows and objects to 0 at a time when an additional window is provided (when the first object in the first window is selected) and to display a pop-up window and object thereon with a predetermined depth value. In this case, it is possible to control the depth values of the windows and objects to be returned to previous values when pop-up is cancelled.
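The "flatten, then restore on cancel" behavior above can be sketched with a small state manager. The scene dictionary is a hypothetical stand-in for the browser's window/object state; the pop-up depth value of 5 is illustrative.

```python
class PopupDepthManager:
    """Flattens all window/object depths to 0 when a pop-up appears and
    restores the previous depths when the pop-up is cancelled."""

    def __init__(self, scene, popup_depth=5):
        self.scene = scene            # {object_id: depth}
        self.popup_depth = popup_depth
        self._saved = None

    def show_popup(self):
        self._saved = dict(self.scene)
        for key in self.scene:        # flatten every window/object to 0
            self.scene[key] = 0
        return self.popup_depth       # pop-up sits in front of the flat scene

    def cancel_popup(self):
        if self._saved is not None:   # return depths to their previous values
            self.scene.update(self._saved)
            self._saved = None
```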
FIGS. 7 to 10 illustrate exemplary objects capable of generating an additional object or window in a window.
FIG. 7 illustrates an additional object in a window or an object capable of generating a window.
Referring to FIG. 7, at least one of a video clip 720, an image 730, an input field 740, a link field 750, and a content-editable attribute 760 may be displayed in a first window 710 on a web browser. The video clip 720, image 730, input field 740, link field 750, and content-editable attribute 760 are respectively displayed in first, second, third, fourth and fifth areas.
FIGS. 8 to 10 illustrate various embodiments of an additional object or window displayed according to FIG. 7.
When the user accesses at least one of the areas shown in FIG. 7, an additional window and/or object may be provided.
Referring to FIG. 8, if one area shown in FIG. 7 is accessed, at least one of a pop-up window 810, a virtual keyboard 820, a modal dialog box 830, and a web notification box 840 may be provided as a second window or object.
FIG. 9 illustrates an example of controlling the depth of a previously selected object instead of an additional window.
While the virtual keyboard 820 shown in FIG. 8 can be provided for the input field 740 in the third area, the link field 750 in the fourth area and the content-editable attribute 760 in the fifth area shown in FIG. 7, for example, it is also possible to increase the depth of an object to a value larger than those of the other objects, in consideration of its size or location, when a pointer is located at a selected or related area, such that the user can easily select the object. In this case, the size and location of the object may additionally be adjusted. Alternatively, objects other than the selected objects may be removed from the corresponding window and only the selected objects displayed, in consideration of at least one of the depths, sizes and locations of the selected objects, as shown in FIG. 9.
FIG. 10 illustrates various schemes of providing a virtual keyboard. Referring to FIG. 10(a), a selected object and a virtual keyboard are newly configured on a window and displayed in front of the window. Otherwise, only the virtual keyboard is displayed at the bottom of the window, as shown in FIG. 10(b) or provided as shown in FIG. 10(c).
The virtual keyboard may have a fixed depth for user convenience. For example, the virtual keyboard can have a fixed depth value of 0 to increase a recognition rate. This is because the recognition rate can increase when a display location of the virtual keyboard and touched points on the virtual keyboard are on the same plane.
FIG. 11 illustrates an exemplary processing method when a virtual keyboard is provided.
The basic principle of the processing method illustrated in FIG. 11 is to reduce the depth of a displayed 3D object when the virtual keyboard is displayed such that the user interface can be clearly seen.
Referring to FIGS. 11(a) and 11(b), depth D’(VK) of the virtual keyboard may be arbitrarily defined. However, D’(VK) may be defined as 0 because the recognition rate can increase when the display location of the virtual keyboard and touched points on the virtual keyboard are on the same plane, as described above.
Depth D’(IF) of an interface may be larger or smaller than D(IF). When D’(IF) is defined, it is important to clearly expose the interface IF to the user. Accordingly, D’(VK) may be larger or smaller than D’(IF).
The virtual keyboard or the interface may serve as the basis object S for depth control.
When 3D objects p and p’ respectively have depths D(p) and D(p’) and D(p) is larger than D(p’), D’(p) may correspond to D’(p’).
FIGS. 11(a) and 11(b) show a case in which D’(VK) is larger than D’(IF). In the case of FIG. 11(a), object S is the virtual keyboard and D’(p) is smaller than D’(VK). In the case of FIG. 11(b), object S is the interface and D’(p) is smaller than D’(IF).
FIGS. 11(c) and 11(d) show a case in which D’(VK) is smaller than D’(IF). In the case of FIG. 11(c), object S corresponds to the virtual keyboard and D’(p) is smaller than D’(IF). In the case of FIG. 11(d), object S corresponds to the interface and D’(p) is smaller than D’(VK).
To determine D’(p), the following methods can be used: decreasing D(p) to a value smaller than D(S) while making D’(p) equal to D’(S) and maintaining the depths of other objects; making D’(p) correspond to the value obtained by subtracting D(S) from D(p); and making D’(p) equal D’(p’) whenever D(p) equals D(p’).
Referring to FIGS. 11(b) and 11(d), it is possible to avoid a situation of D(VK)<D(p)<D(IF) by selecting one of the interface and the virtual keyboard, which has a smaller value D’, as the basis object S.
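One reading of the FIG. 11 rules can be sketched as below, with Python names standing in for the D()/D’() notation. The basis object S is chosen as whichever of the virtual keyboard and the interface has the smaller new depth, which avoids the D(VK) < D(p) < D(IF) situation noted above; the `step` margin and the dictionary data model are assumptions.

```python
def remap_object_depths(object_depths, d_vk, d_if, step=1):
    """Push every other 3D object behind the basis object S so the
    virtual keyboard and the interface are clearly visible.  Equal old
    depths map to equal new depths."""
    d_s = min(d_vk, d_if)   # basis object S: the smaller of D'(VK), D'(IF)
    # every remapped depth D'(p) ends up strictly smaller than D(S)
    return {name: min(d, d_s - step) for name, d in object_depths.items()}
```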
FIG. 12 is a flowchart illustrating a method for controlling content on a 3D display according to an embodiment of the present invention.
Referring to FIG. 12, the 3D display displays a first window including one or more 3D objects through a 3D web browser (S1010).
Then, the 3D display displays a second window according to a first 3D object included in the first window (S1020).
Here, the second window may be displayed with a depth value, which is greater than a depth value of one of the first 3D object and a 3D object having a maximum depth value in the first window, or with a basic depth value.
FIG. 13 is a flowchart illustrating a method for controlling content on a 3D display according to another embodiment of the present invention.
Referring to FIG. 13, the 3D display displays a plurality of windows including one or more 3D objects (S2010).
Upon selection of a predetermined 3D object, information about a window including the selected 3D object is extracted (S2020).
The window including the selected 3D object is displayed as a top window on the basis of the extracted window information (S2030).
A second window according to the selected 3D object is displayed with a depth value greater than a depth value of the 3D object in the corresponding window (S2040).
In FIGS. 12 and 13, when the second window is displayed with the basic depth value, depth values of all the 3D objects included in the first window may be changed to depth values smaller than the basic depth value.
In this case, the depth value of each of the 3D objects may be linearly changed at the ratio of the depth value of each 3D object to the basic depth value, or all the depth values of the 3D objects may be changed to a predetermined depth value on the basis of the basic depth value irrespective of previous depth values of the 3D objects.
The basic depth value may be 0.
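The two re-depth options above (a linear change of each depth versus a single predetermined value) can be sketched as below. The shift-based reading of "linearly changed", and all names, are assumptions made for illustration.

```python
def push_behind_basic(depths, basic=0, uniform=None, margin=1):
    """Move all first-window object depths below the basic depth value.

    - If `uniform` is given, every object is set to that single
      predetermined depth, irrespective of its previous depth.
    - Otherwise the depths are changed linearly: a constant shift places
      the frontmost object just below the basic depth while preserving
      the relative depth order (one plausible reading of the linear
      option described in the text).
    """
    if uniform is not None:
        return [uniform] * len(depths)
    shift = max(depths) - basic + margin  # how far the frontmost object must recede
    return [d - shift for d in depths]
```

Either way, every object in the first window ends up behind the second window displayed at the basic depth value.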
The second window, that is, an additional window, may output at least one of a pop-up window, a virtual keyboard, a dialog box and a web notification message according to the 3D objects, as shown in FIG. 7.
When the second window is provided as the virtual keyboard, the virtual keyboard may have a depth value (e.g. the basic depth value, that is, 0) at which the display location of the virtual keyboard corresponds to touched points on the virtual keyboard for touch input of the user.
If the depth value of the selected 3D object in the first window is negative, the depth value of the second window may be adjusted to be the basic depth value or a predetermined depth value.
Furthermore, when the predetermined 3D object included in the first window is selected, a maximum depth value in the first window may be extracted and the depth value of the second window may be determined on the basis of the maximum depth value.
Alternatively, when the predetermined 3D object included in the first window is selected, depth values of one or more 3D objects adjacent to the selected 3D object may be extracted and the depth value of the second window may be determined on the basis of a depth value of a 3D object that overlaps with the second window, from among the 3D objects adjacent to the selected 3D object.
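The adjacent-object alternative can be sketched as below; the rectangle representation, the dictionary keys and the `margin` parameter are hypothetical, and only adjacent objects whose area overlaps the second window contribute to its depth.

```python
def depth_from_neighbors(neighbors, window_rect, fallback_depth, margin=1):
    """Determine the second window's depth from the adjacent 3D objects
    that overlap it (illustrative sketch; the data layout is an assumption).

    Each neighbor is {"rect": (x0, y0, x1, y1), "depth": int};
    `window_rect` is the second window's rectangle in the same coordinates.
    """
    def overlaps(a, b):
        ax0, ay0, ax1, ay1 = a
        bx0, by0, bx1, by1 = b
        return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

    depths = [n["depth"] for n in neighbors
              if overlaps(n["rect"], window_rect)]
    if not depths:
        return fallback_depth        # no overlap: fall back, e.g. to the selected object's depth
    return max(depths) + margin      # stay in front of every overlapping neighbor
```

This keeps the second window in front of only the objects it actually occludes, rather than in front of the whole window's maximum depth.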
Various embodiments have been described in the best mode for carrying out the invention.
As described above, the present invention is partially or wholly applicable to a digital broadcast system.

Claims (13)

  1. A method of controlling content on a 3-dimensional, 3D, display, the method comprising:
    displaying a first window including one or more 3D objects; and
    displaying a second window in connection with a first 3D object included in the first window,
    wherein the second window is displayed with a depth value larger than a depth value of the first 3D object or a depth value of a 3D object having a maximum depth value in the first window, or with a basic depth value.
  2. The method according to claim 1, wherein, when the second window is displayed with the basic depth value, depth values of all the 3D objects included in the first window are changed to values smaller than the basic depth value.
  3. The method according to claim 2, wherein, when the depth values of all the 3D objects included in the first window are changed, the depth value of each of the 3D objects is linearly changed at the ratio of the depth value of each 3D object to the basic depth value.
  4. The method according to claim 2, wherein, when the depth values of all the 3D objects included in the first window are changed, all the depth values of the 3D objects included in the first window are changed to a predetermined depth value on the basis of the basic depth value irrespective of previous depth values of the 3D objects.
  5. The method according to any one of claims 1 to 4, wherein the basic depth value is 0.
  6. The method according to claim 1, wherein the 3D objects include at least one of a video clip, an image, an input field, link information and content-editable attributes.
  7. The method according to claim 6, wherein the second window outputs at least one of a pop-up window, a virtual keyboard, a dialog box and a web notification message according to the 3D objects.
  8. The method according to claim 7, wherein, when the second window is provided as the virtual keyboard, the virtual keyboard has a depth value at which the display location of the virtual keyboard corresponds to touched points on the virtual keyboard for touch input of a user.
  9. The method according to claim 1, wherein, if the depth value of a selected 3D object in the first window is negative, the depth value of the second window is adjusted to be the basic depth value or a predetermined depth value.
  10. The method according to claim 1, further comprising:
    extracting a maximum depth value in the first window when a predetermined 3D object included in the first window is selected; and
    determining the depth value of the second window on the basis of the extracted maximum depth value.
  11. The method according to claim 1, further comprising:
    when a predetermined 3D object included in the first window is selected, extracting depth values of one or more 3D objects adjacent to the selected 3D object; and
    determining the depth value of the second window on the basis of a depth value of a 3D object overlapping with the second window, from among the 3D objects adjacent to the selected 3D object.
  12. A method of controlling content on a 3D display, the method comprising:
    displaying a plurality of windows including one or more 3D objects;
    when a predetermined 3D object is selected, extracting information about a window including the selected 3D object;
    displaying the window including the selected 3D object as a top window on the basis of the extracted window information; and
    displaying a second window according to the selected 3D object with a depth value larger than a depth value of the 3D object in the window.
  13. A 3D display comprising:
    a receiver for receiving a signal corresponding to a 3D object;
    a decoder for decoding the received signal corresponding to the 3D object;
    a display unit for configuring and displaying the 3D object and a window for the 3D object; and
    a controller for controlling display of a first window including one or more 3D objects and a second window according to a first 3D object included in the first window, wherein the controller controls the second window to be displayed with a depth value larger than one of a depth value of the first 3D object and a depth value of a 3D object having a maximum depth value in the first window, or with a basic depth value.
PCT/KR2012/007377 2012-09-14 2012-09-14 Method and apparatus of controlling a content on 3-dimensional display WO2014042299A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/KR2012/007377 WO2014042299A1 (en) 2012-09-14 2012-09-14 Method and apparatus of controlling a content on 3-dimensional display
KR1020157006999A KR101691839B1 (en) 2012-09-14 2012-09-14 Method and apparatus of controlling a content on 3-dimensional display

Publications (1)

Publication Number Publication Date
WO2014042299A1 2014-03-20

Family

ID=50278396

Country Status (2)

Country Link
KR (1) KR101691839B1 (en)
WO (1) WO2014042299A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140157186A1 (en) * 2012-12-03 2014-06-05 Himanshu Jagadish Bhat Three dimensional desktop rendering in a data processing device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6326964B1 (en) * 1995-08-04 2001-12-04 Microsoft Corporation Method for sorting 3D object geometry among image chunks for rendering in a layered graphics rendering system
KR20100022911A (en) * 2008-08-20 2010-03-03 삼성전자주식회사 3d video apparatus and method for providing osd applied to the same
KR20110060180A (en) * 2009-11-30 2011-06-08 한국전자통신연구원 Method and apparatus for producing 3d models by interactively selecting interested objects
KR20110125866A (en) * 2010-05-14 2011-11-22 퍼펙트데이타시스템 주식회사 Method and apparatus for providing information through augmented reality

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101647722B1 (en) * 2009-11-13 2016-08-23 엘지전자 주식회사 Image Display Device and Operating Method for the Same
KR20120037858A (en) * 2010-10-12 2012-04-20 삼성전자주식회사 Three-dimensional image display apparatus and user interface providing method thereof


Also Published As

Publication number Publication date
KR101691839B1 (en) 2017-01-02
KR20150046197A (en) 2015-04-29

Similar Documents

Publication Publication Date Title
WO2011046279A1 (en) Method for indicating a 3d contents and apparatus for processing a signal
WO2012177049A2 (en) Method and apparatus for processing broadcast signal for 3-dimensional broadcast service
WO2010041896A2 (en) Receiving system and method of processing data
WO2012044128A4 (en) Display device, signal-processing device, and methods therefor
WO2010150976A2 (en) Receiving system and method of providing 3d image
WO2011059261A2 (en) Image display apparatus and operating method thereof
WO2011084021A2 (en) Broadcasting receiver and method for displaying 3d images
WO2010151027A4 (en) Video display device and operating method therefor
WO2011005056A2 (en) Image output method for a display device which outputs three-dimensional contents, and a display device employing the method
WO2011129566A2 (en) Method and apparatus for displaying images
WO2013100376A1 (en) Apparatus and method for displaying
WO2010093115A2 (en) Broadcast receiver and 3d subtitle data processing method thereof
WO2012074328A2 (en) Receiving device and method for receiving multiview three-dimensional broadcast signal
WO2010087621A2 (en) Broadcast receiver and video data processing method thereof
WO2014092509A1 (en) Glasses apparatus and method for controlling glasses apparatus, audio apparatus and method for providing audio signal and display apparatus
WO2011005025A2 (en) Signal processing method and apparatus therefor using screen size of display device
WO2011021894A2 (en) Image display apparatus and method for operating the same
WO2010095835A2 (en) Method and apparatus for processing video image
WO2012002690A2 (en) Digital receiver and method for processing caption data in the digital receiver
WO2012050366A2 (en) 3d image display apparatus and display method thereof
WO2011155766A2 (en) Image processing method and image display device according to the method
WO2018012727A1 (en) Display apparatus and recording medium
WO2012046990A2 (en) Image display apparatus and method for operating the same
WO2015046724A1 (en) Image display apparatus, server for synchronizing contents, and method for operating the server
WO2014065635A1 (en) Method and apparatus for processing edge violation phenomenon in multi-view 3dtv service

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 12884660; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 20157006999; Country of ref document: KR; Kind code of ref document: A)
122 Ep: pct application non-entry in european phase (Ref document number: 12884660; Country of ref document: EP; Kind code of ref document: A1)