WO2012044128A4 - Display apparatus and signal processing apparatus, and methods thereof - Google Patents
Display apparatus and signal processing apparatus, and methods thereof
- Publication number
- WO2012044128A4 (PCT/KR2011/007285)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- disparity information
- disparity
- information
- graphic
- layer
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/156—Mixing image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/128—Adjusting depth or disparity
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/275—Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/302—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/172—Processing image signals comprising non-image signal components, e.g. headers or format information
- H04N13/183—On-screen display [OSD] information, e.g. subtitles or menus
Definitions
- the present invention relates to a display apparatus and a signal processing apparatus and methods thereof, and more particularly, to a display apparatus and a signal processing apparatus and methods thereof for stably displaying a three-dimensional graphic object.
- the content provided by the display device is not limited to a broadcast signal.
- various kinds of applications or widget programs may be installed and provided to the user.
- a 3D display device is a device that gives a three-dimensional effect to an object displayed on a screen, thereby enabling a user to view a realistic screen.
- efforts to develop 3D contents to be outputted from a 3D display device have been accelerated.
- various types of graphic objects such as a screen caption and an OSD menu are displayed in a superimposed manner on an image.
- a screen reversal phenomenon may occur in which such a graphic object is perceived as being located behind the image. Accordingly, the user may feel dizziness or discomfort when viewing 3D contents.
- a display apparatus including a video processor for processing an image signal to form an image; a graphics processor for processing graphics data to form a graphic object; a display unit for displaying the image and the graphic object; and a control unit for controlling the video processor and the graphics processor to give a different stereoscopic effect to each of the image and the graphic object, so that the graphic object remains displayed on an overlay layer that is an upper layer of a reference layer on which the image is displayed.
- the apparatus may further include a receiver for receiving first disparity information for the reference layer and second disparity information for the overlay layer from an external source.
- the control unit controls the video processing unit to give a stereoscopic effect to the image according to the first disparity information, and controls the graphic processing unit to give a stereoscopic effect to the graphic object according to the second disparity information.
- the receiving unit receives a broadcasting signal including the video signal, the graphic data, the first disparity information, and the second disparity information, and the video processing unit and the graphics processing unit detect the first disparity information and the second disparity information, respectively, from a program information table or a user data area included in the broadcasting signal.
- the apparatus may further include a receiver for receiving first disparity information for the reference layer from an external source, and a disparity information generator for generating second disparity information for the overlay layer.
- the control unit controls the video processing unit to give a stereoscopic effect to the image according to the first disparity information, and controls the graphic processing unit to give a stereoscopic effect to the graphic object according to the second disparity information.
- the disparity information generating unit may generate the second disparity information based on the first disparity information, so that the disparity of the overlay layer varies according to the disparity variation state of the reference layer while the depth difference from the reference layer maintains a predetermined size.
- the disparity information generation unit may generate the second disparity information so that the overlay layer has a fixed depth.
- the disparity information generation unit may detect a maximum disparity of the reference layer in an arbitrary stream unit and generate the second disparity information based on the detected information.
- the disparity information generation unit may detect the disparity of the reference layer at the time when the graphic object is displayed, and may generate the second disparity information based on the detected information.
- the apparatus may further include a storage unit storing predetermined depth information, and a disparity information generator for generating first disparity information for the reference layer and second disparity information for the overlay layer according to the depth information.
- the control unit controls the video processing unit to give a stereoscopic effect to the image according to the first disparity information, and controls the graphic processing unit to give a stereoscopic effect to the graphic object according to the second disparity information.
- the overlay layer includes a plurality of layers having different depths, and different types of graphic objects may be displayed in each layer.
- the display order of the graphic object types displayed on the respective layers may be changed according to the user's selection.
- the graphic object may include at least one of an OSD menu, a caption, program information, an application icon, an application window, and a GUI window.
- a signal processing apparatus includes a receiver for receiving an input signal; a video processor for processing a video signal included in the input signal and constructing an image to be displayed on a reference layer; an audio processor for processing an audio signal included in the input signal; a graphics processor for processing graphics data and constructing a graphic object to be displayed on an overlay layer that is an upper layer of the reference layer; and an interface unit for transmitting the image, the audio signal, and the graphic object to an output means.
- the video processing unit may detect first disparity information included in the input signal and give a stereoscopic effect to the image based on the first disparity information, and the graphics processing unit may detect second disparity information included in the input signal and give a stereoscopic effect to the graphic object based on the second disparity information.
- the signal processing apparatus may further include a disparity information generating unit for generating second disparity information for the overlay layer.
- the video processing unit may detect first disparity information included in the input signal and give a stereoscopic effect to the image based on the first disparity information, and the graphics processing unit may give a stereoscopic effect to the graphic object according to the second disparity information generated by the disparity information generating unit.
- the disparity information generating unit may generate the second disparity information based on the first disparity information, so that the disparity of the overlay layer varies according to the disparity variation state of the reference layer while the depth difference from the reference layer maintains a predetermined size.
- the disparity information generation unit may generate the second disparity information so that the overlay layer has a fixed depth.
- the disparity information generation unit may detect a maximum disparity of the reference layer in an arbitrary stream unit and generate the second disparity information based on the detected information.
- the disparity information generation unit may detect the disparity of the reference layer at the time when the graphic object is displayed, and may generate the second disparity information based on the detected information.
- the apparatus may further include a storage unit for storing predetermined depth information, and a disparity information generation unit for generating first disparity information for the reference layer and second disparity information for the overlay layer according to the depth information.
- the video processing unit detects the first disparity information included in the input signal and gives a stereoscopic effect to the video based on the first disparity information.
- the graphics processing unit may detect the second disparity information and give a stereoscopic effect to the graphic object based on the second disparity information.
- the overlay layer includes a plurality of layers having different depths, and different types of graphic objects may be displayed in each layer.
- the display order of the graphic object types displayed on the respective layers may be changed according to the user's selection.
- the graphic object may include at least one of an OSD menu, a subtitle, program information, an application icon, an application window, and a GUI window.
- a signal processing method including processing a video signal to construct an image to be displayed on a reference layer; processing graphics data to construct a graphic object to be displayed on an overlay layer that is an upper layer of the reference layer; and transmitting the image and the graphic object to an output means.
- the method may further include receiving the first disparity information for the reference layer and the second disparity information for the overlay layer from an external source.
- the image may be given a three-dimensional sensation according to the first disparity information
- the graphic object may be configured to be given a three-dimensional sensation according to the second disparity information.
- the receiving step may include receiving a broadcast signal including the video signal, the graphic data, the first disparity information, and the second disparity information, and detecting the first disparity information and the second disparity information, respectively, from a program information table or a user data area included in the broadcast signal.
- the method may further include receiving first disparity information for the reference layer from an external source, and generating second disparity information for the overlay layer.
- the image may be given a three-dimensional sensation according to the first disparity information
- the graphic object may be configured to be given a three-dimensional sensation according to the second disparity information.
- the generating of the second disparity information may include analyzing the first disparity information to check the disparity variation state of the reference layer, and generating the second disparity information based on the first disparity information so that the disparity of the overlay layer varies according to that variation state while the depth difference from the reference layer maintains a predetermined size.
- the second disparity information may be generated so that the overlay layer has a fixed depth.
- the second disparity information may be generated based on a maximum disparity of the reference layer detected in an arbitrary stream unit.
- the second disparity information may be generated based on a disparity of the reference layer detected at the time when the graphic object is displayed.
- the method may further include reading depth information from a storage unit storing preset depth information, and generating first disparity information for the reference layer and second disparity information for the overlay layer according to the depth information.
- the image may be given a three-dimensional sensation according to the first disparity information
- the graphic object may be configured to be given a three-dimensional sensation according to the second disparity information.
- the overlay layer includes a plurality of layers having different depths, and different types of graphic objects may be displayed in each layer.
- the display order of the graphic object types displayed on the respective layers may be changed according to the user's selection.
- the graphic object may include at least one of an OSD menu, a caption, program information, an application icon, an application window, and a GUI window.
- FIG. 1 is a block diagram showing a configuration of a display device according to an embodiment of the present invention
- FIG. 2 is a diagram for explaining a relation between a reference layer and a plurality of overlay layers
- FIGS. 3 to 5 are block diagrams showing configurations of display devices according to various embodiments of the present invention
- FIGS. 6 to 8 are diagrams for explaining various embodiments for fixing the disparity of an overlay layer
- FIGS. 9 and 10 are diagrams for explaining an embodiment for flexibly changing the disparity of the overlay layer
- FIG. 11 is a diagram showing an example of a UI for changing the state of an overlay layer
- FIGS. 12 to 14 are diagrams showing configurations of signal processing apparatuses according to various embodiments of the present invention
- FIG. 15 is a block diagram showing a configuration of a broadcast transmission apparatus according to an embodiment of the present invention
- FIGS. 16 and 17 are diagrams for explaining a display state changed according to the contents of the program information table
- FIG. 18 is a flowchart for explaining a signal processing method according to various embodiments of the present invention
- a display device refers to various devices having a display function such as a TV, a PC, an electronic frame, a PDA, a mobile phone, a notebook PC, a tablet PC, an electronic book, and the like.
- the video processing unit 110 processes a video signal to form an image.
- a video signal may be detected from a broadcast signal transmitted from a broadcast transmission apparatus, or may be a signal provided from various external sources such as a web server, an internal or external storage medium, a playback apparatus, and the like.
- the video signal may be a stereo image for 3D output.
- a stereo image means two or more images.
- two images captured from different angles of a subject, i.e., a first input image and a second input image
- the first input image is referred to as a left eye image (or left image)
- the second input image is referred to as a right eye image (or right image).
- the video processing unit 110 may decode each of the data to generate a left eye image frame and a right eye image frame.
- the video signal may be a 2D image.
- the video processing unit 110 may perform various signal processing such as decoding, deinterleaving, and scaling on the 2D image to form one image frame.
- the video processing unit 110 may set an image frame composed of the input 2D image as a reference frame, and shift the positions of pixels corresponding to objects in the frame to generate a new frame.
- the reference frame is used as a left eye image frame, and the new frame having disparity can be used as a right eye image frame.
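The 2D-to-3D conversion described above can be sketched as a horizontal pixel shift. This is a minimal illustration only: the per-row shift values are hypothetical inputs, whereas a real implementation would derive a per-pixel shift from an estimated depth map and handle occlusions.

```python
# Minimal sketch of 2D-to-3D conversion by horizontal pixel shift.
# The shift values are hypothetical; a real implementation would derive
# them from an estimated depth map rather than take them as input.

def make_right_frame(left_frame, shifts):
    """left_frame: list of pixel rows (the reference / left-eye frame).
    shifts: horizontal disparity per row, in pixels (hypothetical input).
    Returns a new frame whose rows are shifted to create disparity."""
    right = []
    for row, d in zip(left_frame, shifts):
        if d >= 0:
            # shift row right by d pixels, replicating the left edge pixel
            right.append([row[0]] * d + row[:len(row) - d])
        else:
            # negative disparity: shift left, replicating the right edge pixel
            right.append(row[-d:] + [row[-1]] * (-d))
    return right

left = [[1, 2, 3, 4, 5],
        [6, 7, 8, 9, 10]]
print(make_right_frame(left, [2, 0]))  # [[1, 1, 1, 2, 3], [6, 7, 8, 9, 10]]
```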
- the graphic processing unit 120 may process graphic data to construct a graphic object.
- the graphic object may be a subtitle or closed caption corresponding to an image.
- Various types of objects such as an OSD menu, program information, an application icon, an application window, a GUI window, and the like may be generated by the graphic processing unit 120.
- the control unit 130 may control the video processor 110 and the graphics processor 120 so that a different stereoscopic effect is given to each of the image configured in the video processor 110 and the graphic object configured in the graphics processor 120. Specifically, when the video processor 110 forms an image in 3D, the controller 130 controls the video processor 110 and the graphics processor 120, respectively, to maintain a state in which the graphic object is displayed on a layer having a greater sense of depth than the layer on which the 3D image is displayed.
- a layer in which an image is displayed is referred to as a reference layer
- a layer in which a graphic object is displayed is referred to as an overlay layer.
- various kinds of graphic objects having graphical elements other than images can be displayed.
- the disparity of the overlay layer can be set to a larger value than that of the reference layer on which the image is displayed; concretely, to a value that guarantees that no depth inversion occurs.
- the display unit 140 displays an image frame configured in the video processing unit 110 and a graphic object configured in the graphic processing unit 120 on the screen.
- the display unit 140 may alternatively display the left eye image frame and the right eye image frame to display the image in 3D.
- the left eye graphic object and the right eye graphic object may be alternately displayed to display the graphic object in 3D.
- the video processing unit 110 may configure a multi-view image, and the graphics processing unit 120 may configure the graphic object as a multi-view object.
- the display unit 140 spatially divides the multi-view image and the multi-view object and outputs them so that the viewer perceives a 3D image without glasses. More specifically, in this case, the display unit 140 may be implemented as a display panel using a parallax barrier or a lenticular technique.
- FIG. 2 shows an example of a state in which a reference layer and an overlay layer are displayed.
- image and graphic objects are respectively output in a 3D manner.
- various depth senses are displayed according to the disparity.
- each depth may be referred to as a layer or a plane.
- the reference layer 10, in which an image is displayed among a plurality of layers, corresponds to the base, and at least one overlay layer 20, 30 may be provided thereon.
- although FIG. 2 shows one reference layer 10, a plurality of reference layers 10 may be provided when the image is displayed in 3D.
- the lowest overlay layer 20 among all the overlay layers is configured to have at least the same depth feeling as the uppermost reference layer 10 or to have a greater depth feeling. Accordingly, even if the stereoscopic effect of the 3D content is large, the graphic object is always displayed so as to be closer to the user than the image, and the inversion does not occur.
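The constraint above — every overlay layer at least as close to the viewer as the uppermost reference layer — can be expressed as a small disparity-assignment rule. The `margin` and `step` values below are illustrative assumptions, not values from the patent.

```python
def overlay_disparities(reference_disparity_max, n_layers=2, margin=5, step=5):
    """Return one disparity per overlay layer, each strictly greater than the
    maximum reference-layer disparity, so no graphic object can appear
    behind the video (no depth inversion).
    margin/step are illustrative assumptions, not values from the patent."""
    base = reference_disparity_max + margin
    return [base + i * step for i in range(n_layers)]

# If the video's maximum disparity in this scene is 10 pixels,
# the two overlay layers land at 15 and 20 pixels respectively.
print(overlay_disparities(10))  # [15, 20]
```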
- various kinds of graphic objects may be displayed on one overlay layer or may be displayed on a plurality of overlay layers having different depth senses depending on the type.
- the disparity information of the reference layer and the disparity information of the overlay layer can be provided in various ways.
- the display device 100 may include a video processing unit 110, a graphics processing unit 120, a control unit 130, a display unit 140, and a receiving unit 150.
- the receiver 150 may receive first disparity information for a reference layer and second disparity information for an overlay layer from an external source.
- the external source may be a broadcasting station for transmitting a broadcasting signal, or may be a variety of devices such as a storage medium, an external server, and a reproducing apparatus.
- the external source may set the size of the second disparity information to be larger than the first disparity information and transmit the second disparity information so that the graphic object is always displayed above the image.
- the control unit 130 may control the video processing unit 110 to give a stereoscopic effect to the image according to the first disparity information, and control the graphics processor 120 to give a stereoscopic effect to the graphic object according to the second disparity information.
- the first disparity information is information on a depth or a disparity of a video that can be referred to as a display reference of an overlay layer.
- the second disparity information is an explicit value that directly indicates the depth or disparity of the overlay layer.
- the display apparatus 100 can represent images and graphic objects in a 3D manner without causing a screen inversion phenomenon.
- the display apparatus includes a video processing unit 110, a graphics processing unit 120, a receiving unit 150, and a demultiplexer 160.
- the demultiplexer 160 detects a video signal and graphic data from a broadcast signal received through the receiver 150.
- the receiving unit 150 may receive a broadcasting signal including a video signal, graphic data, first disparity information, and second disparity information.
- the receiver 150 may include various components such as an antenna, an RF down-converter, a demodulator, and an equalizer. Accordingly, the received RF signal can be down-converted to the intermediate band, and then demodulated and equalized to restore the signal to be provided to the demultiplexer 160.
- the demultiplexer 160 demultiplexes the provided signal, provides the video signal to the video processing unit 110, and provides the graphic data to the graphics processing unit 120.
- an audio processing unit (not shown) for processing an audio signal may be further included, since a broadcast signal also contains an audio signal.
- the audio signal is not directly related to the processing of the graphic object, and thus the illustration and description thereof are omitted.
- the first disparity information and the second disparity information may be recorded in a predetermined area provided in the broadcast signal.
- the broadcast signal may include a program information table area in which program information is recorded, a user data area which a broadcaster or users can freely use, and the like.
- the first and second disparity information may be transmitted using these valid regions. This will be described later in detail.
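As a rough illustration of carrying the two disparity values in such a region, the sketch below parses a compact binary descriptor. The byte layout (tag, length, two signed disparity bytes) is entirely hypothetical; the patent does not fix a wire format, and real program information tables follow the MPEG-2 Systems descriptor syntax.

```python
import struct

# Hypothetical descriptor layout for illustration only: tag (1 byte),
# length (1 byte), first disparity (signed byte, reference layer),
# second disparity (signed byte, overlay layer). Not a standardized format.

def parse_disparity_descriptor(buf):
    tag, length, first, second = struct.unpack_from("BBbb", buf, 0)
    return {"tag": tag, "first_disparity": first, "second_disparity": second}

# Build and parse a sample descriptor: reference at -8 px, overlay at +15 px.
data = struct.pack("BBbb", 0x90, 2, -8, 15)
print(parse_disparity_descriptor(data))
```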
- the video processing unit 110 includes a video decoder 111, an L buffer 112, an R buffer 113, an L frame configuration unit 114, an R frame configuration unit 115, and a first switch 116.
- the video decoder 111 decodes the video signal provided from the demultiplexer 160.
- various decoding such as RS decoding, Viterbi decoding, turbo decoding, trellis decoding, or the like, or a combination thereof may be performed.
- the video processing unit 110 may include a deinterleaver for deinterleaving.
- the left eye image data among the decoded data in the video decoder 111 is stored in the L buffer 112 and the right eye image data is stored in the R buffer 113.
- the L frame construction unit 114 generates a left eye image frame using data stored in the L buffer 112.
- the R frame forming unit 115 generates a right eye image frame using the data stored in the R buffer 113.
- the first switch 116 alternately outputs the left eye image frame and the right eye image frame configured by the L frame configuration unit 114 and the R frame configuration unit 115, respectively. At this time, a black frame may be displayed between the left eye image frame and the right eye image frame. In addition, instead of outputting a single left eye image frame and a single right eye image frame per output cycle, the same left eye image frame and the same right eye image frame may each be output a plural number of times.
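The switching behavior just described — interleaving the two eye frames, optionally inserting black frames and repeating each eye's frame — can be sketched as a simple sequencer. The function and parameter names are illustrative, not from the patent.

```python
def frame_sequence(l_frames, r_frames, repeat=1, black=None):
    """Interleave left- and right-eye frames for frame-sequential 3D output.
    repeat: how many times each eye's frame is emitted per cycle.
    black: optional black frame inserted between eye frames.
    Both behaviors are mentioned in the text; names here are illustrative."""
    out = []
    for l, r in zip(l_frames, r_frames):
        out.extend([l] * repeat)
        if black is not None:
            out.append(black)
        out.extend([r] * repeat)
        if black is not None:
            out.append(black)
    return out

print(frame_sequence(["L0", "L1"], ["R0", "R1"], black="B"))
# ['L0', 'B', 'R0', 'B', 'L1', 'B', 'R1', 'B']
```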
- the graphics processing unit 120 includes a graphic data decoder 121, an L object configuration unit 122, an R object configuration unit 123, and a second switch 124.
- the graphic data decoder 121 decodes the graphic data supplied from the demultiplexer 160.
- the decoding scheme may correspond to the encoding scheme applied at the transmitting end, and a known technique may be applied to such a data encoding and decoding scheme. Therefore, a detailed description of the decoding method and configuration will be omitted.
- the decoded data in the graphic data decoder 121 are provided to the L object configuration unit 122 and the R object configuration unit 123, respectively. Although not shown in FIG. 4, it is needless to say that an L buffer and an R buffer may be provided and used in the graphics processor 120.
- the disparity between the left eye graphic object and the right eye graphic object formed by the L object configuration unit 122 and the R object configuration unit 123 is determined according to the disparity between the frames generated by the L frame configuration unit 114 and the R frame configuration unit 115.
- the second switch 124 alternately outputs the left and right eye graphic objects configured in the L object configuration unit 122 and the R object configuration unit 123 in association with the operation of the first switch 116.
- the image and the graphic object corresponding thereto can be superimposed on each other with a different sense of depth on one screen, and can be expressed in a 3D manner.
- the display apparatus includes a video processing unit 110, a graphics processing unit 120, a control unit 130, a display unit 140, a receiving unit 150, a disparity information generating unit 170, and a storage unit 180.
- the receiving unit 150 may receive data to be output by the display device; specifically, data may be received from various sources such as a broadcasting station, a web server, a storage medium, a reproducing apparatus, and the like.
- the received data may include information related to the depth of the image. That is, the receiving unit 150 may receive the first disparity information for the reference layer from an external source.
- control unit 130 can control the video processing unit 110 to give a stereoscopic effect to the image according to the received first disparity information.
- the disparity information generator 170 may be used when only the first disparity information for the reference layer can be received through the receiver 150.
- the disparity information generation unit 170 generates the second disparity information for the overlay layer.
- the control unit 130 may control the graphic processing unit 120 to give a stereoscopic effect to the graphic object according to the second disparity information generated by the disparity information generation unit 170.
- the second disparity information may be generated in various manners according to the embodiment. That is, when the image is expressed in 3D, the disparity of the reference layer can change from moment to moment.
- the disparity information generating unit 170 may analyze the first disparity information and check the disparity, and then generate the second disparity information using the result of the checking.
- the disparity information generation unit 170 may generate the second disparity information so that the overlay layer always has a fixed depth.
- 6 to 8 show various examples of a method for fixedly determining the disparity of the overlay layer in the display device.
- the disparity information generating unit 170 may detect a maximum disparity of a reference layer in an arbitrary stream unit, and may generate second disparity information based on the detected information.
- the stream unit may be one GoP (Group of Pictures), a broadcast program unit, a predetermined number of packets, a fixed time unit, and the like.
- the disparity information generation unit 170 may keep the disparity at time t1 as it is, or increase it by a predetermined value, use the result as the disparity of the overlay layer, and generate the second disparity information accordingly.
- the second disparity information may be generated using, as it is, the disparity of the reference layer at the time point t3 at which the graphic object should be displayed. That is, the second disparity information can be determined so that the overlay layer is fixed at the same depth level as the reference layer at the time point at which the image and the graphic object are first displayed together.
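Both fixed-depth strategies described above can be captured in one small helper: take either the maximum reference disparity over a stream unit (e.g. a GoP), or the reference disparity at the moment the graphic object appears, and fix the overlay there. The `margin` safety offset is an illustrative assumption, not a value from the patent.

```python
def fixed_overlay_disparity(ref_disparities, event_index=None, margin=5):
    """Fix the overlay-layer disparity in one of the two ways described:
    - event_index is None: use the maximum reference disparity over the
      stream unit (e.g. one GoP), so the overlay clears every frame in it;
    - otherwise: use the reference disparity at the frame where the graphic
      object is displayed (time t3 in the text).
    `margin` is an illustrative safety offset, not from the patent."""
    if event_index is None:
        base = max(ref_disparities)          # max over the stream unit
    else:
        base = ref_disparities[event_index]  # disparity at display time
    return base + margin

gop = [2, 7, 4, 9, 3]                        # per-frame reference disparities
print(fixed_overlay_disparity(gop))                  # 9 + 5 = 14
print(fixed_overlay_disparity(gop, event_index=2))   # 4 + 5 = 9
```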
- such an event requiring an overlay layer to be displayed may be a case where a caption is input, a case where a confirmation command for an OSD menu or an icon is input, or a case where an application or a widget is executed and a UI window is displayed.
- the disparity of the first overlay layer (i.e., the graphic plane) and that of the second overlay layer on which the OSD menu is displayed (i.e., the OSD plane) may differ.
- the OSD plane may have a slightly larger disparity than the first overlay layer. Accordingly, a different sense of depth can be given depending on the kind of the graphic object.
- the disparity information generating unit 170 may generate the second disparity information so that the overlay layer has a fluid, dynamically varying depth sense. That is, it may generate the second disparity information based on the first disparity information, so that the disparity of the overlay layer also changes according to the disparity variation state of the reference layer.
- 9 and 10 are views for explaining a method of dynamically determining a disparity of an overlay layer in a display device.
- the disparity of the reference layer changes continuously over time, and the disparity of the overlay layer changes so as to maintain a constant interval from the reference layer.
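The constant-interval tracking just described reduces to adding a fixed offset to the reference layer's time-varying disparity. The `gap` value below is an illustrative assumption.

```python
def track_overlay(ref_disparities, gap=5):
    """Make the overlay layer's disparity follow the reference layer's
    time-varying disparity at a constant interval, as in the dynamic
    scheme of FIGS. 9 and 10. `gap` is an illustrative offset."""
    return [d + gap for d in ref_disparities]

# The overlay stays exactly `gap` pixels "in front of" the video at all times.
print(track_overlay([0, 3, 8, 6, 2]))  # [5, 8, 13, 11, 7]
```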
- FIG. 10 shows a state in which the depth of the reference layer varies in the vertical direction with respect to the screen of the display device, and the depth of the overlay layer also changes in the vertical direction.
- the second disparity information can be fixedly or dynamically determined using the first disparity information.
- the present invention is not limited thereto. That is, even when only the second disparity information is provided, the first disparity information may be generated based on the second disparity information; in this case, too, the disparity of the reference layer may of course be determined in either a fluid or a fixed manner.
- the second disparity information itself may be predetermined and stored in the storage unit 180.
- the disparity information generation unit 170 may generate the second disparity information with a value stored in the storage unit 180, regardless of the first disparity information.
- in some cases, neither the first disparity information nor the second disparity information is provided from the outside.
- the disparity information generating unit 170 may generate the first and second disparity information using the preset disparity information.
- the storage unit 180 may store arbitrarily determined depth information or disparity information. For example, assuming that the depth of the screen is 0, the disparity may be set so that the reference layer varies within about -10 to +10 pixels, the first overlay layer lies at about +15 pixels, and the second overlay layer at about +20 pixels.
- Such disparity may have various magnitudes depending on the type of display device. That is, a TV having a large screen size may be set with larger disparity information than a small display device such as a mobile phone.
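One way to realize display-size-dependent disparity is a simple linear scaling of a stored disparity to the target panel width. The helper below is a hypothetical sketch of that idea; the linear rule and the parameter names are assumptions, not given in the specification.

```python
def scale_disparity(base_pixels, base_width, target_width):
    """Scale a stored disparity value (in pixels, defined for a panel
    of `base_width` pixels) to a display of `target_width` pixels.

    Linear pixel scaling is an illustrative assumption: a large-screen
    TV ends up with a larger disparity than a phone-sized display.
    """
    return round(base_pixels * target_width / base_width)
```

For example, a disparity stored for a 960-pixel-wide reference panel doubles on a 1920-pixel TV panel and shrinks on a phone screen.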
- the disparity information generation unit 170 may generate the first and second disparity information according to the depth information stored in the storage unit 180, and provide them to the video processing unit 110 and the graphics processing unit 120, respectively.
- the disparity information generating unit 170 may analyze the disparity of the reference layer by comparing the left eye image frame and the right eye image frame constructed from the video signal, and checking the distance between the matching points.
- specifically, the disparity information generation unit 170 divides each of the left-eye and right-eye image frames into a plurality of blocks and compares the pixel representative values of the blocks. Blocks whose pixel representative values fall within a similar range are determined to be matching points, and a depth map is generated based on the movement distance between the determined matching points. That is, the position of a pixel constituting a subject in the left-eye image is compared with the position of the corresponding pixel in the right-eye image, and the difference is calculated. An image having gray levels corresponding to the calculated differences, i.e., a depth map, is thereby generated.
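The block-matching step described above can be sketched roughly as follows, using a sum-of-absolute-differences (SAD) comparison as a stand-in for the unspecified "pixel representative value" test. All names, the block size, and the search range are illustrative assumptions, not the patent's exact procedure.

```python
import numpy as np

def block_disparity_map(left, right, block=8, max_shift=16):
    """Rough depth-map sketch: for each block of the left-eye frame,
    find the horizontal shift in the right-eye frame whose pixels
    match best (lowest SAD). Returns one disparity value per block.
    """
    h, w = left.shape
    dmap = np.zeros((h // block, w // block), dtype=np.int32)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block].astype(np.int64)
            best, best_d = None, 0
            for d in range(0, max_shift + 1):
                if x + d + block > w:
                    break  # candidate block would fall off the frame
                cand = right[y:y + block, x + d:x + d + block].astype(np.int64)
                sad = np.abs(ref - cand).sum()  # sum of absolute differences
                if best is None or sad < best:
                    best, best_d = sad, d
            dmap[by, bx] = best_d
    return dmap
```

A block containing a feature that sits four pixels further right in the right-eye frame gets disparity 4; featureless blocks stay at 0.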
- the depth can be defined by the distance between the subject and the camera, the distance between the subject and the recording medium (for example, film) on which the image of the subject is formed, the degree of stereoscopic effect, and the like. Accordingly, the difference in position between corresponding points of the left-eye image and the right-eye image corresponds to the disparity, and the larger its value, the greater the stereoscopic effect.
- A depth map is a single image that represents how the depth changes across the screen.
- the disparity information generation unit 170 can determine the disparity of the overlay layer based on the depth map, and can generate the second disparity information in either a fixed or a fluid manner.
- a plurality of overlay layers may be provided, and graphic objects of different kinds may be displayed in each overlay layer.
- for example, a graphic object such as an OSD menu may be displayed on one overlay layer, and a graphic object such as a caption on another.
- This display order can be changed according to the user's selection.
- FIG. 11 shows an example of a user interface (UI) that allows a user to change the display order of graphic objects.
- a plurality of menus are displayed on the screen of the display device 100.
- in one mode, a graphic object such as a subtitle may be placed on the topmost overlay layer, with the remaining graphic objects displayed below it.
- when the OSD emphasis mode (b) is selected, a graphic object such as an OSD menu may be placed on the topmost overlay layer and the remaining graphic objects displayed below it.
- the user can directly set the depth of each graphic object by selecting the user setting mode (c). That is, as shown in FIG. 11, when the user setting mode (c) is selected, a new UI (d) is displayed.
- on the UI (d), the user can directly set the depth of the caption, the depth of the OSD menu, and the like. In this case, the depth can be set using a bar graph as shown in FIG. 11, or by directly entering numbers, text, or the like.
- the signal processing apparatus 200 includes an OSD decoder 210, a memory 220, a detection unit (PID filter) 230, a video decoder 240, a graphic decoder 250, a 3D manager unit 260, an OSD buffer 270, a graphic buffer 280, a video buffer 290, and a mux 295.
- the signal processing device may be a set-top box, a playback device such as a DVD or Blu-ray player or a VCR, or may be implemented as a chip or module embedded in various devices.
- the OSD decoder 210 reads the OSD data from the memory 220 according to a command of the user, decodes the OSD data, and provides the OSD data to the 3D manager unit 260.
- the 3D manager unit 260 generates a left eye OSD object and a right eye OSD object using the provided OSD data. In this case, the disparity between the left-eye OSD object and the right-eye OSD object is set to match the disparity of the overlay layer on which the OSD menu is to be displayed.
- the generated left-eye and right-eye OSD menus are stored in the OSD buffer 270.
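What the 3D manager unit 260 does with a decoded OSD object can be sketched as placing horizontally offset left-eye and right-eye copies. The half-disparity split and the sign convention below (positive disparity pushes the object toward the viewer) are assumptions for illustration, not details stated in the specification.

```python
def make_stereo_object(obj_x, obj_y, disparity):
    """Place left-eye and right-eye copies of a 2D object so that
    their horizontal separation equals the overlay layer's disparity.

    Splitting the shift evenly between the two eyes is a hypothetical
    convention; a real implementation might shift only one view.
    """
    half = disparity / 2.0
    left_pos = (obj_x + half, obj_y)   # left-eye copy shifted right
    right_pos = (obj_x - half, obj_y)  # right-eye copy shifted left
    return left_pos, right_pos
```

The two positioned copies would then be rendered into the left-eye and right-eye buffers, as the OSD buffer 270 stores them here.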
- the detection unit 230 processes the transport stream to separate the graphic data and the video data. More specifically, when the transport stream conforms to the MPEG-2 standard, the detection unit 230 detects the Program Specific Information (PSI) table from the MPEG-2 transport stream. Using a PID filter (Program IDentifier filter), it can then acquire all kinds of PSI data, such as the ATSC Program and System Information Protocol (PSIP) table, DVB Service Information (SI), the Conditional Access Table (CAT), DSM-CC messages, and private table data. The detection unit 230 can separate the video data and the graphic data using the acquired data. Meanwhile, the detection unit 230 detects depth packets carrying the disparity information for the overlay layer and provides the detected depth packets to the 3D manager unit 260.
- the graphic data is provided to the graphic decoder 250.
- the graphic decoder 250 decodes the graphic data and provides the decoded graphic data to the 3D manager unit 260.
- the 3D manager 260 generates left eye graphic objects and right eye graphic objects using the depth packets provided from the detector 230 and the decoded graphic data.
- the disparity between the left and right graphic objects is set to match the disparity of the overlay layer on which the subtitles are to be displayed.
- the generated left-eye and right-eye graphic objects are stored in the graphic buffer 280. As described above, the information on the disparity of the overlay layer may be transmitted in the same stream as the video signal, or may be transmitted in a separate stream.
- the video decoder 240 decodes the video data and provides the decoded video data to the video buffer 290. If the video signal contained in the TS is a 2D signal, the video buffer 290 stores a 2D image frame. On the other hand, when the video signal itself includes the left eye image frame and the right eye image frame, the left eye image frame and the right eye image frame can be stored in the video buffer 290 without a separate 3D conversion process. Although not shown in FIG. 12, when a 3D image conversion module is further included, it is of course possible to generate a left eye image frame and a right eye image frame by using a 2D video signal.
- the data stored in the OSD buffer 270, the graphic buffer 280, and the video buffer 290 are combined by the MUX 295 to form screen data.
- the configured data may be transmitted to the external display means through an interface provided separately, or may be stored in a separate storage unit.
- the signal processing apparatus 300 includes a receiving unit 310, a video processing unit 320, an audio processing unit 330, a graphics processing unit 340, and an interface unit 350.
- the receiving unit 310 receives an input signal.
- the input signal may be a multimedia signal provided from an internal or external storage medium, a playback apparatus, or the like, as well as a broadcast signal transmitted from a broadcasting station.
- the video signal included in the input signal received by the receiving unit 310 is provided to the video processing unit 320.
- the video processing unit 320 processes the video signal to construct an image that can be displayed on the reference layer.
- the audio processing unit 330 processes the audio signal included in the input signal to generate sound.
- the graphic processing unit 340 processes graphic data to construct a graphic object to be displayed on an overlay layer above the reference layer.
- the graphic data may be caption data or the like included together with the input signal, or may be data provided from another source.
- it may be an OSD menu, various icons, windows, and the like.
- the data processed by each processing unit is transmitted to the output means by the interface unit 350.
- in this case, the disparity information for the video data (i.e., the first disparity information) and the disparity information for the graphic data (i.e., the second disparity information) may both be provided in the input signal, or one or both may be absent.
- when provided, the video processing unit 320 detects the first disparity information from the input signal and, based on the detected first disparity information, gives the image a three-dimensional effect.
- the graphic processing unit detects the second disparity information included in the input signal and gives a three-dimensional effect to the graphic object based on the second disparity information.
- the signal processing apparatus 300 includes a receiving unit 310, a video processing unit 320, an audio processing unit 330, a graphics processing unit 340, an interface unit 350, a disparity information generating unit 360, and a storage unit 370.
- if only the first disparity information is included in the input signal, the disparity information generating unit 360 generates the second disparity information for the overlay layer.
- the disparity information generating unit 360 generates second disparity information so that the disparity of the overlay layer varies according to the disparity variation state of the reference layer. That is, the depth of the overlay layer can be flexibly changed as described above.
- the disparity information generation unit 360 may generate the second disparity information so that the overlay layer has a fixed depth.
- the disparity information generation unit 360 provides the generated second disparity information to the graphic processing unit 340.
- the graphic processing unit 340 gives a stereoscopic effect to the graphic object according to the second disparity information generated by the disparity information generation unit 360.
- in some cases, neither the first disparity information nor the second disparity information is included in the input signal.
- the disparity information generation unit 360 generates the first and second disparity information using the depth information stored in the storage unit 370.
- the video processing unit 320 and the graphics processing unit 340 respectively apply the first and second disparity information to the video and graphic objects.
- a transmitting apparatus 400 includes a video encoder 410, a video packetizing unit 420, an audio encoder 430, an audio packetizing unit 440, a data encoder 450, a packetizing unit 460, a disparity information processing unit 470, a mux 480, and an output unit 490.
- the video encoder 410, the audio encoder 430, and the data encoder 450 encode video data, audio data, and general data, respectively.
- the video packetizing unit 420, the audio packetizing unit 440, and the packetizing unit 460 constitute packets each including encoded data. Specifically, a plurality of packets including a header, a payload, a parity, and the like are configured.
- The mux 480 multiplexes the constructed packets. Specifically, it combines a predetermined number of the packets provided from the video packetizing unit 420, the audio packetizing unit 440, and the packetizing unit 460 into a frame.
- the output unit 490 performs processing such as randomization, RS encoding, interleaving, trellis encoding, sync multiplexing, pilot insertion, modulation, and RF up-conversion on the frame into which the packets are combined, and outputs the result through an antenna.
- the disparity information processing unit 470 generates information on the disparity of at least one of the reference layer and the overlay layer, and provides the information to the mux 480.
- This information may be recorded in a predetermined field in the stream. Specifically, it may be recorded in a program map table (PMT) descriptor, a user data area, or the like. Alternatively, it may be provided via a separate stream.
- Such disparity information may be provided by various parameters such as depth style information, depth control permission information, and the like.
- Table 1 shows the syntax of information for informing the depth or disparity of the overlay layer.
- depth_control_permission is a parameter that allows the user to directly adjust the depth of the overlay layer. That is, when this value is 1, the user can perform the depth adjustment.
- when this value is 0, depth adjustment is not allowed, in accordance with the content author's intention, even if the external reproducing apparatus or display device capable of 3D reproduction supports depth adjustment.
- the depth or disparity of the overlay layer can be provided to a receiver (i.e., a display device or a signal processing device) using a depth style function as follows.
- video_mode is information indicating whether the mode is the 2D mode or the 3D mode; 0 means the 2D mode and 1 means the 3D mode.
- optimized_graphic_depth indicates the optimal depth or disparity of the subtitles, as determined by the author.
- osd_offset indicates the depth or disparity of the OSD menu determined by the author.
- min_graphic_depth indicates the minimum depth or disparity of the overlay layer, determined so as to prevent the depth inversion phenomenon.
- max_graphic_depth indicates the maximum depth or disparity of the overlay layer, for optimizing the stereoscopic effect while minimizing the viewing discomfort of the user.
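Assuming the depth_style() fields are packed MSB-first exactly as the bit widths in Table 2 suggest (1 + 8 + 8 + 8 + 8 + 7 = 40 bits), a receiver-side parse might look like the sketch below. The packing assumption is mine, not stated in the specification.

```python
def parse_depth_style(data: bytes):
    """Parse depth_style() fields using the Table 2 bit widths:
    video_mode (1 bit), optimized_graphic_depth (8), osd_offset (8),
    min_graphic_depth (8), max_graphic_depth (8), reserved (7).

    Assumes MSB-first packing with no other syntax elements between
    the fields (an illustrative assumption).
    """
    bits = int.from_bytes(data[:5], "big")  # 40 bits total = 5 bytes
    return {
        "video_mode": (bits >> 39) & 0x1,
        "optimized_graphic_depth": (bits >> 31) & 0xFF,
        "osd_offset": (bits >> 23) & 0xFF,
        "min_graphic_depth": (bits >> 15) & 0xFF,
        "max_graphic_depth": (bits >> 7) & 0xFF,
        # low 7 bits are reserved and ignored
    }
```

With the Table 4 example values (osd_offset = 0, min = 10, optimized = 15, max = 20), the parser recovers exactly those fields.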
- the overlay_plane_depth() structure shown in Table 1 can be defined in, for example, the descriptor portion of the PMT.
- the descriptor for overlay_plane_depth can be defined as shown in the following table.
- the overlay_plane_depth_descriptor can be defined as shown in Table 3, using a descriptor_tag in the user private area defined in ISO/IEC 13818-1.
- alternatively, overlay_plane_depth() can be defined in the ES user data area; in that case there is no restriction on the definition interval.
- video_mode, optimized_graphic_depth, osd_offset, min_graphic_depth, max_graphic_depth, and the like in Table 2 can be set to various values.
- FIG. 16 shows a screen configuration when the parameters are defined as shown in Table 4. That is, when osd_offset is set to 0, the OSD menu 11 is displayed on the layer where the video is displayed, i.e., the reference layer. On the other hand, since min_graphic_depth is 10, the graphic object 12 is displayed on the overlay layer.
- each parameter may be defined as shown in Table 5 below.
- FIG. 17 shows a screen configuration when the parameters are defined as shown in Table 5. That is, when osd_offset is set to 10, the OSD menu 11 is displayed on the overlay layer, while the graphic object 12 is displayed on the reference layer.
- as described above, when disparity information for a graphic object is provided from outside, various graphic objects can be displayed on at least one overlay layer or on the reference layer according to that disparity information.
- a separate PES stream may be defined to define the depth or disparity of the overlay layer.
- a PES stream having the following format can be prepared.
- data_identifier is an identifier for identifying a stream containing information on the depth or disparity of the overlay layer.
- the overlay_plane_depth_segment in Table 6 can be composed of parameters having the same meanings as those of the depth_style shown in Table 2.
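A walk over a PES_data_field laid out as in Table 6 might look like the sketch below: a data_identifier, then overlay_plane_depth_segment()s each introduced by a sync byte, then an end marker. The concrete sync-byte value, end-marker value, and fixed segment size are hypothetical, since the specification does not give them.

```python
SYNC_BYTE = 0x0F     # hypothetical sync_byte value, not given in the spec
END_MARKER = 0xFF    # hypothetical end_of_PES_data_field_marker value
SEGMENT_LEN = 5      # depth_style-sized segment (40 bits per Table 2)

def split_pes_data_field(payload: bytes):
    """Split a PES_data_field into its data_identifier and the raw
    bytes of each overlay_plane_depth_segment(), following the Table 6
    loop: segments repeat while the next byte equals the sync byte.
    """
    data_identifier = payload[0]
    pos, segments = 1, []
    while pos < len(payload) and payload[pos] == SYNC_BYTE:
        segments.append(payload[pos + 1:pos + 1 + SEGMENT_LEN])
        pos += 1 + SEGMENT_LEN
    assert payload[pos] == END_MARKER  # end_of_PES_data_field_marker
    return data_identifier, segments
```

Each returned segment could then be handed to a depth_style parser of the kind sketched earlier.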
- overlay_plane_depth_descriptor shown in Table 3 may be defined as shown in the following table.
- FIG. 18 is a flowchart for explaining a signal processing method according to an embodiment of the present invention. Referring to FIG. 18, when a signal is received (S1810), an image is formed using the received signal (S1820).
- if a situation in which graphic data is to be displayed occurs, a graphic object is configured (S1840).
- the situation in which graphic data is to be displayed means, for example, a case where there is a caption to be displayed together with the video, a case where a user command for selecting an OSD menu is input, or a case where a user command for displaying icons, windows, or the like is input.
- when such a situation occurs, a graphic object is generated with a stereoscopic effect so that it can be displayed on an overlay layer above the layer on which the image is displayed.
- the information on the disparity of the overlay layer may be provided from the outside as described above, may be generated in the apparatus itself based on the disparity of the reference layer, or may be generated using the stored depth information separately.
- the image and graphic object are transmitted to the external device (S1850).
- the external device may be a display device separately provided outside the apparatus in which the present method is performed, or may refer to another chip in the same apparatus.
- Such a signal processing method can be implemented in various forms as described above. That is, different kinds of graphic objects may be displayed on a plurality of overlay layers, or the display order among the overlay layers may be changed.
- the program for performing the method according to various embodiments of the present invention described above can be stored and used in various types of recording media.
- specifically, the code for performing the above-described methods may be stored in various types of computer-readable recording media, such as a RAM (Random Access Memory), a flash memory, a ROM (Read Only Memory), an EPROM (Erasable Programmable ROM), an EEPROM (Electronically Erasable and Programmable ROM), a register, a hard disk, a removable disk, a memory card, a USB memory, a CD-ROM, and the like.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Controls And Circuits For Display Device (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Description
Table 1

| Overlay_plane_depth() { | No. of bits |
| --- | --- |
| ... | |
| depth_control_permission | 1 |
| reserved | 7 |
| if(depth_control_permission == '1') { | |
| depth_style_number | 4 |
| reserved | 4 |
| for(i=0; i<depth_style_number; i++) { | |
| depth_style() | |
| } | |
| ... | |
| } | |
Table 2

| depth_style() | No. of bits |
| --- | --- |
| ... | |
| video_mode | 1 |
| optimized_graphic_depth | 8 |
| osd_offset | 8 |
| min_graphic_depth | 8 |
| max_graphic_depth | 8 |
| reserved | 7 |
| ... | |
| } | |
Table 3

| overlay_plane_depth_descriptor { | No. of bits |
| --- | --- |
| ... | |
| descriptor_tag | 8 |
| descriptor_length | 8 |
| overlay_plane_depth() | |
| ... | |
Table 4

| Parameter | Value |
| --- | --- |
| video_mode | 0 (2D) |
| min_graphic_depth | 10 |
| optimized_graphic_depth | 15 |
| max_graphic_depth | 20 |
| osd_offset | 0 |
Table 5

| Parameter | Value |
| --- | --- |
| video_mode | 0 (2D) |
| min_graphic_depth | 0 |
| optimized_graphic_depth | 0 |
| max_graphic_depth | 0 |
| osd_offset | 10 |
Table 6

| syntax | size |
| --- | --- |
| PES_data_field() { | |
| data_identifier | 8 |
| while nextbits() == sync_byte { | |
| overlay_plane_depth_segment() | |
| } | |
| end_of_PES_data_field_marker | 8 |
| } | |
Table 7

| overlay_plane_depth_descriptor { | No. of bits |
| --- | --- |
| descriptor_tag | 8 |
| descriptor_length | 8 |
| depth_control_permission | 1 |
| reserved | 7 |
| } | |
Claims (35)
- A display apparatus comprising: a video processing unit which processes a video signal to construct an image; a graphic processing unit which processes graphic data to construct a graphic object; a display unit which displays the image and the graphic object; and a control unit which controls the video processing unit and the graphic processing unit to give different stereoscopic effects to the image and the graphic object, respectively, so that the graphic object remains displayed on an overlay layer above a reference layer on which the image is displayed.
- The display apparatus of claim 1, further comprising a receiving unit which receives, from an external source, first disparity information for the reference layer and second disparity information for the overlay layer, wherein the control unit controls the video processing unit to give the image a stereoscopic effect according to the first disparity information, and controls the graphic processing unit to give the graphic object a stereoscopic effect according to the second disparity information.
- The display apparatus of claim 2, wherein the receiving unit receives a broadcast signal including the video signal, the graphic data, the first disparity information, and the second disparity information, and the video processing unit and the graphic processing unit respectively detect the first disparity information and the second disparity information from a program information table or a user data area included in the broadcast signal.
- The display apparatus of claim 1, further comprising: a receiving unit which receives first disparity information for the reference layer from an external source; and a disparity information generating unit which generates second disparity information for the overlay layer, wherein the control unit controls the video processing unit to give the image a stereoscopic effect according to the first disparity information, and controls the graphic processing unit to give the graphic object a stereoscopic effect according to the second disparity information.
- The display apparatus of claim 4, wherein the disparity information generating unit generates the second disparity information on the basis of the first disparity information so that the disparity of the overlay layer varies according to the disparity variation state of the reference layer and the depth difference between the layers is maintained at a preset size.
- The display apparatus of claim 4, wherein the disparity information generating unit generates the second disparity information so that the overlay layer has a fixed depth.
- The display apparatus of claim 6, wherein the disparity information generating unit detects a maximum disparity of the reference layer within an arbitrary stream unit and generates the second disparity information on the basis of the detected information.
- The display apparatus of claim 6, wherein the disparity information generating unit detects the disparity of the reference layer at the time point at which the graphic object is displayed and generates the second disparity information on the basis of the detected information.
- The display apparatus of claim 1, further comprising: a storage unit in which preset depth information is stored; and a disparity information generating unit which generates, according to the depth information, first disparity information for the reference layer and second disparity information for the overlay layer, wherein the control unit controls the video processing unit to give the image a stereoscopic effect according to the first disparity information, and controls the graphic processing unit to give the graphic object a stereoscopic effect according to the second disparity information.
- The display apparatus of any one of claims 1 to 9, wherein the overlay layer comprises a plurality of layers having different depths, and a different kind of graphic object is displayed on each layer.
- The display apparatus of claim 10, wherein the display order of the kinds of graphic objects displayed on the respective layers is changeable according to a user's selection.
- The display apparatus of claim 10, wherein the graphic object includes at least one kind of object among an OSD menu, a caption, program information, an application icon, an application window, and a GUI window.
- A signal processing apparatus comprising: a receiving unit which receives an input signal; a video processing unit which processes a video signal included in the input signal to construct an image displayable on a reference layer; an audio processing unit which processes an audio signal included in the input signal to generate sound; a graphic processing unit which processes graphic data to construct a graphic object to be displayed on an overlay layer above the reference layer; and an interface unit which transmits the image, the sound, and the graphic object to an output means.
- The signal processing apparatus of claim 13, wherein the video processing unit detects first disparity information included in the input signal and gives the image a stereoscopic effect based on the first disparity information, and the graphic processing unit detects second disparity information included in the input signal and gives the graphic object a stereoscopic effect based on the second disparity information.
- The signal processing apparatus of claim 14, further comprising a disparity information generating unit which generates the second disparity information for the overlay layer, wherein the video processing unit detects the first disparity information included in the input signal and gives the image a stereoscopic effect based on the first disparity information, and the graphic processing unit gives the graphic object a stereoscopic effect according to the second disparity information generated by the disparity information generating unit.
- The signal processing apparatus of claim 15, wherein the disparity information generating unit generates the second disparity information on the basis of the first disparity information so that the disparity of the overlay layer varies according to the disparity variation state of the reference layer and the depth difference between the layers is maintained at a preset size.
- The signal processing apparatus of claim 15, wherein the disparity information generating unit generates the second disparity information so that the overlay layer has a fixed depth.
- The signal processing apparatus of claim 17, wherein the disparity information generating unit detects a maximum disparity of the reference layer within an arbitrary stream unit and generates the second disparity information on the basis of the detected information.
- The signal processing apparatus of claim 17, wherein the disparity information generating unit detects the disparity of the reference layer at the time point at which the graphic object is displayed and generates the second disparity information on the basis of the detected information.
- The signal processing apparatus of claim 13, further comprising: a storage unit in which preset depth information is stored; and a disparity information generating unit which generates, according to the depth information, first disparity information for the reference layer and second disparity information for the overlay layer, wherein the video processing unit detects first disparity information included in the input signal and gives the image a stereoscopic effect based on the first disparity information, and the graphic processing unit detects second disparity information included in the input signal and gives the graphic object a stereoscopic effect based on the second disparity information.
- The signal processing apparatus of any one of claims 13 to 20, wherein the overlay layer comprises a plurality of layers having different depths, and a different kind of graphic object is displayed on each layer.
- The signal processing apparatus of claim 21, wherein the display order of the kinds of graphic objects displayed on the respective layers is changeable according to a user's selection.
- The signal processing apparatus of claim 21, wherein the graphic object includes at least one kind of object among an OSD menu, a caption, program information, an application icon, an application window, and a GUI window.
- A signal processing method comprising: processing a video signal to construct an image displayable on a reference layer; processing graphic data to construct a graphic object to be displayed on an overlay layer above the reference layer; and transmitting the image and the graphic object to an output means.
- The signal processing method of claim 24, further comprising receiving, from an external source, first disparity information for the reference layer and second disparity information for the overlay layer, wherein the image is given a stereoscopic effect according to the first disparity information, and the graphic object is given a stereoscopic effect according to the second disparity information.
- The signal processing method of claim 25, wherein the receiving comprises: receiving a broadcast signal including the video signal, the graphic data, the first disparity information, and the second disparity information; and detecting the first disparity information and the second disparity information, respectively, from a program information table or a user data area included in the broadcast signal.
- The signal processing method of claim 24, further comprising: receiving first disparity information for the reference layer from an external source; and generating second disparity information for the overlay layer, wherein the image is given a stereoscopic effect according to the first disparity information, and the graphic object is given a stereoscopic effect according to the second disparity information.
- The signal processing method of claim 27, wherein the generating of the second disparity information comprises: analyzing the first disparity information to check the disparity variation state of the reference layer; and generating the second disparity information on the basis of the first disparity information so that the disparity of the overlay layer varies according to the disparity variation state of the reference layer and the depth difference between the layers is maintained at a preset size.
- The signal processing method of claim 27, wherein the second disparity information is generated so that the overlay layer has a fixed depth.
- The signal processing method of claim 29, wherein the second disparity information is generated on the basis of a maximum disparity of the reference layer detected within an arbitrary stream unit.
- The signal processing method of claim 29, wherein the second disparity information is generated on the basis of the disparity of the reference layer detected at the time point at which the graphic object is displayed.
- The signal processing method of claim 24, further comprising: reading preset depth information from a storage unit in which the depth information is stored; and generating, according to the depth information, first disparity information for the reference layer and second disparity information for the overlay layer, wherein the image is given a stereoscopic effect according to the first disparity information, and the graphic object is given a stereoscopic effect according to the second disparity information.
- The signal processing method of any one of claims 24 to 32, wherein the overlay layer comprises a plurality of layers having different depths, and a different kind of graphic object is displayed on each layer.
- The signal processing method of claim 33, wherein the display order of the kinds of graphic objects displayed on the respective layers is changeable according to a user's selection.
- The signal processing method of claim 33, wherein the graphic object includes at least one kind of object among an OSD menu, a caption, program information, an application icon, an application window, and a GUI window.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013531503A JP2013546220A (ja) | 2010-10-01 | 2011-09-30 | ディスプレイ装置および信号処理装置並びにその方法 |
CN2011800474498A CN103155577A (zh) | 2010-10-01 | 2011-09-30 | 显示装置和信号处理装置及其方法 |
US13/824,818 US20130182072A1 (en) | 2010-10-01 | 2011-09-30 | Display apparatus, signal processing apparatus and methods thereof for stable display of three-dimensional objects |
EP11829634.2A EP2624571A4 (en) | 2010-10-01 | 2011-09-30 | DISPLAY DEVICE, SIGNAL PROCESSING DEVICE AND CORRESPONDING METHODS |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US38877010P | 2010-10-01 | 2010-10-01 | |
US61/388,770 | 2010-10-01 |
Publications (3)
Publication Number | Publication Date |
---|---|
WO2012044128A2 WO2012044128A2 (ko) | 2012-04-05 |
WO2012044128A3 WO2012044128A3 (ko) | 2012-05-31 |
WO2012044128A4 true WO2012044128A4 (ko) | 2012-07-26 |
Family
ID=45893706
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2011/007285 WO2012044128A2 (ko) | 2010-10-01 | 2011-09-30 | 디스플레이 장치 및 신호 처리 장치와, 그 방법들 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20130182072A1 (ko) |
EP (1) | EP2624571A4 (ko) |
JP (1) | JP2013546220A (ko) |
KR (1) | KR20120034574A (ko) |
CN (1) | CN103155577A (ko) |
WO (1) | WO2012044128A2 (ko) |
Families Citing this family (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103329549B (zh) * | 2011-01-25 | 2016-03-09 | 富士胶片株式会社 | 立体视频处理器、立体成像装置和立体视频处理方法 |
US8923686B2 (en) * | 2011-05-20 | 2014-12-30 | Echostar Technologies L.L.C. | Dynamically configurable 3D display |
EP2867757A4 (en) * | 2012-06-30 | 2015-12-23 | Intel Corp | 3D GRAPHIC USER INTERFACE |
KR20140061098A (ko) * | 2012-11-13 | 2014-05-21 | 엘지전자 주식회사 | 영상표시장치, 및 그 동작방법 |
CN103841394B (zh) * | 2012-11-23 | 2017-07-07 | 北京三星通信技术研究有限公司 | 多层式三维显示器的标定设备和标定方法 |
KR101289527B1 (ko) * | 2012-11-26 | 2013-07-24 | 김영민 | 이동 단말기 및 그 제어방법 |
US20140237403A1 (en) * | 2013-02-15 | 2014-08-21 | Samsung Electronics Co., Ltd | User terminal and method of displaying image thereof |
JP6253307B2 (ja) * | 2013-08-21 | 2017-12-27 | キヤノン株式会社 | 撮像装置、外部装置、撮像システム、撮像装置の制御方法、外部装置の制御方法、撮像システムの制御方法、及びプログラム |
CN104581341B (zh) * | 2013-10-24 | 2018-05-29 | 华为终端有限公司 | 一种字幕显示方法及字幕显示设备 |
KR102176474B1 (ko) * | 2014-01-06 | 2020-11-09 | 삼성전자주식회사 | 영상표시장치, 영상표시장치의 구동방법 및 영상표시방법 |
CN106060622B (zh) * | 2016-07-26 | 2019-02-19 | 青岛海信电器股份有限公司 | 电视的截屏方法及电视 |
KR102423295B1 (ko) | 2017-08-18 | 2022-07-21 | 삼성전자주식회사 | 심도 맵을 이용하여 객체를 합성하기 위한 장치 및 그에 관한 방법 |
EP3687178B1 (en) | 2017-09-26 | 2023-03-15 | LG Electronics Inc. | Overlay processing method in 360 video system, and device thereof |
EP3493431A1 (en) * | 2017-11-30 | 2019-06-05 | Advanced Digital Broadcast S.A. | A method for parallel detection of disparities in a high resolution video |
KR20190132072A (ko) * | 2018-05-18 | 2019-11-27 | 삼성전자주식회사 | 전자장치, 그 제어방법 및 기록매체 |
US11222470B1 (en) * | 2018-08-21 | 2022-01-11 | Palantir Technologies Inc. | Systems and methods for generating augmented reality content |
US11314383B2 (en) * | 2019-03-24 | 2022-04-26 | Apple Inc. | Stacked media elements with selective parallax effects |
US10937362B1 (en) * | 2019-09-23 | 2021-03-02 | Au Optronics Corporation | Electronic apparatus and operating method thereof |
US11210844B1 (en) | 2021-04-13 | 2021-12-28 | Dapper Labs Inc. | System and method for creating, managing, and displaying 3D digital collectibles |
US11099709B1 (en) | 2021-04-13 | 2021-08-24 | Dapper Labs Inc. | System and method for creating, managing, and displaying an interactive display for 3D digital collectibles |
USD991271S1 (en) | 2021-04-30 | 2023-07-04 | Dapper Labs, Inc. | Display screen with an animated graphical user interface |
US11227010B1 (en) | 2021-05-03 | 2022-01-18 | Dapper Labs Inc. | System and method for creating, managing, and displaying user owned collections of 3D digital collectibles |
US11170582B1 (en) | 2021-05-04 | 2021-11-09 | Dapper Labs Inc. | System and method for creating, managing, and displaying limited edition, serialized 3D digital collectibles with visual indicators of rarity classifications |
US11533467B2 (en) * | 2021-05-04 | 2022-12-20 | Dapper Labs, Inc. | System and method for creating, managing, and displaying 3D digital collectibles with overlay display elements and surrounding structure display elements |
US11830106B2 (en) * | 2021-11-19 | 2023-11-28 | Lemon Inc. | Procedural pattern generation for layered two-dimensional augmented reality effects |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11113028A (ja) * | 1997-09-30 | 1999-04-23 | Toshiba Corp | Three-dimensional video display device |
US7623140B1 (en) * | 1999-03-05 | 2009-11-24 | Zoran Corporation | Method and apparatus for processing video and graphics data to create a composite output image having independent and separate layers of video and graphics |
JP2005267655A (ja) * | 2002-08-29 | 2005-09-29 | Sharp Corp | Content playback device, content playback method, content playback program, recording medium storing the content playback program, and portable communication terminal |
KR100597406B1 (ko) * | 2004-06-29 | 2006-07-06 | Samsung Electronics Co., Ltd. | Set-top box capable of real-time key input while an animation is in progress on an OSD screen, and OSD data output method |
WO2008038205A2 (en) * | 2006-09-28 | 2008-04-03 | Koninklijke Philips Electronics N.V. | 3 menu display |
EP2105032A2 (en) * | 2006-10-11 | 2009-09-30 | Koninklijke Philips Electronics N.V. | Creating three dimensional graphics data |
WO2009083863A1 (en) * | 2007-12-20 | 2009-07-09 | Koninklijke Philips Electronics N.V. | Playback and overlay of 3d graphics onto 3d video |
KR101520659B1 (ko) * | 2008-02-29 | 2015-05-15 | LG Electronics Inc. | Apparatus and method for comparing images using a personal video recorder |
BRPI0922046A2 (pt) * | 2008-11-18 | 2019-09-24 | Panasonic Corp | Playback device, playback method, and program for stereoscopic playback |
US9013551B2 (en) * | 2008-12-01 | 2015-04-21 | Imax Corporation | Methods and systems for presenting three-dimensional motion pictures with content adaptive information |
WO2010092823A1 (ja) * | 2009-02-13 | 2010-08-19 | Panasonic Corporation | Display control device |
KR101639053B1 (ko) * | 2009-02-17 | 2016-07-13 | Koninklijke Philips N.V. | Combination of 3D image and graphics data |
CA2752691C (en) * | 2009-02-27 | 2017-09-05 | Laurence James Claydon | Systems, apparatus and methods for subtitling for stereoscopic content |
EP2489198A4 (en) * | 2009-10-16 | 2013-09-25 | Lg Electronics Inc | METHOD FOR DISPLAYING 3D CONTENTS AND DEVICE FOR PROCESSING A SIGNAL |
US8605136B2 (en) * | 2010-08-10 | 2013-12-10 | Sony Corporation | 2D to 3D user interface content data conversion |
- 2011
- 2011-09-30 EP EP11829634.2A patent/EP2624571A4/en not_active Withdrawn
- 2011-09-30 KR KR1020110099829A patent/KR20120034574A/ko not_active Application Discontinuation
- 2011-09-30 US US13/824,818 patent/US20130182072A1/en not_active Abandoned
- 2011-09-30 CN CN2011800474498A patent/CN103155577A/zh active Pending
- 2011-09-30 JP JP2013531503A patent/JP2013546220A/ja not_active Ceased
- 2011-09-30 WO PCT/KR2011/007285 patent/WO2012044128A2/ko active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2012044128A3 (ko) | 2012-05-31 |
US20130182072A1 (en) | 2013-07-18 |
EP2624571A2 (en) | 2013-08-07 |
EP2624571A4 (en) | 2014-06-04 |
JP2013546220A (ja) | 2013-12-26 |
CN103155577A (zh) | 2013-06-12 |
WO2012044128A2 (ko) | 2012-04-05 |
KR20120034574A (ko) | 2012-04-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2012044128A4 (ko) | Display apparatus, signal processing apparatus, and methods thereof | |
WO2014054845A1 (en) | Content processing apparatus for processing high resolution content and method thereof | |
WO2011136622A2 (en) | An apparatus of processing an image and a method of processing thereof | |
WO2011046279A1 (en) | Method for indicating a 3d contents and apparatus for processing a signal | |
WO2010151027A4 (ko) | Image display device and operation method thereof | |
WO2011059261A2 (en) | Image display apparatus and operating method thereof | |
WO2011062335A1 (en) | Method for playing contents | |
WO2011059260A2 (en) | Image display apparatus and image display method thereof | |
WO2014054847A1 (en) | Content processing apparatus for processing high resolution content and content processing method thereof | |
WO2012074328A2 (ko) | Receiving device and method for receiving a multi-view three-dimensional broadcast signal | |
WO2010147289A1 (en) | Broadcast transmitter, broadcast receiver and 3d video processing method thereof | |
WO2011059270A2 (en) | Image display apparatus and operating method thereof | |
WO2012177049A2 (en) | Method and apparatus for processing broadcast signal for 3-dimensional broadcast service | |
WO2010087621A2 (en) | Broadcast receiver and video data processing method thereof | |
WO2016089093A1 (ko) | Method and device for transmitting and receiving broadcast signals | |
WO2011021894A2 (en) | Image display apparatus and method for operating the same | |
WO2013100376A1 (en) | Apparatus and method for displaying | |
WO2011152633A2 (en) | Method and apparatus for processing and receiving digital broadcast signal for 3-dimensional subtitle | |
WO2011159128A2 (en) | Method and apparatus for providing digital broadcasting service with 3-dimensional subtitle | |
WO2011046338A2 (en) | Broadcast receiver and 3d video data processing method thereof | |
WO2011028073A2 (en) | Image display apparatus and operation method therefore | |
WO2014025239A1 (ko) | Method and apparatus for transceiving image components for 3D images | |
EP2356820A2 (en) | 3d caption display method and 3d display apparatus for implementing the same | |
WO2010123324A2 (ko) | Image display device and operation method thereof | |
WO2012002690A2 (ko) | Digital receiver and method for processing caption data in the digital receiver |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase | Ref document number: 201180047449.8; Country of ref document: CN |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 11829634; Country of ref document: EP; Kind code of ref document: A2 |
REEP | Request for entry into the european phase | Ref document number: 2011829634; Country of ref document: EP |
WWE | Wipo information: entry into national phase | Ref document number: 2011829634; Country of ref document: EP |
WWE | Wipo information: entry into national phase | Ref document number: 13824818; Country of ref document: US |
ENP | Entry into the national phase | Ref document number: 2013531503; Country of ref document: JP; Kind code of ref document: A |
NENP | Non-entry into the national phase | Ref country code: DE |