KR20130076674A - Signal processing device for processing a plurality of 3D contents, display device for displaying them, and methods thereof


Info

Publication number
KR20130076674A
Authority
KR
South Korea
Prior art keywords
content
frame
contents
frame rate
image
Prior art date
Application number
KR1020120054864A
Other languages
Korean (ko)
Inventor
추진호
김태성
최학훈
김정민
김형길
정춘식
조순제
함철희
Original Assignee
삼성전자주식회사 (Samsung Electronics Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 삼성전자주식회사 (Samsung Electronics Co., Ltd.)
Priority to US13/614,277 (published as US20130169755A1)
Priority to EP12184610.9 (published as EP2611161B1)
Priority to CN2012105899170A (published as CN103188509A)
Publication of KR20130076674A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106: Processing image signals
    • H04N 13/139: Format conversion, e.g. of frame-rate or size
    • H04N 13/161: Encoding, multiplexing or demultiplexing different image signal components
    • H04N 13/194: Transmission of image signals
    • H04N 13/30: Image reproducers

Abstract

A display device is disclosed. The display apparatus includes a plurality of receivers for receiving a plurality of contents, a storage unit, a plurality of scalers for reducing the data size of the plurality of contents, storing them in the storage unit, and reading out each content stored in the storage unit according to an output timing, a plurality of frame rate converters for converting the frame rates of the read contents, and an image output unit for combining and displaying the respective contents output from the plurality of frame rate converters. Accordingly, resource usage can be minimized.

Description

SIGNAL PROCESSING DEVICE FOR PROCESSING A PLURALITY OF 3D CONTENTS, DISPLAY DEVICE FOR DISPLAYING THEM AND METHODS THEREOF

The present invention relates to a signal processing apparatus, a display apparatus and methods thereof, and more particularly, to a signal processing apparatus for processing a plurality of contents, a display apparatus for displaying the same and methods thereof.

With the development of electronic technology, various types of electronic products are being developed and distributed. In particular, various display devices such as TVs, mobile phones, PCs, notebook PCs, and PDAs are used in most households.

As the use of display devices has increased, user needs for more diverse functions have also increased. As a result, manufacturers' efforts to meet those needs have intensified, and products with functions that were not available in the past are emerging.

Accordingly, various contents processed by the display apparatus are also provided. In particular, recently, contents having large data sizes such as high resolution content or 3D content have been provided.

In addition, in recent years, development efforts have been made toward display devices that provide a plurality of contents simultaneously so that a plurality of users can view different contents. Such a display device requires more resources, such as memory and bus bandwidth, than one that processes and displays a single content, so image processing may not be performed smoothly.

In particular, when a plurality of contents having a large data size, such as 3D contents, are displayed in combination, much more resources are required, making implementation difficult.

Accordingly, there is a need for a technology capable of effectively displaying a multi-view by processing a plurality of contents.

SUMMARY OF THE INVENTION

The present invention has been made to solve the above-described problem, and an object of the present invention is to provide a signal processing apparatus capable of processing a plurality of contents, a display apparatus for displaying the same, and methods thereof.

According to an exemplary embodiment of the present invention, a display apparatus includes a plurality of receivers for receiving a plurality of contents, a storage unit, a plurality of scalers for reducing the data size of the plurality of contents, storing them in the storage unit, and reading each content stored in the storage unit according to an output timing, a plurality of frame rate converters for converting the frame rates of the read contents, and an image output unit for combining and displaying the respective contents output from the plurality of frame rate converters.

The plurality of contents may be 3D contents each including a left eye image and a right eye image, and each of the plurality of scalers may downscale the plurality of 3D contents, reduce their frame rates, and store them in the storage unit.

Alternatively, each of the plurality of scalers may downscale the plurality of 3D contents and store them in the storage unit, and when the stored contents are read according to the output timing, reduce their frame rates and provide the frames to the plurality of frame rate converters.

Alternatively, when the 3D content is 3:2 pull-down film image content, at least one of the plurality of scalers may downscale the film image content, extract only the key frames, and store them in the storage unit, and when the key frames are read from the storage unit, the frame rate converter may convert the frame rate of each 3D content into a multi-content display rate by interpolating frames based on the read key frames.

The image output unit may multiplex each content provided by the plurality of frame rate converters in order according to a predetermined arrangement order, and upscale and display the multiplexed data according to a screen size.

Meanwhile, according to an embodiment of the present invention, a multi-content display method of a display device includes receiving a plurality of contents, reducing the data size of the plurality of contents and storing them, converting the frame rates of the stored contents, and combining and displaying the contents whose frame rates have been converted.

The plurality of contents may be 3D contents including a left eye image and a right eye image, respectively.

In this case, reducing and storing the data size of the plurality of contents may include downscaling the plurality of 3D contents, reducing the frame rate of each downscaled 3D content, and storing each 3D content having the reduced frame rate, and the converting may convert the frame rate of each 3D content into a multi-content display rate.

Alternatively, reducing and storing the data size of the plurality of contents may include, when the 3D content is 3:2 pull-down film image content, downscaling the film image content and extracting and storing only the key frames, and the converting may convert the frame rate of each 3D content by interpolating frames based on the stored key frames.

In addition, the displaying may include multiplexing the contents so as to be sequentially arranged according to a predetermined arrangement order, upscaling the multiplexed data to a screen size, and displaying the upscaled data.

Meanwhile, according to an embodiment of the present invention, a signal processing apparatus includes a plurality of scalers configured to reduce the data size of a plurality of 3D contents each including a left eye image and a right eye image, a storage unit for storing the plurality of 3D contents processed by the plurality of scalers, and a plurality of frame rate converters for converting the frame rates of the plurality of 3D contents stored in the storage unit to a multi-content display rate.

The plurality of scalers may downscale the plurality of 3D contents and store the downscaled 3D contents in the storage unit, and when the downscaled 3D contents are read from the storage unit, convert the read 3D contents into a format that can be processed by the plurality of frame rate converters.

The apparatus may further include an image processor configured to construct multi-content frame data using the plurality of 3D contents whose frame rates have been converted by the plurality of frame rate converters, and an interface unit configured to transmit the multi-content frame data to a display device.

According to an embodiment of the present disclosure, a signal processing method includes downscaling a plurality of 3D contents each including a left eye image and a right eye image, converting the frame rates of the 3D contents using a plurality of frame rate converters, constructing multi-content frame data using the plurality of 3D contents having the converted frame rates, and transmitting the multi-content frame data to a display device.

The method may further include converting the plurality of down-scaled 3D contents into a format that may be processed by the plurality of frame rate converters.

Meanwhile, according to another embodiment of the present disclosure, a multi-view display method may include receiving a plurality of different contents having different frame rates, matching the frame rates of the plurality of contents, and displaying a multi-view frame using the respective contents having the matched frame rates.

The matching of the frame rates may include storing the plurality of contents, generating image frames by processing the plurality of contents, and interpolating the image frames of the content having the relatively smaller frame rate among the plurality of contents.

The interpolating may include comparing the reception times of the plurality of contents, checking the storage ratio of the corresponding frame of the other content at the time when one image frame of one of the plurality of contents is completely stored, and generating an interpolation frame by combining the corresponding frame and the next frame of the corresponding frame according to the checked storage ratio.

The generating of the interpolation frame may include comparing the corresponding frame and the next frame to estimate the motion of an object displayed in the frames, and applying the checked ratio to the estimated motion to generate the interpolation frame.

The matching of the frame rate may include detecting a key frame of each of the plurality of contents, and integrating the detected key frames.

In the integrating of the key frames, if the numbers of key frames of the plurality of contents are different, the numbers of key frames may be matched by performing a frame repeating or skipping operation, and the corresponding key frames of the respective contents may then be integrated.
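For illustration only, the repeat-or-skip matching described above can be sketched as follows. This is a minimal sketch assuming key frames are held in simple lists; the nearest-index resampling strategy and all names are our assumptions, not details taken from the patent.

```python
def match_key_frame_counts(frames_a, frames_b):
    """Equalize two key-frame lists by repeating or skipping frames so
    that corresponding key frames can be integrated one-to-one."""
    target = max(len(frames_a), len(frames_b))

    def resample(frames, target):
        # Map each output slot to the nearest source frame: a short list
        # repeats frames, an over-long list would skip frames.
        n = len(frames)
        return [frames[min(i * n // target, n - 1)] for i in range(target)]

    return resample(frames_a, target), resample(frames_b, target)

# Example: 4 key frames vs. 6 key frames -> both become 6 frames long.
a, b = match_key_frame_counts(list("ABCD"), list("UVWXYZ"))
print(a)  # ['A', 'A', 'B', 'C', 'C', 'D']
print(b)  # ['U', 'V', 'W', 'X', 'Y', 'Z']
```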

The matching of the frame rates may further include removing motion judder by performing interpolation on the integrated key frames.

According to another embodiment of the present invention, a display apparatus includes a plurality of receivers for receiving a plurality of 3D contents, a plurality of system on chips (SoCs) each having a display processor for processing 3D content, and an output unit for outputting a plurality of content views by combining the image frames of the 3D contents processed by the plurality of SoCs.

Here, one SoC of the plurality of SoCs may include a mux for muxing data processed by a display processor mounted in the SoC and data output from another SoC.

Alternatively, the apparatus may further include a SoC equipped with a mux for muxing data output from the plurality of SoCs and a frame rate converter for converting a frame rate of the muxed data from the mux.

According to various embodiments of the present disclosure as described above, a plurality of users may view different contents on one display device, respectively.

1 is a block diagram showing a configuration of a display device according to an embodiment of the present invention;
2 is a view for explaining a method of providing different 3D contents to a plurality of users;
3 and 4 are views for explaining various examples of a method of reducing and processing the data sizes of a plurality of 3D contents;
5 is a view for explaining a process of converting the frame rate of one 3D content;
6 is a diagram illustrating an example of a method of composing multi-content frame data by combining a plurality of 3D contents;
7 and 8 are block diagrams showing the configuration of a signal processing apparatus according to an embodiment of the present invention;
9 is a flowchart illustrating a 3D multi-content display method according to an embodiment of the present invention;
10 is a flowchart illustrating a 3D multi-content display method according to another embodiment of the present invention;
11 is an exemplary diagram illustrating a system for providing a plurality of contents to a plurality of users according to another embodiment of the present invention;
12 is a block diagram of a display device used in the system of FIG. 11;
13 is a block diagram of a signal processor used in the display device of FIG. 12;
14 is an exemplary diagram illustrating the relative arrangement positions of image frames of a first content and a second content based on an output sync;
15 is a flowchart illustrating a multi-view display method according to another embodiment of the present invention;
16 and 17 are schematic views showing the configuration of a content providing system according to another embodiment of the present invention;
18 is a block diagram illustrating a configuration of a display apparatus used in the system illustrated in FIGS. 16 and 17;
19 to 22 are diagrams for explaining a method of integrating key frames of respective contents having different frame rates;
23 is a block diagram illustrating a detailed configuration of the display device of FIG. 18;
24 is a block diagram for explaining a configuration of a spectacle device used in the system shown in FIGS. 16 and 17;
25 is a flowchart for explaining a content providing method of a display apparatus according to another exemplary embodiment;
26 is a block diagram illustrating a configuration of a display apparatus according to another exemplary embodiment;
27 is a diagram illustrating a 3D multi-view mode for displaying a plurality of 3D contents;
28 is a diagram illustrating a 2D multi-view mode for displaying a plurality of 2D contents;
29 is a block diagram illustrating an example of a configuration of one SoC;
30 is a block diagram illustrating an example of a detailed configuration of a display apparatus;
31 is a block diagram illustrating a configuration of a display apparatus according to another embodiment of the present invention; and
32 is a flowchart for explaining a display method according to another exemplary embodiment.

Hereinafter, the present invention will be described in detail with reference to the accompanying drawings.

1 is a block diagram showing a configuration of a display device according to an embodiment of the present invention. Referring to FIG. 1, the display apparatus 100 includes receivers 1 and 2 (110, 120), scalers 1 and 2 (130, 140), a storage 150, frame rate converters 1 and 2 (160, 170), and an image output unit 180. The display device 100 of FIG. 1 may be implemented as various devices including a display unit, such as a TV, a mobile phone, a PDA, a notebook PC, a monitor, a tablet PC, an e-book, an electronic picture frame, a kiosk, a personal medical device, and the like.

The receivers 1 and 2 (110, 120) receive content from different sources. A source may be a broadcasting station that transmits broadcast program content over a broadcast network, a web server that transmits content files over the Internet, or one of various recording medium reproducing apparatuses provided in or connected to the display apparatus 100. A recording medium reproducing apparatus refers to a device that plays back content stored in various types of recording media such as a CD, DVD, hard disk, Blu-ray disc, memory card, or USB memory.

In an embodiment of receiving content from a broadcasting station, the receivers 1 and 2 110 and 120 may be implemented in a form including a tuner, a demodulator, an equalizer, a decoder (not shown), and the like. On the other hand, in an embodiment of receiving content from a source such as a web server, the receivers 1 and 2 110 and 120 may be implemented as a network interface card (not shown). Alternatively, in the case of receiving contents from the above-described various recording medium reproducing apparatuses, the receiving units 1 and 2 110 and 120 may be implemented as an interface unit (not shown) connected to the recording medium reproducing apparatus. As such, the receivers 1 and 2 110 and 120 may be implemented in various forms according to embodiments.

In addition, the receivers 1 and 2 (110, 120) do not necessarily receive content from the same type of source; the receiver 1 (110) and the receiver 2 (120) may receive content from different types of sources. For example, the receiver 1 (110) may be implemented in a form including a tuner, a demodulator, an equalizer, a decoder, and the like, and the receiver 2 (120) may be implemented as a network interface card.

The scalers 1 and 2 130 and 140 reduce the data size of each content received by the receivers 1 and 2 110 and 120 and store the data in the storage 150. When the output timing of the content stored in the storage unit 150 arrives, the scalers 1 and 2 130 and 140 read the content and provide the content to the frame rate converters 1 and 2 160 and 170.

The operation of reducing the data size may be performed in various ways according to the embodiment. For example, the scalers 1 and 2 130 and 140 may store the content in the storage 150 after performing down scaling to reduce the size of the content.

Alternatively, the scalers 1 and 2 130 and 140 may perform down scaling on each content and reduce the frame rate to store the content in the storage 150.

Alternatively, the scalers 1 and 2 (130, 140) perform downscaling on each content and store the content in the storage 150, and when the stored contents are read according to the output timing, reduce the frame rate of the read content and provide it to the frame rate converters 1 and 2 (160, 170).

In particular, when the content is 3:2 pull-down film image content, the scalers 1 and 2 (130, 140) may downscale the film image content, extract only the key frames, and store the extracted key frames in the storage 150.
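As a rough illustration of this key-frame extraction: in the textbook 3:2 pull-down cadence, each pair of 24 fps film frames is expanded to five 60 Hz display frames (one frame repeated three times, the next repeated twice). Assuming the cadence is aligned to the start of the sequence, a minimal sketch looks like this; the grouping logic is the standard pattern, not a detail confirmed by the patent.

```python
def extract_key_frames(frames_60hz):
    """Recover the original film frames ("key frames") from a 60 Hz
    3:2 pulled-down sequence. Assumes each aligned group of 5 display
    frames holds two distinct film frames at positions 0 and 3."""
    key_frames = []
    for start in range(0, len(frames_60hz), 5):
        group = frames_60hz[start:start + 5]
        key_frames.append(group[0])      # film frame repeated 3 times
        if len(group) >= 4:
            key_frames.append(group[3])  # film frame repeated 2 times
    return key_frames

# 10 display frames (A A A B B C C C D D) -> 4 key frames (A, B, C, D)
print(extract_key_frames(list("AAABBCCCDD")))  # ['A', 'B', 'C', 'D']
```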

In addition, the scalers 1 and 2 (130, 140) may perform downscaling of content and convert the content into a data format corresponding to the respective frame rate converters 1 and 2 (160, 170). Specifically, when the input data is in a top-to-bottom format but the frame rate converters 1 and 2 (160, 170) process frames in a side-by-side format, the scalers 1 and 2 (130, 140) separate the left eye image and the right eye image of each content and connect them in the horizontal direction to convert the content into the side-by-side format.

The data format conversion operation may be performed before storing the down-scaled content in the storage 150 or after the content is read from the storage 150.
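For illustration, this top-to-bottom to side-by-side conversion amounts to splitting the frame at mid-height and joining the two eye images horizontally. A minimal NumPy sketch follows; the array layout (height x width x channels, left eye in the top half) is our assumption, not a detail specified by the patent.

```python
import numpy as np

def top_bottom_to_side_by_side(frame):
    """Repack a top-to-bottom 3D frame as side-by-side: split out the
    left eye image (top half) and the right eye image (bottom half),
    then connect them in the horizontal direction."""
    half_height = frame.shape[0] // 2
    left_eye, right_eye = frame[:half_height], frame[half_height:]
    return np.concatenate([left_eye, right_eye], axis=1)

top_bottom = np.zeros((1080, 960, 3), dtype=np.uint8)  # packed 3D frame
side_by_side = top_bottom_to_side_by_side(top_bottom)
print(side_by_side.shape)  # (540, 1920, 3)
```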

As described above, when the scalers 1 and 2 (130, 140) reduce the data size of the content before storing it, the used capacity of the storage 150 may be reduced, and the bus bandwidth used among the storage 150, the scalers 1 and 2 (130, 140), and the frame rate converters 1 and 2 (160, 170) may also be reduced. As a result, resource usage can be minimized.

The frame rate converters 1 and 2 (160, 170) convert the frame rates of the contents provided by the scalers 1 and 2 (130, 140) to match the multi-content display rate, referring to the output rate of the display apparatus 100. Specifically, if the display apparatus 100 operates at 60 Hz, the frame rate converters 1 and 2 (160, 170) convert the frame rate of each content to 120 Hz. Meanwhile, as described above, when only the key frames of film image content are stored in the storage 150, the corresponding frame rate converters 160 and 170 interpolate frames based on the key frames read from the storage 150, thereby converting the frame rate of each content into a frame rate corresponding to the image output unit 180.
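A minimal sketch of this rate conversion follows. The resampling arithmetic is generic; the linear blend is only a placeholder for the interpolation a real frame rate converter would perform, and all names are ours.

```python
def convert_frame_rate(frames, src_hz, dst_hz, blend):
    """Resample a frame sequence from src_hz to the multi-content
    display rate dst_hz (e.g. 60 -> 120 Hz), creating in-between
    frames with the supplied blend function."""
    out = []
    for i in range(int(len(frames) * dst_hz / src_hz)):
        pos = i * src_hz / dst_hz        # position on the source timeline
        idx, frac = int(pos), pos - int(pos)
        if frac == 0 or idx + 1 >= len(frames):
            out.append(frames[idx])      # source frame reused as-is
        else:
            out.append(blend(frames[idx], frames[idx + 1], frac))
    return out

# Doubling 60 Hz to 120 Hz: every other output frame is interpolated.
frames = [0.0, 1.0, 2.0, 3.0]
print(convert_frame_rate(frames, 60, 120, lambda a, b, t: a + (b - a) * t))
# [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.0]
```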

The image output unit 180 combines and displays the contents output from the frame rate converters 1 and 2 (160, 170). Specifically, the image output unit 180 multiplexes the data so that at least one image frame of each content provided by the frame rate converters 1 and 2 (160, 170) is alternately arranged, upscales the multiplexed data to fit the screen size to construct multi-content frame data, and then displays it. Multi-content frame data refers to frame data configured so that a plurality of users can each view one of a plurality of contents. The method of constructing the multi-content frame data may be implemented in various ways according to the driving method of the display apparatus.

For example, in the case of a shutter glass type display device, the image output unit 180 constructs and displays multi-content frame data by alternately arranging the image frames of the first content and the image frames of the second content. A user wears a spectacle device that interlocks with the display timing of the image output unit 180 to watch a desired content. Specifically, the spectacle device is provided with a left eye shutter glass and a right eye shutter glass. The left eye shutter glass and the right eye shutter glass are alternately turned on and off when 3D content is watched, but when a plurality of contents are displayed, they are collectively turned on and off according to the output timing of the content synchronized with the spectacle device. Accordingly, each user can watch a content separately from the other users.

As another example, in the case of a glasses-free display device, the image output unit 180 divides the first and second contents into a plurality of lines and alternately combines the divided lines to form at least one multi-content frame. The image output unit 180 displays the multi-content frame data using a display panel (not shown) provided with a parallax barrier or a lenticular lens, so that a different content frame is recognized by each user.

Although FIG. 1 illustrates a configuration for receiving and processing two pieces of content, an embodiment of receiving and processing three or more pieces of content may be implemented. In this case, three or more receivers, scalers, and frame rate converters may be provided.

As described above, the mode for configuring and displaying the multi-content frame data may be referred to as a multi view mode (or a dual view mode). When the display apparatus 100 operates in a normal mode (or a single view mode) displaying only one 2D content or 3D content, the display apparatus 100 may activate one of the receivers 1, 2 (110, 120) to process the content. In order to reduce resource usage even in the normal mode, various data size reduction operations may be performed as described above. When the user selects the multi view mode while operating in the normal mode, the display apparatus 100 processes the data in the above-described manner by activating the remaining receiver.

Meanwhile, the above-described content may be 2D content or 3D content. 3D content refers to content that gives the user a three-dimensional sensation by using multi-view images in which the same object is expressed from different viewpoints.

In order to construct a multi-content frame using a plurality of 3D contents, the image output unit 180 multiplexes the left eye images and the right eye images included in each 3D content provided by the frame rate converters 1 and 2 (160, 170) so that they are alternately arranged according to a preset arrangement order, and upscales the multiplexed data to fit the screen size to form the multi-content frame.

Accordingly, the left eye image and the right eye image of the first content and the left eye image and the right eye image of the second content are sequentially arranged and displayed according to the preset arrangement order, and each user recognizes the images of one content.

Meanwhile, although not shown in FIG. 1, the display apparatus 100 may further include a configuration for providing the audio data included in each content to each user separately when operating in the multi-view mode. That is, the apparatus may further include a demux (not shown) for separating audio data from the content received by each receiver 110 and 120, an audio decoder (not shown) for decoding each separated audio data, a modulator (not shown) for modulating the decoded audio data into signals of different frequencies, and an output unit (not shown) for transmitting the modulated audio data to the spectacle devices. Each audio data output from the output unit is provided to the user through output means such as earphones provided in the spectacle device. Since these configurations are not directly related to the present invention, a separate illustration is omitted.

FIG. 2 is a diagram illustrating an operation of a shutter glass display apparatus that displays a multi-content frame using a plurality of 3D contents.

According to FIG. 2, the display apparatus 100 includes a signal transmitter 190. While the image output unit 180 displays the multi-content frame 10 composed of the left eye images and the right eye images constituting the plurality of 3D contents, the signal transmitter 190 transmits synchronization signals that synchronize the different 3D glasses devices with the output timing of each left eye image and right eye image.

The synchronization signal may be generated and transmitted in various forms. For example, the signal transmitter 190 may generate a plurality of Infra Red (IR) synchronization signals or RF (Radio Frequency) synchronization signals having different frequencies and provide them to each eyeglass device.

Alternatively, the signal transmitter 190 may generate a synchronization signal according to various wireless communication standards such as Bluetooth and transmit the generated synchronization signal to the eyeglasses 210 and 220. To this end, the spectacle apparatus pairs with the display apparatus 100. When pairing is completed, information about each eyeglass device, for example, a device ID, may be registered in the display apparatus 100. The signal transmitter 190 may match display timing of each content with eyeglass device information to generate and transmit one synchronization signal according to a communication standard.

In FIG. 2, the signal transmitter 190 protrudes from the outside of the display apparatus 100. However, in the embodiment of transmitting a synchronization signal according to a wireless communication standard, the signal transmitter 190 may be embedded in the display apparatus 100.

When the synchronization signal is received, each of the glasses apparatuses 210 and 220 may check the display timing corresponding to the glasses apparatus information and turn on or off the left eye shutter glass and the right eye shutter glass according to the confirmed display timing. In addition, the synchronization signal may be generated in various ways.

Each eyeglass device 210 or 220 controls the left eye glass and the right eye glass on / off individually according to the synchronization signal. Specifically, in the first glasses device 210 for viewing the main 3D content, when the left eye image ML1 of the main 3D content is displayed, the left eye glass is turned on and the right eye glass is turned off, and the right eye of the main 3D content is turned on. When the image MR1 is displayed, the right eye glass is turned on and the left eye glass is turned off. On the other hand, when the left eye image SL1 and the right eye image SR1 of the sub 3D content are displayed, the first spectacle apparatus 210 turns off both the left eye glass and the right eye glass. The second eyeglass apparatus 220 turns off both the left eye glass and the right eye glass when the left eye image ML1 and the right eye image MR1 of the main 3D content are displayed, as opposed to the first eyeglass apparatus.
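The on/off behaviour described above can be summarized as a small lookup table. This is only an illustration: the frame labels follow FIG. 2, while the data layout itself is our own.

```python
# Shutter states per displayed frame type: (left glass, right glass).
SHUTTER_SCHEDULE = {
    # frame      first glasses (main)    second glasses (sub)
    "ML": {"glasses1": (True,  False), "glasses2": (False, False)},
    "MR": {"glasses1": (False, True),  "glasses2": (False, False)},
    "SL": {"glasses1": (False, False), "glasses2": (True,  False)},
    "SR": {"glasses1": (False, False), "glasses2": (False, True)},
}

for frame_type in ("ML", "MR", "SL", "SR"):
    state = SHUTTER_SCHEDULE[frame_type]
    print(frame_type, "glasses1:", state["glasses1"],
          "glasses2:", state["glasses2"])
```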

Accordingly, a user wearing the first glasses device 210 may watch the main 3D content, and a user wearing the second glasses device 220 may watch the sub 3D content.

3 is a diagram illustrating an example of a process of reducing and storing a data size. Referring to FIG. 3, when the main 3D content 21 and the sub 3D content 22 are received, their data sizes are reduced by the scalers 1 and 2 (130, 140), respectively, and then stored in the storage 150. The stored contents 31 and 32 are read according to the output timing and displayed after the frame rate is converted. The scalers 1 and 2 (130, 140) may perform only downscaling, or may store the content in the storage 150 after performing both downscaling and frame rate reduction. In FIG. 3, the main 3D content 21 and the sub 3D content 22 are each received with the left and right eye images in a top-to-bottom format and processed as such.

4 is a diagram illustrating another example of a process of reducing and storing a data size. According to FIG. 4, when the main 3D content 21 and the sub 3D content 22 in the top-to-bottom format are received, their data sizes are reduced by the scalers 1 and 2 (130, 140), converted to a side-by-side format, and then stored in the storage 150. The stored contents 41 and 42 are read according to the output timing and provided to the frame rate converters 1 and 2 (160, 170). The frame rate converters 1 and 2 (160, 170) perform the frame rate conversion operation on the content data whose size has been reduced. Accordingly, the resources used in the frame rate conversion can be minimized.

5 is a diagram illustrating a frame rate conversion process for one 3D content. According to FIG. 5, the main content includes left eye images ML0, ML1, ML2, ... and right eye images MR0, MR1, MR2, ... arriving in each vertical synchronization signal period. When the scaler 1 (130) receives the main content, it performs downscaling and then lowers the frame rate (b). According to FIG. 5, the frame rate is reduced by half; that is, if the frame rate of the input content is 60 Hz, it is lowered to 30 Hz.

When the timing for processing and outputting the 3D content in this state arrives, the frame rate converter 1 (160) increases the frame rate to the target frame rate (c). The frame rate converter 1 (160) increases the frame rate by adding new frames ML0', MR0', ML1', MR1', ML2', MR2', ML3', MR3', and so on, generated using the downscaled data frames. According to FIG. 5, the frame rate is increased to 120 Hz; in other words, the multi-content display rate is 120 Hz.

FIG. 6 illustrates a process of constructing a multi-content frame using the main content and the sub content processed by the frame rate converters 1 and 2 (160, 170). Referring to FIG. 6, the image output unit 180 constructs a multi-content frame by combining the main content and the sub content in an arrangement pattern such as ML0, SL0, MR0, SR0. In FIG. 6, one image frame of each content is arranged at a time, but two image frames may be arranged in sequence, such as ML0, ML0, SL0, SL0, MR0, MR0, SR0, SR0.
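A sketch of this interleaving, assuming each content arrives as a list of (left eye, right eye) frame pairs; the function and the repeat parameter are illustrative only.

```python
def build_multi_content_sequence(main_frames, sub_frames, repeat=1):
    """Arrange main and sub 3D content frames in the FIG. 6 pattern
    ML, SL, MR, SR (or with each frame repeated, e.g. ML, ML, SL, SL,
    MR, MR, SR, SR when repeat=2)."""
    sequence = []
    for (ml, mr), (sl, sr) in zip(main_frames, sub_frames):
        for frame in (ml, sl, mr, sr):
            sequence.extend([frame] * repeat)
    return sequence

main = [("ML0", "MR0"), ("ML1", "MR1")]
sub = [("SL0", "SR0"), ("SL1", "SR1")]
print(build_multi_content_sequence(main, sub))
# ['ML0', 'SL0', 'MR0', 'SR0', 'ML1', 'SL1', 'MR1', 'SR1']
print(build_multi_content_sequence(main, sub, repeat=2)[:8])
# ['ML0', 'ML0', 'SL0', 'SL0', 'MR0', 'MR0', 'SR0', 'SR0']
```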

Meanwhile, the signal transmitter 190 generates and outputs a synchronization signal for synchronizing each spectacle device with the output timing of each content. FIG. 6 shows a synchronization signal, generated according to the Bluetooth standard, for sequentially turning on the left eye shutter glass of the first spectacle device 210, the left eye shutter glass of the second spectacle device 220, the right eye shutter glass of the first spectacle device 210, and the right eye shutter glass of the second spectacle device 220.

Meanwhile, the above-described embodiments may be applied to a signal processing device in addition to the display device. The signal processing device refers to a device that receives and processes content and provides the same to a display device, such as a set-top box, a recording medium reproducing device, an image processing chip, and the like.

7 is a diagram illustrating a configuration of a signal processing apparatus 300 according to an embodiment of the present invention. According to FIG. 7, the signal processing apparatus 300 includes a plurality of scalers 310-1 and 310-2, a storage 320, and a plurality of frame rate converters 330-1 and 330-2.

The scalers 310-1 and 310-2 receive a plurality of contents and reduce the data size. The content may be various contents such as 2D and 3D. Hereinafter, a description will be given based on a case where 3D content is received.

As described in the above-described embodiments, the scalers 310-1 and 310-2 may reduce the data size by performing various processes such as down scaling, frame rate down, and data format conversion. This data size reduction operation may be performed before the content is stored in the storage 320, or may be performed after the content is read from the storage 320.

The storage unit 320 stores a plurality of 3D contents processed by the plurality of scalers. The frame rate converters 330-1 and 330-2 convert the frame rate of each 3D content.

Each 3D content whose frame rate is converted is provided to a display device connected to the signal processing device 300. The display apparatus may configure and display a multi-content frame by combining 3D contents transmitted from the signal processing apparatus 300.

8 is a diagram illustrating a configuration of a signal processing apparatus 300 according to another embodiment of the present invention. According to FIG. 8, the signal processing apparatus 300 includes a plurality of scalers 310-1, 310-2, ..., 310-n, a storage 320, a plurality of frame rate converters 330-1, 330-2, ..., 330-n, an image processor 340, an interface unit 350, and a bus 50 serving as a data transmission and reception path between these components. In FIG. 8, one main bus 50 is illustrated, but a plurality of buses may be provided.

The operations of the plurality of scalers 310-1, 310-2, ..., 310-n, the storage 320, and the plurality of frame rate converters 330-1, 330-2, ..., 330-n are the same as described in the above embodiments, and thus redundant description is omitted.

The image processor 340 constructs a multi-content frame using the plurality of 3D contents whose frame rates have been converted by the plurality of frame rate converters 330-1, 330-2, ..., 330-n. In more detail, the multi-content frame may be constructed as in the method illustrated in FIG. 6.

The interface unit 350 transmits data regarding the multi content frame configured in the image processor 340 to the display device. The interface unit 350 may be connected to an external display device through an I2C interface, a serial interface, and various known wired / wireless communication interfaces to transmit data.

The signal processing apparatus 300 of FIGS. 7 and 8 may be connected to a display device to support a multi-view function.

9 is a flowchart illustrating a multi-content display method of a display apparatus according to an exemplary embodiment. According to FIG. 9, when a plurality of contents is received (S910), the data size of each content is reduced (S920) and stored (S930).

When the timing for outputting each content arrives, the stored contents are read, their frame rates are converted (S940), and the contents are combined and displayed as a multi-content frame (S950). Since the method for reducing the data size has been described in detail in the above-described exemplary embodiments, redundant description thereof will be omitted.

10 is a flowchart illustrating a multi-content display method of a display apparatus according to another exemplary embodiment. According to FIG. 10, when a plurality of contents are received (S1010), downscaling is performed (S1020) and the plurality of contents are stored (S1030). Thereafter, when a situation arises in which the corresponding content needs to be read (S1040), the data is read and at least one data processing operation among data format conversion and frame rate reduction is performed (S1050). Then, after the frame rate is converted to the target frame rate level (S1060), the contents are combined and displayed as a multi-content frame.

Although not shown in FIG. 9 and FIG. 10, it is obvious that an audio data processing step or a synchronization signal transmission step may be further included in the multi-content display method. In addition, the content processed in FIGS. 9 and 10 may be 2D content or 3D content.

In addition, a signal processing method according to an embodiment of the present invention may include downscaling a plurality of 3D contents each including a left eye image and a right eye image, converting the frame rates of the 3D contents using a plurality of frame rate converters, constructing a multi-content frame using the plurality of 3D contents having the converted frame rates, and transmitting the multi-content frame to a display device.

The method may further include converting the plurality of down-scaled 3D contents into a format that may be processed by the plurality of frame rate converters.

Since each step of the signal processing method is the same as described in the above-described various embodiments, illustration and overlapping description are omitted.

As described above, according to various embodiments of the present disclosure, resources required for signal processing and display may be reduced. Accordingly, a technology of simultaneously providing a plurality of contents, in particular, a plurality of 3D contents to a plurality of users in one display apparatus can be effectively implemented.

As described above, the display apparatus may receive a plurality of contents to provide a multi-view. The contents may be of various kinds provided from various sources, and therefore the frame rates of the contents may differ from each other. In this case, the multi-view display method may include receiving a plurality of different contents having different frame rates, matching the frame rates of the plurality of contents, and displaying a multi-view frame using the respective contents having the matched frame rates. Matching the frame rates can be accomplished in a variety of ways; that is, frames may be interpolated, repeated, or skipped to match the rates. Hereinafter, various embodiments of a configuration and method for constructing a multi-view by receiving contents having different frame rates will be described.

[Processing for Different Frame Rates]

A first embodiment for the case where the frame rates differ from each other interpolates the content having the relatively smaller frame rate.

According to the present embodiment, even when the frame rates are different from each other, the multi-view can be provided by effectively processing the content.

11 is an exemplary diagram illustrating a system for providing a plurality of contents to a plurality of users according to the present embodiment.

As shown in FIG. 11, the system includes a display device 1100 and glasses devices 1210 and 1220.

The display apparatus 1100 alternately displays a plurality of contents and transmits a synchronization signal corresponding to the display timing of each of the contents to the glasses apparatuses 1210 and 1220. In addition, the display apparatus 1100 outputs an audio signal of each content to the eyeglasses 1210 and 1220 corresponding to the plurality of contents. The display device 1100 may be implemented as various devices including a display unit such as a TV, a mobile phone, a PDA, a notebook PC, a monitor, a tablet PC, an e-book, an electronic picture frame, a kiosk, and the like.

The spectacle devices 1210 and 1220 control the opening timing of their left eye and right eye shutter glasses according to the synchronization signal received from the display device 1100. That is, according to the information included in the received synchronization signal, the spectacle devices 1210 and 1220 open the left and right eye shutter glasses during the timing at which a particular content is displayed, so that the user can view the video image of one of the plurality of contents.

According to an embodiment, the first glasses device 1210 may open the left eye and right eye shutter glasses at the times when content 1 among the alternately displayed contents 1 to 4 is displayed, according to the synchronization signal received from the display device 1100. Accordingly, a user wearing the first glasses device 1210 may view the video image of content 1 among the plurality of contents displayed on the display apparatus 1100 through the corresponding glasses device 1210.

Meanwhile, the display apparatus 1100, which alternately displays the contents 1 to 4, may output the audio signal for each of the contents 1 to 4 in correspondence with the timing at which that content is displayed. Therefore, in the above-described embodiment, the first glasses device 1210, which opens the left and right eye shutter glasses at the times when content 1 is displayed, may receive and output the audio signal for content 1 output from the display device 1100. Accordingly, a user wearing the first glasses device 1210 may watch the video image of content 1 and simultaneously listen to the audio of content 1.

On the other hand, the second glasses device 1220 may open the left and right eye shutter glasses at the times when content 3 among the alternately displayed contents is displayed, according to the synchronization signal received from the display apparatus 1100. As described above, when the display apparatus 1100 also outputs the audio signals for the contents 1 to 4, the second glasses device 1220 may receive and output the audio signal for content 3 output from the display apparatus 1100. Accordingly, a user wearing the second glasses device 1220 may be provided with the video image and audio of content 3 and view it.

Up to now, a brief description has been made of a system including a display apparatus 1100 for providing a plurality of contents and glasses apparatuses 1210 and 1220 for viewing a plurality of contents provided from the display apparatus 1100.

In the present embodiment, in order to perform multi-view display of a plurality of contents in the display apparatus 1100, the synchronization with respect to each image frame for the plurality of contents is performed. Therefore, since the configuration of the display apparatus and the spectacle apparatus which perform the multi-view mode for the plurality of contents are not directly related to the present invention, detailed description thereof will be omitted.

Hereinafter, the configuration of the display apparatus 1100 described above will be described in detail.

12 is a block diagram of a display apparatus according to an exemplary embodiment.

As shown in FIG. 12, the display apparatus includes a receiver 1110, a signal processor 1120, a storage 1130, an output unit 1140, and a controller 1150.

The receiver 1110 is configured to receive a plurality of contents, and includes a first receiver 1111 for receiving first content and a second receiver 1112 for receiving second content having a smaller frame rate than the first content. As described above, the first and second receivers 1111 and 1112 may receive contents having different frame rates. According to an embodiment, the first receiver 1111 may be implemented in a form including a tuner, a demodulator, an equalizer, a decoder, and the like, and through this configuration may receive first content transmitted on an external broadcast channel. Since each component included in the first receiver 1111 is well known, description of the operation of each component will be omitted.

The second receiver 1112 may receive the second content from a source device such as a web server or a playback device such as a DVD device through at least one of SCART, AV, HDMI, COMPONENT, and USB interfaces. Meanwhile, the second receiver 1112 may receive second content transmitted on another external broadcast channel, like the first receiver 1111. The second content is described here as having a smaller frame rate than the first content, but the present invention is not limited thereto, and the first content may instead have a smaller frame rate than the second content.

The storage 1130 stores the image frames of the first content received by the first receiver 1111 and the second content received by the second receiver 1112. The signal processor 1120 generates the image frames of the first content and the second content received by the first and second receivers 1111 and 1112 and stores them in the storage 1130. The signal processor 1120 includes a first signal processor 1121 and a second signal processor 1122.

The first signal processor 1121 generates image frames for the first content received by the first receiver 1111 and stores them in the storage 1130, and the second signal processor 1122 generates image frames for the second content received by the second receiver 1112 and stores them in the storage 1130. The first signal processor 1121 and the second signal processor 1122, which generate the image frames for the first content and the second content received by the first receiver 1111 and the second receiver 1112, may be configured as illustrated in FIG. 13.

13 is a block diagram of a signal processor according to an exemplary embodiment of the present invention.

As shown in FIG. 13, the first signal processor 1121 and the second signal processor 1122 generate image frames for the first content received by the first receiver 1111 and the second content received by the second receiver 1112, respectively. Since the configurations of the first signal processor 1121 and the second signal processor 1122 are the same, only the configuration of the first signal processor 1121 will be described in detail.

As illustrated, the first signal processor 1121 includes a first video processor 1121-1, a first audio processor 1122-1, and a first additional data processor 1123-1.

When the first content is received from the first receiver 1111, the first video processor 1121-1 detects video data included in the received first content and performs signal processing. In detail, when content is received from the first receiver 1111, the first video processor 1121-1 detects video data from the received content, and decodes the detected video data. Thereafter, the first video processor 1121-1 performs scaling up or down on the image frame for the decoded video data according to the screen size of the video output unit 1131 to be described later. When scaling is performed on the video data, the first video processor 1121-1 converts each scaled image frame according to the multi-content display rate with reference to the output rate of the display device. In detail, when the display apparatus operates at 60 Hz, the first video processor 1121-1 may convert the frame rate of each scaled image frame to 120 Hz.

Meanwhile, the first audio processor 1122-1 detects the audio data included in the received content and performs signal processing. In detail, when content is received from the first receiver 1111, the first audio processor 1122-1 demuxes the received content to separate the audio data from the content, and decodes the separated audio data. Thereafter, the first audio processor 1122-1 modulates the decoded audio data into an audio signal. In this case, the audio signal modulated by the first audio processor 1122-1 may have a different frequency channel from the audio signals modulated by the other audio processors.

Meanwhile, the first additional data processor 1123-1 determines whether additional data such as an electronic program guide (EPG) or a subtitle is included in the received content, and if so, separates the additional data from the received content. Thereafter, the first additional data processor 1123-1 may add the separated additional data to the corresponding image frame.

As such, the data of the first content and the second content signal-processed by the first signal processor 1121 and the second signal processor 1122 may be output through the output unit 1140 as a multi-view and multi-sound. Since the present embodiment concerns the multi-view display of a plurality of contents, the operation in which the output unit 1140 displays the image frames of the plurality of contents in multi-view form will be described in detail.

The output unit 1140 alternately arranges the image frames of the first content and the second content processed by the first and second signal processors 1121 and 1122 to display a multi-view. As described above, the first and second signal processors 1121 and 1122 generate the image frames of the first and second contents received by the first and second receivers 1111 and 1112 and store them in the storage 1130. Therefore, the output unit 1140 combines the image frames of the first content and the second content stored in the storage 1130 and displays the multi-view frame. Here, the multi-view frame refers to frame data configured so that a plurality of users can view the video images of a plurality of contents.

According to an embodiment, in the case of a shutter glass type display device, the output unit 1140 performs multiplexing so that at least one image frame of each of the first content and the second content output from the first and second signal processors 1121 and 1122 is alternately arranged. Thereafter, the output unit 1140 upscales the multiplexed image frames of the first content and the second content to fit the screen size, and then constructs and displays a multi-view frame in which the image frames of the first content and the second content are combined.

As such, when the multi-view frame in which the image frames for the first content and the second content are combined is displayed, each of the plurality of users may view video images of different contents through the glasses device worn by the plurality of users. Specifically, the spectacle device includes a left eye shutter glass and a right eye shutter glass. When the multi-view frame is output through the output unit 1140, the spectacle apparatus turns on / off the left and right eye shutter glasses collectively.

In this way, by turning on and off the left eye and right eye shutter glasses collectively, a user wearing the spectacle device can view a video image of the content separate from other users. However, the present invention is not limited thereto, and the display device may display a multi-view frame for the first content and the second content in a polarized glass method or another method.

The controller 1150 controls the second signal processor 1122 to interpolate the image frames of the second content according to the difference between the frame rates of the image frames of the first content and the second content stored in the storage 1130. As described above, when the second content has a smaller frame rate than the first content, the second signal processor 1122 interpolates the image frames of the second content stored in the storage 1130 according to the control command of the controller 1150. However, when the first content has a smaller frame rate than the second content, the controller 1150 may control the first signal processor 1121 to interpolate the image frames of the first content stored in the storage 1130.

The controller 1150 controls the second signal processor 1122 to interpolate the image frames of the second content by comparing the relative arrangement positions of the image frames of the first content and the second content based on the output sync. Here, the output sync refers to a synchronization signal for the image frames of the first content and the second content output from the output unit 1140. The output sync may be set to the frame rate of the first content, which has the larger frame rate, or may be set according to information input from the outside.

Accordingly, when the interpolation control command for the image frames of the second content is input from the controller 1150, the second signal processor 1122 can determine the relative arrangement positions of the image frames of the first content and the second content based on the output sync set according to the above-described condition. The relative arrangement positions of the image frames of the first content and the second content with respect to the output sync will be described with reference to FIG. 14.

14 is an exemplary diagram illustrating the relative arrangement positions of the image frames of the first content and the second content based on the output sync according to an embodiment of the present invention.

As illustrated in FIG. 14, an image frame of the first content may be set to a frame rate of 30 Hz, and an image frame of the second content may be set to a frame rate of 24 Hz. The output sync for the image frames of the first content and the second content may be set to 60 Hz.

When the output sync is set to 60 Hz, the relative arrangement position of each image frame of the first content advances in units of 0.5 frame per sync period, since 30/60 = 0.5. That is, at the first sync period (1/60 second), the relative position with respect to the image frame of the first content is 0.5 of the image frame A-0, and the relative position with respect to the image frame of the second content is 0.4 of the image frame B-0, since 24/60 = 0.4.

Meanwhile, the second signal processor 1122 may determine the relative arrangement position from the output sync by referring to the number of lines of the image frames of the first content and the second content or to the image frame information of the first content and the second content. For example, suppose the image frame of the second content has 1125 input lines in total, and the 112th line is being stored in the storage 1130 at the moment the output sync occurs. In this case, the second signal processor 1122 divides the number of lines of the image frame of the second content currently stored in the storage 1130 by the total number of input lines, and the resulting value (112/1125, about 0.1) is the relative arrangement position of the image frame of the second content at the time the output sync occurred.
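Both ways of obtaining the relative position reduce to simple ratios, as the following sketch shows with the numbers used above (the function names are ours):

```python
def position_from_lines(lines_stored, total_lines):
    """Fraction of a frame already written to storage when the output
    sync occurs, e.g. 112 of 1125 lines -> about 0.1."""
    return lines_stored / total_lines

def position_from_rate(sync_index, content_hz, sync_hz=60):
    """Position on a content's frame timeline after a given number of
    output sync periods: 30 Hz content advances 0.5 frame per 60 Hz
    sync period, 24 Hz content advances 0.4 frame."""
    return sync_index * content_hz / sync_hz

print(round(position_from_lines(112, 1125), 2))  # 0.1
print(position_from_rate(1, 30))                 # 0.5 (within frame A-0)
print(position_from_rate(1, 24))                 # 0.4 (within frame B-0)
```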

Meanwhile, like the second signal processor 1122, when the output sync occurs, the first signal processor 1121 also relates the input lines of the image frame of the first content stored in the storage unit 1130 at that time point to the total input lines of that image frame. Accordingly, from the resulting value, the relative arrangement position of the image frame of the first content at the time point at which the output sync occurs can be determined.

When the relative arrangement positions between the image frames of the first content and the second content are obtained in this way on the basis of the output sync set according to the preset condition, the controller 1150 controls the second signal processor 1122 to interpolate an image frame of the second content by comparing the obtained relative arrangement positions of the image frames of the first content and the second content.

According to the control command of the controller 1150, the second signal processing unit 1122 performs interpolation to generate an image frame of the second content at the point corresponding to the relative arrangement position of the first content, with reference to the preceding and following image frames.

However, the present invention is not limited thereto, and the controller 1150 may control the second signal processor 1122 to interpolate an image frame of the second content according to the reception time points of the first content and the second content. In detail, when the first content and the second content are received, the controller 1150 compares the reception time points of the received first content and second content, and checks the storage ratio of the corresponding frame of the second content at the time point at which one image frame of the first content has been completely stored in the storage unit 1130. Thereafter, the controller 1150 controls the second signal processor 1122 to generate an interpolation frame by combining the corresponding frame of the second content and the frame following it according to the identified storage ratio.

According to the control command, the second signal processor 1122 may compare the corresponding frame of the second content at the time point at which one image frame of the first content has been completely stored in the storage unit 1130 with the frame following it, estimate the motion of an object displayed in the frames, and generate an interpolation frame by applying the reception ratio to the estimated motion.

For example, as described with reference to FIG. 14, when the image frame A-0 among the image frames A-0, 1, 2, 3, 4, 5 of the first content has been completely stored in the storage unit 1130, about 80% of the image frame B-0, the corresponding frame of the second content at that time point, may have been stored. Accordingly, the second signal processor 1122 compares the image frame B-0, the corresponding frame of the second content, with the image frame B-1, the following frame, to estimate the motion of the object displayed in the frames, and generates an interpolation frame by applying the ratio (80%) at which the image frame B-0 has been received or stored to the estimated motion.
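
A simplified sketch of this ratio-weighted interpolation follows; a real frame rate converter would displace objects by the estimated motion, and the zero-motion special case of that, a weighted blend, is used here only to keep the example short.

```python
import numpy as np

def interpolate_by_ratio(b0, b1, ratio):
    """Blend frame b0 toward the next frame b1 by the storage ratio.

    A motion-compensated interpolator would shift objects by `ratio` of the
    estimated motion between b0 and b1; a weighted blend is the zero-motion
    special case.
    """
    mixed = (1.0 - ratio) * b0.astype(np.float32) + ratio * b1.astype(np.float32)
    return mixed.astype(b0.dtype)

b0 = np.zeros((1080, 1920, 3), dtype=np.uint8)     # image frame B-0
b1 = np.full((1080, 1920, 3), 50, dtype=np.uint8)  # image frame B-1
interp = interpolate_by_ratio(b0, b1, 0.8)         # 80% of B-0 stored
print(interp[0, 0])                                # [40 40 40]
```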

Up to now, each configuration of the display device according to the present invention has been described in detail. Hereinafter, a method of performing a multi-view display of a plurality of contents in the display apparatus according to the present invention will be described in detail.

FIG. 15 is a flowchart of a multi-view display method in a display device according to an embodiment of the present invention.

As shown in FIG. 15, the display apparatus receives first content and second content having a frame rate smaller than that of the first content (S1510). Thereafter, the display apparatus stores the received first content and second content in the storage unit, and processes the first content and the second content stored in the storage unit through the first signal processor and the second signal processor to generate image frames of the first content and the second content (S1520 and S1530). Thereafter, the display apparatus compares the frame rate of the first content with the frame rate of the second content through the second signal processor, and interpolates an image frame of the second content according to the comparison result (S1540).

Thereafter, the display apparatus displays a combination in which the image frames of the first content generated by the first signal processor and the image frames of the second content generated by interpolation are alternately arranged (S1550). Accordingly, the display device according to the present invention can perform a multi-view display of a plurality of contents.

In detail, the display apparatus receives the first content and the second content through the first and second receivers. Here, the first content and the second content may be transmitted over an external broadcast channel, or provided from a source device such as a web server or a reproduction device such as a DVD player. One of the first content and the second content may have a smaller frame rate than the other. In the present invention, the description proceeds under the assumption that the frame rate of the second content is smaller than the frame rate of the first content.

When the first content and the second content are received through the first and second receivers, the display apparatus stores the received first content and second content in the storage unit. When the first content and the second content are stored in the storage unit, the display apparatus generates image frames of the first content and the second content stored in the storage unit through the first signal processor and the second signal processor. The operation of generating image frames of the first content and the second content by the first signal processor and the second signal processor has been described in detail with reference to FIG. 13, and thus a detailed description thereof will be omitted.

When the image frames of the first content and the second content have been generated through the first signal processor and the second signal processor, the display device stores the generated image frames of the first content and the second content in the storage unit. Subsequently, the display apparatus compares the frame rate of the image frames of the first content with the frame rate of the image frames of the second content stored in the storage unit, and generates interpolated image frames of the second content.

In detail, the display apparatus may interpolate an image frame of the second content by comparing the relative arrangement positions between the image frames of the first content and the second content on the basis of the output sync. Here, the output sync refers to a synchronization signal for the image frames in which the image frames of the first content and the second content are alternately displayed. The output sync may be set to the frame rate of the first content, which has a larger frame rate than the second content, or may be set according to information input from the outside.

Accordingly, the display device can determine the relative arrangement positions between the image frames of the first content and the second content on the basis of the output sync set according to the condition. As described with reference to FIG. 14, the image frames of the first content may have a frame rate of 30 Hz, the image frames of the second content may have a frame rate of 24 Hz, and the output sync for the image frames of the first content and the second content may be set to 60 Hz.

When the output sync is set to 60 Hz in this way, the relative arrangement position of each image frame of the first content advances in units of 0.5 frames per sync period. That is, at the first sync period of 1/60 second, the relative position with respect to the image frames of the first content is 0.5 of the image frame A-0, and the relative position with respect to the image frames of the second content is 0.4 of the image frame B-0.

Meanwhile, the display apparatus may determine the relative arrangement position at the output sync by referring to the number of lines of the image frames of the first content and the second content, or to the image frame information of the first content and the second content. For example, an image frame of the second content may have 1125 input lines in total, and the 112th line of that image frame may currently be stored in the storage unit. When an output sync occurs at the time point at which the 112th of the 1125 input lines of the image frame of the second content has been stored in the storage unit, the display apparatus relates the lines currently stored in the storage unit to the total input lines of the image frame of the second content and calculates the resulting value 3.1. From this resulting value, the relative arrangement position of the image frame of the second content at the time point at which the output sync occurs can be determined.

The display device can likewise determine the relative arrangement position of the image frames of the first content at the time point at which the output sync occurs, through the method described above. When the relative arrangement positions between the image frames of the first content and the second content are obtained in this way on the basis of the output sync set according to the preset condition, the display apparatus may interpolate the image frames of the second content by comparing the obtained relative arrangement positions of the image frames of the first content and the second content.

The display device performs interpolation to generate an image frame of the second content at the point corresponding to the relative arrangement position of the first content, with reference to the preceding and following image frames. However, the present invention is not limited thereto, and the display apparatus may interpolate an image frame of the second content according to the reception time points of the first content and the second content. In detail, when the first content and the second content are received, the display apparatus compares the reception time points of the received first content and second content, and checks the storage ratio of the corresponding frame of the second content at the time point at which one image frame of the first content has been completely stored in the storage unit. Thereafter, the display apparatus generates an interpolation frame by combining the corresponding frame of the second content and the frame following it according to the identified storage ratio.

The display device compares the corresponding frame of the second content at the time point at which one image frame of the first content has been completely stored in the storage unit with the frame following it, estimates the motion of an object displayed in the frames, and generates an interpolation frame by applying the reception ratio to the estimated motion.

For example, as described with reference to FIG. 14, when the image frame A-0 among the image frames A-0, 1, 2, 3, 4, 5 of the first content has been completely stored in the storage unit, about 80% of the image frame B-0, the corresponding frame of the second content at that time point, may have been stored. Therefore, the display apparatus compares the image frame B-0, the corresponding frame of the second content, with the image frame B-1, the following frame, to estimate the motion of the object displayed in the frames, and generates an interpolation frame by applying the ratio (80%) at which the image frame B-0 has been received or stored to the estimated motion.

As described above, an effective multi-view display using a plurality of contents may be implemented by interpolating the frame rate.

Unlike the above-described embodiment, the frame rates may also be matched by repeating or skipping frames. That is, according to another embodiment applicable when the frame rates are different, the frame rates may be matched by integrating key frames while repeating or skipping frames. Hereinafter, embodiments in which frame rates are matched by integration are described.

FIGS. 16 and 17 are schematic views for explaining the configuration and operation of a content providing system according to the present embodiment. As shown in FIGS. 16 and 17, the content providing system includes a display device 2100 and first and second eyeglass devices 2200-1 and 2200-2.

According to FIG. 16, the display apparatus 2100 alternately displays a plurality of 2D contents (contents A and B), generates synchronization signals corresponding to the respective contents, and transmits them to the first and second eyeglass devices 2200-1 and 2200-2. Although two glasses apparatuses are illustrated in FIG. 16, the number of glasses apparatuses may vary. That is, three glasses devices may be used in a triple view mode that provides three contents, and four glasses devices may be used in a quadruple view mode that provides four contents. FIG. 16 illustrates a dual view mode in which two contents A and B are provided.

The first eyeglass apparatus 2200-1 may operate so as to open both the left shutter glass and the right shutter glass when one content A is displayed according to the synchronization signal, and to close both the left shutter glass and the right shutter glass when the other content B is displayed. Accordingly, viewer 1 wearing the first eyeglass device 2200-1 can view only the content A, with which the first eyeglass device 2200-1 is synchronized, among the plurality of alternately displayed contents A and B. Similarly, viewer 2 wearing the second eyeglass device 2200-2 can view only the content B.

FIG. 17 is a diagram for describing a method of providing a plurality of 3D contents according to an embodiment of the present invention.

As shown in the drawing, when the plurality of contents (contents A and B) are 3D contents, the display apparatus 2100 alternately displays the plurality of 3D contents (contents A and B), and also alternately displays the left eye image and the right eye image of each 3D content.

For example, the left eye image and the right eye image AL and AR of the 3D content A, and the left eye image and the right eye image BL and BR of the 3D content B, may be displayed alternately. In this case, the first eyeglass device 2200-1 opens its left eye and right eye glasses at the display time points of the left eye image and the right eye image AL and AR of the 3D content A, and the second eyeglass device 2200-2 opens its left eye and right eye glasses at the display time points of the left eye image and the right eye image BL and BR of the 3D content B.

Accordingly, viewer 1 wearing the first glasses device 2200-1 views only the 3D content A, and viewer 2 wearing the second glasses device 2200-2 views only the 3D content B.

However, this description assumes the shutter glass method. In the case of the polarization method, it will be apparent to those skilled in the art that the polarization directions of the plurality of content images and the polarization directions of the first and second eyeglass devices may be implemented so as to support the multi-view mode.

FIG. 18 is a block diagram illustrating a configuration of a display apparatus according to an exemplary embodiment. According to FIG. 18, the display apparatus 2100 may include a plurality of receivers 2110-1, 2110-2, ..., 2110-n, a plurality of detectors 2120-1, 2120-2, ..., 2120-n, an integration unit 2130, a signal processor 2140, and a display unit 2150.

The plurality of receivers 2110-1, 2110-2, ..., 2110-n respectively receive a plurality of contents. Specifically, each receiving unit 2110-1, 2110-2, ..., 2110-n receives content from a broadcasting station that transmits broadcast program content over a broadcast network, or from a web server that transmits content files over the Internet. In addition, content may be received from various recording medium reproducing apparatuses provided in the display apparatus 2100 or connected to the display apparatus 2100. A recording medium reproducing apparatus refers to a device that plays back content stored in various types of recording media such as a CD, DVD, hard disk, Blu-ray disk, memory card, or USB memory.

In an embodiment receiving content from a broadcasting station, the plurality of receivers 2110-1, 2110-2, ..., 2110-n may be implemented in a form including components such as a tuner (not shown), a demodulator (not shown), and an equalizer (not shown). In an embodiment receiving content from a source such as a web server, the plurality of receivers 2110-1, 2110-2, ..., 2110-n may be implemented as network interface cards (not shown). Alternatively, in an embodiment receiving content from the various recording medium reproducing apparatuses described above, the plurality of receivers 2110-1, 2110-2, ..., 2110-n may be implemented as interfaces (not shown) connected to the recording medium reproducing apparatuses. As such, the plurality of receivers 2110-1, 2110-2, ..., 2110-n may be implemented in various forms according to embodiments.

In addition, the plurality of receivers 2110-1, 2110-2, ..., 2110-n need not receive content from sources of the same type; they may receive content from sources of different types. For example, receiver 1 2110-1 may be implemented in a form including a tuner, a demodulator, an equalizer, and the like, and receiver 2 2110-2 may be implemented as a network interface card.

Meanwhile, the plurality of receivers 2110-1, 2110-2, ..., 2110-n may each receive contents having different frame rates. Specifically, each receiver 2110-1, 2110-2, ..., 2110-n may receive content generated at 24 frames per second or 30 frames per second.

In addition, the content received by the plurality of receivers 2110-1, 2110-2, ..., 2110-n may be 2D content or 3D content. 3D content refers to content that allows a user to perceive depth by using multi-view images in which the same object is expressed from different viewpoints.

3D content can come in a variety of formats; in particular, it may follow one of the general top-bottom, side-by-side, horizontal interleave, vertical interleave, checker board, and sequential frame methods.

The plurality of detection units 2120-1, 2120-2, ..., 2120-n detect key frames of the plurality of contents, respectively. The plurality of detection units 2120-1, 2120-2, ..., 2120-n may detect the key frames constituting the input content in various ways.

For example, when the image frames constituting content are input at a frame rate of 24 frames per second or 30 frames per second, each detector 2120-1, 2120-2, ..., 2120-n may detect every frame as a key frame.

On the other hand, when the image frames constituting content are input at a frame rate of 60 frames per second, each detector 2120-1, 2120-2, ..., 2120-n may detect the key frames by extracting the pull-down method of the input frames. For example, if the current frame is repeated three times and the next frame is repeated two times, the detection units 2120-1, 2120-2, ..., 2120-n determine that the input content has been converted by the 3:2 pull-down method for display on the display device 2100, and detect one frame among the three repeated frames and one frame among the two repeated frames as key frames.
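
The following sketch illustrates the idea of collapsing runs of repeated frames back to key frames; a real detector would compare frames with a difference metric (e.g., a sum of absolute differences under a threshold) rather than exact equality, and the function name is illustrative.

```python
# Sketch: recovering key frames from pulled-down input by keeping one frame
# from every run of identical consecutive frames.

def detect_key_frames(frames):
    keys = []
    for frame in frames:
        if not keys or frame != keys[-1]:
            keys.append(frame)
    return keys

# A 24 fps source A,B,C,D becomes A,A,A,B,B,C,C,C,D,D after 3:2 pull-down.
pulled_down = ["A", "A", "A", "B", "B", "C", "C", "C", "D", "D"]
print(detect_key_frames(pulled_down))  # ['A', 'B', 'C', 'D']
```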

The integration unit 2130 integrates the key frames detected by the plurality of detection units 2120-1, 2120-2, ..., 2120-n. Specifically, when the numbers of key frames of the plurality of contents are different, the integrator 2130 performs a frame repeat or frame skip operation to match the numbers of key frames, and can integrate the key frames of the contents that correspond to each other. In this case, the integrator 2130 may integrate the key frames of the plurality of contents into a top-to-bottom format, a side-by-side format, a checker board format, or an interlaced format. This will be described in more detail with reference to FIGS. 19 to 22.

FIGS. 19 to 22 are diagrams for describing a method of integrating key frames of contents having different frame rates, according to an exemplary embodiment. In each of the figures, content A has a frame rate of 24 frames per second with a 3:2 pull-down method, and content B has a frame rate of 30 frames per second with a 2:2 pull-down method.

As shown in FIG. 19, when the key frames (Aa, Ab, Ac, ...) detected in the content A and the key frames (Ba, Bb, Bc, ...) detected in the content B are input, the integration unit 2130 skips some of the key frames constituting the content having the higher frame rate to equalize the numbers of key frames of the plurality of contents. Here, skipping a key frame may be interpreted as removing the frame.

For example, as shown in FIG. 20, the integrator 2130 skips the third key frame Bc, the seventh key frame Bg, and so on of the content B having the 2:2 pull-down method, so that the number of key frames of the content B becomes equal to the number of key frames of the content A.

Here, the skipped key frames may be key frames that do not coincide with each other in time according to the pull-down methods of the contents. That is, as shown in FIG. 20, the first key frame Aa, the fourth key frame Ad, the fifth key frame Ae, etc. of the content A having the 3:2 pull-down method correspond in time to the first key frame Ba, the fifth key frame Be, the sixth key frame Bf, etc. of the content B having the 2:2 pull-down method. Accordingly, the integration unit 2130 may skip at least one key frame among the key frames that do not coincide in time, excluding these, and equalize the numbers of key frames of the plurality of contents.
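
A minimal sketch of the skip operation follows, keeping for each key frame of the lower-rate content A the temporally nearest key frame of the higher-rate content B; the timestamps and helper name are illustrative, not the disclosed selection rule.

```python
# Sketch: equalizing key-frame counts by skipping frames of the higher-rate
# content, keeping the nearest-in-time frame for each lower-rate key frame.

def skip_to_match(low_ts, high_ts):
    kept = []
    for t in low_ts:
        idx = min(range(len(high_ts)), key=lambda i: abs(high_ts[i] - t))
        kept.append(idx)
    return kept

a_times = [0.0, 1 / 24, 2 / 24, 3 / 24]          # 24 fps key frames (content A)
b_times = [0.0, 1 / 30, 2 / 30, 3 / 30, 4 / 30]  # 30 fps key frames (content B)
kept = skip_to_match(a_times, b_times)
skipped = [i for i in range(len(b_times)) if i not in kept]
print(kept, skipped)  # four B key frames kept, one skipped, counts now match
```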

Meanwhile, the integrator 2130 may rearrange the key frames in order to integrate the key frames of the contents that correspond to each other. For example, as shown in FIG. 21, the integrator 2130 shifts the third key frame Ac of the content A to coincide in time with the fourth key frame Bd of the content B, and shifts the second and seventh key frames Bb and Bg of the content B to coincide with the second and sixth key frames Ab and Af of the content A, respectively. As such, the integrator 2130 may rearrange the key frames whose numbers have been equalized by skipping, so that the key frames of the respective contents coincide in time.

Meanwhile, in the above-described embodiment, the integration unit 2130 matches the numbers of key frames by skipping key frames, but it may also match the numbers of key frames by repeating key frames. That is, the integrator 2130 repeats some of the key frames constituting the content having the lower frame rate to equalize the numbers of key frames of the plurality of contents.

For example, in FIG. 20, the integrator 2130 may generate a key frame of the content A corresponding to the third frame Bc of the content B, a key frame of the content A corresponding to the seventh frame Bg of the content B, and so on, to equalize the numbers of key frames of the plurality of contents.

Here, the integrator 2130 may generate the key frames of the content A by copying key frames that are adjacent in time to the third frame Bc of the content B, the seventh frame Bg of the content B, and so on. That is, the integration unit 2130 generates the key frame of the content A corresponding to the third frame Bc of the content B by copying the second key frame Ab of the content A, and generates the key frame of the content A corresponding to the seventh frame Bg of the content B by copying the sixth key frame Af of the content A.
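
The repeat alternative can be sketched as the dual of skipping: for each key frame of the higher-rate content, the temporally nearest key frame of the lower-rate content is emitted, so some lower-rate frames appear twice. Names and timestamps are illustrative.

```python
# Sketch: repeating (copying) the nearest-in-time key frame of the lower-rate
# content A so its count matches the higher-rate content B.

def repeat_to_match(low_ts, low_frames, high_ts):
    out = []
    for t in high_ts:
        idx = min(range(len(low_ts)), key=lambda i: abs(low_ts[i] - t))
        out.append(low_frames[idx])
    return out

a_frames = ["Aa", "Ab", "Ac", "Ad"]
a_times = [0.0, 1 / 24, 2 / 24, 3 / 24]
b_times = [0.0, 1 / 30, 2 / 30, 3 / 30, 4 / 30]
print(repeat_to_match(a_times, a_frames, b_times))  # one A key frame repeats
```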

The integrator 2130 integrates key frames aligned in time in various ways.

For example, the integrator 2130 may integrate the key frames of the content A and the content B in the top-to-bottom format, as shown in ① and ③ of FIG. 22. In detail, the top-to-bottom format is a format in which a key frame of one content is positioned at the top and a key frame of the other content is positioned at the bottom; each key frame may be 1/2 sub-sampled in the vertical direction and then placed at the top and bottom, respectively.

On the other hand, the integration unit 2130 may integrate the key frames of the content A and the content B in the side-by-side format, as shown in ② and ④ of FIG. 22. In detail, the side-by-side format is a format in which a key frame of one content is placed on the left side and a key frame of the other content is placed on the right side; each key frame may be 1/2 sub-sampled in the horizontal direction and then positioned on the left and right sides, respectively.

On the other hand, the integration unit 2130 may integrate the key frames of the content A and the content B in the checker board format, as shown in ⑤ of FIG. 22. Specifically, the checker board format is a format in which a key frame of one content and a key frame of the other content are each sub-sampled by 1/2 in the vertical and horizontal directions, and the pixels of the sampled key frames are positioned alternately.

Meanwhile, in addition to the above-described formats, an interlaced format, in which a key frame of one content and a key frame of the other content are each 1/2 sub-sampled in the vertical direction and the pixels of the key frames are positioned alternately line by line, can also be used to integrate the key frames. As such, the integrator 2130 may integrate the key frames of the plurality of contents according to various methods.
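
The spatial-multiplexing formats named above can be sketched as follows for two equally sized key frames; sub-sampling is plain decimation here, whereas a practical integrator would low-pass filter before decimating, and the function names are illustrative.

```python
import numpy as np

def top_bottom(a, b):
    return np.vstack([a[::2, :], b[::2, :]])    # each 1/2 sampled vertically

def side_by_side(a, b):
    return np.hstack([a[:, ::2], b[:, ::2]])    # each 1/2 sampled horizontally

def checker_board(a, b):
    out = a.copy()
    out[0::2, 1::2] = b[0::2, 1::2]             # pixels of a and b alternate
    out[1::2, 0::2] = b[1::2, 0::2]
    return out

a = np.zeros((1080, 1920), dtype=np.uint8)      # key frame of content A
b = np.full((1080, 1920), 255, dtype=np.uint8)  # key frame of content B
for fmt in (top_bottom, side_by_side, checker_board):
    print(fmt.__name__, fmt(a, b).shape)        # all stay 1080 x 1920
```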

Meanwhile, when key frames of 3D content are received, the integrator 2130 may generate the left eye image frames and the right eye image frames constituting the 3D content according to the format method, and then perform the operation of integrating the key frames of the plurality of contents.

For example, when the format of the 3D image follows one of the top-bottom method, the side-by-side method, the horizontal interleave method, the vertical interleave method, and the checker board method, the integrator 2130 extracts the left eye image portion and the right eye image portion from each image frame, and scales or interpolates them to generate the left eye image frame and the right eye image frame, respectively.

In addition, when the format of the 3D image is the general frame sequence method, the integrator 2130 extracts the left eye image frame and the right eye image frame from each frame.

The integrator 2130 may perform a skip operation on the left eye image frames and the right eye image frames constituting each of the plurality of 3D contents to match the numbers of frames of the 3D contents, and integrate them to generate integrated key frames.

The signal processor 2140 processes the key frames integrated by the integrator 2130. That is, the signal processor 2140 performs motion judder cancellation by interpolating the key frames integrated by the integrator 2130. In detail, the signal processor 2140 performs frame rate control (FRC) to convert the frame rate of the integrated key frames into a frame rate displayable on the display device 2100. For example, in the case of the NTSC (National Television System Committee) method, the frame rate displayable on the display apparatus 2100 may be 60 frames per second.

In this case, the signal processor 2140 generates an interpolation frame by estimating the motion of an object included in the current frame and the next frame among the integrated key frames, and inserts the generated interpolation frame between the current frame and the next frame, thereby converting the frame rate of the integrated key frames into a frame rate displayable on the display device 2100. Since any known method may be used to estimate motion and generate an interpolation frame, a detailed description thereof is omitted.
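
A short sketch of the frame rate conversion loop follows (24 to 60 fps as an example); `interpolate` is a hypothetical stand-in for any motion-compensated interpolator and here merely records its inputs.

```python
# Sketch: FRC by inserting interpolated frames between consecutive integrated
# key frames at fractional positions.

def interpolate(a, b, t):
    return ("interp", a, b, round(t, 2))   # placeholder interpolation result

def convert_frame_rate(keys, in_hz=24, out_hz=60):
    out = []
    for n in range((len(keys) - 1) * out_hz // in_hz):
        pos = n * in_hz / out_hz           # position in input-frame units
        i, t = int(pos), pos - int(pos)
        out.append(keys[i] if t == 0 else interpolate(keys[i], keys[i + 1], t))
    return out

converted = convert_frame_rate(list(range(5)))  # 5 key frames in
print(len(converted))                           # 10 output frames over 4 intervals
```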

In addition, the signal processor 2140 may separate the frames whose frame rate has been converted for each content, and perform up- or down-scaling on each frame according to the screen size of the display unit 2150 using a scaler (not shown).

The display unit 2150 displays multi-view frames using the data output from the signal processor 2140. In detail, the display unit 2150 may display the multi-view frames by multiplexing the image frames of the contents provided by the signal processor 2140 so that they are alternately arranged.

For example, in the case of a shutter glass display device, the display unit 2150 alternately arranges at least one image frame of the first content, an image frame of the second content, ..., and an image frame of the n-th content, and displays them.

When the frame rate displayable by the display apparatus 2100 is 60 frames per second, the frame rate of each of the left eye images and the right eye images constituting the 3D contents is converted to 60 frames per second according to the NTSC method. The display unit 2150 may display the alternately arranged left eye image frames and right eye image frames of the 3D contents at a driving frequency of n × 60 Hz. A user can wear a glasses device (not shown) that operates in association with the timing at which content is displayed on the display unit 2150, and watch the desired content.

Specifically, the eyeglass device is provided with a left eye shutter glass and a right eye shutter glass. The left eye shutter glass and the right eye shutter glass are alternately opened and closed when watching 3D content; however, as described above, when the image frames of the contents are alternately arranged and displayed, both are opened and closed together according to the output timing of the content with which the eyeglass device is synchronized. Accordingly, a user can watch content separately from other users.

As described above, a mode in which the image frames of the contents are alternately arranged and displayed may be referred to as a multi-view mode (or a dual view mode). When the display apparatus 2100 operates in a normal mode (or a single view mode) displaying only one 2D content or 3D content, only one of the plurality of receivers 2110-1, 2110-2, ..., 2110-n may be activated to process the content. When the user selects the multi-view mode while the display apparatus operates in the normal mode, the display apparatus 2100 may activate the remaining receivers to process data in the above-described manner.

On the other hand, when a plurality of 3D contents are used, the display unit 2150 may multiplex the left eye images and the right eye images included in the 3D contents provided by the signal processor 2140 in a predetermined arrangement form, and arrange them alternately with the image frames of the other contents.

In detail, when the display apparatus 2100 operates at 60 Hz, the display unit 2150 may sequentially arrange the left eye image and the right eye image of the first content, the left eye image and the right eye image of the second content, ..., and the left eye image and the right eye image of the n-th content, and display them at a driving frequency of 2 × n × 60 Hz. The user recognizes the left eye image and the right eye image of one 3D content through the glasses device.

Meanwhile, although not shown in FIG. 18, the display apparatus 2100 further includes components for providing the audio data included in each content differently for each user when operating in the multi-view mode. That is, it may include a demultiplexer (not shown) for separating video data and audio data from the content received at each receiving unit 2110-1, 2110-2, ..., 2110-n, an audio decoder (not shown) for decoding the separated audio data, a modulator (not shown) for modulating the decoded audio data into different frequency signals, an output unit (not shown) for transmitting the modulated audio data to the eyeglass devices, and the like. The audio data output from the output unit is provided to each user through output means such as earphones provided in the eyeglass device. Since these components are not directly related to the present invention, a separate illustration is omitted.

In some cases, if the content includes additional information such as an electronic program guide (EPG) and subtitles, the demultiplexer may further separate the additional data from the content and transmit it to the controller 2160, which will be described later. In addition, the display apparatus 2100 may add subtitles or the like, processed so as to be displayable, to the corresponding image frames through an additional data processor (not shown).

Meanwhile, in the normal mode (in particular, when displaying 3D content), when the 3D content is received through the activated receiver 1 2110-1 among the plurality of receivers 2110-1, 2110-2, ..., 2110-n, the signal processor 2140 performs signal processing on the left eye images and the right eye images constituting the 3D content.

In detail, the display unit 2150 alternately displays the left eye image frames and the right eye image frames of the 3D content processed by the signal processor 2140, in the order of left eye image frame -> right eye image frame -> left eye image frame -> right eye image frame -> .... According to the NTSC method, when the frame rate displayable by the display apparatus 2100 is 60 frames per second, the signal processor 2140 converts each of the left eye images and the right eye images constituting the 3D content into 60 frames per second, and the display unit 2150 may display the alternately arranged left eye image frames and right eye image frames of the 3D content at a driving frequency of 120 Hz.

FIG. 23 is a block diagram illustrating a detailed configuration of the display apparatus. According to FIG. 23, the display apparatus 2100 includes a plurality of receivers 2110-1, 2110-2, ..., 2110-n, a plurality of detectors 2120-1, 2120-2, ..., 2120-n, an integration unit 2130, a signal processor 2140, a display unit 2150, a controller 2160, a synchronization signal generator 2170, and an interface unit 2180. In FIG. 23, components having the same reference numerals as in FIG. 18 perform the same functions, and thus redundant descriptions are omitted.

The controller 2160 controls the overall operation of the display apparatus 2100. Specifically, the controller 2160 may control the plurality of receivers 2110-1, 2110-2, ..., 2110-n, the plurality of detectors 2120-1, 2120-2, ..., 2120-n, the integration unit 2130, the signal processor 2140, and the display unit 2150 so that each component performs its corresponding function. Since these components were described above with reference to FIG. 18, redundant descriptions are omitted.

The controller 2160 may control the synchronization signal generator 2170 and the interface unit 2180 so that the glasses devices are synchronized with the display timings of the contents displayed on the display unit 2150.

The synchronization signal generator 2170 generates synchronization signals for synchronizing the glasses devices corresponding to the respective contents with the display timings of the contents. In detail, in the multi-view mode, the synchronization signal generator 2170 generates synchronization signals for synchronizing the glasses devices with the display timings of the image frames of the plurality of contents, and in the normal mode, it generates a synchronization signal for synchronizing the glasses device with the display timings of the left eye image frames and the right eye image frames of the 3D content.

The interface unit 2180 transmits the synchronization signals to the eyeglass devices. The interface unit 2180 communicates with the eyeglass devices according to various wireless methods and transmits the synchronization signals to them.

For example, the interface unit 2180 may be provided with a Bluetooth communication module to communicate with the eyeglass devices, and may generate a synchronization signal as a transport packet according to the Bluetooth communication standard and transmit it to the eyeglass devices.

The transport packet includes time information for opening and closing the shutter glasses of the eyeglass device in synchronization with the display timing of the content. Specifically, the time information includes a left shutter open offset for opening the left eye shutter glass of the eyeglass device, a left shutter close offset for closing the left eye shutter glass, a right shutter open offset for opening the right eye shutter glass, and a right shutter close offset for closing the right eye shutter glass.

An offset is delay information from a reference time point set for the content to the time point at which the shutter glass is opened or closed. That is, the eyeglass device opens or closes the left eye shutter glass and the right eye shutter glass when the offset time has elapsed from the reference time point.

For example, the reference time point may be the time point at which a vertical synchronization signal (i.e., a frame sync) is generated for an image frame, and information about the reference time point may be included in the transport packet. In addition, the transport packet may include information about a clock signal used in the display device 2100. Therefore, upon receiving the transport packet, the eyeglass device synchronizes its clock signal with the clock signal of the display device 2100, determines, using the clock signal, whether the offset time has elapsed from the time point at which the vertical synchronization signal was generated, and opens or closes the shutter glasses accordingly.
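
The timing fields described above can be modeled as a plain record, as in the sketch below; the field names, units, and example values are illustrative and do not represent the actual Bluetooth packet layout.

```python
from dataclasses import dataclass

@dataclass
class ShutterSyncPacket:
    frame_sync_time_us: int     # reference point: vertical sync generation time
    left_open_offset_us: int    # delay until the left eye shutter opens
    left_close_offset_us: int   # delay until the left eye shutter closes
    right_open_offset_us: int   # delay until the right eye shutter opens
    right_close_offset_us: int  # delay until the right eye shutter closes

def shutter_events(pkt):
    """Absolute event times the glasses derive after clock synchronization."""
    base = pkt.frame_sync_time_us
    return {
        "left_open": base + pkt.left_open_offset_us,
        "left_close": base + pkt.left_close_offset_us,
        "right_open": base + pkt.right_open_offset_us,
        "right_close": base + pkt.right_close_offset_us,
    }

pkt = ShutterSyncPacket(1_000_000, 100, 8_300, 100, 8_300)  # example values
print(shutter_events(pkt))
```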

In addition, the transport packet may further include information on the period of the frame sync and, when the period of the frame sync has a fractional part, information indicating that fractional part.

Meanwhile, the interface unit 2180 transmits and receives a Bluetooth device address, a PIN code, and the like with the eyeglass devices, and performs pairing according to the Bluetooth communication method. When pairing is completed, the interface unit 2180 may transmit a synchronization signal corresponding to one of the plurality of contents to an eyeglass device based on the information obtained through the pairing.

In addition, when pairing with the plurality of eyeglass devices is completed, the interface unit 2180 may transmit the same or different synchronization signals to the different eyeglass devices based on the information obtained through the pairing, and may transmit the same synchronization signal to some of the eyeglass devices. For example, the interface unit 2180 may transmit a synchronization signal corresponding to content 1 to the first glasses device, a synchronization signal corresponding to content 2 to the second glasses device, and a synchronization signal corresponding to content 1 to the third glasses device.

Although the above-described embodiment describes the interface unit 2180 and the glasses devices as communicating according to the Bluetooth communication method, this is only an example. That is, in addition to the Bluetooth method, communication methods such as infrared communication and Zigbee may be used, and communication may be performed according to various other wireless communication methods capable of transmitting and receiving signals by forming a communication channel within a local area.

In addition, in the above-described embodiment, the component that generates the synchronization signal and the component that transmits it have been described as separate, but this is for convenience of description. That is, the interface unit 2180 may itself generate the synchronization signal and transmit it to the eyeglass devices, in which case the synchronization signal generator 2170 may be omitted.

In addition, in the above-described embodiment, the display apparatus 2100 generates synchronization signals corresponding to the display timings of the respective contents and transmits them to the eyeglass devices, but this is merely an example.

That is, the controller 2160 may control the interface unit 2180 to generate the synchronization signals corresponding to the display timings of the respective contents as one transport packet according to the Bluetooth communication standard. That is, the interface unit 2180 may generate one transport packet that includes all of the time information for opening and closing the shutter glasses of the eyeglass devices in synchronization with the display timing of the first content, the time information for opening and closing the shutter glasses in synchronization with the display timing of the second content, ..., and the time information for opening and closing the shutter glasses in synchronization with the display timing of the n-th content.

In this case, the interface unit 2180 may generate the transport packet by matching information about the eyeglass devices with the display timings of the respective contents. For example, the display device 2100 may match information about different glasses devices to the contents in the order in which the image frames of the contents are arranged. That is, when two contents are provided in the multi-view mode, the image frames arranged at the first, third, ..., n-th positions are matched with information about the first eyeglass device, and the image frames arranged at the second, fourth, ..., (n+1)-th positions are matched with information about the second eyeglass device (where n is odd). In addition, the interface unit 2180 may transmit the transport packet, generated to include the synchronization signals for the plurality of contents, to the eyeglass devices. Each eyeglass device may open and close its shutter glasses by using the synchronization signal that includes its own device information among the synchronization signals for the plurality of contents.
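
The position-to-view matching just described reduces to simple modular arithmetic, as the following sketch shows for a dual view mode; the device names are illustrative.

```python
# Sketch: content view shown at each frame position, and the glasses device
# matched to it (positions start at 1).

def view_for_position(position, num_views=2):
    return (position - 1) % num_views + 1

glasses_by_view = {1: "glasses-1", 2: "glasses-2"}
for pos in range(1, 7):
    print(pos, "->", glasses_by_view[view_for_position(pos)])
# positions 1, 3, 5 -> glasses-1; positions 2, 4, 6 -> glasses-2
```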

FIG. 24 is a block diagram illustrating a configuration of an eyeglass device according to an embodiment of the present invention. Since the first and second eyeglass devices 2200-1 and 2200-2 in FIGS. 16 and 17 have the same configuration, the configuration of one eyeglass device 2200 is illustrated in FIG. 24. According to FIG. 24, the eyeglass device 2200 includes an interface unit 2210, a controller 2220, a shutter glass driver 2230, a first shutter glass unit 2240, and a second shutter glass unit 2250.

The interface unit 2210 receives the synchronization signal from the display device. The interface unit 2210 may use various communication methods; for example, communication may be performed according to various wireless communication standards such as Bluetooth, Wi-Fi, Zigbee, and IEEE, or by RF or IR signal transmission and reception. The interface unit 2210 communicates with the display device to receive the synchronization signal.

The synchronization signal is a signal for synchronizing the content output time points of the display device with the eyeglass device. As described above, the synchronization signal may be received in the form of a transport packet according to various communication standards, and the transport packet may include time information indicating the display timing of the content. Since the information included in the transport packet has been described above with reference to FIG. 23, duplicate description thereof is omitted.

The controller 2220 controls the overall operation of the eyeglass device 2200. In particular, the controller 2220 transfers the synchronization signal received by the interface unit 2210 to the shutter glass driver 2230 to control the operation of the shutter glass driver 2230. That is, the controller 2220 controls the shutter glass driver 2230 to generate driving signals for driving the first shutter glass unit 2240 and the second shutter glass unit 2250 based on the synchronization signal. In addition, the controller 2220 may perform pairing with the display device in order to receive the synchronization signal.

The shutter glass driver 2230 generates the driving signals based on the synchronization signal received from the controller 2220, and provides the generated driving signals to the first shutter glass unit 2240 and the second shutter glass unit 2250 so that they are opened in accordance with the display timing of one of the plurality of contents displayed on the display apparatus 2100.

The first shutter glass unit 2240 and the second shutter glass unit 2250 open or close their shutter glasses according to the driving signals received from the shutter glass driver 2230.

Specifically, the first shutter glass unit 2240 and the second shutter glass unit 2250 simultaneously open their shutter glasses when one of the plurality of contents is displayed, and both close their shutter glasses when the other contents are displayed. Accordingly, a user wearing the eyeglass device 2200 can watch only the one content.

Meanwhile, in the case of 3D content, the first shutter glass unit 2240 and the second shutter glass unit 2250 may open and close their glasses alternately. That is, according to the driving signals, the first shutter glass unit 2240 is opened at the timing at which the left eye image constituting one 3D content is displayed, and the second shutter glass unit 2250 is opened at the timing at which the right eye image is displayed.

Meanwhile, in the above-described exemplary embodiment, the display apparatus generates the synchronization signals corresponding to the display timings of the contents and transmits them to the glasses device 2200, but this is merely an example. That is, the display device may generate the synchronization signals corresponding to the display timings of the respective contents as one transport packet according to the Bluetooth communication standard and transmit it to the eyeglass devices.

When the synchronization signal is received, the controller 2220 may check the display timing corresponding to its own glasses device information and open or close the shutter glasses according to the confirmed display timing.

In addition, in the above-described embodiment, the display apparatus and the eyeglass apparatus communicate according to various wireless communication methods capable of transmitting and receiving signals by forming a communication channel within a short distance, but this is merely an example. That is, the display device may provide IR (infrared) synchronization signals having different frequencies to the glasses devices, and each glasses device may receive the synchronization signal having its specific frequency and open or close its shutter glasses in accordance with the display timing of the corresponding content.

FIG. 25 is a flowchart illustrating a content providing method of a display apparatus according to another exemplary embodiment.

First, a plurality of contents are received (S2310). In detail, a plurality of contents having different frame rates may be received.

Thereafter, a key frame of each of the plurality of contents is detected (S2320).

For example, when the image frames constituting content are input at a frame rate of 24 frames per second or 30 frames per second, every frame may be detected as a key frame.

On the other hand, when the image frames constituting content are input at a frame rate of 60 frames per second, the key frames may be detected by extracting the pull-down method of the input frames. For example, if the current frame is repeated three times and the next frame is repeated two times, it is determined that the input content has been converted by the 3:2 pull-down method, and one frame among the three repeated frames and one frame among the two repeated frames are detected as key frames.

Then, the detected key frames are integrated (S2330). In detail, when the numbers of key frames of the plurality of contents are different, the numbers of key frames may be matched by performing a frame skip operation, and the key frames of the contents that correspond to each other may be integrated.

In this case, the key frames of the plurality of contents may be integrated into a top-to-bottom format, a side-by-side format, or a checker board format. Since each of these embodiments has been described above, a redundant description thereof is omitted.

Then, signal processing is performed on the integrated key frames (S2340). That is, motion judder cancellation can be performed by interpolating the integrated key frames. Specifically, frame rate control (FRC) is performed to convert the frame rate of the integrated key frames into a frame rate displayable on the display device. For example, in the case of the NTSC (National Television System Committee) method, the frame rate displayable on the display device 2100 may be 60 frames per second.

In this case, an interpolation frame is generated by estimating the motion of an object included in the current frame and the next frame among the integrated key frames, and the generated interpolation frame is inserted between the current frame and the next frame, whereby the frame rate of the integrated key frames can be converted into a frame rate displayable on the display device.

Meanwhile, the frames whose frame rate has been converted are separated for each content, and each frame may be up- or down-scaled according to the screen size of the display apparatus using a scaler.

Then, multi-view frames are displayed using the processed key frames (S2350). In detail, the multi-view frames may be displayed by multiplexing the image frames of the contents so that at least one of each is arranged alternately.

For example, in the case of a shutter glass type display apparatus, at least one image frame of the first content, an image frame of the second content, ..., and an image frame of the n-th content are alternately arranged and displayed. In this case, when the processed frame rate is 60 Hz, each content is displayed at n × 60 Hz, and a user can watch the desired content by wearing an eyeglass device (not shown) that operates in association with the timing at which the content is displayed.

Meanwhile, when a plurality of 3D contents are used, the left eye image and the right eye image included in each 3D content may be multiplexed in a predetermined arrangement and arranged alternately with the image frames of the other contents.

In detail, when the display apparatus operates at 60 Hz, the left eye image and the right eye image of the first content, the left eye image and the right eye image of the second content, ..., and the left eye image and the right eye image of the n-th content may be sequentially arranged and displayed at a driving frequency of 2 × n × 60 Hz. The user recognizes the left eye image and the right eye image of one 3D content through the glasses device.
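
The driving-frequency arithmetic used here and in the n × 60 Hz case above reduces to one expression, sketched below; the function name is illustrative.

```python
# Sketch: panel driving frequency for a multi-view display. Each content view
# needs the panel's base rate; 3D doubles it for the left/right eye images.

def driving_frequency_hz(num_views, is_3d, base_hz=60):
    return (2 if is_3d else 1) * num_views * base_hz

print(driving_frequency_hz(2, is_3d=False))  # dual view, 2D contents: 120 Hz
print(driving_frequency_hz(2, is_3d=True))   # dual view, 3D contents: 240 Hz
```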

In addition, the content providing method according to the present embodiment may further include generating synchronization signals for synchronizing the glasses devices corresponding to the respective contents with the display timings of the contents, and transmitting the synchronization signals to the glasses devices.

Specifically, in the multi-view mode, a synchronization signal for synchronizing an eyeglass device with the display timing of the image frames of one of the plurality of contents is generated, and in the normal mode, a synchronization signal for synchronizing the eyeglass device with the display timings of the left eye image frames and the right eye image frames of the 3D content is generated.

In addition, communication with the eyeglass devices may be performed according to various wireless communication methods to transmit the corresponding synchronization signals. Since the case in which the synchronization signal is transmitted by the Bluetooth communication method has been described above, a redundant description is omitted.

[Example using multiple SoCs]

As described above, processing a plurality of contents requires a larger number of components than processing one content. In particular, in order to provide a multi-view effectively, a plurality of display processors may be required, and designing a single SoC with a plurality of display processors takes considerable effort and cost. In view of this, the following describes a display apparatus and method according to another embodiment of the present invention that display a plurality of content views using a plurality of SoCs.

FIG. 26 is a block diagram illustrating a configuration of a display apparatus according to another exemplary embodiment. The display device 3100 of FIG. 26 may be implemented as any of various devices including a display unit, such as a TV, a mobile phone, a PDA, a notebook PC, a monitor, a tablet PC, an e-book, an electronic picture frame, or a kiosk.

According to FIG. 26, the display apparatus 3100 includes receivers 1 and 2 3110 and 3120, first and second SoCs 3130 and 3140, and an output unit 3150.

The receivers 1 and 2 (3110 and 3120) receive content from different sources, respectively. The received content may be 2D content or 3D content. As described with reference to FIG. 1, the sources may be implemented in various forms. Since the operations of the receivers 1 and 2 (3110 and 3120) are the same as those of the receivers 1 and 2 (110 and 120) in the embodiment described with reference to FIG. 1, redundant description thereof is omitted.

The SoCs 1 and 2 (3130 and 3140) include display processors 3131 and 3141, respectively. The display processor 1 3131 mounted on the SoC 1 3130 processes the content received by the receiver 1 3110, performing various signal processing on the video data in the content, such as data decoding, scaling, and frame rate conversion.

The display processor 2 3141 included in the SoC 2 3140 processes the content received by the receiver 2 3120, likewise performing signal processing such as data decoding, scaling, and frame rate conversion on the video data in the content.

The data processed by the display processor 1 3131 and the display processor 2 3141 is output to the mux 3142 in the SoC 2 3140. The mux 3142 muxes the data to generate data including a plurality of content views, and the output unit 3150 may output the plurality of content views using the data provided from the mux 3142.

The output unit 3150 includes a video output unit that displays the data output from the mux 3142. For example, in the case of a shutter glass display device, the video output unit may display the image frames of the first content and the second content arranged alternately.

As another example, in the case of a polarization type display device, the video output unit may display frames in which the image frames of the contents are separated and arranged alternately line by line. In the polarization method, the eyeglass device for viewing 3D content differs from the eyeglass device for using the multi-view mode: the eyeglass device for viewing 3D content has different polarization directions for the left eye and the right eye, whereas the eyeglass device for using the multi-view mode has the same polarization direction for the left eye and the right eye.

The output unit 3150 may also include an audio output unit. The audio output unit may modulate the audio data processed by a separately provided audio signal processor (not shown) into different radio frequency signals and output them to the eyeglass devices, or the audio data may be transmitted through an interface unit (not shown).

The display apparatus may perform a multi-view mode by combining a plurality of 2D contents or a plurality of 3D contents.

FIG. 27 is a diagram illustrating an operation of a shutter glass type display apparatus that receives and displays a plurality of 3D contents.

Referring to FIG. 27, the output unit 3150 of the display apparatus 3100 displays, on the screen, a plurality of content views 10 including the left eye and right eye images constituting the plurality of 3D contents. Each content view 10 corresponds to an image frame having the full screen size. The system of FIG. 27 is similar to the system shown in FIG. 2; in FIG. 2 the signal transmitter 190 protrudes to the outside of the apparatus, whereas in FIG. 27 it is implemented in an embedded state. In terms of operation, FIG. 27 is similar to FIG. 2, and thus redundant description is omitted.

FIG. 28 is a diagram illustrating the operation of a shutter glass type display apparatus that receives and displays a plurality of 2D contents. According to FIG. 28, image frames of different contents are displayed in the content views 1 and 2. The glasses devices 3210 and 3220 each open both the left-eye and right-eye shutters at the timing at which their matched content view is output. Accordingly, the first glasses device 3210 watches the content view 1, and the second glasses device 3220 watches the content view 2.

The display apparatus 3100 matches the content views according to the pairing order of the glasses devices 3210 and 3220. For example, in a dual view mode that provides two content views, if the first glasses device 3210 is paired first, it is matched to the content view 1, and if the second glasses device 3220 is paired next, it is matched to the content view 2, as the sketch below illustrates.
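
The pairing-order matching can be pictured with the small sketch below; the ViewMatcher class and its interface are hypothetical names introduced for illustration, not the patent's API.

```cpp
#include <map>
#include <optional>

class ViewMatcher {
public:
    // num_views: 2 for dual view, 3 for triple view, and so on.
    explicit ViewMatcher(int num_views) : num_views_(num_views) {}

    // Called when a glasses device completes pairing. Returns the content
    // view matched to it (1-based, in pairing order), or std::nullopt when
    // every content view is already taken.
    std::optional<int> OnPaired(int device_id) {
        if (next_view_ > num_views_) return std::nullopt;
        assigned_[device_id] = next_view_;
        return next_view_++;
    }

private:
    int num_views_;
    int next_view_ = 1;
    std::map<int, int> assigned_;  // device id -> matched content view
};
```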

FIG. 29 illustrates an example of a configuration of an SoC 1 3130 used in the display apparatus 3100 of FIG. 26. SoC 1 3130 includes a display processor 3131, a video decoder 3132, a CPU 3133, and a memory 3134.

The video decoder 3132 is configured to decode video data in content received by the receiver 1 3110. The display processor 3131 performs processing such as scaling and frame rate conversion on the video data output from the video decoder 3132 as described above.

The memory 3134 stores programs and data necessary for the operation of the SoC 1 3130. The CPU 3133 controls the operations of the video decoder 3132 and the display processor 3131 using the memory 3134.

The SoC 1 3130 receives 3D content through an HDMI port and outputs the data processed by the display processor 3131 to the SoC 2 3140 through a high-speed interface such as LVDS Tx. The SoC 2 3140 receives the data through LVDS Rx and processes it in the display processor 3141. The mux 3142 then muxes the two data streams and provides the result to the output unit 3150.

As described above, the display apparatus 3100 may process 2D content or 3D content. Hereinafter, a case of receiving a plurality of 3D contents will be described as an example.

FIG. 30 is a block diagram showing a detailed configuration of a display apparatus. According to FIG. 30, the display apparatus includes the receivers 1 and 2 3110 and 3120, the SoCs 1 and 2 3130 and 3140, a frame rate converter 3150, an output unit 3160, a controller 3170, an interface unit 3180, and a synchronization signal generator 3190.

The receivers 1 and 2 3110 and 3120 receive 3D content from various sources, and the SoCs 1 and 2 3130 and 3140 perform signal processing on each 3D content; since both were described in detail with reference to FIG. 26, redundant description is omitted. The frame rate converter 3150 converts the frame rate of the data output from the SoC 2 3140 according to the type of the multi-view mode. The multi-view mode may include various modes, such as a dual view mode, a triple view mode, and a quadruple view mode, depending on the number of content views. For example, if the display apparatus 3100 operates at 60 Hz and is in the dual view mode, the frame rate converter 3150 converts the frame rate of each 3D content to 120 Hz.
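
A minimal sketch of this rate selection, assuming the rule the example implies (target rate = panel rate multiplied by the number of content views); the names and the 60 Hz constant are illustrative assumptions.

```cpp
enum class MultiViewMode { kDual = 2, kTriple = 3, kQuadruple = 4 };

constexpr int kPanelHz = 60;  // assumed base operating rate of the apparatus

// Each content is converted to the panel rate multiplied by the number of
// content views, e.g. 120 Hz per content in dual view on a 60 Hz apparatus.
constexpr int TargetRateHz(MultiViewMode mode) {
    return kPanelHz * static_cast<int>(mode);
}

static_assert(TargetRateHz(MultiViewMode::kDual) == 120, "dual view -> 120 Hz");
static_assert(TargetRateHz(MultiViewMode::kQuadruple) == 240, "quad view -> 240 Hz");
```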

The interface unit 3180 communicates with the glasses devices. In detail, the interface unit 3180 may transmit an audio signal or a synchronization signal to the glasses devices according to various wireless communication standards such as Bluetooth, Wi-Fi, Zigbee, and IEEE. Alternatively, the interface unit 3180 may be implemented as an IR lamp that emits an IR synchronization signal or as an RF transmitter that outputs an RF synchronization signal. When implemented as an IR lamp or an RF transmitter, the interface unit 3180 may be provided on the exterior of the apparatus, like the signal transmitter 190 of FIG. 2.

The synchronization signal generator 3190 generates a synchronization signal for synchronizing the plurality of content views output from the output unit 3160 with the plurality of glasses devices, and transmits it to each glasses device through the interface unit 3180. The synchronization signal generator 3190 may generate the synchronization signal in a format corresponding to the interface method with the glasses devices, that is, as a data stream conforming to a wireless communication standard, an RF signal, or an IR signal. The synchronization signal generator 3190 may be integrated with the interface unit 3180.
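
As one way to picture such a signal, the struct below sketches a payload it might carry; the patent does not fix a payload format, so every field here is an assumption for illustration.

```cpp
#include <cstdint>

// Hypothetical synchronization payload: tells one glasses device which
// content view it is matched to and when that view's frames are shown.
struct SyncSignal {
    uint8_t  content_view;     // matched content view index (1-based)
    uint64_t next_display_us;  // panel time of that view's next frame
    uint32_t frame_period_us;  // interval between that view's frames
};
```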

The controller 3170 controls the overall operation of the display apparatus 3100 and may change its operation mode according to a user selection. The user may select one of various operation modes, such as a single view mode for viewing a single content and a multi-view mode for viewing a plurality of contents. In the single view mode, one content, whether 2D or 3D, is output; in the multi-view mode, a plurality of contents are combined and provided as a plurality of content views as described above. In the multi-view mode, a content view is retained even when playback of the content assigned to it ends and the next content starts playing.

When the user inputs a mode switching command while the apparatus operates in the single view mode, the controller 3170 controls the SoCs 1 and 2 3130 and 3140 and the output unit 3160 to combine and output a plurality of contents. When switching to the multi-view mode, the controller 3170 also controls the synchronization signal generator 3190 and the interface unit 3180 to transmit a synchronization signal to the glasses device matched to each content.
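
A hedged sketch of this mode switch; the class, the method names, and the two hooks are illustrative stand-ins, not the patent's actual control flow.

```cpp
enum class OperationMode { kSingleView, kMultiView };

class ModeController {
public:
    // Invoked when the user inputs a mode switching command.
    void OnModeSwitchCommand() {
        if (mode_ == OperationMode::kSingleView) {
            mode_ = OperationMode::kMultiView;
            StartMultiViewOutput();  // drive the SoCs and the output unit
            TransmitSyncSignals();   // drive the sync generator and interface
        } else {
            mode_ = OperationMode::kSingleView;
        }
    }

private:
    void StartMultiViewOutput() { /* combine contents into content views */ }
    void TransmitSyncSignals()  { /* one sync signal per matched glasses device */ }
    OperationMode mode_ = OperationMode::kSingleView;
};
```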

FIG. 31 is a block diagram illustrating a configuration of a display apparatus according to another exemplary embodiment. According to FIG. 31, the display apparatus includes receivers 1 and 2 3310 and 3320, a plurality of SoCs 3330, 3340, and 3350, a frame rate converter 3360, and an output unit 3370.

The receivers 1 and 2 3310 and 3320 may receive various types of content from various sources as described with reference to FIG. 26.

The SoC 1 3330 and the SoC 2 3340 include the display processors 1 and 2 3331 and 3341, respectively, and the SoC 3 3350 includes a mux 3351. The mux 3351 muxes the data output from the SoCs 1 and 2 3330 and 3340 and outputs the muxed data to the frame rate converter 3360.

The frame rate converter 3360 converts the frame rate of the data muxed by the mux 3351 and outputs it to the output unit 3370.

The output unit 3370 outputs a plurality of content views according to the data output from the frame rate converter 3360.

Meanwhile, the glasses devices shown in FIGS. 27 and 28 may have the configuration shown in FIG. 24. That is, each of the first and second glasses devices 3210 and 3220 includes the first and second shutter glass units 2240 and 2250, the shutter glass driver 2230, the controller 2220, and the interface unit 2210. Since the glasses device has been described in detail with reference to FIG. 24, duplicate description thereof is omitted.

FIG. 32 is a flowchart for explaining a display method according to another exemplary embodiment. Referring to FIG. 32, when a 3D multi-view mode for receiving and outputting a plurality of 3D contents starts (S3810), the plurality of 3D contents are received (S3820) and each 3D content is processed using a plurality of SoCs (S3830).

In the processing of each content using a plurality of SoCs, the data processed in each SoC may be muxed using a mux mounted on one of the SoCs, and the frame rate of the muxed data may be converted.

Alternatively, after each 3D content is processed in the plurality of SoCs, the data may be muxed using a mux mounted on a separate SoC, and the frame rate of the muxed data may then be converted.
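
In both variants the mux itself is simple: it arranges the frames from the two processed streams into one alternating stream before frame rate conversion. A rough sketch follows, with all names illustrative.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct ProcessedFrame { /* pixel data omitted for brevity */ };

// Interleaves the outputs of two SoCs frame by frame:
// view 1, view 2, view 1, view 2, ...
std::vector<ProcessedFrame> MuxAlternating(
        const std::vector<ProcessedFrame>& soc1_out,
        const std::vector<ProcessedFrame>& soc2_out) {
    const std::size_t n = std::min(soc1_out.size(), soc2_out.size());
    std::vector<ProcessedFrame> muxed;
    muxed.reserve(2 * n);
    for (std::size_t i = 0; i < n; ++i) {
        muxed.push_back(soc1_out[i]);
        muxed.push_back(soc2_out[i]);
    }
    return muxed;  // the frame rate converter then retimes this stream
}
```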

Accordingly, a plurality of content views are displayed by combining image frames of each 3D content (S3840), and a synchronization signal is transmitted (S3850).

Although not shown in FIG. 32, the method may further include performing pairing with a plurality of glasses devices and sequentially matching the plurality of glasses devices to the plurality of content views according to the pairing order.

As described above, according to various embodiments of the present disclosure, a plurality of contents may be received to effectively provide a multi-view.

The method according to the various embodiments described above may be programmed as an application and provided to a display device and an eyeglass device.

Specifically, a program that provides a multi-view display by receiving a plurality of contents each including a left eye image and a right eye image, reducing the data sizes of the plurality of contents and storing them, converting the frame rates of the plurality of stored contents, and sequentially combining and displaying the contents whose frame rates have been converted may be stored in a non-transitory computer readable medium that is embedded in the display device or connected to the display device for use. Such a program can also be downloaded from various sources, such as a web server or a management server.

Alternatively, a program that performs signal processing by sequentially performing the steps of downscaling a plurality of 3D contents each including a left eye image and a right eye image, converting the frame rates of the 3D contents using a plurality of frame rate converters, constructing multi-content frame data using the plurality of 3D contents having the converted frame rates, and transmitting the multi-content frame data to the display apparatus may be provided to the display apparatus through a non-transitory readable medium or a network.

Alternatively, a program that performs the steps of sequentially receiving a plurality of different contents having different frame rates, matching the frame rates of the plurality of contents, and displaying multi-view frames using the respective contents having the matched frame rates may be provided to the display device through a non-transitory readable medium or a network.
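
The frame-rate matching step can be pictured as synthesizing intermediate frames for the content with the lower rate. The sketch below uses plain linear pixel blending, weighted by how far the faster content has advanced between the slower content's frames, as a stand-in for the motion-compensated interpolation a real converter would use; all names are illustrative.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

using Plane = std::vector<uint8_t>;  // one 8-bit plane of an image frame

// ratio in [0, 1]: 0 reproduces `cur`, 1 reproduces `next`; intermediate
// values yield the interpolated frame inserted into the slower stream.
Plane InterpolateFrame(const Plane& cur, const Plane& next, float ratio) {
    Plane out(cur.size());
    for (std::size_t i = 0; i < cur.size(); ++i) {
        out[i] = static_cast<uint8_t>(cur[i] + ratio * (next[i] - cur[i]));
    }
    return out;
}
```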

The non-transitory readable medium described above is not a medium that stores data for a short time, such as a register, a cache, or a memory, but a medium that stores data semi-permanently and that can be read by a device. Specifically, the various applications or programs described above may be stored in and provided through a non-transitory readable medium such as a CD, a DVD, a hard disk, a Blu-ray disc, a USB memory, a memory card, or a ROM.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present invention, and that the invention is not limited to the disclosed exemplary embodiments.

110, 120: receiver 1, 2 130, 140: scaler 1, 2
150: storage unit 180: image output unit
160, 170: frame rate converter 1, 2

Claims (24)

A plurality of receivers for receiving a plurality of contents;
A storage unit;
A plurality of scalers configured to reduce the data size of the plurality of contents and store the contents in the storage unit, and read out each of the contents stored in the storage unit according to an output timing;
A plurality of frame rate converters for converting frame rates of the read contents; And
And an image output unit configured to combine and display the respective contents output from the plurality of frame rate converters.
The display apparatus of claim 1,
The plurality of contents are 3D contents each including a left eye image and a right eye image,
Each of the plurality of scalers,
And downscaling the plurality of 3D contents, reducing the frame rate thereof, and storing them in the storage unit.
The display apparatus of claim 2,
The plurality of contents are 3D contents each including a left eye image and a right eye image,
Each of the plurality of scalers,
Downscaling the plurality of 3D contents and storing them in the storage unit;
And reading out each 3D content stored in the storage unit according to an output timing, lowering the frame rate of the read 3D content, and providing it to the plurality of frame rate converters.
The display apparatus of claim 1,
The plurality of contents are 3D contents each including a left eye image and a right eye image,
At least one of the plurality of scalers,
If the 3D content is 3:2 pull-down film image content, downscaling the film image content, extracting only key frames, and storing them in the storage unit,
And the frame rate converter converts the frame rate of each 3D content into a multi-content display rate by interpolating frames based on the read key frames when the key frames are read from the storage unit.
The display apparatus of claim 1,
The image output unit,
And multiplexing the respective contents provided from the plurality of frame rate converters so that they are sequentially arranged according to a predetermined arrangement order, and upscaling the multiplexed data to fit the screen size.
In the multi-content display method of the display device,
Receiving a plurality of contents each including a left eye image and a right eye image;
Reducing and storing data sizes of the plurality of contents;
Converting frame rates of the plurality of stored contents, respectively; And
And combining and displaying each content of which the frame rate is converted.
The method according to claim 6,
The plurality of contents are 3D contents each including a left eye image and a right eye image,
Reducing and storing the data size of the plurality of contents,
Downscaling the plurality of 3D content;
Reducing the frame rate of each of the downscaled 3D content;
Storing each 3D content having the reduced frame rate;
Converting the frame rate,
And converting the frame rate of each 3D content into a multi content display rate.
The method according to claim 6,
The plurality of contents are 3D contents each including a left eye image and a right eye image,
Reducing and storing the data size of the plurality of contents,
If the 3D content is 3:2 pull-down film image content, downscaling the film image content;
And extracting and storing only key frames of the downscaled film image content;
Converting the frame rate,
And converting the frame rate of each 3D content by interpolating frames based on the stored key frames.
The method according to claim 6,
Wherein the displaying comprises:
Multiplexing each content so that the contents are sequentially arranged according to a predetermined arrangement order;
Upscaling the multiplexed data to fit the screen size;
Displaying the upscaled data.
In the signal processing apparatus,
A plurality of scalers configured to reduce a data size of a plurality of 3D contents each including a left eye image and a right eye image;
A storage unit which stores a plurality of 3D contents processed by the plurality of scalers;
And a plurality of frame rate converters for converting the frame rates of the plurality of 3D contents stored in the storage unit into a multi-content display rate.
The signal processing apparatus of claim 10,
The plurality of scalers,
Downscaling the plurality of 3D contents and storing the 3D content in the storage;
And when the down-scaled 3D content is read from the storage unit, converting the read 3D content into a format that can be processed by the plurality of frame rate converters.
The signal processing apparatus of claim 10,
An image processor configured to construct multi-content frame data using a plurality of 3D contents having a frame rate converted by the plurality of frame rate converters; And
And an interface unit configured to transmit the multi-content frame data to a display device.
In the signal processing method,
Downscaling a plurality of 3D contents each including a left eye image and a right eye image;
Converting the frame rate of the 3D content using a plurality of frame rate converters;
Constructing multi-content frame data using a plurality of 3D contents having the converted frame rate; And
Transmitting the 3D multi content frame data to a display device.
The method of claim 13,
And converting the plurality of down-scaled 3D contents into a format that can be processed by the plurality of frame rate converters.
Receiving a plurality of different contents having different frame rates;
Matching frame rates of the plurality of contents;
Displaying a multi-view frame using each content having the matched frame rate.
The method of claim 15,
Matching the frame rate,
Storing the plurality of contents;
Generating a plurality of image frames by processing the plurality of contents, respectively;
And interpolating the image frames of the content having the relatively low frame rate among the plurality of contents.
The method of claim 16,
The interpolation process,
Comparing the reception times of the plurality of contents, and checking the storage ratio of the corresponding frame of the other content at the time when one image frame of one of the plurality of contents has been completely stored;
And generating an interpolation frame by combining the corresponding frame and a next frame of the corresponding frame according to the identified storage ratio.
The method of claim 17,
Generating the interpolation frame,
And comparing the corresponding frame and the next frame to estimate a motion of an object displayed in the frames, and applying the storage ratio to the estimated motion to generate the interpolated frame.
The method of claim 15,
Matching the frame rate,
Detecting a key frame of each of the plurality of contents;
Integrating the detected key frames.
The method of claim 19,
Integrating the key frame,
And if the number of key frames of each of the plurality of contents is different, performing a frame repeat or skip operation to match the number of key frames, and integrating key frames corresponding to each of the contents.
The method of claim 20,
Matching the frame rate,
And performing motion judder removal by performing interpolation on the integrated key frames.
A plurality of receivers for receiving a plurality of 3D contents;
A plurality of system on chips (SoCs) each having a display processor for processing 3D content;
And an output unit for outputting a plurality of content views by combining image frames of each of the 3D contents processed by the plurality of SoCs.
The display apparatus of claim 22,
One SoC of the plurality of SoCs includes a mux for muxing data processed by a display processor mounted on the SoC and data output from another SoC.
The display apparatus of claim 22,
A SoC equipped with a mux for muxing data output from the plurality of SoCs; And
And a frame rate converter for converting the frame rate of the data muxed in the mux.
KR1020120054864A 2011-12-28 2012-05-23 Signal processing device for processing a plurality of 3d contents, display device for displaying them and methods thereof KR20130076674A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US13/614,277 US20130169755A1 (en) 2011-12-28 2012-09-13 Signal processing device for processing plurality of 3d content, display device for displaying the content, and methods thereof
EP12184610.9A EP2611161B1 (en) 2011-12-28 2012-09-17 Signal processing device for processing plurality of 3D content, display device for displaying the content, and methods thereof
CN2012105899170A CN103188509A (en) 2011-12-28 2012-12-28 Signal processing device for processing a plurality of 3d content, display device, and methods thereof

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
KR20110144365 2011-12-28
KR1020110145280 2011-12-28
KR20110145280 2011-12-28
KR1020110144365 2011-12-28
KR1020110147502 2011-12-30
KR20110147291 2011-12-30
KR20110147502 2011-12-30
KR1020110147291 2011-12-30

Publications (1)

Publication Number Publication Date
KR20130076674A true KR20130076674A (en) 2013-07-08

Family

ID=48990210

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020120054864A KR20130076674A (en) 2011-12-28 2012-05-23 Signal processing device for processing a plurality of 3d contents, display device for displaying them and methods thereof

Country Status (1)

Country Link
KR (1) KR20130076674A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150058809A * 2013-11-21 Samsung Electronics Co., Ltd. Apparatus and method for reproducing multi image
WO2016036073A1 * 2014-09-02 Samsung Electronics Co., Ltd. Display device, system and controlling method therefor
US10140685B2 (en) 2014-09-02 2018-11-27 Samsung Electronics Co., Ltd. Display device, system and controlling method therefor
US10878532B2 (en) 2014-09-02 2020-12-29 Samsung Electronics Co., Ltd. Display device, system and controlling method therefor

Similar Documents

Publication Publication Date Title
EP2611161B1 (en) Signal processing device for processing plurality of 3D content, display device for displaying the content, and methods thereof
EP2375767A1 (en) Stereoscopic video player, stereoscopic video playback system, stereoscopic video playback method, and semiconductor device for stereoscopic video playback
US8760468B2 (en) Image processing apparatus and image processing method
EP2320669B1 (en) Stereoscopic image reproduction method in case of pause mode and stereoscopic image reproduction apparatus using same
US20120113113A1 (en) Method of processing data for 3d images and audio/video system
US8994787B2 (en) Video signal processing device and video signal processing method
US20110063422A1 (en) Video processing system and video processing method
US9438895B2 (en) Receiving apparatus, transmitting apparatus, communication system, control method of the receiving apparatus and program
US20150035958A1 (en) Apparatus and method for concurrently displaying multiple views
US20130141534A1 (en) Image processing device and method
KR20130028098A (en) Method and apparatus for displaying images
JP2009296144A (en) Digital video data transmission apparatus, digital video data reception apparatus, digital video data transport system, digital video data transmission method, digital video data reception method, and digital video data transport method
US20150289015A1 (en) Broadcast receiving apparatus, upgrade device for upgrading the apparatus, broadcast signal processing system, and methods thereof
US20140015941A1 (en) Image display apparatus, method for displaying image and glasses apparatus
JP2013090020A (en) Image output device and image output method
JP5412404B2 (en) Information integration device, information display device, information recording device
KR101885215B1 (en) Display apparatus and display method using the same
US20130169698A1 (en) Backlight providing apparatus, display apparatus and controlling method thereof
KR20130076674A (en) Signal processing device for processing a plurality of 3d contents, display device for displaying them and methods thereof
US20130266287A1 (en) Reproduction device and reproduction method
CN103188513A (en) Device and method for displaying video
US20110310222A1 (en) Image distributing apparatus, display apparatus, and image distributing method thereof
CN204697147U (en) For the device of display video
KR20120062428A (en) Image display apparatus, and method for operating the same
JP2013090019A (en) Image output device and image output method

Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination