KR20150024198A - Image controlling apparatus and method thereof - Google Patents


Info

Publication number
KR20150024198A
KR20150024198A (Application KR20130101365A)
Authority
KR
South Korea
Prior art keywords
viewer
information
viewing
registration information
image
Prior art date
Application number
KR20130101365A
Other languages
Korean (ko)
Inventor
Hwangbo Sang-gyu (황보상규)
Original Assignee
LG Electronics Inc. (엘지전자 주식회사)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc.
Priority to KR20130101365A
Publication of KR20150024198A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/302 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/349 Multi-view displays for displaying three or more geometrical viewpoints without viewer tracking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/398 Synchronisation thereof; Control thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present invention relates to an image processing apparatus capable of easily and quickly registering the position of a user (viewer) for viewing a glasses-free multi-view 3D image, and of effectively providing a 3D image based on the registered user position, and to a method thereof. The image processing apparatus according to an embodiment disclosed herein includes a display unit that displays a stereoscopic image in a multi-view manner, and a controller that presets a plurality of viewing zones according to the viewing distance of the stereoscopic image, presets a plurality of items of viewer position registration information corresponding to the respective viewing zones, sequentially displays the plurality of items of viewer position registration information on the display unit, and registers the item selected by the viewer from among the sequentially displayed items as the viewing position information of the viewer.

Description

IMAGE CONTROLLING APPARATUS AND METHOD THEREOF

The present invention relates to an image processing apparatus and a method thereof.

An image display device includes devices that receive and display broadcasts, record and reproduce moving images, and record and reproduce audio. Examples of the video display device include a television, a computer monitor, a projector, and a tablet. As the functions of such video display devices have diversified, they have evolved into multimedia players with complex functions such as capturing photos or video, playing games, and receiving broadcasts, in addition to the broadcasting function. Furthermore, in recent years, video display devices have been implemented as smart devices (for example, smart televisions). Accordingly, the video display device can operate in conjunction with a mobile terminal or a computer, and can also run the Internet and the like.

In recent years, interest in stereoscopic image services has been increasing, and devices for providing stereoscopic images have been continuously developed. For example, a 3D stereoscopic image display apparatus detects depth information of a stereoscopic object included in a 3D image, and displays a 3D image on a flat panel based on the detected depth information. A conventional image display apparatus and an operation method thereof are disclosed in Korean Patent Application No. 10-2013-0011041.

An object of the present invention is to provide an image processing apparatus and method capable of easily and quickly registering the position of a user (viewer) for viewing a glasses-free multi-view 3D image, and of effectively providing a 3D image based on the registered user position.

According to an embodiment of the present invention, there is provided an image processing apparatus including: a display unit that displays a stereoscopic image in a multi-view manner; and a controller that presets a plurality of viewing zones according to the viewing distance of the stereoscopic image, presets a plurality of items of viewer position registration information corresponding to the respective viewing zones, sequentially displays the plurality of items of viewer position registration information on the display unit, and registers, as the viewing position information of the viewer, the item selected by the viewer from among the sequentially displayed items.

In one embodiment of the present invention, each item of the viewer position registration information includes at least one of: guide information for placing the viewer at a sweet spot at the optimum viewing distance corresponding to one of the plurality of viewing zones; a stereoscopic image having the optimum viewing distance corresponding to that viewing zone; and zone information and a viewer-information input item associated with the optimum viewing distance of that viewing zone.

As an example related to the present specification, the guide information may include: information requesting the viewer to move left or right so that the viewer is located in the sweet spot; information indicating that the viewer is located in the sweet spot; and information requesting the viewer to move forward or backward so that the viewer is located in the sweet spot.
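The guide logic described above can be sketched as follows (a minimal illustration; the tolerance value, sign conventions, and all function and message names are assumptions, not taken from the patent):

```python
def guide_message(lateral_offset_cm, distance_error_cm, tolerance_cm=5.0):
    """Pick a guide message steering the viewer toward the sweet spot.

    lateral_offset_cm: viewer's horizontal offset from the sweet-spot center
                       (negative = left of center, positive = right).
    distance_error_cm: viewer distance minus the optimum viewing distance
                       (negative = too close, positive = too far).
    """
    if abs(lateral_offset_cm) > tolerance_cm:
        # Ask the viewer to move toward the center of the sweet spot.
        return "Move right" if lateral_offset_cm < 0 else "Move left"
    if abs(distance_error_cm) > tolerance_cm:
        # Ask the viewer to correct the viewing distance.
        return "Move backward" if distance_error_cm < 0 else "Move forward"
    return "You are in the sweet spot"
```

The function returns one of the three kinds of guide information named in the claim: a left/right request, a forward/backward request, or a confirmation that the viewer is in the sweet spot.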

As an example related to the present specification, the stereoscopic image may include a plurality of stereoscopic images, each having the optimum viewing distance corresponding to one of the plurality of viewing zones.

As an example related to the present specification, the zone information may include: information indicating one of the plurality of viewing zones; and information indicating a viewer-information input item for registering viewer information corresponding to that viewing zone.

In one embodiment of the present invention, when one of the plurality of viewing zones is selected, the controller may display, on the display unit, the viewer position registration information corresponding to the selected viewing zone from among the plurality of items of viewer position registration information.

In one embodiment of the present invention, when viewer information is input to the viewer-information input item included in a first item of viewer position registration information among the plurality of items, the controller may register the first item as the viewing position information of the viewer.

In one embodiment of the present invention, when the viewer-information input item included in the first item of viewer position registration information is selected from among the plurality of items, the controller may recognize the face of the viewer through a camera and automatically input the recognized face image as the viewer information.
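The automatic face capture in this embodiment could be sketched as follows. This is a pure-Python illustration only: the face detector itself is stubbed out (a real implementation would use a camera and an actual detector), and every name and data layout here is an assumption, not the patent's implementation:

```python
def crop_face(frame, bbox):
    """Crop a detected face region from a frame.

    frame: 2-D list of pixel rows; bbox: (x, y, w, h) as produced by a
    face detector (stubbed out in this sketch).
    """
    x, y, w, h = bbox
    return [row[x:x + w] for row in frame[y:y + h]]

def register_viewer(registration, frame, bbox):
    """Fill the viewer-information input item with the captured face image."""
    registration = dict(registration)  # do not mutate the caller's record
    registration["viewer_info"] = crop_face(frame, bbox)
    return registration

# Dummy 6x8 frame whose "pixels" are (row, col) tuples, and a fake
# detector result covering a 3x2 region starting at column 2, row 1.
frame = [[(r, c) for c in range(8)] for r in range(6)]
entry = register_viewer({"zone": 1, "viewer_info": None}, frame, (2, 1, 3, 2))
```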

As an example related to the present specification, when information requesting the registered viewing position information is received, the controller displays the registered viewing position information on the display unit. The registered viewing position information may include: viewing zone information indicating the registered viewing zone; viewer information associated with the viewing zone information; and information indicating that the viewer position of the registered viewing zone is a sweet spot.

According to an embodiment of the present invention, there is provided an image processing method including: displaying a multi-view stereoscopic image on a display unit; presetting a plurality of viewing zones according to the viewing distance of the stereoscopic image; presetting a plurality of items of viewer position registration information corresponding to the respective viewing zones; sequentially displaying the plurality of items of viewer position registration information on the display unit at the request of the viewer; and registering, as the viewing position information of the viewer, the item selected by the viewer from among the sequentially displayed items.
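The method steps above can be sketched as a simple registration flow (a minimal sketch; the concrete zone boundaries, distances, and all names are illustrative assumptions, not values from the patent):

```python
# Preset viewing zones keyed by viewing-distance range (cm), each with an
# optimum viewing distance, standing in for the patent's viewer position
# registration information.
VIEWING_ZONES = {
    1: {"distance_range": (150, 250), "optimum_distance": 200},
    2: {"distance_range": (250, 350), "optimum_distance": 300},
    3: {"distance_range": (350, 450), "optimum_distance": 400},
}

def zone_for_distance(distance_cm):
    """Return the viewing zone whose distance range contains the viewer."""
    for zone_id, zone in VIEWING_ZONES.items():
        lo, hi = zone["distance_range"]
        if lo <= distance_cm < hi:
            return zone_id
    return None  # viewer is outside every preset zone

def register_position(selected_zone, viewer_name):
    """Register the selected zone's info as the viewer's viewing position."""
    zone = VIEWING_ZONES[selected_zone]
    return {
        "zone": selected_zone,
        "optimum_distance": zone["optimum_distance"],
        "viewer": viewer_name,
    }

# A viewer at 320 cm falls into zone 2 and registers against it.
reg = register_position(zone_for_distance(320), "viewer-1")
```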

The image processing apparatus and method according to embodiments of the present invention can easily and quickly register a user (viewer) position for 3D image viewing and provide a 3D image based on the registered position, so that the user (viewer) can enjoy high-quality 3D images easily and conveniently.

Fig. 1 is a block diagram showing an image display apparatus and an external input apparatus according to the present invention.
Fig. 2 is a block diagram illustrating the configuration of a 3D image display apparatus to which an image processing apparatus according to embodiments of the present invention is applied.
Fig. 3 is an exemplary view showing a multi-view glasses-free 3D display for explaining the present invention.
Fig. 4A is an exemplary view showing information displayed to a user located in a sweet spot.
Fig. 4B is an exemplary view showing information displayed to a user located in a dead zone.
Fig. 5 is a flowchart illustrating an image processing method according to an embodiment of the present invention.
Fig. 6 is an exemplary view showing multiple viewing zones according to an embodiment of the present invention.
Fig. 7 is an exemplary view showing viewer position registration information according to an embodiment of the present invention.
Fig. 8 is an exemplary view illustrating registered viewing position information according to an embodiment of the present invention.

It is noted that the technical terms used herein are used only to describe specific embodiments and are not intended to limit the invention. Unless defined otherwise herein, the technical terms used herein should be interpreted in the sense generally understood by a person of ordinary skill in the art to which the present invention belongs, and should not be interpreted in an excessively broad or an excessively narrow sense. Further, when a technical term used herein fails to accurately express the spirit of the present invention, it should be understood as replaced by a technical term that a person skilled in the art can correctly understand. In addition, general terms used in the present invention should be interpreted according to their dictionary definitions or the surrounding context, and should not be interpreted in an excessively narrow sense.

Also, as used herein, singular forms include plural referents unless the context clearly dictates otherwise. In the present application, the terms "comprising" and "including" should not be construed as necessarily requiring all of the elements or steps described in the specification; some elements or steps may be absent, or additional elements or steps may be included.

Furthermore, terms including ordinals such as first, second, etc. used in this specification can be used to describe various elements, but the elements should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another. For example, without departing from the scope of the present invention, the first component may be referred to as a second component, and similarly, the second component may also be referred to as a first component.

Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings, wherein like reference numerals refer to like or similar elements throughout the several views, and redundant description thereof will be omitted.

In the following description, a detailed description of well-known functions or constructions is omitted so as not to obscure the invention in unnecessary detail. The accompanying drawings are provided only to facilitate understanding of the present invention, and the scope of the present invention should not be construed as limited by them.

In this specification, an image display apparatus includes devices that receive and display broadcasts, record and reproduce moving images, and record and reproduce audio. Hereinafter, a television will be described as an example.

Fig. 1 is a block diagram showing an image display apparatus 100 and an external input apparatus 190 according to the present invention. The video display apparatus 100 includes a tuner 110, a demodulator 120, a signal input/output unit 130, an interface unit 140, a control unit 150, a storage unit 160, a display unit 170, and an audio output unit 180. The external input device 190 may be a separate device from the image display device 100, or may be included as a component of the image display device 100.

Referring to Fig. 1, the tuner 110 selects an RF (Radio Frequency) broadcast signal corresponding to a channel selected by the user from among RF broadcast signals received through an antenna, and converts the selected signal into an intermediate-frequency signal or a baseband video/audio signal. For example, if the RF broadcast signal is a digital broadcast signal, the tuner 110 converts it into a digital IF signal (DIF); if the RF broadcast signal is an analog broadcast signal, the tuner 110 converts it into an analog baseband video/audio signal (CVBS/SIF). Thus, the tuner 110 may be a hybrid tuner capable of processing both digital and analog broadcast signals.

The digital IF signal (DIF) output from the tuner 110 is input to the demodulator 120, and the analog baseband video/audio signal (CVBS/SIF) output from the tuner 110 is input to the control unit 150. The tuner 110 can receive an RF broadcast signal of a single carrier according to the ATSC (Advanced Television Systems Committee) scheme or an RF broadcast signal of multiple carriers according to the DVB (Digital Video Broadcasting) scheme.

Although one tuner 110 is shown in the drawing, the present invention is not limited thereto. The video display device 100 may include a plurality of tuners, for example first and second tuners. In this case, the first tuner may receive the first RF broadcast signal corresponding to the broadcast channel selected by the user, and the second tuner may sequentially or periodically receive second RF broadcast signals corresponding to previously stored broadcast channels. The second tuner can convert the RF broadcast signal into a digital IF signal (DIF) or an analog baseband video/audio signal (CVBS/SIF) in the same manner as the first tuner.

The demodulator 120 receives the digital IF signal (DIF) converted by the tuner 110 and performs a demodulation operation. For example, if the digital IF signal output from the tuner 110 follows the ATSC scheme, the demodulator 120 performs 8-VSB (8-Vestigial Side Band) demodulation. At this time, the demodulator 120 may perform channel decoding such as trellis decoding, de-interleaving, and Reed-Solomon decoding. To this end, the demodulator 120 may include a trellis decoder, a de-interleaver, and a Reed-Solomon decoder.

For example, if the digital IF signal output from the tuner 110 follows the DVB scheme, the demodulator 120 performs COFDM (Coded Orthogonal Frequency Division Multiplexing) demodulation. At this time, the demodulator 120 may perform channel decoding such as convolutional decoding, de-interleaving, and Reed-Solomon decoding. To this end, the demodulator 120 may include a convolutional decoder, a de-interleaver, and a Reed-Solomon decoder.

The signal input / output unit 130 may be connected to an external device to perform signal input and output operations, and may include an A / V input / output unit and a wireless communication unit.

The A/V input/output unit may include an Ethernet terminal, a USB terminal, a CVBS (Composite Video Banking Sync) terminal, a component terminal, an S-video terminal (analog), a DVI (Digital Visual Interface) terminal, a Mobile High-definition Link (MHL) terminal, an RGB terminal, a D-SUB terminal, an IEEE 1394 terminal, an SPDIF terminal, and a Liquid HD terminal. Digital signals input through these terminals may be transmitted to the control unit 150. Analog signals input through the CVBS terminal and the S-video terminal may be converted into digital signals through an analog-to-digital converter (not shown) and transmitted to the control unit 150.

The wireless communication unit can perform wireless Internet access, for example using WLAN (Wi-Fi), WiBro (Wireless Broadband), WiMAX (World Interoperability for Microwave Access), or HSDPA (High Speed Downlink Packet Access). In addition, the wireless communication unit can perform short-range wireless communication with other electronic devices, for example using Bluetooth, RFID (Radio Frequency Identification), IrDA (Infrared Data Association), UWB (Ultra Wideband), or ZigBee.

The signal input/output unit 130 may transmit to the control unit 150 video signals, audio signals, and data signals provided from external devices such as a DVD (Digital Versatile Disk) player, a Blu-ray player, a game device, or a camcorder. It may also transmit to the control unit 150 video signals, audio signals, and data signals of various media files stored in an external storage device such as a memory device or a hard disk. In addition, the video, audio, and data signals processed by the control unit 150 can be output to other external devices.

The signal input/output unit 130 may be connected to a set-top box, for example a set-top box for IPTV (Internet Protocol TV), through at least one of the various terminals described above, to perform signal input and output operations. For example, the signal input/output unit 130 can transmit video, audio, and data signals processed by the IPTV set-top box to the control unit 150, and can transmit signals processed by the control unit 150 to the IPTV set-top box, thereby enabling bidirectional communication. Here, the IPTV may include ADSL-TV, VDSL-TV, FTTH-TV, and the like, classified according to the transmission network.

The digital signals output from the demodulator 120 and the signal input/output unit 130 may include a stream signal (TS). The stream signal TS may be a signal in which a video signal, an audio signal, and a data signal are multiplexed. For example, the stream signal TS may be an MPEG-2 TS (Transport Stream) in which an MPEG-2 standard video signal, a Dolby AC-3 standard audio signal, and the like are multiplexed. Here, an MPEG-2 TS packet may include a 4-byte header and a 184-byte payload.
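The transport-stream packet layout described here (a 4-byte header plus a 184-byte payload, 188 bytes in total) can be illustrated with a small parser. The bit layout follows the MPEG-2 Systems standard (ISO/IEC 13818-1); the function and dictionary names are our own:

```python
TS_PACKET_SIZE = 188   # 4-byte header + 184-byte payload, as described above
SYNC_BYTE = 0x47       # every MPEG-2 TS packet starts with this sync byte

def parse_ts_header(packet: bytes):
    """Parse the 4-byte MPEG-2 TS packet header."""
    if len(packet) != TS_PACKET_SIZE or packet[0] != SYNC_BYTE:
        raise ValueError("not a valid MPEG-2 TS packet")
    return {
        "payload_unit_start": bool(packet[1] & 0x40),
        # The 13-bit PID identifies which elementary stream (video,
        # audio, or data) this packet carries.
        "pid": ((packet[1] & 0x1F) << 8) | packet[2],
        "continuity_counter": packet[3] & 0x0F,
        "payload": packet[4:],
    }

# Build a dummy packet carrying PID 0x0100 to demonstrate parsing.
pkt = bytes([SYNC_BYTE, 0x41, 0x00, 0x07]) + bytes(184)
hdr = parse_ts_header(pkt)
```

A demultiplexer, like the one inside the control unit 150 described below, would route packets to the video, audio, or data decoder based on this PID field.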

The interface unit 140 may receive an input signal for power control, channel selection, screen setting, and the like from the external input device 190, or may transmit a signal processed by the control unit 150 to the external input device 190. The interface unit 140 and the external input device 190 may be connected by wire or wirelessly.

As an example of the interface unit 140, a sensor unit may be provided, and the sensor unit is configured to sense an input signal from a remote control device such as a remote controller.

The network interface unit (not shown) provides an interface for connecting the video display device 100 to a wired/wireless network including the Internet. The network interface unit may include an Ethernet terminal or the like for connection to a wired network, and may use communication standards such as WLAN (Wi-Fi), WiBro (Wireless Broadband), WiMAX (World Interoperability for Microwave Access), and HSDPA (High Speed Downlink Packet Access) for connection to a wireless network.

The network interface unit (not shown) can access a predetermined web page via a network; that is, it can access a web page and transmit data to or receive data from a server. In addition, it can receive content or data provided by a content provider or a network operator, such as movies, advertisements, games, VOD, broadcast signals, and related information. It can also receive firmware update information and update files provided by a network operator, and can transmit data to the Internet, a content provider, or a network operator.

Also, the network interface unit (not shown) can select and receive a desired application among the applications open to the public via the network.

The control unit 150 may control the overall operation of the image display apparatus 100. More specifically, the control unit 150 is configured to control the generation and output of an image. For example, the control unit 150 may control the tuner 110 to tune to the RF broadcast signal corresponding to a channel selected by the user or a previously stored channel. Although not shown in the figure, the control unit 150 may include a demultiplexer, an image processor, a voice processor, a data processor, an OSD (On Screen Display) generator, and the like. In hardware, the control unit 150 may include a CPU, peripheral devices, and the like.

The control unit 150 may demultiplex the stream signal TS, for example, the MPEG-2 TS into a video signal, a voice signal, and a data signal.

The control unit 150 may perform image processing, e.g. decoding, on the demultiplexed video signal. In more detail, the control unit 150 can decode an MPEG-2 standard-encoded video signal using an MPEG-2 decoder, and can decode an H.264 standard-encoded video signal conforming to the DMB (Digital Multimedia Broadcasting) or DVB-H scheme using an H.264 decoder. In addition, the control unit 150 may perform image processing such as adjusting the brightness, tint, and color of the image signal. The image signal processed by the control unit 150 may be transmitted to the display unit 170, or may be transmitted to an external output device (not shown) through an external output terminal.
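The brightness adjustment mentioned above can be illustrated with a trivial sketch (an illustrative assumption, not the patent's implementation; real hardware would operate on the decoded video frame buffer rather than a Python list):

```python
def adjust_brightness(samples, offset):
    """Shift each 8-bit luma sample by `offset`, clamping to [0, 255]."""
    return [min(255, max(0, s + offset)) for s in samples]
```

The same clamp-after-shift pattern generalizes to tint and color adjustment, applied per channel instead of per luma sample.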

The control unit 150 may perform audio processing, e.g. decoding, on the demultiplexed audio signal. More specifically, the control unit 150 can decode an MPEG-2 standard-encoded audio signal using an MPEG-2 decoder, can decode an audio signal encoded in the MPEG-4 BSAC (Bit Sliced Arithmetic Coding) standard according to the terrestrial DMB scheme using an MPEG-4 decoder, and can decode an audio signal encoded in the MPEG-2 AAC (Advanced Audio Codec) standard according to the satellite DMB or DVB-H scheme using an AAC decoder. In addition, the control unit 150 may process bass, treble, volume control, and the like. The audio signal processed by the control unit 150 may be transmitted to the audio output unit 180, for example a speaker, or may be transmitted to an external output device.

The control unit 150 may perform signal processing on the analog baseband video / audio signal (CVBS / SIF). Here, the analog baseband video / audio signal CVBS / SIF input to the controller 150 may be an analog baseband video / audio signal output from the tuner 110 or the signal input / output unit 130. The signal-processed video signal is displayed through the display unit 170, and the signal-processed audio signal is output through the audio output unit 180.

The control unit 150 may perform data processing, for example decoding, on the demultiplexed data signal. Here, the data signal may include EPG (Electronic Program Guide) information, including broadcast information such as the start time and end time of the broadcast programs broadcast on each channel. The EPG information includes, for example, ATSC-PSIP (ATSC Program and System Information Protocol) information in the ATSC scheme and DVB-SI (DVB Service Information) information in the DVB scheme. The ATSC-PSIP information or DVB-SI information may be carried in the MPEG-2 TS.

The control unit 150 may perform a control operation for OSD processing. In more detail, the control unit 150 may generate an OSD signal for displaying various information in graphic or text form, based on at least one of the image signal and the data signal, or based on an input signal received from the external input device 190. The OSD signal may include various data such as a user interface screen, a menu screen, widgets, and icons of the image display apparatus 100.

The storage unit 160 may store programs for signal processing and control of the control unit 150, or may store video signals, audio signals, and data signals. The storage unit 160 may include at least one storage medium among flash memory, hard disk, multimedia card micro type, card type memory (for example, SD or XD memory), RAM (Random Access Memory), SRAM (Static Random Access Memory), ROM (Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), and PROM (Programmable Read-Only Memory).

The display unit 170 may convert a video signal, a data signal, an OSD signal, etc. processed by the control unit 150 into RGB signals to generate a driving signal, and thereby outputs an image. The display unit 170 may be implemented as a PDP (Plasma Display Panel), an LCD (Liquid Crystal Display), a TFT-LCD (Thin Film Transistor-Liquid Crystal Display), an OLED (Organic Light Emitting Diode) display, a flexible display, a 3D display, or an e-ink display. The display unit 170 may also be implemented as a touch screen to perform the functions of an input device.

The audio output unit 180 outputs the audio signal processed by the control unit 150, for example, a stereo signal or a 5.1-channel signal. The audio output unit 180 may be implemented by various types of speakers.

On the other hand, a photographing unit (not shown) for photographing the user may be further provided. The photographing unit may be implemented with a single camera, but the present invention is not limited thereto, and it may be implemented with a plurality of cameras. Image information captured by the photographing unit is input to the control unit 150.

In order to detect a user's gesture, a sensing unit (not shown) having at least one of a touch sensor, a voice sensor, a position sensor, and a motion sensor may be further provided in the image display apparatus 100. A signal sensed by the sensing unit may be transmitted to the control unit 150 through the interface unit 140.

The control unit 150 may detect the gesture of the user by using, separately or in combination, the images captured by the photographing unit (not shown) and the signals sensed by the sensing unit (not shown).

A power supply unit (not shown) supplies power to the entire video display device 100. In particular, it can supply power to the control unit 150, which may be implemented in the form of an SOC (System On Chip), to the display unit 170 for displaying images, and to the audio output unit 180 for outputting audio.

To this end, the power supply unit (not shown) may include a converter (not shown) for converting AC power into DC power. Meanwhile, when the display unit 170 is implemented as a liquid crystal panel having a plurality of backlight lamps, an inverter (not shown) capable of PWM operation for luminance variation or dimming driving may further be provided.

The external input device 190 is connected to the interface unit 140 by wire or wirelessly and transmits an input signal generated according to a user input to the interface unit 140. The external input device 190 may include a remote controller, a mouse, a keyboard, and the like. The remote controller can transmit an input signal to the interface unit 140 through Bluetooth, RF communication, infrared communication, UWB (Ultra Wideband), ZigBee, or the like. The remote controller may be implemented as a spatial remote controller, which detects motion of its main body in space and generates an input signal accordingly.

The video display apparatus 100 may receive digital broadcasting of the ATSC (8-VSB) system, the DVB-T (COFDM) system, the DVB-C (QAM) system, the ISDB-T (BST-OFDM) system, and the like. In addition, the video display device 100 may receive terrestrial DMB digital broadcasting, satellite DMB digital broadcasting, ATSC-M/H digital broadcasting, DVB-H (COFDM) digital broadcasting, MediaFLO (Media Forward Link Only) digital broadcasting, and the like. The video display device 100 may also be implemented as a digital broadcast receiver for cable, satellite communication, or IPTV.

Meanwhile, the video display device of the present invention is configured to provide a stereoscopic image. The term 3-D or 3D is used to describe a visual representation or display technique that attempts to reproduce a stereoscopic image (hereinafter referred to as a '3D image') with an optical illusion of depth. When separate images are presented to the left eye and the right eye, the visual cortex of the observer interprets the two images as a single 3D image.

Three-dimensional (3D) display technology employs 3D image processing and representation techniques for devices capable of displaying 3D images. Optionally, a device capable of 3D image display may need to use a special viewing device to effectively provide a three-dimensional image to an observer.

Examples of 3D image processing and representation include stereoscopic image / video capture, multi-view video / video capture using multiple cameras, and processing of two-dimensional images and depth information. Examples of the display device capable of 3D image display include an LCD (Liquid Crystal Display), a digital TV screen, and a computer monitor with appropriate hardware and / or software supporting 3D image display technology. Examples of special viewing devices include specialized glasses, goggles, headgear, and eyewear.

Specifically, 3D image display technologies include anaglyph stereoscopic images (commonly used with passive red-cyan glasses), polarized stereoscopic images (commonly used with passive polarized glasses), alternate-frame sequencing (commonly used with active shutter glasses/headgear), and autostereoscopic displays using lenticular lenses or barrier screens.

For 3D image processing, a stereo image or multi-view image can be compression-coded and transmitted in various ways, including MPEG (Moving Picture Experts Group) methods. For example, a stereo image or a multi-view image may be compression-coded and transmitted by the H.264/AVC (Advanced Video Coding) method. In this case, the receiving system can obtain the 3D image by decoding the received image with the inverse of the H.264/AVC coding scheme. The receiving system may be provided as one component of the 3D stereoscopic image display apparatus.

Hereinafter, the configuration of the 3D stereoscopic image display device 200 will be described with reference to FIG. 2.

FIG. 2 is a block diagram illustrating the configuration of a 3D image display apparatus to which an image processing apparatus according to embodiments of the present invention is applied.

As shown in FIG. 2, the 3D image display apparatus 200 according to embodiments of the present invention includes a tuner 210, a demodulator 220, an external device interface 230, a network interface 235, a storage unit 240, a user input interface unit 250, a control unit 270, a display unit 280, an audio output unit 285, and a 3D viewing device 295. Hereinafter, the components that are the same as those in FIG. 1 will be described with emphasis on the portions related to the output of the 3D image, and descriptions overlapping with those given above will be omitted.

The tuner (tuner unit) 210 receives the broadcast signal, detects the signal, corrects the error, and generates a transport stream for the left eye and right eye images.

The demodulator 220 may include a first decoder that decodes the reference-view video and a second decoder that decodes the extended-view video. In this case, a video stream is output to the first decoder by the demultiplexing unit if it corresponds to the reference-view video, and to the second decoder if it corresponds to the extended-view video.
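The routing above can be sketched as a small Python model; the packet tagging and decoder queues below are illustrative assumptions for clarity, not the actual MVC demultiplexer interface.

```python
def route_stream(packets):
    """Route demultiplexed video packets: reference-view payloads go to the
    first decoder's queue, extended-view payloads to the second decoder's."""
    first_decoder_queue, second_decoder_queue = [], []
    for view, payload in packets:
        if view == "reference":
            first_decoder_queue.append(payload)
        else:  # extended view
            second_decoder_queue.append(payload)
    return first_decoder_queue, second_decoder_queue

# Example: a mixed stream of tagged packets
packets = [("reference", b"frame0"), ("extended", b"frame0x"), ("reference", b"frame1")]
base_view, extended_view = route_stream(packets)
```

In a real receiver the view association would come from stream identifiers in the transport stream rather than string tags.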

The external device interface unit 230 can transmit or receive data with the connected external device. To this end, the external device interface unit 230 may include an A / V input / output unit (not shown) or a wireless communication unit (not shown).

The external device interface unit 230 can be connected, by wire or wirelessly, to an external device (not shown) such as a DVD (Digital Versatile Disc) player, a Blu-ray player, a game device, a camera, or a camcorder. The external device interface unit 230 transmits external video, audio, or data signals to the controller 270 of the video display device 200 through the connected external device. Also, the control unit 270 can output the processed video, audio, or data signal to the connected external device.

The A/V input/output unit may include a USB terminal, a CVBS (Composite Video Banking Sync) terminal, a component terminal, an S-video terminal (analog), a DVI (Digital Visual Interface) terminal, an HDMI (High Definition Multimedia Interface) terminal, an RGB terminal, a D-SUB terminal, and the like.

The wireless communication unit can perform short-range wireless communication with other electronic devices. The video display device 200 can be networked with other electronic devices according to communication standards such as Bluetooth, Radio Frequency Identification (RFID), IrDA (Infrared Data Association), UWB (Ultra Wideband), ZigBee, and DLNA (Digital Living Network Alliance).

Also, the external device interface unit 230 may be connected to the various set-top boxes via at least one of the various terminals described above to perform input / output operations with the set-top box.

On the other hand, the external device interface unit 230 can transmit and receive data to and from the 3D viewing device 295.

The network interface unit 235 provides an interface for connecting the video display device 200 to a wired/wireless network including the Internet. The network interface unit 235 may include an Ethernet terminal for connection to a wired network, and may use communication standards such as WLAN (Wireless LAN, Wi-Fi), Wibro (Wireless broadband), Wimax (World Interoperability for Microwave Access), and HSDPA (High Speed Downlink Packet Access) for connection to a wireless network.

The network interface unit 235 can receive, via the network, contents or data provided by the Internet, a content provider, or a network operator. In other words, it can receive contents such as movies, advertisements, games, VOD, broadcast signals, and related information provided by the Internet and content providers through the network. It can also receive firmware update information and update files provided by a network operator, and may transmit data to the Internet, a content provider, or a network operator.

The network interface unit 235 is connected to, for example, an IP (Internet Protocol) TV; it receives the video, audio, or data signals processed by an IPTV set-top box and delivers them to the controller 270, and can transmit the signals processed by the controller 270 back to the IPTV set-top box.

Meanwhile, the IPTV may include ADSL-TV, VDSL-TV, FTTH-TV, and the like, depending on the type of transmission network, and may include TV over DSL, Video over DSL, TV over IP (TVIP), Broadband TV (BTV), and the like. In addition, IPTV may also mean an Internet TV capable of accessing the Internet, or a full-browsing TV.

The storage unit 240 may store programs for signal processing and control by the control unit 270, and may store processed video, audio, or data signals.

In addition, the storage unit 240 may perform a function for temporarily storing video, audio, or data signals input to the external device interface unit 230. In addition, the storage unit 240 may store information on a predetermined broadcast channel through a channel memory function such as a channel map.

The storage unit 240 may include at least one storage medium of a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (for example, SD or XD memory), RAM, or ROM (EEPROM, etc.). The video display device 200 can reproduce files (moving picture files, still picture files, music files, document files, etc.) stored in the storage unit 240 and provide them to the user. Although FIG. 2 illustrates an embodiment in which the storage unit 240 is provided separately from the control unit 270, the scope of the present invention is not limited thereto; the storage unit 240 may be included in the controller 270.

For the user input interface unit 250, the description of the interface unit 140 given above with reference to FIG. 1 applies.

The control unit 270 can demultiplex the stream input through the tuner 210, the demodulation unit 220, or the external device interface unit 230, process the demultiplexed signals, and generate and output signals for video or audio output.

The image signal processed by the controller 270 may be input to the display unit 280 and displayed as an image corresponding to the image signal. In addition, the video signal processed by the controller 270 may be input to the external output device through the external device interface 230.

The audio signal processed by the control unit 270 may be output as sound through the audio output unit 285. The audio signal processed by the control unit 270 may also be input to an external output device through the external device interface unit 230.

The control unit 270 may include a demultiplexing unit, an image processing unit, and the like. The control unit 270 can control the overall operation of the video display device 200. For example, the controller 270 controls the tuner 210 to tune to the RF broadcast corresponding to a channel selected by the user or a previously stored channel.

In addition, the controller 270 can control the image display apparatus 200 according to a user command or an internal program input through the user input interface unit 250.

For example, the controller 270 controls the tuner 210 to input the signal of a channel selected according to a predetermined channel selection command received through the user input interface unit 250, and processes the video, audio, or data signals of the selected channel. The control unit 270 causes the display unit 280 or the audio output unit 285 to output the processed video or audio signal together with the channel information selected by the user.

As another example, according to an external device video playback command received through the user input interface unit 250, the control unit 270 may cause the video or audio signal from an external device connected through the external device interface unit 230, for example a camera or a camcorder, to be output through the display unit 280 or the audio output unit 285.

The control unit 270 may control the display unit 280 to display an image. For example, a broadcast image input through the tuner 210, an external input image input through the external device interface unit 230, an image input through the network interface unit 235, or an image stored in the storage unit 240 can be displayed on the display unit 280. The image displayed on the display unit 280 may be a still image or a moving image, and may be a 2D image or a 3D image.

The control unit 270 generates a 3D object for a predetermined object among the images displayed on the display unit 280, and displays the 3D object. For example, the object may be at least one of a connected web screen (newspaper, magazine, etc.), an EPG (Electronic Program Guide), various menus, widgets, icons, still images, moving images, and text. The 3D object may be processed to have a depth different from that of the image displayed on the display unit 280; in particular, the control unit 270 can process the 3D object so that it appears to protrude from the image displayed on the display unit 280.

The control unit 270 recognizes the position of the user based on an image photographed by a photographing unit (not shown). For example, the distance (z-axis coordinate) between the user and the image display apparatus 200 can be determined, as well as the x-axis and y-axis coordinates on the display unit 280 corresponding to the user position.
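One common way to obtain the z-axis coordinate mentioned above is to approximate the viewer's distance from the apparent size of the detected face in the captured image using a pinhole-camera model. The sketch below is illustrative only; the focal-length and face-width values are assumptions, not parameters of the present invention.

```python
def estimate_distance_cm(face_width_px, focal_length_px=600.0, face_width_cm=15.0):
    """Pinhole-camera estimate of viewer distance: z = f * W / w, where
    f is the camera focal length in pixels, W the real face width in cm,
    and w the detected face width in pixels."""
    if face_width_px <= 0:
        raise ValueError("face width must be positive")
    return focal_length_px * face_width_cm / face_width_px

# With the assumed camera, a face 90 px wide is estimated at 100 cm away.
distance = estimate_distance_cm(90)
```

The x/y screen coordinates would follow from the face's pixel position by a similar projection.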

Although not shown in FIG. 2, a channel browsing processing unit for generating thumbnail images corresponding to channel signals or external input signals may also be provided. The channel browsing processing unit receives the stream signal TS output from the demodulation unit 220, or a stream signal output from the external device interface unit 230, and extracts an image from the input stream signal to generate a thumbnail image. The generated thumbnail image may be input to the controller 270 as it is, or may be encoded in a stream format and then input to the controller 270.

The control unit 270 may display a thumbnail list including a plurality of thumbnail images on the display unit 280 using the input thumbnail images. The thumbnail list may be displayed in a simple view mode on a part of the screen while a predetermined image is displayed on the display unit 280, or in a full view mode occupying most of the display unit 280. The thumbnail images in the thumbnail list can be updated sequentially.

The display unit 280 converts the video signal, data signal, OSD signal, or control signal processed by the control unit 270, or the video signal, data signal, or control signal received from the external device interface unit 230, to generate a drive signal and display an image.

The display unit 280 may be a PDP, an LCD, an OLED, a flexible display, or the like. In particular, a three-dimensional display (3D display) may be possible according to an embodiment of the present invention.

The display unit 280 for three-dimensional image viewing can be divided into an additional display method and a single display method. In the single display method, the display unit 280 realizes a 3D image by itself without a separate additional display such as glasses (non-spectacle 3D display); various methods such as the lenticular method and the parallax barrier method may be applied. In the additional display method, a 3D image is implemented using an additional display in addition to the display unit 280; various methods such as a head-mounted display (HMD) type and a glasses type can be applied.

The glasses type can be further divided into a passive type such as polarizing glasses and an active type such as shutter glasses. The head-mounted display type can likewise be divided into a passive type and an active type.

The 3D viewing apparatus (3D glasses) 295 for viewing a stereoscopic image can include passive polarized glasses or active shutter glasses, and is described here as a concept that also includes the head-mounted type described above.

Meanwhile, the display unit 280 may be configured as a touch screen and used as an input device in addition to the output device.

The audio output unit 285 receives a signal processed by the control unit 270, for example, a stereo signal, a 3.1-channel signal, or a 5.1-channel signal, and outputs it as sound. The audio output unit 285 may be implemented with various types of speakers.

Meanwhile, in order to detect the user's gesture, a sensing unit (not shown) having at least one of a touch sensor, a voice sensor, a position sensor, and a motion sensor may be further included in the image display device 200. A signal sensed by the sensing unit (not shown) is transmitted to the controller 270 through the user input interface unit 250.

The control unit 270 can detect the user's gesture from the images captured by the photographing unit (not shown), from the signals sensed by the sensing unit (not shown), or from a combination of the two.

The remote control device 260 transmits the user input to the user input interface unit 250. To this end, the remote control device 260 can use Bluetooth, RF (radio frequency) communication, infrared (IR) communication, UWB (Ultra Wideband), ZigBee, or the like. Also, the remote control device 260 can receive the video, audio, or data signal output from the user input interface unit 250 and display it or output it as sound on the remote control device 260 itself.

The image display apparatus 200 described above may be a fixed-type digital broadcast receiver capable of receiving at least one of ATSC (8-VSB) digital broadcasting, DVB-T (COFDM) digital broadcasting, and ISDB-T (BST-OFDM) digital broadcasting. In addition, as a portable type, it may be a digital broadcast receiver capable of receiving at least one of terrestrial DMB broadcasting, satellite DMB broadcasting, ATSC-M/H broadcasting, DVB-H (COFDM) broadcasting, and MediaFLO (Media Forward Link Only) broadcasting. It may also be a digital broadcast receiver for cable, satellite communications, or IPTV.

The video display device described herein may include a TV receiver, a mobile phone, a smart phone, a notebook computer, a digital broadcasting terminal, a PDA (Personal Digital Assistant), a PMP (Portable Multimedia Player), and the like.

The configuration of the image display apparatus 200 shown in FIG. 2 is a block diagram for embodiments of the present invention. Each component of the diagram may be integrated, added, or omitted according to the specifications of the video display device 200 as actually implemented. That is, two or more components may be combined into one, or one component may be divided into two or more, as necessary. In addition, the functions performed by each block are intended to illustrate the embodiments of the present invention, and the specific operations and devices do not limit the scope of the present invention.

The video signal decoded by the video display device 200 may be a 3D video signal of various formats: for example, a 3D image signal composed of a color image and a depth image, or a 3D image signal composed of a plurality of viewpoint image signals. The plurality of viewpoint image signals may include, for example, a left-eye image signal and a right-eye image signal. Formats of the 3D video signal include a side-by-side format in which the left-eye image signal L and the right-eye image signal R are arranged left and right, a top-down format in which they are arranged up and down, an interlaced format in which the left-eye and right-eye image signals are mixed line by line, a checker box format in which the left-eye and right-eye image signals are mixed box by box, and the like.
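The side-by-side and top-down packings above can be illustrated with a small sketch that splits a packed frame (modeled here as a plain list of pixel rows) into left-eye and right-eye images; this is a simplified model for clarity, not the decoder's actual frame handling.

```python
def split_3d_frame(frame, fmt):
    """Split a frame-packed 3D picture into (left_eye, right_eye) images.
    `frame` is a list of rows; each row is a list of pixel values."""
    height, width = len(frame), len(frame[0])
    if fmt == "side_by_side":   # L and R arranged left and right
        left = [row[: width // 2] for row in frame]
        right = [row[width // 2:] for row in frame]
    elif fmt == "top_down":     # L and R arranged up and down
        left = frame[: height // 2]
        right = frame[height // 2:]
    else:
        raise ValueError("unsupported 3D frame format: " + fmt)
    return left, right

# A tiny 2x4 side-by-side packed frame
packed = [[1, 2, 3, 4],
          [5, 6, 7, 8]]
left_eye, right_eye = split_3d_frame(packed, "side_by_side")
```

The interlaced and checker-box formats would instead select alternating lines or alternating blocks from the packed frame.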

Also, the video display device described above can be applied to a mobile terminal. The mobile terminal may be a mobile phone, a smart phone, a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation device, a slate PC, a tablet PC, an ultrabook, and the like.

When a video display device is used as a mobile terminal, a wireless communication unit may be added.

The wireless communication unit may include one or more modules that enable wireless communication between the video display device and a wireless communication system, or between the mobile terminal and the network in which the mobile terminal is located. For example, the wireless communication unit may include at least one of a broadcast receiving module, a mobile communication module, a wireless Internet module, a short-range communication module, and a location information module.

The broadcast receiving module receives broadcast signals and / or broadcast-related information from an external broadcast management server through a broadcast channel.

The broadcast channel may include a satellite channel and a terrestrial channel. The broadcast management server may refer to a server that generates and transmits broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and transmits them to the terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and a broadcast signal in which a data broadcast signal is combined with a TV or radio broadcast signal.

The broadcast-related information may refer to information related to a broadcast channel, a broadcast program, or a broadcast service provider. The broadcast-related information may also be provided through a mobile communication network, in which case it may be received by the mobile communication module.

The broadcast-related information may exist in various forms, for example, an EPG (Electronic Program Guide) of DMB (Digital Multimedia Broadcasting) or an ESG (Electronic Service Guide) of DVB-H (Digital Video Broadcast-Handheld).

For example, the broadcast receiving module may receive digital broadcast signals using digital broadcasting systems such as DMB-T (Digital Multimedia Broadcasting-Terrestrial), DMB-S (Digital Multimedia Broadcasting-Satellite), MediaFLO (Media Forward Link Only), DVB-H (Digital Video Broadcast-Handheld), and ISDB-T (Integrated Services Digital Broadcast-Terrestrial). Of course, the broadcast receiving module may be adapted to other broadcasting systems as well as the digital broadcasting systems described above.

The broadcast signal and / or broadcast related information received through the broadcast receiving module may be stored in a memory.

The mobile communication module transmits and receives radio signals to and from at least one of a base station, an external terminal, and a server on a mobile communication network. The wireless signal may include various types of data depending on a voice call signal, a video call signal or a text / multimedia message transmission / reception.

The mobile communication module is configured to implement a video call mode and a voice call mode. The video call mode refers to a call made while viewing the other party's video, and the voice call mode refers to a call made without viewing the other party's video. To implement the video call mode and the voice call mode, the mobile communication module is configured to transmit and receive at least one of voice and image.

The wireless Internet module refers to a module for wireless Internet access, and may be built into or external to the mobile terminal. Examples of wireless Internet technology that can be used include WLAN (Wireless LAN), WiFi Direct, DLNA (Digital Living Network Alliance), Wibro (Wireless broadband), Wimax (World Interoperability for Microwave Access), and HSDPA.

The short-range communication module is a module for short-range communication. As short-range communication technology, Bluetooth™, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee, Near Field Communication (NFC), Wi-Fi Direct, or the like may be used.

The location information module is a module for acquiring the location of the mobile terminal, and representative examples thereof include a Global Position System (GPS) module or a Wireless Fidelity (WiFi) module.

Meanwhile, when the display unit and a sensor that detects a touch operation (hereinafter referred to as a 'touch sensor') form a mutual layer structure (hereinafter referred to as a 'touch screen'), the display unit can also be used as an input device in addition to an output device. The touch sensor may have the form of, for example, a touch film, a touch sheet, or a touch pad.

The touch sensor may be configured to convert a change in pressure applied to a specific portion of the display unit, or in capacitance occurring at a specific portion of the display unit, into an electrical input signal. The touch sensor can be configured to detect not only the position and area touched by the touch object but also the pressure at the time of the touch. Here, the touch object is an object that applies a touch to the touch sensor, such as a finger, a touch pen, a stylus pen, or a pointer.

If there is a touch input to the touch sensor, the corresponding signal(s) is sent to a touch controller. The touch controller processes the signal(s) and then transmits the corresponding data to the control unit. In this way, the control unit can know which area of the display unit has been touched.

The position detection unit 291 detects the position of the user through a head-tracking technique and outputs the detected user position to the control unit 270. For example, the position detection unit 291 can detect the position of the user by detecting and tracking the user's head in the image photographed through the camera 293.

The voice recognition unit 292 recognizes the user's voice and outputs the recognized voice to the control unit 270.


The controller 270 determines whether the user is located in a sweet spot or a dead zone based on the detected user position. For example, the controller 270 determines whether the user is located in the left sweet spot, the right sweet spot, or the dead zone.
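This sweet-spot/dead-zone decision can be sketched as a simple range test on the detected lateral position. The zone boundaries below are made-up example values chosen for illustration, not figures from the disclosure.

```python
def classify_position(x_cm, sweet_spots):
    """Return the label of the sweet spot containing the lateral position
    x_cm, or 'dead_zone' if the position falls outside every sweet spot."""
    for label, lo, hi in sweet_spots:
        if lo <= x_cm <= hi:
            return label
    return "dead_zone"

# Illustrative sweet-spot ranges (cm, relative to screen center)
spots = [("left_sweet_spot", -30.0, -10.0),
         ("center_sweet_spot", -5.0, 5.0),
         ("right_sweet_spot", 10.0, 30.0)]
zone = classify_position(7.0, spots)  # 7 cm falls between sweet spots
```

In practice the zone boundaries would be derived from the panel's lenticular/barrier geometry and the viewing distance.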

FIG. 3 is a diagram illustrating an example of a non-eyeglass (autostereoscopic) 3D display with multiple viewpoints (for example, first to fourth viewpoints) for explaining the present invention, showing the viewing areas classified according to the user's position.

As shown in FIG. 3, in a non-eyeglass 3D display of the multi-viewpoint system (V1 (first view), V2 (second view), V3 (third view), V4 (fourth view)), the sweet spots and dead zones of the image are distributed across the center, left, and right of the screen. The user can enjoy a high-quality 3D image when positioned in a sweet spot; a user in a dead zone, however, experiences visual fatigue without properly perceiving the 3D effect.

The controller 270 may detect the user position through the position detector 291 and control the sweet spot of the image according to the user position so as to avoid the dead zone. The ability to avoid the dead zone may depend on the performance of the head-tracking algorithm, the camera performance, the ambient illuminance, occlusion of the user's head, the movement speed of the user, and the like.

When the user is in a dead zone, the controller 270 displays guidance information on the display unit 280 prompting the user to move to a sweet spot (to avoid the dead zone). That is, the controller 270 may guide the user out of the dead zone by displaying guide information on the view image that causes the dead zone.

FIG. 4A is an exemplary view showing information displayed to a user located in a sweet spot.

As shown in FIG. 4A, when the user is located in the sweet spot, the controller 270 displays on the display unit 280 a guide indicator 4-1 indicating that the user is located in the sweet spot. The guide information indicating that the user is located in the sweet spot may be a letter, a symbol, an icon, and the like.

FIG. 4B is an exemplary view showing information displayed to a user located in the dead zone.

As shown in FIG. 4B, when the user is located in the dead zone, the controller 270 displays on the display unit 280 guide information 4-2 indicating that the user is located in the dead zone. The guide information indicating that the user is located in the dead zone may be a character, a symbol, an icon, an arrow, and the like. For example, when the user is located in the dead zone, the control unit 270 displays on the display unit 280 an arrow 4-2 requesting the user to move right or left to avoid the dead zone. The control unit 270 may also display on the display unit 280 a text message 4-3 requesting the user to move right or left toward the sweet spot.

The control unit 270 displays the guidance information 4-2 and 4-3 indicating that the user is located in the dead zone on the display unit 280, and then determines whether a user voice has been received. For example, after displaying the guide information 4-2 and 4-3 on the display unit 280, the control unit 270 determines whether a user voice requesting a shift of the sweet spot has been received.

The controller 270 shifts the sweet spot based on the user voice requesting the shift of the sweet spot. That is, rather than making the user move so as to be located in the sweet spot, the controller 270 shifts the stereoscopic image displayed on the display unit 280 so that the sweet spot moves to the user.

Each time a user voice requesting a shift of the sweet spot is received, the control unit 270 moves the sweet spot by shifting the stereoscopic image right or left by one or more views (for example, by V1, or by V1 to V2). As the sweet spot moves, the position of the dead zone shifts accordingly.
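The view shift can be modeled as rotating the assignment of views V1–V4 to the display's output columns: each voice command advances the mapping by one view, which moves the sweet spot (and the dead zone with it). The column mapping below is a toy model of the lenticular/barrier column assignment, not the actual panel driver.

```python
def shift_sweet_spot(view_map, shift, n_views=4):
    """Rotate the per-column view assignment by `shift` views.
    view_map holds the view index (0..n_views-1) emitted by each column."""
    return [(v + shift) % n_views for v in view_map]

columns = [0, 1, 2, 3, 0, 1, 2, 3]      # V1..V4 repeating across the panel
shifted = shift_sweet_spot(columns, 1)  # one voice command: shift by one view
```

A shift by the full view count returns the original mapping, reflecting the periodic layout of sweet spots in front of the panel.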

The controller 270 may automatically move the sweet spot in real time so that the user is positioned at the sweet spot in response to a change in position of the user.

Hereinafter, an image processing apparatus and method will be described in which the position of a user (viewer) for viewing a 3D image can be registered easily and quickly, and a 3D image is provided based on the registered user position so that the user (viewer) can easily and conveniently view a high-quality stereoscopic image.

FIG. 5 is a flowchart illustrating an image processing method according to an embodiment of the present invention.

First, the control unit 270 sets a plurality of viewing zones in advance according to the 3D image viewing distance (S11). For example, the controller 270 sets a first viewing zone, a second viewing zone, a third viewing zone, and so on, according to the 3D image viewing distance.

FIG. 6 is an illustration of multiple viewing zones according to an embodiment of the present invention.

As shown in FIG. 6, the controller 270 sets the first viewing zone 6-1, the second viewing zone 6-2, and the third viewing zone 6-3 in advance according to the 3D image viewing distance, and the range of each viewing zone can be varied according to the designer's intention.
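Step S11 can be sketched as dividing the usable viewing-distance range into equal-depth zones. The 1 m–4 m range and the equal division below are illustrative assumptions; the disclosure leaves the zone ranges to the designer.

```python
def build_viewing_zones(near_cm, far_cm, n_zones=3):
    """Divide the distance range [near_cm, far_cm] into n_zones equal-depth
    viewing zones, returning (label, near, far) triples."""
    depth = (far_cm - near_cm) / n_zones
    return [("viewing_zone_%d" % (i + 1),
             near_cm + i * depth,
             near_cm + (i + 1) * depth)
            for i in range(n_zones)]

zones = build_viewing_zones(100.0, 400.0)  # e.g. 1 m to 4 m in front of the panel
```

Unequal zone depths would simply replace the uniform `depth` step with per-zone boundaries.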

The controller 270 sets viewer position registration information corresponding to each of the plurality of viewing zones in advance (S12). For example, the control unit 270 sets in advance first viewer position registration information corresponding to the first viewing zone 6-1, second viewer position registration information corresponding to the second viewing zone 6-2, and third viewer position registration information corresponding to the third viewing zone 6-3.

FIG. 7 is an exemplary view showing viewer position registration information according to an embodiment of the present invention.

As shown in FIG. 7, each piece of viewer position registration information according to the embodiment of the present invention includes: a guide indicator 7-1 for positioning the viewer in a sweet spot at the optimum viewing distance corresponding to one of the plurality of viewing zones; a stereoscopic image (stereoscopic image sample) 7-2 having the optimum viewing distance corresponding to that viewing zone; and viewer information 7-3 composed of viewing zone information (viewer position information) 7-3a having the optimum viewing distance corresponding to that viewing zone and a viewer information input item 7-3b. The optimum viewing distance means a viewing position that provides an optimal stereoscopic effect.

The guide information 7-1 includes: information 7-1a requesting (instructing) the viewer to move left or right so that the viewer position falls within the sweet spot; information 7-1b indicating that the viewer position is located in the sweet spot; and information 7-1c requesting (instructing) the viewer to move forward or backward so as to be located in the sweet spot.

The stereoscopic image 7-2 may include a plurality of stereoscopic image samples 7-2a, each having the optimum viewing distance corresponding to one of the plurality of viewing zones. A stereoscopic image sample is a stereoscopic image showing the maximum protrusion and the maximum sense of depth, and serves as a reference stereoscopic image for confirming the stereoscopic performance of the display.

The viewing zone information (viewer position information) includes: information 7-3a indicating one of the plurality of viewing zones; and information 7-3b indicating viewer information (e.g., viewer name, viewer photograph, etc.) corresponding to that viewing zone.

The first viewer position registration information includes: first guide information 7-1 for positioning the user (viewer) in the sweet spot at the optimum viewing distance corresponding to the first viewing zone; a stereoscopic image sample 7-2 having the optimum viewing distance corresponding to the first viewing zone; and the first viewing zone information (viewer position information, position 1) and viewer information 7-3 having the optimum viewing distance corresponding to the first viewing zone. The second viewer position registration information includes: second guide information 7-1 for positioning the user (viewer) in the sweet spot at the optimum viewing distance corresponding to the second viewing zone; a second stereoscopic image sample 7-2 having the optimum viewing distance corresponding to the second viewing zone; and the second viewing zone information (viewer position information, position 2) and viewer information 7-3 having the optimum viewing distance corresponding to the second viewing zone. Likewise, the third viewer position registration information includes: third guide information 7-1; a stereoscopic image sample 7-2 having the optimum viewing distance corresponding to the third viewing zone; and the third viewing zone information (viewer position information, position 3) and viewer information 7-3.
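The registration record described above could be represented as a simple structure like the following; the field names and sample values are hypothetical, chosen only to mirror items 7-1 through 7-3b of FIG. 7.

```python
from dataclasses import dataclass


@dataclass
class ViewerRegistration:
    """One viewer-position registration record (cf. FIG. 7; names illustrative)."""
    zone: str          # viewing-zone information 7-3a, e.g. "position_1"
    viewer_name: str   # viewer information input item 7-3b
    guide_text: str    # guide indicator 7-1 shown while positioning
    sample_image: str  # stereoscopic image sample 7-2 (path, illustrative)


first_reg = ViewerRegistration(
    zone="position_1",
    viewer_name="viewer_A",
    guide_text="move left until centered",
    sample_image="sample_zone1.img",
)
```

One such record would be prepared per viewing zone in step S12.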

If any one of the first to third viewing zone information (positions 1 to 3) is selected by the user (viewer) (S13), the controller 270 displays the viewer position registration information corresponding to the selected viewing zone information (S14). For example, when the first viewing zone information (position 1) among the first to third viewing zone information (positions 1 to 3) is selected by the user (viewer), the controller 270 displays the first viewer position registration information corresponding to the selected viewing zone information (position 1) on the display unit 280. If the second viewing zone information (position 2) is selected by the user (viewer), the controller 270 displays the second viewer position registration information corresponding to the selected viewing zone information (position 2) on the display unit 280. When the third viewing zone information (position 3) is selected by the user (viewer), the controller 270 displays the third viewer position registration information corresponding to the selected viewing zone information (position 3) on the display unit 280.
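Steps S13 and S14 amount to a simple lookup: the selected viewing zone determines which registration record is shown. A minimal sketch, with hypothetical names (`select_zone`, the dictionary keys) standing in for whatever the controller 270 actually does internally:

```python
# Hypothetical sketch of steps S13-S14: selecting viewing zone information
# causes the matching viewer position registration information to be displayed.
def select_zone(registrations, selected_zone):
    """Return the registration record whose zone matches the selection."""
    for reg in registrations:
        if reg["zone"] == selected_zone:
            return reg  # this record would be shown on the display unit 280
    raise ValueError("unknown viewing zone")

regs = [{"zone": n, "label": f"position {n}"} for n in (1, 2, 3)]
```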

Assuming that the current position of the viewer (user) is the second viewing area, the viewer (user) sequentially selects the first to third viewing zone information (positions 1 to 3) while in the second viewing area, views the first to third viewer position registration information in turn, and can confirm that, among them, the second viewer position registration information provides the optimum viewing distance. Therefore, the viewer (user) can register the second viewer position registration information as his or her viewing position. On the other hand, assuming that the current position of the viewer (user) is the third viewing area, the viewer (user) sequentially selects the first to third viewing zone information (positions 1 to 3) while in the third viewing area, views the first to third viewer position registration information, and can confirm that the third viewer position registration information provides the optimum viewing distance. Therefore, the viewer (user) can register the third viewer position registration information as his or her viewing position.

When the controller 270 receives a signal requesting that the currently displayed viewer position registration information (e.g., the second viewer position registration information) among the first to third viewer position registration information be registered as the viewing position (S15), it registers the currently displayed viewer position registration information (e.g., the second viewer position registration information) as the viewing position (S16). For example, when a first viewer name is input to the viewer information input item (7-3b) included in the currently displayed viewer position registration information (e.g., the second viewer position registration information) among the first to third viewer position registration information, the controller 270 registers the currently displayed viewer position registration information (e.g., the second viewer position registration information) as the first viewing position. When a second viewer name is input to the viewer information input item (7-3b) included in the currently displayed viewer position registration information (e.g., the third viewer position registration information) among the first to third viewer position registration information, the controller 270 registers the currently displayed viewer position registration information (e.g., the third viewer position registration information) as the second viewing position. That is, the controller 270 can register a plurality of viewer positions.
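The registration step (S15-S16) keys each registered viewing position to the viewer information that was entered, so repeating the step with a different viewer name yields multiple registered positions. A minimal sketch under that assumption; all names here are hypothetical:

```python
# Hypothetical sketch of steps S15-S16: entering a viewer name into the
# input item (7-3b) registers the currently displayed record as a viewing
# position; a second name registers a second, independent position.
def register_position(registry, viewer_name, displayed_record):
    registry[viewer_name] = displayed_record
    return registry

registry = {}
register_position(registry, "viewer A", {"zone": 2})  # first viewing position
register_position(registry, "viewer B", {"zone": 3})  # second viewing position
```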

When the viewer information input item (7-3b) included in the currently displayed viewer position registration information (e.g., the third viewer position registration information) among the first to third viewer position registration information is selected, the controller 270 may recognize the viewer's face through the camera 293, automatically input the recognized face image as the viewer information, and register the currently displayed viewer position registration information (e.g., the third viewer position registration information) as a viewing position.
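The automatic input described above can be sketched as a callback wired to the input item: selecting it triggers a capture, and the result becomes the viewer information. The `capture_face` parameter is a hypothetical stand-in for whatever interface the camera 293 exposes; nothing here is prescribed by the patent.

```python
# Hypothetical sketch of the automatic viewer-information input: selecting the
# input item (7-3b) triggers face recognition, and the recognized face image
# is stored as the viewer information of the displayed record.
def auto_fill_viewer_info(record, capture_face):
    record["viewer_info"] = capture_face()  # recognized face image
    return record

# A stub capture function stands in for the real camera interface:
record = auto_fill_viewer_info({"zone": 3}, lambda: "face_image_bytes")
```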

FIG. 8 is an exemplary view illustrating registered viewing position information according to an embodiment of the present invention.

As shown in FIG. 8, the controller 270 displays the registered viewing position information on the display unit 280 when information requesting the registered viewing position information is received from the user. The registered viewing position information includes viewing area information (viewer position information) (Pos. Rec.: #1) (8-5) indicating the registered viewing area; viewer information (e.g., viewer name, viewer image, etc.) (8-6) associated with (linked to) the viewing area information (viewer position information) (8-5); and information (e.g., content) (8-3) indicating that the viewer position of the registered viewing area is a sweet spot. Here, reference numeral 8-1 in FIG. 8 denotes a sweet spot, and reference numeral 8-2 in FIG. 8 denotes a dead zone. The controller 270 may register a plurality of pieces of viewer position information for one viewing area. The viewer information (e.g., viewer name, viewer photograph, etc.) (8-6) can be edited.

When information requesting the registered viewing position information is received from the user, the controller 270 activates the content (8-4) corresponding to the viewer information (8-6) associated with (linked to) the last-used viewing area information (8-5), and may deactivate the content (8-4) corresponding to the viewer information (8-6) associated with (linked to) viewing area information (8-5) used before the last.
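The activate/deactivate behavior just described reduces to marking the last-used entry active and all others inactive. A minimal sketch, assuming each registered entry is keyed by its viewing zone (the function name and dictionary layout are hypothetical):

```python
# Hypothetical sketch: when registered viewing-position information is
# requested, only the content (8-4) tied to the last-used viewing area
# information (8-5) is activated; the rest is deactivated.
def activation_states(entries, last_used_zone):
    """Map each registered entry's zone to True (active) or False (inactive)."""
    return {e["zone"]: (e["zone"] == last_used_zone) for e in entries}

states = activation_states([{"zone": 1}, {"zone": 2}], last_used_zone=2)
```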

When information for switching the viewing area information (viewer position information) (8-5) indicating the registered viewing area is received, the controller 270 switches the viewing area information (viewer position information) (8-5) indicating the registered viewing area. For example, when a key signal (e.g., a direction key signal of the remote control device 260) for changing the viewing area information (viewer position information) indicating the registered first viewing area to the viewing area information (viewer position information) indicating the registered second viewing area is received, the controller 270 switches the viewing area information (viewer position information) indicating the registered first viewing area to the viewing area information (viewer position information) indicating the registered second viewing area.
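Switching among registered viewing-area entries with a direction key can be sketched as stepping through a list, wrapping at the ends. The wrap-around is an assumption for illustration; the patent only states that a direction key signal switches from one registered entry to another.

```python
# Hypothetical sketch of switching registered viewing-area entries (8-5)
# with a remote-control direction key signal; wraps past either end.
def next_viewing_area(registered_zones, current_index, direction):
    """direction: +1 for one direction key, -1 for the opposite key."""
    return (current_index + direction) % len(registered_zones)

zones = ["zone 1", "zone 2", "zone 3"]
idx = next_viewing_area(zones, 0, +1)  # switch from the first to the second entry
```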

The controller 270 can recognize the viewer's face through the camera 293 when the 3D image mode is selected. If a plurality of viewers are recognized, the controller 270 may provide a stereoscopic image based on the viewer position registration information registered by the final (most recently registered) viewer.

As described above, the image processing apparatus and method according to the embodiments of the present invention allow user (viewer) position registration for 3D image viewing to be performed easily and quickly, and provide stereoscopic images based on the registered user position, so that users (viewers) can enjoy high-quality 3D images easily and conveniently.

It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the embodiments disclosed in the present invention are intended to illustrate rather than limit the scope of the present invention, and the scope of the technical idea of the present invention is not limited by these embodiments. The scope of protection of the present invention should be construed according to the following claims, and all technical ideas within the scope of equivalents should be construed as falling within the scope of the present invention.

270: control unit 280: display unit
291: position detecting section 292:
293: Camera

Claims (18)

An image processing apparatus comprising:
a display unit configured to display a multi-view stereoscopic image; and
a controller configured to preset a plurality of viewing zones according to the viewing distance of the stereoscopic image, preset a plurality of pieces of viewer position registration information corresponding to each of the plurality of viewing zones, sequentially display the plurality of pieces of viewer position registration information on the display unit at the request of a viewer, and register, as viewing position information of the viewer, any one piece of viewer position registration information selected by the viewer from among the sequentially displayed plurality of pieces of viewer position registration information.
The apparatus of claim 1, wherein each of the plurality of pieces of viewer position registration information includes:
guide information having an optimum viewing distance corresponding to any one of the plurality of viewing zones;
a stereoscopic image having the optimum viewing distance corresponding to the any one viewing zone; and
zone information and a viewer information input item having the optimum viewing distance corresponding to the any one viewing zone.
The apparatus of claim 2, wherein the guide information includes:
information requesting the viewer to move leftward or rightward so that the position of the viewer is located in the sweet spot;
information indicating that the position of the viewer is located in the sweet spot; and
information requesting the viewer to move forward or backward so that the position of the viewer is located in the sweet spot.
The apparatus of claim 2, wherein the stereoscopic image includes:
a plurality of stereoscopic images each having an optimum viewing distance corresponding to one of the plurality of viewing zones.
The apparatus of claim 2, wherein the zone information includes:
information indicating any one of the plurality of viewing zones; and
information indicating a viewer information input item for registering viewer information corresponding to the any one viewing zone.
The apparatus of claim 1, wherein the controller displays, on the display unit, the viewer position registration information corresponding to the selected viewing zone from among the plurality of pieces of viewer position registration information when any one of the plurality of viewing zones is selected.
The apparatus of claim 2, wherein the controller registers the first viewer position registration information as the viewing position information of the viewer when viewer information is input to the viewer information input item included in the first viewer position registration information among the plurality of pieces of viewer position registration information.
The apparatus of claim 2, wherein, when the viewer information input item included in the first viewer position registration information is selected from among the plurality of pieces of viewer position registration information, the controller recognizes the face of the viewer through a camera and automatically inputs the recognized face image as the viewer information.
The apparatus of claim 1, wherein the controller displays the registered viewing position information on the display unit when information requesting the registered viewing position information is received, and
wherein the registered viewing position information includes:
viewing area information indicating the registered viewing area;
viewer information associated with the viewing area information; and
information indicating that the viewer position of the registered viewing area is a sweet spot.
An image processing method comprising:
displaying a multi-view stereoscopic image on a display unit;
presetting a plurality of viewing zones according to the viewing distance of the stereoscopic image;
presetting a plurality of pieces of viewer position registration information corresponding to each of the plurality of viewing zones;
sequentially displaying the plurality of pieces of viewer position registration information on the display unit at the request of a viewer; and
registering, as viewing position information of the viewer, any one piece of viewer position registration information selected by the viewer from among the sequentially displayed plurality of pieces of viewer position registration information.
The method of claim 10, wherein each of the plurality of pieces of viewer position registration information includes:
guide information having an optimum viewing distance corresponding to any one of the plurality of viewing zones;
a stereoscopic image having the optimum viewing distance corresponding to the any one viewing zone; and
zone information and a viewer information input item having the optimum viewing distance corresponding to the any one viewing zone.
The method of claim 11, wherein the guide information includes:
information requesting the viewer to move leftward or rightward so that the position of the viewer is located in the sweet spot;
information indicating that the position of the viewer is located in the sweet spot; and
information requesting the viewer to move forward or backward so that the position of the viewer is located in the sweet spot.
The method of claim 11, wherein the stereoscopic image includes:
a plurality of stereoscopic images each having an optimum viewing distance corresponding to one of the plurality of viewing zones.
The method of claim 11, wherein the zone information includes:
information indicating any one of the plurality of viewing zones; and
information indicating a viewer information input item for registering viewer information corresponding to the any one viewing zone.
The method of claim 10, wherein displaying the plurality of pieces of viewer position registration information on the display unit includes displaying, on the display unit, the viewer position registration information corresponding to the selected viewing zone from among the plurality of pieces of viewer position registration information when any one of the plurality of viewing zones is selected.
The method of claim 11, wherein registering the viewing position information of the viewer includes registering the first viewer position registration information as the viewing position information of the viewer when viewer information is input to the viewer information input item included in the first viewer position registration information among the plurality of pieces of viewer position registration information.
The method of claim 11, further comprising:
recognizing the viewer's face through a camera when the viewer information input item included in the first viewer position registration information is selected from among the plurality of pieces of viewer position registration information; and
automatically inputting the recognized face image as the viewer information.
The method of claim 10, further comprising displaying the registered viewing position information on the display unit when information requesting the registered viewing position information is received,
wherein the registered viewing position information includes:
viewing area information indicating the registered viewing area;
viewer information associated with the viewing area information; and
information indicating that the viewer position of the registered viewing area is a sweet spot.
KR20130101365A 2013-08-26 2013-08-26 Image controlling apparatus and method thereof KR20150024198A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR20130101365A KR20150024198A (en) 2013-08-26 2013-08-26 Image controlling apparatus and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR20130101365A KR20150024198A (en) 2013-08-26 2013-08-26 Image controlling apparatus and method thereof

Publications (1)

Publication Number Publication Date
KR20150024198A true KR20150024198A (en) 2015-03-06

Family

ID=53020957

Family Applications (1)

Application Number Title Priority Date Filing Date
KR20130101365A KR20150024198A (en) 2013-08-26 2013-08-26 Image controlling apparatus and method thereof

Country Status (1)

Country Link
KR (1) KR20150024198A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11016442B2 (en) 2016-12-22 2021-05-25 Samsung Electronics Co., Ltd. Apparatus for displaying holographic images and method of controlling the same


Similar Documents

Publication Publication Date Title
KR101788060B1 (en) Image display device and method of managing contents using the same
CN107567713B (en) Television and method for controlling television
KR101735610B1 (en) Method for operating an apparatus for displaying image
KR102063075B1 (en) Service system, digital device and method of processing a service thereof
KR101349276B1 (en) Video display device and operating method therefor
US20120050267A1 (en) Method for operating image display apparatus
US20110119611A1 (en) Method for playing contents
KR20140109168A (en) Image controlling apparatus and method thereof
US20130291017A1 (en) Image display apparatus and method for operating the same
KR102158210B1 (en) Speech recognition apparatus and method thereof
KR101897773B1 (en) Stereo-scopic image capture appartus capable of selecting captuer mode with respect to stereo-scopic image and method thereof
KR20160009415A (en) Video display apparatus capable of sharing ccontents with external input appatatus
KR20150024198A (en) Image controlling apparatus and method thereof
KR20140130904A (en) Image displaying apparatus and method thereof
KR101861697B1 (en) Apparatus of broadcasting program recording reservation for mobile communication terminal and method thereof
KR20170025562A (en) Image display device and method for controlling
KR101657564B1 (en) Apparatus for displaying image and method for operating the same
KR101691795B1 (en) Image display apparatus and method for operationg the same
KR20140131797A (en) Image controlling apparatus and method thereof
KR101832332B1 (en) Liquid crystal display panel
KR20150031080A (en) Video processing apparatus and method thereof
KR20170018562A (en) Digital device and method of processing data the same
KR20160008893A (en) Apparatus for controlling image display and method thereof
KR101691801B1 (en) Multi vision system
KR20150021399A (en) Video processing apparatus and method thereof

Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination