KR20130010613A - Apparatus and method for converting 2 dimensional image into 3 dimensional image - Google Patents

Apparatus and method for converting 2 dimensional image into 3 dimensional image

Info

Publication number
KR20130010613A
Authority
KR
South Korea
Prior art keywords
stereoscopic
adjustment value
image
image frame
user action
Prior art date
Application number
KR1020110071346A
Other languages
Korean (ko)
Inventor
김희경
Original Assignee
엘지전자 주식회사 (LG Electronics Inc.)
Priority date
Filing date
Publication date
Application filed by 엘지전자 주식회사 (LG Electronics Inc.)
Priority to KR1020110071346A
Publication of KR20130010613A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/139Format conversion, e.g. of frame-rate or size
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/161Encoding, multiplexing or demultiplexing different image signal components
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/356Image reproducers having separate monoscopic and stereoscopic modes
    • H04N13/359Switching between monoscopic and stereoscopic modes

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

PURPOSE: An apparatus and method for converting a 2D image into a 3D image are provided, in which the individual adjustment values of several stereoscopic methods are calculated from a single stereoscopic adjustment value, so that a user can easily obtain a 3D image of the desired form by setting that one value. CONSTITUTION: A controller calculates an individual adjustment value for one or more stereoscopic methods based on the stereoscopic adjustment value (S100). An image processor applies the one or more stereoscopic methods to a 2D image frame based on the calculated individual adjustment values (S110). A display displays the resulting stereoscopic image frame (S120). [Reference numerals] (AA) Start; (BB) End; (S100) Calculating individual adjustment values based on the set stereoscopic adjustment value; (S110) Applying the stereoscopic methods to a 2D image frame; (S120) Displaying the stereoscopic image frame

Description

Apparatus and method for converting a 2-dimensional image into a 3-dimensional image

The present invention relates to an apparatus and method for converting a 2D image into a 3D image, and more particularly, to an apparatus and method for converting and displaying a received 2D image into a 3D image.

Currently, the broadcasting environment is changing rapidly from analog to digital broadcasting, and the amount of digital broadcast content is increasing accordingly. In addition to content that presents a two-dimensional (2D) video signal as a 2D image, content that presents a three-dimensional (3D) video signal as a 3D image is being produced and planned.

The technique of displaying a 3D image uses the principle of binocular parallax, by which an observer perceives depth through binocular disparity, and is classified into a glasses method, a glasses-free method, a fully three-dimensional method, and the like. The glasses method is one in which a viewer wears glasses with a special function in order to watch a stereoscopic image; such glasses come in two types, a shutter-glass type in which the left and right lenses are opened alternately, and a polarization type in which circular polarizing plates of opposite directions are mounted on the left and right lens portions.

An object of the present invention is to provide an apparatus and method for converting a 2D image into a 3D image that allow a user to select, more easily and intuitively, the form of the 3D image he or she wants to watch.

Another object of the present invention is to provide an apparatus and method for converting a 2D image into a 3D image that enable a user to easily set the setting values of the 2D-to-3D conversion methods.

To achieve the above objects, a method for converting a 2D image into a 3D image according to the present invention may comprise calculating an individual adjustment value for at least one stereoscopic method based on a stereoscopic adjustment value, converting a 2D image frame into a stereoscopic image frame according to the at least one stereoscopic method based on the calculated individual adjustment value, and displaying the stereoscopic image frame. The stereoscopic adjustment value may be received multiplexed with the image frame, or may be set based on a detected user action.

The method may further comprise detecting a user action requesting the setting of the stereoscopic adjustment value; displaying, in response to the detection, a graphical user interface (GUI) for setting the stereoscopic adjustment value, the GUI displaying the numerical value of the stereoscopic adjustment value; detecting a user action requesting a change of the stereoscopic adjustment value; and changing the stereoscopic adjustment value in response to that detection. The 2D image frame may then be converted based on the changed stereoscopic adjustment value.

The at least one stereoscopic method may be set based on the sensed user action, and may include at least one of a 3D effect adjustment method, a 3D viewpoint transformation method, and a slope transformation method. The execution order of the at least one stereoscopic method may be set based on a user action.

The individual adjustment value may include at least one of a 3D effect adjustment value, a left-eye viewpoint adjustment value, a right-eye viewpoint adjustment value, and a slope adjustment value.

The individual adjustment value may be calculated using a stereoscopic adjustment reference table.

The method may further comprise detecting a user action requesting the setting of the stereoscopic adjustment reference table; displaying, in response to the detection, the values included in the table as a chart, the chart including graphs representing the individual adjustment values for each stereoscopic adjustment value, the graphs being displayed so as to be movable on the coordinates of the chart according to detected user actions; detecting a user action requesting a change of a value included in the table; and changing, in response to that detection, the values included in the stereoscopic adjustment reference table according to the graphs.

The method may further comprise receiving the 2D image frame, restoring the received 2D image frame, scaling the stereoscopic image frame, and sampling the scaled image frame in a 3D stereoscopic image format. In this case, converting the 2D image frame means converting the restored 2D image frame, and displaying the stereoscopic image frame means displaying the sampled image frame.

The stereoscopic image frame may include a left eye view image frame and a right eye view image frame.

To achieve the above objects, an apparatus for converting a 2D image into a 3D image according to the present invention includes a controller that calculates an individual adjustment value for at least one stereoscopic method based on a stereoscopic adjustment value, and an image processor that converts a 2D image frame into a stereoscopic image frame according to the at least one stereoscopic method based on the calculated individual adjustment value; the controller may control the stereoscopic image frame to be displayed. The apparatus may further include an interface unit for sensing a user action, and the interface unit may include an external signal receiver and an input device.

The controller may detect a user action requesting the setting of the stereoscopic adjustment value and, in response to the detection, control a graphical user interface (GUI) for setting the value to be displayed; it may further detect a user action requesting a change of the stereoscopic adjustment value and change the value in response to that detection. The GUI may display the numerical value of the stereoscopic adjustment value, and the 2D image frame may be converted based on the changed value.

The image processor may include a 3D effect adjuster that performs a 3D effect adjustment method on an image frame, a 3D viewpoint converter that performs a 3D viewpoint transformation method on the image frame, and a slope converter that performs a slope transformation method on the image frame.

The controller may set an execution order of the at least one stereoscopic method based on a user action.

The controller may calculate the individual adjustment value using a stereoscopic adjustment reference table.

The controller may detect a user action requesting the setting of the stereoscopic adjustment reference table and, in response to the detection, display the values included in the table as a chart; it may further detect a user action requesting a change of a value included in the table and, in response to that detection, change the values included in the table according to the graph. The chart may include graphs representing the individual adjustment values for each stereoscopic adjustment value, and the graphs may be displayed so as to be movable on the coordinates of the chart according to detected user actions.

The apparatus may further include a receiver that receives the 2D image frame, a video decoder that restores the received 2D image frame, a scaler that scales the stereoscopic image frame, and a formatter that samples the scaled image frame in a 3D stereoscopic image format and outputs the sampled image frame; the image processor may then convert the restored 2D image frame.

To achieve the above objects, an apparatus for converting a 2D image into a 3D image according to the present invention includes a receiver that receives a 2D image frame; an interface unit that senses a user action requesting a change of a stereoscopic adjustment value; a controller that detects the sensed user action, changes the stereoscopic adjustment value in response to the detection, and calculates individual adjustment values for at least one stereoscopic method based on the changed value; a signal processor that converts the 2D image frame according to the at least one stereoscopic method based on the calculated individual adjustment values; and a display that displays the stereoscopic image frame. The interface unit may include an external signal receiver and an input device.

According to the apparatus and method for converting a 2D image into a 3D image of the present invention, the adjustment values of the various stereoscopic methods are calculated from a single adjustment value, so a user can easily obtain a 3D image of the desired form by setting just that one value. Moreover, because the converted 3D image is displayed as the adjustment value is changed, the user can tune the value while checking the displayed image, without needing to understand the underlying conversion methods.

FIG. 1 is a block diagram showing the configuration of a preferred embodiment of an apparatus for converting a 2D image into a 3D image according to the present invention;
FIG. 2 is a block diagram showing the configuration of a preferred embodiment of the signal processor;
FIG. 3 is a diagram illustrating the binocular parallax scheme;
FIG. 4 is a diagram for explaining the 3D effect adjustment method;
FIG. 5 is a diagram illustrating the principle of sensing the distance of a detected object;
FIG. 6 is a diagram illustrating one embodiment of a detected object, a left-eye view image, and a right-eye view image on coordinates;
FIG. 7 is a diagram illustrating another embodiment of a detected object, a left-eye view image, and a right-eye view image on coordinates;
FIG. 8 is a diagram for explaining the 3D viewpoint transformation method;
FIG. 9 is a diagram for explaining the slope transformation method;
FIG. 10 is a diagram illustrating a preferred embodiment of a look-up table (LUT);
FIG. 11 is a diagram illustrating a chart of the values included in the table of FIG. 10;
FIG. 12 is a diagram illustrating a screen on which a graphical user interface (GUI) for setting the stereoscopic adjustment value is displayed;
FIG. 13 is a diagram illustrating a screen on which a graphical user interface (GUI) for selecting stereoscopic methods is displayed;
FIG. 14 is a flowchart illustrating a preferred embodiment of a method for converting a 2D image into a 3D image according to the present invention;
FIG. 15 is a flowchart illustrating a preferred embodiment of a stereoscopic adjustment value setting method according to the present invention;
FIG. 16 is a flowchart illustrating a preferred embodiment of a method for changing the individual set values according to the present invention; and
FIG. 17 is a flowchart illustrating a preferred embodiment of a stereoscopic conversion process according to the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS: Hereinafter, preferred embodiments of the present invention will be described with reference to the accompanying drawings. The configuration and operation of the present invention shown in the drawings and described herein constitute at least one embodiment; they do not limit the technical spirit of the invention or its core configuration and operation.

The terms used in this specification were selected, wherever possible, from general terms currently in wide use while considering their function in the present invention, but they may vary according to the intention or custom of those skilled in the art or the emergence of new technology. In certain cases a term has been chosen arbitrarily by the applicant, in which case its meaning is described in detail in the corresponding part of the description. The terms used in this specification should therefore be understood based on their substantial meaning and the overall content of the specification, not simply on their names.

FIG. 1 is a block diagram showing the configuration of a preferred embodiment of an apparatus for converting a 2D image into a 3D image according to the present invention.

Referring to FIG. 1, the apparatus 100 for converting a 2D image into a 3D image according to the present invention may include a receiver 101, a signal processor 140, a display 150, an audio output unit 160, an input device 170, a storage unit 180, and a controller 190. The apparatus 100 may be a personal computer system such as a desktop, laptop, tablet, or handheld computer; a mobile terminal such as a mobile phone, smart phone, digital broadcasting terminal, personal digital assistant (PDA), portable multimedia player (PMP), or navigation device; or a fixed home appliance such as a digital TV.

The receiver 101 may receive broadcast data, video data, audio data, information data, and program codes. The image data may be two-dimensional image data or stereoscopic image data of a binocular disparity method. The stereoscopic image data may be a stereo viewpoint image or a multiview image.

The receiver 101 may include a tuner 110, a demodulator 120, a mobile communication unit 115, a network interface unit 130, and an external signal receiver 135.

The tuner 110 selects the RF broadcast signal corresponding to the channel selected by the user from among the radio frequency (RF) broadcast signals received through an antenna, and converts the selected RF broadcast signal into an intermediate-frequency signal or a baseband video or audio signal.

The demodulator 120 receives the digital IF signal (DIF) converted by the tuner 110 and demodulates it. For example, when the digital IF signal output from the tuner 110 follows the ATSC scheme, the demodulator 120 performs 8-VSB (8-Vestigial Side Band) demodulation; when it follows the DVB scheme, the demodulator 120 performs coded orthogonal frequency division multiplexing (COFDM) demodulation.

The demodulator 120 may also perform channel decoding. To this end, it may include a trellis decoder, a de-interleaver, and a Reed-Solomon decoder, so that trellis decoding, de-interleaving, and Reed-Solomon decoding can be performed.

After demodulation and channel decoding, the demodulator 120 may output a stream signal (TS). The stream signal may be a signal in which a video signal, an audio signal, and/or a data signal are multiplexed; for example, it may be an MPEG-2 transport stream (TS) in which an MPEG-2 video signal, a Dolby AC-3 audio signal, and the like are multiplexed. Specifically, an MPEG-2 TS packet may consist of a 4-byte header and a 184-byte payload.
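As an illustration of this packet layout, the following minimal Python sketch splits a 188-byte MPEG-2 TS packet into its header fields and 184-byte payload; the sync byte (0x47) and field layout follow the MPEG-2 standard, while the function name is illustrative.

```python
def parse_ts_packet(packet: bytes) -> dict:
    """Split a 188-byte MPEG-2 TS packet into header fields and payload."""
    if len(packet) != 188 or packet[0] != 0x47:  # 0x47 is the TS sync byte
        raise ValueError("not an MPEG-2 TS packet")
    pid = ((packet[1] & 0x1F) << 8) | packet[2]  # 13-bit packet identifier
    continuity = packet[3] & 0x0F                # 4-bit continuity counter
    return {"pid": pid, "continuity": continuity, "payload": packet[4:]}
```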

The stream signal output from the demodulator 120 may be input to the signal processor 140.

The mobile communication unit 115 transmits and receives a radio signal with at least one of a base station, an external terminal, and a server on a mobile communication network. The wireless signal may include various types of data according to transmission and reception of a voice call signal, a video call signal, or a text / multimedia message.

The external signal receiver 135 may provide an interface for connecting an external device to the apparatus 100. The external device may be any of various video or audio output devices, such as a DVD (Digital Versatile Disc) player, a Blu-ray player, a game device, a camcorder, or a computer (laptop), or a storage device such as a USB memory or a USB hard disk. The apparatus 100 may control the display of the video and audio signals received from the external signal receiver 135, and may store or use the data signal.

The external device may also be the photographing apparatus 90, which may include a plurality of cameras. The photographing apparatus 90 can capture images of a person; it may recognize the person's hand region, focus on it, and zoom in to capture it. The captured hand may then be recognized as a spatial gesture: the controller 190 may recognize it as such and execute commands for performing the operations associated with the recognized gesture. Here, a spatial gesture is defined as a gesture, recognized from an image frame or image received from the photographing apparatus 90, that is mapped to one or more specific computing operations.

In some embodiments, the apparatus 100 for converting a 2D image into a 3D image may include a photographing apparatus 90.

The signal processor 140 demultiplexes the stream signal output by the demodulator 120, performs signal processing on the demultiplexed signals, outputs the image to the display 150, and outputs the sound 161 through the audio output unit 160. The signal processor 140 may also receive image data, audio data, and broadcast data from the mobile communication unit 115, the network interface unit 130, and the external signal receiver 135.

The signal processor 140 may receive the stereoscopic adjustment value from the controller 190 and convert 2D image data into 3D image data according to the received value. In some embodiments, the signal processor 140 may instead receive at least one of a 3D effect adjustment value, a left-eye viewpoint adjustment value, a right-eye viewpoint adjustment value, and a slope adjustment value, and convert the 2D image data into 3D image data based on those values.

The display 150 displays the image 152, which may present image data that the signal processor 140 has converted into 3D image data.

In addition, the display 150 may operate in connection with the controller 190. The display 150 may display a graphical user interface (GUI) 153 that provides an easy-to-use interface between the user of the apparatus 100 and the operating system, or an application running on the operating system. The GUI 153 presents programs, files, and operation options as graphical images, which may include windows, fields, dialog boxes, menus, icons, buttons, cursors, scroll bars, and the like. Such images may be arranged in a predefined layout or generated dynamically to support the particular action the user is taking. During operation, the user can select and activate the images to invoke the functions and tasks associated with them. By way of example, a user may select a button that opens, closes, minimizes, or maximizes a window, or an icon that launches a particular program.

In some embodiments, the display 150 may display a screen that includes a graphical user interface (GUI) for setting the stereoscopic adjustment value and an area in which image data is displayed. On that screen, the display 150 may show a 3D image composed according to the stereoscopic adjustment value set through the GUI, that is, image data converted into 3D image data.

The audio output unit 160 may receive audio data from the signal processor 140 or the controller 190 and output the sound 161 in which the received audio data is reproduced.

The input device 170 may be a touch screen disposed on or in front of the display 150; the touch screen may be integrated with the display 150 or may be a separate component. Because the touch screen is in front of the display 150, the user can manipulate the GUI 153 directly, for example by simply placing a finger on the object to be controlled. A touchpad, by contrast, generally lies in a plane apart from the display 150: the display 150 is typically in a vertical plane while the touchpad is in a horizontal plane. The input device 170 may also be a multi-point input device.

The storage unit 180 generally provides a place to store the program code and data used by the apparatus 100. It may be implemented as a read-only memory (ROM), a random access memory (RAM), a hard disk drive, or the like. The program code and data may reside on a removable storage medium and be loaded or installed onto the apparatus 100 when needed; removable storage media include CD-ROMs, PC cards, memory cards, floppy disks, magnetic tape, and network components. The storage unit 180 also stores the stereoscopic adjustment value, which may be set by default, received from a broadcasting station, or set by the user.

The controller 190 executes commands and performs operations associated with the apparatus 100. For example, using commands retrieved from the storage unit 180, the controller 190 may control input and output between the components of the apparatus 100 and the reception and processing of data. The controller 190 may be implemented on a single chip, multiple chips, or multiple electrical components; various architectures may be used, including dedicated or embedded processors, single-purpose processors, controllers, ASICs, and the like.

The controller 190 executes computer code together with the operating system to generate and use data. The operating system is generally known and is not described in further detail; by way of example, it may be a Windows-based OS, Unix, Linux, Palm OS, DOS, Android, or Mac OS. The operating system, other computer code, and data may reside in the storage unit 180, which operates in conjunction with the controller 190.

The controller 190 may recognize a user action and control the apparatus 100 based on the recognized action. A user action may include selecting a physical button on the apparatus or on a remote controller, performing a predetermined gesture or selecting a soft button on the touch-screen display surface, performing a predetermined gesture recognized from an image captured by the photographing apparatus, and making a predetermined utterance recognized by speech recognition. The external signal receiver 135 may receive the signal for a user action of selecting a physical button on the remote controller.

The input device 170 receives a gesture 171, and the controller 190 executes commands for performing operations associated with the gesture 171. The storage unit 180 may include a gesture operating program 181, which may be part of the operating system or a separate application. The gesture operating program 181 generally contains a set of commands that recognize the occurrence of the gesture 171 and inform one or more software agents of the gesture 171 and/or of what action(s) should be taken in response to it.

When the user performs one or more gestures, the input device 170 transmits the gesture information to the controller 190. Using the commands from the storage unit 180, more particularly the gesture operating program 181, the controller 190 interprets the gesture 171 and controls the different components of the apparatus 100, such as the storage unit 180, the display 150, the audio output unit 160, the signal processor 140, the network interface unit 130, and the input device 170, accordingly. The gesture 171 may be identified as a command for performing an operation in an application stored in the storage unit 180, modifying a GUI object displayed on the display 150, modifying data stored in the storage unit 180, or performing an operation in the network interface unit 130 or the signal processor 140. By way of example, these commands may be associated with zooming, panning, scrolling, page turning, rotation, resizing, video channel changes, content reception, Internet connection, and the like. As further examples, the commands may launch a particular program, open a file or document, view a menu, make a selection, execute instructions, log on to an Internet site, permit an authorized person to access a restricted area of a computer system, load a user profile associated with the user's preferred desktop arrangement, and/or the like.

In some embodiments, events are derived from the magnitude of a parameter (e.g., capacitance) between the finger and the touch-screen display: a down event occurs when the parameter exceeds a predetermined threshold, a dragging event occurs while the parameter remains above the threshold and the cursor position corresponding to the finger moves from position A to position B, and an up event occurs when the parameter falls back below the threshold.
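A minimal sketch of this event scheme, with an illustrative TouchTracker class and an assumed normalized capacitance threshold:

```python
DOWN_THRESHOLD = 0.5  # assumed normalized capacitance threshold

class TouchTracker:
    """Emit a down event when the parameter first exceeds the threshold,
    drag events while it stays above the threshold and the position moves,
    and an up event when it falls back below the threshold."""
    def __init__(self):
        self.down = False
        self.pos = None

    def sample(self, capacitance, pos):
        events = []
        if capacitance > DOWN_THRESHOLD:
            if not self.down:
                events.append(("down", pos))
                self.down = True
            elif pos != self.pos:
                events.append(("drag", self.pos, pos))  # moved from A to B
            self.pos = pos
        elif self.down:
            events.append(("up", self.pos))
            self.down = False
            self.pos = None
        return events
```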

The controller 190 may set the stereoscopic adjustment value. The value may be set according to a sensed user action or based on received broadcast information. It may also be set by default when the apparatus 100 is manufactured, set when the software of the apparatus 100 is installed, and updated when that software is updated.

The controller 190 may also detect a user action requesting a graphical user interface (GUI) for setting the stereoscopic adjustment value and, in response to the detection, control the generation of a signal for displaying a screen that includes the GUI and an area where image data is displayed.

According to the stereoscopic adjustment value set through the GUI, the controller 190 may convert the 2D image data into 3D image data comprising left-eye view image data and right-eye view image data, or may control the signal processor 140 to perform the conversion.

In addition, the controller 190 may control the stereoscopic adjustment value set through the GUI to be stored in the storage unit 180.

FIG. 2 is a block diagram showing the configuration of a preferred embodiment of the signal processor.

Referring to FIG. 2, the signal processor 140 may include a demultiplexer 210, an audio decoder 220, a video decoder 230, an image processor 240, a scaler 260, a mixer 270, and a formatter 280.

The demultiplexer 210 may receive a stream signal from the mobile communication unit 115, the network interface unit 130, or the external signal receiver 135, demultiplex the received stream signal into image data, audio data, and data, and output them to the video decoder 230, the audio decoder 220, and the controller 190, respectively.

The audio decoder 220 may receive audio data from the demultiplexer 210, restore the received audio data, and output the restored data to the scaler 260 or the audio output unit 160.

The video decoder 230 receives image data from the demultiplexer 210, restores it, and outputs the restored image data to the image processor 240. The image data may include at least one of a 2D image signal and a 3D image signal. The video decoder 230 and the image processor 240 may be configured as one module; when the controller 190 performs the role of the image processor 240, the video decoder 230 may output the restored image data to the controller 190.

The image processor 240 may convert the restored image data into left-eye view image data and right-eye view image data according to the stereoscopic adjustment value. The image processor 240 may include a 3D effect adjuster 245, a 3D viewpoint converter 250, and a slope converter 255.

The 3D effect adjuster 245 may perform the 3D effect adjustment method on a received image frame based on the 3D effect adjustment value.

The 3D viewpoint converter 250 performs the 3D viewpoint transformation method on a received image frame based on at least one of the left-eye viewpoint adjustment value and the right-eye viewpoint adjustment value.

The slope converter 255 may perform the slope transformation method on a received image frame based on the slope adjustment value.

The scaler 260 scales the image data and audio data processed by the video decoder 230, the image processor 240, the controller 190, and the audio decoder 220 into signals of a size suitable for output through the display 150 or a speaker (not shown). In detail, the scaler 260 receives the stereoscopic image and scales it to match the resolution or the predetermined aspect ratio of the display 150. The display 150 is manufactured to output an image screen of a resolution fixed by the product specification, for example 720x480 or 1024x768; the scaler 260 therefore converts the resolution of the stereoscopic image, which may arrive with various values, to the resolution of that display.

The scaler 260 also adjusts and outputs the aspect ratio of the stereoscopic image according to the type of content being displayed or a user setting. The aspect ratio may be a value such as 16:9, 4:3, or 3:2, and the scaler 260 adjusts the ratio of the horizontal screen length to the vertical screen length accordingly.
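The following sketch illustrates one way such resolution matching can be computed; letterboxing the remainder while preserving the source aspect ratio is an assumed policy, not something the disclosure specifies.

```python
def fit_to_display(src_w, src_h, dst_w, dst_h):
    """Scale a frame to the display resolution, preserving the source
    aspect ratio and centering the result (letterbox padding)."""
    scale = min(dst_w / src_w, dst_h / src_h)
    out_w, out_h = int(src_w * scale), int(src_h * scale)
    pad_x, pad_y = (dst_w - out_w) // 2, (dst_h - out_h) // 2
    return out_w, out_h, pad_x, pad_y

# A 1920x1080 stereoscopic frame on a 1024x768 panel:
print(fit_to_display(1920, 1080, 1024, 768))  # (1024, 576, 0, 96)
```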

The mixer 270 mixes and outputs the outputs of the scaler 260 and the controller 190.

The formatter 280 samples the received image data in a 3D stereoscopic image format so that a stereoscopic image can be output, outputs the sampled image data to the display 150, and may generate a synchronization signal for the output stereoscopic image signal and transmit it to the glasses 201. Here, the 3D stereoscopic image format is a format in which the display 150 can present the received image data as a 3D image.

The formatter 280 may include an infrared output unit (not shown) for transmitting the synchronization signal, which synchronizes the display times of the left-eye and right-eye view images of the stereoscopic image signal with the opening and closing times of the left and right lenses of the shutter glasses 201.

FIG. 3 is a diagram illustrating the binocular parallax scheme.

Referring to FIG. 3, binocular parallax is a stereoscopic display scheme that provides a sense of space or depth by showing a left-eye view image 301 and a right-eye view image 302, captured by a binocular camera or the like, to the viewer's left eye 311 and right eye 312, respectively. The sense of space or depth provided to the viewer varies with the binocular disparity between the left-eye view image 301 and the right-eye view image 302. As the distance between the two images narrows, the image is perceived to form farther from the left eye 311 and the right eye 312, reducing the sense of space or depth; as the distance widens, the image is perceived to form closer to the eyes, increasing it.
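This relationship can be made concrete with standard stereo geometry, which the disclosure does not spell out: with eye separation e, viewing distance D, and signed on-screen parallax d between the right-eye and left-eye images, the perceived image distance is Z = D * e / (e - d). A minimal sketch with assumed values:

```python
def perceived_depth(parallax, eye_sep=0.065, view_dist=3.0):
    """Perceived image distance (meters) from standard stereo geometry.

    parallax > 0 (uncrossed) pushes the image behind the screen;
    parallax < 0 (crossed) pulls it in front. Eye separation (65 mm)
    and viewing distance (3 m) are assumed example values.
    """
    return view_dist * eye_sep / (eye_sep - parallax)

print(perceived_depth(0.0))    # 3.00 m: image on the screen plane
print(perceived_depth(-0.01))  # 2.60 m: wider crossed parallax, image nearer
print(perceived_depth(0.005))  # 3.25 m: image recedes behind the screen
```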

FIG. 4 is a diagram for explaining the 3D effect adjustment method.

Referring to FIG. 4, FIG. 4A illustrates a 2D image frame. The 3D effect adjuster 245 detects edge components in the 2D image frame, and FIG. 4B illustrates the edge components detected in the frame of FIG. 4A. The 2D image frame may be an image frame restored by the video decoder 230, a left-eye or right-eye view image frame output by the 3D viewpoint converter 250, or an image frame output by the slope converter 255.

The 3D effect adjuster 245 detects an object 420 among the edge components. It then generates a 3D image frame from the 2D image of FIG. 4A such that the detected object 420 has a depth value, and outputs the generated frame. The generated 3D image frame may include a left-eye view image frame and a right-eye view image frame, and the depth value may be set based on the 3D effect adjustment value.

FIG. 4C illustrates the generated 3D image frame: the object 410 of FIG. 4A is displayed as an object 430 having a depth value.
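A minimal numpy sketch of this idea, giving a detected object (boolean mask) a horizontal disparity of ±shift pixels; occlusion handling and hole filling, which a real implementation needs, are omitted, and all names are illustrative. A shift of one pixel corresponds to the small disparity of FIG. 6 below, and two pixels to the larger disparity of FIG. 7.

```python
import numpy as np

def object_disparity(frame, mask, shift):
    """Copy the masked object `shift` pixels left in the left-eye frame
    and `shift` pixels right in the right-eye frame, so the object gains
    binocular disparity and hence an apparent depth."""
    height, width = mask.shape
    rows, cols = np.where(mask)
    left, right = frame.copy(), frame.copy()
    left[rows, np.clip(cols - shift, 0, width - 1)] = frame[rows, cols]
    right[rows, np.clip(cols + shift, 0, width - 1)] = frame[rows, cols]
    return left, right
```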

FIG. 5 is a diagram illustrating the principle of sensing the distance of a detected object.

Referring to FIG. 5, when a distant object is viewed with both eyes, the distance between the image formed in the left eye and the image formed in the right eye is narrow, so the binocular disparity is small; when a nearby object is viewed, that distance is wider, so the binocular disparity is large. In other words, the binocular parallax of a near object is greater than that of a far object.

The 3D effect adjuster 245 may adjust the depth value of a detected object 510 by adjusting its binocular parallax. To give the detected object 510 a small depth value, the 3D effect adjuster 245 generates the left-eye view image 521 and the right-eye view image 525 of the object 510 such that the interval between them is narrow.

Conversely, to give the detected object 510 a large depth value, the 3D effect adjuster 245 generates the left-eye view image 531 and the right-eye view image 535 of the object 510 such that the interval between them is wide.

FIG. 6 is a diagram illustrating one embodiment of a detected object, a left-eye view image, and a right-eye view image on coordinates, and FIG. 7 is a diagram illustrating another such embodiment.

Referring to FIGS. 6 and 7, FIG. 6 shows, on coordinates, a left-eye view image 620 and a right-eye view image 630 generated from an object 610 so as to have a smaller depth value than in FIG. 7. The left-eye view image 620 is generated as the object image 610 moved by the movement distance 621, and the right-eye view image 630 as the object image 610 moved by the movement distance 631. The movement distances 621 and 631 may each be one pixel.

FIG. 7 shows, on coordinates, a left-eye view image 670 and a right-eye view image 680 generated from the object 610 so as to have a greater depth value than in FIG. 6. The left-eye view image 670 is generated as the object image 610 moved by the movement distance 671, and the right-eye view image 680 as the object image 610 moved by the movement distance 681. The movement distances 671 and 681 may each be two pixels.

Since the binocular disparity of FIG. 7 is greater than that of FIG. 6, the depth value of the object perceived from the left-eye view image 670 and the right-eye view image 680 of FIG. 7 is larger than that of the object perceived from the left-eye view image 620 and the right-eye view image 630 of FIG. 6.

FIG. 8 is a diagram for explaining the 3D viewpoint transformation method.

Referring to FIG. 8, the 3D viewpoint converter 250 may move the image frame 810 to the left to generate the image frame 831, and to the right to generate the image frame 836. Here, the leftward movement size may be determined based on the left-eye viewpoint adjustment value, and the rightward movement size based on the right-eye viewpoint adjustment value. The image frame 810 may be an image frame restored by the video decoder 230 or an image frame output by the slope converter 255.

The 3D viewpoint converter 250 may also move a left-eye image frame 821 to the left to generate the image frame 831, and a right-eye image frame 826 to the right to generate the image frame 836, with the movement sizes again determined by the left-eye and right-eye viewpoint adjustment values. Here, the left-eye image frame 821 may be a left-eye view image frame output by the 3D effect adjuster 245, and the right-eye image frame 826 may be a right-eye view image frame output by the 3D effect adjuster 245.
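A sketch of this whole-frame translation follows; treating the viewpoint adjustment values as pixel counts is an assumption, since the disclosure does not fix their unit.

```python
import numpy as np

def viewpoint_transform(left_frame, right_frame, left_adj, right_adj):
    """Translate the left-eye frame `left_adj` pixels left and the
    right-eye frame `right_adj` pixels right, blanking the columns
    exposed at the edges."""
    left = np.roll(left_frame, -left_adj, axis=1)
    right = np.roll(right_frame, right_adj, axis=1)
    if left_adj > 0:
        left[:, -left_adj:] = 0   # columns wrapped from the left edge
    if right_adj > 0:
        right[:, :right_adj] = 0  # columns wrapped from the right edge
    return left, right
```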

FIG. 9 is a diagram for explaining the slope transformation method.

Referring to FIG. 9, the slope converter 255 may narrow the upper portion of the image frame 910 to produce and output the transformed image frame 920. The slope converter 255 varies how much the upper portion of the image frame 910 is narrowed according to the slope adjustment value.

For example, the slope converter 255 may transform the image frame 910 by varying the angles 921 and 923 according to the slope adjustment value: when the value is 1, it narrows the upper portion of the frame so that the angles 921 and 923 are 2 degrees; when the value is 2, 4 degrees; when the value is 3, 6 degrees; and when the value is 4, 8 degrees. When the slope adjustment value is 0, the slope converter 255 bypasses the image frame 910 without performing the transformation. The angle values given here for each slope adjustment value are only an example and are not limiting.

The image frame 910 may be an image frame restored by the video decoder 230, or a left-eye or right-eye view image frame output by the 3D effect adjuster 245.
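Using the example mapping above (2 degrees per adjustment step, 0 = bypass), the following sketch computes the parameters such a transform would use; deriving a top-corner inset from the angle is an assumption made for illustration.

```python
import math

def slope_parameters(frame_height, slope_adjustment):
    """Map the slope adjustment value to the per-side angle (2 degrees
    per step) and the horizontal inset of the top corners it implies."""
    angle = 2 * slope_adjustment  # degrees on each side; 0 means bypass
    inset = int(frame_height * math.tan(math.radians(angle)))
    return angle, inset

print(slope_parameters(1080, 0))  # (0, 0): bypass, no transformation
print(slope_parameters(1080, 3))  # (6, 113): narrow each top corner ~113 px
```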

FIG. 10 is a diagram illustrating an exemplary embodiment of a look up table (LUT).

Referring to FIG. 10, the stereoscopic adjustment reference table stores stereoscopic adjustment values together with at least one of a 3D effect adjustment value, a left-eye viewpoint adjustment value, a right-eye viewpoint adjustment value, and a slope adjustment value, and with association information linking each stereoscopic adjustment value to its individual adjustment values. Using this table, the controller 190 may select at least one of the 3D effect adjustment value, left-eye viewpoint adjustment value, right-eye viewpoint adjustment value, and slope adjustment value corresponding to a given stereoscopic adjustment value: with the association information of the stereoscopic adjustment value, the controller 190 reads the associated individual adjustment values from the table.

The stereoscopic adjustment reference table may be the table 1000 illustrated in FIG. 10, which stores, row by row, a stereoscopic adjustment value together with the 3D effect adjustment value, left-eye viewpoint adjustment value, right-eye viewpoint adjustment value, and slope adjustment value associated with it.

For example, the table 1000 contains, in row 2, the stereoscopic adjustment value 1 together with its associated 3D effect adjustment value 3, left-eye viewpoint adjustment value 0, right-eye viewpoint adjustment value 1, and slope adjustment value 0. Likewise, it contains, in row 21, the stereoscopic adjustment value 20 together with its associated 3D effect adjustment value 47, left-eye viewpoint adjustment value 10, right-eye viewpoint adjustment value 9, and slope adjustment value 4.

In the table 1000, the minimum stereoscopic adjustment value is 0 and the maximum is 20. In some embodiments, a GUI may be displayed for selecting a stereoscopic adjustment value between the minimum and maximum values.

If the stereoscopic adjustment value is 10, the controller 190 finds the row information indicating the row (row 11) in which the value 10 is located in the table 1000, and reads the 3D effect adjustment value 29, left-eye viewpoint adjustment value 5, right-eye viewpoint adjustment value 5, and slope adjustment value 3 located in that row. Here, the row information is one example of the association information.
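The look-up can be sketched as follows; only the three rows quoted above (stereoscopic adjustment values 1, 10, and 20) are taken from the table 1000, and the dict structure is illustrative.

```python
STEREO_LUT = {
    # stereoscopic adjustment value:
    #   (3D effect, left-eye viewpoint, right-eye viewpoint, slope)
    1:  (3, 0, 1, 0),
    10: (29, 5, 5, 3),
    20: (47, 10, 9, 4),
}

def individual_values(stereo_adjustment):
    """Return the individual adjustment values associated with one
    stereoscopic adjustment value."""
    return STEREO_LUT[stereo_adjustment]

print(individual_values(10))  # (29, 5, 5, 3), as read from row 11
```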

FIG. 11 is a diagram illustrating a chart of values included in the table of FIG. 10.

Referring to FIG. 11, the controller 190 may control a chart of the values included in the stereoscopic adjustment reference table to be displayed, in response to detecting a user action requesting the chart or requesting the setting of the stereoscopic adjustment reference table. In some embodiments, the display 150 may display the chart 1100 as this chart.

In the chart 1100, the horizontal axis represents the stereoscopic adjustment value. The graph 1110 shows the 3D effect adjustment value for each stereoscopic adjustment value, the graph 1120 the left-eye viewpoint adjustment value, the graph 1130 the right-eye viewpoint adjustment value, and the graph 1140 the slope adjustment value.

The user may modify the graphs 1110, 1120, 1130, and 1140 of the chart 1100 by dragging them, or by changing coordinates on the graphs to other coordinates. Through the user action of pressing the set button 1150, the user requests that the values of the stereoscopic adjustment reference table be modified according to the modified graphs; through the user action of pressing the cancel button 1155, the user requests that the values of the table be kept as they are.

The controller 190 detects the user action of pressing the set button 1150 and, in response to the detection, changes the values included in the stereoscopic adjustment reference table according to the values indicated by the coordinates of the graphs 1110, 1120, 1130, and 1140.

FIG. 12 illustrates a screen on which a graphical user interface (GUI) for setting stereoscopic adjustment values is displayed.

Referring to FIG. 12, the controller 190 may control a GUI for setting the stereoscopic adjustment value to be displayed, in response to detecting a user action requesting the setting of the value. The display 150 may display the GUI 1210, the GUI 1220, or the GUI 1230 as this GUI.

The GUI 1210 includes a numerical value 1212 and a stereoscopic adjustment bar 1215. The numerical value 1212 indicates the currently set stereoscopic adjustment value: when the GUI 1210 is first displayed it shows the value currently set, and when the user changes the value through the GUI it shows the changed value. For example, the numerical value 1212 represents the stereoscopic adjustment value 0, the numerical value 1222 represents the value 10, and the numerical value 1232 represents the value 20.

The stereoscopic adjustment bar 1215 displays, between the minimum and maximum stereoscopic adjustment values, the relative ratio of the currently set value. For example, the stereoscopic adjustment bar 1225 displays an area 1227 whose size corresponds to the relative ratio of the value 10, and the stereoscopic adjustment bar 1235 displays its entire area, corresponding to the maximum value 20. Because the stereoscopic adjustment bar 1215 corresponds to the value 0, it displays no area indicating a relative ratio.

The user can change the stereoscopic adjustment value by pressing particular buttons on the remote controller: pressing one button (for example, the up button) requests an increase of the value, and pressing another (for example, the down button) requests a decrease. The controller 190 increases the stereoscopic adjustment value in response to detecting the user action requesting an increase, and decreases it in response to detecting the user action requesting a decrease.
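A sketch of this up/down handling; clamping to the 0 to 20 range of the reference table 1000 is an assumption.

```python
MIN_ADJ, MAX_ADJ = 0, 20  # range of the reference table 1000

def on_remote_button(value, button):
    """Increase the stereoscopic adjustment value on the up button and
    decrease it on the down button, clamped to the table range."""
    if button == "up":
        return min(value + 1, MAX_ADJ)
    if button == "down":
        return max(value - 1, MIN_ADJ)
    return value
```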

The controller 190 may also control the stereoscopic conversion of the displayed image according to the stereoscopic adjustment value indicated by the numerical value. The image 1201 is converted into the image 1202 according to the stereoscopic adjustment value 10 indicated by the numerical value 1222, and into the image 1203 according to the value 20 indicated by the numerical value 1232.

FIG. 13 illustrates a screen on which a graphical user interface (GUI) for selecting a stereoscopic method is displayed.

Referring to FIG. 13, the controller 190 may control a GUI for setting the stereoscopic methods to be displayed, in response to detecting a user action requesting the setting of the stereoscopic methods. The GUI may display at least one stereoscopic method together with an option for selecting each method, and may also include a button for confirming whether to apply the stereoscopic methods according to the selected options.

The display 150 may display the GUI 1300 as this GUI. The GUI 1300 lists the 3D effect adjustment method 1310, the 3D viewpoint transformation method 1320, and the slope transformation method 1330 as stereoscopic methods, and displays the check boxes 1315, 1325, and 1335 as the options. When the check box 1315 is checked, the controller 190 selects the 3D effect adjustment method as a stereoscopic method; when the check box 1325 is checked, it selects the 3D viewpoint transformation method; and when the check box 1335 is checked, it selects the slope transformation method.

The GUI 1300 also includes a confirm button 1350 and a cancel button 1355.

In some embodiments, the controller 190 may set the execution order among the stereoscopic methods selected through the GUI 1300; the order may be preset and may be changed according to a detected user action.

FIG. 14 is a flowchart illustrating a preferred embodiment of a method for converting a 2D image into a 3D image according to the present invention.

Referring to FIG. 14, the controller 190 calculates the individual adjustment values for at least one stereoscopic method based on the stereoscopic adjustment value (S100). The stereoscopic adjustment value may be received multiplexed with image data including the image frame, may be set based on a detected user action, or may be a default value. The value may also be changed by the stereoscopic adjustment value setting method shown in FIG. 15, and the method of FIG. 14 may further include that setting method.

A stereoscopic method may be at least one of the 3D effect adjustment method described with reference to FIG. 4, the 3D viewpoint transformation method described with reference to FIG. 8, and the slope transformation method described with reference to FIG. 9. The stereoscopic methods may have a preset execution order, for example the 3D effect adjustment method, then the 3D viewpoint transformation method, then the slope transformation method.

In some embodiments, the order of execution of stereoscopic methods may be set based on user actions.

The individual adjustment values may include at least one of a 3D effect adjustment value, a left-eye viewpoint adjustment value, a right-eye viewpoint adjustment value, and a slope adjustment value: the individual adjustment value for the 3D effect adjustment method is the 3D effect adjustment value, the individual adjustment values for the 3D viewpoint transformation method include at least one of the left-eye and right-eye viewpoint adjustment values, and the individual adjustment value for the slope transformation method is the slope adjustment value.

The controller 190 may calculate the individual adjustment values using a stereoscopic adjustment reference table, which may be the stereoscopic adjustment reference table 1000 of FIG. 10. The values of the stereoscopic adjustment reference table may be changed according to the individual setting value changing method shown in FIG. 16; that is, the method of FIG. 14 for converting a 2D image into a 3D image may further include the individual setting value changing method shown in FIG. 16.
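As a rough illustration of such a lookup, the Python sketch below maps a stereoscopic adjustment value to per-method individual adjustment values through a table. The row values are invented placeholders, since the table 1000 of FIG. 10 is not reproduced here, and the nearest-row-at-or-below rule is an assumption.

    # stereoscopic adjustment value -> (depth, left view, right view, sloop)
    REFERENCE_TABLE = {
        0:  (0, 0, 0, 0),
        5:  (3, 1, 1, 2),
        10: (6, 2, 2, 4),
    }

    def individual_adjustments(stereo_value):
        """Look up per-method adjustment values for a stereoscopic
        adjustment value, using the nearest table row at or below it."""
        key = max(k for k in REFERENCE_TABLE if k <= stereo_value)
        depth, left, right, sloop = REFERENCE_TABLE[key]
        return {"stereo_adjustment": depth,
                "viewpoint_transform": (left, right),
                "sloop_transform": sloop}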

The image processor 240 stereoscopically converts the two-dimensional image frame according to the at least one stereoscopic method, based on the individual adjustment values calculated by the controller 190 (S110). The image processor 240 may execute the stereoscopic methods according to their execution order.

In some embodiments, step S110 may include the stereoscopic conversion process shown in FIG. 17.

The display 150 displays a stereoscopic image frame (S120). The stereoscopic image frame may include a left eye view image frame and a right eye view image frame.

FIG. 15 is a flowchart illustrating a preferred embodiment of the stereoscopic adjustment value setting method according to the present invention.

Referring to FIG. 15, the controller 190 detects a user action for requesting setting of a stereoscopic adjustment value (S200).

In response to detecting the user action, the controller 190 controls a graphical user interface (GUI) for setting the stereoscopic adjustment value to be displayed (S210). The graphical user interface may display a numerical value of the stereoscopic adjustment value, and may be one of the GUI 1210, the GUI 1220, and the GUI 1230 illustrated in FIG. 12.

The controller 190 detects a user action requesting a change of the stereoscopic adjustment value (S220).

In response to detecting the user action, the controller 190 changes the stereoscopic adjustment value (S230). The controller 190 may then perform step S100 based on the stereoscopic adjustment value changed in step S230.
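The FIG. 15 flow (S200 through S230) might be wired up as two event handlers, as in the Python sketch below. The ui and state objects and their method names are hypothetical stand-ins; the displayed GUI would be one of the GUIs 1210, 1220, and 1230 of FIG. 12.

    def on_setting_requested(ui, state):
        # S210: display the GUI with the current numerical value
        ui.show_adjustment_gui(value=state.stereo_value)

    def on_value_changed(state, new_value, recompute):
        # S230: change the stereoscopic adjustment value
        state.stereo_value = new_value
        # step S100 is then performed with the changed value;
        # recompute is a callable such as a reference-table lookup
        state.individual = recompute(new_value)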

FIG. 16 is a flowchart illustrating a preferred embodiment of a method for changing an individual setting value according to the present invention.

Referring to FIG. 16, the controller 190 detects a user action requesting the setting of the stereoscopic adjustment reference table (S300).

In response to detecting the user action, the controller 190 generates a chart based on the values included in the stereoscopic adjustment reference table (S310).

The controller 190 displays the generated chart (S320). The displayed chart includes a graph representing the individual adjustment values for the stereoscopic adjustment value, and the graph may be displayed so as to be movable on the coordinates of the chart according to a detected user action. The displayed chart may be the chart 1100 illustrated in FIG. 11.

The controller 190 detects a user action for requesting a change of the graph included in the chart (S330).

In response to detecting the user action, the controller 190 changes the graph (S340). As the graph changes, the coordinate values included in the graph change accordingly.

The controller 190 detects a user action requesting a change of a value included in the stereoscopic adjustment reference table (S350).

In response to detecting the user action, the controller 190 changes the values included in the stereoscopic adjustment reference table according to the graph (S360). That is, the values included in the stereoscopic adjustment reference table may be changed according to the coordinate values included in the changed graph.
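A minimal Python sketch of steps S340 through S360 follows: the chart's graph is treated as a set of (stereoscopic value, individual value) points, and dragging a point rewrites the matching table entry. Representing the table as a single column is a simplification of the multi-column reference table assumed above.

    def apply_graph_to_table(graph_points, table):
        """S360: write the changed graph coordinates back into the table.

        graph_points: iterable of (stereo_value, individual_value) pairs,
                      i.e. the coordinates after the S340 change.
        table:        dict mapping stereo_value -> individual_value.
        """
        for stereo_value, individual_value in graph_points:
            table[stereo_value] = individual_value
        return table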

FIG. 17 is a flowchart illustrating a preferred embodiment of the stereoscopic conversion process according to the present invention.

Referring to FIG. 17, the controller 190 checks the execution order of the stereoscopic methods (S400). According to the checked execution order, the first stereoscopic method may be one of the stereoscopic adjustment method, the 3D viewpoint transformation method, and the sloop transformation method; the second stereoscopic method may be another of them; and the third stereoscopic method may be the remaining one.

The image processor 240 accesses the individual adjustment value of the first stereoscopic method (S410).

The image processor 240 checks whether the individual adjustment value accessed in step S410 is 0 (S420).

If the value is not 0, the image processor 240 performs the first stereoscopic method (S430).

The image processor 240 accesses the individual adjustment value of the second stereoscopic method (S440).

The image processor 240 checks whether the individual adjustment value accessed in step S440 is 0 (S450).

If the value is not 0, the image processor 240 performs the second stereoscopic method (S460).

The image processor 240 accesses the individual adjustment value of the third stereoscopic method (S470).

The image processor 240 checks whether the individual adjustment value accessed in step S470 is 0 (S480).

If the value is not 0, the image processor 240 performs the third stereoscopic method (S490).

In some embodiments, depending on the stereoscopic methods selected in the GUI 1300 of FIG. 13, the process may terminate at step S430 or at step S460.
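The FIG. 17 loop reduces to walking the methods in their checked order and skipping any whose individual adjustment value is 0, as in the Python sketch below. The per-method processing functions are hypothetical stand-ins for the image processor 240.

    def stereoscopize(frame, ordered_methods, adjustments, processors):
        """Apply the selected stereoscopic methods to a 2D frame.

        ordered_methods: method names in the execution order checked in S400.
        adjustments:     dict mapping method name -> individual adjustment value.
        processors:      dict mapping method name -> callable(frame, value).
        """
        for method in ordered_methods:
            value = adjustments.get(method, 0)   # S410/S440/S470: access value
            if value == 0:                       # S420/S450/S480: check for 0
                continue                         # skip the method entirely
            frame = processors[method](frame, value)  # S430/S460/S490: perform
        return frame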

The present invention can also be embodied as computer-readable code on a computer-readable recording medium. The computer-readable recording medium includes all kinds of recording devices in which data readable by a computer system is stored. Examples of the computer-readable recording medium include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device; it may also be implemented in the form of a carrier wave (for example, transmission over the Internet). The computer-readable recording medium can also be distributed over networked computer systems so that the computer-readable code is stored and executed in a distributed fashion.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that these embodiments are illustrative only and are not limiting. It will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the scope of the appended claims.

Claims (20)

A method of converting a two-dimensional image into a three-dimensional image, the method comprising:
Calculating an individual adjustment value for at least one stereoscopic method based on a stereoscopic adjustment value;
Stereoscopically converting a two-dimensional image frame according to the at least one stereoscopic method based on the calculated individual adjustment value; and
Displaying the stereoscopic image frame.
The method of claim 1,
Wherein the stereoscopic adjustment value is received multiplexed with the image frame.
The method of claim 1,
Wherein the stereoscopic adjustment value is set based on a detected user action.
The method of claim 1, further comprising:
Detecting a user action requesting the setting of the stereoscopic adjustment value;
In response to detecting the user action, displaying a graphical user interface (GUI) for setting the stereoscopic adjustment value, wherein the graphical user interface displays a numerical value of the stereoscopic adjustment value;
Detecting a user action requesting a change of the stereoscopic adjustment value; and
In response to detecting the user action, changing the stereoscopic adjustment value,
Wherein the two-dimensional image frame is stereoscopically converted based on the changed stereoscopic adjustment value.
The method of claim 1,
Wherein the at least one stereoscopic method is set based on a detected user action.
The method of claim 1,
Wherein the at least one stereoscopic method includes at least one of a stereoscopic adjustment method, a three-dimensional viewpoint transformation method, and a sloop transformation method.
The method of claim 1,
Wherein an execution order of the at least one stereoscopic method is set based on a user action.
The method of claim 1,
Wherein the individual adjustment value includes at least one of a stereoscopic adjustment value, a left eye viewpoint adjustment value, a right eye viewpoint adjustment value, and a sloop adjustment value.
The method of claim 1,
Wherein the individual adjustment value is calculated using a stereoscopic adjustment reference table.
The method of claim 9, further comprising:
Detecting a user action requesting the setting of the stereoscopic adjustment reference table;
In response to detecting the user action, displaying values included in the stereoscopic adjustment reference table as a chart, wherein the chart includes a graph representing the individual adjustment value for the stereoscopic adjustment value, and the graph is displayed to be movable on the coordinates of the chart according to a detected user action;
Detecting a user action requesting a change of a value included in the stereoscopic adjustment reference table; and
In response to detecting the user action, changing the value included in the stereoscopic adjustment reference table according to the graph.
The method of claim 1, further comprising:
Receiving the two-dimensional image frame;
Restoring the received two-dimensional image frame;
Scaling the stereoscopic image frame; and
Sampling the scaled image frame in a 3D stereoscopic image format,
Wherein the stereoscopically converting of the two-dimensional image frame comprises stereoscopically converting the restored two-dimensional image frame, and
Wherein the displaying of the stereoscopic image frame comprises displaying the sampled image frame.
The method according to claim 1 or claim 11,
Wherein the stereoscopic image frame includes a left eye view image frame and a right eye view image frame.
An apparatus for converting a two-dimensional image into a three-dimensional image, the apparatus comprising:
A controller configured to calculate an individual adjustment value for at least one stereoscopic method based on a stereoscopic adjustment value; and
An image processor configured to stereoscopically convert a two-dimensional image frame according to the at least one stereoscopic method based on the calculated individual adjustment value,
Wherein the controller controls the stereoscopic image frame to be displayed.
The apparatus of claim 13,
Wherein the controller detects a user action requesting the setting of the stereoscopic adjustment value, controls a graphical user interface (GUI) for setting the stereoscopic adjustment value to be displayed in response to detecting the user action, detects a user action requesting a change of the stereoscopic adjustment value, and changes the stereoscopic adjustment value in response to detecting the user action, and
Wherein the graphical user interface displays a numerical value of the stereoscopic adjustment value, and the two-dimensional image frame is stereoscopically converted based on the changed stereoscopic adjustment value.
The apparatus of claim 13, wherein the image processor comprises:
A three-dimensional adjustment unit configured to perform a stereoscopic adjustment method on the image frame based on a stereoscopic adjustment value;
A displacement changer configured to perform a three-dimensional viewpoint transformation method on the image frame based on at least one of a left eye viewpoint adjustment value and a right eye viewpoint adjustment value; and
A sloop transform unit configured to perform a sloop transformation method on the image frame based on a sloop adjustment value.
The apparatus of claim 13,
Wherein the controller sets an execution order of the at least one stereoscopic method based on a user action.
The apparatus of claim 13,
Wherein the controller calculates the individual adjustment value using a stereoscopic adjustment reference table.
The apparatus of claim 17,
Wherein the controller detects a user action requesting the setting of the stereoscopic adjustment reference table, displays values included in the stereoscopic adjustment reference table as a chart in response to detecting the user action, detects a user action requesting a change of a value included in the stereoscopic adjustment reference table, and changes the value included in the stereoscopic adjustment reference table according to the graph in response to detecting the user action, and
Wherein the chart includes a graph representing the individual adjustment value for the stereoscopic adjustment value, and the graph is displayed to be movable on the coordinates of the chart according to a detected user action.
The apparatus of claim 13, further comprising:
A receiver configured to receive the two-dimensional image frame;
A video decoder configured to restore the received two-dimensional image frame;
A scaler configured to scale the stereoscopic image frame; and
A formatter configured to sample the scaled image frame in a 3D stereoscopic image format and output the sampled image frame,
Wherein the image processor stereoscopically converts the restored two-dimensional image frame.
An apparatus for converting a two-dimensional image into a three-dimensional image, the apparatus comprising:
A receiver configured to receive a two-dimensional image frame;
An interface unit configured to sense a user action requesting a change of a stereoscopic adjustment value;
A controller configured to detect the sensed user action, change the stereoscopic adjustment value in response to detecting the user action, and calculate an individual adjustment value for at least one stereoscopic method based on the changed stereoscopic adjustment value;
A signal processor configured to stereoscopically convert the two-dimensional image frame according to the at least one stereoscopic method based on the calculated individual adjustment value; and
A display configured to display the stereoscopic image frame.
KR1020110071346A 2011-07-19 2011-07-19 Apparatus and method for converting 2 dimensional image into 3 dimensional image KR20130010613A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020110071346A KR20130010613A (en) 2011-07-19 2011-07-19 Apparatus and method for converting 2 dimensional image into 3 dimensional image


Publications (1)

Publication Number Publication Date
KR20130010613A true KR20130010613A (en) 2013-01-29

Family

ID=47839790

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020110071346A KR20130010613A (en) 2011-07-19 2011-07-19 Apparatus and method for converting 2 dimensional image into 3 dimensional image

Country Status (1)

Country Link
KR (1) KR20130010613A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014168614A1 (en) * 2013-04-09 2014-10-16 Bitanimate, Inc. Two-dimensional video to three-dimensional video conversion method and system
US9172940B2 (en) 2009-02-05 2015-10-27 Bitanimate, Inc. Two-dimensional video to three-dimensional video conversion based on movement between video frames



Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination