KR101239149B1 - Apparatus and method for converting multiview 3d image - Google Patents


Info

Publication number
KR101239149B1
Authority
KR
South Korea
Prior art keywords
image
depth map
view
information
image frame
Application number
KR1020110081365A
Other languages
Korean (ko)
Other versions
KR20130019295A (en)
Inventor
박상민
Original Assignee
(주)레이딕스텍
Application filed by (주)레이딕스텍 filed Critical (주)레이딕스텍
Priority to KR1020110081365A priority Critical patent/KR101239149B1/en
Publication of KR20130019295A publication Critical patent/KR20130019295A/en
Application granted granted Critical
Publication of KR101239149B1 publication Critical patent/KR101239149B1/en

Abstract

The present invention relates to a multi-view three-dimensional image conversion apparatus. The apparatus may include a display unit that displays a first image and a second image; a sensor unit that detects a user's viewpoint with respect to the screen of the display unit; a map generator that receives a current image frame and a previous image frame and generates a depth map for the movement of each of the unit elements constituting the current image frame; and a multi-view image generator that generates the first image by moving each of the unit elements of the current image frame based on the depth map and the user's viewpoint, and generates the second image through tilt control of the current image frame based on the user's viewpoint. The first image and the second image are images for constituting a 3D image.

Description

Multiview 3D image conversion device and method {APPARATUS AND METHOD FOR CONVERTING MULTIVIEW 3D IMAGE}

The present invention relates to an image conversion system, and more particularly, to an apparatus and method for converting a 2D image into a multiview 3D image according to a user's viewpoint in real time.

Rapidly developing 3D imaging technology is moving beyond simply delivering stereoscopic images to users and progressing toward providing realistic images that match actual viewing conditions; it is evolving from transmitting 3D information to delivering a 3D experience. Unlike single-view 3D image technology, multi-view 3D image conversion technology provides a stereoscopic image whose binocular disparity varies with the change of the user's viewpoint, as in the real world.

Unlike 2D images, 3D images contain a large amount of information, which makes them expensive to produce. Multi-view 3D imaging requires even more information than single-view 3D imaging, and the production of 3D content, including 3D broadcasting, has not taken off because of the difficulty of meeting the various standards.

For 3D image display apparatuses and related industries to grow and be used in a variety of settings, it is more competitive to generate not only single-view 3D images but also multi-view 3D images.

Therefore, there is a need to convert a two-dimensional image into a three-dimensional image and to vary the stereoscopic effect of the resulting three-dimensional image according to the user's viewpoint.

An object of the present invention is to provide a multi-view 3D image converting apparatus and method capable of converting a 2D image into a multi-view 3D image according to a user's viewpoint.

The multi-view three-dimensional image conversion apparatus of the present invention includes a display unit that displays a first image and a second image; a sensor unit that detects the user's viewpoint with respect to the screen of the display unit; a map generator that receives the current image frame and the previous image frame and generates a depth map for the movement of each of the unit elements constituting the current image frame; and a multi-view image generator that generates the first image by moving each of the unit elements of the current image frame based on the depth map and the user's viewpoint, and generates the second image by controlling the tilt of the current image frame based on the user's viewpoint. The first image and the second image are images for constituting a 3D image.

In this embodiment, the map generator includes a preprocessor that uses the previous image frame to select one base map from among a plurality of base maps in which perspective information is set, and to select one brightness correction table from among a plurality of brightness correction tables in which brightness values are set; a full image processor that generates a full depth map setting the perspective of the current image frame according to the selected base map; a local image processor that generates a local depth map by correcting the brightness of the current image frame with the selected brightness correction table; and a depth map generator that generates the depth map by calculating the full depth map and the local depth map together.

In this embodiment, the multi-view image generator includes a parallax processor that selects first multi-view information and second multi-view information corresponding to the user's viewpoint, converts the depth map into a multi-view depth map according to the first multi-view information, and generates the first image for reproducing a 3D image by calculating the multi-view depth map with the current image frame; and a tilt controller that generates the second image for reproducing the 3D image through a tilt operation using the second multi-view information.

In this embodiment, the first multi-view information includes information on parameters and depth directions for multi-viewing the depth map.

In this embodiment, the second multiview information includes information on an inclination value and an inclination direction for multiviewing the current image frame.

In this embodiment, the multi-view image generator may further include a first distance controller that moves the first image in one of the left and right directions according to a first distance control signal, and a second distance controller that moves the second image in one of the left and right directions according to a second distance control signal.

In this embodiment, the first distance control signal and the second distance control signal are signals for either increasing or decreasing the distance between the first image and the second image.

In this embodiment, the parallax processor includes a buffer that stores the current image frame; a parallax calculator that receives the depth map and the first multi-view information, converts the depth map into a multi-view depth map according to the first multi-view information, and generates the first image by calculating the current image frame output from the buffer with the multi-view depth map; a 3D image interpolator that interpolates the first image; and a circular buffer that sequentially selects and outputs the interpolated first image.

In this embodiment, the tilt control unit receives the second multi-view information and multi-views the current image frame according to the second multi-view information.

In this embodiment, if the first image is a left image, the second image is a right image, and if the first image is a right image, the second image is a left image.

In this embodiment, the unit element is a pixel.

The multi-view three-dimensional image conversion method according to an embodiment of the present invention includes detecting the user's viewpoint with respect to the display screen; generating a depth map for the movement of each of the unit elements constituting the current image frame using the previous image frame and the current image frame; selecting first multi-view information corresponding to the user's viewpoint; converting the depth map into a multi-view depth map based on the first multi-view information; generating a first image in which each of the unit elements of the current image frame is moved according to the multi-view depth map; selecting second multi-view information corresponding to the user's viewpoint; generating a second image through tilt control of the current image frame based on the second multi-view information; and outputting the first image and the second image through synchronous control between them.

In this embodiment, the first multi-view information includes information on parameters and depth directions for multi-viewing the depth map.

In this embodiment, the second multiview information includes information on an inclination value and an inclination direction for multiviewing the current image frame.

In this embodiment, the current image frame and the previous image frame are two-dimensional image frames.

According to the present invention, the multi-view three-dimensional image conversion apparatus can convert a two-dimensional image into a multi-view three-dimensional image by generating left and right images from the two-dimensional image according to the user's viewpoint. In addition, because the apparatus can obtain a multi-view 3D image from a 2D image, it can reduce the cost of producing multi-view 3D images.

1 is a diagram illustrating a multi-view 3D image conversion apparatus according to an embodiment of the present invention;
2 is a view showing a user viewpoint measured by a sensor unit according to an embodiment of the present invention;
3 is a view showing a map generating unit according to an embodiment of the present invention;
4 is a diagram illustrating base maps and base map indexes according to an embodiment of the present invention;
5A through 5D illustrate brightness correction using brightness correction tables corresponding to brightness correction table indexes '1', '2', '3', and 'k' according to an embodiment of the present invention;
6 is a view showing a local image processing unit according to an embodiment of the present invention;
7 is a view showing a brightness correction unit according to an embodiment of the present invention;
8 is a view showing a depth map generator according to an embodiment of the present invention;
9 illustrates a full depth map, a local depth map, and a depth map according to an embodiment of the present invention;
10 is a view illustrating a depth map generation operation of a depth map generator according to an embodiment of the present invention;
11 illustrates a depth map assigned to an image field according to depth map correction according to an embodiment of the present invention;
12 illustrates a depth map and a corrected depth map according to an embodiment of the present invention;
13 is a diagram illustrating a structure of a multiview image generator according to an embodiment of the present invention;
14 illustrates a depth map and a slope map according to a user's viewpoint according to an embodiment of the present invention;
15 is a view illustrating a parallax processing unit according to an embodiment of the present invention;
16 is a view illustrating a 3D image generating operation of a parallax processor according to an embodiment of the present invention;
17 is a view illustrating a tilt control operation of a tilt controller according to an embodiment of the present invention;
18 is a view schematically showing first and second images generated according to an embodiment of the present invention; and
19 is a diagram illustrating operations of the first distance controller and the second distance controller according to an exemplary embodiment of the present invention.

The advantages and features of the present invention, and how to achieve them, will become clear with reference to the embodiments described in detail below together with the accompanying drawings. The present invention is not, however, limited to the embodiments described herein and may be embodied in other forms. Rather, the embodiments are provided so that this disclosure will be thorough enough to convey the technical idea of the present invention to those skilled in the art.

In the drawings, embodiments of the present invention are not limited to the specific forms shown, and dimensions are exaggerated for clarity. Parts denoted by the same reference numerals throughout the specification represent the same components.

The expression "and/or" is used herein to mean including at least one of the components listed before and after it. The expression "connected/coupled" is used to include both direct connection to another component and indirect connection through another component. In this specification, singular forms also include the plural unless the phrase specifically states otherwise. The terms "comprises" or "comprising" as used herein do not exclude the presence or addition of one or more other components, steps, operations, or devices.

The present invention provides a multi-view 3D image conversion apparatus for converting a 2D image into a multiview 3D image according to a user's viewpoint.

1 is a diagram illustrating a multi-view 3D image conversion apparatus according to an embodiment of the present invention.

Referring to FIG. 1, the multi-view 3D image conversion apparatus 100 includes a sensor unit 110, a map generator 120, a multi-view image generator 130, and a display unit 140.

The sensor unit 110 detects the user's viewpoint, that is, the position of the user's pupils, with respect to the screen of the display unit 140. The sensor unit 110 may be included in the display unit 140 or in the multi-view image generator 130, and may include a camera sensor, a motion detection sensor, a temperature sensor, an infrared sensor, or the like for detecting the user's viewpoint. The sensor unit 110 outputs the detected user viewpoint information to the multi-view image generator 130.

The map generator 120 generates a depth map used to obtain a first image (one of the images constituting the 3D image) from the 2D image, that is, from the current image frame (the n-th image frame) and the previous image frame (the (n-1)-th image frame). The depth map is a map in which a value (depth) for shifting the position of each of the unit elements (for example, pixels) constituting the 2D image is set. The map generator 120 outputs the generated depth map to the multi-view image generator 130.
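The role of the depth map can be sketched for a single scanline. This is a simplified illustration, not the patented hardware: the clamping at the frame edge and the use of 0 to mark the holes left behind are assumptions, and in the described apparatus such holes would be filled by the 3D image interpolator.

```python
def shift_row(row, depths):
    """Shift each pixel of one scanline left by its depth value, a
    simplified model of how a depth map moves the unit elements
    (pixels) of the current frame to form the first image."""
    width = len(row)
    out = [0] * width          # 0 marks holes left by the shift
    for x in range(width):
        # Clamp the shifted position to the frame boundary.
        nx = max(0, min(width - 1, x - depths[x]))
        out[nx] = row[x]
    return out

row = [10, 20, 30, 40, 50]
depths = [0, 0, 1, 1, 2]       # nearer pixels get larger shifts
print(shift_row(row, depths))  # -> [10, 30, 50, 0, 0]
```

Note how the pixel with depth 1 overwrites its left neighbour: a nearer pixel occluding a farther one is exactly the binocular-parallax effect the depth map encodes.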

The multi-view image generator 130 generates a first image and a second image (for example, a left image and a right image) according to binocular parallax. The multi-view image generator 130 selects multi-view information for multi-view processing based on the user viewpoint information and performs multi-view processing on the first and second images through depth or tilt control using the selected multi-view information. In response to external control signals (a first distance control signal D_CTRL1 and a second distance control signal D_CTRL2), the multi-view image generator 130 can readjust the depth of the multi-view processed images, that is, set the sense of distance of the displayed image. The multi-view image generator 130 outputs the multi-view processed first and second images to the display unit 140.

Meanwhile, the first image and the second image are generated to provide a three-dimensional stereoscopic effect to the user within one image frame, and therefore have an association with each other. Accordingly, the multi-view image generator 130 outputs the multi-view processed images (the first image and the second image) through synchronous control between them.

The display unit 140 receives the multiview processed first image and the second image, and outputs the received first image and the second image through a display screen. In this case, the first image and the second image output from the display 140 are images generated according to a position viewed by the user, that is, a user's viewpoint.

Accordingly, the multi-view three-dimensional image conversion apparatus 100 of the present invention can generate two images (a first image and a second image) from the two-dimensional image according to the user's viewpoint. The first image and the second image are images according to binocular parallax: if the first image is the left (L) image, the second image is the right (R) image, and if the first image is the right (R) image, the second image is the left (L) image.

The multi-view 3D image conversion apparatus 100 may generate the first image according to the depth map, and generate the second image by the tilt control. In this case, the multi-view 3D image conversion apparatus 100 multi-views the first image and the second image based on user viewpoint information. As a result, the multi-view three-dimensional image conversion apparatus 100 of the present invention may provide the user with a three-dimensional image according to the user's gaze at the screen.

2 is a diagram illustrating a user viewpoint measured by a sensor unit according to an exemplary embodiment of the present invention.

Referring to FIG. 2, the sensor unit 110 detects the user's viewpoint with respect to the center of the screen 141 of the display unit. For example, the sensor unit 110 divides the space in front of the screen into 22 areas according to the user's viewpoint and detects which area the viewpoint falls in. Each divided area is represented by a coordinate pair '(x, y)'.

The first number of the pair (0 or 1) indicates the distance of the user's viewpoint from the screen 141: an area marked 0 is near the screen relative to an area marked 1, and an area marked 1 is far from the screen relative to an area marked 0.

The second number of the pair (-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5) indicates the direction (left or right) of the user's viewpoint relative to the center of the screen 141: 5 means the user's viewpoint is to the right of the screen 141, -5 means it is to the left, and 0 means it is directly in front. The sensor unit 110 thus divides the viewing space into 22 areas with coordinates (0, -5) through (0, 5) and (1, -5) through (1, 5). For example, a user viewpoint with coordinates (0, 0) is nearer to the screen than one with coordinates (1, 0); a viewpoint at (1, -5) is to the left of one at (1, -2); and a viewpoint at (1, 5) is to the right of one at (1, 1).
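Quantizing a measured viewpoint into this 22-area grid can be sketched as follows. The patent only defines the grid itself; the distance threshold, the angular range, and the measurement units here are illustrative assumptions.

```python
def viewpoint_region(distance_cm, angle_deg,
                     far_threshold_cm=100.0, max_angle_deg=50.0):
    """Quantize a measured user viewpoint into one of the 22 regions:
    the first coordinate is 0 (near) or 1 (far), the second runs from
    -5 (left) to 5 (right), with 0 directly in front of the screen."""
    d = 1 if distance_cm >= far_threshold_cm else 0
    # Map the horizontal gaze angle onto the 11 direction bins -5..5.
    ratio = max(-1.0, min(1.0, angle_deg / max_angle_deg))
    direction = round(ratio * 5)
    return (d, direction)

print(viewpoint_region(60, 0))    # near, dead center -> (0, 0)
print(viewpoint_region(150, -50)) # far, viewing from the left -> (1, -5)
```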

The sensor unit 110 may detect a user's viewpoint and may provide the detected user's viewpoint to the multi-viewpoint image generator 130. The user's viewpoint represents a point at which the gaze of the user who views the image through the screen 141 is located. For example, the user's viewpoint may be represented by coordinates (or coordinates indicating a user's location) of the area where the user's gaze is located with respect to the screen 141.

3 is a diagram illustrating a map generator according to an exemplary embodiment of the present invention.

Referring to FIG. 3, the map generator 120 includes a preprocessor 121, a full image processor 122, a local image processor 123, a depth map generator 124, and a depth map corrector 125.

The preprocessor 121 receives the two-dimensional previous ((n-1)-th) image frame. That is, while the current ((n)-th) image frame is being input to the multi-view three-dimensional image conversion apparatus 100 for conversion into a three-dimensional image, the preprocessor 121 works on the frame that preceded it.

When the current (n-th) image frame is input to the multi-view 3D image conversion apparatus 100, values calculated from the previous ((n-1)-th) image frame are used to process it. Because consecutively input image frames have similar characteristics, the values extracted from the previous image frame can be applied to the current image frame.

The preprocessor 121 selects, from among the plurality of base maps, the index of the base map most similar to the previous ((n-1)-th) image frame. The base map is a map that sets the depth of each pixel of the frame according to the perspective of the entire image, and its index is used to generate the full depth map of the current 2D image frame. The preprocessor 121 outputs the selected base map index to the full image processor 122.

Also, the preprocessor 121 selects, from among the plurality of brightness correction tables, the index of the table most similar to the brightness distribution of the previous ((n-1)-th) image frame. The brightness correction table is used to extract depth information by correcting the brightness of each pixel of the input current ((n)-th) image frame. The preprocessor 121 outputs the selected table index to the local image processor 123.

The full image processor 122 generates a full depth map corresponding to the base map index selected by the preprocessor 121. That is, by generating the full depth map, the full image processor 122 sets a stereoscopic effect for the entire two-dimensional image frame. The full depth map has the depths set by the base map. The full image processor 122 outputs the generated full depth map to the depth map generator 124.

The local image processor 123 generates a local depth map by setting the depth of each pixel from the input current ((n)-th) 2D image frame, thereby setting a per-pixel stereoscopic effect for the first (3D) image to be generated. For example, when the current image frame is input, the local image processor 123 extracts the brightness information of the image and then generates the local depth map by correcting the extracted brightness values with the brightness correction table corresponding to the index calculated by the preprocessor 121. The generated local depth map is output to the depth map generator 124.

The depth map generator 124 receives the full depth map and the local depth map and generates the depth map by calculating the two together. The depth map holds a depth for each pixel of the current image frame from which the first image is generated. The depth map generator 124 outputs the generated depth map to the depth map corrector 125.
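The patent says only that the two maps are "calculated" together; one plausible combination is a weighted sum, sketched below with assumed equal weights.

```python
def combine_depth_maps(full_map, local_map, w_full=0.5, w_local=0.5):
    """Combine the full depth map (global perspective from the base
    map) with the local depth map (per-pixel brightness depth).
    The weighted sum and the weights are assumptions."""
    return [[round(w_full * f + w_local * l)
             for f, l in zip(f_row, l_row)]
            for f_row, l_row in zip(full_map, local_map)]

full_map = [[0, 0], [4, 4]]    # top of frame far, bottom near
local_map = [[2, 6], [2, 6]]   # brighter pixels assigned larger depth
print(combine_depth_maps(full_map, local_map))  # -> [[1, 3], [3, 5]]
```

The combined map keeps the top-to-bottom gradient of the full map while letting the per-pixel variation of the local map show through, which is the point of generating the two maps separately.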

The depth map corrector 125 corrects the depth map to improve the image quality of the 3D image generated after parallax processing. With the corrected depth map, the generated first image is rendered more clearly and is more robust to noise.

For example, the depth map corrector 125 corrects the depth of each pixel through depth averaging and depth correction over the depth values of the pixels located around it in the depth map. The depth map corrector 125 outputs the corrected depth map to the multi-view image generator 130.
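The neighbourhood averaging can be sketched as follows; the 3x3 window size and the rounding are assumptions, since the patent does not fix them.

```python
def correct_depth(depth, x, y):
    """Replace the depth at (x, y) with the average of the depths in
    its 3x3 neighbourhood, a simple stand-in for the corrector's
    depth-average step (window size assumed)."""
    h, w = len(depth), len(depth[0])
    vals = [depth[j][i]
            for j in range(max(0, y - 1), min(h, y + 2))
            for i in range(max(0, x - 1), min(w, x + 2))]
    return round(sum(vals) / len(vals))

noisy = [[4, 4, 4],
         [4, 9, 4],   # a single-pixel depth spike (noise)
         [4, 4, 4]]
print(correct_depth(noisy, 1, 1))  # spike pulled toward its neighbours -> 5
```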

4 is a diagram illustrating base maps and base map indexes according to an embodiment of the present invention.

Referring to FIG. 4, the preprocessor 121 selects the index of one of the predefined base maps through perspective estimation of the 2D image. The base map thus carries the perspective information of the image.

For example, in an image captured by a camera, a nearby subject has a small distance (perspective) while a background such as a landscape, the sea, or the sky has a larger distance than the subject. Therefore, if the difference in distance between the upper and lower regions of an image frame is large, the frame is regarded as a far image; if it is small, as a near image. If the depth of the far region is set to '0', depth values greater than '0' are assigned as the region gets nearer. The base map may thus have different depths depending on whether the 2D image is a near image or a far image: in a near image, the depth difference between the background and the object is smaller than in a far image. Assuming that the upper part of a landscape frame is far (background) and the lower part is near (object), the far image has a larger depth difference between the object region and the background region than the near image.
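The selection of a base-map index via perspective estimation can be sketched as below. The patent does not specify the estimator; using the brightness gap between the upper and lower halves of the frame as a proxy for the top/bottom distance difference, and the scaling constant, are illustrative assumptions.

```python
def select_base_map_index(frame, num_maps=4):
    """Pick a base-map index ('1' = near ... 'i' = far) from the
    top/bottom contrast of a frame, a stand-in for the patent's
    perspective estimation (the heuristic is an assumption)."""
    h = len(frame)
    top = [v for row in frame[:h // 2] for v in row]
    bottom = [v for row in frame[h // 2:] for v in row]
    gap = abs(sum(top) / len(top) - sum(bottom) / len(bottom))
    # A larger top/bottom gap suggests a far (landscape-like) image,
    # which maps to a higher index.
    return min(num_maps, 1 + int(gap / 64))

near_frame = [[100, 100], [110, 110]]  # little top/bottom difference
far_frame = [[230, 230], [40, 40]]     # bright sky over dark foreground
print(select_base_map_index(near_frame), select_base_map_index(far_frame))
```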

Here, the base map index of the near image is set to '1' and the base map index of the far image is set to 'i'.

For example, the base map corresponding to base map index '1' has an even distribution of depth values, with no difference between the background and object regions. In the base map corresponding to index 'i', however, the object region at the bottom has a larger depth value than the background region. As the image changes from a near image to a far image, the base map index increases sequentially from '1' to 'i'.

In FIG. 4, the figures on the left show the perspective and those on the right show the base map on which that perspective is set, for each base map index. Darker areas of the base map are farther away than brighter areas. The base maps set according to distance are described here as an example and may be implemented in various other forms.

By analyzing the entire frame of the 2D image, the preprocessor 121 selects the index of the base map most similar to it and transmits the selected index to the full image processor 122, which in turn transmits the corresponding base map to the depth map generator 124. The base maps corresponding to the base map indexes may be stored in a memory located inside or outside the full image processor 122.

Meanwhile, when a two-dimensional image is first input to the multi-view three-dimensional image conversion apparatus 100, no previous ((n-1)-th) frame of the input current ((n)-th) frame exists. In this case, the preprocessor 121 may output a preset base map index (for example, base map index 'i').

In addition, the preprocessor 121 selects a brightness correction table index for brightness correction of each pixel in the 2D image. The preprocessor 121 outputs the selected table index to the local image processor 123.

The local image processor 123 may correct the image using the brightness correction table corresponding to the brightness correction table index generated by the preprocessor 121. The brightness correction tables may be stored in a memory located inside the local image processor 123 or in a memory located outside the local image processor 123.

5A to 5D illustrate brightness correction using brightness correction tables corresponding to brightness correction table indices '1', '2', '3', and 'k' according to an embodiment of the present invention.

Local depth maps are generated using the brightness of the image. In an overall bright image, an overall dark image, or an unclear image, the brightness values are concentrated in a narrow range. If a local depth map is created from such an image, the quality of the 3D image is degraded and the stereoscopic effect is reduced; the more uniform the distribution of brightness values, the better the depth map that can be generated.

The brightness distribution of the entire image can be made uniform by substituting the brightness value of each pixel through the brightness correction table selected according to the characteristics of the image. The indexes of the brightness correction tables are set from '1' to 'k' according to predefined brightness distributions. The input brightness value (or input brightness level) of a table is the normalized brightness value of a pixel extracted from the input image, and the output brightness value (or output brightness level) is the corrected brightness value, normalized to the same range as the input.

The input and output brightness values in the tables are used as follows. The previous frame is subsampled to extract pixel brightness information, and each value is normalized to one of 16 levels (0-15). The frequency of each brightness level is accumulated until the input image frame is complete, producing a distribution of brightness values over levels 0 to 15. Before the next image frame is input, the index of the table most similar to this distribution is set. The next input image frame is then corrected by looking up the brightness correction table corresponding to the preset index and substituting each pixel's brightness value with the table's corresponding output value.
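The histogram accumulation and table selection described above can be sketched as follows. The 16-level normalization follows the text; the reference histogram profiles and the sum-of-absolute-differences similarity metric are assumptions, since the patent only says the "similar" table index is selected.

```python
def brightness_histogram(frame):
    """Accumulate a 16-bin histogram of pixel brightness, with 8-bit
    values normalized to the 0-15 levels described in the text."""
    hist = [0] * 16
    for row in frame:
        for v in row:
            hist[v >> 4] += 1   # 0-255 -> 0-15
    return hist

def select_table_index(hist, table_profiles):
    """Choose the brightness-correction table whose reference
    histogram is closest to the measured one (metric assumed)."""
    def dist(profile):
        return sum(abs(a - b) for a, b in zip(hist, profile))
    best = min(range(len(table_profiles)),
               key=lambda i: dist(table_profiles[i]))
    return best + 1   # table indexes start at '1'

frame = [[0, 16], [32, 255]]
print(brightness_histogram(frame))  # one pixel each in bins 0, 1, 2, 15
```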

Referring to FIG. 5A, when the brightness correction table index is '1', an input brightness value of the brightness correction table may be divided into '0' to '15'. In this case, the input brightness value '0' is converted (or replaced) into the output brightness value '0', and the input brightness value '1' is converted to the output brightness value '1'. In this order, the input brightness values '0' to '15' correspond to the output brightness values '0' to '15', respectively. At this time, the frequency of the output brightness value according to the input brightness value is shown as a graph on the right side as an example.

Referring to FIG. 5B, when the brightness correction table index is '2', the input brightness values of the brightness correction table may be divided into '0' to '15'. In this case, the input brightness value '0' is converted (or substituted) into the output brightness value '0', and the input brightness value '1' is also converted into the output brightness value '0'. In this order, the input brightness values '0' to '15' correspond to the output brightness values '0, 0, 0, 0, 0, 0, 0, 1, 2, 4, 6, 8, 10, 12, 14, 15', respectively. The frequency of the output brightness values according to the input brightness values is shown as an example in the graph on the right.

Referring to FIG. 5C, when the brightness correction table index is '3', an input brightness value of the brightness correction table may be divided into '0' to '15'. In this case, the input brightness value '0' is converted (or replaced) into the output brightness value '0', and the input brightness value '1' is converted to the output brightness value '1'. In this order, input brightness values' 0 'to' 15 'are output brightness values' 0, 1, 2, 4, 6, 8, 10, 12, 14, 15, 15, 15, 15, 15, 15, 15. 'Corresponds to each. At this time, the frequency of the output brightness value according to the input brightness value is shown as a graph on the right side as an example.

Referring to FIG. 5D, when the brightness correction table index is 'k', the input brightness values of the brightness correction table may be divided into '0' to '15'. At this time, the input brightness value '0' is converted (or replaced) to the output brightness value '0', and the input brightness value '1' is converted to the output brightness value '0'. Input brightness values' 0 'to' 15 'in this order are output brightness values' 0, 0, 0, 0, 0, 1, 3, 5, 7, 9, 11, 13, 15, 15, 15, 15 Corresponds to each. At this time, the frequency of the output brightness value according to the input brightness value is shown as a graph on the right side as an example.

FIGS. 5A through 5D illustrate, as examples, brightness correction tables corresponding to the indices '1' through 'k'. The brightness correction table for each index can be implemented with various values.
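Applying a selected table is a simple per-pixel lookup. The sketch below uses the 16 output values of the index-'2' table of FIG. 5B, copied from the description above; the function name is hypothetical.

```python
# The 16 output values of the index-'2' brightness correction table
# (FIG. 5B), copied from the description above.
TABLE_INDEX_2 = [0, 0, 0, 0, 0, 0, 0, 1, 2, 4, 6, 8, 10, 12, 14, 15]

def apply_table(levels, table):
    """Replace each input brightness level (steps 0-15) by its table output."""
    return [table[level] for level in levels]
```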

FIG. 6 is a diagram illustrating a local image processor according to an exemplary embodiment of the present invention.

Referring to FIG. 6, the local image processor 123 includes a brightness corrector 210 and a depth generator 220.

The brightness corrector 210 receives an index of the brightness correction table and a 2D image. The brightness correction unit 210 corrects the brightness information of the input 2D image using the brightness correction table. In this case, the brightness correction table corresponds to the index of the brightness correction table input to the brightness correction unit 210. The corrected 2D image is output to the depth generator 220.

The depth generator 220 generates a local depth map by setting a depth corresponding to the corrected brightness value of the corrected 2D image.

FIG. 7 is a diagram illustrating a brightness corrector according to an exemplary embodiment of the present invention.

Referring to FIG. 7, the brightness corrector 210 includes a table selector 211, a brightness value converter 212, a brightness information extractor 213, and a corrected brightness value generator 214.

The table selector 211 receives a brightness correction table index and selects the brightness correction table corresponding to that index from among the plurality of brightness correction tables. In this way, the brightness correction table is selected using the previous image. The table selector 211 outputs the selected brightness correction table to the brightness value converter 212.

The brightness value converter 212 up-samples the 16-step (0-15) brightness correction values of the brightness correction table to steps 0-n corresponding to the range of brightness values of the input image, where n is the maximum brightness value. For example, when n is 255, the up-sampling produces steps 0-255, that is, 256 brightness levels. The brightness value converter 212 outputs the up-sampled brightness level corresponding to the brightness value received from the brightness information extractor 213 to the corrected brightness value generator 214.

The brightness information extractor 213 extracts a brightness level (steps 0-n) from the 2D image and outputs it to the corrected brightness value generator 214. In addition, the brightness information extractor 213 may provide the extracted brightness value to the brightness value converter 212 so that the corresponding up-sampled brightness value can be output.

The corrected brightness value generator 214 replaces the brightness value of the brightness information extractor 213 with the upsampled brightness value of the brightness value converter 212. The corrected brightness value generator 214 outputs the corrected brightness value generated through the substitution operation to the depth generator 220.

On the other hand, the brightness levels of the brightness correction table (steps 0-15) differ from the brightness levels of the input image (steps 0-n). Different brightness levels are used in order to minimize the memory required to store the brightness correction table and to make the distribution of brightness values easy to check. For example, to convert brightness values of steps 0-15 into steps 0-255, the step-0-15 brightness values must be up-sampled by a factor of 16; that is, each step of the input and output values of the brightness correction table is expanded 16-fold. In this case, the brightness value converter 212 may use linear interpolation to perform the up-sampling operation.
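The 16-to-256-level up-sampling with linear interpolation might be sketched as follows. The placement of the 16 anchor points on the 0-255 axis (step i at position i*17) is an assumption; the description states only that linear interpolation is used.

```python
def upsample_table(table16, out_levels=256):
    """Linearly interpolate a 16-entry correction table to `out_levels` entries.

    Each of the 16 input steps is first mapped onto the output scale
    (step i anchored at i*(out_levels-1)//15, so step 15 -> 255), then
    the intermediate levels are filled in by linear interpolation.
    """
    anchors_x = [i * (out_levels - 1) // 15 for i in range(16)]
    anchors_y = [v * (out_levels - 1) // 15 for v in table16]
    result, seg = [], 0
    for x in range(out_levels):
        # Advance to the segment [anchors_x[seg], anchors_x[seg+1]] containing x.
        while seg < 14 and x > anchors_x[seg + 1]:
            seg += 1
        x0, x1 = anchors_x[seg], anchors_x[seg + 1]
        y0, y1 = anchors_y[seg], anchors_y[seg + 1]
        result.append(y0 + (y1 - y0) * (x - x0) // (x1 - x0))  # linear interpolation
    return result
```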

FIG. 8 is a diagram illustrating a depth map generator according to an exemplary embodiment of the present invention.

Referring to FIG. 8, the depth map generator 124 includes a map calculator 310.

The map calculator 310 generates a depth map by combining the entire depth map and the local depth map. In order to generate the depth map, the map calculator 310 may perform a preset operation (eg, conditional statement operation) between pixels corresponding to each other.
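A minimal sketch of the pixel-wise combination follows. The description says only that a "preset operation (e.g., conditional statement operation)" is used, so the rule below is an assumption chosen to match the FIG. 9 example, in which a full-depth value of '0' combined with a local-depth value of '1' yields '0'.

```python
def combine_depth_maps(full_map, local_map, max_depth=255):
    """Combine the full depth map and the local depth map pixel by pixel.

    Assumed conditional rule: where the full (perspective) depth is 0,
    the result stays 0; elsewhere the local texture depth is added and
    the sum is clamped to max_depth.
    """
    depth_map = []
    for full_row, local_row in zip(full_map, local_map):
        row = []
        for f, l in zip(full_row, local_row):
            if f == 0:
                row.append(0)                     # background keeps zero depth
            else:
                row.append(min(f + l, max_depth))  # perspective + local texture
        depth_map.append(row)
    return depth_map
```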

FIG. 9 is a diagram illustrating an entire depth map, a local depth map, and a depth map according to an embodiment of the present invention.

Referring to FIG. 9, the depth map generator 124 may generate the depth map 430 by combining the entire depth map 410 and the local depth map 420.

Depth is set in each of the pixels in the full depth map 410, the local depth map 420, and the depth map 430. Here, the depth represents the moving distance according to the change of position of each pixel. For example, the pixel having a depth of '0' does not change its position when generating the first image. However, a pixel having a depth of '2' changes its position by 2 pixels when generating the first image. In addition, the pixel having a depth of '10' changes its position by 10 pixels when generating the first image.

For example, the first pixel at the top left of the full depth map 410, having a depth of '0', may be combined with the first pixel at the top left of the local depth map 420, having a depth of '1'. Through this combination, the first pixel at the top left of the depth map 430 may be set to a depth of '0'.

FIG. 10 is a diagram illustrating a depth map generation operation of the depth map generator according to an exemplary embodiment of the present invention.

Referring to FIG. 10, the map calculator 310 generates a depth map by combining the full depth map and the local depth map. The full depth map sets the perspective of the entire image frame, while the local depth map sets the three-dimensional texture of each region of the image. By combining the two, the map calculator 310 creates a depth map that can express both the overall perspective and the three-dimensional texture of each region in the image.

FIG. 11 is a diagram illustrating a depth map allocated to an image field through depth map correction according to an embodiment of the present invention.

Referring to FIG. 11, the depth map corrector 125 corrects a depth map. For example, the depth map corrector 125 may correct depth information defined over a plurality of fields constituting one image frame. Here, each of the plurality of fields is a line of pixels constituting one image frame.

The depth map corrector 125 corrects the depth of each pixel in the depth map based on two lines through the calculation defined below. The depth information consists of the depth information of even-numbered pixels, D_e(p), and the depth information of odd-numbered pixels, D_o(p). Here, 'p' is the position of the depth information within one line or one field.

For example, when the depth values D_e(p) and D_o(p) are corrected, the corrected values D'_e(p) and D'_o(p) are output. D_o^(n-1)(p) and D_o^(n-1)(p-1) denote the depth information of the previous field. D'_o(p) and D'_e(p) can be represented by Equation 1 below.

[Equation 1]

D'_o(p) = ( D_o^(n-1)(p) + D_o^(n-1)(p-1) ) / 2

D'_e(p) = ( D'_o(p) + D_o(p-1) ) / 2

The corrected depth information of the odd area, D'_o(p), is obtained by averaging the depth information of the odd pixel at the same position p in the previous field, D_o^(n-1)(p), with the depth information of the odd pixel at the previous position (p-1), D_o^(n-1)(p-1).

The corrected depth information of the even area, D'_e(p), is obtained by averaging the corrected odd-pixel depth information at the current position p based on the current field, D'_o(p), with the depth information of the odd pixel at the previous position (p-1), D_o(p-1).

Depth correction of the odd area reduces the influence of noise by smoothing the depth map. Depth correction of the even area, by building on the corrected depth information assigned to the odd area, minimizes the overlap of pixel shifts that may occur during three-dimensional parallax processing and thereby achieves clearer image quality.
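The two averaging rules can be sketched as follows. The original equations appear only as images, so the formulas in the comments are a reconstruction from the surrounding description; using the previous field's odd-pixel depths throughout and reusing the value at p for the missing p-1 neighbor at the line start are assumptions.

```python
def correct_depth_line(prev_odd):
    """Return (corrected_odd, corrected_even) depth values for one line.

    prev_odd holds the odd-pixel depth values D_odd(p) of the previous
    field; at p = 0 the value at p is reused for the missing neighbor.
    """
    corrected_odd, corrected_even = [], []
    for p, d in enumerate(prev_odd):
        d_prev = prev_odd[p - 1] if p > 0 else d
        # D'_odd(p) = ( D_odd(p) + D_odd(p-1) ) / 2   (previous field)
        o = (d + d_prev) / 2
        corrected_odd.append(o)
        # D'_even(p) = ( D'_odd(p) + D_odd(p-1) ) / 2
        corrected_even.append((o + d_prev) / 2)
    return corrected_odd, corrected_even
```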

Through this, the depth map corrector 125 softens the boundary area of the object located in the 3D image. In addition, the depth map corrector 125 increases the linearity of the depth map and the connectivity with adjacent pixels to reduce the influence of noise. Through this, the depth map corrector 125 may improve the quality of the 3D image.

In addition, the depth map corrector 125 may store the corrected depth map in a memory or the like. In this case, the depth map corrector 125 may store only the corrected depth map of the odd area, D'_o(p), to reduce the size of the information stored in memory.

FIG. 12 illustrates a depth map and a corrected depth map according to an embodiment of the present invention.

Referring to FIG. 12, the depth map corrector 125 may obtain a corrected depth map from the depth map. Depth maps are exemplarily shown on the left side, and depth maps corrected by the depth map correction unit 125 are shown on the right side.

FIG. 13 is a diagram illustrating a structure of a multiview image generator according to an embodiment of the present invention.

Referring to FIG. 13, the multi-view image generator 130 includes a multi-view information selector 131, a parallax processor 132, a tilt controller 133, a first distance controller 134, and a second distance controller 135.

The multi-viewpoint information selector 131 receives user viewpoint information measured from the sensor unit 110. The user viewpoint information is based on the screen of the display unit. The multi-view information selecting unit 131 selects multi-view information (first multi-view information and second multi-view information) corresponding to the user's viewpoint using the user's viewpoint information. Here, the first multi-view information includes parameters and depth information for multi-viewing the depth map. Also, the second multi-view information includes information on a tilt value and a tilt direction for multi-viewing the tilt map. The multi-viewpoint information selecting unit 131 outputs the first multi-viewpoint information to the parallax processing unit 132 and outputs the second multi-viewpoint information to the tilt control unit 133.

The parallax processor 132 generates a multi-view depth map by converting the depth map using the first multi-view information. The parallax processor 132 then converts the 2D image (the current (n) image frame) into the first image using the multi-view depth map. For example, to generate the first image, the parallax processor 132 resets (e.g., shifts left or right) the position of each unit element (pixel) in the 2D image according to the multi-view depth map. The parallax processor 132 outputs the first image to the first distance controller 134.
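The per-pixel position reset can be sketched as a horizontal shift of each pixel by its depth value, applied row by row. The rightward direction, the last-write-wins handling of overlaps, and the left-neighbor hole filling are assumptions; the description says only that positions are "reset".

```python
def shift_row(row, depths):
    """Shift each pixel right by its depth value; later writes win on overlap."""
    width = len(row)
    out = [None] * width
    for x, (pixel, d) in enumerate(zip(row, depths)):
        new_x = x + d
        if 0 <= new_x < width:
            out[new_x] = pixel          # pixels shifted off the edge are dropped
    # Fill vacated positions with the nearest pixel to the left
    # (a stand-in for the interpolation performed later in the pipeline).
    last = row[0]
    for x in range(width):
        if out[x] is None:
            out[x] = last
        else:
            last = out[x]
    return out
```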

The tilt controller 133 generates the second image by controlling the tilt of the 2D image using the second multi-view information. For example, to generate the second image, the tilt controller 133 resets the positions of the pixels of the 2D image based on the second multi-view information.

For example, the tilt control refers to a control for converting a rectangular image frame into a rhombus-shaped image frame. Depth information may be set in each of the pixel lines (fields) constituting the image frame, similar to an image viewed from the front by the user. The image frame may be composed of a plurality of such pixel lines.

The tilt controller 133 sets a different depth in each of the fields in order to form a rhombus-shaped image frame. The tilt controller 133 may sequentially shift each field of the image frame to the left or right to convert the image frame into a rhombus shape. The tilt controller 133 may adjust the sense of depth as viewed from the side according to the degree of field shifting (that is, the degree of the tilt operation); the greater the shift, the greater the sense of depth felt by the user. The tilt controller 133 outputs the second image to the second distance controller 135.
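The field-by-field shifting that converts a rectangular frame into a rhombus shape might look like the sketch below. The linear shift schedule (line y shifted by y * tilt_value) and the fill value for vacated pixels are assumptions.

```python
def tilt_frame(frame, tilt_value, direction=1, fill=0):
    """Shift line y horizontally by round(y * tilt_value) pixels.

    direction = +1 shifts right, -1 shifts left; vacated positions are
    padded with `fill`, so a rectangular frame becomes rhombus-shaped.
    """
    tilted = []
    for y, line in enumerate(frame):
        shift = int(round(y * tilt_value)) * direction
        width = len(line)
        if shift >= 0:   # shift right: pad on the left, drop overflow on the right
            new_line = [fill] * min(shift, width) + line[:max(width - shift, 0)]
        else:            # shift left: drop on the left, pad on the right
            s = min(-shift, width)
            new_line = line[s:] + [fill] * s
        tilted.append(new_line)
    return tilted
```

A larger tilt_value produces a stronger slant, matching the statement that a greater shift yields a greater sense of depth.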

The tilt controller 133 controls the tilt of the second image according to the tilt control information. In this way, the tilt control unit 133 may adjust the depth of the 3D image.

The parallax processor 132 and the tilt controller 133 generate the two images according to binocular parallax: the first image and the second image, one a left image and the other a right image. Therefore, when the parallax processor 132 generates the three-dimensional left image according to binocular disparity, the tilt controller 133 generates the three-dimensional right image; conversely, when the parallax processor 132 generates the three-dimensional right image, the tilt controller 133 generates the three-dimensional left image.

The first distance controller 134 may control the distance (distance felt by the user) of the first image in response to the first distance control signal D_CTRL1. The first distance controller 134 outputs a first image obtained by shifting the first image left or right.

The second distance controller 135 may control the distance of the second image in response to the second distance control signal D_CTRL2. The second distance controller 135 outputs a second image obtained by shifting the second image to the right or the left.

The first distance controller 134 and the second distance controller 135 control the distance of the 3D images under user control, that is, in response to the first distance control signal D_CTRL1 and the second distance control signal D_CTRL2 (for example, by controlling the movement of the entire frame in the left or right direction).

Each of the tilt controller 133, the first distance controller 134, and the second distance controller 135 can control the three-dimensional effect when generating the 3D image in response to its control signal (the tilt control signal SL_CTRL, the first distance control signal D_CTRL1, or the second distance control signal D_CTRL2).

The multi-view three-dimensional image conversion apparatus 100 of the present invention may generate two images (a first image and a second image) according to binocular disparity from a two-dimensional image.

In this case, the multi-view 3D image conversion apparatus 100 may generate one of the two images using the depth map and generate the other through tilt control. That is, from the input two-dimensional image, the multi-view three-dimensional image conversion apparatus 100 of the present invention may generate a first image (a left (L) or right (R) image) and a second image (the corresponding right (R) or left (L) image) having binocular disparity and provide them to the user.

FIG. 14 is a diagram illustrating a depth map and a tilt map according to a user's viewpoint according to an embodiment of the present invention.

Referring to FIG. 14, (a) shows depth maps of the left image and (b) shows tilt maps of the right image. The first image generated by the parallax processor 132 is the left image as in (a), and the second image generated by the tilt controller 133 is the right image as in (b).

(a) shows the depth maps of the left image, with depths set corresponding to L(-3), L(-2), L(-1), L(0), L(1), L(2), and L(3) according to the user's viewpoint.

(b) shows the tilt maps of the right image, with depths set corresponding to R(-3), R(-2), R(-1), R(0), R(1), R(2), and R(3) according to the user's viewpoint.

(c) illustrates a first image generated using the corrected depth map corresponding to the user's viewpoint shown in FIG. 2 and a second image generated through tilt control. Here, the horizontal axis represents the left or right direction relative to the screen 141, and the vertical axis represents the distance from the screen 141.

For example, when the user's viewpoint corresponds to the coordinates (1, -5), three-dimensional images (a first image and a second image) corresponding to L(-3) and R(-1) may be generated. When the user's viewpoint corresponds to the coordinates (0, 2), three-dimensional images corresponding to L(0) and R(2) may be generated. In this example, multi-view images for 22 divided regions according to the user's viewpoint may be generated through combinations of the seven images covering the left, right, and front views. However, the number of depth maps of the left image, the number of tilt maps of the right image, and the number of regions according to the user's viewpoint described herein may be changed.

FIG. 15 is a diagram illustrating a parallax processing unit according to an embodiment of the present invention.

Referring to FIG. 15, the parallax processor 132 includes a buffer 510, a depth map converter 520, a parallax calculator 530, a 3D image interpolator 540, and a circular buffer 550.

The buffer 510 stores the 2D image; specifically, it stores the red-green-blue (RGB) values of the two-dimensional image. The buffer 510 outputs the RGB values of the 2D image to the parallax calculator 530.

The depth map converter 520 receives the depth map and the first multi-view information. The depth map converter 520 generates a multi-view depth map for generating a multi-view image by applying the first multi-view information to the depth map. The depth map converter 520 may increase or decrease the relative depth of the depth map according to the user's viewpoint and may be implemented using a multiplier. The depth map may have, for example, 0 to 255 levels. The depth map converter 520 multiplies the depth map by the first multi-view information (e.g., a parameter) selected according to the user's viewpoint. Here, parameter '#0' has a value of '1.0', parameter '#1' a value of '1.1', parameter '#2' a value of '1.2', and parameter '#3' a value of '1.3'. However, these parameters are described as an example and may be set to various values.

The depth map converter 520 clips the result generated by multiplying the depth map by the parameter value to the range of 0 to 255 to generate the multi-view depth map. Here, the parameter-multiplied depth map is not normalized; instead, values above 255 are fixed to 255. By generating the multi-view depth map, the depth map converter 520 makes the depth differences relatively larger than in the original depth map. For example, when the user's viewpoint corresponds to the coordinates (0, 5), the depth map converter 520 multiplies the depth map by the first multi-view information selected by the multi-view information selector 131 (for example, 1.2, corresponding to parameter #2). Through this, the depth map converter 520 converts a value of '150' in the depth map into a value of '180' in the multi-view depth map.
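The multiply-and-clip conversion can be sketched directly from the '150' × 1.2 → '180' example; the optional inversion flag mirrors the depth-direction inverter of the depth map converter, and the function name is hypothetical.

```python
def convert_depth_map(depth_map, parameter, invert=False):
    """Multiply each depth by the viewpoint parameter and clip at 255.

    Values above 255 are fixed to 255 rather than normalized, as
    described; invert=True flips the depth direction (255 - v).
    """
    converted = []
    for row in depth_map:
        out_row = []
        for d in row:
            v = min(round(d * parameter), 255)  # clip, do not normalize
            if invert:
                v = 255 - v                     # '-' depth direction
            out_row.append(v)
        converted.append(out_row)
    return converted
```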

In addition, the depth map converter 520 may further include an inverter. Using the inverter, the depth map converter 520 can set the direction of the parameter-multiplied depth map to either the '+' direction or the '-' direction based on the first multi-view information (for example, the depth direction). Accordingly, the depth map converter 520 may generate a multi-view depth map corresponding to the user's viewpoint using the parameter and the depth direction. The depth map converter 520 outputs the multi-view depth map to the parallax calculator 530.

The parallax calculator 530 generates the first image (a three-dimensional image) from the two-dimensional RGB image transferred from the buffer 510 with reference to the multi-view depth map. To generate the first image, the parallax calculator 530 moves the 2D image according to the depth value of the depth map set for each pixel. Assume, for example, that the inverter operation is applied to the '+' direction set in the multi-view depth map: a depth value of 255 (0xff) becomes 0 (0x00), and a depth value of 127 (0x7f) becomes 128 (0x80). The parallax calculator 530 outputs the generated first image to the 3D image interpolator 540.

The 3D image interpolator 540 interpolates the first image transmitted from the parallax computation unit 530 using the respective depths. The 3D image interpolator 540 outputs the interpolated first image to the circular buffer 550.

The circular buffer 550 may have a structure that is circulated in a ring shape. The first image interpolated by the 3D image interpolator 540 is stored in the circular buffer 550. The images stored in the circular buffer 550 are sequentially output by the circular buffer 550.
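A ring-shaped buffer like the one described can be sketched as follows; the fixed size and the overwrite-oldest behavior on overflow are assumptions about the "circulated in a ring shape" structure.

```python
class CircularBuffer:
    """Fixed-size ring buffer: writes wrap around, reads come out in order."""

    def __init__(self, size):
        self.data = [None] * size
        self.size = size
        self.head = 0    # next write position
        self.tail = 0    # next read position
        self.count = 0

    def push(self, item):
        self.data[self.head] = item
        self.head = (self.head + 1) % self.size  # wrap around the ring
        if self.count == self.size:
            self.tail = (self.tail + 1) % self.size  # overwrite oldest entry
        else:
            self.count += 1

    def pop(self):
        item = self.data[self.tail]
        self.tail = (self.tail + 1) % self.size
        self.count -= 1
        return item
```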

FIG. 16 is a diagram illustrating a 3D image generating operation of a parallax processor according to an embodiment of the present invention.

Referring to FIG. 16, the parallax processor 132 converts a 2D image into a 3D image using a depth map.

In (a), a two-dimensional image is shown.

In (b1)-(b4), multi-view depth maps are shown.

In (c1)-(c4), three-dimensional images obtained from the two-dimensional image by the depth maps are shown. The parallax processor 132 moves (shifts) each pixel included in each of the regions 610, 620, 630, and 640 of the 2D image to the right according to the depth set in the multi-view depth map. Here, (c) describes as an example the case where the parallax processor 132 generates the first image as a 3D left image according to binocular parallax. To generate the first image as a 3D right image according to binocular parallax, the parallax processor 132 would instead move each pixel included in each of the regions 610, 620, 630, and 640 of the 2D image to the left according to the depth set in the depth map.

In (b1), the first region 610 of the multi-view depth map is set to a depth of '0', the second region 620 to a depth of '4', the third region 630 to a depth of '2', and the fourth region 640 to a depth of '10'. In (c1), the pixels included in the regions 610, 620, 630, and 640 of the two-dimensional image (a) have been moved by '0', '4', '2', and '10', respectively, according to the multi-view depth map (b1).

In (b2), the first region 610 of the multi-view depth map is set to a depth of '0', the second region 620 to a depth of '5', the third region 630 to a depth of '2', and the fourth region 640 to a depth of '13'. In (c2), the pixels included in the regions 610, 620, 630, and 640 of the two-dimensional image (a) have been moved by '0', '5', '2', and '13', respectively, according to the multi-view depth map (b2).

In (b3), the first region 610 of the multi-view depth map is set to a depth of '0', the second region 620 to a depth of '6', the third region 630 to a depth of '3', and the fourth region 640 to a depth of '14'. In (c3), the pixels included in the regions 610, 620, 630, and 640 of the two-dimensional image (a) have been moved by '0', '6', '3', and '14', respectively, according to the multi-view depth map (b3).

In (b4), the first region 610 of the multi-view depth map is set to a depth of '0', the second region 620 to a depth of '7', the third region 630 to a depth of '3', and the fourth region 640 to a depth of '16'. In (c4), the pixels included in the regions 610, 620, 630, and 640 of the two-dimensional image (a) have been moved by '0', '7', '3', and '16', respectively, according to the multi-view depth map (b4).

FIG. 17 is a diagram illustrating a tilt control operation of a tilt controller according to an embodiment of the present invention.

Referring to FIG. 17, the tilt controller 133 may control the tilt of the 2D image based on the second multi-view information.

In (a), the tilt controller 133 may receive second multi-view information corresponding to the (0, 1) coordinates. In this case, the tilt controller 133 controls the tilt of the 2D image shown on the left. The tilt controller 133 receives the second multi-view information, for example, a tilt value and a tilt direction for generating the image corresponding to R(1) of FIG. 14. Through tilt control of the two-dimensional image on the left, the tilt controller 133 generates the second image corresponding to R(1), shown on the right.

In (b), the tilt controller 133 may receive second multi-view information corresponding to the (0, -1) coordinates. In this case, the tilt controller 133 controls the tilt of the 2D image shown on the left. The tilt controller 133 receives the second multi-view information, for example, a tilt value and a tilt direction for generating the image corresponding to R(-1) of FIG. 14. Through tilt control of the two-dimensional image on the left, the tilt controller 133 generates the second image corresponding to R(-1), shown on the right.

The tilt controller 133 may control the tilt of the second image by moving each of the pixel lines to the right or the left according to the tilt.

Therefore, the parallax processor 132 and the tilt controller 133 may generate the first image and the second image. For example, when the parallax processor 132 generates the first image on the left side according to binocular parallax, the tilt controller 133 may generate the second image on the right side. Conversely, when the parallax processor 132 generates the first image on the right side according to binocular disparity, the tilt controller 133 may generate the second image on the left side.

FIG. 18 is a diagram schematically illustrating first and second images generated according to an embodiment of the present invention.

Referring to FIG. 18, the parallax processor 132 may generate one image selected by a user's viewpoint among the five first images shown in (a). The tilt controller 133 may generate one image selected by the user's viewpoint among the five second images illustrated in (b).

The subject reproduced on the screen 141 may have various shapes according to the user's viewpoint: its shape appears different when viewed from the front than when viewed from the left or the right. Accordingly, in the multi-view image generating apparatus of the present invention, the depth (or tilt) d(x) of the image has a larger value as the user's viewpoint moves farther to the side, whether toward the leftmost viewpoint or the rightmost viewpoint.

FIG. 19 is a diagram illustrating operations of the first distance controller and the second distance controller according to an exemplary embodiment of the present invention.

Referring to FIG. 19, the first distance controller 134 and the second distance controller 135 may control the sense of distance of the 3D image recognized by the user through the distance change of each of the first image and the second image. The first distance controller 134 operates in response to the first distance control signal D_CTRL1, and the second distance controller 135 operates in response to the second distance control signal D_CTRL2. As a result, the first distance controller 134 and the second distance controller 135 may change the sense of distance sensed by the user through the 3D image. Here, the first distance control signal D_CTRL1 and the second distance control signal D_CTRL2 are signals for increasing or decreasing the distance (or spacing) between the first image and the second image.

The first distance controller 134 and the second distance controller 135 may move the first image to the left and the second image to the right, respectively. As a result, the distance between the 3D images (the first image and the second image) provided to the user increases. By this control in the first direction 710, the user viewing the 3D image perceives it as farther away.

In addition, the first distance controller 134 and the second distance controller 135 may move the first image to the right and the second image to the left, respectively. As a result, the distance between the 3D images (the first image and the second image) provided to the user decreases. By this control in the second direction 720, the user viewing the 3D image perceives it as closer.
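The two distance controls can be sketched as whole-frame horizontal shifts in opposite directions. The per-line shift helper and the fill value for vacated pixels are assumptions; only the opposite-direction shifting of the two images is stated in the description.

```python
def shift_image(image, offset, fill=0):
    """Shift every line of `image` horizontally by `offset` pixels."""
    shifted = []
    for line in image:
        width = len(line)
        if offset >= 0:  # shift right: pad left, drop overflow on the right
            shifted.append([fill] * min(offset, width) + line[:max(width - offset, 0)])
        else:            # shift left: drop on the left, pad right
            s = min(-offset, width)
            shifted.append(line[s:] + [fill] * s)
    return shifted

def control_distance(first_image, second_image, separation):
    """Move the two images apart (positive separation) or together (negative)."""
    return (shift_image(first_image, -separation),   # first image moves left
            shift_image(second_image, separation))   # second image moves right
```

A positive `separation` corresponds to the first direction 710 (3D image perceived as farther), a negative one to the second direction 720 (perceived as closer).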

As a result, the multi-view 3D image conversion apparatus 100 of the present invention may generate the 3D image based on a user's viewpoint. Accordingly, the multi-view three-dimensional image conversion apparatus 100 of the present invention may generate a more realistic three-dimensional image according to the change of the user's viewpoint.

The multi-view three-dimensional image conversion device of the present invention can be implemented in the form of a chip (for example, a single chip) and applied to next-generation display devices, for example, televisions (TVs), monitors, notebooks, netbooks, digital video disc (DVD) players, portable multimedia players (PMPs), mobile phones, tablets, navigation devices, and the like. In addition, the present invention may be applied to the generation of virtual input devices and realistic experience images, for example, hologram displays. The multi-view three-dimensional image conversion apparatus of the present invention may be implemented in each of these next-generation display devices.

100: multi-view three-dimensional image conversion device
110: sensor unit 120: map generator
130: multi-view image generation unit 140: display unit
121: preprocessor 122: entire image processor
123: local image processor 124: depth map generator
125: depth map correction unit 131: multi-view information selection unit
132: parallax processing unit 133: tilt control unit
134: first distance controller 135: second distance controller
210: brightness correction unit 220: depth generation unit
211: table selector 212: brightness value converter
213: brightness information extraction unit 214: correction brightness value generation unit
310: map computing unit
510: buffer 520: depth map converter
530: parallax operation unit 540: 3D image interpolation unit
550: circular buffer

Claims (15)

A display unit configured to display a first image and a second image;
A sensor unit configured to detect a user's viewpoint based on the screen of the display unit;
A map generator which receives a current image frame and a previous image frame and generates a depth map for movement of each pixel constituting the current image frame; And
A multi-view image generator configured to generate the first image by moving each of the pixels of the current image frame based on the depth map and the user's viewpoint, and to generate the second image through tilt control of the current image frame based on the user's viewpoint,
wherein the first image and the second image are images for forming a three-dimensional image.
The apparatus of claim 1, wherein
the map generator includes:
A preprocessing unit configured to select, using the previous image frame, one basic map from among a plurality of basic maps in which perspective information is set, and one brightness correction table from among a plurality of brightness correction tables in which brightness values are set;
An entire image processor configured to generate an entire depth map for setting perspective of the current image frame according to the selected basic map;
A local image processor configured to generate a local depth map by compensating the current image frame with the selected brightness correction table; And
a depth map generator configured to generate the depth map by calculating the entire depth map with the local depth map.
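The claim above states only that the depth map is generated by "calculating" the full (entire) depth map with the local depth map, without specifying the operation. One plausible reading is a weighted combination; the sketch below assumes normalized depth values in [0, 1] and an illustrative weight `alpha`, neither of which is stated in the claim:

```python
import numpy as np

def combine_depth_maps(entire_depth, local_depth, alpha=0.5):
    """Merge the global (perspective-based) depth map with the local
    (brightness-compensated) depth map. The weighted sum is illustrative:
    the claim does not specify the calculation used to merge the maps."""
    combined = alpha * entire_depth + (1.0 - alpha) * local_depth
    return np.clip(combined, 0.0, 1.0)  # keep depths in the normalized range
```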
The apparatus of claim 1, wherein
the multi-view image generator includes:
A multi-view information selector configured to select first multi-view information and second multi-view information corresponding to the user's viewpoint;
A parallax processor configured to perform multi-view processing on the depth map according to the first multi-view information, and to generate the first image for reproducing a 3D image from the current image frame by calculating the multi-view-processed depth map with the current image frame; And
a tilt controller configured to generate the second image for reproducing a 3D image by performing a tilt operation on the current image frame using the second multi-view information.
The apparatus of claim 3, wherein
the first multi-view information includes information on a parameter and a depth direction for multi-view processing of the depth map.
The apparatus of claim 3, wherein
the second multi-view information includes information on a tilt value and a tilt direction for multi-view processing of the current image frame.
The apparatus of claim 3, wherein
the multi-view image generator further includes:
A first distance controller configured to control the first image to move in one of left and right directions according to a first distance control signal; And
And a second distance controller configured to control the second image to move in one of right and left directions according to a second distance control signal.
The apparatus of claim 6, wherein
the first distance control signal and the second distance control signal are signals for one of increasing and decreasing the distance between the first image and the second image.
The apparatus of claim 3, wherein
the parallax processor includes:
A buffer for storing the current image frame;
A depth map converter configured to receive the depth map and the first multi-view information and to perform multi-view processing on the depth map according to the first multi-view information;
A parallax calculator configured to generate the first image by calculating the current image frame output from the buffer with the multi-view-processed depth map;
a 3D image interpolator configured to interpolate the first image; And
a circular buffer configured to sequentially select and output the interpolated first image.
The apparatus of claim 3, wherein
the tilt controller receives the second multi-view information and performs multi-view processing on the current image frame according to the second multi-view information.
The apparatus of claim 1, wherein
when the first image is a left image, the second image is a right image, and when the first image is a right image, the second image is a left image.
delete

Sensing a user's viewpoint based on a display screen;
Generating a depth map for movement of each pixel constituting the current image frame using the previous image frame and the current image frame;
Selecting first multiview information corresponding to the user's viewpoint and converting the depth map into a multiview depth map based on the first multiview information;
Generating a first image by shifting each pixel of the current image frame using the multi-view depth map;
Selecting second multi-view information corresponding to the user's viewpoint and generating a second image through tilt control of the current image frame based on the second multi-view information; And
outputting the first image and the second image in synchronization with each other.
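The pixel-shifting step of the method above is a form of depth-image-based rendering: each pixel is displaced horizontally in proportion to its multi-view-processed depth. A simplified sketch, assuming a normalized depth map and an illustrative `max_disparity` parameter (neither is specified in the claims), with the interpolation of uncovered pixels omitted:

```python
import numpy as np

def parallax_shift(frame, depth_map, max_disparity=8):
    """Generate the first (parallax) image by shifting each pixel
    horizontally by an amount proportional to its depth. Gaps left by
    the shift would normally be filled by the 3D image interpolator."""
    h, w = frame.shape[:2]
    out = np.zeros_like(frame)
    disparities = (depth_map * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disparities[y, x]  # new horizontal position
            if 0 <= nx < w:
                out[y, nx] = frame[y, x]
    return out
```

A per-pixel loop is used here for clarity; a practical (or chip-level) implementation would vectorize or pipeline this per scanline.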
13. The method of claim 12,
The first multi-view information includes information on a parameter and a depth direction for multi-view processing of the depth map.
13. The method of claim 12,
The second multi-view information includes information on a tilt value and a tilt direction for multi-view processing of the current image frame.
13. The method of claim 12,
Wherein the current image frame and the previous image frame are multi-view image frames.
KR1020110081365A 2011-08-16 2011-08-16 Apparatus and method for converting multiview 3d image KR101239149B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020110081365A KR101239149B1 (en) 2011-08-16 2011-08-16 Apparatus and method for converting multiview 3d image

Publications (2)

Publication Number Publication Date
KR20130019295A KR20130019295A (en) 2013-02-26
KR101239149B1 true KR101239149B1 (en) 2013-03-11

Family

ID=47897484

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020110081365A KR101239149B1 (en) 2011-08-16 2011-08-16 Apparatus and method for converting multiview 3d image

Country Status (1)

Country Link
KR (1) KR101239149B1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100503276B1 (en) 2003-08-12 2005-07-22 최명렬 Apparatus for converting 2D image signal into 3D image signal
KR20100034789A (en) * 2008-09-25 2010-04-02 삼성전자주식회사 Method and apparatus for generating depth map for conversion two dimensional image to three dimensional image




Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
LAPS Lapse due to unpaid annual fee