WO2013118955A1 - Apparatus and method for depth map correction, and apparatus and method for stereoscopic image conversion using same - Google Patents

Info

Publication number: WO2013118955A1
Application number: PCT/KR2012/008238
Authority: WO
Grant status: Application
Prior art keywords: correction, depth map, image, depth, area
Other languages: French (fr), Korean (ko)
Inventor
우대식
박재범
전병기
김종대
정원석
Original Assignee
에스케이플래닛 주식회사
시모스 미디어텍(주)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration, e.g. from bit-mapped to bit-mapped creating a similar image
    • G06T5/001 Image restoration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/128 Adjusting depth or disparity

Abstract

The present invention relates to an apparatus and method for depth map correction, and to an apparatus and method for stereoscopic image conversion using the same. The depth map correction apparatus comprises: a filtering unit for performing noise filtering or sharpness-improving filtering on an input two-dimensional image; a region setting unit for selecting, from the filtered image, the boundary surface having the largest variation in pixel characteristics and dividing the filtered image into a correction region, a neighboring region, and an outer region according to the degree of correction relative to the selected boundary surface; and a depth value correction unit for correcting the depth value of the correction region by performing interpolation using the filtered image and a previously generated depth map, and for generating a corrected depth map by performing depth value correction on the neighboring region and the outer region.

Description

Apparatus and method for depth map correction, and apparatus and method for stereoscopic image conversion using the same

The present invention relates to an apparatus and method for depth map correction, and to an apparatus and method for stereoscopic image conversion using the same. More particularly, noise filtering or sharpness-improving filtering is performed on an input two-dimensional image; the boundary surface with the largest change in pixel characteristics is selected from the filtered image; the image is divided, according to the degree of correction relative to that boundary, into a correction region, a neighboring region, and an outer region; the depth value of the correction region is corrected by performing interpolation between the filtered image and the depth map; and a corrected depth map is generated by performing depth value correction on the neighboring region and the outer region as well.

As interest in 3D (stereoscopic) images has recently grown, 3D imaging has been actively studied.

In general, humans are known to perceive a stereoscopic effect chiefly through the disparity between the two eyes, and 3D video can be implemented using this property. For a particular subject, a left-eye image and a right-eye image are prepared and displayed simultaneously so that the viewer's left eye sees only the left-eye image and the right eye only the right-eye image; the viewer then perceives the subject as a 3D image. In short, a 3D image can be implemented by producing a binocular image and displaying it divided into a left-eye image and a right-eye image.

Since a monocular 2D image carries no depth information, converting it into a 3D image requires the task of generating depth information for the 2D image and then rendering with it.

Typically, 2D-to-3D conversion is divided into a manual method and an automatic method. In the manual method, a person watches the video and creates a depth map for each video based on subjective judgment. Because a person inspects even small parts of the footage directly, the depth map can be produced for each portion of the video and its errors are in fact very small. However, a great deal of time and effort is required to create a depth map this way, because people must be directly involved in every video.

Automatic conversion means analyzing the characteristics of the image to extract an appropriate depth map, and then generating left and right stereoscopic images using it. In this process, since the video itself carries no depth information, the depth map is generated from conventional image features such as edge characteristics, color, brightness characteristics, and vanishing-point characteristics. These features, however, often do not match the true three-dimensional properties of the objects in the video, so the depth map used for the three-dimensional representation can contain large errors, and in particular significant errors in the boundary characteristics. In other words, when the boundary of an object in the actual video does not coincide with the boundary in the depth map created for the three-dimensional representation, the stereoscopic effect may feel uncomfortable and weakened, or the partial three-dimensional representation at the boundary may generate visual dizziness.

The present invention has been made to solve the above problems. An object of the present invention is to provide an apparatus and method for depth map correction, and an apparatus and method for stereoscopic image conversion using the same, which correct the boundary characteristics of a depth map that do not match the original image, in consideration of the original image, so that they match as closely as possible, thereby alleviating the dizziness caused by the mismatch.

Another object of the invention is to provide a depth map correction apparatus and method, and a stereoscopic image conversion apparatus and method using the same, capable of correcting errors in the depth map through image processing so that, from the viewpoint of image conversion, the depth map has boundary characteristics that best match the video objects.

It is another object of the present invention to provide a depth map correction apparatus and method, and a stereoscopic image conversion apparatus and method, which correct errors in the depth map and convert the 2D image into a 3D image using the corrected depth map, thereby minimizing errors in the image conversion.

According to an aspect of the invention, there is provided a depth map correction apparatus comprising: a filtering unit that performs noise filtering or sharpness-improving filtering on an input two-dimensional image; a region setting unit that selects, from the filtered image, the boundary surface with the largest change in pixel characteristics and divides the image into a correction region, a neighboring region, and an outer region according to the degree of correction relative to that boundary; and a depth value correction unit that corrects the depth value of the correction region by performing interpolation using the filtered image and a previously generated depth map, and generates a corrected depth map by performing depth value correction on the neighboring region and the outer region.

The filtering unit may remove the noise components of the input image with a noise filter, or may apply a sharpness-improving filter that enlarges the deviation of the pixel values at the boundary surfaces of the input image by more than a predetermined value.

The region setting unit may comprise: a correction region setting unit that sets, as the correction region, the area with the largest change in pixel characteristics among the areas corresponding to boundary surfaces in the filtered image; a neighboring region setting unit that sets, as the neighboring region, the parts within a predetermined distance of the correction region; and an outer region setting unit that sets the area other than the correction region and the neighboring region as the outer region.

The depth value correction unit corrects the depth value of the depth map corresponding to the correction region using the following formula.

Formula:

Correction region: New Depth(i) = Σₙ (SI(n) × Depth(n))

Here, i is the pixel index of the correction region; n runs over an interpolation interval to the left and right of i that is set slightly smaller than the correction region; SI(n) is the pixel value of the input image (original image); Depth(n) is the pixel value of the depth map; and New Depth(i) is the corrected depth value at pixel position i.

In addition, the depth value correction unit corrects the depth value of the outer region by performing Gaussian filtering or low-pass filtering on the outer region.

Further, the depth value correction unit corrects the depth value of the depth map for the neighboring region using the following formula.

Formula:

Neighboring region: New Depth(i) = A + delta × i

Here, i is the pixel index of the neighboring region, running from the side adjacent to the correction region to the side adjacent to the outer region; A is the pixel value of the correction region adjacent to the neighboring region; delta is (B - A) / (k - j); B is the pixel value of the outer region adjacent to the neighboring region; j is the position index of the pixel having the value A; and k is the position index of the pixel having the value B.

According to another aspect of the invention, there is provided a stereoscopic image conversion apparatus comprising: an image analysis unit that analyzes an input two-dimensional image and extracts at least one piece of characteristic information; a depth map generation unit that generates a depth map for the input image based on the characteristic information; a depth map correction unit that filters the input image and corrects the generated depth map using the filtered image; and a stereoscopic image generation unit that converts the input image into a three-dimensional stereoscopic image using the corrected depth map.

The image analysis unit extracts characteristic information including at least one of edge information, color information, luminance information, motion information, and histogram information.

The depth map generation unit divides the plurality of pixels forming the input image into at least one block, and then generates the depth map by setting a depth value for each of the at least one block.

The depth map correction unit performs noise filtering or sharpness-improving filtering on the input image; selects, from the filtered image, the boundary surface with the largest change in pixel characteristics; divides the image into a correction region, a neighboring region, and an outer region according to the degree of correction relative to that boundary; corrects the depth value of the correction region by performing interpolation between the filtered image and the depth map; and generates a corrected depth map by performing depth value correction on the neighboring region and the outer region.

In accordance with another aspect of the invention, there is provided a method by which a depth map correction device corrects a depth map, comprising the steps of: (a) performing noise filtering or sharpness-improving filtering on an input two-dimensional image; (b) selecting, from the filtered image, the boundary surface with the largest change in pixel characteristics, and dividing the image into a correction region, a neighboring region, and an outer region according to the degree of correction relative to that boundary; and (c) correcting the depth value of the correction region by performing interpolation using the filtered image and the depth map, and generating a corrected depth map by performing depth value correction on the neighboring region and the outer region.

The step (c) may comprise: correcting the depth map for the correction region by performing interpolation between the filtered image and the depth map; correcting the depth value of the outer region by performing Gaussian filtering or low-pass filtering on it; and correcting the depth value of the neighboring region by using a slope value, corresponding to the change in pixel values, that connects the correction region and the outer region adjoining the neighboring region.

In accordance with another aspect of the invention, there is provided a method by which a stereoscopic image conversion device converts an input two-dimensional image into a three-dimensional stereoscopic image, comprising the steps of: analyzing the input two-dimensional image to extract at least one piece of characteristic information; generating a depth map for the input image based on the characteristic information; filtering the input image and correcting the generated depth map using the filtered image; and converting the input image into a three-dimensional stereoscopic image using the corrected depth map.

Correcting the depth map means performing noise filtering or sharpness-improving filtering on the input image; selecting, from the filtered image, the boundary surface with the largest change in pixel characteristics; dividing the image into a correction region, a neighboring region, and an outer region according to the degree of correction relative to that boundary; correcting the depth value of the correction region by performing interpolation between the filtered image and the depth map; and generating a corrected depth map by performing depth value correction on the neighboring region and the outer region.

Therefore, according to the present invention, the boundary characteristics of a depth map that do not match the original image are corrected, in consideration of the original image, so as to match as closely as possible, and the dizziness caused by the mismatch can be alleviated.

Also, errors in the depth map can be corrected through image processing, from the viewpoint of image conversion, so that the depth map has boundary characteristics that best match the video.

In addition, by correcting errors in the depth map and converting the 2D image into a 3D image using the corrected depth map, errors in the image conversion can be minimized.

Figure 1 is a block diagram showing the configuration of a three-dimensional image conversion device according to the present invention.

Figure 2 is a block diagram schematically showing the configuration of a depth map correction device according to the present invention.

Figure 3 is a flow chart illustrating how the image conversion apparatus according to the invention converts an input two-dimensional image into a three-dimensional stereoscopic image.

Figure 4 is an exemplary view for explaining the depth map before and after correction according to the present invention.

Figure 5 is a flow chart illustrating a method by which the depth map correction device corrects the depth map in accordance with the present invention.

Figure 6 is a view for explaining the correction process of a depth map in accordance with the present invention.

Preferred embodiments of the present invention will now be described in more detail with reference to the accompanying drawings. In the following description, the same or corresponding components are assigned the same reference numerals, and duplicate descriptions thereof are omitted.

Correction of the depth map presupposes the following three conditions between the original image (i.e., the input image) and the depth map.

① The depth map as a whole properly reflects the depth values of the original image, and contains only partial errors in areas such as the boundary surfaces.

② The brightness and color boundary characteristics of the original image coincide with the boundary characteristics of the depth map.

③ External influences on the original image, such as noise, are minimized as far as possible.

A method of correcting the depth map based on the above preconditions will now be described with reference to the drawings.

Figure 1 is a block diagram showing the configuration of a three-dimensional image conversion device according to the present invention.

Referring to Figure 1, the three-dimensional image conversion device 100 comprises an image analysis unit 110, a depth map generation unit 120, a depth map correction unit 130, and a three-dimensional image generation unit 140.

The image analysis unit 110 analyzes the input two-dimensional image and extracts at least one piece of characteristic information. The characteristic information includes edge information, color information, luminance information, motion information, histogram information, and the like.

The image analysis unit 110 gathers the information on which depth map generation is based, extracting the characteristic information from the image using a comprehensive set of analysis methods at the pixel or block level.

The depth map generation unit 120 generates a depth map for the input image based on the characteristic information extracted by the image analysis unit 110. That is, the depth map generation unit 120 divides the plurality of pixels constituting the input image into at least one block, and generates the depth map by setting a depth value for each of the at least one block.

In addition, the depth map generation unit 120 generates a depth map for each frame of the two-dimensional image based on the extracted characteristic information. That is, the depth map generation unit 120 extracts a depth value for each pixel of each frame from the depth map for the 2D image. Here, the depth map is a data structure that stores a depth value for each pixel of each frame of the two-dimensional image.

The depth map correction unit 130 filters the input image and corrects the depth map generated by the depth map generation unit 120 using the filtered image. That is, the depth map correction unit 130 performs noise filtering or sharpness-improving filtering on the input image and selects, from the filtered image, the boundary surface with the largest change in pixel characteristics. Here, a large change in pixel characteristics means that the pixel values change most strongly. Then, the depth map correction unit 130 divides the image into a correction region, a neighboring region, and an outer region according to the degree of correction relative to the selected boundary, corrects the depth value of the correction region by performing interpolation between the filtered image and the depth map, and generates a corrected depth map by performing depth value correction on the neighboring region and the outer region.

A detailed description of the depth map correction unit 130 will be given with reference to Figure 2.

The three-dimensional image generation unit 140 converts the input two-dimensional image into a three-dimensional stereoscopic image using the depth map corrected by the depth map correction unit 130. For example, the three-dimensional image generation unit 140 may generate parallax information using the corrected depth map, and then generate the three-dimensional image using that parallax information. In the generated three-dimensional image, the depth value of each pixel varies from frame to frame, giving a more three-dimensional appearance.
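
The patent leaves the exact conversion method open; as a minimal sketch of the parallax step, one common depth-image-based-rendering scheme shifts each pixel horizontally in proportion to its depth value to synthesize left-eye and right-eye rows. The function name, the linear shift formula, and all parameters below are illustrative assumptions, not the patent's prescribed method.

```python
import numpy as np

def stereo_pair_row(image_row, depth_row, max_shift=3, depth_max=255.0):
    """Sketch of parallax generation: shift each pixel left/right in
    proportion to its depth value to form left-eye / right-eye rows.
    No occlusion or hole handling is attempted."""
    n = len(image_row)
    left = np.zeros(n)
    right = np.zeros(n)
    for x in range(n):
        # parallax in pixels, proportional to depth (an assumed linear model)
        d = int(round(max_shift * depth_row[x] / depth_max))
        if 0 <= x + d < n:
            left[x + d] = image_row[x]
        if 0 <= x - d < n:
            right[x - d] = image_row[x]
    return left, right

img = np.arange(8, dtype=float)                                  # one scanline
depth = np.array([0, 0, 0, 0, 255, 255, 255, 255], dtype=float)  # background / foreground
L, R = stereo_pair_row(img, depth)
```

With a depth edge like this, the foreground pixels are displaced in opposite directions in the two views, which is what produces the binocular disparity described above.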

Here, the three-dimensional image generation unit 140 has been described as converting the two-dimensional image into a three-dimensional stereoscopic image using parallax information; however, the method by which the three-dimensional image generation unit 140 converts the input image into a three-dimensional image using the corrected depth map may vary according to known prior methods.

The three-dimensional image conversion device 100 configured as described above can convert the input 2D image into a 3D image (stereoscopic video) by setting depth values for the input image based on the characteristic information of the input image.

Figure 2 is a block diagram schematically showing the configuration of a depth map correction device according to the present invention.

Although the depth map correction has been explained above in connection with Figure 1, it will now be described in terms of the depth map correction device 200 of Figure 2.

Referring to Figure 2, the depth map correction device 200 includes a filtering unit 210, a region setting unit 220, and a depth value correction unit 230.

The filtering unit 210 performs noise filtering or sharpness-improving filtering on the two-dimensional input image. That is, because the boundaries of an image are strongly affected by external influences such as noise, the filtering unit 210 performs noise filtering before image processing in order to minimize those effects, and performs sharpness-improving filtering in order to enhance the boundary characteristics. Here, noise filtering means attenuating the noise component; for example, a low-pass filter that passes the required signal component can be used to minimize the influence of noise. Sharpness-improving filtering means enlarging the difference between the pixel values at a boundary surface so that the boundaries of the image become visually clearer and sharper. Thus, the sharpness-improving filtering may mean a filter such as a high-pass filter, which passes frequency bands above a given cut-off frequency while attenuating lower bands, so as to produce larger deviations of the pixel values.

Thus, the filtering unit 210 can remove the noise components of the input image with a noise filter, or can enlarge the deviation of the pixel values at the boundaries of the input image beyond a certain value with a sharpness-improving filter.
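
As a rough illustration of the two filter types on one scanline, with a moving-average low-pass filter standing in for the noise filter and unsharp masking standing in for the sharpness-improving filter (both concrete choices are assumptions; the patent only names the filter classes):

```python
import numpy as np

def noise_filter(row, k=3):
    """Low-pass (moving-average) filter: attenuates the noise component
    by local averaging, standing in for the patent's noise filter."""
    kernel = np.ones(k) / k
    return np.convolve(row, kernel, mode="same")

def sharpen(row, amount=1.0):
    """Sharpness-improving filter: enlarges the pixel-value deviation at
    boundaries by adding back the high-frequency residual (unsharp masking)."""
    low = noise_filter(row)
    return row + amount * (row - low)

row = np.array([10., 10., 10., 200., 200., 200.])  # scanline with one boundary
smoothed = noise_filter(row)
sharpened = sharpen(row)
```

The low-pass output reduces the jump across the boundary, while the sharpened output enlarges it, matching the two filter roles described above.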

The region setting unit 220 selects, from the filtered image, the boundary surface with the largest change in pixel characteristics, and divides the image into a correction region, a neighboring region, and an outer region according to the degree of correction relative to that boundary.

In other words, the region setting unit 220 sets, as the correction region, the area of the filtered image corresponding to a boundary where the degree or size of the change in pixel characteristics is large; sets, as the neighboring region, the area adjacent to the correction region within a predetermined distance; and sets the area other than the correction region and the neighboring region as the outer region.

The correction region is the area in which the depth map correction is performed; relative to each boundary surface, it is chosen, among the areas corresponding to the boundaries of the filtered image, as an area where the change in pixel characteristics or size is large. The neighboring region is the area adjacent to the correction region, to its left and right, for which depth values are set after the depth values of the correction region have been corrected. The outer region is the area other than the correction region and the neighboring region, outside each boundary, for which an overall depth value correction is performed.
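
A minimal one-scanline sketch of this three-way partition, assuming the boundary is taken at the largest absolute pixel-value difference and the region widths are fixed parameters (both assumptions; the patent does not fix how the widths are chosen):

```python
import numpy as np

def partition_row(filtered_row, half_width=2, near_width=2):
    """Split the pixel indices of one scanline into correction, neighboring,
    and outer regions around the boundary with the largest pixel-value change.
    The fixed half_width / near_width parameters are illustrative assumptions."""
    grad = np.abs(np.diff(filtered_row.astype(float)))
    b = int(np.argmax(grad))  # boundary with the largest characteristic change
    n = len(filtered_row)
    correction = set(range(max(0, b - half_width), min(n, b + half_width + 1)))
    near = set()
    for i in range(n):
        if i not in correction and min(abs(i - j) for j in correction) <= near_width:
            near.add(i)
    outer = set(range(n)) - correction - near
    return correction, near, outer

row = np.array([10, 11, 10, 12, 10, 11, 200, 201, 199, 200, 201, 200], dtype=float)
corr, near, outer = partition_row(row)
```

Here the largest jump lies between pixels 5 and 6, so the correction region straddles that boundary, the neighboring region buffers it on both sides, and everything else is outer region.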

The depth value correction unit 230 corrects the depth value of the correction region by performing interpolation between the filtered image and the previously generated depth map, and generates a corrected depth map by performing depth value correction on the neighboring region and the outer region.

That is, the depth value correction unit 230 corrects the depth value of the depth map corresponding to the correction region, New Depth(i), using Equation 1.

Equation 1

New Depth(i) = Σₙ (SI(n) × Depth(n))

Here, i is the pixel index, and n runs over an interpolation interval to the left and right of i, typically set slightly smaller than the correction region. SI(n) is the pixel value of the input image (the filtered image), Depth(n) is the pixel value of the depth map, and New Depth(i) is the corrected depth value at pixel position i.

In other words, under the precondition that the boundary characteristics of the filtered image match the boundary characteristics of the depth map, the depth value correction unit 230 performs the interpolation between the filtered image and the depth map over the correction region according to Equation 1. Through this interpolation, the boundary characteristics of the original image are reflected in the depth map, so the corrected depth map has more distinctive boundary characteristics.
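
A sketch of the Equation 1 correction on one scanline. The image values SI(n) are normalized into weights here so that the corrected depth stays within the depth map's value range; that normalization is an assumption, as the patent states only the weighted sum.

```python
import numpy as np

def correct_depth_row(si, depth, correction_idx, n_half=1):
    """Equation 1 sketch: New Depth(i) = sum_n SI(n) * Depth(n), with n running
    over a small window around i.  SI is normalized into weights here (an
    assumption; the patent states only the weighted sum) so the corrected
    depth stays within the depth map's value range."""
    new_depth = depth.astype(float).copy()
    for i in correction_idx:
        lo, hi = max(0, i - n_half), min(len(si), i + n_half + 1)
        w = si[lo:hi].astype(float)
        w = w / w.sum() if w.sum() else np.full_like(w, 1.0 / len(w))
        new_depth[i] = np.sum(w * depth[lo:hi])
    return new_depth

si = np.array([10., 10., 10., 200., 200., 200.])  # filtered image row: sharp edge
depth = np.array([5., 5., 5., 5., 90., 90.])      # depth edge lags the image edge
out = correct_depth_row(si, depth, correction_idx=[3])
```

Because the image values dominate the weights on the bright side of the edge, the corrected depth at the mismatched pixel is pulled toward the foreground value, aligning the depth boundary with the image boundary.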

When the depth values of the correction region have been determined through Equation 1, the depth value correction unit 230 corrects the depth values of the outer region.

Unlike the correction region, the outer region is included in the area of an object to be represented in three dimensions, and is a stable region of the depth map whose degree of change at the boundary is very small. Typically, either no correction is performed, or Gaussian filtering or low-pass filtering with a similar effect is performed for the stability of the depth map. Filtering of the outer region is carried out to add to the stability of the overall depth map.
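
A sketch of the outer-region stabilization, applying a small 1-D Gaussian low-pass kernel only at the outer-region indices (kernel radius and sigma are illustrative assumptions):

```python
import numpy as np

def smooth_outer(depth, outer_idx, sigma=1.0, radius=2):
    """Stabilize the outer region with a small 1-D Gaussian low-pass filter;
    pixels outside outer_idx are left untouched.  Kernel size and sigma are
    illustrative assumptions."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-(x ** 2) / (2 * sigma ** 2))
    g /= g.sum()  # normalize so the kernel preserves the mean level
    blurred = np.convolve(depth.astype(float), g, mode="same")
    out = depth.astype(float).copy()
    for i in outer_idx:
        out[i] = blurred[i]
    return out

depth = np.array([50., 52., 49., 51., 90., 90.])  # small ripple, then a boundary
out = smooth_outer(depth, outer_idx=[1, 2])
```

Only the listed outer-region pixels are smoothed; the boundary values are left for the correction-region and neighboring-region steps.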

When the depth values of the outer region have been corrected as described above, the depth value correction unit 230 performs depth value correction on the neighboring region. The neighboring region is the intermediate area between the correction region and the outer region; because the corrections made in those two regions differ from each other, it serves as a buffer area connecting them.

Thus, the depth value correction unit 230 corrects the depth value of the neighboring region using Equation 2.

Equation 2

New Depth(i) = A + delta × i

Here, i is the pixel index of the neighboring region, running from the side adjoining the correction region to the side adjoining the outer region, and A is the pixel value of the correction region adjacent to the neighboring region.

The delta can be defined as in Equation 3.

Equation 3

delta = (B - A) / (k - j)

Here, A is the pixel value of the correction region adjacent to the neighboring region, B is the pixel value of the outer region adjacent to the neighboring region, j is the position index of the pixel having the value A, and k is the position index of the pixel having the value B.

As a result, delta is the slope value corresponding to the change in pixel values that connects the correction region and the outer region. The depth values of the neighboring region computed according to Equation 2 therefore ensure the continuity of the depth map by linearly connecting the final pixel value of the correction region to the pixel value at the start position of the outer region.
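
The linear connection of Equations 2 and 3 can be sketched as follows, with j and k the indices of the correction-region and outer-region pixels bounding the neighboring region; the explicit step count plays the role of the index i in Equation 2:

```python
def ramp_neighboring(new_depth, j, k):
    """Equations 2 and 3 sketch: linearly connect the last corrected value of
    the correction region (A, at index j) to the first value of the outer
    region (B, at index k): New Depth(i) = A + delta * i, delta = (B - A) / (k - j)."""
    A, B = new_depth[j], new_depth[k]
    delta = (B - A) / (k - j)          # Equation 3: slope between A and B
    out = list(new_depth)
    # `step` plays the role of the index i of Equation 2, counted from A
    for step, i in enumerate(range(j + 1, k), start=1):
        out[i] = A + delta * step
    return out

depth = [40.0, 40.0, 0.0, 0.0, 0.0, 80.0, 80.0]  # indices 2..4 are the neighboring region
out = ramp_neighboring(depth, j=1, k=5)
```

The filled-in values rise in equal steps from A = 40 to B = 80, giving the continuity across regions described above.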

Using the method described above, the depth value correction unit 230 generates a depth map in which the depth values corresponding to each region have been corrected.

Figure 3 is a flow chart illustrating how the three-dimensional image conversion apparatus according to the invention converts an input two-dimensional image into a three-dimensional stereoscopic image, and Figure 4 is an exemplary view for explaining the depth map before and after correction according to the invention.

Referring to Figure 3, the three-dimensional image conversion apparatus analyzes the input two-dimensional image and extracts at least one piece of characteristic information (S302). Here, the characteristic information includes edge information, color information, luminance information, motion information, histogram information, and the like.

After S302, the three-dimensional image conversion apparatus generates a depth map for the input image based on the characteristic information (S304).

Then, the three-dimensional image conversion device filters the input image and corrects the generated depth map using the filtered image (S306). A detailed description of how the three-dimensional image conversion device corrects the depth map will be given with reference to Figure 5.

The depth map generated in S304 and the depth map corrected in S306 will now be compared with reference to Figure 4.

Figure 4(a) is an example of an original image 400 (input image) for three-dimensional conversion. When a depth map is generated from the original image 400 of Figure 4(a), the depth map 410 of Figure 4(b) is generated. That is, when the depth map is generated based on the boundary characteristics of the original image 400, the depth map 410 of Figure 4(b) results.

Comparing the generated depth map 410 with the original image 400, the depth map 410 as a whole properly represents the depth values of the original image 400, but at the boundaries between objects the depth map 410 is represented very roughly. For example, comparing the original image 400 and the depth map 410 at the boundary between person A and person B, the boundary between person A and person B in the depth map 410 is unclear and partial failures exist that do not match the visually natural boundary. That is, the depth map 410 is represented as a shape with large curvature and has very low resolution compared to the original image 400.

Since the depth map 410 has partial failures at the boundaries and low resolution compared to the original image 400, the boundaries that the image intends to present must be corrected to the same level as the original image 400 in order to express a natural stereoscopic effect with it. When the depth map is corrected to the same level as the resolution of the original image 400, the depth map 420 of Figure 4(c), with corrected boundaries, is generated.

That is, when the original image 400 is filtered, the boundary surface with the largest change in pixel characteristics in the filtered image, namely the boundary between person A and person B, is selected; the depth values corresponding to that boundary are corrected using Equation 1; and when the depth values of the neighboring region and the outer region are then corrected, the corrected depth map 420 of Figure 4(c) is produced. A detailed description of the method of correcting the depth values of the neighboring region and the outer region is given below with reference to Figure 5.

Looking at the corrected depth map 420, the boundary between person A and person B is clear, so that person A and person B can be clearly distinguished, and it can be seen that the map has a resolution similar to that of the original image 400.

Referring again to Figure 3, after S306 is performed, the stereoscopic image conversion apparatus converts the input image into a three-dimensional stereoscopic image using the corrected depth map (S308).

As described above, a depth map whose boundary characteristics do not match the original image is corrected, in consideration of the original image, so as to match it as closely as possible; by converting the two-dimensional image into a three-dimensional stereoscopic image using the corrected depth map, the dizziness caused by the mismatch can be relieved.

Figure 5 is a flowchart for explaining a depth map correction process according to the present invention, and Figure 6 illustrates a method by which the depth map correction apparatus corrects a depth map in accordance with the present invention.

Referring to Figure 5, the depth map correction apparatus performs noise filtering and sharp characteristic improvement filtering on a two-dimensional input image (hereinafter referred to as an original image) (S502). The boundaries of an image are subject to many external influences such as noise, and the more noise the original image contains, the smaller the correction effect that can be obtained. Accordingly, before image processing, the depth map correction apparatus performs noise filtering to minimize such influences, and performs sharp characteristic improvement filtering to enhance the characteristics of the boundary surfaces. Regarding the sharp characteristic improvement filtering, referring to (a) of Figure 4, distinct boundaries in the image — such as the boundary between a person and the mountains, or between the mountains and the sky — as well as the boundaries of solid objects, are made visually more distinct by the sharp characteristic improvement filtering, which enlarges the variation of pixel values across each boundary.
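As a rough one-dimensional illustration, the filtering step above can be sketched as a moving-average noise filter followed by unsharp masking, which enlarges pixel-value differences at a boundary. This is only a sketch: the patent does not specify the filters, and the `radius` and `amount` parameters are illustrative assumptions.

```python
def smooth(signal, radius=1):
    """Moving-average noise filter (a simple stand-in for the noise filtering step)."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def sharpen(signal, amount=1.0):
    """Unsharp masking: add back the detail removed by smoothing, which enlarges
    the pixel-value variation at boundaries ('sharp characteristic improvement')."""
    blurred = smooth(signal)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

row = [10, 10, 11, 10, 80, 82, 81, 80]   # a scanline crossing one object boundary
filtered = sharpen(smooth(row))          # denoise, then re-sharpen the boundary
```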

After S502 is performed, the depth map correction apparatus selects, in the filtered image, the boundary surface where the change of the pixel characteristics is largest (S504), and divides the image into a correction area, a neighboring region and an outer region according to the degree of correction with respect to the selected boundary surface (S506).

The correction area is a region corresponding to a boundary of the filtered image, selected as the area where the characteristic (or size) variation of the pixels according to positional change is large; it is the region in which depth map correction is performed on the basis of each boundary. The neighboring region refers to the region adjacent to the correction area, in which depth values for the areas to its left and right are set after depth value correction in the correction area. The outer region refers to the remaining region outside each boundary, other than the correction area and the neighboring region, in which the depth values are corrected as a whole.

To delimit the regions as described above, the control range of the correction area is first set according to the degree of correction, the neighboring region, which has more stable depth values, is set outside that control range, and the remaining region is defined as the outer region.

After S506 is performed, the depth map correction apparatus corrects the depth values of the correction area by performing interpolation between the filtered image and the previously generated depth map with respect to the correction area (S508).

That is, the depth map correction apparatus corrects the depth values of the depth map corresponding to the correction area using Equation (1).

In other words, the depth map correction apparatus performs interpolation between the filtered image and the depth map with respect to the correction area according to Equation (1), under the precondition that the boundary characteristics of the filtered image match the boundary characteristics of the depth map. By this interpolation, the depth map is corrected so that the boundary characteristics of the original image are reflected in it, making its boundary characteristics more distinct.
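A minimal sketch of this Equation (1)-style interpolation follows: for each pixel i of the correction area, the new depth is a sum of image-weighted depth values, New Depth(i) = Σ SI(n)·Depth(n), taken over a small interval n around i (claim 4 states the interval is smaller than the correction area). The claim gives no normalization term; the weights are normalized here so the result stays within the depth range — that normalization, and the interval radius, are assumptions.

```python
def correct_depth(image, depth, correction_idx, radius=1):
    """Correct depth values of the correction area by interpolating between the
    filtered image SI(n) and the initially generated depth map Depth(n)."""
    new_depth = list(depth)
    for i in correction_idx:
        lo, hi = max(0, i - radius), min(len(depth), i + radius + 1)
        weights = [image[n] for n in range(lo, hi)]     # SI(n): image pixel values
        total = sum(weights) or 1.0                     # normalization (assumed)
        new_depth[i] = sum(image[n] * depth[n] for n in range(lo, hi)) / total
    return new_depth
```

Because the image pixel values weight the depth values, the corrected depth follows the sharp image boundary rather than the blurred depth-map boundary, which is the stated purpose of the step.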

When the depth values of the depth map corresponding to the correction area have been corrected through S508, the depth map correction apparatus corrects the depth values of the outer region (S510).

The outer region, unlike the depth map region corresponding to the boundary surface, usually lies inside an object whose three-dimensionality is already expressed, so its degree of change is very small and it is a stable region within the correction range; it is therefore usually not corrected, or, for the stability of the depth map, a low-pass filter such as a Gaussian filter is applied to it. Filtering of the outer region is carried out additionally for the stability of the overall depth map.
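The low-pass stabilization of the outer region can be sketched as below; a moving average is used here as a simple stand-in for the Gaussian or low-pass filter named in the text, and pixels outside the outer region are left untouched.

```python
def smooth_outer(depth, outer_idx, radius=1):
    """Stabilize outer-region depth values with a moving-average low-pass filter
    (stand-in for the Gaussian/low-pass filtering described in the text)."""
    out = list(depth)
    for i in outer_idx:
        lo, hi = max(0, i - radius), min(len(depth), i + radius + 1)
        out[i] = sum(depth[lo:hi]) / (hi - lo)   # local average of original values
    return out
```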

After S510 is performed, the depth map correction apparatus corrects the depth values of the neighboring region (S512). Since the neighboring region is the intermediate area between the correction area and the outer region, whose corrections are performed differently from each other, it serves as a buffer area connecting the two regions. Accordingly, the depth map correction apparatus corrects the neighboring region using Equation (2).
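Per claim 6, Equation (2) is a linear ramp, New Depth(i) = A + delta·i with delta = (B − A)/(k − j), where A = depth[j] is the value at the correction-area edge and B = depth[k] the value at the outer-region edge. In the sketch below the ramp uses the offset (i − j) rather than the claim's bare i, so that it passes through A at position j — that offset is our reading of the formula, not stated explicitly.

```python
def ramp_neighboring(depth, j, k):
    """Bridge the neighboring region between the correction-area edge value
    A = depth[j] and the outer-region edge value B = depth[k] with a linear
    ramp of slope delta = (B - A) / (k - j)."""
    A, B = depth[j], depth[k]
    delta = (B - A) / (k - j)
    out = list(depth)
    for i in range(j + 1, k):        # pixels strictly between the two edges
        out[i] = A + delta * (i - j)
    return out
```

This makes the neighboring region a smooth buffer between the two differently corrected regions, as the text describes.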

After S512 is performed, the depth map correction apparatus generates a depth map in which the depth values corresponding to each region have been corrected (S514).

Referring to Figure 6, the method by which the depth map correction apparatus corrects the depth map using the original image will be described.

Although the image to be corrected by the depth map correction apparatus is a two-dimensional image in x, y space, for convenience of explanation it will be described using a one-dimensional representation along the horizontal axis.

In Figure 6, (1) illustrates the characteristics (size, value, etc.) of the pixels according to positional change in the original image; when noise and sharp characteristic filtering is performed on an original image such as (1), it is converted into an image having the pixel characteristics of (2). In other words, when noise and sharp characteristic filtering is performed on the original image, the noise components of the image are removed and the boundaries become clearly distinguished, as shown in (2).

When the boundary surfaces with a large degree of change are selected in (2), V1 of the rising portion and V2 of the falling portion are selected. With respect to the selected boundary surfaces V1 and V2, the image is divided into a correction area (a), a neighboring region (b) and an outer region (c) according to the degree of correction. For each area, the control range of the correction area (a) is first set according to the degree of correction, the neighboring region (b), which has more stable depth values, is set outside that control range, and the remaining region is set as the outer region (c).

The correction area (a) is selected as the area, corresponding to a boundary of the filtered image, where the change of the pixel values according to positional change is large; the neighboring region (b) is selected as the area adjacent to the correction area (a) within a predetermined distance; and the outer region (c) is selected as the area other than the correction area (a) and the neighboring region (b).

The depth map correction apparatus performs interpolation between the filtered image (2) and the depth map (3) with respect to the correction area (a), under the precondition that the boundary characteristics of the filtered image match the boundary characteristics of the depth map. Here, the depth map (3) is a depth map generated using the characteristic information extracted from the original image (1).

Then, the depth map correction apparatus obtains a corrected depth map as shown in (4) by performing correction on the neighboring region and the outer region.

A detailed description of the method for correcting the depth values of the correction area, the neighboring region and the outer region is given above with reference to Figure 5.

Comparing the depth map (4) corrected by the above process with the initially generated depth map (3), it can be seen that the depth values of the correction area (a) in the corrected depth map (4) match the original image (1) better than the depth values of the corresponding area in the depth map (3).

Also, looking at the boundaries after correction, the boundary surfaces are corrected so as to move as close as possible to the form of the original image, and the variability of the depth map is represented in a minimized form. Of course, part of the variation of the original depth map within the correction area may be completely removed, but since human visual perception is far more sensitive to the match of boundaries, substantial improvements in the actual three-dimensional effect can be achieved.

The stereoscopic image conversion apparatus or the depth map correction apparatus may be implemented in the form of a personal computer, a tablet PC, a laptop, a mobile phone or a smartphone, and the stereoscopic image conversion method or the depth map correction method according to the present invention may be executed by a processor, consisting of one or more cores, provided in such a device.

In accordance with another aspect of the present invention, there is provided a recording medium readable by an electronic device, on which is recorded a program implementing a depth map correction method comprising the steps of: performing noise filtering or sharp characteristic improvement filtering on a two-dimensional input image; selecting, in the filtered image, the boundary surface where the characteristic change of the pixels is largest, and dividing the image into a correction area, a neighboring region and an outer region according to the degree of correction with respect to the boundary surface; and correcting depth values by performing interpolation between the filtered image and the depth map with respect to the correction area, and generating a depth map corrected by depth value correction in the neighboring region and the outer region.

According to another aspect of the present invention, there is provided a recording medium readable by an electronic device, on which is recorded a program implementing a stereoscopic image conversion method comprising the steps of: analyzing a two-dimensional input image to extract at least one piece of characteristic information; generating a depth map for the input image on the basis of the characteristic information; filtering the input image and correcting the generated depth map using the filtered input image; and converting the input image into a three-dimensional stereoscopic image using the corrected depth map.

The depth map correction method and the stereoscopic image conversion method can be written as programs, and the codes and code segments constituting the programs can easily be inferred by programmers in the art.

The stereoscopic image conversion apparatus and the depth map correction apparatus according to the present invention may include a processor, a memory, a storage device and input/output devices as components, and these components may be interconnected using, for example, a system bus.

The processor can process instructions for execution within the apparatus. In one implementation the processor may be a single-threaded processor; in another implementation it may be a multi-threaded processor. The processor can process instructions stored in the memory or on the storage device.

Meanwhile, the memory stores information within the apparatus. In one implementation, the memory is a computer-readable medium. The memory may be a volatile memory unit in one implementation, and a non-volatile memory unit in another. The aforementioned storage device can provide mass storage for the apparatus. In one implementation, the storage device is a computer-readable medium. In various different implementations, the storage device may include, for example, a hard disk device, an optical disk device, or some other mass storage device.

The aforementioned input/output devices provide input/output operations for the system according to the present invention. In one implementation, the input/output devices may include one or more network interface devices such as an Ethernet card, a serial communication device such as an RS-232 port, and/or a wireless interface device such as an 802.11 card. In another implementation, the input/output devices may include driver devices configured to receive input data and send output data to other input/output devices, such as a keyboard, a printer and a display device.

The apparatus according to the present invention may be driven by instructions that cause one or more processors to perform the functions and processes described above. Such instructions may include, for example, interpreted instructions such as script instructions of JavaScript or ECMAScript, executable code, or other instructions stored on a computer-readable medium. Further, the apparatus according to the present invention may be implemented in a distributed manner across a network, such as a server farm, or may be implemented in a single computer device.

Although this specification and the drawings describe an exemplary apparatus configuration, the implementations of the functional operations and the subject matter described in this specification can be implemented in other types of digital electronic circuitry, or in computer software, firmware or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of these. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, that is to say, as one or more modules of computer program instructions encoded on a tangible program storage medium for execution by, or to control the operation of, the apparatus according to the present invention. The computer-readable medium may be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter affecting a machine-readable propagated signal, or a combination of one or more of these.

The terms "processing system", "processor" and "sub-system" encompass all apparatuses, devices and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The processing system can include, in addition to hardware, code that creates an execution environment for the computer program in question, for example code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of these.

A computer program (also known as a program, software, software application, script or code) for executing the method according to the present invention on the apparatus according to the present invention can be written in any form of programming language, including compiled or interpreted languages, or a priori or procedural languages, and can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computer environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer, or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.

Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices such as EPROM, EEPROM and flash memory devices, magnetic disks such as internal hard disks and external disks, magneto-optical disks, and CD-ROM and DVD-ROM discs. The processor and the memory can be supplemented by, or incorporated in, special-purpose logic circuitry.

Implementations of the subject matter described in this specification can be implemented in a computing system that includes a back-end component such as a data server, or that includes a middleware component such as an application server, or that includes a front-end component such as a client computer having a web browser or a graphical user interface through which a user can interact with an implementation of the subject matter described in this specification, or in a computing system that includes any combination of one or more of such back-end, middleware or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, such as a communication network.

While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or a variation of a subcombination.

Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

Particular embodiments of the subject matter described in this specification have been described. Other embodiments are within the scope of the following claims. For example, the operations recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.

This technical description presents the best mode of the present invention and provides examples to illustrate the present invention and to enable a person skilled in the art to make and use the present invention. This written specification is not intended to limit the present invention to the specific terms set forth. Thus, although the present invention has been described in detail with reference to the examples described above, a person of ordinary skill in the art may apply modifications, changes and variations to the examples without departing from the scope of the present invention.

Thus, persons skilled in the art will appreciate that the present invention may be embodied in other specific forms without changing its technical spirit or essential features. Therefore, the embodiments described above should be understood as illustrative and not limiting in all aspects. The scope of the present invention is indicated by the claims described later rather than by the above description, and all modifications derived from the meaning and scope of the claims and their equivalent concepts are intended to be included within the scope of the present invention.

The present invention can be applied to a depth map correction apparatus and method, and to a stereoscopic image conversion apparatus and method using the same, which correct errors of the overall depth map of an image via image processing during stereoscopic image conversion and convert a two-dimensional image into a three-dimensional stereoscopic image using the corrected depth map, thereby minimizing errors in the image conversion.

Claims (14)

  1. A depth map correction apparatus comprising:
    a filtering unit for performing noise filtering or sharp characteristic improvement filtering on a two-dimensional input image;
    a region setting unit for selecting the boundary surface in the filtered image where the characteristic variation of the pixels is largest, and dividing the image into a correction area, a neighboring region and an outer region according to the degree of correction with respect to the selected boundary surface; and
    a depth value correction unit for correcting depth values by performing interpolation between the filtered image and a previously generated depth map with respect to the correction area, and generating a corrected depth map by performing depth value correction on the neighboring region and the outer region.
  2. The apparatus of claim 1,
    wherein the filtering unit comprises a noise filter for removing noise components of the input image, and a sharp characteristic improvement filter for making the deviation of the pixel values across the boundary surfaces of the input image larger than a predetermined value.
  3. The apparatus of claim 1,
    wherein the region setting unit comprises:
    a correction area setting unit for setting, as the correction area, the area corresponding to a boundary surface in the filtered image where the characteristic variation of the pixels according to positional change is largest;
    a neighboring region setting unit for setting an adjacent area within a certain distance of the correction area as the neighboring region; and
    an outer region setting unit for setting the area other than the correction area and the neighboring region as the outer region.
  4. The apparatus of claim 1,
    wherein the depth value correction unit corrects the depth values of the depth map corresponding to the correction area using the following equation:
    Equation;
    Correction area: New Depth(i) = Σ (SI(n) × Depth(n))
    where i is a pixel index of the correction area, n is an interpolation interval smaller than the correction area extending to the left and right of i, SI(n) is the pixel value of the input image (original image), Depth(n) is the pixel value of the depth map, and New Depth(i) is the corrected depth value at pixel position i.
  5. The apparatus of claim 1,
    wherein the depth value correction unit corrects the depth values of the outer region by performing Gaussian filtering or low-pass filtering on the outer region.
  6. The apparatus of claim 1,
    wherein the depth value correction unit corrects the depth values of the neighboring region using the following equation:
    Equation;
    Neighboring region: New Depth(i) = A + delta × i
    where i is a pixel index of the neighboring region, which lies on either side of the correction area toward the outer region, A is the pixel value of the correction area adjacent to the neighboring region, B is the pixel value of the outer region adjacent to the neighboring region, delta is ((B − A)/(k − j)), j is the index of the pixel position having the value A, and k is the index of the pixel position having the value B.
  7. A stereoscopic image conversion apparatus comprising:
    an image analysis unit for analyzing a two-dimensional input image to extract at least one piece of characteristic information;
    a depth map generation unit for generating a depth map for the input image on the basis of the characteristic information;
    a depth map correction unit for filtering the input image and correcting the generated depth map using the filtered image; and
    a stereoscopic image generation unit for converting the input image into a three-dimensional stereoscopic image using the corrected depth map.
  8. The apparatus of claim 7,
    wherein the image analysis unit extracts characteristic information including at least one of edge information, color information, luminance information, motion information and histogram information.
  9. The apparatus of claim 7,
    wherein the depth map generation unit divides a plurality of pixels forming the input image into at least one block, and then generates the depth map by setting a depth value for the at least one block.
  10. The apparatus of claim 7,
    wherein the depth map correction unit performs noise filtering or sharp characteristic improvement filtering on the input image, selects the boundary surface in the filtered image where the characteristic change of the pixels is largest, divides the image into a correction area, a neighboring region and an outer region according to the degree of correction with respect to the boundary surface, corrects depth values by performing interpolation between the filtered image and the generated depth map with respect to the correction area, and generates a depth map corrected by depth value correction in the neighboring region and the outer region.
  11. A method for a depth map correction apparatus to correct a depth map, comprising:
    (a) performing noise filtering or sharp characteristic improvement filtering on a two-dimensional input image;
    (b) selecting the boundary surface in the filtered image where the characteristic change of the pixels is largest, and dividing the image into a correction area, a neighboring region and an outer region according to the degree of correction with respect to the boundary surface; and
    (c) correcting depth values by performing interpolation between the filtered image and a previously generated depth map with respect to the correction area, and generating a depth map corrected by depth value correction in the neighboring region and the outer region.
  12. The method of claim 11,
    wherein step (c) comprises:
    correcting the depth map for the correction area by performing interpolation between the filtered image and the depth map with respect to the correction area;
    correcting the depth values of the outer region by performing Gaussian filtering or low-pass filtering on the outer region; and
    correcting the depth values of the neighboring region using a slope value corresponding to the change of the pixel values, which serves to connect the pixel values of the correction area and the outer region adjacent to the neighboring region.
  13. A method for a stereoscopic image conversion apparatus to convert a two-dimensional input image into a three-dimensional stereoscopic image, comprising:
    analyzing the two-dimensional input image to extract at least one piece of characteristic information;
    generating a depth map for the input image on the basis of the characteristic information;
    filtering the input image and correcting the generated depth map using the filtered image; and
    converting the input image into a three-dimensional stereoscopic image using the corrected depth map.
  14. The method of claim 13,
    wherein correcting the depth map comprises: performing noise filtering or sharp characteristic improvement filtering on the input image; selecting the boundary surface in the filtered image where the characteristic change of the pixels is largest; dividing the image into a correction area, a neighboring region and an outer region according to the degree of correction with respect to the boundary surface; correcting depth values by performing interpolation between the filtered image and the depth map with respect to the correction area; and generating a depth map corrected by depth value correction in the neighboring region and the outer region.
    According to the present invention, a depth map whose boundary characteristics do not match the original image is corrected, in consideration of the original image, so as to match it as closely as possible, and the dizziness caused by the mismatch can be alleviated.
PCT/KR2012/008238 2012-02-10 2012-10-11 Apparatus and method for depth map correction, and apparatus and method for stereoscopic image conversion using same WO2013118955A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR10-2012-0013708 2012-02-10
KR20120013708A KR101332638B1 (en) 2012-02-10 2012-02-10 Apparatus and method for correcting depth map and apparatus and method for generating 3d conversion image using the same

Publications (1)

Publication Number Publication Date
WO2013118955A1 true

Family

ID=48947693

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2012/008238 WO2013118955A1 (en) 2012-02-10 2012-10-11 Apparatus and method for depth map correction, and apparatus and method for stereoscopic image conversion using same

Country Status (2)

Country Link
KR (1) KR101332638B1 (en)
WO (1) WO2013118955A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2561525A (en) * 2016-12-22 2018-10-24 Canon Kk Method and corresponding device for digital 3D reconstruction

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101664868B1 (en) 2015-03-25 2016-10-12 (주)이더블유비엠 Compensation method and apparatus for depth image based on outline
WO2017007047A1 (en) * 2015-07-08 2017-01-12 재단법인 다차원 스마트 아이티 융합시스템 연구단 Spatial depth non-uniformity compensation method and device using jittered comparison

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100046837A1 (en) * 2006-11-21 2010-02-25 Koninklijke Philips Electronics N.V. Generation of depth map for an image
KR20100064196A (en) * 2008-12-04 2010-06-14 삼성전자주식회사 Method and appratus for estimating depth, and method and apparatus for converting 2d video to 3d video
US20100315488A1 (en) * 2009-06-16 2010-12-16 Samsung Electronics Co., Ltd. Conversion device and method converting a two dimensional image to a three dimensional image
KR20110099526A (en) * 2010-03-02 2011-09-08 (주) 스튜디오라온 Method for converting two dimensional images into three dimensional images

Also Published As

Publication number Publication date Type
KR101332638B1 (en) 2013-11-25 grant
KR20130092157A (en) 2013-08-20 application

Similar Documents

Publication Publication Date Title
US7551770B2 (en) Image conversion and encoding techniques for displaying stereoscopic 3D images
US20120147139A1 (en) Stereoscopic image aligning apparatus, stereoscopic image aligning method, and program of the same
US20120237114A1 (en) Method and apparatus for feature-based stereo matching
US20110141237A1 (en) Depth map generation for a video conversion system
US20120176481A1 (en) Processing image data from multiple cameras for motion pictures
US20090244262A1 (en) Image processing apparatus, image display apparatus, imaging apparatus, and image processing method
US8508580B2 (en) Methods, systems, and computer-readable storage media for creating three-dimensional (3D) images of a scene
JP2004200973A (en) Apparatus and method of inputting simple stereoscopic image, program, and recording medium
US20160057363A1 (en) Portable electronic devices with integrated image/video compositing
US6438260B1 (en) Visual presentation of information derived from a 3D image system
JPH07182533A (en) Method for making two-dimensional image into three-dimensional image
CN200980140Y (en) A three-dimensional cam
WO2008063170A1 (en) System and method for compositing 3d images
CN105049718A (en) Image processing method and terminal
US20110169820A1 (en) 3d image special effect device and a method for creating 3d image special effect
US20150163478A1 (en) Selecting Camera Pairs for Stereoscopic Imaging
Schmeing et al. Depth image based rendering: A faithful approach for the disocclusion problem
US20030007560A1 (en) Image segmentation by means of temporal parallax difference induction
US20130039568A1 (en) Image processing apparatus, image processing method, and recording medium
US20130069934A1 (en) System and Method of Rendering Stereoscopic Images
CN101873509A (en) Method for eliminating background and edge shake of depth map sequence
US20130002660A1 (en) Stereoscopic video display device and operation method of stereoscopic video display device
US20120121163A1 (en) 3d display apparatus and method for extracting depth of 3d image thereof
KR20090129175A (en) Method and device for converting image
Ideses et al. 3D from compressed 2D video

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 12868290

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: PCT application non-entry in European phase

Ref document number: 12868290

Country of ref document: EP

Kind code of ref document: A1