KR101760463B1 - Method and Apparatus for correcting a depth map - Google Patents


Publication number
KR101760463B1
Authority
KR
South Korea
Prior art keywords
pixel
current
depth
edge
image frame
Prior art date
Application number
KR1020150149904A
Other languages
Korean (ko)
Other versions
KR20170049042A (en)
Inventor
신지태
오동률
이용우
Original Assignee
성균관대학교산학협력단
Priority date
Filing date
Publication date
Application filed by 성균관대학교산학협력단 filed Critical 성균관대학교산학협력단
Priority to KR1020150149904A priority Critical patent/KR101760463B1/en
Publication of KR20170049042A publication Critical patent/KR20170049042A/en
Application granted granted Critical
Publication of KR101760463B1 publication Critical patent/KR101760463B1/en

Classifications

    • H04N13/0271
    • H04N13/0018
    • H04N13/0022
    • H04N13/0217

Landscapes

  • Image Processing (AREA)

Abstract

A depth map correction method is disclosed. A depth map correction method according to an embodiment of the present invention includes: generating an edge weight map indicating a dynamic object edge of a current color image frame; determining, based on the edge weight map, whether a current depth pixel of the depth map corresponding to the current color image frame belongs to the dynamic object edge; and selectively performing, based on the determination result, a first weighted three-way filtering in which a depth edge weight and a color edge weight are applied to the current depth pixel.

Description

[0001] The present invention relates to a depth map correction method and an apparatus therefor.

The present invention relates to image correction, and more particularly, to a depth map correction method and apparatus therefor.

Currently, a method of separating left and right images from a 3D image and providing them to the viewer's two eyes is common; however, it has the inconvenience of requiring dedicated glasses. Multi-view 3D technology is known to have the advantage of an improved stereoscopic effect and viewpoint freedom, since it shows images whose parallax follows the viewer's movement without glasses.

To acquire multi-view images, it would be ideal to physically use many cameras. In practice, however, Depth Image Based Rendering (DIBR), which captures images with fewer cameras and synthesizes images of new virtual viewpoints from them, is widely used.

A virtual viewpoint is synthesized from a pair of left and right color images and the corresponding pair of left and right depth maps. The depth map, which contains the distance information of the objects in the image, has the characteristics that the relative distance between the camera and an object is expressed in gray scale and that pixel values are almost uniform within an object, with edges as boundaries.

Because the quality of the depth map strongly influences the quality of the synthesized image, acquiring a high-quality depth map is very important.

Conventionally, various filtering techniques, such as the bilateral filter and the joint bilateral filter, have been studied for correcting depth maps in order to obtain a high-quality depth map. However, the prior art removes noise at edge portions insufficiently and does not consider the cognitive characteristic that a person is sensitive to dynamic objects.

An object of an embodiment of the present invention is to provide a depth map correction method and apparatus which can effectively remove noise of an edge portion while taking into consideration a cognitive characteristic that a person is sensitive to a dynamic object.

According to an aspect of the present invention, there is provided a depth map correction method comprising: generating an edge weight map indicating a dynamic object edge of a current color image frame; Determining, based on the edge weight map, whether the current depth pixel of the depth map corresponding to the current color image frame belongs to the dynamic object edge; And selectively performing a first weighted three-way filtering in which a depth edge weight and a color edge weight are applied to the current depth pixel based on the determination result.

Preferably, the selectively performing may include: performing the first weighted three-way filtering on the current depth pixel if it belongs to the dynamic object edge; and performing, if the current depth pixel does not belong to the dynamic object edge, a second weighted three-way filtering in which the depth edge weight and the color edge weight are not applied.

Advantageously, the first weighted three-way filtering may be performed according to Equations (1) to (5) below, and the second weighted three-way filtering according to Equation (6).

[Equation 1] (equation image not reproduced)

[Equation 2] (equation image not reproduced)

[Equation 3] (equation image not reproduced)

[Equation 4] (equation image not reproduced)

[Equation 5] (equation image not reproduced)

[Equation 6] (equation image not reproduced)

Preferably, the first weighted three-way filtering may be performed based on: the Euclidean distance between the current depth pixel and adjacent pixels belonging to a window centered on the current depth pixel; the difference between the pixel value of the current depth pixel and the pixel values of those adjacent pixels; the difference between the pixel value of the current color pixel at the position corresponding to the current depth pixel in the current color image frame and the pixel values of the corresponding adjacent pixels; the depth edge weight; and the color edge weight.

Advantageously, the first weighted three-way filtering may further be performed based on a basic weight generated from the distance standard deviation, which is the standard deviation of the Euclidean distance between the current depth pixel and the adjacent pixels, and from the size of the window to which the current depth pixel belongs.

Preferably, when the basic weight is generated based on the distance standard deviation, which is the standard deviation of the Euclidean distance between the current depth pixel and the adjacent pixels belonging to the window centered on the current depth pixel, and on the size of that window, the depth edge weight is generated based on the basic weight and the pixel values of the adjacent pixels, and the color edge weight is generated based on the basic weight and the pixel values of the corresponding adjacent pixels at positions corresponding to the adjacent pixels in the current color image frame.

Preferably, the depth map correction method according to an embodiment of the present invention further includes detecting, as a current boundary noise pixel, a current depth pixel existing in a boundary noise region between the boundary of the current color image frame and the boundary of the depth map, and the first weighted three-way filtering for the current boundary noise pixel may be performed based on: the value obtained by dividing each Euclidean distance between the current boundary noise pixel and the adjacent pixels belonging to the window centered on it by 2 and applying a Gaussian function; the difference between the pixel value of the current boundary noise pixel and the pixel values of the adjacent pixels; the difference between the pixel value of the current color pixel at the position corresponding to the current boundary noise pixel in the current color image frame and the pixel values of the corresponding adjacent pixels at positions corresponding to the adjacent pixels; the depth edge weight; and the color edge weight.

Advantageously, generating the edge weight map may include: generating a plurality of dynamic object windows by selecting windows for which the mean absolute difference between the pixel values of all pixels belonging to the window containing the current color pixel of the current color image frame and the pixel values of all pixels belonging to the corresponding window in the immediately preceding color image frame is equal to or greater than a threshold; and detecting dynamic object edges in the plurality of dynamic object windows and generating the edge weight map using the detected dynamic object edges.

Advantageously, generating the edge weight map may include: generating a first intermediate image based on a Mean Absolute Difference operation between the pixel values of all pixels belonging to the window containing the current color pixel of the current color image frame and the pixel values of all pixels belonging to the corresponding window in the immediately preceding color image frame; generating a second intermediate image by subtracting the edges of the current color image frame from the first intermediate image; and generating the edge weight map by subtracting the second intermediate image from the first intermediate image.

According to another aspect of the present invention, there is provided a depth map correction apparatus including: a map generation unit generating an edge weight map indicating a dynamic object edge of a current color image frame; a determination unit determining, based on the edge weight map, whether a current depth pixel of the depth map corresponding to the current color image frame belongs to the dynamic object edge; and a correction unit selectively performing, based on the determination result, a first weighted three-way filtering in which a depth edge weight and a color edge weight are applied to the current depth pixel.

Preferably, the correction unit performs the first weighted three-way filtering on the current depth pixel if it belongs to the dynamic object edge, and performs, if the current depth pixel does not belong to the dynamic object edge, a second weighted three-way filtering in which the depth edge weight and the color edge weight are not applied.

Preferably, the correction unit performs the first weighted three-way filtering according to Equations (1) to (5) below, and the second weighted three-way filtering according to Equation (6).

[Equation 1] (equation image not reproduced)

[Equation 2] (equation image not reproduced)

[Equation 3] (equation image not reproduced)

[Equation 4] (equation image not reproduced)

[Equation 5] (equation image not reproduced)

[Equation 6] (equation image not reproduced)

Preferably, the correction unit may perform the first weighted three-way filtering based on: the Euclidean distance between the current depth pixel and the adjacent pixels belonging to the window centered on the current depth pixel; the difference between the pixel value of the current depth pixel and the pixel values of those adjacent pixels; the difference between the pixel value of the current color pixel at the position corresponding to the current depth pixel in the current color image frame and the pixel values of the corresponding adjacent pixels at positions corresponding to the adjacent pixels; the depth edge weight; and the color edge weight.

Preferably, the correction unit may perform the first weighted three-way filtering further based on a basic weight generated from the distance standard deviation, which is the standard deviation of the Euclidean distance between the current depth pixel and the adjacent pixels, and from the size of the window to which the current depth pixel belongs.

Preferably, when the basic weight is generated based on the distance standard deviation, which is the standard deviation of the Euclidean distance between the current depth pixel and the adjacent pixels belonging to the window centered on the current depth pixel, and on the size of that window, the depth edge weight is generated based on the basic weight and the pixel values of the adjacent pixels, and the color edge weight is generated based on the basic weight and the pixel values of the corresponding adjacent pixels at positions corresponding to the adjacent pixels in the current color image frame.

Preferably, the correction unit performs the first weighted three-way filtering based on: the value obtained by dividing each Euclidean distance between the current depth pixel and the adjacent pixels belonging to the window centered on it by 2 and applying a Gaussian function; the difference between the pixel value of the current depth pixel and the pixel values of the adjacent pixels; the difference between the pixel value of the current color pixel at the position corresponding to the current depth pixel in the current color image frame and the pixel values of the corresponding adjacent pixels at positions corresponding to the adjacent pixels; the depth edge weight; and the color edge weight.

Preferably, the map generation unit may generate a plurality of dynamic object windows by selecting windows for which the mean absolute difference between the pixel values of all pixels belonging to the window containing the current color pixel of the current color image frame and the pixel values of all pixels belonging to the corresponding window in the immediately preceding color image frame is equal to or greater than a threshold, detect dynamic object edges in the plurality of dynamic object windows, and generate the edge weight map using the detected dynamic object edges.

Preferably, the map generation unit may generate a first intermediate image based on a Mean Absolute Difference operation between the pixel values of all pixels belonging to the window containing the current color pixel of the current color image frame and the pixel values of all pixels belonging to the corresponding window in the immediately preceding color image frame, generate a second intermediate image by subtracting the edges of the current color image frame from the first intermediate image, and generate the edge weight map by subtracting the second intermediate image from the first intermediate image.

According to an embodiment of the present invention, the first or the second weighted three-way filtering is selectively applied to a depth pixel according to whether the depth pixel belongs to a dynamic object edge, so that noise can be removed while the dynamic object edge is preserved as much as possible, taking into account the cognitive characteristic that a person is sensitive to dynamic objects.

According to another embodiment of the present invention, the boundary noise generated in the synthesized image due to the mismatch between the object boundary in the depth map and the object boundary in the current color image frame is minimized.

FIG. 1 is a view for explaining a depth map correction apparatus according to an embodiment of the present invention.
FIG. 2 is a flowchart illustrating a depth map correction method according to an embodiment of the present invention.
FIG. 3 is a flowchart illustrating a method of generating an edge weight map according to an embodiment of the present invention.
FIG. 4 is a diagram illustrating a method of generating an edge weight map according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating an example in which three-way filtering is applied according to an embodiment of the present invention.
FIG. 6 is a view for explaining an example of three-way filtering to which improved area filtering is added according to an embodiment of the present invention.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the invention is not intended to be limited to the particular embodiments, but includes all modifications, equivalents, and alternatives falling within the spirit and scope of the invention. Like reference numerals are used for like elements in describing each drawing.

The terms first, second, A, B, etc. may be used to describe various elements, but the elements should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another. For example, without departing from the scope of the present invention, the first component may be referred to as a second component, and similarly, the second component may also be referred to as a first component. The term "and/or" includes any combination of a plurality of related listed items or any one of a plurality of related listed items.

It is to be understood that when an element is referred to as being "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, it should be understood that there are no intervening elements.

The terminology used in this application is used only to describe specific embodiments and is not intended to limit the invention. Singular expressions include plural expressions unless the context clearly dictates otherwise. In the present application, terms such as "comprises" or "having" are used to specify the presence of features, numbers, steps, operations, elements, components, or combinations thereof described in the specification, and do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.

Unless defined otherwise, all terms used herein, including technical and scientific terms, have the same meanings as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms such as those defined in commonly used dictionaries are to be interpreted as having meanings consistent with their meanings in the context of the related art, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined in the present application.

Hereinafter, preferred embodiments according to the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a view for explaining a depth map correction apparatus according to an embodiment of the present invention.

Referring to FIG. 1, a depth map correction apparatus 100 according to an embodiment of the present invention includes a map generation unit 110, a determination unit 120, and a correction unit 130.

The map generating unit 110 generates an edge weight map indicating a dynamic object edge of the current color image frame.

Here, the current color image frame means the color image frame corresponding to the depth map to be corrected and may be a 2D or 3D image, and the dynamic object edge means an edge of a moving object among the objects belonging to the current color image frame.

According to an embodiment of the present invention, the map generation unit 110 may generate a plurality of dynamic object windows by selecting windows for which the mean absolute difference between the pixel values of all pixels belonging to the window containing the current color pixel of the current color image frame and the pixel values of all pixels belonging to the corresponding window of the immediately preceding color image frame is equal to or greater than a threshold, detect dynamic object edges in the plurality of dynamic object windows, and generate the edge weight map using the detected dynamic object edges. The method of generating the dynamic object edge is not limited to the above; any method capable of detecting a window containing a dynamic edge can be applied.

The specific operation of the map generation unit 110 will be described later with reference to the embodiments of FIGS. 3 and 4.

The determination unit 120 determines, based on the edge weight map, whether the current depth pixel of the depth map corresponding to the current color image frame belongs to the dynamic object edge.

Here, the current depth pixel means the pixel to which filtering is currently applied in the depth map corresponding to the current color image frame. That is, according to an embodiment of the present invention, noise is removed from the depth map by sequentially filtering its individual pixels, and each individual pixel, in the order in which filtering is applied, is referred to in turn as the current depth pixel.

The correction unit 130 selectively performs a first weighted three-way filtering to which depth edge weighting and color edge weighting are applied to the current depth pixel, based on the determination result.

More specifically, when the current depth pixel belongs to the dynamic object edge, the correction unit 130 performs the first weighted three-way filtering on that pixel; when the current depth pixel does not belong to the dynamic object edge, it performs a second weighted three-way filtering that does not apply the depth edge weight and the color edge weight.

Here, the depth edge weight is generated by reflecting the pixel values of the adjacent pixels of the depth map, and the color edge weight is generated by reflecting the pixel values of the corresponding adjacent pixels (at the same positions) in the current color image frame, so that the corrected pixel value of the current depth pixel obtained through the first weighted three-way filtering is less influenced by the pixel values of the adjacent pixels and of the corresponding adjacent pixels.

The correction unit 130 can generate the corrected depth map through the first weighted three-way filtering and the second weighted three-way filtering.

On the other hand, the first weight filter for the first weighted three-way filtering may be defined as in Equations (1) to (5).

[Equation 1] (equation image not reproduced)

Here, k denotes a normalization factor, X denotes the current depth pixel, m denotes the adjacent pixels belonging to a window centered on the current depth pixel X, ||X - m|| denotes the Euclidean distance between the current depth pixel X and the adjacent pixels m, |f(X) - f(m)| denotes the difference between the pixel value f(X) of the current depth pixel X and the pixel values f(m) of the adjacent pixels m, |C(X) - C(m)| denotes the difference between the pixel value C(X) of the current color pixel at the position corresponding to (the same position as) the current depth pixel X and the pixel values C(m) of the corresponding adjacent pixels at positions corresponding to the adjacent pixels m, σ_d denotes the distance standard deviation, σ_r denotes the pixel value standard deviation, w_D denotes the depth edge weight, w_C denotes the color edge weight, and w_B denotes the basic weight.
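Because the published equation images are not reproduced in this text, the following LaTeX sketch gives only a plausible structure for Equations (1) and (2), inferred from the three-term description and the symbol definitions above; the Gaussian kernels G and the exact placement of the weights w_B, w_D, and w_C are assumptions, not the patent's published formulas.

    \hat{f}(X) = \frac{1}{k} \sum_{m \in \Omega_X}
        \bigl[ w_B \, G_{\sigma_d}(\lVert X - m \rVert) \bigr]
        \bigl[ w_D \, G_{\sigma_r}(f(X) - f(m)) \bigr]
        \bigl[ w_C \, G_{\sigma_r}(C(X) - C(m)) \bigr] \, f(m)

    k = \sum_{m \in \Omega_X}
        \bigl[ w_B \, G_{\sigma_d}(\lVert X - m \rVert) \bigr]
        \bigl[ w_D \, G_{\sigma_r}(f(X) - f(m)) \bigr]
        \bigl[ w_C \, G_{\sigma_r}(C(X) - C(m)) \bigr]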

Equation (1) can be broadly divided into three terms: the first term on the right-hand side may be referred to as an area filter, the second term as a first range filter, and the third term as a second range filter. The weights corresponding to these three filters are multiplied by each pixel value f(m) of the adjacent pixels m in the depth map, and the sum is divided by the normalization factor k.

That is, referring to Equation (1), the corrected pixel value of the current depth pixel X is calculated based on the pixel values f(m) of the pixels adjacent to X. In the present invention, however, the depth edge weight w_D is applied to the first range filter and the color edge weight w_C is applied to the second range filter, so that the corrected pixel value of the current depth pixel X is less affected by the pixel values of the adjacent pixels in the depth map and of the corresponding adjacent pixels in the current color image frame. Although the basic weight is applied to the area filter in Equation (1), in other embodiments the basic weight may be removed from the area filter.

On the other hand, the normalization factor k, the basic weight w_B, the depth edge weight w_D, and the color edge weight w_C are defined as shown in Equations (2) to (5).

[Equation 2] (equation image not reproduced)

Referring to Equation (2), in contrast to Equation (1), f(m) is not multiplied on the right-hand side: k is obtained by summing, over the adjacent pixels m, the weighted terms computed from the Euclidean distance ||X - m|| between the current depth pixel X and the adjacent pixels m, from the difference between the pixel value f(X) of the current depth pixel X and the pixel values f(m) of the adjacent pixels m, and from the corresponding color pixel value difference, and this value becomes the normalization factor.

[Equation 3] (equation image not reproduced)

Here, l is an experimental constant with a value ranging from 0 to 1, and N is the size of the window to which the current depth pixel belongs. Preferably, the experimental constant may be set to 0.4.

[Equation 4] (equation image not reproduced)

[Equation 5] (equation image not reproduced)

Here, the depth edge weight w_D is defined based on the basic weight w_B and the pixel values f(m) of the adjacent pixels, and the color edge weight w_C is defined based on the basic weight w_B and the pixel values C(m) of the corresponding adjacent pixels. The depth edge weight w_D and the color edge weight w_C thereby allow the distribution of the pixel values of the corrected depth map to remain similar to the distribution of the original pixel values of the current color image frame.

On the other hand, the second weight filter for the second weighted three-way filtering can be defined as in Equation (6).

[Equation 6] (equation image not reproduced)

Referring to Equation (6), although the basic weight is applied to the area filter, which is the first term on the right-hand side, the depth edge weight w_D and the color edge weight w_C are not applied to the first range filter (the second term) and the second range filter (the third term). This is because, if the current depth pixel X does not belong to the dynamic object edge, its corrected pixel value may be calculated with a large amount of smoothing by the adjacent pixels m.

That is, whereas the conventional bilateral filter and joint bilateral filter perform the same degree of smoothing for noise removal without distinguishing current depth pixels belonging to the dynamic object edge from the other pixels, in an embodiment of the present invention the first weighted three-way filtering is applied to current depth pixels on the dynamic object edge so as to reduce the influence of the pixel values f(m) of the adjacent pixels and the pixel values C(m) of the corresponding adjacent pixels, and the second weighted three-way filtering is applied to the other current depth pixels. This has the advantage of preserving a depth map pixel distribution close to that of the current color image frame by minimizing the smoothing of dynamic object edge portions.

However, although the basic weight is applied to the area filter, which is the first term on the right-hand side of Equation (6), in other embodiments the basic weight may not be applied to the area filter.
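As an illustrative sketch only (the patent gives the filters as equation images): the following Python/NumPy code applies the first weighted three-way filtering (Equation (1)) or the second weighted three-way filtering (Equation (6)) to a single depth pixel. The Gaussian kernels and the scalar weights w_b, w_d, and w_c stand in for Equations (2) to (5); all names and parameter values are assumptions, not the patent's own.

    import numpy as np

    def gaussian(x, sigma):
        # Unnormalized Gaussian kernel shared by the area and range filters.
        return np.exp(-(x ** 2) / (2.0 * sigma ** 2))

    def filter_depth_pixel(depth, color, x, y, on_dynamic_edge,
                           N=5, sigma_d=1.5, sigma_r=10.0,
                           w_b=1.0, w_d=1.0, w_c=1.0):
        # Correct one depth pixel. When on_dynamic_edge is True, the first
        # weighted three-way filter (Eq. 1) is applied; otherwise the second
        # filter (Eq. 6), which drops the depth and color edge weights.
        r = N // 2
        fX, cX = float(depth[y, x]), float(color[y, x])
        num, k = 0.0, 0.0
        for j in range(-r, r + 1):
            for i in range(-r, r + 1):
                fm = float(depth[y + j, x + i])
                cm = float(color[y + j, x + i])
                area = gaussian(np.hypot(i, j), sigma_d)   # area filter term
                rng_d = gaussian(fX - fm, sigma_r)         # first range filter
                rng_c = gaussian(cX - cm, sigma_r)         # second range filter
                if on_dynamic_edge:                        # Eq. (1)
                    weight = (w_b * area) * (w_d * rng_d) * (w_c * rng_c)
                else:                                      # Eq. (6)
                    weight = (w_b * area) * rng_d * rng_c
                num += weight * fm
                k += weight                                # normalization, Eq. (2)
        return num / k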

FIG. 2 is a flowchart illustrating a depth map correction method according to an embodiment of the present invention.

In step 210, the depth map correction apparatus 100 generates an edge weight map indicating the dynamic object edge of the current color image frame.

In step 220, the depth map correction apparatus 100 determines, based on the edge weight map, whether the current depth pixel of the depth map corresponding to the current color image frame belongs to the dynamic object edge.

In step 230, the depth map correction apparatus 100 selectively performs, based on the determination result, the first weighted three-way filtering in which the depth edge weight and the color edge weight are applied to the current depth pixel.

As described above, the first weighted three-way filtering is performed based on the normalization factor, the Euclidean distance between the current depth pixel and the adjacent pixels belonging to the window centered on it, the difference between the pixel value of the current depth pixel and the pixel values of those adjacent pixels, the difference between the pixel value of the current color pixel at the position corresponding to the current depth pixel in the current color image frame and the pixel values of the corresponding adjacent pixels at positions corresponding to the adjacent pixels, the depth edge weight, and the color edge weight.

Also, the first weighted three-way filtering may be performed based on the basic weight, which is generated based on the distance standard deviation (the standard deviation of the Euclidean distance between the current depth pixel and the adjacent pixels) and the size of the window to which the current depth pixel belongs.
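A minimal driver for steps 210 to 230 might look as follows (a sketch reusing filter_depth_pixel and numpy from the previous example; treating any nonzero edge weight map value as a dynamic object edge and skipping the image border are simplifying assumptions):

    def correct_depth_map(depth, color, edge_weight_map, N=5):
        # Steps 220-230: per pixel, apply the first weighted filtering on
        # dynamic-object-edge pixels and the second filtering elsewhere.
        out = depth.astype(np.float64)
        r = N // 2
        h, w = depth.shape
        for yy in range(r, h - r):
            for xx in range(r, w - r):
                on_edge = edge_weight_map[yy, xx] > 0
                out[yy, xx] = filter_depth_pixel(depth, color, xx, yy,
                                                 on_edge, N=N)
        return out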

On the other hand, in another embodiment, in order to remove the boundary noise generated by the mismatch between the boundary of an object in the current color image frame and the boundary of that object in the corresponding depth map, a modified Euclidean distance may be applied to the first weighted three-way filtering.

That is, the depth map correction apparatus 100 according to an embodiment of the present invention may detect, as a current boundary noise pixel, a current depth pixel existing in the boundary noise region between the boundary of the current color image frame and the boundary of the depth map, and then perform, on the current boundary noise pixel, the first weighted three-way filtering to which improved area filtering is added.

For example, for the current boundary noise pixel, the Euclidean distance ||X - m|| is replaced by G(||X - m|| / 2), the value obtained by dividing it by 2 and applying a Gaussian function; applying this to Equations (1) and (6) yields Equations (7) and (8), respectively.

[Equation 7] (equation image not reproduced)

[Equation 8] (equation image not reproduced)

Referring to Equations (7) and (8), the Euclidean distance ||X - m|| between the current boundary noise pixel X and the adjacent pixels m in Equations (1) and (6) is replaced by G(||X - m|| / 2), which can be calculated by dividing ||X - m|| by 2 and applying a Gaussian function.

As a result, the boundary of the object in the depth map becomes close to the boundary of the object in the current color image frame, and consequently the boundary noise generated due to the mismatch between the two boundaries can be minimized. That is, the pixel value of the current boundary noise pixel becomes close to the pixel values of the edge pixels located at the boundary of the object.
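As a sketch of this modification (reusing the gaussian helper and numpy from the earlier example; the assumption that G(||X - m|| / 2) is substituted directly for the distance inside the existing spatial Gaussian is inferred from the description, since Equations (7) and (8) are given only as images):

    def area_term_boundary(i, j, sigma_d=1.5):
        # Improved area filtering for boundary noise pixels (Eqs. 7-8):
        # replace the Euclidean distance ||X - m|| by G(||X - m|| / 2)
        # before evaluating the area filter, which shrinks the effective
        # distances and pulls the corrected value toward the depth-map
        # object edge and the pixels inside it.
        d_mod = gaussian(np.hypot(i, j) / 2.0, sigma_d)  # G(||X - m|| / 2)
        return gaussian(d_mod, sigma_d)                  # area filter on the replaced distance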

FIG. 3 is a flowchart illustrating an edge weight map generation method according to an embodiment of the present invention, and FIG. 4 is a diagram illustrating the edge weight map generation method according to an embodiment of the present invention.

In step 310, the depth map correction apparatus 100 generates a first intermediate image based on a Mean Absolute Difference (MAD) operation between the pixel values of all pixels belonging to the window containing the current color pixel of the current color image frame and the pixel values of all pixels belonging to the corresponding window in the immediately preceding color image frame.

At this time, the first intermediate image may be generated based on Equation (9) and Equation (10).

First, an average absolute difference (MAD) operation of pixel values between pixels belonging to the current color image frame and pixels belonging to the previous color image frame is performed through Equation (9).

[Equation 9] (equation image not reproduced)

Here, x denotes the x-axis coordinate of the pixels belonging to the window to which the current color pixel belongs, y denotes the y-axis coordinate of those pixels, p-1 denotes the immediately preceding color image frame, p denotes the current color image frame, and N denotes the size of the window. That is, in Equation (9), the absolute values of the differences between the pixel values of pixels at the same positions in the two consecutive color image frames are summed and divided by the number of pixels (N^2) belonging to the window.
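Based on this description, Equation (9) plausibly has the following form (a hedged reconstruction; the published image may differ in notation):

    \mathrm{MAD} = \frac{1}{N^2} \sum_{x} \sum_{y} \left| p(x, y) - p^{-1}(x, y) \right|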

Next, the first intermediate image can be generated by comparing the result of Equation (9) with a threshold value, as in Equation (10).

[Equation 10] (equation image not reproduced)

In Equation (10), when the threshold value is set to 1, the first intermediate image can be generated by setting the pixel values of all pixels belonging to a window to 255 when the mean absolute difference (MAD) computed for that window is 1 or more, and to 0 otherwise.

As described above, a window composed of pixels having a pixel value of 255 in the first intermediate image can be determined to be a window containing a moving dynamic object; hereinafter, such a window is referred to as a dynamic object window. FIG. 4(a) shows an example of the first intermediate image calculated through this operation.
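A sketch of step 310 under the assumption of non-overlapping N x N windows (the text does not state whether the windows overlap; names and the tiling scheme are illustrative):

    import numpy as np

    def first_intermediate_image(cur, prev, N=5, threshold=1.0):
        # Eq. (9): per-window mean absolute difference between the current
        # and previous color frames. Eq. (10): windows whose MAD is at or
        # above the threshold become 255 (dynamic object window), else 0.
        h, w = cur.shape
        out = np.zeros((h, w), dtype=np.uint8)
        for y in range(0, h - N + 1, N):
            for x in range(0, w - N + 1, N):
                diff = (cur[y:y + N, x:x + N].astype(np.int32)
                        - prev[y:y + N, x:x + N].astype(np.int32))
                mad = np.abs(diff).sum() / float(N * N)
                if mad >= threshold:
                    out[y:y + N, x:x + N] = 255
        return out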

In step 320, the depth map correction apparatus 100 subtracts the edge of the current color image frame from the first intermediate image to generate a second intermediate image.

More specifically, the second intermediate image is generated by the following process.

In the first step, edges are detected by setting the pixel values of the pixels belonging to edges in the current color image frame to 255 and the pixel values of the remaining pixels to 0.

Preferably, the first step may be performed using the Sobel algorithm.

In the second step, the second intermediate image is generated by subtracting the edges detected in the first step from the first intermediate image.

At this time, if the edges are subtracted from a dynamic object window in the first intermediate image, only the pixel values of the pixels corresponding to the edges in that dynamic object window become 0, and the pixel values of the remaining pixels remain 255.

If, however, the edges are subtracted from an ordinary window rather than a dynamic object window, subtracting the edge pixel value (255) from the pixel value (0) of the pixels belonging to the ordinary window yields negative values, which are replaced by 0. That is, in a dynamic object window the pixel values of the pixels corresponding to edges are 0 and those of the remaining pixels are 255, while the pixel values of all pixels belonging to ordinary windows are 0. Consequently, only the edges corresponding to dynamic object windows are represented, and the edges corresponding to ordinary windows are not. FIG. 4(b) shows an example of the second intermediate image calculated by this procedure.

In step 330, the depth map correction apparatus 100 generates the edge weight map by subtracting the second intermediate image from the first intermediate image.

When the second intermediate image is subtracted from the first intermediate image, the pixel values of the pixels corresponding to edges in the dynamic object windows become 255 and the pixel values of the remaining pixels become 0, so that an edge weight map containing only the dynamic object edges is generated. FIG. 4(c) shows an example of an edge weight map calculated by this procedure.
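Steps 320 and 330 can be sketched as follows (the Sobel binarization threshold is an assumption, since the text names the Sobel algorithm but gives no threshold; scipy is used only for the gradients):

    import numpy as np
    from scipy import ndimage  # assumed available for the Sobel gradients

    def make_edge_weight_map(first_inter, color_frame, edge_threshold=128):
        # Step 320: detect the edges of the current color frame (255 on
        # edges, 0 elsewhere), subtract them from the first intermediate
        # image, and clip negatives to 0: the second intermediate image.
        g = color_frame.astype(np.float64)
        mag = np.hypot(ndimage.sobel(g, axis=1), ndimage.sobel(g, axis=0))
        edges = np.where(mag > edge_threshold, 255, 0).astype(np.int32)
        second = np.clip(first_inter.astype(np.int32) - edges, 0, 255)
        # Step 330: subtracting the second intermediate image back from the
        # first leaves 255 only on edges inside dynamic object windows.
        return (first_inter.astype(np.int32) - second).astype(np.uint8)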

FIG. 5 is a diagram illustrating an example in which three-way filtering is applied according to an embodiment of the present invention.

FIG. 5(a) is a diagram showing an edge weight map superimposed on a depth map, and FIG. 5(b) is a diagram illustrating the result of performing the three-way filtering according to an embodiment of the present invention on the current depth pixel a of the window 510 displayed on the depth map of FIG. 5(a).

Referring to FIG. 5(a), the window 510 includes the current depth pixel a located at its center, together with hatched pixels, dark gray pixels, and white pixels. Here, the hatched pixels denote pixels corresponding to the dynamic object edge in the edge weight map, the dark gray pixels denote adjacent pixels with relatively large pixel values between 196 and 210, and the white pixels denote adjacent pixels with relatively small pixel values between 40 and 50. The dark gray pixels and the white pixels may be object and background pixels, respectively, separated by the edge.

Referring to FIG. 5(b), the uppermost table 520 shows the original pixel values of the pixels belonging to the window 510 of FIG. 5(a). The first table 532 at the lower left shows the weights of the adjacent pixels reflected when the corrected pixel value of the current depth pixel a is calculated by a conventional bilateral filter, and the second table 534 at the lower left shows the pixel values of the adjacent pixels calculated by multiplying the original pixel values of the pixels belonging to the uppermost table 520 by the weights of the first table 532.

The first table 542 at the lower right shows the weights of the adjacent pixels reflected when the corrected pixel value of the current depth pixel a is calculated by the three-way filter according to an embodiment of the present invention, and the second table 544 at the lower right shows the pixel values of the adjacent pixels calculated by multiplying the original pixel values of the pixels belonging to the uppermost table 520 by the weights of the first table 542.

On this basis, when the corrected pixel value of the current depth pixel a is calculated, the conventional bilateral filter yields a pixel value of 186.95 (rounded to 187), whereas the three-way filter according to an embodiment of the present invention yields a pixel value that rounds to 201. That is, the corrected pixel value of the current depth pixel a calculated through the three-way filter is closer than the conventional bilateral result to the pixel values of the object (dark gray) pixels. This indicates that, with the three-way filtering according to an embodiment of the present invention, smoothing is performed to remove noise while the smoothing of the dynamic object edge is minimized, so that the change in the original pixel distribution characteristics of the depth map is minimized.

FIG. 6 is a view for explaining an example of three-way filtering to which improved area filtering is added according to an embodiment of the present invention.

FIG. 6(a) is an enlarged view of a part of an arbitrary virtual viewpoint synthesized image, in which the current color image frame is superimposed on the depth map, and FIG. 6(b) illustrates the case in which the three-way filtering to which the improved area filtering is added according to an embodiment of the present invention is performed on the current depth pixel a.

Referring to FIG. 6(a), the window 610 includes the current depth pixel a located at its center, together with pixels shown in dark gray and light gray. Here, the pixels shown in dark gray represent pixels of the depth map, the pixels shown in light gray represent pixels belonging to the current color image frame, and the pixels shown in white represent pixels outside the edge of the object in both the depth map and the current color image frame. That is, FIG. 6(a) shows a situation in which boundary noise has occurred because the boundary of the depth map and the boundary of the current color image frame do not coincide; since the current depth pixel a exists in the boundary noise region between the boundary of the current color image frame and the boundary of the depth map, it corresponds to a current boundary noise pixel. Likewise, the pixels d, e, f, k, l, and m exist in the region between the boundary of the current color image frame and the boundary of the depth map, and therefore also correspond to boundary noise pixels.

This is because, with conventional filtering, the pixel distribution characteristics of the corrected depth map become different from those of the depth map before correction, or noise is removed without distinguishing the object portion from the edge boundary portion, so the boundary noise is not resolved.

Accordingly, in an embodiment of the present invention, the Euclidean distance ||X - m|| between the current depth pixel X and the adjacent pixels m applied to the existing area filter is replaced, for the current boundary noise pixel a, by G(||X - m|| / 2), the value obtained by dividing the distance by 2 and applying a Gaussian function, so that the boundary noise can be minimized. Hereinafter, the area filtering in which ||X - m|| is replaced by G(||X - m|| / 2) is called improved area filtering.

The Euclidean distances ||X - m|| between the current depth pixel X and the adjacent pixels m applied to the existing area filter are shown in Table 1, and the replaced distances G(||X - m|| / 2) are shown in Table 2.

[Table 1 image not reproduced]

[Table 2 image not reproduced]

When the replaced distance G(||X - m|| / 2) of Table 2 is used instead of the Euclidean distance of Table 1, the corrected pixel value of the boundary noise pixel a is calculated with greater weight given to the edge of the object corresponding to the boundary of the depth map and to the pixels inside the edge of the object. For example, the pixels b, c, and j in FIG. 6(a) correspond to the edge of the object on the depth map, and the pixels g, h, and i can be regarded as pixels inside the edge of the object. In this case the edge of the object is regarded as one pixel wide; if the edge of the object is viewed as two pixels wide, the pixels b, c, j, g, h, and i can be regarded as the edge of the object.

That is, in the three-way filtering to which the improved area filtering is added according to an embodiment of the present invention, the pixel values of the edge of the depth map object located nearest to the current boundary noise pixel and of the pixels inside that edge are weighted more strongly: as shown in Table 2, the distances between the current boundary noise pixel and the edge of the object, and between the current boundary noise pixel and the pixels inside the edge, are adjusted to be closer.

As a result, when the three-way filtering to which the improved area filtering is added is applied to the current boundary noise pixel, the corrected pixel value of the current boundary noise pixel is calculated to be close to the pixel values of the pixels corresponding to the object edge of the depth map and of the pixels located inside the object edge, so that the boundary of the depth map becomes close to the boundary of the current color image frame. More specifically, by correcting the pixel value of the boundary noise pixel to a value as close as possible to the pixel values of the pixels corresponding to the object edge of the depth map and of the pixels inside the object edge, the boundary noise is reduced.

For example, assume that the edge pixels of the object on the depth map are b, c, and j in FIG. 6(a) and that the pixels inside the edge of the object are g, h, and i. When the three-way filtering according to an embodiment of the present invention is applied to the boundary noise pixel a, the corrected pixel value of a is calculated with the pixel values of the edge pixels b, c, and j and of the inside pixels g, h, and i reflected more strongly, thereby improving the boundary noise by about one to two pixels compared with the prior art.

Since the pixel value of the boundary noise pixel a becomes relatively close to the pixel values of the surrounding object edge, the step size of the pixels representing a curve in the virtual viewpoint synthesized image becomes smaller, and the curve can be expressed more smoothly than before.

Referring to FIG. 6(b), the uppermost table 620 shows the original pixel values of the pixels belonging to the window 610 of FIG. 6(a). The first table 632 at the lower left shows the weights of the adjacent pixels reflected when the corrected pixel value of the current boundary noise pixel a is calculated by a conventional bilateral filter, and the second table 634 at the lower left shows the pixel values of the adjacent pixels calculated by multiplying the original pixel values of the pixels belonging to the uppermost table 620 by the weights of the first table 632.

The first table 642 at the lower right shows the weights of the adjacent pixels reflected when the corrected pixel value of the current boundary noise pixel a is calculated by the three-way filter to which the improved area filtering is added according to an embodiment of the present invention, and the second table 644 at the lower right shows the pixel values of the adjacent pixels calculated by multiplying the original pixel values of the pixels belonging to the uppermost table 620 by the weights of the first table 642.

On this basis, when the corrected pixel value of the current boundary noise pixel a is calculated, the conventional bilateral filter yields a pixel value of 154.40 (rounded to 154), whereas the three-way filter to which the improved area filtering is added according to an embodiment of the present invention yields a pixel value of 161.88 (rounded to 162). That is, the corrected pixel value of the current boundary noise pixel a is increased by about 8 with the improved three-way filter compared with the conventional bilateral filter, which shows that boundary noise and staircase noise can be improved compared with the prior art.

Table 3 compares the performance of the present invention and the prior art using 3D video standard multi-view video-plus-depth sequences.

The sequences used in the experiments are Poznan hall (1920 x 1088), Kendo (1024 x 768), Newspaper (1024 x 768), Balloons (1024 x 768), and Poznan street (1920 x 1088). The parameter values (given as equation images, not reproduced here) and a window size of 5 are used; the experiment uses a 100-frame YUV sequence for each test sequence and is evaluated by the mean MSE (mean square error) over the 100 frames.

[Table 3 image not reproduced]

In Table 3, "original" denotes an intermediate virtual viewpoint created using the provided depth map. Referring to Table 3, it can be seen that the present invention exhibits the lowest MSE values and the best performance.

The above-described embodiments of the present invention can be embodied as a program executable on a computer, and implemented on a general-purpose digital computer that runs the program using a computer-readable recording medium.

The computer-readable recording medium includes magnetic storage media (e.g., ROM, floppy disk, hard disk, etc.) and optical reading media (e.g., CD-ROM, DVD, etc.).

The present invention has been described with reference to the preferred embodiments. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the disclosed embodiments should be considered in an illustrative rather than a restrictive sense. The scope of the present invention is defined by the appended claims rather than by the foregoing description, and all differences within the scope of equivalents thereof should be construed as being included in the present invention.

Claims (18)

A depth map correction method comprising:
generating an edge weight map indicating a dynamic object edge of a current color image frame;
Determining, based on the edge weight map, whether the current depth pixel of the depth map corresponding to the current color image frame belongs to the dynamic object edge; And
Selectively performing a first weighted three-way filtering in which a depth edge weight and a color edge weight are applied to the current depth pixel based on the determination result,
Wherein selectively performing the first weighted three-way filtering comprises:
Performing the first weighted three-way filtering on a current depth pixel belonging to the dynamic object edge if the current depth pixel belongs to the dynamic object edge,
and performing a second weighted three-way filtering in which the depth edge weight and the color edge weight are not applied to the current depth pixel, if the current depth pixel does not belong to the dynamic object edge.
delete

The method according to claim 1,
wherein the first weighted three-way filtering is performed according to the following Equations (1) to (5), and
the second weighted three-way filtering is performed according to Equation (6):
[Equation 1] (equation image not reproduced)

[Equation 2] (equation image not reproduced)

[Equation 3] (equation image not reproduced)

[Equation 4] (equation image not reproduced)

[Equation 5] (equation image not reproduced)

[Equation 6] (equation image not reproduced)
The method according to claim 1,
wherein the first weighted three-way filtering is performed based on: the Euclidean distance between the current depth pixel and adjacent pixels belonging to a window centered on the current depth pixel; a difference between the pixel value of the current depth pixel and the pixel values of the adjacent pixels; a difference between the pixel value of the current color pixel at the position corresponding to the current depth pixel in the current color image frame and the pixel values of the corresponding adjacent pixels at positions corresponding to the adjacent pixels; the depth edge weight; and the color edge weight.
The method of claim 4,
wherein the first weighted three-way filtering is further performed based on a basic weight generated based on a distance standard deviation, which is the standard deviation of the Euclidean distance between the current depth pixel and the adjacent pixels, and the size of the window to which the current depth pixel belongs.
The method of claim 4,
wherein, when a basic weight is generated based on a distance standard deviation, which is the standard deviation of the Euclidean distance between the current depth pixel and the adjacent pixels belonging to the window centered on the current depth pixel, and the size of the window to which the current depth pixel belongs,
the depth edge weight is generated based on the basic weight and the pixel values of the adjacent pixels,
and the color edge weight is generated based on the basic weight and the pixel values of the corresponding adjacent pixels at positions corresponding to the adjacent pixels in the current color image frame.
The method according to claim 1,
further comprising detecting, as a current boundary noise pixel, the current depth pixel existing in a boundary noise region between the boundary of the current color image frame and the boundary of the depth map,
wherein the first weighted three-way filtering for the current boundary noise pixel is performed based on: a value obtained by dividing each Euclidean distance between the current boundary noise pixel and the adjacent pixels belonging to the window centered on the current boundary noise pixel by 2 and applying a Gaussian function; a difference between the pixel value of the current boundary noise pixel and the pixel values of the adjacent pixels; a difference between the pixel value of the current color pixel at the position corresponding to the current boundary noise pixel in the current color image frame and the pixel values of the corresponding adjacent pixels at positions corresponding to the adjacent pixels; the depth edge weight; and the color edge weight.
The method according to claim 1,
wherein the generating of the edge weight map comprises:
generating a plurality of dynamic object windows by selecting windows for which the mean absolute difference between the pixel values of all pixels belonging to the window to which the current color pixel of the current color image frame belongs and the pixel values of all pixels belonging to the corresponding window in the previous color image frame immediately preceding the current color image frame is equal to or greater than a threshold value; and
detecting a dynamic object edge in the plurality of dynamic object windows and generating the edge weight map using the detected dynamic object edge.
The method according to claim 1,
wherein the generating of the edge weight map comprises:
generating a first intermediate image based on a Mean Absolute Difference operation between the pixel values of all pixels belonging to the window to which the current color pixel of the current color image frame belongs and the pixel values of all pixels belonging to the corresponding window in the previous color image frame immediately preceding the current color image frame;
generating a second intermediate image by subtracting the edges of the current color image frame from the first intermediate image and then replacing negative pixel values with 0; and
generating the edge weight map by subtracting the second intermediate image from the first intermediate image.
A depth map correction apparatus comprising:
a map generation unit generating an edge weight map indicating a dynamic object edge of a current color image frame;
A determination unit for determining, based on the edge weight map, whether the current depth pixel of the depth map corresponding to the current color image frame belongs to the dynamic object edge; And
And a correction unit for selectively performing a first weighted three-way filtering in which a depth edge weight and a color edge weight are applied to the current depth pixel based on the determination result,
wherein the correction unit performs the first weighted three-way filtering on the current depth pixel if the current depth pixel belongs to the dynamic object edge, and
performs a second weighted three-way filtering in which the depth edge weight and the color edge weight are not applied to the current depth pixel if the current depth pixel does not belong to the dynamic object edge.
delete

The apparatus of claim 10,
wherein the correction unit performs the first weighted three-way filtering according to the following Equations (1) to (5), and
performs the second weighted three-way filtering according to Equation (6):
[Equation 1] (equation image not reproduced)

[Equation 2] (equation image not reproduced)

[Equation 3] (equation image not reproduced)

[Equation 4] (equation image not reproduced)

[Equation 5] (equation image not reproduced)

[Equation 6] (equation image not reproduced)
The apparatus of claim 10,
wherein the correction unit performs the first weighted three-way filtering based on: the Euclidean distance between the current depth pixel and adjacent pixels belonging to the window centered on the current depth pixel; a difference between the pixel value of the current depth pixel and the pixel values of the adjacent pixels; a difference between the pixel value of the current color pixel at the position corresponding to the current depth pixel in the current color image frame and the pixel values of the corresponding adjacent pixels at positions corresponding to the adjacent pixels; the depth edge weight; and the color edge weight.
The apparatus of claim 13,
wherein the correction unit performs the first weighted three-way filtering further based on a basic weight generated based on the distance standard deviation, which is the standard deviation of the Euclidean distance between the current depth pixel and the adjacent pixels, and the size of the window to which the current depth pixel belongs.
The apparatus of claim 13,
wherein, when the basic weight is generated based on the distance standard deviation, which is the standard deviation of the Euclidean distance between the current depth pixel and the adjacent pixels belonging to the window centered on the current depth pixel, and the size of the window to which the current depth pixel belongs,
the depth edge weight is generated based on the basic weight and the pixel values of the adjacent pixels,
and the color edge weight is generated based on the basic weight and the pixel values of the corresponding adjacent pixels at positions corresponding to the adjacent pixels in the current color image frame.
The apparatus of claim 10,
wherein the correction unit detects, as a current boundary noise pixel, the current depth pixel existing in a boundary noise region between the boundary of the current color image frame and the boundary of the depth map, and
performs the first weighted three-way filtering on the current boundary noise pixel based on: a value obtained by dividing each Euclidean distance between the current boundary noise pixel and the adjacent pixels belonging to the window centered on the current boundary noise pixel by 2 and applying a Gaussian function; a difference between the pixel value of the current boundary noise pixel and the pixel values of the adjacent pixels; a difference between the pixel value of the current color pixel at the position corresponding to the current boundary noise pixel in the current color image frame and the pixel values of the corresponding adjacent pixels at positions corresponding to the adjacent pixels; the depth edge weight; and the color edge weight.
The apparatus of claim 10,
wherein the map generation unit generates a plurality of dynamic object windows by selecting windows for which the mean absolute difference between the pixel values of all pixels belonging to the window to which the current color pixel of the current color image frame belongs and the pixel values of all pixels belonging to the corresponding window in the previous color image frame immediately preceding the current color image frame is equal to or greater than a threshold value, detects a dynamic object edge in the plurality of dynamic object windows, and then generates the edge weight map using the detected dynamic object edge.
The apparatus of claim 10,
wherein the map generation unit generates a first intermediate image based on a Mean Absolute Difference operation between the pixel values of all pixels belonging to the window to which the current color pixel of the current color image frame belongs and the pixel values of all pixels belonging to the corresponding window in the previous color image frame immediately preceding the current color image frame, generates a second intermediate image by subtracting the edges of the current color image frame from the first intermediate image and replacing negative pixel values with 0, and generates the edge weight map by subtracting the second intermediate image from the first intermediate image.
KR1020150149904A 2015-10-28 2015-10-28 Method and Apparatus for correcting a depth map KR101760463B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150149904A KR101760463B1 (en) 2015-10-28 2015-10-28 Method and Apparatus for correcting a depth map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150149904A KR101760463B1 (en) 2015-10-28 2015-10-28 Method and Apparatus for correcting a depth map

Publications (2)

Publication Number Publication Date
KR20170049042A KR20170049042A (en) 2017-05-10
KR101760463B1 true KR101760463B1 (en) 2017-07-24

Family

ID=58744198

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150149904A KR101760463B1 (en) 2015-10-28 2015-10-28 Method and Apparatus for correcting a depth map

Country Status (1)

Country Link
KR (1) KR101760463B1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20190086320A (en) * 2018-01-12 2019-07-22 삼성전자주식회사 The apparatus for proccesing image and method therefor
CN110689565B (en) * 2019-09-27 2022-03-04 北京奇艺世纪科技有限公司 Depth map determination method and device and electronic equipment

Also Published As

Publication number Publication date
KR20170049042A (en) 2017-05-10

Similar Documents

Publication Publication Date Title
JP5615552B2 (en) Generating an image depth map
KR101590767B1 (en) Image processing apparatus and method
TWI489418B (en) Parallax Estimation Depth Generation
JP5230669B2 (en) How to filter depth images
TWI483612B (en) Converting the video plane is a perspective view of the video system
KR100846513B1 (en) Method and apparatus for processing an image
US9171373B2 (en) System of image stereo matching
RU2423018C2 (en) Method and system to convert stereo content
CN110268712B (en) Method and apparatus for processing image attribute map
TW201432622A (en) Generation of a depth map for an image
KR101811718B1 (en) Method and apparatus for processing the image
EP3311361B1 (en) Method and apparatus for determining a depth map for an image
US20140376809A1 (en) Image processing apparatus, method, and program
JP6102928B2 (en) Image processing apparatus, image processing method, and program
US10269099B2 (en) Method and apparatus for image processing
KR101364860B1 (en) Method for transforming stereoscopic images for improvement of stereoscopic images and medium recording the same
US20140294299A1 (en) Image processing apparatus and method
JP6715864B2 (en) Method and apparatus for determining a depth map for an image
JP2012249038A (en) Image signal processing apparatus and image signal processing method
US9667939B2 (en) Image processing apparatus, image processing method, and program
EP3438923B1 (en) Image processing apparatus and image processing method
KR101458986B1 (en) A Real-time Multi-view Image Synthesis Method By Using Kinect
KR101760463B1 (en) Method and Apparatus for correcting a depth map
JP7159198B2 (en) Apparatus and method for processing depth maps
JP6221333B2 (en) Image processing apparatus, image processing circuit, and image processing method

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant