KR20130123240A - Apparatus and method for processing depth image - Google Patents
- Publication number: KR20130123240A (application number KR1020120046534A)
- Authority
- KR
- South Korea
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/128—Adjusting depth or disparity
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
Abstract
Description
The following embodiments relate to an apparatus and method for processing a depth image, and more particularly to processing errors that occur in a depth image during encoding.
Unlike a two-dimensional image, a three-dimensional image provides a more realistic experience to the user through the sense of depth. For a 3D image, the depth of a virtual view corresponding to the 3D image is determined using the left-view depth image and the right-view depth image, and an image of the virtual view is generated by synthesizing the left-view color image and the right-view color image according to that depth.
In this case, the depth image may be distorted at the boundaries of objects due to errors in the encoding process.
When a 3D image is synthesized from a depth image containing such distortion, distortion may also appear at the boundaries of objects included in the 3D image.
Therefore, there is a need for a method capable of processing the errors introduced into a depth image by encoding.
In one embodiment, a depth image processing apparatus includes a first pixel determination unit configured to determine, as first pixels, pixels located at the boundary of an object among the pixels included in a depth image; a second pixel determination unit configured to determine pixels of a predetermined area around the first pixel as second pixels; and a depth value controller configured to adjust the depth value of the first pixel using the depth values of the second pixels.
The depth value controller of the depth image processing apparatus may adjust the depth value of the first pixel by using a first weight proportional to a distance between the first pixel and the second pixel and a depth value of the second pixels.
The depth value controller of the depth image processing apparatus according to an exemplary embodiment may adjust the depth value of the first pixel using the depth values of the second pixels and a second weight that is inversely proportional to the difference between the depth value of the first pixel and the depth value of the second pixel.
The depth value controller of the depth image processing apparatus according to an exemplary embodiment may adjust the depth value of the first pixel using the depth values of the second pixels and a third weight determined according to the direction of the boundary including the first pixel and the slope of the second pixel.
In one embodiment, a depth image processing apparatus includes a pixel setting unit configured to set second pixels around first pixels on the boundary of an object; a weight determination unit configured to determine, using the second pixels, at least one of a first weight based on the inter-pixel distance, a second weight based on the inter-pixel depth value difference, and a third weight based on the directionality of the boundary and the pixel slope; and a depth value controller configured to adjust the depth values of the first pixels using the determined weight.
The weight determiner of the depth image processing apparatus may determine a first weight of a second pixel far from the first pixel to be higher than a first weight of a second pixel close to the first pixel.
The weight determiner of the depth image processing apparatus may include a direction determiner configured to determine the directionality of a boundary using the gradient of an area including the second pixels; and
an inclination determiner configured to determine the slope of a second pixel using the position of the first pixel and the position of the second pixel.
When the horizontal coordinate of the first pixel and the horizontal coordinate of the second pixel are the same, the inclination determiner of the depth image processing apparatus according to an exemplary embodiment may determine the difference between the vertical coordinate of the first pixel and the vertical coordinate of the second pixel as the slope of the second pixel.
According to an embodiment, a depth image processing method may include: determining pixels located at a boundary of an object from among pixels included in a depth image as a first pixel; Determining pixels of a predetermined area as a second pixel with respect to the first pixel; And adjusting the depth value of the first pixel using the depth value of the second pixels.
According to an embodiment, a depth image processing method may include: setting second pixels around first pixels at a boundary of an object; Determining, using the second pixels, at least one of a first weight based on the inter-pixel distance, a second weight based on the inter-pixel depth value difference, and a third weight based on the directionality of the boundary and the pixel slope; And adjusting the depth value of the first pixels using the determined weight.
FIG. 1 is a diagram illustrating an operation of a depth image processing apparatus according to an exemplary embodiment.
FIG. 2 is a diagram illustrating a structure of a depth image processing apparatus according to an exemplary embodiment.
FIG. 3 is a diagram illustrating a structure of a pixel setting unit according to an exemplary embodiment.
FIG. 4 is a diagram illustrating a structure of a weight determining unit according to an embodiment.
FIG. 5 illustrates an example of determining a distance weight according to an embodiment.
FIG. 6 illustrates an example of determining a direction weight according to an embodiment.
FIG. 7 is a diagram illustrating a structure of an encoding apparatus according to an embodiment.
FIG. 8 is a diagram illustrating a structure of a decoding apparatus according to an embodiment.
FIG. 9 is a diagram illustrating a depth image processing method according to an exemplary embodiment.
FIG. 10 illustrates a method of determining a direction weight according to an embodiment.
FIG. 11 is a diagram illustrating an encoding method according to an embodiment.
FIG. 12 is a diagram illustrating a decoding method according to an embodiment.
Hereinafter, embodiments will be described in detail with reference to the accompanying drawings.
FIG. 1 is a diagram illustrating an operation of a depth image processing apparatus according to an exemplary embodiment.
As shown in FIG. 1, the depth image processing apparatus 100 receives a depth image and adjusts the depth values of pixels located at the boundaries of objects.
In detail, the depth image processing apparatus 100 may adjust the depth value of a first pixel using the depth values of the second pixels located around the first pixel.
In this case, the first pixels may be pixels that are located at the boundary of an object among the pixels included in the depth image and whose depth values differ greatly from those of adjacent pixels. Also, the second pixels may be pixels located within a predetermined area around the first pixel.
In addition, the distortion generated at the boundary of an object arises because, across the boundary, the difference between the depth value of a pixel corresponding to the object and the depth value of a pixel corresponding to the background is very large. Therefore, the depth image processing apparatus 100 may reduce such distortion by adjusting the depth values of the first pixels.
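To make the first-pixel role concrete, the following sketch marks boundary ("first") pixels as those whose depth differs sharply from a 4-neighbour. NumPy, the function name `find_first_pixels`, and the threshold of 10 are illustrative assumptions, not details from the patent.

```python
import numpy as np

def find_first_pixels(depth, threshold=10):
    """Mark pixels whose depth differs sharply from a 4-neighbour,
    i.e. candidate object-boundary ("first") pixels.
    The threshold value is an assumption for illustration."""
    d = depth.astype(np.int32)
    mask = np.zeros(d.shape, dtype=bool)
    # compare each pixel with its right neighbour
    diff_x = np.abs(np.diff(d, axis=1)) > threshold
    mask[:, :-1] |= diff_x
    mask[:, 1:] |= diff_x
    # compare each pixel with its bottom neighbour
    diff_y = np.abs(np.diff(d, axis=0)) > threshold
    mask[:-1, :] |= diff_y
    mask[1:, :] |= diff_y
    return mask

# toy depth image: left half background (30), right half object (100)
depth = np.full((4, 6), 30, dtype=np.uint8)
depth[:, 3:] = 100
boundary = find_first_pixels(depth)
```

On this toy image only the two columns adjacent to the depth jump are marked, matching the notion of pixels whose depth differs greatly from a neighbour.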
FIG. 2 is a diagram illustrating a structure of a depth image processing apparatus according to an exemplary embodiment.
Referring to FIG. 2, the depth image processing apparatus 100 may include a pixel setting unit 110, a weight setting unit 120, and a depth value control unit 130.
The pixel setting unit 110 may set, as second pixels, the pixels around a first pixel located on the boundary of an object.
A detailed configuration of the pixel setting unit 110 is described below with reference to FIG. 3.
The weight setting unit 120 may determine, using the second pixels, at least one of a first weight based on the inter-pixel distance, a second weight based on the inter-pixel depth value difference, and a third weight based on the directionality of the boundary and the pixel slope.
A configuration in which the weight setting unit 120 determines the weights is described below with reference to FIG. 4.
The depth value control unit 130 may adjust the depth value of the first pixel to a weighted sum of the depth values of the second pixels, using the weights W_ran, W_dep, and W_dir, for example according to Equation 1.
In this case, p may be a first pixel and q may be a second pixel, and D_p,new may be the depth value of the first pixel adjusted by the depth value control unit 130.
The definitions of W_ran, W_dep, and W_dir and the process of determining them are described in more detail below.
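Equation 1 itself was lost in extraction. A plausible reconstruction, consistent with the weights W_ran, W_dep, and W_dir described below, is a normalized weighted sum over the second pixels; this form is an assumption rather than the patent's verbatim equation:

```latex
D_{p,\mathrm{new}} \;=\;
\frac{\displaystyle\sum_{q \in S_p} W_{ran}(p,q)\, W_{dep}(p,q)\, W_{dir}(p,q)\, D_q}
     {\displaystyle\sum_{q \in S_p} W_{ran}(p,q)\, W_{dep}(p,q)\, W_{dir}(p,q)}
```

where $S_p$ denotes the set of second pixels set around the first pixel $p$, and $D_q$ the depth value of second pixel $q$.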
FIG. 3 is a diagram illustrating a structure of a pixel setting unit according to an exemplary embodiment.
Referring to FIG. 3, the pixel setting unit 110 may include a first pixel determination unit and a second pixel determination unit.
The first pixel determination unit may determine, as first pixels, the pixels located at the boundary of an object among the pixels included in the depth image.
The second pixel determination unit may determine, as second pixels, the pixels within a predetermined area around the first pixel.
FIG. 4 is a diagram illustrating a structure of a weight determining unit according to an embodiment.
Referring to FIG. 4, the weight setting unit 120 may include a distance weight determination unit, a depth value weight determination unit, and a direction weight determination unit.
The distance weight determination unit may determine a distance weight W_ran for each second pixel based on the distance between the first pixel and the second pixel.
For example, the distance weight determination unit may determine the distance weight according to Equation 2.
In this case, W_ran may be the distance weight and exp an exponential function. In addition, p_x and p_y may be the x and y coordinates of the first pixel, and q_x and q_y the x and y coordinates of the second pixel. σ_ran² may be a variance value for the distance between the first pixel and the second pixel.
In this case, the distance weight determination unit may determine the distance weight of a second pixel to be higher as the distance between the second pixel and the first pixel increases.
For example, the distance weight determination unit may determine the distance weight of a second pixel far from the first pixel to be higher than the distance weight of a second pixel close to the first pixel.
Since the depth value of a second pixel close to the first pixel is similar to the depth value of the first pixel, conventional methods set the weight of nearby second pixels high. However, being close to the first pixel also means being close to the boundary of the object, where the possibility of distortion is higher.
Accordingly, the distance weight determination unit may assign higher distance weights to second pixels farther from the boundary, which are less likely to be affected by the distortion.
The second pixels include pixels corresponding to the object and pixels corresponding to the background, and the second pixels to which the distance weight determination unit assigns the highest distance weight are those farthest from the first pixel.
In this case, since the region including the second pixels is set around the first pixel, the pixels having the highest distance weights are at the same distance from the first pixel. That is, the distance weight determination unit determines the same highest distance weight for the farthest object-side pixels and the farthest background-side pixels.
Therefore, the depth value of the first pixel is adjusted mainly by object and background pixels that are least affected by the boundary distortion.
A process by which the distance weight determination unit determines the distance weight is described in more detail with reference to FIG. 5.
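A minimal sketch of a distance weight with the stated behaviour, growing with the distance between the first and second pixel, is shown below. The positive exponent and the value of sigma_ran are assumptions inferred from the text, which gives only the exp form and the proportionality.

```python
import numpy as np

def distance_weight(p, q, sigma_ran=2.0):
    """W_ran: grows with the distance between first pixel p and second
    pixel q, so samples farther from the distorted boundary count more.
    The sign convention (positive exponent) and sigma_ran are assumed;
    the text states only that the weight is proportional to distance."""
    px, py = p
    qx, qy = q
    d2 = (px - qx) ** 2 + (py - qy) ** 2
    return float(np.exp(d2 / (2.0 * sigma_ran ** 2)))

w_near = distance_weight((5, 5), (5, 6))  # adjacent second pixel
w_far = distance_weight((5, 5), (5, 8))   # farther second pixel
```

Within a bounded window this weight stays finite, and the farthest object-side and background-side pixels receive the same, highest weight, as described above.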
The depth value weight determination unit may determine a depth value weight W_dep for each second pixel based on the difference between the depth value of the first pixel and the depth value of the second pixel.
In this case, the depth value weight may be inversely related to the absolute difference between the depth value of the first pixel and the depth value of the second pixel. That is, the smaller the absolute difference between the two depth values, the higher the depth value weight of the second pixel, and the larger the absolute difference, the lower the depth value weight of the second pixel.
For example, the depth value weight determination unit may determine the depth value weight according to Equation 3.
In this case, W_dep may be the depth value weight and exp an exponential function. In addition, D_p may be the depth value of the first pixel and D_q the depth value of the second pixel. σ_dep² may be a variance value for the depth value difference between the first pixel and the second pixel.
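A sketch of the depth value weight under the same assumptions: the Gaussian form is taken from the exp mentioned in the text, and sigma_dep is chosen for illustration.

```python
import numpy as np

def depth_weight(d_p, d_q, sigma_dep=5.0):
    """W_dep: inversely related to the depth difference |D_p - D_q|,
    so second pixels with depth similar to the first pixel dominate.
    The Gaussian form and sigma_dep are assumptions for illustration."""
    diff = float(d_p) - float(d_q)
    return float(np.exp(-(diff ** 2) / (2.0 * sigma_dep ** 2)))

w_similar = depth_weight(100, 102)   # similar depth -> high weight
w_different = depth_weight(100, 30)  # very different depth -> near zero
```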
The direction weight determination unit may determine a direction weight W_dir for each second pixel according to the directionality of the boundary including the first pixel and the slope of the second pixel.
Across the boundary of an object, the depth value of a pixel corresponding to the object differs from the depth value of a pixel corresponding to the background.
For example, among second pixels located at the same distance from the first pixel, a second pixel lying on the boundary that includes the first pixel, a second pixel corresponding to the object, and a second pixel corresponding to the background all have different depth values.
Accordingly, the direction weight determination unit may weight second pixels differently according to their direction with respect to the boundary.
In detail, the direction weight determination unit may first determine the directionality of the boundary using the gradient of the region including the second pixels.
For example, the direction weight determination unit may compute the gradient of the depth image according to Equation 4.
In this case, A may be the depth image.
Next, the direction weight determination unit may determine the direction of the gradient, for example according to Equation 5.
Finally, the direction weight determination unit may classify the directionality of the boundary according to the direction of the gradient.
For example, when the direction of the gradient is 0° or 180°, the direction weight determination unit may determine that the boundary is vertical, since the gradient is perpendicular to the boundary; by the same reasoning, a gradient of 90° or 270° indicates a horizontal boundary.
In addition, when the direction of the gradient is between 90° and 180° or between 270° and 360°, the direction weight determination unit may determine that the boundary is a diagonal in the right direction.
When the direction of the gradient is between 0° and 90° or between 180° and 270°, the direction weight determination unit may determine that the boundary is a diagonal in the left direction.
A process by which the direction weight determination unit determines the directionality of the boundary is described in more detail with reference to FIG. 6.
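The gradient-based classification above can be sketched as follows. The perpendicularity reasoning, the angular tolerance, and the diagonal labels are assumptions filling in the truncated text.

```python
import numpy as np

def boundary_direction(angle_deg, tol=10.0):
    """Classify the boundary from the gradient angle (degrees).
    Because the gradient is perpendicular to an edge, a gradient near
    0/180 degrees implies a vertical boundary and one near 90/270 a
    horizontal boundary; the tolerance and diagonal labels are assumed."""
    a = angle_deg % 180.0
    if a < tol or a > 180.0 - tol:
        return "vertical"
    if abs(a - 90.0) < tol:
        return "horizontal"
    # gradient in (90, 180) or (270, 360) -> edge rising to the right
    if a > 90.0:
        return "diagonal_right"
    # gradient in (0, 90) or (180, 270) -> edge rising to the left
    return "diagonal_left"

# toy depth image with a vertical edge: the gradient points along +x
depth = np.zeros((5, 5))
depth[:, 3:] = 100.0
gy, gx = np.gradient(depth)  # per-axis gradients (rows, then columns)
angle = float(np.degrees(np.arctan2(gy[2, 3], gx[2, 3])))
```

For the toy image the gradient angle is 0°, so the classifier reports a vertical boundary, matching the mapping described above.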
The direction weight determination unit may next determine the slope of each second pixel.
In detail, the direction weight determination unit may determine the slope of a second pixel using the position of the first pixel and the position of the second pixel.
For example, the direction weight determination unit may determine the slope according to Equation 6.
In this case, p_x and p_y may be the x and y coordinates of the first pixel, and q_x and q_y the x and y coordinates of the second pixel.
The direction weight determination unit may then determine the direction weight of each second pixel according to the directionality of the boundary and the slope of the second pixel.
In this case, the direction weight determination unit may use a different equation depending on the directionality of the boundary, for example Equations 8 to 11.
W_dir_vertical may be used as the direction weight W_dir of a second pixel when the boundary direction is vertical. Equation 8 may determine the direction weight of the second pixel to be higher as its slope is closer to horizontal, and lower as its slope is closer to vertical.
W_dir_horizontal may be used as the direction weight W_dir of a second pixel when the boundary direction is horizontal. Equation 9 may determine the direction weight of the second pixel to be higher as its slope is closer to vertical, and lower as its slope is closer to horizontal.
W_dir_diagonal_upleft may be used as the direction weight W_dir of a second pixel when the boundary is a diagonal in the left direction. Equation 10 may determine the direction weight of the second pixel to be low when its slope is close to the left-leaning diagonal.
W_dir_diagonal_upright may be used as the direction weight W_dir of a second pixel when the boundary is a diagonal in the right direction. Equation 11 may determine the direction weight of the second pixel to be low when its slope is close to the right-leaning diagonal.
For example, when the direction of the boundary is horizontal and the slope of the second pixel is 0.8, the direction weight determination unit may determine the direction weight of the second pixel according to Equation 9.
Likewise, when the slope of the second pixel is 0.8 and the direction of the boundary is a diagonal in the left direction, the direction weight determination unit may determine the direction weight of the second pixel according to Equation 10.
As in the above examples, the direction weight determination unit determines the direction weight of each second pixel according to the directionality of the boundary and the slope of the second pixel.
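A sketch of the slope and direction weight under stated assumptions: the fallback for equal x-coordinates follows the text, while the exponential forms stand in for the lost Equations 8 to 11 and only reproduce the monotonic behaviour the text describes.

```python
import math

def pixel_slope(p, q):
    """Slope of the line through first pixel p=(x, y) and second pixel
    q=(x, y). When the x-coordinates coincide, the text falls back to
    the vertical-coordinate difference rather than an infinite slope."""
    (px, py), (qx, qy) = p, q
    if px == qx:
        return float(py - qy)
    return (py - qy) / (px - qx)

def direction_weight(boundary, slope):
    """W_dir: favour second pixels whose direction from the first pixel
    crosses the boundary rather than runs along it. The exponential
    forms below are assumptions standing in for Equations 8-11; only
    their monotonic behaviour follows the text. The left-leaning
    diagonal is assumed to have slope -1, the right-leaning slope +1."""
    s = slope
    if boundary == "vertical":
        return math.exp(-abs(s))              # high when slope near horizontal
    if boundary == "horizontal":
        return 1.0 - math.exp(-abs(s))        # high when slope near vertical
    if boundary == "diagonal_left":
        return 1.0 - math.exp(-abs(s + 1.0))  # low near the left-leaning diagonal
    return 1.0 - math.exp(-abs(s - 1.0))      # low near the right-leaning diagonal
```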
FIG. 5 illustrates an example of determining a distance weight according to an embodiment.
The distance weight determination unit may determine the distance weights of the second pixels located in a region set around the first pixel.
For example, as illustrated in FIG. 5, the region of second pixels may be set as an area centered on the first pixel, which lies on the boundary of the object.
In this case, since the first pixel is located on the boundary, the region contains both pixels corresponding to the object and pixels corresponding to the background.
The pixels corresponding to the object may be disposed on one side of the boundary and the pixels corresponding to the background on the other side.
Accordingly, the distance weight determination unit assigns the highest distance weights to the pixels farthest from the first pixel on both the object side and the background side.
In this case, the depth value of the first pixel is adjusted mainly by pixels that are least affected by the boundary distortion.
FIG. 6 illustrates an example of determining a direction weight according to an embodiment.
When the direction of the gradient of the region including the second pixels is 0° or 180°, the direction weight determination unit may determine that the boundary is vertical.
In this case, the direction weight determination unit may determine the direction weights of the second pixels according to Equation 8.
In addition, when the direction of the gradient is 90° or 270°, the direction weight determination unit may determine that the boundary is horizontal.
In this case, the direction weight determination unit may determine the direction weights of the second pixels according to Equation 9.
In addition, when the direction of the gradient is between 90° and 180° or between 270° and 360°, the direction weight determination unit may determine that the boundary is a diagonal in the right direction.
At this time, the direction weight determination unit may determine the direction weights of the second pixels according to Equation 11.
In addition, when the direction of the gradient is between 0° and 90° or between 180° and 270°, the direction weight determination unit may determine that the boundary is a diagonal in the left direction.
At this time, the direction weight determination unit may determine the direction weights of the second pixels according to Equation 10.
FIG. 7 is a diagram illustrating a structure of an encoding apparatus according to an embodiment.
The encoding apparatus may encode a depth image and generate a bitstream.
Next, the encoding apparatus may reconstruct the encoded depth image so that it can be used as a reference for encoding subsequent images.
The reconstructed depth image may contain distortion at the boundaries of objects caused by the encoding process.
Such distortion degrades the quality of images predicted from the reconstructed depth image.
Therefore, the encoding apparatus may include the depth image processing apparatus 100, which processes the error of the reconstructed depth image before the image is used as a reference.
In detail, the depth image processing apparatus 100 may adjust the depth values of the first pixels located at object boundaries using the depth values of the surrounding second pixels.
In this case, the encoding apparatus may encode subsequent images with reference to the processed depth image.
FIG. 8 is a diagram illustrating a structure of a decoding apparatus according to an embodiment.
The decoding apparatus may receive a bitstream from the encoding apparatus.
The decoding apparatus may decode the received bitstream to reconstruct the depth image.
The reconstructed depth image may contain distortion at the boundaries of objects caused by the encoding process.
Accordingly, the decoding apparatus may include the depth image processing apparatus 100, which processes the error of the reconstructed depth image.
In detail, the depth image processing apparatus 100 may adjust the depth values of the first pixels located at object boundaries using the depth values of the surrounding second pixels.
In this case, when the decoding apparatus outputs the processed depth image, the distortion at object boundaries may be reduced.
FIG. 9 is a diagram illustrating a depth image processing method according to an exemplary embodiment.
First, the depth image processing apparatus 100 may set, as second pixels, the pixels around a first pixel located on the boundary of an object in the depth image.
Next, the depth image processing apparatus 100 may determine a distance weight for each second pixel based on the distance between the first pixel and the second pixel.
The depth image processing apparatus 100 may also determine a depth value weight for each second pixel based on the difference between the depth value of the first pixel and the depth value of the second pixel.
In addition, the depth image processing apparatus 100 may determine a direction weight for each second pixel according to the directionality of the boundary and the slope of the second pixel.
The process of determining the direction weight will be described in detail with reference to FIG. 10.
Finally, the depth image processing apparatus 100 may adjust the depth value of the first pixel using the determined weights.
In this case, the depth image processing apparatus 100 may adjust the depth value of the first pixel to a weighted sum of the depth values of the second pixels.
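Putting the steps of FIG. 9 together, the following sketch refines one first pixel using distance and depth weights (the direction weight is omitted for brevity). All parameter values, and the behaviour on the toy image, are illustrative assumptions rather than the patent's exact procedure.

```python
import numpy as np

def refine_boundary_pixel(depth, p, radius=2, sigma_ran=2.0, sigma_dep=5.0):
    """One pass of the FIG. 9 method for a single first pixel p=(row, col):
    gather the second pixels in a (2*radius+1)^2 window, weight them by
    distance (favouring far pixels) and depth similarity, and return the
    normalized weighted sum replacing D_p. Direction weight omitted;
    all parameters are assumptions."""
    h, w = depth.shape
    py, px = p
    num = 0.0
    den = 0.0
    for qy in range(max(0, py - radius), min(h, py + radius + 1)):
        for qx in range(max(0, px - radius), min(w, px + radius + 1)):
            if (qy, qx) == (py, px):
                continue
            d2 = (px - qx) ** 2 + (py - qy) ** 2
            w_ran = np.exp(d2 / (2 * sigma_ran ** 2))          # favour far pixels
            dd = float(depth[py, px]) - float(depth[qy, qx])
            w_dep = np.exp(-(dd ** 2) / (2 * sigma_dep ** 2))  # favour similar depth
            num += w_ran * w_dep * float(depth[qy, qx])
            den += w_ran * w_dep
    return num / den if den > 0 else float(depth[py, px])

# toy example: a distorted pixel (55) on a 0|100 vertical edge
depth = np.zeros((5, 5), dtype=float)
depth[:, 3:] = 100.0
depth[2, 2] = 55.0  # encoding error near the boundary
new_val = refine_boundary_pixel(depth, (2, 2))
```

On this toy image the distorted value is pulled toward the nearer of the two depth levels, illustrating how the weighted sum suppresses the boundary error.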
FIG. 10 illustrates a method of determining a direction weight according to an embodiment. Steps 1010 to 1030 in FIG. 10 may be included in the weight determining step of FIG. 9.
In step 1010, the depth image processing apparatus 100 may determine the directionality of the boundary.
In detail, the depth image processing apparatus 100 may determine the directionality of the boundary using the gradient of the region including the second pixels.
In step 1020, the depth image processing apparatus 100 may determine the slope of each second pixel.
In detail, the depth image processing apparatus 100 may determine the slope using the position of the first pixel and the position of the second pixel.
In step 1030, the depth image processing apparatus 100 may determine the direction weight of each second pixel according to the directionality of the boundary and the slope of the second pixel.
In this case, the depth image processing apparatus 100 may use a different equation depending on the directionality of the boundary.
FIG. 11 is a diagram illustrating an encoding method according to an embodiment.
First, the encoding apparatus may encode a depth image.
Next, the encoding apparatus may reconstruct the encoded depth image.
The reconstructed depth image may contain distortion at the boundaries of objects caused by the encoding process.
Therefore, the encoding apparatus may process the error of the reconstructed depth image using the depth image processing method described with reference to FIG. 9.
Then, the encoding apparatus may encode subsequent images with reference to the processed depth image.
In detail, the encoding apparatus may adjust the depth values of the first pixels located at object boundaries using the depth values of the surrounding second pixels.
FIG. 12 is a diagram illustrating a decoding method according to an embodiment.
First, the decoding apparatus may receive a bitstream from the encoding apparatus.
Next, the decoding apparatus may decode the received bitstream to reconstruct the depth image.
The reconstructed depth image may contain distortion at the boundaries of objects caused by the encoding process.
Therefore, the decoding apparatus may process the error of the reconstructed depth image using the depth image processing method described with reference to FIG. 9.
In detail, the decoding apparatus may adjust the depth values of the first pixels located at object boundaries using the depth values of the surrounding second pixels.
The apparatus described above may be implemented as hardware components, software components, and/or a combination of hardware and software components. For example, the apparatus and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions. The processing device may run an operating system (OS) and one or more software applications on the operating system. The processing device may also access, store, manipulate, process, and generate data in response to the execution of the software. For ease of understanding, the processing device is sometimes described as being used singly, but those skilled in the art will recognize that it may include a plurality of processing elements and/or multiple types of processing elements. For example, the processing device may include a plurality of processors, or one processor and one controller. Other processing configurations, such as parallel processors, are also possible.
The software may include a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired or may command the processing device independently or collectively. The software and/or data may be embodied, permanently or temporarily, in any type of machine, component, physical device, virtual equipment, computer storage medium or device, or in a transmitted signal wave, in order to be interpreted by the processing device or to provide instructions or data to it. The software may be distributed over networked computer systems and stored or executed in a distributed manner. The software and data may be stored on one or more computer-readable recording media.
The method according to an embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be those specially designed and constructed for the embodiments, or may be known and available to those skilled in computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine language code such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.
While the embodiments have been shown and described with reference to a limited number of drawings, those skilled in the art can make various modifications and variations from the above description. For example, appropriate results may be achieved even if the described techniques are performed in a different order than described, and/or if components of the described systems, structures, devices, and circuits are combined in a different form or replaced or supplemented by other components or their equivalents.
Therefore, other implementations, other embodiments, and equivalents to the claims are also within the scope of the following claims.
100: depth image processing device
110: pixel setting unit
120: weight setting unit
130: depth value control unit
Claims (21)
A depth image processing apparatus comprising: a first pixel determination unit which determines, as first pixels, pixels located at a boundary of an object among pixels included in a depth image; a second pixel determination unit which determines pixels of a predetermined area around the first pixel as second pixels; and a depth value adjusting unit which adjusts the depth value of the first pixel using the depth values of the second pixels.
wherein the depth value adjusting unit adjusts the depth value of the first pixel using a first weight proportional to the distance between the first pixel and the second pixel, and the depth values of the second pixels.
wherein the depth value adjusting unit adjusts the depth value of the first pixel using a second weight inversely proportional to the difference between the depth value of the first pixel and the depth value of the second pixel, and the depth values of the second pixels.
wherein the depth value adjusting unit adjusts the depth value of the first pixel using a third weight according to the direction of the boundary including the first pixel and the slope of the second pixel, and the depth values of the second pixels.
wherein the directionality of the boundary is determined according to the gradient of an area including the second pixels.
wherein, when the horizontal coordinates of the first pixel and the second pixel are the same, the slope of the second pixel is the difference between the vertical coordinates of the first pixel and the second pixel.
A depth image processing apparatus comprising: a pixel setting unit which sets second pixels around first pixels on a boundary of an object; a weight determination unit which determines, using the second pixels, at least one of a first weight based on the inter-pixel distance, a second weight based on the inter-pixel depth value difference, and a third weight based on the directionality of the boundary and the pixel slope; and a depth value adjusting unit which adjusts the depth values of the first pixels using the determined weight.
wherein the weight determination unit determines the first weight of a second pixel far from the first pixel to be higher than the first weight of a second pixel close to the first pixel.
wherein the weight determination unit comprises: a direction determination unit which determines the directionality of a boundary using the gradient of an area including the second pixels; and a gradient determination unit which determines the slope of the second pixel using the position of the first pixel and the position of the second pixel.
wherein, when the horizontal coordinates of the first pixel and the second pixel are the same, the gradient determination unit determines the difference between the vertical coordinates of the first pixel and the second pixel as the slope of the second pixel.
A depth image processing method comprising: determining, as first pixels, pixels located at a boundary of an object among pixels included in a depth image; determining pixels of a predetermined area around the first pixel as second pixels; and adjusting the depth value of the first pixel using the depth values of the second pixels.
wherein the adjusting comprises adjusting the depth value of the first pixel using a first weight proportional to the distance between the first pixel and the second pixel, and the depth values of the second pixels.
wherein the adjusting comprises adjusting the depth value of the first pixel using a second weight inversely proportional to the difference between the depth value of the first pixel and the depth value of the second pixel, and the depth values of the second pixels.
wherein the adjusting comprises adjusting the depth value of the first pixel using a third weight according to the direction of the boundary including the first pixel and the slope of the second pixel, and the depth values of the second pixels.
wherein the directionality of the boundary is information determined according to the gradient of an area including the second pixels.
wherein, when the horizontal coordinates of the first pixel and the second pixel are the same, the slope of the second pixel is the difference between the vertical coordinates of the first pixel and the second pixel.
A depth image processing method comprising: setting second pixels around first pixels on a boundary of an object; determining, using the second pixels, at least one of a first weight based on the inter-pixel distance, a second weight based on the inter-pixel depth value difference, and a third weight based on the directionality of the boundary and the pixel slope; and adjusting the depth values of the first pixels using the determined weight.
wherein the determining of the weight comprises determining the first weight of a second pixel far from the first pixel to be higher than the first weight of a second pixel close to the first pixel.
The depth image processing method further comprising: determining the directionality of the boundary using the gradient of the region including the second pixels; and determining the slope of the second pixel using the position of the first pixel and the position of the second pixel.
wherein the determining of the slope comprises, when the horizontal coordinates of the first pixel and the second pixel are the same, determining the difference between the vertical coordinates of the first pixel and the second pixel as the slope of the second pixel.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020120046534A KR20130123240A (en) | 2012-05-02 | 2012-05-02 | Apparatus and method for processing depth image |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20130123240A true KR20130123240A (en) | 2013-11-12 |
Family
ID=49852618
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020120046534A KR20130123240A (en) | 2012-05-02 | 2012-05-02 | Apparatus and method for processing depth image |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20130123240A (en) |
- 2012-05-02: KR application KR1020120046534A filed as KR20130123240A (status: not active, Application Discontinuation)
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113938665A (en) * | 2020-07-14 | 2022-01-14 | 宏达国际电子股份有限公司 | Method and electronic device for transmitting reduced depth information |
CN113938665B (en) * | 2020-07-14 | 2023-10-13 | 宏达国际电子股份有限公司 | Method and electronic device for transmitting reduced depth information |
US11869167B2 (en) | 2020-07-14 | 2024-01-09 | Htc Corporation | Method for transmitting reduced depth information and electronic device |
CN116503570A (en) * | 2023-06-29 | 2023-07-28 | 聚时科技(深圳)有限公司 | Three-dimensional reconstruction method and related device for image |
CN116503570B (en) * | 2023-06-29 | 2023-11-24 | 聚时科技(深圳)有限公司 | Three-dimensional reconstruction method and related device for image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WITN | Withdrawal due to no request for examination |