KR101681197B1 - Method and apparatus for extraction of depth information of image using fast convolution based on multi-color sensor - Google Patents
- Publication number
- KR101681197B1 (application KR1020150063692A)
- Authority
- KR
- South Korea
- Prior art keywords
- image
- generating
- color
- convolution
- integral
- Prior art date
Classifications
- G06T7/0069
- G06T7/0067
- G06T7/0075
- H04N13/0257
Abstract
A method of extracting depth information of an image using fast convolution includes: an image input step of receiving a color image and an infrared image in the form of a Bayer pattern from a multi-color sensor; a median filter processing step for noise reduction and boundary enhancement, as a process for detecting contours in the color image; a contour detection step of detecting contours in the filtered image; an integral image generation step of generating an integral image based on the infrared image received from the multi-color sensor; a fast convolution step of convolving the image by accessing at least 4 + alpha points (alpha ≥ 0) in the generated integral image and generating a plurality of blurred images using the convolution; and a depth image generation step of generating a depth map by measuring the maximum similarity between a single color (RGB) image and the plurality of blurred images.
Description
BACKGROUND OF THE INVENTION
Due to the rapid development of image processing technology, 3D camera technology including color image and depth image is becoming an issue.
Examples of 3D camera technology include:
(1) Stereo camera (Bumblebee2)
(2) IR pattern (randomized dots) based camera (Kinect, Xtion)
(3) Time of Flight (TOF) camera (Kinect2)
Depth images can be applied to various application fields through 3D reconstruction and expansion into XYZ three-dimensional space.
For example, the following application fields can be cited.
(1) Stereo 3D image generation - 3D display
(2) De-focusing or auto-focusing of a digital camera
(3) 3D reconstruction for 3D printing
(4) 3D motion recognition (Gesture Recognition)
The prior art in this field includes:
(1) Color image and depth information extraction technology using a multi-color sensor based on a dual aperture
(2) Kernel-based image convolution technology
and the like.
Although a single multi-color sensor based on a dual aperture can be used to extract color images and depth information, image processing is difficult because of the high computational complexity and memory usage of depth information extraction. Therefore, it is necessary to reduce the amount of computation and the memory usage.
Conventionally, depth information extraction involves generating a plurality of blurred infrared images using a plurality of PSF (Point Spread Function) models. This requires memory for the PSF models used to convolve the images, and every pixel inside the kernel-based mask must be accessed.
However, according to the prior art, (1) the amount of computation is large and the processing speed is slow; and (2) the memory usage is very large.
The present invention reduces the time required to generate a plurality of convolved infrared images by using only the integral values at as few as 4 + N points (N ≥ 0) in an image, without a PSF model; an image processing technology to this end was developed.
That is, fast convolution based on a multi-color sensor is performed to reduce the amount of computation, shorten the required time, and reduce memory usage. The processing speed and memory usage are compared and analyzed relative to the conventional system.
According to an aspect of the present invention, there is provided a method of extracting depth information of an image using fast convolution, comprising: an image input step of receiving a color image and an infrared image in the form of a Bayer pattern from a multi-color sensor; a median filter processing step for noise reduction and boundary enhancement, as a process for detecting contours in the color image; a contour detection step of detecting contours in the filtered image; an integral image generation step of generating an integral image based on the infrared image received from the multi-color sensor; a fast convolution step of convolving the image by accessing at least 4 + alpha points (alpha ≥ 0) in the generated integral image and generating a plurality of blurred images using the convolution; and a depth image generation step of generating a depth map by measuring the maximum similarity between a single color (RGB) image and the plurality of blurred images.
At this time, it is preferable that the 4 + alpha points are symmetrical to each other with respect to the pixel to be processed.
According to another aspect of the present invention for achieving the above object, there is provided an apparatus for extracting depth information of an image using fast convolution, comprising: an image input unit for receiving a color image and an infrared image in the form of a Bayer pattern from a multi-color sensor; a median filter processing unit for noise reduction and boundary enhancement, as a process for detecting contours in the color image; a contour detection unit for detecting contours in the filtered image; an integral image generation unit for generating an integral image based on the infrared image received from the multi-color sensor; a fast convolution unit for convolving the image by accessing at least 4 + alpha points (alpha ≥ 0) in the generated integral image and generating a plurality of blurred images; and a depth image generation unit for generating a depth map by measuring the maximum similarity between a single color (RGB) image and the plurality of blurred images.
At this time, it is preferable that the 4 + alpha points are symmetrical to each other with respect to the pixel to be processed.
According to the present invention, there are provided an image processing method and apparatus that shorten the time required to generate a plurality of convolved infrared images and reduce memory usage, and that extract depth information of an image using the method.
FIG. 1 is a conceptual diagram of an apparatus for extracting depth information of an image using a high-speed convolution based on a multi-color sensor.
FIG. 2 shows a process of a median filter.
FIG. 3 shows an RGB image and an IR image input through a multi-color sensor.
FIG. 4 shows the image converted through the median filter and the outline detection result using the image. The mask size represents from 3 x 3 to 13 x 13.
FIG. 5 shows a method of generating an integral image.
FIG. 6 shows a method of extracting a convolution value for a desired region in an integral image.
FIG. 7 shows a process of generating an integral image.
FIG. 8 shows an input image received from a multi-color sensor and an integral image of the input image.
FIG. 9 shows a convolution process using an integral-image interpolation method.
FIG. 10 shows a fast convolution result based on an integral image including an interpolation method. The results are convolved from the 0th to the 80th.
FIG. 11 shows data comparing and analyzing a convolution using a plurality of PSF models of 33 x 33 x 16 banks x 32 bits against a fast convolution based on an integral image.
FIG. 12 shows the result of the depth information of the image. The image from the minimum bank to the maximum bank is mapped from red to blue.
Hereinafter, the present invention will be described in detail with reference to the accompanying drawings. However, the parts having the same function by the same configuration are denoted by the same reference numerals even if the drawings are different, and the detailed description thereof may be omitted.
A conceptual diagram of an apparatus for extracting depth information of an image using fast convolution based on the multi-color sensor in the present invention is shown in FIG.
A method for extracting depth information of an image using fast convolution comprises: an image input step (1) of receiving a color image and an infrared image in the form of a Bayer pattern from a multi-color sensor; a median filter processing step (2) for noise reduction and boundary enhancement, as a process for detecting contours in the color image; a contour detection step (3) of detecting contours in the filtered image; an integral image generation step (5) of generating an integral image based on the infrared image received from the multi-color sensor; a fast convolution step (6) of convolving the image by accessing at least 4 + alpha points (alpha ≥ 0) in the generated integral image and generating a plurality of blurred images using the convolution; and a depth image generation step (4) of generating a depth map by measuring the maximum similarity between a single color (RGB) image and the plurality of blurred images. The processes of the above method may be implemented by software stored in a semiconductor storage device such as a ROM, a RAM, or a flash RAM, and the software may be divided by function or module and configured as an apparatus.
In the image input step (1), a color (RGB) image and an infrared (IR) image are received from the multi-color sensor in the form of a Bayer pattern.
The median filter processing step (2) follows the procedure shown in FIG. 2. Neighboring pixels are collected around a reference pixel and sorted in ascending order, and the middle value among the sorted values is mapped to the reference pixel. Here, filtering is preferably performed so as to remove noise and emphasize boundary lines so that highly reliable contours are detected.
More specifically, neighboring pixel points are collected by setting a mask size around the pixel at point (u, v) in the input image. For example, setting a 3x3 mask around the pixel at (u, v) collects nine neighboring values. The pixel at (u, v) is replaced with the median, i.e., the ((mask size x mask size) / 2 + 1)-th value when the collected values are sorted in ascending (or descending) order. In FIG. 2, the nine collected values are arranged in ascending order, and the middle (fifth) value is mapped to the reference pixel.
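As an illustration, the median filtering described above can be sketched as follows. This is a minimal sketch with invented names and data, not the patent's implementation; border pixels are left unchanged for brevity, which the passage does not specify.

```python
# Hypothetical sketch of the median filter step: each interior pixel is
# replaced by the median of its mask x mask neighborhood.

def median_filter(img, mask=3):
    """Replace each interior pixel with the median of its mask x mask neighbors."""
    h, w = len(img), len(img[0])
    r = mask // 2
    out = [row[:] for row in img]
    for v in range(r, h - r):
        for u in range(r, w - r):
            # Collect the mask*mask neighbors around (u, v) and sort ascending.
            vals = sorted(img[v + dv][u + du]
                          for dv in range(-r, r + 1)
                          for du in range(-r, r + 1))
            # Map the middle ((mask*mask)//2 + 1)-th value, i.e. index mask*mask//2.
            out[v][u] = vals[mask * mask // 2]
    return out

noisy = [[1, 2, 3],
         [4, 99, 5],   # 99 is an impulse-noise outlier
         [6, 7, 8]]
print(median_filter(noisy)[1][1])  # -> 5, the neighborhood median
```

Sorting the nine neighbors {1, 2, 3, 4, 5, 6, 7, 8, 99} and taking the fifth value removes the outlier, which is how the filter suppresses noise while preserving boundaries.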
FIG. 4 shows the image converted through the median filter and the contour detection result derived from the filtered image.
As the mask size of the median filter increases, noise is reduced and the boundaries are emphasized. FIG. 4 shows median filtering with mask sizes from 3x3 to 13x13 on the same input image, illustrating the difference in noise reduction and boundary emphasis according to mask size, together with the contour images derived from each filtered image through the same contour detection step. The noise-reduction effect and the emphasis of boundary contours according to each mask size can be observed.
The contour detection step (3) detects contours in the image filtered by the median filter.
The integral image generation step (5) generates an integral image based on the infrared image received from the multi-color sensor.
More specifically, FIG. 5 shows the process of generating an integral image. d(i, j) is the integral value at (i, j) of the integral image, and A(i, j) is the pixel at (i, j) of the input image. Computing the integral value at (i, j) requires the cumulative integral values at three points and the input pixel at one point, i.e., an access to four points: the integral value d(i, j) is obtained by summing the integral values d(i-1, j) and d(i, j-1), subtracting the overlapping integral value d(i-1, j-1), and adding the input pixel A(i, j).
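The four-point recurrence above can be sketched as follows (an illustrative sketch with invented names, not the patent's implementation); out-of-range terms are treated as zero.

```python
# Sketch of integral-image generation using the recurrence
# d(i,j) = d(i-1,j) + d(i,j-1) - d(i-1,j-1) + A(i,j).

def integral_image(A):
    h, w = len(A), len(A[0])
    d = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            d[i][j] = (A[i][j]
                       + (d[i - 1][j] if i > 0 else 0)        # above
                       + (d[i][j - 1] if j > 0 else 0)        # left
                       - (d[i - 1][j - 1] if i > 0 and j > 0 else 0))  # overlap
    return d

A = [[1, 2],
     [3, 4]]
print(integral_image(A))  # -> [[1, 3], [4, 10]]
```

Each entry d(i, j) then holds the sum of all input pixels above and to the left of (i, j), inclusive, which is what makes the later four-point region sums possible.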
FIG. 6 shows the process of convolving a region using only the integral values at four points in the generated integral image. b(i, j) denotes the convolution value for the region of size (w, h) referenced at address (i, j). Computing b(i, j) requires integral values at at least 4 + alpha points in the integral image: from the integral value d(i, j), the integral values d(i-h, j) and d(i, j-w) are subtracted, the integral value d(i-h, j-w) of the overlapping region is added back, and the result is averaged over the size of the convolution region to derive the convolution value.
FIG. 7 shows the process of generating an integral image and obtaining the sum of the pixels of a region by accessing four points in the integral image.
One point in the input image is defined as a pixel value (Pixel), and one point in the integral image is defined as an integral value (Integral Value).
The integral image is generated by sequentially accumulating values based on the input image. For example, assuming that the input image is {1, 2, 2, 4, 1}, the integral image is the sequence of accumulated values {1, 1+2, 1+2+2, 1+2+2+4, 1+2+2+4+1} = {1, 3, 5, 9, 10}.
FIG. 7 also shows the process of deriving the region to be convolved from the integral image using only a four-point access. Subtracting the integral values 22 and 20 from the integral value 46 in FIG. 7 and adding back the overlapping 10 gives 46 - 22 - 20 + 10 = 14, which equals the sum of the pixel values of the corresponding region in the input image (3 + 2 + 5 + 4 = 14). Finally, the result is averaged over the size of the region to be convolved.
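The four-point region sum can be illustrated as follows. The 3x3 sample image below is invented for illustration (it is not taken from the patent drawings), but it reproduces the same pattern as the 46 - 22 - 20 + 10 = 14 example: a 2x2 region whose pixels {3, 2, 5, 4} sum to 14.

```python
# Sketch of the four-point access: the sum of an (h x w) region whose
# bottom-right pixel is (i, j) equals d(i,j) - d(i-h,j) - d(i,j-w) + d(i-h,j-w),
# where out-of-range integral values are taken as 0.

def box_sum(d, i, j, h, w):
    at = lambda y, x: d[y][x] if y >= 0 and x >= 0 else 0
    return at(i, j) - at(i - h, j) - at(i, j - w) + at(i - h, j - w)

A = [[1, 2, 3],
     [4, 3, 2],
     [6, 5, 4]]
# Build the integral image d by accumulating A.
d = [[0] * 3 for _ in range(3)]
for i in range(3):
    for j in range(3):
        d[i][j] = (A[i][j] + (d[i - 1][j] if i else 0)
                   + (d[i][j - 1] if j else 0)
                   - (d[i - 1][j - 1] if i and j else 0))

# Sum of the bottom-right 2x2 region {3, 2, 5, 4}.
print(box_sum(d, 2, 2, 2, 2))  # -> 14
```

Averaging by the region size then gives the convolution value (14 / 4 = 3.5 here), so the cost per output pixel is constant regardless of mask size.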
The fast convolution step (6) generates a plurality of blurred images by accessing at least 4 + alpha points in the generated integral image.
FIG. 9 shows a method in which an interpolation method considering the mask size is applied.
The mask size proceeds through odd values 1, 3, 5, 7, 9, 11, and so on, considering the symmetrical relationship with respect to the center point. The interpolation method is therefore used to derive convolution values for even mask sizes 2, 4, 6, 8, and so on, as well as for fractional sizes such as 1.1, 1.2, 1.3, 1.4, 1.5.
For example, a convolution corresponding to a 4x4 mask is derived from a 3x3 mask and a 5x5 mask. The four points corresponding to the corners of each mask are extracted; the values of the 3x3 mask are multiplied by a weight of 0.5 and the values of the 5x5 mask are multiplied by a weight of 0.5 (i.e., they are averaged). Taking as an example the values {1, 5, 6, 21} of the 3x3 mask and {0, 0, 0, 46} of the 5x5 mask in FIG. 9, the four point values of the 4x4 mask are {0*0.5 + 1*0.5, 0*0.5 + 5*0.5, 0*0.5 + 6*0.5, 46*0.5 + 21*0.5} = {0.5, 2.5, 3.0, 33.5}, with which the convolution is performed.
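The corner-value interpolation above can be sketched in a few lines; the function name and equal 0.5/0.5 weighting follow the worked example, while other weightings for fractional sizes are an assumption.

```python
# Sketch of the interpolation between two enclosing odd mask sizes:
# corner values for an intermediate mask are a weighted mix of the
# corner values of the smaller and larger masks.

def interpolate_corners(c_small, c_large, w_small=0.5, w_large=0.5):
    """Blend the four corner values of two masks with the given weights."""
    return [w_small * a + w_large * b for a, b in zip(c_small, c_large)]

# 3x3 corners {1, 5, 6, 21} and 5x5 corners {0, 0, 0, 46} -> 4x4 corners.
print(interpolate_corners([1, 5, 6, 21], [0, 0, 0, 46]))
# -> [0.5, 2.5, 3.0, 33.5]
```

For a fractional size such as 3.5, one would presumably shift the weights toward the nearer mask (e.g., 0.75/0.25), though the passage only spells out the equal-weight 4x4 case.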
FIG. 10 shows the blurred result images, from the 0th to the 80th convolution, obtained by the fast convolution based on the integral image including the interpolation method.
The depth image generation step (4) generates a depth map by measuring the maximum similarity between the single color image and the plurality of blurred infrared images.
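A per-pixel sketch of this maximum-similarity selection is given below. The similarity measure (negative absolute difference) and the bank layout are assumptions for illustration only; the passage does not specify how similarity is scored.

```python
# Hedged sketch of depth-map generation: for each pixel, choose the bank
# index whose blurred image best matches the reference image. Here
# "maximum similarity" is modeled as the smallest absolute difference.

def depth_map(reference, blurred_bank):
    h, w = len(reference), len(reference[0])
    depth = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Bank index with the smallest difference = maximum similarity.
            depth[y][x] = min(range(len(blurred_bank)),
                              key=lambda b: abs(blurred_bank[b][y][x] - reference[y][x]))
    return depth

ref = [[10, 20]]                                # toy 1x2 reference image
bank = [[[0, 25]], [[9, 14]], [[30, 19]]]       # three hypothetical blur banks
print(depth_map(ref, bank))  # -> [[1, 2]]
```

The selected bank index per pixel is what gets mapped to a color (red to blue, minimum to maximum bank) in the result of FIG. 12.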
FIG. 12 shows the result of the depth information of the image. The image from the minimum bank to the maximum bank is mapped from red to blue.
While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed embodiments, but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
INDUSTRIAL APPLICABILITY The present invention can be applied to a method and apparatus for extracting depth information of an image using fast convolution based on a multi-color sensor.
1: Image input unit
2: Median filter processing unit
3: contour detector
4: Depth image generation unit
5: Integrated image generation unit
6: Fast convolution unit
7: Output section
Claims (4)
An image input step of receiving a color (RGB) image and an infrared (IR) image from a multi-color sensor in the form of a Bayer pattern;
A median filter processing step for noise reduction and boundary enhancement, as a process for detecting a contour in the color image,
An outline detection step for detecting an outline in the filtered image,
An integral image generation step of generating an integral image based on an infrared image input from a multi-color sensor,
A fast convolution step of generating a plurality of blurred infrared images by using the values and magnitudes extracted based on four pixels corresponding to the four corners forming a quadrangle in the generated integral image, and
A depth image generation step of generating a depth map by measuring a maximum similarity between a single color image and a plurality of blurred infrared images
A method of extracting depth information of an image using fast convolution, characterized by comprising the above steps.
The method according to the preceding claim, wherein the four pixels corresponding to the four corners of the quadrangle are symmetrical with respect to the pixel to be processed.
An image input unit for receiving a color (RGB) image and an infrared (IR) image from a multi-color sensor in a Bayer pattern,
A median filter processing unit for noise reduction and boundary enhancement, as a process for detecting an outline in the color image,
An outline detection unit for detecting an outline in the filtered image,
An integral image generating unit for generating an integral image based on the infrared image input from the multi-color sensor,
A fast convolution unit for generating a plurality of blurred infrared images by using the values and magnitudes extracted based on four pixels corresponding to the four corners forming a quadrangle in the generated integral image, and
A depth image generating unit for generating a depth map (Depth-Map) by measuring a maximum similarity between a single color image and a plurality of blurred infrared images ,
An apparatus for extracting depth information of an image using fast convolution, characterized by comprising the above units.
The apparatus according to the preceding claim, wherein the four pixels corresponding to the four corners of the quadrangle are symmetrical with respect to the pixel to be processed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150063692A KR101681197B1 (en) | 2015-05-07 | 2015-05-07 | Method and apparatus for extraction of depth information of image using fast convolution based on multi-color sensor |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150063692A KR101681197B1 (en) | 2015-05-07 | 2015-05-07 | Method and apparatus for extraction of depth information of image using fast convolution based on multi-color sensor |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20160132209A KR20160132209A (en) | 2016-11-17 |
KR101681197B1 true KR101681197B1 (en) | 2016-12-02 |
Family
ID=57542216
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150063692A KR101681197B1 (en) | 2015-05-07 | 2015-05-07 | Method and apparatus for extraction of depth information of image using fast convolution based on multi-color sensor |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101681197B1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021118270A1 (en) | 2019-12-11 | 2021-06-17 | Samsung Electronics Co., Ltd. | Method and electronic device for deblurring blurred image |
CN114697584B (en) * | 2020-12-31 | 2023-12-26 | 杭州海康威视数字技术股份有限公司 | Image processing system and image processing method |
CN113658134A (en) * | 2021-08-13 | 2021-11-16 | 安徽大学 | Multi-mode alignment calibration RGB-D image salient target detection method |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100809687B1 (en) * | 2006-02-28 | 2008-03-06 | 삼성전자주식회사 | Image processing apparatus and method for reducing noise in image signal |
US20130033579A1 (en) | 2010-02-19 | 2013-02-07 | Dual Aperture Inc. | Processing multi-aperture image data |
KR102086509B1 (en) * | 2012-11-23 | 2020-03-09 | 엘지전자 주식회사 | Apparatus and method for obtaining 3d image |
- 2015-05-07 KR KR1020150063692A patent/KR101681197B1/en active IP Right Grant
Non-Patent Citations (2)
Title |
---|
Paper 1: Air Defense Engineering Society (방공공학회) |
Paper 2: Institute of Electronics Engineers of Korea (전자공학회) |
Also Published As
Publication number | Publication date |
---|---|
KR20160132209A (en) | 2016-11-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9305360B2 (en) | Method and apparatus for image enhancement and edge verification using at least one additional image | |
US9390511B2 (en) | Temporally coherent segmentation of RGBt volumes with aid of noisy or incomplete auxiliary data | |
RU2012145349A | Method and device for processing images for removing depth artifacts | |
JP5908844B2 (en) | Image processing apparatus and image processing method | |
KR20150116833A (en) | Image processor with edge-preserving noise suppression functionality | |
JP2020129276A (en) | Image processing device, image processing method, and program | |
KR20130112311A (en) | Apparatus and method for reconstructing dense three dimension image | |
JP6497162B2 (en) | Image processing apparatus and image processing method | |
KR102516495B1 (en) | Methods and apparatus for improved 3-d data reconstruction from stereo-temporal image sequences | |
KR101681197B1 (en) | Method and apparatus for extraction of depth information of image using fast convolution based on multi-color sensor | |
US9466095B2 (en) | Image stabilizing method and apparatus | |
KR101681199B1 (en) | Multi-color sensor based, method and apparatus for extraction of depth information from image using high-speed convolution | |
JP2011150483A (en) | Image processing device | |
WO2017128646A1 (en) | Image processing method and device | |
US20130294708A1 (en) | Object separating apparatus, image restoration apparatus, object separating method and image restoration method | |
JP5662890B2 (en) | Image processing method, image processing apparatus, image processing program, and radiation dose estimation method by image processing | |
CN106663317B (en) | Morphological processing method and digital image processing device for digital image | |
GB2545649B (en) | Artefact detection | |
KR101527962B1 (en) | method of detecting foreground in video | |
Gao et al. | Depth error elimination for RGB-D cameras | |
KR101796551B1 (en) | Speedy calculation method and system of depth information strong against variable illumination | |
EP3547251B1 (en) | Dynamic range extension of partially clipped pixels in captured images | |
Viacheslav et al. | Kinect depth map restoration using modified exemplar-based inpainting | |
KR101711929B1 (en) | Method and apparatus for extraction of edge in image based on multi-color and multi-direction | |
Roosta et al. | Multifocus image fusion based on surface area analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant |