KR100776649B1 - A depth information-based Stereo/Multi-view Stereo Image Matching Apparatus and Method - Google Patents


Info

Publication number
KR100776649B1
Authority
KR
South Korea
Prior art keywords
image
depth information
variation
information
disparity
Prior art date
Application number
KR1020050028382A
Other languages
Korean (ko)
Other versions
KR20060063558A (en)
Inventor
김강연
김승만
안충현
엄기문
이관행
이수인
Original Assignee
광주과학기술원
한국전자통신연구원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to KR20040101773 priority Critical
Priority to KR1020040101773 priority
Application filed by 광주과학기술원, 한국전자통신연구원 filed Critical 광주과학기술원
Publication of KR20060063558A publication Critical patent/KR20060063558A/en
Application granted granted Critical
Publication of KR100776649B1 publication Critical patent/KR100776649B1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0081Depth or disparity estimation from stereoscopic image signals

Abstract

 1. TECHNICAL FIELD OF THE INVENTION
 The present invention relates to a depth information based stereo/multi-view image matching apparatus and method.
 2. The technical problem to be solved by the invention
 The present invention adaptively determines the disparity search range and matching window of stereo matching by using depth information about a reference viewpoint obtained by a depth camera or the like. The object of the present invention is to provide a stereo/multi-view image matching apparatus and method that minimize the disparity search range while improving the accuracy of the disparity map for the reference image obtained by matching one or more stereo images.
 3. Summary of Solution to Invention
 The present invention provides a depth information based stereo/multi-view image matching apparatus for obtaining 3D image information by creating a disparity map from a multi-view image obtained from a multi-view video camera, together with depth information of a reference viewpoint obtained from a depth information acquisition device. The apparatus comprises: camera correction means for extracting camera information of each viewpoint using the multi-view image; depth information converting means for converting the depth information into a disparity with respect to the reference viewpoint; and image correction means for receiving the multi-view image and performing image rectification, which aligns the epipolar lines of the target images to be matched with those of the reference viewpoint image, in order to calculate a disparity map with respect to the reference viewpoint.
 4. Important uses of the invention
The present invention is used for a multiview image based scene modeling system.
 Disparity Map, Stereo Matching, Multi-view Image, 3D Image Processing

Description

A depth information-based stereo / multi-view stereo image matching apparatus and method

1 is a configuration diagram of an embodiment of a depth information-based stereo/multi-view image matching apparatus according to the present invention,

2 is a flowchart illustrating a depth information based stereo / multiview image matching method according to the present invention;

3 is an exemplary diagram of a three-view rectified image produced by a multi-view image corrector according to a preferred embodiment of the present invention;

4 is a flowchart illustrating a process of selecting a matching window and setting a search range according to the present invention;

5 is a diagram illustrating a process of setting a disparity search range using depth information according to an embodiment of the present invention.

* Explanation of symbols for the main parts of the drawings

101: multi-view video camera 102: depth camera

110: multi-view video storage unit 120: depth information storage unit

130: camera correction unit 140: depth information converter

150: multi-view image correction unit 160: matching window selection unit

170: disparity search region calculation unit 180: disparity search unit

190: disparity selection unit 200: disparity map generation/storage unit

The present invention relates to a depth information-based stereo/multi-view image matching apparatus and method, and more particularly, to an apparatus and method that create a disparity map using depth information about a reference viewpoint obtained by an active depth information acquisition device, such as a depth camera or scanner frequently used for modeling a 3D scene or object, and that obtain 3D image information from the resulting disparity map.

Methods of acquiring 3D information of an object or a scene can be largely divided into an active method and a passive method.

The active method acquires depth maps at various viewpoints using an active three-dimensional information acquisition device such as a three-dimensional scanner, structured light, or a three-dimensional depth camera; these can be aligned on a common three-dimensional coordinate system, or a relative transformation relationship can be obtained to align and combine them (see I. Stamos, P. K. Allen, "3-D Model Construction Using Range and Image Data," IEEE International Conference on Computer Vision and Pattern Recognition, pp. 531-536, June 2000).

This active method provides relatively fast and accurate three-dimensional points and has the advantage of not being affected by lighting conditions. However, its resolution is limited and it requires expensive equipment.

Meanwhile, the passive method generates three-dimensional information using texture information from images obtained from optical cameras at various viewpoints; the stereo matching method is typical (Soon-Yong Park and Murali Subbarao, "A Range Image Refinement Technique for Multi-view 3D Model Reconstruction," Proceedings of International Conference on 3-D Digital Imaging and Modeling 2003, October 2003).

Stereo matching takes one of the images acquired from two left and right cameras as the reference image and the other as the search image, and computes, for the same point in space appearing in both images, the difference between its coordinates in the reference image and in the search image; this difference is called the disparity. When the disparity is calculated for each pixel of the reference image and stored in the form of an image, the result is called a disparity map.

That is, the passive method extracts several such disparity maps from images of various viewpoints and obtains three-dimensional information by aligning and combining them on a common coordinate system using camera information.
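As a rough illustration of the stereo matching described above (not the patent's own algorithm — the function name, window size, and SAD cost are illustrative choices), a minimal block matcher that produces a disparity map might look like:

```python
import numpy as np

def sad_disparity(ref, tgt, max_disp, half_win=2):
    """Minimal SAD block matching: for each reference pixel, search the
    target image along the same scan line and keep the shift with the
    smallest sum of absolute differences."""
    h, w = ref.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half_win, h - half_win):
        for x in range(half_win, w - half_win):
            best_cost, best_d = np.inf, 0
            # only shifts that keep the target window inside the image
            for d in range(0, min(max_disp + 1, x - half_win + 1)):
                win_r = ref[y-half_win:y+half_win+1, x-half_win:x+half_win+1]
                win_t = tgt[y-half_win:y+half_win+1, x-d-half_win:x-d+half_win+1]
                cost = np.abs(win_r.astype(int) - win_t.astype(int)).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Note that this exhaustive per-pixel search is exactly the costly dense-disparity computation whose search range the patent proposes to bound with depth information.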

This passive method can obtain three-dimensional information at a lower cost than the active method, obtains more accurate results owing to the higher resolution of the images, and, because it includes texture information, can project that texture onto the generated model to obtain a realistic three-dimensional model. However, it is strongly affected by lighting conditions and texture content, produces large errors in occluded areas, and requires a long execution time to obtain a dense disparity map.

Recently, a three-dimensional depth camera (3DV Systems, Zcam™) has been announced that obtains three-dimensional information using an active infrared sensor while simultaneously capturing the image texture of the same area with the camera.

Since the two types of three-dimensional information acquisition methods described above have complementary advantages and disadvantages, techniques for improving the accuracy of three-dimensional information by combining the active and passive methods have recently been studied (e.g., S. Weik, "Registration of 3-D Partial Surface Models Using Luminance and Depth Information," Proceedings of International Conference on Recent Advances in 3-D Digital Imaging and Modeling, pp. 93-100, May 1997).

However, these methods cannot eliminate errors caused by camera calibration error, and their utility is limited because the image data is used merely as an auxiliary means for accurately aligning range data.

In addition, a technique that compensates for the weaknesses of the two methods by fusing data obtained by an active method with data obtained by a passive method (see Korean Patent No. 1004118750000, Um Ki-Moon et al., stereo image disparity map fusion method and 3D image display method) has also been studied. However, this technique has the disadvantage that when the depth map data produced by either of the two methods is inaccurate, the fused depth map data is also inaccurate. That is, it depends on the accuracy of the two input depth data.

Therefore, it is necessary to develop a technique that uses the two kinds of data complementarily during the depth acquisition process itself, rather than merely fusing the results of the active and passive methods.

The present invention has been proposed to meet the above requirements. An object of the present invention is to provide a stereo/multi-view image matching apparatus and method that adaptively determine the disparity search range and matching window of stereo matching by using depth information about a reference viewpoint obtained by a depth camera or the like, thereby minimizing the disparity search range while improving the accuracy of the disparity map for a reference image obtained by matching one or more stereo images.

A further object of the present invention is to provide a stereo/multi-view image matching apparatus and method that calculate a disparity map for the reference viewpoint from depth information obtained by a depth camera or the like together with one or more pairs of stereo/multi-view images captured at the same instant by two or more cameras, using the depth information to set the disparity search range and to select the matching window type, and thereby obtain three-dimensional image information while minimizing the disparity search range and improving the accuracy of the disparity map.

Other objects and advantages of the present invention can be understood by the following description, and will be more clearly understood by the embodiments of the present invention. Also, it will be readily appreciated that the objects and advantages of the present invention may be realized by the means and combinations thereof indicated in the claims.

The apparatus of the present invention for achieving the above object is a depth information based stereo/multi-view image matching apparatus that obtains 3D image information by creating a disparity map from a multi-view image obtained from a multi-view video camera and depth information for a reference viewpoint obtained from a depth information acquisition device, the apparatus comprising: camera correction means for extracting camera information of each viewpoint using the multi-view image; depth information converting means for converting the depth information into a disparity with respect to the reference viewpoint; and image correction means for receiving the multi-view image and performing image rectification, which aligns the epipolar lines of the target images to be matched with those of the reference viewpoint image, in order to calculate a disparity map with respect to the reference viewpoint.

In addition, the apparatus of the present invention comprises: matching window selection means for selecting a type of matching window using the depth information and the multi-view image information; disparity search region calculation means for calculating a disparity search range using the disparity converted by the depth information converting means; disparity search means for searching for the disparity having the highest similarity within the selected search range using the matching window selected by the matching window selection means; disparity selection means for comparing the disparities of the same point obtained from several viewpoints and selecting the disparity of the stereo image pair having the highest similarity; and disparity map generation/storage means for generating a disparity map by calculating the disparity of each pixel with respect to the reference viewpoint of the multi-view image and recording it as a digital image.

Meanwhile, the method of the present invention is a depth information based stereo/multi-view image matching method that obtains three-dimensional image information by creating a disparity map from a multi-view image obtained from a multi-view video camera and depth information of a reference viewpoint obtained from a depth information acquisition device, the method comprising: an input step of receiving the multi-view image and the depth information of the reference viewpoint; a camera correction step of extracting camera information of each viewpoint; a depth information converting step of converting the depth information into a disparity with respect to the reference viewpoint; an image rectification step of aligning the epipolar lines of the target images to be matched with those of the reference viewpoint image, in order to calculate a disparity map with respect to the reference viewpoint; a matching window selection step of selecting a type of matching window using the depth information and the multi-view image information; a disparity search range calculation step of calculating a disparity search range from the converted disparity; a step of searching for the disparity having the highest similarity within the selected search range using the selected window; a step of comparing the disparities of the same point obtained from several viewpoints and selecting the disparity of the stereo image pair having the highest similarity; and a step of calculating the disparity of each pixel to generate a disparity map and record it as a digital image.

The above objects, features, and advantages will become more apparent from the following detailed description taken in conjunction with the accompanying drawings, so that those skilled in the art may easily implement the technical idea of the present invention. In describing the present invention, when it is determined that a detailed description of known technology related to the present invention may unnecessarily obscure its gist, that description is omitted. Hereinafter, exemplary embodiments of the present invention are described in detail with reference to the accompanying drawings.

1 is a configuration diagram of a depth information based stereo/multi-view image matching apparatus according to an exemplary embodiment of the present invention, showing a configuration that uses a multi-view video camera 101 and a depth camera 102.

As shown in FIG. 1, the depth information-based stereo/multi-view image matching apparatus according to the present invention includes: a multi-view video storage unit 110 and a depth information storage unit 120, which receive and store the multi-view image and the reference-view depth information from a multi-view video camera (a three-view camera 101 in this embodiment) and a depth camera 102 serving as the depth information acquisition device for the reference viewpoint; a camera correction unit 130 for extracting the camera information of each viewpoint; a depth information converting unit 140 for converting the depth information into a disparity with respect to the reference viewpoint; a multi-view image correction unit 150, which takes the multi-view image including the reference view as input and performs image rectification that aligns the epipolar lines in order to calculate the disparity map for the reference viewpoint; a matching window selection unit 160 for selecting, using the depth information and the image information, the type of matching window to apply; a disparity search region calculation unit 170 for calculating the disparity search range from the converted disparity; a disparity search unit 180 for searching for the disparity having the highest similarity within the selected search range using the selected window; a disparity selection unit 190 for comparing the disparities of the same point obtained from several viewpoints and selecting the disparity of the stereo image pair having the highest similarity; and a disparity map generation/storage unit 200 for generating a disparity map by calculating the disparity of each pixel with respect to the reference viewpoint and recording it as a digital image. The apparatus may further include three-dimensional information converting means (not shown) for receiving the disparity map and converting it into a point cloud or a three-dimensional model in three-dimensional space using the camera information.

2 is a flowchart illustrating a depth information based stereo / multiview image matching method according to the present invention.

First, a multi-view image and depth information of a reference view are acquired from a multi-view video camera and from a depth camera serving as the depth information acquisition device for the reference view (201), and the obtained multi-view image and depth information are stored (202).

Subsequently, camera correction is performed to extract the camera information of each viewpoint and calculate the fundamental matrix (203), and depth map conversion is performed to convert the depth information into a disparity with respect to the reference viewpoint (204).

Subsequently, in order to calculate a disparity map with respect to the reference view, image rectification is performed so that the epipolar lines of the target images to be matched coincide with those of the reference view image (205), and then matching is performed (206).

Subsequently, a disparity map is generated and stored (207). Additionally, the disparity map may be converted into a point cloud or a three-dimensional model in three-dimensional space using the camera information (208).
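The flow of steps 201 to 208 can be summarized in a small driver sketch; every function and name here is a hypothetical placeholder used only to make the ordering of the flowchart explicit:

```python
def run_matching_pipeline(steps_log):
    """Hypothetical driver mirroring the flowchart of FIG. 2 (steps 201-208).
    Each step records its number so the overall ordering is visible."""
    def step(code, description):
        steps_log.append(code)  # record execution order
        return description
    step(201, "acquire multi-view images and reference-view depth")
    step(202, "store images and depth information")
    step(203, "camera calibration: per-view parameters and fundamental matrix")
    step(204, "convert the depth map to a reference-view disparity")
    step(205, "image rectification along epipolar lines")
    step(206, "stereo matching (window selection + bounded disparity search)")
    step(207, "generate and store the disparity map")
    step(208, "optional: convert the disparity map to 3-D points or a model")
    return steps_log
```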

Hereinafter, the operation principle and detailed function of each component according to a preferred embodiment of the present invention are described in detail with reference to the accompanying drawings, with a focus on their mutual relationships.

As described above with reference to FIG. 1, the depth information-based stereo/multi-view image matching apparatus according to the present invention receives a multi-view image acquired at the same instant from at least two video cameras (multi-view cameras) and depth information about the reference view obtained by a depth camera or the like, and stores them in the multi-view video storage 110 and the depth information storage 120.

The camera correction unit 130 performs camera calibration using the multi-view image received from the multi-view video storage 110 to obtain camera information such as the focal length, together with a fundamental matrix representing the mutual positional relationship between the viewpoints. The obtained camera information and fundamental matrix data are stored in a data storage device or computer memory.

The depth information converting unit 140 converts the depth information received from the depth information storage unit 120 into disparity information on the reference view image, using the camera information (camera internal and external parameters) output from the camera correction unit 130. The conversion uses Equations 1 and 2 below.

disparity_1 = (f * b) / depth

Here, disparity_1 is the converted disparity value, given in actual distance units; f is the focal length of the camera; b is the distance between the reference camera and the target camera (baseline length); and depth is the depth information given in actual distance units.

Since disparity_1 on this scale is a very small value, the pixel-unit disparity finally used to set the search area is calculated from the actual CCD cell size of the camera using Equation 2 below.

disparity_2 = disparity_1 / cell_size

Here, cell_size is the actual cell size of the camera CCD.

The calculated disparity is used to set the search area required for matching the stereo images.
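Equations 1 and 2 combine into a short conversion helper. The parameter values used in the example (8 mm focal length, 10 cm baseline, 10 µm cell size, 2 m depth) are illustrative assumptions, not figures from the patent:

```python
def depth_to_pixel_disparity(depth, f, b, cell_size):
    """Equations 1 and 2: convert a depth value to a pixel-unit disparity.
      disparity_1 = (f * b) / depth          (metric disparity, Equation 1)
      disparity_2 = disparity_1 / cell_size  (pixels, via CCD cell size, Equation 2)
    All inputs are in the same actual distance unit (e.g. metres)."""
    disparity_1 = (f * b) / depth
    return disparity_1 / cell_size
```

Under the assumed values above, a point 2 m away maps to a disparity of 40 pixels, which then centers the bounded search of Equation 3.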

3 is an exemplary diagram of a three-view rectified image produced by the multi-view image corrector according to a preferred embodiment of the present invention.

The multi-view image correction unit 150 performs image rectification, using the fundamental matrix obtained by the camera correction unit 130, so that the epipolar lines of the images acquired by cameras at viewpoints other than the reference viewpoint coincide with the epipolar lines of the reference image and are aligned with the image scan lines. In this way, the disparity search range can be reduced when extracting disparities.

As shown in FIG. 3, it can be seen that the images at each viewpoint share the same focal length after rectification.
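Rectification relies on the epipolar constraint encoded by the fundamental matrix F: the match for a reference-image point x must lie on the line l' = F·x in the target image. A minimal sketch (the matrix in the test is the idealized F of an already-rectified pair, chosen purely for illustration):

```python
import numpy as np

def epipolar_line(F, x, y):
    """Given a fundamental matrix F and a point (x, y) in the reference
    image, return coefficients (a, b, c) of the corresponding epipolar
    line a*u + b*v + c = 0 in the target image.  Assumes (a, b) != 0."""
    l = F @ np.array([x, y, 1.0])
    return l / np.linalg.norm(l[:2])  # normalize so (a, b) is a unit normal
```

For a rectified pair the epipolar line of (x, y) degenerates to the horizontal scan line v = y, which is exactly why rectification lets the matcher search along a single row.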

4 is a flowchart illustrating a process of selecting a matching window frame and setting a search range according to the present invention.

First, the matching window selection unit 160 receives image information about the reference viewpoint image and the target image from the multi-view image correction unit 150, and reference viewpoint depth information from the depth information storage unit 120 (300).

Subsequently, a rectangular center window of constant size is placed over the pixel whose disparity is to be obtained, and the color gradient and the depth gradient of every pixel inside the window are examined to determine whether there is a pixel for which either value is larger than the predetermined threshold Th1 or threshold Th2, respectively (301).

If such a pixel exists, it is determined that occlusion is likely to occur, and multiple adaptive-size windows are used instead of the conventional single fixed-size window (302).

An example of a multiple adaptive window is the multiple window used by Okutomi et al. (M. Okutomi, Y. Katayama, and S. Oka, "A simple stereo algorithm to recover precise object boundaries and smooth surfaces," International Journal of Computer Vision, vol. 47, pp. 261-273, 2002), which can be replaced by other window setting methods as needed.

On the other hand, if no such pixel exists, the disparity search below is performed using the conventional single fixed-size window (303).
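The branch of steps 301 to 303 can be sketched as follows; the function name, array inputs, and threshold values are illustrative stand-ins for the color/depth gradients and Th1/Th2 of the text:

```python
import numpy as np

def choose_window_type(color_grad, depth_grad, th1, th2):
    """Window selection from FIG. 4: if any pixel in the centered window
    has a color gradient above Th1 or a depth gradient above Th2, a depth
    discontinuity (possible occlusion) is assumed and multiple
    adaptive-size windows are used; otherwise a single fixed window."""
    if np.any(color_grad > th1) or np.any(depth_grad > th2):
        return "adaptive"  # step 302: multiple adaptive-size windows
    return "fixed"         # step 303: single fixed-size window
```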

Next, the disparity search region calculation unit 170 determines whether depth information exists, in order to vary the disparity search range according to the presence or absence of input depth information (304).

If depth information exists for a pixel, the disparity search range for that pixel is set as in Equation 3 below, using the shift value dc obtained from the depth information converter 140 (305); if not, a preset full search range is used for that pixel (306). The process of setting the disparity search range is described in detail with reference to FIG. 5.

Subsequently, a matching point having the minimum error or maximum similarity is found using the matching window and disparity search range determined as described above, and the corresponding disparity is calculated. This process is repeated for the other stereo image pairs that include the reference view image, and among them, the disparity of the stereo image pair having the smallest error or the largest similarity is selected (307).

Occluded areas are then detected by performing the matching again with the reference image and the target image swapped (308). Disparities in the detected occluded areas are filled in from the disparities of surrounding pixels through interpolation or the like, and the final disparities obtained are stored in the form of a disparity map. The disparity map may be converted into a 3D point cloud or a 3D model using the camera information (not shown).
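The swapped-image occlusion test of step 308 is commonly implemented as a left-right consistency check; the sketch below assumes integer column-shift disparities, and the function name and tolerance are illustrative:

```python
import numpy as np

def occlusion_mask(disp_left, disp_right, tol=1):
    """Left-right consistency check: a pixel is marked occluded when the
    disparity found with the images swapped does not agree with the
    original disparity within `tol` pixels (or its match falls outside
    the image)."""
    h, w = disp_left.shape
    occluded = np.ones((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = disp_left[y, x]
            xr = x - d  # matching column in the other image
            if 0 <= xr < w and abs(disp_right[y, xr] - d) <= tol:
                occluded[y, x] = False
    return occluded
```

Pixels flagged by the mask would then be filled from neighboring disparities by interpolation, as the text describes.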

5 is an explanatory diagram of a disparity search range setting process using depth information according to a preferred embodiment of the present invention, showing how the search range is set to find, in the left image, the point corresponding to a point p(x_o, y_o) in the reference viewpoint image (middle picture).

The point p'(x', y_o) is obtained by moving from the position p(x_o, y_o) in the left image along the epipolar line (the x direction) by the shift value dc obtained by the depth information converter 140, and the search range is set to extend to the left and right of p'(x', y_o) by error_rate * dc. Expressed as an equation, this gives Equation 3 below.

x' - error_rate * dc < x < x' + error_rate * dc,  x' = x_o + dc
Here, x' is the x coordinate of the image pixel to be matched, shifted by the disparity dc converted by the depth information converter 140; dc is the disparity converted by the depth information converter 140; and error_rate is a variable determined by the reliability of the depth information. For example, if the reliability of the depth information obtained from the depth camera is 80%, as shown in FIG. 5, error_rate has the value 0.2. In addition, a y-direction search region may be set in a similar manner to allow for image rectification error.
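Equation 3 reduces to a few lines of code; the helper name and the sample numbers in the test are illustrative:

```python
def disparity_search_range(x_o, dc, error_rate):
    """Equation 3: center the search at x' = x_o + dc and extend it by
    error_rate * dc on each side.  error_rate reflects the reliability
    of the depth sensor (e.g. 80% reliable depth -> error_rate = 0.2)."""
    x_prime = x_o + dc
    return (x_prime - error_rate * dc, x_prime + error_rate * dc)
```

With a reference pixel at x_o = 100 and a converted disparity of 40 pixels at 80% reliability, the matcher searches only columns 132 to 148 instead of the full scan line, which is the search-range reduction the invention claims.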


As described above, the method of the present invention may be implemented as a program and stored in a computer-readable recording medium (CD-ROM, RAM, ROM, floppy disk, hard disk, magneto-optical disk, etc.). Since this can easily be done by those skilled in the art, it is not described in further detail. The present invention described above admits various substitutions, modifications, and changes by those skilled in the art without departing from its technical spirit, and is not limited by the embodiments described above or by the accompanying drawings.

As described above, the present invention adaptively determines the disparity search range and matching window of stereo matching using depth information about the reference viewpoint obtained by a depth camera or the like, and thus provides a stereo/multi-view image matching apparatus and method that improve the accuracy of the disparity map while minimizing the disparity search range.

In addition, the present invention improves the accuracy of three-dimensional depth or disparity information by adaptively setting the disparity search area and selecting the matching window type using depth information of the reference viewpoint obtained with a depth camera or the like. As a result, it solves the problem of conventional stereo matching techniques, whose accuracy is reduced at depth discontinuities and in occluded areas.

In addition, by providing disparity information with improved accuracy, the present invention can be applied efficiently to 3D modeling or arbitrary-view image generation; in particular, it can reduce image artifacts during arbitrary-view image generation.

Claims (11)

  1. A depth information-based stereo/multi-view image matching apparatus for acquiring 3D image information by creating a disparity map using a multi-view image obtained from a multi-view video camera and depth information of a reference viewpoint obtained from a depth information acquisition device, the apparatus comprising:
    Camera correction means for extracting camera information of each viewpoint using the multi-view image;
    Depth information converting means for converting the depth information into a disparity with respect to the reference viewpoint; And
    Image correction means for receiving the multi-view image and performing image rectification, which aligns an epipolar line of the target image to be matched with the reference viewpoint image, in order to calculate a disparity map for the reference viewpoint.
    A depth information based stereo/multi-view image matching apparatus comprising the above means.
  2. The apparatus of claim 1, further comprising:
    Matching window selection means for selecting a type of matching window using the depth information and the multi-view image information;
    Disparity search region calculation means for calculating a disparity search range using the disparity converted by the depth information converting means;
    Disparity search means for searching for the disparity having the highest similarity within the selected search range using the matching window selected by the matching window selection means;
    Disparity selection means for comparing the disparities of the same point obtained from several viewpoints and selecting the disparity of the stereo image pair having the highest similarity; And
    Disparity map generation/storage means for generating a disparity map by calculating the disparity of each pixel with respect to the reference viewpoint of the multi-view image and recording it as a digital image
    A depth information-based stereo/multi-view image matching apparatus further comprising the above means.
  3. The apparatus of claim 2, further comprising:
    3D information converting means for receiving the disparity map and converting it into a point cloud or a 3D model in 3D space using camera information
    A depth information-based stereo/multi-view image matching apparatus further comprising the above means.
  4. The apparatus of claim 1, wherein
    The depth information converting means,
    The depth information-based stereo/multi-view image matching apparatus, characterized in that the depth information is converted into disparity information using the following [Equation 1] and [Equation 2].
    [Equation 1]
    disparity_1 = (f * b) / depth
    [Equation 2]
    disparity_2 = disparity_1 / cell_size
    (Here, disparity_1 is the converted disparity value, given in actual distance units; f is the focal length of the camera; b is the distance between the reference camera and the target camera (baseline length); depth is the depth information given in actual distance units; cell_size is the actual cell size of the camera CCD; and disparity_2 is the pixel-unit disparity finally used to set the search area.)
  5. The apparatus of claim 2, wherein
    The matching window selection means,
    The depth information based stereo/multi-view image matching apparatus, characterized in that the brightness gradient and the depth gradient of the color image are examined for the pixels within the center window, and multiple adaptive-size windows are used if any pixel has a gradient greater than or equal to the corresponding predetermined threshold, while a single fixed-size window is used otherwise.
  6. The apparatus of claim 2,
    The disparity search range calculation means,
    A depth information-based stereo/multi-view image matching device, characterized in that the disparity search range is calculated based on the depth information when depth information is present, and the disparity search is performed within a preset full disparity search range when it is not.
  7. The apparatus of claim 6,
    A depth information-based stereo/multi-view image matching device, characterized in that, when depth information is present, the disparity search range is calculated using [Equation 3] below.
    [Equation 3]
    x' - error_rate * dc < x < x' + error_rate * dc, where x' = x_o + dc
    (Here, x' is the x-coordinate of the image pixel to be matched, shifted by the disparity information dc converted by the depth information converter 140; dc is the disparity information converted by the depth information converter 140; x_o is the original x-coordinate of the pixel; and error_rate is a variable determined by the reliability of the depth information.)
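A sketch of the range computation in [Equation 3], with a fallback to the preset full search range (per claim 6) when no depth information exists for the pixel; the function signature is illustrative:

```python
def disparity_search_range(x0, dc, error_rate, full_range):
    """Return the (min, max) x-range to search in the target image.

    x0         : original x-coordinate of the reference pixel
    dc         : disparity converted from depth ([Equation 1]/[Equation 2]),
                 or None when no depth information is available
    error_rate : variable reflecting the reliability of the depth data
    full_range : preset (min, max) fallback when depth is absent
    Per [Equation 3]: x' - error_rate*dc < x < x' + error_rate*dc,
    with x' = x0 + dc.
    """
    if dc is None:  # no depth information: search the full preset range
        return full_range
    x_shifted = x0 + dc  # x' = x_o + dc
    return (x_shifted - error_rate * dc, x_shifted + error_rate * dc)
```
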
  8. A depth information-based stereo/multi-view image matching method for acquiring 3D image information by creating a disparity map using a multi-view image acquired from a multi-view video camera and depth information of a reference viewpoint acquired from a depth information acquisition device, the method comprising:
    An input step of receiving the multi-view image and the depth information of the reference viewpoint;
    A camera calibration step of extracting camera information for each viewpoint;
    A depth information conversion step of converting the depth information into a disparity with respect to the reference viewpoint;
    An image rectification step of aligning the epipolar line of a target image to be matched with that of the reference viewpoint image, so that a disparity map with respect to the reference viewpoint can be calculated;
    A matching window selection step of selecting a type of matching window using the depth information and the multi-view image information;
    A disparity search range calculation step of calculating a disparity search range from the converted disparity;
    A search step of finding the disparity having the highest similarity within the calculated search range using the selected window;
    A selection step of comparing the disparities of the same point obtained from several viewpoints and selecting the disparity of the stereo image pair having the highest similarity; and
    A disparity map generation step of calculating the disparity of each pixel to generate a disparity map and record it as a digital image.
  9. The method of claim 8,
    The depth information conversion step,
    A depth information-based stereo/multi-view image matching method, characterized in that the depth information is converted into disparity information using the following [Equation 1] and [Equation 2] together with the camera information.
    [Equation 1]
    disparity_1 = (f * b) / depth
    [Equation 2]
    disparity_2 = disparity_1 / cell_size
    (Here, disparity_1 is the converted disparity value given in actual distance units; f is the focal length of the camera; b is the baseline length between the reference camera and the target camera; depth is the depth information given in actual distance units; cell_size is the actual cell size of the camera CCD; and disparity_2 is the pixel-unit disparity used to finally set the search range.)
  10. The method of claim 8,
    The matching window selection step,
    A depth information-based stereo/multi-view image matching method, characterized in that the brightness variation of the color image and the depth variation of each pixel in the initial center window are examined, a multiple adaptive-size window is used if any pixel has a variation greater than or equal to a predetermined threshold for either quantity, and a single fixed-size window is used otherwise.
  11. The method of claim 8,
    The disparity search range calculation step,
    A depth information-based stereo/multi-view image matching method, characterized in that the disparity search range is calculated using [Equation 3] below.
    [Equation 3]
    x' - error_rate * dc < x < x' + error_rate * dc, where x' = x_o + dc
    (Here, x' is the x-coordinate of the image pixel to be matched, shifted by the disparity information dc converted by the depth information converter 140; dc is the disparity information converted by the depth information converter 140; x_o is the original x-coordinate of the pixel; and error_rate is a variable determined by the reliability of the depth information.)
KR1020050028382A 2004-12-06 2005-04-06 A depth information-based Stereo/Multi-view Stereo Image Matching Apparatus and Method KR100776649B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR20040101773 2004-12-06
KR1020040101773 2004-12-06

Publications (2)

Publication Number Publication Date
KR20060063558A KR20060063558A (en) 2006-06-12
KR100776649B1 true KR100776649B1 (en) 2007-11-19

Family

ID=37159534

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020050028382A KR100776649B1 (en) 2004-12-06 2005-04-06 A depth information-based Stereo/Multi-view Stereo Image Matching Apparatus and Method

Country Status (1)

Country Link
KR (1) KR100776649B1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100953076B1 (en) 2007-12-13 2010-04-13 한국전자통신연구원 Multi-view matching method and device using foreground/background separation
KR101066550B1 * 2008-08-11 2011-09-21 한국전자통신연구원 Method for generating virtual view image and apparatus thereof
KR101086274B1 (en) * 2008-12-05 2011-11-24 한국전자통신연구원 Apparatus and method for extracting depth information
KR101158678B1 (en) * 2009-06-15 2012-06-22 (주)알파캠 Stereoscopic image system and stereoscopic image processing method
CN103379350A (en) * 2012-04-28 2013-10-30 中国科学院深圳先进技术研究院 Virtual viewpoint image post-processing method
US9842400B2 (en) 2015-01-27 2017-12-12 Samsung Electronics Co., Ltd. Method and apparatus for determining disparity

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100795482B1 (en) * 2006-11-23 2008-01-16 광주과학기술원 A method and apparatus for encoding or decoding frames of different views in multiview video using rectification, and a storage medium using the same
KR100931311B1 (en) * 2006-12-04 2009-12-11 한국전자통신연구원 Depth estimation device and its method for maintaining depth continuity between frames
KR100926520B1 (en) * 2006-12-05 2009-11-12 재단법인서울대학교산학협력재단 Apparatus and method of matching binocular/multi-view stereo using foreground/background separation and image segmentation
US8125510B2 (en) 2007-01-30 2012-02-28 Ankur Agarwal Remote workspace sharing
KR100897542B1 (en) * 2007-05-17 2009-05-15 연세대학교 산학협력단 Method and Device for Rectifying Image in Synthesizing Arbitary View Image
KR100891549B1 (en) * 2007-05-22 2009-04-03 광주과학기술원 Method and apparatus for generating depth information supplemented using depth-range camera, and recording medium storing program for performing the method thereof
KR20090055803A (en) * 2007-11-29 2009-06-03 광주과학기술원 Method and apparatus for generating multi-viewpoint depth map, method for generating disparity of multi-viewpoint image
KR100924432B1 (en) * 2007-12-06 2009-10-29 한국전자통신연구원 Apparatus and method for controlling perceived depth of multi-view images
KR101420684B1 (en) 2008-02-13 2014-07-21 삼성전자주식회사 Apparatus and method for matching color image and depth image
KR20100000671A (en) 2008-06-25 2010-01-06 삼성전자주식회사 Method for image processing
KR101004758B1 (en) * 2009-02-17 2011-01-04 경북대학교 산학협력단 Method for multi-perspective image generation using planview
KR101236475B1 (en) * 2009-04-14 2013-02-22 한국전자통신연구원 Apparatus for detecting face and method for estimating distance using the same
KR101626057B1 (en) 2009-11-19 2016-05-31 삼성전자주식회사 Method and device for disparity estimation from three views
KR101598855B1 (en) 2010-05-11 2016-03-14 삼성전자주식회사 Apparatus and Method for 3D video coding
KR101686171B1 (en) * 2010-06-08 2016-12-13 삼성전자주식회사 Apparatus for recognizing location using image and range data and method thereof
KR20120072165A (en) * 2010-12-23 2012-07-03 한국전자통신연구원 Apparatus and method for 3-d effect adaptation
KR101289003B1 (en) * 2011-12-19 2013-07-23 광주과학기술원 Method and Device for Stereo Matching of Images
KR101893788B1 (en) 2012-08-27 2018-08-31 삼성전자주식회사 Apparatus and method of image matching in multi-view camera

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09126738A (en) * 1995-10-27 1997-05-16 Nec Corp Three-dimensional shape measuring device and method therefor



Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
AMND Amendment
E902 Notification of reason for refusal
AMND Amendment
E601 Decision to refuse application
AMND Amendment
J201 Request for trial against refusal decision
B701 Decision to grant
GRNT Written decision to grant
FPAY Annual fee payment (payment date: 20121031; year of fee payment: 6)
FPAY Annual fee payment (payment date: 20131024; year of fee payment: 7)
FPAY Annual fee payment (payment date: 20141027; year of fee payment: 8)

LAPS Lapse due to unpaid annual fee