KR101853269B1 - Apparatus of stitching depth maps for stereo images - Google Patents

Info

Publication number
KR101853269B1
Authority
KR
South Korea
Prior art keywords
depth
depth map
stitching
object
maps
Prior art date
Application number
KR1020170047363A
Other languages
Korean (ko)
Inventor
안재용
정성신
강민수
이주영
우장훈
조재필
Original Assignee
주식회사 씨오티커넥티드
주식회사 엘지유플러스
Priority date
Filing date
Publication date
Application filed by 주식회사 씨오티커넥티드 and 주식회사 엘지유플러스
Priority to KR1020170047363A
Application granted
Publication of KR101853269B1

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 — Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 — Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 — Processing image signals
    • H04N13/128 — Adjusting depth or disparity
    • H04N13/15 — Processing image signals for colour aspects of image signals
    • H04N13/20 — Image signal generators
    • H04N13/204 — Image signal generators using stereoscopic image cameras
    • H04N13/243 — Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H04N13/271 — Image signal generators wherein the generated image signals comprise depth maps or disparity maps

Abstract

An apparatus for stitching depth maps for stereo images comprises a camera calibration unit that corrects the physical properties of a plurality of stereo cameras; a depth map generation unit that receives the stereo images from the stereo cameras and generates depth maps; a depth map adjustment unit that adjusts each depth map in a virtual space; and a depth map stitching unit that determines an object based on a segment present in each depth map and then calculates the depth of the determined object to stitch the depth maps. The apparatus can therefore match segments appearing in an area where depth maps overlap, detect the overlapping shape of the matched segments, and integrate the matched segments into a segment having a single depth value according to the detected overlapping shape.

Description

APPARATUS OF STITCHING DEPTH MAPS FOR STEREO IMAGES

Field of the Invention: The present invention relates to a stereo image matching technique and, more particularly, to a depth map stitching apparatus for stereo images capable of stitching depth maps associated with stereo images in real time.

A three-dimensional image can be acquired through a stereo camera. More specifically, a stereo camera produces left and right two-dimensional images obtained through its binocular lenses. A three-dimensional image can be obtained through image processing on the left and right two-dimensional images, and a depth map can be used in this process.

Korean Patent Registration No. 10-1370785 discloses a method and apparatus for generating a depth map of a stereoscopic image that enables detailed representation of the depth of the image by considering not only vanishing points but also detailed lines within the image.

Korean Patent Laid-Open Publication No. 10-2016-0086802 relates to an apparatus and method for generating a depth map, and to a stereoscopic image conversion apparatus and method using them. More specifically, the depth map generating apparatus includes a feature information extracting unit for extracting characteristic information from an input image, a depth map initialization unit for generating an initial depth map for the input image based on the characteristic information, an FFT unit for performing an FFT on the input image to convert it into a frequency image, and a depth map determining unit for determining a final depth map based on a correlation value obtained using an average value of the depth map.

Korean Patent Registration No. 10-1370785 (Feb. 21, 2014)
Korean Patent Publication No. 10-2016-0086802 (July 20, 2016)

One embodiment of the present invention seeks to provide a depth map stitching apparatus for stereo images that can perform stitching on depth maps associated with stereo images in real time.

An embodiment of the present invention seeks to provide a depth map stitching apparatus for stereo images that can match segments existing within an overlapping region between depth maps based on the depth of each object in the segments and the difference in RGB color distribution of each corresponding object in the stereo images.

An embodiment of the present invention is to provide a depth map stitching apparatus for stereo images capable of detecting overlapping types of matched segments and integrating matched segments so as to have a single depth value for each type.

Among the embodiments, a depth map stitching apparatus for stereo images includes a camera calibration unit for correcting physical characteristics of a plurality of stereo cameras, a depth map generation unit for receiving stereo images from the stereo cameras and generating depth maps, a depth map adjusting unit for adjusting each of the depth maps in a virtual space, and a depth map stitching unit for determining an object based on a segment in each of the depth maps and calculating the depth of the determined object to stitch the depth maps.

The depth map stitching unit may perform segment labeling on the segments to detect the shape of the object.

The depth map stitching unit may adjust the global depth of the object by arranging the detected object in a single space to obtain an approximate value of the depth.

The depth map stitching unit may perform a local optimization on the detected object based on the stereo images to calculate a local depth of the detected object.

The depth map stitching unit may stitch the depth maps by matching and integrating segments in the overlapping area of the depth maps.

The depth map stitching unit may match the segments in the overlapping area based on the depth of each object in the segments and the difference in RGB color distribution of each corresponding object in the corresponding stereo images.

The depth map stitching unit may detect the overlapping type of the matched segments and integrate the matched segments to have a single depth value.

The depth map stitching unit may integrate the depth maps using an average depth value of the background area measured in the corresponding depth maps when there is a background area in which the outline of a specific object is not distinguished at the stitching boundary or in the overlap area.

The disclosed technique may have the following effects. However, this does not mean that a specific embodiment must include all of the following effects or only the following effects, so the scope of the disclosed technique should not be understood as limited thereby.

The depth map stitching apparatus for stereo images according to an embodiment of the present invention can provide a technique for realizing stitching on depth maps associated with stereo images in real time.

The depth map stitching apparatus for stereo images according to an embodiment of the present invention can provide a technique for matching segments existing within an overlapping area between depth maps and integrating them so as to have a single depth value according to the overlapping type of the matched segments.

FIG. 1 is a view illustrating an overall system for generating a virtual viewpoint image synthesized in real time, including a depth map stitching apparatus according to an embodiment of the present invention.
FIG. 2 is a block diagram illustrating the depth map stitching apparatus of FIG. 1.
FIG. 3 is a block diagram illustrating the depth map stitching unit of FIG. 2.
FIG. 4 is a view for explaining an embodiment of a process in which the depth map adjuster shown in FIG. 2 generates a depth map sphere.
FIG. 5 is a view for explaining an embodiment in which the depth map stitching unit of FIG. 2 projects and integrates depth maps arranged in a three-dimensional virtual space onto one projection sphere.
FIG. 6 is a view showing an embodiment of a process in which the segment labeling module of FIG. 3 determines the depth of overlapping objects through a depth quantization process.
FIG. 7 is a view illustrating an embodiment of a process in which the segment matching module of FIG. 3 re-projects segments of each patch onto a three-dimensional virtual space in order to perform segment matching.
FIG. 8 is a view for explaining an embodiment of a process in which the segment matching module of FIG. 3 performs matching between overlapping segments in an overlapping region between depth maps.
FIG. 9 is a flowchart illustrating a depth map stitching process performed in the depth map stitching apparatus shown in FIG. 2.

The description of the present invention is merely an example for structural or functional explanation, and the scope of the present invention should not be construed as being limited by the embodiments described in the text. That is, the embodiments may be variously modified and may have various forms, so the scope of the present invention should be understood to include equivalents capable of realizing the technical idea. Also, since it is not meant that a specific embodiment must include all of the stated objects or effects or only such effects, the scope of the present invention should not be construed as being limited thereby.

Meanwhile, the meaning of the terms described in the present application should be understood as follows.

The terms "first", "second", and the like are intended to distinguish one element from another, and the scope of rights should not be limited by these terms. For example, a first component may be referred to as a second component, and similarly, a second component may also be referred to as a first component.

It is to be understood that when an element is referred to as being "connected" to another element, it may be directly connected to that other element, or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" to another element, no intervening elements are present. Other expressions that describe the relationship between components, such as "between" versus "immediately between" or "neighboring" versus "directly neighboring", should be interpreted in the same way.

Singular expressions should be understood to include plural expressions unless the context clearly indicates otherwise. Terms such as "include" or "have" are intended to specify the presence of the stated features, numbers, steps, operations, elements, components, or combinations thereof, and do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.

In each step, identification codes (e.g., a, b, c, etc.) are used for convenience of explanation; they do not describe the order of the steps, and the steps may occur in an order different from the stated order unless a specific order is clearly described in context. That is, the steps may occur in the stated order, may be performed substantially concurrently, or may be performed in reverse order.

The present invention can be embodied as computer-readable code on a computer-readable recording medium, and the computer-readable recording medium includes all kinds of recording devices that store data readable by a computer system. Examples of the computer-readable recording medium include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage.

All terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs, unless otherwise defined. Commonly used predefined terms should be interpreted consistently with their meanings in the context of the related art, and cannot be interpreted as having an ideal or overly formal meaning unless explicitly defined in the present application.

FIG. 1 is a view illustrating an overall system for generating a virtual viewpoint image synthesized in real time, including a depth map stitching apparatus according to an embodiment of the present invention.

Referring to FIG. 1, a virtual viewpoint image generation system 100 can match stereo images in real time using a depth map stitching technique, and can include a camera manager 110, an image handler 130, a depth map stitching apparatus 150, and a scene image generating apparatus 170.

Here, stitching means that the depth maps of overlapping regions are integrated into one by selecting the optimum depth value among the depth values included in each depth map for the overlapping region. Depth map stitching allows a natural connection between the depth maps from each stereo camera and creates a high-resolution depth map for the entire image area.

A stereo image means a stereoscopic image; more specifically, an image that emphasizes the stereoscopic effect, generated using camera images captured by the left and right lenses of a stereo camera.

The camera manager 110 can capture camera images from a plurality of stereo cameras. Here, the plurality of stereo cameras corresponds to stereo camera devices for photographing a three-dimensional space; each is implemented as a three-dimensional stereo camera that acquires stereoscopic images using a pair of cameras for both eyes, that is, a three-dimensional camera including two photographing lenses capable of simultaneously capturing two images or videos of the same subject.

The camera manager 110 may store the captured images in main memory. The main role of the camera manager 110 is to maintain synchronization between the left and right cameras of each stereo camera and among the multiple stereo cameras. After synchronization, the camera manager 110 may upload a series of camera images to GPU memory. Each camera processing task, from camera image capture to image upload, is assigned to an independent thread, and the camera manager 110 manages thread synchronization and performs the operations of the synchronized cameras.
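The per-camera threading and synchronization described above can be sketched as follows. This is an illustrative model, not the patent's implementation; the class name, the `capture_fn` callback, and the use of a barrier are assumptions standing in for the real camera grab and GPU-upload calls.

```python
import threading
import queue

class CameraManager:
    """Sketch: one independent thread per camera, kept in lockstep with a
    barrier so every camera grabs the same frame index at the same time."""

    def __init__(self, num_cameras, capture_fn, num_frames):
        self.barrier = threading.Barrier(num_cameras)   # all threads meet here each frame
        self.frames = queue.Queue()                     # stand-in for the upload buffer
        self.threads = [
            threading.Thread(target=self._run, args=(i, capture_fn, num_frames))
            for i in range(num_cameras)
        ]

    def _run(self, cam_id, capture_fn, num_frames):
        for frame_idx in range(num_frames):
            self.barrier.wait()                         # synchronize the grab
            self.frames.put((frame_idx, cam_id, capture_fn(cam_id)))

    def start(self):
        for t in self.threads:
            t.start()
        for t in self.threads:
            t.join()
```

A real system would replace `capture_fn` with the camera SDK's grab call and push frames to GPU memory instead of a queue.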

The image handler 130 may adjust the camera images with the correction data after they are uploaded to GPU memory by the camera manager 110, so that they are suitable for calibration and alignment. The main role of the image handler 130 is to properly adjust the stereo images and determine the effective image area for depth map generation.

The depth map stitching apparatus 150 generates a depth map for each stereo camera using the stereo images adjusted by the image handler 130, aligns the depth maps in a three-dimensional virtual space through a depth map transformation, and stitches and integrates them.

The depth map stitching apparatus 150 may perform a calibration to correct physical characteristics of the current camera settings for a plurality of stereo cameras prior to receiving the plurality of stereo images.

The depth map stitching apparatus 150 may generate a depth map through pixel-by-pixel correspondence between the left and right images. The depth map stitching apparatus 150 may use various stereo matching methods, including graph cuts and belief propagation, in the depth map generation process.
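The pixel-by-pixel correspondence between left and right images can be illustrated with a minimal sum-of-absolute-differences block matcher. This is a sketch, not the patent's method: the graph-cut and belief-propagation techniques named above optimize globally over exactly this kind of per-pixel matching cost; the function name and window/disparity parameters are assumptions.

```python
import numpy as np

def block_matching_disparity(left, right, max_disp=8, window=3):
    """Naive SAD stereo matcher: for each left-image pixel, test candidate
    disparities d and keep the one whose right-image patch matches best.
    left, right: 2D grayscale arrays of the rectified stereo pair."""
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [
                np.abs(patch - right[y - half:y + half + 1,
                                     x - d - half:x - d + half + 1]).sum()
                for d in range(max_disp)
            ]
            disp[y, x] = int(np.argmin(costs))   # winner-take-all disparity
    return disp
```

Depth then follows from disparity via the calibrated focal length and baseline, and a global optimizer would smooth this cost volume instead of taking the per-pixel minimum.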

When a depth map is generated by the depth map stitching apparatus 150, the scene image generating apparatus 170 generates a synthetic virtual viewpoint image using the generated depth map. The scene image generating apparatus 170 corresponds to a computing device that can be connected to the plurality of stereo cameras and to the depth map stitching apparatus 150. For example, the scene image generating apparatus 170 may be connected to the plurality of stereo cameras by wire or wirelessly, and to the depth map stitching apparatus 150 through a wireless network.

In one embodiment, the scene image generation apparatus 170 may generate a virtual viewpoint image or an image synthesized in real time by analyzing a scene image or an image generated by a plurality of stereo cameras.

FIG. 2 is a block diagram illustrating the depth map stitching apparatus of FIG. 1.

Referring to FIG. 2, the depth map stitching apparatus 150 includes a camera calibration unit 210, a depth map generating unit 230, a depth map adjusting unit 250, a depth map stitching unit 270, and a control unit 290. Here, the camera calibration unit 210 may be implemented in software, but is not necessarily limited thereto and may also be implemented in hardware. The camera calibration unit 210 can be processed by the CPU module, while the depth map generator 230, the depth map adjuster 250, the depth map stitcher 270, and the controller 290 are processed by the GPU module; the CPU module and the GPU module may process them in parallel.

The camera calibration unit 210 can set the physical characteristics of the cameras before the camera manager 110 captures stereo images from the plurality of stereo cameras. Here, camera calibration refers to adjusting the position and orientation of the cameras; specifically, it is a correction operation for effectively capturing an object by adjusting the rotation and positions of the cameras in each stereo camera pair.

In one embodiment, the camera calibration unit 210 may determine the intrinsic and extrinsic parameters of each stereo camera to extract specific parameters for depth map adjustment. The intrinsic parameters are those affected by the camera's own characteristics, such as the lens distortion factor, the focal length, and the location of the principal point, while the extrinsic parameters are those affected by the geometric relationship between the camera and the outside world. The specific parameters for depth map adjustment may represent information such as the rotation and translation between the cameras in a stereo camera pair, or warping parameter information for stereo rectification (epipolar line alignment).
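The role of these calibrated parameters in recovering depth can be shown with the standard pinhole relation for a rectified stereo pair: depth Z = f·B/d, where the focal length f (in pixels) is an intrinsic parameter and the baseline B comes from the extrinsic rotation/translation between the two cameras. The helper below is a generic illustration, not a function from the patent.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a point from its disparity in a rectified stereo pair.

    disparity_px: horizontal pixel offset between the left and right views
    focal_px:     focal length in pixels (intrinsic parameter)
    baseline_m:   distance between the two optical centres (extrinsic)
    Returns depth in metres.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

For example, with a 1000-pixel focal length and a 10 cm baseline, a 50-pixel disparity corresponds to a depth of 2 m; larger disparities mean nearer objects.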

The depth map generating unit 230 receives stereo images from the plurality of stereo cameras and generates depth maps. When the camera manager 110 captures a camera image, the image handler 130 adjusts it through an image warping step, and the depth map generating unit 230 generates a depth map based on the adjusted image. The depth map can be generated through pixel-by-pixel correspondence between the left and right images.

In one embodiment, the depth map generator 230 may generate a depth map by stereo block matching followed by a boundary-preserving filter, and may apply a graph-cut or belief propagation method to optimize the depth map and improve its quality.

The depth map adjusting unit 250 adjusts each of the depth maps generated by the depth map generating unit 230 in a virtual space through a depth map transformation step. The depth map adjuster 250 may arrange the plurality of depth maps in a virtual space defined by a three-dimensional coordinate system using the specific parameters collected by the camera calibration unit 210. Here, the specific parameters collected by the camera calibration unit 210 may include the rotation and translation between the plurality of stereo cameras.

In one embodiment, the depth map adjuster 250 may define the depth maps in a spherical coordinate system to manage depth information about a scene having a 360-degree viewing angle. In this process, the image coordinate system, defined in Euclidean space, is converted to a spherical coordinate system, and each of the depth maps can be transformed and stored in an equirectangular form in which the distance between two pixels is proportional to the angle between their projection rays. The depth map adjuster 250 may then align the depth maps in the three-dimensional virtual space based on the settings of the plurality of stereo cameras.
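The equirectangular mapping described above, in which pixel distance is proportional to ray angle, can be sketched as follows. The axis conventions and the function name are assumptions for illustration; only the longitude/latitude-to-pixel relationship reflects the text.

```python
import math

def to_equirectangular(x, y, z, width, height):
    """Map a 3D point (camera at the origin) to equirectangular pixel
    coordinates. Longitude theta in [-pi, pi) spans the image width,
    latitude phi in [-pi/2, pi/2] spans the height, so pixel distance is
    proportional to angular distance between projection rays."""
    theta = math.atan2(x, z)               # longitude around the vertical axis
    phi = math.atan2(y, math.hypot(x, z))  # latitude above the x-z plane
    u = (theta / (2 * math.pi) + 0.5) * width
    v = (0.5 - phi / math.pi) * height
    return u, v
```

Under this convention a point straight ahead of the camera lands at the image centre, and a point 90 degrees to the side lands three quarters of the way across the width.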

In one embodiment, the depth map adjuster 250 may arbitrarily set one of the plurality of stereo cameras as the main camera for alignment; the optical center of the main camera may be set as the origin of the three-dimensional virtual space, and the z-axis of the space may be aligned with the main axis of the main camera. Accordingly, the depth map adjuster 250 can generate an image sphere whose center is the origin of the space and whose radius is the focal distance of the main camera. With this setting, the projection plane of the main camera is a plane perpendicular to the main axis at the surface of the sphere, and the projection sphere can provide a guideline for generating a depth map sphere.

In one embodiment, the depth map adjuster 250 may generate a depth map sphere using the depth information of the plurality of depth maps, and may adjust the depth information of each depth map to share one common optical point. This will be described with reference to FIG. 4.

FIG. 4 is a diagram for explaining an embodiment of a process in which the depth map adjuster 250 shown in FIG. 2 generates a depth map sphere. More specifically, the depth map adjuster 250 can generate a depth map sphere through the following steps.

First, the depth map adjuster 250 can re-project the pixels of each depth map into the three-dimensional space in consideration of the pose and optical point of the image surface, and then project them onto the image sphere. In FIG. 4, each pixel on an image plane is re-projected to a three-dimensional point using its stored depth, and that point is then projected onto the corresponding pixel of the image sphere. When the re-projected three-dimensional points are projected onto the image sphere, more than one three-dimensional point may land on the same pixel of the sphere, in which case depth alignment of the three-dimensional points is necessary.

Second, the depth map adjuster 250 may compare the distance between each three-dimensional point and the projection point and select the point with the minimum distance. In FIG. 4, when two three-dimensional points project onto the same pixel of the image sphere, the closer of the two is selected. The depth map adjuster 250 can easily perform this distance alignment by projecting the three-dimensional points sequentially: when a three-dimensional point is to be projected onto a pixel for which the image sphere already holds a depth value, the adjuster compares the stored depth with the depth of the current three-dimensional point and replaces it if the current point is closer.
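The comparison step above is a z-buffer: whenever several re-projected 3D points land on the same sphere pixel, keep the one nearest the sphere centre. The sketch below assumes a hypothetical `project_fn` helper that maps a 3D point to an integer pixel of the sphere image; it is not a function from the patent.

```python
import numpy as np

def zbuffer_merge(points, project_fn, shape):
    """Depth-align re-projected 3D points onto a sphere image.

    points:     iterable of (x, y, z) points in the virtual space
    project_fn: maps a point to an integer (row, col) pixel (assumed helper)
    shape:      (rows, cols) of the sphere image
    Returns a depth buffer; uncovered pixels stay at +inf.
    """
    depth = np.full(shape, np.inf)
    for p in points:
        r, c = project_fn(p)
        d = np.linalg.norm(p)       # distance to the sphere centre (origin)
        if d < depth[r, c]:         # keep only the nearer point
            depth[r, c] = d
    return depth
```

Because each pixel only ever keeps its running minimum, the result is independent of the order in which the points are processed, which is what makes the parallel GPU variant discussed next possible.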

In one embodiment, since sequential processing of the three-dimensional points is not suitable when the projection operation is performed on the GPU, the re-projected three-dimensional points can instead be projected in parallel on a per-pixel basis.

For example, to select the depth value of a given pixel point on the sphere, the depth map adjuster 250 can consider the projection ray of that pixel and its first intersection with the image plane. The projection ray and the intersection always lie in the same half-plane, and any three-dimensional point projected onto the pixel must lie in that half-plane; the half-plane intersects the image plane along an intersection line, and only pixels on that line can have re-projected three-dimensional points lying on the ray. As long as the camera settings are unchanged, the position of the intersection line for each projected pixel of the image and the angle between the intersection line and the projection ray do not change; moreover, the depth value at which a pixel on the intersection line meets the ray is constant and can be calculated from that angle and the distance between the pixel and the intersection point. Accordingly, the depth map adjuster 250 sequentially searches the pixels along the intersection line, starting from the intersection point, until it finds the first pixel whose re-projected three-dimensional point is projected onto the target pixel; the depth value can then be set as the distance between that three-dimensional point and the pixel point minus the radius of the image sphere. In one embodiment, this algorithm eliminates the unnecessary expense of re-projecting pixels that are not displayed in the image area and makes the entire process suitable for parallel processing.

The depth map adjuster 250 may repeat the above operation for each depth map to generate a spherical depth map for each stereo image pair. Here, only the area covered by the original depth map in each spherical depth map has depth information, and this area is defined as a patch. The depth map adjuster 250 may combine the patches in one sphere, so overlaps may occur between the patches; this is caused by the overlapping fields of view of the stereo cameras. Since depth mismatches between objects occur in the overlapping regions of the patches, each object should have a consistent depth for perfect integration. The depth map stitching unit 270 can resolve the overlap between scenes and the depth discrepancy in that area through the following processes.

The depth map stitching unit 270 determines an object based on a segment in each of the depth maps adjusted by the depth map adjusting unit 250, and calculates the depth of the determined object to stitch the depth maps. Here, stitching means that one depth map is generated for the entire image area by selecting an optimal depth value among the depth values included in the depth maps for the overlapping area between them.
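A per-pixel version of this selection can be sketched as follows. This is a deliberately naive stand-in: where the patent selects the "optimal" value via segment matching and integration, the sketch simply keeps the nearest valid depth, with NaN marking pixels a map does not cover.

```python
import numpy as np

def stitch_depth_maps(depth_maps):
    """Merge aligned depth maps into one map over the full image area.

    Outside the overlap each pixel has a single valid value and keeps it;
    inside the overlap, the nearest depth is selected (an illustrative
    policy, not the patent's segment-based choice)."""
    stacked = np.stack(depth_maps)       # shape (n_maps, H, W)
    return np.nanmin(stacked, axis=0)    # nearest valid depth per pixel
```

Choosing the minimum makes the merge order-independent, but it ignores segment identity; the modules described below replace this per-pixel rule with per-object depth agreement.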

In one embodiment, the depth map stitching unit 270 may project and align the depth maps arranged in the three-dimensional virtual space onto one projection sphere. As described earlier, it is important to determine the overlap between adjacent depth maps and to resolve depth mismatches for seamless integration between depth maps. This will be described with reference to FIG. 5.

FIG. 5 is a view for explaining an embodiment in which the depth map stitching unit 270 of FIG. 2 projects and integrates depth maps arranged in a three-dimensional virtual space onto one projection sphere.

More specifically, FIG. 5 shows a case in which overlap occurs between the views of two stereo cameras. In FIG. 5, the shaded area represents the overlapping area between the left and right camera pairs. If an object point is in the overlap region, the object appears in both stereo cameras, and its depth can be measured by each of them. In the process of integrating the aligned depth maps, the depth map stitching unit 270 may detect the objects appearing in the overlapping area of neighboring stereo cameras and adjust their depths so that the detected objects have a consistent depth.

The overlapping areas differ depending on the camera settings in the three-dimensional space. In FIG. 5, if all the objects in the scene were theoretically at infinite distance from the cameras, the overlapping area would be bounded by the left border of one image region, the right border of the other, and the line of pixels whose projection rays are parallel between the two image regions.

The controller 290 controls the overall operation of the depth map stitching apparatus 150 and manages the control flow and data flow among the depth map generator 230, the depth map adjuster 250, and the depth map stitcher 270. In one embodiment, the control unit 290 may operate within the CPU module or the GPU module and may control the operations performed in the depth map stitching apparatus 150 so that they are distributed and parallelized across the CPU module and the GPU module.

FIG. 3 is a block diagram illustrating the depth map stitching unit of FIG. 2.

Referring to FIG. 3, the depth map stitching unit 270 includes a segment labeling module 310, a segment optimization module 330, a segment matching module 350, a segment depth measurement module 370, and a control module 390.

The segment labeling module 310 may label the image segments represented on each depth map to detect object contours. Specifically, labeling means adjusting the depth by approximating the depth values so that a segment lies on one plane. In one embodiment, the segment labeling module 310 may arrange the detected objects in a single space and obtain an approximate depth value to adjust the global depth of each object. Here, the global depth means the depth value of the object adjusted through segment labeling.

In one embodiment, the segment labeling module 310 classifies the objects in the image of each aligned depth map into background objects and foreground objects, and when a re-projected foreground object exists in the overlapping area, labels it as a target object requiring depth adjustment. Here, a foreground object corresponds to an object showing sufficient disparity in the stereo image, and a background object corresponds to an object for which no disparity is observed.

In one embodiment, the segment labeling module 310 performs a depth quantization process for labeling, which exaggerates the depth differences between objects in the depth map and reduces the depth differences within each object, making each object more distinct. This will be described with reference to FIG. 6.

FIG. 6 is a diagram illustrating an embodiment of a process in which the segment labeling module 310 of FIG. 3 determines the depth of overlapping objects through the depth quantization process.

More specifically, the segment labeling module 310 can select a sample depth value for each object through the depth quantization process and compare and analyze the depth information between objects. In one embodiment, the segment labeling module 310 may perform depth quantization by selecting the sample depths in consideration of the depth information of the objects in the current scene; more specifically, it can perform K-means clustering of the depth values based on their histogram and select the average depth value of each resulting cluster as the sample depth value of the corresponding object. Here, K represents the number of clusters and can be preset by the designer.

In one embodiment, the segment labeling module 310 may perform image segmentation by clustering connected pixels having the same depth value after the depth quantization process, thereby defining an object shape.
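The clustering of connected pixels with the same quantized depth value is, in effect, connected-component labeling. The sketch below shows one hedged way this could work (a 4-connected flood fill; the patent does not specify the connectivity or algorithm):

```python
from collections import deque

def label_segments(qdepth):
    """Group 4-connected pixels sharing the same quantised depth value into
    image segments. Input: 2-D list of quantised depths; output: 2-D list
    of segment labels, one label per connected segment."""
    h, w = len(qdepth), len(qdepth[0])
    labels = [[-1] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] != -1:
                continue
            # Flood-fill all reachable pixels with the same depth value.
            val, queue = qdepth[sy][sx], deque([(sy, sx)])
            labels[sy][sx] = next_label
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w \
                            and labels[ny][nx] == -1 and qdepth[ny][nx] == val:
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return labels
```

Each resulting label region defines one object shape in the sense described above.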

The segment optimization module 330 can estimate the depth of each object image segment shown in the current depth map using the original images (left/right images) used for generating the depth map. Since the depth of the object image area on the depth map from the previous step is the result of optimizing over the entire image, optimization is further performed only on the image areas in which segments exist.

In one embodiment, the segment optimization module 330 may perform a local optimization on a detected object based on the stereo images to yield the local depth of the detected object. Here, the local depth represents the specific depth values of the object's detail components within the object image area, i.e., the inner area occupied by the object detected by the segment labeling module 310.

The segment matching module 350 may perform matching between segments that appear on the overlapping depth maps. In one embodiment, the segment matching module 350 may first re-project the segments of each patch onto a three-dimensional virtual space to perform segment matching. This will be described with reference to FIG. 7.

FIG. 7 is a diagram illustrating an embodiment of a process in which the segment matching module 350 in FIG. 3 re-projects the segments of each patch onto a three-dimensional virtual space to perform segment matching. Here, unlike re-projection into a planar depth map, the segments of a patch can be scaled when projected onto the surface of the sphere, and the scaling factor can be proportional to the depth d_s of the segment.
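A minimal sketch of such a re-projection is shown below. The pinhole parametrisation, the yaw-only rotation, and the function name are hypothetical (the patent does not fix a camera model); the point is that a pixel ray maps to spherical angles and the metric footprint of a segment grows in proportion to its depth d_s:

```python
import math

def reproject_to_sphere(u, v, d_s, f, yaw):
    """Re-project a patch pixel (u, v) onto a unit sphere and return its
    azimuth, elevation, and a scaling factor proportional to the segment
    depth d_s. Assumes a pinhole patch camera with focal length f whose
    optical axis is rotated by `yaw` about the vertical axis."""
    # Direction of the pixel ray in the patch camera frame.
    x, y, z = u, v, f
    # Rotate the ray into the common virtual space.
    xr = x * math.cos(yaw) + z * math.sin(yaw)
    zr = -x * math.sin(yaw) + z * math.cos(yaw)
    r = math.sqrt(xr * xr + y * y + zr * zr)
    theta = math.atan2(xr, zr)   # azimuth on the sphere
    phi = math.asin(y / r)       # elevation on the sphere
    scale = d_s / f              # metric size of one pixel at depth d_s
    return theta, phi, scale
```

Segments from different patches can then be compared in the shared (theta, phi) space, with their sizes normalized by the depth-proportional scale.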

In one embodiment, the segment matching module 350 may perform matching between the segments in the overlapping regions of the depth maps using the depth of each object determined per segment and the difference in the RGB color distribution of the images used for depth generation. This will be described with reference to FIG. 8.

FIG. 8 is a view for explaining an embodiment of a process in which the segment matching module in FIG. 3 performs matching between overlapping segments in an overlapping region between depth maps. Referring to FIG. 8, in Case 2, the centers of gravity and the average RGB values of s1.2 and s2 are used to measure the similarity between segments s1 and s2. Here, the degree of similarity indicates the degree of correspondence between the depth value on the refined depth map and the depth value on the raw depth map. Similarly, in Case 3, the similarity between s1.2 and s2.1 is measured. The segment matching module 350 performs matching between the segments having the greatest similarity.
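The two cues named for FIG. 8, the center of gravity and the average RGB color, can be combined into a simple matching score. The weighting, the dictionary segment representation, and the greedy pairing are illustrative assumptions; the patent only states that the segments of greatest similarity are matched:

```python
import numpy as np

def segment_similarity(seg_a, seg_b, w_pos=1.0, w_rgb=1.0):
    """Score two overlapping segments by combining the distance between
    their centers of gravity with the difference of their average RGB
    colors. Each segment is a dict with 'pixels' (Nx2 coordinates) and
    'rgb' (Nx3 colors). Smaller scores mean more similar segments."""
    ca = np.mean(np.asarray(seg_a['pixels'], float), axis=0)
    cb = np.mean(np.asarray(seg_b['pixels'], float), axis=0)
    ra = np.mean(np.asarray(seg_a['rgb'], float), axis=0)
    rb = np.mean(np.asarray(seg_b['rgb'], float), axis=0)
    return w_pos * np.linalg.norm(ca - cb) + w_rgb * np.linalg.norm(ra - rb)

def match_segments(segs_a, segs_b):
    """For each segment in segs_a, pick the most similar partner in
    segs_b (greedy nearest-score matching)."""
    return [min(range(len(segs_b)),
                key=lambda j: segment_similarity(a, segs_b[j]))
            for a in segs_a]
```

A segment close in both position and color thus wins the match, mirroring how s1.2 is compared against s2 (or s2.1) in the figure.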

The segment depth measurement module 370 may detect the overlapping types of the segments matched by the segment matching module 350 and may merge the segments according to the detected overlapping type. The integration of the segments means that the optimal depth value is selected from among the depth values of the segments appearing in the overlapping area between the respective depth maps.

In one embodiment, the overlapping types of the segments matched by the segment matching module 350 may include a type in which the segment boundaries lie entirely within the overlapping region, a type in which they extend over the overlapping region and one depth map, and a type in which they extend over the overlapping region and both depth maps.

In one embodiment, the segment depth measurement module 370 may detect the overlapping type of the matched segments and integrate them so as to have a single depth value. Specifically, when the segment boundaries lie entirely within the overlapping region, the depth value of the most reliable segment among the segments on both depth maps is selected as the depth value of the final segment. Here, the reliability means the degree to which the depth value on the refined depth map matches the depth value on the original depth map. When the segment boundaries extend over the overlapping region and one depth map, the depth value of the final segment is estimated based on the reliability of the segments on the respective depth maps and the size ratio of the matched segments. When the segment boundaries extend over the overlapping region and both depth maps, the depth value of the final segment is estimated based on the reliability of the segments on the respective depth maps and the size ratio, within the overlapping region, of the matched segments.
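The case analysis above can be sketched as a small decision function. The field names, the overlap-type strings, and the reliability-times-size weighting for the spanning cases are assumptions; the patent states only that reliability and size ratios are the basis of the estimate:

```python
def integrate_depth(overlap_type, seg_a, seg_b):
    """Pick a single final depth for a matched segment pair according to
    its overlap type. Each segment dict carries 'depth', 'reliability'
    (agreement of refined vs. original depth map) and 'size' (pixels)."""
    if overlap_type == 'inside_overlap':
        # Case 1: boundary fully inside the overlap region ->
        # take the depth of the more reliable segment.
        best = seg_a if seg_a['reliability'] >= seg_b['reliability'] else seg_b
        return best['depth']
    # Cases 2 and 3: the boundary extends into one or both depth maps ->
    # weight each candidate depth by reliability and size ratio.
    wa = seg_a['reliability'] * seg_a['size']
    wb = seg_b['reliability'] * seg_b['size']
    return (wa * seg_a['depth'] + wb * seg_b['depth']) / (wa + wb)
```

For Case 1 the more reliable depth simply wins; for the spanning cases the larger and more reliable segment dominates the weighted estimate.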

In one embodiment, when there is an area in the stitching boundary or in the overlapping area of the depth maps where no object appearance is determined, the segment depth measurement module 370 defines that area as background and can integrate the depth maps by calculating its depth value as the average of the depth values measured in the respective depth maps.
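The background handling reduces to a per-pixel average over the overlapping maps, as in this minimal sketch (the list-of-2-D-lists representation is an assumption for illustration):

```python
def background_depth(overlap_depths):
    """Integrate the background of an overlap region by averaging, per
    pixel, the depth that each overlapping depth map measured there.
    `overlap_depths` is a list of equally sized 2-D depth grids."""
    n = len(overlap_depths)
    h, w = len(overlap_depths[0]), len(overlap_depths[0][0])
    return [[sum(m[y][x] for m in overlap_depths) / n for x in range(w)]
            for y in range(h)]
```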

The control module 390 controls the overall operation of the depth map stitching unit 270 and can manage the control flow or data flow among the segment labeling module 310, the segment optimization module 330, the segment matching module 350, and the segment depth measurement module 370.

FIG. 9 is a flowchart illustrating a depth map stitching process performed in the depth map stitching apparatus shown in FIG. 2.

Referring to FIG. 9, the camera calibration unit 210 may recognize and analyze the physical characteristics of the stereo cameras and calibrate the camera settings before capturing the camera images from the plurality of stereo cameras (step S910).

The depth map generating unit 230 may receive a stereo image from a plurality of stereo cameras to generate a depth map (step S920). The depth map adjusting unit 250 may adjust each of the depth maps generated by the depth map generating unit 230 on the virtual space through the depth map converting step (step S930).

The depth map stitching unit 270 may determine an object based on a segment in each of the depth maps adjusted by the depth map adjuster 250 and then stitch depth maps by calculating the depth of the determined object.

Specifically, the segment labeling module 310 in the depth map stitching unit 270 may perform labeling of image segments appearing on each depth map to determine an object contour (step S940). Determining the object shape (contour) here means determining which objects the image segments represented on each depth map represent.

The segment optimization module 330 in the depth map stitching unit 270 can estimate the depth of the object image areas shown in the current depth map by using the original images used for generating the depth map (step S950). Since the depth of the object image area on the depth map from the previous step is the result of optimizing over the entire image, in the current step optimization is further performed on the image areas in which segments exist to achieve depth optimization for the entire image.

The segment matching module 350 in the depth map stitching unit 270 may perform matching between the segments displayed on the overlapping depth maps (step S960). In one embodiment, the segment matching module 350 may perform matching between the segments in the overlapping regions of the depth maps using the depth of each object determined per segment and the difference in the RGB color distribution of the images used for depth generation.

The segment depth measurement module 370 in the depth map stitching unit 270 detects the overlapping type of the segments matched by the segment matching module 350 and merges the matched segments into a segment having a single depth value according to the detected overlapping type (step S970).

The overlapping type of the segments matched by the segment matching module 350 may be a type in which the segment boundary lies within the overlapping region, a type in which it extends over the overlapping region and one depth map, or a type in which it extends over the overlapping region and both depth maps.

In one embodiment, the segment depth measurement module 370 may integrate the overlapping depth maps according to the overlapping type of the matched segments. In one embodiment, when there is an area in the stitching boundary or in the overlapping area of the depth maps where no object appearance is determined, the segment depth measurement module 370 defines that area as background and can integrate the depth maps by calculating its depth value as the average of the depth values measured in the respective depth maps.

It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the present invention as defined by the following claims.

100: Real-time synthesized virtual point image generation system
110: camera manager 130: image handler
150: depth map stitching apparatus 170: scene image generating apparatus
210: camera calibration unit 230: depth map generating unit
250: depth map adjusting unit 270: depth map stitching unit
290: control unit 310: segment labeling module
330: Segment optimization module 350: Segment matching module
370: Segment Depth Measurement Module 390: Control Module

Claims (8)

  1. A camera calibration unit for calibrating physical characteristics of a plurality of stereo cameras;
    A depth map generator for receiving stereo images from the stereo cameras and generating depth maps;
    A depth map adjuster for adjusting each of the depth maps on a virtual space; And
    And a depth map stitching unit for stitching the depth maps by determining an object based on a segment in each of the depth maps and calculating a depth of the determined object,
    wherein the depth map stitching unit stitches the depth maps by matching the segments in the overlap area based on the depth of each object in the segments and the difference in the RGB color distribution of each corresponding object in the stereo images.
  2. The apparatus of claim 1, wherein the depth map stitching unit detects the shape of an object by performing segment labeling on the segments.
  3. The apparatus according to claim 2, wherein the depth map stitching unit arranges the detected object in a single space to obtain an approximate value of depth to adjust a global depth of the object.
  4. The apparatus according to claim 3, wherein the depth map stitching unit calculates the local depth of the detected object by performing a local optimization to obtain specific depth values for the detail components within an image area of the detected object based on the stereo images.
  5. delete
  6. delete
  7. The apparatus of claim 1, wherein the depth map stitching unit detects the overlapping type of the matched segments and consolidates the matched segments so as to have a single depth value.
  8. The apparatus of claim 1, wherein, when a background area in which the appearance of a specific object is not determined exists in the stitching boundary or the overlapping area, the depth map stitching unit integrates the depth maps using the average depth value of the background area measured in the depth maps.
KR1020170047363A 2017-04-12 2017-04-12 Apparatus of stitching depth maps for stereo images KR101853269B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020170047363A KR101853269B1 (en) 2017-04-12 2017-04-12 Apparatus of stitching depth maps for stereo images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020170047363A KR101853269B1 (en) 2017-04-12 2017-04-12 Apparatus of stitching depth maps for stereo images

Publications (1)

Publication Number Publication Date
KR101853269B1 true KR101853269B1 (en) 2018-06-14

Family

ID=62629273

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020170047363A KR101853269B1 (en) 2017-04-12 2017-04-12 Apparatus of stitching depth maps for stereo images

Country Status (1)

Country Link
KR (1) KR101853269B1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090080556A (en) * 2006-12-22 2009-07-24 퀄컴 인코포레이티드 Complexity-adaptive 2d-to-3d video sequence conversion
JP2013074473A (en) * 2011-09-28 2013-04-22 Panasonic Corp Panorama imaging apparatus
KR101370785B1 (en) 2012-11-06 2014-03-06 한국과학기술원 Apparatus and method for generating depth map of streoscopic image
WO2014055239A1 (en) * 2012-10-01 2014-04-10 Microsoft Corporation Multi-camera depth imaging
KR20160086802A (en) 2016-07-11 2016-07-20 에스케이플래닛 주식회사 Apparatus and Method for generating Depth Map, stereo-scopic image conversion apparatus and method usig that

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090080556A (en) * 2006-12-22 2009-07-24 퀄컴 인코포레이티드 Complexity-adaptive 2d-to-3d video sequence conversion
JP2013074473A (en) * 2011-09-28 2013-04-22 Panasonic Corp Panorama imaging apparatus
WO2014055239A1 (en) * 2012-10-01 2014-04-10 Microsoft Corporation Multi-camera depth imaging
KR101370785B1 (en) 2012-11-06 2014-03-06 한국과학기술원 Apparatus and method for generating depth map of streoscopic image
KR20160086802A (en) 2016-07-11 2016-07-20 에스케이플래닛 주식회사 Apparatus and Method for generating Depth Map, stereo-scopic image conversion apparatus and method usig that

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Semi-Global Depth Estimation Algorithm for Mobile 3-D Video Applications, TSINGHUA SCIENCE AND TECHNOLOGY Volume 17, Number 2, April 2012* *

Similar Documents

Publication Publication Date Title
US20190014307A1 (en) Methods, systems, and computer-readable storage media for generating three-dimensional (3d) images of a scene
Anderson et al. Jump: virtual reality video
US8867827B2 (en) Systems and methods for 2D image and spatial data capture for 3D stereo imaging
JP2016018213A (en) Hmd calibration with direct geometric modeling
TWI554103B (en) Image capturing device and digital zooming method thereof
JP6273163B2 (en) Stereoscopic panorama
JP2018515825A (en) LIDAR stereo fusion live-action 3D model virtual reality video
US9438878B2 (en) Method of converting 2D video to 3D video using 3D object models
KR20170063827A (en) Systems and methods for dynamic calibration of array cameras
JP5342036B2 (en) Method for capturing 3D surface shapes
EP2678824B1 (en) Determining model parameters based on transforming a model of an object
US9214013B2 (en) Systems and methods for correcting user identified artifacts in light field images
US20160295108A1 (en) System and method for panoramic imaging
JP5392415B2 (en) Stereo image generation apparatus, stereo image generation method, and computer program for stereo image generation
US9697607B2 (en) Method of estimating imaging device parameters
US10089737B2 (en) 3D corrected imaging
US9241147B2 (en) External depth map transformation method for conversion of two-dimensional images to stereoscopic images
JP2015525407A (en) image fusion method and apparatus
US9451236B2 (en) Apparatus for synthesizing three-dimensional images to visualize surroundings of vehicle and method thereof
US20130335535A1 (en) Digital 3d camera using periodic illumination
US8508580B2 (en) Methods, systems, and computer-readable storage media for creating three-dimensional (3D) images of a scene
JP6295645B2 (en) Object detection method and object detection apparatus
EP2383699B1 (en) Method for estimating a pose of an articulated object model
US8928734B2 (en) Method and system for free-view relighting of dynamic scene based on photometric stereo
US9898856B2 (en) Systems and methods for depth-assisted perspective distortion correction

Legal Events

Date Code Title Description
GRNT Written decision to grant