WO2016072559A1 - 3D content production method and system - Google Patents

3D content production method and system

Info

Publication number
WO2016072559A1
WO2016072559A1 PCT/KR2015/000974 KR2015000974W WO2016072559A1 WO 2016072559 A1 WO2016072559 A1 WO 2016072559A1 KR 2015000974 W KR2015000974 W KR 2015000974W WO 2016072559 A1 WO2016072559 A1 WO 2016072559A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
depth map
motion vector
smoothing
global motion
Prior art date
Application number
PCT/KR2015/000974
Other languages
English (en)
French (fr)
Inventor
조충상
고민수
신화선
강주형
Original Assignee
전자부품연구원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020140153108A external-priority patent/KR101709974B1/ko
Priority claimed from KR1020140165076A external-priority patent/KR20160062771A/ko
Priority claimed from KR1020150003125A external-priority patent/KR20160086432A/ko
Application filed by 전자부품연구원 filed Critical 전자부품연구원
Publication of WO2016072559A1 publication Critical patent/WO2016072559A1/ko


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof

Definitions

  • the present invention relates to image processing, and more particularly, to image processing methods necessary for generating, modifying, and utilizing depth maps required for converting 2D image contents to 3D image contents.
  • [3] 3D image conversion technology converts existing 2D video content, shot in analog or digital form, into 3D video content.
  • 3D video conversion work is also very labor intensive: precise conversion requires manual processing of every frame of the video, which demands considerable manpower and long working hours. The higher the desired quality of the 3D video, the more demanding this work becomes.
  • [6] 3D conversion of a 2D video is performed by first analyzing the image information, separating objects from the background and the foreground from the rear view, and then assigning depth (stereoscopic) values to each object and to the background to produce a stereoscopic image.
  • auto-converting is inexpensive because the conversion is performed automatically by a device or software, but the stereoscopic quality is very low.
  • the fully manual method yields excellent stereoscopic quality, but it requires a great deal of labor, time, and money.
  • the auto-converting method is expected to gradually disappear, being only a temporary expedient for coping with the shortage of 3D video content.
  • high-quality conversion requires the development of various efficient technologies that retain the advantages of the manual method while overcoming its shortcomings.
  • the most common image smoothing algorithm uses the cost function presented in Equation 1 below.
  • the present invention has been made to solve the above problems, and an object of the present invention is to provide a method and apparatus for generating contour lines from a depth map having continuous depth values in the process of converting 2D image content into 3D image content.
  • another object of the present invention is to provide a method and system for more accurately and automatically calculating motion vectors of points included in an image in the process of converting 2D image content into 3D image content.
  • another object of the present invention is to provide an image smoothing method and apparatus that performs well at edge regions.
  • the depth values of the depth map may be linearly or non-linearly quantized.
  • the depth map contour line generation method according to an embodiment of the present invention may further include removing feathers present at boundary portions of the quantized depth map.
  • the depth map contour line generation method may further include: setting, in the quantized depth map, a region whose size is smaller than a threshold size as a feather region; and replacing the feather region with the depth value of the neighboring region whose depth value differs from it the least.
  • the depth map contour line generation method may further include: extracting bending points of the contour line; and converting the curves between the bending points into curves of another type.
  • a motion vector calculation method includes: extracting motion vectors of points included in a specific region of a frame; hierarchically dividing the specific region; extracting global motion vectors for the specific region and for the regions hierarchically divided from it; and correcting the motion vectors using the global motion vectors.
  • the motion vector extraction step may include: setting a block centered on a point of the region; setting a block having the same position and size in the next frame; measuring the correlation between the blocks; and selecting the coordinate having the largest correlation value and shifting it according to the block size to obtain the motion vector.
  • the global motion vector calculation step may include: extracting global motion vectors for the specific region and for the regions hierarchically divided from it; and correcting the extracted global motion vectors.
  • in the global motion vector correction step, when the direction difference between a global motion vector and the global motion vector of the upper-layer region containing it exceeds a threshold, the global motion vector may be corrected by a weighted sum with the global motion vector of the upper-layer region.
  • in the motion vector correction step, when the direction difference between a motion vector and the global motion vector of the lowest-layer region containing it exceeds a threshold, the motion vector may be corrected by a weighted sum with the global motion vector of the lowest-layer region.
  • an image smoothing method includes: receiving a quantized image; smoothing the input image; and outputting the smoothed image. The smoothing step smooths the input image using a cost function that includes a term reflecting second-order partial derivatives of the smoothed image.
  • the cost function may include a term reflecting the second-order partial derivative of the smoothed image with respect to x and the second-order partial derivative of the smoothed image with respect to y.
  • the cost function may further include a term reflecting the mixed partial derivative of the smoothed image with respect to x and y.
  • the cost function may further include a term reflecting the first-order partial derivatives of the smoothed image with respect to x and y.
  • the weights of the terms can be set by user input.
  • FIG. 2 is a diagram showing the result of smoothing a quantized version of the original image of FIG. 1 by an existing method;
  • FIG. 3 is a flowchart provided to describe a 3D content production method to which the present invention is applicable;
  • FIG. 4 is a diagram provided as a supplementary explanation of FIG. 3;
  • FIG. 5 is a flowchart provided to describe a depth map contour line generation method according to an embodiment of the present invention;
  • FIG. 6 is a diagram illustrating a depth map linear quantization result;
  • FIG. 8 is a diagram showing a feather of a quantized depth map
  • FIG. 9 shows a feather region extracted by using a labeling technique on a quantized depth map.
  • FIG. 10 is a diagram provided for explaining a technique of removing a feather area
  • FIG. 11 illustrates a quantized depth map from which a feather area has been removed
  • 12 is a diagram illustrating a mask generated for each quantization step in a quantized depth map
  • FIG. 13 is a diagram showing the contour line of each quantization step;
  • FIG. 15 is a diagram showing a method of calculating the degree of bending within a contour line;
  • FIG. 16 is a diagram showing the result of extracting final bending points from the bending point candidates;
  • FIG. 17 is a diagram showing a depth map whose contour lines have been regenerated using Bezier curves;
  • FIG. 18 illustrates a method for calculating a motion vector according to another embodiment of the present invention.
  • Fig. 21 is a drawing for explaining the hierarchical region division and global motion vector extraction process in detail.
  • FIG. 22 is a diagram provided for explaining the global motion vector correction process in detail;
  • FIG. 23 is a diagram provided for explaining the motion vector correction process in detail;
  • 24 and 25 are diagrams showing a region tracking simulation result using a motion vector calculation result according to an embodiment of the present invention.
  • FIG. 26 is a diagram showing the result of smoothing the quantized version of the original image of FIG. 1 by the proposed method;
  • 27 is a diagram showing the PSNR dB measurement results for 38 files
  • FIG. 28 is a view provided for explaining a weighting method of a cost function
  • FIG. 29 is a diagram of a 3D content production system according to another embodiment of the present invention.
  • the illustrated 3D content production method is a process of generating 3D content from 2D content.
  • the 2D video content is first received (S105) and divided into shot units (S110).
  • step S110 the 2D video content is divided into a number of shots.
  • the frames included in the same shot have the same / similar background.
  • Shot segmentation in step S110 may be performed manually by a professional artist, or may be automatically performed by using a program that divides similar frames into shot units.
  • shot selection in step S115 may be performed automatically in the temporal order of the shots in the 2D video content (i.e., the first shot is selected first, then the next shot), or the selection order may be determined by an expert artist. In the following, it is assumed that shots are selected in temporal order.
  • in step S120, the temporally earliest frame of the shot (i.e., the first frame) can be automatically selected as the key-frame, or an expert artist may select another frame based on his or her judgment. In the following, it is assumed that the key-frame is the first frame of the shot.
  • next, in step S125, a depth map of the key-frame selected in step S120 is generated.
  • depth map generation in step S125 can be performed automatically by a program or manually by a professional artist.
  • in step S130, the depth map contour lines are generated by quantizing the depth map into a plurality of quantization steps using a program.
  • next, the contour lines generated in step S130 are matched to the next frame (S135). If there is a moving object or the angle of view has shifted, the next frame differs from the previous frame (the key-frame), so the contour lines of the key-frame do not exactly match the next frame.
  • accordingly, the contour lines are partially moved (match-moved) so that they completely match the next frame (S140). The contour line movement processing in step S140 is also performed automatically; as a result, contour lines that completely match the next frame are obtained.
  • the depth map of the next frame is then generated based on the contour lines match-moved in step S140 (S145).
  • the depth map of the next frame generated in step S145 is still in a quantized state.
  • the depth map is interpolated to complete the depth map of the next frame.
  • the depth map interpolation process in step S150 is also automatically performed using the program.
  • steps S135 to S150 are repeated until they have been completed for all frames constituting the shot (S155).
  • the frames constituting the 2D video content are 3D converted using the depth maps generated so far (S165).
  • the result of the conversion process is output as 3D image content (S170).
  • FIG. 4 is a view provided in the description of FIG. 3.
  • FIG. 5 is a flowchart provided to explain a depth map contour line generation method according to an embodiment of the present invention.
  • as shown in FIG. 5, a depth map of a key-frame is first input (S130-1), and linear quantization (S130-2) or non-linear quantization (S130-3) is performed on the input depth map.
  • the depth values of the depth map input in step S130-1 are continuous, whereas after step S130-2 or S130-3 the depth values are quantized into a plurality of quantization steps.
  • the number of quantization steps applied in step S130-2 or S130-3 can be specified according to needs and specifications.
  • thereafter, feathers at the boundary portions of the quantized depth map are removed (S130-4), and contour lines are extracted from the feather-removed quantized depth map (S130-5). The contour line extracted in step S130-5 is a line connecting pixels having the same depth value in the depth map.
  • next, bending points are extracted from the contour lines of the depth map (S130-6), and the curves between the bending points are converted into Bezier curves (S130-7) to complete the depth map contour lines.
  • the bending points extracted in step S130-6 are points where the change of direction along the contour line is large.
  • the conversion in step S130-7 computes the intermediate points at the 1/4 and 3/4 positions between two bending points, calculates the control points of a cubic Bezier curve using these four points in total, and converts the curve into a Bezier curve using the calculated control points.
  • the steps of FIG. 5 will now be described in more detail.
  • key-frame depth map linear quantization is a technique that quantizes the depth map by dividing the range of its depth values into intervals of equal size according to the number of quantization steps set by the user, and can be expressed by Equation 2 below.
  • here, x denotes a depth value of the input key-frame depth map; X_Max and X_Min denote the maximum and minimum depth values in the depth map, respectively; step denotes the number of quantization steps; Δ denotes the size of a quantization step; and Q(x) denotes the quantized depth value.
  • as Equation 2 expresses, the size of a quantization step is obtained by finding the maximum and minimum depth values in the depth map and dividing their difference by the number of quantization steps set by the user.
  • using Equation 2, a quantized depth map is obtained in which the range of depth values the input key-frame depth map can take is divided into quantization steps of equal size.
  • FIG. 6 shows a quantization depth map obtained through the linear quantization technique and a histogram.
  • (a) shows the input key-frame depth map,
  • (b) is a linearly quantized depth map with the size of the quantization step set to 10, and
  • (c) is a linear quantized depth map with the size of the quantization step set to 20.
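  • As an illustration (not part of the original disclosure), the sketch below implements the linear quantization idea of Equation 2 in Python with NumPy. Because Equation 2 itself is only available as an image in the source, the exact rounding rule of Q(x) is an assumption here; the lower edge of each step is used as its representative value.

```python
import numpy as np

def linear_quantize_depth(depth, num_steps):
    """Linearly quantize a depth map into `num_steps` equal-width steps.

    Sketch of the Equation 2 idea: the step size Delta is
    (X_Max - X_Min) / step, and each depth value is replaced by a
    representative value of the step it falls into (assumption: the
    lower step edge is used as the representative value).
    """
    x_min, x_max = float(depth.min()), float(depth.max())
    delta = (x_max - x_min) / num_steps              # quantization step size
    # Index of the step each pixel belongs to (clip so X_Max stays in range).
    idx = np.clip(((depth - x_min) / delta).astype(int), 0, num_steps - 1)
    return x_min + idx * delta

# Example: quantize a synthetic 8-bit depth map into 10 steps.
depth = np.random.randint(0, 256, (120, 160)).astype(np.float32)
q = linear_quantize_depth(depth, 10)
print(np.unique(q).size)   # at most 10 distinct depth values remain
```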
  • Nonlinear quantization is a method of quantizing a key-frame depth map by calculating the size of each quantization step optimized to the key-frame depth map according to the number of quantization steps set by the user. .
  • K-means clustering can be used to compute the optimized quantization step sizes; Equation 3 below expresses key-frame depth map non-linear quantization using the K-means clustering technique.
  • here, k denotes the number of quantization steps, μ_i denotes the mean value of the i-th quantization step, S_i denotes the set of pixels belonging to the i-th quantization step, x_j denotes the depth value of a pixel belonging to that set, and V denotes the total variance.
  • in key-frame depth map non-linear quantization, the depth range is first divided equally according to the number of quantization steps set by the user, and the center value of each step is taken as its initial mean. Each pixel of the depth map is compared with these means and assigned to the quantization step whose mean is closest; once the entire depth map has been partitioned, the mean of the set belonging to each quantization step is recomputed and used to update that step's mean. This process is repeated until the means no longer change or the total variance no longer decreases. When the process is complete, the pixels in each quantization step's set are replaced by the mean of that set, which performs the non-linear quantization.
  • FIG. 7 shows a quantization depth map obtained through the nonlinear quantization technique and a histogram.
  • (a) is an input key-frame depth map
  • (b) is a non-linearly quantized depth map with the size of the quantization step set to 10.
  • (c) is a nonlinear quantized depth map by setting the size of the quantization step to 20.
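  • A minimal sketch of the K-means-based non-linear quantization described above (Equation 3) is given below, assuming Python/NumPy; the convergence tolerance and the iteration cap are illustrative parameters, not values taken from the source.

```python
import numpy as np

def kmeans_quantize_depth(depth, num_steps, max_iter=50, tol=1e-4):
    """Non-linear depth quantization via 1-D K-means (sketch of Equation 3).

    Initial means are the centers of equal-width intervals; pixels are
    assigned to the nearest mean, the means are recomputed, and iteration
    stops when the means stop changing. Pixels are then replaced by the
    mean of their cluster.
    """
    x = depth.ravel().astype(np.float64)
    x_min, x_max = x.min(), x.max()
    width = (x_max - x_min) / num_steps
    means = x_min + width * (np.arange(num_steps) + 0.5)    # interval centers

    for _ in range(max_iter):
        labels = np.argmin(np.abs(x[:, None] - means[None, :]), axis=1)
        new_means = means.copy()
        for i in range(num_steps):
            members = x[labels == i]
            if members.size:
                new_means[i] = members.mean()
        if np.max(np.abs(new_means - means)) < tol:
            means = new_means
            break
        means = new_means

    labels = np.argmin(np.abs(x[:, None] - means[None, :]), axis=1)
    return means[labels].reshape(depth.shape)
```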
  • the boundary of the original depth map contains feather regions that provide smooth transitions.
  • in the quantized depth map generated by the linear or non-linear quantization technique, these feathers appear as small regions at the boundary portions.
  • the feather of the quantized depth map is shown in FIG. 8.
  • the original depth map is shown in (a) of FIG. 8
  • the quantized depth map is shown in (b), respectively.
  • to extract the feather regions from the quantized depth map, a labeling technique is used. Labeling separates each group of connected pixels having the same value into a single region of the image.
  • when the quantized depth map is divided into regions by labeling, each feather is separated into a small region of its own, because its depth value differs from the surrounding depth values.
  • unlike non-feather regions, a feather region is very small, so the number of pixels belonging to it is very small. Therefore, when the number of pixels in a region separated by the labeling technique is smaller than a certain threshold, that region is extracted as a feather region.
  • FIG. 9 shows an example of extracting a feather region by using a labeling method on a quantized depth map.
  • FIG. 9(a) shows the result of extracting feather regions by labeling, and FIG. 9(b) shows an enlarged view of a feather region.
  • FIG. 10 is a diagram provided for the description of a technique for removing a feather area.
  • in FIG. 10, region A is a feather region, and regions B and C are non-feather regions.
  • to remove a feather region, the depth values of the pixels neighboring the feather region (region A) are examined, and the difference between each neighboring depth value and the depth value of the feather region is computed.
  • the depth value of the feather region is then replaced with the neighboring depth value having the smallest difference, which removes the feather region.
  • FIG. 11 illustrates a quantized depth map in which a feather area is removed by this technique.
  • FIG. 11 (a) shows a quantized depth map without removing the feather area
  • FIG. 11 (b) shows a quantized depth map with the feather area removed.
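  • The sketch below illustrates the labeling-based feather extraction and removal described above. It uses scipy.ndimage for connected-component labeling; the library choice and the `min_pixels` threshold are assumptions, since the source only states that the pixel-count threshold exists.

```python
import numpy as np
from scipy import ndimage

def remove_feathers(qdepth, min_pixels=30):
    """Remove small 'feather' regions from a quantized depth map (a sketch).

    Connected regions of equal depth value with fewer than `min_pixels`
    pixels are treated as feathers and replaced by the depth value of the
    neighbouring region whose value differs from theirs the least.
    """
    out = qdepth.copy()
    for value in np.unique(qdepth):
        labels, n = ndimage.label(qdepth == value)
        for region_id in range(1, n + 1):
            mask = labels == region_id
            if mask.sum() >= min_pixels:
                continue                          # not a feather region
            # Depth values of the pixels just outside the feather region.
            ring = ndimage.binary_dilation(mask) & ~mask
            neighbours = np.unique(out[ring])
            if neighbours.size:
                # Neighbour value with the smallest depth difference.
                out[mask] = neighbours[np.argmin(np.abs(neighbours - value))]
    return out
```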
  • quantized-depth-map-based contour extraction is the process of generating, in the feather-removed quantized depth map, closed curves that enclose regions having the same depth value.
  • to this end, as shown in FIG. 12, a mask is first generated for each quantization step of the quantized depth map. Next, as shown in FIG. 13, outline extraction is performed on the mask generated for each quantization step, which yields the contour line of each quantization step. Combining the contour lines of all quantization steps then gives the contour lines of the quantized depth map, as shown in FIG. 14.
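  • A small sketch of this per-quantization-step mask and contour extraction (FIGS. 12-14) follows. OpenCV's findContours is assumed here purely as a convenient contour tracer; it is not named in the source, and the two-value return signature of OpenCV 4.x is assumed.

```python
import numpy as np
import cv2  # OpenCV assumed only as a mask/contour utility (OpenCV >= 4)

def extract_step_contours(qdepth):
    """Per-quantization-step mask and contour extraction (a sketch).

    For every quantized depth value a binary mask is built (cf. FIG. 12)
    and its outlines are traced (cf. FIG. 13); the closed curves of all
    steps together form the quantized depth map contour lines (cf. FIG. 14).
    """
    contours_by_step = {}
    for value in np.unique(qdepth):
        mask = (qdepth == value).astype(np.uint8) * 255
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        contours_by_step[float(value)] = contours
    return contours_by_step
```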
  • a bending point is a point where the change of direction along the contour line is large.
  • bending points are extracted to separate the extracted contour line into individual curves.
  • FIG. 15 shows a method of calculating the degree of bending within the contour line, and Equation 4 below calculates the degree of bending of a reference pixel within the contour line.
  • here, D is the degree of bending of the reference pixel; FV is the direction-component distribution of the pixels preceding the reference pixel, and PV is the direction-component distribution of the pixels following it; W_H and W_L are the weight for the direction most similar to, and the weight for the direction most different from, the direction of the reference pixel, respectively; S is the number of pixels before and after the reference pixel used to compute the direction distributions; d is the distance from the reference pixel; and B_{i,d} has the value 1 if the pixel at distance d lies in direction i, and 0 otherwise.
  • to obtain the degree of bending of a pixel on the contour line, the direction-change distributions of the pixels before and after it are computed, and the difference between the two distributions gives the degree of direction change at the reference pixel.
  • the degree of bending D is calculated for every pixel of the contour line, and if this value exceeds a certain threshold, the pixel is designated as a bending point candidate.
  • bending point candidates appear consecutively from the point where the contour begins to change, so among each run of consecutive candidates the one with the maximum degree of bending is extracted as the final bending point. FIG. 16 shows the result of extracting final bending points from the bending point candidates: the candidates are shown in green in FIG. 16(a), and the final bending points are shown in green in FIG. 16(b).
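  • The sketch below is one possible reading of the Equation 4 bending-degree measure together with the candidate/final bending point selection. The 8-direction quantization, the closed-contour indexing, and the parameter values (S, W_H, W_L, threshold) are assumptions for illustration only.

```python
import numpy as np

def bending_points(contour, S=7, w_h=1.0, w_l=0.1, thresh=1.5):
    """Bending-point detection along a closed contour (a sketch of Eq. 4).

    `contour` is an (N, 2) array of (x, y) points in order. For each
    reference pixel, the 8-direction distributions of the S preceding and
    S following steps are accumulated with distance-decaying weights
    (W_H - C*(d-1), C = (W_H - W_L)/S), and the degree of bending D is the
    absolute difference of the two distributions. Runs of consecutive
    candidates above `thresh` are reduced to the candidate with maximum D.
    """
    contour = np.asarray(contour, dtype=float)
    n = len(contour)
    c = (w_h - w_l) / S                                  # weight decay step

    def direction(a, b):
        angle = np.arctan2(b[1] - a[1], b[0] - a[0])
        return int(np.round(angle / (np.pi / 4))) % 8    # 8 directions

    D = np.zeros(n)
    for i in range(n):
        fv, pv = np.zeros(8), np.zeros(8)
        for d in range(1, S + 1):
            w = w_h - c * (d - 1)
            fv[direction(contour[(i - d) % n], contour[(i - d + 1) % n])] += w
            pv[direction(contour[(i + d - 1) % n], contour[(i + d) % n])] += w
        D[i] = np.abs(fv - pv).sum()

    # Candidates above the threshold; keep the local maximum of each run.
    cand, points, run = D > thresh, [], []
    for i in range(n):
        if cand[i]:
            run.append(i)
        elif run:
            points.append(run[int(np.argmax(D[run]))])
            run = []
    if run:
        points.append(run[int(np.argmax(D[run]))])
    return points
```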
  • the contour line of each quantization step can be divided into individual curves using the extracted bending points as end points, and each of these curves is converted into a cubic Bezier curve. In this embodiment, the bending points at both ends and intermediate points between them are computed, and these points are used to estimate the Bezier curve equation and thereby obtain the control points of the estimated Bezier curve. Equation 5 below is the equation of a cubic Bezier curve.
  • here, P denotes a control point of the Bezier curve, t denotes the parameter of the curve ranging from 0 to 1, and B(t) denotes the point of the Bezier curve at parameter t. Equation 5 can be rewritten as the product of a matrix T containing the powers of t, a matrix M containing the coefficients of t, and a matrix P containing the control points, giving B(t) = TMP.
  • the matrix K of input points, consisting of the bending points at both ends and the intermediate points between them, can be separated into its x and y components. In this embodiment, the two intermediate points at the 1/4 and 3/4 positions are computed, so a total of four points are used as input.
  • using Equation 9, a ratio is computed from the values of the input points and of the bending points at the start and end of the curve, which yields an approximate t value for each input point; collecting the t values of all input points into a matrix gives Equation 10. The error of the generated Bezier curve is then computed as the pixel-position difference between the actual contour points and the points reconstructed from the Bezier curve; Equation 11 expresses this error for the y component and its matrix form.
  • the value of the P_y matrix that minimizes the error of Equation 11 gives the y components of the control points of the optimal Bezier curve, and Equation 12 shows the process of obtaining this minimum-error P_y.
  • once the optimal y components of the control points are obtained, the same process is repeated with the x components as input to obtain the optimal x components. FIG. 17 shows a depth map whose contour lines have been regenerated using the points of the converted Bezier curves.
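  • A compact sketch of this cubic Bezier fitting (Equations 5-12) is given below: the two end bending points plus the points at the 1/4 and 3/4 positions are used as input, each point's parameter t is approximated by its arc-length ratio (Equation 9), and the control points are obtained per component. numpy.linalg.lstsq is used as a numerically safer equivalent of the closed form of Equation 12; this substitution is an assumption of the sketch.

```python
import numpy as np

# Coefficient matrix of a cubic Bezier curve: B(t) = [1 t t^2 t^3] @ M @ P
M = np.array([[ 1.,  0.,  0., 0.],
              [-3.,  3.,  0., 0.],
              [ 3., -6.,  3., 0.],
              [-1.,  3., -3., 1.]])

def fit_cubic_bezier(curve):
    """Least-squares cubic Bezier fit to a contour segment (a sketch).

    `curve` is an (N, 2) array of ordered contour points between two
    bending points. Returns the four control points (4, 2).
    """
    curve = np.asarray(curve, dtype=float)
    idx = [0, len(curve) // 4, (3 * len(curve)) // 4, len(curve) - 1]
    pts = curve[idx]

    # Approximate t of each input point as its arc-length ratio (Eq. 9/10).
    seg = np.linalg.norm(np.diff(curve, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])
    t = arc[idx] / arc[-1]

    T = np.stack([np.ones_like(t), t, t**2, t**3], axis=1)
    A = T @ M
    # Minimize ||A P - pts||^2 per component (equivalent to Eq. 12).
    P, *_ = np.linalg.lstsq(A, pts, rcond=None)
    return P
```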
  • for the contour line movement processing of step S140 shown in FIG. 3, region tracking must be performed, and region tracking requires the motion vectors of the points included in the region.
  • Fig. 18 is provided for explaining the motion vector calculation method according to another embodiment of the present invention.
  • as shown in FIG. 18, first, a region to be tracked, containing the points whose motion vectors are to be calculated in the current frame (t), is set (S140-1).
  • next, for the region set in step S140-1, the motion vectors of the points included in the region are extracted using a block-based robust phase correlation technique (S140-2).
  • to correct the motion vectors extracted in step S140-2, the region set in step S140-1 is first divided hierarchically (S140-3), and motion vectors of the region set in step S140-1 and of the regions divided in step S140-3 are extracted and corrected (S140-4). To distinguish them from the motion vectors of points extracted in step S140-2, the motion vectors of regions extracted and corrected in step S140-4 are referred to as global motion vectors.
  • finally, the motion vectors extracted in step S140-2 are corrected with reference to the global motion vectors extracted and corrected in step S140-4 (S140-5); this is done to fix erroneous motion vectors.
  • FIG. 19 shows how the region containing the points whose motion vectors are to be calculated is set in the current frame (t); the part marked with a dashed line is the region to be set. As the expressions on the left of FIG. 19 indicate, the region is set by computing the minimum x, maximum x, minimum y, and maximum y coordinates of the points.
  • FIG. 20 shows how the motion vectors of the points included in the region are extracted using the block-based robust phase correlation technique. First, a block B1 centered on one of the points included in the region set in the current frame (t) is set, and a block B2 having the same position and size is set in the next frame (t+1).
  • the two blocks (B1, B2) are each transformed into the frequency domain by an FFT, and the correlation (FBC) of the two frequency-domain blocks (FB1, FB2) is measured.
  • the correlation (FBC) is then converted back to the time domain (B_t) by an IFFT.
  • a Gaussian filter is applied to remove noise from the converted result (B_t); the Gaussian filter is given as an example of a low-pass filter and can be replaced by another type of filter. Finally, the (x, y) coordinate with the largest value in the filtered result is selected and shifted according to the block size, which yields the motion vector.
  • the motion vector extraction shown in FIG. 20 is for one of the points included in the region; motion vectors for the other points must likewise be extracted by the method described above.
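  • A minimal sketch of the FIG. 20 steps follows: FFT of both blocks, a normalized cross-power spectrum as the frequency-domain correlation, IFFT, Gaussian low-pass filtering, and peak picking with a shift according to the block size. The normalization used and the sign convention of the returned displacement are assumptions, since the source does not spell out the "robust" variant in detail.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def phase_correlation_mv(block_t, block_t1, sigma=1.0):
    """Motion vector of one point via block phase correlation (a sketch)."""
    fb1 = np.fft.fft2(block_t)                 # current-frame block B1
    fb2 = np.fft.fft2(block_t1)                # next-frame block B2
    cross = fb1 * np.conj(fb2)
    cross /= np.abs(cross) + 1e-9              # normalized cross-power spectrum
    corr = np.real(np.fft.ifft2(cross))        # back to the spatial domain
    corr = gaussian_filter(corr, sigma)        # low-pass filter (Gaussian here)

    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # Wrap coordinates beyond half the block size to negative displacements.
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dx, dy
```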
  • FIG. 21 illustrates a method of extracting global motion vectors by hierarchically dividing a set area.
  • Global motion vectors are described above as motion vectors for (divided) areas.
  • as shown in FIG. 21, the region S set in step S140-1 is divided into four regions (S1, S2, S3, S4), and each of these four regions is again divided into four regions; that is, the region S set in step S140-1 is divided hierarchically.
  • the global motion vectors of the regions are defined as follows: the global motion vector of region S is the average of the motion vectors of the points contained in S, the global motion vector of region S1 is the average of the motion vectors of the points contained in S1, and so on for the other regions.
  • the extracted global motion vectors are then corrected as shown in FIG. 22: when the direction (angle) difference between a global motion vector and the global motion vector of the upper-layer region containing it exceeds a threshold (Th1), the global motion vector is corrected by a weighted sum with the global motion vector of the upper-layer region; otherwise it is left unchanged.
  • motion vector correction uses the global motion vectors of the lowest layer; the correction process is shown in FIG. 23.
  • the motion vectors to be corrected are the motion vectors for the points included in the area (S).
  • a motion vector is corrected by a weighted sum with the global motion vector of the lowest-layer region containing it when the direction difference between them exceeds the threshold (Th2).
  • otherwise, the motion vector is not corrected.
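  • Below is a sketch of a two-level hierarchical global motion vector computation and correction in the spirit of FIGS. 21-23. The 2x2 splitting per level, the use of degrees for the thresholds Th1/Th2, and the 0.5 blending weight are assumptions; the source only states that a weighted sum is applied when the angle difference exceeds a threshold.

```python
import numpy as np

def angle_diff(u, v):
    """Unsigned angle between two 2-D vectors, in degrees."""
    a = np.arctan2(u[1], u[0]) - np.arctan2(v[1], v[0])
    return np.degrees(np.abs((a + np.pi) % (2 * np.pi) - np.pi))

def correct_motion_vectors(points, mvs, bbox, th1=30.0, th2=30.0, alpha=0.5):
    """Hierarchical global-motion-vector correction (a sketch).

    `bbox` = (x0, y0, x1, y1) is split into a 2x2 grid twice (16 leaf
    cells). A region's global MV is the mean MV of the points it contains;
    a child global MV is blended with its parent when their directions
    differ by more than `th1` degrees, and a point MV is blended with its
    leaf-cell global MV when they differ by more than `th2` degrees.
    """
    points, mvs = np.asarray(points, float), np.asarray(mvs, float)

    def cells(b, n):
        x0, y0, x1, y1 = b
        xs, ys = np.linspace(x0, x1, n + 1), np.linspace(y0, y1, n + 1)
        return [(xs[i], ys[j], xs[i + 1], ys[j + 1])
                for j in range(n) for i in range(n)]

    def inside(b):
        # Points on the far border of bbox are ignored for simplicity.
        x0, y0, x1, y1 = b
        return (points[:, 0] >= x0) & (points[:, 0] < x1) & \
               (points[:, 1] >= y0) & (points[:, 1] < y1)

    def gmv(mask, fallback):
        return mvs[mask].mean(axis=0) if mask.any() else fallback

    root = gmv(np.ones(len(points), bool), np.zeros(2))
    corrected = mvs.copy()
    for mid in cells(bbox, 2):                          # level 1: 4 regions
        g1 = gmv(inside(mid), root)
        if angle_diff(g1, root) > th1:
            g1 = alpha * g1 + (1 - alpha) * root        # blend with parent
        for leaf in cells(mid, 2):                      # level 2: 16 regions
            m = inside(leaf)
            g2 = gmv(m, g1)
            if angle_diff(g2, g1) > th1:
                g2 = alpha * g2 + (1 - alpha) * g1
            for i in np.where(m)[0]:                    # correct point MVs
                if angle_diff(corrected[i], g2) > th2:
                    corrected[i] = alpha * corrected[i] + (1 - alpha) * g2
    return corrected
```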
  • region tracking simulation results using the motion vector calculation results according to an embodiment of the present invention are shown in FIGS. 24 and 25: FIG. 24 shows the tracking result for a knee region and FIG. 25 shows the tracking result for a hat region, and both confirm that the region tracking is performed well.
  • to obtain a continuous depth map by interpolating the quantized depth map expressed by contour lines, the depth map must be smoothed. A cost function for depth map smoothing according to another embodiment of the present invention is shown in Equation 13 below.
  • the cost function for depth map smoothing according to this embodiment is identical to the existing cost function of Equation 1 in that it contains a term [the term with weight β] reflecting the first-order partial derivatives of the smoothed image (S_p) with respect to x and y.
  • however, it differs from the existing cost function of Equation 1 in that it further includes a term with weight α and a term with weight γ.
  • the terms with weights α and γ differ from the term with weight β in that they reflect second-order partial derivatives of the smoothed image (S_p): specifically, the second-order partial derivative of the smoothed image (S_p) with respect to x and the second-order partial derivative with respect to y.
  • the term with weight α can also be modified as shown in Equation 14 below. It is likewise possible to implement the cost function with only the α term and without the γ term, or conversely with only the γ term and without the α term.
  • as Equation 14 shows, the cost function for depth map smoothing according to this other embodiment includes a term [the term with weight α] reflecting the mixed partial derivative of the smoothed image (S_p) with respect to x and y.
  • the smoothed image S_p computed by minimizing the cost function of Equation 14 is given by Equation 15 below.
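  • Because Equations 13-15 are available only as images in the source, the sketch below is an illustrative stand-in rather than the patented formulation: a quadratic cost with a data term, first-derivative terms (weight β), second-derivative terms in x and y (weight γ), and a mixed xy-derivative term (weight α), minimized by solving a sparse linear system with SciPy. The finite-difference discretization and the absence of any image-guided weighting are assumptions of this sketch.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def smooth_depth(I, beta=1.0, gamma=0.5, alpha=0.25):
    """Quadratic depth-map smoothing with 1st- and 2nd-derivative terms.

    Illustrative cost (not the exact Eq. 13-15):
      ||S - I||^2 + beta*(||Dx S||^2 + ||Dy S||^2)
                  + gamma*(||Dxx S||^2 + ||Dyy S||^2)
                  + alpha*||Dxy S||^2
    whose minimizer solves a sparse linear system in S.
    """
    h, w = I.shape
    n = h * w

    def forward_diff(size):
        d = sp.diags([-1.0, 1.0], [0, 1], shape=(size, size)).tolil()
        d[-1, :] = 0                      # no difference past the border
        return d.tocsr()

    Dx = sp.kron(sp.eye(h), forward_diff(w), format="csr")   # d/dx
    Dy = sp.kron(forward_diff(h), sp.eye(w), format="csr")   # d/dy
    Dxx, Dyy, Dxy = Dx @ Dx, Dy @ Dy, Dy @ Dx                 # 2nd derivatives

    A = (sp.eye(n)
         + beta * (Dx.T @ Dx + Dy.T @ Dy)
         + gamma * (Dxx.T @ Dxx + Dyy.T @ Dyy)
         + alpha * (Dxy.T @ Dxy))
    S = spsolve(A.tocsc(), I.ravel().astype(float))
    return S.reshape(h, w)
```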
  • for performance comparison with the conventional method, FIG. 26 shows the result of smoothing the quantized version of the original image (depth map) of FIG. 1 according to an embodiment of the present invention. Comparing FIG. 2 with FIG. 26 confirms that the smoothing result at edge portions is better than that of the conventional smoothing method.
  • FIG. 27 shows the PSNR (dB) measurement results of the conventional method and of the smoothing method according to an embodiment of the present invention for 38 files; FIG. 27 likewise confirms the superiority of the proposed smoothing method.
  • the weights (α, β, and γ) of the terms appearing in the cost function can be set by the user, or set automatically according to the specifications/characteristics of the depth map. Thus, as shown in FIG. 28, if the user sets the weights, depth map smoothing is performed with those weights; if the user does not set them, depth map smoothing is performed with preset weights.
  • the depth map mentioned in the above embodiments is a kind of image; the technical concept of the present invention may of course be applied to images other than the depth map (depth image), and such other images may include medical images.
  • FIG. 29 is a block diagram of a 3D content production system according to another embodiment of the present invention.
  • as shown in FIG. 29, the 3D content production system 200 includes a 2D image input unit 210, a depth map generation unit 220, a 3D conversion unit 230, and a 3D image output unit 240.
  • the depth map generation unit 220 generates depth maps for the 2D image frames input through the 2D image input unit 210 using a key-frame depth map.
  • one key-frame depth map is created per shot of the 2D video, and it can be generated automatically by a program or manually by a professional artist.
  • specifically, the depth map generation unit 220 extracts the contour lines of the key-frame depth map, matches and match-moves them to the next frame, generates the depth map of the next frame based on the match-moved contour lines, and interpolates it to complete the next frame's depth map; by repeating this procedure, depth maps for the 2D image frames are generated.
  • in this process, the depth map generation unit 220 performs depth map smoothing using the cost function presented above.
  • the 3D conversion unit 230 converts the 2D image frames into 3D using the depth maps generated by the depth map generation unit 220.
  • the 3D image output unit 240 outputs the result of the conversion performed by the 3D conversion unit 230 as a 3D video.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A 3D content production method and system are provided. In the 3D content production method and system according to embodiments of the present invention, in the process of converting 2D video content into 3D video content, depth map contour lines are generated automatically, the motion vectors of the points of the region used for the region tracking needed for depth map contour line movement processing are calculated more accurately, and image smoothing is performed by adaptively/selectively adding various terms to the cost function used for image smoothing.

Description

명세서
발명의명칭: 3D콘텐츠제작방법및시스템 기술분야
[1] 본발명은영상처리에관한것으로,더욱상세하게는 2D영상콘텐츠를 3D 영상콘텐츠로변환하는데있어필요한뎁스맵을생성하고,변형하며, 활용하는데필요한영상처리방법에관한것이다.
배경기술
[2] 수요에비해공급이매우부족하여, 3D영상콘텐츠의부재가심각한현
상황에서, 3D영상변환기술은 3D영상콘텐츠확보측면에서그중요성이 증대하고있다.
[3] 3D영상변환기술은아날로그나디지털로촬영된기존의 2D영상을콘텐츠를
3D영상콘텐츠로변환시키는기술이다.변환기술의기법들은다양하지만어느 방법이든입력된 2D영상에포함된단안입체정보를해석하여단안영상의 뎁스를추정한템스맵을생성하여시차에맞는좌우영상을출력하는것이기본 원리이다.
[4] 고화질을유지하기위해, 3D영상변환기술은매프레임마다모두수작업을 거치고있으며,이때문에다수의인력과오랜작업시간이소요되는바, 궁극적으로는 3D영상콘텐츠의생산성문제가시장형성의큰걸림돌로 작용하고있다.
[5] 또한, 3D영상변환작엌은매우노동집약적이라는특징이있다.정밀한변환 작업시,작업과정에서영상의매프레임마다모두수작업을거쳐야하기때문에 다수의인력과오랜작업시간이필요하다.완성도높은 3D영상을원할수록 이러한작업의강도는더욱높아진다.
[6] 2D영상의 3D영상변환은먼저영상정보를분석해사물과배경,전경과후경 둥을분리하고,각각의사물과배경에대해입체값을부여해입체영상으로 만들어내는과정으로이루어진다.
[7] 자동방식에서는사물과배경의분리,뎁스값부여가자동으로이루어지는데 반해수작업에서는기술자의노하우에의거해수동적인작업을거친다.이러한 변환에는오토컨버팅 (auto converting)과수작업흔합 (semi-auto),완전수작업의
3가지작업방식이있다.
[8] 오토컨버팅은변환장치나소프트웨어가설치되어있으면자동으로
변환해주기때문에비용이저렴하지만입체품질이매우낮다.수작업으로 이루어지는수동방식은매우우수한입체감을얻을수있지만,많은인력과 시간,그리고비용이많이든다는단점이있다.
[9] 현재진행되고있는기존영화의 3D영상으로의변환은대부분이프레임별로 작업하는완전수작업방식으로이루어지고있다.오토컨버팅과수작업방식의 중간인반자동또는흔합방식의변환방식의시스템으로작업하는변환 업체들이많으며,작업량의비율이자동과수동중어떤방식에더비중을 두느냐에따라달리변환작업을하고있는시점이다.
[10] 앞으로도자동으로 3D영상으로변환해주는시스템은발전할것이나,
오토컨버팅방식은 3D영상콘텐츠의부재를때우기위한일시적인방편일뿐 점점사라질것으로예상되고있다.완성도높은변환작업에는수동방식의 장점을살리고단점을극복할수있는다양한효율적인기술의개발이필요하다.
[11] 한편,가장일반적인영상스무딩알고리즘은아래의식 1에제시된코스트
함수를이용하는방법이다.
[12] [식 1]
Figure imgf000004_0001
[14] 기존의영상스무딩방법에서가장문제가되는것은,영상이급격히변하는 에지부분에서스무딩처리가만족스럽지못하다는점이다.
[15] 도 1에나타난원본영상을양자화한영상에대해,위식 1에따라스무딩한
결과를도 2에나타내었다.도 1과도 2에서는기존의영상스무딩에서문제가 되는에지부분을중점적으로나타내었다.
[16] 도 2를통해알수있는바와같이 ,기존방식에따르면,양자화된영상을
스무딩하는경우,에지부분에서스무딩이제대로이루어지지않았음을확인할 수있는데,이는영상전반의품질을떨어뜨리는요인으로작용하게된다.
발명의상세한설명
기술적과제
[17] 본발명은상기와같은문제점을해결하기위하여안출된것으로서,본발명의 목적은, 2D영상콘텐츠를 3D영상콘텐츠로변환하는과정에서,연속적인 뎁스값을갖는뎁스맵으로부터등심선을생성하는방법및장치를제공함에 있다.
[18] 또한,본발명의다른목적은, 2D영상콘텐츠를 3D영상콘텐츠로변환하는 과정에서,영상에포함된포인트들의움직임백터들을보다정확하게자동으로 산출하는방법및시스템을제공함에있다.
[19] 그리고,본발명의또다른목적은,에지부분에서의스무딩처리가우수한영상 스무딩방법및장치를제공함에있다.
과제해결수단
[20] 상기목적을달성하기위한본발명의일실시예에따른,뎁스맵등심선생성 방법은,뎁스맵의뎁스값들을다수의양자화단계들로양자화하는단계 ;및 양자화된뎁스맵에서둥심선을생성하는단계;를포함한다.
[21] 그리고,상기양자화단계는,뎁스맵의뎁스값들을선형양자화또는비선형 양자화할수있다. [22] 또한,본발명의일실시예에따른뎁스맵등심선생성방법은,양자화된 뎁스맵의경계부분에존재하는페더를제거하는단계;를더포함할수있다.
[23] 그리고,본발명의일실시예에따른뎁스맵둥심선생성방법은,양자화된 뎁스맵에서영역크기가임계크기보다작은영역을페더영역으로설정하는 단계;및상기페더영역을뎁스값차이가가장작은주변영역의뎁스값으로 변환하는단계;를더포함할수있다.
[24] 또한,본발명의일실시예에따른뎁스맵등심선생성방법은,상기등심선의 굴곡점들을추출하는단계;및상기굴곡점들사이의곡선을다른타입의 곡선으로변환하는단계;를더포함할수있다.
[25] 상기다른목적을달성하기위한본발명의일실시예에따른,움직임백터산출 방법은,프레임의특정영역에포함된포인트들의움직임백터들을추출하는 단계;상기특정영역을계층적으로분할하는단계;상기특정영역및상기특정 영역으로부터계층적으로분할된영역들에대한전역움직임백터들을 추출하는단계;상기전역움직임백터들을이용하여,상기움직임백터들을 보정하는단계;를포함한다.
[26] 그리고,상기움직임백터추출단계는,상기영역의포인트를중심으로하는 블력을설정하는단계;다음 -프레임에서상기블럭과동일한위치와크기를갖는 블릭을설정하는단계;블릭들의상관도를측정하는단계;및상관도에서가장 큰값을갖는좌표값을선택하고,블력사이즈에따른시프트를하여,움직임 백터를획득하는단계;를포함할수있다.
[27] 또한,상기전역움직임백터계산단계는,상기특정영역및상기특정
영역으로부터계층적으로분할된영역들에대한전역움직임백터들을 추출하는단계 ;및추출된전역움직임백터들을보정하는단계 ;를포함할수 있다.
[28] 그리고,상기전역움직임백터보정단계는,전역움직임백터가포함된상위 계층영역의전역움직임백터와의방향차가임계치를초과하면,상기상위 계층영역의전역움직임백터와가중치적용합으로상기전역움직임백터를 보정할수있다.
[29] 또한,상기움직임벡터보정단계는,상기움직임백터가포함된최하위계층 영역의전역움직임백터와의방향차가임계치를초과하면,상기최하위계층 영역의전역움직임벡터와가중치적용합으로상기움직임백터를보정할수 있다.
[30] 상기목적을달성하기위한본발명의또다른실시예에따른,영상스무딩 방법은,양자화된영상을입력받는단계 ;입력된영상을스무딩하는단계 ;및 스무딩된영상을출력하는단계 ;를포함하고,상기스무딩단계는,스무딩 영상의 2차편미분이반영된항을포함하는코스트함수를이용하여,상기 입력된영상을스무딩한다.
[31] 그리고,상기코스트함수는,스무딩영상의 X에관한 2차편미분및스무딩 영상의 y에관한 2차편미분이반영된항을포함할수있다.
[32] 또한,상기코스트함수는,스무딩영상의 xy에관한편미분이반영된항을더 포함할수있다.
[33] 그리고,상기코스트함수는,스무딩영상의 X에관한 1차편미분및스무딩 영상의 y에관한 1차편미분이반영된항을더포함할수있다.
[34] 또한,항들의가증치들은,사용자의입력에의해설정될수있다.
발명의효과
[35] 이상설명한바와같이 ,본발명의실시예들에따르면, 2D영상콘텐츠를 3D 영상콘텐츠로변환하는과정에서,뎁스템등심선을자동으로생성할수있게 되어,수작업을통해그래픽틀에서뎁스맵경계의곡선을그리는작업이필요 없게된다.이에의해,필요한인력의감소는물론,작업속도를크게향상시킬수 있게된다.
[36] 또한,뎁스맵등심선의모든구간을베지어곡선으로변환하기때문에미세 보정이용이하다는장점이있다.
[37] 아울러, 2D영상콘텐츠를 3D영상콘텐츠로변환하는과정에서,뎁스맵
등심선이동처리를위해필요한영역추적에이용되는영역의포인트들에대한 움직임백터들을보다정확하게산출할수있게된다.이에의해,필요한인력의 감소는물론,작업속도를크게향상시킬수있게된다.
[38] 그리고,영상스무딩에이용되는코스트함수에다양한항들을
적웅적 /선택적으로추가하여영상스무딩을수행할수있게되는바,에지 부분에서의스무딩처리가우수해져,궁극적으로는영상전체의품질을 향상시킬수있게된다.
도면의간단한설명
[39] 도 1은원본영상을나타낸도면,
[40] 도 2는,도 1의원본영상을을양자화한영상에대해,기존의방법으로스무딩한 결과를나타낸도면,
[41] 도 3은본발명이적용가능한 3D콘텐츠제작방법의설명에제공되는흐름도, [42] 도 4는,도 3의부연설명에제공되는도면,
[43] 도 5는본발명의일실시예에따른,뎁스맵등심선생성방법의설명에
제공되는흐름도,
[44] 도 6은랩스맵선형양자화결과를예시한도면,
[45] 도 7은뎁스맵비선형양자화결과를예시한도면,
[46] 도 8은양자화된뎁스맵의페더를나타낸도면,
[47] 도 9는양자화된뎁스맵에레이블링기법을이용하여페더영역을추출한
결과를예시한도면,
[48] 도 10은페더영역을제거하는기법의설명에제공되는도면,
[49] 도 11은페더영역이제거된양자화된뎁스맵을예시한도면, [50] 도 12는양자화된뎁스맵에서양자화단계별로생성한마스크를도시한도면,
[51] 도 13은각양자화단계별등심선을나타낸도면,
[52] 도 14는양자화된뎁스맵등심선을나타낸도면,
[53] 도 15는등심선내굴곡정도를계산하는방법을나타낸도면,
[54] 도 16은굴곡점후보들에서최종굴곡점을추출한결과를나타낸도면,
[55] 도 17은베지어곡선을이용하여둥심선을재생성한뎁스맵을나타낸도면,
[56] 도 18은본발명의다른실시예에따른,움직임백터산출방법의설명에
제공되는흐름도,
[57] 도 19는영역설정과정의설명에상세한제공되는도면,
[58] 도 20은움직임백터추출과정의설명에상세한제공되는도면,
[59] 도 21은계층적영역분할및전역움직임백터추출과정의설명에상세한 제공되는도면,
[60] 도 22는전역움직임백터보정과정의설명에상세한제공되는도면,
[61] 도 23은움직임백터보정과정의설명에상세한제공되는도면,
[62] 도 24및도 25는,본발명의실시예에따른움직임백터산출결과를이용한 영역추적시뮬레이션결과를나타낸도면,
[63] 도 26은,도 1의원본영상을양자화한영상에대해,제안된방법으로스무딩한 결과를나타낸도면,
[64] 도 27은 38개파일에대한 PSNR dB측정결과를나타낸도면,
[65] 도 28은코스트함수의가중치설정방법의설명에제공되는도면,그리고, [66] 도 29는본발명의또다른실시예에따른 3D콘텐츠제작시스템의
블럭도이다.
발명의실시를위한최선의형태
[67] 이하에서는도면을참조하여본발명을보다상세하게설명한다.
[68]
[69] 1. 3D콘텐츠제작
[70] 도 3은본발명이적용가능한 3D콘텐츠제작방법의설명에제공되는
흐름도이다.도시된 3D콘텐츠제작방법은, 2D콘텐츠로부터 3D콘텐츠를 생성하는과정이다.
[71] 도 3에도시된바와같이,먼저 2D영상콘텐츠를입력받아 (S105),샷 (Shot) 단위로분할한다 (S110). S110단계에의해 2D영상콘텐츠는다수의샷들로 분할된다.동일샷에포함된프레임들은동일 /유사한배경을갖는다.
[72] S110단계에서의샷분할은전문아티스트에의한수작업으로수행될수도 있고,유사한프레임들을샷단위로구분하는프로그램을이용하여자동으로 수행될수도있다.
[73] 이후,하나의샷을선정하고 (S115),선정된샷에서키-프레임을
선정한다 (S120). S115단계에서의샷선정은, 2D영상콘텐츠에서샷의시간 순서에따라순차적으로자동선정 (즉,첫번째샷을먼저선정하고,이후다음 샷을선정)할수있음은물론,전문아티스트의판단에의해선정순서를다른 순서로정할수도있다.이하에서는,샷선정이샷의시간순서따라순차적으로 이루어지는것을상정하겠다.
[74] 아을러, S120단계에서의키-프레임선정도샷을구성하는프레임들중
시간적으로가장앞선프레임 (즉,첫번째프레임 )을키 -프레임으로자동선정할 수있음은물론,전문아티스트의판단에의해그밖의다른프레임을
키-프레임으로선정할수도있다.이하에서는,키-프레임이샷의첫번째 프레임인것을상정하겠다.
[75] 다음, S120단계에서선정된키-프레임에대한뎁스맵 (Depth Map)을
생성한다 (S125). S125단계에서의뎁스맵생성역시프로그램을이용한자동 생성은물론,전문아티스트의수작업에의한생성도가능하다.
[76] 이후, S125단계에서생성된뎁스맵의등심선을생성한다 (S130). S130단계에서, 뎁스맵등심선은프로그램을이용하여뎁스맵을뎁스에따라,다수의양자화 단계들로양자화처리하여생성한다.
[77] 다음, S130단계에서생성된등심선을다음-프레임에매치시키다 (S135).움직임 객체가있거나화각이이동하여 ,다음 -프레임이이전 -프레임인키 -프레임과 다른부분이있는경우,키-프레임의등심선은다음-프레임에정확히매치되지 않는다.
[78] 이에따라,다음-프레임에완전히매치되도록,등심선을부분적으로이동
처리 (매치무브)한다 (S 140). S 140단계에서의둥심선이동처리역시프로그램을 이용하여자동으로수행된다. S140단계에의해,다음-프레임에완전하게매치된 등심선이생성된다.
[79] 이후, S140단계에서매치무브된등심선을기반으로,다음-프레임의뎁스맵을 생성한다 (S 145). S 145단계에서생성되는다음-프레임의뎁스맵은양자화된 상태의뎁스맵이다.
[80] 이에,뎁스맵을보간하여,다음-프레임의뎁스맵을완성한다 (S150).
S150단계에서의뎁스맵보간처리역시프로그램을이용하여자동으로 수행된다.
[81] S135단계내지 S150단계는,샷을구성하는모든프레임들에대해완료될
때까지반복된다 (S155).즉,샷의두번째프레임에대한뎁스맵이완성되면,두 번째프레임의등심선을세번째프레임에매치시키고 (S135),등심선매치 무브를수행한후에 (S140),매치무브된등심선을기반으로,세번째프레임의 뎁스맵을생성하고 (S145),보간을통해뎁스맵을완성하게되며 (S150),세번째 프레임이후의프레임돌에대해서도동일한작업이수행된다.
[82] 샷을구성하는모든프레임들에대해뎁스맵생성이완료되면 (S155-Y),두
번째샷에대해 S115단계내지 S155단계를반복하며,두번째샷을구성하는 모든프레임들에대해뎁스맵생성이완료되면,이후의샷들에대해서도동일한 작업이수행된다.
[83] 2D영상콘텐츠를구성하는모든샷들에대해위절차가완료되면 (S160-Y), 지금까지생성된뎁스맵들을이용하여, 2D영상콘텐츠를구성하는프레임들을 3D변환처리한다 (S165).그리고, S165단계에서변환처리결과를 3D영상 콘텐츠로출력한다 (S170).
[84] 도 4는,도 3의부연설명에제공되는도면이다.도 4에는,
[85] 1) S115단계에서선정된샷을구성하는프레임들증첫번째프레임을
키 -프레임으로선정하고 (S120),
[86] 2)선정된키 -프레임에대한뎁스맵을생성한후에 (S 125),
[87] 3)뎁스맵의등심선을추출하고 (S130),
[88] 4)추출한둥심선을다음 -프레임에매치시켜,매치무브한후 (S135, S140), [89] 5)매치무브된둥심선을기반으로,다음 -프레임의뎁스맵을생성하고 (S145), [90] 6)생성된뎁스맵을보간하여,다음-프레임의뎁스맵을완성 (S150)하는과정이 도식적으로나타나있다.
[91]
[92] 2.덱스맵둥심선생성
[93] 도 3과도 4에도시된 S130단계의뎁스맵등심선생성과정에대해,도 5를 참조하여상세히설명한다.도 5는본발명의일실시예에따른,뎁스맵등심선 생성방법의설명에제공되는흐름도이다.
[94] 도 5에도시된바와같이,먼저,키-프레임의뎁스맵을입력받고 (S130-1), 입력된뎁스맵에대해선형양자화 (S 130-2)또는비선형양자화 (S 130-3)를 수행한다. S130-1단계에서입력되는뎁스맵에서뎁스들값은연속적인반면, S130-2단계또는 S130-3단계에의해뎁스맵의뎁스값들은다수의양자화 단계들로양자화된다.
[95] S130-2단계또는 S130-3단계에적용할양자화단계의개수는필요와사양에 따라지정할수있다.
[96] 이후,양자화된뎁스맵의경계부분들에존재하는페더를제거하고 (S130-4), 페더가제거된양자화된뎁스맵에서등심선을추출한다 (S130-5).
S130-5단계에서추출하는등심선은뎁스맵에서동일한템스값을갖는화소들을 연결한선이다.
[97] 다음,뎁스맵의등심선에서굴곡점들을추출하고 (S130-6),굴곡점들사이의 곡선을베지어곡선으로변환하여 (S130-7),뎁스맵등심선을완성한다.
S130-6단계에서추출되는굴곡점들은등심선에서방향변화가큰지점들을 말한다.
[98] S130-7단계에서의변환은, 2개의굴곡점들과그사이의 1/4, 3/4지점의
중간점을계산하여,총 4개의포인트를이용하여 3차베지어커브의제어 포인트들을계산하고,계산된제어포인트들을이용하여베지어곡선으로 변환하는과정에의한다. [99] 이하에서,도 5를구성하는단계돌에대해,보다구체적으로설명한다.
[100]
[101] 3.키-프레익뎁스맵 ^형양자화
[102] 키-프레임뎁스맵선형양자화는,사용자가설정한양자화단계의개수에따라 뎁스맵의템스값범위를동일한크기로구분하여뎁스법을양자화하는 기법으로아래의식 2로나타낼수있다.
[103] [식 2]
[104] Δ = (X_Max - X_Min) / step
[105] 여기서 , Χ는입력된키-프레임뎁스맵의뎁스값을나타내며 XMax, XMi„은각각 뎁스맵에서의최고뎁스값,최소뎁스값을나타낸다,그리고, step은양자화 단계의개수를나타내며 Δ는양자화단계의크기를나타낸다.마지막으로, Q(x)는양자화된뎁스값을나타낸다.
[106] 위식 2로표현된바와같이,양자화단계의크기는,뎁스맵에서의최고
뎁스값과최소뎁스값을파악하고,이들의차를사용자가설정한양자화단계의 개수로나누어산출함을알수있다.
[107] 위식 2를이용하면,입력된키-프레임뎁스맵이나타낼수있는뎁스값의
범위를동일한크기의양자화단계로구분한양자화뎁스맵을얻을수있다.도 6에는선형양자화기법을통해얻은양자화뎁스맵과그히스토그램을 나타내었다.도 6에서, (a)는입력된키-프레임뎁스맵이고, (b)는양자화단계의 크기를 10으로설정하여선형양자화한뎁스맵이며, (c)는양자화단계의크기를 20으로설정하여선형양자화한뎁스맵이다.
[108]
[109] 4.키-프레임 랩스맵비선형 양자화
[no] 키-프레임뎁스맵비선형양자화는,사용자가설정한양자화단계의개수에 따라각양자화단계의크기를키-프레임뎁스맵에최적화된크기로계산하여 키-프레임뎁스맵을양자화하는기법이다.
[HI] 최적화된양자화단계의크기를계산하기위해 K-means군집화기법을사용할 수있다.아래의식 3은 K-Means군집화기법을이용한키 -프레임뎁스맵비선형 양자화의식을나타낸다.
[112] [식 3]
[113] V = Σ_{i=1..k} Σ_{x_j ∈ S_i} (x_j - μ_i)²
[114] 여기서, k는양자화단계의개수를나타내며 , (^는 i번째양자화단계의
평균값을나타낸다.또한, &는 i번째양자화단계에속하는화소들의집합을 나타내며, Xj는집합에속하는화소의뎁스값을나타낸다.그리고 , V는전체 분산을나타낸다.
[115] 키-프레임뎁스맵비선형양자화는,사용자가설정한양자화단계의개수에 따라양자화단계의크기를동일하게나누고,각단계의중심값을양자화단계의 초기평균값들로정한다.이후,뎁스맵의각화소들을초기평균값들과비교하여 가장가까운값을찾고그양자화단계의집합으로구분한다.그리고,전체 뎁스맵이구분되면각양자화단계에속한집합의평균값을다시계산하여 양자화단계의평균값으로갱신한다.이과정을현재의평균값들과갱신된 평균값들이변하지않거나전체분산값이더이상작아지지않을때까지 반복한다.모든과정이완료되면양자화단계의집합에속한화소들을그집합의 평균값으로대체하여비선형양자화를수행한다.
[116] 도 7에는비선형양자화기법을통해얻은양자화뎁스맵과그히스토그램을 나타내었다.도 7에서, (a)는입력된키-프레임뎁스맵이고, (b)는양자화단계의 크기를 10으로설정하여비선형양자화한뎁스맵이며, (c)는양자화단계의 크기를 20으로설정하여비선형양자화한뎁스맵이다.
[117] ᅳ
[118] 5.양자화된뎁스맵의페더제거
[119] 원본뎁스맵의경계에는부드러운변화를위한페더영역이존재한다.선형 또는비선형양자화기법을통해생성된양자화뎁스맵의경계부분에는페더 영역이작은영역돌로나타난다.
[120] 도 8에는양자화된뎁스맵의페더를나타내었다.도 8의 (a)에는원본뎁스맵을, (b)에는양자화된뎁스맵을각각나타내었다.
[121] 양자화된뎁스맵에서페더영역을추출하기위해,레이블링기법을이용한다. 레이블링기법은영상에서주변화소와연속으로같은값을갖는영역을하나의 영역으로분리하는기법이다.레이블링기법을이용하여양자화된뎁스맵을 각각의영역으로분리하면,페더들은주변템스값들과다른템스값을갖기 때문에하나의작은영역으로분리된다.
[122] 페더영역은페더영역이아닌다른영역과는다르게매우작은영역이기
때문에영역에속한화소수가매우작은특징이있다.따라서,레이블링 기법으로분리된영역에속한화소수가일정임계치보다작을경우페더 영역으로추출한다.
[123] 도 9에는양자화된뎁스맵에레이블링기법을이용하여페더영역을추출한 예를나타내었다.도 9의 (a)에는레이블링으로페더영역을추출한결과를 나타내었고,도 9의 (b)에는페더영역을확대하여나타내었다.
[124] 도 10은페더영역을제거하는기법의설명에제공되는도면이다.도 10에서 , A 영역은페더영역을나타내고 , Β영역과 C영역은비 -페더영역 (페더가아닌 영역)을나타낸다.
[125] 페더영역을제거하기위해,페더영역인 Α영역의주변화소의뎁스값을 탐색하고,탐색된뎁스값과페더영역의뎁스값의차이값을구한다.그리고, 가장작은차이값을보이는주변화소의뎁스값으로페더영역의뎁스값을 대체하여페더영역을제거한다.
[126] 이기법에의해페더영역이제거된양자화된뎁스맵을도 11에예시하였다.도
11의 (a)에는페더영역이제거되지않은양자화된뎁스맵을나타내었고,도 11의 (b)에는페더영역이제거된양자화된뎁스맵을나타내었다.
[127]
[128] 6.양자화뎁스뱉기반등심선추출
[129] 양자화뎁스맵기반등심선추출은,페더가제거된양자화뎁스맵에서같은 뎁스값을갖는영역을잇는폐곡선을생성하는과정이다.
[130] 이를위해,도 12에도시된바와같이,먼저양자화된뎁스맵에서양자화
단계별로마스크를생성한다.다음,도 13에도시된바와같이,각양자화 단계별로생성된마스크에서외곽선추출을수행한다.이에의해,각양자화 단계별등심선을얻을수있다.
[131] 이후,각양자화단계별로둥심선을통합하면,도 14에도시된바와같이, 양자화된뎁스맵등심선을얻을수있게된다.
[132]
[133] 7.듀식서내궂곡점추출
[134] 굴곡점은둥심선에서방향의변화가큰지점을의미한다.추출된등심선에서 각각의곡선을구분하기위해굴곡점을추출한다.도 15에둥심선내굴곡 정도를계산하는방법을나타내었고,아래의식 4는등심선내의기준화소의 굴곡정도를계산하는식이다.
[135] [식 4]
[136] D = Σ_i |FV_i - PV_i|
FV_i = Σ_d {W_H - C·(d-1)} × B_{i,d},   PV_i = Σ_d {W_H - C·(d-1)} × B_{i,d}
C = (W_H - W_L) / S
[137] 여기서, D는기준화소의굴곡정도를나타낸다.그리고, FV는기준화소
이전의화소들의방향성분분포를나타내고, PV는기준화소이후들의화소의 방향성분분포를나타낸다. WH와 \\^은각각기준화소의방향과가장비슷한 방향의가중치와기준화소의방향과가장다른방향의가중치를나타내며, S는 기준화소이전,이후의방향분포를계산할화소수를나타내고, d는기준 화소와의거리를나타낸다.마지막으로 Bi4는 d거리에있는화소가 i방향이면 1의값,아니면 0의값을갖는다.등심선의한화소의굴곡정도를구하기위해 앞,뒤의화소들의방향변화분포를계산하며앞,뒤의분포의차이값을구해서 기준화소의방향변화의정도를계산할수있다.등심선내의모든화소에대해 굴곡정도 D를계산하고이값이일정임계치를넘을경우그화소를굴곡점 후보로정한다.
[138] 굴곡점후보들은등심선에변화가시작되는부분부터연속적으로나타난다. 따라서,연속적인굴곡점후보들에서최대굴곡정도를갖는굴곡점후보를최종 굴곡점으로추출한다.도 16에는굴곡점후보들에서최종굴곡점을추출한 결과를나타내었다.구체적으로,도 16의 (a)에는굴곡점후보들을녹색으로 나타내었고,도 16의 (b)에는최종굴곡점을녹색으로나타내었다.
[139]
[140] 8.굴곡점사이곡선의베지어곡선변환
[141] 각양자화단계의등심선은추출된굴곡점을양끝점으로하여각각의곡선들로 나눌수있는데,이곡선들을각각 3차베지어곡선으로변환한다.본발명의 실시예에서는양끝의굴곡점과그사이의증간점을계산하고,이점 :을 이용하여베지어곡선식을추정한다.이를통해,추청된베지어곡선의제어 포인트들을얻는다.아래의식 5는 3차베지어곡선의식을나타낸다.
[142] [식 5]
[143] B(t) = P0·(1-t)³ + 3·P1·t·(1-t)² + 3·P2·t²·(1-t) + P3·t³
[144] 여기서, Ρ는베지어곡선의제어포인트를나타내며, t는 0~1사이의곡선의 변화구간을나타낸다.마지막으로, B(t)는 t구간일때베지어곡선의포인트를 나타낸다.식 5를식 6과같이 t값을나타내는 T행렬,제어포인트를나타내는 P행렬, t의계수들을나타내는 M행렬들의곱의형태로나타낼수있다.행렬의 곱의형태로변환하면식 7과같이나타낼수있다.
[145] [식 6]
Figure imgf000013_0001
[147] [식 7]
[148] B(t) = T·M·P
[149] 아래의식 8과같이입력으로받는양끝의굴곡점과그사이의중간점들의 행렬인 Κ는다시 X성분과 y성분으로각각나누어생각할수있다.본발명의 실시예에서는중간점으로 1/4, 3/4지점의 2개의값을계산하여총 4개의점을 입력으로사용한다.
[150] [식 8]
Figure imgf000014_0001
[152] 아래의식 9와같이입력된점들의값들과곡선의처음과끝인굴곡점을
이용하여비율을계산하면그입력점의대략적인 t값을구할수가있다.모든 입력점들에대해 t값을각각구해이를행렬로나타내면아래의식 10과같다.
[153] [식 9]
[154] | f -^._,|
Figure imgf000014_0002
7-1
=2
[155] [식 10]
Figure imgf000014_0003
[157] 최적의곡선을만들기위해실제등심선의점들과베지어곡선을통해복원한 점들과의화소위치차이를구해서생성한베지어곡선의오차를계산한다.식 11은 y성분에대해오차를구하는식을나타내며이를행렬의계산으로정리한 식또한나타낸다.
[158] [식 11]
[159]
E(Py) =∑(yi -B(tl)f E(P ) = (y -fMP )T(y-TMP )
[160] 식 11에서오차값이최소가될때의 Py행렬의값이최적의베지어곡선의제어 포인트의 y성분이된다.식 12는최소오차기반의 Py의값을구하는과정을 나타낸다.
[161] [식 12]
[162]
∂E/∂P_y = -2·(T·M)ᵀ (y - T·M·P_y) = 0,   P_y = (Mᵀ Tᵀ T M)⁻¹ (T·M)ᵀ y
[163] 계산을통해최적의베지어곡선의 y성분포인트를얻었다면입력을 X성분으로 바꾸여같은과정을반복하면최적의 X성분또한얻을수있다.도 17에는변환된 베지어곡선의포인트들을이용하여등심선을재생성한뎁스맵을나타내었다.
[164]
[165] 9.뎀스햅등심선 이듀처리름위하은 훽터산출
[166] 도 3과도 4에도시된 S140단계의등심선이동처리를위해서는,영역추적이 이루어져야하며,영역추적을위해서는영역에포함된포인트들에대한움직임 백터 (motion vector)들을알아야한다.
[167] 이에,영역의포인트들에대한움직임백터들을산출하는데,움직임백터가 산출되는포인트들은영역내의특징점들에한정되는것은아니다.특징점이 아니더라도,예를들어,작업자가선택한포인트들에대해서도움직임백터들을 산출할수있다ᅳ
[168] 움직임백터산출과정에대해,도 18을참조하여상세히설명한다.도 18은본 발명의다른실시예에따른,움직임백터산출방법의설명에제공되는
흐름도이다.
[169] 도 18에도시된바와같이,먼저,현재-프레임 (t)에서움직임백터를산출하고자 하는포인트들이포함된영역 (추적하고자하는영역)을설정한다 (S140-1).다음, S 140-1단계에서설정된영역에대해,블럭기반의강건한위상상관 (block based robust phase correlation)기법을사용하여 ,영역에포함된포인트들의움직임 백터들을추출한다 (S140-2).
[170] 다음, S 140-2단계에서추출된움직임백터들을보정하기위해,먼저 S140-1 단계에서설정된영역을계층적으로분할하고 (S 140-3), S 140-1단계에서설정된 영역과 S140-3단계에서분할된영역들에대한움직임백터를추출하고 보정한다 (S 140-4).
[171] S140-2단계에서추출되는포인트들에대한움직임백터들과구분하기위해,
S140-4단계에서추출 &보정되는영역들에대한움직임백터들에대해서는,
'전역움직임백터 (global motion vector)'로표기한다.
[172] 다음, S140-4단계에서추출 &보정된전역움직임백터들을참조하여 , S140-2 단계에서추출된움직임백터들을보정한다 (S140-5).이는,오류가있는움직임 백터들을바로잡기위함이다.
[173] 이하에서,도 18에도시된움직임백터산출방법을구성하는각단계들에대해, 하나씩상세히설명한다.
[174]
[175] 10.영역섬정다계 iS140-l)
[176] 도 19에는,현재-프레임 (t)에서움직임백터를산출하고자하는포인트들이
포함된영역 (추적하고자하는영역)을설정하는방법이도시되어있다.도
19에서점선으로표기된부분이설정하고자하는영역이다.
[177] 도 19의좌측에수식으로나타난바와같이,영역설정은포인트들의
좌표들로부터,최소 X좌표,최대 X좌표,최소 y좌표및최대 y좌표를산출하는 방식으로수행된다. [178]
[179] 11.음직임백터추출 (S140-2)
[180] 도 20에는,블럭기반의강건한위상상관기법을사용하여,영역에포함된
포인트들의움직임백터들을추출하는방법이도시되어있다.
[181] 먼저,현재-프레임 (t)에설정된영역에포함된포인트들중어느하나를
중심으로하는블럭 (B 을설정하고,다음 -프레임 (t+1)에서블럭 ( )과동일한 위치 &크기의블럭 (B2)을설정한다.
[182] 현재-프레임 (t)의블럭 (Β,)과다음-프레임 (t+1)의블럭 (B2)을설정한결과는도 20의좌측 /상부에,설정된블럭들 (^ 2)의좌표는도 20의우측 /상부에,각각 나타나있다.
[183] 이후,도 20의우측하부에나타난수식들을이용하여,두개의블릭들 '( 32) 각각을 FFT하여주파수도메인으로변환하고,주파수도메인에서두블릭들 (FBi ,FB2)의상관도 (FBC)를측정한후에 ,상관도 (FBC)를 IFFT하여시간도메인 (Bt )으로변환한다.
[184] 그리고,변환결과 (Bt)에포함된노이즈를제거하기위해 Gaussian filter를
적용한다.여기서, Gaussian filter는 Low-pass filter의일예로든것으로,다른 종류의필터로대체될수있다.
[185] 다음,필터링결과 (B。;)에서가장큰값을갖는 (x,y)좌표값을선택하고,블럭 사이즈에따른시프트를하면,움직임백터가얻어진다.
[186] 도 20에도시된움직임백터추출은,영역에포함된포인트들중하나에대한 것이다.다른포인트돌에대해서도,위에제시한방법에따라움직임백터들을 각각추출할것이요구된다.
[187]
[188] 12.계층적영역분할및저역움직 백터추출 &보정 140-3.4)
[189] 도 21은설정된영역을계층적으로분할하여,전역움직임백터들을추출하는 방법이도시되어있다.전역움직임백터들은 (분할된)영역들에대한움직임 백터들임은전술한바있다.
[190] 도 21에도시된바와같이, S140-1단계에서설정된영역 (S)은, 4개의영역들 (Sh
S2, S3, S4)로,분할된 4개의영역들 S2, S3, S4)은다시각각 4개의영역들로 재분할된다.즉, S140-1단계에서설정된영역 (S)은계층적으로분할되는것이다.
[191] 그리고,도 21에수식으로나타난바와같이,각영역들에대한전역움직임
백터들은다음과같다.
[192] 1)영역 (S)에대한전역움직임백터:영역 (S)에포함된포인트들에대한
움직임백터들의평균
[193] 21)영역 (S 에대한전역움직임백터:영역 (S,: 1포함된포인트들에대한
움직임백터들의평균
[194] 22)영역 (S2)에대한전역움직임백터 :영역 (S2)에포함된포인트들에대한
움직임백터들의평균 [195] ...
[196] 이후,추출된전역움직임백터들에대한보정이이루어지는데,이는도 22에 도시된바와같다.도 22의우측에수식으로나타난바와같이 ,전역움직임 백터는,자신이포함된상위계층영역의전역움직임백터와의방향 (angle)차가 임계치 (Thl)를초과하면,상위계층영역의전역움직임백터와가중치 (α)적용 합을통해보정된다.
[197] 예를들어 ,영역 (S 에대한전역움직임백터는,자신이포함된상위계층
영역인영역 (S)의전역움직임백터와의방향차가임계치를초과하면,영역 (S)의 전역움직임백터와가중치적용합을통해보정되는것이다.가증치 (α)가 "0.5"인경우,가중치적용합은영역 (S 에대한전역움직임백터와영역 (S)의 전역음직임백터의평균이다.
[198] 반면 ,자신이포함된상위계층영역의전역움직임백터와의방향차가임계치 이하이면,전역움직임백터는보정되지않는다.
[199]
[200] 13.음직임벡터보정
[201] 움직임백터보정에는최하위계층의전역움직임백터들이이용된다.도
23에는움직임백터보정과정이나타나있다.여기서,보정대상이되는움직임 백터들은영역 (S)에포함된포인트들에대한움직임백터들이다.
[202] 도 23의우측에수식으로나타난바와같이,움직임백터들은,자신이포함된 최하위계층영역의전역움직임백터와의방향차가임계치 (Th2)를초과하면, 그영역의전역움직임백터와가중치적용합을통해보정된다.
[203] 예를들어,최하위계층영역 (Sn)에포함된포인트에대한홈직임백터는, 자신이포함된최하위계층영역 (Su)의전역움직임백터와의방향차가 임계치를초과하면,최하위계층영역 (S„)의전역움직임백터와가증치적용 합을통해보정되는것이다.
[204] 반면,자신이포함된최하위계층영역의전역움직임백터와의방향차가
임계치이하이면,움직임백터는보정되지않는다.
[205]
[206] 14.시뮴레이션결과
[207] 본발명의실시예에따른움직임백터산출결과를이용한영역추적
시물레이션결과가도 24와도 25에나타나있다.도 24에서는무릎영역에대한 추적결과를,도 25에는모자영역에대한추적결과를,각각나타내었으며,도 24와도 25를통해영역추적이우수하게이루어졌음을확인할수있다.
[208]
[209] 15.뎁스맵스무딩
[210] 등심선으로표현된양자화된뎁스맵을보간하여,연속적인뎁스맵을획득하기 위해서는,뎁스맵을스무딩처리하여야한다.뎁스맵스무딩방법에대해, 이하에서상세히설명한다. [211] 본발명의또다른실시예에따른뎁스템스무딩을위한코스트함수는아래의 식 13과같다.
[212] [식 13]
[213]
|vs|=[( s)r s,s+ ( s)r
|v/|) = a j[(a,5)r dtS + (dsS)T diS]-[(8tI)T dj + {djY dsl])
[214] 위식 13을통해알수있는바와같이,본발명의일실시예에따른뎁스맵 스무딩을위한코스트함수는,스무딩영상 (Sp)의 X에관한 1차편미분과스무딩 영상 (SP)의 y에관한 1차편미분이반영된항 [가중치가 β인항]을포함한다는 점에서,식 1에표시된기존의코스트함수와동일하다.
[215] 하지만,본발명의일실시예에따른뎁스맵스무딩을위한코스트함수는, 가증치가 α인항과가중치가 γ인항을더포함한다는점에서,식 1에표시된 기존의코스트함수와차이가있다.
[216] 가중치가 α인항과가증치가 γ인항은스무딩영상 (Sp)의 2차편미분이
반영되어있다는점에서,스무딩영상 (Sp)의 1차편미분이반영된가중치가 β인 항과차이가있다.
[217] 구체적으로,가증치가 α인항과가증치가 γ인항은스무딩영상 (Sp)의 X에관한 2차편미분및스무딩영상 (Sp)의 y에관한 2차편미분이반영되어있다.
[218] 한편,가중치가 α인항은아래의식 14와같이변경이가능하다.뿐만아니라, 가중치가 γ인항없이가중치가 α인항만코스트함수에포함되도록
구현가능하다.
[219] 반대로,가중치가 α인항없이가중치가 γ인항만이코스트함수에포함되도록 구현가능하는것도가능함은물른이다.
[220] [식 14]
Figure imgf000018_0001
[222] 위식 u를통해알수있는바와같이,본발명의다른실시예에따른뎁스맵 스무딩을위한코스트함수는,스무딩영상 (Sp)의 xy에관한편미분이반영된 항 [가증치가 α인항]을포함한다.
[223] 식 14에제시된코스트함수에따라산출되는스무딩영상 (Sp)은아래의식 15와같다.
[224] [식 15] [225]
Figure imgf000019_0001
[226]
[227] 16.성능비교
[228] 기존방식과의성능비교를위해,도 1에나타난원본영상 (뎁스맵)을양자화한 영상 (뎁스맵)에대해,본발명의실시예에따라스무딩한결과를도 26에 나타내었다.도 2와도 26을비교하면,기존의스무딩방법에비해에지 부분에서의스무딩결과가보다우수하게나타났음을확인할수있다.
[229] 38개파일에대해기존방식과본발명의실시예에따른스무딩방법에대한 PSNR dB측정결과를도 27에나타내었다.도 27을통해서도,본발명의 실시예에따른스무딩방법의우수성을확인할수있다.
[230] 지금까지,뎁스맵스무딩을위한코스트함수와이에따른스무딩영상 (Sp)에 대해상세히설명하였다.
[231] 코스트함수에나타나는각항들의가증치들 (α,β,γ)은사용자에의해설정 가능하며 ,뎁스맵의사양 /특성에따라자동으로설정가능하다.따라서 ,도 28에 도시된바와같이,사용자가가중치들을설정한경우그에따라뎁스맵스무딩이 수행되는반면,사용자가가중치들을설정하지않으면사전설정된가중치들에 따라뎁스맵스무딩이이루어진다.
[232] 또한,위실시예에서언급한뎁스맵은영상의일종이다.뎁스맵 (뎁스영상) 이외의다른영상에대해서도본발명의기술적사상이적용될수있음은 물론이며,다른영상에는의료영상도포함될수있음은물론이다.
[233]
[234] 17. 3D콘텐츠제작시스템
[235] 도 ί9는본발명의또다른실시예에따른 3D콘텐츠제작시스템의
블럭도이다.본발명의실시예에따른 3D콘텐츠제작시스템 (200)은,도 29에 도시된바와같이, 2D영상입력부 (210),뎁스맵생성부 (220), 3D변환부 (230)및 3D영상출력부 (240)를포함한다.
[236] ¾스맵생성부 (220)는키-프레임뎁스맵을이용하여, 2D영상입력부 (210)를 통해입력되는 2D영상프레임들의뎁스맵을생성한다.키 -프레임뎁스맵은, 2D 영상에서샷마다하나씩생성되는데,프로그램에의한자동생성은물론,전문 아티스트에의한수동생성모두가가능하다.
[237] 뎁스맵생성부 (220)는키-프레임뎁스맵의둥심선을추출하고,추출한
등심선을다음-프레임에매치시켜매치무브한후,매치무브된등심선을 기반으로다음-프레임의뎁스맵을생성하고보간하여다음-프레임의뎁스맵을 완성하는절차에의해, 2D영상프레임들에대한뎁스맵들을생성한다.이 과정에서,뎁스맵생성부 (220)는위에서제시한코스트함수를이용하여뎁스맵 스무딩을수행한다.
[238] 3D변환부 (230)는뎁스맵생성부 (220)에서생성된뎁스맵들을이용하여, 2D 영상프레임들을 3D변환처리한다.그리고, 3D영상출력부 (240)는 3D 변환부 (230)에의한변환처리결과를 3D영상으로출력한다.
[239]
[240] 또한,이상에서는본발명의바람직한실시예에대하여도시하고
설명하였지만,본발명은상술한특정의실시예에한정되지아니하며 , 청구범위에서청구하는본발명의요지를벗어남이없이당해발명이속하는 기술분야에서통상의지식을가진자에의해다양한변형실시가가능한것은 물론이고,이러한변형실시들은본발명의기술적사상이나전망으로부터 개별적으로이해되어져서는안될것이다.

Claims

청구범위
뎁스맵의뎁스값들을다수의양자화단계들로양자화하는단계 ; 양자화된뎁스맵에서둥심선을생성하는단계;를포함하는것을 특징으로하는뎁스맵등심선생성방법 .
청구항 1에있어서,
상기양자화단계는,
뎁스맵의뎁스값들을선형양자화또는비선형양자화하는것을 특징으로하는뎁스맵등심선생성방법 .
청구항 1에있어서,
양자화된뎁스맵의경계부분에서존재하는페더를제거하는 단계;를더포함하는것을특징으로하는뎁스맵등심선생성방법. 청구항 1에있어서,
양자화된뎁스맵에서영역크기가임계크기보다작은영역을 페더영역으로설정하는단계;및
상기페더영역을뎁스값차이가가장작은주변영역의
뎁스값으로변환하는단계;를더포함하는것을특징으로하는 뎁스맵등심선생성방법.
청구항 1에있어서,
상기등심선의굴곡점들을추출하는단계;및
상기굴곡점들사이의곡선을다른타입의곡선으로변환하는 단계;를더포함하는것을특징으로하는뎁스맵등심선생성방법. 프레임의특정영역에포함된포인트들의움직임백터들을 추출하는단계;
상기특정영역을계층적으로분할하는단계;
상기특정영역및상기특정영역으로부터계층적으로분할된 영역들에대한전역움직임백터들을추출하는단계;
상기전역움직임백터들을이용하여,상기움직임백터들을 보정하는단계;를포함하는것을특징으로하는움직임백터산출 방법.
청구항 6에있어서,
상기움직임백터추출단계는,
상기영역의포인트를중심으로하는블럭을설정하는단계;
다음 -프레임에서상기블럭과동일한위치와크기를갖는블럭을 설정하는단계;
블럭들의상관도를측정하는단계;및
상관도에서가장큰값을갖는좌표값을선택하고,블럭사이즈에 따른시프트를하여,움직임백터를획득하는단계;를포함하는 것을특징으로하는움직임백터산출방법 .
청구항 6에있어서,
상기전역움직임백터계산단계는,
상기특정영역및상기특정영역으로부터계층적으로분할된 영역들에대한전역움직임백터들을추출하는단계;및 추출된전역움직임백터들을보정하는단계;를포함하는것을 특징으로하는음직임백터산출방법 .
청구항 8에있어서,
상기전역움직임백터보정단계는,
전역움직임백터가포함된상위계층영역의전역움직임 백터와의방향차가임계치를초과하면,상기상위계층영역의 전역움직임백터와가중치적용합으로상기전역움직임백터를 보정하는것을특징으로하는움직임백터산출방법ᅳ 청구항 6에있어서,
상기움직임백터보정단계는,
상기움직임백터가포함된최하위계층영역의전역움직임 백터와의방향차가임계치를초과하면,상기최하위계층영역의 전역움직임백터와가중치적용합으로상기움직임백터를 보정하는것을특징으로하는움직임백터산출방법.
양자화된영상을입력받는단계;
입력된영상을스무딩하는단계;및
스무딩된영상을출력하는단계;를포함하고,
상기스무딩단계는,
스무딩영상의 2차편미분이반영된항을포함하는코스트함수를 이용하여,상기입력된영상을스무딩하는것을특징으로하는 영상스무딩방법 .
청구항 11에있어서,
상기코스트함수는,
스무딩영상의 X에관한 2차편미분및스무딩영상의 y에관한 2차 편미분이반영된항을포함하는것을특징으로하는영상스무딩 방법.
청구항 12에있어서,
상기코스트함수는,
스무딩영상의 xy에관한편미분이반영된항을더포함하는것을 특징으로하는영상스무딩방법 .
청구항 13에있어서,
상기코스트함수는, 스무딩영상의 x에관한 1차편미분및스무딩영상의 y에관한 1차 편미분이반영된항을더포함하는것을특징으로하는영상 스무딩방법.
[청구항 15] 청구항 11에있어서,
항들의가증치들은,
사용자의입력에의해설정되는것을특징으로하는영상스무딩 방법.
PCT/KR2015/000974 2014-11-05 2015-01-29 3d 콘텐츠 제작 방법 및 시스템 WO2016072559A1 (ko)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
KR1020140153108A KR101709974B1 (ko) 2014-11-05 2014-11-05 뎁스맵 등심선 생성 방법 및 시스템
KR10-2014-0153108 2014-11-05
KR10-2014-0165076 2014-11-25
KR1020140165076A KR20160062771A (ko) 2014-11-25 2014-11-25 영상 스무딩 방법 및 장치
KR10-2015-0003125 2015-01-09
KR1020150003125A KR20160086432A (ko) 2015-01-09 2015-01-09 움직임 벡터 산출 방법 및 시스템

Publications (1)

Publication Number Publication Date
WO2016072559A1 true WO2016072559A1 (ko) 2016-05-12

Family

ID=55909279

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2015/000974 WO2016072559A1 (ko) 2014-11-05 2015-01-29 3d 콘텐츠 제작 방법 및 시스템

Country Status (1)

Country Link
WO (1) WO2016072559A1 (ko)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011126309A2 (ko) * 2010-04-06 2011-10-13 삼성전자 주식회사 비디오 부호화 방법 및 장치, 비디오 복호화 방법 및 장치
WO2011155704A2 (ko) * 2010-06-11 2011-12-15 삼성전자주식회사 깊이 전이 데이터를 이용한 3d 비디오 인코딩/디코딩 장치 및 방법
KR20120090508A (ko) * 2011-02-08 2012-08-17 포항공과대학교 산학협력단 이미지 처리 방법 및 이를 위한 장치
WO2013081304A1 (ko) * 2011-11-28 2013-06-06 에스케이플래닛 주식회사 2차원 영상을 3차원 영상으로 변환하는 영상 변환 장치, 방법 및 그에 대한 기록매체

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ARJAN KUIJPER: "Image smoothing and restorationby PDEs", GRAPHISCH-INTERAKTIVE SYSTEME, 16 December 2008 (2008-12-16), Retrieved from the Internet <URL:http://www.gris.informatik.tu-darmstadt.de/-akuijper/course/TUD/weickertexcerpt.pdf> *

Similar Documents

Publication Publication Date Title
CN102113015B (zh) 使用修补技术进行图像校正
CN101287143B (zh) 基于实时人机对话的平面视频转立体视频的方法
CN102404594B (zh) 基于图像边缘信息的2d转3d的方法
WO2007005839A2 (en) Video object cut and paste
KR102024872B1 (ko) 3d 얼굴의 모델링 방법 및 장치, 얼굴 추적 방법 및 장치
CN101287142A (zh) 基于双向跟踪和特征点修正的平面视频转立体视频的方法
KR20080108430A (ko) 2d 영상들로부터 3d 안면 재구성
CN109891880B (zh) 通过机器学习技术改进2d至3d的自动转换质量的方法
CN107507146B (zh) 一种自然图像软阴影消除方法
CN103248911A (zh) 多视点视频中基于空时结合的虚拟视点绘制方法
CN102609950A (zh) 一种二维视频深度图的生成方法
CN106447718B (zh) 一种2d转3d深度估计方法
JP5561786B2 (ja) 3次元形状モデル高精度化方法およびプログラム
CN113538569A (zh) 一种弱纹理物体位姿估计方法和系统
CN104036481A (zh) 一种基于深度信息提取的多聚焦图像融合方法
CN104200434A (zh) 一种基于噪声方差估计的非局部均值图像去噪方法
CN108805841B (zh) 一种基于彩色图引导的深度图恢复及视点合成优化方法
CN108924434B (zh) 一种基于曝光变换的立体高动态范围图像合成方法
CN102075777B (zh) 一种基于运动对象的视频图像平面转立体处理方法
CN110602476B (zh) 一种基于深度信息辅助的高斯混合模型的空洞填补方法
CN113888614B (zh) 深度恢复方法、电子设备和计算机可读存储介质
WO2016072559A1 (ko) 3d 콘텐츠 제작 방법 및 시스템
CN114998173B (zh) 基于局部区域亮度调节的空间环境高动态范围成像方法
KR101709974B1 (ko) 뎁스맵 등심선 생성 방법 및 시스템
KR101760463B1 (ko) 깊이 지도 보정 방법 및 그 장치

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15856919

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15856919

Country of ref document: EP

Kind code of ref document: A1