KR20130122581A - Method for editing stereoscopic image and method for extracting depth information therefor


Info

Publication number
KR20130122581A
Authority
KR
South Korea
Prior art keywords
image
depth
channel
map
displacement
Prior art date
Application number
KR1020130048154A
Other languages
Korean (ko)
Inventor
김태섭
정광철
김민서
한명희
백광호
Original Assignee
케이디씨 주식회사
리얼스코프 주식회사
동서대학교산학협력단
Priority date
Filing date
Publication date
Application filed by 케이디씨 주식회사, 리얼스코프 주식회사, 동서대학교산학협력단
Publication of KR20130122581A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106: Processing image signals
    • H04N 13/20: Image signal generators
    • H04N 13/271: Image signal generators wherein the generated image signals comprise depth maps or disparity maps

Abstract

The present invention relates to a depth information extraction method and a stereoscopic image editing method using the same, comprising: extracting proxy data from a left / right image; selecting an image based on the extracted proxy data to temporarily edit the sequence, and generating an edit decision list; calculating binocular disparity data of the left image and the right image of the pre-edited sequence; generating a displacement map using the calculated binocular disparity data; generating a depth map using the displacement map; and finally editing the temporarily edited sequence based on the displacement map, the depth map, and the edit decision list.
According to the present invention, the user can edit the sense of depth more objectively, which improves the completeness of the image and mitigates human-factor problems.

Description

Depth information extraction method and stereoscopic image editing method using the same {Method for Editing Stereoscopic Image and Method for Extracting Depth Information Therefor}

The present invention relates to a stereoscopic image editing method and, more particularly, to a depth information extraction method that moves away from the subjective depth editing of conventional stereoscopic image editing and allows the stereoscopic image to be edited based on objective information, and to a stereoscopic image editing method using the same.

At present, many domestic research institutes and related companies are developing stereo rigs for stereoscopic photography and 3D stereoscopic displays, and these are in wide use.

However, studies to date have mostly been limited to resolving factors that hinder the formation of stereoscopic effect in binocular sequences or that cause human-factor problems, such as vertical alignment error, color matching error, and time sync error. Research and development on producing contents, including stereoscopic movies, remains insufficient. In particular, research and development related to the editing of stereoscopic images lags far behind that of shooting and display.

In addition, few companies specialize in stereoscopic image editing. This is because establishing a system for stereoscopic image editing incurs a huge cost while success in the stereoscopic contents market cannot be assured. This situation has led to a decline in the professionalism of stereoscopic image editing and a reluctance to produce large-scale contents.

Meanwhile, in the conventional stereoscopic image editing method, either the left or the right image is selected and edited in a 2D state, the remaining image is matched to the edited one, and the result is then checked on a 3D monitor. That is, one side of the left or right image is selected for one-side editing, the edited image is matched with the remaining one so it can be viewed as a stereoscopic image, and calibration and correction are repeated.

As described above, the conventional stereoscopic image editing method relies on the editor's experience and intuition to predict how a 2D image will appear as a stereoscopic image. Not only is the completeness of the result lowered, but work time and cost also increase, because one-side edits must be revised after the stereoscopic image is checked at the final stage of editing.

The present invention has been made to solve the above-described problems. It is an object of the present invention to provide a depth information extraction method, and a stereoscopic image editing method using the same, that move away from the subjective depth editing of conventional stereoscopic image editing and edit the stereoscopic image according to depth changes within and between cuts using a displacement map and a depth map.

Another object of the present invention is to provide a low-cost depth information extraction method and a stereoscopic image editing method using the same.

According to a first aspect of the present invention, a depth information extraction method comprises: receiving proxy data of a left image and a right image of a pre-edited sequence; tracking feature points of the left image and the right image using the received proxy data; calculating binocular disparity data between the left image and the right image using the tracked feature points; generating a displacement map existing as a displacement channel using the calculated binocular disparity data; generating a depth map existing as a depth channel by using the displacement map and setting a reference value of a virtual stereo cam; and converting the displacement channel and the depth channel into an RGB channel.

According to a second aspect of the present invention, a stereoscopic image editing method comprises: extracting proxy data from a left / right image; selecting an image based on the extracted proxy data to temporarily edit the sequence, and generating an edit decision list; calculating binocular disparity data of the left image and the right image of the pre-edited sequence; generating a displacement map using the calculated binocular disparity data; generating a depth map using the displacement map; and finally editing the temporarily edited sequence based on the displacement map, the depth map, and the edit decision list.

As described above, according to the present invention, by providing a depth information extraction method that presents depth information to the user as visual information through a depth map and a displacement map, and a stereoscopic image editing method using the same, more objective editing of the sense of depth becomes possible; through this, the completeness of the image can be increased and human-factor problems can be resolved.

FIG. 1 is a flowchart illustrating a stereoscopic image editing method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a depth information extraction method according to an embodiment of the present invention; and
FIG. 3 is a diagram illustrating a color segment extraction result of a 2D image.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the following description of the present invention, a detailed description of known functions and configurations incorporated herein will be omitted when it may make the subject matter of the present invention rather unclear.

The stereoscopic image editing method according to the present invention is based on visualization of depth information. In general, depth information is data on the distance between objects, derived from the parallax generated by a binocular camera, that is, by the left and right eyes. Depth information that can be visualized in an image includes a depth map and a displacement map.

The depth map expresses spatial distance in 256 steps, as degrees between black and white regions. However, the depth map cannot express the positive parallax, zero parallax, and negative parallax required in stereoscopic images.

On the other hand, the displacement map, unlike the depth map, can express positive parallax, zero parallax, and negative parallax through spatial information expressed in color. In addition, the displacement map can represent more than 256 levels of space. However, the displacement map has the disadvantage that the distance between objects cannot be determined intuitively.
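
As an illustration of the difference between the two representations, the following Python/NumPy sketch (not part of the patent) quantizes a signed disparity field into a 256-step gray depth map, which loses the sign of the parallax, and into a color-coded displacement-style image, which keeps it. The channel assignments and normalization here are assumptions for illustration; the patent's own displacement channel layout is described below with reference to FIG. 2.

    import numpy as np

    def depth_map_8bit(disparity: np.ndarray) -> np.ndarray:
        """Quantize signed disparity into 256 gray steps (sign is lost)."""
        d = disparity.astype(np.float32)
        span = max(float(d.max() - d.min()), 1e-6)
        return ((d - d.min()) / span * 255).astype(np.uint8)

    def displacement_rgb(disparity: np.ndarray) -> np.ndarray:
        """Color-code signed disparity: red for negative (pop-out) parallax,
        blue for positive (recessed) parallax, black at zero parallax."""
        h, w = disparity.shape
        out = np.zeros((h, w, 3), np.float32)
        scale = max(float(np.abs(disparity).max()), 1e-6)
        out[..., 0] = np.clip(-disparity, 0, None) / scale  # R: negative parallax
        out[..., 2] = np.clip(disparity, 0, None) / scale   # B: positive parallax
        return (out * 255).astype(np.uint8)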

In order to convey storytelling and stereoscopic effect effectively in stereoscopic images, the use of stereoscopic space based on the distance between objects and the screen plays an important role. Here, the use of three-dimensional space refers to the proper use of positive parallax, negative parallax, and zero parallax in service of the storytelling.

Therefore, the stereoscopic image editing method according to the present invention uses depth information based on both the displacement map and the depth map. Accordingly, the objective stereoscopic space can be edited through the displacement map, and the distance between objects can be confirmed intuitively through the depth map.

FIG. 1 is a flowchart illustrating a stereoscopic image editing method according to an embodiment of the present invention.

Referring to FIG. 1, in the left / right image capturing step, a left / right image is captured (S110). Here, the left and right images are images ranging from HD to 4K, captured through various vertical and horizontal rigs.

In the proxy extraction step, proxy data is extracted from the left and right images (S120). Proxy data makes it possible to edit HD- or 4K-class images easily and speeds up depth information extraction, enabling a near-real-time preview of depth.
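
A rough sketch of proxy extraction, using Python/OpenCV: full-resolution frames are downscaled and written to a lightweight proxy clip that can be edited and analyzed near real time. The proxy width, codec, and function name are illustrative assumptions, not the patent's implementation.

    import cv2

    def make_proxy(src_path: str, dst_path: str, width: int = 960) -> None:
        """Write a downscaled proxy clip for a full-resolution source."""
        cap = cv2.VideoCapture(src_path)
        ok, frame = cap.read()
        if not ok:
            raise IOError("cannot read " + src_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 24.0
        h = int(frame.shape[0] * width / frame.shape[1])
        out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"),
                              fps, (width, h))
        while ok:
            out.write(cv2.resize(frame, (width, h),
                                 interpolation=cv2.INTER_AREA))
            ok, frame = cap.read()
        cap.release()
        out.release()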

In the sequence rough-edit step, an image is selected based on the extracted proxy data to temporarily edit the sequence, and an edit decision list (EDL) is generated (S130). In other words, based on the extracted proxy data, OK and NG cuts are selected and cut edits are performed according to the storytelling to temporarily edit the sequence.
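
For reference, an edit decision list records each selected cut as source and record in/out timecodes. A minimal CMX3600-style entry might look like the following; the title, reel names, and timecodes are hypothetical.

    TITLE: TEMP_SEQUENCE_01
    FCM: NON-DROP FRAME

    001  REEL01   V     C        00:00:10:00 00:00:14:00 01:00:00:00 01:00:04:00
    002  REEL02   V     C        00:01:22:12 00:01:30:00 01:00:04:00 01:00:11:12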

In the depth information extraction step, a displacement map and a depth map of the temporarily edited sequence are generated (S140). That is, binocular disparity data of the left and right images of the pre-edited sequence is calculated, a displacement map is generated using the calculated binocular disparity data, and a depth map is generated using the displacement map. A detailed description thereof will be given with reference to FIG. 2.

In the depth information editing step, the sense of depth of the temporarily edited sequence is corrected according to its displacement map and depth map (S142). At this time, the depth of the sequence is corrected by changing the convergence point.
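
One common way to change the convergence point is horizontal image translation: shifting one view sideways moves the zero-parallax plane and re-balances positive and negative parallax across the frame. The Python/NumPy sketch below illustrates that general technique under assumed conventions; it is not the patent's specific correction method.

    import numpy as np

    def shift_convergence(left: np.ndarray, right: np.ndarray, px: int):
        """Shift the right view horizontally by px pixels, moving the
        zero-parallax (convergence) plane. Sign convention is assumed."""
        shifted = np.roll(right, px, axis=1)
        if px > 0:
            shifted[:, :px] = 0   # blank the columns wrapped by np.roll
        elif px < 0:
            shifted[:, px:] = 0
        return left, shifted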

On the other hand, since the depth information editing step edits and modifies the sense of depth based on the roughly edited sequence, there may be cuts whose captured depth cannot be used as-is and cannot be corrected. In such cases, the cut must be replaced, or a new stereoscopic image suitable for the preceding and following cuts must be generated.

Therefore, in the 2D-3D designation step, the observation point, positive point, and negative point of the 2D image to undergo 2D-3D conversion are designated, and the color segments (Segment Color) of the corresponding 2D image are extracted as shown in FIG. 3 (S144).

In the VFX (Visual Effects) & 2D-3D conversion step, the 2D image is converted into a 3D image by referring to the observation point, the positive point, the negative point, and the extracted color segments of the 2D image (S146). Here, the VFX is the same as the VFX used in existing digital image production, so a description thereof is omitted.

In the sequence final-edit step, the temporarily edited sequence is finally edited based on the displacement map, the depth map, and the edit decision list generated in the rough-edit step (S150).

In addition, in the Digital Intermediate (DI) & mastering step, the finally edited sequence is digitally color-corrected and output as a DCP (Digital Cinema Package) file, and the output DCP file is then recorded for the system on which it will finally be screened (S160).

FIG. 2 is a flowchart illustrating a depth information extraction method according to an embodiment of the present invention.

Referring to FIG. 2, in the stereo data step, proxy data of a left image and a right image of a pre-edited sequence is received (S210).

In the left / right image feature point tracking step, feature points of the left and right images are tracked from the received proxy data using a feature-based depth tracking method (S220).

In the binocular disparity analysis step, binocular disparity data of the left image and the right image are calculated using the tracked feature points (S230).
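
A minimal sketch of steps S220 and S230, using OpenCV corner detection and pyramidal Lucas-Kanade tracking as stand-ins for the feature-based depth tracking method; all parameters are illustrative assumptions.

    import cv2
    import numpy as np

    def feature_disparity(left_gray: np.ndarray, right_gray: np.ndarray):
        """Track feature points from the left view into the right view
        and return each feature's horizontal disparity."""
        pts = cv2.goodFeaturesToTrack(left_gray, maxCorners=2000,
                                      qualityLevel=0.01, minDistance=7)
        matched, status, _err = cv2.calcOpticalFlowPyrLK(left_gray, right_gray,
                                                         pts, None)
        ok = status.ravel() == 1
        src = pts[ok].reshape(-1, 2)
        dst = matched[ok].reshape(-1, 2)
        disparity = src[:, 0] - dst[:, 0]   # x_left - x_right per feature
        return src, disparity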

In the displacement channel extraction step, a displacement map is generated based on the calculated binocular disparity data (S240). Here, the displacement map is represented not by the directly viewable RGB channels but by a displacement channel. That is, in the displacement channel, the X value of the left view is represented by red, the Y value of the left view by green, the X value of the right view by blue, and the Y value of the right view by alpha; the observation point is displayed in black. Therefore, a protruding (pop-out) region appears as a magenta closer to red, since red and blue are combined, and a recessed region appears as a magenta closer to blue. When binocular disparity occurs on the Y axis, yellow or cyan appears.
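
A sketch of packing per-pixel displacements into a four-channel image following the assignment just described (left X to red, left Y to green, right X to blue, right Y to alpha, with the observation point remaining black); the normalization is an assumption.

    import numpy as np

    def displacement_channel(dx_l, dy_l, dx_r, dy_r):
        """Pack left/right X and Y displacement fields into an RGBA-like
        displacement channel; zero displacement stays black."""
        def norm(c):
            c = np.abs(c).astype(np.float32)
            return c / max(float(c.max()), 1e-6)
        rgba = np.stack([norm(dx_l), norm(dy_l), norm(dx_r), norm(dy_r)],
                        axis=-1)
        return (rgba * 255).astype(np.uint8)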

In the depth channel extraction and virtual stereo cam setting step, a depth map is generated using the displacement map, and a reference value of the virtual stereo cam is set at this time; the same value is set for all cuts (S250). Here, like the displacement map, the depth map is represented not by RGB channels but by a depth channel. That is, the depth value of the depth map in the depth channel is represented in gray scale.
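
Assuming the standard stereo relation depth = f * B / d, a gray-scale depth channel can be sketched from horizontal disparity as below. The focal length, baseline, and near-is-bright convention are illustrative stand-ins for the virtual stereo cam reference value, which would be held constant across all cuts so depth values stay comparable.

    import numpy as np

    def depth_channel(disparity: np.ndarray, focal_px: float, baseline: float):
        """Gray-scale depth from disparity via depth = f * B / d."""
        d = np.where(np.abs(disparity) < 1e-6, 1e-6, np.abs(disparity))
        depth = focal_px * baseline / d
        g = depth / max(float(depth.max()), 1e-6)
        return ((1.0 - g) * 255).astype(np.uint8)  # near = bright, far = dark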

Finally, the displacement channel and the depth channel are converted into RGB channels using a channel shift or channel shuffle (S260). Accordingly, the user can visually inspect the displacement channel and the depth channel.
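
In the simplest case, such a channel shuffle is just a re-indexing that places three of the displacement or depth channels into the RGB slots of an ordinary display; the default channel order below is an arbitrary example.

    import numpy as np

    def channel_shuffle_to_rgb(rgba: np.ndarray, order=(0, 2, 3)) -> np.ndarray:
        """Remap selected channels into the three RGB display slots."""
        return rgba[..., list(order)]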

The embodiments disclosed in the specification of the present invention do not limit the present invention. The scope of the present invention should be construed according to the following claims, and all the techniques within the scope of equivalents should be construed as being included in the scope of the present invention.

Claims (10)

1. A depth information extraction method comprising:
receiving proxy data of a left image and a right image of a pre-edited sequence;
tracking feature points of the left image and the right image using the received proxy data;
calculating binocular disparity data between the left image and the right image using the tracked feature points;
generating a displacement map existing as a displacement channel using the calculated binocular disparity data;
generating a depth map existing as a depth channel by using the displacement map and setting a reference value of a virtual stereo cam; and
converting the displacement channel and the depth channel into an RGB channel.
2. The method of claim 1, wherein the tracking of the feature points tracks the feature points of the left image and the right image using a feature-based depth tracking method.
3. The method of claim 1, wherein the setting of the reference value of the virtual stereo cam sets the same value for all cuts.
4. The method of claim 1, wherein the converting into the RGB channel converts the displacement channel and the depth channel into an RGB channel using a channel shift or a channel shuffle.
5. The method of claim 1, wherein, in the displacement channel, the left X value is represented by red, the left Y value by green, the right X value by blue, and the right Y value by alpha, and the observation point is displayed in black.
6. The method of claim 1, wherein a depth value of the depth map in the depth channel is displayed in gray scale.
7. A stereoscopic image editing method comprising:
extracting proxy data from a left / right image;
selecting an image based on the extracted proxy data to temporarily edit a sequence, and generating an edit decision list;
calculating binocular disparity data of a left image and a right image of the pre-edited sequence;
generating a displacement map using the calculated binocular disparity data;
generating a depth map using the displacement map; and
finally editing the temporarily edited sequence based on the displacement map, the depth map, and the edit decision list.
8. The method of claim 7, further comprising:
modifying a sense of depth of the temporarily edited sequence by changing a convergence point according to the displacement map and the depth map.
9. The method of claim 8, further comprising:
if the edited sequence contains a cut whose sense of depth cannot be used as-is and cannot be corrected, designating an observation point, a positive point, and a negative point of a 2D image to be substituted for the cut, and extracting color segments of the 2D image; and
converting the 2D image into a 3D image by referring to the observation point, the positive point, the negative point, and the extracted color segments of the 2D image.
10. The method of claim 7, further comprising:
digitally color-correcting the finally edited sequence and outputting it as a digital cinema package (DCP) file; and
recording the output DCP file according to a system on which it will finally be screened.
KR1020130048154A 2012-04-30 2013-04-30 Method for editing stereoscopic image and method for extracting depth information therefor KR20130122581A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR20120045663 2012-04-30
KR1020120045663 2012-04-30

Publications (1)

Publication Number Publication Date
KR20130122581A (en) 2013-11-07

Family

ID=49852321

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020130048154A KR20130122581A (en) 2012-04-30 2013-04-30 Method for editing stereoscopic image and method for extracting depth information therefor

Country Status (1)

Country Link
KR (1) KR20130122581A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10818025B2 2017-01-26 2020-10-27 Samsung Electronics Co., Ltd. Stereo matching method and apparatus
US11417006B2 2017-01-26 2022-08-16 Samsung Electronics Co., Ltd. Stereo matching method and apparatus


Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E601 Decision to refuse application