TW201237805A - Method for depth estimation and device using the same - Google Patents

Method for depth estimation and device using the same

Info

Publication number
TW201237805A
TW201237805A
Authority
TW
Taiwan
Prior art keywords
data
depth
time
frame
timing
Prior art date
Application number
TW100108826A
Other languages
Chinese (zh)
Other versions
TWI485651B (en)
Inventor
Houng-Jyh Wang
Chih-Wei Kao
Tzu-Hung Chen
Original Assignee
Teco Elec & Machinery Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Teco Elec & Machinery Co Ltd filed Critical Teco Elec & Machinery Co Ltd
Priority to TW100108826A priority Critical patent/TWI485651B/en
Publication of TW201237805A publication Critical patent/TW201237805A/en
Application granted granted Critical
Publication of TWI485651B publication Critical patent/TWI485651B/en

Landscapes

  • Image Analysis (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

A method for depth estimation includes the following steps. First, first and second frame sets, corresponding to first and second frame times, are received. Then, motion vector data of the second frame set relative to the first frame set are obtained. Next, stereo matching between the frame data of the first and second view angles of the first frame set is executed to obtain depth data of the first frame time. Estimated depth data are then derived according to the motion vector data and the depth data of the first frame time. After that, depth data of the second frame time are obtained according to the estimated depth data and the frame data of the first and second view angles of the second frame set.

Description

VI. Description of the Invention

[Technical Field]

The present invention relates to a depth estimation method and a device using the same, and more particularly to a depth estimation method and device that perform the estimation with stereo matching between two views.

[Prior Art]

In today's rapidly advancing technological environment, stereoscopic multimedia systems are receiving growing attention from industry. In stereoscopic image/video applications, stereo matching image processing is generally regarded as a core stereoscopic imaging technology that the industry urgently needs to develop. In the prior art, stereo matching techniques first compute an image depth map from the two views.

Stereo matching, however, suffers from high computational complexity. Accordingly, designing a stereo-matching depth estimation method with lower computational complexity is a direction the industry continues to pursue.

[Summary of the Invention]

The present invention relates to a depth estimation method and a device using the same. Compared with conventional depth estimation methods, the depth estimation method and device of the present invention have the advantage of lower computational complexity.

According to a first aspect of the present invention, a depth estimation method is provided for performing depth estimation on input binocular video data. The method includes the following steps. First, a first frame set and a second frame set of the input binocular video data, corresponding to a first frame time and a second frame time respectively, are received, each frame set including first-view frame data and second-view frame data. Then, motion vector data of the second frame set relative to the first frame set are found. Next, stereo matching is performed between the first-view and second-view frame data of the first frame set to find depth data corresponding to the first frame time. Estimated depth data are then derived from the motion vector data and the depth data of the first frame time. After that, depth data of the second frame time are found according to the estimated depth data and the first-view and second-view frame data of the second frame set.

According to a second aspect of the present invention, a depth estimation device is provided for performing depth estimation on input binocular video data. The device includes an input unit, a motion vector generation unit, a stereo matching unit, and a depth estimation unit. The input unit receives a first frame set and a second frame set of the input binocular video data, corresponding to a first frame time and a second frame time respectively, each frame set including first-view frame data and second-view frame data. The motion vector generation unit finds motion vector data of the second frame set relative to the first frame set. The stereo matching unit performs stereo matching on the first-view and second-view frame data of the first frame set to find depth data corresponding to the first frame time. The depth estimation unit derives estimated depth data of the second frame time from the motion vector data and the depth data of the first frame time, and the stereo matching unit then finds depth data of the second frame time according to the estimated depth data and the first-view and second-view frame data of the second frame set.

The above and other aspects of the invention will become better understood from the following detailed description of the preferred embodiments, taken in conjunction with the accompanying drawings.

[Embodiments]

The depth estimation method of the following embodiments refers to the motion vector data of the input binocular video data to reduce the computational load of stereo matching.

First Embodiment

The depth estimation method of this embodiment refers to the motion vector data of the input binocular video data to produce estimated depth data, and then refers to the estimated depth data to simplify the operation of generating the corresponding depth data.

FIG. 1 is a block diagram of a depth estimation device according to the first embodiment of the invention. The depth estimation device 1 of this embodiment performs depth estimation on input binocular video data Vi. The input binocular video data Vi includes, for example, a number of dual-view frame sets, each corresponding to one frame time and each including two pieces of frame data corresponding to different view angles. In other examples, the input binocular video data Vi may include frame data of three or more view angles.

For example, the frame set Vi_t1 of the input binocular video data Vi corresponding to a first frame time t1 includes first-view frame data F1_t1 and second-view frame data F2_t1, and the frame set Vi_t2 corresponding to a second frame time t2 includes first-view frame data F1_t2 and second-view frame data F2_t2. The first-view frame data F1_t1 and F1_t2 are, for example, the left-eye frame data at the first and second frame times t1 and t2, and the second-view frame data F2_t1 and F2_t2 are, for example, the right-eye frame data at those times; the first and second frame times t1 and t2 correspond, for example, to adjacent operating time points.

The depth estimation device 1 includes an input unit 102, a motion vector generation unit 104, a stereo matching unit 106, and a depth estimation unit 108.

The input unit 102 receives the frame sets Vi_t1 and Vi_t2, which carry the first-view frame data F1_t1 and F1_t2 and the second-view frame data F2_t1 and F2_t2 of the input binocular video data Vi corresponding to the first and second frame times t1 and t2. The motion vector generation unit 104 finds first motion vector data M_12 of the frame set Vi_t2 relative to the frame set Vi_t1.

In one practical example, the input unit 102 and the motion vector generation unit 104 are implemented by a video decompressor: the decompressor decompresses input compressed binocular video data to restore the uncompressed original video data together with its corresponding motion vector data. The compressed input may consist of two independent streams of left-eye and right-eye frame data, or of a single stream in which the left-eye and right-eye frame data are stored together. In one practical example, the motion vector generation unit 104 further includes a mean filter for performing related image processing operations on the motion vector data.

The stereo matching unit 106 performs stereo matching on the first-view and second-view frame data F1_t1 and F2_t1 to find first depth data D_t1 corresponding to the first frame time t1.

The depth estimation unit 108 finds estimated depth data DE_t2 corresponding to the second frame time t2 according to the motion vector data M_12 and the first depth data D_t1. Since the first and second frame times t1 and t2 are adjacent time points, the depth patterns of the input binocular video data Vi generally vary linearly between t1 and t2, so the depth estimation unit 108 can estimate DE_t2 from the first depth data D_t1 and the motion vector data M_12.
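The patent describes units 104 and 108 only at the block level and publishes no source code, so the following is a minimal NumPy sketch of one way the motion-compensated prediction of DE_t2 could be realised. The function names, the mean-filter size, and the forward-warping scheme are assumptions for illustration, not details taken from the patent.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_motion_vectors(mv_12, size=5):
    # Mean-filter each component of an (H, W, 2) motion vector field,
    # as the mean filter in unit 104 might do to suppress outlier vectors.
    return np.stack(
        [uniform_filter(mv_12[..., k].astype(float), size=size) for k in range(2)],
        axis=-1,
    )

def predict_depth(d_t1, mv_12):
    # Forward-warp the depth map D_t1 along the t1 -> t2 motion vectors to
    # obtain an estimated depth map DE_t2 (pixels are assumed to keep their
    # depth along the motion path between adjacent frame times).
    h, w = d_t1.shape
    de_t2 = np.zeros((h, w))
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.clip(np.round(xs + mv_12[..., 0]).astype(int), 0, w - 1)
    yt = np.clip(np.round(ys + mv_12[..., 1]).astype(int), 0, h - 1)
    de_t2[yt, xt] = d_t1[ys, xs]  # collisions resolved by last write; holes keep 0
    return de_t2
```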

The estimated depth data DE_t2 obtained by the depth estimation unit 108 are further provided to the stereo matching unit 106. With the estimated depth data DE_t2 for the second frame time t2 already available, the stereo matching unit 106 performs stereo matching on the first-view and second-view frame data F1_t2 and F2_t2 to find second depth data D_t2 corresponding to the second frame time t2.

For example, the stereo matching unit 106 uses the estimated depth data DE_t2 to determine the search window of the stereo matching operation between the first-view and second-view frame data F1_t2 and F2_t2. In one practical example, for the pixel at coordinates (i1, j1) of the first-view frame data F1_t2, the estimated depth data DE_t2 indicate a corresponding depth value x, where i1, j1, and x are natural numbers. The stereo matching unit 106 can accordingly center the search window at the coordinate position (i1+x, j1) in the second-view frame data F2_t2.

In this way, according to the estimated depth data DE_t2, the stereo matching unit 106 efficiently obtains the possible depth values between the first-view and second-view frame data F1_t2 and F2_t2, which effectively narrows the search window of the stereo matching operation and correspondingly reduces the amount of computation the matching operation requires.
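To make the search-window example concrete, here is a hedged sketch of the narrowed matching as sum-of-absolute-differences block matching, in which, for each pixel (i1, j1), only disparities near the predicted value x from DE_t2 are tried. The block size, search radius, and SAD cost are illustrative choices the patent does not specify.

```python
import numpy as np

def refine_disparity(f1_t2, f2_t2, de_t2, radius=3, block=5):
    # Stereo matching whose per-pixel search window in the second view is
    # centred at (i1 + x, j1), x being the disparity predicted by DE_t2.
    h, w = f1_t2.shape
    half = block // 2
    pad1 = np.pad(f1_t2.astype(float), half, mode="edge")
    pad2 = np.pad(f2_t2.astype(float), half, mode="edge")
    d_t2 = np.zeros((h, w))
    for j in range(h):
        for i in range(w):
            ref = pad1[j:j + block, i:i + block]
            pred = max(0, int(round(de_t2[j, i])))
            best_cost, best_d = np.inf, pred
            # Only 2*radius + 1 candidates around the prediction are tested,
            # instead of scanning the full disparity range.
            for d in range(max(0, pred - radius), pred + radius + 1):
                if i + d >= w:
                    break
                cand = pad2[j:j + block, i + d:i + d + block]
                cost = np.abs(ref - cand).sum()  # SAD matching cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            d_t2[j, i] = best_d
    return d_t2
```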

FIG. 2 is a flow chart of the depth estimation method according to the first embodiment, which includes the following steps. First, in step (a), the input unit 102 receives the frame set Vi_t1 of the input binocular video data Vi corresponding to the first frame time t1 and the frame set Vi_t2 corresponding to the second frame time t2. In step (b), the motion vector generation unit 104 finds the motion vector data M_12 of the frame set Vi_t2 relative to the frame set Vi_t1.

Next, in step (c), the stereo matching unit 106 performs stereo matching on the first-view and second-view frame data F1_t1 and F2_t1 to find the first depth data D_t1 corresponding to the first frame time t1. In step (d), the depth estimation unit 108 finds the estimated depth data DE_t2 according to the motion vector data M_12 and the first depth data D_t1.

After that, in step (e), the stereo matching unit 106 finds and outputs the second depth data D_t2 according to the estimated depth data DE_t2 and the first-view and second-view frame data F1_t2 and F2_t2 (a usage sketch chaining these five steps appears below).

Although this embodiment has been described with the depth estimation device 1 referring to the first depth data D_t1 of the first frame time t1 to estimate the estimated depth data DE_t2 of the second frame time t2, and thereby simplifying the operation of finding the second depth data D_t2, the depth estimation device of this embodiment is not limited to this arrangement.
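As a usage illustration, steps (a) through (e) could be chained as follows. Here `decode_frame_sets` stands in for the video decompressor realising units 102 and 104, and `stereo_match_full` for an ordinary full-range stereo matcher used only at the first frame time; both names, like the helper functions from the earlier sketches, are assumptions rather than APIs defined by the patent.

```python
def depth_for_adjacent_pair(decode_frame_sets, stereo_match_full):
    # Steps (a)-(e) of FIG. 2 for one pair of adjacent frame times t1, t2.
    (f1_t1, f2_t1), (f1_t2, f2_t2), mv_12 = decode_frame_sets()   # steps (a), (b)
    d_t1 = stereo_match_full(f1_t1, f2_t1)                        # step (c): full search once
    de_t2 = predict_depth(d_t1, smooth_motion_vectors(mv_12))     # step (d)
    d_t2 = refine_disparity(f1_t2, f2_t2, de_t2)                  # step (e): narrowed search
    return d_t1, d_t2
```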

For example, FIG. 3 is another block diagram of a depth estimation device according to the first embodiment. In the depth estimation device 1', the input unit 102' further receives a frame set Vi_t3 corresponding to a third frame time t3; the motion vector generation unit 104' further finds second motion vector data M_23 of the frame set Vi_t3 relative to the frame set Vi_t2; the depth estimation unit 108' further refers to the motion vector data M_12 and M_23 and to the depth data D_t1 and D_t2 of the first and second frame times t1 and t2 to estimate third estimated depth data DE_t3 corresponding to the third frame time t3; and the stereo matching unit 106' finds third depth data D_t3 according to the estimated depth data DE_t3 and the frame data of the frame set Vi_t3. Although the third frame time t3 is described here as an operating time point adjacent to the first and second frame times t1 and t2, the embodiment is not limited thereto; the third frame time t3 may also be, for example, an operating time point following the second frame time t2.

In sum, the depth estimation method and device of this embodiment refer to the motion vector data between the frame set of a target frame time and the frame set of another frame time, together with the depth data of that other frame time, to produce estimated depth data for the target frame time, and then refer to the estimated depth data to simplify the operation of producing the depth data of the target frame time. Compared with conventional depth estimation methods, the depth estimation method and device of the invention therefore have the advantage of lower computational complexity.

Second Embodiment

The depth estimation method of this embodiment differs from that of the first embodiment in that it further includes an operation of correcting the depth data according to object information.

FIG. 4 is a block diagram of a depth estimation device according to the second embodiment of the invention. The depth estimation device 2 differs from the depth estimation device of the first embodiment in that it further includes an object information estimation unit 210, an object information correction unit 212, and a control unit 214.

The object information estimation unit 210 finds estimated object assignment data O_t2 corresponding to the second frame set Vi_t2 according to the motion vector data M_12. For example, the object information estimation unit 210 can identify and track the objects in the input binocular video data Vi according to the consistency between the motion vector of each pixel and the motion vectors of its neighboring pixels in the motion vector data M_12 (a sketch of one such grouping is given after this passage).

The object information correction unit 212 corrects the estimated object assignment data O_t2 according to the first-view and second-view frame data F1_t2 and F2_t2 to obtain second object assignment data E_t2. For example, the object information correction unit 212 refers to the pixel data of the two pieces of frame data of different view angles at the second frame time t2 to verify the accuracy of the estimated object assignment data O_t2 and produce the object assignment data E_t2 corresponding to the second frame time t2.

The control unit 214 corrects the second depth data D_t2 according to the object assignment data E_t2 to obtain corrected output depth data Dx_t2. For example, the control unit 214 refers to the boundary information of the objects in the object assignment data E_t2 and corrects the depth data belonging to the same object toward close depth values.
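The patent characterises unit 210 only functionally. One plausible reading, sketched below purely as an assumption, groups pixels into tentative objects by quantising their motion vectors and taking connected components of each quantisation bin, so that neighbouring pixels with consistent vectors share a label; the quantisation step and the bin-folding constant are illustrative.

```python
import numpy as np
from scipy.ndimage import label

def estimate_object_assignment(mv_12, step=1.0):
    # Tentative object map O_t2: pixels whose motion vectors round to the
    # same bin and that touch each other receive one object label.
    q = np.round(mv_12 / step).astype(int)
    key = q[..., 0] * 4096 + q[..., 1]  # fold both components into one id
                                        # (assumes |components| well below 2048)
    labels = np.zeros(key.shape, dtype=int)
    next_label = 1
    for v in np.unique(key):
        comp, n = label(key == v)       # connected components of this vector bin
        labels[comp > 0] = comp[comp > 0] + (next_label - 1)
        next_label += n
    return labels
```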

Since the operation of correcting the first and third depth data D_t1 and D_t3 with reference to first and third object assignment data E_t1 and E_t3 is substantially the same as the operation of correcting the second depth data D_t2 according to the second object assignment data E_t2, the depth estimation device 2 can also correct the first or third depth data D_t1 and D_t3 through the same procedure with reference to E_t1 and E_t3. In other words, correcting the first and third depth data according to the first and third object assignment data can be deduced by analogy from the foregoing correction of the second depth data D_t2, and is not repeated here.

FIG. 5 is a flow chart of the depth estimation method according to the second embodiment. The method differs from that of the first embodiment in that it further includes steps (f), (g), and (h). Specifically, step (f) is executed after step (e): the object information estimation unit 210 finds the estimated object assignment data O_t2 corresponding to the frame set Vi_t2 according to the motion vector data M_12. Then, in step (g), the object information correction unit 212 corrects the estimated object assignment data O_t2 according to the first-view and second-view frame data F1_t2 and F2_t2 to obtain the second object assignment data E_t2. Finally, in step (h), the control unit 214 corrects the second depth data D_t2 according to the second object assignment data E_t2 to obtain the output depth data Dx_t2.
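Unit 214 is likewise described only at the block level. The sketch below pulls the depth values inside each object label toward their per-object median, which is one simple way to correct the depth data of the same object toward close values; the median statistic and the blending weight are assumptions, not choices stated in the patent.

```python
import numpy as np

def correct_depth_by_objects(d_t2, e_t2, blend=0.5):
    # Move each pixel's depth toward the median depth of its object so that
    # depths within one object boundary become close (the role of unit 214).
    dx_t2 = d_t2.astype(float).copy()
    for v in np.unique(e_t2):
        mask = e_t2 == v
        med = np.median(dx_t2[mask])
        dx_t2[mask] = (1.0 - blend) * dx_t2[mask] + blend * med
    return dx_t2
```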

In sum, the depth estimation method and device of this embodiment refer to the motion vector data between the frame set of a target frame time and the frame set of another frame time, together with the depth data of that other frame time, to produce estimated depth data for the target frame time, and then refer to the estimated depth data to simplify the operation of producing the depth data of the target frame time. Compared with conventional depth estimation methods, the depth estimation method and device of the invention therefore have the advantage of lower computational complexity.

While the invention has been described above by way of preferred embodiments, the embodiments are not intended to limit the invention. Moreover, the depth data and estimated depth data referred to in the invention mean data of a depth-related nature and are not limited to depth values; other data of a depth-related nature, such as disparity values, also fall within the scope of the invention. Those having ordinary knowledge in the technical field of the invention may make various changes and modifications without departing from the spirit and scope of the invention. Therefore, the scope of protection of the invention is defined by the appended claims.

[Brief Description of the Drawings]

FIG. 1 is a block diagram of a depth estimation device according to the first embodiment of the invention.

FIG. 2 is a flow chart of a depth estimation method according to the first embodiment of the invention.

FIG. 3 is another block diagram of a depth estimation device according to the first embodiment of the invention.

FIG. 4 is a block diagram of a depth estimation device according to the second embodiment of the invention.


FIG. 5 is a flow chart of a depth estimation method according to the second embodiment of the invention.

[Description of Main Element Symbols]

1, 1', 2: depth estimation device
102, 102', 202: input unit
104, 104', 204: motion vector generation unit
106, 106', 206: stereo matching unit
108, 108', 208: depth estimation unit
210: object information estimation unit
212: object information correction unit
214: control unit

Claims (12)

1. A depth estimation method for performing depth estimation on input binocular video data, the method comprising:
(a) receiving a first frame set corresponding to a first frame time and a second frame set corresponding to a second frame time of the input binocular video data, wherein each of the first and second frame sets comprises first-view frame data and second-view frame data;
(b) finding first motion vector data of the second frame set relative to the first frame set;
(c) performing stereo matching on the first-view and second-view frame data of the first frame set to find first depth data corresponding to the first frame time;
(d) finding second estimated depth data according to the first motion vector data and the first depth data; and
(e) finding second depth data according to the second estimated depth data and the first-view and second-view frame data of the second frame set.

2. The depth estimation method according to claim 1, wherein step (a) further comprises:
receiving a third frame set of the input binocular video data corresponding to a third frame time, wherein the third frame set comprises first-view frame data and second-view frame data.

3. The depth estimation method according to claim 2, further comprising:
(f) finding second motion vector data of the third frame set relative to the second frame set;
(g) finding third estimated depth data according to the first motion vector data, the first depth data, the second motion vector data, and the second depth data; and
(h) finding third depth data according to the third estimated depth data and the first-view and second-view frame data of the third frame set.

4. The depth estimation method according to claim 3, further comprising:
(i) finding estimated object assignment data corresponding to the third frame set according to the second motion vector data;
(j) correcting the estimated object assignment data according to the first-view and second-view frame data of the third frame set to obtain third object assignment data; and
(k) correcting the third depth data according to the third object assignment data to obtain output third depth data.

5. The depth estimation method according to claim 1, further comprising:
(i) finding estimated object assignment data corresponding to the second frame set according to the first motion vector data;
(j) correcting the estimated object assignment data according to the first-view and second-view frame data of the second frame set to obtain second object assignment data; and
(k) correcting the second depth data according to the second object assignment data to obtain output second depth data.

6. The depth estimation method according to claim 1, further comprising:
(i) finding estimated object assignment data corresponding to the first frame set according to the first motion vector data;
(j) correcting the estimated object assignment data according to the first-view and second-view frame data of the first frame set to obtain first object assignment data; and
(k) correcting the first depth data according to the first object assignment data to obtain output first depth data.

7. A depth estimation device for performing depth estimation on input binocular video data, the device comprising:
an input unit for receiving a first frame set corresponding to a first frame time and a second frame set corresponding to a second frame time of the input binocular video data, wherein each of the first and second frame sets comprises first-view frame data and second-view frame data;
a motion vector generation unit for finding first motion vector data of the second frame set relative to the first frame set;
a stereo matching unit for performing stereo matching on the first-view and second-view frame data of the first frame set to find first depth data corresponding to the first frame time; and
a depth estimation unit for finding second estimated depth data according to the first motion vector data and the first depth data;
wherein the stereo matching unit finds second depth data according to the second estimated depth data and the first-view and second-view frame data of the second frame set.

8. The depth estimation device according to claim 7, wherein the input unit further receives a third frame set of the input binocular video data corresponding to a third frame time, and the third frame set comprises first-view frame data and second-view frame data.

9. The depth estimation device according to claim 8, wherein:
the motion vector generation unit further finds second motion vector data of the third frame set relative to the second frame set;
the depth estimation unit further finds third estimated depth data according to the first motion vector data, the first depth data, the second motion vector data, and the second depth data; and
the stereo matching unit further finds third depth data according to the third estimated depth data and the first-view and second-view frame data of the third frame set.

10. The depth estimation device according to claim 9, further comprising:
an object information estimation unit for finding estimated object assignment data corresponding to the third frame set according to the second motion vector data;
an object information correction unit for correcting the estimated object assignment data according to the first-view and second-view frame data of the third frame set to obtain third object assignment data; and
a control unit for correcting the third depth data according to the third object assignment data to obtain output third depth data.

11. The depth estimation device according to claim 7, further comprising:
an object information estimation unit for finding estimated object assignment data corresponding to the second frame set according to the first motion vector data;
an object information correction unit for correcting the estimated object assignment data according to the first-view and second-view frame data of the second frame set to obtain second object assignment data; and
a control unit for correcting the second depth data according to the second object assignment data to obtain output second depth data.

12. The depth estimation device according to claim 7, further comprising:
an object information estimation unit for finding estimated object assignment data corresponding to the first frame set according to the first motion vector data;
an object information correction unit for correcting the estimated object assignment data according to the first-view and second-view frame data of the first frame set to obtain first object assignment data; and
a control unit for correcting the first depth data according to the first object assignment data to obtain output first depth data.
TW100108826A 2011-03-15 2011-03-15 Method for depth estimation and device using the same TWI485651B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW100108826A TWI485651B (en) 2011-03-15 2011-03-15 Method for depth estimation and device using the same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW100108826A TWI485651B (en) 2011-03-15 2011-03-15 Method for depth estimation and device using the same

Publications (2)

Publication Number Publication Date
TW201237805A true TW201237805A (en) 2012-09-16
TWI485651B TWI485651B (en) 2015-05-21

Family

ID=47223239

Family Applications (1)

Application Number Title Priority Date Filing Date
TW100108826A TWI485651B (en) 2011-03-15 2011-03-15 Method for depth estimation and device using the same

Country Status (1)

Country Link
TW (1) TWI485651B (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7003136B1 (en) * 2002-04-26 2006-02-21 Hewlett-Packard Development Company, L.P. Plan-view projections of depth image data for object tracking
CN101051386B (en) * 2007-05-23 2010-12-08 北京航空航天大学 Precision matching method for multiple depth image
US8249369B2 (en) * 2008-12-02 2012-08-21 Himax Technologies Limited Method and apparatus of tile-based belief propagation
CN101540926B (en) * 2009-04-15 2010-10-27 南京大学 Stereo video coding-decoding method based on H.264
CN101883283B (en) * 2010-06-18 2012-05-30 北京航空航天大学 Control method for code rate of three-dimensional video based on SAQD domain

Also Published As

Publication number Publication date
TWI485651B (en) 2015-05-21

Similar Documents

Publication Publication Date Title
US9153032B2 (en) Conversion method and apparatus with depth map generation
CN101933335B (en) Method and system for converting 2d image data to stereoscopic image data
US20100142828A1 (en) Image matching apparatus and method
JP2010200213A5 (en)
JP6604502B2 (en) Depth map generation apparatus, depth map generation method, and program
CN104871534A (en) Image encoding method, image decoding method, image encoding device, image decoding device, image encoding program, image decoding program, and recording medium
CN102857778B (en) System and method for 3D (three-dimensional) video conversion and method and device for selecting key frame in 3D video conversion
JP2011081605A (en) Image processing apparatus, method and program
KR101778962B1 (en) Method and apparatus for generating fast hologram
US9654764B2 (en) Stereoscopic image processing device, stereoscopic image processing method, and program
JP4605716B2 (en) Multi-view image compression encoding method, apparatus, and program
JP2014116012A (en) Method and apparatus for color transfer between images
JP2013176052A (en) Device and method for estimating parallax by using visibility energy model
US9098936B2 (en) Apparatus and method for enhancing stereoscopic image, recorded medium thereof
KR102122523B1 (en) Device for correcting depth map of three dimensional image and method for correcting the same
TW201237805A (en) Method for depth estimation and device using the same
TWI622022B (en) Depth calculating method and device
CN104994365B (en) A kind of method and 2D video three-dimensional methods for obtaining non-key frame depth image
US20160286198A1 (en) Apparatus and method of converting image
KR101295347B1 (en) A computation apparatus of correlation of video data and a method thereof
KR20210085953A (en) Apparatus and Method for Cailbrating Carmeras Loaction of Muti View Using Spherical Object
JP2013114682A5 (en)
JP6536077B2 (en) Virtual viewpoint image generating device and program
KR20130068510A (en) A fusion method and apparatus with sparse feature based warping and stereo matching
KR101882421B1 (en) Apparatus and method for estimation disparity using visibility energy model

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees