200828174

IX. Description of the Invention:

[Technical Field]
The present invention relates to a low-complexity video segmentation method based on edge information, and more particularly to a method built around an edge detection module that generates contour information of an image. The resulting contours can increase image compression ratios and support content-based image retrieval and other multimedia applications.

[Prior Art]
The most popular video segmentation methods today are region-based. These methods apply image-segmentation techniques to partition each frame into many homogeneous regions of differing properties and then combine them with motion information to produce video objects. Although this approach can cut object contours fairly accurately, it consumes a considerable amount of computation.

Some refinement methods modify contours by combining different morphological filters, but they do not take the distribution of the contour itself into account and therefore do not yield very good results.

T. Meier et al. (T. Meier, King N. Ngan, "Video segmentation for content-based coding," IEEE Trans. Circuits Syst. Video Technol., vol. 9, no. 8, pp. 1190-1203, Dec. 1999) proposed a shortest-path method for locating the correct contour segments, but its computational load remains considerable.
Hence, conventional methods cannot meet users' needs in actual use.
[Summary of the Invention]
The main object of the present invention is a video segmentation method built around an edge detection module, offering faster computation, lower computational complexity, and a smaller amount of computation while still obtaining accurate video objects.

Another object of the invention is a motion detection scheme based on higher-order statistics, which outperforms general motion detection methods when detecting slight motion.

To achieve the above objects, the present invention is a low-complexity video segmentation method based on edge information, comprising at least a frame-difference module, an edge detection module, a motion detection module, a mapping module, and a refinement module. The edge detection module works on the edge information of each frame: it applies Canny edge detection with a high threshold and a low threshold and outputs the final edge image in binarized form, where 1 denotes a region with edges and 0 denotes a region without edges. The motion detection module uses a higher-order statistical test: a fourth-order statistical test function analyzes consecutive frame differences and outputs the final motion image in binarized form, where 1 denotes a moving region and 0 denotes a stationary region. Each test value is compared with a threshold; a block whose test value exceeds the threshold is labeled a moving block, and otherwise it is labeled a stationary block.
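The double-threshold (hysteresis) labeling that Canny edge detection uses, as invoked above, can be sketched as follows. This is an illustrative sketch only, assuming a precomputed gradient-magnitude grid; the function name, grid values, and thresholds are hypothetical and not part of the original disclosure.

```python
# Illustrative sketch of Canny-style double-threshold (hysteresis) labeling.
# Assumes gradient magnitudes have already been computed for each pixel.

def hysteresis_edges(mag, low, high):
    """Binarize a gradient-magnitude grid: 1 = edge region, 0 = no edge."""
    h, w = len(mag), len(mag[0])
    edge = [[0] * w for _ in range(h)]
    # Pixels at or above the high threshold are definite edges.
    stack = [(y, x) for y in range(h) for x in range(w) if mag[y][x] >= high]
    for y, x in stack:
        edge[y][x] = 1
    # Grow edges into weak pixels (>= low) that touch a definite edge.
    while stack:
        y, x = stack.pop()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w
                        and not edge[ny][nx] and mag[ny][nx] >= low):
                    edge[ny][nx] = 1
                    stack.append((ny, nx))
    return edge
```

A weak pixel survives only when it is 8-connected to a strong one, so an isolated weak response is discarded as noise.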
The motion detection module has high sensitivity and good tolerance to noise, and can also be used to detect ordinary motion.

[Embodiment]
Please refer to FIG. 1, which shows the implementation flow of the present invention. As shown in the figure, the present invention is a low-complexity video segmentation method based on edge information, comprising at least a frame-difference module 16, an edge detection module 17, a motion detection module 18, a mapping module 19, and a refinement module 20, and it offers faster computation, lower computational complexity, and a smaller amount of computation.

The edge detection module 17 works on the edge information of each frame. It applies Canny edge detection with a high threshold and a low threshold and outputs the final edge image information in binarized form, where 1 denotes a region with edges and 0 denotes a region without edges.

The motion detection module 18 uses a higher-order statistical test. On the premise that the stationary background follows a Gaussian distribution, a fourth-order statistical test function analyzes consecutive frame differences and compares each test value with a threshold: a block whose test value exceeds the threshold is labeled a moving block, and otherwise it is labeled a stationary block, yielding the final motion image information. The fourth-order test output is binarized, where 1 denotes a moving region and 0 denotes a stationary region. The motion detection module has high sensitivity and good tolerance to noise, and can also be used to detect ordinary motion.

When the invention operates, the image 11 and the consecutive image sequences 12, 13, 14, and 15 are first analyzed by the edge detection module 17, the frame-difference module 16, and the motion detection module 18.
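The block-wise motion test on consecutive frame differences can be sketched as below. The text does not spell out the exact fourth-order test function, so this sketch assumes one common choice, the sample fourth-order moment of the inter-frame difference inside each block; the block size and threshold are hypothetical.

```python
# Sketch of a fourth-order-statistics motion test on a frame difference.
# The exact test function is an assumption (sample fourth-order moment).

def motion_map(frame_a, frame_b, block, thresh):
    """Return a binarized block map: 1 = moving block, 0 = stationary block."""
    h, w = len(frame_a), len(frame_a[0])
    diff = [[frame_b[y][x] - frame_a[y][x] for x in range(w)]
            for y in range(h)]
    out = [[0] * (w // block) for _ in range(h // block)]
    for by in range(h // block):
        for bx in range(w // block):
            vals = [diff[by * block + i][bx * block + j]
                    for i in range(block) for j in range(block)]
            m4 = sum(v ** 4 for v in vals) / len(vals)  # fourth-order moment
            out[by][bx] = 1 if m4 > thresh else 0       # compare with threshold
    return out
```

Raising the difference to the fourth power is what makes such a test sensitive to slight motion: small but consistent differences stand out against Gaussian background noise more sharply than under a second-order (variance) test.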
Based on the edge image information generated by the edge detection module 17 and the motion image information generated by the frame-difference module 16 and the motion detection module 18, the mapping module 19 combines the edge image information with the motion image information (see FIGS. 3A and 3B) to mark the moving boundary, and the corresponding video object 21 is extracted from the video sequence according to that boundary. Finally, the refinement module 20 analyzes the overall contour of the video object 21 extracted by the mapping module 19 and replaces erroneous segments of the contour with linear segments, achieving the refinement effect and extracting the accurate video object 21. In the motion image, moving regions are labeled by motion normalization.

Please refer to FIG. 2, which is a schematic diagram of the motion normalization of the present invention. As shown in the figure, motion normalization first examines every stationary block and changes it into a moving block if it is surrounded by more than 3 moving blocks, and then examines every moving block and changes it into a stationary block if it is surrounded by more than 4 stationary blocks, where M denotes a moving block and S denotes a stationary block.

Please refer to FIGS. 3A to 3H, which are schematic diagrams of the mapping module and the refinement module of the present invention.
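The motion normalization of FIG. 2 (a stationary block with more than 3 moving neighbors becomes moving, then a moving block with more than 4 stationary neighbors becomes stationary) can be sketched as follows; the synchronous two-pass update and 8-connected neighborhood are assumed readings, and the function names are hypothetical.

```python
# Minimal sketch of the two-pass motion normalization of FIG. 2.
# Assumptions: 8-connected neighborhoods, each pass updates synchronously.

def count_neighbors(grid, y, x, val):
    """Count 8-connected neighbors of (y, x) equal to val."""
    h, w = len(grid), len(grid[0])
    return sum(1 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0)
               and 0 <= y + dy < h and 0 <= x + dx < w
               and grid[y + dy][x + dx] == val)

def normalize_motion(motion):
    """motion: 1 = moving block (M), 0 = stationary block (S)."""
    h, w = len(motion), len(motion[0])
    # Pass 1: stationary -> moving when more than 3 moving neighbors.
    step1 = [[1 if motion[y][x] == 0 and count_neighbors(motion, y, x, 1) > 3
              else motion[y][x] for x in range(w)] for y in range(h)]
    # Pass 2: moving -> stationary when more than 4 stationary neighbors.
    return [[0 if step1[y][x] == 1 and count_neighbors(step1, y, x, 0) > 4
             else step1[y][x] for x in range(w)] for y in range(h)]
```

The two passes fill small holes in moving regions before removing isolated moving blocks, so an isolated false detection is suppressed while a solid moving region is preserved.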
As shown in the figures, the mapping module and the refinement module of the present invention comprise at least the following steps:
(A) combine the edge image and the motion image with a logical AND to mark the moving edges;
(B) in a first horizontal scan of the moving boundary, fill the region between the first marked point and the last marked point of each row;
(C) continue from the marked points of the horizontal scan with a vertical scan of the moving boundary;
(D) perform a second horizontal scan of the moving boundary;
(E) after completion, extract the outer contour of the marked region, which is the rough video object;
(F) continue the analysis with the refinement module: map the extracted rough video object onto the moving-boundary image; wherever the outer contour overlaps the moving boundary, that segment of the boundary is a correct part of the object contour;
(G) mark all erroneous segments according to the object contour, locate the two endpoints of each erroneous segment, and connect them with the shortest straight line; this line segment approximates the correct object contour, and once all erroneous boundaries are corrected, the region boundary of the video object is obtained; and
(H) apply one more horizontal or vertical scan to the object contour to obtain the final segmented video-object region.
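Steps (A) through (D) above can be sketched on binary grids as follows: a logical AND of the edge and motion images, a horizontal fill between the first and last marked points of each row, a vertical fill, and a second horizontal fill. This is a sketch under those assumptions; the function names and the tiny grids used for testing are hypothetical.

```python
# Sketch of steps (A)-(D): AND the edge and motion images, then fill
# horizontally, vertically, and horizontally again to obtain the rough object.

def scan_fill_rows(grid):
    """Fill each row between its first and last marked (1) points."""
    out = [row[:] for row in grid]
    for row in out:
        ones = [i for i, v in enumerate(row) if v]
        if ones:
            for i in range(ones[0], ones[-1] + 1):
                row[i] = 1
    return out

def scan_fill_cols(grid):
    """Fill each column between its first and last marked points."""
    transposed = [list(col) for col in zip(*grid)]
    return [list(row) for row in zip(*scan_fill_rows(transposed))]

def rough_object(edge, motion):
    """Steps (A)-(D): AND, horizontal fill, vertical fill, horizontal fill."""
    marked = [[e & m for e, m in zip(e_row, m_row)]
              for e_row, m_row in zip(edge, motion)]
    return scan_fill_rows(scan_fill_cols(scan_fill_rows(marked)))
```

On a closed moving edge, the alternating scans fill the enclosed interior, which is why the filled region's outer contour in step (E) approximates the object boundary.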
In summary, the present invention is a low-complexity video segmentation method based on edge information that effectively remedies the various shortcomings of conventional methods. Built around an edge detection module, it generates contour information of the image with faster computation, lower computational complexity, and a smaller amount of computation, and it is well suited to increasing image compression ratios, content-based image retrieval, and other multimedia applications. The invention is therefore more practical and better meets users' needs, and, having satisfied the requirements for an invention patent, a patent application is filed in accordance with the law.

The foregoing is merely a preferred embodiment of the present invention and shall not limit the scope of implementation of the invention; all simple equivalent changes and modifications made according to the claims and the description of the invention shall remain within the scope covered by the patent of the present invention.

[Brief Description of the Drawings]
FIG. 1 is a flowchart of the implementation of the present invention.
FIG. 2 is a schematic diagram of the motion normalization of the present invention.
FIGS. 3A to 3H are schematic diagrams of the mapping module and the refinement module of the present invention.

[Description of Main Reference Numerals]
image 11
image sequence 12
image sequence 13
image sequence 14
image sequence 15
frame-difference module 16
edge detection module 17
motion detection module 18
mapping module 19
refinement module 20
video object 21