TWI307052B - Method for interactive image object extraction/removal - Google Patents


Info

Publication number
TWI307052B
TWI307052B
Authority
TW
Taiwan
Prior art keywords
image
block
blocks
foreground
color
Prior art date
Application number
TW95102928A
Other languages
Chinese (zh)
Other versions
TW200729074A (en)
Inventor
Jhing Fa Wang
Hanjen Hsu
Jyun Sian Li
Original Assignee
Univ Nat Cheng Kung
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Nat Cheng Kung filed Critical Univ Nat Cheng Kung
Priority to TW95102928A priority Critical patent/TWI307052B/en
Publication of TW200729074A publication Critical patent/TW200729074A/en
Application granted granted Critical
Publication of TWI307052B publication Critical patent/TWI307052B/en


Landscapes

  • Image Analysis (AREA)

Description

1307052

[Technical Field of the Invention]

The present invention relates to an image-processing apparatus and method, and more particularly to an apparatus and method for interactive image object extraction and removal based on foreground/background classification.

[Prior Art]

Image-processing technology has advanced steadily in recent years, and object segmentation has attracted a growing number of researchers. How to extract or remove an object from a still image conveniently and quickly through interaction with the user is therefore an important research focus.

Conventional segmentation algorithms can be roughly divided into region-based methods and boundary-based methods. Region-based methods segment an image starting from designated regions, for example the method proposed by J. Sun et al. in "Poisson matting" (ACM Transactions on Graphics (TOG), Volume 23, Issue 2, 2004) and the method proposed by C. Rother et al. in "GrabCut: interactive foreground extraction using iterated graph cuts" (ACM Transactions on Graphics (TOG), Volume 23, Issue 3, 2004). The algorithms of such region-based methods, however, are irregular and computationally complex, and are therefore less suitable for hardware implementation.

Boundary-based methods determine the object to be segmented by detecting boundaries in the image. The boundaries found over the whole image, however, may include texture boundaries or interior contours of the object to be segmented, which degrades the image quality of the segmented object. Moreover, conventional edge detectors (for example, the Sobel filter and the Kirsch filter) compute the image gradient from luminance alone, so when the luminance variation is small the resulting boundary strength is far from ideal.

A typical application of image segmentation is the magic-wand tool of image editors (for example, Adobe's Photoshop image-editing software), which grows a region of color similar to a seed point given by the user. The user, however, often has to supply many seed points before the desired object is obtained, which is inconvenient.

[Summary of the Invention]

An object of the present invention is therefore to provide an apparatus and method for interactive image object extraction and removal in which the user designates only a small number of foreground and background seeds, so that objects can be extracted from or removed out of an image more conveniently and quickly.

Another object of the present invention is to provide an apparatus and method for interactive image object extraction and removal in which, when the luminance variation of the image is small, a morphological gradient raises the gradient values through color variation and strengthens the boundary portions of the image, and a watershed image-segmentation algorithm greatly reduces the amount of computation, so that the apparatus and method can easily be realized in hardware and integrated into consumer electronic products, and the classification of foreground and background objects is accelerated.

According to the above objects, an interactive image object extraction and removal apparatus is proposed, comprising at least an image input module, a noise-filtering module, a gradient-computation module, an image-segmentation module, a seed-selection module, a foreground/background classifier, and an image output module. The image input module receives an input image (the first image) consisting of a plurality of pixels. The noise-filtering module removes the noise of the first image output by the image input module to obtain a second image. The gradient-computation module receives the second image and obtains a plurality of gradient values of the second image. The image-segmentation module cuts the second image into a plurality of blocks according to the gradient values. The seed-selection module selects at least one foreground seed and at least one background seed in the first image, and the blocks of the second image corresponding to the foreground and background seeds are marked with a foreground label and a background label, respectively. The foreground/background classifier then marks the blocks not yet labeled with the foreground or background label. Finally, the image output module outputs the blocks corresponding to the foreground and background labels.

According to the above objects, a method for interactive image object extraction and removal is also proposed, comprising at least the following steps. First, a first image is input. A noise-filtering operation is performed to remove the noise of the first image and obtain a second image. A gradient-value operation is then performed on the second image to obtain a plurality of gradient values of the second image. Next, an image-segmentation step cuts the second image into a plurality of blocks according to the gradient values. A seed-selection step then selects at least one foreground seed and at least one background seed in the first image, and the blocks of the second image corresponding to the foreground and background seeds are marked with a foreground label and a background label, respectively. A foreground/background classification operation marks the remaining unlabeled blocks with the foreground or background label. Finally, the blocks marked with the foreground label and the blocks marked with the background label are output.

[Embodiments]

The present invention mainly discloses an apparatus and method for interactive image object extraction and removal. First, a first image is input to the image input module and passed to the noise-filtering module, which removes the noise of the first image and produces a second image; this filtering greatly reduces the over-segmentation that the subsequent image-segmentation step would otherwise cause. The second image is then sent to the gradient-computation module, which computes the gradient values of the second image using the morphological gradient; this strengthens the boundaries in the second image so that the subsequent segmentation step can identify the object to be extracted more precisely. Because the morphological gradient uses the color information of the second image in addition to its luminance, the gradient values can still be raised by color variation in images whose luminance variation is small.

The image-segmentation module then cuts the second image into a plurality of blocks according to the gradient values. This segmentation step may be a watershed image-segmentation algorithm; the watershed algorithm described below is given only as an example, and the invention is not limited thereto. Through the seed-selection module, the user selects one (or several) foreground seeds and one (or several) background seeds in the first image; the blocks of the second image corresponding to the selected seeds are marked with the foreground and background labels and passed to the foreground/background classifier. The classification operation then classifies the blocks of the second image not yet marked with the foreground or background label into foreground-labeled and background-labeled objects. Finally, the image output module outputs the classified image, extracting from the second image the foreground object chosen by the foreground/background seeds, or extracting the background object instead. The extraction described below is given only as an example; a removal operation can equally be performed with respect to the foreground or the background, and the invention is not limited thereto.

To make the description of the present invention more detailed and complete, reference is made to the following description in conjunction with FIG. 1 through FIG. 6.

Referring to FIG. 1a and FIG. 1b, FIG. 1a is a schematic diagram of the seed-selection step according to a preferred embodiment of the present invention, and FIG. 1b is a schematic diagram of the result after the preferred embodiment is applied. As shown in FIG. 1a, when the interactive image object extraction and removal apparatus of the present invention is used on an input image 100 from which the user wishes to extract a human object 102, the seed-selection module is first used to select foreground seeds 104 and background seeds 108. After the method of the present invention is applied, the result is as shown in FIG. 1b: an extraction region 110 is extracted from the input image 100, leaving a region 112 outside the object. The extraction region 110 is the human object 102 of the input image 100, while the moon and the other human figure in the input image 100 are removed.

Next, referring to FIG. 2 and FIG. 3 together, FIG. 2 is a system flow chart and FIG. 3 is a system block diagram of a preferred embodiment of the present invention.
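The method steps summarized above (input, noise filtering, gradient computation, segmentation, seeding, classification, output) can be sketched as a driver function. This is only an illustration: every name below is a placeholder of ours for the corresponding module, not code from the patent, and the four callables stand in for the noise-filtering, gradient, segmentation and classification modules whose internals are described later.

```python
def extract_object(first_image, fg_seeds, bg_seeds,
                   denoise, gradient, segment, classify):
    """Driver for the described pipeline.

    first_image: 2-D list of pixel values.
    fg_seeds / bg_seeds: (y, x) coordinates chosen by the user.
    denoise/gradient/segment/classify: placeholders for the
    noise-filtering, gradient, segmentation and classifier modules.
    `segment` is assumed to return a dict mapping (y, x) -> block id.
    """
    second_image = denoise(first_image)        # noise-filtering step
    gradients = gradient(second_image)         # gradient-value step
    blocks = segment(second_image, gradients)  # segmentation step
    labels = {}                                # seed-selection step
    for s in fg_seeds:
        labels[blocks[s]] = 'fg'
    for s in bg_seeds:
        labels[blocks[s]] = 'bg'
    labels = classify(blocks, labels)          # classification step
    # output step: keep only pixels whose block is labeled foreground
    return [[px if labels.get(blocks[(y, x)]) == 'fg' else None
             for x, px in enumerate(row)]
            for y, row in enumerate(second_image)]
```

With stub modules (identity denoising, a fixed block map, and a classifier that returns the seed labels unchanged), the driver keeps the foreground block's pixels and blanks the rest.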
In the preferred embodiment of the present invention, the interactive image object extraction and removal apparatus comprises an image input module 302, a noise-filtering module 304, a gradient-computation module 306, an image-segmentation module 308, a seed-selection module 310, a foreground/background classifier 312 and an image output module 314. The extraction and removal apparatus and method are described in detail below.

First, a first image is input to the image input module 302, which outputs the first image to the noise-filtering module 304 for the noise-filtering operation (step 202). The purpose of step 202 is to prevent over-segmentation in the subsequent segmentation step. The noise-filtering module 304 uses a median filter and a mean filter to remove the noise of the first image; the median filter is particularly effective at removing impulse noise. The median and mean filters are given only as examples; a Gaussian filter or other conventional filters may also be used to realize step 202, and the invention is not limited thereto.

The median filter takes the median of the gray levels of a pixel of the first image and its eight neighbors: the nine pixels are sorted, the median is determined, and the median replaces that pixel of the first image. For example, suppose a pixel of the first image has gray level 10 and its eight neighbors have gray levels (20, 10, 20, 15, 20, 20, 50, 100). Sorting the gray levels of the nine pixels gives (10, 10, 15, 20, 20, 20, 20, 50, 100), so the median in this example is 20, and the gray level of that pixel is replaced by the median and becomes 20.

The median-filtered first image is then mean-filtered. The mean filter replaces the gray level of the center pixel of a window with the average of that pixel and its neighbors. FIG. 4 shows a 3x3 mean filter according to an embodiment of the present invention. After the median and mean filtering, the second image is produced; it is slightly more blurred than the first image, which reduces the over-segmentation of the subsequent segmentation step.

Next, the gradient-value operation on the second image is performed by the gradient-computation module 306 (step 204) to obtain the gradient values of the second image. To facilitate the segmentation step and effectively reduce the amount of data, the gradient-value operation of the present invention comprises a color-model conversion and a morphological-gradient operation. The RGB color coordinate system of the second image is converted into the YCbCr color coordinate system to obtain a first color model, and into the L*a*b* color coordinate system to obtain a second color model. These color-model conversions are well known to those of ordinary skill in the art and are not described further; conversion to the YCbCr and L*a*b* color coordinate systems is given only as an example, and the invention is not limited thereto.

Next, the morphological-gradient operation is performed. The morphological gradient is a gradient value obtained by fusing luminance and chroma information, so at image positions where the luminance variation is small it gives better results than conventional boundary detection.
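The median and mean filtering of step 202 can be sketched as follows. This is a minimal illustration under our own function names, not the patented implementation; for brevity it processes only interior pixels of the image.

```python
import numpy as np

def median_filter3x3(img):
    """Replace each interior pixel by the median of its 3x3 neighborhood."""
    out = img.copy().astype(float)
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(img[y-1:y+2, x-1:x+2])
    return out

def mean_filter3x3(img):
    """Replace each interior pixel by the mean of its 3x3 neighborhood
    (the 1/9-coefficient mask of FIG. 4)."""
    out = img.copy().astype(float)
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = img[y-1:y+2, x-1:x+2].mean()
    return out

# Worked example from the text: a center pixel of gray level 10 with
# neighbors (20, 10, 20, 15, 20, 20, 50, 100); the sorted nine values
# are (10, 10, 15, 20, 20, 20, 20, 50, 100), so the median is 20.
patch = np.array([[20, 10, 20],
                  [15, 10, 20],
                  [20, 50, 100]], dtype=float)
print(median_filter3x3(patch)[1, 1])  # -> 20.0
```

Running the median filter before the mean filter, as in step 202, removes impulse outliers first so they do not leak into the averages.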
The gradient values of the morphological-gradient operation are obtained mainly from erosion and dilation. The erosion and dilation function operations are defined below, followed by the computation of the gradient values.

Let f(x, y) be an input signal of the first color model or the second color model of the second image, where x and y are the x-axis and y-axis pixel coordinates of the second image, let b be a two-dimensional flat mask sub-image (structuring element), and let x0 and y0 be the x-axis and y-axis pixel coordinates within b. The erosion and dilation operations are given by Equations (1) and (2), and the morphological gradient g(f)(x, y) by Equation (3):

(f ⊖ b)(x, y) = min{ f(x + x0, y + y0) | (x0, y0) ∈ b }    (1)

(f ⊕ b)(x, y) = max{ f(x + x0, y + y0) | (x0, y0) ∈ b }    (2)

g(f)(x, y) = (f ⊕ b)(x, y) − (f ⊖ b)(x, y)    (3)

The present invention first substitutes the Y input signal of the first color model into Equations (1) and (2) and then into Equation (3) to obtain the luminance gradient values (gY(x, y)). The L*, a* and b* input signals of the second color model are likewise substituted into Equations (1), (2) and (3) to obtain the L* gradient values (gL*(x, y)), the a* gradient values (ga*(x, y)) and the b* gradient values (gb*(x, y)). The gradient values of L*, a* and b* are then substituted into Equation (4) to obtain the chroma gradient values (gC(x, y)).

The gradient value fusing the luminance and chroma information (gi(x, y)) is defined by Equation (5):

gi(x, y) = max(gY(x, y), gC(x, y))    (5)

After the gradient values of the second image are obtained, step 206 performs the image-segmentation operation on the second image through the image-segmentation module 308; this embodiment uses the watershed image-segmentation algorithm as an example. The watershed concept treats the image data as a three-dimensional space formed by the horizontal coordinate, the vertical coordinate and the gradient value. In topographic terms, a flood is simulated that rises slowly from the catchment basins of greatest depth (lowest gradient value); when the waters from different basins are about to merge, a dam is built to prevent the water from overflowing. This dam is the watershed.
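Equations (1) through (3) and (5) can be sketched as follows, assuming a flat 3x3 structuring element. Because Equation (4) is not legible in this copy, combining the L*, a*, b* channel gradients by a pointwise maximum is our assumption for illustration, not the patent's formula.

```python
import numpy as np

def erode(f, k=3):
    """Flat erosion, Eq. (1): minimum of f over each k x k window."""
    pad = k // 2
    fp = np.pad(f, pad, mode='edge')
    h, w = f.shape
    return np.array([[fp[y:y+k, x:x+k].min() for x in range(w)]
                     for y in range(h)])

def dilate(f, k=3):
    """Flat dilation, Eq. (2): maximum of f over each k x k window."""
    pad = k // 2
    fp = np.pad(f, pad, mode='edge')
    h, w = f.shape
    return np.array([[fp[y:y+k, x:x+k].max() for x in range(w)]
                     for y in range(h)])

def morph_gradient(f):
    """Morphological gradient, Eq. (3): dilation minus erosion."""
    return dilate(f) - erode(f)

def fused_gradient(Y, L, a, b):
    """Eq. (5): g_i = max(g_Y, g_C).  The exact chroma combination of
    Eq. (4) is not legible in the source; taking the pointwise maximum
    of the three channel gradients is our assumption."""
    gY = morph_gradient(Y)
    gC = np.maximum.reduce([morph_gradient(L),
                            morph_gradient(a),
                            morph_gradient(b)])
    return np.maximum(gY, gC)
```

A vertical luminance step produces a nonzero fused gradient only in the two columns that straddle the edge, which is the boundary-strengthening effect the text describes.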
The points on the watershed lie at the positions of the regional gradient maxima. Through step 206 the image is thus cut into a plurality of blocks, each block being a subset of pixels with similar gradient values, and at this point no block is yet marked with the foreground or background label. The watershed image-segmentation algorithm is well known to those of ordinary skill in the art and is not described further.

After the second image has been cut into a plurality of blocks by the segmentation step 206, step 208 follows: the user selects, through the seed-selection module 310, one (or several) foreground seeds and one (or several) background seeds in the first image, and the blocks corresponding to them are marked with the foreground and background labels, respectively; the blocks not corresponding to foreground or background seeds remain unlabeled. The object of the first image to be extracted is selected (step 210) from the blocks corresponding to the foreground seeds, and the object to be removed from the blocks corresponding to the background seeds. The more foreground and background seeds are input, the better the result; the present invention, however, obtains good results with only a small number of seeds.

The results of steps 206 and 208 are then sent to the foreground/background classifier 312 for step 210, which classifies the blocks belonging to neither the foreground nor the background seeds, marking the unlabeled blocks with the foreground and background labels. Step 210 is divided into several stages: (1) an initialization step, (2) a take-out step, (3) a classification step and (4) a removal step.

In the take-out step, the block corresponding to the minimum value of the first index number (q), hereinafter called the block to be processed (R') 400, is taken out; the block to be processed (R') 400 is one of the neighboring blocks (A) placed in a specific data structure. In the implementation of the present invention this specific data structure is a hierarchical queue 260; the hierarchical queue is used only for convenience of illustration, and the invention is not limited thereto.

The classification step is then performed. In this embodiment, the labeled block group (LR) is the group composed of the first labeled blocks 402a, 402b and 402c in FIG. 6; the labeled block group (LR) consists of the blocks immediately adjacent to the block to be processed (R') 400 that are already marked with the foreground or background label. The color distances between the block to be processed (R') 400 and the labeled block group (LR) are computed to obtain a plurality of first color distances, and a minimum-distance block (R*) is determined from the minimum of the first color distances; the minimum-distance block (R*) is a block already marked with the foreground or background label and is one of the labeled block group (LR). If the minimum-distance block (R*) carries the foreground label, the block to be processed (R') 400 is marked with the foreground label; otherwise, if the minimum-distance block (R*) carries the background label, the block to be processed (R') 400 is marked with the background label.

Next, the removal step removes the block to be processed (R') 400 from the hierarchical queue 260.

In this embodiment, the unprocessed blocks (B) are the blocks marked 404a, 404b and 404c in FIG. 6; an unprocessed block (B) is immediately adjacent to the block to be processed (R') 400, has not yet been marked with the foreground or background label, and has not been placed in the specific data structure. The second labeled blo…
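The block-classification stages of step 210 can be sketched with an ordinary priority queue standing in for the hierarchical queue 260. The block colors, the adjacency graph and the Euclidean color distance used here are illustrative assumptions of ours, not the patent's exact index computation.

```python
import heapq

def classify_blocks(colors, adjacency, labels):
    """Propagate 'fg'/'bg' labels to unlabeled blocks.

    colors:    {block_id: (r, g, b) mean color of the block}
    adjacency: {block_id: set of neighboring block ids}
    labels:    {block_id: 'fg' or 'bg'} for the seed blocks only
    Unlabeled blocks are processed in order of increasing color
    distance to an already-labeled neighbor, and each takes the label
    of its nearest labeled neighbor: a sketch of the initialization,
    take-out, classification and removal stages.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(colors[a], colors[b])) ** 0.5

    labels = dict(labels)
    queue = []  # plays the role of the hierarchical queue 260
    for r in list(labels):              # initialization stage
        for n in adjacency[r]:
            if n not in labels:
                heapq.heappush(queue, (dist(n, r), n))
    while queue:                        # take-out stage
        _, blk = heapq.heappop(queue)
        if blk in labels:
            continue                    # removal stage: already handled
        # classification stage: take the label of the nearest labeled neighbor
        labeled = [n for n in adjacency[blk] if n in labels]
        nearest = min(labeled, key=lambda n: dist(blk, n))
        labels[blk] = labels[nearest]
        for n in adjacency[blk]:        # enqueue newly reachable blocks
            if n not in labels:
                heapq.heappush(queue, (dist(n, blk), n))
    return labels
```

On a chain of four blocks with one foreground seed at the dark end and one background seed at the bright end, the two middle blocks attach to whichever labeled side is closer in color, as the text describes.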

Claims

9 %月〇 修11替換頁I 、申請專利範圍 -互動式景:式影像物件萃取與移除之方法,適用於 與移除裝置中,該方法至少包含: 乐影像; 得訊濾除運算’以消除該第-影像之雜訊,獲 對該第—影像進行一& 之複數個梯度值·’ 運异,讀得該第二影像 進仃-影像切割步驟, 像切割成複數個區塊; 值冑該第一影 像中驟’該種子選取步驟係於該第-影 ,,前豕種子與至少一背景種子,且將該第一 應之該則景種子與該背景種子在該第二影像中所對 *、、以些區塊,分别標記為一前景標籤與一背景標藏; 進行-前景與背景分類運算’將未被標記成該前景標 、該背景標籤之該些區塊分別標記成該前景標藏與該 背景標籤’其中該前景與背景分類運算之步驟更至少包 含: 進行一初始化步驟,該初始化步驟係分別計算緊 鄰於複數個第一已標籤區塊(Ri)之複數個鄰近區塊 (A)的一第一索引編號(q),且依照該第一索W編號(q) 之大小刀別將該些鄰近區塊(A)放入一特定資料结構 所對應的位置中,其中該些第一已標籤區塊(Ri)係已 1307052 97 ft)·%#修正替換頁 標記成該前景標戴或該背景標藏所對應之該些區 塊,該些鄰近區塊⑷係緊鄰於該些第—已標藏區塊 Ri)且尚未標§己成該前景標籤和該背景標籤; 進行-取出步驟,根據該第一索引編號⑷之最 小值來取出所對應之一待處理區掩(R,),其中該待處 理區塊(R,)係放入該特定資料結構中之該些鄰近區 塊(A)其中之一; 進行一分類步驟,該分類步驟至少包含: 馨 計算該待處理區塊(R’)與-已標籤區塊群 組(LR)之色彩距離,以獲得複數個第一色彩距 離,且根據该些第一色彩距離之一最小值來決定 一最小距離區塊(R、,其中該已標籤區塊群組 (LR)係緊鄰於該待處理區塊(R’)且為已標記成該 前景標籤和該背景標籤之該些區塊所組成之一 群组’該最小距離區塊(R*)係該已標籤區塊群組 φ (LR)其中之一; 決定一最小距離區塊(R*)是否為該前景標 籤,並產生一第一結果; 若該第一結果為是,則將該待處理區塊(R’) 標記成該前景標籤; 若該第一結果為否,則將該待處理區塊(R,) 標記成該背景標籤; 計算複數個未處理區塊(B)的一第二索引 編號(t),且決定該第二索引編號⑴是否大於等於 20 1307052 一索引最大值(ζ),並產生一第二結果,其中該索 引最大值(Ζ)係於該初始化步驟時所依序產生之 該第一索引編號(q)的最大值,該些未處理區塊(Β) 係緊鄰於該待處理區塊(R’)且尚未標記成該前景 標籤和該背景標籤’且未被放入該特定資料結 構; 右該第二結果為是,則將該第二索引編號⑴ 所對應之該些未處理區塊之其中之一者放入 該特定資料結構之該第二索引編號⑴位置中;以 及 若該第二結果為否,則將該第二索引編號⑴ 所對應之該些未處理區塊(B)之其中之一者放入 該特定資料結構之該索引最大值(2)位置中;以及 將該待處理區塊(R’)從該特定資料結構中移除; 以及 輸出該A景標籤所標記之該些區塊與該背景標鐵所 標記之該痤區塊。 2. 如申請專利範圍第1項所述之互動式影像物件萃 取與移除之方法,其中該雜訊濾除運算之步驟更至少包含 分別藉由/中位數濾波和一平均值濾波來消除該第一景J 像之雜訊’以獲得該第二影像。 心 3. 如申請專利範圍帛丨;;頁所述之互動式影像物件萃 219 % 月〇修11 Replacement Page I, Patent Application Scope - Interactive Scene: Method for image object extraction and removal, suitable for and removal device, the method includes at least: music image; signal filtering operation In order to eliminate the noise of the first image, a plurality of gradient values of the first image and the image are obtained, and the second image is read into the image cutting step, and the image is cut into a plurality of blocks. 
Value 胄 in the first image, the seed selection step is tied to the first image, the front seed and the at least one background seed, and the first seed and the background seed are in the second The *, and the blocks in the image are respectively marked as a foreground label and a background label; the - foreground and background classification operations are performed, and the blocks that are not marked as the foreground label and the background label are respectively Marking the foreground label and the background label 'the step of the foreground and background classification operations at least comprises: performing an initialization step of respectively calculating a plurality of plural numbers of the first number of first labeled blocks (Ri) a first index number (q) adjacent to the block (A), and according to the size of the first cable W (q), the neighboring blocks (A) are placed in a position corresponding to a specific data structure. Where the first tagged block (Ri) is 1307052 97 ft)·%# the modified replacement page is marked as the foreground tag or the block corresponding to the background tag, the neighboring blocks (4) is adjacent to the first-labeled block Ri) and has not yet been marked as the foreground label and the background label; a carry-out step, and extracting one of the corresponding ones according to the minimum value of the first index number (4) a to-be-processed area mask (R,), wherein the to-be-processed block (R,) is placed in one of the neighboring blocks (A) in the specific data structure; performing a classification step, the classification step including at least Calculating a color distance between the to-be-processed block (R') and the tagged block group (LR) to obtain a plurality of first color distances, and determining according to one of the first color distances a minimum distance block (R, where the tagged block group (LR) is in close proximity a group of the blocks to be processed (R') and which are marked as the 
foreground tag and the background tag. The minimum distance block (R*) is the tagged block group. One of the groups φ (LR); determining whether a minimum distance block (R*) is the foreground label and generating a first result; if the first result is yes, the block to be processed (R' Marking the foreground label; if the first result is no, marking the to-be-processed block (R,) as the background label; calculating a second index number of the plurality of unprocessed blocks (B) (t) And determining whether the second index number (1) is greater than or equal to 20 1307052, an index maximum value (ζ), and generating a second result, wherein the index maximum value (Ζ) is sequentially generated during the initializing step a maximum value of the first index number (q), the unprocessed block (Β) is immediately adjacent to the to-be-processed block (R') and has not been marked as the foreground tag and the background tag' and is not placed in the Specific data structure; right second result is yes, then the second index number (1) Corresponding one of the unprocessed blocks is placed in the second index number (1) position of the specific data structure; and if the second result is no, the second index number (1) is corresponding to One of the unprocessed blocks (B) is placed in the index maximum (2) position of the particular data structure; and the pending block (R') is removed from the specific data structure And outputting the blocks marked by the A-view tag and the block marked by the background tag. 2. The method for extracting and removing interactive image objects as described in claim 1, wherein the step of filtering the noise operation further comprises at least removing by using a median filter and an average filter, respectively. The first scene J image of the noise 'to obtain the second image. Heart 3. 
3. The method for interactive image object extraction and removal of claim 1, wherein the gradient value operation at least comprises a color model conversion and a morphological gradient operation, and the second image is converted through the color model conversion step to produce a first color model and a second color model.

4. The method for interactive image object extraction and removal of claim 3, wherein the morphological gradient operation uses the first color model and the second color model to compute a plurality of luminance gradient values (gY(x, y)) and a plurality of chrominance gradient values (gc(x, y)) of the second image, the gradient values (gi(x, y)) of the second image being computed according to the formula:

gi(x, y) = max(gY(x, y), gc(x, y))

where x denotes the x-axis coordinate of a pixel of the second image and y denotes the y-axis coordinate of a pixel of the second image.

5. The method for interactive image object extraction and removal of claim 4, wherein the color model conversion at least comprises: converting, for the first color model, the RGB color coordinate system of the second image into a YCbCr color coordinate system, such that the first color model has a plurality of Y input signals, a plurality of Cb input signals, and a plurality of Cr input signals; and converting, for the second color model, the RGB color coordinate system of the second image into an L*a*b* color coordinate system, such that the second color model has a plurality of L* input signals, a plurality of a* input signals, and a plurality of b* input signals.
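Claim 5 names only the target color spaces, not the conversion coefficients. One common realization of the first conversion (RGB → YCbCr) is the ITU-R BT.601 full-range matrix — the specific coefficients below are therefore an assumption, not taken from the patent:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range RGB -> YCbCr using the BT.601 coefficients
    (one possible realization of the claim's first color model)."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr
```

The second conversion (RGB → L\*a\*b\*) normally passes through CIE XYZ with a reference white point and is omitted here for brevity.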
6. The method for interactive image object extraction and removal of claim 5, wherein the luminance gradient values (gY(x, y)) are computed by applying an erosion function and a dilation function to the Y input signals of the first color model, and the chrominance gradient values (gc(x, y)) are computed by applying the erosion function and the dilation function to the L* input signals, the a* input signals, and the b* input signals of the second color model.

7. The method for interactive image object extraction and removal of claim 1, wherein each of the blocks cut by the image cutting step is a subset of pixels having similar gradient values.

8. The method for interactive image object extraction and removal of claim 1, wherein the image cutting step is based on a watershed image segmentation algorithm.

9. The method for interactive image object extraction and removal of claim 1, wherein the object to be extracted from the first image is selected from the blocks corresponding to the foreground seed, and the object to be removed from the first image is selected from the blocks corresponding to the background seed.

10. The method for interactive image object extraction and removal of claim 1, wherein the object to be extracted from the first image is selected from the blocks corresponding to the back…

12. The method for interactive image object extraction and removal of claim 1, wherein the first index number (q) is obtained by performing color distance operations between the first labeled blocks (Ri) and the neighboring blocks (A) to obtain a plurality of second color distances, selecting the minimum among the second color distances, and rounding it.
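Claims 4 and 6 obtain each gradient as a morphological gradient, i.e. dilation minus erosion. A sketch over a plain 2-D list of intensities, where the 3×3 structuring element and the border handling are assumptions:

```python
def morph_gradient(img):
    """Morphological gradient: 3x3 dilation minus 3x3 erosion.

    img is a 2-D list of scalar intensities (one channel, e.g. Y).
    Borders use the clipped neighborhood rather than padding.
    """
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neigh = [img[j][i]
                     for j in range(max(0, y - 1), min(h, y + 2))
                     for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = max(neigh) - min(neigh)   # dilation - erosion
    return out
```

The per-pixel combination of claim 4 is then `gi = max(gY, gc)` element-wise; the chrominance gradient gc can in turn be taken, for example, as the maximum of the per-channel gradients of L\*, a\*, and b\* (the patent excerpt does not spell out that reduction).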
13. The method for interactive image object extraction and removal of claim 1, wherein the second index number (t) is obtained by performing color distance operations between a plurality of second labeled blocks (Rj) and the unprocessed blocks (B) to obtain a plurality of third color distances, selecting the minimum among the third color distances, and rounding it, wherein the second labeled blocks (Rj) are blocks marked with the foreground label or the background label that are adjacent to the unprocessed blocks (B).

14. The method for interactive image object extraction and removal of claim 1, further at least comprising: determining whether the position of the specific data structure corresponding to the first index number (q) contains no more to-be-processed blocks (R'), and generating a third result; if the third result is yes, closing the position of the specific data structure corresponding to the first index number (q); and if the third result is no, performing the take-out step normally.

15. The method for interactive image object extraction and removal of claim 14, further at least comprising: determining whether the position of the specific data structure given by the first index number (q) computed for the to-be-processed block (R') has been closed, and generating a fourth result; if the fourth result is yes, placing the to-be-processed block into the smallest first index number (q) position of the specific data structure that has not yet been closed; and if the fourth result is no, performing the take-out step normally.
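Read together, claims 1 and 12–15 suggest that the "specific data structure" is a bucket queue: blocks are filed under an integer index q (a rounded minimum color distance) clamped at the index maximum Z, emptied buckets are closed (claim 14), and a block whose target bucket is already closed falls into the smallest still-open bucket (claim 15). The sketch below is speculative — the class name, API, and FIFO order within a bucket are invented for illustration:

```python
from collections import defaultdict

class BucketQueue:
    """Bucketed priority structure in the spirit of claims 1 and 12-15."""

    def __init__(self, z_max):
        self.buckets = defaultdict(list)   # index q -> FIFO of blocks
        self.closed = set()                # buckets closed per claim 14
        self.z_max = z_max                 # index maximum Z

    def put(self, q, block):
        q = min(q, self.z_max)             # clamp indices above Z to Z
        if q in self.closed:               # claim 15: target bucket closed,
            q = min(i for i in range(self.z_max + 1)
                    if i not in self.closed)   # use smallest open bucket
        self.buckets[q].append(block)

    def take(self):
        # sorted() copies the keys, so deleting inside the loop is safe.
        for q in sorted(self.buckets):
            if self.buckets[q]:
                return self.buckets[q].pop(0)
            self.closed.add(q)             # claim 14: close emptied bucket
            del self.buckets[q]
        return None
```

With small integer indices this gives near-constant-time insert and take-out, which is presumably why the claims round color distances to integers instead of using an exact priority queue.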
VII. (1) The designated representative figure of this application: Fig. 2.
(2) Brief description of the reference symbols in the representative figure:
200 input of the first image
202 noise filtering operation
204 gradient value operation
206 image cutting step
208 seed selection
210 foreground and background classification operation
212 image output
VIII. If the application contains a chemical formula, disclose the chemical formula that best characterizes the invention: (none)
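Stage 202 of the representative figure (the noise filtering operation of claim 2) combines a median filter and an average filter. A minimal 1-D sketch of both, where the 3-tap window, the pass-through edge handling, and the order of application are assumptions:

```python
def median3(sig):
    """3-tap median filter; edge samples are passed through unchanged.
    The median knocks out impulse (salt-and-pepper) noise."""
    out = list(sig)
    for i in range(1, len(sig) - 1):
        out[i] = sorted(sig[i - 1:i + 2])[1]
    return out

def mean3(sig):
    """3-tap moving-average filter; edge samples are passed through.
    The average smooths residual small-amplitude noise."""
    out = [float(v) for v in sig]
    for i in range(1, len(sig) - 1):
        out[i] = sum(sig[i - 1:i + 2]) / 3.0
    return out
```

In the method both filters operate on the 2-D first image; applying the median first and then the average is one reasonable ordering, not one the excerpt prescribes.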
TW95102928A 2006-01-25 2006-01-25 Method for interactive image object extraction/removal TWI307052B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW95102928A TWI307052B (en) 2006-01-25 2006-01-25 Method for interactive image object extraction/removal

Publications (2)

Publication Number Publication Date
TW200729074A TW200729074A (en) 2007-08-01
TWI307052B true TWI307052B (en) 2009-03-01

Family

ID=45071527

Family Applications (1)

Application Number Title Priority Date Filing Date
TW95102928A TWI307052B (en) 2006-01-25 2006-01-25 Method for interactive image object extraction/removal

Country Status (1)

Country Link
TW (1) TWI307052B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9117138B2 (en) 2012-09-05 2015-08-25 Industrial Technology Research Institute Method and apparatus for object positioning by using depth images

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101430576B (en) 2007-11-05 2010-04-21 鸿富锦精密工业(深圳)有限公司 Eye protection warning device and method




Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees