TWI274296B - Image-based object tracking method - Google Patents

Image-based object tracking method

Info

Publication number
TWI274296B
TWI274296B
Authority
TW
Taiwan
Prior art keywords
image
probability
tracking method
gesture
parameter
Prior art date
Application number
TW94112512A
Other languages
Chinese (zh)
Other versions
TW200638287A (en)
Inventor
Jr-Ming Fu
Jung-Ling Huang
Original Assignee
Univ Nat Chiao Tung
Priority date
Filing date
Publication date
Application filed by Univ Nat Chiao Tung filed Critical Univ Nat Chiao Tung
Priority to TW94112512A priority Critical patent/TWI274296B/en
Publication of TW200638287A publication Critical patent/TW200638287A/en
Application granted granted Critical
Publication of TWI274296B publication Critical patent/TWI274296B/en

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention provides an image-based object tracking method characterized in that the object-region probability is captured in two stages: per-pixel feature probabilities serve as the basis for tracking in the first stage, and the contour probabilities of candidate object regions serve as the basis for capturing the object region in the second stage. The invention standardizes system initialization so that the method can be applied in different environments, and its markerless tracking technique, which requires comparatively few operations, effectively obtains the motion trajectory without the aid of a glove or marker. From this trajectory, the object region and track are obtained for use as an auxiliary user interface.

Description

IX. Description of the Invention:

[Technical Field]
The present invention relates to an image-based object tracking method, and more particularly to an image-based object tracking method that requires no assistance from a glove or marker.

[Prior Art]
Current object tracking techniques assume that the shape or position of the object has already been obtained, and track the object from its initial position; acquiring that initial position, however, is difficult. In existing techniques a marker is used to help obtain the initial position of the object: if the object is a hand, a glove or special marker points must be worn. Alternatively, if the object is tracked against a fixed, simple background such as a car interior, the tracking method is simpler and no marker is needed, but it is then limited to that fixed, simple background.

The tracking approaches in current use therefore fall into two classes. Taking hand tracking as an example, the first class works against a simple background and extracts the shape of the hand with the active contour technique, using the gradient values of the input image
as its basis and iteratively converging onto the hand contour. Such methods, however, require a large number of recursive operations for real-time gesture tracking; besides being too slow, they also need a given initial gesture position from which to approximate the gesture, since only then can the position and shape of the gesture be tracked smoothly, and against a more complex background the gesture contour is easily disturbed and the tracker is led astray by the background. The second class tracks the foreground gesture against a complex background with techniques such as background subtraction; this class is disturbed by other foreground objects, for example the head and body regions.

Both classes of tracking depend heavily on the initial gesture position and use that initial information as the basis for gesture tracking; known methods of obtaining the initial gesture position in an image therefore still have many shortcomings.

Taiwan Patent Publication No. 1224288 proposes a "method for calibrating gesture features in images". It first captures an input gesture image and performs image pre-processing on it, binarizing the gesture image to obtain the closed contour of the hand region; the image-processing device then derives the curvature scale space image of the gesture from this closed curve, convolves the resulting coordinate-peak sequence with a preset function, designates the coordinate with the largest integral value as the reference point, and from it computes the feature parameters of the gesture image, which are compared with the feature parameters of reference gestures to identify the gesture to which the image corresponds. Its computation remains heavy, however, and it does not discuss how to capture and track gestures.

Taiwan Publication No. 393629 discloses a "gesture recognition system and method, and memory medium", but it is directed mainly at a gesture recognition system rather than a tracking system; and Taiwan Patent Publication No. _ proposes a "method of constructing a gesture mouse", which is used mainly in simple environments and therefore runs into problems when gestures must be tracked in a complex environment.

In view of these difficulties, the present invention proposes an image-based object tracking method to remedy the shortcomings described above.

[Summary of the Invention]
The main object of the present invention is to provide an image-based object tracking method that computes the object-region probability in two stages, first from pixel features and then from candidate regions, to obtain the initial position of the object and the parameters of the final object region, so that system initialization can be standardized for use in different environments. Another object of the present invention is to provide an image-based object tracking method using a markerless tracking technique of comparatively low computational cost, which effectively obtains the motion trajectory without the aid of a glove or marker and thereby yields the region and trajectory of the object for use as an auxiliary human-machine interface.
To achieve these objects, the present invention proposes an image-based object tracking method comprising: first inputting continuous images of an object; detecting the features of every pixel of the images to obtain the probability distribution of each feature; performing a conditional probability operation on these distributions to obtain the probability distribution of the possible region of the object; selecting candidate regions of the object position and sampling parameters from the probability distribution of the possible object region; scoring the sampled parameters with an object-identification metric, the scores serving as the probability distribution for the next image; and taking the sampled parameter set with the maximum object-identification score as the position of the object at the current time.

The objects, technical content, features, and effects of the present invention will become more readily apparent from the detailed description of specific embodiments below.

[Embodiments]
Fig. 1 is a simplified flow chart of the image-based object tracking method of the present invention. First, in step S10, continuous images of an object, a movable object such as a hand, are input. In step S12, the features of every pixel of the continuous images are detected, such as by edge detection, skin-color detection, motion detection, or foreground detection, to obtain the probability distribution of each feature, namely the edge, skin-color, motion, and foreground probabilities. In step S14, a conditional probability operation is performed on the obtained feature distributions to obtain the probability distribution of the possible region where the object lies. In step S16, candidate regions of the object position are selected, and parameters are sampled from the probability distribution of the possible object region. In step S18, the sampled parameters are scored with an object-identification metric, which serves as the probability distribution for the next image of the object, and in step S20 the sampled parameter set with the maximum identification score gives the object position; the method then returns to step S18, scoring new sampled parameters as the probability distribution for the next image, to track the object position at the next instant.

Fig. 2 is a detailed flow chart of the image-based object tracking method of the present invention. First, in step S30, a camera captures continuous images of the object and the images are input. In step S32, the features of every pixel are computed to obtain their probability distributions. In step S34, the probability that each pixel belongs to the object region is computed, governed by a trained conditional probability. In step S36, parameters are sampled around the high-probability regions according to the probability distribution. In steps S38 to S40, a trained object-identification technique assigns each sampled parameter set a score of similarity to the object. In step S42, the candidate-region parameters are propagated probabilistically in time, and in step S44 the parameter set with the largest probability score is taken as the tracking result. In step S46, it is determined whether tracking should end; if so, tracking ends in step S48; if not, the method returns to step S40 and continues to obtain similarity scores between sampled parameters and the object via the identification technique.

A detailed walk-through follows, taking a hand gesture as the object. A camera connected to a computer captures images, and the captured images are processed in real time. If the pixels of the input image sequence are x(i,j) and the resolution is 320×240 pixels, then 0 ≤ i < 320 and 0 ≤ j < 240.
Each pixel x(i,j) is then passed through edge detection (E), skin-color detection (S), motion-region detection (M), and foreground detection (F) to obtain the feature probabilities of the pixel, p(x_E(i,j)), p(x_S(i,j)), p(x_M(i,j)), and p(x_F(i,j)), where p(x_E(i,j)) is the probability that the pixel is an edge, p(x_S(i,j)) the probability that it is skin-colored, p(x_M(i,j)) the probability that it is moving, and p(x_F(i,j)) the probability that it belongs to the foreground. These four properties are mutually independent, computationally cheap, and easy to extract: the edge feature is the degree of contrast between the pixel and its neighbors, the skin-color feature is the correlation of the pixel's color distribution with the skin color of the hand, the motion feature is the pixel's variation over time, and the foreground feature is the probability that the pixel belongs to the foreground.
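The patent leaves the four detectors open, noting only that the simplest extraction algorithms suffice. The following is a minimal sketch of one hypothetical realization on numpy image arrays, not the detectors the patent prescribes: gradient magnitude stands in for edge detection, a crude chroma rule for skin color, frame differencing for motion, and a running-average background model for foreground. Every function name, threshold, and scale here is an assumption made for illustration.

```python
# Illustrative per-pixel feature probabilities p(x_E), p(x_S), p(x_M), p(x_F).
import numpy as np

def edge_prob(gray):
    """p(x_E): contrast of the pixel against its neighbours (gradient magnitude)."""
    gx = np.abs(np.diff(gray, axis=1, append=gray[:, -1:]))
    gy = np.abs(np.diff(gray, axis=0, append=gray[-1:, :]))
    g = np.hypot(gx, gy)
    return g / (g.max() + 1e-8)                    # normalize to [0, 1]

def skin_prob(rgb):
    """p(x_S): crude skin-colour rule; skin tends to satisfy r > g > b."""
    r, g, b = rgb.astype(np.float64).transpose(2, 0, 1)
    margin = np.minimum(r - g, g - b)
    return 1.0 / (1.0 + np.exp(-margin / 10.0))    # squash the margin to [0, 1]

def motion_prob(gray, prev_gray, scale=20.0):
    """p(x_M): temporal change of the pixel between consecutive frames."""
    return np.clip(np.abs(gray - prev_gray) / scale, 0.0, 1.0)

def foreground_prob(gray, background, scale=30.0):
    """p(x_F): deviation from a running-average background model."""
    return np.clip(np.abs(gray - background) / scale, 0.0, 1.0)
```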

From these features and the probability of occurrence of the gesture (H), and assuming that edge, skin color, motion, and foreground are approximately mutually independent features, the probability that the pixel belongs to the gesture region is obtained as

$$p\big(x_H(i,j)\big) = p\big(x_H = \text{true} \mid x_E(i,j),\, x_S(i,j),\, x_M(i,j),\, x_F(i,j)\big)$$

The regions that may be the gesture are computed from this, giving the image blocks that deserve attention, where the conditional probability $p(x_H \mid x_E, x_S, x_M, x_F)$ can be obtained by training. Since each feature can be computed by many different algorithms, the simplest feature-extraction algorithms can be chosen to speed up the computation.

Once the probability distribution of the hand-shaped region in an image has been obtained, probability sampling is performed on that distribution to draw $K$ samples $x_{k,t}$, where $x_{k,t}$, the $k$-th candidate gesture region at time $t$, can be defined by four parameters (a width, a height, and a center position, or equivalently [gesture left bound, gesture upper bound, gesture right bound, gesture lower bound] $= [l, t, r, b]$). The probability sampling here follows the density of the hand-region probability distribution over the image; this is the initial sampling, and it yields the initial position and size of the gesture. A gesture-identification technique then assigns the sampled region a score of similarity to a gesture, which can be defined as

$$z_{k,t} = \mathrm{score}\big(x_{k,t};\, x_E, x_S, x_M, x_F\big)$$

The identification technique can be a support vector machine (SVM), PCA, a color distribution, or the like; the input to such techniques is the probability distributions, or the raw image, at the selected parameter positions in the image, and the output is the gesture-similarity score $z$.

For simplicity, the SVM can be trained on the raw image alone, with the input defined as the region $I_k = \{x(i,j) \mid l_k \le i \le r_k,\, t_k \le j \le b_k\}$, the image $I_k$ being normalized in size to a smaller frame for easier processing. For convenience, the score is here normalized to a value between 0 and 1, where 1 is most similar and 0 least similar, and the candidate-region parameters are propagated probabilistically in time as follows:

$$p(x_t = a) = \sum_{k=1}^{K} z_{k,t-1}\, R\big(a,\, x_{k,t-1}\big)$$

where $R(a, b)$ is a kernel function used to turn the above equation into a continuous function and can be chosen as an independent multidimensional Gaussian,

$$R(a, b) = \exp\!\Big(-\sum_{k} \frac{(a_k - b_k)^2}{\sigma_k^2}\Big)$$
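A minimal sketch of this stage, under stated assumptions, follows: the trained conditional probability is replaced by a naive product of the four feature maps (leaning on the independence assumption above), candidate boxes [l, t, r, b] are drawn with centers distributed as the hand-probability density, and mean_box_score is only a placeholder for the trained SVM, PCA, or color-distribution scorer. The fixed box half-sizes and all names are assumptions of this sketch.

```python
# Illustrative stage-two capture: fuse the feature maps, sample K candidate
# boxes, and score them; the product fusion and the box scorer are stand-ins,
# not the patent's trained conditional probability or trained classifier.
import numpy as np

def hand_prob_map(p_e, p_s, p_m, p_f):
    """Stand-in for the trained p(x_H | x_E, x_S, x_M, x_F)."""
    p = p_e * p_s * p_m * p_f + 1e-12              # floor avoids an all-zero map
    return p / p.sum()                             # a density over the image

def sample_boxes(p_hand, k=50, half_w=20, half_h=25, rng=None):
    """Draw K candidate regions [l, t, r, b] with centres ~ p_hand."""
    rng = rng or np.random.default_rng()
    h, w = p_hand.shape
    flat = rng.choice(h * w, size=k, p=p_hand.ravel())
    cy, cx = np.divmod(flat, w)                    # row-major index -> (y, x)
    l = np.clip(cx - half_w, 0, w - 1); r = np.clip(cx + half_w, 0, w - 1)
    t = np.clip(cy - half_h, 0, h - 1); b = np.clip(cy + half_h, 0, h - 1)
    return np.stack([l, t, r, b], axis=1)

def mean_box_score(p_hand, box):
    """Placeholder z_{k,t} in [0, 1]: probability mass inside the box."""
    l, t, r, b = box
    return float(np.clip(p_hand[t:b + 1, l:r + 1].sum(), 0.0, 1.0))
```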

In this way the probability that different parameter sets $x$ are the gesture region is passed on through time, and other kernels, such as wavelet transforms, may be used in place of the Gaussian. Repeating the above steps yields the probability-distribution results for continuous gesture tracking; the parameter set with the maximum probability is simply taken as the gesture-tracking result.

An embodiment illustrates the object tracking method of the present invention. First, an image such as that shown in Fig. 3(a) is input; after detection, the edge-probability, skin-color-probability, motion-probability, and foreground-probability maps of Figs. 3(b) to 3(e) are obtained, followed by the probability distribution of the hand-shaped region shown in Fig. 3(f). The hand-region parameters are then sampled by probability, giving Fig. 3(g), and the hand-region parameters are scored with the object-identification metric and propagated probabilistically, giving the maximum-score result of Fig. 3(h). Finally, the user decides whether to end tracking.
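The propagation step described above can be sketched as follows, reusing the candidate boxes and scores from the sampling sketch. Weighting each new candidate's identification score by the propagated prior is one plausible reading of taking the maximum-probability parameter, and the per-parameter sigma values are assumed here, not given by the patent.

```python
# Illustrative temporal propagation of candidate-region parameters.
import numpy as np

def kernel_R(a, boxes, sigma):
    """R(a, b) = exp(-sum_k (a_k - b_k)^2 / sigma_k^2), vectorized over boxes."""
    d = (a - boxes) / sigma
    return np.exp(-np.sum(d * d, axis=-1))

def propagated_prob(a, prev_boxes, prev_scores, sigma):
    """p(x_t = a) = sum_k z_{k,t-1} * R(a, x_{k,t-1})."""
    return float(np.sum(prev_scores * kernel_R(a, prev_boxes, sigma)))

def track_step(new_boxes, new_scores, prev_boxes, prev_scores,
               sigma=(8.0, 8.0, 8.0, 8.0)):
    """Weight each new candidate by the temporal prior and return the
    maximum-probability box, i.e. the tracking result for this frame."""
    sigma = np.asarray(sigma, dtype=float)
    prev = prev_boxes.astype(float)
    post = np.array([s * propagated_prob(box.astype(float), prev,
                                         prev_scores, sigma)
                     for box, s in zip(new_boxes, new_scores)])
    return new_boxes[int(np.argmax(post))], post
```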
In summary, the present invention proposes an image-based object tracking method in which the pixels of the input image sequence pass through edge detection, skin-color detection, motion-region detection, and foreground-region detection to give the probability distributions of the pixel features; a conditional probability operation on these four maps gives the probability distribution of the possible object region; candidate object-position regions are selected from the probability map; each candidate region is given an object-identification metric score; the region with the maximum probability is taken as the object region; and the metric scores are propagated probabilistically across consecutive images. Repeating these steps constitutes the continuous object tracking method. The invention computes probability in two stages, namely the probability distributions of the pixels and of the candidate regions, to obtain the initial position of the object and the parameters of the final object region, so that system initialization can be standardized for use in different environments; and its markerless tracking technique of low computational cost effectively obtains the motion trajectory without the aid of a glove or marker, yielding the region and trajectory of the object as an auxiliary human-machine interface. It can be applied in industries such as human-machine interfaces and games, for example as a bare-hand mouse, or in home care: large hospitals depend on round-the-clock staff to look after patients, so a suitable human-machine interface enables machine-assisted care that relieves the workload.

The foregoing describes the features of the present invention by way of embodiments, with the aim of enabling those skilled in the art to understand its content and practice it accordingly, not of limiting the patent scope of the invention; equivalent modifications or alterations that do not depart from the spirit disclosed herein should therefore still be covered by the claims set forth below.

[Brief Description of the Drawings]
Fig. 1 is a simplified flow chart of the present invention.
Fig. 2 is a detailed flow chart of the present invention.
Figs. 3(a) to 3(h) are diagrams of the steps obtained using the method of the present invention.

[Description of Main Component Symbols]
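Read end to end, the embodiment above amounts to the loop below, in the spirit of the Fig. 2 flow. It chains the earlier sketches (edge_prob, skin_prob, motion_prob, foreground_prob, hand_prob_map, sample_boxes, mean_box_score, track_step) together with a hypothetical grab_frame() camera source; all of these are illustrative assumptions rather than the patent's implementation.

```python
# Illustrative end-to-end tracking loop chaining the sketches above.
import numpy as np

def track(grab_frame, n_frames=300, k=50):
    """Yield one [l, t, r, b] object box per frame (steps S30-S48)."""
    prev_gray = grab_frame().mean(axis=2)          # bootstrap frame
    background = prev_gray.copy()
    prev_boxes = prev_scores = None
    for _ in range(n_frames):
        rgb = grab_frame()                         # S30: capture an image
        gray = rgb.mean(axis=2)
        p = hand_prob_map(edge_prob(gray),         # S32-S34: pixel stage
                          skin_prob(rgb),
                          motion_prob(gray, prev_gray),
                          foreground_prob(gray, background))
        boxes = sample_boxes(p, k=k)               # S36: parameter sampling
        scores = np.array([mean_box_score(p, b) for b in boxes])  # S38-S40
        if prev_boxes is None:                     # initial sampling
            best = boxes[int(np.argmax(scores))]
        else:                                      # S42-S44: propagate, pick max
            best, _ = track_step(boxes, scores, prev_boxes, prev_scores)
        yield best                                 # object position at time t
        prev_boxes, prev_scores = boxes, scores
        prev_gray = gray
        background = 0.95 * background + 0.05 * gray   # update background model
```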


Claims (1)

X. Scope of Claims:
1. An image-based object tracking method, comprising the steps of:
continuously inputting images of an object;
detecting features of every pixel of the images to obtain a probability distribution of each feature;
performing a conditional probability operation on the probability distributions to obtain a probability distribution of a possible region of the object;
selecting a plurality of candidate regions of the object position from the probability distribution of the possible object region, and sampling parameters therefrom;
scoring the results of the parameter sampling with an object-identification metric, the scores serving as the probability distribution for the next image of the object; and
taking the sampled parameter result with the maximum object-identification score as the position of the object.
2. The image-based object tracking method of claim 1, wherein the features of the continuous images are detected by at least one of edge detection, skin-color detection, motion detection, and foreground detection.
3. The image-based object tracking method of claim 2, wherein the probability distributions of the features are at least one of an edge probability, a skin-color probability, a motion probability, and a foreground probability.
4. The image-based object tracking method of claim 1, wherein the candidate regions of the object position are selected using a probability map.
5. The image-based object tracking method of claim 1, wherein the image of the object is input into an object-similarity model so that the results of the parameter sampling are scored with the object-identification metric.
6. The image-based object tracking method of claim 1, wherein the object is movable.
7. The image-based object tracking method of claim 6, wherein the object is a hand.
8. The image-based object tracking method of claim 1, wherein the continuous images of the object are obtained with a camera.
9. The image-based object tracking method of claim 1, wherein the step of taking the sampled parameter result with the maximum object-identification score as the position of the object is followed by selecting candidate regions of the object position in the next image and repeating the subsequent steps to continue tracking, until tracking is no longer desired.
TW94112512A 2005-04-20 2005-04-20 Image-based object tracking method TWI274296B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW94112512A TWI274296B (en) 2005-04-20 2005-04-20 Image-based object tracking method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW94112512A TWI274296B (en) 2005-04-20 2005-04-20 Image-based object tracking method

Publications (2)

Publication Number Publication Date
TW200638287A TW200638287A (en) 2006-11-01
TWI274296B true TWI274296B (en) 2007-02-21

Family

ID=38623124

Family Applications (1)

Application Number Title Priority Date Filing Date
TW94112512A TWI274296B (en) 2005-04-20 2005-04-20 Image-based object tracking method

Country Status (1)

Country Link
TW (1) TWI274296B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8081215B2 (en) 2007-12-19 2011-12-20 Industrial Technology Research Institute Tagging and path reconstruction method utilizing unique identification and the system thereof
TWI496114B (en) * 2012-11-23 2015-08-11 Univ Nat Taiwan Image tracking device and image tracking method thereof
US9117138B2 (en) 2012-09-05 2015-08-25 Industrial Technology Research Institute Method and apparatus for object positioning by using depth images
US11037307B2 (en) 2018-08-06 2021-06-15 Institute For Information Industry Method and electronic apparatus for comparing tracking object

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI401411B (en) * 2009-06-25 2013-07-11 Univ Shu Te Tracing Method and System of Shape Contour of Object Using Gradient Vector Flow
TWI489317B (en) * 2009-12-10 2015-06-21 Tatung Co Method and system for operating electric apparatus
TWI637323B (en) * 2017-11-20 2018-10-01 緯創資通股份有限公司 Method, system, and computer-readable recording medium for image-based object tracking
TWI731466B (en) 2019-11-07 2021-06-21 財團法人資訊工業策進會 Computing device and method for generating an object-detecting model and object-detecting device
TWI775128B (en) * 2020-08-13 2022-08-21 蔡明勳 Gesture control device and control method thereof

Also Published As

Publication number Publication date
TW200638287A (en) 2006-11-01

Similar Documents

Publication Publication Date Title
TWI274296B (en) Image-based object tracking method
Medina-Carnicer et al. A novel method to look for the hysteresis thresholds for the Canny edge detector
Moilanen et al. Spotting rapid facial movements from videos using appearance-based feature difference analysis
Shenoy et al. Real-time Indian sign language (ISL) recognition
Li et al. Expression-robust 3D face recognition via weighted sparse representation of multi-scale and multi-component local normal patterns
Buehler et al. Long term arm and hand tracking for continuous sign language TV broadcasts
Lin et al. Triangle-based approach to the detection of human face
Li et al. Fully automatic 3D facial expression recognition using polytypic multi-block local binary patterns
Boutellaa et al. On the use of Kinect depth data for identity, gender and ethnicity classification from facial images
Grewe et al. Fully automated and highly accurate dense correspondence for facial surfaces
Chang et al. Spatio-temporal hough forest for efficient detection–localisation–recognition of fingerwriting in egocentric camera
Direkoğlu et al. Shape classification via image-based multiscale description
Kong et al. Learning hierarchical 3D kernel descriptors for RGB-D action recognition
JP6651388B2 (en) Gesture modeling device, gesture modeling method, program for gesture modeling system, and gesture modeling system
Bhuyan et al. Hand pose recognition from monocular images by geometrical and texture analysis
Kuai et al. Learning adaptively windowed correlation filters for robust tracking
Donoser et al. Robust planar target tracking and pose estimation from a single concavity
Sarma et al. Hand detection by two-level segmentation with double-tracking and gesture recognition using deep-features
Popov et al. Long Hands gesture recognition system: 2 step gesture recognition with machine learning and geometric shape analysis
Weerasekera et al. Robust asl fingerspelling recognition using local binary patterns and geometric features
Dalka et al. Human-Computer Interface Based on Visual Lip Movement and Gesture Recognition.
Al-Rahayfeh et al. Application of head flexion detection for enhancing eye gaze direction classification
Roy et al. Real time hand gesture based user friendly human computer interaction system
Akyol et al. Finding relevant image content for mobile sign language recognition
Lee et al. An effective method for detecting facial features and face in human–robot interaction