TW202026940A - Target tracking method - Google Patents
- Publication number: TW202026940A
- Application number: TW108100807A
- Authority: TW (Taiwan)
- Prior art keywords: frame, tracking, target, result, tracking frame
- Landscapes: Image Analysis (AREA)
Description
The present invention relates to a target tracking method, and in particular to a target tracking method that uses a reference area together with a target object.
The conventional Camshift (Continuously Adaptive Mean Shift) algorithm tracks a moving object by exploiting its color distribution. An object is first selected as the tracking target and its color histogram is built; a back-projection map of the whole scene (frame) is then computed, and the centroid of the projected probability is taken as the object's position. Tracking is achieved by repeating the same steps on each subsequent frame.
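The mean-shift loop at the heart of Camshift can be sketched in a few lines. This is a simplified illustration, not the patent's implementation: it takes a back-projection map (each pixel's probability of belonging to the target's color histogram) and repeatedly re-centers the window on the probability mass inside it.

```python
import numpy as np

def mean_shift(backproj, window, max_iter=20, eps=1.0):
    """Re-center `window` = (x, y, w, h) on the centroid of the
    probability mass inside it until the shift falls below `eps`."""
    x, y, w, h = window
    for _ in range(max_iter):
        roi = backproj[y:y + h, x:x + w]
        total = roi.sum()
        if total == 0:                      # no probability mass: give up
            break
        ys, xs = np.mgrid[0:roi.shape[0], 0:roi.shape[1]]
        cx = (xs * roi).sum() / total       # centroid within the window
        cy = (ys * roi).sum() / total
        nx = int(round(x + cx - w / 2))     # move window onto the centroid
        ny = int(round(y + cy - h / 2))
        if abs(nx - x) < eps and abs(ny - y) < eps:
            break
        x, y = max(nx, 0), max(ny, 0)
    return x, y, w, h

# A square blob of probability around (30, 30) pulls the window toward it.
bp = np.zeros((100, 100))
bp[20:40, 20:40] = 1.0
print(mean_shift(bp, (5, 5, 20, 20)))
```

Camshift extends this loop by also adapting the window size and orientation on each frame; the fixed-size version above shows only the convergence step.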
When the Camshift algorithm is applied to person tracking, the target tracking frame is usually locked onto the face, and tracking proceeds according to the characteristics of the face's skin color. Under normal circumstances this tracks the target person smoothly; once the skin color is disturbed, however, the tracking accuracy drops. Moreover, if the tracking frame drifts away from the person originally being tracked for some reason, the correct face may never be recovered, leading to a cascade of subsequent incorrect tracking results.
For example, when the face being tracked is completely occluded by another face, the similar skin colors cause the Camshift algorithm to start tracking the other person. Even if the original person later reappears in the frame, the Camshift tracking frame is already locked onto the other face and can no longer return to the original one, producing incorrect tracking results.
The color of the tracked person's clothes can therefore be added as an auxiliary cue for judging the situation when two people cross paths. However, if the two people wear clothes of the same color, this approach still cannot accurately determine which one is the person originally being tracked.
In view of this, the present invention provides a target tracking method to solve the above problems.
A target tracking method according to an embodiment of the present invention is adapted to track a target object across a consecutive first frame and second frame. The method comprises: in the first frame, executing an iterative algorithm according to a first initial tracking frame to extract a first result tracking frame and a first reference area, the first result tracking frame containing the target object and the first reference area adjoining the first result tracking frame; extracting, outside the first result tracking frame, a second result tracking frame and a second reference area, the second reference area adjoining the second result tracking frame; and, after the second result tracking frame is extracted: computing, with a comparison algorithm, a first degree of similarity between the first reference area and a preset reference feature and a second degree of similarity between the second reference area and the preset reference feature; and computing, with a template matching algorithm, a first feature value of the first result tracking frame, a first degree of difference between the first feature value and a preset feature value, a second feature value of the second result tracking frame, and a second degree of difference between the second feature value and the preset feature value, wherein the preset feature value is associated with a target template of the target object. When the first degree of similarity and the second degree of similarity are not equal, the first or second result tracking frame corresponding to the larger of the two degrees of similarity is used as the first initial tracking frame of the second frame; when the two degrees of similarity are equal, the first or second result tracking frame corresponding to the smaller of the first and second degrees of difference is used as the first initial tracking frame of the second frame.
In an embodiment of the target tracking method, the first feature value, the second feature value, and the preset feature value are a luminance, a chrominance, a chroma density, a hue, a saturation, or a lightness of the target object.
In an embodiment, the method further comprises, before extracting the second result tracking frame, updating the preset feature value of the target template at least once; and, after extracting the second result tracking frame, suspending the updating of the target template.
In an embodiment, extracting the second result tracking frame comprises: executing the iterative algorithm to obtain a plurality of second initial tracking frames outside the first result tracking frame; and selecting the second initial tracking frames one by one and computing a confidence index of the selected second initial tracking frame with respect to the target object. When the confidence index is smaller than a threshold, the next of the second initial tracking frames is selected and the target template is updated at least once according to a feature value of the target object; when the confidence index is greater than or equal to the threshold, the selected second initial tracking frame is used as the second result tracking frame and the updating of the target template is suspended.
In an embodiment, the confidence index is the proportion of the pixel values of the second result tracking frame that are associated with the target object.
In an embodiment, the first reference area has a fixed relative positional relationship to the first result tracking frame, and the second reference area has the same fixed relative positional relationship to the second result tracking frame.
In an embodiment, the target object is a human face.
In an embodiment, the iterative algorithm is the Camshift algorithm. The comparison algorithm is one of a correlation coefficient algorithm, a chi-square algorithm, an intersection algorithm, and a Bhattacharyya distance algorithm. The template matching algorithm is Mean Absolute Differences (MAD), Sum of Absolute Differences (SAD), Sum of Squared Differences (SSD), Mean Square Differences (MSD), Normalized Cross Correlation (NCC), the Sequential Similarity Detection Algorithm (SSDA), or Sum of Absolute Transformed Differences (SATD).
The above description of the present disclosure and the following description of the embodiments demonstrate and explain the spirit and principles of the present invention, and provide further explanation of the scope of the claims.
The detailed features and advantages of the present invention are described in the embodiments below in sufficient detail to enable any person skilled in the art to understand and practice the technical content of the invention; from the disclosure, the claims, and the drawings, any person skilled in the art can readily understand the related objects and advantages of the invention. The following embodiments further illustrate aspects of the invention in detail without limiting its scope in any respect.
FIG. 1 is a block diagram of a target tracking device 100 according to an embodiment of the present invention. As shown in FIG. 1, the target tracking device 100 includes a reading unit 110, an iterative processing unit 120, and a determination unit 130. The iterative processing unit 120 is electrically connected to the reading unit 110, and the determination unit 130 is electrically connected to the reading unit 110 and the iterative processing unit 120. The reading unit 110, the iterative processing unit 120, and the determination unit 130 may each be implemented by any of various chips or processors, without limitation.
The target tracking device 100 can execute the target tracking method of an embodiment of the present invention to track a target object — for example, a human face — across a plurality of consecutive frames. The following description uses a consecutive first frame and second frame as an example. Please refer to FIG. 1, FIG. 2, and FIGS. 3A–3E together; FIG. 2 is a flowchart of the target tracking method according to an embodiment of the present invention, and FIGS. 3A–3E are schematic diagrams of tracking a target object according to an embodiment of the present invention. When the target tracking method starts at step S300, the reading unit 110 first reads the first frame.
Please refer to step S310: extract the first result tracking frame 10 and the first reference area 30. Specifically, in the first frame, the iterative processing unit 120 executes an iterative algorithm according to a first initial tracking frame to extract the first result tracking frame 10 and the first reference area 30; the first result tracking frame 10 contains the target object, and the first reference area 30 adjoins the first result tracking frame 10. In practice, the user may set the first initial tracking frame through the target tracking device 100 or by other means. The iterative algorithm is, for example, the Camshift (Continuously Adaptive Mean Shift) algorithm; executing it yields the first result tracking frame 10 containing the face of person 1, as shown in FIG. 3A. The first reference area 30 may have a fixed relative positional relationship to the first result tracking frame 10 — for example, the first reference area 30 lies below the first result tracking frame 10 at a distance of 1/2 the tracking frame's length and has the same size as the first result tracking frame 10, though the invention is not limited to this. The first reference area 30 corresponds to the chest area of person 1's upper garment. In general, the difference in clothing color between different people is larger than the difference in their skin color, so using clothing color as an auxiliary cue for the face being tracked increases the accuracy of the tracking result.
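The fixed offset between a result tracking frame and its reference area reduces to simple rectangle arithmetic. A minimal sketch of the example geometry above (a same-sized region placed half a frame-height below the face frame); the offset factor is a tunable assumption, not a value fixed by the method:

```python
def reference_region(frame, gap_ratio=0.5):
    """Given a tracking frame (x, y, w, h), with y growing downward,
    return a same-sized reference region `gap_ratio * h` below it."""
    x, y, w, h = frame
    return (x, y + h + int(gap_ratio * h), w, h)

# A 40x60 face frame at (100, 50) yields a chest region at (100, 140).
print(reference_region((100, 50, 40, 60)))
```

Because the second reference area uses the same relative offset as the first, the one function serves both result tracking frames.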
In step S310, the determination unit 130 extracts a first reference feature from the first reference area 30. The first reference feature represents, for example, the clothing color associated with the face being tracked — e.g., a histogram of the hue and saturation of the first reference area 30 in the HSV color space. In addition, the first reference feature of the first reference area 30 corresponding to the first result tracking frame 10 obtained the first time the iterative algorithm is executed may serve as the preset reference feature.
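A hue–saturation histogram of the kind described here can be sketched with NumPy's 2-D histogram. The bin counts and the OpenCV-style value ranges (H in [0, 180), S in [0, 256)) are illustrative assumptions:

```python
import numpy as np

def hs_histogram(hsv_pixels, h_bins=30, s_bins=32):
    """Normalized 2-D hue/saturation histogram of an (N, 3) HSV array."""
    hist, _, _ = np.histogram2d(
        hsv_pixels[:, 0], hsv_pixels[:, 1],
        bins=[h_bins, s_bins], range=[[0, 180], [0, 256]])
    return hist / max(hist.sum(), 1)   # normalize so the bins sum to 1

rng = np.random.default_rng(0)
pixels = rng.uniform([0, 0, 0], [180, 256, 256], size=(1000, 3))
hist = hs_histogram(pixels)
print(hist.shape, hist.sum())
```

Ignoring the V channel makes the reference feature less sensitive to lighting changes, which is the usual reason H-S histograms are preferred for color tracking.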
Please refer to step S320: update the target template at least once. Specifically, the target template (image patch) carries the feature values of the target object (the face). A feature value is, for example, the luminance, chrominance, chroma density, hue, saturation, or lightness of the target object. As long as the person's pose does not change — no turning from frontal to profile, no raising or lowering of the head — the target template obtained from the feature values remains essentially unchanged; that is, the target template can represent the characteristics of the face. In practice, however, the tracked person does not always appear frontally in the image, so the target template must be updated to increase the accuracy of subsequent tracking decisions. For example, when no other face appears in the current first frame, the target template corresponding to the target object may be updated at fixed intervals, although the update timing is not limited to this. In addition, once the target tracking method starts at step S300 and the target object to be tracked has been selected, the target template of that object can be captured immediately as the initial target template.
Please refer to step S330: extract the second result tracking frame and the second reference area 40. Specifically, the second result tracking frame and the second reference area 40 are extracted outside the first result tracking frame 10, with the second reference area 40 adjoining the second result tracking frame. In practice, like the fixed relative positional relationship between the first reference area 30 and the first result tracking frame 10, the second reference area 40 has the same fixed relative positional relationship to the second result tracking frame, though the invention is not limited to this. Extracting the second result tracking frame includes the following steps: executing the iterative algorithm to obtain a plurality of second initial tracking frames outside the first result tracking frame 10, such as frames 20a–20h of FIGS. 3C and 3D; then selecting the second initial tracking frames one by one and computing a confidence index of the selected frame with respect to the target object. The confidence index decides whether the selected second initial tracking frame can serve as the second result tracking frame. In practice, the confidence index is, for example, the proportion of the pixel values in the frame that are associated with the target object — in other words, the proportion of skin color within the frame is used to judge whether it contains a face. Accordingly, when the determination unit 130 finds the confidence index smaller than a threshold, it selects the next of the second initial tracking frames and selectively updates the target template according to a feature value of the target object. For example, it may choose not to update the target template (the absence of nearby skin-colored objects does not mean no one else is nearby — two people may be overlapping), or it may update the template once no nearby skin-colored object has been detected for a sustained period; these update policies are illustrative and do not limit the invention. On the other hand, when the confidence index is greater than or equal to the threshold, the determination unit 130 uses the selected second initial tracking frame as the second result tracking frame. For example, each time a second initial tracking frame is selected, the percentage of skin color within it is computed and compared against a threshold to judge whether the frame contains a face of sufficient size. As shown in FIGS. 3C–3E, after evaluating the frames one by one, the skin-color proportion of the second result tracking frame 21f obtained from, e.g., the sixth second initial tracking frame 20f is greater than or equal to the threshold.
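The confidence index — the fraction of pixels in a candidate frame that look like skin — can be sketched as a threshold test on an HSV crop. The skin-tone bounds and the acceptance threshold below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def skin_ratio(hsv_roi, h_max=25, s_min=40, v_min=60):
    """Fraction of pixels in an (H, W, 3) HSV crop that fall inside a
    rough skin-tone range (low hue, enough saturation and value)."""
    h, s, v = hsv_roi[..., 0], hsv_roi[..., 1], hsv_roi[..., 2]
    mask = (h <= h_max) & (s >= s_min) & (v >= v_min)
    return mask.mean()

def is_face_candidate(hsv_roi, threshold=0.3):
    """Accept the frame as a face candidate when the skin ratio
    reaches the threshold (both bounds and threshold are assumptions)."""
    return skin_ratio(hsv_roi) >= threshold

# Left half skin-colored, right half black: the ratio is 0.5.
roi = np.zeros((10, 10, 3))
roi[:, :5] = (10, 150, 200)     # skin-ish HSV triple
print(skin_ratio(roi))
```

Each second initial tracking frame would be cropped, scored this way, and either promoted to the second result tracking frame or skipped.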
Note that once the second result tracking frame has been extracted, the frame effectively contains two or more target objects, so updating of the target template must be suspended to avoid updating it with the face of someone other than the person originally being tracked. The invention is not limited to this, however; in practice, template updating may also be suspended as soon as the second initial tracking frames begin to be acquired.
Please refer to step S340: compute the first and second degrees of similarity and the first and second degrees of difference. Specifically, after the second result tracking frame is extracted, the determination unit 130 executes a comparison algorithm to compute the first degree of similarity between the first reference area 30 and the preset reference feature, and the second degree of similarity between the second reference area 40 and the preset reference feature. The comparison algorithm is, for example, a correlation coefficient algorithm, a chi-square algorithm, an intersection algorithm, or a Bhattacharyya distance algorithm. In practice, when the tracked person is occluded, as shown in FIGS. 3B–3C, the reference features within the first and second reference areas (corresponding to the upper garments of persons 1 and 2) are compared against the preset reference feature (the upper garment of the originally tracked person 1) to determine which result tracking frame's reference area has feature values closer to the preset reference feature — that is, which frame's garment color more closely resembles that of the originally tracked person 1.
Step S340 further includes computing, with a template matching algorithm, the first feature value of the first result tracking frame 10, the first degree of difference between the first feature value and the preset feature value, the second feature value of the second result tracking frame, and the second degree of difference between the second feature value and the preset feature value, where the preset feature value is associated with a target template. In practice the template matching algorithm is, for example, Mean Absolute Differences (MAD), Sum of Absolute Differences (SAD), Sum of Squared Differences (SSD), Mean Square Differences (MSD), Normalized Cross Correlation (NCC), the Sequential Similarity Detection Algorithm (SSDA), or Sum of Absolute Transformed Differences (SATD). The first and second degrees of difference reflect, respectively, how much the faces in the first and second result tracking frames differ from the preset face template. In an embodiment of the invention, relying on the degrees of similarity alone may fail to identify the correct target object — for instance when the two people in the current frame wear identical or highly similar clothes — so the step of computing these two degrees of difference is added to improve tracking accuracy.
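The sum-of-absolute-differences (SAD) variant of the template-matching step is the simplest to sketch: the difference degree is the pixel-wise absolute error between a candidate face patch and the stored face template. This is the generic formulation, with the patches assumed to have been resized to the same shape:

```python
import numpy as np

def sad(candidate, template):
    """Sum of absolute differences between two same-shaped patches;
    smaller means the candidate face is closer to the face template."""
    return float(np.abs(candidate.astype(float) - template.astype(float)).sum())

template = np.full((4, 4), 100)
face_1 = np.full((4, 4), 102)    # slightly different face
face_2 = np.full((4, 4), 180)    # very different face
print(sad(face_1, template), sad(face_2, template))
```

Swapping in SSD or NCC changes only the per-pixel formula; the decision in step S360 always prefers the candidate with the smaller difference degree.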
Please refer to step S350: determine whether the first degree of similarity and the second degree of similarity are equal. Specifically, the determination unit 130 first judges whether person 1, locked by the first result tracking frame 10, and person 2, locked by the second result tracking frame, are wearing clothes of the same color. If not, proceed to step S370: when the first and second degrees of similarity are unequal, the first or second result tracking frame corresponding to the larger of the two is used as the first initial tracking frame of the second frame — that is, the person whose clothing color is closer to that of the original target becomes the subject of subsequent tracking. Conversely, if the judgment in step S350 is affirmative, proceed to step S360: when the first and second degrees of similarity are equal (or their difference is below a threshold — e.g., a difference within 5% is also judged as equal), the first or second result tracking frame corresponding to the smaller of the first and second degrees of difference is used as the first initial tracking frame of the second frame. In other words, when the clothing colors of the two people locked by the first and second tracking frames are both highly similar to that of the previously tracked person, step S340 has already computed how much each person's face template differs from the (updated) face template of the original person, and step S360 picks the one with the smaller difference — the result tracking frame whose face-template feature values differ least from the original person's — as the first initial tracking frame of the second frame.
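The tie-breaking rule of steps S350–S370 can be sketched as a small pure function: prefer the frame whose reference area is more similar to the preset shirt color, and fall back to the face-template difference only when the shirt similarities are (nearly) equal. The 5% tolerance for "equal" follows the example given above.

```python
def pick_next_frame(frame1, frame2, sim1, sim2, diff1, diff2, tol=0.05):
    """Choose the initial tracking frame for the next video frame.

    sim1/sim2:  shirt-color similarity of each result frame (higher = closer)
    diff1/diff2: face-template difference of each result frame (lower = closer)
    """
    if abs(sim1 - sim2) > tol:                   # shirts are distinguishable
        return frame1 if sim1 > sim2 else frame2
    return frame1 if diff1 <= diff2 else frame2  # tie: compare faces instead

# Shirts clearly differ: follow the shirt color.
print(pick_next_frame("A", "B", 0.9, 0.4, 50, 10))   # -> A
# Shirts look the same: follow the smaller face difference.
print(pick_next_frame("A", "B", 0.9, 0.88, 50, 10))  # -> B
```

Keeping the rule as a pure function makes it easy to unit-test the occlusion scenarios of FIGS. 3B–3E independently of the image-processing code.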
In summary, in the target tracking method of an embodiment of the present invention, at initialization the face inside the result tracking frame detected by face detection serves as the target template, and the clothing color in the reference area is recorded. The reference area may be set at a distance of 1/2 the face height below the face and given the same size as the target tracking frame (or any size sufficient to represent the clothing color). While the clothing color is being recorded, the feature values of all pixels of the target template are also recorded as the face template.
Thus, when the tracked person is fully occluded, the Camshift algorithm will latch onto another, unoccluded target. The clothing color represented by the reference area is then used to judge the situation: if the clothing colors differ, then once the two people separate again, the second result tracking frame picks the target person whose clothing color is identical or most similar and continues tracking. If both people's clothing colors match the original, the method instead compares how much each result tracking frame's face template differs from the original target template, and in the next frame tracks the target object whose face-template difference is smaller. In addition, in implementation, the face framed by a subsequent first or second result tracking frame may not correspond to the position of the initial first result tracking frame 10, so motion-estimation techniques can be used to find the closest nearby position.
Based on the above, the main effect of the present invention is to raise the accuracy of person tracking. Its architecture is built on an iterative algorithm (e.g., Camshift), uses the surrounding result tracking frames as starting points to predict possible tracked persons, and takes the clothing-color judgment as an aid; when the clothing colors are the same, the comparison of face templates serves as the further basis for the decision. In this way, even when the clothing colors are identical, the person originally intended to be tracked can still be tracked correctly. Meanwhile, the face template is updated only when no other occluding person is judged to be present in the current frame, so that when the tracked person's face later moves to a different angle, the accuracy of measuring the degree of difference against the face template is preserved.
Although the present invention is disclosed in the foregoing embodiments, they are not intended to limit it. Changes and modifications made without departing from the spirit and scope of the present invention fall within its scope of patent protection; for the scope of protection defined by the present invention, please refer to the appended claims.
100: target tracking device
110: reading unit
120: iterative processing unit
130: determination unit
1, 2: persons
10: first result tracking frame
20a~20h: second initial tracking frames
21f: second result tracking frame
30: first reference area
40: second reference area
S330~S370: steps
FIG. 1 is a block diagram of a target tracking device according to an embodiment of the present invention. FIG. 2 is a flowchart of a target tracking method according to an embodiment of the present invention. FIGS. 3A–3E are schematic diagrams of tracking a target object according to an embodiment of the present invention.
S300~S370: steps of the target tracking method
Claims (10)
Priority Applications (1)
- TW108100807A (granted as TWI706330B) — priority date 2019-01-09, filing date 2019-01-09 — Target tracking method
Publications (2)
- TW202026940A — published 2020-07-16
- TWI706330B — published 2020-10-01
Family ID: 73005064
Cited By (2)
- CN112288774A — priority 2020-10-22, published 2021-01-29 — 深圳市华宝电子科技有限公司 — Movement detection method and device, electronic equipment and storage medium
- CN114140494A — priority 2021-06-30, published 2022-03-04 — 杭州图灵视频科技有限公司 — Single-target tracking system and method in complex scene, electronic device and storage medium
Family Cites Families (5)
- TWI612482B — priority 2016-06-28, published 2018-01-21 — 圓展科技股份有限公司 — Target tracking method and target tracking device
- TWI641265B — priority 2017-04-07, published 2018-11-11 — 國家中山科學研究院 — Mobile target position tracking system
- CN107993256A — priority 2017-11-27, published 2018-05-04 — 广东工业大学 — Dynamic target tracking method, apparatus and storage medium
- CN109146917B — priority 2017-12-29, published 2020-07-28 — 西安电子科技大学 — Target tracking method for elastic updating strategy
- CN109145752B — priority 2018-07-23, published 2022-07-01 — 北京百度网讯科技有限公司 — Method, apparatus, device and medium for evaluating object detection and tracking algorithms
Application events
- 2019-01-09: Application TW108100807A filed in Taiwan; patent TWI706330B active