TWI684956B - Object recognition and tracking system and method thereof - Google Patents
- Publication number
- TWI684956B (application TW107143429A)
- Authority
- TW
- Taiwan
- Prior art keywords
- tracking
- object recognition
- mobile device
- module
- template
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/251—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
Description
The present invention relates to object recognition and tracking technology, and more particularly to an object recognition and tracking system and a method thereof.
In one prior art, a moving-object tracking method and an electronic device are proposed that receive multiple video streams from multiple cameras and determine an object's position and movement path by comparing multiple different frames. However, this prior art can only track the translational position of an object in the image; it cannot recognize and track the object itself or determine the object's viewing angle.
In another prior art, a multi-tracker object tracking system is proposed that integrates several trackers (e.g., a contour tracker and an optical tracker) to work together and obtain stable object tracking. However, this prior art has difficulty reducing the amount of computation required for object tracking.
Therefore, how to overcome the above shortcomings of the prior art, so as to recognize and track an object and determine its viewing angle, or to reduce the computation required for object tracking, has become a major issue for those skilled in the art.
The present invention provides an object recognition and tracking system and a method thereof, which can recognize and track an object, determine the object's viewing angle, and reduce the computation required for object tracking.
The object recognition and tracking system of the present invention includes: a server having a template construction module and a feature extraction module, wherein the template construction module constructs a plurality of templates of different viewing angles by projecting a three-dimensional model of an object, and the feature extraction module extracts, analyzes, or condenses template-feature data of the plurality of templates of different viewing angles; and a mobile device that obtains or downloads the template-feature data from the server. The mobile device has an object recognition and tracking module that compares the template-feature data to recognize the object and its viewing angle, and that tracks the viewing angle of the object using the Iterative Closest Point (ICP) algorithm together with a hidden-surface removal method and a bidirectional correspondence check. When executing the ICP algorithm, the object recognition and tracking module uses the hidden-surface removal method to remove or ignore template features that cannot be observed from the object's viewing angle; and when the ICP algorithm searches for the closest data of the template features, the object recognition and tracking module uses the bidirectional correspondence check to verify or search in both directions whether two data points of the template features are each other's closest data.
The object recognition and tracking method of the present invention includes: constructing, by a template construction module of a server, a plurality of templates of different viewing angles by projecting a three-dimensional model of an object, and extracting, analyzing, or condensing, by a feature extraction module of the server, template-feature data of the plurality of templates of different viewing angles; and obtaining or downloading, by a mobile device, the template-feature data from the server, and comparing, by an object recognition and tracking module of the mobile device, the template-feature data to recognize the object and its viewing angle. The object recognition and tracking module tracks the viewing angle of the object using the Iterative Closest Point (ICP) algorithm together with a hidden-surface removal method and a bidirectional correspondence check; when executing the ICP algorithm, it uses the hidden-surface removal method to remove or ignore template features that cannot be observed from the object's viewing angle, and when the ICP algorithm searches for the closest data of the template features, it uses the bidirectional correspondence check to verify or search in both directions whether two data points of the template features are each other's closest data.
To make the above features and advantages of the present invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings. Additional features and advantages of the invention will be set forth in part in the following description, will in part be apparent from the description, or may be learned by practice of the invention. The features and advantages of the invention are realized and attained by means of the elements and combinations particularly pointed out in the claims. It should be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not intended to restrict the claimed scope of the invention.
1‧‧‧object recognition and tracking system
10‧‧‧mobile device
11‧‧‧color camera
12‧‧‧depth sensor
13‧‧‧foreground segmentation module
14‧‧‧object recognition and tracking module
141‧‧‧iterative closest point (ICP) algorithm
142‧‧‧hidden-surface removal method
143‧‧‧bidirectional correspondence check
144‧‧‧device motion tracking method
145‧‧‧pose measurement method
15‧‧‧display module
20‧‧‧server
21‧‧‧three-dimensional model reconstruction module
22‧‧‧template construction module
23‧‧‧feature extraction module
A‧‧‧object
B‧‧‧three-dimensional model
C‧‧‧template
D‧‧‧template feature
F1‧‧‧recognition stage
F2‧‧‧tracking stage
T'‧‧‧template matrix
S11 to S14, S21 to S25‧‧‧steps
S31 to S33, S41 to S45‧‧‧steps
FIG. 1 is a schematic architecture diagram of the object recognition and tracking system of the present invention; FIG. 2 is a simplified schematic diagram of the usage flow of the object recognition and tracking system and method of the present invention; FIG. 3A and FIG. 3B are schematic diagrams of constructing multi-view templates by graphical projection according to the present invention; FIG. 4 is a schematic diagram of a plurality of templates rotated about the optical axis according to the present invention; FIG. 5 is a schematic diagram of composing all template vectors into a template matrix according to the present invention; FIG. 6 is a schematic flowchart of the interactive operation of the mobile device of the present invention; and FIG. 7 is a schematic flowchart of the dynamic switching of the mobile device of the present invention in the tracking stage.
The embodiments of the present invention are described below by way of specific examples. Those skilled in the art can readily understand other advantages and effects of the present invention from the contents disclosed in this specification, and the invention may also be carried out or applied through other different embodiments.
Markerless (or marker-based) object recognition and tracking is a key technology for expanding Augmented Reality (AR) applications. The present invention provides an object recognition and tracking system and method, for example a markerless one, that photographs or scans an object (target object) through the color camera and depth sensor of a mobile device, and then recognizes and tracks the object (target object) for subsequent AR applications.
The present invention develops an object recognition and tracking system and method based on computer vision. A color camera and a depth sensor of a mobile device photograph or scan an object (target object), and an object recognition and tracking module analyzes the object's color features and depth information to recognize the state and viewing angle of the object (target object). Moreover, using the motion-sensing information built into the mobile device, when the mobile device moves only slightly within a short time interval, it automatically switches to estimating the motion from the sensing information instead, thereby tracking the three-dimensional (3D) dynamics of the object (target object) with a lower computational load. Meanwhile, the present invention lets the server condense the template data to be recognized in advance, reducing the computation and data volume required for real-time template recognition.
FIG. 1 shows the object recognition and tracking system 1 of the present invention, which includes a mobile device 10 and a server 20. The mobile device 10 may be, for example, a smartphone or a tablet computer; the server 20 may be, for example, a remote server, a cloud server, a web server, or a back-end server, but is not limited thereto.
The server 20 may have a template construction module 22 and a feature extraction module 23. The template construction module 22 constructs a plurality of templates C of different viewing angles by projecting the three-dimensional model B of an object A, and the feature extraction module 23 extracts, analyzes, or condenses the template-feature data D of the templates C of different viewing angles. Meanwhile, the mobile device 10 can obtain or download the template-feature data D from the server 20. The mobile device 10 has an object recognition and tracking module 14 that compares the template-feature data D to recognize the object A and its viewing angle, and the object recognition and tracking module 14 tracks the viewing angle of the object A using the Iterative Closest Point (ICP) algorithm 141 together with the hidden-surface removal method 142 and the bidirectional correspondence check 143. Moreover, when executing the ICP algorithm 141, the object recognition and tracking module 14 uses the hidden-surface removal method 142 to remove or ignore template features D that cannot be observed from the viewing angle of the object A; and when the ICP algorithm 141 searches for the closest data of the template features D, the object recognition and tracking module 14 uses the bidirectional correspondence check 143 to verify or search in both directions whether two data points of the template features D are each other's closest data.
The operation of the object recognition and tracking system 1 can be divided into two parts: a pre-processing stage and an interactive operation stage. The first part, the pre-processing stage, mainly includes: the template construction module 22 of the server 20 taking the three-dimensional model B of the object A and constructing from it a plurality of templates C of different viewing angles, and the feature extraction module 23 of the server 20 extracting the templates C of different viewing angles to produce the corresponding template features D. The second part, the interactive operation stage, mainly includes: the object recognition and tracking module 14 of the mobile device 10 performing recognition and tracking orientation of the object A.
In the pre-processing stage of the object recognition and tracking system 1, the user may photograph or scan the actual object A (target object) with the mobile device 10, or input a three-dimensional model B of the object A (which can also serve as the target object), so that the server 20 builds the plurality of templates C of different viewing angles and the template features D from the three-dimensional model B of the object A. For example, the user may move the mobile device 10 around the object A to photograph or scan it and upload the color images and three-dimensional (3D) point cloud of the object A to the server 20, whereupon the three-dimensional model reconstruction module 21 of the server 20 builds the three-dimensional model B of the object A; alternatively, the user may directly input or upload the three-dimensional model B of the object A to the server 20 through the mobile device 10 or any other electronic device. Then, the template construction module 22 of the server 20 constructs the plurality of templates C of different viewing angles by projecting the three-dimensional model B of the object A, and the feature extraction module 23 of the server 20 extracts, analyzes, or condenses the template-feature data D of the templates C for subsequent comparison.
In the interactive operation stage of the object recognition and tracking system 1, the user can recognize and track the object A through the object recognition and tracking module 14 of the mobile device 10 by the following procedures P11 to P14.
Procedure P11: the object recognition and tracking module 14 of the mobile device 10 compares the template features D of the templates C of different viewing angles to recognize the object A and its viewing angle. For example, after the mobile device 10 obtains or downloads the template-feature data D from the server 20, the object recognition and tracking module 14 of the mobile device 10 can compare the color images and depth information of the template features D to recognize the object A and its (rough) viewing angle.
Procedure P12: the object recognition and tracking module 14 of the mobile device 10 tracks the viewing angle of the object A using the Iterative Closest Point (ICP) algorithm. For example, based on the rough viewing angle of the object A obtained after recognition, the object recognition and tracking module 14 combines the hidden-surface removal method 142 and the bidirectional correspondence check 143 proposed by the present invention to strengthen the angle-tracking performance of the conventional ICP algorithm (iterative approximation) on the object A.
Procedure P13: when the mobile device 10 moves only slightly within a short time interval, the object recognition and tracking module 14 can automatically switch to the device motion tracking method 144 to track the viewing angle of the object A. For example, when the object recognition and tracking module 14 determines that the mobile device 10 has made only small movements within a short time interval, it automatically switches to estimating the relative viewing-angle motion of the object A from the motion-sensing information obtained by the Inertial Measurement Unit (IMU) of the mobile device 10. In this way, the present invention reduces the comparatively complex matching computation for the relative viewing-angle motion of the object A, improves system responsiveness, and reduces computing energy consumption.
Procedure P14: the object recognition and tracking module 14 of the mobile device 10 automatically determines whether it is necessary to switch back to full viewing-angle tracking or to object recognition. For example, the object recognition and tracking module 14 can compare the result of the device motion tracking of the object A with the scene in which the object A is photographed; when the difference between the two exceeds a threshold, the object recognition and tracking module 14 switches back to the full viewing-angle tracking computation, or re-performs recognition of the object's viewing angle.
The five modules mentioned above, namely the foreground segmentation module 13, the object recognition and tracking module 14, the three-dimensional model reconstruction module 21, the template construction module 22, and the feature extraction module 23, may be constructed, composed, or implemented in the form of hardware, firmware, or software. For example, the five modules may be built as a single chip or multiple chips in hardware. Alternatively, the foreground segmentation module 13 may be foreground segmentation software or a program, the object recognition and tracking module 14 may be object recognition and tracking software or a program, the three-dimensional model reconstruction module 21 may be 3D model reconstruction software or a program, the template construction module 22 may be template construction software or a program, and the feature extraction module 23 may be feature extraction software or a program. However, the present invention is not limited thereto.
FIG. 2 is a simplified schematic diagram of the usage flow of the object recognition and tracking system 1 and its method; please refer to FIG. 1 together. Before the triggering procedure, the user can select the object A (see FIG. 2) to be recognized and tracked, such as a toy car or a toy airplane, through the object selection interface F of the mobile device 10 (see FIG. 1) (see step S11 of FIG. 2). If the data of the object A does not exist on the mobile device 10, the mobile device 10 obtains or downloads a data package of the object A from the server 20 (see step S12 of FIG. 2). The data package of the object A contains multi-view template pose information, color template data, depth template data, and weight values, and is stored in the memory (e.g., a hard disk or memory card) of the user's mobile device 10.
The triggering procedure begins after the object A has been selected and its data has been confirmed to exist. The object A is first placed near the center of the screen of the mobile device 10 so that the mobile device 10 can photograph it (see step S13 of FIG. 2). The foreground segmentation module 13 of the mobile device 10 (see FIG. 1) automatically performs foreground segmentation, viewing-angle recognition, and tracking of the object A in the background, and renders the resulting pose of the object A as a three-dimensional (3D) point cloud at the corresponding position of the object on the screen of the mobile device 10, displaying the 3D point-cloud result on the screen through the display module 15 (see step S14 of FIG. 2), or presenting other augmented reality (AR) auxiliary information on the screen of the mobile device 10.
FIG. 3A and FIG. 3B are schematic diagrams of constructing multi-view templates C of the object A by graphical projection according to the present invention; please refer to FIG. 1 together. FIG. 3A concerns an object A of general shape, for which the object A is projected over a hemisphere or at finer angular increments. FIG. 3B concerns an object A of symmetric shape; since similar projection images occur around the axis of symmetry of the object A, only a semicircular set of viewing-angle projections over one cross-section is needed.
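The hemispherical projection described above amounts to sampling camera viewpoints around the model. The following is a minimal sketch of that idea, not code from the patent: function names, ring counts, and the look-at-origin convention are all illustrative assumptions. Each returned position would be one camera pose from which a template C is rendered.

```python
import math

def hemisphere_viewpoints(n_rings=4, n_per_ring=8, radius=1.0):
    """Sample camera positions on the upper hemisphere around the object.

    The object's 3D model is assumed to be centered at the origin, and
    each camera looks toward the origin.  For a symmetric object (FIG. 3B),
    a single semicircular arc over one cross-section would suffice.
    """
    views = []
    for r in range(n_rings):
        # elevation from just above the equator up to near the pole
        elev = (r + 0.5) * (math.pi / 2) / n_rings
        for k in range(n_per_ring):
            azim = 2 * math.pi * k / n_per_ring
            x = radius * math.cos(elev) * math.cos(azim)
            y = radius * math.cos(elev) * math.sin(azim)
            z = radius * math.sin(elev)
            views.append((x, y, z))
    return views

views = hemisphere_viewpoints()
```

A finer angular grid simply means larger `n_rings`/`n_per_ring`; the trade-off is more templates to render, store, and match.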
As shown in FIG. 3A, FIG. 3B, and FIG. 1, in the pre-processing stage, after the mobile device 10 has photographed the object A (target object), the mobile device 10 can transmit the color images and depth information of the object A to the server 20 so that the three-dimensional model reconstruction module 21 of the server 20 models the object A to produce the three-dimensional model B; alternatively, the three-dimensional model B of the object A (target object) can be input directly to the server 20 through the mobile device 10 or any other electronic device. The server 20 then constructs the multi-view templates C from the three-dimensional model B of the object A by graphical projection, and the feature extraction module 23 of the server 20 analyzes the multi-view templates C to obtain the template-feature information D.
FIG. 4 is a schematic diagram of a plurality of templates C of the present invention rotated about the optical axis. To handle quickly the case where an object rotates about the optical axis at a given viewpoint, the present invention also pre-computes a plurality of templates C rotated about the optical axis; such rotation is called in-plane rotation.
FIG. 5 is a schematic diagram of composing all template vectors into a template matrix T' according to the present invention. T1, T2, ..., Tn on the right denote the original template images, and t1', t2', ..., tn' in the middle denote the corresponding images after LoG filtering, where LoG denotes the Laplacian of Gaussian. T' is the template matrix, assembled from the vectorized template data.
Because template comparison is easily disturbed or affected by illumination changes, shadows, and noise, and full-image comparison of the templates C requires a very large amount of computation, the mobile device 10 of the present invention, in order to increase the accuracy of template recognition and its resistance to disturbance, reassembles the information of each template C after LoG (Laplacian of Gaussian) filtering and normalization into a single vector, composes the vectors of all templates C into the template matrix T', and compares feature vectors by means such as cross-correlation.
In addition, the mobile device 10 of the present invention can use Singular Value Decomposition (SVD) to reduce the amount of data required on the mobile device 10, i.e., to reduce the dimensionality of the template matrix T'. At the same time, the present invention retains enough dimensions to represent the original data, reducing the amount of data used without excessively degrading matching accuracy or efficiency. The template-feature data D produced on the server 20 are then packaged as a data set for the mobile device 10 to download and compare against.
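One way this SVD compression can look is sketched below. It is an illustration under our own assumptions (the patent does not specify how many dimensions are kept; the `energy` ratio is our stand-in for "enough dimensions to represent the original data"):

```python
import numpy as np

def compress_templates(T, energy=0.95):
    """Project template matrix T (one template vector per row) onto the
    leading right-singular vectors that retain `energy` of the variance."""
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(cum, energy)) + 1
    basis = Vt[:k]            # k x d projection basis (shipped to the device)
    coeffs = T @ basis.T      # n x k compressed template vectors
    return basis, coeffs

def match_compressed(query_vec, basis, coeffs):
    """Match in the reduced space: project the query, then dot products."""
    q = basis @ query_vec
    scores = coeffs @ q
    return int(np.argmax(scores))
```

Only `basis` and `coeffs` need to be downloaded by the mobile device, which is where the data-volume saving comes from when k is much smaller than the raw template dimension.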
FIG. 6 is a schematic flowchart of the interactive operation of the mobile device 10 of the present invention; please refer to FIG. 1 together. The present invention can photograph or scan the scene containing the object A (target object) through the color camera 11 and depth sensor 12 of the mobile device 10 of FIG. 1, and the foreground segmentation module 13 performs foreground segmentation by techniques such as plane cutting to obtain the contour region of the object A (target object).
Meanwhile, the object recognition and tracking method of the present invention may include the first stage (recognition stage F1) and the second stage (tracking stage F2) of FIG. 6.
In the first stage (recognition stage F1) of FIG. 6, the object recognition and tracking module 14 of FIG. 1 first analyzes the foreground-region features of the object A and compares them with the pre-generated template-feature data D to recognize the state and viewing angle of the object A (target object). After obtaining the object in the foreground region, the object recognition and tracking module 14 normalizes and scales the foreground region to a specified size, applies LoG filtering, normalization, and vectorization to the foreground color and depth images in the same way the templates C were analyzed, and then performs cross-correlation with the pre-generated template matrix T' to compute the similarity to each template C. The template C with the highest cross-correlation score is the most similar one, and its pose is taken as the initial estimated pose of the object A. Then, quaternions are used to check whether the rotation-angle difference between the current result and the previous frame is too large, to avoid erroneous results caused by front and back shapes that are too similar. To ensure the credibility of the matched pose, a template C is adopted only if its similarity exceeds a certain threshold, and the first adopted result is set as the initial matched pose.
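The quaternion check mentioned above can be sketched as follows. This is a minimal illustration, not the patent's code; the 45-degree threshold is an assumption of ours, as the patent does not state the limit.

```python
import math

def quat_angle_deg(q1, q2):
    """Rotation angle (degrees) between two unit quaternions (w, x, y, z).
    q and -q encode the same rotation, hence the abs()."""
    dot = abs(sum(a * b for a, b in zip(q1, q2)))
    dot = min(1.0, dot)  # guard acos against rounding
    return math.degrees(2 * math.acos(dot))

def pose_jump_too_large(q_prev, q_curr, max_deg=45.0):
    """Reject a recognition result whose rotation jumps too far in one frame."""
    return quat_angle_deg(q_prev, q_curr) > max_deg
```

Rejecting large single-frame jumps is what guards against the front/back ambiguity of nearly symmetric shapes described in the text.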
For example, in the first stage (recognition stage F1) of FIG. 6, the object recognition and tracking module 14 can compare the plurality of templates C in step S21 and perform a flip check on the templates C in step S22. If no template C has an angle below the threshold, step S23 is performed to increment a counter, e.g., Cmiss (detection failure), by 1; and if Cmiss exceeds 5, step S24 is performed to reset the initial matched pose. Conversely, if some template C has an angle below the threshold, step S25 is performed to set the matched pose.
In the second stage (tracking stage F2) of FIG. 6, the object recognition and tracking module 14 can perform ICP (iterative closest point) tracking or device motion tracking in step S31 based on the matched pose set in step S25. If tracking fails, the flow returns to the recognition stage F1 (the template comparison of step S21). Conversely, if tracking succeeds, the object recognition and tracking module 14 sequentially performs pose smoothing in step S32 and matched-pose updating in step S33, and then returns to the ICP tracking or device motion tracking of step S31.
The pose smoothing of step S32 is needed because factors such as the downsampling of the iterative closest point algorithm (ICP) 141 and the shaking of the user's handheld mobile device 10 may cause the tracked pose to jitter so much that the on-screen motion is not smooth. When tracking succeeds, the object recognition and tracking module 14 records the pose and smooths the current pose with the poses of the previous two frames using a Gaussian filter, making the on-screen motion smoother.
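A minimal sketch of that three-frame Gaussian smoothing is shown below. The weights are illustrative (the patent does not give them), and a full implementation would also blend the rotational part, e.g., on quaternions, which this sketch omits for brevity.

```python
import numpy as np

def smooth_pose(history, weights=(0.25, 0.5, 0.25)):
    """Blend the current pose translation with the two previous frames
    using fixed Gaussian-like weights; `history` lists poses oldest first,
    current frame last."""
    h = np.asarray(history[-3:], dtype=float)
    w = np.asarray(weights[-len(h):], dtype=float)
    w = w / w.sum()           # renormalize for the first frames
    return tuple(w @ h)
```

Smoothing over only three frames keeps the added latency to a single frame or so, which matters for interactive AR.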
The first stage (recognition stage F1) described above estimates the rough viewing direction of the object A, while the second stage (tracking stage F2) must obtain a more accurate tracking viewing angle. Traditionally, the viewing angle is tracked only through the iterative closest point (ICP) algorithm, whose goal is to find the rotation matrix R and translation t that best align two point sets. Suppose there is an input point set P = {p_i}, i = 1, ..., N_P, and a target point set Q = {q_j}, j = 1, ..., N_Q, where p_i, q_j ∈ ℝ³. The conventional ICP algorithm takes the closest point as the correspondence; the corresponding point for each p_i is, for example, as shown in formula (1):

  c(i) = argmin_{j ∈ {1, ..., N_Q}} ‖p_i − q_j‖, with ‖p_i − q_j‖ = √((p_{i,x} − q_{j,x})² + (p_{i,y} − q_{j,y})² + (p_{i,z} − q_{j,z})²)  (1)

where P and Q are point sets, p_i and q_j are points, i, j, N_P, and N_Q are positive integers, and x, y, and z denote the coordinate values along the x-, y-, and z-axes.
The search for the best rotation matrix R and translation t can be written as an objective function, converting the problem into searching for the minimum of E(R, t) in formula (2), i.e., finding a rotation matrix R and a translation t that bring the two point sets closest together, where E(R, t) is the total error between the point set transformed by R and t and the actual point set:

  E(R, t) = Σ_{i=1}^{N_P} ‖R·p_i + t − q_{c(i)}‖²  (2)
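For fixed correspondences, the minimizer of E(R, t) has a closed form via SVD (the Kabsch solution). The sketch below shows that step plus one ICP iteration with brute-force closest points; it is an illustration in our own naming, and a real implementation would use a k-d tree instead of the O(N_P · N_Q) distance matrix.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Closed-form minimizer of E(R, t) for fixed correspondences
    (rows of P paired with rows of Q), via the Kabsch/SVD method."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])         # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

def icp_iteration(P, Q):
    """One ICP step: closest-point correspondences as in formula (1),
    then the optimal rigid transform for them as in formula (2)."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=2)
    corr = d2.argmin(axis=1)
    return best_rigid_transform(P, Q[corr])
```

Iterating `icp_iteration` (re-applying R, t to P each round) is what drives E(R, t) down, and it is exactly the step the hidden-surface removal and bidirectional check below are designed to stabilize.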
From the above method, the rotation matrix R and translation t between the viewing angle of the actually photographed object A and the rough viewing angle can be estimated, from which the relative motion of the object A is known. However, the conventional iterative closest point (ICP) algorithm is prone to falling into local minima, so the present invention adds (1) the hidden-surface removal method 142 and (2) the bidirectional correspondence check 143 to the conventional ICP algorithm to obtain a more accurate tracking viewing angle of the object A.
(1) Hidden-surface removal method 142: the conventional iterative closest point (ICP) algorithm compares the entire point set, which is time-consuming and prone to instability. Since the present invention has already obtained the rough viewing angle of the object A, the hidden-surface removal method 142 can remove the points that are not visible from the viewing angle of the object A and compare only the visible points (the remaining points), reducing ambiguous regions in the comparison and the jitter of the tracking trajectory between consecutive frames.
(2) Bidirectional correspondence check 143: for each input point p_i ∈ P, the conventional iterative closest point (ICP) algorithm searches for the corresponding point in only one direction. The bidirectional correspondence check 143 of the present invention considers not only searching for the point q_j ∈ Q closest to p_i, but also searching for the point in P closest to q_j; when p_i and q_j are each other's closest point, p_i and q_j are said to be in bidirectional correspondence, and bidirectionally corresponding points are more representative.
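The mutual-nearest-neighbor filtering described above can be sketched as follows (brute-force distances for clarity; a k-d tree queried in both directions would replace the dense matrix in practice):

```python
import numpy as np

def bidirectional_pairs(P, Q):
    """Keep only mutual nearest neighbors: (i, j) survives when q_j is the
    closest point in Q to p_i AND p_i is the closest point in P to q_j."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=2)
    p_to_q = d2.argmin(axis=1)   # for each p_i, index of nearest q
    q_to_p = d2.argmin(axis=0)   # for each q_j, index of nearest p
    return [(i, j) for i, j in enumerate(p_to_q) if q_to_p[j] == i]
```

Points that fail the check (two inputs competing for the same target, outliers, clutter) are simply excluded from formula (2) in that iteration, which is how the check raises the credibility of the surviving correspondences.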
Furthermore, considering that the computing power of the mobile device 10 is weaker than that of the server 20, if the application on the mobile device 10 performs too much data computation, the speed of the mobile device 10 suffers and its remaining battery life is quickly consumed. In many mobile applications (such as augmented reality applications), within a short time interval the relative viewing angle between the object A (target object) and the mobile device 10 does not change much, and the change comes mainly from the movement of the mobile device 10. Therefore, for the short interval after the state and angle of the object A have been recognized, the present invention proposes the device motion tracking method 144, which, as appropriate, uses the motion-sensing information obtained from the Inertial Measurement Unit (IMU) of the mobile device 10 as the motion-conversion reference, achieving high responsiveness and low computation for recognizing and tracking the object A (target object) on the mobile device 10.
FIG. 7 is a schematic flowchart of the dynamic switching of the mobile device 10 of the present invention in the tracking stage; please refer to FIG. 1 together. FIG. 7 is carried out mainly by the iterative closest point algorithm (ICP) 141, the device motion tracking method 144, and the pose measurement method 145 working together.
In step S41 of FIG. 7, after the mobile device 10 has recognized the rough pose of the object A, the object recognition and tracking module 14 first fine-tunes the viewing angle of the object A using the iterative closest point algorithm (ICP) 141. Meanwhile, in step S42 of FIG. 7, the object recognition and tracking module 14 uses the pose measurement method 145 to compare the differences between the contour and depth images of the object A to compute the error of the viewing angle of the object A.
In step S43 of FIG. 7, if the error of the viewing angle of the object A exceeds a predetermined threshold, the estimated direction is wrong (i.e., tracking has failed), and the flow returns to the recognition stage (the object-state recognition step). Conversely, in step S44 of FIG. 7, if the viewing-angle error does not exceed the predetermined threshold, the result is acceptable (i.e., tracking has succeeded), and the object recognition and tracking module 14 switches to inferring the current viewing angle of the object A from the device motion information of the device motion tracking method 144.
In step S45 of FIG. 7, at regular intervals (e.g., every 100 frames), the object recognition and tracking module 14 uses the pose measurement method 145 to measure the pose of the current foreground object against the inferred object viewing angle and obtains a pose measurement value. If the pose measurement value is below the predetermined threshold (i.e., tracking has succeeded), the object recognition and tracking module 14 maintains device motion tracking with the device motion tracking method 144 of step S44. Conversely, if the pose measurement value is not below the predetermined threshold (i.e., tracking has failed), the viewing angle of the object is adjusted again with the iterative closest point algorithm (ICP) 141 of step S41, and the pose is measured again with the pose measurement method 145 of step S42; if the pose measurement value is still above the threshold (i.e., tracking has failed), the flow returns to the recognition stage of step S43 (the object-state recognition step) to re-estimate the viewing angle of the object A.
As set forth in FIGS. 1 to 7 above, the object recognition and tracking method of the present invention mainly includes: constructing, by the template construction module 22 of a server 20, a plurality of templates C of different viewing angles by projecting the three-dimensional model B of the object A, and extracting, analyzing, or condensing, by the feature extraction module 23 of the server 20, the template-feature data D of the templates C of different viewing angles. Meanwhile, a mobile device 10 obtains or downloads the template-feature data D from the server 20, and the object recognition and tracking module 14 of the mobile device 10 compares the template-feature data D to recognize the object A and its viewing angle. The object recognition and tracking module 14 tracks the viewing angle of the object A using the iterative closest point algorithm 141 together with the hidden-surface removal method 142 and the bidirectional correspondence check 143. When executing the iterative closest point algorithm 141, the object recognition and tracking module 14 uses the hidden-surface removal method 142 to remove or ignore template features D that cannot be observed from the viewing angle of the object A; and when the iterative closest point algorithm 141 searches for the closest data of the template features D, the object recognition and tracking module 14 uses the bidirectional correspondence check 143 to verify or search in both directions whether two data points of the template features D are each other's closest data.
Specifically, the object recognition and tracking method of the present invention may be, for example, as described in the following procedures P21 to P26; the remaining technical contents are as detailed in FIGS. 1 to 7 above and will not be repeated here.
Procedure P21: the mobile device 10 photographs or scans the actual object A, or inputs the three-dimensional model B of the object A, for the server 20 to build or obtain the three-dimensional model B.
Procedure P22: the template construction module 22 of the server 20 constructs a plurality of templates C of different viewing angles by projecting the three-dimensional model B, and the feature extraction module 23 of the server 20 extracts the templates C of different viewing angles to produce the corresponding template features D.
Procedure P23: the object recognition and tracking module 14 of the mobile device 10 compares the object A with the template features D of the templates C of different viewing angles to recognize the object A and its rough viewing angle.
Procedure P24: based on the rough viewing angle of the object A, the object recognition and tracking module 14 of the mobile device 10 tracks the viewing angle of the object A using an iterative closest point algorithm 141 (iterative approximation) to obtain a more accurate viewing angle.
Procedure P25: when the mobile device 10 moves only slightly over a period of time, the object recognition and tracking module 14 of the mobile device 10 automatically switches to the device motion tracking method 144 to track the viewing angle of the object A.
Procedure P26: the object recognition and tracking module 14 of the mobile device 10 compares, through the device motion tracking method 144, the difference between the viewing-angle tracking result of the object A and the photographed scene of the object A; when the difference exceeds a threshold, the object recognition and tracking module 14 of the mobile device 10 automatically switches back to the iterative closest point algorithm 141 (iterative approximation) to track the viewing angle of the object A, or re-performs recognition of the object A and its viewing angle.
The object recognition and tracking module 14 may include a hidden-surface removal method 142; when executing the iterative closest point algorithm 141, the object recognition and tracking module 14 uses the hidden-surface removal method 142 to remove or ignore template features D that cannot be observed from the rough viewing angle of the object A.
The object recognition and tracking module 14 may include a bidirectional correspondence check 143; when the iterative closest point algorithm 141 searches for the closest data of the template features D, the object recognition and tracking module 14 uses the bidirectional correspondence check 143 to verify or search in both directions whether two data points of the template features D are each other's closest data. For example, the bidirectional correspondence check 143 can search for the data B closest to data A and also check whether the data closest to data B is data A, thereby improving the credibility and accuracy of the correspondence between data A and data B.
In summary, the object recognition and tracking system and method of the present invention have the following features, advantages, or technical effects:

1. The mobile device of the present invention can track the position and viewing angle of an object (target object), expanding the scope of augmented reality applications.

2. The present invention moves the more time-consuming template construction and template-feature analysis to the server, reducing the computation and data volume required for real-time recognition.

3. The object recognition and tracking module of the present invention combines the iterative closest point (ICP) algorithm with the hidden-surface removal method and the bidirectional correspondence check to obtain a more accurate tracking viewing angle of the object.

4. The hidden-surface removal method of the present invention removes the points invisible from the viewing angle and compares only the visible points (the remaining points), reducing ambiguous regions during comparison and the jitter of the tracking trajectory between consecutive frames.

5. The bidirectional correspondence check of the present invention verifies or searches in both directions whether two data points of the template features are each other's closest data, thereby improving the credibility and accuracy of the correspondence between the two data points.

6. When the mobile device makes only small movements, the object recognition and tracking module of the present invention can automatically switch to estimating the three-dimensional relative motion of the object (target object) from motion-sensing information, greatly reducing the comparatively complex matching computation for the relative viewing-angle motion of the object, improving system responsiveness, and reducing computing energy consumption.

7. The present invention dynamically adjusts the viewing-angle computation of the object according to the state of the mobile device, maintaining a low angular error while tracking the object, reducing computing energy consumption, and preserving real-time interactivity.

8. The present invention can be applied, for example, to the following industries: (1) manufacturing: product assembly guidance and smart manufacturing and maintenance applications in next-generation Industry 4.0; (2) education: anatomical teaching of organ structures; (3) food: explanations of and suggestions about nutrients and ways of consumption; (4) advertising and commerce: display of and interaction with product advertising content; (5) services: remote video assistance for customers in troubleshooting or renovation work; (6) gaming: interactive games with figures and dolls. In addition, the present invention can also be applied to products such as smart glasses.
The above embodiments only exemplify the principles, features, and effects of the present invention and are not intended to limit its implementable scope. Anyone skilled in the art may modify and alter the above embodiments without departing from the spirit and scope of the invention. Any equivalent changes and modifications accomplished using the disclosure of the present invention shall still be covered by the scope of the claims. Therefore, the scope of protection of the present invention shall be as listed in the claims.
Claims (20)
Priority Applications (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW107143429A (TWI684956B) | 2018-12-04 | 2018-12-04 | Object recognition and tracking system and method thereof |
| CN201811626054.3A (CN111275734B) | 2018-12-04 | 2018-12-28 | Object identification and tracking system and method thereof |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW107143429A (TWI684956B) | 2018-12-04 | 2018-12-04 | Object recognition and tracking system and method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
TWI684956B true TWI684956B (en) | 2020-02-11 |
TW202022803A TW202022803A (en) | 2020-06-16 |
Family
ID=70413546
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW107143429A TWI684956B (en) | 2018-12-04 | 2018-12-04 | Object recognition and tracking system and method thereof |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111275734B (en) |
TW (1) | TWI684956B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI779488B (en) * | 2021-02-09 | 2022-10-01 | 趙尚威 | Feature identification method and system |
TWI772020B (en) * | 2021-05-12 | 2022-07-21 | 廣達電腦股份有限公司 | Image positioning device and method |
TWI817847B (en) * | 2022-11-28 | 2023-10-01 | 國立成功大學 | Method, computer program and computer readable medium for fast tracking and positioning objects in augmented reality and mixed reality |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW201248515A (en) * | 2011-05-30 | 2012-12-01 | Univ Nat Cheng Kung | Three dimensional dual-mode scanning apparatus and three dimensional dual-mode scanning system |
TW201530495A (en) * | 2014-01-22 | 2015-08-01 | Univ Nat Taiwan Science Tech | Method for tracking moving object and electronic apparatus using the same |
CN106462976A (en) * | 2014-04-30 | 2017-02-22 | 国家科学研究中心 | Method of tracking shape in a scene observed by an asynchronous light sensor |
TW201816662A (en) * | 2016-10-18 | 2018-05-01 | 瑞典商安訊士有限公司 | Method and system for tracking an object in a defined area |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102800103B (en) * | 2012-06-18 | 2015-02-18 | 清华大学 | Unmarked motion capturing method and device based on multi-visual angle depth camera |
CN102802000A (en) * | 2012-08-09 | 2012-11-28 | 冠捷显示科技(厦门)有限公司 | Tracking type multi-angle three-dimensional display image quality improving method |
WO2015134795A2 (en) * | 2014-03-05 | 2015-09-11 | Smart Picture Technologies, Inc. | Method and system for 3d capture based on structure from motion with pose detection tool |
US9830703B2 (en) * | 2015-08-12 | 2017-11-28 | Nvidia Corporation | Model-based three-dimensional head pose estimation |
US20170323149A1 (en) * | 2016-05-05 | 2017-11-09 | International Business Machines Corporation | Rotation invariant object detection |
TWI612482B (en) * | 2016-06-28 | 2018-01-21 | 圓展科技股份有限公司 | Target tracking method and target tracking device |
CN108509848B (en) * | 2018-02-13 | 2019-03-05 | 视辰信息科技(上海)有限公司 | The real-time detection method and system of three-dimension object |
2018
- 2018-12-04: TW application TW107143429A, granted as patent TWI684956B (active)
- 2018-12-28: CN application CN201811626054.3A, granted as patent CN111275734B (active)
Also Published As
Publication number | Publication date |
---|---|
CN111275734A (en) | 2020-06-12 |
TW202022803A (en) | 2020-06-16 |
CN111275734B (en) | 2024-02-02 |