TWI517100B - Method for tracking moving object and electronic apparatus using the same - Google Patents


Publication number
TWI517100B
TWI517100B
Authority
TW
Taiwan
Prior art keywords
foreground object
foreground
angle range
moving
shooting angle
Prior art date
Application number
TW103102279A
Other languages
Chinese (zh)
Other versions
TW201530495A (en)
Inventor
范欽雄
黃姝蓉
Original Assignee
國立臺灣科技大學
Priority date
Filing date
Publication date
Application filed by 國立臺灣科技大學
Priority to TW103102279A
Publication of TW201530495A
Application granted
Publication of TWI517100B

Landscapes

  • Image Analysis (AREA)

Description

Moving object tracking method and electronic device

The present invention relates to an object recognition method and an electronic device, and more particularly to a moving object tracking method and an electronic device.

With the widespread adoption of surveillance equipment, a wide variety of intelligent surveillance systems have emerged. A single camera can be used to analyze simple human activities, but is suitable only for small-scale surveillance applications. Multiple cameras combined can be used to analyze activities over a large area, such as analyzing the walking paths of people inside a building, analyzing customers' shopping behavior in a store, and detecting abnormal behavior.

Multi-camera deployments can be divided into those with overlapping fields of view and those with non-overlapping fields of view. Generally speaking, when tracking targets across multiple cameras with overlapping views, the absolute position in space is the main discriminative feature: with calibrated camera parameters and a three-dimensional scene model, the absolute position of a target in space can be estimated to determine whether two observations correspond to the same tracked object. However, owing to computational and economic costs, requiring the views of all cameras to overlap is impractical in real applications. Conversely, a multi-camera configuration with non-overlapping views is more flexible to deploy, costs less, and covers a wider monitoring range; but because of the spatial discontinuity, and differences in camera placement angles and environments, the appearance of an object captured by different cameras is affected by external conditions, which makes object matching difficult.

In view of this, the present invention provides a moving object tracking method and an electronic device capable of wide-area, long-range tracking of moving objects.

The present invention provides a moving object tracking method including the following steps. Video data are received from a plurality of cameras, respectively, where the cameras include a first camera having a first shooting angle range and a second camera having a second shooting angle range, the video data include first video data and second video data corresponding to the first and second shooting angle ranges respectively, and the first shooting angle range does not overlap the second shooting angle range. At least one first foreground object and at least one second foreground object are detected from the first and second video data respectively, and the first and second foreground objects are tracked to obtain the moving path of each first foreground object and of each second foreground object. The moving time for a first foreground object to travel from the first shooting angle range to the second shooting angle range is estimated, and the brightness relationship between the first video data and the second video data is computed. Afterwards, according to the moving time, the color information of the first foreground objects, and the color information of the second foreground objects, the first and second foreground objects are compared so as to identify the first foreground object associated with each second foreground object, thereby establishing a complete moving path for each second foreground object.

In an embodiment of the invention, the step of detecting the first foreground objects and the second foreground objects from the first and second video data respectively includes: building, with a Gaussian mixture model, a first background model and a second background model corresponding to the first and second shooting angle ranges respectively; obtaining, by background subtraction based on the first and second background models, at least one first shadowed foreground object and at least one second shadowed foreground object from the first and second video data respectively; and performing shadow removal and morphological processing on the first and second shadowed foreground objects to produce the first foreground objects and the second foreground objects respectively.

In an embodiment of the invention, the step of tracking the first and second foreground objects to obtain the moving path of each first foreground object and of each second foreground object includes: determining whether, in a first frame of the first video data, a first reference foreground object among the first foreground objects overlaps one of the other first foreground objects; when an overlap is determined to occur, computing with a Kalman filter a first predicted position of the first reference foreground object in the frame following the first frame, executing a mean shift algorithm on the first predicted position to obtain a first detected position, and then correcting the first detected position with the Kalman filter to obtain a first corrected position; when no overlap is determined to occur, tracking the first foreground objects by blob-based tracking; determining whether, in a second frame of the second video data, a second reference foreground object among the second foreground objects overlaps one of the other second foreground objects; when an overlap is determined to occur, computing with the Kalman filter a second predicted position of the second reference foreground object in the frame following the second frame, executing the mean shift algorithm on the second predicted position to obtain a second detected position, and then correcting the second detected position with the Kalman filter to obtain a second corrected position; and when no overlap is determined to occur, tracking the second foreground objects by blob-based tracking.

In an embodiment of the invention, the step of estimating the moving time for a first foreground object to travel from the first shooting angle range to the second shooting angle range and computing the brightness relationship between the first and second video data includes: in a training stage, collecting statistics of the time required for a plurality of known moving objects to travel from the first shooting angle range to the second shooting angle range, to obtain the moving time; and, in the training stage, using a brightness transfer function to obtain the brightness relationship from the color information of the known moving objects observed in the first and second shooting angle ranges respectively.
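Both training-stage quantities can be sketched in a few lines. The patent does not specify the estimators, so this illustration takes the transit time as the mean of the observed crossing times and approximates the brightness transfer function with a per-channel linear least-squares fit; both choices are assumptions for illustration only.

```python
def mean_transit_time(crossings):
    """crossings: list of (exit_time_cam1, entry_time_cam2) for known objects."""
    return sum(t2 - t1 for t1, t2 in crossings) / len(crossings)

def fit_brightness_transfer(pairs):
    """Least-squares line v2 = a*v1 + b from paired brightness observations
    of the same object seen by camera 1 (v1) and camera 2 (v2)."""
    n = len(pairs)
    sx = sum(p[0] for p in pairs)
    sy = sum(p[1] for p in pairs)
    sxx = sum(p[0] * p[0] for p in pairs)
    sxy = sum(p[0] * p[1] for p in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b
```

In use, the fitted `(a, b)` map camera-1 brightness values into camera-2's illumination before color comparison.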

In an embodiment of the invention, the second foreground objects include a second target foreground object, and the step of comparing the first and second foreground objects according to the moving time and their color information, so as to identify the first foreground object associated with each second foreground object and thereby establish the complete moving path of each second foreground object, includes: when the second target foreground object is detected entering the second shooting angle range, converting the color information of the second target foreground object according to the brightness relationship; querying a database according to the moving time to obtain at least one first time-matched foreground object associated with the second target foreground object, where the database records the time point and color information of each first foreground object, and a first time-matched foreground object is a first foreground object whose time point is consistent with the moving time; comparing the second target foreground object with each first time-matched foreground object according to their color information, so as to identify among the first time-matched foreground objects the first target foreground object associated with the second target foreground object, where the color information of the first target foreground object is similar to that of the second target foreground object; and establishing the complete moving path of the second target foreground object from the moving path of the first target foreground object and the moving path of the second target foreground object.
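The matching step described above can be sketched as follows. The record layout, the time tolerance, and the L1 histogram distance are all illustrative assumptions; the patent specifies only that candidates must be consistent with the learned transit time and that color information is compared after brightness conversion.

```python
def match_object(entry_time, entry_hist, db_records, transit, tol=2.0):
    """db_records: list of (exit_time, rgb_histogram) for camera-1 objects.
    Return the index of the time-consistent record whose histogram is closest
    to entry_hist, or None if no record fits the transit-time window."""
    def hist_dist(h1, h2):
        # L1 distance between histograms (illustrative similarity measure)
        return sum(abs(a - b) for a, b in zip(h1, h2))

    # keep only candidates whose exit time matches the learned transit time
    candidates = [(i, h) for i, (t, h) in enumerate(db_records)
                  if abs((entry_time - t) - transit) <= tol]
    if not candidates:
        return None
    return min(candidates, key=lambda c: hist_dist(entry_hist, c[1]))[0]
```

Once a match is found, the two per-camera paths are concatenated into the complete moving path.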

The present invention further provides an electronic device including a storage unit and one or more processing units, where the processing units are coupled to the storage unit. The storage unit records a plurality of modules. The processing units access and execute the modules recorded in the storage unit. The modules include a data receiving module, a detection module, a tracking module, and an identification module. The data receiving module receives video data from a plurality of cameras respectively, where the cameras include a first camera having a first shooting angle range and a second camera having a second shooting angle range, the video data include first and second video data corresponding to the first and second shooting angle ranges respectively, and the first shooting angle range does not overlap the second shooting angle range. The detection module detects at least one first foreground object and at least one second foreground object from the first and second video data respectively. The tracking module tracks the first and second foreground objects to obtain the moving path of each first foreground object and of each second foreground object. The identification module estimates the moving time for a first foreground object to travel from the first shooting angle range to the second shooting angle range, computes the brightness relationship between the first and second video data, and compares the first and second foreground objects according to the moving time, the color information of the first foreground objects, and the color information of the second foreground objects, so as to identify the first foreground object associated with each second foreground object and thereby establish a complete moving path for each second foreground object.

In an embodiment of the invention, the detection module builds, with a Gaussian mixture model, a first background model and a second background model corresponding to the first and second shooting angle ranges respectively, obtains by background subtraction, based on the first and second background models, at least one first shadowed foreground object and at least one second shadowed foreground object from the first and second video data respectively, and performs shadow removal and morphological processing on the first and second shadowed foreground objects to produce the first foreground objects and the second foreground objects respectively.

In an embodiment of the invention, the tracking module determines whether, in a first frame of the first video data, a first reference foreground object among the first foreground objects overlaps one of the other first foreground objects; when an overlap is determined to occur, it computes with a Kalman filter a first predicted position of the first reference foreground object in the frame following the first frame, executes a mean shift algorithm on the first predicted position to obtain a first detected position, and then corrects the first detected position with the Kalman filter to obtain a first corrected position; when no overlap is determined to occur, it tracks the first foreground objects by blob-based tracking. The tracking module further determines whether, in a second frame of the second video data, a second reference foreground object among the second foreground objects overlaps one of the other second foreground objects; when an overlap is determined to occur, it computes with the Kalman filter a second predicted position of the second reference foreground object in the frame following the second frame, executes the mean shift algorithm on the second predicted position to obtain a second detected position, and then corrects the second detected position with the Kalman filter to obtain a second corrected position; and when no overlap is determined to occur, it tracks the second foreground objects by blob-based tracking.

In an embodiment of the invention, in a training stage the identification module collects statistics of the time required for a plurality of known moving objects to travel from the first shooting angle range to the second shooting angle range to obtain the moving time, and, also in the training stage, uses a brightness transfer function to obtain the brightness relationship from the color information of the known moving objects observed in the first and second shooting angle ranges respectively.

In an embodiment of the invention, when the detection module detects a second target foreground object entering the second shooting angle range, the identification module converts the color information of the second target foreground object according to the brightness relationship, and queries a database according to the moving time to obtain at least one first time-matched foreground object associated with the second target foreground object, where a first time-matched foreground object is a first foreground object whose recorded time point is consistent with the moving time. The identification module then compares the second target foreground object with each first time-matched foreground object according to their color information, so as to identify among the first time-matched foreground objects the first target foreground object associated with the second target foreground object, where the color information of the first target foreground object is similar to that of the second target foreground object. The identification module further establishes the complete moving path of the second target foreground object from the moving path of the first target foreground object and the moving path of the second target foreground object.

Based on the above, the moving object tracking method and electronic device proposed by the present invention first obtain video data from a plurality of cameras, extract the foreground objects corresponding to the moving objects, and then track the foreground objects. In addition, the time a moving object needs to cross the blind zone between two shooting angle ranges is estimated, and a brightness transfer function is used to obtain the brightness relationship between the moving object's appearances in the different shooting angle ranges, so that the same moving object can be identified across different video data. Accordingly, the invention can perform wide-area, long-range tracking of moving objects across multiple cameras whose fields of view do not overlap, achieving comprehensive security monitoring.

To make the above features and advantages of the invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.

100‧‧‧Electronic device

110‧‧‧Processing unit

120‧‧‧Storage unit

122‧‧‧Data receiving module

124‧‧‧Detection module

126‧‧‧Tracking module

128‧‧‧Identification module

S202~S212, S506~S512‧‧‧Steps of the moving object tracking method

312~318, 326, 326a, 328, 410a~410b, 420a~420b‧‧‧Images

422, 424, 502‧‧‧Blocks

504‧‧‧List

DB1, DB2‧‧‧Databases

602‧‧‧Second target foreground object

S604~S618‧‧‧Steps of the moving object identification process

FIG. 1 is a block diagram of an electronic device according to an embodiment of the invention.

FIG. 2 is a flowchart of a moving object tracking method according to an embodiment of the invention.

FIG. 3A and FIG. 3B are schematic diagrams of obtaining foreground objects according to an embodiment of the invention.

FIG. 4 is a schematic diagram of handling the overlap phenomenon according to an embodiment of the invention.

FIG. 5 is a schematic diagram of a moving object tracking flow according to an embodiment of the invention.

FIG. 6 is a schematic diagram of a moving object identification flow according to an embodiment of the invention.

Some embodiments of the invention are described in detail below with reference to the accompanying drawings; where the same reference numeral appears in different drawings, it denotes the same or similar element. These embodiments are only a part of the invention and do not disclose all possible implementations; rather, they are merely examples of devices and methods within the scope of the claims of the invention.

FIG. 1 is a block diagram of an electronic device according to an embodiment of the invention; it is provided for convenience of description only and is not intended to limit the invention. FIG. 1 first introduces all components of the electronic device and their configuration; the detailed functions are disclosed together with FIG. 2.

Referring to FIG. 1, the electronic device 100 of this embodiment performs long-range tracking of the moving objects in video data after receiving video data captured by cameras with different shooting angle ranges in scenes with different ambient light sources. The electronic device 100 may be implemented as a computer, a server, a distributed system, a smartphone, a tablet, or any form of embedded system or device; the invention is not limited to these implementations. The electronic device 100 includes a storage unit 110 and one or more processing units 120, whose functions are described below.

The storage unit 110 is, for example, a fixed or removable random access memory of any type, a read-only memory, a flash memory, a hard disk, another similar device, or a combination of these devices, and records a plurality of modules executed by the processing unit 120; these modules can be loaded into the processing unit 120 to perform tracking of the moving objects in the video data.

The processing unit 120 is, for example, a central processing unit, or another programmable general-purpose or special-purpose microprocessor, digital signal processor, programmable controller, application-specific integrated circuit, programmable logic device, another similar device, or a combination of these devices. The processing unit 120 is coupled to the storage unit 110 and can access and execute the modules recorded in the storage unit 110.

The modules include a data receiving module 112, a detection module 114, a tracking module 116, and an identification module 118. These modules are, for example, computer programs that can be loaded into the processing unit 120 to perform the functions of the moving object tracking method on the video data. Embodiments are given below to describe the detailed steps by which the electronic device 100 performs the moving object tracking method.

FIG. 2 is a flowchart of a moving object tracking method according to an embodiment of the invention. Referring to FIG. 2, the method of this embodiment is applicable to the electronic device 100 of FIG. 1. The detailed steps of the moving object tracking method of the invention are described below in conjunction with the components of the electronic device 100.

First, the data receiving module 112 receives video data from a plurality of cameras respectively (step S202). In detail, the data receiving module 112 may receive the captured video data from the cameras by wired or wireless transmission. The cameras here have different shooting angle ranges. Only the first camera and the second camera among the cameras are described below; however, those of ordinary skill in the art can, following the steps below, derive the steps of the moving object tracking method for the video data of the other cameras.

In this embodiment, the first camera and the second camera have a first shooting angle range and a second shooting angle range respectively, where the first and second shooting angle ranges do not overlap. The video data captured by the first camera and the second camera are defined here as the "first video data" and the "second video data" respectively.

Next, the detection module 114 detects at least one first foreground object and at least one second foreground object from the first video data and the second video data respectively (step S204). In detail, for the first video data, the detection module 114 may use a Gaussian mixture model to build a first background model for the scene monitored by the first camera, obtain the first foreground objects in the video data according to the first background model, and then extract the foreground image from a frame by background subtraction. Taking FIG. 3A as an example, image 312 is one of the frames in the first video data; image 314 is the background image of image 312, obtained from the first background model; image 316 is the foreground mask of image 312; and image 318 is the foreground image of image 312.

Image 326 in FIG. 3B is a partially enlarged view of image 316. Referring to FIG. 3B, a foreground object obtained by background subtraction is sometimes disturbed by shadows and noise (for example, region 326a), so the contour of the foreground object cannot be extracted completely; a foreground object that still contains shadows and noise is defined here as a "first shadowed foreground object". In this embodiment, the detection module 114 further performs shadow removal and a morphological operation on the first shadowed foreground object to obtain a more complete foreground object, namely the aforementioned first foreground object. The principle of shadow removal is that when a pixel is covered by a shadow, its brightness (value) decreases but its hue remains unchanged. Based on this property, this embodiment uses the HSV (hue, saturation, value) color model and computes the brightness difference between the foreground image and the background image; the darkened portions are regarded as shadow, and morphological operations then fill in and eliminate the broken parts of the image. For example, after shadow removal and morphological processing of image 326, the shadow in region 326a is removed, producing image 328 with a more complete foreground object. Similarly, in step S204 of FIG. 2, the detection module 114 also uses a Gaussian mixture model to build a second background model for the scene monitored by the second camera, obtains the second shadowed foreground objects in the video data according to the second background model, and then applies shadow removal and morphological processing to produce the second foreground objects. For details, refer to the preceding paragraphs; they are not repeated here.
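As a rough illustration of this detection step (not the patent's implementation: the patent maintains a full Gaussian mixture background model per camera, whereas this sketch assumes a single static background frame and illustrative thresholds), background subtraction and the HSV shadow rule can be expressed as:

```python
import colorsys
import numpy as np

def foreground_mask(frame, background, thresh=30):
    """Mark pixels whose RGB values differ enough from the background."""
    diff = np.abs(frame.astype(int) - background.astype(int)).sum(axis=2)
    return diff > thresh

def is_shadow(fg_rgb, bg_rgb, hue_tol=0.05, value_drop=0.2):
    """HSV shadow rule: a shadowed pixel keeps its hue but loses brightness."""
    fh, _, fv = colorsys.rgb_to_hsv(*(c / 255.0 for c in fg_rgb))
    bh, _, bv = colorsys.rgb_to_hsv(*(c / 255.0 for c in bg_rgb))
    return abs(fh - bh) < hue_tol and fv < bv - value_drop
```

Pixels flagged by `is_shadow` would be removed from the foreground mask before the morphological fill step.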

Referring again to FIG. 2, after the detection module 114 detects the first foreground objects and the second foreground objects, the tracking module 116 tracks them to obtain the moving path of each first foreground object and of each second foreground object (step S206). The tracking method adopted by the tracking module 116 depends on whether the foreground objects overlap in the video data. In detail, in this embodiment, when no overlap occurs among the first foreground objects, the tracking module 116 uses a blob-based tracking algorithm, which achieves tracking from the intersections of a first foreground object across consecutive frames. On the other hand, when an overlap occurs among the first foreground objects, the tracking module 116 may use a multi-target overlap tracking method, which combines a Kalman filter with a mean shift algorithm to track the overlapping first foreground objects, thereby resolving the problem of first foreground objects occluding one another. It should be noted that in other embodiments the tracking module 116 may use other tracking algorithms to track the moving objects; the invention is not limited in this respect. For convenience in describing how the mean shift algorithm and the Kalman filter are applied in the multi-target overlap tracking method, only the occluded "first reference foreground object" among the first foreground objects is described below.

The tracking module 116 may store the color information of the first reference foreground object before occlusion as a blob in a database (not shown) of the storage unit 110. In the present embodiment, the color information consists of red, green, and blue information, and may be stored in the database in the form of, for example, an RGB histogram. When the first reference foreground object becomes occluded in a first frame (that is, when the overlap occurs), the tracking module 116 uses the Kalman filter to calculate a first predicted position of the first reference foreground object in the frame following the first frame, and sets the first predicted position as the initial window of the mean shift algorithm. A foreground mask is used to exclude interference from background pixels; after the foreground image is obtained, back projection yields its probability density map, and the position of the search window is adjusted until convergence, giving a first detected position. Finally, the tracking module 116 uses the Kalman filter to correct the first detected position to a first corrected position.
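The predict / mean-shift / correct cycle described above can be sketched as follows. This is a minimal NumPy illustration under simplifying assumptions (constant-velocity motion model, hand-picked noise covariances, a plain square search window over a precomputed back-projection map); the class and function names are illustrative, not from the patent.

```python
import numpy as np

class Kalman2D:
    """Constant-velocity Kalman filter for a 2-D position.
    State vector: [x, y, vx, vy]."""
    def __init__(self, x, y):
        self.s = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4) * 100.0          # large initial uncertainty
        self.F = np.array([[1, 0, 1, 0],
                           [0, 1, 0, 1],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)
        self.Q = np.eye(4) * 0.01           # process noise (assumed)
        self.R = np.eye(2) * 1.0            # measurement noise (assumed)

    def predict(self):
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]                   # predicted (x, y)

    def correct(self, z):
        r = np.asarray(z, float) - self.H @ self.s
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.s = self.s + K @ r
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.s[:2]                   # corrected (x, y)

def mean_shift(prob, start, win=5, iters=20):
    """Shift a square window toward the centroid of a probability
    (back-projection) map until it converges; returns (row, col)."""
    y, x = start
    h, w = prob.shape
    for _ in range(iters):
        y0, y1 = max(0, y - win), min(h, y + win + 1)
        x0, x1 = max(0, x - win), min(w, x + win + 1)
        patch = prob[y0:y1, x0:x1]
        if patch.sum() == 0:
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny = int(round((ys * patch).sum() / patch.sum()))
        nx = int(round((xs * patch).sum() / patch.sum()))
        if (ny, nx) == (y, x):
            break
        y, x = ny, nx
    return y, x
```

In the tracking step, `predict()` would seed the mean-shift window, `mean_shift()` would return the detected position on the foreground-masked back-projection map, and `correct()` would fuse the detected position into the corrected position.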

Taking FIG. 4 as an example, images 410a and 410b are images before the overlap occurs, while images 420a and 420b are images after the overlap occurs. Here, block 422 may correspond, for example, to the first reference foreground object described above, and block 424 may correspond, for example, to another first foreground object. Because the tracking module 116 combines the Kalman filter with the mean shift algorithm, it can still track block 422 and block 424 separately even after the overlap occurs. The tracking module 116 processes the first video data in the manner described above to obtain the moving path of the first reference foreground object in the first video data. Similarly, the tracking module 116 tracks the second foreground objects in a similar manner; please refer to the preceding paragraphs for the related description, which is not repeated here.

In an embodiment, step S206 of FIG. 2 may further be implemented by the moving object tracking flow of FIG. 5. Referring to FIG. 5, when the detection module 114 detects a blob 502 of a foreground object in the first video data or the second video data, the tracking module 116 pairs the blob 502 with the foreground objects already recorded in the list blobList 504 stored in the database (step S506), and determines whether the blob 502 overlaps another object (step S508). When the tracking module 116 determines that an overlap occurs, it tracks the blob 502 using the mean shift algorithm and the Kalman filter (step S510). When the tracking module 116 determines that no overlap occurs, it tracks the blob 502 using the block-based tracking algorithm (step S512). The steps of FIG. 5 have been described in detail above and are not repeated here.
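The decision branch of the flow in FIG. 5 can be sketched as follows. Using bounding-box intersection-over-union as the overlap test is an assumption for illustration; the patent does not specify the overlap criterion at this level of detail.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) bounding boxes."""
    ax0, ay0, aw, ah = a
    bx0, by0, bw, bh = b
    ix0, iy0 = max(ax0, bx0), max(ay0, by0)
    ix1, iy1 = min(ax0 + aw, bx0 + bw), min(ay0 + ah, by0 + bh)
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def choose_tracker(blob, blob_list, overlap_thresh=0.0):
    """Which tracking branch of the FIG. 5 flow to take:
    the mean shift + Kalman branch when the new blob overlaps a
    recorded one, the block-based branch otherwise."""
    for recorded in blob_list:
        if iou(blob, recorded) > overlap_thresh:
            return 'mean_shift_kalman'
    return 'block_tracking'
```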

Referring again to FIG. 2, steps S202 to S206 mainly process the different video data separately. In the subsequent steps, the identification module 118 compares the moving objects that cross the shooting angle ranges of different cameras, so as to achieve complete tracking. The invention mainly uses the brightness relationship of a moving object under different shooting angle ranges, and estimates the time the moving object needs to cross the blind zone between different scenes, to track the moving object accordingly.

In detail, the identification module 118 estimates the moving time for the first foreground objects to move from the first shooting angle range to the second shooting angle range (step S208). In the present embodiment, during a training stage, the identification module 118 collects statistics on the time a number of known moving objects take to cross the blind zone, and obtains a Gaussian distribution. From the Gaussian distribution, the identification module 118 can derive the maximum possible time and the minimum possible time of the moving time. For example, the maximum possible time may be the mean of the collected times plus one standard deviation, and the minimum possible time may be the mean minus one standard deviation.
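The mean-plus-or-minus-one-standard-deviation window described above can be sketched directly; the function name is illustrative.

```python
import statistics

def transit_time_window(samples):
    """From training-stage blind-zone crossing times, return the
    (minimum, maximum) plausible transit time as the mean minus and
    plus one standard deviation, as in the described embodiment."""
    mu = statistics.mean(samples)
    sigma = statistics.pstdev(samples)   # population std of the samples
    return mu - sigma, mu + sigma
```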

Next, the identification module 118 calculates the brightness relationship between the first video data and the second video data (step S210). In detail, because each camera has a different ambient light source and different settings such as focal length and aperture, the same moving object shows color differences across different video data. Therefore, the identification module 118 may use a brightness transfer function to color-correct the moving objects; the method mainly corrects color by converting the color information of one camera into the color information of another camera.

在本實施例中,於訓練階段時,識別模組118可利用前述已知移動物體分別於第一訓練視訊資料以及第二訓練視訊資料內的色彩資訊,利用亮度轉換函式來取得上述亮度關係。在取得亮度關係後,識別模組118將第一視訊資料與第二視訊資料之其中一者中的色彩資訊做為參考資訊,並且利用亮度關係,根據參考資訊來轉換第一視訊資料與第二視訊資料之另一者中的色彩資訊。舉例而言,假設識別模組118將第一視訊資料做為參考資訊,則識別模組118會利用亮度轉換函式轉換第二視訊資料的色彩資訊,也就是說第二前景物體的色彩資訊將會被轉換。 In this embodiment, during the training phase, the recognition module 118 can use the brightness information to obtain the brightness relationship by using the color information of the known moving object in the first training video data and the second training video data respectively. . After obtaining the brightness relationship, the identification module 118 uses the color information in one of the first video data and the second video data as reference information, and uses the brightness relationship to convert the first video data and the second according to the reference information. Color information in the other of the video material. For example, if the identification module 118 uses the first video data as the reference information, the recognition module 118 converts the color information of the second video data by using the brightness conversion function, that is, the color information of the second foreground object will be Will be converted.
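One common way to realize a brightness transfer function is to match the cumulative histograms of corresponding training observations from the two cameras, yielding a per-level lookup table. The patent does not fix the estimation method, so the following NumPy sketch is an assumption for illustration.

```python
import numpy as np

def brightness_transfer(src_values, dst_values, levels=256):
    """Estimate a per-level brightness transfer function mapping one
    camera's response to the other's, by matching the cumulative
    histograms of corresponding training observations.  Returns a
    lookup table lut such that lut[v] is the converted level."""
    hs, _ = np.histogram(src_values, bins=levels, range=(0, levels))
    hd, _ = np.histogram(dst_values, bins=levels, range=(0, levels))
    cs = np.cumsum(hs) / max(len(src_values), 1)
    cd = np.cumsum(hd) / max(len(dst_values), 1)
    # for each source level, pick the destination level with the
    # same cumulative probability
    return np.searchsorted(cd, cs, side='left').clip(0, levels - 1)
```

For instance, if camera 2 renders the same surfaces uniformly 50 levels brighter than camera 1, the estimated table maps level v to approximately v + 50.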

Next, the identification module 118 compares the first foreground objects with the second foreground objects according to the moving time, the color information of the first foreground objects, and the color information of the second foreground objects, so as to identify the first foreground object associated with each second foreground object, and thereby establishes the complete moving path of each second foreground object (step S212). In detail, when a moving object enters an image frame of the second video data and becomes a second foreground object (defined herein as a "second target foreground object"), the identification module 118 may color-correct it using the brightness relationship obtained above, and then build its appearance model. In the present embodiment, the identification module 118 uses the K-means clustering algorithm to divide the color information of the second target foreground object into, for example, 24 groups, and then reclassifies the color groups by, for example, the normalized geometric distance of equation (1), where V1 and V2 are the pixel values of different pixels, which respectively include red pixel values r1 and r2, green pixel values g1 and g2, and blue pixel values b1 and b2. Afterwards, the identification module 118 repeatedly merges the groups whose distances are close, until the cluster members no longer change. The identification module 118 may then build the appearance model of the second target foreground object from the final color groups, for example in the form of a histogram.
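The cluster-then-merge appearance modeling described above can be sketched as follows. Two simplifications are assumed for illustration: the initial cluster centers are evenly spaced rather than random, and a plain Euclidean distance in RGB stands in for the normalized geometric distance of equation (1), whose exact form is given in the original drawing.

```python
import numpy as np

def kmeans(pixels, k, iters=20):
    """Plain k-means over RGB pixels (the embodiment uses K-means
    clustering to form the initial color groups).  Deterministic,
    evenly spaced initialization for reproducibility."""
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centers = pixels[idx].astype(float)
    label = np.zeros(len(pixels), int)
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        label = d.argmin(axis=1)
        for j in range(k):
            if (label == j).any():
                centers[j] = pixels[label == j].mean(axis=0)
    return centers, label

def merge_close(centers, thresh=30.0):
    """Merge color groups whose centers are close, keeping one
    representative per cluster of nearby centers (a simplification
    of the repeated merging until membership stabilizes)."""
    merged = []
    for c in centers:
        for m in merged:
            if np.linalg.norm(c - m) < thresh:
                break
        else:
            merged.append(c)
    return np.array(merged)
```

The final merged centers would then be histogrammed to form the appearance model of the second target foreground object.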

Next, the identification module 118 compares the second target foreground object with the first foreground objects according to the moving time. For example, when the second target foreground object enters the second video data, the identification module 118 may use the moving time to infer the time point at which the second target foreground object was within the first shooting angle range, and retrieve the first foreground objects corresponding to that time point. In other words, the database in the storage unit 110 may record the time point and color information corresponding to each first foreground object. The identification module 118 can query the database and, according to the inferred time point, obtain the first foreground objects associated with the second target foreground object, defined herein as the "first temporally close foreground objects". Then, according to the color information of the second target foreground object and of the first temporally close foreground objects, the identification module 118 compares the similarity between the second target foreground object and each first temporally close foreground object; the first temporally close foreground object with high similarity is the first foreground object associated with the second target foreground object, defined herein as the "first target foreground object". Since the tracking module 116 has already tracked the paths of the first target foreground object and the second target foreground object in step S206, the identification module 118 can accordingly obtain the complete moving path of the second target foreground object across the first video data and the second video data.

In an embodiment, steps S208 to S212 of FIG. 2 may further be implemented by the moving object identification flow of FIG. 6. Referring to FIG. 6, when the detection module 114 detects a second target foreground object 602, the identification module 118 determines whether the second target foreground object 602 is a new foreground object (step S604). In detail, in the present embodiment, each foreground object has, in addition to its corresponding time point and color information, an identification number. Therefore, in this step the detection module 114 can look up, for example by the number, whether the object is a new foreground object.

When the identification module 118 determines that the second target foreground object 602 is not a new foreground object, it stores the number, time point, and color information of the second target foreground object 602 into the second database DB2 of the storage unit 110 corresponding to the second camera. Conversely, when the identification module 118 determines that the second target foreground object 602 is a new foreground object, it converts the color information of the second target foreground object 602 using the brightness transfer function (step S606), and according to the moving time, queries the first database DB1 corresponding to the first camera in the storage unit 110 for the first temporally close foreground objects (step S607), so as to determine whether any first temporally close foreground object exists (step S608).

When the identification module 118 finds first temporally close foreground objects, it calculates the similarity between the second target foreground object 602 and each first temporally close foreground object (step S610). When the identification module 118 cannot find a first foreground object with a close time point, it instead calculates the similarity between the second target foreground object 602 and the other first foreground objects (step S612).

Next, the identification module 118 looks up, according to the similarity comparison result, whether there is a first target foreground object associated with the second target foreground object 602 (step S614). When the identification module 118 finds the first target foreground object, it sets the number of the second target foreground object 602 to the number matchedID 616 of the corresponding first target foreground object, and stores matchedID 620 into the second database DB2. When the identification module 118 cannot find a first target foreground object corresponding to the second target foreground object 602, it gives the second target foreground object 602 a new number newID 618.

In summary, the moving object tracking method and electronic apparatus proposed by the invention first obtain video data from multiple cameras, extract the foreground objects corresponding to the moving objects, and then track the foreground objects. In addition, based on the time a moving object needs to cross the blind zone between shooting angle ranges, and using the brightness transfer function to obtain the brightness relationship of the moving object across different shooting angle ranges, the same moving object can be identified across different video data. Accordingly, the invention can perform wide-area, long-range tracking of moving objects under multiple cameras whose shooting angle ranges do not overlap, so as to achieve all-around security monitoring.

Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Anyone with ordinary knowledge in the technical field may make some changes and refinements without departing from the spirit and scope of the invention; therefore, the scope of protection of the invention shall be defined by the appended claims.

S202~S212‧‧‧steps of the moving object tracking method

Claims (8)

1. A moving object tracking method, comprising: receiving a plurality of video data respectively from a plurality of cameras, wherein the cameras comprise a first camera having a first shooting angle range and a second camera having a second shooting angle range, the video data comprise a first video data and a second video data respectively corresponding to the first shooting angle range and the second shooting angle range, and the first shooting angle range does not overlap the second shooting angle range; detecting at least one first foreground object and at least one second foreground object from the first video data and the second video data respectively, and tracking the first foreground object and the second foreground object to obtain a moving path of each first foreground object and a moving path of each second foreground object, wherein the step of tracking the first foreground object and the second foreground object to obtain the moving path of each first foreground object and the moving path of each second foreground object comprises: determining whether an overlap occurs, in a first frame of the first video data, between a first reference foreground object among the first foreground objects and one of the first foreground objects; when it is determined that the overlap occurs, calculating a first predicted position of the first reference foreground object in a frame following the first frame according to a Kalman filter, performing a mean shift algorithm on the first predicted position to obtain a first detected position, and correcting the first detected position according to the Kalman filter to obtain a first corrected position; when it is determined that no overlap occurs, tracking the first foreground object according to a block-based tracking method; determining whether the overlap occurs, in a second frame of the second video data, between a second reference foreground object among the second foreground objects and one of the second foreground objects; when it is determined that the overlap occurs, calculating a second predicted position of the second reference foreground object in a frame following the second frame according to the Kalman filter, performing the mean shift algorithm on the second predicted position to obtain a second detected position, and correcting the second detected position according to the Kalman filter to obtain a second corrected position; and when it is determined that no overlap occurs, tracking the second foreground object according to the block-based tracking method; estimating at least one moving time for the first foreground object to move from the first shooting angle range to the second shooting angle range, and calculating a brightness relationship between the first video data and the second video data; and comparing the first foreground object with the second foreground object according to the moving time, color information of the first foreground object, and color information of the second foreground object, so as to identify the first foreground object associated with each second foreground object, thereby establishing a complete moving path of each second foreground object.
2. The moving object tracking method of claim 1, wherein the step of detecting the first foreground object and the second foreground object from the first video data and the second video data respectively comprises: establishing, by using a Gaussian mixture model, a first background model and a second background model respectively corresponding to the first shooting angle range and the second shooting angle range; obtaining, by using a background subtraction method, at least one first shadowed foreground object and at least one second shadowed foreground object from the first video data and the second video data respectively according to the first background model and the second background model; and performing a shadow filtering process and a morphological process on the first shadowed foreground object and the second shadowed foreground object to generate the first foreground object and the second foreground object respectively.
3. The moving object tracking method of claim 1, wherein the step of estimating the moving time for the first foreground object to move from the first shooting angle range to the second shooting angle range and calculating the brightness relationship between the first video data and the second video data comprises: in a training stage, collecting statistics on the time required for a plurality of known moving objects to move from the first shooting angle range to the second shooting angle range, so as to obtain the moving time; and in the training stage, obtaining the brightness relationship by using a brightness transfer function according to color information of the known moving objects in the first shooting angle range and the second shooting angle range respectively.
4. The moving object tracking method of claim 1, wherein the second foreground object comprises a second target foreground object, and the step of comparing the first foreground object with the second foreground object according to the moving time, the color information of the first foreground object, and the color information of the second foreground object, so as to identify the first foreground object associated with each second foreground object and thereby establish the complete moving path of each second foreground object, comprises: when the second target foreground object is detected entering the second shooting angle range, converting the color information of the second target foreground object according to the brightness relationship; querying a database according to the moving time to obtain at least one first temporally close foreground object associated with the second target foreground object, wherein the database records the color information and a time point corresponding to each first foreground object, and the first temporally close foreground object is the first foreground object whose time point conforms to the moving time; comparing, according to the color information of the second target foreground object and the color information of the first temporally close foreground object, the second target foreground object with each first temporally close foreground object, so as to identify, among the first temporally close foreground objects, a first target foreground object associated with the second target foreground object, wherein the color information of the first target foreground object is similar to that of the second target foreground object; and establishing the complete moving path of the second target foreground object according to the moving path of the first target foreground object and the moving path of the second target foreground object.
一種電子裝置,包括:一儲存單元,記錄多個模組;以及一或多個處理單元,耦接該儲存單元,以存取並執行該儲存單元中記錄的所述模組,所述模組包括:一資料接收模組,自多個攝影機分別接收多個視訊資料,其中所述攝影機包括具有一第一拍攝視角範圍的一第一攝影 機以及一第二拍攝視角範圍的一第二攝影機,所述視訊資料包括分別對應於該第一拍攝視角範圍以及該第二拍攝視角範圍的一第一視訊資料以及一第二視訊資料,該第一拍攝視角範圍與該第二拍攝視角範圍無重疊;一偵測模組,自該第一視訊資料以及該第二視訊資料分別偵測至少一第一前景物體以及至少一第二前景物體;一追蹤模組,追蹤所述第一前景物體以及所述第二前景物體,以取得各所述第一前景物體的移動路徑以及各所述第二前景物體的移動路徑,其中該追蹤模組判斷所述第一視訊資料的一第一畫面中,所述第一前景物體的一第一參考前景物體與所述第一前景物體的其中之一者是否發生一重疊現象,當該追蹤模組判斷發生該重疊現象時,根據一卡爾曼濾波器計算該第一參考前景物體於該第一畫面的下一個畫面中的一第一預測位置,並且針對該第一預測位置執行一均值漂移演算法,以取得一第一偵測位置,再根據該卡爾曼濾波器修正該第一偵測位置,以取得一第一修正位置,當該追蹤模組判斷無發生該重疊現象時,根據一區塊式追蹤法追蹤所述第一前景物體,該追蹤模組又判斷所述第二視訊資料的一第二畫面中,所述第二前景物體的一第二參考前景物體與所述第二前景物體的其中之一者是否發生該重疊現象; 當該追蹤模組判斷發生該重疊現象時,根據該卡爾曼濾波器計算該第二參考前景物體於該第二畫面的下一個畫面中的一第二預測位置,並且針對該第二預測位置執行該均值漂移演算法,以取得一第二偵測位置,再根據該卡爾曼濾波器修正該第二偵測位置,以取得一第二修正位置,以及當該追蹤模組判斷無發生該重疊現象時,根據該區塊式追蹤法追蹤所述第二前景物體;以及一識別模組,估算所述第一前景物體自該第一拍攝視角範圍移動到該第二拍攝視角範圍之間的至少一移動時間,計算該第一視訊資料以及該第二視訊資料的一亮度關係,以及根據所述移動時間、所述第一前景物體的色彩資訊以及所述第二前景物體的色彩資訊,比對所述第一前景物體以及所述第二前景物體,據以辨識各所述第二前景物體所關聯的該第一前景物體,從而建立各所述第二前景物體的完整移動路徑。 An electronic device includes: a storage unit that records a plurality of modules; and one or more processing units coupled to the storage unit to access and execute the module recorded in the storage unit, the module The method includes: a data receiving module, respectively receiving a plurality of video materials from a plurality of cameras, wherein the camera includes a first camera having a first shooting angle range And a second camera of the second shooting angle range, the video data includes a first video material and a second video data respectively corresponding to the first shooting angle range and the second shooting angle range, Detecting at least one first foreground object and at least one second foreground object from the first video data and the second video data; Tracking module, tracking the first foreground object and the second foreground object to obtain a moving path of each of the first foreground objects and a moving path of each of the second foreground objects, wherein the tracking module 
determines In a first picture of the first video material, whether a first reference foreground object of the first foreground object and one of the first foreground objects overlap, when the tracking module determines that the occurrence occurs In the overlapping phenomenon, calculating a first predicted position of the first reference foreground object in a next picture of the first picture according to a Kalman filter, and the pin The first predicted position performs a mean shift algorithm to obtain a first detected position, and then corrects the first detected position according to the Kalman filter to obtain a first corrected position, when the tracking module determines When the overlapping phenomenon does not occur, the first foreground object is tracked according to a block tracking method, and the tracking module further determines a second foreground object in a second picture of the second video data. Whether the overlap occurs between the reference foreground object and one of the second foreground objects; When the tracking module determines that the overlapping phenomenon occurs, calculating a second predicted position of the second reference foreground object in the next picture of the second picture according to the Kalman filter, and performing for the second predicted position The mean shift algorithm is used to obtain a second detection position, and then the second detection position is corrected according to the Kalman filter to obtain a second correction position, and when the tracking module determines that the overlap does not occur Tracking the second foreground object according to the block tracking method; and identifying a module, estimating at least one of the first foreground object moving from the first shooting angle range to the second shooting angle range Moving time, calculating a brightness relationship of the first video material and the second video material, and comparing the color time according to the moving time, color 
information of the first foreground object and the color information of the second foreground object, compares the first foreground objects with the second foreground objects to identify the first foreground object associated with each second foreground object, thereby establishing a complete moving path of each foreground object. 如申請專利範圍第5項所述的電子裝置，其中該偵測模組利用一高斯混合模型，分別建立對應於該第一拍攝視角範圍以及該第二拍攝視角範圍的一第一背景模型以及一第二背景模型，該偵測模組利用一背景相減法，根據該第一背景模型以及該第二背景模型，分別自該第一視訊資料以及該第二視訊資料取得至少一第一陰影前景物體以及至少一第二陰影前景物體，以及該偵測模組針對所述第一陰影前景物體以及所述第二陰影前景物體進行一陰影濾除處理以及一型態學處理，以分別產生所述第一前景物體以及所述第二前景物體。 The electronic device of claim 5, wherein the detection module uses a Gaussian mixture model to establish a first background model and a second background model corresponding to the first shooting angle range and the second shooting angle range, respectively; the detection module uses a background subtraction method to obtain at least one first shadowed foreground object and at least one second shadowed foreground object from the first video data and the second video data according to the first background model and the second background model, respectively; and the detection module performs a shadow filtering process and a morphological process on the first shadowed foreground object and the second shadowed foreground object to produce the first foreground object and the second foreground object, respectively. 如申請專利範圍第5項所述的電子裝置，其中該識別模組於一訓練階段，統計多個已知移動物體自該第一拍攝視角範圍移動到該第二拍攝視角範圍之間所需的時間，以取得所述移動時間，以及該識別模組於該訓練階段，利用一亮度轉換函式，根據所述已知移動物體分別於該第一拍攝視角範圍以及該第二拍攝視角範圍的色彩資訊，取得該亮度關係。 The electronic device of claim 5, wherein the recognition module, in a training phase, collects statistics on the time that a plurality of known moving objects take to move from the first shooting angle range to the second shooting angle range, so as to obtain the moving time; and the recognition module, in the training phase, uses a brightness conversion function to obtain the brightness relationship according to the color information of the known moving objects in the first shooting angle range and in the second shooting angle range, respectively. 如申請專利範圍第5項所述的電子裝置更包括一資料庫，記錄各所述第一前景物體所對應的色彩資訊以及一時間點，所述第二前景物體包括一第二目標前景物體，其中當該偵測模組偵測該第二目標前景物體進入該第二拍攝視角範圍時，該識別模組根據該亮度關係轉換該第二目標前景物體的色彩資訊，該識別模組根據所述移動時間，查詢該資料庫，以取得關聯於該第二目標前景物體的至少一第一時間相近前景物體，其中所述第一時間相近前景物體為所述時間點中符合所述移動時間的第一前景物體，該識別模組根據該第二目標前景物體的色彩資訊以及所述第一時間相近前景物體的色彩資訊，比對該第二目標前景物體與各所述第一時間相近前景物體，以自各所述第一時間相近前景物體之中辨識該第二目標前景物體所關聯的一第一目標前景物體，其中該第一目標前景物體的色彩資訊相似於該第二目標前景物體，以及該識別模組根據該第一目標前景物體的移動路徑以及該第二目標前景物體的移動路徑，建立該第二目標前景物體的完整移動路徑。 The electronic device of claim 5, further comprising a database that records the color information and a time point corresponding to each of the first foreground objects, the second foreground objects including a second target foreground object, wherein when the detection module detects that the second target foreground object enters the second shooting angle range, the recognition module converts the color information of the second target foreground object according to the brightness relationship; the recognition module queries the database according to the moving time to obtain at least one time-matched first foreground object associated with the second target foreground object, wherein a time-matched first foreground object is a first foreground object whose recorded time point is consistent with the moving time; the recognition module compares the second target foreground object with each time-matched first foreground object according to the color information of the second target foreground object and the color information of the time-matched first foreground objects, so as to identify, among the time-matched first foreground objects, a first target foreground object associated with the second target foreground object, wherein the color information of the first target foreground object is similar to that of the second target foreground object; and the recognition module establishes a complete moving path of the second target foreground object according to the moving path of the first target foreground object and the moving path of the second target foreground object.
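The per-camera detection stage in claim 6 (Gaussian mixture background model, background subtraction, morphological processing) can be sketched as follows. This is a simplified illustration, not the patented implementation: a single Gaussian per pixel stands in for the full mixture model, a 3×3 opening stands in for the morphological processing, and the shadow-filtering step is omitted. All function names are hypothetical.

```python
import numpy as np

def build_background(frames):
    """Fit a per-pixel Gaussian background model from training frames
    (single-component simplification of a Gaussian mixture model)."""
    stack = np.stack(frames).astype(np.float64)
    return stack.mean(axis=0), stack.std(axis=0) + 1e-6

def subtract_background(frame, mean, std, k=2.5):
    """Background subtraction: pixels far from the model are foreground."""
    return np.abs(frame.astype(np.float64) - mean) > k * std

def morphological_open(mask):
    """3x3 opening (erosion then dilation) to remove isolated noise pixels."""
    h, w = mask.shape
    padded = np.pad(mask, 1)
    # erosion: a pixel survives only if its whole 3x3 neighbourhood is set
    eroded = np.ones_like(mask)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            eroded &= padded[dy:dy + h, dx:dx + w]
    # dilation: a pixel is set if any pixel in its 3x3 neighbourhood is set
    padded = np.pad(eroded, 1)
    dilated = np.zeros_like(mask)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            dilated |= padded[dy:dy + h, dx:dx + w]
    return dilated
```

In practice this stage would typically use an adaptive mixture model (e.g. OpenCV's `BackgroundSubtractorMOG2`) so the background can absorb gradual lighting changes; the sketch above only shows the data flow the claim describes.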
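The cross-camera hand-off in claims 7 and 8 (a brightness conversion function learned in a training phase, a transit-time query against the database, and a color comparison) can be sketched like this. The sketch assumes a linear brightness transfer function and Euclidean color distance; those choices, and every name below, are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def fit_brightness_transfer(values_cam1, values_cam2):
    """Fit a linear brightness conversion mapping camera-2 intensities to
    camera-1 intensities from paired observations of known moving objects."""
    slope, intercept = np.polyfit(values_cam2, values_cam1, 1)
    return lambda c: slope * c + intercept

def match_foreground(target_color, enter_time, database,
                     moving_time, tolerance, btf):
    """database: (exit_time, color_vector) records of first foreground objects.

    Returns the index of the record that passes the transit-time gate and
    is closest in (brightness-normalized) color, or None if none passes."""
    mapped = btf(np.asarray(target_color, dtype=np.float64))
    best_index, best_distance = None, np.inf
    for index, (exit_time, color) in enumerate(database):
        # time gate: observed transit must be close to the learned moving time
        if abs((enter_time - exit_time) - moving_time) > tolerance:
            continue
        distance = np.linalg.norm(mapped - np.asarray(color))
        if distance < best_distance:
            best_index, best_distance = index, distance
    return best_index
```

The time gate prunes the database to the few first foreground objects whose recorded time points are consistent with the learned moving time, so the color comparison only has to disambiguate among plausible candidates.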
TW103102279A 2014-01-22 2014-01-22 Method for tracking moving object and electronic apparatus using the same TWI517100B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW103102279A TWI517100B (en) 2014-01-22 2014-01-22 Method for tracking moving object and electronic apparatus using the same


Publications (2)

Publication Number Publication Date
TW201530495A TW201530495A (en) 2015-08-01
TWI517100B true TWI517100B (en) 2016-01-11

Family

ID=54342786

Family Applications (1)

Application Number Title Priority Date Filing Date
TW103102279A TWI517100B (en) 2014-01-22 2014-01-22 Method for tracking moving object and electronic apparatus using the same

Country Status (1)

Country Link
TW (1) TWI517100B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI620148B (en) * 2016-04-28 2018-04-01 新加坡商雲網科技新加坡有限公司 Device and method for monitoring, method for counting people at a location
US11405581B2 (en) 2017-12-26 2022-08-02 Pixart Imaging Inc. Motion detection methods and image sensor devices capable of generating ranking list of regions of interest and pre-recording monitoring images
TWI684956B (en) * 2018-12-04 2020-02-11 中華電信股份有限公司 Object recognition and tracking system and method thereof
CN113923344B (en) * 2020-07-09 2024-02-06 原相科技股份有限公司 Motion detection method and image sensor device

Also Published As

Publication number Publication date
TW201530495A (en) 2015-08-01

Similar Documents

Publication Publication Date Title
US10417503B2 (en) Image processing apparatus and image processing method
US7916944B2 (en) System and method for feature level foreground segmentation
JP6904346B2 (en) Image processing equipment, image processing systems, and image processing methods, and programs
CN109035304B (en) Target tracking method, medium, computing device and apparatus
Mangawati et al. Object Tracking Algorithms for video surveillance applications
CN107240124A (en) Across camera lens multi-object tracking method and device based on space-time restriction
Sun et al. Moving foreground object detection via robust SIFT trajectories
TWI425445B (en) Method and detecting system for determining quantity of self-motion of a moving platform
WO2019076187A1 (en) Video blocking region selection method and apparatus, electronic device, and system
US10762372B2 (en) Image processing apparatus and control method therefor
Denman et al. Multi-spectral fusion for surveillance systems
Jiang et al. Multiple pedestrian tracking using colour and motion models
TWI517100B (en) Method for tracking moving object and electronic apparatus using the same
Fradi et al. Spatio-temporal crowd density model in a human detection and tracking framework
CN109447022B (en) Lens type identification method and device
Jung et al. Object Detection and Tracking‐Based Camera Calibration for Normalized Human Height Estimation
CN109819206B (en) Object tracking method based on image, system thereof and computer readable storage medium
Mousse et al. People counting via multiple views using a fast information fusion approach
Guan et al. Multi-person tracking-by-detection with local particle filtering and global occlusion handling
KR101690050B1 (en) Intelligent video security system
Almomani et al. Segtrack: A novel tracking system with improved object segmentation
US20210312170A1 (en) Person detection and identification using overhead depth images
Kim et al. A disparity-based adaptive multihomography method for moving target detection based on global motion compensation
KR101595334B1 (en) Method and apparatus for movement trajectory tracking of moving object on animal farm
CN107067411B (en) Mean-shift tracking method combined with dense features

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees