TWI476703B - Real-time background modeling method - Google Patents

Real-time background modeling method

Info

Publication number
TWI476703B
Authority
TW
Taiwan
Prior art keywords
bit
value
real
unit
block
Prior art date
Application number
TW101128853A
Other languages
Chinese (zh)
Other versions
TW201407495A (en)
Inventor
Chih Yang Lin
Chia Hung Yeh
Li Wei Kang
Kahlil Muchtar
Original Assignee
Univ Asia
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Asia filed Critical Univ Asia
Priority to TW101128853A priority Critical patent/TWI476703B/en
Publication of TW201407495A publication Critical patent/TW201407495A/en
Application granted granted Critical
Publication of TWI476703B publication Critical patent/TWI476703B/en

Description

Real-time background modeling method

The present invention relates to a background modeling method, and more particularly to a real-time background modeling method that can be used to rapidly update a reference background model.

At present, detecting foreground objects in video captured by fixed surveillance cameras is a fundamental step in many computer vision applications. The common practice is to learn the background with a background model (background modeling) and then use the learned background model to perform foreground detection on subsequent input images. In previous studies, many methods for building background models have been proposed, such as using color and orientation information, or using pixel and block information as features that describe the background. To describe these features, the most commonly used approaches are Gaussian mixture models and local binary patterns (LBP), which allow a model to be built quickly.

However, foreground detection based on a background model must generally overcome several common problems: illumination changes (shadows or glare), moving backgrounds (water ripples or swaying leaves), and background changes (for example, a car that remains parked in the scene for a long time). The better these problems are handled, the more computation time is required, which makes real-time processing difficult to achieve. Background models are the basis of many applications, such as face recognition and object tracking. Without correct foreground information from a good background model, the subsequent recognition and tracking become much more difficult.

Therefore, how to build a fast, highly adaptive, and stable model for a fixed background with limited changes is the problem to be solved by the present invention.

In view of the above problems of the prior art, one object of the present invention is to provide a real-time background modeling method that builds a fast, highly adaptive, and stable background model.

According to the object of the present invention, a real-time background modeling method is provided, which is suitable for a surveillance camera device provided with an image capture module and a processing module. The method comprises the following steps: capturing a plurality of reference images by the image capture module; dividing each reference image into a plurality of mutually non-overlapping reference blocks by the processing module; calculating, by the processing module, a 1-bit reference datum and a 2-bit reference datum for each reference block of each reference image; capturing a real-time image by the image capture module; dividing the real-time image, by the processing module, into a plurality of non-overlapping real-time blocks according to the same partitioning as the reference images; calculating, by the processing module, a 1-bit real-time datum and a 2-bit real-time datum for each real-time block; comparing, by the processing module, the 1-bit real-time data with the 1-bit reference data at the corresponding positions, or the 2-bit real-time data with the 2-bit reference data at the corresponding positions, to produce a matching result for each comparison; and selectively executing, by the processing module, a data update procedure according to each matching result, so as to update the 1-bit reference data or the 2-bit reference data.

Preferably, the real-time background modeling method of the present invention further comprises the following steps: calculating, by the processing module, a number of bit transitions from the 1-bit reference data of each reference image; and comparing, by the processing module, each number of bit transitions with a smoothness threshold, wherein if the number of bit transitions is less than or equal to the smoothness threshold, the processing module selectively compares the 1-bit real-time data with the 1-bit reference data at the corresponding positions, and if the number of bit transitions is greater than the smoothness threshold, the processing module selectively compares the 2-bit real-time data with the 2-bit reference data at the corresponding positions.

Preferably, the real-time background modeling method of the present invention further comprises the following steps: calculating, by the processing module, a 1-bit distance value between the 1-bit real-time data and each 1-bit reference datum at the corresponding position, or a 2-bit distance value between the 2-bit real-time data and each 2-bit reference datum at the corresponding position; comparing, by the processing module, each 1-bit distance value or each 2-bit distance value with a distance threshold to produce the matching result; and, if one of the 1-bit distance values or one of the 2-bit distance values in the matching result is less than the distance threshold, executing a weight update procedure by the processing module, whereas if every 1-bit distance value or every 2-bit distance value is greater than the distance threshold, executing the data update procedure by the processing module.

Preferably, the weight update procedure comprises the following step: updating, by the processing module, a 1-bit weight value of each 1-bit reference datum or a 2-bit weight value of each 2-bit reference datum.

Preferably, the data update procedure further comprises the following steps: sorting, by the processing module, the 1-bit weight values of the 1-bit reference data or the 2-bit weight values of the 2-bit reference data; replacing, by the processing module, the 1-bit reference datum having the smallest 1-bit weight value with the 1-bit real-time data, or the 2-bit reference datum having the smallest 2-bit weight value with the 2-bit real-time data; and replacing, by the processing module, the smallest 1-bit weight value or the smallest 2-bit weight value with an initial weight value.

Preferably, each 1-bit reference datum and the 1-bit real-time data satisfy the following condition:
a_ij = 1 if x_ij ≥ m + TH_smooth, and a_ij = 0 if x_ij < m + TH_smooth;
where TH_smooth is a preset threshold, x_ij is the pixel value at position (i, j) of each reference block or real-time block, m is the mean of each reference block or real-time block, and a_ij is the 1-bit value at position (i, j) of each 1-bit reference datum or of the 1-bit real-time data.

Preferably, each 2-bit reference datum and the 2-bit real-time data satisfy the following conditions:
b_ij = 11 if x_ij ≥ hm + TH_smooth; b_ij = 10 if m + TH_smooth ≤ x_ij < hm + TH_smooth; b_ij = 01 if lm + TH_smooth ≤ x_ij < m + TH_smooth; b_ij = 00 if x_ij < lm + TH_smooth;
where TH_smooth is a preset threshold, x_ij is the pixel value at position (i, j) of each reference block or real-time block, m is the mean of each reference block or real-time block, hm is the mean computed over the pixels whose 1-bit value in the 1-bit reference datum (or the 1-bit real-time data) is 1, lm is the mean computed over the pixels whose 1-bit value is 0, and b_ij is the 2-bit value at position (i, j) of each 2-bit reference datum or of the 2-bit real-time data.

Preferably, the real-time background modeling method of the present invention further comprises the following steps: calculating, by the processing module, a 3-bit reference datum from each 1-bit reference datum and each 2-bit reference datum, and a 3-bit real-time datum from the 1-bit real-time data and the 2-bit real-time data; and comparing, by the processing module, the 3-bit real-time data with the 3-bit reference data at the corresponding positions to produce the matching results, and selectively updating the 3-bit reference data according to each matching result.

Preferably, each 3-bit reference datum and the 3-bit real-time data satisfy the following conditions:
c_ij = 111 if x_ij ≥ hhm + TH_smooth; c_ij = 110 if hm + TH_smooth ≤ x_ij < hhm + TH_smooth; c_ij = 101 if hlm + TH_smooth ≤ x_ij < hm + TH_smooth; c_ij = 100 if m + TH_smooth ≤ x_ij < hlm + TH_smooth; c_ij = 011 if lhm + TH_smooth ≤ x_ij < m + TH_smooth; c_ij = 010 if lm + TH_smooth ≤ x_ij < lhm + TH_smooth; c_ij = 001 if llm + TH_smooth ≤ x_ij < lm + TH_smooth; c_ij = 000 if x_ij < llm + TH_smooth;
where TH_smooth is a preset threshold, x_ij is the pixel value at position (i, j) of each reference block or real-time block, m is the mean of each reference block or real-time block, hm is the mean computed over the pixels whose 1-bit value is 1, lm is the mean computed over the pixels whose 1-bit value is 0, hhm is the mean computed over the pixels whose 2-bit value is 11, hlm is the mean computed over the pixels whose 2-bit value is 10, lhm is the mean computed over the pixels whose 2-bit value is 01, llm is the mean computed over the pixels whose 2-bit value is 00, and c_ij is the 3-bit value at position (i, j) of each 3-bit reference datum or of the 3-bit real-time data.

Preferably, the real-time background modeling method of the present invention further comprises the following steps: when the processing module compares the 3-bit real-time data with the 3-bit reference data at the corresponding positions, further calculating a 3-bit distance value for each comparison and comparing each 3-bit distance value with a distance threshold to produce the matching result; and, if one of the 3-bit distance values in the matching result is less than the distance threshold, executing a weight update procedure by the processing module, whereas if every 3-bit distance value is greater than the distance threshold, executing the data update procedure by the processing module.

As described above, the real-time background modeling method according to the present invention may have one or more of the following advantages:

(1) This real-time background modeling method replaces the computation of the more complex variance with a count of bit transitions, so that the method of the present invention can update the background model quickly, thereby reducing its computation time.

(2) This real-time background modeling method achieves a high tolerance to illumination changes by executing the data update procedure and the weight update procedure.

(3) This real-time background modeling method applies multi-bit patterns to the blocks, thereby improving the correctness of the foreground information separated from the background model.

In order to allow the examiner to understand the technical features, contents, and advantages of the present invention and the effects it can achieve, the present invention is described in detail below with reference to the accompanying drawings and in the form of embodiments. The drawings are intended only for illustration and to assist the specification, and do not necessarily represent the true proportions and precise configurations after implementation of the present invention; therefore, the proportions and configurations of the accompanying drawings should not be interpreted as limiting the scope of the present invention in actual implementation.

Please refer to FIG. 1, which is a flowchart of the real-time background modeling method of the present invention.

In step S11: a plurality of reference images are captured by the image capture module.

In step S12: each reference image is divided into a plurality of mutually non-overlapping reference blocks by the processing module.

In step S13: a 1-bit reference datum and a 2-bit reference datum are calculated by the processing module for each reference block of each reference image.

In step S14: a real-time image is captured by the image capture module.

In step S15: the real-time image is divided by the processing module into a plurality of non-overlapping real-time blocks, following the same partitioning as the reference images.

In step S16: a 1-bit real-time datum and a 2-bit real-time datum are calculated by the processing module for each real-time block.

In step S17: the processing module compares the 1-bit real-time data with the 1-bit reference data at the corresponding positions, or the 2-bit real-time data with the 2-bit reference data at the corresponding positions, to produce a matching result.

In step S18: the processing module selectively executes a data update procedure according to each matching result, so as to update the 1-bit reference data or the 2-bit reference data.

Please refer to FIG. 2, which is a schematic diagram of the first embodiment of the real-time background modeling method of the present invention. As shown in the figure, after the surveillance camera device captures a plurality of reference images, each reference image is first divided into a plurality of mutually non-overlapping reference blocks; the reference block (a) in the figure is one of these reference blocks. The processing module then calculates the 1-bit reference datum (b) from the reference block (a) with the following equation:
a_ij = 1 if x_ij ≥ m + TH_smooth, and a_ij = 0 if x_ij < m + TH_smooth;
where TH_smooth is a preset threshold, x_ij is the pixel value at position (i, j) of each reference block or real-time block, m is the mean of each reference block or real-time block, and a_ij is the 1-bit value at position (i, j) of each 1-bit reference datum or of the 1-bit real-time data.

In more detail, the mean m of the reference block (a) in FIG. 2 is 99.25, and the processing module compares each value in the reference block (a) with the mean m plus the preset threshold. If a value in the reference block (a) is less than the mean plus the preset threshold, the corresponding value of the 1-bit reference datum is 0; if it is greater than or equal to the mean plus the preset threshold, the corresponding value is 1. Thus, after every pixel value in the reference block (a) has been processed by the above equation, the 1-bit reference datum (b) is obtained.
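
As an illustration only, the following Python/NumPy sketch computes such a 1-bit bitmap; the function name, the example block values, and the default threshold of 8 (the TH_smooth value reported later in this description) are the editor's assumptions, not part of the patented implementation.

```python
import numpy as np

def one_bit_bitmap(block, th_smooth=8):
    """1-bit bitmap: 1 where a pixel is at least the block mean plus the
    smoothness threshold TH_smooth, and 0 otherwise."""
    m = block.mean()
    return (block >= m + th_smooth).astype(np.uint8)

# Hypothetical 4x4 block (not the actual block of FIG. 2)
block = np.array([[ 96, 112, 108,  90],
                  [ 94, 110, 107,  92],
                  [ 95, 111, 105,  91],
                  [ 93, 109, 104,  89]], dtype=float)
print(one_bit_bitmap(block))
```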

After the processing module has calculated the 1-bit reference datum (b), it calculates the 2-bit reference datum (c) from the reference block (a) and the 1-bit reference datum (b) with the following equation:
b_ij = 11 if x_ij ≥ hm + TH_smooth; b_ij = 10 if m + TH_smooth ≤ x_ij < hm + TH_smooth; b_ij = 01 if lm + TH_smooth ≤ x_ij < m + TH_smooth; b_ij = 00 if x_ij < lm + TH_smooth;
where TH_smooth is a preset threshold, x_ij is the pixel value at position (i, j) of each reference block or real-time block, m is the mean of each reference block or real-time block, hm is the mean computed over the pixels whose 1-bit value is 1, lm is the mean computed over the pixels whose 1-bit value is 0, and b_ij is the 2-bit value at position (i, j) of each 2-bit reference datum or of the 2-bit real-time data.

In more detail, the processing module calculates the mean hm of the pixel values whose 1-bit value is 1, which is 108.86, and the mean lm of the pixel values whose 1-bit value is 0, which is 91.77. After hm and lm have been calculated, the processing module compares each value in the reference block (a) with lm + TH_smooth, m + TH_smooth, and hm + TH_smooth. If a value in the reference block (a) is less than lm + TH_smooth, the value of the 2-bit reference datum is 00. If it is between lm + TH_smooth and m + TH_smooth, the value is 01. If it is between m + TH_smooth and hm + TH_smooth, the value is 10. If it is greater than or equal to hm + TH_smooth, the value is 11. Thus, after every pixel value in the reference block (a) has been processed by the above equation, the 2-bit reference datum (c) is obtained.
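
A corresponding sketch for the 2-bit datum, again an illustrative Python/NumPy rendering under the same assumptions (integer codes 0-3 stand for the bit pairs 00-11, and the block is assumed to contain pixels in both 1-bit classes):

```python
import numpy as np

def two_bit_bitmap(block, th_smooth=8):
    """2-bit bitmap: codes 0..3 correspond to the bit pairs 00, 01, 10, 11,
    obtained by comparing each pixel with lm, m and hm plus TH_smooth."""
    m = block.mean()
    ones = block >= m + th_smooth          # 1-bit bitmap of the block
    hm = block[ones].mean()                # mean of pixels whose 1-bit value is 1
    lm = block[~ones].mean()               # mean of pixels whose 1-bit value is 0
    codes = np.zeros(block.shape, dtype=np.uint8)
    codes[block >= lm + th_smooth] = 1     # 01
    codes[block >= m + th_smooth] = 2      # 10
    codes[block >= hm + th_smooth] = 3     # 11
    return codes
```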

It is worth noting that the calculation of the 1-bit and 2-bit reference data uses the information of the whole block, which improves the correctness of the extracted foreground information and gives a higher tolerance to illumination changes. In addition, because the present invention only requires simple computations, its computation time is short and the method is suitable for real-time surveillance camera devices.

Please refer to FIG. 3, which is a schematic diagram of the second embodiment of the real-time background modeling method of the present invention. As shown in the figure, the processing module divides the reference images into a plurality of mutually non-overlapping reference blocks (a) and calculates the 1-bit reference datum (b). The processing module then calculates the 2-bit reference datum (c) from the reference block (a) and the 1-bit reference datum (b). The detailed calculation of the 1-bit reference datum (b) and the 2-bit reference datum (c) is the same as in the first embodiment and is not repeated here. After the processing module has calculated the 2-bit reference datum (c), it calculates the 3-bit reference datum (d) from the reference block (a) and the 2-bit reference datum (c) with the following equation:
c_ij = 111 if x_ij ≥ hhm + TH_smooth; c_ij = 110 if hm + TH_smooth ≤ x_ij < hhm + TH_smooth; c_ij = 101 if hlm + TH_smooth ≤ x_ij < hm + TH_smooth; c_ij = 100 if m + TH_smooth ≤ x_ij < hlm + TH_smooth; c_ij = 011 if lhm + TH_smooth ≤ x_ij < m + TH_smooth; c_ij = 010 if lm + TH_smooth ≤ x_ij < lhm + TH_smooth; c_ij = 001 if llm + TH_smooth ≤ x_ij < lm + TH_smooth; c_ij = 000 if x_ij < llm + TH_smooth;
where TH_smooth is a preset threshold, x_ij is the pixel value at position (i, j) of each reference block or real-time block, m is the mean of each reference block or real-time block, hm is the mean computed over the pixels whose 1-bit value is 1, lm is the mean computed over the pixels whose 1-bit value is 0, hhm is the mean computed over the pixels whose 2-bit value is 11, hlm is the mean computed over the pixels whose 2-bit value is 10, lhm is the mean computed over the pixels whose 2-bit value is 01, llm is the mean computed over the pixels whose 2-bit value is 00, and c_ij is the 3-bit value at position (i, j) of each 3-bit reference datum or of the 3-bit real-time data.

In more detail, the processing module calculates the mean hm of the pixel values whose 1-bit value is 1, which is 108.86, and the mean lm of the pixel values whose 1-bit value is 0, which is 91.77. The processing module then calculates the mean hhm of the pixel values whose 2-bit value is 11, which is 115; the mean hlm of the pixel values whose 2-bit value is 10, which is 104.25; the mean lhm of the pixel values whose 2-bit value is 01, which is 95.4; and the mean llm of the pixel values whose 2-bit value is 00, which is 87.25.

After hm, lm, hhm, hlm, lhm, and llm have been calculated, the processing module compares each value in the reference block (a) with llm + TH_smooth, lm + TH_smooth, lhm + TH_smooth, m + TH_smooth, hlm + TH_smooth, hm + TH_smooth, and hhm + TH_smooth. If a value in the reference block (a) is greater than hhm + TH_smooth, the value of the 3-bit reference datum is 111. If it is between hm + TH_smooth and hhm + TH_smooth, the value is 110. If it is between hlm + TH_smooth and hm + TH_smooth, the value is 101. If it is between m + TH_smooth and hlm + TH_smooth, the value is 100. If it is between lhm + TH_smooth and m + TH_smooth, the value is 011. If it is between lm + TH_smooth and lhm + TH_smooth, the value is 010. If it is between llm + TH_smooth and lm + TH_smooth, the value is 001. If it is less than or equal to llm + TH_smooth, the value is 000. Thus, after every pixel value in the reference block (a) has been processed by the above equation, the 3-bit reference datum (d) is obtained.
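
The 3-bit datum extends the same idea with seven thresholds. The sketch below is again only an illustration: it assumes every 2-bit class of the block is non-empty, as in the example of FIG. 3, so that all sub-means exist and the seven levels are in increasing order.

```python
import numpy as np

def three_bit_bitmap(block, th_smooth=8):
    """3-bit bitmap: codes 0..7 correspond to 000..111, obtained by comparing
    each pixel with llm, lm, lhm, m, hlm, hm and hhm plus TH_smooth."""
    m = block.mean()
    ones = block >= m + th_smooth
    hm, lm = block[ones].mean(), block[~ones].mean()
    two = np.zeros(block.shape, dtype=np.uint8)
    two[block >= lm + th_smooth] = 1
    two[block >= m + th_smooth] = 2
    two[block >= hm + th_smooth] = 3
    llm, lhm, hlm, hhm = (block[two == c].mean() for c in range(4))
    thresholds = [llm, lm, lhm, m, hlm, hm, hhm]   # assumed increasing, as in FIG. 3
    codes = np.zeros(block.shape, dtype=np.uint8)
    for code, level in enumerate(thresholds, start=1):
        codes[block >= level + th_smooth] = code
    return codes
```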

It is worth noting that the 1-bit, 2-bit, and 3-bit patterns of the present invention can be further extended to 4-bit, 5-bit, and other multi-bit patterns according to requirements.

Further, the present invention is described in more detail as follows. After the camera device captures an image, the frame is first divided into non-overlapping blocks of n × n pixels. For each block, the mean m is calculated as
m = (1 / (n × n)) · Σ_(i,j) x_ij,
where x_ij is the pixel value at position (i, j) of the block.

The output of each image block is a binary bitmap (BM) of the same size as the block. This binary bitmap is generated by the following equation:
a_ij = 1 if x_ij ≥ m, and a_ij = 0 otherwise;
where a_ij denotes the bit value at position (i, j) of the binary bitmap; a bit value of 1 indicates that the pixel value is greater than the mean m, and a bit value of 0 indicates that the pixel value is less than the mean m. The set of binary bitmaps thus forms a texture descriptor of the input frame.

At the beginning, each input frame is divided into non-overlapping blocks, and according to the texture descriptor proposed above, each block is converted into a binary bitmap. It should be noted that if the pixels belong to a smooth block, the corresponding texture description may not be robust enough, because the pixel values may be very close to the mean; the binary bitmap of a smooth block may therefore contain unstable 0 or 1 bit values. To solve this problem, the present invention slightly modifies the bitmap generation equation to
a_ij = 1 if x_ij ≥ m + TH_smooth, and a_ij = 0 otherwise.
According to the experimental observations of the present invention, TH_smooth can be set to 8.

In addition, the images captured by a real-time surveillance camera device are color images, so each block should have separate binary bitmaps for the red, green, and blue channels. In the present invention, for ease of description, each block is represented by only one binary bitmap; the method of the present invention can be straightforwardly extended to three binary bitmaps per block. FIG. 4 shows an image produced by the texture descriptor. The texture of this image is clearly visible, which demonstrates the effectiveness of the texture descriptor.

We now consider how to use the feature vectors to construct the background model. The background model of each block consists of K weighted binary bitmaps (BM_1, BM_2, ..., BM_K), where each weight lies between 0 and 1 and the K weights sum to 1. The weight of the k-th weighted binary bitmap is denoted w_k. When a new block BM_new arrives, BM_new is compared with the K binary bitmaps using the Hamming distance
dist(BM_new, BM_m) = Σ_(i,j) ( BM_new(i, j) XOR BM_m(i, j) ),
where m ranges from 1 to K.

In the background model, if dist(BM_new, BM_m) is less than a predetermined threshold, the block BM_new matches BM_m and a weight update procedure is triggered; otherwise, BM_new is a foreground block and a data update procedure is triggered. Because the above distance only requires bit operations, its computation is very simple.
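
A minimal sketch of this matching step (illustrative Python/NumPy; the function names and the convention of returning the best match under the threshold are the editor's):

```python
import numpy as np

def hamming_distance(bm_a, bm_b):
    """Hamming-style distance: number of positions where two bitmaps differ."""
    return int(np.count_nonzero(bm_a != bm_b))

def match_block(bm_new, model_bitmaps, dist_threshold):
    """Return the index of the best-matching background bitmap if its distance
    to bm_new is below the threshold; otherwise None (foreground block)."""
    dists = [hamming_distance(bm_new, bm) for bm in model_bitmaps]
    best = int(np.argmin(dists))
    return best if dists[best] < dist_threshold else None
```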

The data update and weight update procedures are as follows. When an input block is a background block, the weights of the background model are updated by the following equation: w'_k = α·M_k + (1 − α)·w_k, where α is the learning rate, and M_k is 1 for the best-matching binary bitmap and 0 otherwise.

The learning rate determines the adaptation speed; that is, a larger learning rate gives faster adaptation. For each bit value of the best-matching binary bitmap, the update rule satisfies the following equation:
P_ij^t(v) = (1/t)·[BM_new(i, j) = v] + ((t − 1)/t)·P_ij^(t−1)(v),
where [·] is 1 when the condition holds and 0 otherwise, v ranges over the possible bit values, t denotes the t-th frame, and P_ij is 0 at the initial stage.
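
A compact sketch of the weight update and of the per-position running frequencies P_ij described above. The per-value formulation of P_ij is the editor's reading of the equation, so treat the second function as an assumption rather than the patented rule.

```python
import numpy as np

def update_weights(weights, best_k, alpha):
    """w'_k = alpha * M_k + (1 - alpha) * w_k, with M_k = 1 only for the best match."""
    m = np.zeros_like(weights)
    m[best_k] = 1.0
    return alpha * m + (1.0 - alpha) * weights

def update_bit_frequencies(p, bm_new, t, num_values=2):
    """Running frequency of each possible bit value at every position (i, j).
    p has shape (H, W, num_values) and is all zeros before the first frame (t = 1)."""
    one_hot = np.eye(num_values)[bm_new]               # indicator of the incoming values
    return (1.0 / t) * one_hot + ((t - 1.0) / t) * p
```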

If the input block is a foreground block, the data update procedure replaces the binary bitmap with the smallest weight value in the background model by the input block. In the present invention, the weight value of the new block is set to a low initial weight value of 0.01. Finally, the weight values of the background model are renormalized so that they sum to 1.
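
The data update procedure itself is short; a possible rendering (illustrative only, with the 0.01 initial weight quoted above as the default):

```python
import numpy as np

def data_update(model_bitmaps, weights, bm_new, init_weight=0.01):
    """Replace the bitmap with the smallest weight by the foreground block,
    give it the low initial weight, then renormalize the weights to sum to 1."""
    k_min = int(np.argmin(weights))
    model_bitmaps[k_min] = bm_new
    weights[k_min] = init_weight
    return model_bitmaps, weights / weights.sum()
```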

In the above description, an input block might match a binary bitmap with a low weight value and be regarded as a background block. However, a low weight means that the corresponding binary bitmap has a low probability of being background. To solve this problem, the present invention sorts the weights of the background model in decreasing order and selects the first B binary bitmaps as the background model, where B is chosen according to
B = argmin_b ( Σ_(k=1..b) w_k > TH_B ),
where TH_B is a predefined threshold.
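
Selecting the first B bitmaps by cumulative weight could look like the following sketch (assuming TH_B is smaller than the total weight of 1):

```python
import numpy as np

def select_background(weights, th_b):
    """Sort weights in decreasing order and keep the first B bitmaps whose
    cumulative weight first exceeds TH_B; returns their indices."""
    order = np.argsort(weights)[::-1]
    cumulative = np.cumsum(weights[order])
    b = int(np.argmax(cumulative > th_b)) + 1   # smallest b with cumulative weight > TH_B
    return order[:b]
```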

In addition, when calculating the 1-bit, 2-bit, and 3-bit reference data, the means can also be produced by the following recursive equations:
M_i = (1 / |R_i|) · Σ_(x ∈ R_i) x,
R_2i = { x ∈ R_i : x < M_i }, R_2i+1 = { x ∈ R_i : x ≥ M_i },
where x is a pixel value, R_i is the i-th region, M_i is the mean of R_i, and |R_i| is the number of pixels in region R_i. Initially, i equals 1 and R_1 is defined as all pixels of the given block.

The K-bit pattern of a pixel can then be generated by repeatedly applying the above recursion with increasing i: at each level, a pixel contributes a bit value of 0 if it falls into the lower sub-region R_2i and a bit value of 1 if it falls into the higher sub-region R_2i+1. Please refer to FIG. 5, which shows a two-level recursive procedure for the coarse-to-fine texture description of the present invention. FIG. 5(a) is an input block R1 with M1 = 3.8. By applying the above recursion, this block is divided into two sub-blocks R2 and R3, with M2 = 2.17 and M3 = 5.76, respectively. This stage produces the first bit value of the K-bit pattern. Similarly, according to M2 and M3, R2 produces R4 and R5, and R3 produces R6 and R7, so the second bit value of the K-bit pattern is produced. It should be noted that if this procedure ends at the 8-bit mode, the finest-level blocks are obtained.
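
A sketch of this coarse-to-fine recursion (illustrative Python/NumPy; the region-labeling formulation is the editor's way of expressing the R_2i / R_2i+1 split):

```python
import numpy as np

def k_bit_pattern(block, k):
    """Coarse-to-fine K-bit pattern: at every level each pixel gains one more bit,
    0 if it lies below the mean of its current region R_i and 1 otherwise."""
    codes = np.zeros(block.shape, dtype=np.uint32)
    regions = np.zeros(block.shape, dtype=np.uint32)   # region label; R1 is the whole block
    for _ in range(k):
        bits = np.zeros(block.shape, dtype=np.uint32)
        for r in np.unique(regions):
            mask = regions == r
            bits[mask] = (block[mask] >= block[mask].mean()).astype(np.uint32)
        codes = (codes << 1) | bits        # append the new bit
        regions = regions * 2 + bits       # descend into R_2i (bit 0) or R_2i+1 (bit 1)
    return codes
```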

It can be observed from the figure that the size of the region used to evaluate the texture is determined dynamically according to its characteristics, which means that these textures can represent the block features more accurately.

In addition, since each block has a different complexity, each block should be represented with a different bit mode. We could compute the variance of a block to determine whether it is a smooth block, but computing the variance greatly reduces the efficiency of the present invention. Therefore, we use another strategy with comparatively low complexity, namely the number of bitwise 1/0 or 0/1 transitions, to represent the complexity of a block. The present invention computes the number of bitwise transitions from the 1-bit mode, and it is defined as the sum of the numbers of bitwise transitions of all rows of the binary bitmap. For example, the numbers of bitwise transitions of the rows of the 1-bit binary bitmap in FIG. 2 are 2, 2, 3, and 2, respectively. If the number of bitwise transitions of the binary bitmap is less than a predetermined threshold, the block is a smooth block; otherwise, the block is a complex block.
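
Counting bitwise transitions is inexpensive; the sketch below also shows the smooth/complex decision (the default threshold of 4 is the TH_bt value quoted later in this description, used here only as an illustrative assumption):

```python
import numpy as np

def bitwise_transitions(bitmap):
    """Number of 0/1 or 1/0 transitions, summed over the rows of a binary bitmap."""
    return int(np.count_nonzero(bitmap[:, 1:] != bitmap[:, :-1]))

def is_smooth_block(bitmap, th_bt=4):
    """A block is treated as smooth when its transition count is below TH_bt."""
    return bitwise_transitions(bitmap) < th_bt
```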

If a background block is a smooth block, it is represented in the 1-bit mode and its flag value is set to 1; otherwise the non-smooth block is represented in the 2-bit or 3-bit mode, in order to fit the characteristics of the block more accurately, and its flag value is set to 0. The user can define at the beginning whether non-smooth blocks are represented in the 2-bit or the 3-bit mode. In addition, a weight w_s is attached to each background block to indicate whether the block is a smooth block; initially, w_s is set to 0.5.

The weight update and data update procedures are as follows. For an input block, the number of bitwise transitions is used to determine its characteristic (smooth or non-smooth), and the block is converted into the corresponding mode, denoted BM_new. The update of w_s then satisfies the following equation: w'_s = α·M_s + (1 − α)·w_s, where M_s is 1 if the input block is a smooth block and 0 otherwise.

If the background blocks are smooth, w_s approaches 1; if they are non-smooth, w_s approaches 0. The weight and data update procedures are shown in FIG. 6. The right-hand branch of FIG. 6 is similar to the original 1-bit mode except for the update of BM_m: the rule for updating the bit pattern is changed from the above equation to the following one: the bit value of BM_m at position (i, j) is changed to the value v whose accumulated probability P_ij(v) is greater than 1/2^k, where k refers to the k-bit mode. For example, in the 2-bit mode, if the value of BM_m at position (i, j) is "01" and P_ij is {0.4, 0.2, 0.2, 0.2} after the above equation has been computed, the value is changed to "00".

The left-hand branch of FIG. 6 is used to change the k-bit mode of the background model. For a smooth background block (flag value 1), when the input blocks are non-smooth and cause w_s to fall below 0.5, the mode of the background block should be changed from the 1-bit mode to the 2-bit or 3-bit mode. Therefore, all BM_i that use the 1-bit mode are removed, and the 2-bit or 3-bit BM_new is inserted as the first bitmap of the new background model. Finally, the flag value is set to 0 and the new background model is reconstructed from the 2-bit or 3-bit BM_new to form K weighted bitmaps (BM_1, BM_2, ..., BM_K). As for a non-smooth background block (flag value 0), when the input blocks are smooth and cause w_s to become greater than or equal to 0.5, the reconstruction of the new background model is similar to the above description.

FIG. 7 is a flowchart of the real-time background modeling method of the present invention applied to moving-object detection. First, BM_new is checked to see whether its number of bitwise transitions is greater than the predetermined threshold TH_bt. Based on empirical results, TH_bt can usually be set to 4. The right-hand branch then handles the case in which BM_new is a smooth block, while the left-hand branch deals with non-smooth blocks. Finally, both branches apply the data update and weight update procedures shown in FIG. 6.

The following presents experimental results of the present invention on several databases, compared with the methods mentioned in other papers. These databases contain indoor and outdoor scenes. FIG. 8 and FIG. 9 show indoor scenes with illumination changes, in which a participant walks toward the camera. From the results of FIG. 8 and FIG. 9 it can be observed that the method of Stauffer and Grimson is very sensitive to illumination changes, while the method of Heikkilä and Pietikäinen is susceptible to noise. In contrast, the present invention resists illumination changes better, because it uses more information: the information of a region rather than of a single pixel. Because the present invention uses non-overlapping blocks, the contours computed by the method of the present invention are coarser than those of the other methods mentioned above.

More outdoor detection results compared with the method of Stauffer and Grimson are shown in FIG. 10 and FIG. 11. From the results of FIG. 11 it can be observed that, although the present invention may produce more holes inside moving objects, this effect can be removed by morphological operations.

In addition, for the method of the present invention, the comparison of the detection results of the 1-bit, 2-bit, and 3-bit modes is shown in FIG. 12. From these results it can be observed that the accuracy of the 2-bit or 3-bit mode is higher than that of the 1-bit mode. FIG. 12(c) shows that the 2-bit mode strengthens the shape of the moving object. However, in some cases the shadow may be regarded as foreground information, leading to false positives. Furthermore, in FIG. 12(d), more bit combinations are produced and, compared with the other results, the foreground is improved and more complete.

FIG. 13 to FIG. 20 provide further comparisons of the three bit modes of the present invention.

From FIG. 13 and FIG. 14 it can be observed that the 1-bit and 2-bit modes have a higher probability of breaking the moving object apart. In the 3-bit mode, however, this breakage is corrected and improved. FIG. 14 shows that the method of the present invention can outperform the other methods in shadow resistance and efficiency.

More comparisons using images from the CAVIAR public database are provided in FIG. 15 to FIG. 20. The present invention uses the CAVIAR public database, which has been widely used for experimental results in surveillance examples. FIG. 15 shows a participant walking along a corridor. In the connected-component result of the 2-bit mode shown in FIG. 15(f), a shadow appears around the feet of the object and is regarded as foreground information. In the 3-bit mode, however, the shadow can be properly blended into the background.

Please refer to FIG. 16, which illustrates the progressive improvement from the 1-bit mode to the 3-bit mode, together with the corresponding results of the method of Stauffer and Grimson and the method of Heikkilä and Pietikäinen. In FIG. 16, although the methods of Stauffer and Grimson and of Heikkilä and Pietikäinen obtain more slender foreground objects because they are pixel-based, their computational cost is much higher than that of the block-based method.

The remaining experiments use other sequences from the CAVIAR public database that are commonly used in surveillance examples. In FIG. 17 and FIG. 18, compared with the method of Stauffer and Grimson, the method of the present invention correctly resists shadows. Moreover, the hole problem can be alleviated by adopting the multi-bit modes. However, the method of Heikkilä and Pietikäinen produces better results on these two problems because it uses an overlapping-block strategy, which at the same time leads to a higher computational cost.

FIG. 19 shows that the method of the present invention can detect two separate foreground objects more accurately than the other methods. In FIG. 20, although the 1-bit mode blends the shadow into the foreground, the 2-bit and 3-bit modes compete well with the method of Heikkilä and Pietikäinen.

The performance of the method of the present invention is compared with two state-of-the-art methods using many image sequences. These image sequences were obtained from real indoor and outdoor environments and from public test databases. The simulation environment used for the experiments was equipped with a 2.93 GHz Intel Core 2 Duo processor and 2 GB of memory. The image resolution was set to 320x240 pixels. All algorithms were implemented in C++.

To label and segment the foreground pixels, a connected-components algorithm is applied to every background modeling method. The parameters used in these experiments are listed in Table 1, where α is the learning rate, TH_B is the threshold used in the equation above, K is the number of Gaussians, X indicates that the method does not need that parameter, BS is the block size, and LBP_(P,R) is used only in the method of Heikkilä and Pietikäinen and indicates that a radius R is used to find P neighbors.

The performance comparison of the three methods is shown in Table 2 (the present invention uses the 1-bit mode), where the last row indicates the connected-component labeling (CCL) method used in the background construction. From this table we can observe that, because the block-based method of the present invention only requires bit operations, it is faster than the other methods. The method of Heikkilä and Pietikäinen is the slowest, because it divides each frame into overlapping blocks and the size of the local binary pattern (LBP) histogram significantly affects its performance.

The performance comparison of the three different modes of the present invention (1-bit, 2-bit, and 3-bit) is shown in Table 3. It can be clearly seen that the 1-bit mode has the best efficiency. In addition, Table 4 shows empirical results for the classification of smooth and non-smooth blocks. As described above, two different schemes can be used: the variance scheme and the bitwise-transition scheme. The bitwise-transition scheme outperforms the variance scheme because of its comparatively low complexity.

Table 3 compares the number of frames per second achieved by the method of the present invention.

In addition, the following experiments were performed on indoor and outdoor scenes and on the CAVIAR public database. The frames containing moving objects were labeled manually to produce the ground truth. These image sequences contain various conditions, most of which can lead to unstable results. Therefore, to analyze this temporal instability, four measurements are used: precision, recall, similarity, and F-measure.

Precision can be regarded as a measure of exactness, while recall is a measure of completeness. The F-measure combines precision and recall into a single score and can be interpreted as the harmonic mean of precision and recall. Precision, recall, similarity, and the F-measure are computed as follows:
Precision = TP / (TP + FP);
Recall = TP / (TP + FN);
Similarity = TP / (TP + FP + FN);
F-measure = 2 · Precision · Recall / (Precision + Recall);
where, as shown in Table 5 and Table 6, TP, FP, FN, TN, P, and N are the numbers of true positives, false positives, false negatives, true negatives, total positives, and total negatives, respectively. It should be noted that the scores considered above lie between 0 and 1, so a higher value represents a higher accuracy.
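
These scores are straightforward to compute from pixel-level counts, as in the short Python sketch below; the similarity formula TP / (TP + FP + FN) follows common usage in this literature and is stated here as an assumption, since the original formula images are not reproduced in this text.

```python
def evaluation_metrics(tp, fp, fn):
    """Precision, recall, similarity and F-measure from pixel-level counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    similarity = tp / (tp + fp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, similarity, f_measure

# Example with hypothetical counts
print(evaluation_metrics(tp=900, fp=100, fn=150))
```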

Beyond the qualitative evaluation, a quantitative evaluation can determine whether the different methods are competitive. To support the quantitative results, we first provide the ground truth and the binary results corresponding to the various methods. FIG. 21 to FIG. 23 show the sequences to be evaluated. Once the ground truth is produced, the quantitative results can be obtained directly. As shown in Table 7 to Table 9, when more bit modes are adopted, the four considered metrics of the present invention increase progressively, and more accurate results are obtained.

In addition, because the method of the present invention uses non-overlapping blocks, some problems arise, such as coarse object shapes and holes. By applying a morphological algorithm, more seamless shapes can be obtained; moreover, as shown in Table 3, the morphological algorithm only slightly affects the efficiency. FIG. 24 therefore shows the binary results of the 3-bit mode of the method of the present invention with and without the morphological algorithm. For comparison with the pixel-based methods, the quantitative measurements for the indoor, outdoor, and public-database sequences are provided in Table 10 and Table 11, respectively.

The above description is merely illustrative and not restrictive. Any equivalent modification or alteration that does not depart from the spirit and scope of the present invention shall be included in the appended claims.

S11~S18‧‧‧method steps

R1‧‧‧input block

R2~R7‧‧‧blocks

FIG. 1 is a flowchart of the real-time background modeling method of the present invention.

FIG. 2 is a schematic diagram of the calculation of the 1-bit and 2-bit reference data in the real-time background modeling method of the present invention.

FIG. 3 is a schematic diagram of the calculation of the 3-bit reference data in the real-time background modeling method of the present invention.

FIG. 4(a) is an original image used in the real-time background modeling method of the present invention.

FIG. 4(b) is the image computed from FIG. 4(a) by the texture descriptor of the real-time background modeling method of the present invention.

FIG. 5(a) is an input block R1 of the real-time background modeling method of the present invention.

FIG. 5(b) is a block R2 of the real-time background modeling method of the present invention.

FIG. 5(c) is a block R3 of the real-time background modeling method of the present invention.

第5(d)圖 係為本發明之即時背景模型化方法之一1位元圖樣。 Figure 5(d) is a 1-bit pattern of the instant background modeling method of the present invention.

第5(e)圖 係為本發明之即時背景模型化方法之一區塊R4。 Figure 5(e) is a block R4 of the instant background modeling method of the present invention.

第5(f)圖 係為本發明之即時背景模型化方法之一區塊R5。 Figure 5(f) is a block R5 of the instant background modeling method of the present invention.

第5(g)圖 係為本發明之即時背景模型化方法之一區塊R6。 The fifth (g) diagram is a block R6 of the instant background modeling method of the present invention.

第5(h)圖 係為本發明之即時背景模型化方法之一區塊R7。 Figure 5(h) is a block R7 of the instant background modeling method of the present invention.

第5(i)圖 係為本發明之即時背景模型化方法之一2位元圖樣。 The 5th (i) diagram is a 2-bit pattern of the instant background modeling method of the present invention.

第6圖 係為本發明之即時背景模型化方法之資料更新程序及權重更新程序之流程圖。 Figure 6 is a flow chart of the data update procedure and the weight update procedure of the instant background modeling method of the present invention.

第7圖 係為本發明之即時背景模型化方法之移動物件偵測之流程圖。 Figure 7 is a flow chart of the moving object detection of the instant background modeling method of the present invention.

第8(a)圖 係為本發明之即時背景模型化方法之一原始影像。 Figure 8(a) is an original image of one of the instant background modeling methods of the present invention.

第8(b)圖 係為本發明之即時背景模型化方法之根據第8(a)圖藉由史道佛及葛寧森的方法所計算出之影像。 Fig. 8(b) is an image calculated by the method of Stauffer and Grimson according to Fig. 8(a) of the instant background modeling method of the present invention.

第8(c)圖 係為本發明之即時背景模型化方法之根據第8(a)圖藉由海奇拉及培地卡內的方法所計算出之影像。 Fig. 8(c) is an image calculated by the method of Heikkilä and Pietikäinen according to Fig. 8(a) of the instant background modeling method of the present invention.

第8(d)圖 係為本發明之即時背景模型化方法之根據第8(a)圖以1位元模式所計算出之影像。 Fig. 8(d) is an image calculated by the 1-bit mode according to Fig. 8(a) of the instant background modeling method of the present invention.

第9(a)圖 係為本發明之即時背景模型化方法之一原始影像。 Figure 9(a) is an original image of one of the instant background modeling methods of the present invention.

第9(b)圖 係為本發明之即時背景模型化方法之根據第9(a)圖藉由史道佛及葛寧森的方法所計算出之影像。 Fig. 9(b) is an image calculated by the method of Stauffer and Grimson according to Fig. 9(a) of the instant background modeling method of the present invention.

第9(c)圖 係為本發明之即時背景模型化方法之根據第9(a)圖藉由海奇拉及培地卡內的方法所計算出之影像。 Fig. 9(c) is an image calculated by the method of Heikkilä and Pietikäinen according to Fig. 9(a) of the instant background modeling method of the present invention.

第9(d)圖 係為本發明之即時背景模型化方法之根據第9(a)圖以1位元模式所計算出之影像。 Fig. 9(d) is an image calculated by the 1-bit mode according to Fig. 9(a) of the instant background modeling method of the present invention.

第10(a)圖 係為本發明之即時背景模型化方法之一原始影像。 Figure 10(a) is an original image of one of the instant background modeling methods of the present invention.

第10(b)圖 係為本發明之即時背景模型化方法之根據第10(a)圖藉由史道佛及葛寧森的方法所計算出之影像。 Fig. 10(b) is an image calculated by the method of Stauffer and Grimson according to Fig. 10(a) of the instant background modeling method of the present invention.

第10(c)圖 係為本發明之即時背景模型化方法之根據第10(a)圖以1位元模式所計算出之影像。 Fig. 10(c) is an image calculated by the 1-bit mode according to Fig. 10(a) of the instant background modeling method of the present invention.

第11(a)圖 係為本發明之即時背景模型化方法之一原始影像。 Figure 11(a) is an original image of one of the instant background modeling methods of the present invention.

第11(b)圖 係為本發明之即時背景模型化方法之根據第11(a)圖藉由史道佛及葛寧森的方法所計算出之影像。 Fig. 11(b) is an image calculated by the method of Stauffer and Grimson according to Fig. 11(a) of the instant background modeling method of the present invention.

第11(c)圖 係為本發明之即時背景模型化方法根據第11(a)圖所計算出之影像。 Fig. 11(c) is an image calculated by the instant background modeling method of the present invention according to Fig. 11(a).

第11(d)圖 係為本發明之即時背景模型化方法根據第11(a)圖及形態學演算法所計算出之影像。 Fig. 11(d) is an image calculated by the instant background modeling method of the present invention according to Fig. 11(a) and the morphological algorithm.

第12(a)圖 係為本發明之即時背景模型化方法之一原始影像。 Figure 12(a) is an original image of one of the instant background modeling methods of the present invention.

第12(b)圖 係為本發明之即時背景模型化方法之根據第12(a)圖以1位元模式所計算出之影像。 Fig. 12(b) is an image calculated by the 1-bit mode according to Fig. 12(a) of the instant background modeling method of the present invention.

第12(c)圖 係為本發明之即時背景模型化方法之根據第12(a)圖以1位元及2位元模式所計算出之影像。 Fig. 12(c) is an image calculated by the 1-bit and 2-bit modes according to Fig. 12(a) of the instant background modeling method of the present invention.

第12(d)圖 係為本發明之即時背景模型化方法之根據第12(a)圖以1位元及3位元模式所計算出之影像。 Fig. 12(d) is an image calculated by the 1-bit and 3-bit modes according to Fig. 12(a) of the instant background modeling method of the present invention.

第12(e)圖 係為本發明之即時背景模型化方法之根據第12(a)圖以1位元模式所計算出之連通元件結果(connected component result,CCL)之影像。 Fig. 12(e) is an image of a connected component result (CCL) calculated in a 1-bit mode according to Fig. 12(a) of the instant background modeling method of the present invention.

第12(f)圖 係為本發明之即時背景模型化方法之根據第12(a)圖以1位元及2位元模式所計算出之連通元件結果之影像。 Fig. 12(f) is an image of the connected component result calculated by the 1-bit and 2-bit modes according to Fig. 12(a) of the instant background modeling method of the present invention.

第12(g)圖 係為本發明之即時背景模型化方法之根據第12(a)圖以1位元及3位元模式所計算出之連通元件結果之影像。 Fig. 12(g) is an image of the connected component result calculated by the 1-bit and 3-bit modes according to Fig. 12(a) of the instant background modeling method of the present invention.

第13(a)圖 係為本發明之即時背景模型化方法之一原始影像。 Figure 13(a) is an original image of one of the instant background modeling methods of the present invention.

第13(b)圖 係為本發明之即時背景模型化方法之根據第13(a)圖以1位元模式所計算出之影像。 Fig. 13(b) is an image calculated by the 1-bit mode according to Fig. 13(a) of the instant background modeling method of the present invention.

第13(c)圖 係為本發明之即時背景模型化方法之根據第13(a)圖以1位元及2位元模式所計算出之影像。 Fig. 13(c) is an image calculated by the 1-bit and 2-bit modes according to Fig. 13(a) of the instant background modeling method of the present invention.

第13(d)圖 係為本發明之即時背景模型化方法之根據第13(a)圖以1位元及3位元模式所計算出之連通元件結果之影像。 Fig. 13(d) is an image of the connected component result calculated by the 1-bit and 3-bit modes according to Fig. 13(a) of the instant background modeling method of the present invention.

第13(e)圖 係為本發明之即時背景模型化方法之根據第13(a)圖以1位元模式所計算出之連通元件結果之影像。 Fig. 13(e) is an image of the connected component result calculated in the 1-bit mode according to Fig. 13(a) of the instant background modeling method of the present invention.

第13(f)圖 係為本發明之即時背景模型化方法之根據第13(a)圖以1位元及2位元模式所計算出之連通元件結果之影像。 Fig. 13(f) is an image of the connected component result calculated by the 1-bit and 2-bit modes according to Fig. 13(a) of the instant background modeling method of the present invention.

第13(g)圖 係為本發明之即時背景模型化方法之根據第13(a)圖以1位元及3位元模式所計算出之連通元件結果之影像。 Fig. 13(g) is an image of the connected component result calculated by the 1-bit and 3-bit modes according to Fig. 13(a) of the instant background modeling method of the present invention.

第14(a)圖 係為本發明之即時背景模型化方法之一原始影像。 Figure 14(a) is an original image of one of the instant background modeling methods of the present invention.

第14(b)圖 係為本發明之即時背景模型化方法之根據第14(a)圖以1位元模式所計算出之影像。 Fig. 14(b) is an image calculated by the 1-bit mode according to Fig. 14(a) of the instant background modeling method of the present invention.

第14(c)圖 係為本發明之即時背景模型化方法之根據第14(a)圖以1位元及2位元模式所計算出之影像。 Fig. 14(c) is an image calculated by the 1-bit and 2-bit modes according to Fig. 14(a) of the instant background modeling method of the present invention.

第14(d)圖 係為本發明之即時背景模型化方法之根據第14(a)圖以1位元及3位元模式所計算出之連通元件結果之影像。 Fig. 14(d) is an image of the connected component result calculated by the 1-bit and 3-bit modes according to Fig. 14(a) of the instant background modeling method of the present invention.

第14(e)圖 係為本發明之即時背景模型化方法之根據第14(a)圖以1位元模式所計算出之連通元件結果之影像。 Fig. 14(e) is an image of the connected component result calculated in the 1-bit mode according to Fig. 14(a) of the instant background modeling method of the present invention.

第14(f)圖 係為本發明之即時背景模型化方法之根據第14(a)圖以1位元及2位元模式所計算出之連通元件結果之影像。 Fig. 14(f) is an image of the connected component result calculated by the 1-bit and 2-bit modes according to Fig. 14(a) of the instant background modeling method of the present invention.

第14(g)圖 係為本發明之即時背景模型化方法之根據第14(a)圖以1位元及3位元模式所計算出之連通元件結果之影像。 Fig. 14(g) is an image of the connected component result calculated by the 1-bit and 3-bit modes according to Fig. 14(a) of the instant background modeling method of the present invention.

第14(h)圖 係為本發明之即時背景模型化方法之根據第14(a)圖以史道佛及葛寧森的方法所計算出之影像。 Fig. 14(h) is an image calculated by the method of Stauffer and Grimson according to Fig. 14(a) of the instant background modeling method of the present invention.

第14(i)圖 係為本發明之即時背景模型化方法之根據第14(a)圖以海奇拉及培地卡內的方法所計算出之影像。 Fig. 14(i) is an image calculated by the method of Heikkilä and Pietikäinen according to Fig. 14(a) of the instant background modeling method of the present invention.

第15(a)圖 係為本發明之即時背景模型化方法之一原始影像。 Figure 15(a) is an original image of one of the instant background modeling methods of the present invention.

第15(b)圖 係為本發明之即時背景模型化方法之根據第15(a)圖以1位元模式所計算出之影像。 Fig. 15(b) is an image calculated by the 1-bit mode according to Fig. 15(a) of the instant background modeling method of the present invention.

第15(c)圖 係為本發明之即時背景模型化方法之根據第15(a)圖以1位元及2位元模式所計算出之影像。 Fig. 15(c) is an image calculated by the 1-bit and 2-bit modes according to Fig. 15(a) of the instant background modeling method of the present invention.

第15(d)圖 係為本發明之即時背景模型化方法之根據第15(a)圖以1位元及3位元模式所計算出之連通元件結果之影像。 Fig. 15(d) is an image of the connected component result calculated by the 1-bit and 3-bit modes according to Fig. 15(a) of the instant background modeling method of the present invention.

第15(e)圖 係為本發明之即時背景模型化方法之根據第15(a)圖以1位元模式所計算出之連通元件結果之影像。 Fig. 15(e) is an image of the connected component result calculated in the 1-bit mode according to Fig. 15(a) of the instant background modeling method of the present invention.

第15(f)圖 係為本發明之即時背景模型化方法之根據第15(a)圖以1位元及2位元模式所計算出之連通元件結果之影像。 Fig. 15(f) is an image of the connected component result calculated by the 1-bit and 2-bit modes according to Fig. 15(a) of the instant background modeling method of the present invention.

第15(g)圖 係為本發明之即時背景模型化方法之根據第15(a)圖以1位元及3位元模式所計算出之連通元件結果之影像。 Fig. 15(g) is an image of the connected component result calculated by the 1-bit and 3-bit modes according to Fig. 15(a) of the instant background modeling method of the present invention.

第16(a)圖 係為本發明之即時背景模型化方法之一原始影像。 Figure 16(a) is an original image of one of the instant background modeling methods of the present invention.

第16(b)圖 係為本發明之即時背景模型化方法之根據第16(a)圖以1位元模式所計算出之影像。 Fig. 16(b) is an image calculated by the 1-bit mode according to Fig. 16(a) of the instant background modeling method of the present invention.

第16(c)圖 係為本發明之即時背景模型化方法之根據第16(a)圖以1位元及2位元模式所計算出之影像。 Fig. 16(c) is an image calculated by the 1-bit and 2-bit modes according to Fig. 16(a) of the instant background modeling method of the present invention.

第16(d)圖 係為本發明之即時背景模型化方法之根據第16(a)圖以1位元及3位元模式所計算出之連通元件結果之影像。 Fig. 16(d) is an image of the connected component result calculated by the 1-bit and 3-bit modes according to Fig. 16(a) of the instant background modeling method of the present invention.

第16(e)圖 係為本發明之即時背景模型化方法之根據第16(a)圖以1位元模式所計算出之連通元件結果之影像。 Fig. 16(e) is an image of the connected component result calculated in the 1-bit mode according to Fig. 16(a) of the instant background modeling method of the present invention.

第16(f)圖 係為本發明之即時背景模型化方法之根據第16(a)圖以1位元及2位元模式所計算出之連通元件結果之影像。 Fig. 16(f) is an image of the connected component result calculated by the 1-bit and 2-bit modes according to Fig. 16(a) of the instant background modeling method of the present invention.

第16(g)圖 係為本發明之即時背景模型化方法之根據第16(a)圖以1位元及3位元模式所計算出之連通元件結果之影像。 Fig. 16(g) is an image of the connected component result calculated by the 1-bit and 3-bit modes according to Fig. 16(a) of the instant background modeling method of the present invention.

第16(h)圖 係為本發明之即時背景模型化方法之根據第16(a)圖以史道佛及葛寧森的方法所計算出之影像。 Fig. 16(h) is an image calculated by the method of Stauffer and Grimson according to Fig. 16(a) of the instant background modeling method of the present invention.

第16(i)圖 係為本發明之即時背景模型化方法之根據第16(a)圖以海奇拉及培地卡內的方法所計算出之影像。 Fig. 16(i) is an image calculated by the method of Heikkilä and Pietikäinen according to Fig. 16(a) of the instant background modeling method of the present invention.

第17(a)圖 係為本發明之即時背景模型化方法之一原始影像。 Figure 17(a) is an original image of one of the instant background modeling methods of the present invention.

第17(b)圖 係為本發明之即時背景模型化方法之根據第17(a)圖以1位元模式所計算出之影像。 Fig. 17(b) is an image calculated by the 1-bit mode according to Fig. 17(a) of the instant background modeling method of the present invention.

第17(c)圖 係為本發明之即時背景模型化方法之根據第17(a)圖以1位元及2位元模式所計算出之影像。 Fig. 17(c) is an image calculated by the 1-bit and 2-bit modes according to Fig. 17(a) of the instant background modeling method of the present invention.

第17(d)圖 係為本發明之即時背景模型化方法之根據第17(a)圖以1位元及3位元模式所計算出之連通元件結果之影像。 Fig. 17(d) is an image of the connected component result calculated by the 1-bit and 3-bit modes according to Fig. 17(a) of the instant background modeling method of the present invention.

第17(e)圖 係為本發明之即時背景模型化方法之根據第17(a)圖以1位元模式所計算出之連通元件結果之影像。 Fig. 17(e) is an image of the connected component result calculated in the 1-bit mode according to Fig. 17(a) of the instant background modeling method of the present invention.

第17(f)圖 係為本發明之即時背景模型化方法之根據第17(a)圖以1位元及2位元模式所計算出之連通元件結果之影像。 Fig. 17(f) is an image of the connected component result calculated by the 1-bit and 2-bit modes according to Fig. 17(a) of the instant background modeling method of the present invention.

第17(g)圖 係為本發明之即時背景模型化方法之根據第17(a)圖以1位元及3位元模式所計算出之連通元件結果之影像。 Fig. 17(g) is an image of the connected component result calculated by the 1-bit and 3-bit modes according to Fig. 17(a) of the instant background modeling method of the present invention.

第17(h)圖 係為本發明之即時背景模型化方法之根據第17(a)圖以史道佛及葛寧森的方法所計算出之影像。 Fig. 17(h) is an image calculated by the method of Stauffer and Grimson according to Fig. 17(a) of the instant background modeling method of the present invention.

第17(i)圖 係為本發明之即時背景模型化方法之根據第17(a)圖以海奇拉及培地卡內的方法所計算出之影像。 Fig. 17(i) is an image calculated by the method of Heikkilä and Pietikäinen according to Fig. 17(a) of the instant background modeling method of the present invention.

第18(a)圖 係為本發明之即時背景模型化方法之一原始影像。 Figure 18(a) is an original image of one of the instant background modeling methods of the present invention.

第18(b)圖 係為本發明之即時背景模型化方法之根據第18(a)圖以1位元模式所計算出之影像。 Fig. 18(b) is an image calculated by the 1-bit mode according to Fig. 18(a) of the instant background modeling method of the present invention.

第18(c)圖 係為本發明之即時背景模型化方法之根據第18(a)圖以1位元及2位元模式所計算出之影像。 Fig. 18(c) is an image calculated by the 1-bit and 2-bit modes according to Fig. 18(a) of the instant background modeling method of the present invention.

第18(d)圖 係為本發明之即時背景模型化方法之根據第18(a)圖以1位元及3位元模式所計算出之連通元件結果之影像。 Fig. 18(d) is an image of the connected component result calculated by the 1-bit and 3-bit modes according to Fig. 18(a) of the instant background modeling method of the present invention.

第18(e)圖 係為本發明之即時背景模型化方法之根據第18(a)圖以1位元模式所計算出之連通元件結果之影像。 Fig. 18(e) is an image of the connected component result calculated in the 1-bit mode according to Fig. 18(a) of the instant background modeling method of the present invention.

第18(f)圖 係為本發明之即時背景模型化方法之根據第18(a)圖以1位元及2位元模式所計算出之連通元件結果之影像。 Fig. 18(f) is an image of the connected component result calculated by the 1-bit and 2-bit modes according to Fig. 18(a) of the instant background modeling method of the present invention.

第18(g)圖 係為本發明之即時背景模型化方法之根據第18(a)圖以1位元及3位元模式所計算出之連通元件結果之影像。 Fig. 18(g) is an image of the connected component result calculated by the 1-bit and 3-bit modes according to Fig. 18(a) of the instant background modeling method of the present invention.

第18(h)圖 係為本發明之即時背景模型化方法之根據第18(a)圖以史道佛及葛寧森的方法所計算出之影像。 Fig. 18(h) is an image calculated by the method of Stauffer and Grimson according to Fig. 18(a) of the instant background modeling method of the present invention.

第18(i)圖 係為本發明之即時背景模型化方法之根據第18(a)圖以海奇拉及培地卡內的方法所計算出之影像。 Fig. 18(i) is an image calculated by the method of Heikkilä and Pietikäinen according to Fig. 18(a) of the instant background modeling method of the present invention.

第19(a)圖 係為本發明之即時背景模型化方法之一原始影像。 Figure 19(a) is an original image of one of the instant background modeling methods of the present invention.

第19(b)圖 係為本發明之即時背景模型化方法之根據第19(a)圖以1位元模式所計算出之影像。 Fig. 19(b) is an image calculated by the 1-bit mode according to Fig. 19(a) of the instant background modeling method of the present invention.

第19(c)圖 係為本發明之即時背景模型化方法之根據第19(a)圖以1位元及2位元模式所計算出之影像。 Fig. 19(c) is an image calculated by the 1-bit and 2-bit modes according to Fig. 19(a) of the instant background modeling method of the present invention.

第19(d)圖 係為本發明之即時背景模型化方法之根據第19(a)圖以1位元及3位元模式所計算出之連通元件結果之影像。 Fig. 19(d) is an image of the connected component result calculated by the 1-bit and 3-bit modes according to Fig. 19(a) of the instant background modeling method of the present invention.

第19(e)圖 係為本發明之即時背景模型化方法之根據第19(a)圖以1位元模式所計算出之連通元件結果之影像。 Fig. 19(e) is an image of the connected component result calculated in the 1-bit mode according to Fig. 19(a) of the instant background modeling method of the present invention.

第19(f)圖 係為本發明之即時背景模型化方法之根據第19(a)圖以1位元及2位元模式所計算出之連通元件結果之影像。 Fig. 19(f) is an image of the connected component result calculated by the 1-bit and 2-bit modes according to Fig. 19(a) of the instant background modeling method of the present invention.

第19(g)圖 係為本發明之即時背景模型化方法之根據第19(a)圖以1位元及3位元模式所計算出之連通元件結果之影像。 Fig. 19(g) is an image of the connected component result calculated by the 1-bit and 3-bit modes according to Fig. 19(a) of the instant background modeling method of the present invention.

第19(h)圖 係為本發明之即時背景模型化方法之根據第19(a)圖以史道佛及葛寧森的方法所計算出之影像。 Fig. 19(h) is an image calculated by the method of Stauffer and Grimson according to Fig. 19(a) of the instant background modeling method of the present invention.

第19(i)圖 係為本發明之即時背景模型化方法之根據第19(a)圖以海奇拉及培地卡內的方法所計算出之影像。 Fig. 19(i) is an image calculated by the method of Heikkilä and Pietikäinen according to Fig. 19(a) of the instant background modeling method of the present invention.

第20(a)圖 係為本發明之即時背景模型化方法之一原始影像。 Figure 20(a) is an original image of the instant background modeling method of the present invention.

第20(b)圖 係為本發明之即時背景模型化方法之根據第20(a)圖以1位元模式所計算出之影像。 Fig. 20(b) is an image calculated by the one-bit mode according to Fig. 20(a) of the instant background modeling method of the present invention.

第20(c)圖 係為本發明之即時背景模型化方法之根據第20(a)圖以1位元及2位元模式所計算出之影像。 Fig. 20(c) is an image calculated by the 1-bit and 2-bit modes according to Fig. 20(a) of the instant background modeling method of the present invention.

第20(d)圖 係為本發明之即時背景模型化方法之根據第20(a)圖以1位元及3位元模式所計算出之連通元件結果之影像。 Fig. 20(d) is an image of the connected component result calculated by the 1-bit and 3-bit modes according to Fig. 20(a) of the instant background modeling method of the present invention.

第20(e)圖 係為本發明之即時背景模型化方法之根據第20(a)圖以1位元模式所計算出之連通元件結果之影像。 Fig. 20(e) is an image of the connected component result calculated in the 1-bit mode according to Fig. 20(a) of the instant background modeling method of the present invention.

第20(f)圖 係為本發明之即時背景模型化方法之根據第20(a)圖以1位元及2位元模式所計算出之連通元件結果之影像。 Fig. 20(f) is an image of the connected component result calculated by the 1-bit and 2-bit modes according to Fig. 20(a) of the instant background modeling method of the present invention.

第20(g)圖 係為本發明之即時背景模型化方法之根據第20(a)圖以1位元及3位元模式所計算出之連通元件結果之影像。 Fig. 20(g) is an image of the connected component result calculated by the 1-bit and 3-bit modes according to Fig. 20(a) of the instant background modeling method of the present invention.

第20(h)圖 係為本發明之即時背景模型化方法之根據第20(a)圖以史道佛及葛寧森的方法所計算出之影像。 Fig. 20(h) is an image calculated by the method of Stauffer and Grimson according to Fig. 20(a) of the instant background modeling method of the present invention.

第20(i)圖 係為本發明之即時背景模型化方法之根據第20(a)圖以海奇拉及培地卡內的方法所計算出之影像。 Fig. 20(i) is an image calculated by the method of Heikkilä and Pietikäinen according to Fig. 20(a) of the instant background modeling method of the present invention.

第21(a)圖 係為本發明之即時背景模型化方法之實況(ground truth)之影像。 Figure 21(a) is an image of the ground truth of the instant background modeling method of the present invention.

第21(b)圖 係為本發明之即時背景模型化方法根據第21(a)圖所計算出之二元影像。 Figure 21(b) is a binary image calculated according to the 21st (a) figure of the instant background modeling method of the present invention.

第21(c)圖 係為本發明之即時背景模型化方法根據第21(a)圖以2位元模式所計算出之二元影像。 Fig. 21(c) is a binary image calculated by the instant background modeling method of the present invention in a 2-bit mode according to Fig. 21(a).

第21(d)圖 係為本發明之即時背景模型化方法根據第21(a)圖以3位元模式所計算出之二元影像。 Fig. 21(d) is a binary image calculated by the instant background modeling method of the present invention in a 3-bit mode according to Fig. 21(a).

第22(a)圖 係為本發明之即時背景模型化方法之實況之影像。 Figure 22(a) is an image of the live state of the instant background modeling method of the present invention.

第22(b)圖 係為本發明之即時背景模型化方法根據第22(a)圖所計算出之二元影像。 Fig. 22(b) is a binary image calculated according to Fig. 22(a) of the instant background modeling method of the present invention.

第22(c)圖 係為本發明之即時背景模型化方法根據第22(a)圖以2位元模式所計算出之二元影像。 Fig. 22(c) is a binary image calculated by the instant background modeling method of the present invention in a 2-bit mode according to Fig. 22(a).

第22(d)圖 係為本發明之即時背景模型化方法根據第22(a)圖以3位元模式所計算出之二元影像。 Fig. 22(d) is a binary image calculated by the instant background modeling method of the present invention in a 3-bit mode according to Fig. 22(a).

第23(a)圖 係為本發明之即時背景模型化方法之實況之影像。 Figure 23(a) is an image of the live state of the instant background modeling method of the present invention.

第23(b)圖 係為本發明之即時背景模型化方法根據第23(a)圖所計算出之二元影像。 Fig. 23(b) is a binary image calculated according to Fig. 23(a) of the instant background modeling method of the present invention.

第23(c)圖 係為本發明之即時背景模型化方法根據第23(a)圖以2位元模式所計算出之二元影像。 Fig. 23(c) is a binary image calculated by the instant background modeling method of the present invention in a 2-bit mode according to Fig. 23(a).

第23(d)圖 係為本發明之即時背景模型化方法根據第23(a)圖以3位元模式所計算出之二元影像。 Fig. 23(d) is a binary image calculated by the instant background modeling method of the present invention in a 3-bit mode according to Fig. 23(a).

第24(a)圖 係為本發明之即時背景模型化方法根據室內序列之影像以3位元模式及形態學演算法所計算出之影像。 Fig. 24(a) is an image of the instant background modeling method of the present invention calculated by a 3-bit pattern and a morphological algorithm based on an image of an indoor sequence.

第24(b)圖 係為本發明之即時背景模型化方法根據第二室內序列之影像以3位元模式及形態學演算法所計算出之影像。 Fig. 24(b) is an image obtained by the instant background modeling method of the present invention based on the image of the second indoor sequence in a 3-bit mode and a morphological algorithm.

第24(c)圖 係為本發明之即時背景模型化方法根據凱維爾(CAVIAR)之WalkByShop1front資料庫之影像以3位元模式及形態學演算法所計算出之影像。 Figure 24(c) is an image of the instant background modeling method of the present invention calculated by the 3-bit mode and the morphological algorithm from an image of the CAVIAR WalkByShop1front database.

第24(d)圖 係為本發明之即時背景模型化方法根據凱維爾之ShopAssistant1cor資料庫之影像以3位元模式及形態學演算法所計算出之影像。 Figure 24(d) is an image of the instant background modeling method of the present invention calculated by the 3-bit mode and the morphological algorithm from an image of the CAVIAR ShopAssistant1cor database.

第24(e)圖 係為本發明之即時背景模型化方法根據凱維爾之OneStopMoveNoEnter2cor資料庫之影像以3位元模式及形態學演算法所計算出之影像。 Figure 24(e) is an image of the instant background modeling method of the present invention calculated by the 3-bit mode and the morphological algorithm from an image of the CAVIAR OneStopMoveNoEnter2cor database.


Claims (9)

一種即時背景模型化方法,適用於一監視攝像裝置,該監視攝像裝置係設置有一影像擷取模組及一處理模組,而該方法包含下列步驟:藉由該影像擷取模組擷取複數個參考影像;經由該處理模組將各該參考影像分別分割為彼此不重疊的複數個參考區塊;利用該處理模組根據各該參考影像之各該參考區塊分別計算出一單位元參考資料及一雙位元參考資料;藉由該影像擷取模組擷取一即時影像;經由該處理模組將該即時影像依據各該參考影像之切割方式分割為不重疊的複數個即時區塊;利用該處理模組根據各該即時區塊分別計算出一單位元即時資料及一雙位元即時資料;利用該處理模組將該單位元即時資料與位置對應之各該單位元參考資料進行比對,或將該雙位元即時資料與位置對應之各該雙位元參考資料進行比對,以分別產生一匹配結果;以及藉由該處理模組根據各該匹配結果選擇性執行一資料更新程序,以更新各該單位元參考資料或各該雙位元參考資料;其中各該單位元參考資料及該單位元即時資料係滿 足下列條件: 其中,THsmooth係為一預設閥值,xij係為各該參考區塊或該即時區塊之位置(i,j)之像素值,m係為各該參考區塊或該即時區塊之平均值,aij係為各該單位元參考資料或該單位元即時資料之位置(i,j)之一單位元值。 An instant background modeling method is applicable to a surveillance camera device, wherein the surveillance camera device is provided with an image capture module and a processing module, and the method comprises the following steps: capturing the plurality of images by the image capture module Each of the reference images is divided into a plurality of reference blocks that do not overlap each other through the processing module; and the processing module calculates a unit cell reference according to each of the reference blocks of each of the reference images. Data and a double-bit reference data; the image capturing module captures a real-time image; the processing module divides the real-time image into a plurality of non-overlapping instant blocks according to the cutting manner of the reference image Using the processing module to calculate a unit of real-time data and a pair of real-time data according to each of the real-time blocks; using the processing module to perform the unit-yuan real-time data and the corresponding unit-element reference data Comparing, or comparing the two-bit real-time data with each of the two-bit reference materials corresponding to the position to respectively generate a matching result; The processing module selectively executes a data update procedure according to each of the matching results to update each of the unit cell reference materials or each of the dual-bit reference materials; wherein each of the unit cell reference materials and the unit cell real-time data meets the following conditions : Wherein, TH smooth is a preset threshold, x ij is the pixel value of each reference block or the position (i, j) of the instant block, and m is each of the reference block or the instant block. The average value, a ij is the unit value of one of the unit element reference materials or the location (i, j) of the unitary metadata. 如申請專利範圍第1項所述之即時背景模型化方法,更包含下列步驟:藉由該處理模組根據各該參考影像之各該單位元參考資料分別計算出一位元轉變次數;以及經由該處理模組將各該位元轉變次數分別與一平滑閥值進行比對,若該位元轉變次數小於或等於該平滑閥值,則藉由該處理模組選擇性地將該單位元即時資料與位置對應之各該單位元參考資料進行比對,若該位元轉變次數大於該平滑閥值,則藉由該處理模組選擇性地利用該雙位元即時資料與位置對應之各該雙位元參考資料進行比對。 The method for instant background modeling according to claim 1, further comprising the steps of: calculating, by the processing module, the number of bit transitions according to each of the unit reference data of each of the reference images; and The processing module compares each of the bit transition times with a smoothing threshold, and if the number of bit transitions is less than or equal to the smoothing threshold, the processing module selectively selects the unit cell instantaneously The data is compared with each of the unit cell reference data corresponding to the location, and if the number of bit transitions is greater than the smoothing threshold, the processing module selectively utilizes the dual-bit real-time data and the location corresponding to each Double-bit reference data for comparison. 
如申請專利範圍第1項所述之即時背景模型化方法,更包含下列步驟:藉由該處理模組根據該單位元即時資料與位置對應之各該單位元參考資料分別計算出一單位元距離值,或將該雙位元即時資料與位置對應之各該雙 位元參考資料分別計算出一雙位元距離值;經由該處理模組根據各該單位元距離值或各該雙位元距離值分別與一距離閥值進行比對,以產生該匹配結果;以及若該匹配結果中,各該單位元距離值之最小值或各該雙位元距離值之最小值小於該距離閥值時,則藉由該處理模組執行一權重更新程序,而各該單位元距離值或各該雙位元距離值大於該距離閥值時,則藉由該處理模組執行該資料更新程序。 The method for instant background modeling according to claim 1 further includes the following steps: calculating, by the processing module, a unit distance according to the unit element reference data corresponding to the unit element real-time data and the location Value, or the double-bit real-time data corresponding to the location The bit reference data respectively calculates a double bit distance value; and the processing module compares each of the unit cell distance values or each of the double bit distance values with a distance threshold to generate the matching result; And if the minimum value of each unit cell distance value or the minimum value of each of the double bit distance values is less than the distance threshold, the processing module executes a weight update procedure, and each of the matching results When the unit distance value or each of the double bit distance values is greater than the distance threshold, the data update procedure is executed by the processing module. 如申請專利範圍第3項所述之即時背景模型化方法,其中該權重更新程序係包含下列步驟:藉由該處理模組更新各該單位元參考資料之一單位元權重值或各該雙位元參考資料之一雙位元權重值。 The instant background modeling method according to claim 3, wherein the weight update program comprises the following steps: updating, by the processing module, one unit weight value of each unit reference material or each of the double digits One of the meta-references is a double-bit weight value. 如申請專利範圍第1項所述之即時背景模型化方法,其中該資料更新程序更包含下列步驟:藉由該處理模組將各該單位元參考資料之一單位元權重值進行排序,或將各該雙位元參考資料之一雙位元權重值進行排序;經由該處理模組將各該單位元權重值中為最小值的該單位元參考資料取代為該單位元即時資料,或將各該雙位元權重值中為最小值的該雙位元參考資料取代為該雙位元即時資料;以及 利用該處理模組將為最小值之該單位元權重值或為最小值之該雙位元權重值替換為一初始權重值。 The instant background modeling method as described in claim 1, wherein the data update program further comprises the following steps: sorting, by using the processing module, a unit weight value of each unit reference material, or Each of the two-bit reference materials is sorted by a double-bit weight value; the unit element reference material having the smallest value among the unit weight values is replaced by the processing unit to the unit-yuan real-time data, or each The double-bit reference material having the smallest value among the double-bit weight values is replaced by the dual-bit real-time data; The processing module replaces the unit weight value of the minimum value or the double value weight value of the minimum value with an initial weight value. 如申請專利範圍第1項所述之即時背景模型化方法,其中各該雙位元參考資料及該雙位元即時資料係滿足下列條件: 其中,THsmooth係為一預設閥值,xij係為各該參考區塊或該即時區塊之位置(i,j)之像素值,m係為各該參考區塊或該即時區塊之平均值,hm係為當各該參考區塊之該單位元參考資料之單位元值為1時所分別計算出之平均數或該即時區塊之該單位元即時資料之單位元值為1時所計算出之平均數,lm係為當各該參考區塊之該單位元參考資料之單位元值為0時所分別計算出之平均數或該即時區塊之該單位元即時資料之單位元值為0時所計算出之平均數,bij係為各該雙位元參考資料或該雙位元即時資料之位置(i,j)之一雙位元值。 The instant background modeling method as described in claim 1, wherein each of the dual-bit reference materials and the dual-bit real-time data satisfy the following conditions: Wherein, TH smooth is a preset threshold, x ij is the pixel value of each reference block or the position (i, j) of the instant block, and m is each of the reference block or the instant block. 
The average value, hm is the average value calculated when the unit value of the unit reference material of each reference block is 1 or the unit value of the real-time data of the real-time block is 1 The average calculated by the time is lm is the average calculated separately when the unit value of the unit reference data of each reference block is 0 or the unit of the real-time data of the real-time block The average calculated by the value of 0 is b ij is the double-bit value of each of the two-bit reference data or the position (i, j) of the dual-bit real-time data. 如申請專利範圍第1項所述之即時背景模型化方法,更包含下列步驟:藉由該處理模組更根據各該單位元參考資料及各該 雙位元參考資料分別計算出一三位元參考資料,並根據該單位元即時資料及該雙位元即時資料計算出一三位元即時資料;以及利用該處理模組將該三位元即時資料與位置對應之各該三位元參考資料進行比對,以分別產生該匹配結果,並根據各該匹配結果選擇性地更新各該三位元參考資料。 The instant background modeling method as described in claim 1 further includes the following steps: by the processing module, according to each of the unit reference materials and each The two-bit reference data is used to calculate a three-digit reference data, and one-third real-time data is calculated based on the real-time data of the unit and the real-time data of the two-digit real-time data; and the three-dimensional instant using the processing module The data is compared with each of the three-bit reference materials corresponding to the location to respectively generate the matching result, and each of the three-bit reference materials is selectively updated according to each matching result. 如申請專利範圍第7項所述之即時背景模型化方法,其中各該三位元參考資料及該三位元即時資料係滿足下列條件: 其中,THsmooth係為一預設閥值,xij係為各該參考區塊或該即時區塊之位置(i,j)之像素值,m係為各該參考區塊或該即時區塊之平均值,hm係為當各該參考區塊之該單位元參考資料之單位元值為1時所分別計算出之平均數或該即時區塊之該單位元即時資料之單位元值為1時所計算出之平均數,lm係為當各該參考區塊之該單位元參考資料之單 位元值為0時所分別計算出之平均數或該即時區塊之該單位元即時資料之單位元值為0時所計算出之平均數,hhm係為當各該參考區塊之該雙位元參考資料之雙位元值為11時所分別計算出之平均數或該即時區塊之該雙位元即時資料之雙位元值為11時所計算出之平均數,hlm係為當各該參考區塊之該雙位元參考資料之雙位元值為10時所分別計算出之平均數或該即時區塊之該雙位元即時資料之雙位元值為10時所計算出之平均數,lhm係為當各該參考區塊之該雙位元參考資料之雙位元值為01時所分別計算出之平均數或該即時區塊之該雙位元即時資料之雙位元值為01時所計算出之平均數,llm係為當各該參考區塊之該雙位元參考資料之雙位元值為00時所分別計算出之平均數或該即時區塊之該雙位元即時資料之雙位元值為00時所計算出之平均數,cij係為各該三位元參考資料或該三位元即時資料之位置(i,j)之一三位元值。 For example, the instant background modeling method described in claim 7 wherein each of the three-dimensional reference material and the three-dimensional instant data satisfy the following conditions: Wherein, TH smooth is a preset threshold, x ij is the pixel value of each reference block or the position (i, j) of the instant block, and m is each of the reference block or the instant block. The average value, hm is the average value calculated when the unit value of the unit reference material of each reference block is 1 or the unit value of the real-time data of the real-time block is 1 The average calculated by the time is lm is the average calculated separately when the unit value of the unit reference data of each reference block is 0 or the unit of the real-time data of the real-time block The average number calculated when the value is 0, hhm is the average calculated separately when the double-bit value of the double-bit reference data of each reference block is 11, or the real-time block The average of the double-bit real-time data when the double-bit value is 11, and the hlm is the average calculated when the double-bit value of the double-bit reference data of each reference block is 10. 
The number or the average of the double-bit real-time data of the instant block when the double-bit value is 10, the lhm is The average of the two-bit reference data of the reference block when the double-bit value is 01 or the double-bit value of the dual-bit real-time data of the real-time block is calculated as 01. The average number of llm is the average of the two-bit real-time data of the real-time block when the double-bit value of the double-bit reference data of each reference block is 00 The average value calculated when the value is 00, and c ij is one of the three-bit values of the three-bit reference material or the position (i, j) of the three-dimensional real-time data. 如申請專利範圍第7項所述之即時背景模型化方法,更包含下列步驟:藉由該處理模組根據該三位元即時資料與位置對應之各該三位元參考資料進行比對時,更分別計算出一三位元距離值,並將各該三位元距離值分別與一距離閥值進行比對,以產生該匹配結果;以及 若該匹配結果中,各該三位元距離值之最小值小於該距離閥值時,則藉由該處理模組執行一權重更新程序,而各該三位元距離值大於該距離閥值時,則藉由該處理模組執行該資料更新程序。 The instant background modeling method as described in claim 7 further includes the following steps: when the processing module compares the three-dimensional real-time data with the three-dimensional reference data corresponding to the location, Calculating a three-bit distance value separately, and comparing each of the three-bit distance values with a distance threshold to generate the matching result; If the minimum value of each of the three-bit distance values is less than the distance threshold, the processing module executes a weight update procedure, and each of the three-bit distance values is greater than the distance threshold. And the data update program is executed by the processing module.
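The bit-pattern descriptors in the claims above are defined only up to equations that are not reproduced in this text (the conditions giving the 1-bit value a_ij, the 2-bit value b_ij, and the 3-bit value c_ij are missing). The sketch below is therefore only one plausible reading of claims 1 and 6: it derives a 1-bit pattern from the block mean m, refines it into a 2-bit pattern using the hm and lm means, and matches an incoming block against stored reference patterns with a normalized Hamming distance. The threshold values, the distance measure, and the exact comparison rules are assumptions for illustration, not the patent's definitions (in particular, the role of TH_smooth is omitted here).

import numpy as np

DIST_TH = 0.25  # assumed matching threshold on the normalized distance

def one_bit_pattern(block):
    """1-bit descriptor: compare every pixel against the block mean m.
    The patent's exact rule also involves TH_smooth; the simple
    'at or above the mean' test below is an illustrative stand-in."""
    return (block >= block.mean()).astype(np.uint8)

def two_bit_pattern(block, a):
    """2-bit descriptor: pixels whose 1-bit value is 1 (resp. 0) are further
    compared against hm (mean of the '1' pixels) and lm (mean of the '0'
    pixels), following the hm/lm definitions quoted in claim 6."""
    m = block.mean()
    hm = block[a == 1].mean() if np.any(a == 1) else m
    lm = block[a == 0].mean() if np.any(a == 0) else m
    high = 2 + (block >= hm).astype(np.uint8)   # codes 10 and 11
    low = (block >= lm).astype(np.uint8)        # codes 00 and 01
    return np.where(a == 1, high, low).astype(np.uint8)

def block_matches(instant_block, reference_patterns):
    """Return True when the instant block's 2-bit pattern is close enough to
    any stored reference pattern (weight update); otherwise the data-update
    procedure would replace the lowest-weight reference with the new data."""
    a = one_bit_pattern(instant_block)
    b = two_bit_pattern(instant_block, a)
    distances = [np.mean(b != ref) for ref in reference_patterns]
    return min(distances) < DIST_TH

In this reading the 2-bit code simply appends one refinement bit to the 1-bit code, which is consistent with the hierarchy of hm/lm and hhm/hlm/lhm/llm means listed in claims 6 and 8; a 3-bit pattern would repeat the same refinement once more against those four means.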
TW101128853A 2012-08-09 2012-08-09 Real-time background modeling method TWI476703B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW101128853A TWI476703B (en) 2012-08-09 2012-08-09 Real-time background modeling method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW101128853A TWI476703B (en) 2012-08-09 2012-08-09 Real-time background modeling method

Publications (2)

Publication Number Publication Date
TW201407495A TW201407495A (en) 2014-02-16
TWI476703B true TWI476703B (en) 2015-03-11

Family

ID=50550520

Family Applications (1)

Application Number Title Priority Date Filing Date
TW101128853A TWI476703B (en) 2012-08-09 2012-08-09 Real-time background modeling method

Country Status (1)

Country Link
TW (1) TWI476703B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200632785A (en) * 2005-03-15 2006-09-16 Ind Tech Res Inst Foreground extraction approach by using color and local structure information
TW200743393A (en) * 2006-05-04 2007-11-16 Univ Nat Chiao Tung Method of real-time hierarchical background reconstruction and foreground detection
TW200820065A (en) * 2006-10-30 2008-05-01 Ind Tech Res Inst Method and system for object detection in an image plane
US20080100704A1 (en) * 2000-10-24 2008-05-01 Objectvideo, Inc. Video surveillance system employing video primitives

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080100704A1 (en) * 2000-10-24 2008-05-01 Objectvideo, Inc. Video surveillance system employing video primitives
TW200632785A (en) * 2005-03-15 2006-09-16 Ind Tech Res Inst Foreground extraction approach by using color and local structure information
TW200743393A (en) * 2006-05-04 2007-11-16 Univ Nat Chiao Tung Method of real-time hierarchical background reconstruction and foreground detection
TW200820065A (en) * 2006-10-30 2008-05-01 Ind Tech Res Inst Method and system for object detection in an image plane

Also Published As

Publication number Publication date
TW201407495A (en) 2014-02-16

Similar Documents

Publication Publication Date Title
Tabernik et al. Deep learning for large-scale traffic-sign detection and recognition
WO2022002150A1 (en) Method and device for constructing visual point cloud map
CN106815859B (en) Target tracking algorism based on dimension self-adaption correlation filtering and Feature Points Matching
CN109410168B (en) Modeling method of convolutional neural network for determining sub-tile classes in an image
US9619733B2 (en) Method for generating a hierarchical structured pattern based descriptor and method and device for recognizing object using the same
US10824910B2 (en) Image processing method, non-transitory computer readable storage medium and image processing system
CN109711416B (en) Target identification method and device, computer equipment and storage medium
Lee et al. Place recognition using straight lines for vision-based SLAM
CN109145766A (en) Model training method, device, recognition methods, electronic equipment and storage medium
CN110827304B (en) Traditional Chinese medicine tongue image positioning method and system based on deep convolution network and level set method
CN106204658A (en) Moving image tracking and device
CN109685045A (en) A kind of Moving Targets Based on Video Streams tracking and system
CN109544592A (en) For the mobile moving object detection algorithm of camera
CN109902576B (en) Training method and application of head and shoulder image classifier
CN104123554A (en) SIFT image characteristic extraction method based on MMTD
Wang et al. AutoScaler: Scale-attention networks for visual correspondence
CN109271848A (en) A kind of method for detecting human face and human face detection device, storage medium
CN114863464B (en) Second-order identification method for PID drawing picture information
Fang et al. Background subtraction based on random superpixels under multiple scales for video analytics
CN107948586A (en) Trans-regional moving target detecting method and device based on video-splicing
CN114332942A (en) Night infrared pedestrian detection method and system based on improved YOLOv3
WO2022120996A1 (en) Visual position recognition method and apparatus, and computer device and readable storage medium
Choi et al. Real-time vanishing point detection using the Local Dominant Orientation Signature
Meng et al. Counting with adaptive auxiliary learning
Tan et al. Local context attention for salient object segmentation

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees