201209759

DESCRIPTION OF THE INVENTION

[Technical Field]

The present invention is an image-based flame detection method, characterized in that image processing is used for real-time flame detection. The detection result is shown as a real-time fire warning on an ordinary display, and the warning is issued through an alarm device or transmitted to the user's receiving device.

[Prior Art]
In the prior art, conventional flame detection techniques cannot be widely deployed because of the limitations of their detection principles (in space or in distance) together with their high cost, so a fire cannot be detected and reported early in its development. Image-based flame detection has been studied, but because a flame has few image features, and those features cannot effectively distinguish a flame from ordinary image content, the resulting recognition rate is low or the false detection rate is too high; examples include flame growth-rate analysis and difference-image analysis, neither of which can effectively detect and report a fire in real time at its early stage. R.O.C. Patent Announcement No. 26娜, "Method and system for detecting fire generation by image capture," judges flames by color, brightness, and saturation, but that method relies only on preset threshold values as its decision criteria. Under differing environments such fixed criteria are inadequate, so the recognition rate easily falls and the false alarm rate rises. It can thus be seen that the conventional methods described above still leave shortcomings in their design and urgently need improvement.

In view of the drawbacks derived from the conventional techniques described above, the applicant set out to improve and innovate upon them, and after years of dedicated research finally succeeded in developing the image-based flame detection method of the present invention.

[Summary of the Invention]

The main object of the present invention is to detect flames in real time by image analysis, so that a warning can be issued early even while the fire is still small.

To achieve the above object, the image-based flame detection method first installs image capture devices at the sites to be monitored, for example tunnels, stations, power plants, factories, and warehouses, together with a work platform on which the method is implemented. On that platform, digital image processing techniques together with the innovative flame-feature analysis of this invention determine the site and position at which a flame occurs, and the user receives an accurate fire warning through the alarm. The method captures color images with ordinary image capture equipment and then applies techniques such as color-space conversion and the discrete wavelet transform to the flame's image features.
^Transformation, discrete wavelet transform and other techniques' and according to the shadow of the flame = ΓΓ through the foreground separation and background update method to obtain the moving object to the spatial wavelet analysis, and then further 摅 unless the fire pattern = ^ = system of moving objects will be transmitted Brake life 仏 仏 / J ·, - '...' ', 'the moving object of cultural characteristics' of the moving object, the last pass, the non-fire Ν dimension texture change characteristics to determine whether there is a temporary noise generated by The warning decision-making unit is more suitable for the fire-fighting method. When compared with the fine-grained technique, the type, = light-filled heat-sensing mode "has a limit on the two, which effectively improves the detection rate and reduces the false positive rate." Like the sub-method, the traditional method of image wealth or vertical domain image branching is to analyze the image in the radial direction of the object, so it is a good identification of the area; The characteristics of the external expansion, as well as the [Embodiment] Ϊ Ϊ Ϊ Ϊ — — — — — — — — — — — — 为本 为本 为本 为本 四 四 四 四 四 四 四 四 四 流程 流程 流程 流程 流程 流程 流程 流程 流程 流程 流程 流程 流程After obtaining a digital image 9; The digital image of the pickup is subjected to foreground separation 10 and background update u. The foreground separation (1) side 201209759 method finds the difference between the target A input image and the reference image, and if the difference is greater than the critical value, it is the moving area, and then moves. The region can transform the moving region into individual objects through the processing of intrusion, expansion and binary connected images; the background updating method uses a mixed Gaussian model to simulate complex scenes, for each time at different times. 
3. The image obtained in step 2 undergoes color analysis 12. In terms of the RGB color space, a flame's red channel (R) is usually greater than its green channel (G), and the green channel greater than the blue channel (B), so the red tones of a flame in an image are highly saturated. These properties can be expressed in the RGB and YCbCr color spaces to delimit the region of flame colors, defined as follows:
PfgR > PbgR    (1)
PfgCr > PbgCb    (2)

In formula (1) above, Pfg and Pbg denote the values of the foreground and background respectively, and R denotes the R component of the RGB color space; in formula (2), Cb and Cr denote the Cb and Cr components of the YCbCr color space.
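A minimal sketch of this color test follows. Two points are assumptions rather than statements of the patent: the RGB-to-YCbCr conversion uses the common BT.601 full-range coefficients (the text does not specify a variant), and formula (2) is taken exactly as printed, comparing the foreground pixel's Cr against the background pixel's Cb. The function names are illustrative.

```python
# Sketch of the color-analysis test of step 3. The RGB -> YCbCr
# conversion uses the common BT.601 full-range formulas; treat the
# coefficients as an assumption, since the text does not specify them.

def rgb_to_cbcr(r, g, b):
    """Return (Cb, Cr) for an 8-bit RGB pixel (BT.601 full range)."""
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def is_flame_colored(fg_rgb, bg_rgb):
    """Apply the ordering R > G > B described in the text, plus the
    foreground/background comparisons of formulas (1) and (2)."""
    fr, fg_, fb = fg_rgb
    br, bg_, bb = bg_rgb
    if not (fr > fg_ > fb):        # flame pixels are red-dominant
        return False
    _, f_cr = rgb_to_cbcr(fr, fg_, fb)
    b_cb, _ = rgb_to_cbcr(br, bg_, bb)
    # Formula (1): foreground R exceeds background R.
    # Formula (2), as printed: foreground Cr exceeds background Cb.
    return fr > br and f_cr > b_cb

print(is_flame_colored((230, 120, 30), (60, 60, 60)))   # True
print(is_flame_colored((40, 80, 200), (60, 60, 60)))    # False (blue-dominant)
```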
4. A spatial wavelet-transform analysis separates out the high-frequency components of the image; these high-frequency components can be regarded as the strength of the image's texture variation. Figure 2 shows a schematic of the single-level filter bank of the two-dimensional wavelet transform of the present invention. After the single-level filtering and coefficient wavelet transform, four sub-images are produced: LL, LH, HL, and HH. LH represents the horizontal high-frequency component (the strength of horizontal contours), HL represents the vertical high-frequency component (the strength of vertical contours), and HH represents the diagonal high-frequency component (the strength of diagonal contours). To compute the high-frequency content of all texture variations, formula (3) below computes the high-frequency energy within a region. For a flame-like moving object, the contour varies slightly in space, and formula (4) evaluates this variation:

ωn(m,n) = |LHn(m,n)|^2 + |HLn(m,n)|^2 + |HHn(m,n)|^2    (3)

ω̄ = (1 / size_mov) Σ(k,l)∈mov ωn(k,l)    (4)

In the formulas above, ωn is computed from the two-dimensional wavelet transform applied to the red component of the image, mov is the region of the moving object, and size_mov is the size of that region.
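Formulas (3) and (4) can be sketched concretely as follows. Figure 2 only shows a generic single-level filter bank, so the Haar filters below are an assumed concrete choice, not the patent's stated filters; the function names are illustrative.

```python
# Sketch of formulas (3)-(4): a single-level 2D Haar transform computed
# on non-overlapping 2x2 blocks. For each block, LL/LH/HL/HH are the
# usual averages and differences, and the high-frequency energy is
# w = |LH|^2 + |HL|^2 + |HH|^2 (formula (3)), averaged over the
# moving-object region (formula (4)).

def haar_block(a, b, c, d):
    """Single-level Haar coefficients of a 2x2 block [[a, b], [c, d]]."""
    ll = (a + b + c + d) / 2.0
    lh = (a + b - c - d) / 2.0   # horizontal detail
    hl = (a - b + c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def mean_hf_energy(region):
    """Average high-frequency energy over a region (formula (4))."""
    total, count = 0.0, 0
    for i in range(0, len(region) - 1, 2):
        for j in range(0, len(region[0]) - 1, 2):
            _, lh, hl, hh = haar_block(region[i][j],     region[i][j + 1],
                                       region[i + 1][j], region[i + 1][j + 1])
            total += lh * lh + hl * hl + hh * hh   # formula (3)
            count += 1
    return total / count

flat     = [[50, 50], [50, 50]]     # uniform region: no texture
textured = [[200, 10], [10, 200]]   # strong diagonal variation
print(mean_hf_energy(flat))       # 0.0
print(mean_hf_energy(textured))   # much higher energy for textured input
```

A flickering flame region yields persistently high values of this energy, while a rigid moving object (for example a pedestrian) yields low, stable values, which is the discriminating property the step relies on.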
5. The spatial frequency band filtering 14 method analyzes the texture of the flame image further, extracting its features and classifying the region information with a back-propagation neural network. Figure 3 illustrates the region-information extraction of the spatial frequency band filtering of the present invention: starting from the center of the moving-object region, the variation along the path to the boundary is recorded at fixed angles, giving the one-dimensional information of the moving object at each angle. Each moving object is divided into N equal angles about its center, so each object has N pieces of one-dimensional information describing the variation along paths at different angles. The information at each angle is processed with a one-dimensional wavelet transform to compute the frequency and magnitude of its changes; a schematic of the filter bank of the one-dimensional wavelet transform is shown in Figure 4. Formula (5) computes the complexity P of each path: a smaller P indicates that the variation along the path is gentle, whereas a larger P indicates intense variation. Because every image is analyzed at N angles, there are N complexity values. These N values are classified with a back-propagation neural network (BPNN) having N inputs, M hidden nodes, and two outputs, whose architecture is shown in Figure 5; the N input values are the complexities at the N different angles, and the two output values indicate, after classification, the degree of resemblance to fire and to non-fire.
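The radial sampling and per-path complexity described above can be sketched as follows. Formula (5) is not reproduced in the text, so the sum of absolute one-step differences below stands in for the 1D-wavelet-based complexity measure; the function names and parameters are illustrative.

```python
import math

# Sketch of the radial region-information extraction of step 5: pixel
# values are sampled along N equally spaced angles from the object
# center outward, and each 1D path is reduced to a "complexity" score.
# A flame boundary yields jagged paths (high score); a rigid object
# yields smooth ones (low score).

def radial_path(image, cx, cy, angle, length):
    """Sample `length` pixels from (cx, cy) outward at `angle` radians."""
    path = []
    for r in range(length):
        x = min(max(int(round(cx + r * math.cos(angle))), 0), len(image[0]) - 1)
        y = min(max(int(round(cy + r * math.sin(angle))), 0), len(image) - 1)
        path.append(image[y][x])
    return path

def path_complexity(path):
    """Stand-in for formula (5): total absolute one-step variation."""
    return sum(abs(path[i + 1] - path[i]) for i in range(len(path) - 1))

def radial_complexities(image, cx, cy, n_angles, length):
    """The N values that would feed the N-input neural classifier."""
    return [path_complexity(radial_path(image, cx, cy,
                                        2 * math.pi * k / n_angles, length))
            for k in range(n_angles)]

# 5x5 toy object: a checkerboard pattern gives jagged radial paths.
img = [[((i + j) % 2) * 100 for j in range(5)] for i in range(5)]
scores = radial_complexities(img, 2, 2, 8, 3)
print(len(scores))   # 8 complexity values, one per angle
```

In the patent's pipeline these N scores form the input vector of the back-propagation network, whose two outputs score fire-likeness versus non-fire-likeness.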
6. Temporal analysis is then used to effectively lower the false detection rate by a moving-average approach: the number of frames per second that pass the color and spatial analyses is counted to compute the Temporal Analysis Ratio (TAR), as in formula (7):

TAR = Size of CA / Sampling Rate    (7)

Here CA denotes the cumulated alarms, i.e., the accumulated number of warnings. By setting a threshold on TAR, likely false warnings can be effectively filtered out.
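Formula (7) can be sketched as follows; the threshold value is illustrative, since the text leaves it configurable.

```python
# Sketch of the temporal analysis of step 6 (formula (7)):
# TAR = size_of(CA) / sampling_rate, where CA is the number of frames
# in the last second that passed the color and spatial analyses. A
# detection is kept only if TAR exceeds a threshold, which suppresses
# warnings raised by a brief, isolated frame.

TAR_THRESHOLD = 0.5   # assumed value; the patent leaves it configurable

def temporal_analysis_ratio(per_frame_alarms, sampling_rate):
    """per_frame_alarms: booleans for the frames of the last second."""
    ca = sum(1 for a in per_frame_alarms if a)   # cumulated alarms (CA)
    return ca / sampling_rate

# 10 fps: a single noisy frame vs. a persistent flame candidate.
noise = [False] * 9 + [True]
flame = [True] * 8 + [False, True]
print(temporal_analysis_ratio(noise, 10) > TAR_THRESHOLD)   # False
print(temporal_analysis_ratio(flame, 10) > TAR_THRESHOLD)   # True
```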
7. A cumulative index is designed for the flame as the basis of the warning decision: whenever a moving object is identified as a flame, the value of the index increases, and otherwise it decreases; when the accumulated value exceeds a threshold, the moving object is judged to be a flame and a warning is issued.

To summarize the steps of the present invention: the user first obtains a digital image 9 through an image capture device; moving objects are obtained by the foreground separation 10 and background update 11 methods; color analysis 12 then filters out moving objects whose colors are not flame colors, so that the flame-colored moving objects are passed to spatial wavelet analysis 13, which further filters out moving objects without a flame's texture variation; the remaining moving objects undergo spatial frequency band filtering 14, which removes objects lacking the flame's N-dimensional texture-change characteristics; finally, temporal analysis 15 filters out transient noise, and the warning decision unit judges whether a flame is currently present so as to issue an alarm.
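The cumulative index of step 7 can be sketched as follows. The step sizes and threshold are illustrative assumptions; the patent only states the increase, decrease, and threshold behavior.

```python
# Sketch of the cumulative index of step 7: the index rises each time a
# moving object is classified as flame and falls otherwise; an alarm is
# raised once it exceeds a threshold. The up/down step sizes and the
# threshold below are illustrative assumptions.

class FlameAccumulator:
    def __init__(self, up=2, down=1, threshold=6):
        self.value = 0
        self.up, self.down, self.threshold = up, down, threshold

    def observe(self, classified_as_flame):
        """Update the index with one frame's classification result."""
        if classified_as_flame:
            self.value += self.up
        else:
            self.value = max(0, self.value - self.down)
        return self.value > self.threshold   # True => issue alarm

acc = FlameAccumulator()
results = [acc.observe(c) for c in [True, True, False, True, True]]
print(results)   # [False, False, False, False, True]
```

Requiring the index to cross a threshold before alarming means a single misclassified frame cannot trigger a warning, which complements the TAR filtering of step 6.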
Local Moon ~ Image Fire: ^ Detection method, which allows users of the traditional flame detection method to go to the image flame through the method _ situation, the space limit is even in the case of small fire It is possible to issue alarms and minimize the chances of a decision by using the method and test of the Linhupu. It provides users with a robust and stable fire detection technology. [Simplified description of the drawings] Please refer to the detailed system of the present invention and its related points, and the contents of the technology and the purpose of the present invention can be further understood. The related drawings are as follows: Figure 1 is an image flame of the present invention. FIG. 2 is a schematic diagram of a single-layer filter group for two-dimensional wavelet transform according to the present invention; FIG. 3 is a schematic diagram of regional information acquisition for spatial frequency band filtering according to the present invention; Schematic diagram of the converted filter set; Figure 5 is a diagram of the inverse transfer type neural network architecture of the present invention. [Key component symbol description] 9 Digital image 10 Foreground separation 11 Background update 12 Color analysis 13 Spatial wavelet analysis 14 Space band filtering 15 Temporal analysis 16 Warning decision