TW200834476A - A fire image detection method - Google Patents

A fire image detection method

Info

Publication number
TW200834476A
TW200834476A (Application TW96105521A)
Authority
TW
Taiwan
Prior art keywords
fire
sample
pixel
image
distance
Prior art date
Application number
TW96105521A
Other languages
Chinese (zh)
Other versions
TWI313848B (en)
Inventor
Chung-Ning Huang
Shih-Chieh Chen
Original Assignee
Ind Tech Res Inst
Priority date
Filing date
Publication date
Application filed by Ind Tech Res Inst
Priority to TW96105521A
Publication of TW200834476A
Application granted
Publication of TWI313848B

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Fire-Detection Mechanisms (AREA)

Abstract

A fire image detection method compares a fire image sample with a series of image frames captured in real time to decide whether a fire has broken out in the captured scene. The method includes the steps of converting the fire image sample into a specific color space, calculating a central point of the fire image sample, deciding a threshold value by means of cluster analysis and the Mahalanobis distance, calculating, again by the Mahalanobis distance, a pixel distance between the central point and each pixel of one of the captured image frames, and deciding that a fire has broken out in the scene if the pixel distance is smaller than the threshold value.

Description

IX. Description of the Invention

[Technical Field]

The present invention relates to a fire image detection method, and more particularly to a method that performs a similarity analysis between image pixels and a fire image sample to measure how closely each pixel resembles the sample, thereby determining whether a fire appears in the image.

[Prior Art]

Among techniques that capture an image and then judge whether a fire is present, a U.S. patent entitled "Fire detection system" first uses an infrared sensor to monitor the illuminated area; only when the sensed energy exceeds a set threshold is the image captured by a camera, after which operations such as bright-area analysis, edge detection and image subtraction are applied to decide whether a real fire exists. Under this architecture, fire detection requires an additional sensor before the conclusion that fire exists in the monitored range can be reached accurately.

In addition, the paper "A new image-based real-time flame detection method using color analysis" by Wen-Bing Horng, Jian-Wen Peng and Chih-Yuan Chen (IEEE, March 19-22, 2005) divides the detection process into a static part and a dynamic part. In the static part, fire samples are collected in advance in the red, green, blue (RGB) color space.
After conversion to the HSI color space (hue, saturation, intensity), the ranges of the HSI values of fire and smoke in the samples are tabulated, and these statistics are then used in the flow that recognizes fire and smoke. In the dynamic part, an image-difference method removes background objects that resemble fire and smoke, and the result is used to judge whether fire or smoke is present.

Next, "An Early Fire-Detection Method Based on Image Processing", published by T. H. Chen, P. H. Wu and Y. C. Chiou at the 2004 IEEE International Conference on Image Processing (ICIP, October 24-27, 2004), likewise divides the processing into a static part and a dynamic part. The static part classifies the characteristics of fire in the RGB color space (such as R > G > B) and performs a preliminary classification of fire by hue; in the dynamic part, image differencing removes backgrounds similar to fire.

In their static parts, the papers above all rely on a large collection of fire sources, tabulate the fire features (such as hue) into statistical values or ranges, and must determine several threshold values and select the best ones before the fire regions can be preliminarily classified. Such approaches are demanding in sample collection, and their computation is also complicated.

SUMMARY OF THE INVENTION

The present invention provides a fire image detection method that uses a single fire image sample as its basis. A captured image frame is compared with the fire image sample under the classification scheme of the invention, so that whether a pixel of the frame is a fire pixel can be judged easily, without an additional sensor and without analyzing a large number of fire image samples, thereby solving the problems described above.
The fire image detection method of the present invention compares a fire image sample with at least one captured image frame to judge whether the captured frame contains a fire pixel. The method comprises:

converting each sample pixel of the fire image sample into a color model (color space), the color model having a first coordinate and a second coordinate, each sample pixel having a first coordinate value and a second coordinate value corresponding to those coordinates;

calculating a center point of the first and second coordinate values of the sample pixels;

calculating the Mahalanobis distance between each sample pixel of the fire image sample and the center point to obtain a plurality of sample distance values;

defining a threshold value from the sample distance values;

taking one of the image pixels of the captured image frame as a judgment pixel;

calculating the Mahalanobis distance between the judgment pixel and the center point to obtain a pixel distance; and

comparing the pixel distance with the threshold value, and judging that the captured frame contains a fire pixel when the pixel distance is smaller than the threshold value.

By the foregoing method, a single fire image sample and a captured image frame suffice to decide whether fire pixels are present, achieving the object stated above.

To provide a further understanding of the object, implementation and functions of the present invention, a detailed description is given below.

[Embodiments]

Please refer to Fig. 1, a flow chart of the fire image detection method of the invention. The method compares a fire image sample 30 (see Fig. 2) with at least one captured image frame 40 to judge whether the captured frame contains a fire pixel. The method includes:

Step S12: converting each sample pixel of the fire image sample 30 into a color model, the color model having a first coordinate and a second coordinate, each sample pixel having a first coordinate value and a second coordinate value;

Step S14: calculating a center point of the first and second coordinate values of the sample pixels;

Step S16: calculating the Mahalanobis distance between each sample pixel and the center point to obtain a plurality of sample distance values;

Step S18: defining a threshold value from the sample distance values;

Step S20: taking one of the image pixels of the captured image frame 40 as a judgment pixel;

Step S22: calculating the Mahalanobis distance between the judgment pixel and the center point to obtain a pixel distance; and

Step S24: comparing the pixel distance with the threshold value, and judging that the captured frame contains a fire pixel when the pixel distance is smaller than the threshold value.

The color model of step S12 may be, for example, RGB, YIQ, CMY (cyan, magenta, yellow), HSV or HSI. The invention takes the luminance/chrominance color model YIQ as an example, in which the first coordinate (Y) is the luminance coordinate, the second coordinate (I) is the in-phase chrominance coordinate, and the Q coordinate is the quadrature chrominance coordinate. YIQ was defined by the United States National Television System Committee (NTSC). The following embodiments use the luminance coordinate and the in-phase chrominance coordinate; among the various color models, this pair makes the threshold easy to set when the invention is practiced and yields quite accurate judgments.

The color models listed above are briefly defined as follows. The CMY (cyan, magenta, yellow) model is the one most commonly used in the color-printing industry; in the color cube these three colors are the complements of red, green and blue, and the CMY model expresses all colors through them. The HSV (hue, saturation, value) model corresponds to the way painters mix colors and matches human visual perception, making it suitable for human-eye discrimination: H defines the wavelength of the color, called hue; S defines the depth of the color, called saturation; and V defines the lightness or darkness of the color, called value (brightness), usually expressed as a percentage.
Next, when converting sample pixels to the luminance/chrominance color model, the most common approach is to convert from the red, green, blue (RGB) color model to the YIQ color model, so the conversion formula is listed below:

    Y     [ 0.299      0.587      0.114    ] [ R ]
    I  =  [ 0.595716  -0.274453  -0.321263 ] [ G ]
    Q     [ 0.211456  -0.522591   0.311135 ] [ B ]

From this formula it can be seen that the YIQ color model is highly correlated with the RGB color model; the difference is that the Y, I and Q coordinates represent luminance, the in-phase chrominance component and the quadrature chrominance component respectively, while R, G and B represent the red, green and blue coordinates.

After conversion, every sample pixel has a first coordinate value and a second coordinate value corresponding to the Y and I coordinates, namely a luminance value and an in-phase chrominance value. Each sample pixel also has a quadrature chrominance value on the Q coordinate, but since this embodiment does not take that coordinate as an example, it is not described further.

Next, step S14 calculates the center point of the first and second coordinate values of the sample pixels. The arithmetic mean of the first coordinate values and of the second coordinate values is computed separately; that is, all first coordinate values are summed and divided by the number of sample pixels, and all second coordinate values are summed and divided by the number of sample pixels, which yields the coordinate values of the center point.
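As a concrete illustration of the conversion formula and the center-point computation of step S14, the following is a minimal numpy sketch; the function names are illustrative and not part of the patent.

```python
import numpy as np

# RGB -> YIQ transform matrix as given in the description (NTSC form).
RGB_TO_YIQ = np.array([
    [0.299,     0.587,     0.114],
    [0.595716, -0.274453, -0.321263],
    [0.211456, -0.522591,  0.311135],
])

def rgb_to_yi(pixels_rgb):
    """Convert an (N, 3) array of RGB sample pixels to their (Y, I) pairs.

    Only the luminance (Y) and in-phase chrominance (I) coordinates are
    kept, matching the embodiment; the Q coordinate is discarded.
    """
    yiq = np.asarray(pixels_rgb, dtype=float) @ RGB_TO_YIQ.T  # (N, 3)
    return yiq[:, :2]                                         # (N, 2)

def center_point(samples_yi):
    """Step S14: arithmetic mean of the Y and I values of all sample pixels."""
    return np.asarray(samples_yi, dtype=float).mean(axis=0)   # (2,)
```

A neutral gray pixel maps to I = 0, since the I-row coefficients sum to zero, which is a quick sanity check on the matrix.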
The Mahalanobis distance of step S16 is a distance measure used in cluster analysis in statistics. Its formula is

    D(Xi) = sqrt( (Xi - mu)^T * Sigma^(-1) * (Xi - mu) )

where Xi is the vector of the first and second coordinate values of a sample pixel, mu is the vector of the coordinate values of the center point, Sigma^(-1) is the inverse of the pooled within-group covariance matrix, and D(Xi) is the Mahalanobis distance of Xi. A detailed calculation example is given later.
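The distance formula of step S16 can be sketched as follows; these are hypothetical helpers that assume the inverse covariance matrix has already been computed.

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance D(x) = sqrt((x - mu)^T Sigma^(-1) (x - mu)).

    x, mean : length-2 vectors of (Y, I) values.
    cov_inv : inverse of the pooled within-group covariance matrix Sigma.
    """
    d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(np.sqrt(d @ cov_inv @ d))

def sample_distances(samples_yi, mean, cov_inv):
    """Step S16: distance of every sample pixel from the center point."""
    return np.array([mahalanobis(s, mean, cov_inv) for s in samples_yi])
```

With an identity covariance the measure reduces to the Euclidean distance, which makes the helper easy to test in isolation.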

This step computes the Mahalanobis distance between each sample pixel of the fire image sample and the center point, forming a plurality of sample distance values that serve as the basis for defining the threshold in step S18.

The threshold can be defined in several ways depending on how the sample distances are distributed: setting it to a value greater than 95% of the sample distance values; to the median of the sample distance values; to their maximum; to their average; to their average multiplied by a safety factor; or to their average plus three times their standard deviation. The safety factor can be adjusted to the actual situation, for example 1.2.

Which definition to use depends on the histogram of the sample distances. If the distances follow a roughly normal distribution, the mean plus m times the standard deviation is preferable; if the distribution pattern is not known, the maximum, a value larger than a given proportion (such as 95%) of the sample distance values, or the proportion of the fire pattern within the image sample may be used to define the threshold.

Furthermore, the image frame of step S20 is a real-time frame captured by a camera. Such video consists of consecutive frames, and to appear smooth to the human eye at least thirty frames are normally captured per second.

Step S20 can proceed in two ways. The first takes each image pixel of the frame 40 in turn as the judgment pixel before performing the subsequent steps. In this way the number of computations and fire-pixel judgments is large, which makes the system load heavy; depending on the system performance and the number of pixels per frame, the computation may not keep up with the rate at which frames are captured.

The second way takes as judgment pixels only those image pixels of the captured frame that differ from the previous frame: the currently captured frame is compared pixel by pixel with the frame captured at the previous instant, the differing image pixels are found, and these are set one by one as judgment pixels for the subsequent steps.

In steps S22 and S24, the Mahalanobis distance between the judgment pixel and the center point is computed to obtain a pixel distance, and the pixel distance is compared with the threshold. If the pixel distance is smaller than the threshold, the judgment pixel belongs to the same cluster as the fire sample and should itself be a fire pixel, so the captured frame is judged to contain a fire pixel, the threshold having been set from the fire image sample. A worked example of the invention follows.
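The threshold options of step S18 and the comparison of step S24 can be sketched as below; the method labels are illustrative names for the definitions listed in the text.

```python
import numpy as np

def choose_threshold(sample_dists, method="percentile95", safety_factor=1.2):
    """Step S18: derive the threshold from the sample-distance distribution.

    The description names several options; which one fits depends on the
    histogram of the sample distances (e.g. mean + 3*sigma suits a roughly
    normal distribution, a high percentile or the maximum otherwise).
    """
    d = np.asarray(sample_dists, dtype=float)
    if method == "percentile95":
        return float(np.percentile(d, 95))
    if method == "median":
        return float(np.median(d))
    if method == "max":
        return float(d.max())
    if method == "mean":
        return float(d.mean())
    if method == "mean_x_safety":
        return float(d.mean() * safety_factor)
    if method == "mean_plus_3sigma":
        return float(d.mean() + 3.0 * d.std())
    raise ValueError(f"unknown method: {method}")

def is_fire_pixel(pixel_distance, threshold):
    """Step S24: a judgment pixel whose distance falls below the threshold
    belongs to the same cluster as the fire sample."""
    return pixel_distance < threshold
```

With the worked example's threshold of 6, a pixel distance of 15.23 is rejected and one of 3.25 is accepted, matching the judgments made later in the text.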

As a worked example of the invention, the fire image sample may be taken from any actual image of fire. The sample here has 69 x 104 pixels, that is, 7176 pixels in total. After conversion from RGB coordinates to YIQ coordinates, the Y values and the I values each form a [69 x 104] matrix, which are rearranged as

    Xi = (Yi, Ii)^T,  i = 1, ..., 7176

Because the data are numerous, only specific values are quoted to illustrate the embodiment; for the example below, Y1 = 79.1697 and I1 = 81.8977.

After step S14 the center point Mean = (Ymu, Imu)^T is obtained, where Ymu and Imu, computed from the actual data, are 255.0835 and 40.8868 respectively.

The pooled within-group covariance matrix of Y and I is

    Sigma = ( SUM_{i=1}^{7176} (Xi - Mean)(Xi - Mean)^T ) / (7176 - 1)

Taking X1 as an example, its outer-product term is

    (X1 - Mean)(X1 - Mean)^T
      = ((79.1697, 81.8977)^T - (255.0835, 40.8868)^T)
        ((79.1697, 81.8977)^T - (255.0835, 40.8868)^T)^T
      = 10^4 x [ 2.1291  -0.5984 ]
               [-0.5984   0.1682 ]

Summing the outer-product terms of all pixels and dividing by (7176 - 1) gives

    Sigma = 10^3 x [ 1.2086  -0.7754 ]
                   [-0.7754   0.6762 ]

To keep the whole computation from failing through a division by zero, an identity matrix may be added, after which the pooled within-group covariance matrix becomes

    Sigma = 10^3 x [ 1.2096  -0.7754 ]
                   [-0.7754   0.6772 ]

Steps S16 and S18 are then performed. In this embodiment the distribution of the sample distance values is not a normal distribution, and 95% of the sample distance values are smaller than 6, so the threshold is set to 6.

Thereafter, one of the image pixels of a captured image frame 40 (see Fig. 3) is first taken as the judgment pixel. Assuming the Y and I coordinate values of this judgment pixel are given, its Mahalanobis distance from the center point is computed with the center point and covariance matrix above as

    D = sqrt( (X - Mean)^T Sigma^(-1) (X - Mean) ) = 15.23

Since this value is greater than the threshold, the pixel is judged not to be a fire pixel, which completes the judgment of one image pixel.

Regarding the choice of judgment pixels, please refer to Fig. 4, another captured image frame 41, in which a flame pixel 42 can be seen that was not present in the previous frame 40. Hence, when judgment pixels are chosen by comparing the two successive frames 40 and 41 and the Mahalanobis distance is computed only for the changed image pixels, some computation time is saved. For this flame pixel the Mahalanobis distance evaluates to 3.25, which is smaller than the threshold, so it is judged to be a fire pixel.

The physical meaning of the method is that of cluster analysis (classification) in statistics: the Mahalanobis distance from every fire pixel of the fire image sample to the center value is computed directly. Experiments showed that, when distances are computed this way, the color-space coordinates are best chosen as luminance paired with the in-phase chrominance component, or luminance paired with the quadrature chrominance component. Therefore, after the threshold is defined, if the distance from a judgment pixel of a newly captured frame to the center point of the sample is smaller than the threshold, the judgment pixel belongs to the same cluster as the fire pixels of the fire image sample and is judged to be a fire pixel.

Please refer to Fig. 5, a schematic view of a second embodiment of the fire image sample 31 of the invention.
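The sample-analysis computation of the worked example (center point, covariance with an added identity matrix, and distances to the model) can be sketched as follows on synthetic data; the helper names and the data are assumptions for illustration, not the patent's exact figures.

```python
import numpy as np

def fit_fire_model(samples_yi):
    """Steps S14-S16 over a fire sample: center point and pooled covariance.

    As in the worked example, an identity matrix is added to the covariance
    so that it stays invertible even for a degenerate sample.
    """
    X = np.asarray(samples_yi, dtype=float)
    mean = X.mean(axis=0)
    d = X - mean
    cov = (d.T @ d) / (len(X) - 1)   # sum of outer products over (n - 1)
    cov += np.eye(cov.shape[0])      # regularization used in the example
    cov_inv = np.linalg.inv(cov)
    return mean, cov_inv

def distance_to_model(pixel_yi, mean, cov_inv):
    """Mahalanobis distance of one (Y, I) pixel from the fitted center."""
    d = np.asarray(pixel_yi, dtype=float) - mean
    return float(np.sqrt(d @ cov_inv @ d))
```

Pixels close to the sample's center produce small distances and pixels far from it produce large ones, which is what the threshold of step S18 separates.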
The smoke image sample is taken as the example here: a fire is usually accompanied by smoke, so a smoke image sample can also serve as an important basis for judging fire pixels. Its image detection method is the same as described above and is not repeated here.

The fire image sample 30, 31 may also be a flame image sample and a smoke image sample simultaneously. In that case the center point comprises a flame center point and a smoke center point, the sample distances comprise flame sample distances and smoke sample distances, the pixel distance comprises a flame pixel distance and a smoke pixel distance, and the threshold comprises a flame threshold and a smoke threshold, so that the captured frame is judged to contain a fire pixel only when the flame pixel distance is smaller than the flame threshold and the smoke pixel distance is smaller than the smoke threshold. The judgment is thus stricter: both flame and smoke must be present before a fire pixel is declared.

The fire image sample 30 and the captured image frames 40, 41 are preferably pixels captured by the same image capture system and hardware, to eliminate differences between hardware.

For an application of the invention, please refer to Fig. 6, which combines the foregoing steps S12, S14, S16 and S18 into step S50, sample analysis, to set the threshold, while steps S20 and S22 become step S52, which computes the pixel distance of the judgment pixel. Step S54 then compares the pixel distance with the threshold; if the pixel distance is smaller than the threshold, the frame is recorded as a fire frame and counting begins (S55). After a frame has been judged, step S56 checks whether thirty consecutive frames all contain fire; if so, an alarm is issued (S58); if not, step S59 is executed: when the current frame is not a fire frame, the fire-frame count is reset to zero, which avoids false alarms. Of course, the count of thirty consecutive fire frames can be adjusted to different needs, and may be set to twenty or more or fewer, to match actual requirements.

While the invention has been disclosed above by preferred embodiments, they are not intended to limit the invention. Anyone skilled in this art may make various changes and refinements without departing from the spirit and scope of the invention, so the scope of protection is defined by the appended claims.

[Brief Description of the Drawings]

Fig. 1 is a flow chart of the fire image detection method of the invention;
Fig. 2 is a schematic view of a first embodiment of a fire image sample of the invention;
Fig. 3 is a schematic view of a captured image frame;
Fig. 4 is a schematic view of another captured image frame;
Fig. 5 is a schematic view of a second embodiment of a fire image sample of the invention; and
Fig. 6 is a flow chart of an application example of the invention.

[Description of Reference Numerals]

30, 31  fire image sample
40, 41  image frame
42      flame pixel
S12     convert each sample pixel of the fire image sample into a color model (color space)
S14     calculate the center point of the sample pixels
S16     calculate the Mahalanobis distance between each sample pixel and the center point to obtain the sample distance values
S18     define a threshold from the sample distance values
S20     take one image pixel of the image frame as the judgment pixel
S22     calculate the Mahalanobis distance between the judgment pixel and the center point to obtain a pixel distance
S24     compare the pixel distance with the threshold and judge that the frame contains a fire pixel when the pixel distance is smaller than the threshold
S50     sample analysis
S52     image capture
S54     fire pixel judgment
S55     fire frame counting
S56     fire frame count reaches thirty
S58     issue an alarm
S59     if the current frame is not a fire frame, reset the fire frame count to zero
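The application flow of Fig. 6, including the frame-difference choice of judgment pixels (the second variant of step S20) and the thirty-consecutive-frame alarm count (steps S55 to S59), can be sketched as below; the class name and difference threshold are illustrative assumptions.

```python
import numpy as np

class FireAlarm:
    """Application flow of Fig. 6: count consecutive fire frames and raise
    an alarm after `consecutive_needed` of them (thirty in the example);
    a non-fire frame resets the count to zero (step S59)."""

    def __init__(self, consecutive_needed=30):
        self.consecutive_needed = consecutive_needed
        self.count = 0

    def update(self, frame_is_fire):
        if frame_is_fire:
            self.count += 1           # S55: keep counting fire frames
        else:
            self.count = 0            # S59: reset on a non-fire frame
        return self.count >= self.consecutive_needed   # S56/S58

def judgment_pixels(frame, previous_frame, diff_threshold=10.0):
    """Second variant of step S20: only pixels that changed with respect to
    the previous frame are examined, reducing the per-frame workload."""
    changed = np.abs(frame.astype(float) - previous_frame.astype(float)).sum(axis=-1)
    return np.argwhere(changed > diff_threshold)
```

Each frame's fire decision (step S54) feeds `update`, and only the coordinates returned by `judgment_pixels` need a Mahalanobis-distance evaluation.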

Claims

X. Claims:

1. A fire image detection method, which compares a fire image sample with at least one captured image frame to determine whether the captured image frame has a fire pixel, the fire image detection method comprising: converting each sample pixel of the fire image sample into a color space, the color space having a first coordinate and a second coordinate, each sample pixel having a first coordinate value and a second coordinate value corresponding to the first coordinate and the second coordinate; calculating a center point of the first coordinate values and the second coordinate values of the sample pixels; calculating the Mahalanobis distance between each sample pixel of the fire image sample and the center point to obtain a plurality of sample distance values; defining a threshold value according to the sample distance values; taking one of a plurality of image pixels of the captured image frame as a judged pixel; calculating the Mahalanobis distance between the judged pixel and the center point to obtain a pixel distance; and comparing the pixel distance with the threshold value, and determining that the captured image frame has the fire pixel when the pixel distance is smaller than the threshold value.

2. The fire image detection method of claim 1, wherein the color space is a YIQ color model (YIQ domain), the first coordinate is the luminance (Y) coordinate, and the second coordinate is the in-phase chrominance (I) coordinate.

3. The fire image detection method of claim 2, wherein the first coordinate value of the sample pixel is a luminance value, and the second coordinate value of the sample pixel is an in-phase chrominance value.

4. The fire image detection method of claim 1, wherein the Mahalanobis distance is D(x_i) = sqrt((x_i - μ)^T Σ^(-1) (x_i - μ)), where x_i is the first coordinate value and the second coordinate value of the pixel, μ is the first coordinate value and the second coordinate value of the center point, Σ^(-1) is the inverse of the pooled within-group covariance matrix, and D(x_i) is the Mahalanobis distance of x_i.

5. The fire image detection method of claim 1, wherein the threshold value is greater than 95% of the sample distance values.

6. The fire image detection method of claim 1, wherein the threshold value is the median of the sample distance values.

7. The fire image detection method of claim 1, wherein the threshold value is the maximum of the sample distance values.

8. The fire image detection method of claim 1, wherein the threshold value is the average of the sample distance values.

9. The fire image detection method of claim 1, wherein the threshold value is the average of the sample distance values multiplied by a safety factor.

10. The fire image detection method of claim 9, wherein the safety factor is 1.2.

11. The fire image detection method of claim 1, wherein the threshold value is the average of the sample distance values plus three times the standard deviation of the sample distance values.

12. The fire image detection method of claim 1, wherein the step of taking one of the plurality of image pixels of the captured image frame as the judged pixel comprises taking, as the judged pixel, one of the image pixels of the captured image frame that differ from a preceding image frame.

13. The fire image detection method of claim 1, wherein the step of calculating the Mahalanobis distance between the judged pixel and the center point to obtain the pixel distance comprises first converting the judged pixel into the first coordinate value and the second coordinate value, and then calculating the Mahalanobis distance between the judged pixel and the center point.

14. The fire image detection method of claim 1, wherein the fire image sample is a flame image sample.

15. The fire image detection method of claim 1, wherein the fire image sample is a smoke image sample.

16. The fire image detection method of claim 1, wherein the fire image sample is a flame image sample and a smoke image sample, the center point comprises a flame center point and a smoke center point, the sample distances comprise flame sample distances and smoke sample distances, the pixel distance comprises a flame pixel distance and a smoke pixel distance, and the threshold value comprises a flame threshold value and a smoke threshold value, so that the captured image frame is determined to have a fire pixel when the flame pixel distance is smaller than the flame threshold value and the smoke pixel distance is smaller than the smoke threshold value.

17. The fire image detection method of claim 1, further comprising the step of: issuing an alarm when thirty consecutive captured image frames all have the fire pixel.

18. The fire image detection method of claim 1, wherein before the step of calculating the Mahalanobis distance between the judged pixel and the center point to obtain the pixel distance, the method further comprises the step of converting the judged pixel into the color space.

19. The fire image detection method of claim 1, wherein the center point is the arithmetic mean of the first coordinate values of the sample pixels and the arithmetic mean of the second coordinate values of the sample pixels.
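The pipeline recited in claims 1-5 and 19 (YIQ conversion, center point as arithmetic means, Mahalanobis distance against the sample covariance, distance-versus-threshold test) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the function names, the hand-rolled 2x2 covariance inversion, and the choice of threshold are all assumptions for the two-coordinate (Y, I) case.

```python
import math

def rgb_to_yi(r, g, b):
    """Standard RGB -> YIQ conversion, keeping only luminance Y and
    in-phase chrominance I (the first and second coordinates, claims 2-3)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    return (y, i)

def fit_fire_sample(sample_yi):
    """Return the center point (arithmetic means, claim 19) and the inverse
    of the 2x2 sample covariance matrix of the (Y, I) values."""
    n = len(sample_yi)
    cy = sum(p[0] for p in sample_yi) / n
    ci = sum(p[1] for p in sample_yi) / n
    syy = sum((p[0] - cy) ** 2 for p in sample_yi) / n
    sii = sum((p[1] - ci) ** 2 for p in sample_yi) / n
    syi = sum((p[0] - cy) * (p[1] - ci) for p in sample_yi) / n
    det = syy * sii - syi * syi
    # Inverse of [[syy, syi], [syi, sii]], packed as (a, b, d) of [[a, b], [b, d]]
    inv = (sii / det, -syi / det, syy / det)
    return (cy, ci), inv

def mahalanobis(p, center, inv):
    """D(x) = sqrt((x - mu)^T Sigma^-1 (x - mu)) for the 2-D (Y, I) case (claim 4)."""
    dy, di = p[0] - center[0], p[1] - center[1]
    a, b, d = inv
    return math.sqrt(dy * (a * dy + b * di) + di * (b * dy + d * di))

def is_fire_pixel(pixel_yi, center, inv, threshold):
    """A judged pixel is a fire pixel when its Mahalanobis distance to the
    center point is smaller than the threshold value (claim 1)."""
    return mahalanobis(pixel_yi, center, inv) < threshold
```

With the model fitted, the threshold may be taken, for instance, as the maximum of the per-sample distances (claim 7), their median (claim 6), or a value exceeding 95% of them (claim 5); a flame-colored judged pixel then falls inside the threshold while, say, a blue background pixel falls far outside it.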
TW96105521A 2007-02-14 2007-02-14 A fire image detection method TWI313848B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW96105521A TWI313848B (en) 2007-02-14 2007-02-14 A fire image detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW96105521A TWI313848B (en) 2007-02-14 2007-02-14 A fire image detection method

Publications (2)

Publication Number Publication Date
TW200834476A true TW200834476A (en) 2008-08-16
TWI313848B TWI313848B (en) 2009-08-21

Family

ID=44819490

Family Applications (1)

Application Number Title Priority Date Filing Date
TW96105521A TWI313848B (en) 2007-02-14 2007-02-14 A fire image detection method

Country Status (1)

Country Link
TW (1) TWI313848B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI420423B (en) * 2011-01-27 2013-12-21 Chang Jung Christian University Machine vision flame identification system and method
TWI453679B (en) * 2009-01-16 2014-09-21 Hon Hai Prec Ind Co Ltd Image matching system and method
TWI486883B (en) * 2011-11-28 2015-06-01 Ind Tech Res Inst Method for extractive features of flame image


Also Published As

Publication number Publication date
TWI313848B (en) 2009-08-21

Similar Documents

Publication Publication Date Title
CN108038456B (en) Anti-deception method in face recognition system
CN103824059B (en) Facial expression recognition method based on video image sequence
CN110084135A (en) Face identification method, device, computer equipment and storage medium
US7606414B2 (en) Fusion of color space data to extract dominant color
TWI293742B (en)
CN106548165A (en) A kind of face identification method of the convolutional neural networks weighted based on image block
WO2016037422A1 (en) Method for detecting change of video scene
JP2006350557A5 (en)
RU2661529C1 (en) Method and device for classification and identification of banknotes based on the color space lab
JP4722467B2 (en) A method of classifying pixels in an image, a method of detecting skin using the method, configured to use data encoded using the method to encode, transmit, receive, or decode Method and apparatus, apparatus configured to perform the method, a computer program causing a computer to execute the method, and a computer-readable storage medium storing the computer program
CN109300110A (en) A kind of forest fire image detecting method based on improvement color model
US6483940B1 (en) Method for dividing image
CN111209818A (en) Video individual identification method, system, equipment and readable storage medium
CN107180439B (en) Color cast characteristic extraction and color cast detection method based on Lab chromaticity space
CN106599880A (en) Discrimination method of the same person facing examination without monitor
JP2005190474A5 (en)
WO2019210707A1 (en) Image sharpness evaluation method, device and electronic device
CN109284759A (en) One kind being based on the magic square color identification method of support vector machines (svm)
TW200834476A (en) A fire image detection method
CN111179293B (en) Bionic contour detection method based on color and gray level feature fusion
CN109657544B (en) Face detection method and device
CN109766860B (en) Face detection method based on improved Adaboost algorithm
Berbar Novel colors correction approaches for natural scenes and skin detection techniques
CN106558047A (en) Color image quality evaluation method based on complementary colours small echo
CN106340037B (en) Based on coloration center than coloration centrifuge away from image color shift detection method

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees