CN108090495A - A kind of doubtful flame region extracting method based on infrared light and visible images - Google Patents

A kind of doubtful flame region extracting method based on infrared light and visible images Download PDF

Info

Publication number
CN108090495A
Authority
CN
China
Prior art keywords
doubtful
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711401121.7A
Other languages
Chinese (zh)
Inventor
陈蓉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Source Letter Photoelectric Polytron Technologies Inc
Original Assignee
Hunan Source Letter Photoelectric Polytron Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Source Letter Photoelectric Polytron Technologies Inc filed Critical Hunan Source Letter Photoelectric Polytron Technologies Inc
Priority to CN201711401121.7A priority Critical patent/CN108090495A/en
Publication of CN108090495A publication Critical patent/CN108090495A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/464Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/60Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10052Images from lightfield camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20164Salient point detection; Corner detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for extracting doubtful flame regions based on infrared and visible light images, and relates to the field of computer vision. The method performs gray-level threshold segmentation on the captured infrared image to obtain doubtful high-temperature flame regions; at the same time, the infrared and visible light images are registered to obtain an affine transformation matrix, which is combined with the doubtful high-temperature flame regions to obtain their positions in the visible light image; the doubtful flame regions are finally obtained through flame segmentation based on the YCrCb color space, morphological processing and connected component labeling. The method can efficiently extract accurate doubtful flame regions while guaranteeing detection of doubtful flame regions at night, which is conducive to all-weather fire detection.

Description

A kind of doubtful flame region extracting method based on infrared light and visible images
Technical field
The present invention relates to the field of computer vision, and in particular to a doubtful flame region extraction method based on infrared and visible light images.
Background art
In recent years, large-space building fires, mine fires, forest fires, tunnel fires and the like have occurred frequently, not only causing heavy losses of human life and property but also greatly damaging the ecological environment. With continuous scientific progress, people have turned to research on intelligent video analysis to guard against fire, and fire detection methods based on image and video analysis and recognition have therefore been popularized rapidly.
The extraction of doubtful flame regions is the premise of flame recognition; separating the flame image from the background is the basis of fire detection and determines the reliability of subsequent feature extraction and the accuracy of target recognition. In the prior art, those skilled in the art usually extract doubtful fire regions with a difference method based on a background model, which detects flames against the background model and then clusters the detected regions to accomplish region extraction. However, this method assumes that the training sequence used to initialize the background contains no moving targets, which limits the conditions under which the background model can be used. Other algorithms combine motion detection with a flame color model to exclude interference sources, but their sensitivity is limited by the quality of the image acquisition device and of the motion detection algorithm, and the flame color model of ordinary visible light images is designed only for reddish or yellowish flames, which is a significant limitation. Moreover, the physical characteristics of the burning flame itself, such as temperature and brightness, cannot be fully exploited, which reduces detection efficiency to some extent. Meanwhile, detection of suspicious regions at night is easily disturbed, so all-weather monitoring cannot be guaranteed.
Summary of the invention
It is an object of the invention to overcome the above deficiencies of the prior art and to propose a doubtful flame region extraction method based on infrared and visible light images. The method combines the information of the infrared and visible light images and makes full use of the physical characteristics of flames, so it can efficiently extract accurate doubtful flame regions and facilitate subsequent flame recognition; at the same time, the method guarantees detection of doubtful flame regions at night, which is conducive to all-weather fire detection.
To achieve the above object, the technical solution of the present invention is as follows:
A doubtful flame region extraction method based on infrared and visible light images, the method specifically comprising the following steps:
S1: acquire images simultaneously using an infrared camera and a visible light camera;
S2: perform gray-level threshold segmentation on the infrared image;
S3: register the visible light and infrared images to obtain an affine transformation matrix, and use the affine transformation matrix to obtain the positions, in the visible light image, of the doubtful high-temperature flame regions from step S2;
S4: apply flame segmentation based on the YCrCb color space to the result of step S3.3 to obtain the final doubtful flame regions;
S5: smooth the result of step S4 using morphological opening and closing operations, then process the smoothed result with connected component labeling to mark each doubtful flame region.
As a further improvement of the technical solution of the present invention, step S2 specifically includes:
S2.1: let the maximum gray value of the infrared image be Max, and select the seed point for region growing according to Max;
S2.2: compute the average gray value of the pixels in the neighborhood centered on the seed point, and compare the absolute difference between the gray value of each pixel in the seed point's neighborhood and this average gray value with a set threshold T;
S2.3: if the difference is less than the set threshold T, merge the pixel with the seed point and use it as a new seed point; if it is greater than or equal to the set threshold T, do not merge the pixel with the seed point and continue to judge the next point;
S2.4: repeat steps S2.2 and S2.3 until all points in the image have been merged into a region; this region is the doubtful high-temperature flame region.
As a further improvement of the technical solution of the present invention, step S3 specifically includes:
S3.1: perform multi-scale Harris corner detection on the infrared and visible light images;
S3.2: generate feature point descriptors and match the feature points;
S3.3: solve for the affine transformation matrix, and use it to obtain the positions of the doubtful high-temperature flame regions from step S2 in the visible light image.
As a further improvement of the technical solution of the present invention, step S3.1 specifically includes:
S3.1.1: construct image scale spaces for the infrared and visible light images respectively using a Gaussian function, and establish the different scale spaces of the image by selecting different scale space parameters σ;
Assuming I(x, y) is the input image, the scale space of the image is defined as the function L(x, y, σ), as shown in the following formula:
L(x, y, σ) = G(x, y, σ) * I(x, y)
where * denotes the convolution operation and G(x, y, σ) is the variable-scale Gaussian function;
S3.1.2: perform Harris corner detection in the multi-scale space on the infrared and visible light images respectively;
First, in the two groups of scale spaces of the infrared and visible light images respectively, detect Harris corners using the Harris corner detection algorithm, then filter out false corners from the small scale toward the large scale.
As a further improvement of the technical solution of the present invention, step S3.2 specifically includes: constructing feature descriptors using circles; after generating feature point descriptors for the feature points extracted from the infrared and visible light images, matching the feature points using the nearest neighbor method to obtain the matched point pairs between the infrared and visible light images.
As a further improvement of the technical solution of the present invention, step S3.3 specifically includes:
The positions of the doubtful high-temperature flame regions from step S2 in the visible light image are obtained according to the following formula:

[u']   [a11  a12] [u]   [a31]
[v'] = [a21  a22] [v] + [a32]

where [u v 1] are the homogeneous coordinates of a feature point (u, v) extracted from the infrared image; [u' v' 1] are the homogeneous coordinates of the feature point (u', v') in the visible light image that matches (u, v); and a11, a12, a21, a22, a31 and a32 are the six parameters of the affine transformation model.
As a further improvement of the technical solution of the present invention, step S4 specifically includes:
The doubtful flame regions are obtained from the result of step S3.3 through flame segmentation based on the YCrCb color space according to the following formulas:

F_τ = 1 if Y(x, y) > Cb(x, y), Y(x, y) > Y_mean and |Cb(x, y) - Cr(x, y)| > τ; otherwise F_τ = 0

Y_mean = (1/k) Σ_{i=1}^{k} Y(x_i, y_i)

where τ is a threshold; Y(x, y) is the luminance component of the pixel at (x, y) in the result obtained in step S3, and Cb(x, y) and Cr(x, y) are its chrominance components; Y_mean is the average luminance component of the k pixels (x_i, y_i) contained in the result obtained in step S3;
If F_τ of a pixel equals 1, the pixel is a pixel of a doubtful flame region; otherwise, it is not.
Compared with the prior art, the invention has the following advantages:
1) The method of the present invention constructs a multi-scale space, performs Harris corner detection in the multi-scale space, constructs feature point descriptors using a circular structure, and then matches the feature points with the nearest neighbor method. This ensures the detection efficiency of the feature points of the infrared and visible light images; the circular feature point descriptor has rotation invariance, which reduces the complexity of the algorithm while keeping its matching accuracy high. Finally, the transformation relationship between the infrared and visible light images is determined through the affine transformation model, and the positions of the doubtful high-temperature flame regions in the visible light image are obtained according to the affine transformation matrix.
2) The method of the present invention combines gray-level threshold segmentation of the infrared image, registration of the infrared and visible light images, and color segmentation in the YCrCb space of the visible light image to obtain the doubtful flame regions. The method combines the information of the infrared and visible light images and makes full use of the physical characteristics of flames; at the same time, it guarantees detection of doubtful flame regions at night, which is conducive to all-weather fire detection.
Description of the drawings
Fig. 1 is a flow chart of the doubtful flame region extraction method based on infrared and visible light images in this embodiment.
Specific embodiment
The present invention is a doubtful flame region extraction method based on infrared and visible light images; the method can be implemented on an embedded FPGA and applied in cameras with fire detection functions. A specific embodiment of the doubtful flame region extraction method based on infrared and visible light images of the present invention is described in further detail below with reference to the accompanying drawings of this specification. Obviously, the described embodiment is only a part of the embodiments of the present invention rather than all of them; all other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work fall within the protection scope of the present application.
As shown in Fig. 1, the doubtful flame region extraction method based on infrared and visible light images of the present invention specifically comprises the following steps:
S1: acquire images simultaneously using an infrared camera and a visible light camera;
S2: perform gray-level threshold segmentation on the infrared image;
Infrared radiation is one of the most common forms of electromagnetic radiation in nature. Under normal conditions, the random motion of the molecules and atoms of any object continuously radiates thermal infrared energy; the more violent the molecular and atomic motion, the greater the radiated energy, and vice versa. An infrared camera images the infrared radiation emitted or reflected by objects, and flames of different colors all appear with high brightness in infrared video. Flames emit light and heat as they burn, so the temperature of the flame area is clearly higher than that of the surrounding environment. Since the brightness distribution of a thermal image is proportional to temperature, image segmentation based on image brightness can be performed on the captured infrared image to preliminarily extract high-temperature object regions, i.e., doubtful high-temperature flame regions. The steps of the gray-level threshold segmentation are as follows:
S2.1: let the maximum gray value of the infrared image be Max, and select the seed point for region growing according to Max;
S2.2: compute the average gray value of the pixels in the neighborhood centered on the seed point, and compare the absolute difference between the gray value of each pixel in the seed point's neighborhood and this average gray value with a set threshold T;
S2.3: if the difference is less than the set threshold T, merge the pixel with the seed point and use it as a new seed point; if it is greater than or equal to the set threshold T, do not merge the pixel with the seed point and continue to judge the next point;
S2.4: repeat steps S2.2 and S2.3 until all points in the image have been merged into a region; this region is exactly the doubtful high-temperature flame region.
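For illustration only, the Python sketch below implements the region growing of steps S2.1 to S2.4 under stated assumptions: the seed-selection expression is not reproduced in this text, so the seed is taken to be the pixel with the maximum gray value Max, the neighborhood is assumed to be the 3 × 3 neighborhood, and the default threshold T is arbitrary; the function and parameter names are likewise illustrative.

```python
import numpy as np

def grow_high_temperature_region(ir_gray, T=20):
    """Region growing on an infrared gray image (steps S2.1-S2.4).

    Assumptions (not fixed by the text above): the seed is the pixel with
    the maximum gray value Max, the neighborhood is the 3x3 neighborhood,
    and the threshold T defaults to 20.
    """
    h, w = ir_gray.shape
    region = np.zeros((h, w), dtype=bool)
    visited = np.zeros((h, w), dtype=bool)

    # S2.1: seed point chosen from the maximum gray value Max.
    seed = np.unravel_index(np.argmax(ir_gray), ir_gray.shape)
    stack = [seed]
    region[seed] = visited[seed] = True

    while stack:
        y, x = stack.pop()
        # S2.2: mean gray value of the neighborhood centered on the seed point.
        y0, y1 = max(0, y - 1), min(h, y + 2)
        x0, x1 = max(0, x - 1), min(w, x + 2)
        mean = ir_gray[y0:y1, x0:x1].mean()
        for ny in range(y0, y1):
            for nx in range(x0, x1):
                if visited[ny, nx]:
                    continue
                visited[ny, nx] = True
                # S2.3: merge the pixel if |gray - mean| < T and treat it as a
                # new seed point; otherwise skip it and judge the next point.
                if abs(float(ir_gray[ny, nx]) - mean) < T:
                    region[ny, nx] = True
                    stack.append((ny, nx))
    # S2.4: the merged region is the doubtful high-temperature flame region.
    return region
```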
S3: register the visible light and infrared images to obtain an affine transformation matrix, and use the affine transformation matrix to obtain the positions, in the visible light image, of the doubtful high-temperature flame regions from step S2;
The imaging mechanisms of visible light and infrared images are different: a visible light image carries the reflectance information of the scene, while an infrared image carries its radiation information. The gray levels of the two types of images differ greatly and their pixel gray values are not correlated, so registration methods based on image gray levels are not suitable. Therefore, the present invention performs image registration using multi-scale Harris corners, with the following specific steps:
S3.1: perform multi-scale Harris corner detection on the infrared and visible light images;
S3.1.1: construct image scale spaces for the infrared and visible light images respectively;
In the present invention, the image scale space is established using a Gaussian function, whose formula is as follows:

G(x, y, σ) = (1/(2πσ^2)) · exp(-(x^2 + y^2)/(2σ^2))   (1)

where σ is the scale space parameter and G(x, y, σ) is the variable-scale Gaussian function.
Assuming I(x, y) is the input image, the scale space of the image is defined as the function L(x, y, σ), as shown in the following formula:
L(x, y, σ) = G(x, y, σ) * I(x, y)   (2)
where * denotes the convolution operation. The larger the scale space parameter σ, the coarser the image features it describes; the smaller σ, the finer the details it describes. By varying σ, the different scale spaces of the image are established.
In this embodiment, σ = 0.5 is selected, and Gaussian convolution is applied to the original image using formula (2) to obtain the first image of the first group of the scale space; σ is then gradually increased and formula (2) applied until the fifth image is generated, at which point the first group of the scale space has been established. To build the next group of the scale space, the third image of the previous group is down-sampled by a factor of two to obtain the first image of the next group; the same operation as for the first group is then carried out, gradually increasing σ until the fifth image of the second group is generated, at which point the second group of the scale space is obtained.
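As an illustration, the Python sketch below builds the two-group, five-level Gaussian scale space described above with OpenCV. The text fixes σ = 0.5, five images per group and down-sampling of the third image by a factor of two, but does not specify how σ is increased from level to level, so the multiplicative factor k used below is an assumption, as are the function and parameter names.

```python
import cv2
import numpy as np

def build_two_octave_scale_space(image, sigma0=0.5, k=1.4, levels=5):
    """Two groups (octaves) of a Gaussian scale space as described above.

    sigma0 = 0.5, five images per group and the factor-of-two down-sampling
    of the third image follow the embodiment; the multiplicative step k is
    an assumption.
    """
    base = image.astype(np.float32)
    octaves = []
    for _ in range(2):                     # two groups of the scale space
        group, sigma = [], sigma0
        for _ in range(levels):
            # Formula (2): L(x, y, sigma) = G(x, y, sigma) * I(x, y)
            group.append(cv2.GaussianBlur(base, (0, 0), sigmaX=sigma))
            sigma *= k                     # "gradually increase" sigma
        octaves.append(group)
        # Down-sample the third image of this group by a factor of two
        # to obtain the first image of the next group.
        third = group[2]
        base = cv2.resize(third, (third.shape[1] // 2, third.shape[0] // 2),
                          interpolation=cv2.INTER_NEAREST)
    return octaves
```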
S3.1.2: perform Harris corner detection in the multi-scale space on the infrared and visible light images respectively;
(1) In the two groups of scale spaces of the infrared and visible light images respectively, detect Harris corners using the Harris corner detection algorithm.
Let the number of Harris corners detected at scale σ0 be denoted n0, the number detected at scale σ1 be denoted n1, and so on up to the largest scale σ4, at which the number of detected Harris corners is denoted n4.
(2) Filter out false corners from the small scale toward the large scale.
For each corner extracted in (1), judge whether it exists in the same coordinate region (within a 3 × 3 template) at every scale of the same group of the scale space; if it does, the corner is set as a feature point to be used in image registration; if it does not, the corner is rejected.
Multi-scale Harris corner extraction improves the precision of corner localization and the reliability of corner detection.
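The following Python sketch, given only for illustration, applies Harris detection at every level of one scale-space group and keeps just the corners that reappear within a 3 × 3 template at all scales, as described above; the Harris block size, aperture and response threshold are assumptions not fixed by the text.

```python
import cv2
import numpy as np

def multiscale_harris(scale_space_group, quality=0.01):
    """Multi-scale Harris detection with false-corner filtering (step S3.1.2).

    scale_space_group: list of same-sized images from one scale-space group.
    The Harris parameters (blockSize, ksize, k) and the quality factor are
    assumptions.
    """
    masks = []
    for img in scale_space_group:
        resp = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)
        mask = resp > quality * resp.max()        # candidate corners at this scale
        # 3x3 dilation so a corner counts as present anywhere in the 3x3 template.
        masks.append(cv2.dilate(mask.astype(np.uint8), np.ones((3, 3), np.uint8)))
    # Keep only corners that exist at every scale of the group
    # (filtering from the small scale toward the large scale).
    keep = masks[0].copy()
    for m in masks[1:]:
        keep = cv2.bitwise_and(keep, m)
    ys, xs = np.nonzero(keep)
    return list(zip(xs, ys))                      # (x, y) feature points for registration
```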
S3.2: generate feature point descriptors and match the feature points;
To ensure the rotation invariance of the feature descriptor, the present invention constructs the feature descriptor using circles. A circle has good rotation invariance: after the image is rotated, the region around a feature point does not change, which greatly reduces computational complexity and improves corner detection efficiency. The idea for generating the feature point descriptor is as follows:
(1) Centered on a detected feature point, generate the first circular area with a radius of 2 pixels, then increase the radius by 1 pixel at a time to generate 8 concentric circles.
(2) Compute the gradient magnitude and direction of each pixel in each ring. Every two neighboring concentric circles form an annular region, giving 7 annular regions in total. The accumulated gradient values in 10 directions are counted successively in the 7 annular regions: for example, the accumulated gradient values in 10 directions counted in the annular region formed by the first and second circles become, after sorting by angle, the 1st to 10th elements; the accumulated gradient values in 10 directions counted in the annular region formed by the second and third circles become, after sorting by angle, the 11th to 20th elements; counting through all 7 annular regions in this way yields 7 × 10 elements, and the one-dimensional vector composed of these elements is the feature descriptor of the extracted feature point.
(3) Normalize the one-dimensional vector to reduce the influence of illumination variation.
After generating feature point descriptors for the feature points extracted from the infrared and visible light images, the feature points are matched using the nearest neighbor method to obtain the matched point pairs between the infrared and visible light images.
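As a rough illustration of steps (1) to (3) and the nearest-neighbor matching, the Python sketch below accumulates gradient magnitudes into 10 direction bins over the 7 annular regions bounded by 8 concentric circles of radius 2 to 9 pixels and normalizes the resulting 70-element vector; the exact binning, the radius of the outermost circle and the ratio test in the matcher are assumptions, and the function names are illustrative.

```python
import numpy as np

def circular_descriptor(gray, pt, n_rings=7, n_dirs=10):
    """70-dimensional circular descriptor sketched from the description above:
    8 concentric circles (assumed radii 2..9 px), 7 annular regions, and a
    10-bin gradient-direction histogram weighted by gradient magnitude."""
    x, y = pt
    gy, gx = np.gradient(gray.astype(np.float32))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)

    desc = np.zeros(n_rings * n_dirs, dtype=np.float32)
    radii = [2 + i for i in range(n_rings + 1)]           # 8 concentric circles
    r_max = radii[-1]
    for dy in range(-r_max, r_max + 1):
        for dx in range(-r_max, r_max + 1):
            yy, xx = y + dy, x + dx
            if not (0 <= yy < gray.shape[0] and 0 <= xx < gray.shape[1]):
                continue
            r = np.hypot(dx, dy)
            for i in range(n_rings):                      # which annular region?
                if radii[i] <= r < radii[i + 1]:
                    b = int(ang[yy, xx] / (2 * np.pi) * n_dirs) % n_dirs
                    desc[i * n_dirs + b] += mag[yy, xx]   # accumulate gradient
                    break
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc              # step (3): normalize

def nearest_neighbor_matches(desc_ir, desc_vis, ratio=0.8):
    """Nearest-neighbor matching of descriptor lists (ratio test assumed)."""
    matches = []
    for i, d in enumerate(desc_ir):
        dists = [np.linalg.norm(d - dv) for dv in desc_vis]
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```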
S3.3: solve for the affine transformation matrix, and use it to obtain the positions of the doubtful high-temperature flame regions from step S2 in the visible light image.
From the matched infrared and visible light point pairs obtained in step S3.2, the affine transformation matrix is solved, which determines the transformation relationship between the infrared and visible light images. The affine transformation model is as follows:
[u' v' 1] = [u v 1] F   (3)
where the affine transformation matrix F is

F = [a11  a21  0]
    [a12  a22  0]
    [a31  a32  1]   (4)

[u v 1] are the homogeneous coordinates of a feature point (u, v) extracted from the infrared image; [u' v' 1] are the homogeneous coordinates of the feature point (u', v') in the visible light image that matches (u, v); a11, a12, a21, a22, a31 and a32 are the six parameters of the affine transformation model, and they determine the transformation relationship between the coordinates of the infrared and visible light images. Only three pairs of matched points are needed to determine these six parameters, and the matched points are obtained from step S3.2.
Formula (3) can be rewritten as:

[u']   [a11  a12] [u]   [a31]
[v'] = [a21  a22] [v] + [a32]   (5)

The positions of the doubtful high-temperature flame regions from step S2 in the visible light image can be obtained according to formula (5).
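As an illustration of how the six parameters could be estimated in practice, the Python sketch below solves formula (5) by least squares from three or more matched point pairs and then maps the infrared region pixels into the visible light image; using a least-squares fit over all matches (rather than exactly three points) is a design choice of the sketch, not something prescribed above.

```python
import numpy as np

def solve_affine(ir_pts, vis_pts):
    """Least-squares estimate of the six affine parameters in formula (5).

    ir_pts, vis_pts: arrays of shape (N, 2) holding matched (u, v) and
    (u', v') coordinates, N >= 3.
    """
    ir_pts = np.asarray(ir_pts, dtype=np.float64)
    vis_pts = np.asarray(vis_pts, dtype=np.float64)
    n = ir_pts.shape[0]
    # Linear system for [a11 a12 a31 a21 a22 a32]^T.
    A = np.zeros((2 * n, 6))
    b = np.zeros(2 * n)
    A[0::2, 0:2] = ir_pts          # u' = a11*u + a12*v + a31
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = ir_pts          # v' = a21*u + a22*v + a32
    A[1::2, 5] = 1.0
    b[0::2] = vis_pts[:, 0]
    b[1::2] = vis_pts[:, 1]
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    a11, a12, a31, a21, a22, a32 = params
    return np.array([[a11, a12, a31],
                     [a21, a22, a32]])   # 2x3 matrix mapping (u, v, 1) -> (u', v')

def map_region_to_visible(M, ir_region_pts):
    """Map doubtful high-temperature region pixels from the infrared image
    into the visible light image with the estimated affine matrix M."""
    pts = np.asarray(ir_region_pts, dtype=np.float64)
    ones = np.ones((pts.shape[0], 1))
    return np.hstack([pts, ones]) @ M.T
```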
S4: apply flame segmentation based on the YCrCb color space to the result of step S3.3 to obtain the final doubtful flame regions;
In the YCrCb color space, flame pixels can be identified using the range of and relations among the components of each pixel. Brightness is a key characteristic of flames: in the luminance image, the brightness of the flame region is clearly higher than that of other regions. In the YCrCb color space, luminance and chrominance are separated, which makes it possible to fully exploit the brightness and chrominance characteristics of flames for flame segmentation. The doubtful flame regions are obtained from the result of step S3.3 through flame segmentation based on the YCrCb color space according to the following formulas:

F_τ = 1 if Y(x, y) > Cb(x, y), Y(x, y) > Y_mean and |Cb(x, y) - Cr(x, y)| > τ; otherwise F_τ = 0   (6)

Y_mean = (1/k) Σ_{i=1}^{k} Y(x_i, y_i)

where τ is a threshold, taken as 20 in this specific embodiment; Y(x, y) is the luminance component of the pixel at (x, y) in the result obtained in step S3.3, and Cb(x, y) and Cr(x, y) are its chrominance components; Y_mean is the average luminance component of the k pixels (x_i, y_i) contained in the result obtained in step S3.3. If F_τ of a pixel computed by formula (6) equals 1, the pixel is a pixel of a doubtful flame region; otherwise, it is not.
Flame segmentation based on the YCrCb color space can exclude some interfering objects whose color resembles that of flames and extract more accurate doubtful flame regions, which helps improve the accuracy of subsequent flame recognition.
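A minimal Python sketch of formula (6), assuming the input is the visible-light image region obtained after registration (in OpenCV BGR order) and using τ = 20 from the embodiment above; the function name is illustrative.

```python
import cv2
import numpy as np

def ycrcb_flame_mask(bgr_roi, tau=20):
    """Formula (6) applied to a visible-light region of interest.

    bgr_roi is assumed to be the visible-light pixels covered by the
    registered doubtful high-temperature region; tau = 20 follows the
    embodiment above.
    """
    ycrcb = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    Y, Cr, Cb = ycrcb[..., 0], ycrcb[..., 1], ycrcb[..., 2]
    y_mean = Y.mean()                       # Y_mean over the region's pixels
    mask = (Y > Cb) & (Y > y_mean) & (np.abs(Cb - Cr) > tau)
    return mask.astype(np.uint8) * 255      # 255 where F_tau = 1
```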
S5: smooth the result of step S4 using morphological opening and closing operations, then process the smoothed result with connected component labeling to mark each doubtful flame region.
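For illustration, step S5 might be realized as follows with OpenCV; the structuring element size and the minimum-area filter are assumptions not specified above.

```python
import cv2

def label_doubtful_regions(flame_mask, kernel_size=3, min_area=0):
    """Step S5: smooth the binary flame mask with morphological opening and
    closing, then mark each doubtful flame region by connected component
    labeling. kernel_size and min_area are assumptions."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    smoothed = cv2.morphologyEx(flame_mask, cv2.MORPH_OPEN, kernel)
    smoothed = cv2.morphologyEx(smoothed, cv2.MORPH_CLOSE, kernel)

    n, labels, stats, centroids = cv2.connectedComponentsWithStats(smoothed, connectivity=8)
    regions = []
    for i in range(1, n):                       # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            regions.append({"label": i, "bbox": (int(x), int(y), int(w), int(h)),
                            "area": int(area)})
    return labels, regions
```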
Those skilled in the art will appreciate that the scope of the present invention is not restricted to the examples discussed above, and that several changes and modifications may be made to them without departing from the scope of the present invention as defined by the appended claims. Although the present invention has been illustrated and described in detail in the drawings and the specification, such illustration and description are merely explanatory or schematic and not restrictive. The present invention is not limited to the disclosed embodiments.

Claims (6)

1. A doubtful flame region extraction method based on infrared and visible light images, characterized in that the method specifically comprises the following steps:
S1: acquire images simultaneously using an infrared camera and a visible light camera;
S2: perform gray-level threshold segmentation on the infrared image;
S3: register the visible light and infrared images to obtain an affine transformation matrix, and use the affine transformation matrix to obtain the positions, in the visible light image, of the doubtful high-temperature flame regions from step S2;
S3.1: perform multi-scale Harris corner detection on the infrared and visible light images;
S3.2: generate feature point descriptors and match the feature points;
S3.3: solve for the affine transformation matrix, and use it to obtain the positions of the doubtful high-temperature flame regions from step S2 in the visible light image;
S4: apply flame segmentation based on the YCrCb color space to the result of step S3.3 to obtain the final doubtful flame regions;
S5: smooth the result of step S4 using morphological opening and closing operations, then process the smoothed result with connected component labeling to mark each doubtful flame region.
2. The doubtful flame region extraction method based on infrared and visible light images according to claim 1, characterized in that step S2 specifically includes:
S2.1: let the maximum gray value of the infrared image be Max, and select the seed point for region growing according to Max;
S2.2: compute the average gray value of the pixels in the neighborhood centered on the seed point, and compare the absolute difference between the gray value of each pixel in the seed point's neighborhood and this average gray value with a set threshold T;
S2.3: if the difference is less than the set threshold T, merge the pixel with the seed point and use it as a new seed point; if it is greater than or equal to the set threshold T, do not merge the pixel with the seed point and continue to judge the next point;
S2.4: repeat steps S2.2 and S2.3 until all points in the image have been merged into a region; this region is the doubtful high-temperature flame region.
3. The doubtful flame region extraction method based on infrared and visible light images according to claim 1, characterized in that step S3.1 specifically includes:
S3.1.1: construct image scale spaces for the infrared and visible light images respectively using a Gaussian function, and establish the different scale spaces of the image by selecting different scale space parameters σ;
Assuming I(x, y) is the input image, the scale space of the image is defined as the function L(x, y, σ), as shown in the following formula:
L(x, y, σ) = G(x, y, σ) * I(x, y)
where * denotes the convolution operation and G(x, y, σ) is the variable-scale Gaussian function;
S3.1.2: perform Harris corner detection in the multi-scale space on the infrared and visible light images respectively;
First, in the two groups of scale spaces of the infrared and visible light images respectively, detect Harris corners using the Harris corner detection algorithm, then filter out false corners from the small scale toward the large scale.
4. The doubtful flame region extraction method based on infrared and visible light images according to claim 3, characterized in that step S3.2 specifically includes: constructing feature descriptors using circles; after generating feature point descriptors for the feature points extracted from the infrared and visible light images, matching the feature points using the nearest neighbor method to obtain the matched point pairs between the infrared and visible light images.
5. The doubtful flame region extraction method based on infrared and visible light images according to claim 4, characterized in that step S3.3 specifically includes:
The positions of the doubtful high-temperature flame regions from step S2 in the visible light image are obtained according to the following formula:

[u']   [a11  a12] [u]   [a31]
[v'] = [a21  a22] [v] + [a32]

where [u v 1] are the homogeneous coordinates of a feature point (u, v) extracted from the infrared image; [u' v' 1] are the homogeneous coordinates of the feature point (u', v') in the visible light image that matches (u, v); and a11, a12, a21, a22, a31 and a32 are the six parameters of the affine transformation model.
6. The doubtful flame region extraction method based on infrared and visible light images according to claim 1, characterized in that step S4 specifically includes:
The doubtful flame regions are obtained from the result of step S3.3 through flame segmentation based on the YCrCb color space according to the following formulas:

F_τ = 1 if Y(x, y) > Cb(x, y), Y(x, y) > Y_mean and |Cb(x, y) - Cr(x, y)| > τ; otherwise F_τ = 0

Y_mean = (1/k) Σ_{i=1}^{k} Y(x_i, y_i)

where τ is a threshold; Y(x, y) is the luminance component of the pixel at (x, y) in the result obtained in step S3, and Cb(x, y) and Cr(x, y) are its chrominance components; Y_mean is the average luminance component of the k pixels (x_i, y_i) contained in the result obtained in step S3;
If F_τ of a pixel equals 1, the pixel is a pixel of a doubtful flame region; otherwise, it is not.
CN201711401121.7A 2017-12-22 2017-12-22 A kind of doubtful flame region extracting method based on infrared light and visible images Pending CN108090495A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711401121.7A CN108090495A (en) 2017-12-22 2017-12-22 A kind of doubtful flame region extracting method based on infrared light and visible images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711401121.7A CN108090495A (en) 2017-12-22 2017-12-22 A kind of doubtful flame region extracting method based on infrared light and visible images

Publications (1)

Publication Number Publication Date
CN108090495A true CN108090495A (en) 2018-05-29

Family

ID=62177691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711401121.7A Pending CN108090495A (en) 2017-12-22 2017-12-22 A kind of doubtful flame region extracting method based on infrared light and visible images

Country Status (1)

Country Link
CN (1) CN108090495A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108761436A (en) * 2018-08-27 2018-11-06 上海岗消网络科技有限公司 A kind of flame visual token device and method
CN109146904A (en) * 2018-08-13 2019-01-04 合肥英睿系统技术有限公司 The method and apparatus of infrared image object profile is shown in visible images
CN110097030A (en) * 2019-05-14 2019-08-06 武汉高德红外股份有限公司 It is a kind of based on infrared and visible images protrusion identification methods and system
CN110945566A (en) * 2018-08-01 2020-03-31 深圳市大疆创新科技有限公司 Image registration method, device, computer system and movable equipment
CN111027541A (en) * 2019-11-15 2020-04-17 国网安徽省电力有限公司检修分公司 Flame detection method and system based on visible light and thermal imaging and storage medium
CN112560657A (en) * 2020-12-12 2021-03-26 南方电网调峰调频发电有限公司 Smoke and fire identification method and device, computer equipment and storage medium
CN112857580A (en) * 2021-01-08 2021-05-28 湖南中冶长天节能环保技术有限公司 High-temperature detection-discharge treatment method and system for activated carbon flue gas purification device
CN114399882A (en) * 2022-01-20 2022-04-26 红骐科技(杭州)有限公司 Fire source detection, identification and early warning method for fire-fighting robot
CN115311471A (en) * 2022-09-29 2022-11-08 如东延峰钢结构有限公司 Shuttle kiln sintering condition image identification method
CN116660317A (en) * 2023-07-25 2023-08-29 北京智芯微电子科技有限公司 Hot spot detection method, system, processor and storage medium of photovoltaic array
CN117689568A (en) * 2024-02-04 2024-03-12 及安盾(海南)科技有限公司 Fire detection method and system based on infrared and visible light image fusion

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103400111A (en) * 2013-07-10 2013-11-20 重庆大学 Method for detecting fire accident on expressway or in tunnel based on video detection technology
CN105488941A (en) * 2016-01-15 2016-04-13 中林信达(北京)科技信息有限责任公司 Double-spectrum forest fire disaster monitoring method and double-spectrum forest fire disaster monitoring device based on infrared-visible light image
CN106997461A (en) * 2017-03-28 2017-08-01 浙江大华技术股份有限公司 A kind of firework detecting method and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103400111A (en) * 2013-07-10 2013-11-20 重庆大学 Method for detecting fire accident on expressway or in tunnel based on video detection technology
CN105488941A (en) * 2016-01-15 2016-04-13 中林信达(北京)科技信息有限责任公司 Double-spectrum forest fire disaster monitoring method and double-spectrum forest fire disaster monitoring device based on infrared-visible light image
CN106997461A (en) * 2017-03-28 2017-08-01 浙江大华技术股份有限公司 A kind of firework detecting method and device

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
冷春艳: "Research on Key Technologies of Forest Fire Recognition Based on Infrared Image Processing", 《中国优秀硕士学位论文全文数据库(电子期刊) 信息科技辑》 *
王士迪, 裴海龙: "Flame Recognition Algorithm Based on Video Images", 《现代电子技术》 *
芮挺 et al.: "Multi-source Image Registration Using Harris Corners with SIFT Descriptors", 《光电工程》 *
赵倩: "Research on Video-based Tunnel Flame Detection Technology", 《中国优秀硕士学位论文全文数据库(电子期刊) 信息科技辑》 *
陈天炎 et al.: "Flame Image Segmentation Method Based on YCbCr Color Space", 《传感器与微系统》 *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110945566A (en) * 2018-08-01 2020-03-31 深圳市大疆创新科技有限公司 Image registration method, device, computer system and movable equipment
CN109146904A (en) * 2018-08-13 2019-01-04 合肥英睿系统技术有限公司 The method and apparatus of infrared image object profile is shown in visible images
CN108761436A (en) * 2018-08-27 2018-11-06 上海岗消网络科技有限公司 A kind of flame visual token device and method
CN108761436B (en) * 2018-08-27 2023-07-25 上海岗消网络科技有限公司 Flame vision distance measuring device and method
CN110097030A (en) * 2019-05-14 2019-08-06 武汉高德红外股份有限公司 It is a kind of based on infrared and visible images protrusion identification methods and system
CN111027541A (en) * 2019-11-15 2020-04-17 国网安徽省电力有限公司检修分公司 Flame detection method and system based on visible light and thermal imaging and storage medium
CN112560657B (en) * 2020-12-12 2023-05-30 南方电网调峰调频发电有限公司 Method, device, computer device and storage medium for identifying smoke and fire
CN112560657A (en) * 2020-12-12 2021-03-26 南方电网调峰调频发电有限公司 Smoke and fire identification method and device, computer equipment and storage medium
CN112857580A (en) * 2021-01-08 2021-05-28 湖南中冶长天节能环保技术有限公司 High-temperature detection-discharge treatment method and system for activated carbon flue gas purification device
CN114399882A (en) * 2022-01-20 2022-04-26 红骐科技(杭州)有限公司 Fire source detection, identification and early warning method for fire-fighting robot
CN115311471A (en) * 2022-09-29 2022-11-08 如东延峰钢结构有限公司 Shuttle kiln sintering condition image identification method
CN115311471B (en) * 2022-09-29 2022-12-27 如东延峰钢结构有限公司 Shuttle kiln sintering condition image identification method
CN116660317A (en) * 2023-07-25 2023-08-29 北京智芯微电子科技有限公司 Hot spot detection method, system, processor and storage medium of photovoltaic array
CN116660317B (en) * 2023-07-25 2023-12-22 北京智芯微电子科技有限公司 Hot spot detection method, system, processor and storage medium of photovoltaic array
CN117689568A (en) * 2024-02-04 2024-03-12 及安盾(海南)科技有限公司 Fire detection method and system based on infrared and visible light image fusion
CN117689568B (en) * 2024-02-04 2024-05-28 及安盾(海南)科技有限公司 Fire detection method and system based on infrared and visible light image fusion

Similar Documents

Publication Publication Date Title
CN108090495A (en) A kind of doubtful flame region extracting method based on infrared light and visible images
Gan et al. Immature green citrus fruit detection using color and thermal images
Oh et al. Approaching the computational color constancy as a classification problem through deep learning
Toet et al. Progress in color night vision
CN104966085B (en) A kind of remote sensing images region of interest area detecting method based on the fusion of more notable features
CN103761529B (en) A kind of naked light detection method and system based on multicolour model and rectangular characteristic
Ajmal et al. A comparison of RGB and HSV colour spaces for visual attention models
CN104021207B (en) A kind of food information providing method based on image
US8345936B2 (en) Multispectral iris fusion for enhancement and interoperability
CN103164695B (en) A kind of fruit identification method based on multi-source image information fusion
CN103473571B (en) Human detection method
CN108198157A (en) Heterologous image interfusion method based on well-marked target extracted region and NSST
CN106384117B (en) A kind of vehicle color identification method and device
CN103914708B (en) Food kind detection method based on machine vision and system
CN104376334B (en) A kind of pedestrian comparison method of multi-scale feature fusion
CN103927741A (en) SAR image synthesis method for enhancing target characteristics
CN109558825A (en) A kind of pupil center&#39;s localization method based on digital video image processing
CN105225235A (en) A kind of video flame detecting method based on multispectral characteristic
CN109344842A (en) A kind of pedestrian&#39;s recognition methods again based on semantic region expression
CN107481183B (en) Low-light-level image colorization method based on multi-dimensional data association rule
CN107633229A (en) Method for detecting human face and device based on convolutional neural networks
CN110298893A (en) A kind of pedestrian wears the generation method and device of color identification model clothes
CN107862333A (en) A kind of method of the judgment object combustion zone under complex environment
CN108090428A (en) A kind of face identification method and its system
CN109919071A (en) Flame identification method based on infrared multiple features combining technology

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180529

RJ01 Rejection of invention patent application after publication