CN113947711A - Dual-channel flame detection algorithm for inspection robot - Google Patents

Dual-channel flame detection algorithm for inspection robot

Info

Publication number
CN113947711A
Authority
CN
China
Prior art keywords
flame
area
region
visible light
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110866369.0A
Other languages
Chinese (zh)
Inventor
张鑫曈
刘林
王瑞芳
郭伟
张永来
张永利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Senhe Zhiku Robot Technology Co ltd
Original Assignee
Suzhou Senhe Zhiku Robot Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Senhe Zhiku Robot Technology Co ltd filed Critical Suzhou Senhe Zhiku Robot Technology Co ltd
Priority to CN202110866369.0A priority Critical patent/CN113947711A/en
Publication of CN113947711A publication Critical patent/CN113947711A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Probability & Statistics with Applications (AREA)
  • Geometry (AREA)
  • Fire-Detection Mechanisms (AREA)
  • Radiation Pyrometers (AREA)

Abstract

The invention discloses a dual-channel flame detection algorithm for an inspection robot, which comprises the following steps: S1, image acquisition: a visible light image and a thermal imaging image are collected respectively; S2, image extraction: a suspected flame region is extracted from the visible light image collected in S1, and a high-temperature region is extracted from the thermal imaging image collected in S1; S3, flame region registration: the suspected flame region extracted in S2 is matched against the high-temperature region; if the registration succeeds, the flame region is identified, and if it fails, subsequent analysis is performed according to the corresponding case. The dual-channel flame detection algorithm for the inspection robot improves the stability with which flame detection technology recognizes flames at different stages of development.

Description

Dual-channel flame detection algorithm for inspection robot
Technical Field
The invention belongs to the field of computer vision, relates to image processing technology and pattern recognition algorithms, and particularly relates to a dual-channel flame detection algorithm for an inspection robot.
Background
Flame detection has long been of importance in the field of computer vision. Based on images, it analyzes characteristics such as the shape, color and flicker of a flame at its different stages of development, and it involves techniques including image preprocessing, feature extraction and pattern recognition.
Existing flame detection schemes are mainly based on images collected by a visible light camera. They cannot stably recognize the varied characteristics and changing patterns that a flame exhibits at different stages of development, so false detections and missed detections occur easily.
Disclosure of Invention
To address these deficiencies of the prior art, the invention provides a dual-channel flame detection algorithm for an inspection robot that improves the stability with which flame detection technology recognizes flames at different stages of development.
In order to achieve the purpose, the invention adopts the following technical scheme:
a dual channel flame detection algorithm for an inspection robot, comprising the steps of:
S1, image acquisition: a visible light image and a thermal imaging image are respectively acquired using the dual-spectrum pan-tilt with which the inspection robot is equipped;
S2, image extraction: a suspected flame region is extracted from the visible light image collected in S1, and a high-temperature region is extracted from the thermal imaging image collected in S1;
S3, flame region registration: the suspected flame region extracted in S2 is matched against the high-temperature region; if the registration succeeds, the flame region is identified, and if it fails, subsequent analysis is performed according to the corresponding case.
In the above technical solution, preferably, the suspected flame region in the visible light image in S2 is extracted as follows:
S211: color space segmentation: color segmentation is performed on the visible light image collected in S1 to segment out the suspected flame color region;
S212: motion feature filtering: in the visible light image collected in S1, any two consecutive frames within the suspected flame color region segmented in S211 are subtracted pixel by pixel to obtain a difference image; a static region or a flame flicker region is determined according to the coincidence proportion of the pixel values in the difference image, and the flame flicker region is extracted;
S213: texture feature filtering: the texture features of the flame flicker region extracted in S212 are compared with the texture features of the K-means cluster centers obtained by training with the K-means clustering algorithm, and regions without flame texture features are filtered out to obtain the flame texture region, where the texture features comprise area and compactness;
S214: region positioning: the gray-level histogram of the flame texture region obtained in S213 is extracted, the occurrence frequency of pixels with gray levels between 165 and 255 is obtained and compared with the pixel coordinates and frequencies of the K-means cluster centers trained in S213, a region whose coincidence proportion is greater than or equal to 60% is determined to be a suspected flame region, and the detection result is output.
In the above technical solution, it is further preferable that, in S212, a region is determined to be static when the coincidence proportion of the pixel values in the difference image of two consecutive frames is greater than or equal to 40%, and to be a flame flicker region when that proportion is less than 40%.
In the foregoing technical solution, it is further preferable that the high-temperature region is a region whose temperature is greater than or equal to 60 °C and whose area is more than 4800 pixels.
in the above technical solution, it is still further preferable that the matching method of the flame region in S3 is as follows:
S301: image matching: the suspected flame region obtained in S214 is matched with the high-temperature region extracted from the thermal imaging image collected in S1; specifically, image matching means matching the pixel coordinates of the suspected flame region with the pixel coordinates of the high-temperature region.
S302: the overlap area of the two images successfully matched in S301 is calculated, and an overlap whose area ratio is greater than or equal to 60% is determined to be a flame region.
In the above technical solution, it is further preferable that, in S301, unsuccessful registration includes the following two cases, each of which is analyzed accordingly:
a: the thermal imaging image collected in S1 has no high-temperature region, and it is considered that there is no flame;
b: the thermal imaging image collected in S1 has a high-temperature region, but no suspected flame region is detected in the visible light image collected in S1; the focal length of the visible light camera used to collect the visible light image is reduced, and detection is performed again.
The dual-channel flame detection algorithm for an inspection robot disclosed by the invention has the following beneficial effect: through a reliable flame detection algorithm, the dual-spectrum pan-tilt carried by the inspection robot can effectively detect, or give early warning of, areas that pose a fire safety hazard.
Drawings
FIG. 1 is a flow chart of a flame detection algorithm of the present invention;
FIG. 2 is a flow chart of the present invention for extracting a suspected flame region from a visible light image;
FIG. 3 is the visible light recognition result in the actual operation effect diagram of the invention;
FIG. 4 is the thermal imaging recognition result in the actual operation effect diagram of the invention.
In the figures: suspected flame region 1, high-temperature region 2, and flame region 3.
Detailed Description
Embodiment:
Referring to FIG. 1 and FIG. 2, a dual-channel flame detection algorithm for an inspection robot comprises the following steps:
S1: image acquisition: a visible light image and a thermal imaging image are respectively acquired using the dual-spectrum pan-tilt with which the inspection robot is equipped. The resolution of the visible light image is 1920 x 1080, and the visible light camera used to acquire it has a 30x optical zoom. The resolution of the thermal imaging image is 640 x 480, and the thermal imaging camera used to acquire it has a fixed focal length of 17 mm. The focal length of the visible light camera is adjusted according to the position of the detected target, within a range of 4.3 mm to 129 mm;
S2, image extraction: a suspected flame region is extracted from the visible light image collected in S1, and a high-temperature region is extracted from the thermal imaging image collected in S1; the high-temperature region is a region whose temperature is greater than or equal to 60 °C and whose area is more than 4800 pixels.
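As an illustration (not part of the patent), the following Python/OpenCV sketch extracts such a high-temperature region from a per-pixel temperature map by thresholding at 60 °C and keeping connected components larger than 4800 pixels; the assumption that the thermal camera supplies a calibrated temperature map, and the function name, are illustrative.

```python
import cv2
import numpy as np

def extract_high_temperature_regions(temp_map, temp_thresh=60.0, min_area=4800):
    """Keep regions hotter than temp_thresh (deg C) whose connected area
    exceeds min_area pixels, as required for the high-temperature region."""
    hot = (temp_map >= temp_thresh).astype(np.uint8)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(hot, connectivity=8)
    mask = np.zeros_like(hot)
    for i in range(1, num):                      # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] > min_area:
            mask[labels == i] = 1
    return mask
```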
The steps for extracting the suspected flame region are as follows:
S211: color space segmentation: the visible light image collected in S1 is first converted from the RGB color model to the HSI color model using the following conversion formulas. When G ≥ B, H lies in [0°, 180°]; when G < B, H is set to 360° − θ, so that H lies in [180°, 360°]. H is then normalized to [0, 1] by H' = H/360°. If S = 0, the pixel is achromatic and H is meaningless, so H is defined as 0; when I = 0, S is likewise meaningless.
θ = arccos{ [(R − G) + (R − B)] / [2 · ((R − G)² + (R − B)(G − B))^(1/2)] }
H = θ when B ≤ G; H = 360° − θ when B > G
S = 1 − 3 · min(R, G, B) / (R + G + B)
I = (R + G + B) / 3
The three HSI components are used to judge flame pixels in the visible light image collected in S1: pixels satisfying 0° ≤ H ≤ 60°, 100 ≤ I ≤ 255 and 0.2 ≤ S ≤ 1.0 are judged to form the suspected flame color region, and this region is extracted.
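A minimal Python/NumPy sketch of S211 is given below; it implements the standard RGB-to-HSI conversion described above together with the 0°–60° hue, 100–255 intensity and 0.2–1.0 saturation rule. The function names and the small epsilon guards against division by zero are illustrative additions.

```python
import numpy as np

def rgb_to_hsi(img):
    """img: HxWx3 uint8 RGB image. Returns H in degrees, S in [0,1], I in [0,255]."""
    rgb = img.astype(np.float64)
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    I = (R + G + B) / 3.0
    S = 1.0 - 3.0 * np.minimum(np.minimum(R, G), B) / (R + G + B + 1e-6)
    num = 0.5 * ((R - G) + (R - B))
    den = np.sqrt((R - G) ** 2 + (R - B) * (G - B)) + 1e-6
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    H = np.where(B <= G, theta, 360.0 - theta)   # H = 360 - theta when B > G
    return H, S, I

def suspected_flame_color_mask(img):
    """Binary mask of pixels satisfying the S211 flame color rule:
    0 <= H <= 60 deg, 100 <= I <= 255, 0.2 <= S <= 1.0."""
    H, S, I = rgb_to_hsi(img)
    return ((H >= 0) & (H <= 60) &
            (I >= 100) & (I <= 255) &
            (S >= 0.2) & (S <= 1.0)).astype(np.uint8)
```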
S212: filtering the motion characteristics: in the visible light image collected in S1, two consecutive frames of images in the color region of the suspected flame segmented in S211 are arbitrarily selected for pixel subtraction to obtain a difference image, a static region or a flame flicker region is determined according to the proportion of coincidence of the pixel values of the difference image, wherein the proportion of coincidence of the pixel values of the pixel difference image in the two consecutive frames of images is greater than or equal to 40% and is determined as the static region, the proportion of coincidence of the pixel values of the pixel difference image in the two consecutive frames of images is less than 40% and is determined as the flame flicker region, and the flame flicker region is extracted;
S213: texture feature filtering: the texture features of the flame flicker region extracted in S212 are compared with the texture features of the K-means cluster centers obtained by training with the K-means clustering algorithm, and regions without flame texture features are filtered out to obtain the flame texture region, where the texture features comprise area and compactness;
Specifically, the K-means clustering algorithm is implemented as follows.
Training the K-means clustering comprises the following steps (see the code sketch after step 4):
1) 1000 flame samples are selected, and the flame area and compactness features are extracted with a HALCON operator and represented in vector form; 8 data objects are chosen arbitrarily in the data space as initial centers, each representing a cluster center;
2) for every other data object in the sample set, the Euclidean distances to the chosen cluster centers are computed, and the object is assigned, by the nearest-neighbor criterion, to the class of the closest (most similar) cluster center;
3) cluster center update: the mean of the flame area and compactness features of all data objects in each class is taken as the new cluster center of that class, and the distance from each data object to its cluster center is computed and recorded as the objective function;
4) whether the cluster centers and the value of the objective function have changed is checked; if not, the result is output, otherwise the procedure returns to step 2).
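The clustering loop of steps 1)–4) can be sketched directly in NumPy as below. Extraction of the area and compactness features (done in the patent with a HALCON operator) is outside the sketch, which assumes a pre-built (N, 2) feature array and reproduces only the K = 8 clustering procedure; the function name and random seeding are illustrative.

```python
import numpy as np

def kmeans_train(features, k=8, max_iter=100, seed=0):
    """Train the K-means cluster centers used in S213.

    features: (N, 2) array of [area, compactness] values from flame samples.
    Returns the (k, 2) cluster centers.
    """
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]  # step 1
    prev_obj = None
    for _ in range(max_iter):
        # step 2: assign each sample to its nearest center (Euclidean distance)
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # step 3: update each center to the mean of its assigned samples
        centers = np.array([features[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
        obj = dists[np.arange(len(features)), labels].sum()  # objective function
        # step 4: stop when the objective function no longer changes
        if prev_obj is not None and np.isclose(obj, prev_obj):
            break
        prev_obj = obj
    return centers
```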
Identification of the flame texture region: the area and compactness features of the flame flicker region obtained in S212 are extracted with a HALCON operator and compared with those of the cluster centers trained by the above steps; if the similarity is greater than 40%, the region is regarded as a flame texture region.
S214: area positioning: extracting the gray level histogram of the flame texture region obtained in S213, obtaining the occurrence frequency of pixel points with the gray level between 165 and 255, comparing the occurrence frequency with the pixel coordinates and the frequency of the pixel points of the K-means clustering center obtained by training in S213, determining the region with the coincidence proportion of more than or equal to 60 percent as a suspected flame region, and outputting a detection result.
S3: registering the flame area: and performing region matching on the suspected flame region extracted in the step S214 and the high-temperature region extracted in the step S1, identifying the flame region if the registration is successful, and performing subsequent analysis according to the corresponding condition if the registration is unsuccessful.
The matching method of the flame region in S3 is as follows:
S301: image matching: the suspected flame region obtained in S214 is matched with the high-temperature region extracted from the thermal imaging image collected in S1; image matching means matching the pixel coordinates of the suspected flame region with the pixel coordinates of the high-temperature region;
S302: the overlap area of the two images successfully matched in S301 is calculated, and an overlap whose area ratio is greater than or equal to 60% is determined to be a flame region, where the overlap area refers to the overlap of the pixel points.
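A sketch of S301–S302 follows, assuming both masks have already been mapped into a common pixel coordinate frame (for example the thermal image frame, via the calibration of the dual-spectrum pan-tilt, which the patent does not detail). The denominator used for the 60% overlap ratio (here the smaller of the two region areas) is likewise an assumption, since the patent does not specify it.

```python
import numpy as np

def register_flame_region(flame_mask_vis, hot_mask_thermal, overlap_thresh=0.6):
    """Sketch of the registration step. Both masks must share one pixel frame.
    Returns the registered flame region mask, or None if registration fails."""
    vis_area = np.count_nonzero(flame_mask_vis)
    hot_area = np.count_nonzero(hot_mask_thermal)
    if hot_area == 0 or vis_area == 0:
        return None  # corresponds to cases a and b of unsuccessful registration
    overlap = (flame_mask_vis > 0) & (hot_mask_thermal > 0)
    ratio = np.count_nonzero(overlap) / min(vis_area, hot_area)
    return overlap.astype(np.uint8) if ratio >= overlap_thresh else None
```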
In addition, in S3, unsuccessful registration includes the following two cases, each of which is analyzed accordingly:
a: the thermal imaging image collected in S1 has no high-temperature region (temperature greater than or equal to 60 °C and area more than 4800 pixels), and it is considered that there is no flame;
b: the thermal imaging image collected in S1 has a high-temperature region, but no suspected flame region is detected in the visible light image collected in S1; the focal length of the visible light camera used to collect the visible light image is reduced according to the target position, and detection is performed again.
Referring to FIG. 3 and FIG. 4, the region enclosed by a solid line in the visible light image of FIG. 3 is the suspected flame region 1, the region enclosed by a solid line in the thermal imaging image of FIG. 4 is the high-temperature region 2, and the black highlighted portion in FIG. 4 is the region where dual-channel flame region registration succeeded, i.e. the flame region 3, so a fire is determined to have occurred. Because the thermal imaging has a larger field of view, the final result is displayed in the thermal imaging image.

Claims (6)

1. A dual-channel flame detection algorithm for an inspection robot is characterized by comprising the following steps:
S1, image acquisition: a visible light image and a thermal imaging image are collected respectively;
S2, image extraction: a suspected flame region is extracted from the visible light image collected in S1, and a high-temperature region is extracted from the thermal imaging image collected in S1;
S3, flame region registration: the suspected flame region extracted in S2 is matched against the high-temperature region; if the registration succeeds, the flame region is identified, and if it fails, subsequent analysis is performed according to the corresponding case.
2. The dual-channel flame detection algorithm for an inspection robot according to claim 1, wherein the suspected flame region in the visible light image in S2 is extracted as follows:
S211: color space segmentation: color segmentation is performed on the visible light image collected in S1 to segment out the suspected flame color region;
S212: motion feature filtering: in the visible light image collected in S1, any two consecutive frames within the suspected flame color region segmented in S211 are subtracted pixel by pixel to obtain a difference image, a static region or a flame flicker region is determined according to the coincidence proportion of the pixel values in the difference image, and the flame flicker region is extracted;
S213: texture feature filtering: the texture features of the flame flicker region extracted in S212 are compared with the texture features of the K-means cluster centers obtained by training with the K-means clustering algorithm, and regions without flame texture features are filtered out to obtain the flame texture region, where the texture features comprise area and compactness;
S214: region positioning: the gray-level histogram of the flame texture region obtained in S213 is extracted, the occurrence frequency of pixels with gray levels between 165 and 255 is obtained and compared with the pixel coordinates and frequencies of the K-means cluster centers trained in S213, a region whose coincidence proportion is greater than or equal to 60% is determined to be a suspected flame region, and the detection result is output.
3. The dual-channel flame detection algorithm for an inspection robot according to claim 2, wherein in S212, a region is determined to be static when the coincidence proportion of the pixel values in the difference image of two consecutive frames is greater than or equal to 40%, and to be a flame flicker region when that proportion is less than 40%.
4. The dual-channel flame detection algorithm for an inspection robot according to claim 3, wherein the high-temperature region is a region whose temperature is greater than or equal to 60 °C and whose area is more than 4800 pixels.
5. The dual-channel flame detection algorithm for an inspection robot according to claim 4, wherein the flame region in S3 is matched as follows:
S301: image matching: the suspected flame region obtained in S214 is matched with the high-temperature region extracted from the thermal imaging image collected in S1;
S302: the overlap area of the two images successfully matched in S301 is calculated, and an overlap whose area ratio is greater than or equal to 60% is determined to be a flame region.
6. The dual-channel flame detection algorithm for an inspection robot according to claim 5, wherein unsuccessful registration in S301 includes the following two cases, each of which is analyzed accordingly:
a: the thermal imaging image collected in S1 has no high-temperature region, and it is considered that there is no flame;
b: the thermal imaging image collected in S1 has a high-temperature region, but no suspected flame region is detected in the visible light image collected in S1; the focal length of the visible light camera used to collect the visible light image is reduced, and detection is performed again.
CN202110866369.0A 2021-07-29 2021-07-29 Dual-channel flame detection algorithm for inspection robot Pending CN113947711A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110866369.0A CN113947711A (en) 2021-07-29 2021-07-29 Dual-channel flame detection algorithm for inspection robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110866369.0A CN113947711A (en) 2021-07-29 2021-07-29 Dual-channel flame detection algorithm for inspection robot

Publications (1)

Publication Number Publication Date
CN113947711A true CN113947711A (en) 2022-01-18

Family

ID=79327676

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110866369.0A Pending CN113947711A (en) 2021-07-29 2021-07-29 Dual-channel flame detection algorithm for inspection robot

Country Status (1)

Country Link
CN (1) CN113947711A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114399882A (en) * 2022-01-20 2022-04-26 红骐科技(杭州)有限公司 Fire source detection, identification and early warning method for fire-fighting robot
CN117173854A (en) * 2023-09-13 2023-12-05 西安博深安全科技股份有限公司 Coal mine open fire early warning method and system based on deep learning
CN117173854B (en) * 2023-09-13 2024-04-05 西安博深安全科技股份有限公司 Coal mine open fire early warning method and system based on deep learning

Similar Documents

Publication Publication Date Title
CN110070570B (en) Obstacle detection system and method based on depth information
CN111369516B (en) Transformer bushing heating defect detection method based on infrared image recognition
EP1835462A1 (en) Tracing device, and tracing method
CN104978567B (en) Vehicle checking method based on scene classification
CN113947711A (en) Dual-channel flame detection algorithm for inspection robot
CN110084830B (en) Video moving object detection and tracking method
CN109859246B (en) Low-altitude slow unmanned aerial vehicle tracking method combining correlation filtering and visual saliency
CN109165602B (en) Black smoke vehicle detection method based on video analysis
CN105139011B (en) A kind of vehicle identification method and device based on mark object image
CN109344842A (en) A pedestrian re-identification method based on semantic region expression
Halstead et al. Locating people in video from semantic descriptions: A new database and approach
CN110263662B (en) Human body contour key point and key part identification method based on grading
CN112131976B (en) Self-adaptive portrait temperature matching and mask recognition method and device
CN111582118A (en) Face recognition method and device
CN112396011A (en) Face recognition system based on video image heart rate detection and living body detection
Can et al. Detection and tracking of sea-surface targets in infrared and visual band videos using the bag-of-features technique with scale-invariant feature transform
CN111508006A (en) Moving target synchronous detection, identification and tracking method based on deep learning
CN111563896A (en) Image processing method for catenary anomaly detection
CN110675442B (en) Local stereo matching method and system combined with target recognition technology
CN108563997B (en) Method and device for establishing face detection model and face recognition
CN107145820B (en) Binocular positioning method based on HOG characteristics and FAST algorithm
CN117036259A (en) Metal plate surface defect detection method based on deep learning
CN116862832A (en) Three-dimensional live-action model-based operator positioning method
Ortego et al. Long-term stationary object detection based on spatio-temporal change detection
US20220036114A1 (en) Edge detection image capture and recognition system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination