CN115909196A - Video flame detection method and system - Google Patents

Video flame detection method and system

Info

Publication number
CN115909196A
CN115909196A
Authority
CN
China
Prior art keywords
flame
target area
video
target
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211370215.3A
Other languages
Chinese (zh)
Inventor
唐俊
徐威
孙鑫
赵雷雷
赵全祐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ANHUI BOWEI GUANGCHENG INFORMATION TECHNOLOGY CO LTD
Anhui University
Original Assignee
ANHUI BOWEI GUANGCHENG INFORMATION TECHNOLOGY CO LTD
Anhui University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ANHUI BOWEI GUANGCHENG INFORMATION TECHNOLOGY CO LTD, Anhui University filed Critical ANHUI BOWEI GUANGCHENG INFORMATION TECHNOLOGY CO LTD
Priority to CN202211370215.3A priority Critical patent/CN115909196A/en
Publication of CN115909196A publication Critical patent/CN115909196A/en
Pending legal-status Critical Current

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A — TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A30/00 — Adapting or protecting infrastructure or their operation
    • Y02A30/60 — Planning or developing urban green infrastructure

Landscapes

  • Fire-Detection Mechanisms (AREA)

Abstract

The invention relates to the technical field of flame detection and solves the technical problem that false alarms and missed alarms easily occur when flame is detected from video images. It specifically relates to a video flame detection method comprising the following steps: S1, detecting the current frame of the video with a trained flame detector to obtain initially selected flame target regions; S2, screening the initially selected flame target regions with an RGB color model to obtain more accurate rechecked flame target regions; and S3, extracting foreground flame from the rechecked flame target regions through Gaussian-mixture background modeling to obtain candidate flame target regions without background information. The detection method provided by the invention eliminates lights and other objects that are easily confused with real flame, greatly reduces the false alarm rate, improves the accuracy of flame detection, and can raise an alarm in the early stage of a fire, reducing the possibility of the fire spreading and minimizing losses.

Description

Video flame detection method and system
Technical Field
The invention relates to the technical field of flame detection, in particular to a video flame detection method and system.
Background
Flame detection is an important field of current academic research and is of great significance: it can help people discover fires early and reduce losses of life and property. The conventional fire detection methods commonly used at present generally rely on sensing equipment for environmental monitoring and are mainly applied to fire detection and management in urban buildings. These methods mainly include temperature-sensing fire detection, ionization smoke detection, smoke-sensing fire detection, photoelectric smoke detection, ultraviolet detection, infrared detection, and the like.
The performance of these traditional fire detection methods depends on the reliability and placement of the sensors; they can only cover a small space, their detection accuracy is strongly affected by the environment, and missed alarms or false alarms often occur. If a surveillance system is instead monitored manually, a large amount of labor is needed to watch the monitoring screens at all times, which increases the possibility of human error and requires large data storage capacity and high cost, so false alarms and missed alarms of flame still easily occur.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a video flame detection method and system, which solve the technical problem that false alarms and missed alarms easily occur when flame is detected from video images.
In order to solve the technical problems, the invention provides the following technical scheme: a video flame detection method comprises the following steps:
S1, detecting the current frame of the video with a trained flame detector to obtain initially selected flame target regions;
S2, screening the initially selected flame target regions with an RGB color model to obtain more accurate rechecked flame target regions;
S3, extracting foreground flame from the rechecked flame target regions through Gaussian-mixture background modeling to obtain candidate flame target regions without background information;
and S4, judging the candidate flame target regions with a trained flame judgment model to obtain the finally determined flame regions.
Further, in step S1, the specific process of detecting the current frame of the video with the trained flame detector to obtain the initially selected flame target regions comprises the following steps:
S11, capturing a single-frame image from the current frame of the video;
S12, scaling the single-frame image to a specified size and feeding it into the flame detector;
and S13, setting a confidence threshold and obtaining, for each flame target box whose confidence exceeds the threshold, its center-point coordinates, width, height and confidence.
Further, the flame detector employs a YOLOV5 target detection network, the single frame image is scaled to a specified size of 640 × 640, and a confidence threshold is set to 0.5.
Further, in step S2, the specific process of screening the initially selected flame target regions according to the RGB color model to obtain more accurate rechecked flame target regions comprises the following steps:
S21, cropping the target region from the initially selected flame target region image;
and S22, traversing the pixels of each obtained target region one by one; if more than 60% of the pixels satisfy the color conditions, the target is retained, otherwise it is discarded.
Further, in step S3, the specific process of extracting foreground flame from the rechecked flame target regions through Gaussian-mixture background modeling to obtain candidate flame target regions without background information comprises the following steps:
S31, performing Gaussian-mixture background modeling on all pixels in the rechecked flame target region image to obtain a Gaussian-mixture background model;
S32, judging the pixels in the rechecked flame target region images with the Gaussian-mixture background model to obtain the foreground flame;
when a pixel satisfies |X − μ_i| < 2.5σ_i, the pixel belongs to the background, where X denotes the new pixel value, μ_i the mean and σ_i the standard deviation;
otherwise, the pixel belongs to the foreground flame;
S33, updating the Gaussian-mixture background model according to the judgment result;
and S34, cropping the foreground-extracted image according to the coordinate information of the rechecked flame target region to obtain the flame target region without background information.
Further, in step S4, the specific process of judging the candidate flame target regions with the trained flame judgment model to obtain the finally determined flame regions comprises the following steps:
S41, scaling the candidate flame target region image without background information to a specified size and feeding it into the flame judgment model for judgment;
and S42, the flame judgment model outputs a score; a score threshold is set, and whether the target region is a real flame region is judged according to the score threshold.
Further, the flame judgment model adopts a network structure combining a ResNet50 network and an FPN feature pyramid network, wherein the ResNet50 network comprises 49 convolutional layers and one fully connected layer, the image is scaled to a specified size of 56 × 56, and the score threshold is set to 0.5.
This technical scheme also provides a system for implementing the above video flame detection method. The video flame detection system comprises:
a flame detection module, used for detecting the current frame of the video with the trained flame detector to obtain initially selected flame target regions;
a detection result screening module, used for screening the initially selected flame target regions with the RGB color model to obtain more accurate rechecked flame target regions;
a foreground extraction module, used for extracting foreground flame from the rechecked flame target regions through Gaussian-mixture background modeling to obtain candidate flame target regions without background information;
and a flame judgment module, used for judging the candidate flame target regions with the trained flame judgment model to obtain the finally determined flame regions.
By means of the above technical scheme, the invention provides a video flame detection method and system with at least the following beneficial effects:
the invention extracts the foreground with a Gaussian-mixture background-modeling method and adds a flame judgment model to determine whether the video image contains a real flame target. This eliminates lights and other easily confused targets whose color is similar to that of real flame, greatly reduces the false alarm rate, improves the accuracy of flame detection, and makes it possible to raise an alarm in the early stage of a fire, reducing the possibility of the fire spreading and minimizing losses.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flow chart of a video flame detection method of the present invention;
FIG. 2 is a block diagram of a video flame detection system according to the present invention;
FIG. 3 is a block diagram of the YOLOV5 target detection network of the present invention;
FIG. 4 is a network structure diagram of a flame determiner model according to the present invention;
In the figures: 10, flame detection module; 20, detection result screening module; 30, foreground extraction module; 40, flame judgment module.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments are described in further detail below with reference to the accompanying figures, so that the process of solving the technical problems and achieving the technical effects by the technical means can be fully understood and implemented.
It will be understood by those skilled in the art that all or part of the steps in the method for implementing the above embodiments may be implemented by relevant hardware instructed by a program, and therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Referring to fig. 1 to 4, a specific implementation of the present embodiment is shown. In the prior art, video flame detection relies on only a flame detector or only an RGB color model for judgment; non-flame targets with high brightness and colors close to flame, such as lights and leaves, may then be judged as flame, and the false alarm rate is high, so the detection result of a single module cannot be fully trusted. Therefore, a method of extracting the foreground with Gaussian-mixture background modeling and adding a flame judgment model is proposed as the innovation of the present embodiment.
Referring to fig. 1, the present embodiment provides a video flame detection method, which includes the following steps:
S1, detecting the current frame of the video with a trained flame detector to obtain initially selected flame target regions;
In step S1, the specific process of detecting the current frame of the video with the trained flame detector to obtain the initially selected flame target regions comprises the following steps:
S11, capturing a single-frame image from the current frame of the video;
S12, scaling the single-frame image to a specified size and feeding it into the flame detector;
and S13, setting a confidence threshold and obtaining, for each flame target box whose confidence exceeds the threshold, its center-point coordinates, width, height and confidence.
Referring to fig. 3, the flame detector employs a YOLOv5 target detection network, the single-frame image is scaled to a size of 640 × 640, and the confidence threshold is set to 0.5.
The flame detector is trained on a dataset collected in real scenes, containing 23546 training images and 5887 validation images. The images are annotated with the labeling tool labelImg, and the resulting xml files are converted into txt files. The images and the txt files containing the annotation information are used together as the input of the network; the image input size is 640 × 640, and each txt file contains the center-point coordinates, width, height and label information of every annotation box. Finally, the trained flame detector is evaluated on the validation set to improve its detection accuracy. A sketch of such an xml-to-txt conversion is shown below.
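For illustration only, the following is a minimal sketch of a labelImg xml-to-txt conversion of the kind described above. It assumes Pascal-VOC-style XML output from labelImg, a single class named "fire" mapped to class index 0, and the common YOLO txt convention of one normalized "class x_center y_center width height" line per annotation box; the class name, class index and file paths are assumptions, not taken from the patent.

```python
# Sketch: convert one labelImg (Pascal VOC) XML annotation file into a YOLO-style
# txt file with lines "class x_center y_center width height" normalized to [0, 1].
import xml.etree.ElementTree as ET
from pathlib import Path

def voc_xml_to_yolo_txt(xml_path: str, txt_path: str, class_map={"fire": 0}) -> None:
    root = ET.parse(xml_path).getroot()
    img_w = float(root.find("size/width").text)
    img_h = float(root.find("size/height").text)
    lines = []
    for obj in root.iter("object"):
        name = obj.find("name").text
        if name not in class_map:          # skip classes that are not trained on
            continue
        box = obj.find("bndbox")
        xmin, ymin = float(box.find("xmin").text), float(box.find("ymin").text)
        xmax, ymax = float(box.find("xmax").text), float(box.find("ymax").text)
        # corner coordinates -> normalized center point, width and height
        xc = (xmin + xmax) / 2.0 / img_w
        yc = (ymin + ymax) / 2.0 / img_h
        w = (xmax - xmin) / img_w
        h = (ymax - ymin) / img_h
        lines.append(f"{class_map[name]} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}")
    Path(txt_path).write_text("\n".join(lines))
```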
When the model performs inference, a confidence threshold needs to be set manually: if the threshold is set too high, detections are missed, and if it is set too low, false detections occur. Considering that a flame judgment stage is added later, the confidence threshold is set to 0.5; detection results above 0.5 are kept and the rest are discarded. Therefore, the single-frame image is scaled to 640 × 640 and fed into the flame detector, which returns the center-point coordinates, width, height and confidence of each flame target box.
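As a hedged illustration of step S1, the sketch below runs a YOLOv5 model on one frame with the 640 × 640 input size and 0.5 confidence threshold mentioned above. It assumes the trained flame weights are stored in a file such as "flame_yolov5.pt" (a hypothetical path) and that the model is loaded through the public ultralytics/yolov5 torch.hub interface; this is not the patent's own code.

```python
# Sketch of step S1: detect initially selected flame target boxes in one frame.
import cv2
import torch

# Hypothetical custom weights; "custom" is the standard YOLOv5 hub entry point.
model = torch.hub.load("ultralytics/yolov5", "custom", path="flame_yolov5.pt")
model.conf = 0.5  # keep only detections with confidence above 0.5

def detect_flame_candidates(frame_bgr):
    """Return a list of (x_center, y_center, width, height, confidence) tuples."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    results = model(rgb, size=640)          # letterbox-resized to 640 internally
    boxes = results.xywh[0].cpu().numpy()   # columns: xc, yc, w, h, conf, class
    return [tuple(map(float, b[:5])) for b in boxes]
```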
S2, screening the initially selected flame target regions with an RGB color model to obtain more accurate rechecked flame target regions;
In step S2, the specific process of screening the initially selected flame target regions according to the RGB color model to obtain more accurate rechecked flame target regions comprises the following steps:
S21, cropping the target region from the initially selected flame target region image;
and S22, traversing the pixels of each obtained target region one by one; if more than 60% of the pixels satisfy the color conditions, the target is retained, otherwise it is discarded.
The RGB color model is also referred to as an additive color-mixing model. According to the principle of the three primary colors, the amount of light is expressed in units of the primary lights. In the RGB color model, any color light C can be obtained by additively mixing different amounts of the R, G and B components, expressed as C = r[R] + g[G] + b[B]. When the three primary-color components are all 0, C is black; when they are all 1, C is white.
The RGB color model covers almost all colors, and adjusting any one of the three coefficients r, g, b changes the resulting color value. Twenty images containing flame were analyzed; they contain about 3.064 million pixels in total, of which about 1.034 million are flame pixels. Ten images containing no flame were also analyzed, containing about 1.445 million pixels in total. Comparing the color distribution of the flame pixels with that of interference sources such as lights and helmets shows that the three primary-color components of flame approximately satisfy the following four conditions:
R_mean = (1/k) · Σ_{i=1}^{k} R(x_i, y_i)
R(x, y) > R_mean
R(x, y) > G(x, y) > B(x, y)
R(x, y) > 200, G(x, y) < 200, B(x, y) < 100
where R(x, y), G(x, y) and B(x, y) respectively denote the values of the three primary-color components of pixel (x, y) in the RGB model, R(x_i, y_i) denotes the red component of the i-th pixel, k denotes the number of pixels in the image, and R_mean denotes the mean red component over all pixels in the image. The regions detected by the flame detector are traversed and screened with this RGB color model: if more than 60% of the pixels in a region satisfy the above four conditions, the region is kept; otherwise, it is discarded.
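A minimal sketch of this RGB screening follows, assuming the cropped region is an H×W×3 NumPy array in RGB channel order; the whole-image red mean can be passed in separately (as the text computes R_mean over the image), with a fallback to the region mean, which is an assumption.

```python
# Sketch of step S2: keep a detected region only if enough pixels look flame-colored.
import numpy as np

def passes_rgb_screen(region_rgb, frame_red_mean=None, ratio=0.6):
    r = region_rgb[..., 0].astype(np.float32)
    g = region_rgb[..., 1].astype(np.float32)
    b = region_rgb[..., 2].astype(np.float32)
    # R_mean over the whole frame if provided, otherwise over the region (assumption).
    r_mean = frame_red_mean if frame_red_mean is not None else float(r.mean())
    cond = (
        (r > r_mean) &
        (r > g) & (g > b) &
        (r > 200) & (g < 200) & (b < 100)
    )
    return cond.mean() > ratio  # "more than 60%" of pixels must satisfy the conditions
```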
Because judging flame with only the flame detector and the RGB color model still yields a high false alarm rate in certain scenes, such as lights and the sun, a method of extracting the foreground with Gaussian-mixture background modeling and adding a flame judgment model is further proposed to eliminate false detections; the flame target regions that finally remain are the real flame regions.
S3, extracting foreground flame from the rechecked flame target regions through Gaussian-mixture background modeling to obtain candidate flame target regions without background information;
In step S3, the specific process of extracting foreground flame from the rechecked flame target regions through Gaussian-mixture background modeling to obtain candidate flame target regions without background information comprises the following steps:
S31, performing Gaussian-mixture background modeling on all pixels in the rechecked flame target region image to obtain a Gaussian-mixture background model, whose mathematical expression is:
P(X_t) = Σ_{i=1}^{K} ω_{i,t} · η(X_t, μ_{i,t}, Σ_{i,t})
η(X_t, μ_{i,t}, Σ_{i,t}) = 1 / ((2π)^{n/2} · |Σ_{i,t}|^{1/2}) · e^{−(1/2)·(X_t − μ_{i,t})^T · Σ_{i,t}^{−1} · (X_t − μ_{i,t})}
where K denotes the number of Gaussian components, generally between 3 and 5, ω_{i,t} denotes the weight of the i-th component at time t, η(X_t, μ_{i,t}, Σ_{i,t}) denotes the i-th Gaussian function at time t, μ_{i,t} denotes its mean, Σ_{i,t} denotes its covariance, X_t denotes the observed pixel value at time t, n denotes the dimension of X_t, e denotes the base of the natural logarithm, and T denotes transposition.
S32, judging the pixels in the rechecked flame target region images with the Gaussian-mixture background model to obtain the foreground flame;
when a pixel satisfies |X − μ_i| < 2.5σ_i, the pixel belongs to the background, where X denotes the new pixel value, μ_i the mean and σ_i the standard deviation;
otherwise, the pixel belongs to the foreground flame;
S33, updating the Gaussian-mixture background model according to the judgment result;
and S34, cropping the foreground-extracted image according to the coordinate information of the rechecked flame target region to obtain the flame target region without background information. A minimal code sketch of this foreground-extraction process is given below.
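The sketch below illustrates step S3 with OpenCV's MOG2 Gaussian-mixture background subtractor standing in for the per-pixel model described above; mapping the |X − μ_i| < 2.5σ_i rule onto MOG2's squared-distance threshold varThreshold = 2.5² is an assumption about parameter correspondence, and the box format matches the detector output sketched earlier.

```python
# Sketch of step S3: zero out background pixels inside a rechecked flame box.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500, varThreshold=2.5 ** 2, detectShadows=False)

def extract_foreground_flame(frame_bgr, box_xywh):
    """Return the cropped region with background pixels set to zero."""
    fg_mask = subtractor.apply(frame_bgr)        # classifies pixels and updates the model
    h_img, w_img = frame_bgr.shape[:2]
    xc, yc, w, h = box_xywh
    x1, y1 = max(int(xc - w / 2), 0), max(int(yc - h / 2), 0)
    x2, y2 = min(int(xc + w / 2), w_img), min(int(yc + h / 2), h_img)
    region = frame_bgr[y1:y2, x1:x2]
    keep = (fg_mask[y1:y2, x1:x2] > 0)[..., None]   # foreground mask for this region
    return region * keep.astype(region.dtype)
```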
S4, judging the candidate flame target regions with the trained flame judgment model to obtain the finally determined flame regions.
In step S4, the specific process of judging the candidate flame target regions with the trained flame judgment model to obtain the finally determined flame regions comprises the following steps:
S41, scaling the candidate flame target region image without background information to a specified size and feeding it into the flame judgment model for judgment;
and S42, the flame judgment model outputs a score; a score threshold is set, and whether the target region is a real flame region is judged according to the score threshold.
Referring to fig. 4, the flame judgment model adopts a network structure combining a ResNet50 network and an FPN feature pyramid network; the ResNet50 network comprises 49 convolutional layers and one fully connected layer, the image is scaled to a specified size of 56 × 56, and the score threshold is set to 0.5.
The first part of the ResNet50 structure mainly performs convolution, normalization, activation and max pooling on the input; the second, third, fourth and fifth parts consist of residual blocks. The FPN feature pyramid network plays two roles: first, it fuses multi-scale features and enriches the feature representation; second, it divides the task into several sub-tasks according to different target sizes. The loss function used by the flame judgment model is BCELoss, i.e. the binary cross-entropy loss:
L = −(1/n) · Σ_{i=1}^{n} [ y_i · log l(φ_θ(x_i)) + (1 − y_i) · log(1 − l(φ_θ(x_i))) ]
where x_i is the input image, y_i is its label, φ_θ(x_i) is the extracted feature, l is the sigmoid activation function, and n denotes the number of samples.
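As a sketch, the binary cross-entropy above can be computed with PyTorch's built-in BCE-with-logits loss, treating the network output as the logit to which the sigmoid l(·) is applied; the tensor values in the usage example are made up for illustration.

```python
# Sketch of the BCE loss used to train the flame judgment model.
import torch
import torch.nn.functional as F

def flame_bce_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Equivalent to -(1/n) * sum_i [ y_i*log(sigmoid(z_i)) + (1-y_i)*log(1-sigmoid(z_i)) ]
    return F.binary_cross_entropy_with_logits(logits, labels.float())

# Usage example with the label convention from the text (flame = 0, non-flame = 1).
logits = torch.tensor([-2.3, 1.7])   # raw network outputs for two samples
labels = torch.tensor([0.0, 1.0])
print(flame_bce_loss(logits, labels))
```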
The flame judgment model is trained on a dataset containing flame images; the dataset comprises 20000 images in total, 10000 containing flame and 10000 containing no flame. The image input size is 56 × 56; flame images are positive samples labeled "0" and non-flame images are negative samples labeled "1". The training set is fed into the flame judgment model for training to obtain the trained flame judgment model.
The candidate flame region image is scaled to 56 × 56 and fed into the flame judgment model to obtain a score. The score threshold is set to 0.5: if the final score is less than or equal to the threshold, the region is considered a real flame target region; otherwise, it is discarded.
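Finally, a minimal sketch of judgment step S4 is given, assuming the trained ResNet50+FPN judgment model has been exported to a TorchScript file "flame_judge.pt" (a hypothetical name) that outputs a single logit; scaling to 56 × 56, applying the sigmoid and comparing the score with the 0.5 threshold follow the text above.

```python
# Sketch of step S4: decide whether a candidate region is a real flame region.
import cv2
import numpy as np
import torch

judge_model = torch.jit.load("flame_judge.pt").eval()   # hypothetical exported model

def is_real_flame(region_bgr: np.ndarray, score_threshold: float = 0.5) -> bool:
    img = cv2.resize(region_bgr, (56, 56)).astype(np.float32) / 255.0
    tensor = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)  # shape 1x3x56x56
    with torch.no_grad():
        score = torch.sigmoid(judge_model(tensor)).item()  # assumes one logit output
    return score <= score_threshold   # low score -> positive class "0" (flame)
```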
In this embodiment, a Gaussian-mixture background-modeling foreground-extraction method and an added flame judgment model are used to determine whether the video image contains a real flame target. This eliminates lights and other easily confused targets whose color is similar to that of real flame, greatly reduces the false alarm rate, improves the accuracy of flame detection, and makes it possible to raise an alarm in the early stage of a fire, reducing the possibility of the fire spreading and minimizing losses.
This embodiment also provides a system corresponding to the video flame detection method of the foregoing embodiment. Since the video flame detection system provided in this embodiment corresponds to the video flame detection method provided above, the foregoing description of the method also applies to the system and is not repeated in detail in this embodiment.
Referring to fig. 2, a block diagram of a video flame detection system according to this embodiment is shown, where the video flame detection system includes:
the flame detection module 10 is used for detecting the current frame of the video with the trained flame detector to obtain initially selected flame target regions;
the detection result screening module 20 is used for screening the initially selected flame target regions with the RGB color model to obtain more accurate rechecked flame target regions;
the foreground extraction module 30 is used for extracting foreground flame from the rechecked flame target regions through Gaussian-mixture background modeling to obtain candidate flame target regions without background information;
and the flame judgment module 40 is used for judging the candidate flame target regions with the trained flame judgment model to obtain the finally determined flame regions.
It should be noted that when the detection system provided in the foregoing embodiment implements its functions, the division into the above functional modules is only used as an example; in practical applications the functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the system and method embodiments provided above belong to the same concept; their specific implementation process is detailed in the method embodiment and is not repeated here.
The foregoing embodiments have described the present invention in detail, and the principle and embodiments of the present invention are explained by applying specific examples herein, and the descriptions of the foregoing embodiments are only used to help understand the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (8)

1. A video flame detection method is characterized by comprising the following steps:
S1, detecting the current frame of the video with a trained flame detector to obtain initially selected flame target regions;
S2, screening the initially selected flame target regions with an RGB color model to obtain more accurate rechecked flame target regions;
S3, extracting foreground flame from the rechecked flame target regions through Gaussian-mixture background modeling to obtain candidate flame target regions without background information;
and S4, judging the candidate flame target regions with a trained flame judgment model to obtain the finally determined flame regions.
2. The video flame detection method of claim 1, wherein in step S1 the specific process of detecting the current frame of the video with the trained flame detector to obtain the initially selected flame target regions comprises the following steps:
S11, capturing a single-frame image from the current frame of the video;
S12, scaling the single-frame image to a specified size and feeding it into the flame detector;
and S13, setting a confidence threshold and obtaining, for each flame target box whose confidence exceeds the threshold, its center-point coordinates, width, height and confidence.
3. The video flame detection method of claim 2, wherein: the flame detector uses a YOLOV5 target detection network, the single frame image is scaled to a specified size of 640 x 640, and a confidence threshold is set to 0.5.
4. The video flame detection method of claim 1, wherein in step S2 the specific process of screening the initially selected flame target regions according to the RGB color model to obtain more accurate rechecked flame target regions comprises the following steps:
S21, cropping the target region from the initially selected flame target region image;
and S22, traversing the pixels of each obtained target region one by one; if more than 60% of the pixels satisfy the color conditions, the target is retained, otherwise it is discarded.
5. The video flame detection method of claim 1, wherein in step S3 the specific process of extracting foreground flame from the rechecked flame target regions through Gaussian-mixture background modeling to obtain candidate flame target regions without background information comprises the following steps:
S31, performing Gaussian-mixture background modeling on all pixels in the rechecked flame target region image to obtain a Gaussian-mixture background model;
S32, judging the pixels in the rechecked flame target region images with the Gaussian-mixture background model to obtain the foreground flame;
when a pixel satisfies |X − μ_i| < 2.5σ_i, the pixel belongs to the background;
otherwise, the pixel belongs to the foreground flame;
S33, updating the Gaussian-mixture background model according to the judgment result;
and S34, cropping the foreground-extracted image according to the coordinate information of the rechecked flame target region to obtain the flame target region without background information.
6. The video flame detection method of claim 1, wherein in step S4 the specific process of judging the candidate flame target regions with the trained flame judgment model to obtain the finally determined flame regions comprises the following steps:
S41, scaling the candidate flame target region image without background information to a specified size and feeding it into the flame judgment model for judgment;
and S42, the flame judgment model outputs a score; a score threshold is set, and whether the target region is a real flame region is judged according to the score threshold.
7. The video flame detection method of claim 6, wherein the flame judgment model adopts a network structure combining a ResNet50 network and an FPN feature pyramid network, the ResNet50 network comprises 49 convolutional layers and one fully connected layer, the image is scaled to a specified size of 56 × 56, and the score threshold is set to 0.5.
8. A system for implementing the video flame detection method of any one of claims 1 to 7, wherein the video flame detection system comprises:
a flame detection module (10), used for detecting the current frame of the video with the trained flame detector to obtain initially selected flame target regions;
a detection result screening module (20), used for screening the initially selected flame target regions with the RGB color model to obtain more accurate rechecked flame target regions;
a foreground extraction module (30), used for extracting foreground flame from the rechecked flame target regions through Gaussian-mixture background modeling to obtain candidate flame target regions without background information;
and a flame judgment module (40), used for judging the candidate flame target regions with the trained flame judgment model to obtain the finally determined flame regions.
CN202211370215.3A 2022-11-03 2022-11-03 Video flame detection method and system Pending CN115909196A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211370215.3A CN115909196A (en) 2022-11-03 2022-11-03 Video flame detection method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211370215.3A CN115909196A (en) 2022-11-03 2022-11-03 Video flame detection method and system

Publications (1)

Publication Number Publication Date
CN115909196A true CN115909196A (en) 2023-04-04

Family

ID=86475564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211370215.3A Pending CN115909196A (en) 2022-11-03 2022-11-03 Video flame detection method and system

Country Status (1)

Country Link
CN (1) CN115909196A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117853935A (en) * 2024-03-07 2024-04-09 河南胜华电缆集团有限公司 Cable flame spread detection method and device based on visual analysis and service platform
CN117853935B (en) * 2024-03-07 2024-06-11 河南胜华电缆集团有限公司 Cable flame spread detection method and device based on visual analysis and service platform

Similar Documents

Publication Publication Date Title
CN109522819B (en) Fire image identification method based on deep learning
Premal et al. Image processing based forest fire detection using YCbCr colour model
CN111126293A (en) Flame and smoke abnormal condition detection method and system
CN109472193A (en) Method for detecting human face and device
CN105741328A (en) Shot image quality evaluation method based on visual perception
CN107452018B (en) Speaker tracking method and system
CN112668426B (en) Fire disaster image color cast quantization method based on three color modes
CN106657948A (en) low illumination level Bayer image enhancing method and enhancing device
CN115909196A (en) Video flame detection method and system
CN110866473B (en) Target object tracking detection method and device, storage medium and electronic device
CN111914938A (en) Image attribute classification and identification method based on full convolution two-branch network
CN111553337A (en) Hyperspectral multi-target detection method based on improved anchor frame
CN113660484B (en) Audio and video attribute comparison method, system, terminal and medium based on audio and video content
CN114155457A (en) Control method and control device based on flame dynamic identification
CN117451012B (en) Unmanned aerial vehicle aerial photography measurement method and system
CN111260687A (en) Aerial video target tracking method based on semantic perception network and related filtering
CN113657233A (en) Unmanned aerial vehicle forest fire smoke detection method based on computer vision
CN114040094A (en) Method and equipment for adjusting preset position based on pan-tilt camera
CN114187515A (en) Image segmentation method and image segmentation device
CN110852172B (en) Method for expanding crowd counting data set based on Cycle Gan picture collage and enhancement
CN111062926B (en) Video data processing method, device and storage medium
CN112836608A (en) Forest fire source estimation model training method, estimation method and system
CN112633179A (en) Farmer market aisle object occupying channel detection method based on video analysis
CN116962612A (en) Video processing method, device, equipment and storage medium applied to simulation system
CN114821513B (en) Image processing method and device based on multilayer network and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination