CN116206253A - Method and system for detecting and judging site fire behavior based on deep learning - Google Patents


Info

Publication number
CN116206253A
CN116206253A (application CN202211674265.0A)
Authority
CN
China
Prior art keywords
target detection
target
detection network
fire
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211674265.0A
Other languages
Chinese (zh)
Inventor
徐新
郭晓平
俞恩荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Seari Intelligent System Co Ltd
Original Assignee
Shanghai Seari Intelligent System Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Seari Intelligent System Co Ltd filed Critical Shanghai Seari Intelligent System Co Ltd
Priority to CN202211674265.0A priority Critical patent/CN116206253A/en
Publication of CN116206253A publication Critical patent/CN116206253A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V10/764 - Recognition using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V10/765 - Classification using rules for classification or partitioning the feature space
    • G06V10/774 - Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/806 - Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
    • G06V2201/07 - Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for detecting and judging the fire behavior of a construction site based on deep learning, comprising the following steps: obtaining a training data set and a test set; building and training target detection networks; and, after real-time image data are obtained, using the trained first and second target detection networks to detect conventional-size targets and small targets in the real-time image data, respectively. The invention further provides a site fire behavior detection and judgment system based on deep learning, comprising a fire fighter face verification module and a fire detection module. The invention is suitable for detecting targets of different sizes and, above all, improves the detection rate of small targets; it identifies detected abnormal events with high accuracy, issues alarm and warning prompts, supports uninterrupted 24-hour real-time alarm-condition monitoring, and is an end-to-end detection system.

Description

Method and system for detecting and judging site fire behavior based on deep learning
Technical Field
The invention relates to a method and a system for detecting and judging site fire behavior based on deep learning, and belongs to the technical field of target detection.
Background
Target detection refers to the identification and positioning of a specified object from an image, and is an important application of computer vision. The application of the target detection method is wide, and the target detection method plays an indispensable role in the fields of industrial monitoring safety, automatic driving, robot vision, military detection and the like. In recent years, with the development of deep learning algorithms, the target detection method has also been developed rapidly, and a new solution is provided for the detection of the fire behavior of the construction site in the industrial field.
With the rapid development of deep learning technology, most current target detection algorithms are based on deep learning. They can be divided into two categories: two-stage methods, which first extract candidate regions that may contain targets and then extract features with a convolutional neural network (CNN); and single-stage methods, which are end-to-end, omit the region-proposal stage, and return the target class and position information from a single CNN computation.
However, existing single-stage target detection methods suffer from low accuracy on small targets in engineering applications: because edge devices have limited computing power, feeding original full-resolution pictures into the network is slow, so the original image must be resampled to a lower resolution. For detection targets of smaller size, this reduced resolution makes detection considerably harder. For example, for detecting smoking by construction-site workers, a model trained with existing algorithms achieves a detection rate far below practical application requirements.
Disclosure of Invention
The invention aims to solve the technical problems that: the existing single-stage target detection method has low detection rate on small targets.
In order to solve the technical problems, the technical scheme of the invention provides a method for detecting and judging the fire behavior of a construction site based on deep learning, which is characterized by comprising the following steps:
step 1, collecting image data of various live fire real scenes from a real operation area of construction site operation, marking each image with a rectangular frame, and marking a target with a conventional size and a small target, wherein the target with the conventional size comprises welding sparks, cutting sparks, open flames, safety helmets, safety clothing, people, oxygen cylinders, acetylene cylinders and fire extinguishers, and the small target is cigarettes;
step 2, carrying out data preprocessing on all the image data marked in the step 1 to obtain a training data set and a test set;
step 3, constructing and training a target detection network:
step 301, respectively building two target detection networks by using a cross-layer convergence backbone network CSPDarknet and a path aggregation network PANet: one of the target detection networks is used for detecting targets with conventional sizes and is defined as a first target detection network; the other target detection network is used for detecting small targets, namely, detecting whether people smoke at the construction site or not, and is defined as a target detection network II;
step 302, adopting an optimal transmission allocation strategy OTA as a positive and negative sample allocation strategy, inputting the training data set obtained in the step 2 into a first target detection network and a second target detection network, and training the first target detection network and the second target detection network by using a random gradient descent method;
during training, the loss function of the first target detection network and the second target detection network is shown as follows:
Loss = (L_cls + λ·L_reg + L_obj) / N_pos
wherein L_cls is the classification loss, L_reg is the localization loss, L_obj is the target confidence loss, λ is the balance coefficient of the localization loss, and N_pos is the number of anchor points assigned as positive samples;
step 303, testing the trained first target detection network and the trained second target detection network by using the test set, if the requirements are met, completing the training of the first target detection network and the second target detection network, and if the requirements are not met, returning to the step 302 to train the first target detection network and the second target detection network again;
step 4, after the real-time image data are obtained, detecting the target with the conventional size and the small target in the real-time image data by utilizing a first target detection network and a second target detection network after training, wherein:
the detection of a target of conventional size comprises the steps of:
the resolution of the real-time image is reduced to a set size, and then the real-time image is input into a first target detection network for detection, so that a detection result of a target with a conventional size is obtained;
the detection of small targets comprises the following steps:
step 401, reducing the resolution of the real-time image to a set size, inputting the real-time image into a first target detection network for detection, and extracting a prediction frame of the personnel class after non-maximum suppression in the first target detection network;
step 402, mapping the prediction frame to the resolution size of the original real-time image;
step 403, inputting the image processed in step 402 into a second target detection network, and detecting whether there is a small target by the second target detection network.
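Step 401 above relies on the non-maximum suppression (NMS) performed inside the first target detection network to keep one prediction frame per detected person. As an illustration only (the patent gives no implementation), greedy NMS over boxes in (x1, y1, x2, y2) format can be sketched as:

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring box and discard
    any remaining box whose IoU with it exceeds the threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep
```

The returned indices select the surviving prediction frames; the survivors of the person class are the ones mapped back to the original resolution in step 402.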
Preferably, in step 2, the preprocessing of the data comprises the steps of:
step 201, reducing the resolution of the image data to a preset size;
step 202, performing data enhancement on the real site live-fire scene data set consisting of all the image data;
step 203, dividing the data-enhanced real site live-fire scene data into a training data set and a test set according to a set proportion.
Preferably, in step 202, the methods used for data enhancement include: HSV color transformation, mix-up fusion enhancement, and multi-scale scaling.
Preferably, in step 302, the classification loss L_cls and the target confidence loss L_obj both adopt the binary cross-entropy loss, whose expression is shown below:
BCELoss = -(1/n) · Σ_{i=1..n} [ y_i·log f(x_i; θ) + (1 - y_i)·log(1 - f(x_i; θ)) ]
wherein x_i denotes the i-th training sample in the training data set, n denotes the total number of training samples in the training data set, y_i ∈ {0,1} denotes the sample label of training sample x_i, f(x_i; θ) denotes the prediction of the first or second target detection network for training sample x_i, and θ denotes the model parameters.
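As a numerical illustration of the binary cross-entropy expression (a sketch, not part of the patent), the average loss over n samples can be computed in plain Python:

```python
import math

def bce_loss(y_true, y_pred):
    """Binary cross-entropy averaged over the n samples, matching the
    expression used for L_cls and L_obj."""
    eps = 1e-12  # guard against log(0)
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clamp prediction into (0, 1)
        total += y * math.log(p) + (1.0 - y) * math.log(1.0 - p)
    return -total / len(y_true)
```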
Preferably, in step 302, the localization loss L_reg adopts the intersection-over-union loss IOULoss, whose expression is shown below:
IOULoss = 1 - IOU
wherein IOU = |A ∩ B| / |A ∪ B| is the ratio of the intersection to the union of the ground-truth frame A and the predicted frame B.
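The IOULoss above can be illustrated with a small sketch (the (x1, y1, x2, y2) box format is an assumption, not taken from the patent):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def iou_loss(a, b):
    """IOULoss = 1 - IOU, the localization loss of step 302."""
    return 1.0 - iou(a, b)
```

When the two frames coincide completely the IOU is 1 and the loss is 0; when they are disjoint the loss reaches its maximum of 1.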
The invention further provides a site fire behavior detection and judgment system based on deep learning, which adopts the site fire behavior detection and judgment method, and is characterized by comprising a fire fighter face verification module and a fire fighter detection module, wherein:
the fire fighter face verification module performs face-recognition-based identity verification on a person who holds a construction-site fire certificate and applies for fire work; if the face check fails, fire work is not allowed; only after the face check succeeds can the relevant personnel collect the fire equipment, begin construction, and start the fire detection module;
during construction, the fire detection module uses the first and second target detection networks to detect conventional-size targets and small targets, and judges from the detection results whether any of the following violations exist: unexpected flame detected, smoking by a fireman, no safety helmet or safety suit worn, no fire extinguisher equipped, personnel running, or the oxygen cylinder and acetylene cylinder placed too close together; if a violation exists, the fire detection module broadcasts a voice warning continuously until the violation is no longer detected, and uploads video and image evidence for preservation.
Preferably, during construction, the fire fighter face verification module is triggered at set intervals to perform a face check.
The invention is suitable for detecting targets with different sizes, and mainly improves the detection rate of small targets, carries out high-accuracy identification and alarm warning prompt on detected abnormal events, meets 24-hour uninterrupted real-time alarm condition monitoring, and is an end-to-end detection system.
Drawings
FIG. 1 is a flow chart of a method for detecting site fire behavior based on deep learning;
FIG. 2 is a training flow chart of the site fire behavior detection method based on deep learning provided by the invention;
fig. 3 is a schematic flow chart of the method for detecting the fire behavior of the construction site based on the deep learning.
Detailed Description
The invention will be further illustrated with reference to specific examples. It is to be understood that these examples are illustrative of the present invention and are not intended to limit the scope of the present invention. Further, it is understood that various changes and modifications may be made by those skilled in the art after reading the teachings of the present invention, and such equivalents are intended to fall within the scope of the claims appended hereto.
As shown in fig. 1, the method for detecting and judging the fire behavior of the construction site based on the deep learning disclosed in the embodiment includes the following steps:
step 1, acquiring image data by using a camera, wherein in the embodiment, the camera is used for acquiring the image data of various live fire real scenes from a real work area of the work site. After the target to be detected is definitely detected, a rectangular frame mark is carried out on each image by using a target detection data set marking tool LabelImg, and the target is marked. In this embodiment, the objects to be detected are divided into objects of a conventional size and small objects, where the objects of a conventional size include welding sparks, cutting sparks, open fires, helmets, safety wear, people, oxygen cylinders, acetylene cylinders, fire extinguishers, and the small objects are cigarettes, i.e. whether people smoke at a construction site is detected.
Step 2, carrying out data preprocessing on all the image data marked in the step 1:
because the computing power of the edge equipment is low and the original image transferring speed is low, the size of the image data is firstly adjusted when the data preprocessing is carried out, and the resolution of the image data is adjusted from 1024×1920 to 320×640. Then, data enhancement is carried out on a real site moving fire scene data set consisting of all image data, and the method comprises the following steps: hsv color transform, mix-up fusion enhancement and multi-scale scaling. And finally, dividing the data-enhanced real site dynamic fire scene data into a training data set (80%) and a test set (20%).
Referring to fig. 2, step 3, a target detection network is constructed and trained:
Step 301, two target detection networks are respectively built, each from the cross-layer convergence backbone network CSPDarknet combined with the path aggregation network PANet: one target detection network detects targets of conventional size and is defined as the first target detection network; the other detects small targets, i.e. whether people smoke at the construction site, and is defined as the second target detection network.
The first and second target detection networks have no essential structural difference; in this embodiment, the second target detection network keeps the same channel depth as the first but uses a slightly smaller channel width.
And 302, adopting an optimal transmission allocation strategy OTA as a positive and negative sample allocation strategy, inputting the training data set obtained in the step 2 into a first target detection network and a second target detection network, and training the first target detection network and the second target detection network by using a random gradient descent method.
During training, the loss function of the first target detection network and the second target detection network is shown as follows:
Loss = (L_cls + λ·L_reg + L_obj) / N_pos
wherein L_cls is the classification loss, L_reg is the localization loss, L_obj is the target confidence loss, λ is the balance coefficient of the localization loss, and N_pos is the number of anchor points (Anchor Points) assigned as positive samples.
The classification loss L_cls and the target confidence loss L_obj both use binary cross-entropy loss (BCELoss), whose expression is shown below:
BCELoss = -(1/n) · Σ_{i=1..n} [ y_i·log f(x_i; θ) + (1 - y_i)·log(1 - f(x_i; θ)) ]
wherein x_i denotes the i-th training sample in the training data set, n denotes the total number of training samples, y_i ∈ {0,1} denotes the sample label of training sample x_i, f(x_i; θ) denotes the prediction of the first or second target detection network for training sample x_i, and θ denotes the model parameters.
The localization loss L_reg uses the intersection-over-union loss IOULoss, whose expression is shown below:
IOULoss = 1 - IOU
wherein IOU = |A ∩ B| / |A ∪ B| is the ratio of the intersection to the union of the ground-truth frame A and the predicted frame B; when A and B coincide completely, the IOU equals 1.
The first and second target detection networks are trained for 100 epochs on the training data set using the SGD optimizer with momentum momentum=0.9 and weight decay weight_decay=5e-4; a warm-up cosine annealing learning-rate reduction strategy is adopted, with an initial learning rate of 5e-3 and a warm-up period of 5 epochs.
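The warm-up cosine annealing strategy described above can be sketched as follows; a linear ramp over the first 5 epochs is assumed, since the exact warm-up shape is not specified in the patent:

```python
import math

def warmup_cosine_lr(epoch, total_epochs=100, warmup_epochs=5,
                     base_lr=5e-3, min_lr=0.0):
    """Warm-up cosine annealing: linear ramp over the first
    warmup_epochs, then cosine decay from base_lr towards min_lr."""
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * progress))
```

The learning rate climbs to the initial value of 5e-3 during warm-up and then decays smoothly toward zero over the remaining 95 epochs.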
Step 303, testing the trained first target detection network and the trained second target detection network by using the test set, if the requirements are met, completing the training of the first target detection network and the second target detection network, and if the requirements are not met, returning to step 302 to train the first target detection network and the second target detection network again.
And 4, after the real-time image data are obtained, detecting the target with the conventional size and the small target in the real-time image data by utilizing the trained target detection network I and the trained target detection network II respectively.
Referring to fig. 3, the detection of a target of conventional size comprises the steps of:
the resolution of the real-time image is adjusted to 320 multiplied by 640, and then the real-time image is input into a target detection network I for detection.
The detection of small targets comprises the following steps:
step 401, adjusting the resolution of the real-time image to 320×640, inputting the real-time image into a first target detection network for detection, and extracting a prediction frame of the personnel class after non-maximum suppression in the first target detection network;
step 402, mapping the position of the prediction frame at 320×640 resolution to 1024×1920 resolution of the original real-time image;
step 403, inputting the image processed in step 402 into a second target detection network, and detecting whether there is a small target by the second target detection network.
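The coordinate mapping in step 402 is a simple per-axis rescaling from the 320×640 detection image to the 1024×1920 original frame. A sketch, assuming boxes in (x1, y1, x2, y2) format:

```python
def map_box_to_original(box, small_res=(320, 640), orig_res=(1024, 1920)):
    """Map a prediction frame from the reduced detection resolution back
    onto the original frame; resolutions are given as (height, width)."""
    sy = orig_res[0] / small_res[0]   # 1024 / 320 = 3.2
    sx = orig_res[1] / small_res[1]   # 1920 / 640 = 3.0
    x1, y1, x2, y2 = box
    return (x1 * sx, y1 * sy, x2 * sx, y2 * sy)
```

The mapped frame is then used to crop the full-resolution image before it is passed to the second target detection network, so that small cigarette targets retain their original pixel detail.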
The invention also discloses a construction site fire behavior detection and judgment system based on deep learning, which comprises a fire fighter face verification module and a fire fighter detection module, wherein:
and the fire-fighting personnel face verification module is used for carrying out identity verification on personnel holding the construction site fire-fighting certificate and applying for fire. If the face checking is unsuccessful, fire is not allowed, and only after the face checking is successful, related personnel can get the fire equipment for construction, and the fire detection module is started. In the construction process, a fire fighter face verification module is triggered once at intervals, and face verification is performed once to ensure the matching of constructors and operators.
During construction, the fire detection module uses the first and second target detection networks to detect conventional-size targets and small targets, and judges from the detection results whether any of the following violations exist: unexpected flame detected, smoking by a fireman, no safety helmet or safety suit worn, no fire extinguisher equipped, personnel running, or the oxygen cylinder and acetylene cylinder placed too close together. If a violation exists, the fire detection module broadcasts a voice warning continuously until no violation is detected, and uploads video and image evidence for preservation.
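The violation judgment can be sketched as a set of rules over one frame's detections. The class names, the per-rule logic, and the distance threshold below are all illustrative assumptions; the patent lists the violations but specifies no thresholds or implementation:

```python
def find_violations(labels, cylinder_gap=None, min_gap=5.0):
    """Evaluate one frame's detections against the violation rules.

    labels        -- set of class names detected by the two networks
                     (hypothetical names, e.g. "open_flame", "cigarette")
    cylinder_gap  -- measured distance between the oxygen and acetylene
                     cylinders, if both were detected (units assumed)
    min_gap       -- hypothetical minimum allowed cylinder distance
    """
    violations = []
    if "open_flame" in labels:
        violations.append("unexpected flame detected")
    if "cigarette" in labels:
        violations.append("smoking by fireman")
    if "person" in labels and "helmet" not in labels:
        violations.append("no safety helmet worn")
    if "person" in labels and "safety_suit" not in labels:
        violations.append("no safety suit worn")
    if "fire_extinguisher" not in labels:
        violations.append("no fire extinguisher equipped")
    if cylinder_gap is not None and cylinder_gap < min_gap:
        violations.append("oxygen and acetylene cylinders too close")
    return violations
```

A non-empty result would trigger the continuous voice broadcast and the upload of video and image evidence described above.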

Claims (7)

1. The method for detecting and judging the site fire behavior based on deep learning is characterized by comprising the following steps of:
step 1, collecting image data of various live fire real scenes from a real operation area of construction site operation, marking each image with a rectangular frame, and marking a target with a conventional size and a small target, wherein the target with the conventional size comprises welding sparks, cutting sparks, open flames, safety helmets, safety clothing, people, oxygen cylinders, acetylene cylinders and fire extinguishers, and the small target is cigarettes;
step 2, carrying out data preprocessing on all the image data marked in the step 1 to obtain a training data set and a test set;
step 3, constructing and training a target detection network:
step 301, respectively building two target detection networks by using a cross-layer convergence backbone network CSPDarknet and a path aggregation network PANet: one of the target detection networks is used for detecting targets with conventional sizes and is defined as a first target detection network; the other target detection network is used for detecting small targets, namely, detecting whether people smoke at the construction site or not, and is defined as a target detection network II;
step 302, adopting an optimal transmission allocation strategy OTA as a positive and negative sample allocation strategy, inputting the training data set obtained in the step 2 into a first target detection network and a second target detection network, and training the first target detection network and the second target detection network by using a random gradient descent method;
during training, the loss function of the first target detection network and the second target detection network is shown as follows:
Loss = (L_cls + λ·L_reg + L_obj) / N_pos
wherein L_cls is the classification loss, L_reg is the localization loss, L_obj is the target confidence, λ is the balance coefficient of the localization loss, and N_pos is the number of anchor points divided into positive samples;
step 303, testing the trained first target detection network and the trained second target detection network by using the test set, if the requirements are met, completing the training of the first target detection network and the second target detection network, and if the requirements are not met, returning to the step 302 to train the first target detection network and the second target detection network again;
step 4, after the real-time image data are obtained, detecting the target with the conventional size and the small target in the real-time image data by utilizing a first target detection network and a second target detection network after training, wherein:
the detection of a target of conventional size comprises the steps of:
the resolution of the real-time image is reduced to a set size, and then the real-time image is input into a first target detection network for detection, so that a detection result of a target with a conventional size is obtained;
the detection of small targets comprises the following steps:
step 401, reducing the resolution of the real-time image to a set size, inputting the real-time image into a first target detection network for detection, and extracting a prediction frame of the personnel class after non-maximum suppression in the first target detection network;
step 402, mapping the prediction frame to the resolution size of the original real-time image;
step 403, inputting the image processed in step 402 into a second target detection network, and detecting whether there is a small target by the second target detection network.
2. The method for detecting and judging the fire behavior of a construction site based on deep learning as set forth in claim 1, wherein in the step 2, the preprocessing of the data comprises the steps of:
step 201, reducing the resolution of the image data to a preset size;
step 202, performing data enhancement on the real site live-fire scene data set consisting of all the image data;
step 203, dividing the data-enhanced real site live-fire scene data into a training data set and a test set according to a set proportion.
3. The method for detecting and judging fire behavior of a construction site based on deep learning as set forth in claim 2, wherein in step 202, the methods used for data enhancement include: HSV color transformation, mix-up fusion enhancement, and multi-scale scaling.
4. The method for detecting and judging fire behavior in a construction site based on deep learning as set forth in claim 1, wherein in step 302, the classification loss L_cls and the target confidence L_obj both adopt the binary cross-entropy loss, whose expression is shown below:
BCELoss = -(1/n) · Σ_{i=1..n} [ y_i·log f(x_i; θ) + (1 - y_i)·log(1 - f(x_i; θ)) ]
wherein x_i denotes the i-th training sample in the training data set, n denotes the total number of training samples in the training data set, y_i ∈ {0,1} denotes the sample label of training sample x_i, f(x_i; θ) denotes the prediction of the first or second target detection network for training sample x_i, and θ denotes the model parameters.
5. The method for detecting and determining fire behavior in a construction site based on deep learning as claimed in claim 1, wherein in step 302, the localization loss L_reg adopts the intersection-over-union loss IOULoss, whose expression is shown below:
IOULoss = 1 - IOU
wherein IOU = |A ∩ B| / |A ∪ B| is the ratio of the intersection to the union of the ground-truth frame A and the predicted frame B.
6. The site fire behavior detection and judgment system based on deep learning adopts the site fire behavior detection and judgment method according to claim 1, and is characterized by comprising a fire fighter face verification module and a fire fighter detection module, wherein:
the fire fighter face verification module performs face-recognition-based identity verification on a person who holds a construction-site fire certificate and applies for fire work; if the face check fails, fire work is not allowed; only after the face check succeeds can the relevant personnel collect the fire equipment, begin construction, and start the fire detection module;
during construction, the fire detection module uses the first and second target detection networks to detect conventional-size targets and small targets, and judges from the detection results whether any of the following violations exist: unexpected flame detected, smoking by a fireman, no safety helmet or safety suit worn, no fire extinguisher equipped, personnel running, or the oxygen cylinder and acetylene cylinder placed too close together; if a violation exists, the fire detection module broadcasts a voice warning continuously until the violation is no longer detected, and uploads video and image evidence for preservation.
7. The system for detecting and judging fire behavior on a construction site based on deep learning as set forth in claim 6, wherein said fire fighter face verification module is triggered once at intervals during construction to perform a face check.
CN202211674265.0A 2022-12-26 2022-12-26 Method and system for detecting and judging site fire behavior based on deep learning Pending CN116206253A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211674265.0A CN116206253A (en) 2022-12-26 2022-12-26 Method and system for detecting and judging site fire behavior based on deep learning

Publications (1)

Publication Number Publication Date
CN116206253A true CN116206253A (en) 2023-06-02

Family

ID=86515369

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211674265.0A Pending CN116206253A (en) 2022-12-26 2022-12-26 Method and system for detecting and judging site fire behavior based on deep learning

Country Status (1)

Country Link
CN (1) CN116206253A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117611928A (en) * 2024-01-23 2024-02-27 青岛国实科技集团有限公司 Illegal electric welding identification method, electronic equipment and storage medium
CN117611928B (en) * 2024-01-23 2024-04-09 青岛国实科技集团有限公司 Illegal electric welding identification method, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108389359B (en) Deep learning-based urban fire alarm method
CN103106766A (en) Forest fire identification method and forest fire identification system
CN112287827A (en) Complex environment pedestrian mask wearing detection method and system based on intelligent lamp pole
CN111428617A (en) Video image-based distribution network violation maintenance behavior identification method and system
CN113516076A (en) Improved lightweight YOLO v4 safety protection detection method based on attention mechanism
CN104463253B (en) Passageway for fire apparatus safety detection method based on adaptive background study
CN113903081A (en) Visual identification artificial intelligence alarm method and device for images of hydraulic power plant
CN112785809B (en) Fire re-ignition prediction method and system based on AI image recognition
CN112084963A (en) Monitoring early warning method, system and storage medium
CN109389040B (en) Inspection method and device for safety dressing of personnel in operation field
CN114971409B (en) Smart city fire monitoring and early warning method and system based on Internet of things
CN116206253A (en) Method and system for detecting and judging site fire behavior based on deep learning
CN111325133A (en) Image processing system based on artificial intelligence recognition
CN113743256A (en) Construction site safety intelligent early warning method and device
KR20200052418A (en) Automated Violence Detecting System based on Deep Learning
CN110287917A (en) The security management and control system and method in capital construction building site
CN113713292A (en) Method and device for carrying out accurate flame discrimination, fire extinguishing point positioning and rapid fire extinguishing based on YOLOv5 model
CN115223249A (en) Quick analysis and identification method for unsafe behaviors of underground personnel based on machine vision
CN115171006B (en) Detection method for automatically identifying person entering electric power dangerous area based on deep learning
CN114821486B (en) Personnel identification method in power operation scene
CN113989886B (en) Crewman identity verification method based on face recognition
CN115457331A (en) Intelligent inspection method and system for construction site
CN112699745A (en) Method for positioning trapped people on fire scene
CN115394025A (en) Monitoring method, monitoring device, electronic equipment and storage medium
KR20220067833A (en) Position estimation system and method based on 3D terrain data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination