CN113688921A - Fire operation identification method based on graph convolution network and target detection - Google Patents

Fire operation identification method based on graph convolution network and target detection

Info

Publication number
CN113688921A
CN113688921A (Application No. CN202111008415.XA)
Authority
CN
China
Prior art keywords
center
gravity
electric welding
fire
flame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111008415.XA
Other languages
Chinese (zh)
Inventor
周伟
郭鑫
庞一然
郑福建
宋光磊
易军
张秀才
左应祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Science and Technology
Original Assignee
Chongqing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Science and Technology filed Critical Chongqing University of Science and Technology
Priority to CN202111008415.XA
Publication of CN113688921A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a fire operation identification method based on a graph convolution network and target detection. First, pictures of electric welding flames and common flames are collected to build a data set, which is put into YOLOv5 for training to obtain a model for detecting electric welding flames. Second, videos of fire operation behavior and other behaviors are collected, a joint point data set is produced with OpenPose, and the data set is put into an improved space-time graph convolutional neural network for training to obtain a model for identifying fire operation behavior. Finally, a video stream is acquired through image acquisition equipment, the results of the two models are integrated, fire operators are matched with electric welding flame targets through the weighted distance between targets and the Hungarian algorithm, and the identification result for electric welding fire behavior is output. Using the weighted distance as the matching weight, the method pairs people with flames through the Hungarian algorithm, which better reflects the fire operation situation in the picture; meanwhile, the partition strategy in the space-time graph convolutional neural network is improved so that the model can learn more features, raising the accuracy of action recognition.

Description

Fire operation identification method based on graph convolution network and target detection
Technical Field
The invention relates to a fire operation identification method based on a graph convolution network and target detection, and belongs to the field of computer vision.
Background
Fire operation (hot work) refers to temporary work such as welding and cutting in fire-forbidden areas, and to operations in flammable and explosive places that may produce flames, sparks, or glowing heat using a torch, an electric drill, a grinding wheel, and the like. Its safety management aims to strengthen fire operation safety in group companies, ensure the safety of personnel, production, and equipment, control fire operation behavior, and reduce and avoid fires and other accidents. When determining whether an electric welding flame from live fire work exists in a picture, conventional target detection relies on weak features and struggles with factors such as occlusion and light irradiation, so its recognition accuracy is not high enough.
Disclosure of Invention
In order to solve the above problems, the invention uses neural networks to model both the appearance features and the behavior pattern of the target, makes a comprehensive decision on the two prediction results, outputs the fire operation status of the operating personnel in the picture, and improves the accuracy of the prediction.
In order to achieve the purpose, the application adopts the following technical scheme:
a fire operation identification method based on graph convolution network and target detection comprises the following steps:
S1: collecting videos and pictures containing electric welding fire and other common flames;
S2: intercepting pictures containing electric welding fire and other common flames, storing electric welding fire pictures and common flame pictures from different environments, different angles and different flame types, with electric welding fire as positive samples and common flames as negative samples, and constructing an electric welding fire flame data set;
S3: putting the electric welding fire flame data set into a YOLOv5 network model for training to obtain a YOLOv5 network model for detecting electric welding fire and common flames;
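For concreteness, a minimal sketch of the S3 training and detection step follows, assuming the ultralytics/yolov5 repository and a data set already labeled in YOLO format; the file names (welding_flame.yaml, best.pt) and hyperparameters are illustrative assumptions, not values prescribed by the method.

```python
# Sketch of step S3 (assumptions: the ultralytics/yolov5 repository is
# available and the flame data set is in YOLO format; welding_flame.yaml
# and the weight paths below are illustrative).
#
# Training is normally launched from the yolov5 repository command line:
#   python train.py --img 640 --batch 16 --epochs 100 \
#       --data welding_flame.yaml --weights yolov5s.pt
#
# welding_flame.yaml would list the two classes used here, e.g.:
#   names: ["welding_flame", "common_flame"]
import torch

# Load the trained weights for inference via torch.hub (the official
# YOLOv5 hub entry point).
model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="runs/train/exp/weights/best.pt")

results = model("frame.jpg")    # run detection on one picture frame
boxes = results.xyxy[0]         # tensor rows: (x1, y1, x2, y2, conf, class)
for *xyxy, conf, cls in boxes.tolist():
    print(model.names[int(cls)], conf, xyxy)
```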
S4: collecting videos and pictures containing electric welding fire behavior and other common behaviors;
S5: intercepting pictures containing electric welding fire behavior and other common behaviors, storing videos of electric welding fire behavior and common behaviors in different environments and at different angles, with fire behavior as positive samples and common behaviors as negative samples, and putting them into the OpenPose network model to obtain a joint point data set of fire behavior;
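A sketch of the S5 joint point extraction is given below, assuming OpenPose's Python bindings (pyopenpose) have been built locally; the import path, model folder, and video file name depend on the installation and are illustrative.

```python
# Sketch of extracting joint points with OpenPose for step S5 (assumption:
# pyopenpose has been built from the CMU OpenPose sources; the import path
# and the emplaceAndPop signature vary slightly across OpenPose versions).
import cv2
from openpose import pyopenpose as op  # import path depends on the build

params = {"model_folder": "openpose/models/"}  # illustrative path
wrapper = op.WrapperPython()
wrapper.configure(params)
wrapper.start()

cap = cv2.VideoCapture("welding_clip.mp4")  # illustrative video file
frames_keypoints = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    datum = op.Datum()
    datum.cvInputData = frame
    wrapper.emplaceAndPop(op.VectorDatum([datum]))
    # poseKeypoints: (num_people, num_joints, 3) array of (x, y, confidence),
    # or None when no person is detected in the frame.
    frames_keypoints.append(datum.poseKeypoints)
cap.release()
```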
The human skeleton obtained by the OpenPose network model can be regarded as a topological graph, and the detection results of consecutive video frames form a sequence of such graphs. The skeleton topology graph sequence $A_G$ of a video segment is expressed as:

$$A_G = \left( V, E_S, E_T \right), \quad V = \{ v_i^f \}, \quad E_S = \{ e_{ij}^f \}, \quad E_T = \{ e_i^{f,f+1} \}$$

where $v_i^f$ represents the $i$-th joint point in the skeleton topology graph at frame $f$; $e_{ij}^f$ represents the connecting edge between the $i$-th and $j$-th joint points within the skeleton topology graph at frame $f$; $e_i^{f,f+1}$ represents the connecting edge between the $i$-th joint point at frame $f$ and the $i$-th joint point at frame $f+1$; $i, j = 1, 2, \ldots, I$, where $I$ is the set number of joint points into which the human skeleton is decomposed; $f = 1, 2, \ldots, F$, where $F$ is the length of the skeleton topology graph sequence $A_G$, with $F = T \times R$, $T$ the duration of the video and $R$ its frame rate, so that $F$ is also the total number of frames in the video;
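To make this data structure concrete, the following sketch assembles such a sequence as a coordinate array plus spatial and temporal edge lists; the joint count I = 18 and the edge list are illustrative assumptions (an OpenPose-style skeleton), not the patent's fixed decomposition.

```python
# Sketch of assembling the skeleton topology graph sequence A_G described
# above (assumptions: I = 18 joints in an OpenPose-style layout; the
# spatial edge list below is illustrative, and dummy coordinates stand in
# for real OpenPose output).
import numpy as np

I = 18        # number of joint points per skeleton
T, R = 4, 25  # video duration (s) and frame rate
F = T * R     # total number of frames

# v[f, i] = (x, y) coordinate of joint i at frame f (dummy data here)
v = np.random.rand(F, I, 2).astype(np.float32)

# Spatial edges e_ij^f: joint pairs connected within one frame.
spatial_edges = [(0, 1), (1, 2), (2, 3), (3, 4),   # head / right arm (illustrative)
                 (1, 5), (5, 6), (6, 7),           # left arm
                 (1, 8), (8, 9), (9, 10),          # right leg
                 (1, 11), (11, 12), (12, 13)]      # left leg

# Temporal edges e_i^{f,f+1}: each joint linked to itself in the next frame.
temporal_edges = [(f, i, f + 1, i) for f in range(F - 1) for i in range(I)]

print(v.shape, len(spatial_edges), len(temporal_edges))
```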
S6: putting the joint point data set of electric welding fire behavior into the improved space-time graph convolutional neural network model for training to obtain a space-time graph convolutional neural network model for identifying electric welding fire behavior;
Since the neighborhood of a root node $v_i^f$ has no rigid grid structure, a partition rule is defined so that the root node and its neighbor set $B(v_i^f)$ can be convolved. The neighbor set is divided into a fixed number $n$ of subsets: (1) the root node itself; (2) neighbor nodes that are closer to the skeleton center of gravity than the root node and also closer to the region center of gravity than the skeleton center of gravity is; (3) neighbor nodes that are closer to the skeleton center of gravity than the root node but farther from the region center of gravity than the skeleton center of gravity is; (4) neighbor nodes that are farther from the skeleton center of gravity than the root node but closer to the region center of gravity than the skeleton center of gravity is; (5) neighbor nodes that are farther from the skeleton center of gravity than the root node and also farther from the region center of gravity than the skeleton center of gravity is. The mathematical representation of the partition rule is as follows:

$$
l(v_j^f) =
\begin{cases}
0, & v_j^f = v_i^f \\
1, & d_j < d_c \ \text{and} \ a_j < a_c \\
2, & d_j < d_c \ \text{and} \ a_j \ge a_c \\
3, & d_j \ge d_c \ \text{and} \ a_j < a_c \\
4, & d_j \ge d_c \ \text{and} \ a_j \ge a_c
\end{cases}
$$

where $d_c$ is the distance from the root node to the skeleton center of gravity and $d_j$ is the distance from a neighbor node to the skeleton center of gravity; $a_c$ is the distance from the skeleton center of gravity to the region center of gravity and $a_j$ is the distance from a neighbor node to the region center of gravity. All distances are Euclidean; the skeleton center of gravity is the coordinate mean of all joint points, and the region center of gravity is the coordinate mean of all neighbor nodes of the root node;
With this partition rule, the spatial convolution part of the space-time graph convolutional neural network can learn more latent features of different actions, so the resulting network identifies actions more accurately;
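The following sketch implements the five-subset partition rule above for one root node; the toy skeleton coordinates are illustrative, and assigning ties (equal distances) to the "farther" subsets is a convention chosen here, not stated in the patent.

```python
# Sketch of the five-subset partition rule defined above (assumptions:
# joints are 2-D points; `neighbors` holds the 1-hop neighbor set B(v_i^f)
# of the root joint, root included, as in the ST-GCN partition scheme).
import numpy as np

def partition_labels(root, neighbors, skeleton):
    """Assign each node in `neighbors` a subset label 0..4.

    root      -- (2,) coordinates of the root joint
    neighbors -- (K, 2) coordinates of the root's neighbor set (incl. root)
    skeleton  -- (I, 2) coordinates of all joints in the frame
    """
    skel_cog = skeleton.mean(axis=0)              # skeleton center of gravity
    region_cog = neighbors.mean(axis=0)           # region center of gravity
    d_c = np.linalg.norm(root - skel_cog)         # root -> skeleton CoG
    a_c = np.linalg.norm(skel_cog - region_cog)   # skeleton CoG -> region CoG

    labels = []
    for node in neighbors:
        if np.allclose(node, root):
            labels.append(0)                      # (1) the root node itself
            continue
        d_j = np.linalg.norm(node - skel_cog)     # neighbor -> skeleton CoG
        a_j = np.linalg.norm(node - region_cog)   # neighbor -> region CoG
        if d_j < d_c:
            labels.append(1 if a_j < a_c else 2)  # cases (2) / (3)
        else:
            labels.append(3 if a_j < a_c else 4)  # cases (4) / (5)
    return labels

# Toy usage: one root with three neighbors in a 5-joint skeleton.
skeleton = np.array([[0, 0], [1, 0], [1, 1], [2, 1], [2, 2]], dtype=float)
root = skeleton[2]
neighbors = skeleton[[1, 2, 3]]
print(partition_labels(root, neighbors, skeleton))
```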
S7: acquiring real-time picture frames from the camera;
S8: making a comprehensive decision on the detection results for the real-time picture frame and judging the fire behavior of each person in the picture, through the following steps:
S81: the Euclidean distance is used to form a weighted sum over the joint points of each human skeleton identified by OpenPose and the flame coordinates detected by YOLOv5, according to the formula:

$$D(P_i, O_j) = \sum_{m=1}^{I} w_m \, \rho_2\!\left( p_i^m,\ o_j^c \right)$$

where $P_i$ is the $i$-th human skeleton detected by OpenPose and $O_j$ is the $j$-th flame target box detected by the YOLOv5 network; $o_j^c$ is the center point of the $j$-th flame target box; $p_i^m$ is the $m$-th joint point on the $i$-th human skeleton; $\rho_2(\cdot)$ is the Euclidean distance; and $w_m$ is the weight corresponding to the distance from the $m$-th joint point of the human skeleton to the center point of the flame target box. Summing the weighted distances from all joint points of a human skeleton to the center point of a flame target box yields the degree of correlation between the $i$-th human skeleton and the $j$-th flame target box;
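A sketch of this weighted distance computation follows; the weight vector, which boosts the eyes, hands, and feet and is renormalized when joints are missing (as discussed below), is an illustrative choice, and the joint indices are assumptions of an OpenPose-style layout.

```python
# Sketch of the weighted joint-to-flame distance of S81 (assumptions:
# 18 joints per skeleton; the weight vector and the boosted joint indices
# below are illustrative and would be tuned in practice).
import numpy as np

def weighted_distance(skeleton, flame_center, weights):
    """Weighted sum of Euclidean distances from each joint to the flame center.

    skeleton     -- (I, 2) joint coordinates; rows of NaN mark missing joints
    flame_center -- (2,) center point of a YOLOv5 flame target box
    weights      -- (I,) weight per joint
    """
    valid = ~np.isnan(skeleton).any(axis=1)    # drop unrecognized joints
    w = weights[valid] / weights[valid].sum()  # renormalize remaining weights
    dists = np.linalg.norm(skeleton[valid] - flame_center, axis=1)
    return float((w * dists).sum())

I = 18
weights = np.ones(I)
weights[[0, 4, 7, 10, 13]] = 3.0  # illustrative: boost eyes/hands/feet indices
skeleton = np.random.rand(I, 2) * 640
flame_center = np.array([320.0, 240.0])
print(weighted_distance(skeleton, flame_center, weights))
```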
Considering that human action postures are complex, taking only the Euclidean distance between the center points of the human target box and the flame target box as the matching criterion cannot accurately determine which person in the image is performing the fire operation; for example, when several human target boxes are at similar distances from a flame target box, it is difficult to judge which person is responsible for the flame.
The method can therefore raise the weights from joint points such as the eyes, hands, and feet to the center point of the flame target box, lower the weights of other joint points with weaker correlation, and adjust the remaining weights accordingly when some joint points of a skeleton are not fully recognized, so that the computed weighted value better represents the degree of match between a person and a flame, mitigating the problem to a certain extent.
S82: matching the human skeleton with the flames to finally obtain whether the image has the fire behavior and the corresponding responsible person of each flame; set a and set B are set as follows:
A=Oc
Figure BDA0003237891700000051
wherein, OcSet of target frame center points, P, for electric welding flames detected by the YOLOv5 network modelcHuman skeleton center point set, R, for OPENPLE network model detectioniFor the motion prediction result of the space-time graph convolutional neural network on the ith personal skeleton in the picture, the value is 1 to represent the fire behavior, and N is the number of the human skeletons in the current picture, so that the set B only contains the central point of the human skeleton detected as the fire behavior;
Based on the idea of the Hungarian algorithm, each electric welding flame in the picture is matched to a fire operator, and the fire operation detection result for the picture is finally output, including whether fire operation behavior exists and the person responsible for each electric welding flame.
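A sketch of this S82 matching step follows, assuming SciPy's linear_sum_assignment as the Hungarian algorithm implementation and the weighted distance function sketched for S81; with a rectangular cost matrix it matches each flame to at most one candidate operator.

```python
# Sketch of S82's flame-to-operator matching (assumptions: SciPy provides
# the Hungarian step via linear_sum_assignment; distance_fn is a callable
# such as the weighted_distance sketch from S81).
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_flames_to_operators(flame_centers, skeletons, action_labels,
                              distance_fn):
    """Return (flame_index, skeleton_index) pairs minimizing total cost.

    flame_centers -- list of (2,) arrays (set A)
    skeletons     -- list of (I, 2) joint arrays
    action_labels -- ST-GCN prediction per skeleton; 1 = fire behavior
    distance_fn   -- callable(skeleton, flame_center) -> float
    """
    # Set B: keep only skeletons classified as fire operation behavior.
    candidates = [i for i, r in enumerate(action_labels) if r == 1]
    if not candidates or len(flame_centers) == 0:
        return []
    # Cost matrix: weighted distance for every flame / candidate pair.
    cost = np.array([[distance_fn(skeletons[i], c) for i in candidates]
                     for c in flame_centers])
    flame_idx, cand_idx = linear_sum_assignment(cost)  # Hungarian assignment
    return [(int(f), candidates[c]) for f, c in zip(flame_idx, cand_idx)]
```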
Drawings
FIG. 1 is a flow chart of the surveillance-video-based method for detecting fire operation in a work area;
FIG. 2 is an example diagram of the partition strategy for a portion of the nodes;
FIG. 3 is a video frame containing fire operation behavior;
Detailed Description
A fire operation identification method based on graph convolution network and target detection comprises the following steps:
S1: collecting videos and pictures containing electric welding fire and other common flames;
S2: intercepting pictures containing electric welding fire and other common flames, storing electric welding fire pictures and common flame pictures from different environments, different angles and different flame types, with electric welding fire as positive samples and common flames as negative samples, and constructing an electric welding fire flame data set;
S3: putting the electric welding fire flame data set into a YOLOv5 network model for training to obtain a YOLOv5 network model for detecting electric welding fire and common flames;
S4: collecting videos and pictures containing electric welding fire behavior and other common behaviors;
S5: intercepting pictures containing electric welding fire behavior and other common behaviors, storing videos of electric welding fire behavior and common behaviors in different environments and at different angles, with fire behavior as positive samples and common behaviors as negative samples, and putting them into the OpenPose network model to obtain a joint point data set of fire behavior;
The human skeleton obtained by the OpenPose network model can be regarded as a topological graph, and the detection results of consecutive video frames form a sequence of such graphs. The skeleton topology graph sequence $A_G$ of a video segment is expressed as:

$$A_G = \left( V, E_S, E_T \right), \quad V = \{ v_i^f \}, \quad E_S = \{ e_{ij}^f \}, \quad E_T = \{ e_i^{f,f+1} \}$$

where $v_i^f$ represents the $i$-th joint point in the skeleton topology graph at frame $f$; $e_{ij}^f$ represents the connecting edge between the $i$-th and $j$-th joint points within the skeleton topology graph at frame $f$; $e_i^{f,f+1}$ represents the connecting edge between the $i$-th joint point at frame $f$ and the $i$-th joint point at frame $f+1$; $i, j = 1, 2, \ldots, I$, where $I$ is the set number of joint points into which the human skeleton is decomposed; $f = 1, 2, \ldots, F$, where $F$ is the length of the skeleton topology graph sequence $A_G$, with $F = T \times R$, $T$ the duration of the video and $R$ its frame rate, so that $F$ is also the total number of frames in the video;
S6: putting the joint point data set of electric welding fire behavior into the improved space-time graph convolutional neural network model for training to obtain a space-time graph convolutional neural network model for identifying electric welding fire behavior;
Since the neighborhood of a root node $v_i^f$ has no rigid grid structure, a partition rule is defined so that the root node and its neighbor set $B(v_i^f)$ can be convolved. As shown in FIG. 2, taking two root nodes as an example, their neighbor sets are divided according to the following five cases: (1) the root node itself, node A in FIG. 2; (2) neighbor nodes closer to the skeleton center of gravity than the root node and also closer to the region center of gravity than the skeleton center of gravity is, node B in FIG. 2; (3) neighbor nodes closer to the skeleton center of gravity than the root node but farther from the region center of gravity than the skeleton center of gravity is, node C in FIG. 2; (4) neighbor nodes farther from the skeleton center of gravity than the root node but closer to the region center of gravity than the skeleton center of gravity is, node D in FIG. 2; (5) neighbor nodes farther from the skeleton center of gravity than the root node and also farther from the region center of gravity than the skeleton center of gravity is, node E in FIG. 2. The mathematical representation of the partition rule is as follows:

$$
l(v_j^f) =
\begin{cases}
0, & v_j^f = v_i^f \\
1, & d_j < d_c \ \text{and} \ a_j < a_c \\
2, & d_j < d_c \ \text{and} \ a_j \ge a_c \\
3, & d_j \ge d_c \ \text{and} \ a_j < a_c \\
4, & d_j \ge d_c \ \text{and} \ a_j \ge a_c
\end{cases}
$$

where $d_c$ is the distance from the root node to the skeleton center of gravity and $d_j$ is the distance from a neighbor node to the skeleton center of gravity, shown by the dotted lines between each node and the five-pointed star in FIG. 2; $a_c$ is the distance from the skeleton center of gravity to the region center of gravity and $a_j$ is the distance from a neighbor node to the region center of gravity, shown by the dotted lines between each node and the four-pointed star in FIG. 2. All distances are Euclidean; the skeleton center of gravity is the coordinate mean of all joint points, shown as the five-pointed star in FIG. 2, and the region center of gravity is the coordinate mean of all neighbor nodes of the root node, shown as the four-pointed star in FIG. 2;
With this partition rule, the spatial convolution part of the space-time graph convolutional neural network can learn more latent features of different actions, so the resulting network identifies actions more accurately;
S7: acquiring real-time picture frames from the camera;
S8: making a comprehensive decision on the detection results for the real-time picture frame and judging the fire behavior of each person in the picture, through the following steps:
S81: the Euclidean distance is used to form a weighted sum over the joint points of each human skeleton identified by OpenPose and the flame coordinates detected by YOLOv5, according to the formula:

$$D(P_i, O_j) = \sum_{m=1}^{I} w_m \, \rho_2\!\left( p_i^m,\ o_j^c \right)$$

where $P_i$ is the $i$-th human skeleton detected by OpenPose and $O_j$ is the $j$-th flame target box detected by the YOLOv5 network; $o_j^c$ is the center point of the $j$-th flame target box; $p_i^m$ is the $m$-th joint point on the $i$-th human skeleton; $\rho_2(\cdot)$ is the Euclidean distance; and $w_m$ is the weight corresponding to the distance from the $m$-th joint point of the human skeleton to the center point of the flame target box. Summing the weighted distances from all joint points of a human skeleton to the center point of a flame target box yields the degree of correlation between the $i$-th human skeleton and the $j$-th flame target box;
Considering that human action postures are complex, taking only the Euclidean distance between the center points of the human target box and the flame target box as the matching criterion cannot accurately determine which person in the image is performing the fire operation; for example, when several human target boxes are at similar distances from a flame target box, it is difficult to judge which person is responsible for the flame.
The method can therefore raise the weights from joint points such as the eyes, hands, and feet to the center point of the flame target box, lower the weights of other joint points with weaker correlation, and adjust the remaining weights accordingly when some joint points of a skeleton are not fully recognized, so that the computed weighted value better represents the degree of match between a person and a flame, mitigating the problem to a certain extent;
S82: matching human skeletons with flames to finally determine whether fire operation behavior exists in the image and who is responsible for each flame; sets A and B are defined as follows:

$$A = O_c$$

$$B = \left\{ p_i^c \in P_c \mid R_i = 1,\ i = 1, 2, \ldots, N \right\}$$

where $O_c$ is the set of center points of the electric welding flame target boxes detected by the YOLOv5 network model; $P_c$ is the set of human skeleton center points detected by the OpenPose network model; $R_i$ is the action prediction result of the space-time graph convolutional neural network for the $i$-th human skeleton in the picture, with a value of 1 indicating fire operation behavior; and $N$ is the number of human skeletons in the current picture, so that set B contains only the center points of human skeletons detected as performing fire operation behavior;
Based on the idea of the Hungarian algorithm, each electric welding flame in the picture is matched to a fire operator, and the fire operation detection result for the picture is finally output, including whether fire operation behavior exists and the person responsible for each electric welding flame;
as shown in FIG. 3, the detected electric welding flame targets and human skeletons are indicated by rectangular boxes; each human skeleton in the picture is matched with an electric welding flame, and the electric welding fire behavior recognition result for the picture is finally output.
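A sketch of rendering this final result onto the frame, as in FIG. 3, is given below, assuming OpenCV; the box coordinates, colors, and label text are illustrative and would come from the detection and matching steps above.

```python
# Sketch of drawing the recognition result onto a frame, as in FIG. 3
# (assumptions: OpenCV is available; flame_boxes, person_boxes, and
# matches are produced by the detection and Hungarian matching steps).
import cv2

def draw_result(frame, flame_boxes, person_boxes, matches):
    """flame_boxes / person_boxes: lists of (x1, y1, x2, y2);
    matches: (flame_index, person_index) pairs from the matching step."""
    for j, i in matches:
        x1, y1, x2, y2 = map(int, flame_boxes[j])
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)      # flame: red
        px1, py1, px2, py2 = map(int, person_boxes[i])
        cv2.rectangle(frame, (px1, py1), (px2, py2), (0, 255, 0), 2)  # operator: green
        cv2.putText(frame, f"fire work: person {i}", (px1, max(py1 - 6, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return frame
```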

Claims (3)

1. A fire operation identification method based on graph convolution network and target detection is characterized by comprising the following steps:
S1: collecting videos and pictures containing electric welding fire and other common flames;
S2: intercepting pictures containing electric welding fire and other common flames, storing electric welding fire pictures and common flame pictures from different environments, different angles and different flame types, with electric welding fire as positive samples and common flames as negative samples, and constructing an electric welding fire flame data set;
S3: putting the electric welding fire flame data set into a YOLOv5 network model for training to obtain a YOLOv5 network model for detecting electric welding fire and common flames;
S4: collecting videos containing electric welding fire behavior and other common behaviors;
S5: intercepting pictures containing electric welding fire behavior and other common behaviors, storing videos of electric welding fire behavior and common behaviors in different environments and at different angles, with fire behavior as positive samples and common behaviors as negative samples, and putting them into the OpenPose network model to obtain a joint point data set of fire behavior;
S6: putting the joint point data set of electric welding fire behavior into the improved space-time graph convolutional neural network model for training to obtain a space-time graph convolutional neural network model for identifying electric welding fire behavior;
S7: acquiring real-time picture frames from the camera;
S8: making a comprehensive decision on the detection results for the real-time picture frame and judging the fire behavior of each person in the picture.
2. The method for identifying a fire operation based on the graph convolution network and the target detection according to claim 1, characterized in that: the improved space-time graph convolutional neural network model in step S6 is built through the following detailed steps:
Since the neighborhood of a root node $v_i^f$ has no rigid grid structure, a partition rule is defined so that the root node and its neighbor set $B(v_i^f)$ can be convolved. The neighbor set is divided into a fixed number $n$ of subsets: (1) the root node itself; (2) neighbor nodes that are closer to the skeleton center of gravity than the root node and also closer to the region center of gravity than the skeleton center of gravity is; (3) neighbor nodes that are closer to the skeleton center of gravity than the root node but farther from the region center of gravity than the skeleton center of gravity is; (4) neighbor nodes that are farther from the skeleton center of gravity than the root node but closer to the region center of gravity than the skeleton center of gravity is; (5) neighbor nodes that are farther from the skeleton center of gravity than the root node and also farther from the region center of gravity than the skeleton center of gravity is. The mathematical representation of the partition rule is as follows:

$$
l(v_j^f) =
\begin{cases}
0, & v_j^f = v_i^f \\
1, & d_j < d_c \ \text{and} \ a_j < a_c \\
2, & d_j < d_c \ \text{and} \ a_j \ge a_c \\
3, & d_j \ge d_c \ \text{and} \ a_j < a_c \\
4, & d_j \ge d_c \ \text{and} \ a_j \ge a_c
\end{cases}
$$

where $d_c$ is the distance from the root node to the skeleton center of gravity and $d_j$ is the distance from a neighbor node to the skeleton center of gravity; $a_c$ is the distance from the skeleton center of gravity to the region center of gravity and $a_j$ is the distance from a neighbor node to the region center of gravity. All distances are Euclidean; the skeleton center of gravity is the coordinate mean of all joint points, and the region center of gravity is the coordinate mean of all neighbor nodes of the root node.
3. The method for identifying a fire operation based on the graph convolution network and the target detection according to claim 1, characterized in that: the comprehensive decision of step S8 is to determine the fire behavior of each person in the picture, and specifically includes the following steps:
S31: the joint points in each human skeleton identified by OpenPose and the flame coordinates detected by YOLOv5 are weighted and summed according to the formula:

$$D(P_i, O_j) = \sum_{m=1}^{I} w_m \, \rho_2\!\left( p_i^m,\ o_j^c \right)$$

where $P_i$ is the $i$-th human skeleton detected by OpenPose; $p_i^m$ is the $m$-th joint point on the $i$-th human skeleton; $O_j$ is the $j$-th flame target box detected by the YOLOv5 network; $o_j^c$ is the center point of the $j$-th flame target box; $\rho_2(\cdot)$ is the Euclidean distance; and $w_m$ is the weight corresponding to the distance from the $m$-th joint point of the human skeleton to the center point of the flame target box; summing the weighted distances from all joint points of each human skeleton to the center point of the flame target box yields the degree of correlation between the $i$-th human skeleton and the $j$-th flame target box;
S32: matching human skeletons with flames to finally determine whether fire operation behavior exists in the image and who is responsible for each flame; sets A and B are defined as follows:

$$A = O_c$$

$$B = \left\{ p_i^c \in P_c \mid R_i = 1,\ i = 1, 2, \ldots, N \right\}$$

where $O_c$ is the set of center points of the electric welding flame target boxes detected by the YOLOv5 network model; $P_c$ is the set of human skeleton center points detected by the OpenPose network model; $R_i$ is the action prediction result of the space-time graph convolutional neural network for the $i$-th human skeleton in the picture, with a value of 1 indicating fire operation behavior; and $N$ is the number of human skeletons in the current picture, so that set B contains only the center points of human skeletons detected as performing fire operation behavior;
based on the idea of the Hungarian algorithm, each electric welding flame in the picture is matched to a fire operator, and the fire operation detection result for the picture is finally output, including whether fire operation behavior exists and the person responsible for each electric welding flame.
CN202111008415.XA 2021-08-31 2021-08-31 Fire operation identification method based on graph convolution network and target detection Pending CN113688921A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111008415.XA CN113688921A (en) 2021-08-31 2021-08-31 Fire operation identification method based on graph convolution network and target detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111008415.XA CN113688921A (en) 2021-08-31 2021-08-31 Fire operation identification method based on graph convolution network and target detection

Publications (1)

Publication Number Publication Date
CN113688921A 2021-11-23

Family

ID=78584153

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111008415.XA Pending CN113688921A (en) 2021-08-31 2021-08-31 Fire operation identification method based on graph convolution network and target detection

Country Status (1)

Country Link
CN (1) CN113688921A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230188671A1 (en) * 2021-12-09 2023-06-15 Anhui University Fire source detection method and device under condition of small sample size and storage medium
US11818493B2 (en) * 2021-12-09 2023-11-14 Anhui University Fire source detection method and device under condition of small sample size and storage medium
CN116883661A (en) * 2023-07-13 2023-10-13 山东高速建设管理集团有限公司 Fire operation detection method based on target identification and image processing
CN116883661B (en) * 2023-07-13 2024-03-15 山东高速建设管理集团有限公司 Fire operation detection method based on target identification and image processing
CN116863252A (en) * 2023-09-04 2023-10-10 四川泓宝润业工程技术有限公司 Method, device, equipment and storage medium for detecting inflammable substances in live fire operation site
CN116863252B (en) * 2023-09-04 2023-11-21 四川泓宝润业工程技术有限公司 Method, device, equipment and storage medium for detecting inflammable substances in live fire operation site
CN117984006A (en) * 2024-04-03 2024-05-07 国网山东省电力公司潍坊供电公司 Welding quality prediction method, device and medium based on welding infrared video generation


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination