CN116883661B - Fire operation detection method based on target identification and image processing - Google Patents

Fire operation detection method based on target identification and image processing

Info

Publication number
CN116883661B
CN116883661B (application CN202310861073.9A)
Authority
CN
China
Prior art keywords
image
fire operation
highlight
fire
gray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310861073.9A
Other languages
Chinese (zh)
Other versions
CN116883661A (en)
Inventor
范俊瑛
赵然
顾瑞海
王明明
赵贵南
刘晓东
丁若松
李琦
周书坤
朱颖涛
赵伟
张宽慎
甘芳吉
黄丹平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongdian Anshi Chengdu Technology Co ltd
Shandong High Speed Construction Management Group Co ltd
Sichuan University
Original Assignee
Zhongdian Anshi Chengdu Technology Co ltd
Shandong High Speed Construction Management Group Co ltd
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongdian Anshi Chengdu Technology Co ltd, Shandong High Speed Construction Management Group Co ltd, Sichuan University filed Critical Zhongdian Anshi Chengdu Technology Co ltd
Priority to CN202310861073.9A
Publication of CN116883661A
Application granted
Publication of CN116883661B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/273: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion; removing elements interfering with the pattern to be recognised
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/42: Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07: Target detection

Abstract

The invention provides a fire operation detection method based on target identification and image processing, relating to the technical field of fire operation monitoring and aiming at a detection method with low development and training time, low computational cost and high precision. The method comprises the following steps: acquiring an acquired image; performing target positioning on workers and highlight areas in the image; rechecking the highlight regions, i.e. segmenting the highlight regions and filtering out interference regions; acquiring the straight-line distance between the prior-box center point of a highlight region and the prior-box center point of the nearest worker; and judging whether the straight-line distance is smaller than a distance threshold: if so, a fire operation is judged to be in progress, otherwise not. The invention has the advantages of low cost and high precision.

Description

Fire operation detection method based on target identification and image processing
Technical Field
The invention relates to the technical field of live fire operation monitoring, in particular to a live fire operation detection method based on target identification and image processing.
Background
Fire operation refers to operations such as heating, baking, welding, cutting and heat treatment performed with open flames, electric sparks or friction sparks in industrial production or on construction sites. On construction sites, illegal fire operations often cause serious safety accidents that endanger the lives and property of workers.
Accurate detection of fire operations can therefore greatly reduce the safety accidents they cause. However, the current means of detection relies mainly on monitoring by safety inspectors, a process that suffers from low detection efficiency. A machine-vision method that accurately identifies fire operations is therefore of great significance for reducing the risk of safety accidents, and a high-precision detection algorithm for highlight regions is the key to efficient fire operation detection. In recent years, Convolutional Neural Networks (CNNs) have been widely used in the field of target detection because they can extract image features by training convolution kernel parameters. Most existing work, however, pursues ideal performance by optimizing the structure of the network model while neglecting processing at the network back end. Optimizing the network structure usually requires training and validating many differently modified network models, which greatly increases the time consumed by the detection algorithm.
Therefore, a fire operation detection method built on a network model with lower development and training time and labor cost needs to be realized.
Disclosure of Invention
The invention aims to provide a live fire operation detection method based on target identification and image processing, with low development and training time and labor cost and high precision.
The embodiment of the invention is realized by the following technical scheme:
a fire operation detection method based on target identification and image processing comprises the following steps:
acquiring an acquired image;
performing target positioning on workers and highlight areas in the image;
rechecking the highlight region: dividing the highlight region and filtering out an interference region;
acquiring a linear distance between a priori frame center point of the highlight region and a priori frame center point of a nearest worker;
and judging whether the linear distance is smaller than a distance threshold, if so, judging that the fire operation is performed, otherwise, judging that the fire operation is not performed.
Preferably, the method for target positioning of the workers and highlight areas in the image is: performing target positioning of the workers and the highlight areas through the improved SE-YOLO V7.
Preferably, the improved SE-YOLO V7 attention mechanism employs the following mechanism:
u_c = f_sq(x_c) = (1/(h·w)) · Σ_{i=1}^{h} Σ_{j=1}^{w} x_c(i,j);
v_c = f_ex(u_c, W_1, W_2) = Sigmoid[W_2 · ReLU(W_1 · u_c)];
y_c = f_scale(v_c) · x_c = v_c · x_c;
wherein f_sq is the squeeze function, f_ex is the excitation function and f_scale is the scale transformation function; h, w and c are the height, width and channel number of the acquired image x_c; x_c(i,j) is the input value at position (i,j) of channel c; u_c is the intermediate tensor obtained after compression; v_c is the output tensor; y_c is the feature map obtained by the attention mechanism; and W_1 and W_2 are the dimension-reduction layer and the dimension-restoration layer, respectively.
Preferably, the method for rechecking the highlight region comprises the following steps:
acquiring a color image of RGB three channels;
graying the color image by adopting a weighted average method to obtain a gray image;
threshold segmentation is carried out on the gray level image to screen out all highlight areas;
computing the Entropy of all the highlight regions, so as to filter out interference regions according to the Entropy:
Entropy = -Σ_{i=0}^{255} rel[i] · log2(rel[i])
wherein rel[i] is the relative gray-value frequency histogram and i is an image gray value;
obtaining the anisotropy of the entropy:
Anisotropy = (-Σ_{i=0}^{k} rel[i] · log2(rel[i])) / Entropy
where k is the smallest possible gray value with Σ_{i=0}^{k} rel[i] ≥ 0.5;
calculating the variance of the gray values of the region to confirm once more that the region is firelight generated by the fire operation, finally completing the judgment:
Mean = (1/f) · Σ_{p∈R} g(p), Variance = (1/f) · Σ_{p∈R} (g(p) - Mean)²
wherein R is the selected highlight region, p is a pixel in R with gray value g(p), and f = |R|.
Preferably, the method for graying the color image by using the weighted average method to obtain a gray image includes:
Gray(i,j) = 0.299·R(i,j) + 0.587·G(i,j) + 0.114·B(i,j)
wherein R(i,j), G(i,j) and B(i,j) are the R, G and B values of pixel (i,j), respectively.
Preferably, the method for threshold segmentation screening all the highlight areas of the gray level image comprises the following steps:
the screening range of gray values is widened by a total of 10 gray values, specifically:
MinGray_Fire - 5 ≤ Gray ≤ MaxGray_Fire + 5
wherein MinGray_Fire and MaxGray_Fire are the lower and upper limits of the threshold segmentation, respectively.
Preferably, the straight-line distance between the prior-box center point of the highlight region and the prior-box center point of the nearest worker is obtained as:
d = sqrt((X_P - X_F)² + (Y_P - Y_F)²)
wherein (X_P, Y_P) are the world coordinates of the center point of the worker's prior box, and (X_F, Y_F) are the world coordinates of the center point of the prior box of the highlight region.
Preferably, the world coordinates of a point are obtained from its pixel coordinates in the image, the relationship between the pixel coordinates and the world coordinates being:
Z_c · [u, v, 1]^T = A · M · [X_W, Y_W, Z_W, 1]^T
wherein (u, v) are the pixel coordinates, (X_W, Y_W, Z_W, 1) are the homogeneous world coordinates, Z_c is the depth scale factor, A is the 3×3 camera intrinsic (internal reference) matrix, and M is the 3×4 camera extrinsic (external reference) matrix.
Preferably, the camera intrinsic matrix A is acquired by the Zhang Zhengyou calibration method.
The technical scheme of the embodiment of the invention has at least the following advantages and beneficial effects:
compared with an algorithm which only relies on a network to identify, the detection method adopted by the invention is not easy to be interfered by a local area, and has strong robustness;
the method and the device have the advantages that from the practical application scene, global characteristics are comprehensively judged, so that the method and the device have higher detection precision;
the invention does not depend on the structure of the network model to optimize, thereby reducing the training and identifying time and the consumption cost of computing power resources;
the invention has reasonable design, simple training and using method, low cost, high acquired identification precision and high cost performance, and is convenient for popularization and application.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a fire operation detection method based on object recognition and image processing according to embodiment 1 of the present invention;
FIG. 2 is a diagram showing the structure of SE-YOLO V7 model provided in embodiment 2 of the present invention;
fig. 3 is a schematic diagram of obtaining a parameter setting according to embodiment 4 of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
Example 1
Referring to fig. 1, a fire operation detection method based on object recognition and image processing includes the following steps:
step S1: acquiring an acquired image;
step S2: performing target positioning on workers and highlight areas in the image;
step S3: rechecking the highlight region: dividing the highlight region and filtering out an interference region;
step S4: acquiring a linear distance between a priori frame center point of the highlight region and a priori frame center point of a nearest worker; monocular vision ranging may be specifically employed herein;
step S5: and judging whether the linear distance is smaller than a distance threshold, if so, judging that the fire operation is performed, otherwise, judging that the fire operation is not performed.
The core of this embodiment is to process and recognize the acquired image in order to determine whether a fire operation is taking place. Image acquisition can be realized by installing a camera on site that captures frames at a certain frequency, or the current image can be captured manually as needed. First, the workers and the highlight areas in the image are located: the workers and all highlight areas are identified, and interference such as reflections from specular objects is then eliminated by rechecking, so that the highlight areas finally regarded as firelight are determined. The distance between a worker and the firelight is then determined via prior boxes. A prior box is a concept from target detection algorithms: a set of rectangular bounding boxes preset in the picture for framing the possible positions of target objects. Prior boxes are usually determined from empirical data or statistical methods; during training, their positions and scales are adjusted by matching them against ground-truth annotation boxes and computing the loss.
It should be specifically noted that if there are multiple highlight regions, each highlight region is processed in turn according to steps S4-S5.
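To make the flow concrete, below is a minimal Python sketch of how steps S1-S5 might be chained. The detector interface, the recheck_highlight and to_world helpers, and the 3 m distance threshold are assumptions made for this example, not values taken from the patent.

```python
import math

DIST_THRESHOLD_M = 3.0  # assumed safety-distance threshold, in metres

def box_center(box):
    """Center point (pixel coordinates) of a prior box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0

def detect_fire_operation(image, detector, recheck_highlight, to_world):
    """Steps S1-S5: locate workers and highlight regions, recheck the
    highlights, then compare worker-to-highlight distances in world
    coordinates against the threshold."""
    # Step S2: `detector` is assumed to return (class_name, box) pairs,
    # e.g. from an SE-YOLO V7 model as in Example 2.
    detections = detector(image)
    workers = [box for cls, box in detections if cls == "worker"]
    highlights = [box for cls, box in detections if cls == "highlight"]
    if not workers:
        return False

    # Step S3: keep only highlight regions that the recheck
    # (entropy / anisotropy / variance, Example 3) confirms as firelight.
    highlights = [box for box in highlights if recheck_highlight(image, box)]

    # Steps S4-S5: each remaining highlight region is handled in turn.
    for h_box in highlights:
        xf, yf = to_world(box_center(h_box))
        d_min = min(math.hypot(xp - xf, yp - yf)
                    for xp, yp in (to_world(box_center(w)) for w in workers))
        if d_min < DIST_THRESHOLD_M:
            return True  # fire operation judged to be in progress
    return False
```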
Example 2
The present embodiment further describes a method for performing object positioning on the operator and the highlight region in the image in step S2 based on the technical solution of embodiment 1.
In this embodiment, the workers and the highlight areas in the image are located by performing target positioning with the improved SE-YOLO V7. The SE-YOLO V7 model block diagram is shown in FIG. 2.
Further, the channel attention mechanism adopted in this embodiment adjusts the weight of each channel and highlights the features the network should attend to, which benefits target identification. Conventional spatial attention mechanisms, by contrast, highlight the spatial location of objects mainly through image context and allocate weights spatially. Construction sites, however, are cluttered environments containing many reflections whose brightness resembles that of fire operations but whose appearances differ, so a spatial attention mechanism is not suitable here.
Specifically, the improved SE-YOLO V7 attention mechanism employs the following mechanism:
u_c = f_sq(x_c) = (1/(h·w)) · Σ_{i=1}^{h} Σ_{j=1}^{w} x_c(i,j);
v_c = f_ex(u_c, W_1, W_2) = Sigmoid[W_2 · ReLU(W_1 · u_c)];
y_c = f_scale(v_c) · x_c = v_c · x_c;
wherein f_sq is the squeeze function, f_ex is the excitation function and f_scale is the scale transformation function; h, w and c are the height, width and channel number of the acquired image x_c; x_c(i,j) is the input value at position (i,j) of channel c; u_c is the intermediate tensor obtained after compression; v_c is the output tensor; y_c is the feature map obtained by the attention mechanism; and W_1 and W_2 are the dimension-reduction layer and the dimension-restoration layer, respectively.
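As a concrete illustration of the equations above, the following is a minimal PyTorch sketch of a squeeze-and-excitation channel-attention block. The reduction ratio r = 16 and the exact insertion point within the YOLO V7 backbone are assumptions; the text does not specify them.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention following the
    f_sq / f_ex / f_scale equations above (reduction ratio r assumed)."""

    def __init__(self, channels: int, r: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)  # f_sq: global average over h x w
        self.excite = nn.Sequential(            # f_ex: W1 (reduce), ReLU, W2 (restore), Sigmoid
            nn.Linear(channels, channels // r, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        u = self.squeeze(x).view(b, c)       # u_c: one scalar per channel
        v = self.excite(u).view(b, c, 1, 1)  # v_c: per-channel attention weights
        return x * v                         # y_c = f_scale(v_c) * x_c
```

In SE-YOLO V7, such a block would typically be appended to selected convolutional stages of the backbone so that channel weights are recalibrated before the detection heads.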
Example 3
The present embodiment further describes a method for rechecking the highlight region in step S3 based on the technical solution of embodiment 1.
Because the preceding step (for example, SE-YOLO V7) cannot guarantee that the highlight regions located as firelight are 100% accurate, the located highlight regions must be rechecked to ensure precise positioning.
As a preferable solution of this embodiment, the method for rechecking the highlight region includes:
acquiring a color image of RGB three channels;
because the information content of the three-channel color image is too large, the calculation speed is improved for the convenience of calculation, and the color image is then grayed by adopting a weighted average method to obtain a gray image;
threshold segmentation is carried out on the gray level image to screen out all highlight areas; here the screen stamp is a region of the highlighting region. The screening range should include all of his highlight region, including the corresponding interference, so subsequent filtering is required;
the brightness generated by the fire operation, including the generated sparks, has the characteristic that the brightness of the central area is extremely high, and the more the brightness is diffused outwards, the lower the brightness is. This characteristic is reflected in the gray image in that the gray value of the central local area is high and the gray value decreases outward. Therefore, the filtering method adopted in this embodiment is to filter out the Entropy of all the highlight areas, thereby filtering out the interference areas according to the Entropy;
wherein rel [ i ] is a relative gray value frequency histogram, i is an image gray value;
obtaining the anisotropy of entropy:
where k is the smallest possible gray value with sum (rel [ i ]) 0.5 or more
Calculating variance of the gray value of the region, confirming the variance as the fire light generated by the fire operation again, and finally finishing judgment:
wherein R is the selected highlight region, p is one pixel in R, the pixel gray value is g (p), and f= |r|.
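For illustration, a NumPy sketch of the three recheck features follows, assuming 8-bit gray values. region_features is a hypothetical helper returning (Entropy, Anisotropy, Variance) for one segmented region; the thresholds that separate firelight from interference are left to the caller, since their values are not given above.

```python
import numpy as np

def region_features(gray_values: np.ndarray):
    """Entropy, anisotropy and variance of one highlight region.
    `gray_values` holds the gray levels g(p) of the pixels p in R."""
    g = gray_values.ravel().astype(np.int64)
    rel = np.bincount(g, minlength=256) / g.size  # rel[i]: relative frequencies

    nz = rel > 0
    entropy = float(-np.sum(rel[nz] * np.log2(rel[nz])))

    # k: smallest gray value whose cumulative frequency reaches 0.5.
    k = int(np.searchsorted(np.cumsum(rel), 0.5))
    low = rel[: k + 1]
    lnz = low > 0
    anisotropy = (float(-np.sum(low[lnz] * np.log2(low[lnz]))) / entropy
                  if entropy > 0 else 0.0)

    variance = float(np.var(g))  # (1/f) * sum_p (g(p) - Mean)^2, f = |R|
    return entropy, anisotropy, variance
```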
Further, the method of graying the color image with the weighted average method to obtain a gray image is:
Gray(i,j) = 0.299·R(i,j) + 0.587·G(i,j) + 0.114·B(i,j)
wherein R(i,j), G(i,j) and B(i,j) are the R, G and B values of pixel (i,j), respectively.
On the other hand, the method of threshold segmentation for screening all highlight regions from the gray image is as follows:
the screening range of gray values is widened by a total of 10 gray values, specifically:
MinGray_Fire - 5 ≤ Gray ≤ MaxGray_Fire + 5
wherein MinGray_Fire and MaxGray_Fire are the lower and upper limits of the threshold segmentation, respectively.
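A short sketch of the graying and threshold-segmentation steps using OpenCV and NumPy; MinGray_Fire and MaxGray_Fire below are assumed placeholder values, to be chosen from sample firelight images.

```python
import cv2
import numpy as np

MIN_GRAY_FIRE, MAX_GRAY_FIRE = 200, 250  # assumed segmentation limits

def highlight_mask(bgr: np.ndarray) -> np.ndarray:
    """Weighted-average graying followed by threshold segmentation,
    with the screening range widened by 5 gray values on each side."""
    b, g, r = cv2.split(bgr.astype(np.float32))
    gray = 0.299 * r + 0.587 * g + 0.114 * b  # Gray(i,j) = 0.299R + 0.587G + 0.114B
    lo = max(MIN_GRAY_FIRE - 5, 0)
    hi = min(MAX_GRAY_FIRE + 5, 255)
    return ((gray >= lo) & (gray <= hi)).astype(np.uint8) * 255
```

The connected components of the returned mask form the candidate highlight regions that are then passed to the entropy/anisotropy/variance recheck above.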
Example 4
The present embodiment further describes, based on the technical solution of embodiment 1, a method for obtaining the linear distance between the prior frame center point of the highlight region and the prior frame center point of the nearest worker in step S4.
As a preferred solution of this embodiment, the straight-line distance between the prior-box center point of the highlight region and the prior-box center point of the nearest worker is obtained as:
d = sqrt((X_P - X_F)² + (Y_P - Y_F)²)
wherein (X_P, Y_P) are the world coordinates of the center point of the worker's prior box, and (X_F, Y_F) are the world coordinates of the center point of the prior box of the highlight region.
Further, the world coordinates of a point are obtained from its pixel coordinates in the image, the relationship between the pixel coordinates and the world coordinates being:
Z_c · [u, v, 1]^T = A · M · [X_W, Y_W, Z_W, 1]^T
wherein (u, v) are the pixel coordinates, (X_W, Y_W, Z_W, 1) are the homogeneous world coordinates, Z_c is the depth scale factor, A is the 3×3 camera intrinsic (internal reference) matrix, and M is the 3×4 camera extrinsic (external reference) matrix.
Finally, the camera intrinsic matrix A is acquired by the Zhang Zhengyou calibration method.
Reference may be made to fig. 3 for a method of obtaining the extrinsic matrix M.
The known information is the camera height h above the ground, with the camera initially oriented perpendicular to the ground. If the world coordinate system is taken on the ground, the camera coordinate system is parallel to it: the axis perpendicular to the X-Y plane of the camera coordinate system is perpendicular to the X-Y plane of the world coordinate system and parallel to its Z axis. The camera coordinate system can thus be regarded as the world coordinate system translated by h along the Z axis, from which the extrinsic parameters of the world coordinates relative to the camera coordinates are obtained.
Therefore, once the camera intrinsic matrix A and the camera extrinsic matrix M are known, the world coordinates of the worker's prior-box center point and of the center point of the highlight region can be obtained from their pixel coordinates in the image through the relationship between pixel and world coordinates. Whether a fire operation is in progress can then be identified by computing the distance and comparing it against the threshold.
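Under the Fig. 3 geometry (camera at height h, optical axis perpendicular to the ground, world origin on the ground directly below the camera), a pixel can be back-projected onto the ground plane using only the intrinsic matrix A. The following sketch illustrates this; the function names are illustrative, and in practice A can be obtained with OpenCV's cv2.calibrateCamera, which implements Zhang Zhengyou's method.

```python
import numpy as np

def pixel_to_world(u: float, v: float, A: np.ndarray, h: float):
    """Back-project pixel (u, v) onto the ground plane, assuming the
    optical axis is perpendicular to the ground at height h and A is
    the 3x3 intrinsic matrix from Zhang's calibration."""
    ray = np.linalg.inv(A) @ np.array([u, v, 1.0])  # ray direction in camera frame
    s = h / ray[2]  # the ground plane lies at depth h along the optical axis
    return s * ray[0], s * ray[1]

def straight_line_distance(worker_xy, highlight_xy) -> float:
    """d = sqrt((X_P - X_F)^2 + (Y_P - Y_F)^2) between prior-box centers."""
    (xp, yp), (xf, yf) = worker_xy, highlight_xy
    return float(np.hypot(xp - xf, yp - yf))
```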
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. The fire operation detection method based on target identification and image processing is characterized by comprising the following steps of:
acquiring an acquired image;
performing target positioning on workers and highlight areas in the image;
rechecking the highlight region: dividing the highlight region and filtering out an interference region;
acquiring a linear distance between a priori frame center point of the highlight region and a priori frame center point of a nearest worker;
and judging whether the linear distance is smaller than a distance threshold, if so, judging that the fire operation is performed, otherwise, judging that the fire operation is not performed.
2. The fire operation detection method based on target identification and image processing according to claim 1, wherein the method for performing target positioning on the workers and the highlight areas in the image is: performing target positioning on the workers and the highlight areas through the improved SE-YOLO V7.
3. The fire operation detection method based on object recognition and image processing according to claim 2, wherein the improved attention mechanism of SE-YOLO V7 adopts the following mechanism:
u_c = f_sq(x_c) = (1/(h·w)) · Σ_{i=1}^{h} Σ_{j=1}^{w} x_c(i,j);
v_c = f_ex(u_c, W_1, W_2) = Sigmoid[W_2 · ReLU(W_1 · u_c)];
y_c = f_scale(v_c) · x_c = v_c · x_c;
wherein f_sq is the squeeze function, f_ex is the excitation function and f_scale is the scale transformation function; h, w and c are the height, width and channel number of the acquired image x_c; x_c(i,j) is the input value at position (i,j) of channel c; u_c is the intermediate tensor obtained after compression; v_c is the output tensor; y_c is the feature map obtained by the attention mechanism; and W_1 and W_2 are the dimension-reduction layer and the dimension-restoration layer, respectively.
4. The fire operation detection method based on object recognition and image processing according to claim 1, wherein the method for rechecking the highlight region is as follows:
acquiring a color image of RGB three channels;
graying the color image by adopting a weighted average method to obtain a gray image;
threshold segmentation is carried out on the gray level image to screen out all highlight areas;
computing the Entropy of all the highlight regions, so as to filter out interference regions according to the Entropy:
Entropy = -Σ_{i=0}^{255} rel[i] · log2(rel[i])
wherein rel[i] is the relative gray-value frequency histogram and i is an image gray value;
obtaining the anisotropy of the entropy:
Anisotropy = (-Σ_{i=0}^{k} rel[i] · log2(rel[i])) / Entropy
where k is the smallest possible gray value with Σ_{i=0}^{k} rel[i] ≥ 0.5;
calculating the variance of the gray values of the region to confirm once more that the region is firelight generated by the fire operation, finally completing the judgment:
Mean = (1/f) · Σ_{p∈R} g(p), Variance = (1/f) · Σ_{p∈R} (g(p) - Mean)²
wherein R is the selected highlight region, p is a pixel in R with gray value g(p), and f = |R|.
5. The method for detecting fire operation based on object recognition and image processing according to claim 4, wherein the method for graying the color image by using a weighted average method to obtain a gray image comprises the following steps:
Gray(i,j) = 0.299·R(i,j) + 0.587·G(i,j) + 0.114·B(i,j)
wherein R(i,j), G(i,j) and B(i,j) are the R, G and B values of pixel (i,j), respectively.
6. The method for detecting fire operation based on object recognition and image processing according to claim 5, wherein the method for threshold segmentation and screening of all highlight areas for the gray scale image is as follows:
the screening range of gray values is widened by a total of 10 gray values, specifically:
MinGray_Fire - 5 ≤ Gray ≤ MaxGray_Fire + 5
wherein MinGray_Fire and MaxGray_Fire are the lower and upper limits of the threshold segmentation, respectively.
7. The method for detecting fire operation based on target identification and image processing according to claim 1, wherein the straight-line distance between the prior-box center point of the highlight region and the prior-box center point of the nearest worker is obtained as:
d = sqrt((X_P - X_F)² + (Y_P - Y_F)²)
wherein (X_P, Y_P) are the world coordinates of the center point of the worker's prior box, and (X_F, Y_F) are the world coordinates of the center point of the prior box of the highlight region.
8. The method for detecting fire operation based on target identification and image processing according to claim 7, wherein the world coordinates of a point are obtained from its pixel coordinates in the image, the relationship between the pixel coordinates and the world coordinates being:
Z_c · [u, v, 1]^T = A · M · [X_W, Y_W, Z_W, 1]^T
wherein (u, v) are the pixel coordinates, (X_W, Y_W, Z_W, 1) are the homogeneous world coordinates, Z_c is the depth scale factor, A is the 3×3 camera intrinsic (internal reference) matrix, and M is the 3×4 camera extrinsic (external reference) matrix.
9. The method for detecting fire operation based on target identification and image processing according to claim 8, wherein the camera intrinsic matrix A is acquired by the Zhang Zhengyou calibration method.
CN202310861073.9A 2023-07-13 2023-07-13 Fire operation detection method based on target identification and image processing Active CN116883661B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310861073.9A CN116883661B (en) 2023-07-13 2023-07-13 Fire operation detection method based on target identification and image processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310861073.9A CN116883661B (en) 2023-07-13 2023-07-13 Fire operation detection method based on target identification and image processing

Publications (2)

Publication Number Publication Date
CN116883661A (en) 2023-10-13
CN116883661B (en) 2024-03-15

Family

ID=88265740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310861073.9A Active CN116883661B (en) 2023-07-13 2023-07-13 Fire operation detection method based on target identification and image processing

Country Status (1)

Country Link
CN (1) CN116883661B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112802129A (en) * 2021-04-13 2021-05-14 之江实验室 Welding safety distance measuring method based on monocular vision
CN113688921A (en) * 2021-08-31 2021-11-23 重庆科技学院 Fire operation identification method based on graph convolution network and target detection
CN113990027A (en) * 2021-09-17 2022-01-28 珠海格力电器股份有限公司 Alarm method and device, electronic equipment, intelligent bed and storage medium
CN115565123A (en) * 2022-08-23 2023-01-03 上海建工集团股份有限公司 Construction site fire monitoring method based on deep learning and multi-source image fusion perception
CN115861875A (en) * 2022-11-17 2023-03-28 上海建工四建集团有限公司 Construction site fire work safety supervision method by utilizing tower crane video image detection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on automatic identification technology of unsafe behaviors of personnel in chemical enterprises based on computer vision; Yang Peng et al.; Shandong Chemical Industry (《山东化工》); 2021-07-23; pp. 134-135, 137 *

Also Published As

Publication number Publication date
CN116883661A (en) 2023-10-13

Similar Documents

Publication Publication Date Title
CN107451999B (en) Foreign matter detection method and device based on image recognition
WO2016055031A1 (en) Straight line detection and image processing method and relevant device
CN110910350B (en) Nut loosening detection method for wind power tower cylinder
CN109145756A (en) Object detection method based on machine vision and deep learning
CN112052782B (en) Method, device, equipment and storage medium for recognizing parking space based on looking around
CN108629230B (en) People counting method and device and elevator dispatching method and system
CN106530281A (en) Edge feature-based unmanned aerial vehicle image blur judgment method and system
CN109506628A (en) Object distance measuring method under a kind of truck environment based on deep learning
CN110189375B (en) Image target identification method based on monocular vision measurement
CN110570454A (en) Method and device for detecting foreign matter invasion
CN113591597B (en) Intelligent public security information system based on thermal imaging
CN115512134A (en) Express item stacking abnormity early warning method, device, equipment and storage medium
CN111612895A (en) Leaf-shielding-resistant CIM real-time imaging method for detecting abnormal parking of shared bicycle
CN115841633A (en) Power tower and power line associated correction power tower and power line detection method
CN103607558A (en) Video monitoring system, target matching method and apparatus thereof
CN112288682A (en) Electric power equipment defect positioning method based on image registration
CN108664886A (en) A kind of fast face recognition method adapting to substation's disengaging monitoring demand
CN113177941B (en) Steel coil edge crack identification method, system, medium and terminal
CN114462646A (en) Pole number plate identification method and system based on contact network safety inspection
CN113252103A (en) Method for calculating volume and mass of material pile based on MATLAB image recognition technology
CN113362221A (en) Face recognition system and face recognition method for entrance guard
CN113034544A (en) People flow analysis method and device based on depth camera
CN116883661B (en) Fire operation detection method based on target identification and image processing
CN112560574A (en) River black water discharge detection method and recognition system applying same
CN114417906B (en) Method, device, equipment and storage medium for identifying microscopic image identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant