CN117253176B - Safe production Al intelligent detection method based on video analysis and computer vision - Google Patents
Safe production Al intelligent detection method based on video analysis and computer vision Download PDFInfo
- Publication number
- CN117253176B CN117253176B CN202311515033.5A CN202311515033A CN117253176B CN 117253176 B CN117253176 B CN 117253176B CN 202311515033 A CN202311515033 A CN 202311515033A CN 117253176 B CN117253176 B CN 117253176B
- Authority
- CN
- China
- Prior art keywords
- target detection
- detection area
- frame
- area
- expressed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000001514 detection method Methods 0.000 title claims abstract description 275
- 238000004519 manufacturing process Methods 0.000 title claims abstract description 52
- 238000004458 analytical method Methods 0.000 title claims abstract description 45
- 238000000034 method Methods 0.000 claims abstract description 56
- 238000012544 monitoring process Methods 0.000 claims abstract description 36
- 230000008569 process Effects 0.000 claims abstract description 32
- 238000011156 evaluation Methods 0.000 claims description 64
- 230000002159 abnormal effect Effects 0.000 claims description 49
- 238000004364 calculation method Methods 0.000 claims description 37
- 230000000694 effects Effects 0.000 claims description 33
- 238000000605 extraction Methods 0.000 claims description 32
- 230000008859 change Effects 0.000 claims description 25
- 238000012216 screening Methods 0.000 claims description 25
- 238000012937 correction Methods 0.000 claims description 6
- 230000005856 abnormality Effects 0.000 claims description 4
- 239000000284 extract Substances 0.000 claims description 4
- 230000000977 initiatory effect Effects 0.000 claims 3
- 238000012824 chemical production Methods 0.000 abstract description 17
- 230000006399 behavior Effects 0.000 abstract description 7
- 230000007246 mechanism Effects 0.000 abstract description 2
- 230000007547 defect Effects 0.000 description 3
- 230000004927 fusion Effects 0.000 description 3
- 230000036541 health Effects 0.000 description 3
- 238000011176 pooling Methods 0.000 description 3
- 230000010365 information processing Effects 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 206010000117 Abnormal behaviour Diseases 0.000 description 1
- 238000007792 addition Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 239000000463 material Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000003860 storage Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 239000002699 waste material Substances 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0631—Resource planning, allocation, distributing or scheduling for enterprises or organisations
- G06Q10/06311—Scheduling, planning or task assignment for a person or group
- G06Q10/063114—Status monitoring or status determination for a person or group
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0635—Risk analysis of enterprise or organisation activities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/26—Government or public services
- G06Q50/265—Personal security, identity or safety
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Human Resources & Organizations (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Strategic Management (AREA)
- Economics (AREA)
- Tourism & Hospitality (AREA)
- Entrepreneurship & Innovation (AREA)
- Multimedia (AREA)
- Marketing (AREA)
- Educational Administration (AREA)
- Development Economics (AREA)
- General Business, Economics & Management (AREA)
- Game Theory and Decision Science (AREA)
- Quality & Reliability (AREA)
- General Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Operations Research (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Evolutionary Computation (AREA)
- Computational Linguistics (AREA)
- Computer Security & Cryptography (AREA)
- Primary Health Care (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the technical field of safety production detection, and particularly discloses a safety production Al intelligent detection method based on video analysis and computer vision, which comprises the following steps: the invention detects whether the safety production personnel are on duty or not through analyzing videos in the monitoring area in real time, and delimits a special area to detect whether related personnel are in a specific area or not in the chemical production process so as to prevent off duty behaviors, thereby not only being capable of automatically identifying whether the personnel are on duty or off duty, helping a manager to monitor the working condition of the staff in the working time in real time, but also being capable of effectively improving the safety and efficiency in the chemical production process through an intelligent monitoring and alarm mechanism, and enabling the manager to more comprehensively control the chemical production state.
Description
Technical Field
The invention relates to the technical field of safety production detection, in particular to an intelligent detection method for safety production Al based on video analysis and computer vision.
Background
The chemical production environment safety monitoring means monitoring and detecting various security threats and loopholes in the chemical production environment so as to ensure the safety of the chemical production environment, the safety awareness and adherence to safety specifications of staff are important for creating and maintaining the safety chemical production environment, if the staff lacks the safety awareness, the safety regulations can be ignored, potential safety risks and loopholes are caused, meanwhile, the autonomy of the staff is higher, and the staff is difficult to effectively manage in the chemical production environment with larger body volume, so that an improved safety detection method of the chemical production environment is required, and the behavior state and off-duty condition of the staff are monitored and managed.
Today, there are also some disadvantages to the safety production test, in particular in the following several aspects: (1) The traditional video monitoring means generally adopts manual monitoring or determines the behavior and off-duty condition of staff through the image fluctuation degree, the passive monitoring means needs a monitor to manually watch the picture of a camera, and a great deal of time and effort are required to watch a great deal of monitoring pictures, so that the mode has certain delay, the monitor can not timely find the off-duty condition of the staff, and important details are easy to ignore, so that potential safety hazards are buried in production work.
(2) The current video monitoring method often generates a large amount of monitoring data, huge data flow can greatly improve the difficulty of information extraction, a large amount of time and resources are consumed for processing, storing and analyzing the data, key information can be omitted, staff abnormal behaviors cannot be accurately positioned, meanwhile, the monitoring data contains a large amount of redundant information, and resource waste can be caused in the information processing process.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a safe production Al intelligent detection method based on video analysis and computer vision, which can effectively solve the problems related to the background art.
In order to achieve the above purpose, the invention is realized by the following technical scheme: the invention provides an xx method, which comprises the following steps: dividing a safe production area and each appointed area to be detected in a background service management system, and jointly marking the safe production area and each appointed area to be detected as each target detection area.
And secondly, identifying each target detection area and analyzing the actual number of on-duty staff in each target detection area.
And thirdly, calculating a safety detection degree evaluation coefficient, and analyzing to obtain reference screening data of each target detection area.
And step four, monitoring the change condition of personnel in each target detection area, and recording and feeding back abnormal change information.
And fifthly, performing effect evaluation on the AI intelligent detection model, and performing comprehensive effect evaluation on the model.
As a further method, the identifying each target detection area includes the following specific analysis processes: scanning the objects in each target detection area for a set number of times, generating each object detection frame, and calculating each object detection frame in each target detection areaThe value is calculated by the following formula: />Wherein->Denoted as +.>The first target detection area->Person object detection frame->Area of sub-scan, +.>Expressed as the set->The first target detection area->Area of individual object detection frame->Number expressed as each target detection area, +.>,/>Expressed as total number of target detection areas, +.>Expressed as the number of each object->,/>Expressed as the total number of objects>Number expressed as each scan, +.>,/>Expressed as the total number of scans.
As a further method, the actual number of on-duty staff in each target detection area is analyzed, and the specific analysis process is as follows: extracting a set confidence coefficient threshold value from a safety production detection library, comparing the confidence coefficient of each object detection frame in each target detection area with the confidence coefficient threshold value, and if the confidence coefficient of a certain object detection frame is higher than the confidence coefficient threshold value, marking the object detection frame as the target detection frame, thereby counting the actual number of on-duty staff in each target detection area。
As a further method, the safety detection degree evaluation coefficient is calculated, and the specific analysis process is as follows: acquiring the property of each target detection area, and matching the safety influence factor of the unit space volume of each target detection area according to the set safety influence factor of the unit space volume of each property detection areaAnd for each target detection areaSpatially scanning to obtain spatial volume of each target detection region>Comprehensively calculating the space volume safety influence degree index of each target detection area>The calculation formula is as follows: />Wherein->Expressed as a set spatial volume correction factor, +.>Expressed as a natural constant.
Obtaining required staff number of each target detection area from safe production detection libraryCalculating employee number influence degree index of each target detection area>The calculation formula is as follows: />Wherein->Expressed as the number of allowed deviations set, +.>Indicated as a set person number influence degree correction factor.
Comprehensive calculation of safety detection degree evaluation coefficient of each target detection areaThe calculation formula is as follows:wherein->And->The space volume safety influence degree and the employee number influence degree are respectively expressed as the duty ratio weight of the set space volume safety influence degree and the employee number influence degree.
As a further method, the analysis obtains reference screening data of each target detection area, and the specific analysis process is as follows: and matching the safety detection degree evaluation coefficient of each target detection area with reference screening data corresponding to each safety detection degree evaluation coefficient interval in the safety production detection library, wherein the reference screening data comprises extraction interval frame numbers and extraction frame rates, and further obtaining the extraction interval frame numbers and the extraction frame rates of each target detection area.
As a further method, the monitoring of the variation of the personnel in each target detection area comprises the following specific analysis processes: according to the set detection period, carrying out video frame extraction detection by using the extraction interval frame number and the frame extraction rate of each target detection area, further counting each video frame of each target detection area, taking the first extracted video frame of each target detection area as a key video frame, jointly marking the video frames extracted subsequently in each target detection area as each associated video frame, and analyzing to obtain the confidence value of each associated video frame of each target detection frame in each target detection areaAnd extracting confidence value +.>Comprehensively calculating confidence value abnormal variation degree index corresponding to each associated video frame of each target detection area>The calculation formula is as follows: />Wherein->Expressed as a set allowable deviation confidence coefficient,/->Represented as the number of each target detection box,,/>expressed as total number of target detection frames, +.>Represented as the number of each associated video frame,,/>represented as the total number of associated video frames.
Temperature screening points are distributed on each target detection area, time points are obtained according to time node division of each associated video frame, and the temperature of each temperature screening point in each time point in each target detection area is obtained through monitoringAnd acquiring the temperature +.A. of each temperature monitoring point of each target detection area at the initial time point>Comprehensively calculating temperature abnormality variation degree index corresponding to each associated video frame of each target detection area>The calculation formula is as follows: />Wherein->Indicated as set allowable deviation temperature, +.>Number expressed as each temperature screening point, +.>,/>Expressed as the total number of temperature screening points.
Analyzing and obtaining the number of on-duty staff in each associated video frame of each target detection areaAnd extracts the height accumulated value +.>Simultaneously extracting initial height accumulated value ++of each target detection area in key video frame>On duty employee count>Comprehensively calculating employee state abnormal variation degree index corresponding to each associated video frame of each target detection area>The calculation formula is as follows: />WhereinAnd->The set unit height fluctuation influence factor and the set unit number fluctuation influence factor are shown respectively.
Scanning each target detection area to establish a three-dimensional model, and taking the central point of each target detection area as a datum point to count the distance between each on-duty employee and the datum point in the key video frame of each target detection areaAnd distance of each on Shift employee from the reference point in each associated video frame +.>Comprehensively calculating employee position abnormal variation degree index corresponding to each associated video frame of each target detection area>The calculation formula is as follows: />WhereinExpressed as a set allowable deviation distance, +.>Number indicated as employee on duty, < >>,/>Expressed as the total number of on Shift employees.
Comprehensively calculating abnormal variation evaluation coefficients corresponding to each time of associated video frames of each target detection area。
As a further method, the abnormal variation evaluation coefficients corresponding to each associated video frame in each target detection area are comprehensively calculated, and the calculation formula is as follows:wherein->、/>、/>Andthe confidence value abnormal change degree, the temperature abnormal change index, the staff state abnormal change degree and the staff position abnormal change degree are respectively expressed as the duty ratio weights.
As a further method, the recording feedback is performed on the abnormal variation information, and the specific analysis process is as follows: and acquiring an abnormal variation evaluation coefficient threshold value from the safety production detection library, comparing the abnormal variation evaluation coefficient of each target detection area with the abnormal variation evaluation coefficient threshold value, and if the abnormal variation evaluation coefficient of a certain target detection area is higher than the abnormal variation evaluation coefficient threshold value, recording and displaying the target detection area.
As a further method, the method for evaluating the effect of the AI intelligent detection model specifically comprises the following steps: precision, precision rate, recall rate,Score sum->。
Accuracy ofThe calculation formula is as follows: />Wherein->Expressed as total number of samples tested off duty with the model,/->The number of samples that are correctly detected for the model is indicated.
Accuracy rate ofThe calculation formula is as follows: />Wherein->Expressed as the number of times the off-duty employee is detected as off-duty, < >>Represented as the number of times an off-Shift employee is detected as off-Shift.
Recall rate of recallThe calculation formula is as follows: />Wherein->Represented as the number of times an off-Shift employee is detected as not being off-Shift.
The score is calculated by the following formula: />。
The specific analysis process of (2) is as follows: constructing a curve by taking the accuracy rate as the abscissa and the recall rate as the ordinate, and calculating the area +.>Calculating the +.>Average value of +.>。
As a further method, the model is subjected to comprehensive effect evaluation, and the specific analysis process comprises the following steps: according to the accuracy, precision rate, recall rate,Score sum->Calculating an actual effect evaluation index of the AI smart detection model +.>The calculation formula is as follows: />Wherein->、/>、/>、And->Expressed as accuracy, precision, recall, and +.>Score sum->Reference value->、/>、/>、/>And->Expressed as accuracy, precision, recall, and +.>Score sum->Corresponding influencing factors.
And acquiring an actual effect evaluation index threshold from the safety production detection library, comparing the actual effect evaluation index of the AI intelligent detection model with the actual effect evaluation index threshold, and if the actual effect evaluation index of the AI intelligent detection model is lower than the actual effect evaluation index threshold, carrying out feedback prompt and improving and optimizing the AI intelligent detection model.
Compared with the prior art, the embodiment of the invention has at least the following advantages or beneficial effects: (1) The invention provides the intelligent detection method for the safe production Al based on video analysis and computer vision, which is used for defining the special area to detect the change condition of related staff in the chemical production process, automatically identifying whether the staff is on duty or off duty, helping the manager to monitor the working condition of the staff in real time during the working time, and effectively improving the safety and the efficiency in the chemical production process through an intelligent monitoring and alarming mechanism so that the manager can more comprehensively control the state of the safe production.
(2) According to the invention, through a frame extraction technology, proper frame extraction interval frames and frame extraction rate are selected according to the safety monitoring degree of the chemical production environment, so that the system can selectively process images or video frames, and the generation of redundant data is reduced, thereby saving the use of computing resources and memory, and helping balance the resource consumption and monitoring effect in a real-time target detection task.
(3) According to the invention, the detection frame is intelligently screened by combining the video with the monitoring mode of the sensor and by utilizing the confidence, the defect that the traditional detection algorithm is easy to be blocked or blurred in a complex scene is overcome, the scene can be more comprehensively understood by the algorithm through a multi-scale feature fusion strategy, the position and the state of staff are effectively captured, and the accuracy and the reliability of the algorithm are improved.
(4) According to the intelligent monitoring feedback method, after the off-duty behavior of the staff is detected, the detection result is uploaded to the background management server, the behavior is recorded and displayed in real time, meanwhile, an administrator is notified through an alarm, necessary measures can be timely taken, intelligent off-duty monitoring of the staff in a chemical production area is achieved, the efficiency and the accuracy of safe production are improved, accidents are prevented, and the safety and the health of production staff and park facilities are protected to the greatest extent.
Drawings
The invention will be further described with reference to the accompanying drawings, in which embodiments do not constitute any limitation of the invention, and other drawings can be obtained by one of ordinary skill in the art without inventive effort from the following drawings.
FIG. 1 is a schematic flow chart of the method of the present invention.
Detailed Description
The technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, but not all embodiments, and all other embodiments obtained by those skilled in the art without making creative efforts based on the embodiments of the present invention are included in the protection scope of the present invention.
Referring to fig. 1, the invention provides a safe production Al intelligent detection method based on video analysis and computer vision, comprising the following steps: dividing a safe production area and each appointed area to be detected in a background service management system, and jointly marking the safe production area and each appointed area to be detected as each target detection area.
Step two, identifying each target detection area and analyzing the actual number of on-duty staff in each target detection area。
Specifically, the identifying each target detection area includes the following specific analysis processes: scanning the objects in each target detection area for a set number of times, generating each object detection frame, and calculating each object detection frame in each target detection areaThe value is calculated by the following formula: />Wherein->Denoted as +.>The first target detection area->Person object detection frame->Area of sub-scan, +.>Expressed as the set->The first target detection area->Area of individual object detection frame->Number expressed as each target detection area, +.>,/>Expressed as total number of target detection areas, +.>Expressed as the number of each object->,/>Expressed as the total number of objects>Number expressed as each scan, +.>,/>Expressed as the total number of scans.
Further, the analyzing the actual number of on-duty staff in each target detection area comprises the following specific analysis processes: extracting a set confidence coefficient threshold value from a safety production detection library, comparing the confidence coefficient of each object detection frame in each target detection area with the confidence coefficient threshold value, and if the confidence coefficient of a certain object detection frame is higher than the confidence coefficient threshold value, marking the object detection frame as the target detection frame, thereby counting the actual number of on-duty staff in each target detection area。
It should be explained that, the IOU is the overlapping degree value of the prediction frame and the real frame in the target detection, and is used for measuring the quality of model classification, the confidence level setting plays a role in screening in the detection results output by the algorithm, and only the detection frame with the confidence level higher than the threshold value can be regarded as an effective detection result, so that the less reliable detection frame with lower confidence level is filtered, and the accuracy and the reliability of the algorithm are improved.
And thirdly, calculating a safety detection degree evaluation coefficient, and analyzing to obtain reference screening data of each target detection area.
Specifically, the safety detection degree evaluation coefficient is calculated, and the specific analysis process is as follows: acquiring the property of each target detection area, and matching the safety influence factor of the unit space volume of each target detection area according to the set safety influence factor of the unit space volume of each property detection areaScanning the space of each target detection area to obtain the space volume of each target detection area>Comprehensively calculating the space volume safety influence degree index of each target detection area>The calculation formula is as follows: />Wherein->Expressed as a set spatial volume correction factor, +.>Expressed as a natural constant.
It should be construed that the properties of the detection area include, but are not limited to, production lines, detection laboratories, material storage, etc.
Obtaining required staff number of each target detection area from safe production detection libraryCalculating employee number influence degree index of each target detection area>The calculation formula is as follows: />Wherein->Expressed as the number of allowed deviations set, +.>Indicated as a set person number influence degree correction factor.
Comprehensive calculation of safety detection degree evaluation coefficient of each target detection areaThe calculation formula is as follows:wherein->And->The space volume safety influence degree and the employee number influence degree are respectively expressed as the duty ratio weight of the set space volume safety influence degree and the employee number influence degree.
Further, the analysis obtains reference screening data of each target detection area, and the specific analysis process comprises the following steps: and matching the safety detection degree evaluation coefficient of each target detection area with reference screening data corresponding to each safety detection degree evaluation coefficient interval in the safety production detection library, wherein the reference screening data comprises extraction interval frame numbers and extraction frame rates, and further obtaining the extraction interval frame numbers and the extraction frame rates of each target detection area.
And step four, monitoring the change condition of personnel in each target detection area, and recording and feeding back abnormal change information.
Specifically, the monitoring of the change condition of personnel in each target detection area includes the following specific analysis processes: according to the set detection period, and with the number of extraction interval frames and extraction frames of each target detection areaThe video extraction frame detection is carried out according to the rate, and then each video frame of each target detection area is counted, the first extracted video frame of each target detection area is used as a key video frame, the video frames extracted subsequently in each target detection area are jointly marked as each associated video frame, and the confidence value of each associated video frame of each target detection frame in each target detection area is obtained through analysisAnd extracting confidence value +.>Comprehensively calculating confidence value abnormal variation degree index corresponding to each associated video frame of each target detection area>The calculation formula is as follows: />Wherein->Expressed as a set allowable deviation confidence coefficient,/->Number expressed as each target detection frame, +.>,/>Expressed as total number of target detection frames, +.>Denoted as the number of each associated video frame, +.>,/>Represented as the total number of associated video frames.
In a specific embodiment, through a frame extraction technology, a proper frame extraction interval frame number and frame extraction rate are selected according to the safety monitoring degree of a chemical production environment, so that a system can selectively process images or video frames, and the generation of redundant data is reduced, thereby saving the use of computing resources and memory, and helping balance the resource consumption and monitoring effect in a real-time target detection task.
Temperature screening points are distributed on each target detection area, time points are obtained according to time node division of each associated video frame, and the temperature of each temperature screening point in each time point in each target detection area is obtained through monitoringAnd acquiring the temperature +.A. of each temperature monitoring point of each target detection area at the initial time point>Comprehensively calculating temperature abnormality variation degree index corresponding to each associated video frame of each target detection area>The calculation formula is as follows:wherein->Indicated as set allowable deviation temperature, +.>Number expressed as each temperature screening point, +.>,/>Expressed as the total number of temperature screening points.
It should be explained that the initial time point is a time node of the key video frame.
In a specific embodiment, the detection frame is intelligently screened by combining the video with the monitoring mode of the sensor and utilizing the confidence coefficient, so that the defect that a traditional detection algorithm is easy to be blocked or blurred in a complex scene is overcome, the scene can be more comprehensively understood by the algorithm through a multi-scale feature fusion strategy, the position and the state of staff are effectively captured, and the accuracy and the reliability of the algorithm are improved.
Analyzing and obtaining the number of on-duty staff in each associated video frame of each target detection areaAnd extracts the height accumulated value +.>Simultaneously extracting initial height accumulated value ++of each target detection area in key video frame>On duty employee count>Comprehensively calculating employee state abnormal variation degree index corresponding to each associated video frame of each target detection area>The calculation formula is as follows: />Wherein->Andthe set unit height fluctuation influence factor and the set unit number fluctuation influence factor are shown respectively.
Scanning each target detection area to establish a three-dimensional model, and counting the closing of each target detection area by taking the central point of each target detection area as a datum pointDistance between each on Shift employee and reference point in key video frameAnd distance of each on Shift employee from the reference point in each associated video frame +.>Comprehensively calculating employee position abnormal variation degree index corresponding to each associated video frame of each target detection area>The calculation formula is as follows: />Wherein->Expressed as a set allowable deviation distance, +.>Number indicated as employee on duty, < >>,/>Expressed as the total number of on Shift employees.
It should be explained that, in the above-mentioned processing of video analysis, the feature extraction is performed by extracting pictures at intervals of x frames, and the main steps of feature extraction are that the images are adjusted to 640x640 format, and ten steps are performed after pictures are read in.
The first step transforms the number of channels of the input feature map from 3 (RGB three channels) to 64, and convolves the input feature map with a 3x3 convolution kernel, step size of 2, which reduces the size of the input feature map to half, the convolution layer is used to extract features from the input 640x640x3 image, and the output feature map size is 320x320x64.
The second step is to use a second convolution layer, the number of channels of the input feature map is 64, the number of channels increases to 128 after passing through the convolution layer, the convolution operation is performed by using a convolution kernel of 3x3, the step size is 2, which again reduces the feature map size to half of the original size, the layer is used for further feature extraction, and the output feature map size is 160x160x128.
The third step is to use a custom C2f layer to convolve the input feature map 3 times, each time using 128 3x3 convolution kernels, and connect the results to the input after each convolution, which does not change the feature map size, but helps to further extract the features.
The fourth step is to use a convolution layer, the number of channels of the input feature map is 128, the number of channels increases to 256 after passing through the convolution layer, the convolution operation is performed by using a convolution kernel of 3x3, the step length is 2, the feature map size is reduced to half of the original size again, the layer is used for continuously extracting features, and the output feature map size is 80x80x256.
The fifth step is to continue using the custom C2f layer, similar to before, but here the 6 convolutions are performed and the result is connected to the input after each convolution, this layer continues to extract features, the output feature map size is 80x80x256.
The sixth step is to use a convolution layer, the number of channels of the input feature map is 256, the number of channels increases to 512 after passing through the convolution layer, the convolution operation is performed by using a convolution kernel of 3x3, the step length is 2, this again reduces the feature map size to half of the original size, the layer is used for continuously extracting features, and the output feature map size is 40x40x512.
The seventh step is to use a custom C2f layer, similar to the previous custom C2f layer, to perform 6 convolution operations and connect the result with the input after each convolution, this layer continues to extract features, the output feature map size is 40x40x512.
The eighth step uses a convolution layer, the number of channels of the input feature map is 512, the number of channels increases to 1024 after passing through the convolution layer, the convolution operation is performed by using a convolution kernel of 3x3, the step length is 2, this reduces the feature map size to half of the original size again, the layer is used for continuously extracting features, and the output feature map size is 20x20x1024.
The ninth step uses the last custom C2f layer, similar to the previous custom C2f layer, performs 3 convolutions and connects the result with the input after each convolution, this layer continues to extract features, the output feature map size is 20x20x1024.
And a tenth step of operation uses an SPPF layer, the layer performs spatial pyramid pooling operation, the feature map is divided into subareas with different scales, and each subarea is subjected to maximum pooling operation, and pooling results with different scales are spliced together.
The characteristic information extraction and the multi-scale characteristic fusion of the image are completed through the steps, and information collection is performed for realizing the classification and detection of the identification targets.
Comprehensively calculating abnormal variation evaluation coefficients corresponding to each time of associated video frames of each target detection area。
Further, the calculating formula of the abnormal variation evaluation coefficient corresponding to each associated video frame in each target detection area is as follows:wherein->、/>、/>And->The confidence value abnormal change degree, the temperature abnormal change index, the staff state abnormal change degree and the staff position abnormal change degree are respectively expressed as the duty ratio weights.
Specifically, the recording feedback is performed on the abnormal change information, and the specific analysis process is as follows: and acquiring an abnormal variation evaluation coefficient threshold value from the safety production detection library, comparing the abnormal variation evaluation coefficient of each target detection area with the abnormal variation evaluation coefficient threshold value, and if the abnormal variation evaluation coefficient of a certain target detection area is higher than the abnormal variation evaluation coefficient threshold value, recording and displaying the target detection area.
In a specific embodiment, through intelligent monitoring feedback of machine learning, when the existence of off-duty behaviors of staff is detected, the detection result is uploaded to a background management server, the behaviors are recorded and displayed in real time, and meanwhile, an administrator is notified through an alarm, so that necessary measures can be timely taken, intelligent off-duty monitoring of staff in a chemical production area is realized, the efficiency and the accuracy of safe production are improved, accidents are prevented, and the safety and the health of production staff and park facilities are protected to the greatest extent.
It should be explained that the abnormal change information processing procedure is as follows: the method comprises the steps of setting the number of people in a safe production area in a background service system, monitoring whether irrelevant personnel intervene in a chemical production process in a counting mode, ensuring that only relevant staff enter a specific area to engage in production activities, uploading detection results to a background management server when the system detects that staff leave the post, recording the actions and displaying the actions in real time, informing an administrator through an alarm, enabling the administrator to take necessary measures in time, enabling timely feedback of abnormal information to achieve intelligent leave post monitoring on staff in the chemical production area, improving efficiency and accuracy of safe production, preventing accidents, and protecting safety and health of production staff and park facilities to the greatest extent.
And fifthly, performing effect evaluation on the AI intelligent detection model, and performing comprehensive effect evaluation on the model.
Specifically, the effect evaluation on the AI intelligent detection model specifically includes: precision, precision rate, recall rate,Score sum->。
Accuracy ofThe calculation formula is as follows: />Wherein->Expressed as total number of samples tested off duty with the model,/->The number of samples that are correctly detected for the model is indicated.
Accuracy rate ofThe calculation formula is as follows: />Wherein->Expressed as the number of times the off-duty employee is detected as off-duty, < >>Represented as the number of times an off-Shift employee is detected as off-Shift.
Recall rate of recallThe calculation formula is as follows: />Wherein->Represented as the number of times an off-Shift employee is detected as not being off-Shift.
The score is calculated by the following formula: />。
The specific analysis process of (2) is as follows: constructing a curve by taking the accuracy rate as the abscissa and the recall rate as the ordinate, and calculating the area +.>Calculating the +.>Average value of +.>。
Further, the model is subjected to comprehensive effect evaluation, and the specific analysis process comprises the following steps:
according to the accuracy, precision rate, recall rate,Score sum->Calculating an actual effect evaluation index of the AI smart detection model +.>The calculation formula is as follows: />Wherein->、/>、/>、/>And->Expressed as accuracy, precision, recall, and +.>Score and scoreCorresponding reference value,/->、/>、/>、/>And->Influence factors expressed as set precision, recall, score, and sum, respectively;
and acquiring an actual effect evaluation index threshold from the safety production detection library, comparing the actual effect evaluation index of the AI intelligent detection model with the actual effect evaluation index threshold, and if the actual effect evaluation index of the AI intelligent detection model is lower than the actual effect evaluation index threshold, carrying out feedback prompt and improving and optimizing the AI intelligent detection model.
The foregoing is merely illustrative of the structures of this invention and various modifications, additions and substitutions for those skilled in the art can be made to the described embodiments without departing from the scope of the invention or from the scope of the invention as defined in the accompanying claims.
Claims (9)
1. The intelligent detection method for the safe production AI based on video analysis and computer vision is characterized by comprising the following steps:
dividing a safe production area and each appointed area to be detected in a background service management system, and jointly marking the safe production area and each appointed area to be detected as each target detection area;
step two, identifying each target detection area, and analyzing the actual number of on-duty staff in each target detection area;
step three, calculating a safety detection degree evaluation coefficient, and analyzing to obtain reference screening data of each target detection area;
step four, monitoring the change condition of personnel in each target detection area, and recording and feeding back abnormal change information;
fifthly, performing effect evaluation on the AI intelligent detection model, and performing comprehensive effect evaluation on the model;
the specific analysis process for identifying each target detection area comprises the following steps:
scanning the objects in each target detection area with set times, generating each object detection frame and calculating the IOU of each object detection frame in each target detection area ij The value is calculated by the following formula:wherein->Area of the jth scan of the jth object detection frame expressed as the ith target detection area,/>The area of the j object detection frame, which is denoted as the i-th target detection area set, i is denoted as the number of each target detection area, i=1, 2,3,..n, n is denoted as the total number of target detection areas, j is denoted as the number of each object, j=1, 2,3,..m, m is denoted as the total number of objects, q is denoted as the number of each scan, q=1, 2,3,..k, k is denoted as the total number of scans.
2. The intelligent detection method for the safe production AI based on video analysis and computer vision as claimed in claim 1, wherein the method comprises the following steps: the actual number of on-duty staff in each target detection area is analyzed, and the specific analysis process is as follows: extracting a set confidence coefficient threshold value from a safety production detection library, comparing the confidence coefficient of each object detection frame in each target detection area with the confidence coefficient threshold value, and if the confidence coefficient of a certain object detection frame is higher than the confidence coefficient threshold value, marking the object detection frame as the target detection frame, and further counting the actual number N of on-duty staff in each target detection area i On duty 。
3. The intelligent detection method for the safe production AI based on video analysis and computer vision as claimed in claim 2, wherein the method comprises the following steps: the specific analysis process of calculating the safety detection degree evaluation coefficient is as follows:
acquiring the property of each target detection area, and matching the security influence factor zeta of the unit space volume of each target detection area according to the security influence factor zeta of the unit space volume of each target detection area i Scanning the space of each target detection area to obtain the space volume V of each target detection area i Comprehensively calculating the space volume safety influence degree index alpha of each target detection area i The calculation formula is as follows:wherein, psi is expressed as a set spatial volume correction factor, and e is expressed as a natural constant;
obtaining the required employee number N of each target detection area from a safe production detection library i Calculating the employee number influence degree index beta of each target detection area i The calculation formula is as follows:wherein delta N is expressed as the set number of allowable deviation people, phi is expressed as the set number influence degree correction factor;
comprehensively calculating a safety detection degree evaluation coefficient χ of each target detection area i The calculation formula is as follows:wherein xi 1 And xi 2 The space volume safety influence degree and the employee number influence degree are respectively expressed as the duty ratio weight of the set space volume safety influence degree and the employee number influence degree.
4. The intelligent detection method for the safe production AI based on video analysis and computer vision as claimed in claim 3, wherein the method comprises the following steps: the analysis obtains reference screening data of each target detection area, and the specific analysis process comprises the following steps: and matching the safety detection degree evaluation coefficient of each target detection area with reference screening data corresponding to each safety detection degree evaluation coefficient interval in the safety production detection library, wherein the reference screening data comprises extraction interval frame numbers and extraction frame rates, and further obtaining the extraction interval frame numbers and the extraction frame rates of each target detection area.
5. The intelligent detection method for the safe production AI based on video analysis and computer vision as claimed in claim 4, wherein the method comprises the following steps: the specific analysis process for monitoring the change condition of personnel in each target detection area comprises the following steps:
according to the set detection period, carrying out video frame extraction detection by using the extraction interval frame number and the frame extraction rate of each target detection area, further counting each video frame of each target detection area, taking the first extracted video frame of each target detection area as a key video frame, jointly marking the video frames extracted subsequently in each target detection area as each associated video frame, and analyzing to obtain the confidence value IOU of each associated video frame of each target detection frame in each target detection area ipr And extracting confidence values of key video frames of each target detection frame in each target detection areaComprehensively calculating each time of closing of each target detection areaConfidence value abnormality variation degree index delta corresponding to linked video frame ir The calculation formula is as follows: />Wherein Δiou is expressed as a set allowable deviation confidence coefficient, p is expressed as the number of each target detection frame, p=1, 2,3,..f, p is expressed as the total number of target detection frames, r is expressed as the number of each associated video frame, r=1, 2,3,..h, h is expressed as the total number of associated video frames;
temperature screening points are distributed on each target detection area, time points are obtained according to time node division of each associated video frame, and the temperature Q of each temperature screening point in each time point in each target detection area is obtained through monitoring ird And acquiring the temperature Q of each temperature monitoring point of each target detection area at the initial time point i Initial initiation Comprehensively calculating the temperature abnormality variation degree index epsilon corresponding to each associated video frame of each target detection area ir The calculation formula is as follows:where Δq is expressed as the set allowable deviation temperature, d is expressed as the number of each temperature screening point, d=1, 2,3,..s, s is expressed as the total number of temperature screening points;
analyzing and obtaining the number N of on-duty staff in each associated video frame of each target detection area ir And extract the height accumulated value H of the on-duty staff ir Simultaneously extracting initial height accumulated value H of each target detection area in key video frame i Initial initiation On duty employee count N i Initial initiation Comprehensively calculating employee state abnormal variation degree index phi corresponding to each associated video frame of each target detection area ir The calculation formula is as follows:wherein omega 1 And omega 2 The unit height variation influence factor and the unit number variation influence factor are respectively set;
scanning each target detection area to establish a three-dimensional model, taking the central point of each target detection area as a datum point, and counting the distance L between each on-duty employee and the datum point in the key video frame of each target detection area iz Distance L between each on Shift employee and reference point in each associated video frame irz Comprehensively calculating employee position abnormal variation degree indexes corresponding to each time of associated video frames of each target detection areaThe calculation formula is as follows: />Where Δl is the set allowable deviation distance, z is the number of each on Shift employee, z=1, 2,3,..u, u is the total number of on Shift employees;
comprehensively calculating abnormal variation evaluation coefficients gamma corresponding to each time of associated video frames of each target detection area ir 。
6. The intelligent detection method for the safe production AI based on video analysis and computer vision as claimed in claim 5, wherein the method comprises the following steps: the abnormal variation evaluation coefficients corresponding to each associated video frame of each target detection area are comprehensively calculated, and the calculation formula is as follows:wherein v is 1 、υ 2 、υ 3 And v 4 The confidence value abnormal change degree, the temperature abnormal change index, the staff state abnormal change degree and the staff position abnormal change degree are respectively expressed as the duty ratio weights.
7. The intelligent detection method for the safe production AI based on video analysis and computer vision as claimed in claim 6, wherein the method comprises the following steps: the recording feedback is carried out on the abnormal change information, and the specific analysis process is as follows:
and acquiring an abnormal variation evaluation coefficient threshold value from the safety production detection library, comparing the abnormal variation evaluation coefficient of each target detection area with the abnormal variation evaluation coefficient threshold value, and if the abnormal variation evaluation coefficient of a certain target detection area is higher than the abnormal variation evaluation coefficient threshold value, recording and displaying the target detection area.
8. The intelligent detection method for the safe production AI based on video analysis and computer vision as claimed in claim 7, wherein: the method for evaluating the effect of the AI intelligent detection model specifically comprises the following steps: precision, recall, F1 score, and mAP;
accuracy Ac, the calculation formula of which is:wherein N is Total (S) Represented as the total number of samples detected off duty with the model, N Correct and correct The number of samples that are correctly detected for the model is represented;
the accuracy Pr is calculated by the following formula:where TP is the number of times that off-Shift employees are detected as off-Shift, and FP is the number of times that off-Shift employees are detected as off-Shift;
the recall rate Re has the following calculation formula:where FN is expressed as the number of times an off-Shift employee is detected as not being off-Shift;
the F1 fraction has the following calculation formula:
the specific analysis process of mAP is as follows: and constructing a curve by taking the accuracy rate as an abscissa and the recall rate as an ordinate, calculating the area AP under the curve by utilizing integral, and calculating the average value of the APs under a plurality of categories to obtain mAP.
9. The intelligent detection method for the safe production AI based on video analysis and computer vision as claimed in claim 8, wherein the method comprises the following steps: the model is subjected to comprehensive effect evaluation, and the specific analysis process comprises the following steps:
according to the accuracy, the precision, the recall, the F1 fraction and the mAP, calculating an actual effect evaluation index eta of the AI intelligent detection model, wherein the calculation formula is as follows:wherein Ac 0 、Pr 0 、Re 0 、F1 0 And mAP 0 Expressed as the set precision, recall, F1 score, and mAP reference values, τ, respectively 1 、τ 2 、τ 3 、τ 4 And τ 5 Respectively representing the set precision, accuracy, recall rate, F1 fraction and mAP corresponding influence factors;
and acquiring an actual effect evaluation index threshold from the safety production detection library, comparing the actual effect evaluation index of the AI intelligent detection model with the actual effect evaluation index threshold, and if the actual effect evaluation index of the AI intelligent detection model is lower than the actual effect evaluation index threshold, carrying out feedback prompt and improving and optimizing the AI intelligent detection model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311515033.5A CN117253176B (en) | 2023-11-15 | 2023-11-15 | Safe production Al intelligent detection method based on video analysis and computer vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311515033.5A CN117253176B (en) | 2023-11-15 | 2023-11-15 | Safe production Al intelligent detection method based on video analysis and computer vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117253176A CN117253176A (en) | 2023-12-19 |
CN117253176B true CN117253176B (en) | 2024-01-26 |
Family
ID=89137187
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311515033.5A Active CN117253176B (en) | 2023-11-15 | 2023-11-15 | Safe production Al intelligent detection method based on video analysis and computer vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117253176B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110580455A (en) * | 2019-08-21 | 2019-12-17 | 广州洪森科技有限公司 | image recognition-based illegal off-duty detection method and device for personnel |
WO2022022368A1 (en) * | 2020-07-28 | 2022-02-03 | 宁波环视信息科技有限公司 | Deep-learning-based apparatus and method for monitoring behavioral norms in jail |
CN115019236A (en) * | 2022-06-27 | 2022-09-06 | 禾麦科技开发(深圳)有限公司 | Mobile phone playing and off-duty detection alarm system and method based on deep learning |
-
2023
- 2023-11-15 CN CN202311515033.5A patent/CN117253176B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110580455A (en) * | 2019-08-21 | 2019-12-17 | 广州洪森科技有限公司 | image recognition-based illegal off-duty detection method and device for personnel |
WO2022022368A1 (en) * | 2020-07-28 | 2022-02-03 | 宁波环视信息科技有限公司 | Deep-learning-based apparatus and method for monitoring behavioral norms in jail |
CN115019236A (en) * | 2022-06-27 | 2022-09-06 | 禾麦科技开发(深圳)有限公司 | Mobile phone playing and off-duty detection alarm system and method based on deep learning |
Also Published As
Publication number | Publication date |
---|---|
CN117253176A (en) | 2023-12-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111739250B (en) | Fire detection method and system combining image processing technology and infrared sensor | |
CN111191576B (en) | Personnel behavior target detection model construction method, intelligent analysis method and system | |
CN108319926A (en) | A kind of the safety cap wearing detecting system and detection method of building-site | |
CN108711148B (en) | Tire defect intelligent detection method based on deep learning | |
CN109711322A (en) | A kind of people's vehicle separation method based on RFCN | |
CN113192038B (en) | Method for recognizing and monitoring abnormal smoke and fire in existing flame environment based on deep learning | |
US11521120B2 (en) | Inspection apparatus and machine learning method | |
CN112163572A (en) | Method and device for identifying object | |
CN115660262B (en) | Engineering intelligent quality inspection method, system and medium based on database application | |
CN117035419B (en) | Intelligent management system and method for enterprise project implementation | |
CN116664846B (en) | Method and system for realizing 3D printing bridge deck construction quality monitoring based on semantic segmentation | |
CN111815576B (en) | Method, device, equipment and storage medium for detecting corrosion condition of metal part | |
CN113343779A (en) | Environment anomaly detection method and device, computer equipment and storage medium | |
CN111178198B (en) | Automatic monitoring method for potential safety hazards of laboratory dangerous goods based on machine vision | |
CN111259736B (en) | Real-time pedestrian detection method based on deep learning in complex environment | |
CN115311601A (en) | Fire detection analysis method based on video analysis technology | |
CN117291430B (en) | Safety production detection method and device based on machine vision | |
CN117253176B (en) | Safe production Al intelligent detection method based on video analysis and computer vision | |
CN113160012A (en) | Computer online examination invigilation method based on deep learning | |
CN116579609B (en) | Illegal operation analysis method based on inspection process | |
CN117354495B (en) | Video monitoring quality diagnosis method and system based on deep learning | |
CN117499621B (en) | Detection method, device, equipment and medium of video acquisition equipment | |
Fan | Evaluation of Machine Learning Methods for Image Classification: A Case Study of Facility Surface Damage | |
CN118261963A (en) | Image-based storage yard residual capacity detection method, device, equipment, medium and system | |
CN116563787A (en) | Method for detecting glove wearing condition of machine tool operator |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |