CN117253176B - Safe production Al intelligent detection method based on video analysis and computer vision - Google Patents

Safe production Al intelligent detection method based on video analysis and computer vision Download PDF

Info

Publication number
CN117253176B
CN117253176B CN202311515033.5A CN202311515033A CN117253176B CN 117253176 B CN117253176 B CN 117253176B CN 202311515033 A CN202311515033 A CN 202311515033A CN 117253176 B CN117253176 B CN 117253176B
Authority
CN
China
Prior art keywords
target detection
detection area
frame
area
expressed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311515033.5A
Other languages
Chinese (zh)
Other versions
CN117253176A (en
Inventor
于洋
李鑫
袁梦晨
生亚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Hainei Software Technology Co ltd
Original Assignee
Jiangsu Hainei Software Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Hainei Software Technology Co ltd filed Critical Jiangsu Hainei Software Technology Co ltd
Priority to CN202311515033.5A priority Critical patent/CN117253176B/en
Publication of CN117253176A publication Critical patent/CN117253176A/en
Application granted granted Critical
Publication of CN117253176B publication Critical patent/CN117253176B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0631Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06311Scheduling, planning or task assignment for a person or group
    • G06Q10/063114Status monitoring or status determination for a person or group
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0635Risk analysis of enterprise or organisation activities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services
    • G06Q50/265Personal security, identity or safety
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Abstract

The invention relates to the technical field of safety production detection, and particularly discloses a safety production Al intelligent detection method based on video analysis and computer vision, which comprises the following steps: the invention detects whether the safety production personnel are on duty or not through analyzing videos in the monitoring area in real time, and delimits a special area to detect whether related personnel are in a specific area or not in the chemical production process so as to prevent off duty behaviors, thereby not only being capable of automatically identifying whether the personnel are on duty or off duty, helping a manager to monitor the working condition of the staff in the working time in real time, but also being capable of effectively improving the safety and efficiency in the chemical production process through an intelligent monitoring and alarm mechanism, and enabling the manager to more comprehensively control the chemical production state.

Description

Safe production Al intelligent detection method based on video analysis and computer vision
Technical Field
The invention relates to the technical field of safety production detection, in particular to an intelligent detection method for safety production Al based on video analysis and computer vision.
Background
The chemical production environment safety monitoring means monitoring and detecting various security threats and loopholes in the chemical production environment so as to ensure the safety of the chemical production environment, the safety awareness and adherence to safety specifications of staff are important for creating and maintaining the safety chemical production environment, if the staff lacks the safety awareness, the safety regulations can be ignored, potential safety risks and loopholes are caused, meanwhile, the autonomy of the staff is higher, and the staff is difficult to effectively manage in the chemical production environment with larger body volume, so that an improved safety detection method of the chemical production environment is required, and the behavior state and off-duty condition of the staff are monitored and managed.
Today, there are also some disadvantages to the safety production test, in particular in the following several aspects: (1) The traditional video monitoring means generally adopts manual monitoring or determines the behavior and off-duty condition of staff through the image fluctuation degree, the passive monitoring means needs a monitor to manually watch the picture of a camera, and a great deal of time and effort are required to watch a great deal of monitoring pictures, so that the mode has certain delay, the monitor can not timely find the off-duty condition of the staff, and important details are easy to ignore, so that potential safety hazards are buried in production work.
(2) The current video monitoring method often generates a large amount of monitoring data, huge data flow can greatly improve the difficulty of information extraction, a large amount of time and resources are consumed for processing, storing and analyzing the data, key information can be omitted, staff abnormal behaviors cannot be accurately positioned, meanwhile, the monitoring data contains a large amount of redundant information, and resource waste can be caused in the information processing process.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a safe production Al intelligent detection method based on video analysis and computer vision, which can effectively solve the problems related to the background art.
In order to achieve the above purpose, the invention is realized by the following technical scheme: the invention provides an xx method, which comprises the following steps: dividing a safe production area and each appointed area to be detected in a background service management system, and jointly marking the safe production area and each appointed area to be detected as each target detection area.
And secondly, identifying each target detection area and analyzing the actual number of on-duty staff in each target detection area.
And thirdly, calculating a safety detection degree evaluation coefficient, and analyzing to obtain reference screening data of each target detection area.
And step four, monitoring the change condition of personnel in each target detection area, and recording and feeding back abnormal change information.
And fifthly, performing effect evaluation on the AI intelligent detection model, and performing comprehensive effect evaluation on the model.
As a further method, the identifying each target detection area includes the following specific analysis processes: scanning the objects in each target detection area for a set number of times, generating each object detection frame, and calculating each object detection frame in each target detection areaThe value is calculated by the following formula: />Wherein->Denoted as +.>The first target detection area->Person object detection frame->Area of sub-scan, +.>Expressed as the set->The first target detection area->Area of individual object detection frame->Number expressed as each target detection area, +.>,/>Expressed as total number of target detection areas, +.>Expressed as the number of each object->,/>Expressed as the total number of objects>Number expressed as each scan, +.>,/>Expressed as the total number of scans.
As a further method, the actual number of on-duty staff in each target detection area is analyzed, and the specific analysis process is as follows: extracting a set confidence coefficient threshold value from a safety production detection library, comparing the confidence coefficient of each object detection frame in each target detection area with the confidence coefficient threshold value, and if the confidence coefficient of a certain object detection frame is higher than the confidence coefficient threshold value, marking the object detection frame as the target detection frame, thereby counting the actual number of on-duty staff in each target detection area
As a further method, the safety detection degree evaluation coefficient is calculated, and the specific analysis process is as follows: acquiring the property of each target detection area, and matching the safety influence factor of the unit space volume of each target detection area according to the set safety influence factor of the unit space volume of each property detection areaAnd for each target detection areaSpatially scanning to obtain spatial volume of each target detection region>Comprehensively calculating the space volume safety influence degree index of each target detection area>The calculation formula is as follows: />Wherein->Expressed as a set spatial volume correction factor, +.>Expressed as a natural constant.
Obtaining required staff number of each target detection area from safe production detection libraryCalculating employee number influence degree index of each target detection area>The calculation formula is as follows: />Wherein->Expressed as the number of allowed deviations set, +.>Indicated as a set person number influence degree correction factor.
Comprehensive calculation of safety detection degree evaluation coefficient of each target detection areaThe calculation formula is as follows:wherein->And->The space volume safety influence degree and the employee number influence degree are respectively expressed as the duty ratio weight of the set space volume safety influence degree and the employee number influence degree.
As a further method, the analysis obtains reference screening data of each target detection area, and the specific analysis process is as follows: and matching the safety detection degree evaluation coefficient of each target detection area with reference screening data corresponding to each safety detection degree evaluation coefficient interval in the safety production detection library, wherein the reference screening data comprises extraction interval frame numbers and extraction frame rates, and further obtaining the extraction interval frame numbers and the extraction frame rates of each target detection area.
As a further method, the monitoring of the variation of the personnel in each target detection area comprises the following specific analysis processes: according to the set detection period, carrying out video frame extraction detection by using the extraction interval frame number and the frame extraction rate of each target detection area, further counting each video frame of each target detection area, taking the first extracted video frame of each target detection area as a key video frame, jointly marking the video frames extracted subsequently in each target detection area as each associated video frame, and analyzing to obtain the confidence value of each associated video frame of each target detection frame in each target detection areaAnd extracting confidence value +.>Comprehensively calculating confidence value abnormal variation degree index corresponding to each associated video frame of each target detection area>The calculation formula is as follows: />Wherein->Expressed as a set allowable deviation confidence coefficient,/->Represented as the number of each target detection box,,/>expressed as total number of target detection frames, +.>Represented as the number of each associated video frame,,/>represented as the total number of associated video frames.
Temperature screening points are distributed on each target detection area, time points are obtained according to time node division of each associated video frame, and the temperature of each temperature screening point in each time point in each target detection area is obtained through monitoringAnd acquiring the temperature +.A. of each temperature monitoring point of each target detection area at the initial time point>Comprehensively calculating temperature abnormality variation degree index corresponding to each associated video frame of each target detection area>The calculation formula is as follows: />Wherein->Indicated as set allowable deviation temperature, +.>Number expressed as each temperature screening point, +.>,/>Expressed as the total number of temperature screening points.
Analyzing and obtaining the number of on-duty staff in each associated video frame of each target detection areaAnd extracts the height accumulated value +.>Simultaneously extracting initial height accumulated value ++of each target detection area in key video frame>On duty employee count>Comprehensively calculating employee state abnormal variation degree index corresponding to each associated video frame of each target detection area>The calculation formula is as follows: />WhereinAnd->The set unit height fluctuation influence factor and the set unit number fluctuation influence factor are shown respectively.
Scanning each target detection area to establish a three-dimensional model, and taking the central point of each target detection area as a datum point to count the distance between each on-duty employee and the datum point in the key video frame of each target detection areaAnd distance of each on Shift employee from the reference point in each associated video frame +.>Comprehensively calculating employee position abnormal variation degree index corresponding to each associated video frame of each target detection area>The calculation formula is as follows: />WhereinExpressed as a set allowable deviation distance, +.>Number indicated as employee on duty, < >>,/>Expressed as the total number of on Shift employees.
Comprehensively calculating abnormal variation evaluation coefficients corresponding to each time of associated video frames of each target detection area
As a further method, the abnormal variation evaluation coefficients corresponding to each associated video frame in each target detection area are comprehensively calculated, and the calculation formula is as follows:wherein->、/>、/>Andthe confidence value abnormal change degree, the temperature abnormal change index, the staff state abnormal change degree and the staff position abnormal change degree are respectively expressed as the duty ratio weights.
As a further method, the recording feedback is performed on the abnormal variation information, and the specific analysis process is as follows: and acquiring an abnormal variation evaluation coefficient threshold value from the safety production detection library, comparing the abnormal variation evaluation coefficient of each target detection area with the abnormal variation evaluation coefficient threshold value, and if the abnormal variation evaluation coefficient of a certain target detection area is higher than the abnormal variation evaluation coefficient threshold value, recording and displaying the target detection area.
As a further method, the method for evaluating the effect of the AI intelligent detection model specifically comprises the following steps: precision, precision rate, recall rate,Score sum->
Accuracy ofThe calculation formula is as follows: />Wherein->Expressed as total number of samples tested off duty with the model,/->The number of samples that are correctly detected for the model is indicated.
Accuracy rate ofThe calculation formula is as follows: />Wherein->Expressed as the number of times the off-duty employee is detected as off-duty, < >>Represented as the number of times an off-Shift employee is detected as off-Shift.
Recall rate of recallThe calculation formula is as follows: />Wherein->Represented as the number of times an off-Shift employee is detected as not being off-Shift.
The score is calculated by the following formula: />
The specific analysis process of (2) is as follows: constructing a curve by taking the accuracy rate as the abscissa and the recall rate as the ordinate, and calculating the area +.>Calculating the +.>Average value of +.>
As a further method, the model is subjected to comprehensive effect evaluation, and the specific analysis process comprises the following steps: according to the accuracy, precision rate, recall rate,Score sum->Calculating an actual effect evaluation index of the AI smart detection model +.>The calculation formula is as follows: />Wherein->、/>、/>And->Expressed as accuracy, precision, recall, and +.>Score sum->Reference value->、/>、/>、/>And->Expressed as accuracy, precision, recall, and +.>Score sum->Corresponding influencing factors.
And acquiring an actual effect evaluation index threshold from the safety production detection library, comparing the actual effect evaluation index of the AI intelligent detection model with the actual effect evaluation index threshold, and if the actual effect evaluation index of the AI intelligent detection model is lower than the actual effect evaluation index threshold, carrying out feedback prompt and improving and optimizing the AI intelligent detection model.
Compared with the prior art, the embodiment of the invention has at least the following advantages or beneficial effects: (1) The invention provides the intelligent detection method for the safe production Al based on video analysis and computer vision, which is used for defining the special area to detect the change condition of related staff in the chemical production process, automatically identifying whether the staff is on duty or off duty, helping the manager to monitor the working condition of the staff in real time during the working time, and effectively improving the safety and the efficiency in the chemical production process through an intelligent monitoring and alarming mechanism so that the manager can more comprehensively control the state of the safe production.
(2) According to the invention, through a frame extraction technology, proper frame extraction interval frames and frame extraction rate are selected according to the safety monitoring degree of the chemical production environment, so that the system can selectively process images or video frames, and the generation of redundant data is reduced, thereby saving the use of computing resources and memory, and helping balance the resource consumption and monitoring effect in a real-time target detection task.
(3) According to the invention, the detection frame is intelligently screened by combining the video with the monitoring mode of the sensor and by utilizing the confidence, the defect that the traditional detection algorithm is easy to be blocked or blurred in a complex scene is overcome, the scene can be more comprehensively understood by the algorithm through a multi-scale feature fusion strategy, the position and the state of staff are effectively captured, and the accuracy and the reliability of the algorithm are improved.
(4) According to the intelligent monitoring feedback method, after the off-duty behavior of the staff is detected, the detection result is uploaded to the background management server, the behavior is recorded and displayed in real time, meanwhile, an administrator is notified through an alarm, necessary measures can be timely taken, intelligent off-duty monitoring of the staff in a chemical production area is achieved, the efficiency and the accuracy of safe production are improved, accidents are prevented, and the safety and the health of production staff and park facilities are protected to the greatest extent.
Drawings
The invention will be further described with reference to the accompanying drawings, in which embodiments do not constitute any limitation of the invention, and other drawings can be obtained by one of ordinary skill in the art without inventive effort from the following drawings.
FIG. 1 is a schematic flow chart of the method of the present invention.
Detailed Description
The technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, but not all embodiments, and all other embodiments obtained by those skilled in the art without making creative efforts based on the embodiments of the present invention are included in the protection scope of the present invention.
Referring to fig. 1, the invention provides a safe production Al intelligent detection method based on video analysis and computer vision, comprising the following steps: dividing a safe production area and each appointed area to be detected in a background service management system, and jointly marking the safe production area and each appointed area to be detected as each target detection area.
Step two, identifying each target detection area and analyzing the actual number of on-duty staff in each target detection area
Specifically, the identifying each target detection area includes the following specific analysis processes: scanning the objects in each target detection area for a set number of times, generating each object detection frame, and calculating each object detection frame in each target detection areaThe value is calculated by the following formula: />Wherein->Denoted as +.>The first target detection area->Person object detection frame->Area of sub-scan, +.>Expressed as the set->The first target detection area->Area of individual object detection frame->Number expressed as each target detection area, +.>,/>Expressed as total number of target detection areas, +.>Expressed as the number of each object->,/>Expressed as the total number of objects>Number expressed as each scan, +.>,/>Expressed as the total number of scans.
Further, the analyzing the actual number of on-duty staff in each target detection area comprises the following specific analysis processes: extracting a set confidence coefficient threshold value from a safety production detection library, comparing the confidence coefficient of each object detection frame in each target detection area with the confidence coefficient threshold value, and if the confidence coefficient of a certain object detection frame is higher than the confidence coefficient threshold value, marking the object detection frame as the target detection frame, thereby counting the actual number of on-duty staff in each target detection area
It should be explained that, the IOU is the overlapping degree value of the prediction frame and the real frame in the target detection, and is used for measuring the quality of model classification, the confidence level setting plays a role in screening in the detection results output by the algorithm, and only the detection frame with the confidence level higher than the threshold value can be regarded as an effective detection result, so that the less reliable detection frame with lower confidence level is filtered, and the accuracy and the reliability of the algorithm are improved.
And thirdly, calculating a safety detection degree evaluation coefficient, and analyzing to obtain reference screening data of each target detection area.
Specifically, the safety detection degree evaluation coefficient is calculated, and the specific analysis process is as follows: acquiring the property of each target detection area, and matching the safety influence factor of the unit space volume of each target detection area according to the set safety influence factor of the unit space volume of each property detection areaScanning the space of each target detection area to obtain the space volume of each target detection area>Comprehensively calculating the space volume safety influence degree index of each target detection area>The calculation formula is as follows: />Wherein->Expressed as a set spatial volume correction factor, +.>Expressed as a natural constant.
It should be construed that the properties of the detection area include, but are not limited to, production lines, detection laboratories, material storage, etc.
Obtaining required staff number of each target detection area from safe production detection libraryCalculating employee number influence degree index of each target detection area>The calculation formula is as follows: />Wherein->Expressed as the number of allowed deviations set, +.>Indicated as a set person number influence degree correction factor.
Comprehensive calculation of safety detection degree evaluation coefficient of each target detection areaThe calculation formula is as follows:wherein->And->The space volume safety influence degree and the employee number influence degree are respectively expressed as the duty ratio weight of the set space volume safety influence degree and the employee number influence degree.
Further, the analysis obtains reference screening data of each target detection area, and the specific analysis process comprises the following steps: and matching the safety detection degree evaluation coefficient of each target detection area with reference screening data corresponding to each safety detection degree evaluation coefficient interval in the safety production detection library, wherein the reference screening data comprises extraction interval frame numbers and extraction frame rates, and further obtaining the extraction interval frame numbers and the extraction frame rates of each target detection area.
And step four, monitoring the change condition of personnel in each target detection area, and recording and feeding back abnormal change information.
Specifically, the monitoring of the change condition of personnel in each target detection area includes the following specific analysis processes: according to the set detection period, and with the number of extraction interval frames and extraction frames of each target detection areaThe video extraction frame detection is carried out according to the rate, and then each video frame of each target detection area is counted, the first extracted video frame of each target detection area is used as a key video frame, the video frames extracted subsequently in each target detection area are jointly marked as each associated video frame, and the confidence value of each associated video frame of each target detection frame in each target detection area is obtained through analysisAnd extracting confidence value +.>Comprehensively calculating confidence value abnormal variation degree index corresponding to each associated video frame of each target detection area>The calculation formula is as follows: />Wherein->Expressed as a set allowable deviation confidence coefficient,/->Number expressed as each target detection frame, +.>,/>Expressed as total number of target detection frames, +.>Denoted as the number of each associated video frame, +.>,/>Represented as the total number of associated video frames.
In a specific embodiment, through a frame extraction technology, a proper frame extraction interval frame number and frame extraction rate are selected according to the safety monitoring degree of a chemical production environment, so that a system can selectively process images or video frames, and the generation of redundant data is reduced, thereby saving the use of computing resources and memory, and helping balance the resource consumption and monitoring effect in a real-time target detection task.
Temperature screening points are distributed on each target detection area, time points are obtained according to time node division of each associated video frame, and the temperature of each temperature screening point in each time point in each target detection area is obtained through monitoringAnd acquiring the temperature +.A. of each temperature monitoring point of each target detection area at the initial time point>Comprehensively calculating temperature abnormality variation degree index corresponding to each associated video frame of each target detection area>The calculation formula is as follows:wherein->Indicated as set allowable deviation temperature, +.>Number expressed as each temperature screening point, +.>,/>Expressed as the total number of temperature screening points.
It should be explained that the initial time point is a time node of the key video frame.
In a specific embodiment, the detection frame is intelligently screened by combining the video with the monitoring mode of the sensor and utilizing the confidence coefficient, so that the defect that a traditional detection algorithm is easy to be blocked or blurred in a complex scene is overcome, the scene can be more comprehensively understood by the algorithm through a multi-scale feature fusion strategy, the position and the state of staff are effectively captured, and the accuracy and the reliability of the algorithm are improved.
Analyzing and obtaining the number of on-duty staff in each associated video frame of each target detection areaAnd extracts the height accumulated value +.>Simultaneously extracting initial height accumulated value ++of each target detection area in key video frame>On duty employee count>Comprehensively calculating employee state abnormal variation degree index corresponding to each associated video frame of each target detection area>The calculation formula is as follows: />Wherein->Andthe set unit height fluctuation influence factor and the set unit number fluctuation influence factor are shown respectively.
Scanning each target detection area to establish a three-dimensional model, and counting the closing of each target detection area by taking the central point of each target detection area as a datum pointDistance between each on Shift employee and reference point in key video frameAnd distance of each on Shift employee from the reference point in each associated video frame +.>Comprehensively calculating employee position abnormal variation degree index corresponding to each associated video frame of each target detection area>The calculation formula is as follows: />Wherein->Expressed as a set allowable deviation distance, +.>Number indicated as employee on duty, < >>,/>Expressed as the total number of on Shift employees.
It should be explained that, in the above-mentioned processing of video analysis, the feature extraction is performed by extracting pictures at intervals of x frames, and the main steps of feature extraction are that the images are adjusted to 640x640 format, and ten steps are performed after pictures are read in.
The first step transforms the number of channels of the input feature map from 3 (RGB three channels) to 64, and convolves the input feature map with a 3x3 convolution kernel, step size of 2, which reduces the size of the input feature map to half, the convolution layer is used to extract features from the input 640x640x3 image, and the output feature map size is 320x320x64.
The second step is to use a second convolution layer, the number of channels of the input feature map is 64, the number of channels increases to 128 after passing through the convolution layer, the convolution operation is performed by using a convolution kernel of 3x3, the step size is 2, which again reduces the feature map size to half of the original size, the layer is used for further feature extraction, and the output feature map size is 160x160x128.
The third step is to use a custom C2f layer to convolve the input feature map 3 times, each time using 128 3x3 convolution kernels, and connect the results to the input after each convolution, which does not change the feature map size, but helps to further extract the features.
The fourth step is to use a convolution layer, the number of channels of the input feature map is 128, the number of channels increases to 256 after passing through the convolution layer, the convolution operation is performed by using a convolution kernel of 3x3, the step length is 2, the feature map size is reduced to half of the original size again, the layer is used for continuously extracting features, and the output feature map size is 80x80x256.
The fifth step is to continue using the custom C2f layer, similar to before, but here the 6 convolutions are performed and the result is connected to the input after each convolution, this layer continues to extract features, the output feature map size is 80x80x256.
The sixth step is to use a convolution layer, the number of channels of the input feature map is 256, the number of channels increases to 512 after passing through the convolution layer, the convolution operation is performed by using a convolution kernel of 3x3, the step length is 2, this again reduces the feature map size to half of the original size, the layer is used for continuously extracting features, and the output feature map size is 40x40x512.
The seventh step is to use a custom C2f layer, similar to the previous custom C2f layer, to perform 6 convolution operations and connect the result with the input after each convolution, this layer continues to extract features, the output feature map size is 40x40x512.
The eighth step uses a convolution layer, the number of channels of the input feature map is 512, the number of channels increases to 1024 after passing through the convolution layer, the convolution operation is performed by using a convolution kernel of 3x3, the step length is 2, this reduces the feature map size to half of the original size again, the layer is used for continuously extracting features, and the output feature map size is 20x20x1024.
The ninth step uses the last custom C2f layer, similar to the previous custom C2f layer, performs 3 convolutions and connects the result with the input after each convolution, this layer continues to extract features, the output feature map size is 20x20x1024.
And a tenth step of operation uses an SPPF layer, the layer performs spatial pyramid pooling operation, the feature map is divided into subareas with different scales, and each subarea is subjected to maximum pooling operation, and pooling results with different scales are spliced together.
The characteristic information extraction and the multi-scale characteristic fusion of the image are completed through the steps, and information collection is performed for realizing the classification and detection of the identification targets.
Comprehensively calculating abnormal variation evaluation coefficients corresponding to each time of associated video frames of each target detection area
Further, the calculating formula of the abnormal variation evaluation coefficient corresponding to each associated video frame in each target detection area is as follows:wherein->、/>、/>And->The confidence value abnormal change degree, the temperature abnormal change index, the staff state abnormal change degree and the staff position abnormal change degree are respectively expressed as the duty ratio weights.
Specifically, the recording feedback is performed on the abnormal change information, and the specific analysis process is as follows: and acquiring an abnormal variation evaluation coefficient threshold value from the safety production detection library, comparing the abnormal variation evaluation coefficient of each target detection area with the abnormal variation evaluation coefficient threshold value, and if the abnormal variation evaluation coefficient of a certain target detection area is higher than the abnormal variation evaluation coefficient threshold value, recording and displaying the target detection area.
In a specific embodiment, through intelligent monitoring feedback of machine learning, when the existence of off-duty behaviors of staff is detected, the detection result is uploaded to a background management server, the behaviors are recorded and displayed in real time, and meanwhile, an administrator is notified through an alarm, so that necessary measures can be timely taken, intelligent off-duty monitoring of staff in a chemical production area is realized, the efficiency and the accuracy of safe production are improved, accidents are prevented, and the safety and the health of production staff and park facilities are protected to the greatest extent.
It should be explained that the abnormal change information processing procedure is as follows: the method comprises the steps of setting the number of people in a safe production area in a background service system, monitoring whether irrelevant personnel intervene in a chemical production process in a counting mode, ensuring that only relevant staff enter a specific area to engage in production activities, uploading detection results to a background management server when the system detects that staff leave the post, recording the actions and displaying the actions in real time, informing an administrator through an alarm, enabling the administrator to take necessary measures in time, enabling timely feedback of abnormal information to achieve intelligent leave post monitoring on staff in the chemical production area, improving efficiency and accuracy of safe production, preventing accidents, and protecting safety and health of production staff and park facilities to the greatest extent.
And fifthly, performing effect evaluation on the AI intelligent detection model, and performing comprehensive effect evaluation on the model.
Specifically, the effect evaluation on the AI intelligent detection model specifically includes: precision, precision rate, recall rate,Score sum->
Accuracy ofThe calculation formula is as follows: />Wherein->Expressed as total number of samples tested off duty with the model,/->The number of samples that are correctly detected for the model is indicated.
Accuracy rate ofThe calculation formula is as follows: />Wherein->Expressed as the number of times the off-duty employee is detected as off-duty, < >>Represented as the number of times an off-Shift employee is detected as off-Shift.
Recall rate of recallThe calculation formula is as follows: />Wherein->Represented as the number of times an off-Shift employee is detected as not being off-Shift.
The score is calculated by the following formula: />
The specific analysis process of (2) is as follows: constructing a curve by taking the accuracy rate as the abscissa and the recall rate as the ordinate, and calculating the area +.>Calculating the +.>Average value of +.>
Further, the model is subjected to comprehensive effect evaluation, and the specific analysis process comprises the following steps:
according to the accuracy, precision rate, recall rate,Score sum->Calculating an actual effect evaluation index of the AI smart detection model +.>The calculation formula is as follows: />Wherein->、/>、/>、/>And->Expressed as accuracy, precision, recall, and +.>Score and scoreCorresponding reference value,/->、/>、/>、/>And->Influence factors expressed as set precision, recall, score, and sum, respectively;
and acquiring an actual effect evaluation index threshold from the safety production detection library, comparing the actual effect evaluation index of the AI intelligent detection model with the actual effect evaluation index threshold, and if the actual effect evaluation index of the AI intelligent detection model is lower than the actual effect evaluation index threshold, carrying out feedback prompt and improving and optimizing the AI intelligent detection model.
The foregoing is merely illustrative of the structures of this invention and various modifications, additions and substitutions for those skilled in the art can be made to the described embodiments without departing from the scope of the invention or from the scope of the invention as defined in the accompanying claims.

Claims (9)

1. The intelligent detection method for the safe production AI based on video analysis and computer vision is characterized by comprising the following steps:
dividing a safe production area and each appointed area to be detected in a background service management system, and jointly marking the safe production area and each appointed area to be detected as each target detection area;
step two, identifying each target detection area, and analyzing the actual number of on-duty staff in each target detection area;
step three, calculating a safety detection degree evaluation coefficient, and analyzing to obtain reference screening data of each target detection area;
step four, monitoring the change condition of personnel in each target detection area, and recording and feeding back abnormal change information;
fifthly, performing effect evaluation on the AI intelligent detection model, and performing comprehensive effect evaluation on the model;
the specific analysis process for identifying each target detection area comprises the following steps:
scanning the objects in each target detection area with set times, generating each object detection frame and calculating the IOU of each object detection frame in each target detection area ij The value is calculated by the following formula:wherein->Area of the jth scan of the jth object detection frame expressed as the ith target detection area,/>The area of the j object detection frame, which is denoted as the i-th target detection area set, i is denoted as the number of each target detection area, i=1, 2,3,..n, n is denoted as the total number of target detection areas, j is denoted as the number of each object, j=1, 2,3,..m, m is denoted as the total number of objects, q is denoted as the number of each scan, q=1, 2,3,..k, k is denoted as the total number of scans.
2. The intelligent detection method for the safe production AI based on video analysis and computer vision as claimed in claim 1, wherein the method comprises the following steps: the actual number of on-duty staff in each target detection area is analyzed, and the specific analysis process is as follows: extracting a set confidence coefficient threshold value from a safety production detection library, comparing the confidence coefficient of each object detection frame in each target detection area with the confidence coefficient threshold value, and if the confidence coefficient of a certain object detection frame is higher than the confidence coefficient threshold value, marking the object detection frame as the target detection frame, and further counting the actual number N of on-duty staff in each target detection area i On duty
3. The intelligent detection method for the safe production AI based on video analysis and computer vision as claimed in claim 2, wherein the method comprises the following steps: the specific analysis process of calculating the safety detection degree evaluation coefficient is as follows:
acquiring the property of each target detection area, and matching the security influence factor zeta of the unit space volume of each target detection area according to the security influence factor zeta of the unit space volume of each target detection area i Scanning the space of each target detection area to obtain the space volume V of each target detection area i Comprehensively calculating the space volume safety influence degree index alpha of each target detection area i The calculation formula is as follows:wherein, psi is expressed as a set spatial volume correction factor, and e is expressed as a natural constant;
obtaining the required employee number N of each target detection area from a safe production detection library i Calculating the employee number influence degree index beta of each target detection area i The calculation formula is as follows:wherein delta N is expressed as the set number of allowable deviation people, phi is expressed as the set number influence degree correction factor;
comprehensively calculating a safety detection degree evaluation coefficient χ of each target detection area i The calculation formula is as follows:wherein xi 1 And xi 2 The space volume safety influence degree and the employee number influence degree are respectively expressed as the duty ratio weight of the set space volume safety influence degree and the employee number influence degree.
4. The intelligent detection method for the safe production AI based on video analysis and computer vision as claimed in claim 3, wherein the method comprises the following steps: the analysis obtains reference screening data of each target detection area, and the specific analysis process comprises the following steps: and matching the safety detection degree evaluation coefficient of each target detection area with reference screening data corresponding to each safety detection degree evaluation coefficient interval in the safety production detection library, wherein the reference screening data comprises extraction interval frame numbers and extraction frame rates, and further obtaining the extraction interval frame numbers and the extraction frame rates of each target detection area.
5. The intelligent detection method for the safe production AI based on video analysis and computer vision as claimed in claim 4, wherein the method comprises the following steps: the specific analysis process for monitoring the change condition of personnel in each target detection area comprises the following steps:
according to the set detection period, carrying out video frame extraction detection by using the extraction interval frame number and the frame extraction rate of each target detection area, further counting each video frame of each target detection area, taking the first extracted video frame of each target detection area as a key video frame, jointly marking the video frames extracted subsequently in each target detection area as each associated video frame, and analyzing to obtain the confidence value IOU of each associated video frame of each target detection frame in each target detection area ipr And extracting confidence values of key video frames of each target detection frame in each target detection areaComprehensively calculating each time of closing of each target detection areaConfidence value abnormality variation degree index delta corresponding to linked video frame ir The calculation formula is as follows: />Wherein Δiou is expressed as a set allowable deviation confidence coefficient, p is expressed as the number of each target detection frame, p=1, 2,3,..f, p is expressed as the total number of target detection frames, r is expressed as the number of each associated video frame, r=1, 2,3,..h, h is expressed as the total number of associated video frames;
temperature screening points are distributed on each target detection area, time points are obtained according to time node division of each associated video frame, and the temperature Q of each temperature screening point in each time point in each target detection area is obtained through monitoring ird And acquiring the temperature Q of each temperature monitoring point of each target detection area at the initial time point i Initial initiation Comprehensively calculating the temperature abnormality variation degree index epsilon corresponding to each associated video frame of each target detection area ir The calculation formula is as follows:where Δq is expressed as the set allowable deviation temperature, d is expressed as the number of each temperature screening point, d=1, 2,3,..s, s is expressed as the total number of temperature screening points;
analyzing and obtaining the number N of on-duty staff in each associated video frame of each target detection area ir And extract the height accumulated value H of the on-duty staff ir Simultaneously extracting initial height accumulated value H of each target detection area in key video frame i Initial initiation On duty employee count N i Initial initiation Comprehensively calculating employee state abnormal variation degree index phi corresponding to each associated video frame of each target detection area ir The calculation formula is as follows:wherein omega 1 And omega 2 The unit height variation influence factor and the unit number variation influence factor are respectively set;
scanning each target detection area to establish a three-dimensional model, taking the central point of each target detection area as a datum point, and counting the distance L between each on-duty employee and the datum point in the key video frame of each target detection area iz Distance L between each on Shift employee and reference point in each associated video frame irz Comprehensively calculating employee position abnormal variation degree indexes corresponding to each time of associated video frames of each target detection areaThe calculation formula is as follows: />Where Δl is the set allowable deviation distance, z is the number of each on Shift employee, z=1, 2,3,..u, u is the total number of on Shift employees;
comprehensively calculating abnormal variation evaluation coefficients gamma corresponding to each time of associated video frames of each target detection area ir
6. The intelligent detection method for the safe production AI based on video analysis and computer vision as claimed in claim 5, wherein the method comprises the following steps: the abnormal variation evaluation coefficients corresponding to each associated video frame of each target detection area are comprehensively calculated, and the calculation formula is as follows:wherein v is 1 、υ 2 、υ 3 And v 4 The confidence value abnormal change degree, the temperature abnormal change index, the staff state abnormal change degree and the staff position abnormal change degree are respectively expressed as the duty ratio weights.
7. The intelligent detection method for the safe production AI based on video analysis and computer vision as claimed in claim 6, wherein the method comprises the following steps: the recording feedback is carried out on the abnormal change information, and the specific analysis process is as follows:
and acquiring an abnormal variation evaluation coefficient threshold value from the safety production detection library, comparing the abnormal variation evaluation coefficient of each target detection area with the abnormal variation evaluation coefficient threshold value, and if the abnormal variation evaluation coefficient of a certain target detection area is higher than the abnormal variation evaluation coefficient threshold value, recording and displaying the target detection area.
8. The intelligent detection method for the safe production AI based on video analysis and computer vision as claimed in claim 7, wherein: the method for evaluating the effect of the AI intelligent detection model specifically comprises the following steps: precision, recall, F1 score, and mAP;
accuracy Ac, the calculation formula of which is:wherein N is Total (S) Represented as the total number of samples detected off duty with the model, N Correct and correct The number of samples that are correctly detected for the model is represented;
the accuracy Pr is calculated by the following formula:where TP is the number of times that off-Shift employees are detected as off-Shift, and FP is the number of times that off-Shift employees are detected as off-Shift;
the recall rate Re has the following calculation formula:where FN is expressed as the number of times an off-Shift employee is detected as not being off-Shift;
the F1 fraction has the following calculation formula:
the specific analysis process of mAP is as follows: and constructing a curve by taking the accuracy rate as an abscissa and the recall rate as an ordinate, calculating the area AP under the curve by utilizing integral, and calculating the average value of the APs under a plurality of categories to obtain mAP.
9. The intelligent detection method for the safe production AI based on video analysis and computer vision as claimed in claim 8, wherein the method comprises the following steps: the model is subjected to comprehensive effect evaluation, and the specific analysis process comprises the following steps:
according to the accuracy, the precision, the recall, the F1 fraction and the mAP, calculating an actual effect evaluation index eta of the AI intelligent detection model, wherein the calculation formula is as follows:wherein Ac 0 、Pr 0 、Re 0 、F1 0 And mAP 0 Expressed as the set precision, recall, F1 score, and mAP reference values, τ, respectively 1 、τ 2 、τ 3 、τ 4 And τ 5 Respectively representing the set precision, accuracy, recall rate, F1 fraction and mAP corresponding influence factors;
and acquiring an actual effect evaluation index threshold from the safety production detection library, comparing the actual effect evaluation index of the AI intelligent detection model with the actual effect evaluation index threshold, and if the actual effect evaluation index of the AI intelligent detection model is lower than the actual effect evaluation index threshold, carrying out feedback prompt and improving and optimizing the AI intelligent detection model.
CN202311515033.5A 2023-11-15 2023-11-15 Safe production Al intelligent detection method based on video analysis and computer vision Active CN117253176B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311515033.5A CN117253176B (en) 2023-11-15 2023-11-15 Safe production Al intelligent detection method based on video analysis and computer vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311515033.5A CN117253176B (en) 2023-11-15 2023-11-15 Safe production Al intelligent detection method based on video analysis and computer vision

Publications (2)

Publication Number Publication Date
CN117253176A CN117253176A (en) 2023-12-19
CN117253176B true CN117253176B (en) 2024-01-26

Family

ID=89137187

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311515033.5A Active CN117253176B (en) 2023-11-15 2023-11-15 Safe production Al intelligent detection method based on video analysis and computer vision

Country Status (1)

Country Link
CN (1) CN117253176B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110580455A (en) * 2019-08-21 2019-12-17 广州洪森科技有限公司 image recognition-based illegal off-duty detection method and device for personnel
WO2022022368A1 (en) * 2020-07-28 2022-02-03 宁波环视信息科技有限公司 Deep-learning-based apparatus and method for monitoring behavioral norms in jail
CN115019236A (en) * 2022-06-27 2022-09-06 禾麦科技开发(深圳)有限公司 Mobile phone playing and off-duty detection alarm system and method based on deep learning

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110580455A (en) * 2019-08-21 2019-12-17 广州洪森科技有限公司 image recognition-based illegal off-duty detection method and device for personnel
WO2022022368A1 (en) * 2020-07-28 2022-02-03 宁波环视信息科技有限公司 Deep-learning-based apparatus and method for monitoring behavioral norms in jail
CN115019236A (en) * 2022-06-27 2022-09-06 禾麦科技开发(深圳)有限公司 Mobile phone playing and off-duty detection alarm system and method based on deep learning

Also Published As

Publication number Publication date
CN117253176A (en) 2023-12-19

Similar Documents

Publication Publication Date Title
CN111739250B (en) Fire detection method and system combining image processing technology and infrared sensor
CN111191576B (en) Personnel behavior target detection model construction method, intelligent analysis method and system
CN108319926A (en) A kind of the safety cap wearing detecting system and detection method of building-site
CN110826514A (en) Construction site violation intelligent identification method based on deep learning
CN108711148B (en) Tire defect intelligent detection method based on deep learning
CN109670441A (en) A kind of realization safety cap wearing knows method for distinguishing, system, terminal and computer readable storage medium
CN110619620A (en) Method, device and system for positioning abnormity causing surface defects and electronic equipment
CN109711322A (en) A kind of people&#39;s vehicle separation method based on RFCN
CN113642474A (en) Hazardous area personnel monitoring method based on YOLOV5
CN115660262B (en) Engineering intelligent quality inspection method, system and medium based on database application
CN113343779B (en) Environment abnormality detection method, device, computer equipment and storage medium
CN111259736B (en) Real-time pedestrian detection method based on deep learning in complex environment
US11521120B2 (en) Inspection apparatus and machine learning method
CN115035328A (en) Converter image increment automatic machine learning system and establishment training method thereof
CN113657143B (en) Garbage classification method based on classification and detection combined judgment
CN113192038B (en) Method for recognizing and monitoring abnormal smoke and fire in existing flame environment based on deep learning
CN109001210B (en) System and method for detecting aging and cracking of sealing rubber strip of civil air defense door
CN117035419B (en) Intelligent management system and method for enterprise project implementation
CN116664846B (en) Method and system for realizing 3D printing bridge deck construction quality monitoring based on semantic segmentation
CN117253176B (en) Safe production Al intelligent detection method based on video analysis and computer vision
CN116110127A (en) Multi-linkage gas station cashing behavior recognition system
CN115311601A (en) Fire detection analysis method based on video analysis technology
CN117291430B (en) Safety production detection method and device based on machine vision
CN116579609B (en) Illegal operation analysis method based on inspection process
CN117354495B (en) Video monitoring quality diagnosis method and system based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant