CN113963436A - Helmet identification incremental learning and role judgment method based on deep learning - Google Patents

Helmet identification incremental learning and role judgment method based on deep learning

Info

Publication number
CN113963436A
CN113963436A (application CN202111196326.2A)
Authority
CN
China
Prior art keywords
module
learning
frame
role
safety
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111196326.2A
Other languages
Chinese (zh)
Inventor
郑艳伟
赵增瑞
贾舒涵
于东晓
梁会
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Meilun Engineering Technology Co ltd
Shandong University
Original Assignee
Shenzhen Meilun Engineering Technology Co ltd
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Meilun Engineering Technology Co ltd, Shandong University filed Critical Shenzhen Meilun Engineering Technology Co ltd
Priority to CN202111196326.2A priority Critical patent/CN113963436A/en
Publication of CN113963436A publication Critical patent/CN113963436A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep-learning-based method for safety helmet identification with incremental learning and role determination. Batch information is recorded each time training data are added to the data set; after new training data are added, the network is fine-tuned incrementally, and the learning rule is determined by batch, avoiding catastrophic forgetting. The method detects and identifies safety helmets of different colors, as well as people wearing no helmet, in video; while flagging the unsafe behavior of not wearing a helmet, it determines the role of each helmet wearer from the helmet's color. By combining object detection with the factory's safety mechanism, the invention completes a risk detection and early-warning system, saves substantial manpower and material resources, improves the factory's safety protection mechanism, perfects the factory management system, and safeguards personnel health and safety.

Description

Helmet identification incremental learning and role judgment method based on deep learning
Technical Field
The invention belongs to the technical field of computer vision and relates to a deep-learning-based incremental learning method for surveillance video, used to identify whether safety helmets are worn, distinguish helmet colors, determine personnel roles, and drive an alarm system.
Background
In many factories, wearing a safety helmet is a mandatory, non-negotiable regulation. A helmet effectively protects the head from sudden flying objects and electric shock, and keeps hair from being caught in machinery or exposed to dust. Yet some workers overlook its importance, which leads to safety accidents.
Having staff such as security guards manually check helmet wearing and identify unsafe behavior is unstable and subjective, and real-time monitoring without blind spots is hard to achieve. Moreover, different helmet colors denote different roles: in practice, different jobs must be carried out by different personnel, and the helmet color is relied on to judge a person's role correctly, to decide whether certain operations may be performed, and to keep persons without an authorized role from entering and operating.
At present, deep learning plays an ever greater role in object detection. Multi-level learning systems built from convolutional neural networks, deep belief networks and related models enable feature extraction and object detection with improved accuracy, reliability and sophistication; the advent of YOLO v5 in particular raised detection accuracy and precision. In applied settings, vision algorithms often fail when new classes appear (e.g. helmets of a new color) or entirely new scenes arise, and training a model from scratch is very costly. Incremental learning addresses this, and within incremental learning, catastrophic forgetting is the main condition to avoid. The present scheme solves this technical problem.
Disclosure of Invention
The invention aims to provide a deep-learning-based method for safety helmet identification with incremental learning and role determination, solving the technical problem of how to improve a factory's safety protection mechanism. The method acquires camera video in a timely manner, processes it to identify workers' helmet-wearing state, distinguishes worker roles by helmet color, promptly detects the unsafe behavior of not wearing a helmet, and determines compliance with safe-production regulations; in addition, it supports incremental learning while avoiding catastrophic forgetting.
The invention adopts the ideas of incremental learning and role mapping, and provides a deep-learning-based method for safety helmet identification with incremental learning and role determination, which can not only detect safety helmets but also automate safety supervision; at the same time it recognizes roles, providing technical support for subsequent services. Because categories and scenes are frequently added in real factory environments, incremental learning is adopted to avoid costly training from scratch, and a batch mechanism is adopted to avoid catastrophic forgetting during incremental learning. The system comprises: a training module, a camera scheduling module, a GPU scheduling module, a personnel detection module, a role mapping module, and a risk reporting and recording module; wherein:
a training module: the system is responsible for collecting training data, establishing learning rules, realizing incremental learning and avoiding catastrophic forgetting.
The camera scheduling module: pulls streams from the working cameras over RTSP and, by polling the cameras and extracting frames, stores the data into a buffer awaiting detection.
A GPU scheduling module: each GPU has a controller that monitors the number of images in the buffer awaiting detection; when the count reaches a threshold, the images are formed into a batch for prediction, and the prediction results are pushed back through a queue, improving the efficiency and precision of the model.
The personnel detection module: based on an improved YOLO v5 object-detection model, detects whether a safety helmet is worn and identifies its color.
A role mapping module: maps the detected helmet color of a person to that person's role.
A risk reporting and recording module: responsible for uploading, storing and raising alarms on detected no-helmet personnel information with photographic evidence, and for providing history and evidence queries.
The specific implementation method of the training module is as follows:
On the basis of the original training set, pictures of newly added categories are collected and annotated. Each annotation uses the tuple (I, x, y, w, h, c, b), where I is the image ID, x and y are the coordinates of the annotation box's top-left corner, w and h are its width and height, c is its category label, and b is the batch it was added in.
The batch of an annotation box is determined as follows: the batch is set to 1 when the training set is initialized, and each subsequent addition of pictures increments the batch by 1.
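The (I, x, y, w, h, c, b) record and the batch-numbering rule above can be sketched as follows; the field and function names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class BoxLabel:
    image_id: str   # I: ID of the image
    x: float        # x coordinate of the annotation box's top-left corner
    y: float        # y coordinate of the annotation box's top-left corner
    w: float        # width of the annotation box
    h: float        # height of the annotation box
    cls: int        # c: category label (helmet color, or no-helmet)
    batch: int      # b: the batch this annotation was added in

def next_batch_id(existing_labels):
    """Batch 1 at training-set initialization; each later addition of
    pictures gets the previous maximum batch number plus 1."""
    if not existing_labels:
        return 1
    return max(label.batch for label in existing_labels) + 1
```

A new annotation round would call `next_batch_id` once and stamp every new label with the returned value, so the loss function can later distinguish old data from new.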
In incremental learning, fine-tuning is performed on the basis of the original trained model, and catastrophic forgetting is prevented by a loss function of the following form:
[loss-function equation, rendered as an image in the original]
where I_obj is an indicator function meaning that an anchor window responsible for the category participates in the calculation; b_i is the batch information recorded in step (1), c_i is the class label, and p_j(c_i) is the predicted probability.
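The exact loss is rendered only as an image in the original, so the following is a hedged sketch, not the patent's formula: one plausible batch-aware cross-entropy over responsible anchors (I_obj = 1), where annotations from earlier batches are up-weighted so fine-tuning on new data does not drown out what was learned before.

```python
import math

def batch_aware_ce(probs, labels, batches, responsible, current_batch, old_weight=2.0):
    """Sketch of a batch-gated cross-entropy (an assumption, not the imaged
    patent equation).
    probs:       list of per-anchor class-probability lists p_j(c_i)
    labels:      per-anchor class labels c_i
    batches:     per-anchor batch ids b_i recorded during annotation
    responsible: per-anchor indicator I_obj (1 if the anchor is responsible)
    """
    eps = 1e-9
    total, n_resp = 0.0, 0
    for p_row, c, b, r in zip(probs, labels, batches, responsible):
        if not r:
            continue  # I_obj = 0: anchor does not participate
        weight = old_weight if b < current_batch else 1.0
        total += -math.log(p_row[c] + eps) * weight
        n_resp += 1
    return total / max(n_resp, 1)
```

The up-weighting constant `old_weight` is a hypothetical knob; any rule that conditions the per-anchor term on b_i fits the text's description of "learning rules determined by batch".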
For detection-box regression, the following coordinate calculation is used:
[box-regression equations, rendered as images in the original]
where x* and x_a denote the x coordinates of the top-left corners of the prediction box and the anchor box respectively; y, w and h are analogous, denoting the top-left y coordinate, box width and box height; and t is the box-regression parameter learned during training. Regression uses the CIoU loss function.
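Since the equations themselves are images in the original, the following is a guess consistent with the variable definitions above: the standard anchor-box decode used across the Faster R-CNN / YOLO family, with offsets scaled by anchor size and sizes rescaled exponentially.

```python
import math

def decode_box(xa, ya, wa, ha, tx, ty, tw, th):
    """Standard anchor-box decode (an assumed form, not the patent's imaged
    equations): x*, y* are the anchor's top-left corner shifted by t scaled
    with the anchor's size; w*, h* rescale the anchor exponentially."""
    x = xa + wa * tx          # predicted top-left x
    y = ya + ha * ty          # predicted top-left y
    w = wa * math.exp(tw)     # predicted width
    h = ha * math.exp(th)     # predicted height
    return x, y, w, h
```

With t = 0 the decode returns the anchor itself, which is the usual sanity check for this parameterization.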
The implementation method of the camera scheduling module is as follows:
A global image linked list is constructed, with one thread polling all cameras;
for each camera, the stream is pulled over RTSP (Real Time Streaming Protocol) and the current frame is extracted;
the global image linked list is locked, the current frame is stored, and the list is unlocked; the process is as follows:
Lock(Is);
if Is.Length < MaxLen
{
Is.Add(I);
}
Unlock(Is);
where Is is the global image linked list, I is the current frame-extracted image, and MaxLen is the maximum allowed length of the list. When the list reaches its maximum length, frame extraction pauses until images in the list are consumed by the GPU scheduling module.
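The add path above can be sketched as a small thread-safe buffer; class and method names are illustrative, and the length check is held under the same lock as the insertion so concurrent GPU consumers cannot race past MaxLen.

```python
import threading

class FrameBuffer:
    """Sketch of the global image linked list used by the camera
    scheduling module (names are assumptions, not from the patent)."""
    def __init__(self, max_len):
        self.max_len = max_len
        self._frames = []
        self._lock = threading.Lock()

    def try_add(self, frame):
        """Add a frame if the buffer is below MaxLen; otherwise report
        failure so the poller can drop or hold the frame."""
        with self._lock:
            if len(self._frames) < self.max_len:
                self._frames.append(frame)
                return True
            return False  # buffer full: wait for GPU consumers
```

One polling thread would call `try_add` after each frame extraction and keep cycling through the cameras.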
The GPU scheduling module implementation method comprises the following steps:
each GPU scheduling module independently checks whether the length of the global image linked list exceeds a threshold;
when it does, the module locks the global list, takes the threshold number of images, deletes them from the list, and unlocks it;
the acquired images are formed into a batch and loaded onto the GPU for detection.
The process is as follows:
if Is.Length >= MinLen
{
Lock(Is);
Batch=Is.Get(MinLen);
Is.Remove(MinLen);
Unlock(Is);
Pack(Batch);
}
where MinLen is the configured threshold: the number of images the GPU processes as one batch.
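The consumer side of the pseudocode above can be sketched as a single function per GPU controller; the function name and signature are illustrative.

```python
import threading

def try_take_batch(frames, lock, min_len):
    """Sketch of one GPU controller's poll: under the lock, take the first
    MinLen frames as a batch and delete them from the shared list; return
    None when too few frames have accumulated yet."""
    with lock:
        if len(frames) >= min_len:
            batch = frames[:min_len]
            del frames[:min_len]
            return batch
        return None
```

Each controller would loop on this call, sleeping briefly when it returns None, and hand any returned batch to the detection model.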
The personnel detection module is realized by the following steps:
The detection box is calculated as follows:
[detection-box equation, rendered as an image in the original]
The category of the detection box:
[argmax-over-classes equation, rendered as an image in the original]
where p_j is the predicted probability of belonging to class j.
The role mapping module is realized by the following steps:
A color-to-role mapping function f: C → R is constructed, where C is the set of helmet colors and R is the set of roles. After the helmet color is identified, the role is determined through the mapping function.
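The mapping f: C → R is naturally a lookup table; the particular colors and roles below are illustrative assumptions, since the patent does not enumerate either set.

```python
# Hypothetical color set C and role set R; real deployments would load
# these from a configuration file, per the patent's configurability claim.
COLOR_TO_ROLE = {
    "white": "manager",
    "red": "technician",
    "blue": "electrician",
    "yellow": "worker",
}

def role_of(color):
    """The mapping function f: C -> R described above; unknown colors map
    to a sentinel so downstream logic can flag them."""
    return COLOR_TO_ROLE.get(color, "unknown")
```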
The risk reporting and recording module is realized by the following steps:
after the personnel detection module detects the information, sending a Kafka message;
after receiving the Kafka message, the risk reporting and recording module judges a no-helmet condition to be a risk and persists the evidence and related information;
the risk reporting and recording module pops up alarm information;
and the risk reporting and recording module provides a subsequent query function.
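The Kafka hand-off above can be sketched as follows. The topic name and payload fields are assumptions (the patent does not specify them); the actual send is left commented because it needs a running broker.

```python
import json
import time

def build_risk_message(camera_id, image_path, detected_no_helmet):
    """Build the JSON risk payload the personnel detection module would
    publish as a Kafka message; field names are illustrative."""
    return json.dumps({
        "camera_id": camera_id,
        "evidence_image": image_path,
        "risk": "no_helmet" if detected_no_helmet else "none",
        "timestamp": int(time.time()),
    })

# With a real broker, sending would look roughly like this (kafka-python):
# from kafka import KafkaProducer
# producer = KafkaProducer(bootstrap_servers="localhost:9092")
# producer.send("helmet-risk",
#               build_risk_message("cam-3", "/evidence/x.jpg", True).encode())
```

The risk reporting module would consume this topic, persist the evidence image path, raise the pop-up alarm, and index the record for later queries.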
The invention is divided into several parts, each with its own configuration description; changing the corresponding configuration files modifies each part's behavior, improving the system's flexibility and extensibility. Configuration files allow the system to be personalized to the requirements of different products, providing future extensibility and improving usability and maintainability.
The invention achieves the following remarkable effects:
the invention combines the deep learning technology with the factory safety mechanism, further distinguishes the colors of the safety helmet on the basis of identifying the safety helmet, thereby distinguishing the roles of the workers, carries out real-time stream taking on the camera through the RTSP, monitors the activity states of the workers of the safety helmet with different colors in time, judges whether the workers have unsafe behaviors or not and has risk information, and uploads a risk picture in time when the unsafe behaviors occur to send the risk information. When newly added identification types and scenes, incremental learning can be realized, and the memory of the past learning is saved, so that the method can be operated safely, stably, reliably and efficiently for a long time.
Drawings
Fig. 1 is a system configuration diagram in the embodiment of the present invention.
FIG. 2 is a flow chart of a training model in an embodiment of the invention.
Fig. 3 is a data flow diagram in an embodiment of the invention.
Detailed Description
In order to clearly illustrate the technical features of the present solution, the present solution is described below by way of specific embodiments.
The invention relates to a safety helmet identification and role judgment method based on deep learning. The invention can be mainly divided into two large processes of training and running.
Referring to fig. 1, the training module is responsible for model training, and the resulting data model provides the basis for intelligent identification by the personnel detection module. At runtime, the camera scheduling module first obtains frame-extracted images and stores them into the global image linked list. The GPU scheduling module continuously monitors the image list; when enough data have accumulated, images are packed into a batch and sent to the personnel detection module for helmet identification. The recognition result is passed through the role mapping module to determine the role. At the same time, risk information is recorded to the risk reporting and recording module and the evidence is retained.
Referring to fig. 2, existing video and image data are obtained from the unit that owns the original security cameras, selecting footage with heavy flows of people wearing helmets of various colors, at different angles, in different scenes and at different times of day; alternatively, streams are pulled directly from cameras over the RTSP protocol. The obtained video and image data are then fed into the system.
During video frame extraction, the system skips frames in the obtained video. To approximate actual factory conditions as closely as possible and improve accuracy, frames are selected at equal time intervals during frame skipping, and the resulting images are then screened. Images with no obvious change, with no person present, or that are blurred contribute nothing positive to model training and are discarded accordingly; images with obvious personnel flow, helmets of various colors, and a mix of close-range and long-range shots of personnel are retained.
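The equal-interval frame skipping described above amounts to selecting every Nth frame index; a minimal sketch:

```python
def sample_frame_indices(total_frames, interval):
    """Equal-interval frame skipping: keep every `interval`-th frame of a
    video with `total_frames` frames, starting from frame 0."""
    if interval < 1:
        raise ValueError("interval must be >= 1")
    return list(range(0, total_frames, interval))
```

A frame-extraction loop (e.g. over an RTSP stream or video file) would keep only the frames whose index appears in this list, then apply the screening rules above.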
The obtained pictures are annotated, mainly by hand with annotation software. Because the available data are limited, a series of data-augmentation operations are applied to improve accuracy. Workers wearing helmets of different colors are then labeled: helmets of the same color receive the same category, helmets of different colors receive different categories, and workers without helmets are labeled as the no-helmet category. During annotation, some helmets are held in the hand, raised in front of the body, placed behind the back, or lifted above the head; in all these cases the helmet is separated from the head, which is not correct regulation wear, so all are classified as not wearing a helmet.
After annotation, the data are divided into a training set, validation set, test set and so on, and a model is built for training. Through detection-box regression and incremental-model training, the final learned model is obtained and supplied to the personnel detection module.
Referring to fig. 3, all image data are stored in a global image linked list, the storage hub of the whole runtime data flow. The list has a maximum length MaxLen and a minimum length MinLen.
The camera scheduling module continuously polls the cameras and, after extracting a frame, checks whether the list length is below MaxLen. If not, the extracted frame is dropped, or held until the list length falls below MaxLen. If it is below, the list is locked, the image is inserted, the list is unlocked, and camera polling continues.
The GPU scheduling module continuously checks the global image linked list, testing whether its length is at least MinLen. If it is smaller, the module waits 5 seconds and checks again. Otherwise it locks the list, takes the first MinLen frames, deletes them from the global image list, and unlocks. The acquired frames are packed and sent to the GPU for detection.
Once pictures are detected, the system can identify unsafe no-helmet behavior and check whether the different persons represented by different helmet colors comply with the required personnel counts, preventing, for example, a single person performing work that requires two or more operators. Risks present in unsafe conditions are detected in time and then reported to the risk system.
The system uploads detected risk data, including the video and image data containing the risk, to the storage server so the data can be conveniently retrieved and retained afterwards, and uploads the corresponding risk information to the risk-handling platform, where the risk is handled according to the actual situation and a corresponding solution is made in time.
The technical features of the present invention which are not described in the above embodiments may be implemented by or using the prior art, and are not described herein again, of course, the above description is not intended to limit the present invention, and the present invention is not limited to the above examples, and variations, modifications, additions or substitutions which may be made by those skilled in the art within the spirit and scope of the present invention should also fall within the protection scope of the present invention.

Claims (7)

1. The helmet identification incremental learning and role judgment method based on deep learning is characterized by comprising a training module, a camera scheduling module, a GPU scheduling module, a personnel detection module, a role mapping module and a risk reporting and recording module; wherein the content of the first and second substances,
a training module: the system is responsible for collecting training data, establishing learning rules, realizing incremental learning and avoiding catastrophic forgetting;
the camera scheduling module: streaming fetching is carried out on a working camera based on RTSP (real time streaming protocol), and data are stored in a buffer area to be detected by polling the camera and extracting frames;
a GPU scheduling module: each GPU is provided with a controller, each controller detects the number of images in a buffer area to be detected, and when the number reaches a certain threshold value, a batch is formed for prediction, and then a prediction result is pushed back through a queue, so that the efficiency and the precision of the model are improved;
the personnel detection module: detecting and identifying whether a safety helmet is worn and the color of the safety helmet based on the improved YOLO v5 target detection model;
a role mapping module: mapping to the role of the personnel through the detected color of the safety helmet of the personnel;
a risk reporting and recording module: and the safety helmet is responsible for uploading, storing and alarming the detected personnel information without wearing the safety helmet and photo evidence, and providing historical records and evidence inquiry.
2. The method for helmet identification incremental learning and role determination based on deep learning of claim 1, wherein the specific implementation method of the training module is as follows:
collecting pictures of newly added categories on the basis of the original training set and annotating them, each annotation using the tuple (I, x, y, w, h, c, b), where I is the image ID, x and y are the coordinates of the annotation box's top-left corner, w and h are its width and height, c is its category label, and b is the batch it was added in;
regarding the determination of the batch of the labeling frame, the batch is set to 1 when the training set is initialized, and the batch of the subsequent newly added pictures is increased by 1;
in incremental learning, fine-tuning is performed on the basis of the original trained model, and catastrophic forgetting is prevented by a loss function of the following form:
[loss-function equation, rendered as an image in the original]
where I_obj is an indicator function meaning that an anchor window responsible for the category participates in the calculation; b_i is the batch information recorded in step (1), c_i is the class label, and p_j(c_i) is the predicted probability.
3. The method for helmet identification incremental learning and role determination based on deep learning of claim 2, wherein the camera scheduling module is implemented as follows:
constructing a global image linked list, and polling all cameras by one thread;
for each camera, streaming based on RTSP (real time streaming protocol) and extracting a current frame;
and locking the global image linked list, storing the current frame and unlocking.
4. The helmet identification incremental learning and role determination method based on deep learning of claim 3, wherein the GPU scheduling module is implemented as follows:
each GPU scheduling module independently detects whether the length of the global image linked list exceeds a certain threshold value;
when the detection exceeds the threshold value, locking the global linked list, acquiring images with the threshold quantity, deleting the images from the global linked list, and then unlocking;
and (4) forming the acquired images into a batch and loading it onto the GPU for detection.
5. The helmet identification incremental learning and role determination method based on deep learning of claim 4, wherein the personnel detection module is implemented as follows:
the detection box is calculated as follows:
[detection-box equation, rendered as an image in the original]
where x* and x_a denote the x coordinates of the top-left corners of the prediction box and the anchor box respectively; y, w and h are analogous, denoting the top-left y coordinate, box width and box height; t is the box-regression parameter learned during training;
the category of the detection box:
[argmax-over-classes equation, rendered as an image in the original]
where p_j is the predicted probability of belonging to class j.
6. The method for helmet identification incremental learning and role determination based on deep learning of claim 5, wherein the role mapping module is implemented as follows:
constructing a color-to-role mapping function f: C → R, where C is the set of helmet colors and R is the set of roles; after the helmet color is identified, the role is determined through the mapping function.
7. The method for helmet identification incremental learning and role determination based on deep learning of claim 6, wherein the risk reporting and recording module is implemented as follows:
after the personnel detection module detects the information, sending a Kafka message;
after receiving the Kafka message, the risk reporting and recording module judges a no-helmet condition to be a risk and persists the evidence and related information;
the risk reporting and recording module pops up alarm information;
and the risk reporting and recording module provides a subsequent query function.
CN202111196326.2A 2021-10-14 2021-10-14 Helmet identification incremental learning and role judgment method based on deep learning Pending CN113963436A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111196326.2A CN113963436A (en) 2021-10-14 2021-10-14 Helmet identification incremental learning and role judgment method based on deep learning


Publications (1)

Publication Number Publication Date
CN113963436A true CN113963436A (en) 2022-01-21

Family

ID=79464147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111196326.2A Pending CN113963436A (en) 2021-10-14 2021-10-14 Helmet identification incremental learning and role judgment method based on deep learning

Country Status (1)

Country Link
CN (1) CN113963436A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120310864A1 (en) * 2011-05-31 2012-12-06 Shayok Chakraborty Adaptive Batch Mode Active Learning for Evolving a Classifier
CN110188724A (en) * 2019-06-05 2019-08-30 中冶赛迪重庆信息技术有限公司 The method and system of safety cap positioning and color identification based on deep learning
CN111079731A (en) * 2019-12-03 2020-04-28 中冶赛迪重庆信息技术有限公司 Configuration system, method, equipment and medium based on safety helmet identification monitoring system
CN112488209A (en) * 2020-11-25 2021-03-12 南京大学 Incremental image classification method based on semi-supervised learning
CN112488073A (en) * 2020-12-21 2021-03-12 苏州科达特种视讯有限公司 Target detection method, system, device and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
伍吉修: "Research on visual detection for safety protection of construction-site personnel", China Masters' Theses Full-text Database, Engineering Science & Technology I, no. 6, 15 June 2021 (2021-06-15), pages 1-49 *
金睿琦: "Research on vision-system technology for autonomous target-search UAVs", Wanfang, 4 February 2021 (2021-02-04), pages 1-54 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115578605A (en) * 2022-11-16 2023-01-06 北京阿丘科技有限公司 Data classification method, device and equipment based on incremental learning and storage medium
CN117278696A (en) * 2023-11-17 2023-12-22 西南交通大学 Method for editing illegal video of real-time personal protective equipment on construction site
CN117278696B (en) * 2023-11-17 2024-01-26 西南交通大学 Method for editing illegal video of real-time personal protective equipment on construction site


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination