CN116503793A - Site safety dressing recognition monitoring method based on deep learning - Google Patents

Info

Publication number: CN116503793A
Application number: CN202310122346.8A (also the priority application)
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: data, resolution, dressing, deep learning, picture
Inventors: 郑翊, 覃仕顶, 张爱平, 杨帆
Current and original assignee: Hubei Chujianyi Network Technology Co., Ltd. (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Application filed by Hubei Chujianyi Network Technology Co., Ltd.
Legal status: Pending (the legal status is an assumption, not a legal conclusion; Google has not performed a legal analysis)
Classifications

    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects (scenes; scene-specific elements; context or environment of the image)
    • G06N 3/08: Learning methods (computing arrangements based on biological models; neural networks)
    • G06V 10/764: Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/82: Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G08B 21/24: Reminder alarms, e.g. anti-loss alarms (status alarms)
    • G08B 31/00: Predictive alarm systems characterised by extrapolation or other computation using updated historic data
    • Y02P 90/30: Computing systems specially adapted for manufacturing (enabling technologies with a potential contribution to greenhouse gas emissions mitigation)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Emergency Management (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Alarm Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a construction site safety dressing recognition and monitoring method based on deep learning, belonging to the technical field of safety management and comprising the following steps: a server obtains first data generated from a sample set of detection targets captured by a collector; the server optimizes the first data to generate second data; the server establishes a classification model and classifies the second data to generate third data characterizing unsafe-dressing samples; the server establishes a feedback mechanism and issues an early warning for the third data to generate fourth data; the server transmits the fourth data to a display device to remind the target of the unsafe dressing. When this technical scheme is implemented, the first data, which represent dressing images and video streams of site personnel, are optimized to obtain the second data; a classification model and a feedback mechanism are established; site personnel who are not safely dressed are screened out; and early-warning information is issued through the display screen. The scheme thereby achieves on-line recognition, reduces the system's computational load, and streamlines the recognition process.

Description

Site safety dressing recognition monitoring method based on deep learning
Technical Field
The application relates to the technical field of safety management, in particular to a site safety dressing identification monitoring method based on deep learning.
Background
With the large-scale advance of new urbanization, both the scale and the number of construction sites keep growing rapidly, and the corresponding production-safety problems are gradually being exposed. Construction site labor is diverse and highly mobile, and some workers have weak safety awareness; the site environment is complex, with mechanical equipment and irregular operations that easily cause safety accidents, leading to casualties and property loss, so that management is both difficult and costly.
To meet the safety supervision requirements of the construction site, safety management currently relies mainly on manual supervision, with video monitoring as an auxiliary measure. For detecting safety dressing, existing intelligent monitoring aids apply continuous person tracking together with classification and recognition models; for example, Chinese patent publication No. CN111401301A discloses a personnel dressing monitoring method, device, equipment, and storage medium.
However, when implementing the above technical solution, the monitored object must be identified continuously, which produces considerable computational redundancy and consumes computing resources. Moreover, for persons in the distant view, low-resolution acquisition equipment lowers the classification recognition rate and degrades the recognition effect. It is therefore necessary to provide an on-line site safety dressing recognition monitoring method that solves these problems.
To solve the low-resolution problem, an emerging deep-learning-based super-resolution reconstruction method can be adopted. Among the various super-resolution reconstruction network algorithms, the Real-ESRGAN algorithm has the following advantages: compared with traditional data set construction methods, it uses high-order degradation processing to better match the complexity of real degraded images; it suppresses ringing and overshoot artifacts in the image; and its adversarial learning enhances image details.
It should be noted that the above information disclosed in this background section is only for understanding the background of the present application concept and, therefore, it may contain information that does not constitute prior art.
Disclosure of Invention
The technical scheme adopted for solving the technical problems is as follows: a construction site safety dressing recognition monitoring method based on deep learning comprises the following steps:
the method comprises the steps that a server obtains first data generated by a sample set of a detection target of a collector, wherein the first data comprises a construction site personnel dressing image and a video stream;
the server optimizes the first data and generates second data, wherein the second data comprises optimized site personnel dressing images and video streams;
the server establishes a classification model and classifies the second data to generate third data characterizing the unsafe-dressing samples and data characterizing the safe-dressing samples;
the server establishes a feedback mechanism and performs early warning on the third data to generate fourth data for early warning;
and the server displays the fourth data on a display screen to remind the target of the unsafe dressing.
When this technical scheme is implemented, the server optimizes the first data, which represent dressing images and video streams of site personnel, to obtain the optimized second data, establishes a classification model and a feedback mechanism, screens out site personnel who are not safely dressed, and issues early-warning information through the display screen, thereby achieving on-line recognition, reducing the system's computational load, and streamlining the recognition flow.
Further, the optimizing of the first data includes:
acquiring the resolution of each picture in the first data;
setting a resolution threshold, and comparing the resolution of each picture in the first data against the threshold to generate fifth data characterizing pictures whose resolution does not reach the threshold;
and establishing a single-image super-resolution reconstruction model based on a deep learning network, performing resolution reconstruction on the fifth data, and using the reconstructed high-resolution images for subsequent target recognition.
Further, the establishing of the classification model includes:
configuring a training set, wherein the training set comprises a safety helmet picture, a work clothes picture and a reflective clothes picture;
establishing a safe-dressing pre-training model through a ResNet50 network and performing transfer training;
and generating a safe-dressing classification model.
A site safety wear identification monitoring system based on deep learning, comprising:
the sample acquisition module is used for acquiring first data generated from a sample set of the detection target;
the optimizing module is used for optimizing the first data and generating second data;
a classification module for establishing a classification model and classifying the second data to generate third data characterizing the unsafe-dressing samples;
the feedback early warning module is used for establishing a feedback mechanism and carrying out early warning on the third data so as to generate fourth data for early warning;
and the display output module is used for displaying the fourth data through the display screen and reminding the target of unsafe dressing.
Further, the optimizing module includes:
a motion state detection unit for acquiring the data in a non-static state from the first data;
a resolution acquisition unit for acquiring the resolution of each picture in the first data;
a threshold comparison unit for setting a resolution threshold and comparing the resolution of each picture in the first data against it, to generate fifth data characterizing pictures whose resolution does not reach the threshold;
and a resolution reconstruction unit for establishing a single-image super-resolution reconstruction model based on a deep learning network, performing resolution reconstruction on the fifth data, and using the reconstructed high-resolution images for subsequent target recognition.
Further, the classification module includes:
the training set configuration unit is used for configuring a training set, wherein the training set comprises a safety helmet picture, a work clothes picture and a reflective clothes picture;
the training unit is used for establishing a safe-dressing pre-training model through a ResNet50 network and performing transfer training;
and the model generating unit is used for generating a safe dressing classification model.
Further, an electronic device is provided in which the input device, the output device, the processor, and the memory are connected through a bus.
The beneficial effects of this application are as follows. In the site safety dressing recognition monitoring method based on deep learning, the server optimizes the first data, which represent dressing images and video streams of site personnel, to obtain the optimized second data; a classification model and a feedback mechanism are established; site personnel without safe dressing are screened out; and early-warning information is issued through a display screen. On-line recognition is thereby achieved; motion-state screening and picture super-resolution reconstruction reduce the system's computational load and streamline the recognition flow; and by building a portrait recognition model to screen the samples, redundant recognition of non-targets is reduced and recognition efficiency is improved.
In addition to the objects, features, and advantages described above, there are other objects, features, and advantages of the present application. The present application will be described in further detail with reference to the drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application.
In the drawings:
FIG. 1 is an overall schematic diagram of a method for secure garment identification and monitoring in the present application;
FIG. 2 is a schematic block diagram of a site safety dressing identification monitoring system based on deep learning;
fig. 3 is a schematic diagram of connection of an electronic device.
Detailed Description
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
To make the present solution better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are evidently only some, not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of the present application.
As shown in fig. 1, the present application provides a site safety dressing identification monitoring method based on deep learning, which includes:
step S1: the method comprises the steps that a server obtains first data generated by a sample set of a detection target of a collector, wherein the first data comprises a construction site personnel dressing image and a video stream;
when the sample set of objects to be detected is obtained, the objects can be divided into static objects and dynamic objects: a static object is a picture in a static state, while a dynamic object is a monitoring video stream. Dynamic first data therefore need to be screened by the inter-frame difference method. A static picture first undergoes portrait recognition by the YOLOv5 algorithm; once it is confirmed to contain a person, the person region is captured as a screenshot and sent to the sample set. The monitoring video stream instead undergoes motion detection before further processing.
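Assuming a YOLOv5-style detector whose results are rows of (x1, y1, x2, y2, confidence, class), the screenshot step for static pictures might be sketched as follows; the `crop_persons` helper, its box format, and the confidence cutoff are illustrative assumptions, not the patent's own code:

```python
import numpy as np

PERSON_CLASS = 0  # COCO class index for "person" in YOLOv5-style models

def crop_persons(frame, detections, conf_min=0.5):
    """Cut out each detected person region so that only portrait crops
    enter the sample set; detection rows are (x1, y1, x2, y2, conf, cls)."""
    crops = []
    for x1, y1, x2, y2, conf, cls in detections:
        if int(cls) == PERSON_CLASS and conf >= conf_min:
            crops.append(frame[int(y1):int(y2), int(x1):int(x2)])
    return crops
```

Low-confidence boxes and non-person classes are dropped here, matching the goal of keeping non-portrait samples out of the sample set.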
Step S2: the server optimizes the first data and generates second data, wherein the second data comprises optimized site personnel dressing images and video streams;
the sample set contains static portrait pictures and key frames extracted from the monitoring video stream. Pictures extracted from key frames are targets in a motion state, so the sample set needs to be optimized to improve recognition accuracy.
Step S3: the server builds a classification model and classifies the second data to generate third data for characterizing the unsecure wearing sample;
in this embodiment, a safe-dressing pre-training model is built and transfer training is performed with a ResNet50 network. After the training set is configured, training generates a safe-dressing classification model. In this embodiment, the training set consists of safety helmet pictures, work clothes pictures, and reflective clothing pictures; in other embodiments of the invention, different classification models, such as a mask recognition model or a work-card recognition model, can be obtained by changing the elements of the training set.
Step S4: the server establishes a feedback mechanism and performs early warning on the third data to generate fourth data for early warning.
Step S5: and the server displays the fourth data through a display screen and is used for reminding the target of unsafe dressing.
The above embodiment describes the process of obtaining samples, optimizing them, recognizing and classifying them, and outputting the result, so as to achieve the purpose of safe-dressing recognition and monitoring. The following embodiment provides a method for portrait recognition of the samples, comprising the following steps:
step A: acquiring a data set, wherein the data set contains portrait elements;
in this embodiment, the data set may contain multiple elements, presented from multiple angles and against multiple backgrounds, so as to reduce the false-detection rate.
step B: performing frame selection and annotation on the data set, specifically drawing bounding boxes that mark out the portrait parts;
step C: selecting one tenth of the data set as a validation set and validating continuously during training, so as to improve the recognition effect;
step D: and after training, generating a portrait identification model, and accessing the portrait identification model into a sample set.
In this embodiment, the purpose of performing portrait recognition on the samples is to screen sample features and to eliminate the computational resource waste caused by non-portrait samples.
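Step C's one-tenth validation split can be sketched as a simple random partition; the helper name, seed, and minimum-one-sample rule are illustrative:

```python
import random

def split_train_val(samples, val_fraction=0.1, seed=42):
    """Hold out roughly one tenth of the annotated set for validation
    during training, as step C describes."""
    items = list(samples)
    random.Random(seed).shuffle(items)  # fixed seed for a reproducible split
    n_val = max(1, int(len(items) * val_fraction))
    return items[n_val:], items[:n_val]
```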
The foregoing embodiment describes in detail a method for performing portrait recognition on the samples to improve the validity of the sample set. The following embodiment provides a processing method for the monitoring video stream, which includes:
step S101: performing uniform key frame extraction on the monitoring video stream at a set extraction interval; in this embodiment, the extraction interval is set to one second;
according to the extraction interval, a key frame is extracted from the monitoring video stream every second and named with its current timestamp, the initial timestamp being zero seconds; an image sequence is then generated based on the timestamps.
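With a one-second extraction interval, the keyframe schedule and timestamp naming can be expressed independently of the video decoder (frame indexing starts at zero, matching the zero-second initial timestamp; the file-name pattern is an assumption):

```python
def keyframe_indices(total_frames, fps, interval_s=1.0):
    """Indices of the frames to extract, one per interval."""
    step = max(1, round(fps * interval_s))
    return list(range(0, total_frames, step))

def keyframe_name(frame_index, fps):
    """Name each extracted frame by its timestamp, starting at 0 s."""
    return f"{frame_index / fps:.1f}s.jpg"
```

In practice the frames themselves would be read with a decoder such as OpenCV's `VideoCapture`, seeking to each index in turn.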
Step S102: performing differential operation on the continuous key frames, and judging the motion state of the target;
in this embodiment, a sequence of three consecutive frames k, k+1, k+2 is taken. Gray-level differencing of frames k and k+1 gives a first gray-level difference, and differencing of frames k+1 and k+2 gives a second. The two are combined by a pixel-wise AND operation to obtain the gray-level change from frame k to frame k+2. This change is compared with a set threshold, and a target whose result exceeds the threshold is judged to be in motion and sent to the sample set.
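The three-frame difference just described can be sketched in NumPy; the per-pixel threshold and minimum changed-pixel count are assumed values, not figures from the patent:

```python
import numpy as np

def is_moving(f_k, f_k1, f_k2, pixel_thresh=25, min_changed=50):
    """AND the two absolute gray-level differences so that only pixels
    changed across both intervals count, then compare the changed-pixel
    total against a threshold to decide the motion state."""
    d1 = np.abs(f_k1.astype(np.int16) - f_k.astype(np.int16))
    d2 = np.abs(f_k2.astype(np.int16) - f_k1.astype(np.int16))
    moved = (d1 > pixel_thresh) & (d2 > pixel_thresh)  # pixel-wise AND
    return int(moved.sum()) >= min_changed
```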
In the embodiment, if the whole monitoring video is classified, the calculation is complex and the calculation amount is large, and the load of the server is increased, so that the monitoring video is further analyzed, the motion detection is performed, whether the target is dynamic is determined according to the judgment result, if yes, the target image frame is marked, if not, the detection of the key frame when no moving target exists in the monitoring video is eliminated, the calculation resource is saved, and the recognition efficiency is improved.
In this embodiment, instead of taking a continuous three-frame image sequence to perform differential detection, two adjacent frame image sequences may be taken, so that the recognition efficiency is improved under the condition of moderate data volume.
The above embodiment provides a processing method for the monitoring video stream that reduces the computational resource waste caused by non-target images and improves recognition efficiency. The following embodiment provides a process for further optimizing the sample set, including:
step S201: judging the recognition success rate of the samples, screening out samples with the recognition success rate lower than a set threshold value, and establishing a sample set to be optimized;
owing to varying sample quality, some samples in the sample set collected in step S1 fail to be recognized. Screening sends the successfully recognized samples to a sample classification library to await classification, while the unrecognized samples proceed to further optimization.
Step S202: obtaining the resolution of samples in a sample set to be optimized, and setting a resolution threshold;
in this embodiment, the resolution threshold is set to 200 px, and the length and width resolutions of each sample are compared with it: if both are higher than the threshold, the sample is sent to the first optimization system; if either one is lower, the sample is sent to the second optimization system.
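The 200 px routing rule of this embodiment reduces to a small predicate; the return labels follow the text's "first/second optimization system" naming, and the inclusive boundary handling is an assumption:

```python
RESOLUTION_THRESHOLD = 200  # px, per this embodiment

def route_sample(width, height, thresh=RESOLUTION_THRESHOLD):
    """Samples whose length and width both reach the threshold go to the
    first optimization system; if either side falls short, the sample goes
    to the second system for super-resolution reconstruction."""
    if width >= thresh and height >= thresh:  # boundary treated as passing
        return "first_optimization_system"
    return "second_optimization_system"
```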
Step S203: carrying out directional optimization on samples in the second optimization system;
to guarantee the recognition effect, the resolution of a sample must meet the recognition requirement, and samples that do not meet it must be optimized. When this embodiment is implemented, samples whose resolution falls short of the standard undergo resolution reconstruction by a single-image super-resolution reconstruction model trained on the Real-ESRGAN network.
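A sketch of this gated reconstruction step; `upscale4x` is a nearest-neighbour stand-in, used only so the pipeline runs end to end, and a trained Real-ESRGAN generator would replace it in a real deployment:

```python
import numpy as np

def upscale4x(img):
    """Placeholder for a Real-ESRGAN 4x generator: nearest-neighbour
    enlargement via a block-repeat Kronecker product."""
    return np.kron(img, np.ones((4, 4), dtype=img.dtype))

def reconstruct_if_needed(img, thresh=200):
    """Reconstruct only samples below the resolution threshold, so that
    already-adequate images skip the expensive network pass."""
    h, w = img.shape[:2]
    return upscale4x(img) if min(h, w) < thresh else img
```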
The above embodiment provides a process for further optimizing the sample set so that sample resolution meets the recognition requirement. The following embodiment describes in detail a site safety dressing recognition monitoring system based on deep learning which, as shown in fig. 2, includes:
the sample acquisition module is used for acquiring recognition samples and comprises a picture input unit and a monitoring video access unit, which acquire pictures and videos respectively;
the optimization module is used for optimizing the samples and further comprises:
a motion state detection unit for acquiring the data in a non-static state from the first data;
a resolution acquisition unit for acquiring the resolution of each picture in the first data;
a threshold comparison unit for setting a resolution threshold and comparing the resolution of each picture in the first data against it, to generate fifth data characterizing pictures whose resolution does not reach the threshold;
a resolution reconstruction unit for establishing a single-image super-resolution reconstruction model based on the Real-ESRGAN network and performing resolution reconstruction on the below-threshold pictures contained in the fifth data, to generate second data representing the new high-resolution images;
the classification module is used for establishing a classification model and classifying the second data to generate third data characterizing the unsafe-dressing samples;
the feedback early-warning module is used for establishing a feedback mechanism and issuing an early warning for the third data, so as to generate fourth data for early warning;
and the display output module is used for displaying the fourth data through the display screen and reminding the target of the unsafe dressing.
This embodiment describes in detail a construction site safety dressing recognition monitoring system based on deep learning that outputs target classification results at run time. The sample acquisition module captures the target or a specific area from multiple angles and generates an initial sample set, which is sent to the optimization module for resolution reconstruction and portrait recognition. After optimization, the samples are classified by the trained model; the classification result is sent to the feedback early-warning module, and the early-warning information is output on the display screen.
The recognition monitoring and early-warning system in the embodiments of the present invention has been described above from the point of view of modularized functional entities; it is described below from the point of view of hardware processing. Referring to fig. 3, another embodiment of the recognition monitoring and early-warning system includes:
in some embodiments of the present invention, the input device, the output device, the processor, and the memory may be connected by a bus or in another manner; fig. 3 takes a bus connection as an example.
By invoking the operating instructions stored in the memory, the processor performs the following steps:
the input device inputs sample information;
the processor receives sample data;
the processor optimizes the sample data;
the processor classifies the sample data and outputs the classification result to the output device;
in addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The integrated units may be implemented in hardware or in software functional units. The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium.
Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the same, but rather, various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.

Claims (8)

1. A site safety dressing recognition monitoring method based on deep learning, applied to a server, characterized by comprising the following steps:
the method comprises the steps that a server obtains first data generated by a sample set of a detection target of a collector, wherein the first data comprises a construction site personnel dressing image and a video stream;
the server optimizes the first data and generates second data, wherein the second data comprises optimized site personnel dressing images and video streams;
the server establishes a classification model and classifies the second data to generate third data for characterizing the unsafe-dressing samples;
the server establishes a feedback mechanism and performs early warning on the third data to generate fourth data for early warning;
and the server transmits the fourth data to the display screen for display, so as to remind the target of the unsafe dressing.
2. The site safety dressing recognition monitoring method based on deep learning according to claim 1, characterized in that the optimizing of the first data comprises:
acquiring the resolution of the picture in the first data;
setting a resolution threshold, and comparing the resolution of each picture in the first data against the threshold to generate fifth data characterizing pictures whose resolution does not reach the threshold;
and establishing a single-image super-resolution reconstruction model based on the deep learning network, and performing resolution reconstruction on the fifth data.
3. The site safety dressing recognition monitoring method based on deep learning according to claim 1, characterized in that the establishment of the classification model comprises:
configuring a training set, wherein the training set comprises a safety helmet picture, a work clothes picture and a reflective clothes picture;
establishing a safe-dressing pre-training model through a ResNet50 network and performing transfer training;
and generating a safe-dressing classification model.
4. A site safety dressing recognition and monitoring system based on deep learning, characterized by comprising:
a sample acquisition module, configured to acquire first data generated from a sample set of detection targets;
an optimization module, configured to optimize the first data and generate second data;
a classification module, configured to establish a classification model and classify the second data to generate third data characterizing unsafe-dressing samples;
a feedback early-warning module, configured to establish a feedback mechanism and perform early warning on the third data to generate fourth data for early warning;
and a display output module, configured to display the fourth data on a display screen and remind targets of unsafe dressing.
5. The site safety dressing recognition and monitoring system based on deep learning according to claim 4, characterized in that the optimization module comprises:
a motion-state detection unit, configured to acquire the non-static data in the first data;
a resolution acquisition unit, configured to acquire the resolution of the pictures in the first data;
a threshold comparison unit, configured to set a resolution threshold and compare it with the resolution of each picture in the first data to generate fifth data characterizing the pictures whose resolution does not reach the threshold;
and a resolution reconstruction unit, configured to establish a single-image super-resolution reconstruction model based on a deep learning network and perform resolution reconstruction on the fifth data.
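The motion-state detection unit of claim 5 (absent from the method claims) can be sketched with simple frame differencing that keeps only non-static frames; the change threshold and grayscale-grid frame format are assumptions:

```python
def is_moving(prev_frame, frame, threshold=10, min_changed=1):
    """Frames are row-major grids of grayscale values; a frame is 'moving'
    if enough pixels changed by more than the threshold."""
    changed = sum(
        abs(a - b) > threshold
        for prev_row, row in zip(prev_frame, frame)
        for a, b in zip(prev_row, row)
    )
    return changed >= min_changed

def non_static_frames(frames, threshold=10):
    """Keep each frame that differs from its predecessor (non-static data)."""
    return [cur for prev, cur in zip(frames, frames[1:])
            if is_moving(prev, cur, threshold)]

frames = [
    [[0, 0], [0, 0]],
    [[0, 0], [0, 0]],    # identical to the previous frame: static
    [[0, 120], [0, 0]],  # one pixel changed strongly: moving
]
moving = non_static_frames(frames)
```

Only the third frame survives the filter; in a deployed system this pre-filter would reduce the number of frames passed to the classifier.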
6. The site safety dressing recognition and monitoring system based on deep learning according to claim 4, characterized in that the classification module comprises:
a training-set configuration unit, configured to configure a training set comprising safety-helmet pictures, work-clothes pictures and reflective-vest pictures;
a training unit, configured to establish a safe-dressing pre-training model through a ResNet50 network and perform transfer learning;
and a model generation unit, configured to generate a safe-dressing classification model.
7. An electronic device, characterized in that the electronic device comprises at least an input device, an output device, at least one processor, and a memory connected to the processor, wherein the memory stores instructions executable by the processor, the instructions being executed by the processor to enable the processor to perform the site safety dressing recognition and monitoring method based on deep learning according to any one of claims 1 to 3.
8. The electronic device according to claim 7, characterized in that the input device, the output device, the processor and the memory are connected through a bus.
CN202310122346.8A 2023-02-13 2023-02-13 Site safety dressing recognition monitoring method based on deep learning Pending CN116503793A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310122346.8A CN116503793A (en) 2023-02-13 2023-02-13 Site safety dressing recognition monitoring method based on deep learning


Publications (1)

Publication Number Publication Date
CN116503793A true CN116503793A (en) 2023-07-28

Family

ID=87323726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310122346.8A Pending CN116503793A (en) 2023-02-13 2023-02-13 Site safety dressing recognition monitoring method based on deep learning

Country Status (1)

Country Link
CN (1) CN116503793A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117112039A (en) * 2023-08-24 2023-11-24 中邮通建设咨询有限公司 Transmission optimization system and operation method of data center
CN117112039B (en) * 2023-08-24 2024-04-26 中邮通建设咨询有限公司 Transmission optimization system and operation method of data center


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination