CN113610006A - Overtime labor discrimination method based on target detection model - Google Patents


Info

Publication number
CN113610006A
Authority
CN
China
Prior art keywords
video
time
video frame
detection model
judging
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110910812.XA
Other languages
Chinese (zh)
Other versions
CN113610006B (en)
Inventor
范振军
丁剑飞
闫盈盈
赵青
李育斌
刘汪洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC Big Data Research Institute Co Ltd
Original Assignee
CETC Big Data Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC Big Data Research Institute Co Ltd filed Critical CETC Big Data Research Institute Co Ltd
Priority to CN202110910812.XA priority Critical patent/CN113610006B/en
Publication of CN113610006A publication Critical patent/CN113610006A/en
Application granted granted Critical
Publication of CN113610006B publication Critical patent/CN113610006B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an overtime labor discrimination method based on a target detection model, which mainly comprises the following steps: (1) input the surveillance video data to be identified and the start-stop time of work; (2) after the video data are preprocessed, automatically detect person and head targets in the video frame pictures with a target detection model based on a deep convolutional neural network; (3) at the same time, identify the time watermark in the video frame pictures; (4) judge whether overtime labor occurs according to the detection and identification results; (5) according to the judgment result, if overtime labor occurs, intercept and store the corresponding video segment; otherwise repeat steps (2) to (5) until the video data are fully processed. The method can quickly detect small targets in surveillance video scenes and automatically judge whether a production workshop exceeds the prescribed labor duration.

Description

Overtime labor discrimination method based on target detection model
Technical Field
The invention relates to an overtime labor discrimination method based on a target detection model, and belongs to the technical field of computer vision.
Background
In video surveillance scenes such as production workshops and other special premises, changeable image orientations, variable illumination, occlusion of target objects, uncertain distance distributions, and similar factors keep target detection accuracy low, so existing methods cannot adequately meet practical engineering needs. Video caption recognition is an application of image character recognition technology; traditional rule-based character recognition over a fixed region suffers from missed detections and high false-recognition rates, and cannot fully automate the business requirement.
With the wide application of deep learning in computer vision in recent years, detection and recognition models based on deep learning have greatly improved target detection accuracy and recognition quality. To improve small-target and multi-target detection and caption recognition in video surveillance scenes, a video target detection model and a video-frame caption recognition model are trained with deep learning using the idea of transfer learning, and are applied to overtime labor recognition in production workshops, labor premises, and similar settings; the accuracy and speed of the models meet the basic business requirements.
Disclosure of Invention
In order to solve the above technical problems, the invention provides an overtime labor discrimination method based on a target detection model, which mainly uses a deep-learning target detection model combined with the prescribed working-hour standards to comprehensively discriminate the abnormal behavior of overtime labor.
The invention is realized by the following technical scheme.
The invention provides an overtime work distinguishing method based on a target detection model, which comprises the following steps:
video data preprocessing: sampling input video data to be identified, and performing image enhancement and scaling;
video target detection and identification: fine-tune a target detection model based on a deep convolutional neural network with a transfer learning method, use the adjusted model to rapidly detect person and head targets in the preprocessed video data, and train an image OCR model with text detection and recognition algorithms to recognize the time watermark in the video frame pictures;
judging overtime labor and processing the result: compare the targets and time watermark detected in the video frame picture with the set work start-stop time and judge whether the specified threshold beyond the work end time is exceeded, thereby judging whether overtime labor occurs; if so, intercept the video segment of the corresponding period and store it in an abnormal video library; if not, return to the video preprocessing step and continue until processing is finished.
The video data preprocessing is specifically divided into the following steps:
Step 1, input the video and the work start-stop time: input the surveillance video data to be identified as the processing object, and pass in the specified work start-stop time as the reference standard for judging overtime labor;
Step 2, video data preprocessing: down-sample the video frames at a set frame interval, and apply image enhancement and scale transformation to the sampled frames.
The video target detection and identification specifically comprises the following steps:
Step 3, detect targets in the sampled video frames: pre-train on a large-scale image dataset with a YOLO-series algorithm to obtain a target detection model, then fine-tune the model with a human-head dataset so that it can simultaneously detect human bodies and human heads in a video frame picture;
Step 4, identify the time watermark of the sampled video frames: on public Chinese and English datasets, recognize the time watermark in the video frame picture via the EAST text detection algorithm and the CRNN text recognition algorithm.
The overtime labor judgment and result processing specifically comprises the following steps:
Step 5, judge whether overtime labor occurs: count the person and head targets in the video frame picture, and comprehensively judge overtime labor by combining the comparison of the picture time against the work start-stop time with the target detection result;
Step 6, process the judgment result: if overtime labor occurs, intercept the video from that moment until a specified duration elapses or the video ends, and store it in the abnormal video library; otherwise return to step 2 and continue processing until finished.
In the first step, the input surveillance video data to be identified is taken as the processing object, and the work start-stop time is a preset value complying with normal working-hour regulations, denoted [t_s, t_f]. In addition, a tolerance of t_c minutes later than t_f is recorded. Together these serve as the reference standard for judging overtime labor.
In the second step, the video data preprocessing is divided into the following steps:
(2.1) down-sampling the video frame;
(2.2) video frame image enhancement;
and (2.3) carrying out image scale transformation on the video frames.
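The three preprocessing sub-steps above can be sketched in plain Python. The sampling interval, the min-max contrast stretch, and the 640-pixel target side are illustrative assumptions; the patent does not fix a concrete enhancement method or frame rate.

```python
def sample_frame_indices(total_frames, fps, interval_s):
    """Step (2.1): down-sample by keeping one frame per interval_s seconds."""
    step = max(1, round(fps * interval_s))  # frames skipped between samples
    return list(range(0, total_frames, step))

def stretch_contrast(pixels):
    """Step (2.2): min-max contrast stretch of grayscale values to [0, 255]."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return [0 for _ in pixels]          # flat image: nothing to stretch
    return [round(255 * (p - lo) / (hi - lo)) for p in pixels]

def target_size(width, height, short_side=640):
    """Step (2.3): scale so the shorter side matches the detector input."""
    scale = short_side / min(width, height)
    return round(width * scale), round(height * scale)
```

For example, a 25 fps clip sampled once per second keeps frames 0, 25, 50, ..., and a 1920x1080 frame is scaled to 1138x640 before detection.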
In the third step, the target detection model is used for detecting the human and human head targets in the sampled video frame picture, and the specific steps are as follows:
(3.1) training on a COCO image data set based on a YOLO series algorithm to obtain a target detection model;
(3.2) using the idea of transfer learning, fine-tune the target detection model on a mixed Brainwash and NWPU-Crowd dataset to obtain a human-head detection model;
and (3.3) inputting a sampled video frame image, detecting people and head targets in the video frame image by using a head detection model, and storing a non-empty result into an array r.
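A minimal sketch of the post-processing in step (3.3), assuming the fine-tuned detector returns (class, confidence, box) tuples; the class names, confidence threshold, and example boxes are illustrative, not taken from the patent.

```python
CLASSES_OF_INTEREST = {"person", "head"}  # targets the head detector reports

def filter_detections(raw_detections, conf_thresh=0.5):
    """Keep only person/head detections above the confidence threshold."""
    return [d for d in raw_detections
            if d[0] in CLASSES_OF_INTEREST and d[1] >= conf_thresh]

r = []  # array r of non-empty per-frame results, as in step (3.3)
frame_output = [("person", 0.91, (10, 20, 50, 120)),
                ("head", 0.84, (15, 20, 35, 40)),
                ("car", 0.88, (0, 0, 30, 30))]   # other classes are discarded
kept = filter_detections(frame_output)
if kept:                                         # only non-empty results stored
    r.append(kept)
```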
In the fourth step, the time watermark in the sampled video frame picture is identified, and the specific steps are as follows:
(4.1) on public Chinese and English datasets, train an EAST text detection algorithm with a ResNet50_vd backbone network, and extract the time-watermark ROI image from the video frame picture;
(4.2) at the same time, train a CRNN text recognition algorithm with a ResNet34_vd backbone network, and recognize the caption in the ROI image extracted in step (4.1);
(4.3) extract the date and time from the caption recognized in step (4.2) by rule matching and convert them into a standard date-time format, thereby obtaining the time watermark of the video frame picture, denoted t_p.
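Steps (4.1)-(4.3) end with a rule-matching pass over the recognized caption. Below is a sketch of that last step, assuming a caption layout like "2021-08-09 19:23:05"; the actual watermark formats handled by the patent are not specified, so the pattern is an assumption.

```python
import re
from datetime import datetime

def parse_time_watermark(caption):
    """Step (4.3): extract a date-time from recognized caption text and
    normalize it to a datetime object; returns None when no pattern matches."""
    m = re.search(r"(\d{4})[-/.](\d{1,2})[-/.](\d{1,2})\s+(\d{1,2}):(\d{2}):(\d{2})",
                  caption)
    if m is None:
        return None
    year, month, day, hour, minute, second = (int(g) for g in m.groups())
    return datetime(year, month, day, hour, minute, second)  # this is t_p
```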
In the fifth step, whether overtime labor occurs is judged as follows:
(5.1) calculate the difference between the end time t_f of the preset work start-stop interval and the currently recognized picture time t_p, denoted t_0, with t_0 = t_p - t_f;
(5.2) when t_0 is greater than zero but less than the set threshold t_c, or when t_0 is less than zero, ignore the video frame, judge the result as negative, and set the flag bit rf to False;
(5.3) conversely, when t_0 is greater than zero and greater than the set threshold t_c, and the detection result r from step (3.3) is not null, store the current video frame picture time t_p into an array s and set the flag bit rf to True; otherwise set rf to False.
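The judgment rules (5.1)-(5.3) reduce to a small function. Times are Python datetimes and t_c is in minutes, matching the notation above; this is a sketch, not the patent's implementation, and the example end-of-work time is illustrative.

```python
from datetime import datetime

def judge_overtime(t_p, t_f, t_c_minutes, r):
    """Steps (5.1)-(5.3): return the flag bit rf for one video frame.
    t_p: recognized picture time; t_f: work end time; r: detection result."""
    t0 = (t_p - t_f).total_seconds() / 60.0   # (5.1) t_0 = t_p - t_f, minutes
    if t0 <= 0 or t0 <= t_c_minutes:          # (5.2) before or within tolerance
        return False
    return len(r) > 0                         # (5.3) overtime only if targets found

t_f = datetime(2021, 8, 9, 18, 0)             # illustrative end-of-work time
```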
In the sixth step, the judgment result processing is divided into the following steps:
(6.1) according to the judgment result of the fifth step, if rf is True, save the current video frame and then process the next video frame; otherwise process the next video frame directly;
(6.2) judge whether the current frame is the last of the video; if so, the video has been fully processed, so save all results and exit; otherwise continue processing the video data until finished.
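Step (6.1) saves flagged frames and, per the summary above, intercepts the video segment of the corresponding period. A sketch that groups consecutive rf=True frames into (start, end) index segments for the abnormal video library; the grouping strategy is an assumption, since the patent only states that the segment is intercepted and stored.

```python
def flagged_segments(rf_flags):
    """Group consecutive True flags into (first, last) frame-index pairs,
    mimicking the interception of abnormal video clips in step (6.1)."""
    segments, start = [], None
    for i, rf in enumerate(rf_flags):
        if rf and start is None:
            start = i                          # a flagged run begins
        elif not rf and start is not None:
            segments.append((start, i - 1))    # the run has ended
            start = None
    if start is not None:                      # run reaches the last frame (6.2)
        segments.append((start, len(rf_flags) - 1))
    return segments
```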
The beneficial effects of the invention are: small targets in surveillance video scenes can be detected quickly, and exceeding the prescribed labor duration in a production workshop can be judged automatically, thereby providing an intelligent solution for smart workshops and similar fields.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The technical solution of the present invention is further described below, but the scope of the claimed invention is not limited to this description.
Example 1
As shown in fig. 1, the method for judging overtime work based on the target detection model includes the following main processes:
video data preprocessing: sampling input video data to be identified and carrying out preprocessing such as image enhancement, zooming and the like;
video target detection and identification: fine-tune a target detection model based on a deep convolutional neural network with a transfer learning method to rapidly detect person and head targets, and train an image OCR model with text detection and recognition algorithms to recognize the time watermark in the video frame pictures;
judging overtime labor and processing the result: compare the targets and time watermark detected in the video frame picture with the set work start-stop time and judge whether the specified threshold beyond the work end time is exceeded, thereby judging whether overtime labor occurs; if so, intercept the video segment of the corresponding period and store it in an abnormal video library; if not, return to the video preprocessing step and continue until processing is finished.
Specifically, the method comprises the following steps:
Step 1, input the video and the work start-stop time: pass in the surveillance video data to be identified as the processing object, in mp4, avi, or a similar format, and pass in the specified work start-stop time as the reference standard for judging overtime labor;
Step 2, video data preprocessing: because the input raw video data vary in format and frame rate, they contain many redundant frames; the video frames are therefore down-sampled at a set frame interval to reduce the processing load, and preprocessing operations such as image enhancement and scale transformation are applied to the sampled frames;
Step 3, detect targets in the sampled video frames: using the idea of transfer learning, pre-train a YOLO-series algorithm on a large-scale image dataset to obtain a target detection model, then fine-tune the model with a human-head dataset so that it can simultaneously detect human bodies and human heads in a video frame picture;
Step 4, identify the time watermark of the sampled video frames: on public Chinese and English datasets, recognize the time watermark in the video frame picture with an EAST text detection algorithm built on a ResNet50_vd backbone network together with a CRNN text recognition algorithm built on a ResNet34_vd backbone network;
Step 5, judge whether overtime labor occurs: count the person and head targets in the video frame picture, and comprehensively judge overtime labor by combining the comparison of the picture time against the work start-stop time with the target detection result;
Step 6, process the judgment result: if overtime labor occurs, intercept the video from that moment until a specified duration elapses or the video ends, and store it in the abnormal video library; otherwise return to step 2 and continue processing until finished.
The input surveillance video data is the processing object of the method, and the work start-stop time is a preset value complying with normal working-hour regulations, denoted [t_s, t_f]; in addition, a tolerance of t_c minutes later than t_f is recorded, and together these serve as the reference standard for judging overtime labor.
The video data preprocessing specifically comprises the following steps:
(2.1) down-sampling the video frames to reduce the amount of processing;
(2.2) enhancing the video frame image to improve the image contrast;
and (2.3) carrying out image scale transformation on the video frames to adapt to the deep learning detection model.
The detection of person and head targets in sampled video frame pictures with the target detection model comprises the following steps:
(3.1) training on a COCO image data set based on a YOLO series algorithm to obtain a target detection model;
(3.2) using the idea of transfer learning, fine-tune the pre-trained model of (3.1) on a mixed Brainwash and NWPU-Crowd dataset to obtain a human-head detection model;
(3.3) input a sampled video frame image, run inference with the detection model to detect the person and head targets in the frame, and store any non-empty result into an array r.
The identification of the time watermark in sampled video frame pictures comprises the following steps:
(4.1) on public Chinese and English datasets, train an EAST text detection algorithm with a ResNet50_vd backbone network, and extract the time-watermark ROI image from the video frame picture;
(4.2) at the same time, train a CRNN text recognition algorithm with a ResNet34_vd backbone network, and recognize the caption in the ROI image extracted in step (4.1);
(4.3) extract the date and time from the caption recognized in step (4.2) by rule matching and convert them into a standard date-time format, thereby obtaining the time watermark of the video frame picture, denoted t_p.
The judgment of whether overtime labor occurs comprises the following steps:
(5.1) calculate the difference between the end time t_f of the preset work start-stop interval and the currently recognized picture time t_p, denoted t_0, with t_0 = t_p - t_f;
(5.2) when t_0 is greater than zero but less than the set threshold t_c, or when t_0 is less than zero, ignore the video frame, judge the result as negative, and set the flag bit rf to False;
(5.3) conversely, when t_0 is greater than zero and greater than the set threshold t_c, and the detection result r from step (3.3) is not null, store the current video frame picture time t_p into an array s and set the flag bit rf to True; otherwise set rf to False.
The sixth step, processing the judgment result, comprises:
(6.1) according to the judgment result of the fifth step, if rf is True, save the current video frame and then process the next video frame; otherwise process the next video frame directly;
(6.2) judge whether the current frame is the last of the video; if so, the video has been fully processed, so save all results and exit; otherwise continue processing the video data until finished.
Example 2
As shown in figure 1, the overtime labor discrimination method based on a target detection model proceeds as follows: first, the surveillance video to be identified and the work start-stop time are input; the video is then down-sampled at an interval frame rate and preprocessed with operations such as image enhancement and scale transformation; next, person and head targets are detected in the video frame pictures with the trained target detection model, while the time watermark is identified to obtain the current frame time; finally, the discrimination rules decide whether overtime labor is present, and any overtime segment is intercepted and stored in the abnormal video library until the video is fully processed.
The method specifically comprises the following steps:
(1) input video and start-stop time of work:
input the surveillance video data as the processing object of the method, in mp4 or avi format;
meanwhile, the specified work start-stop time is passed in, denoted [t_s, t_f]; in addition, a tolerance of t_c minutes later than t_f is recorded, and together these serve as the reference standard for judging overtime labor.
(2) Video data preprocessing:
the input raw video data suffer from redundancy, varying illumination, non-uniform size, and similar factors, so the following preprocessing operations are required:
step 2.1, down-sampling the video frame to reduce the processing amount;
step 2.2, enhancing the video frame image to improve the image contrast;
and step 2.3, apply scale transformation to the video frame image so that it better fits the deep-learning detection model.
(3) Detecting a sampling video frame target:
the method for detecting the human and human head targets in the frame picture of the sampled video by using the target detection model comprises the following specific steps:
step 3.1, training on a COCO image data set based on a YOLO series algorithm to obtain a target detection model;
step 3.2, using the idea of transfer learning, fine-tune the pre-trained model of step 3.1 on a mixed Brainwash and NWPU-Crowd dataset to obtain a human-head detection model;
step 3.3, input a sampled video frame image, run inference with the detection model to detect the person and head targets in the frame, and store any non-empty result into an array r.
(4) Identifying a sampled video frame temporal watermark:
identifying a temporal watermark in a video frame picture, comprising:
step 4.1, on public Chinese and English datasets, train an EAST text detection algorithm with a ResNet50_vd backbone network, and extract the time-watermark ROI image from the video frame picture;
step 4.2, at the same time, train a CRNN text recognition algorithm with a ResNet34_vd backbone network, and recognize the caption in the ROI image extracted in step 4.1;
step 4.3, extract the date and time from the caption recognized in step 4.2 by rule matching and convert them into a standard date-time format, thereby obtaining the time watermark of the video frame picture, denoted t_p.
(5) Judging whether overtime work is performed:
judging whether overtime work is carried out or not, comprising the following steps:
step 5.1, calculating the end time t of the preset work start-stop timefWith the currently recognized picture time tpIs marked as t0The calculation formula is t0=tp-tf
Step 5.2, when t is0Greater than zero but less than setThreshold value t cWhen, or t0When the video frame is less than zero, ignoring the video frame, judging whether the video frame is negative, and setting a mark bit rf as False;
step 5.3, otherwise, when t is0Is greater than zero and greater than a set threshold tcIf the detection result r in the step 3.3 is not null, the current video frame picture time t is determinedpAnd storing the data into an array s, judging that the flag bit rf is True if the data is True, and otherwise, judging that the flag bit rf is False if the data is not True.
(6) And (4) processing a judgment result:
the judgment result processing link comprises the following steps:
step 6.1, according to the judgment result of step 5, if rf is True, save the current video frame and then process the next video frame; otherwise process the next video frame directly;
step 6.2, judge whether the current frame is the last of the video; if so, the video has been fully processed, so save all results and exit; otherwise continue processing the video data until finished.
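Putting the steps of this example together, the per-frame loop can be sketched as below. The detector and OCR outputs are stubbed with precomputed (time, detections) pairs, since running the actual YOLO/EAST/CRNN models is outside this sketch; the times and 30-minute threshold are illustrative.

```python
from datetime import datetime, timedelta

def run_pipeline(frames, t_f, t_c_minutes):
    """frames: iterable of (t_p, detections) per sampled frame.
    Returns the array s of abnormal frame times (steps 5 and 6)."""
    s = []
    for t_p, dets in frames:
        t0 = (t_p - t_f).total_seconds() / 60.0     # step 5.1
        if t0 > 0 and t0 > t_c_minutes and dets:    # steps 5.2-5.3
            s.append(t_p)                           # step 6.1: store flagged frame
    return s                                        # step 6.2: video finished

t_f = datetime(2021, 8, 9, 18, 0)                   # illustrative work end time
frames = [
    (t_f + timedelta(minutes=10), ["person"]),      # within tolerance: ignored
    (t_f + timedelta(minutes=45), ["person", "head"]),
    (t_f + timedelta(minutes=50), []),              # past threshold, no targets
]
abnormal = run_pipeline(frames, t_f, t_c_minutes=30)
```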
In summary, aiming at the problems of small targets, occlusion, and the like in workshops and other special premises, the invention trains a target detection model based on a deep convolutional neural network under the idea of transfer learning to realize target detection in surveillance video for such scenes, thereby obtaining the overtime labor discrimination method based on a target detection model.

Claims (10)

1. An overtime labor discrimination method based on a target detection model, characterized in that the method comprises the following steps:
video data preprocessing: sampling input video data to be identified, and performing image enhancement and scaling;
video target detection and identification: fine-tune a target detection model based on a deep convolutional neural network with a transfer learning method, use the adjusted model to rapidly detect person and head targets in the preprocessed video data, and train an image OCR model with text detection and recognition algorithms to recognize the time watermark in the video frame pictures;
judging overtime labor and processing the result: compare the targets and time watermark detected in the video frame picture with the set work start-stop time and judge whether the specified threshold beyond the work end time is exceeded, thereby judging whether overtime labor occurs; if so, intercept the video segment of the corresponding period and store it in an abnormal video library; if not, return to the video preprocessing step and continue until processing is finished.
2. The method for judging overtime work based on object detection model according to claim 1, characterized in that: the video data preprocessing is specifically divided into the following steps:
firstly, inputting the video and the work start-stop time: the surveillance video data to be identified is input as the processing object, and the specified work start-stop time is passed in as the reference standard for judging overtime labor;
secondly, preprocessing the video data: the video frames are down-sampled at a set frame interval, and image enhancement and scale transformation are applied to the sampled frames.
3. The method for judging overtime work based on object detection model according to claim 1, characterized in that: the video target detection and identification specifically comprises the following steps:
thirdly, detecting targets in the sampled video frames: a target detection model is pre-trained on a large-scale image dataset with a YOLO-series algorithm and then fine-tuned with a human-head dataset, so that it can simultaneously detect human bodies and human heads in a video frame picture;
fourthly, identifying the time watermark of the sampled video frames: on public Chinese and English datasets, the time watermark in the video frame picture is recognized via the EAST text detection algorithm and the CRNN text recognition algorithm.
4. The method for judging overtime work based on object detection model according to claim 1, characterized in that: the overtime labor judgment and result processing specifically comprises the following steps:
fifthly, judging whether overtime labor occurs: the person and head targets in the video frame picture are counted, and overtime labor is comprehensively judged by combining the comparison of the picture time against the work start-stop time with the target detection result;
sixthly, processing the judgment result: if overtime labor occurs, the video is intercepted from that moment until a specified duration elapses or the video ends and is stored in the abnormal video library; otherwise processing returns to the second step and continues until finished.
5. The overtime labor discrimination method based on a target detection model according to claim 2, characterized in that: in the first step, the input surveillance video data to be identified is taken as the processing object, and the work start-stop time is a preset value complying with normal working-hour regulations, denoted [t_s, t_f]; in addition, a tolerance of t_c minutes later than t_f is recorded, and together these serve as the reference standard for judging overtime labor.
6. The method for judging overtime work based on object detection model according to claim 2, characterized in that: in the second step, the video data preprocessing is divided into the following steps:
(2.1) down-sampling the video frame;
(2.2) video frame image enhancement;
and (2.3) carrying out image scale transformation on the video frames.
7. The method for judging overtime work based on object detection model according to claim 3, characterized in that: in the third step, the target detection model is used for detecting the human and human head targets in the sampled video frame picture, and the specific steps are as follows:
(3.1) training on a COCO image data set based on a YOLO series algorithm to obtain a target detection model;
(3.2) using the idea of transfer learning, fine-tune the target detection model on a mixed Brainwash and NWPU-Crowd dataset to obtain a human-head detection model;
and (3.3) inputting a sampled video frame image, detecting people and head targets in the video frame image by using a head detection model, and storing a non-empty result into an array r.
8. The method for judging overtime work based on object detection model according to claim 3, characterized in that: in the fourth step, the time watermark in the sampled video frame picture is identified, and the specific steps are as follows:
(4.1) training an EAST text detection algorithm based on a ResNet50_vd backbone network on a public Chinese-and-English data set, and extracting the time-watermark ROI image from the video frame picture;
(4.2) meanwhile, training a CRNN text recognition algorithm based on a ResNet34_vd backbone network, and recognizing the text in the ROI image extracted in step (4.1);
(4.3) extracting the date and time from the text recognized in step (4.2) by rule matching, and converting them into a standard date-time format, thereby obtaining the time watermark of the video frame picture, denoted tp.
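The rule-matching conversion in step (4.3) can be sketched with regular expressions. The two concrete watermark formats below (an ISO-style timestamp and a Chinese-style one) are illustrative assumptions, since the patent does not fix a watermark format; the function name is hypothetical.

```python
import re
from datetime import datetime

# Illustrative watermark formats (assumptions, not specified by the patent):
# "2021-08-09 18:35:12" and "2021年08月09日 18:35:12".
_PATTERNS = [
    re.compile(r"(\d{4})-(\d{1,2})-(\d{1,2})\s+(\d{1,2}):(\d{2}):(\d{2})"),
    re.compile(r"(\d{4})年(\d{1,2})月(\d{1,2})日\s*(\d{1,2}):(\d{2}):(\d{2})"),
]

def parse_time_watermark(text):
    """Sketch of step (4.3): extract date and time from recognized watermark
    text by rule matching; return a standard datetime (tp), or None."""
    for pat in _PATTERNS:
        m = pat.search(text)
        if m:
            return datetime(*map(int, m.groups()))
    return None
```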
9. The method for judging overtime work based on an object detection model according to claim 4, characterized in that: in the fifth step, whether overtime labor has occurred is judged as follows:
(5.1) calculating the difference between the end time tf of the preset work period and the currently recognized picture time tp, denoted t0, with the formula t0 = tp - tf;
(5.2) when t0 is greater than zero but less than the set threshold tc, or when t0 is less than zero, ignoring the video frame, judging the result negative, and setting the flag bit rf to False;
(5.3) conversely, when t0 is greater than zero and exceeds the set threshold tc: if the detection result r from step (3.3) is not empty, storing the current video frame picture time tp in an array s and judging the flag bit rf to be True; otherwise judging rf to be False.
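Steps (5.1)-(5.3) amount to a small decision function. The sketch below assumes tp and tf are parsed datetimes and that the threshold tc is given in minutes; the function name and signature are illustrative, and `detection_nonempty` stands in for "the detection result r of step (3.3) is not empty".

```python
from datetime import timedelta

def judge_overtime(t_p, t_f, t_c_minutes, detection_nonempty, s):
    """Sketch of step 5: decide whether frame time t_p indicates overtime.
    Appends t_p to the array s on a positive judgment; returns flag bit rf."""
    t0 = t_p - t_f                                    # (5.1) t0 = tp - tf
    if t0 <= timedelta(0) or t0 < timedelta(minutes=t_c_minutes):
        return False                                  # (5.2) ignore frame, rf = False
    if detection_nonempty:                            # (5.3) t0 > tc and people present
        s.append(t_p)
        return True
    return False
```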
10. The method for judging overtime work based on an object detection model according to claim 4, characterized in that: in the sixth step, the judgment-result processing stage is divided into the following steps:
(6.1) according to the judgment result of the fifth step, if rf is True, saving the current video frame and then processing the next video frame; otherwise processing the next video frame directly;
and (6.2) judging whether the current frame is the last frame of the video; if so, the video has been fully processed, so all results are saved and processing exits; if not, continuing to process the video data until processing is complete.
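Steps (6.1)-(6.2) form the outer loop over the sampled frames. The minimal sketch below is an assumption about how the loop could be structured: `judge` stands in for the whole of step five, and the function returns the indices of frames judged positive rather than writing files.

```python
def process_results(frames, judge):
    """Sketch of step 6: iterate over sampled frames, keeping those whose
    flag bit rf is True, and finish after the last frame."""
    saved = []
    for idx, frame in enumerate(frames):
        if judge(frame):             # (6.1) rf True: save this frame
            saved.append(idx)
        # otherwise move straight on to the next frame
    return saved                     # (6.2) last frame reached: all results saved
```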
CN202110910812.XA 2021-08-09 2021-08-09 Overtime labor discrimination method based on target detection model Active CN113610006B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110910812.XA CN113610006B (en) 2021-08-09 2021-08-09 Overtime labor discrimination method based on target detection model


Publications (2)

Publication Number Publication Date
CN113610006A true CN113610006A (en) 2021-11-05
CN113610006B CN113610006B (en) 2023-09-08

Family

ID=78307806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110910812.XA Active CN113610006B (en) 2021-08-09 2021-08-09 Overtime labor discrimination method based on target detection model

Country Status (1)

Country Link
CN (1) CN113610006B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007076625A1 (en) * 2005-12-30 2007-07-12 Mingde Yin Intelligent data processing system and method for managing performance
US20090252370A1 (en) * 2005-09-09 2009-10-08 Justin Picard Video watermark detection
CN108877357A (en) * 2018-06-21 2018-11-23 广东小天才科技有限公司 A kind of exchange method and private tutor's machine based on private tutor's machine
CN111242829A (en) * 2020-01-19 2020-06-05 苏州浪潮智能科技有限公司 Watermark extraction method, device, equipment and storage medium
CN111274881A (en) * 2020-01-10 2020-06-12 中国平安财产保险股份有限公司 Driving safety monitoring method and device, computer equipment and storage medium
CN111401824A (en) * 2018-12-14 2020-07-10 浙江宇视科技有限公司 Method and device for calculating working hours
CN111523510A (en) * 2020-05-08 2020-08-11 国家邮政局邮政业安全中心 Behavior recognition method, behavior recognition device, behavior recognition system, electronic equipment and storage medium
CN111581433A (en) * 2020-05-18 2020-08-25 Oppo广东移动通信有限公司 Video processing method and device, electronic equipment and computer readable medium
CN111653023A (en) * 2020-05-22 2020-09-11 深圳欧依云科技有限公司 Intelligent factory supervision method
CN111932392A (en) * 2019-12-06 2020-11-13 南京熊猫电子股份有限公司 Standard operation guidance system and operation method for intelligent manufacturing and processing production
US20210224998A1 (en) * 2018-11-23 2021-07-22 Tencent Technology (Shenzhen) Company Limited Image recognition method, apparatus, and system and storage medium


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LU CHEN, ET AL.: "FAWA: Fast Adversarial Watermark Attack on Optical Character Recognition (OCR) Systems", Machine Learning and Knowledge Discovery in Databases, pages 547-563 *
NOOR D. AL-SHAKARCHY, ET AL.: "Detecting abnormal movement of driver's head based on spatial-temporal features of video using deep neural network DNN", Indonesian Journal of Electrical Engineering and Computer Science, vol. 19, no. 1, pages 1-4 *
TAILAI WEN, ET AL.: "Time series anomaly detection using convolutional neural networks and transfer learning", arXiv:1905.13628v1, pages 1-8 *
LUO DEHUAN: "Analysis of Construction Workers' Labor Status Based on Computer Vision", China Masters' Theses Full-text Database (Engineering Science and Technology I), no. 1, pages 026-61 *

Also Published As

Publication number Publication date
CN113610006B (en) 2023-09-08

Similar Documents

Publication Publication Date Title
CN108710865B (en) Driver abnormal behavior detection method based on neural network
Dehghan et al. View independent vehicle make, model and color recognition using convolutional neural network
CN111611905B (en) Visible light and infrared fused target identification method
CN110472467A (en) The detection method for transport hub critical object based on YOLO v3
US10679067B2 (en) Method for detecting violent incident in video based on hypergraph transition
CN107977639B (en) Face definition judgment method
CN111582129A (en) Real-time monitoring and alarming method and device for working state of shield machine driver
CN109359697A (en) Graph image recognition methods and inspection system used in a kind of power equipment inspection
CN104506819A (en) Multi-camera real-time linkage mutual feedback tracing system and method
CN110096945B (en) Indoor monitoring video key frame real-time extraction method based on machine learning
CN102314615A (en) Substation inspection robot-based circuit breaker state template-matching identification method
CN112818951A (en) Ticket identification method
CN109034247B (en) Tracking algorithm-based higher-purity face recognition sample extraction method
CN114973207B (en) Road sign identification method based on target detection
CN109993130A (en) One kind being based on depth image dynamic sign language semantics recognition system and method
CN112183219A (en) Public safety video monitoring method and system based on face recognition
CN113487570A (en) High-temperature continuous casting billet surface defect detection method based on improved yolov5x network model
CN113610006B (en) Overtime labor discrimination method based on target detection model
CN112818970A (en) General detection method for steel coil code spraying identification
CN112669269A (en) Pipeline defect classification and classification method and system based on image recognition
CN111950452A (en) Face recognition method
CN112069898A (en) Method and device for recognizing human face group attribute based on transfer learning
CN109145758A (en) A kind of recognizer of the face based on video monitoring
CN114724091A (en) Method and device for identifying foreign matters on transmission line wire
CN114529894A (en) Rapid scene text detection method fusing hole convolution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant