CN110569722A - Visual analysis-based constructor dressing standard detection method and device - Google Patents

Visual analysis-based constructor dressing standard detection method and device

Info

Publication number
CN110569722A
CN110569722A
Authority
CN
China
Prior art keywords
constructor
safety helmet
unit
pixels
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910708238.2A
Other languages
Chinese (zh)
Inventor
李学钧
戴相龙
蒋勇
何成虎
杨政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
JIANGSU HAOHAN INFORMATION TECHNOLOGY Co Ltd
Original Assignee
JIANGSU HAOHAN INFORMATION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by JIANGSU HAOHAN INFORMATION TECHNOLOGY Co Ltd filed Critical JIANGSU HAOHAN INFORMATION TECHNOLOGY Co Ltd
Priority to CN201910708238.2A priority Critical patent/CN110569722A/en
Publication of CN110569722A publication Critical patent/CN110569722A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a device for detecting the dressing standard of constructors based on visual analysis. The method comprises the following steps: acquiring image samples of constructors, intercepting local images of the constructors' heads, and dividing them into two classes, wearing a safety helmet and not wearing a safety helmet; training a classification model on the sample set with ResNet to obtain a safety helmet wearing classification model; collecting a real-time video stream of the construction site and extracting key frame images; analyzing each key frame image with the OpenPose human posture estimation algorithm and judging whether human key points are detected; if so, locating the head region and the arm region, calling the safety helmet wearing classification model to classify the head region and judge whether the constructor wears a safety helmet, and judging whether short sleeves are worn by applying a skin color model to the arm region; otherwise, ending the detection. The invention can detect non-standard constructor behavior around the clock, thereby realizing safety supervision of the construction site and improving construction efficiency and safety.

Description

Visual analysis-based constructor dressing standard detection method and device
Technical Field
The invention relates to the field of safe construction, and in particular to a method and a device for detecting the dressing standard of constructors based on visual analysis.
Background
Safe construction is the top priority in electric power construction, and implementing on-site safety measures is critical. Non-standard behavior by constructors is one of the main sources of construction safety risk; for example, not wearing a safety helmet or wearing short sleeves both create construction safety risks. Safety management in existing electric power construction relies mainly on manual supervision, which easily leaves supervision gaps, reduces construction efficiency, and creates safety problems.
Disclosure of the Invention
The technical problem to be solved by the invention is to provide a visual analysis-based method and device for detecting the dressing standard of constructors that can detect non-standard constructor behavior around the clock, thereby realizing safety supervision of the construction site and improving construction efficiency and safety.
The technical scheme adopted by the invention to solve this technical problem is as follows: a constructor dressing standard detection method based on visual analysis is constructed, comprising the following steps:
A) Acquiring image samples of constructors to form a sample set, intercepting local images of the constructors' heads, and dividing the local images into two classes: wearing a safety helmet and not wearing a safety helmet;
B) Training a classification model on the sample set by using ResNet to obtain a safety helmet wearing classification model;
C) Collecting a real-time video stream of a monitoring camera on a construction site, and extracting to obtain a key frame image;
D) Analyzing the key frame image with the OpenPose human posture estimation algorithm, judging whether a human key point is detected, and if so, executing step E); otherwise, executing step H);
E) Positioning a head region and an arm region of a human body;
F) Calling the safety helmet wearing classification model to classify and identify the head area, and judging whether the constructor wears a safety helmet or not;
G) Judging whether the short sleeves are worn or not by adopting a skin color model in the arm area, and executing the step H);
H) Ending the detection.
In the method for detecting a dressing standard of a constructor based on visual analysis, the step B) further comprises:
B1) Preprocessing the image samples in the sample set, wherein the preprocessing at least comprises random rotation, translation, scaling, and flipping;
B2) Constructing a ResNet with a set number of layers, and downloading a model pre-trained on the MSCOCO data set;
B3) Loading the image samples onto the pre-trained model by transfer learning, outputting the accuracy every set number of steps until an optimal result is achieved, and storing the safety helmet wearing classification model.
In the visual analysis-based method for detecting a dressing standard of a constructor according to the present invention, the step G) further includes:
G1) Mapping the color space of the arm region from RGB to YCrCb space, wherein the conversion is as follows:
Y=0.299R+0.587G+0.114B
Cr=-0.147R-0.289G+0.436B
Cb=0.615R-0.515G-0.100B
Wherein Y is the luminance, Cr is the red-difference chroma component, Cb is the blue-difference chroma component, R is the red component, G is the green component, and B is the blue component;
G2) Judging the number of pixels of which the pixel values meet the following formula in the arm area: 133< Cr <173, 77< Cb < 127;
G3) If the proportion of the number of the pixels to the total number of the pixels is smaller than a first set proportion, judging that the constructor does not wear the short sleeves; and if the proportion of the number of the pixels to the total number of the pixels is larger than a second set proportion, judging that the dress of the constructor is not standard.
In the method for detecting the dressing standard of constructors based on visual analysis, the set number of layers is 50, and the set number of steps is 50.
In the method for detecting the dressing standard of constructors based on visual analysis, the first set proportion is 1/4, and the second set proportion is 1/2.
The invention also relates to a device for implementing the above method for detecting the dressing standard of constructors based on visual analysis, which comprises:
An image sample acquisition unit: used for acquiring image samples of constructors to form a sample set, intercepting local images of the constructors' heads, and dividing the local images into two classes: wearing a safety helmet and not wearing a safety helmet;
A model training unit: used for training a classification model on the sample set with ResNet to obtain a safety helmet wearing classification model;
A collection and extraction unit: used for collecting the real-time video stream of a monitoring camera on the construction site and extracting key frame images;
A posture estimation unit: used for analyzing the key frame image with the OpenPose human posture estimation algorithm and judging whether human key points are detected;
A positioning unit: used for locating the head region and the arm region of a human body;
A classification and recognition unit: used for calling the safety helmet wearing classification model to classify the head region and judging whether the constructor wears a safety helmet;
A dressing determination unit: used for judging whether short sleeves are worn by applying a skin color model to the arm region;
An ending unit: used for ending the detection.
In the apparatus of the present invention, the model training unit further includes:
A preprocessing module: used for preprocessing the image samples in the sample set, wherein the preprocessing at least comprises random rotation, translation, scaling, and flipping;
A pre-trained model downloading module: used for constructing a ResNet with a set number of layers and downloading a model pre-trained on the MSCOCO data set;
An image sample loading module: used for loading the image samples onto the pre-trained model by transfer learning, outputting the accuracy every set number of steps until an optimal result is reached, and storing the safety helmet wearing classification model.
In the apparatus of the present invention, the dressing determination unit further includes:
A mapping module: used for mapping the color space of the arm region from RGB to YCrCb space; the conversion is as follows:
Y=0.299R+0.587G+0.114B
Cr=-0.147R-0.289G+0.436B
Cb=0.615R-0.515G-0.100B
Wherein Y is the luminance, Cr is the red-difference chroma component, Cb is the blue-difference chroma component, R is the red component, G is the green component, and B is the blue component;
A pixel number judgment module: used for counting the number of pixels in the arm region whose pixel values satisfy 133 < Cr < 173 and 77 < Cb < 127;
A dressing standard judgment module: used for judging that the constructor is not wearing short sleeves if the proportion of these pixels to the total number of pixels is smaller than a first set proportion, and judging that the constructor's dressing is not standard if the proportion is larger than a second set proportion.
In the device of the present invention, the set number of layers is 50, and the set number of steps is 50.
In the device of the present invention, the first set proportion is 1/4, and the second set proportion is 1/2.
The method and device for detecting the dressing standard of constructors based on visual analysis have the following beneficial effects: local images of the constructors' heads are divided into two classes, wearing a safety helmet and not wearing a safety helmet, and a classification model is trained with ResNet to obtain a safety helmet wearing classification model; key frame images are extracted from the collected real-time video stream and analyzed with the OpenPose human posture estimation algorithm, and the detected human key points are used to locate the head region and the arm region of the human body; the safety helmet wearing classification model is then called to classify the head region and judge whether the constructor wears a safety helmet, and a skin color model is applied to the arm region to judge whether short sleeves are worn. The invention can detect non-standard constructor behavior around the clock, thereby realizing safety supervision of the construction site and improving construction efficiency and safety.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. The following drawings illustrate only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of the method in one embodiment of the visual analysis-based constructor dressing standard detection method and device according to the present invention;
FIG. 2 is a specific flowchart of training a classification model by using ResNet to obtain a classification model for wearing a safety helmet in the embodiment;
FIG. 3 is a detailed flowchart of determining whether to wear the short sleeves or not by using a skin color model in the arm area in the embodiment;
FIG. 4 is a schematic structural diagram of the device in the embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present invention; all other embodiments obtained by those skilled in the art based on these embodiments without creative effort fall within the protection scope of the present invention.
In the embodiment of the method and device for detecting the dressing standard of constructors based on visual analysis, the flow chart of the method is shown in FIG. 1. As shown in FIG. 1, the method comprises the following steps:
Step S01, collecting image samples of constructors to form a sample set, intercepting partial images of the constructors' heads, and dividing the partial images into two classes, wearing a safety helmet and not wearing a safety helmet: in this step, image samples of constructors (i.e., head sample pictures of constructors) are collected to form the sample set, local images of the constructors' heads are captured, and the local images are divided into two classes: one class wears a safety helmet, and the other does not wear a safety helmet.
Step S02, training a classification model with ResNet on the sample set to obtain a safety helmet wearing classification model: in this step, a classification model is trained with ResNet on the sample set collected in step S01 to obtain the safety helmet wearing classification model Mh.
Step S03, collecting the real-time video stream of a monitoring camera on the construction site and extracting key frame images: in this step, the real-time video stream of a monitoring camera arranged on the construction site is collected, and a key frame image I is extracted from the stream.
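The patent does not specify how key frames are selected from the stream. A minimal sketch of step S03, assuming OpenCV and a fixed sampling interval (the stream URL and the interval are placeholders, not values from the patent):

```python
import cv2

def extract_key_frames(stream_url: str, every_n: int = 25):
    """Yield roughly one frame per second from a 25 fps surveillance stream."""
    cap = cv2.VideoCapture(stream_url)
    index = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:
            yield frame  # one BGR "key frame" image I
        index += 1
    cap.release()

# Usage sketch: for frame in extract_key_frames("rtsp://site-camera/stream"): ...
```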
Step S04, analyzing the key frame image with the OpenPose human posture estimation algorithm and judging whether human key points are detected: in this step, the key frame image I obtained in step S03 is analyzed with the OpenPose human posture estimation algorithm to locate human key points; if key points are detected, step S05 is executed; otherwise, step S08 is executed.
Step S05, locating the head region and the arm region of the human body: in this step, the human key points detected in step S04 are used to locate the head region and the arm region of the human body.
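A minimal sketch of step S05, assuming the OpenPose key points are already available as one (18, 3) array per person in the COCO-18 layout (x, y, confidence; nose = 0, neck = 1, shoulders = 2/5, elbows = 3/6, wrists = 4/7, ears = 16/17). The padding values and confidence threshold are illustrative assumptions, not values from the patent:

```python
import numpy as np

COCO = {"nose": 0, "neck": 1, "r_sho": 2, "r_elb": 3, "r_wri": 4,
        "l_sho": 5, "l_elb": 6, "l_wri": 7, "r_ear": 16, "l_ear": 17}

def _box(points, pad, img_w, img_h):
    """Axis-aligned box around the confident key points, expanded by `pad` pixels."""
    pts = points[points[:, 2] > 0.1][:, :2]
    if len(pts) == 0:
        return None
    x0, y0 = pts.min(axis=0) - pad
    x1, y1 = pts.max(axis=0) + pad
    return (max(0, int(x0)), max(0, int(y0)), min(img_w, int(x1)), min(img_h, int(y1)))

def head_and_arm_boxes(person_kpts, img_w, img_h):
    """Derive the head region and the arm region from one person's key points."""
    head_ids = [COCO["nose"], COCO["neck"], COCO["r_ear"], COCO["l_ear"]]
    arm_ids = [COCO["r_sho"], COCO["r_elb"], COCO["r_wri"],
               COCO["l_sho"], COCO["l_elb"], COCO["l_wri"]]
    head = _box(person_kpts[head_ids], pad=40, img_w=img_w, img_h=img_h)  # leave room for a helmet
    arms = _box(person_kpts[arm_ids], pad=20, img_w=img_w, img_h=img_h)
    return head, arms
```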
Step S06, calling the safety helmet wearing classification model to classify the head region and judging whether the constructor wears a safety helmet: in this step, the safety helmet wearing classification model Mh trained in step S02 is called to classify the head region and judge whether the constructor wears a safety helmet.
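A minimal inference sketch for step S06, assuming the classifier Mh was trained and saved with PyTorch (see the training sketch under step S23 below) and that class index 0 means "safety helmet worn"; the label order and input size are assumptions:

```python
import torch
from torchvision import transforms
from PIL import Image

prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def wears_helmet(model, head_crop_bgr) -> bool:
    """Classify one head crop; True if the 'helmet worn' class scores highest."""
    model.eval()
    rgb = head_crop_bgr[:, :, ::-1].copy()       # OpenCV BGR -> RGB, contiguous copy
    x = prep(Image.fromarray(rgb)).unsqueeze(0)  # add batch dimension
    with torch.no_grad():
        logits = model(x)
    return int(logits.argmax(dim=1)) == 0        # assumed: class 0 = helmet worn
```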
Step S07, judging whether short sleeves are worn by applying a skin color model to the arm region: in this step, a skin color model is applied to the local image of the arm region to judge whether the constructor is wearing short sleeves.
Step S08, ending this detection: in this step, the current detection is ended.
The visual analysis-based constructor dressing standard detection method frees supervisors from constant manual watching and can detect non-standard constructor behavior around the clock, thereby realizing safety supervision of the construction site and improving construction efficiency and safety.
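Tying steps S03 to S08 together, a minimal end-to-end sketch. It reuses extract_key_frames, head_and_arm_boxes, and wears_helmet from the sketches above and the skin_ratio function sketched later under step S73; openpose_keypoints (a wrapper around the OpenPose runtime returning one (18, 3) array per person) and report (an alarm hook) are hypothetical placeholders:

```python
def inspect_stream(stream_url, helmet_model):
    for frame in extract_key_frames(stream_url):               # step S03
        people = openpose_keypoints(frame)                     # step S04 (assumed wrapper)
        if people is None or len(people) == 0:
            continue                                           # step S08: nothing to check
        h, w = frame.shape[:2]
        for kpts in people:
            head, arms = head_and_arm_boxes(kpts, w, h)        # step S05
            if head is not None:
                crop = frame[head[1]:head[3], head[0]:head[2]]
                if not wears_helmet(helmet_model, crop):       # step S06
                    report("constructor without safety helmet")
            if arms is not None:
                crop = frame[arms[1]:arms[3], arms[0]:arms[2]]
                if skin_ratio(crop) > 1 / 2:                   # step S07, second set proportion
                    report("constructor wearing short sleeves")
```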
For the present embodiment, step S02 can be further refined; the detailed flowchart is shown in FIG. 2. As shown in FIG. 2, step S02 further includes the following steps:
Step S21, preprocessing the image samples in the sample set, wherein the preprocessing at least comprises random rotation, translation, scaling, and flipping: in this step, the image samples in the sample set are preprocessed; random rotation, translation, scaling, flipping, and similar operations are used for data augmentation to expand the number of image samples.
Step S22, constructing a ResNet with a set number of layers and downloading a pre-trained model on the MSCOCO data set: in this step, a ResNet with the set number of layers is constructed, and a model pre-trained on the MSCOCO data set is downloaded. In this embodiment, the set number of layers is 50; in practical applications, it may be increased or decreased according to specific requirements.
Step S23, loading the image samples onto the pre-trained model by transfer learning, outputting the accuracy every set number of steps until an optimal result is reached, and storing the safety helmet wearing classification model: in this step, the image samples are loaded onto the pre-trained model by transfer learning, the accuracy is output every set number of steps until the optimal result is reached, and the safety helmet wearing classification model Mh is stored. In this embodiment, the set number of steps is 50; in practical applications, it may be increased or decreased according to specific requirements.
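A minimal transfer-learning sketch for steps S21 to S23, assuming PyTorch with torchvision 0.13 or later. Note that torchvision's ResNet-50 weights are ImageNet-pretrained rather than the MSCOCO pre-trained model mentioned in the patent, and the dataset path, batch size, learning rate, epoch count, and best-checkpoint criterion are placeholders:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Step S21: data augmentation - random rotation, translation, scaling, flipping.
train_tf = transforms.Compose([
    transforms.RandomAffine(degrees=15, translate=(0.1, 0.1), scale=(0.9, 1.1)),
    transforms.RandomHorizontalFlip(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Two class folders, e.g. helmet / no_helmet (assumed layout).
train_set = datasets.ImageFolder("helmet_dataset/train", transform=train_tf)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Step S22: 50-layer ResNet with pre-trained weights and a new 2-class head.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

# Step S23: fine-tune, report accuracy every 50 steps, keep the best checkpoint.
opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()
best_acc, step = 0.0, 0
model.train()
for epoch in range(10):
    for x, y in loader:
        opt.zero_grad()
        out = model(x)
        loss_fn(out, y).backward()
        opt.step()
        step += 1
        if step % 50 == 0:
            acc = (out.argmax(1) == y).float().mean().item()  # batch accuracy as a proxy
            if acc > best_acc:
                best_acc = acc
                torch.save(model.state_dict(), "helmet_classifier.pt")
```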
For the present embodiment, step S07 can be further refined; the detailed flowchart is shown in FIG. 3. As shown in FIG. 3, step S07 further includes the following steps:
Step S71, mapping the color space of the arm region from RGB to YCrCb space: in this step, the color space of the arm region is mapped from RGB to YCrCb space; the conversion is as follows:
Y=0.299R+0.587G+0.114B
Cr=-0.147R-0.289G+0.436B
Cb=0.615R-0.515G-0.100B
Where Y is the luminance, Cr is the red-difference chroma component, Cb is the blue-difference chroma component, R is the red component, G is the green component, and B is the blue component.
The YCrCb space is commonly used in face and skin detection. Ordinary images are stored in the RGB space, where the apparent skin color of a face is strongly affected by brightness, making it difficult to separate skin-color points from non-skin-color points: the skin-color points are scattered, with many non-skin-color points mixed in, which complicates skin-color region calibration (face calibration, eye calibration, and so on). If RGB is converted to YCrCb, the influence of Y (luminance) can be ignored, because this space is much less affected by brightness and skin colors cluster very well. The three-dimensional space is thus reduced to the two-dimensional CrCb plane, in which skin-color points form recognizable shapes: a face appears as a face region and an arm appears as an arm shape, which benefits subsequent pattern recognition. Empirically, if the CrCb values of a point satisfy 133 ≤ Cr ≤ 173 and 77 ≤ Cb ≤ 127, the point is considered a skin-color point; all other points are non-skin-color points.
Step S72, counting the number of pixels in the arm region whose pixel values satisfy 133 < Cr < 173 and 77 < Cb < 127: in this step, the number N of pixels in the arm region (the local region) whose pixel values satisfy 133 < Cr < 173 and 77 < Cb < 127 is counted.
Step S73, if the proportion of the number of pixels to the total number of pixels is smaller than a first set proportion, judging that the constructor is not wearing short sleeves; if the proportion is larger than a second set proportion, judging that the constructor's dressing is not standard: in this step, if the proportion of the number N of skin-color pixels to the total number of pixels is smaller than the first set proportion, the constructor is judged not to be wearing short sleeves; if the proportion is larger than the second set proportion, the constructor's dressing is judged to be not standard. In this embodiment, the first set proportion is 1/4 and the second set proportion is 1/2; in practical applications, both proportions can be increased or decreased according to specific requirements.
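A minimal sketch of steps S71 to S73, assuming OpenCV. It uses the built-in BGR to YCrCb conversion (cv2.COLOR_BGR2YCR_CB, whose channel order is Y, Cr, Cb) instead of the hand-written formulas above; the 133 to 173 and 77 to 127 thresholds and the 1/4 and 1/2 proportions follow the text, with inclusive bounds in cv2.inRange:

```python
import cv2
import numpy as np

def skin_ratio(arm_crop_bgr: np.ndarray) -> float:
    """Fraction of pixels in the arm region that fall inside the skin-color box."""
    ycrcb = cv2.cvtColor(arm_crop_bgr, cv2.COLOR_BGR2YCR_CB)   # channels: Y, Cr, Cb
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # Y ignored; Cr and Cb thresholds
    return float(cv2.countNonZero(mask)) / mask.size

def dressing_verdict(arm_crop_bgr, first_proportion=1 / 4, second_proportion=1 / 2):
    """Step S73 decision based on the skin-color pixel proportion."""
    r = skin_ratio(arm_crop_bgr)
    if r < first_proportion:
        return "long sleeves (compliant)"            # constructor is not wearing short sleeves
    if r > second_proportion:
        return "short sleeves (dressing not standard)"
    return "uncertain"                               # between the two set proportions
```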
The embodiment also relates to a device for implementing the above visual analysis-based constructor dressing standard detection method; a schematic structural diagram of the device is shown in FIG. 4. As shown in FIG. 4, the device comprises an image sample acquisition unit 1, a model training unit 2, a collection and extraction unit 3, a posture estimation unit 4, a positioning unit 5, a classification and recognition unit 6, a dressing determination unit 7, and an ending unit 8. The image sample acquisition unit 1 is used for acquiring image samples of constructors to form a sample set, intercepting local images of the constructors' heads, and dividing the local images into two classes, wearing a safety helmet and not wearing a safety helmet; the model training unit 2 is used for training a classification model on the sample set with ResNet to obtain a safety helmet wearing classification model; the collection and extraction unit 3 is used for collecting the real-time video stream of a monitoring camera on the construction site and extracting key frame images; the posture estimation unit 4 is used for analyzing the key frame image with the OpenPose human posture estimation algorithm and judging whether human key points are detected; the positioning unit 5 is used for locating the head region and the arm region of the human body; the classification and recognition unit 6 is used for calling the safety helmet wearing classification model to classify the head region and judging whether the constructor wears a safety helmet; the dressing determination unit 7 is used for judging whether short sleeves are worn by applying a skin color model to the arm region; and the ending unit 8 is used for ending the detection.
The device of the invention frees supervisors from constant manual watching and can detect non-standard constructor behavior around the clock, thereby realizing safety supervision of the construction site and improving construction efficiency and safety.
In this embodiment, the model training unit 2 further includes a preprocessing module 21, a pre-trained model downloading module 22, and an image sample loading module 23. The preprocessing module 21 is used for preprocessing the image samples in the sample set, wherein the preprocessing at least comprises random rotation, translation, scaling, and flipping. The pre-trained model downloading module 22 is used for constructing a ResNet with a set number of layers and downloading a model pre-trained on the MSCOCO data set; in this embodiment, the set number of layers is 50, and in practical applications it may be increased or decreased according to specific requirements.
The image sample loading module 23 is used for loading the image samples onto the pre-trained model by transfer learning, outputting the accuracy every set number of steps until an optimal result is achieved, and storing the safety helmet wearing classification model. In this embodiment, the set number of steps is 50; in practical applications, it may be increased or decreased according to specific requirements.
In this embodiment, the dressing determination unit 7 further includes a mapping module 71, a pixel number judgment module 72, and a dressing standard judgment module 73. The mapping module 71 is used for mapping the color space of the arm region from RGB to YCrCb space; the conversion is as follows:
Y=0.299R+0.587G+0.114B
Cr=-0.147R-0.289G+0.436B
Cb=0.615R-0.515G-0.100B
Where Y is the luminance, Cr is the red-difference chroma component, Cb is the blue-difference chroma component, R is the red component, G is the green component, and B is the blue component.
The pixel number judgment module 72 is used for counting the number of pixels in the arm region whose pixel values satisfy 133 < Cr < 173 and 77 < Cb < 127. The dressing standard judgment module 73 is used for judging that the constructor is not wearing short sleeves when the proportion of these pixels to the total number of pixels is smaller than a first set proportion, and judging that the constructor's dressing is not standard when the proportion is larger than a second set proportion. In this embodiment, the first set proportion is 1/4 and the second set proportion is 1/2; in practical applications, both proportions can be increased or decreased according to specific requirements.
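A minimal sketch of how the units 1 to 8 of FIG. 4 could be composed in software; the class and method names are illustrative assumptions only, not part of the patent:

```python
class DressingInspectionDevice:
    """Composes the units of FIG. 4: image sample acquisition (1), model training (2),
    collection and extraction (3), posture estimation (4), positioning (5),
    classification and recognition (6), dressing determination (7), ending (8)."""

    def __init__(self, pose_unit, positioning_unit, classification_unit, dressing_unit):
        self.pose_unit = pose_unit                          # unit 4
        self.positioning_unit = positioning_unit            # unit 5
        self.classification_unit = classification_unit      # unit 6
        self.dressing_unit = dressing_unit                  # unit 7

    def run_once(self, frame):
        """Process one key frame (units 1 to 3 supply the trained model and the frame)."""
        people = self.pose_unit.keypoints(frame)             # unit 4
        if not people:
            return []                                        # unit 8: end this detection
        findings = []
        for kpts in people:
            head, arms = self.positioning_unit.regions(kpts, frame)                   # unit 5
            if head is not None and not self.classification_unit.wears_helmet(head):  # unit 6
                findings.append("no safety helmet")
            if arms is not None and self.dressing_unit.short_sleeves(arms):           # unit 7
                findings.append("short sleeves")
        return findings
```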
In summary, the invention provides a detection method based on intelligent visual analysis for non-standard constructor behaviors such as not wearing a safety helmet and wearing short sleeves; it frees supervisors from constant manual watching and can detect such behavior around the clock, thereby realizing safety supervision of the construction site and improving construction efficiency and safety.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent substitutions, and improvements made within the spirit and principle of the present invention shall fall within its protection scope.

Claims (10)

1. A constructor dressing standard detection method based on visual analysis, characterized by comprising the following steps:
A) Acquiring image samples of constructors to form a sample set, intercepting local images of the constructors' heads, and dividing the local images into two classes: wearing a safety helmet and not wearing a safety helmet;
B) training a classification model on the sample set by using ResNet to obtain a safety helmet wearing classification model;
C) Collecting a real-time video stream of a monitoring camera on a construction site, and extracting to obtain a key frame image;
D) analyzing the key frame image by utilizing an OpenPose human posture estimation algorithm, judging whether a human key point is detected, and if so, executing a step E); otherwise, executing step H);
E) Positioning a head region and an arm region of a human body;
F) calling the safety helmet wearing classification model to classify and identify the head area, and judging whether the constructor wears a safety helmet or not;
G) Judging whether the short sleeves are worn or not by adopting a skin color model in the arm area, and executing the step H);
H) and finishing the detection.
2. The visual analysis-based constructor dressing standard detection method according to claim 1, wherein the step B) further comprises:
B1) Preprocessing the image samples in the sample set, wherein the preprocessing at least comprises random rotation, translation, scaling, and flipping;
B2) Constructing a ResNet with a set number of layers, and downloading a model pre-trained on the MSCOCO data set;
B3) Loading the image samples onto the pre-trained model by transfer learning, outputting the accuracy every set number of steps until an optimal result is achieved, and storing the safety helmet wearing classification model.
3. The visual analysis-based constructor dressing standard detection method according to claim 1, wherein the step G) further comprises:
G1) Mapping the color space of the arm region from RGB to YCrCb space, wherein the conversion mode is as follows:
Y=0.299R+0.587G+0.114B
Cr=-0.147R-0.289G+0.436B
Cb=0.615R-0.515G-0.100B
wherein Y is the luminance, Cr is the red-difference chroma component, Cb is the blue-difference chroma component, R is the red component, G is the green component, and B is the blue component;
G2) Judging the number of pixels of which the pixel values meet the following formula in the arm area: 133< Cr <173, 77< Cb < 127;
G3) if the proportion of the number of the pixels to the total number of the pixels is smaller than a first set proportion, judging that the constructor does not wear the short sleeves; and if the proportion of the number of the pixels to the total number of the pixels is larger than a second set proportion, judging that the dress of the constructor is not standard.
4. The visual analysis-based constructor dressing standard detection method according to claim 2, wherein the set number of layers is 50, and the set number of steps is 50.
5. The visual analysis-based constructor dressing standard detection method according to claim 3, wherein the first set proportion is 1/4 and the second set proportion is 1/2.
6. An apparatus for implementing the visual analysis-based constructor dressing standard detection method according to claim 1, characterized by comprising:
An image sample acquisition unit: used for acquiring image samples of constructors to form a sample set, intercepting local images of the constructors' heads, and dividing the local images into two classes: wearing a safety helmet and not wearing a safety helmet;
A model training unit: used for training a classification model on the sample set with ResNet to obtain a safety helmet wearing classification model;
A collection and extraction unit: used for collecting the real-time video stream of a monitoring camera on the construction site and extracting key frame images;
A posture estimation unit: used for analyzing the key frame image with the OpenPose human posture estimation algorithm and judging whether human key points are detected;
A positioning unit: used for locating the head region and the arm region of a human body;
A classification and recognition unit: used for calling the safety helmet wearing classification model to classify the head region and judging whether the constructor wears a safety helmet;
A dressing determination unit: used for judging whether short sleeves are worn by applying a skin color model to the arm region;
An ending unit: used for ending the detection.
7. The apparatus of claim 6, wherein the model training unit further comprises:
A preprocessing module: used for preprocessing the image samples in the sample set, wherein the preprocessing at least comprises random rotation, translation, scaling, and flipping;
A pre-trained model downloading module: used for constructing a ResNet with a set number of layers and downloading a model pre-trained on the MSCOCO data set;
An image sample loading module: used for loading the image samples onto the pre-trained model by transfer learning, outputting the accuracy every set number of steps until an optimal result is reached, and storing the safety helmet wearing classification model.
8. The apparatus of claim 6, wherein the dressing determination unit further comprises:
A mapping module: used for mapping the color space of the arm region from RGB to YCrCb space, the conversion being as follows:
Y=0.299R+0.587G+0.114B
Cr=-0.147R-0.289G+0.436B
Cb=0.615R-0.515G-0.100B
wherein Y is the luminance, Cr is the red-difference chroma component, Cb is the blue-difference chroma component, R is the red component, G is the green component, and B is the blue component;
A pixel number judgment module: used for counting the number of pixels in the arm region whose pixel values satisfy 133 < Cr < 173 and 77 < Cb < 127;
A dressing standard judgment module: used for judging that the constructor is not wearing short sleeves if the proportion of these pixels to the total number of pixels is smaller than a first set proportion, and judging that the constructor's dressing is not standard if the proportion is larger than a second set proportion.
9. The apparatus of claim 7, wherein the set number of layers is 50 and the set number of steps is 50.
10. The apparatus according to claim 8, wherein the first set proportion is 1/4 and the second set proportion is 1/2.
CN201910708238.2A 2019-08-01 2019-08-01 Visual analysis-based constructor dressing standard detection method and device Pending CN110569722A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910708238.2A CN110569722A (en) 2019-08-01 2019-08-01 Visual analysis-based constructor dressing standard detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910708238.2A CN110569722A (en) 2019-08-01 2019-08-01 Visual analysis-based constructor dressing standard detection method and device

Publications (1)

Publication Number Publication Date
CN110569722A true CN110569722A (en) 2019-12-13

Family

ID=68774448

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910708238.2A Pending CN110569722A (en) 2019-08-01 2019-08-01 Visual analysis-based constructor dressing standard detection method and device

Country Status (1)

Country Link
CN (1) CN110569722A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111191586A (en) * 2019-12-30 2020-05-22 安徽小眯当家信息技术有限公司 Method and system for inspecting wearing condition of safety helmet of personnel in construction site
CN111616457A (en) * 2020-05-06 2020-09-04 国网浙江省电力有限公司衢州供电公司 Intelligent safety helmet for construction safety
CN111639641A (en) * 2020-04-30 2020-09-08 中国海洋大学 Clothing area acquisition method and device
CN111815577A (en) * 2020-06-23 2020-10-23 深圳供电局有限公司 Method, device, equipment and storage medium for processing safety helmet wearing detection model
CN112183472A (en) * 2020-10-28 2021-01-05 西安交通大学 Method for detecting whether test field personnel wear work clothes or not based on improved RetinaNet
CN112487963A (en) * 2020-11-27 2021-03-12 新疆爱华盈通信息技术有限公司 Wearing detection method and system for safety helmet
CN113096288A (en) * 2021-04-27 2021-07-09 深圳市智德森水务科技有限公司 Detection system is dressed to worker
CN113505704A (en) * 2021-07-13 2021-10-15 上海西井信息科技有限公司 Image recognition personnel safety detection method, system, equipment and storage medium
CN113822242A (en) * 2021-11-19 2021-12-21 中化学交通建设集团有限公司 Image recognition technology-based helmet wearing recognition method and device
CN113837138A (en) * 2021-09-30 2021-12-24 重庆紫光华山智安科技有限公司 Dressing monitoring method, system, medium and electronic terminal

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109255312A (en) * 2018-08-30 2019-01-22 罗普特(厦门)科技集团有限公司 A kind of abnormal dressing detection method and device based on appearance features
CN109697430A (en) * 2018-12-28 2019-04-30 成都思晗科技股份有限公司 The detection method that working region safety cap based on image recognition is worn

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109255312A (en) * 2018-08-30 2019-01-22 罗普特(厦门)科技集团有限公司 A kind of abnormal dressing detection method and device based on appearance features
CN109697430A (en) * 2018-12-28 2019-04-30 成都思晗科技股份有限公司 The detection method that working region safety cap based on image recognition is worn

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111191586A (en) * 2019-12-30 2020-05-22 安徽小眯当家信息技术有限公司 Method and system for inspecting wearing condition of safety helmet of personnel in construction site
CN111639641B (en) * 2020-04-30 2022-05-03 中国海洋大学 Method and device for acquiring clothing region not worn on human body
CN111639641A (en) * 2020-04-30 2020-09-08 中国海洋大学 Clothing area acquisition method and device
CN111616457A (en) * 2020-05-06 2020-09-04 国网浙江省电力有限公司衢州供电公司 Intelligent safety helmet for construction safety
CN111815577A (en) * 2020-06-23 2020-10-23 深圳供电局有限公司 Method, device, equipment and storage medium for processing safety helmet wearing detection model
CN112183472A (en) * 2020-10-28 2021-01-05 西安交通大学 Method for detecting whether test field personnel wear work clothes or not based on improved RetinaNet
CN112487963A (en) * 2020-11-27 2021-03-12 新疆爱华盈通信息技术有限公司 Wearing detection method and system for safety helmet
CN113096288A (en) * 2021-04-27 2021-07-09 深圳市智德森水务科技有限公司 Detection system is dressed to worker
CN113505704A (en) * 2021-07-13 2021-10-15 上海西井信息科技有限公司 Image recognition personnel safety detection method, system, equipment and storage medium
CN113505704B (en) * 2021-07-13 2023-11-10 上海西井科技股份有限公司 Personnel safety detection method, system, equipment and storage medium for image recognition
CN113837138A (en) * 2021-09-30 2021-12-24 重庆紫光华山智安科技有限公司 Dressing monitoring method, system, medium and electronic terminal
CN113837138B (en) * 2021-09-30 2023-08-29 重庆紫光华山智安科技有限公司 Dressing monitoring method, dressing monitoring system, dressing monitoring medium and electronic terminal
CN113822242A (en) * 2021-11-19 2021-12-21 中化学交通建设集团有限公司 Image recognition technology-based helmet wearing recognition method and device
CN113822242B (en) * 2021-11-19 2022-02-25 中化学交通建设集团有限公司 Image recognition technology-based helmet wearing recognition method and device

Similar Documents

Publication Publication Date Title
CN110569722A (en) Visual analysis-based constructor dressing standard detection method and device
US7916904B2 (en) Face region detecting device, method, and computer readable recording medium
US8675960B2 (en) Detecting skin tone in images
US7747071B2 (en) Detecting and correcting peteye
CN101390128B (en) Detecting method and detecting system for positions of face parts
CN103914708B (en) Food kind detection method based on machine vision and system
CN105844242A (en) Method for detecting skin color in image
US11238301B2 (en) Computer-implemented method of detecting foreign object on background object in an image, apparatus for detecting foreign object on background object in an image, and computer-program product
CN108021881B (en) Skin color segmentation method, device and storage medium
CN112149513A (en) Industrial manufacturing site safety helmet wearing identification system and method based on deep learning
CN108268832A (en) Electric operating monitoring method, device, storage medium and computer equipment
CN108470424A (en) A kind of forest safety monitoring system based on characteristics of image
CN103218615B (en) Face judgment method
CN107491714B (en) Intelligent robot and target object identification method and device thereof
CN109948461A (en) A kind of sign language image partition method based on center coordination and range conversion
CN110648336B (en) Method and device for dividing tongue texture and tongue coating
JP4712659B2 (en) Image evaluation apparatus and program thereof
CN112101260B (en) Method, device, equipment and storage medium for identifying safety belt of operator
CN116343100B (en) Target identification method and system based on self-supervision learning
CN113743199A (en) Tool wearing detection method and device, computer equipment and storage medium
CN116229570B (en) Aloft work personnel behavior situation identification method based on machine vision
KR20210092914A (en) Method and system for alopecia self-diagnosis
CN110598521A (en) Behavior and physiological state identification method based on intelligent analysis of face image
CN113989886B (en) Crewman identity verification method based on face recognition
CN111860079A (en) Living body image detection method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191213