CN109670441B - Method, system, terminal and computer-readable storage medium for safety helmet wearing recognition


Publication number: CN109670441B
Authority: CN (China)
Prior art keywords: human body, safety helmet, picture, frame, posture
Legal status: Active
Application number: CN201811538958.0A
Original language: Chinese (zh)
Other versions: CN109670441A
Inventors: 李瀚�, 宋建斌, 张青, 吴武勋, 江子强, 叶海青, 张子淇
Current Assignee: Guangdong Eshore Technology Co Ltd
Original Assignee: Guangdong Eshore Technology Co Ltd
Application filed by Guangdong Eshore Technology Co Ltd; priority to application CN201811538958.0A. Published as CN109670441A; application granted and published as CN109670441B.

Classifications

    • G06V 20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/045 - Combinations of networks (neural network architectures)
    • G06V 20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Abstract

The invention discloses a method, a system, a terminal and a computer-readable storage medium for recognizing whether a safety helmet is worn, and relates to the technical field of computer vision applications. Deep learning is applied to identify the human body and the safety helmet in a detection picture, and the exact area where the helmet should be worn is determined from the identified human body, avoiding the errors caused by the mismatch between the planar relationships in an image and the actual three-dimensional relationships of people; the recognition rate in a real production environment reaches 90%, and the recognition speed reaches 15 frames per second. The invention greatly eases manual supervision of the production environment: as soon as abnormal wearing behavior is detected, the supervisor is notified and can take the next management step based on the images and information fed back from the site, without having to watch the site at every moment.

Description

Method, system, terminal and computer-readable storage medium for safety helmet wearing recognition
Technical Field
The present invention relates to the field of computer vision application technologies, and in particular, to a method, a system, a terminal, and a computer readable storage medium for implementing helmet wearing identification.
Background
With the development of China's economy, construction sites are more and more numerous and the country's requirements on the construction industry are ever higher, yet safety accidents still occur, causing serious economic losses to enterprises, disaster to the victims' families, and a degree of harm to social stability.
China's work-safety regulations require that a safety helmet be worn when entering a construction site, but it is difficult to ensure that workers actually wear one while working. Not wearing a helmet is a hidden safety hazard that causes great injury when an accident happens, and one that safety managers find hard to spot. If such behavior can be discovered in time, an early warning can be given and the harm of safety accidents reduced.
With the development of artificial intelligence, video monitoring is ever more widely applied in all fields of society, especially safety production; in safety supervision in particular, the need to check through intelligent video analysis whether a safety helmet is worn is urgent. Helmet wearing detection is a form of object detection, i.e. the identification of a specific object using deep learning, image processing, machine vision or other techniques. Under the influence of factors such as illumination change, occlusion and target size, detection is difficult and recognition performance is poor; object detection against a complex background has therefore been a research hotspot in both theory and application in recent years.
The traditional safety helmet detection algorithm judges by the RGB components of the image; it is strongly affected by illumination and achieves a low recognition rate in a real production environment. Early object detection algorithms learn shallow image features and apply carefully crafted operations such as normalization and pooling, obtaining good results when shape and illumination change little, but their computational cost is large and their applicability low.
Disclosure of Invention
The technical problem the invention aims to solve is improving the recognition rate and shortening the recognition time under a complex background, by providing a detection method that analyzes video images in real time and determines whether the human bodies in the images are wearing safety helmets correctly.
In order to solve the problems, the invention provides the following technical scheme:
in a first aspect, the present invention provides a method for detecting a wearing recognition of a helmet, including the following steps:
s1, acquiring a detection picture;
s2, judging whether the detected picture contains a human body or not and a safety helmet;
s3, if the detected picture contains a human body and a safety helmet, acquiring the posture of the human body, and determining a correct wearing area according to the posture of the human body;
s4, judging whether the safety helmet is in the correct wearing area or not;
S5, if so, judging that the human body is correctly wearing the safety helmet, and if not, executing alarm operation.
In a further technical solution, step S1 comprises the following steps:
analyzing the real-time video stream to obtain a picture to be analyzed;
acquiring an optical flow component value of the picture to be analyzed;
and if the optical flow component value of the picture to be analyzed is larger than a first preset threshold value, selecting the picture to be analyzed as a detection picture.
In a further technical solution, step S2 includes:
generating region proposals on the convolutional feature layer of the detection picture by using k different rectangular boxes, and screening out candidate regions where a human body and/or a safety helmet may exist, wherein k is a positive integer;
extracting the characteristics of the human body and the safety helmet from the candidate areas respectively;
classifying and regressing the features by utilizing a human feature model, and identifying whether the detected picture has a human body or not;
if the detected picture is identified to have a human body, carrying out frame position regression processing on the human body to obtain a human body frame;
classifying and regressing the features by using a safety helmet feature model, and identifying whether the detected picture has a safety helmet or not;
and if the detected picture is identified to have the safety helmet, carrying out frame position regression processing on the safety helmet to obtain a safety helmet frame.
In a further technical solution, the method further comprises the following steps before step S2:
collecting a safety helmet image sample and a non-safety helmet image sample, classifying, labeling and training to obtain a safety helmet characteristic model;
and collecting human body image samples and non-human body image samples, classifying, labeling and training to obtain a human body characteristic model.
In a further technical solution, determining the correct wearing area according to the posture of the human body comprises the following steps:
acquiring the aspect ratio alpha of the human body frame;
if the value of alpha is larger than a second preset threshold value, judging that the posture of the human body is standing;
and if the value of alpha is smaller than a second preset threshold value, judging that the posture of the human body is squatting.
If the posture of the human body is standing, determining the correct wearing area as a first wearing area, wherein the first wearing area is a rectangular area with a first preset height and a first preset width at the upper part of the human body frame;
if the posture of the human body is squatting, determining the correct wearing area as a second wearing area, wherein the second wearing area is a rectangular area with a second preset height and a second preset width at the upper part of the human body frame.
In a further technical solution, step S4 comprises the following steps:
Acquiring a safety helmet which has an overlapping area with a human body frame of the human body and is positioned on the head of the human body as a target safety helmet;
if the posture of the human body is standing, judging whether the safety helmet frame of the target safety helmet is positioned in the first wearing area or not;
if yes, judging that the target safety helmet is in the correct wearing area;
if the posture of the human body is squatting, judging whether the safety helmet frame of the target safety helmet is positioned in the second wearing area or not;
if yes, judging that the target safety helmet is in the correct wearing area.
In a further technical solution, the method further comprises the following steps:
if a plurality of human bodies exist in the detection picture, repeating steps S3-S5 to judge whether all the human bodies in the detection picture wear the safety helmet correctly;
if all the human bodies wear the safety helmet correctly, judging that the detection picture contains no violation;
if some person does not wear the safety helmet correctly, judging that the detection picture contains a violation, and executing the alarm operation.
In a second aspect, the present invention proposes a detection system for safety helmet wearing recognition, comprising: means for performing the method as described in the first aspect.
In a third aspect, an embodiment of the present invention provides a terminal, where the terminal includes a processor, an input device, an output device, and a memory, where the processor, the input device, the output device, and the memory are connected to each other, and where the memory is configured to store application program code that supports the terminal to perform the method of the first aspect.
In a fourth aspect, embodiments of the present invention provide a computer readable storage medium storing a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of the first aspect described above.
Compared with the prior art, the invention has the following technical effects:
by applying deep learning, the human body and the safety helmet in the detection picture are identified, the posture of the human body is judged from the identified human body, and the exact area where the helmet should be worn is determined, avoiding the errors caused by the mismatch between the planar relationships in an image and the actual three-dimensional relationships of people; the recognition rate in a real production environment reaches 90%, and the recognition speed reaches 15 frames per second. The invention greatly eases manual supervision of the production environment: as soon as abnormal wearing behavior is detected, the supervisor is notified and can take the next management step based on the images and information fed back from the site, without having to watch the site at every moment.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a detection method for wearing recognition of a helmet according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a detection system for safety helmet wearing recognition according to another embodiment of the present invention;
fig. 3 is a schematic diagram of a terminal according to another embodiment of the present invention;
FIG. 4 is a schematic diagram of a training process of a detection model according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating the implementation of step S101 according to the embodiment of the present invention;
FIG. 6 is a flowchart illustrating the implementation of step S102 according to the embodiment of the present invention;
fig. 7 is a schematic diagram of a specific implementation flow of the implementation steps S103-S105 of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, in which like reference numerals represent like components. It will be apparent that the embodiments described below are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the embodiments of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of embodiments of the invention. As used in the specification of the embodiments of the invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
Embodiments
Referring to fig. 1, which is a flowchart of a detection method for wearing recognition of a helmet according to an embodiment of the present invention, as can be seen from the figure, the method includes the following steps:
s101, acquiring a detection picture;
referring to fig. 5, in implementation, the quality of the detection picture determines the accuracy and efficiency of the subsequent recognition and analysis. To reduce the computational load of the system and shorten the detection time, this embodiment obtains the detection picture through the following specific steps:
Analyzing the real-time video stream to obtain a picture to be analyzed;
in general, there are three encoded frames in a video stream: intra-coded frames (I-frames), predictive-coded frames (P-frames) and bi-directional-coded frames (B-frames). The I frame only uses the spatial correlation of the frame to compress the video, the P frame uses the forward reference frame to do the time domain predictive coding, and the B frame uses the forward and backward bidirectional reference frames to do the time domain predictive coding. In general, the compression ratio of the I frame is low, the image quality is slightly better, and the compression ratio is the basis of the P frame and the B frame, so that the I frame should be preferentially selected for analysis, and then the P frame with more information is selected.
Because the time difference between two adjacent I frames is longer, a plurality of P frames are selected between the two adjacent I frames to serve as pictures to be analyzed, and the omission of key information is prevented.
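A minimal sketch of this frame selection, assuming the PyAV library as the decoding backend and an RTSP camera URL (the patent names neither):

```python
# Sketch: prefer I-frames and take a few P-frames between consecutive
# I-frames; B-frames are skipped. PyAV and the URL format are assumptions.
import av

def frames_to_analyze(url, p_frames_between_i=3):
    """Yield decoded frames worth analyzing from a real-time video stream."""
    container = av.open(url)              # e.g. "rtsp://camera/stream"
    p_since_i = 0
    for frame in container.decode(video=0):
        kind = frame.pict_type.name       # 'I', 'P' or 'B'
        if kind == 'I':
            p_since_i = 0
            yield frame.to_ndarray(format='bgr24')
        elif kind == 'P' and p_since_i < p_frames_between_i:
            p_since_i += 1
            yield frame.to_ndarray(format='bgr24')
```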
Acquiring an optical flow component value of the picture to be analyzed;
the present example uses the temporal change of the pixels in the image sequence and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame, and thereby computes the motion information of objects between adjacent frames. The optical flow field finds, from the picture sequence, the moving speed and moving direction of each pixel in the image. This embodiment obtains the optical flow component value of the picture to be analyzed with a sparse optical flow calculation:
where the optical flow component value u_x represents the motion offset of the corner points in the x direction (for example, the mean absolute x-displacement of the tracked corner points between the two frames); the larger the value of u_x, i.e. the larger the motion offset in the x direction, the larger the gap between the two frames of images.
And if the optical flow component value of the picture to be analyzed is larger than a first preset threshold value, selecting the picture to be analyzed as a detection picture.
In a specific implementation, the first preset threshold is set to 1.2: if the optical flow component value u_x of the picture to be analyzed is larger than 1.2, the difference between the picture to be analyzed and the previous frame is considered large, and the picture is selected as a detection picture for further analysis, so that no detection result is missed.
It should be noted that a person skilled in the art can also judge the difference between the picture to be analyzed and the previous frame by acquiring the motion offset of the picture in the y direction, automatically screening the pictures to be analyzed and thereby improving detection accuracy and efficiency. A sketch of this frame gate follows.
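A minimal sketch of the gate, assuming OpenCV's pyramidal Lucas-Kanade tracker as the sparse optical flow method (the patent does not name one):

```python
# Sketch: mean |x|-offset of tracked corners between the previous frame and
# the current one, compared against the first preset threshold (1.2 here).
import cv2
import numpy as np

FIRST_THRESHOLD = 1.2  # first preset threshold from the embodiment

def x_flow_component(prev_gray, gray):
    """Mean absolute x-displacement of corner points between two gray frames."""
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                      qualityLevel=0.3, minDistance=7)
    if corners is None:
        return 0.0
    moved, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, corners, None)
    ok = status.ravel() == 1
    if not ok.any():
        return 0.0
    dx = moved[ok, 0, 0] - corners[ok, 0, 0]   # x-offsets of tracked corners
    return float(np.abs(dx).mean())

def is_detection_picture(prev_bgr, bgr):
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    return x_flow_component(prev_gray, gray) > FIRST_THRESHOLD
```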
S102, judging whether the detection picture contains a human body and a safety helmet;
referring to fig. 4, in implementation, a human body feature model and a safety helmet feature model must first be trained.
Collecting a safety helmet image sample and a non-safety helmet image sample, classifying, labeling and training to obtain a safety helmet characteristic model; and collecting human body image samples and non-human body image samples, classifying, labeling and training to obtain a human body characteristic model.
A large number of human body image samples, non-human body image samples, safety helmet image samples and non-safety helmet image samples are collected, annotated with dedicated tools, and divided into training samples, used to train the parameters of the model, and test samples, used to test the effect of the model. In this embodiment, 20000 human body and non-human body image samples are selected, along with 8000 safety helmet image samples and 8000 non-safety helmet image samples, and the regions and categories are annotated with the ImgLabel tool. The samples are divided into training and test sets at a ratio of 7:3, e.g. as sketched below.
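A minimal sketch of the 7:3 split; the (image, annotation) pair layout is an assumption, since the patent only states the ratio and the annotation tool:

```python
# Sketch: shuffle labelled samples and split them 7:3 into train/test sets.
import random

def split_dataset(samples, train_ratio=0.7, seed=42):
    """samples: list of (image_path, annotation_path) pairs."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```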
Referring to fig. 6, the convolution feature layer of the detected picture is named by using k different rectangular frames, and candidate areas possibly existing in a human body and/or a safety helmet are screened out, wherein k is a positive integer;
in a specific implementation, since the target may appear at any position in the image and its size and aspect ratio are not fixed, the whole image is initially traversed with a sliding-window strategy, which requires setting different scales and different aspect ratios. This exhaustive strategy covers all possible locations of the target, but its time complexity is high and it creates too many redundant windows, severely affecting the speed and performance of the subsequent feature extraction and classification.
In this embodiment, region proposals are generated on the final convolutional feature layer of the picture by using k different rectangular boxes (Anchor Boxes). Experimental tests show that k = 9 is a good compromise between time efficiency and detection accuracy, ensuring both real-time detection and detection accuracy. When k is less than 9 the detection speed increases but the accuracy decreases, and some human bodies cannot be detected, so misses easily occur; when k is greater than 9 the accuracy improves but the detection speed decreases. The prior sizes of these boxes are obtained by clustering the labelled boxes of the data set, as sketched below.
Extracting the characteristics of the human body and the safety helmet from the candidate areas respectively;
in particular implementations, how much feature data is extracted directly affects the speed and accuracy of the next recognition operation. The image features that are typically extracted mainly include: texture features, edge features, and motion features of the image. (1) The texture features mainly comprise a gray level histogram, an edge direction histogram, a gray level co-occurrence matrix and the like of the image. (2) Edge features directly display the outline of an image, mainly including perimeter, area, aspect ratio (principal axis ratio), dispersion, compactness, etc. of the image. (3) Motion characteristics are related to motion behavior and generally include motion centroid, velocity, displacement, gradient, etc.
In specific implementation, the whole picture is input, and the characteristics of the picture are automatically acquired by using a Darknet51 network.
Classifying and regressing the features by utilizing a human feature model, and identifying whether the detected picture has a human body or not; if the detected picture is identified to have a human body, carrying out frame position regression processing on the human body to obtain a human body frame;
classifying and returning the features by using a safety helmet feature model, and identifying whether the detected picture has a safety helmet or not; and if the detected picture is identified to have the safety helmet, carrying out frame position regression processing on the safety helmet to obtain a safety helmet frame.
In specific implementation, the Feature Map after Feature extraction is classified to determine whether a human body and/or a helmet is contained, k regression models (corresponding to different Anchor boxes) are used for adjusting the positions and the sizes of candidate frames, and finally classification is performed on the human body and the helmet.
The bounding-box prediction of each feature in the Feature Map (the relative coordinates of the center point and the relative width and height of the box) is further processed by the model into actual values: the predicted center coordinates are normalized with a sigmoid function and added to the offset of the current feature cell relative to the top-left corner of the picture to obtain the actual coordinates; the predicted object width and height are transformed with an exp function and multiplied by the prior-box width and height of the current Anchor Box to obtain the actual width and height, where the prior-box width and height of each Anchor Box are obtained by clustering the data set. A numerical sketch follows.
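A minimal sketch of this decoding; the grid/stride convention is the usual YOLOv3 one, which the embodiment's Darknet/YOLOV3 setup suggests but does not spell out:

```python
# Sketch: sigmoid for the box centre, exp times the anchor prior for the size.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph, stride):
    """(tx..th): raw predictions; (cx, cy): feature-cell offset from the
    top-left of the picture; (pw, ph): prior width/height of this Anchor Box;
    stride: pixels per feature-map cell."""
    bx = (sigmoid(tx) + cx) * stride   # actual centre x in pixels
    by = (sigmoid(ty) + cy) * stride   # actual centre y in pixels
    bw = pw * np.exp(tw)               # actual width from the prior
    bh = ph * np.exp(th)               # actual height from the prior
    return bx, by, bw, bh
```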
The present embodiment determines whether each region contains a specific type of object by means of the detection and classification models and the GPU. The detection model is Darknet, and the classifier is a YOLOV3 model based on a focal loss function. The detected human bodies and safety helmets are post-processed by frame position regression to obtain the final human body frames and safety helmet frames. A sketch of the focal loss follows.
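A minimal sketch of the focal loss the classifier is said to be based on, in its common binary form with the usual alpha/gamma defaults; the patent does not give its parameter values:

```python
# Sketch: focal loss down-weights easy examples via the (1 - pt)^gamma factor.
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """p: predicted probabilities in (0, 1); y: binary labels (0 or 1)."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)              # prob. of the true class
    at = np.where(y == 1, alpha, 1 - alpha)      # class-balance weight
    return float(np.mean(-at * (1 - pt) ** gamma * np.log(pt)))
```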
The method and the device let region proposal, classification and regression share convolutional features, which improves the speed of human body detection while ensuring detection accuracy. Other object detection algorithms, such as R-CNN, SPP-Net, SSD and YOLO, may also be used in this example.
And S103, if the detected picture contains a human body and a safety helmet, acquiring the posture of the human body, and determining a correct wearing area according to the posture of the human body.
In the implementation, after the human body and the safety helmet are detected, it must further be judged whether the human body wears the helmet correctly. Because of the shooting angle, a human body close to the lens and a human body far from it differ in size, and the same holds for safety helmets; therefore the embodiment of the invention models the data of a correctly worn helmet according to the posture of the human body and judges on that basis whether the human body wears the helmet correctly. The specific steps are as follows:
Acquiring the aspect ratio alpha of the human body frame;
if the value of alpha is larger than a second preset threshold value, judging that the posture of the human body is standing;
if the value of alpha is smaller than a second preset threshold value, judging that the posture of the human body is squatting;
in specific implementation, the posture of the human body is judged according to the aspect ratio of the human body frame, the embodiment sets the second preset threshold value to be 1.8, and if the value of alpha is larger than 1.8, the posture of the human body is judged to be standing; if the value of alpha is smaller than 1.8, judging that the posture of the human body is squatting; the person skilled in the art can also determine the value of the second preset threshold according to the shooting angle of the picture or other factors, and the invention is not limited in particular.
If the posture of the human body is standing, determining the correct wearing area as a first wearing area, wherein the first wearing area is a rectangular area with a first preset height and a first preset width at the upper part of the human body frame; if the posture of the human body is squatting, determining the correct wearing area as a second wearing area, wherein the second wearing area is a rectangular area with a second preset height and a second preset width at the upper part of the human body frame.
In a specific embodiment, let the width of the obtained human body frame be A and its height be B (so the aspect ratio alpha is the height-to-width ratio B/A). If:
B/A is greater than 1.8, the posture of the human body corresponding to the human body frame is judged to be standing, and the correct wearing area is determined to be the rectangular area at the upper part of the human body frame with the first preset height and the first preset width, where the first preset height is B × 0.2 and the first preset width is A × 1.2;
B/A is less than 1.8, the posture of the human body corresponding to the human body frame is judged to be squatting, and the correct wearing area is determined to be the rectangular area at the upper part of the human body frame with the second preset height and the second preset width, where the second preset height is B × 0.3 and the second preset width is A × 1.1.
It should be noted that the embodiment of the invention models the values for a correctly worn helmet according to the standing and squatting postures of the human body. Part of the helmet shell may extend beyond the person's frame even when the helmet is worn correctly, which is especially obvious in side views, so the area of correct wearing is bounded by setting the first wearing area to the first preset height and width and the second wearing area to the second preset height and width. These values must be chosen carefully: if they are too small the judgment condition is too strict, and a helmet worn correctly in a side view or in a squatting posture is easily judged as incorrect; if they are too large, a person in the distance may be matched with a helmet in the foreground (the next person's oversized hat). The person skilled in the art may also determine the ranges of the first and second wearing areas according to the shooting angle of the picture or other factors; the invention is not specifically limited in this respect. A sketch of this computation follows.
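A minimal sketch of the posture judgment and the wearing-area computation, using the embodiment's numbers (threshold 1.8; 0.2B × 1.2A standing, 0.3B × 1.1A squatting). Boxes are (x, y, w, h) with y growing downward; centring the widened strip horizontally on the body frame is an assumption:

```python
SECOND_THRESHOLD = 1.8  # second preset threshold from the embodiment

def correct_wearing_area(body_box):
    x, y, a, b = body_box                    # width A, height B
    if b / a > SECOND_THRESHOLD:             # standing posture
        h, w = 0.2 * b, 1.2 * a
    else:                                    # squatting posture
        h, w = 0.3 * b, 1.1 * a
    # strip across the top of the body frame, centred horizontally (assumed)
    return (x + (a - w) / 2.0, y, w, h)
```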
S104, judging whether the safety helmet is in the correct wearing area or not;
in implementation, a safety helmet whose frame overlaps the human body frame, with the overlap located at the head of the human body, is obtained as the target safety helmet;
if the posture of the human body is standing, judging whether the safety helmet frame of the target safety helmet is positioned in the first wearing area or not; if yes, judging that the target safety helmet is in the correct wearing area;
if the posture of the human body is squatting, judging whether the safety helmet frame of the target safety helmet is positioned in the second wearing area or not; if yes, judging that the target safety helmet is in the correct wearing area.
In specific implementation, after the target safety helmet is obtained, if the human body posture is judged to be standing, judging whether the target safety helmet is located in a first preset height of the upper part of the human body frame, if yes, judging whether the target safety helmet is located in a first preset width of the upper part of the human body frame, if yes, judging that the target safety helmet is located in the correct wearing area, marking the human body as worn, and marking the corresponding target safety helmet as worn. If one item is not, judging that the target safety helmet is not in the correct wearing area, and judging that the human body does not wear the safety helmet correctly.
If the human body posture is judged to be squatting, judging whether the target safety helmet is located in a second preset height of the upper portion of the human body frame, if so, judging whether the target safety helmet is located in a second preset width of the upper portion of the human body frame, if so, judging that the target safety helmet is located in the correct wearing area, marking the human body as worn, and marking the corresponding target safety helmet as worn. If one item is not, judging that the target safety helmet is not in the correct wearing area.
It should be noted that a person skilled in the art may adjust the order of the checks that determine whether the target safety helmet is located in the correct wearing area. A containment-test sketch follows.
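A minimal sketch of the check in step S104: the helmet is judged correctly worn when its frame lies inside the wearing area. Full containment is assumed, since the patent says "located in" the area without defining a tolerance:

```python
# Sketch: axis-aligned containment test; boxes are (x, y, w, h).
def inside(inner, outer):
    ix, iy, iw, ih = inner
    ox, oy, ow, oh = outer
    return ix >= ox and iy >= oy and ix + iw <= ox + ow and iy + ih <= oy + oh

def correctly_worn(helmet_box, body_box):
    area = correct_wearing_area(body_box)    # from the sketch above
    return inside(helmet_box, area)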
S105, if yes, judging that the safety helmet is worn correctly by the human body, and if not, executing alarm operation.
In the implementation, if the safety helmet is in the correct wearing area, judging that the human body correctly wears the safety helmet, and detecting that the picture has no illegal action; if the safety helmet is not in the correct wearing area, judging that the human body does not wear the safety helmet correctly, detecting that the picture has illegal behaviors, and executing alarm operation.
Referring to fig. 7, in another embodiment, if there are multiple human bodies in the detected picture, repeating steps S103-S105, and judging whether all human bodies in the detected picture wear the safety helmet correctly;
If all the human bodies wear the safety helmet correctly, judging that the detected picture has no illegal action;
if a person does not wear the safety helmet correctly, judging that the detected picture has illegal behaviors, and executing alarm operation.
As shown in fig. 7, in the specific embodiment, steps S103-S105 are performed to determine whether the person wears the helmet correctly, and the steps are as follows:
STEP 1: obtain the aspect ratio alpha of the human body frame.
STEP 2: judge the posture of the human body from the value of alpha and derive the correct wearing area of the helmet.
STEP 3: traverse all safety helmets and judge whether a helmet overlaps the person's area. If yes, go to STEP 4; otherwise continue judging the next unworn helmet.
STEP 4: if the posture is standing, judge whether the helmet is within the first preset height at the upper part of the person's frame; if yes, go to STEP 5, otherwise return to STEP 3 and judge the next unworn helmet. If the posture is squatting, judge the same against the second preset height.
STEP 5: if the posture is standing, judge whether the helmet is within the first preset width at the upper part of the person's frame; if yes, judge that the helmet is worn correctly, mark the person and the helmet as wearing and worn, and record the matching information. If the posture is squatting, judge the same against the second preset width.
STEP 6: repeat STEP 1-STEP 5 until all persons are labelled.
STEP 7: judge whether all persons are marked as wearing the helmet correctly; if not, go to STEP 8.
STEP 8: run the STEP 1-STEP 5 matching between each person who does not wear a helmet correctly and the helmets already marked as worn; for a helmet re-matched in this way, run STEP 1-STEP 5 between its original wearer and the helmets not yet marked as worn.
STEP 9: if some person is still marked as not wearing the helmet correctly, the picture is considered to contain a violation; otherwise the picture is considered to contain no violation.
If the picture contains a violation, early-warning information is generated and sent to the terminal; if there is no violation, no early-warning signal is generated.
It should be noted that, in some embodiments, STEP 4 and STEP 5, which determine whether the target helmet is in the correct wearing area, are interchangeable. A condensed sketch of the whole matching loop follows.
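A condensed sketch of STEP 1-STEP 9: each person is first matched against unworn helmets, stragglers may take a worn helmet whose original wearer is then re-matched, and the picture is flagged if anyone stays unmatched. The helper `correctly_worn` is the sketch above; the overlap-at-head filter of STEP 3 is folded into it for brevity:

```python
def picture_has_violation(body_boxes, helmet_boxes):
    owner = [None] * len(helmet_boxes)       # helmet j -> person index

    def try_match(i, candidates):
        for j in candidates:
            if correctly_worn(helmet_boxes[j], body_boxes[i]):
                return j
        return None

    # STEP 1-STEP 6: first pass, each person against the unworn helmets
    for i in range(len(body_boxes)):
        j = try_match(i, [j for j in range(len(helmet_boxes)) if owner[j] is None])
        if j is not None:
            owner[j] = i
    # STEP 8: an unmatched person may take a worn helmet; its original
    # wearer is then re-matched against the helmets still unworn
    for i in range(len(body_boxes)):
        if i in owner:
            continue
        j = try_match(i, [j for j in range(len(helmet_boxes)) if owner[j] is not None])
        if j is not None:
            prev = owner[j]
            owner[j] = i
            j2 = try_match(prev, [k for k in range(len(helmet_boxes)) if owner[k] is None])
            if j2 is not None:
                owner[j2] = prev
    # STEP 9: a violation if any person still has no helmet
    return any(i not in owner for i in range(len(body_boxes)))
```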
Referring to fig. 2, a schematic block diagram of a detection system for safety helmet wearing recognition according to an embodiment of the present invention is shown. As shown, the system 200 in this embodiment may include: an acquisition unit 201, a first judging unit 202, a determining unit 203, a second judging unit 204 and an alarm unit 205. Wherein,
An acquisition unit 201 for acquiring a detection picture;
a first judging unit 202, configured to judge whether the detection picture contains a human body and a safety helmet;
a determining unit 203, configured to obtain a posture of a human body if the detected picture includes the human body and a helmet, and determine a correct wearing area according to the posture of the human body;
a second judging unit 204, configured to judge whether the helmet is in the correct wearing area;
an alarm unit 205, configured to determine that the person wears the helmet correctly if the helmet is in the correct wearing area, and execute an alarm operation if the helmet is not in the correct wearing area;
in an embodiment, the obtaining unit 201 is further configured to parse the real-time video stream to obtain a picture to be analyzed; acquiring an optical flow component value of the picture to be analyzed; and if the optical flow component value of the picture to be analyzed is larger than a first preset threshold value, selecting the picture to be analyzed as a detection picture.
In an embodiment, the system 200 further includes a model training unit 206, where the model training unit 206 is configured to collect a helmet image sample and a non-helmet image sample, classify, annotate, and train the helmet image sample, and obtain a feature model of the helmet; and collecting human body image samples and non-human body image samples, classifying, labeling and training to obtain a human body characteristic model.
In an embodiment, the determining unit 203 is specifically configured to obtain the aspect ratio alpha of the human body frame; if the value of alpha is larger than the second preset threshold, judge that the posture of the human body is standing; if the value of alpha is smaller than the second preset threshold, judge that the posture is squatting; if the posture is standing, determine the correct wearing area as the first wearing area, a rectangular area with the first preset height and the first preset width at the upper part of the human body frame; if the posture is squatting, determine the correct wearing area as the second wearing area, a rectangular area with the second preset height and the second preset width at the upper part of the human body frame.
In an embodiment, the first judging unit 202 is specifically configured to generate region proposals on the convolutional feature layer of the detection picture by using k different rectangular boxes and screen out candidate regions where a human body and/or a safety helmet may exist, where k is a positive integer; extract the features of the human body and the safety helmet from the candidate regions respectively; classify and regress the features using the human body feature model to identify whether the detection picture contains a human body; if a human body is identified, perform frame position regression on it to obtain a human body frame; classify and regress the features using the safety helmet feature model to identify whether the detection picture contains a safety helmet; and if a safety helmet is identified, perform frame position regression on it to obtain a safety helmet frame.
In an embodiment, the second judging unit 204 is configured to obtain, as the target safety helmet, a helmet whose frame overlaps the human body frame with the overlap located at the head of the human body; if the posture of the human body is standing, judge whether the helmet frame of the target helmet is located in the first wearing area, and if yes, judge that the target helmet is in the correct wearing area; if the posture is squatting, judge whether the helmet frame of the target helmet is located in the second wearing area, and if yes, judge that the target helmet is in the correct wearing area.
Referring to fig. 3, a schematic block diagram of a terminal 300 according to another embodiment of the present invention is provided. As shown, the terminal 300 in this embodiment may include: one or more processors 301; one or more input devices 302, one or more output devices 303, and a memory 304. The processor 301, the input device 302, the output device 303 and the memory 304 are connected via a bus 305. The memory 304 is used for storing instructions, and the processor 301 is used for executing the instructions stored in the memory 304. The processor 301 is configured to perform: obtaining a detection picture; judging whether the detection picture contains a human body and a safety helmet; if the detection picture contains a human body and a safety helmet, acquiring the posture of the human body, and determining the correct wearing area according to the posture of the human body; judging whether the safety helmet is in the correct wearing area; if yes, judging that the human body wears the safety helmet correctly, and if not, executing the alarm operation.
In an embodiment, the processor 301 is further configured to perform: analyzing the real-time video stream to obtain a picture to be analyzed; acquiring an optical flow component value of the picture to be analyzed; and if the optical flow component value of the picture to be analyzed is larger than a first preset threshold value, selecting the picture to be analyzed as a detection picture.
In an embodiment, the processor 301 is further configured to perform: and collecting sample pictures containing human bodies, marking the sample pictures, and training to obtain parameters of a human body classifier model.
In an embodiment, the processor 301 is further configured to perform: collecting safety helmet image samples and non-safety helmet image samples, classifying, annotating and training to obtain a safety helmet feature model; collecting human body image samples and non-human body image samples, classifying, annotating and training to obtain a human body feature model; generating region proposals on the convolutional feature layer of the detection picture by using k different rectangular boxes, and screening out candidate regions where a human body and/or a safety helmet may exist, where k is a positive integer; extracting the features of the human body and the safety helmet from the candidate regions respectively; classifying and regressing the features by using the human body feature model, and identifying whether the detection picture contains a human body; if a human body is identified, carrying out frame position regression on the human body to obtain a human body frame; classifying and regressing the features by using the safety helmet feature model, and identifying whether the detection picture contains a safety helmet; and if a safety helmet is identified, carrying out frame position regression on the safety helmet to obtain a safety helmet frame.
In an embodiment, the processor 301 is further configured to perform: acquiring the aspect ratio alpha of the human body frame; if the value of alpha is larger than the second preset threshold, judging that the posture of the human body is standing; if the value of alpha is smaller than the second preset threshold, judging that the posture is squatting; if the posture is standing, determining the correct wearing area as the first wearing area, a rectangular area with the first preset height and the first preset width at the upper part of the human body frame; if the posture is squatting, determining the correct wearing area as the second wearing area, a rectangular area with the second preset height and the second preset width at the upper part of the human body frame.
In an embodiment, the processor 301 is further configured to perform: acquiring a safety helmet which has an overlapping area with a human body frame of the human body and is positioned on the head of the human body as a target safety helmet; if the posture of the human body is standing, judging whether the safety helmet frame of the target safety helmet is positioned in the first wearing area or not; if yes, judging that the target safety helmet is in the correct wearing area; if the posture of the human body is squatting, judging whether the safety helmet frame of the target safety helmet is positioned in the second wearing area or not; if yes, judging that the target safety helmet is in the correct wearing area.
In an embodiment, the processor 301 is further configured to perform: if a plurality of human bodies exist in the detection picture, repeating steps S3-S5 to judge whether all the human bodies in the detection picture wear the safety helmet correctly; if all the human bodies wear the safety helmet correctly, judging that the detection picture contains no violation; if some person does not wear the safety helmet correctly, judging that the detection picture contains a violation, and executing the alarm operation.
It should be appreciated that in embodiments of the present invention, the processor 301 may be a central processing unit (Central Processing Unit, CPU), which may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSPs), application specific integrated circuits (Application Specific Integrated Circuit, ASICs), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The input device 302 may include a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of a fingerprint), a microphone, etc., and the output device 303 may include a display (LCD, etc.), a speaker, etc.
The memory 304 may include read only memory and random access memory and provides instructions and data to the processor 301. A portion of memory 304 may also include non-volatile random access memory. For example, the memory 304 may also store information of device type.
In a specific implementation, the processor 301, the input device 302 and the output device 303 described in the embodiments of the present invention may execute the implementations described in the embodiments of the detection method for safety helmet wearing recognition provided herein, and may also execute the implementation of the terminal 300 described in the embodiments of the present invention, which is not repeated here.
In another embodiment of the present invention, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements:
obtaining a detection picture; judging whether the detection picture contains a human body and a safety helmet; if the detection picture contains a human body and a safety helmet, determining the correct wearing area according to the posture of the human body; judging whether the safety helmet is in the correct wearing area; if yes, judging that the human body wears the safety helmet correctly, and if not, executing the alarm operation.
The computer readable storage medium may be an internal storage unit of the terminal according to any of the foregoing embodiments, for example, a hard disk or a memory of the terminal. The computer readable storage medium may also be an external storage device of the terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal. Further, the computer-readable storage medium may also include both an internal storage unit and an external storage device of the terminal. The computer-readable storage medium is used to store the computer program and other programs and data required by the terminal. The computer-readable storage medium may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied in electronic hardware, in computer software, or in a combination of the two, and that the elements and steps of the examples have been generally described in terms of function in the foregoing description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working procedures of the terminal and the unit described above may refer to the corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In several embodiments provided by the present invention, it should be understood that the disclosed terminal and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices, or elements, or may be an electrical, mechanical, or other form of connection.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment of the present invention.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention is essentially or a part contributing to the prior art, or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In the foregoing embodiments, the descriptions of the embodiments are focused on, and for those portions of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and substitutions of equivalents may be made and equivalents will be apparent to those skilled in the art without departing from the scope of the invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (5)

1. A detection method for safety helmet wearing recognition, characterized by comprising the following steps:
s1, acquiring a detection picture;
s2, judging whether the detected picture contains a human body or not and a safety helmet;
s3, if the detected picture contains a human body and a safety helmet, acquiring the posture of the human body, and determining a correct wearing area according to the posture of the human body;
s4, judging whether the safety helmet is in the correct wearing area or not;
s5, if so, judging that the human body wears the safety helmet correctly, and if not, executing alarm operation;
wherein, step S1 includes:
Analyzing the real-time video stream to obtain a picture to be analyzed, wherein the picture to be analyzed comprises an intra-frame coding frame and a predictive coding frame;
acquiring an optical flow component value of the picture to be analyzed;
if the optical flow component value of the picture to be analyzed is larger than a first preset threshold value, selecting the picture to be analyzed as a detection picture;
the determining a correct wearing area according to the posture of the human body comprises the following steps:
if the posture of the human body is standing, determining the correct wearing area as a first wearing area, wherein the first wearing area is a rectangular area with a first preset height and a first preset width at the upper part of the human body frame;
if the posture of the human body is squatting, determining the correct wearing area as a second wearing area, wherein the second wearing area is a rectangular area with a second preset height and a second preset width at the upper part of the human body frame;
the step S2 comprises the following steps:
collecting a safety helmet image sample and a non-safety helmet image sample, classifying, labeling and training to obtain a safety helmet characteristic model;
collecting human body image samples and non-human body image samples, classifying, labeling and training to obtain a human body characteristic model;
generating region proposals on the convolutional feature layer of the detection picture by using k different rectangular boxes, and screening out candidate regions where a human body and/or a safety helmet may exist, wherein k is a positive integer;
Extracting the characteristics of the human body and the safety helmet from the candidate areas respectively;
classifying and regressing the features by utilizing a human feature model, and identifying whether the detected picture has a human body or not;
if the detected picture is identified to have a human body, carrying out frame position regression processing on the human body to obtain a human body frame;
classifying and regressing the features by using a safety helmet feature model, and identifying whether the detected picture has a safety helmet or not;
if the detection picture is identified to have the safety helmet, carrying out frame position regression processing on the safety helmet to obtain a safety helmet frame;
wherein acquiring the posture of the human body comprises:
acquiring the aspect ratio alpha of the human body frame;
if the value of alpha is greater than a second preset threshold, judging that the posture of the human body is standing;
if the value of alpha is smaller than the second preset threshold, judging that the posture of the human body is squatting;
wherein step S4 comprises:
acquiring, as a target safety helmet, a safety helmet which has an overlapping area with the human body frame of the human body and is positioned at the head of the human body;
if the posture of the human body is standing, judging whether the safety helmet frame of the target safety helmet is positioned in the first wearing area, and if so, judging that the target safety helmet is in the correct wearing area;
if the posture of the human body is squatting, judging whether the safety helmet frame of the target safety helmet is positioned in the second wearing area, and if so, judging that the target safety helmet is in the correct wearing area.
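The frame-selection rule of step S1 can be prototyped with a dense optical flow estimator. Below is a minimal sketch assuming OpenCV's Farneback flow and the mean flow magnitude as the "optical flow component value" (the claim names neither); `FLOW_THRESHOLD` and the function name are illustrative, not part of the patent.

```python
# Sketch of claim 1, step S1: select "detection pictures" from a live video
# stream by thresholding an optical flow component value. Assumptions (not
# fixed by the claim): Farneback dense flow as the estimator, mean flow
# magnitude as the component value, FLOW_THRESHOLD as the first preset threshold.
import cv2

FLOW_THRESHOLD = 1.5  # "first preset threshold" -- tune per camera and scene

def select_detection_pictures(stream_url):
    """Yield frames whose mean optical-flow magnitude exceeds the threshold."""
    cap = cv2.VideoCapture(stream_url)  # the decoder handles I- and P-frames
    ok, prev = cap.read()
    if not ok:
        return
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, gray, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        if magnitude.mean() > FLOW_THRESHOLD:
            yield frame  # enough motion: hand this picture to the detector
        prev_gray = gray
    cap.release()
```

A caller would simply iterate, e.g. `for picture in select_detection_pictures("rtsp://...")`, feeding each yielded picture to the detection step S2; static scenes are skipped, which is what keeps the pipeline near the 15 frames-per-second figure cited in the description.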
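The k rectangular frames of step S2 correspond to the anchor boxes of SSD/Faster R-CNN-style detectors. The sketch below shows how such candidate areas can be tiled over a convolutional feature layer; the scales, ratios, feature-map size and stride are illustrative assumptions, since the claim fixes only that k different rectangles are used.

```python
# Sketch of the region-proposal idea in claim 1, step S2: tile k rectangular
# frames (anchor boxes) over every cell of a convolutional feature layer.
# The scale/ratio values below are illustrative, not taken from the patent.
import numpy as np

def make_anchors(feature_h, feature_w, stride,
                 scales=(32, 64), ratios=(0.5, 1.0, 2.0)):
    """Return a (feature_h * feature_w * k, 4) array of (cx, cy, w, h) anchors,
    where k = len(scales) * len(ratios)."""
    anchors = []
    for i in range(feature_h):
        for j in range(feature_w):
            # centre of this feature-map cell in input-image pixels
            cx, cy = (j + 0.5) * stride, (i + 0.5) * stride
            for s in scales:
                for r in ratios:
                    # width/height chosen so that w/h == r and w*h == s*s
                    anchors.append((cx, cy, s * np.sqrt(r), s / np.sqrt(r)))
    return np.array(anchors)

# k = 2 scales x 3 ratios = 6 rectangular frames per feature-map cell
anchors = make_anchors(feature_h=38, feature_w=38, stride=16)
```

Each anchor is then scored by the classifier (human body / safety helmet / background) and refined by frame position regression, producing the human body frames and safety helmet frames the later steps operate on.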
2. The detection method for safety helmet wearing recognition of claim 1, further comprising:
if a plurality of human bodies exist in the detection picture, repeating steps S3-S5 to judge whether every human body in the detection picture wears a safety helmet correctly;
if all the human bodies wear safety helmets correctly, judging that the detection picture contains no violation;
if any human body does not wear a safety helmet correctly, judging that the detection picture contains a violation, and executing an alarm operation.
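Steps S3-S5 and the per-person loop of claim 2 reduce to simple box geometry once the detector has produced body and helmet frames. A minimal sketch under stated assumptions: boxes are (x, y, w, h) tuples in pixels, alpha is taken as height/width of the body frame (so a standing person scores higher), the ASPECT/WEAR_AREA constants stand in for the unspecified "preset" thresholds and sizes, and the target-helmet selection of S4 is folded into the containment test.

```python
# Sketch of claim 1 steps S3-S5 plus the multi-person loop of claim 2,
# operating on detector output. All numeric constants are illustrative
# placeholders for the patent's "preset" values.
ASPECT_THRESHOLD = 1.8        # "second preset threshold": standing vs. squatting
WEAR_AREA = {                 # (height, width) as fractions of the body frame
    "standing": (0.25, 1.0),  # first preset height / first preset width
    "squatting": (0.40, 1.0), # second preset height / second preset width
}

def posture(body):
    """S3: classify posture from the body frame's aspect ratio alpha = h / w."""
    x, y, w, h = body
    return "standing" if h / w > ASPECT_THRESHOLD else "squatting"

def wearing_area(body):
    """S3: the correct wearing area, a box at the upper part of the body frame."""
    x, y, w, h = body
    fh, fw = WEAR_AREA[posture(body)]
    return (x + w * (1 - fw) / 2, y, w * fw, h * fh)

def inside(inner, outer):
    """S4: is the inner box entirely within the outer box?"""
    ix, iy, iw, ih = inner
    ox, oy, ow, oh = outer
    return ix >= ox and iy >= oy and ix + iw <= ox + ow and iy + ih <= oy + oh

def picture_is_compliant(bodies, helmets):
    """Claim 2: every body needs some helmet frame inside its wearing area."""
    for body in bodies:
        area = wearing_area(body)
        if not any(inside(helmet, area) for helmet in helmets):
            return False  # violation found -> caller executes the alarm (S5)
    return True
```

Tying the wearing area to the detected posture is what avoids the error the description mentions: a squatting worker's helmet sits proportionally lower in the body frame, so a single fixed head region would misfire.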
3. A detection system for safety helmet wearing recognition, characterized by comprising:
an acquisition unit, configured to acquire a detection picture, specifically: parsing the real-time video stream to obtain pictures to be analyzed, wherein the pictures to be analyzed comprise intra-coded frames and predictive-coded frames; acquiring an optical flow component value of each picture to be analyzed; and if the optical flow component value of a picture to be analyzed is greater than a first preset threshold, selecting that picture as a detection picture;
a first judging unit, configured to judge whether the detection picture contains a human body and a safety helmet, specifically: collecting safety helmet image samples and non-safety-helmet image samples, then classifying, labeling and training them to obtain a safety helmet feature model; collecting human body image samples and non-human-body image samples, then classifying, labeling and training them to obtain a human body feature model; applying k different rectangular frames to the convolutional feature layer of the detection picture to propose candidate areas in which a human body and/or a safety helmet may exist, k being a positive integer;
extracting features of the human body and the safety helmet from the candidate areas respectively; classifying and regressing the features using the human body feature model to identify whether the detection picture contains a human body; if a human body is identified in the detection picture, performing frame position regression on the human body to obtain a human body frame; classifying and regressing the features using the safety helmet feature model to identify whether the detection picture contains a safety helmet; and if a safety helmet is identified in the detection picture, performing frame position regression on the safety helmet to obtain a safety helmet frame;
a determining unit, configured to acquire the posture of the human body if the detection picture contains a human body and a safety helmet, and to determine the correct wearing area according to the posture of the human body, specifically: acquiring the aspect ratio alpha of the human body frame; if the value of alpha is greater than a second preset threshold, judging that the posture of the human body is standing; if the value of alpha is smaller than the second preset threshold, judging that the posture of the human body is squatting; if the posture of the human body is standing, determining the correct wearing area to be a first wearing area, the first wearing area being a rectangular area of a first preset height and a first preset width at the upper part of the human body frame; and if the posture of the human body is squatting, determining the correct wearing area to be a second wearing area, the second wearing area being a rectangular area of a second preset height and a second preset width at the upper part of the human body frame;
a second judging unit, configured to judge whether the safety helmet is in the correct wearing area, specifically: acquiring, as a target safety helmet, a safety helmet which has an overlapping area with the human body frame of the human body and is positioned at the head of the human body; if the posture of the human body is standing, judging whether the safety helmet frame of the target safety helmet is positioned in the first wearing area, and if so, judging that the target safety helmet is in the correct wearing area; if the posture of the human body is squatting, judging whether the safety helmet frame of the target safety helmet is positioned in the second wearing area, and if so, judging that the target safety helmet is in the correct wearing area; and
an alarm unit, configured to judge that the human body wears the safety helmet correctly if the safety helmet is in the correct wearing area, and to execute an alarm operation if it is not.
4. A terminal comprising a processor, an input device, an output device and a memory, the processor, the input device, the output device and the memory being interconnected, wherein the memory is configured to store application program code supporting the terminal in performing the method of any one of claims 1-2, and the processor is configured to perform the method of any one of claims 1-2.
5. A computer readable storage medium storing a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of any of claims 1-2.
CN201811538958.0A 2018-12-14 2018-12-14 Method, system, terminal and computer readable storage medium for realizing wearing recognition of safety helmet Active CN109670441B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811538958.0A CN109670441B (en) 2018-12-14 2018-12-14 Method, system, terminal and computer readable storage medium for realizing wearing recognition of safety helmet

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811538958.0A CN109670441B (en) 2018-12-14 2018-12-14 Method, system, terminal and computer readable storage medium for realizing wearing recognition of safety helmet

Publications (2)

Publication Number Publication Date
CN109670441A CN109670441A (en) 2019-04-23
CN109670441B true CN109670441B (en) 2024-02-06

Family

ID=66144377

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811538958.0A Active CN109670441B (en) 2018-12-14 2018-12-14 Method, system, terminal and computer readable storage medium for realizing wearing recognition of safety helmet

Country Status (1)

Country Link
CN (1) CN109670441B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263665A (en) * 2019-05-29 2019-09-20 朗坤智慧科技股份有限公司 Safety cap recognition methods and system based on deep learning
CN110188724B (en) * 2019-06-05 2023-02-28 中冶赛迪信息技术(重庆)有限公司 Method and system for helmet positioning and color recognition based on deep learning
CN110502965B (en) * 2019-06-26 2022-05-17 哈尔滨工业大学 Construction safety helmet wearing monitoring method based on computer vision human body posture estimation
CN110458075B (en) * 2019-08-05 2023-08-25 北京泰豪信息科技有限公司 Method, storage medium, device and system for detecting wearing of safety helmet
CN110443976B (en) * 2019-08-14 2021-05-28 深圳市沃特沃德股份有限公司 Safety reminding method and device based on safety helmet and storage medium
CN110619324A (en) * 2019-11-25 2019-12-27 南京桂瑞得信息科技有限公司 Pedestrian and safety helmet detection method, device and system
CN111062429A (en) * 2019-12-12 2020-04-24 上海点泽智能科技有限公司 Chef cap and mask wearing detection method based on deep learning
CN111199200A (en) * 2019-12-27 2020-05-26 深圳供电局有限公司 Wearing detection method and device based on electric protection equipment and computer equipment
CN111275058B (en) * 2020-02-21 2021-04-27 上海高重信息科技有限公司 Safety helmet wearing and color identification method and device based on pedestrian re-identification
CN112101288B (en) * 2020-09-25 2024-02-13 北京百度网讯科技有限公司 Method, device, equipment and storage medium for detecting wearing of safety helmet
CN112257570B (en) * 2020-10-20 2021-07-27 江苏濠汉信息技术有限公司 Method and device for detecting whether safety helmet of constructor is not worn based on visual analysis
CN112861751B (en) * 2021-02-22 2024-01-12 中国中元国际工程有限公司 Airport luggage room personnel management method and device
CN113283296A (en) * 2021-04-20 2021-08-20 晋城鸿智纳米光机电研究院有限公司 Helmet wearing detection method, electronic device and storage medium
CN113361347A (en) * 2021-05-25 2021-09-07 东南大学成贤学院 Job site safety detection method based on YOLO algorithm
CN114332738B (en) * 2022-01-18 2023-08-04 浙江高信技术股份有限公司 Safety helmet detection system for intelligent construction site
CN114283485B (en) * 2022-03-04 2022-10-14 杭州格物智安科技有限公司 Safety helmet wearing detection method and device, storage medium and safety helmet
CN115150552A (en) * 2022-06-23 2022-10-04 中国华能集团清洁能源技术研究院有限公司 Constructor safety monitoring method, system and device based on deep learning self-adaption
CN116958702A (en) * 2023-08-01 2023-10-27 浙江钛比科技有限公司 Hotel guard personnel wearing detection method and system based on edge artificial intelligence
CN116824723A (en) * 2023-08-29 2023-09-29 山东数升网络科技服务有限公司 Intelligent security inspection system and method for miner well-down operation based on video data

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106571014A (en) * 2016-10-24 2017-04-19 上海伟赛智能科技有限公司 Method for identifying abnormal motion in video and system thereof
CN107103617A (en) * 2017-03-27 2017-08-29 国机智能科技有限公司 The recognition methods of safety cap wearing state and system based on optical flow method
WO2017197308A1 (en) * 2016-05-12 2017-11-16 One Million Metrics Corp. System and method for monitoring safety and productivity of physical tasks
KR20180082856A (en) * 2017-01-11 2018-07-19 금오공과대학교 산학협력단 A safety helmet to prevent accidents and send information of the wearer in real time
CN108319934A (en) * 2018-03-20 2018-07-24 武汉倍特威视系统有限公司 Safety cap wear condition detection method based on video stream data
CN108460358A (en) * 2018-03-20 2018-08-28 武汉倍特威视系统有限公司 Safety cap recognition methods based on video stream data
CN108537256A (en) * 2018-03-26 2018-09-14 北京智芯原动科技有限公司 A kind of safety cap wears recognition methods and device
CN108535683A (en) * 2018-04-08 2018-09-14 安徽宏昌机电装备制造有限公司 A kind of intelligent safety helmet and its localization method based on NB-IOT honeycomb technology of Internet of things
CN108921004A (en) * 2018-04-27 2018-11-30 淘然视界(杭州)科技有限公司 Safety cap wears recognition methods, electronic equipment, storage medium and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6364747B2 (en) * 2013-11-14 2018-08-01 オムロン株式会社 Monitoring device and monitoring method

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017197308A1 (en) * 2016-05-12 2017-11-16 One Million Metrics Corp. System and method for monitoring safety and productivity of physical tasks
CN106571014A (en) * 2016-10-24 2017-04-19 上海伟赛智能科技有限公司 Method for identifying abnormal motion in video and system thereof
KR20180082856A (en) * 2017-01-11 2018-07-19 금오공과대학교 산학협력단 A safety helmet to prevent accidents and send information of the wearer in real time
CN107103617A (en) * 2017-03-27 2017-08-29 国机智能科技有限公司 The recognition methods of safety cap wearing state and system based on optical flow method
CN108319934A (en) * 2018-03-20 2018-07-24 武汉倍特威视系统有限公司 Safety cap wear condition detection method based on video stream data
CN108460358A (en) * 2018-03-20 2018-08-28 武汉倍特威视系统有限公司 Safety cap recognition methods based on video stream data
CN108537256A (en) * 2018-03-26 2018-09-14 北京智芯原动科技有限公司 A kind of safety cap wears recognition methods and device
CN108535683A (en) * 2018-04-08 2018-09-14 安徽宏昌机电装备制造有限公司 A kind of intelligent safety helmet and its localization method based on NB-IOT honeycomb technology of Internet of things
CN108921004A (en) * 2018-04-27 2018-11-30 淘然视界(杭州)科技有限公司 Safety cap wears recognition methods, electronic equipment, storage medium and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘鑫昱 (Liu Xinyu). Research on Key Technologies of Person Re-identification for Surveillance Images. China Master's Theses Full-text Database, Information Science and Technology series, 2018-02-15, I138-2459. *

Also Published As

Publication number Publication date
CN109670441A (en) 2019-04-23

Similar Documents

Publication Publication Date Title
CN109670441B (en) Method, system, terminal and computer readable storage medium for realizing wearing recognition of safety helmet
Fang et al. Detecting non-hardhat-use by a deep learning method from far-field surveillance videos
CN110502965B (en) Construction safety helmet wearing monitoring method based on computer vision human body posture estimation
CN110425005B (en) Safety monitoring and early warning method for man-machine interaction behavior of belt transport personnel under mine
CN102542289B (en) Pedestrian volume statistical method based on plurality of Gaussian counting models
US20190012531A1 (en) Movement monitoring system
CN109145742B (en) Pedestrian identification method and system
CN104166841A (en) Rapid detection identification method for specified pedestrian or vehicle in video monitoring network
CN106886216A (en) Robot automatic tracking method and system based on RGBD Face datections
CN110874583A (en) Passenger flow statistics method and device, storage medium and electronic equipment
CN112396658A (en) Indoor personnel positioning method and positioning system based on video
CN103310194A (en) Method for detecting head and shoulders of pedestrian in video based on overhead pixel gradient direction
CN109506628A (en) Object distance measuring method under a kind of truck environment based on deep learning
CN107194361A (en) Two-dimentional pose detection method and device
CN109145696B (en) Old people falling detection method and system based on deep learning
CN110674680B (en) Living body identification method, living body identification device and storage medium
CN103034852A (en) Specific color pedestrian detecting method in static video camera scene
CN105740751A (en) Object detection and identification method and system
CN104616006A (en) Surveillance video oriented bearded face detection method
CN110781853A (en) Crowd abnormality detection method and related device
Kwaśniewska et al. Face detection in image sequences using a portable thermal camera
CN115862113A (en) Stranger abnormity identification method, device, equipment and storage medium
CN113240829B (en) Intelligent gate passing detection method based on machine vision
CN105809183A (en) Video-based human head tracking method and device thereof
CN111814659B (en) Living body detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant