CN113408365B - Safety helmet identification method and device under complex scene - Google Patents
- Publication number
- CN113408365B (application number CN202110579308.6A)
- Authority
- CN
- China
- Prior art keywords
- safety helmet
- picture
- color
- adopting
- worn
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
- G06F18/24137—Distances to cluster centroïds
- G06F18/2414—Smoothing the distance, e.g. radial basis function networks [RBFN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The application discloses a safety helmet identification method and device under a complex scene, wherein the method comprises the following steps: acquiring a safety helmet wearing state picture in a complex scene, and performing data annotation on the safety helmet wearing state picture to obtain an annotated picture; preprocessing the annotated picture by adopting an illumination-equalization picture processing method to obtain a preprocessed picture; training a neural network model with the preprocessed picture, and obtaining a safety helmet recognition model by changing the network structure of the neural network model and adding an attention mechanism; and acquiring a picture to be identified in a complex scene, inputting the picture to be identified into the safety helmet identification model, and identifying the safety helmet wearing state of a worker in the picture to be identified by adopting a TTA (test-time augmentation) method. The embodiment of the application can effectively reduce the influence of actual environmental factors on the identification result in a complex scene, and can improve the accuracy of safety helmet identification by modifying the network structure of the Yolov5 neural network and fusing the attention mechanism.
Description
Technical Field
The application relates to the technical field of target detection, in particular to a safety helmet identification method and device under a complex scene.
Background
The safety helmet can play a role in buffering and damping, and is an indispensable safety tool for production workers and high-altitude operation personnel in various industries. Operation safety in a complex scene is closely related to whether safety helmets are worn. Existing safety helmet identification methods mainly focus on how to comprehensively utilize mainstream algorithm models to improve the safety helmet detection and identification rate, and related research is mostly aimed at safety helmet identification in simple scenes, without considering the influence of the various factors of an actual building site on safety helmet identification, so that the wearing condition of safety helmets in complex scenes is difficult to identify.
Disclosure of Invention
The application provides a safety helmet identification method and device under a complex scene, which are used for solving the problem that the conventional safety helmet identification method does not consider the influence of various factors of an actual site on safety helmet identification, so that the wearing condition of the safety helmet under the complex scene is difficult to identify.
The first embodiment of the application provides a safety helmet identification method under a complex scene, which comprises the following steps:
acquiring a safety helmet wearing state picture in a complex scene, and performing data annotation on the safety helmet wearing state picture to obtain an annotation picture;
preprocessing the marked picture by adopting a picture processing method of illumination equalization to obtain a preprocessed picture;
training a neural network model by adopting the preprocessing picture, and obtaining a safety helmet recognition model by changing the network structure of the neural network model and adding an attention mechanism;
and acquiring a picture to be identified in a complex scene, inputting the picture to be identified into the safety helmet identification model, and identifying the wearing state of the safety helmet of the worker in the picture to be identified by adopting a TTA method.
Further, between the step of acquiring a helmet wearing state picture under a complex scene and performing data annotation on the helmet wearing state picture to obtain an annotation picture, and the step of preprocessing the annotation picture by adopting an illumination equalization picture processing method to obtain a preprocessed picture, the method further comprises:
and carrying out cluster analysis on the detection frames of the marked picture by using a k-means clustering method, and randomly erasing picture regions of the marked picture by using a random erasing data enhancement method.
Further, the method for processing the picture by adopting illumination equalization carries out pretreatment on the marked picture to obtain a pretreated picture, which specifically comprises the following steps:
performing brightness equalization processing on the marked picture, reading three RGB color channels of the marked picture, and converting the color channels into YUV color space;
selecting Y channel information of the YUV color space, counting Y channel values of each pixel, and calculating according to the Y channel values to obtain probability of occurrence of preset brightness;
and obtaining a brightness histogram according to the occurrence probability of each brightness, and carrying out normalization processing on the brightness histogram to obtain a preprocessed picture.
Further, the neural network is a Yolov5 model, the preprocessing picture is adopted to train the neural network model, and the safety helmet recognition model is obtained by changing the network structure of the neural network model and adding an attention mechanism, specifically:
and adding a layer of SElayer into the network structure of the neural network model to obtain a BackBone fused with an attention mechanism, thereby obtaining the safety helmet recognition model.
Further, the method further comprises:
generating a voice message reminder when the safety helmet wearing state is recognized as the state of not wearing the safety helmet;
and when the wearing state of the safety helmet is identified as the worn safety helmet, classifying the worn safety helmet by adopting a machine learning method to obtain the color type of the worn safety helmet.
Further, the method for classifying the worn safety helmet by adopting the machine learning method obtains the color category of the worn safety helmet, which is specifically as follows:
detecting the position of a safety helmet in the preprocessed picture;
manufacturing color class templates of a plurality of safety helmets;
selecting the upper half part of the worn safety helmet according to the position of the safety helmet, converting the upper half part of the worn safety helmet into the YUV color space, and respectively calculating the Euclidean distance from the upper half part of the worn safety helmet to a plurality of color class templates;
and respectively comparing the Euclidean distances with a distance threshold range, and obtaining the color class of the safety helmet according to the comparison result.
Further, the comparing the euclidean distances with the distance threshold ranges respectively, and obtaining the color class of the safety helmet according to the comparison result, specifically includes:
comparing the Euclidean distances with a distance threshold range respectively, and if at least one Euclidean distance is in the threshold range, selecting a color class template corresponding to the minimum Euclidean distance in the Euclidean distances as a final calculation result to obtain the color class of the safety helmet;
and if all Euclidean distances are not in the distance threshold range, judging that the safety helmet is of other color types.
A second embodiment of the present application provides a helmet recognition device in a complex scene, including:
the data labeling module is used for acquiring a safety helmet wearing state picture in a complex scene and labeling the safety helmet wearing state picture with data to obtain a labeling picture;
the preprocessing module is used for preprocessing the marked picture by adopting a picture processing method of illumination equalization to obtain a preprocessed picture;
the model training module is used for training the neural network model by adopting the preprocessing picture, and obtaining a safety helmet recognition model by changing the network structure of the neural network model and adding an attention mechanism;
the identification module is used for acquiring pictures to be identified in a complex scene, inputting the pictures to be identified into the safety helmet identification model, and identifying the wearing state of the safety helmet of workers in the pictures to be identified by adopting a TTA method.
According to the embodiment of the application, the influence of factors such as strong light, weak light and shielding in a complex scene on safety helmet state recognition is fully considered, and data preprocessing is performed by adopting the illumination-equalization picture processing method, so that the influence of actual environmental factors on the recognition result in a complex scene can be effectively reduced, and safety helmet recognition can be more accurate. The embodiment of the application modifies the network structure of the Yolov5 neural network and fuses the attention mechanism, so that the spatial attention of the model is more concentrated and the accuracy of identification is improved; the reliability of the safety helmet recognition model can also be improved by adding the TTA method.
Furthermore, after the worker in the complex scene is identified to wear the safety helmet, the color type of the safety helmet can be further distinguished by manufacturing different color type templates of the safety helmet and calculating the Euclidean distance between the position of the safety helmet and the color type templates, so that the management efficiency of the complex scene on the wearing state of the safety helmet is improved.
Drawings
Fig. 1 is a schematic flow chart of a method for identifying a helmet in a complex scene according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a neural network model according to an embodiment of the present application;
fig. 3 is another flow chart of a method for identifying a helmet in a complex scenario according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a safety helmet recognition device under a complex scene according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In the description of the present application, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present application, unless otherwise indicated, the meaning of "a plurality" is two or more.
In the description of the present application, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be either fixedly connected, detachably connected, or integrally connected, for example; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present application will be understood in specific cases by those of ordinary skill in the art.
Referring to fig. 1-3, in a first embodiment of the present application, the first embodiment of the present application provides a method for identifying a helmet in a complex scenario shown in fig. 1, including:
s1, acquiring a safety helmet wearing state picture in a complex scene, and marking the safety helmet wearing state picture with data to obtain a marked picture;
In the embodiment of the application, a safety helmet wearing state picture in a complex scene is acquired as a safety helmet data set in step S1, wherein the safety helmet wearing state picture comprises pictures of worn safety helmets and pictures of unworn safety helmets. When the data are annotated, the two classes are kept in comparable numbers to ensure data balance.
S2, preprocessing the marked picture by adopting a picture processing method of illumination equalization to obtain a preprocessed picture;
by means of the method, contrast of the picture can be effectively improved, details of the picture can be increased, influences caused by changes among different illuminations in the complex environment can be effectively resisted, and recognition accuracy of the model in the strong light or weak light environment can be effectively improved.
S3, training a neural network model by adopting a preprocessing picture, and obtaining a safety helmet recognition model by changing the network structure of the neural network model and adding an attention mechanism;
according to the embodiment of the application, the network structure of the neural network model is modified and the attention mechanism is fused, so that the safety helmet recognition model obtained through training is more concentrated in space, and the accuracy and the efficiency of safety helmet wearing state recognition can be improved.
S4, acquiring a picture to be identified in a complex scene, inputting the picture to be identified into a safety helmet identification model, and identifying the wearing state of the safety helmet of a worker in the picture to be identified by adopting a TTA method.
Specifically, when the picture to be identified is used for identifying the wearing state of the safety helmet, the picture is randomly flipped and scaled by adopting the TTA method in the safety helmet identification model, and a final safety helmet wearing state identification result is obtained by comprehensively analysing the multiple results, so that the reliability of identifying the wearing state of the safety helmet can be effectively improved.
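The test-time augmentation described above can be sketched as follows. This is a minimal illustration only: the `model` callable, the scale set, and the use of nearest-neighbour rescaling and score averaging are assumptions, since the embodiment does not specify its augmentations or fusion rule in detail.

```python
import numpy as np

def tta_predict(model, image, scales=(0.83, 1.0, 1.17)):
    """Test-time augmentation: run the model on flipped and rescaled
    copies of the image and average the resulting confidence scores.
    `model` is any callable returning a confidence in [0, 1]."""
    scores = []
    for s in scales:
        h, w = image.shape[:2]
        # nearest-neighbour rescale (a stand-in for a proper resize)
        ys = np.clip((np.arange(int(h * s)) / s).astype(int), 0, h - 1)
        xs = np.clip((np.arange(int(w * s)) / s).astype(int), 0, w - 1)
        scaled = image[ys][:, xs]
        scores.append(model(scaled))
        scores.append(model(scaled[:, ::-1]))  # horizontal flip
    return float(np.mean(scores))
```

Averaging is one simple fusion choice; voting or max-confidence fusion would fit the same skeleton.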
As a specific implementation manner of the embodiment of the application, between the step of collecting the wearing state picture of the safety helmet in the complex scene and performing data annotation on the wearing state picture of the safety helmet to obtain an annotation picture, and the step of preprocessing the annotation picture by adopting an illumination equalization picture processing method to obtain a preprocessed picture, the method further comprises:
and carrying out cluster analysis on the detection frames of the marked picture by using a k-means clustering method, and randomly erasing picture regions of the marked picture by using a random erasing data enhancement method.
In the embodiment of the application, the k-means clustering method is used to perform cluster analysis on the detection frames of the marked pictures to obtain detection frame sizes suitable for identifying the wearing state of the safety helmet, so that the accuracy of identification is improved. In addition, the embodiment of the application adopts a random erasing data enhancement method to randomly erase picture regions of the marked picture, which can effectively improve the anti-occlusion capability of the model.
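The two preprocessing aids above might be sketched as follows, assuming the clustering is run on the (width, height) pairs of the labelled boxes. Function names and parameters are hypothetical; a production pipeline would normally use library implementations of k-means and random erasing.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans_anchors(wh, k=3, iters=20):
    """Cluster (width, height) pairs of labelled detection frames with
    plain k-means to pick k representative anchor-box sizes."""
    wh = wh.astype(float)
    centres = wh[rng.choice(len(wh), k, replace=False)].astype(float)
    for _ in range(iters):
        # distance from every box to every centre, then reassign
        d = np.linalg.norm(wh[:, None] - centres[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centres[j] = wh[labels == j].mean(axis=0)
    return centres

def random_erase(img, max_frac=0.3):
    """Random erasing: blank out a random rectangle (up to max_frac of
    each side) to simulate occlusion during training."""
    h, w = img.shape[:2]
    eh = rng.integers(1, max(2, int(h * max_frac)))
    ew = rng.integers(1, max(2, int(w * max_frac)))
    y = rng.integers(0, h - eh + 1)
    x = rng.integers(0, w - ew + 1)
    out = img.copy()
    out[y:y + eh, x:x + ew] = 0
    return out
```

YOLO-family implementations usually cluster with an IoU-based distance rather than the Euclidean one used here; the Euclidean version keeps the sketch short.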
As a specific implementation manner of the embodiment of the application, the method for processing the picture by adopting illumination equalization is used for preprocessing the marked picture to obtain a preprocessed picture, and the specific steps are as follows:
performing brightness equalization processing on the marked picture, reading three RGB color channels of the marked picture, and converting the color channels into YUV color space;
YUV is a color coding method divided into three components, where "Y" represents luminance and "U" and "V" represent chrominance, which together describe the color and saturation and so specify the color of a pixel. The embodiment of the application converts the marked picture into the YUV color space, which is favourable for equalizing the brightness information of the marked picture, and can effectively reduce the influence of various factors in a complex environment on safety helmet state identification.
Selecting Y channel information of a YUV color space, counting Y channel values of each pixel, and calculating according to the Y channel values to obtain probability of occurrence of preset brightness;
and obtaining a brightness histogram according to the occurrence probability of each brightness, and carrying out normalization processing on the brightness histogram to obtain a preprocessed picture.
Specifically, in one discrete picture {x}, the number of occurrences of luminance i is represented by n_i, that is, the occurrence probability of a pixel of luminance i in the picture is:

p_x(i) = n_i / n, 0 ≤ i < L

where L is the number of luminance levels in the picture (typically 256), n is the total number of pixels in the picture, and p_x(i) is in effect the image histogram at luminance i, normalized to [0, 1].

The cumulative distribution function corresponding to p_x is defined as:

cdf_x(i) = Σ_{j=0}^{i} p_x(j)

A transform of the form y = T(x) is then created such that the cumulative probability function of y is linear over the whole value range, where the transformation is defined by:

cdf_y(i) = iK

for some constant K.
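The Y-channel histogram equalization described by these formulas can be sketched as follows; the function name is illustrative, and only the luminance channel is shown (the U and V channels would be carried through unchanged).

```python
import numpy as np

def equalize_luminance(y, levels=256):
    """Histogram-equalize a Y (luminance) channel.
    Computes p_x(i) = n_i / n and cdf_x(i) = sum_{j<=i} p_x(j),
    then remaps each level through the cdf so the output
    cumulative distribution is approximately linear."""
    hist = np.bincount(y.ravel(), minlength=levels)
    p = hist / y.size                               # p_x(i)
    cdf = np.cumsum(p)                              # cdf_x(i)
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[y]                                   # apply y = T(x)
```

In practice this is what `cv2.equalizeHist` does on a single channel; applying it to Y after an RGB-to-YUV conversion equalizes brightness without shifting hue.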
as a specific implementation manner of the embodiment of the application, the neural network is a Yolov5 model, the neural network model is trained by adopting a preprocessing picture, and a safety helmet recognition model is obtained by changing the network structure of the neural network model and adding an attention mechanism, which is specifically as follows:
and adding a layer of SElayer into the network structure of the neural network model to obtain a BackBone fused with an attention mechanism, thereby obtaining the safety helmet recognition model.
Specifically, when the preprocessed picture is input, a layer of SElayer is added in the network structure of the neural network model so as to weigh the importance of different channel features. The added SElayer sequentially performs global average pooling and a fully connected layer, and then learns the correlation among different channels through a ReLU activation function and a second fully connected layer, so that channel attention can be obtained.
Referring to fig. 2, in the embodiment of the present application, the preprocessed picture sequentially passes through the Focus, CBL, CSP1_1, CBL, CSP1_3, CBL, CSP1_3, CBL and SPP modules; the SElayer is added to the last layer of the BackBone, and by processing the convolved feature map, a one-dimensional vector with as many entries as the number of channels is obtained as an evaluation score for each channel, and the evaluation scores are then applied to the corresponding channels respectively. After the preprocessed picture passes through the BackBone fused with the attention mechanism, the feature map is passed into the YOLOv5-Neck structure to obtain the safety helmet recognition model.
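A rough numpy sketch of the squeeze-and-excitation computation such an SElayer performs is shown below. The shapes, reduction ratio, and weight matrices are illustrative assumptions; in the embodiment this is a trainable module inside the Yolov5 BackBone, not a standalone function.

```python
import numpy as np

def se_layer(feat, w1, w2):
    """Squeeze-and-Excitation on one feature map.
    feat: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r)
    are the two fully connected layers (reduction ratio r).
    Squeeze by global average pooling, excite through FC-ReLU-FC-sigmoid,
    then rescale each channel by its learned attention weight."""
    z = feat.mean(axis=(1, 2))              # squeeze: one scalar per channel
    s = np.maximum(w1 @ z, 0.0)             # first FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))     # second FC + sigmoid -> weights
    return feat * s[:, None, None]          # channel-wise reweighting
```

The sigmoid output is the per-channel "evaluation score" the text describes; multiplying it back into the feature map is what concentrates the model's attention on informative channels.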
As a specific implementation manner of the embodiment of the present application, the method further includes:
generating a voice message reminder when the safety helmet wearing state is recognized as the state of not wearing the safety helmet;
and when the wearing state of the safety helmet is identified as the worn safety helmet, classifying the worn safety helmet by adopting a machine learning method to obtain the color type of the worn safety helmet.
Illustratively, the color classification of the headgear includes red, white, yellow, and blue.
As a specific implementation manner of the embodiment of the application, the worn safety helmet is classified by adopting a machine learning method, and the color categories of the worn safety helmet are obtained, specifically:
detecting the position of a safety helmet in the preprocessed picture;
manufacturing color class templates of a plurality of safety helmets;
The embodiment of the application selects four pictures of pure white, pure red, pure yellow and pure blue as the color category templates.
Selecting the upper half part of the worn safety helmet according to the position of the safety helmet, converting the upper half part of the worn safety helmet into a YUV color space, and respectively calculating the Euclidean distance from the upper half part of the worn safety helmet to a plurality of color class templates;
and respectively comparing the Euclidean distances with the distance threshold ranges, and obtaining the color category of the safety helmet according to the comparison result.
According to the embodiment of the application, four full-white, full-red, full-yellow and full-blue pictures are selected as the color class templates, and the color class of the safety helmet is accurately identified according to Euclidean distances between a plurality of color class templates and the upper half part of the position of the safety helmet.
As a specific implementation manner of the embodiment of the present application, comparing a plurality of euclidean distances with a distance threshold range, and obtaining a color class of a safety helmet according to a comparison result, specifically includes:
comparing the Euclidean distances with a distance threshold range respectively, and if at least one Euclidean distance is in the threshold range, selecting a color class template corresponding to the minimum Euclidean distance in the Euclidean distances as a final calculation result to obtain the color class of the safety helmet;
and if all Euclidean distances are not in the range of the distance threshold value, judging that the safety helmet is of other color types.
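The template-distance classification above might look like the following sketch. The YUV template values (approximate studio-range YCbCr values for the four pure colours) and the distance threshold are illustrative assumptions, not values specified by the embodiment.

```python
import numpy as np

# Hypothetical YUV templates for the four helmet colours
# (approximate studio-range YCbCr values for the pure colours).
TEMPLATES = {
    "white":  np.array([235.0, 128.0, 128.0]),
    "red":    np.array([ 81.0,  90.0, 240.0]),
    "yellow": np.array([210.0,  16.0, 146.0]),
    "blue":   np.array([ 41.0, 240.0, 110.0]),
}

def classify_helmet_colour(mean_yuv, threshold=120.0):
    """Compare the mean YUV of the helmet's upper half against every
    colour template; return the nearest colour if its Euclidean
    distance is within the threshold, otherwise 'other'."""
    dists = {c: np.linalg.norm(mean_yuv - t) for c, t in TEMPLATES.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] <= threshold else "other"
```

The `mean_yuv` input stands for the average YUV value of the upper half of the detected helmet region, computed after the RGB-to-YUV conversion described above.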
Fig. 3 is another flow chart of a method for identifying a helmet in a complex scenario according to an embodiment of the present application.
The embodiment of the application has the following beneficial effects:
According to the embodiment of the application, the influence of factors such as strong light, weak light and shielding in a complex scene on safety helmet state recognition is fully considered, and data preprocessing is performed by adopting the illumination-equalization picture processing method, so that the influence of actual environmental factors on the recognition result in a complex scene can be effectively reduced, and safety helmet recognition can be more accurate. The embodiment of the application modifies the network structure of the Yolov5 neural network and fuses the attention mechanism, so that the spatial attention of the model is more concentrated and the accuracy of identification is improved; the reliability of the safety helmet recognition model can also be improved by adding the TTA method.
Furthermore, after the worker in the complex scene is identified to wear the safety helmet, the color type of the safety helmet can be further distinguished by manufacturing different color type templates of the safety helmet and calculating the Euclidean distance between the position of the safety helmet and the color type templates, so that the management efficiency of the complex scene on the wearing state of the safety helmet is improved.
Referring to fig. 4, a second embodiment of the present application provides a helmet recognition device under a complex scene, including:
the data labeling module 10 is used for acquiring a wearing state picture of the safety helmet in a complex scene, and labeling the wearing state picture of the safety helmet with data to obtain a labeling picture;
In the embodiment of the application, the data labeling module 10 acquires a safety helmet wearing state picture in a complex scene as a safety helmet data set, wherein the safety helmet wearing state picture comprises pictures of worn safety helmets and pictures of unworn safety helmets. When the data are annotated, the two classes are kept in comparable numbers to ensure data balance.
The preprocessing module 20 is configured to preprocess the labeling picture by using a photo processing method of illumination equalization to obtain a preprocessed picture;
by means of the method, contrast of the picture can be effectively improved, details of the picture can be increased, influences caused by changes among different illuminations in the complex environment can be effectively resisted, and recognition accuracy of the model in the strong light or weak light environment can be effectively improved.
The model training module 30 is configured to train the neural network model by using the preprocessed image, and obtain a safety helmet recognition model by changing a network structure of the neural network model and adding an attention mechanism;
according to the embodiment of the application, the network structure of the neural network model is modified and the attention mechanism is fused, so that the safety helmet recognition model obtained through training is more concentrated in space, and the accuracy and the efficiency of safety helmet wearing state recognition can be improved.
The recognition module 40 is configured to collect a picture to be recognized in a complex scene, input the picture to be recognized into the helmet recognition model, and recognize the wearing state of the helmet of the worker in the picture to be recognized by using a TTA method.
Specifically, when the picture to be identified is used for identifying the wearing state of the safety helmet, the picture is randomly flipped and scaled by adopting the TTA method in the safety helmet identification model, and a final safety helmet wearing state identification result is obtained by comprehensively analysing the multiple results, so that the reliability of identifying the wearing state of the safety helmet can be effectively improved.
As a specific implementation of the embodiment of the present application, the preprocessing module 20 is further configured to:
and carrying out cluster analysis on the detection frames of the marked picture by using a k-means clustering method, and randomly erasing picture regions of the marked picture by using a random erasing data enhancement method.
In the embodiment of the application, the k-means clustering method is used to perform cluster analysis on the detection frames of the marked pictures to obtain detection frame sizes suitable for identifying the wearing state of the safety helmet, so that the accuracy of identification is improved. In addition, the embodiment of the application adopts a random erasing data enhancement method to randomly erase picture regions of the marked picture, which can effectively improve the anti-occlusion capability of the model.
As a specific implementation of the embodiment of the present application, the preprocessing module 20 is further configured to:
performing brightness equalization processing on the marked picture, reading three RGB color channels of the marked picture, and converting the color channels into YUV color space;
YUV is a color coding method divided into three components, where "Y" represents luminance and "U" and "V" represent chrominance, which together describe the color and saturation and so specify the color of a pixel. The embodiment of the application converts the marked picture into the YUV color space, which is favourable for equalizing the brightness information of the marked picture, and can effectively reduce the influence of various factors in a complex environment on safety helmet state identification.
Selecting Y channel information of a YUV color space, counting Y channel values of each pixel, and calculating according to the Y channel values to obtain probability of occurrence of preset brightness;
and obtaining a brightness histogram according to the occurrence probability of each brightness, and carrying out normalization processing on the brightness histogram to obtain a preprocessed picture.
Specifically, in one discrete picture {x}, the number of occurrences of luminance i is represented by n_i, that is, the occurrence probability of a pixel of luminance i in the picture is:

p_x(i) = n_i / n, 0 ≤ i < L

where L is the number of luminance levels in the picture (typically 256), n is the total number of pixels in the picture, and p_x(i) is in effect the image histogram at luminance i, normalized to [0, 1].

The cumulative distribution function corresponding to p_x is defined as:

cdf_x(i) = Σ_{j=0}^{i} p_x(j)

A transform of the form y = T(x) is then created such that the cumulative probability function of y is linear over the whole value range, where the transformation is defined by:

cdf_y(i) = iK

for some constant K.
as a specific implementation of the embodiment of the present application, the neural network is a Yolov5 model, and the model training module 30 is specifically configured to:
and adding a layer of SElayer into the network structure of the neural network model to obtain a BackBone fused with an attention mechanism, thereby obtaining the safety helmet recognition model.
Specifically, when the preprocessed picture is input, a layer of SElayer is added in the network structure of the neural network model so as to weigh the importance of different channel features. The added SElayer sequentially performs global average pooling and a fully connected layer, and then learns the correlation among different channels through a ReLU activation function and a second fully connected layer, so that channel attention can be obtained.
Referring to fig. 2, in the embodiment of the present application, the preprocessed picture sequentially passes through the Focus, CBL, CSP1_1, CBL, CSP1_3, CBL, CSP1_3, CBL and SPP modules; the SElayer is added after the last layer of the BackBone, where the convolved feature map is processed into a one-dimensional vector with as many entries as there are channels, used as an evaluation score for each channel, and the scores are then applied to the corresponding channels respectively. After the preprocessed picture passes through the BackBone fused with the attention mechanism, the feature map is passed into the YOLOv5 Neck structure to obtain the safety helmet recognition model.
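The SElayer forward pass described above can be sketched as follows (numpy only; the fully connected weight shapes and reduction ratio are illustrative assumptions, not the patent's exact configuration):

```python
import numpy as np

def se_layer(feature_map: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Squeeze-and-excitation forward pass, as a numpy sketch.

    feature_map: (C, H, W); w1: (C//r, C); w2: (C, C//r), where the
    reduction ratio r and the weight values are illustrative.
    """
    # squeeze: global average pooling -> one scalar per channel
    z = feature_map.mean(axis=(1, 2))              # shape (C,)

    # excitation: linear -> ReLU -> linear -> sigmoid,
    # yielding a per-channel evaluation score in (0, 1)
    h = np.maximum(w1 @ z, 0.0)                    # ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))            # sigmoid scores

    # apply each score to its corresponding channel
    return feature_map * s[:, None, None]
```

In training, w1 and w2 are learned, so the model discovers which channels matter for distinguishing helmets from background clutter.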
As a specific implementation of the embodiment of the present application, the identification module 40 is further configured to:
generating a voice message reminder when the safety helmet wearing state is recognized as the state of not wearing the safety helmet;
and when the wearing state of the safety helmet is identified as the worn safety helmet, classifying the worn safety helmet by adopting a machine learning method to obtain the color type of the worn safety helmet.
Illustratively, the color classes of the safety helmet include red, white, yellow and blue.
As a specific implementation manner of the embodiment of the application, the worn safety helmet is classified by adopting a machine learning method, and the color categories of the worn safety helmet are obtained, specifically:
detecting the position of a safety helmet in the preprocessed picture;
manufacturing color class templates of a plurality of safety helmets;
the embodiment of the application selects four full-white, full-red, full-yellow and full-blue pictures as color category templates
Selecting the upper half part of the worn safety helmet according to the position of the safety helmet, converting the upper half part of the worn safety helmet into a YUV color space, and respectively calculating the Euclidean distance from the upper half part of the worn safety helmet to a plurality of color class templates;
and respectively comparing the Euclidean distances with the distance threshold ranges, and obtaining the color category of the safety helmet according to the comparison result.
According to the embodiment of the application, four pictures, all white, all red, all yellow and all blue, are selected as the color class templates, and the color class of the safety helmet is accurately identified according to the Euclidean distances between the plurality of color class templates and the upper half of the safety helmet position.
As a specific implementation manner of the embodiment of the present application, comparing a plurality of euclidean distances with a distance threshold range, and obtaining a color class of a safety helmet according to a comparison result, specifically includes:
comparing the Euclidean distances with a distance threshold range respectively, and if at least one Euclidean distance is in the threshold range, selecting a color class template corresponding to the minimum Euclidean distance in the Euclidean distances as a final calculation result to obtain the color class of the safety helmet;
and if all Euclidean distances are not in the range of the distance threshold value, judging that the safety helmet is of other color types.
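The template-matching decision above can be sketched as follows (the template YUV values and the distance threshold are illustrative assumptions; the patent does not specify them):

```python
import numpy as np

# Illustrative YUV template values for the four color class templates
# (approximate BT.601 values for pure colors); assumptions, not the
# patent's actual templates.
TEMPLATES_YUV = {
    "red":    np.array([76.0, 90.0, 255.0]),
    "white":  np.array([255.0, 128.0, 128.0]),
    "yellow": np.array([226.0, 0.0, 149.0]),
    "blue":   np.array([29.0, 255.0, 107.0]),
}
THRESHOLD = 120.0  # assumed distance threshold range

def classify_helmet_color(mean_yuv: np.ndarray) -> str:
    """Compare the mean YUV of the helmet's upper half against each
    template; pick the nearest template if at least one Euclidean
    distance is within the threshold, otherwise report "other"."""
    distances = {name: float(np.linalg.norm(mean_yuv - t))
                 for name, t in TEMPLATES_YUV.items()}
    best = min(distances, key=distances.get)
    return best if distances[best] <= THRESHOLD else "other"
```

The upper half of the detected helmet box is used because it is the region least likely to be occluded by the wearer's face or shoulders.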
Fig. 3 is another flow chart of a method for identifying a helmet in a complex scenario according to an embodiment of the present application.
The embodiment of the application has the following beneficial effects:
according to the embodiment of the application, the influence of factors such as strong light, weak light and occlusion in a complex scene on safety helmet state recognition is fully considered, and data preprocessing is performed by adopting the illumination-equalization image processing method, so that the influence of actual environmental factors on the recognition result in a complex scene can be effectively reduced and the safety helmet recognition made more accurate; according to the embodiment of the application, the network structure of the Yolov5 neural network is modified and the attention mechanism is fused, so that the spatial attention of the model is more concentrated and the recognition accuracy is improved; the reliability of the safety helmet recognition model can be further improved by adding the TTA (test-time augmentation) method.
Furthermore, after the worker in the complex scene is identified to wear the safety helmet, the color type of the safety helmet can be further distinguished by manufacturing different color type templates of the safety helmet and calculating the Euclidean distance between the position of the safety helmet and the color type templates, so that the management efficiency of the complex scene on the wearing state of the safety helmet is improved.
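The TTA step mentioned above can be sketched as follows (a common horizontal-flip variant is assumed; the patent does not detail which augmentations it uses, and the `detect` callable is a hypothetical stand-in for the trained model):

```python
import numpy as np

def tta_detect(image: np.ndarray, detect) -> list:
    """Test-time augmentation sketch: run the detector on the original
    and a horizontally flipped copy, then pool the detections.

    `detect` is an assumed callable returning an iterable of
    (x1, y1, x2, y2, score) boxes.
    """
    w = image.shape[1]
    boxes = list(detect(image))

    # run on the mirrored image, then un-flip the box x-coordinates
    # back into the original frame before pooling
    for x1, y1, x2, y2, score in detect(image[:, ::-1]):
        boxes.append((w - x2, y1, w - x1, y2, score))
    return boxes
```

The pooled detections would then typically be merged by non-maximum suppression, which is omitted here for brevity.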
The foregoing is a preferred embodiment of the present application, and it should be noted that modifications and adaptations may be made by those skilled in the art without departing from the principles of the present application, and such modifications are also intended to fall within the scope of protection of the present application.
Claims (4)
1. The safety helmet identification method under the complex scene is characterized by comprising the following steps of:
acquiring a safety helmet wearing state picture in a complex scene, and performing data annotation on the safety helmet wearing state picture to obtain an annotation picture;
randomly erasing the picture area of the marked picture by using a random-erasing data enhancement method;
preprocessing the marked picture by adopting a picture processing method of illumination equalization to obtain a preprocessed picture;
training a neural network model by adopting the preprocessing picture, and obtaining a safety helmet recognition model by changing the network structure of the neural network model and adding an attention mechanism;
acquiring a picture to be identified in a complex scene, inputting the picture to be identified into the safety helmet identification model, and identifying the wearing state of the safety helmet of a worker in the picture to be identified by adopting a TTA method;
the method for processing the picture by adopting illumination equalization is used for preprocessing the marked picture to obtain a preprocessed picture, and specifically comprises the following steps:
performing brightness equalization processing on the marked picture, reading three RGB color channels of the marked picture, and converting the color channels into YUV color space;
selecting Y channel information of the YUV color space, counting Y channel values of each pixel, and calculating according to the Y channel values to obtain probability of occurrence of preset brightness;
obtaining a brightness histogram according to the occurrence probability of each brightness, and carrying out normalization processing on the brightness histogram to obtain a preprocessed picture;
the neural network is a Yolov5 model;
generating a voice message reminder when the safety helmet wearing state is recognized as the state of not wearing the safety helmet;
when the wearing state of the safety helmet is identified as the worn safety helmet, classifying the worn safety helmet by adopting a machine learning method to obtain the color class of the worn safety helmet;
the method for classifying the worn safety helmet by adopting the machine learning method is characterized in that the color class of the worn safety helmet is obtained, and specifically:
detecting the position of a safety helmet in the preprocessed picture;
manufacturing color class templates of a plurality of safety helmets;
selecting the upper half part of the worn safety helmet according to the position of the safety helmet, converting the upper half part of the worn safety helmet into the YUV color space, and respectively calculating the Euclidean distance from the upper half part of the worn safety helmet to a plurality of color class templates;
comparing the Euclidean distances with a distance threshold range respectively, and obtaining the color class of the safety helmet according to the comparison result;
the comparison is carried out on a plurality of Euclidean distances and a distance threshold range respectively, and the color class of the safety helmet is obtained according to the comparison result, specifically:
comparing the Euclidean distances with a distance threshold range respectively, and if at least one Euclidean distance is in the threshold range, selecting a color class template corresponding to the minimum Euclidean distance in the Euclidean distances as a final calculation result to obtain the color class of the safety helmet;
and if all Euclidean distances are not in the distance threshold range, judging that the safety helmet is of other color types.
2. The method for identifying a helmet in a complex scene according to claim 1, wherein, between "acquiring a helmet wearing state picture in the complex scene, performing data annotation on the helmet wearing state picture to obtain an annotation picture" and "preprocessing the annotation picture by using a picture processing method of illumination equalization to obtain a preprocessed picture", further comprising:
and carrying out cluster analysis on the detection frame of the marked picture by using a k-means clustering method.
3. The method for identifying the safety helmet in the complex scene according to claim 1, wherein the training of the neural network model by using the preprocessed image is performed by changing a network structure of the neural network model and adding an attention mechanism, so as to obtain the safety helmet identification model, which is specifically:
and adding a layer of SElayer into the network structure of the neural network model, and adopting a BackBone fused with an attention mechanism to obtain the safety helmet recognition model.
4. A helmet recognition device in a complex scene, comprising:
the data labeling module is used for acquiring a safety helmet wearing state picture in a complex scene and labeling the safety helmet wearing state picture with data to obtain a labeling picture;
the preprocessing module is used for preprocessing the marked picture by adopting a picture processing method of illumination equalization to obtain a preprocessed picture; the random-erasing data enhancement method is also used for randomly erasing the picture area of the marked picture;
the model training module is used for training the neural network model by adopting the preprocessing picture, and obtaining a safety helmet recognition model by changing the network structure of the neural network model and adding an attention mechanism;
the identification module is used for acquiring pictures to be identified in a complex scene, inputting the pictures to be identified into the safety helmet identification model, and identifying the wearing state of the safety helmet of workers in the pictures to be identified by adopting a TTA method; the voice message reminding device is also used for generating a voice message reminding when the fact that the safety helmet is not worn in the wearing state is recognized; when the wearing state of the safety helmet is identified as the worn safety helmet, classifying the worn safety helmet by adopting a machine learning method to obtain the color class of the worn safety helmet;
the method for processing the picture by adopting illumination equalization is used for preprocessing the marked picture to obtain a preprocessed picture, and specifically comprises the following steps:
performing brightness equalization processing on the marked picture, reading three RGB color channels of the marked picture, and converting the color channels into YUV color space;
selecting Y channel information of the YUV color space, counting Y channel values of each pixel, and calculating according to the Y channel values to obtain probability of occurrence of preset brightness;
obtaining a brightness histogram according to the occurrence probability of each brightness, and carrying out normalization processing on the brightness histogram to obtain a preprocessed picture;
the neural network is a Yolov5 model;
the method for classifying the worn safety helmet by adopting the machine learning method is characterized in that the color class of the worn safety helmet is obtained, and specifically:
detecting the position of a safety helmet in the preprocessed picture;
manufacturing color class templates of a plurality of safety helmets;
selecting the upper half part of the worn safety helmet according to the position of the safety helmet, converting the upper half part of the worn safety helmet into the YUV color space, and respectively calculating the Euclidean distance from the upper half part of the worn safety helmet to a plurality of color class templates;
comparing the Euclidean distances with a distance threshold range respectively, and obtaining the color class of the safety helmet according to the comparison result;
the comparison is carried out on a plurality of Euclidean distances and a distance threshold range respectively, and the color class of the safety helmet is obtained according to the comparison result, specifically:
comparing the Euclidean distances with a distance threshold range respectively, and if at least one Euclidean distance is in the threshold range, selecting a color class template corresponding to the minimum Euclidean distance in the Euclidean distances as a final calculation result to obtain the color class of the safety helmet;
and if all Euclidean distances are not in the distance threshold range, judging that the safety helmet is of other color types.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110579308.6A CN113408365B (en) | 2021-05-26 | 2021-05-26 | Safety helmet identification method and device under complex scene |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113408365A CN113408365A (en) | 2021-09-17 |
CN113408365B true CN113408365B (en) | 2023-09-08 |
Family
ID=77675373
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110579308.6A Active CN113408365B (en) | 2021-05-26 | 2021-05-26 | Safety helmet identification method and device under complex scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113408365B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103325114A (en) * | 2013-06-13 | 2013-09-25 | 同济大学 | Target vehicle matching method based on improved vision attention model |
KR101715001B1 (en) * | 2015-12-30 | 2017-03-23 | 연세대학교 산학협력단 | Display system for safety evaluation in construction sites using of wearable device, and thereof method |
CN109766869A (en) * | 2019-01-23 | 2019-05-17 | 中国建筑第八工程局有限公司 | A kind of artificial intelligent detecting method of safety cap wearing based on machine vision |
CN109886325A (en) * | 2019-02-01 | 2019-06-14 | 辽宁工程技术大学 | A kind of stencil-chosen and acceleration matching process of non linear color space classification |
CN110070033A (en) * | 2019-04-19 | 2019-07-30 | 山东大学 | Safety cap wearing state detection method in a kind of power domain dangerous work region |
CN110188724A (en) * | 2019-06-05 | 2019-08-30 | 中冶赛迪重庆信息技术有限公司 | The method and system of safety cap positioning and color identification based on deep learning |
WO2020019673A1 (en) * | 2018-07-25 | 2020-01-30 | 深圳云天励飞技术有限公司 | Construction site monitoring method and device based on image analysis, and readable storage medium |
CN111680682A (en) * | 2020-06-12 | 2020-09-18 | 哈尔滨理工大学 | Method for identifying safety helmet in complex scene |
CN112184773A (en) * | 2020-09-30 | 2021-01-05 | 华中科技大学 | Helmet wearing detection method and system based on deep learning |
CN112232307A (en) * | 2020-11-20 | 2021-01-15 | 四川轻化工大学 | Method for detecting wearing of safety helmet in night vision environment |
CN112381005A (en) * | 2020-11-17 | 2021-02-19 | 温州大学 | Safety helmet detection system for complex scene |
CN112560741A (en) * | 2020-12-23 | 2021-03-26 | 中国石油大学(华东) | Safety wearing detection method based on human body key points |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170132821A1 (en) * | 2015-11-06 | 2017-05-11 | Microsoft Technology Licensing, Llc | Caption generation for visual media |
- 2021-05-26 CN CN202110579308.6A patent/CN113408365B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN113408365A (en) | 2021-09-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111126325B (en) | Intelligent personnel security identification statistical method based on video | |
CN104063722B (en) | A kind of detection of fusion HOG human body targets and the safety cap recognition methods of SVM classifier | |
CN108009473A (en) | Based on goal behavior attribute video structural processing method, system and storage device | |
CN111881730A (en) | Wearing detection method for on-site safety helmet of thermal power plant | |
CN112149761B (en) | Electric power intelligent construction site violation detection method based on YOLOv4 improved algorithm | |
CN110378324B (en) | Quality dimension-based face recognition algorithm evaluation method | |
CN106384117B (en) | A kind of vehicle color identification method and device | |
CN109740572A (en) | A kind of human face in-vivo detection method based on partial color textural characteristics | |
CN108108760A (en) | A kind of fast human face recognition | |
CN111522951A (en) | Sensitive data identification and classification technical method based on image identification | |
CN107729940A (en) | A kind of user bill big data base station connection information customer relationship estimates method | |
CN115035088A (en) | Helmet wearing detection method based on yolov5 and posture estimation | |
CN111260645A (en) | Method and system for detecting tampered image based on block classification deep learning | |
CN113221667B (en) | Deep learning-based face mask attribute classification method and system | |
CN115690234A (en) | Novel optical fiber color line sequence detection method and system | |
CN108510483B (en) | Method for generating color image tampering detection by adopting VLAD coding and SVM calculation | |
CN113408365B (en) | Safety helmet identification method and device under complex scene | |
CN113297913A (en) | Method for identifying dressing specification of distribution network field operating personnel | |
CN115273150A (en) | Novel identification method and system for wearing safety helmet based on human body posture estimation | |
CN109766860A (en) | Method for detecting human face based on improved Adaboost algorithm | |
CN113450369B (en) | Classroom analysis system and method based on face recognition technology | |
CN113076916B (en) | Dynamic facial expression recognition method and system based on geometric feature weighted fusion | |
CN114972888A (en) | Communication maintenance tool identification method based on YOLO V5 | |
CN111414825B (en) | Method for detecting wearing of safety helmet | |
CN113642473A (en) | Mining coal machine state identification method based on computer vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |