CN113963162A - Helmet wearing identification method and device, computer equipment and storage medium - Google Patents

Info

Publication number: CN113963162A
Application number: CN202111341753.5A
Authority: CN (China)
Prior art keywords: image frame, image, person, identified, safety helmet
Legal status: Pending (the status is an assumption and not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 曹鹏, 江海, 唐力
Current assignee: Tianshengqiao Bureau of Extra High Voltage Power Transmission Co (listed assignees may be inaccurate; Google has not performed a legal analysis)
Original assignee: Tianshengqiao Bureau of Extra High Voltage Power Transmission Co
Application filed by Tianshengqiao Bureau of Extra High Voltage Power Transmission Co
Priority to CN202111341753.5A (the priority date is an assumption and not a legal conclusion)
Publication of CN113963162A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a safety helmet wearing identification method and apparatus, computer equipment, and a storage medium. The method comprises the following steps: acquiring an image frame to be identified; performing image segmentation on the image frame to obtain the face and head images it contains; recognizing each face and head image with a trained helmet wearing identification model to determine whether the person in the frame is wearing a safety helmet correctly; and sending the identification result to a prompter, which issues an information prompt when the helmet wearing condition of a person in the frame does not meet the preset condition. With this method, whether persons within the camera's monitoring range are wearing safety helmets, and wearing them correctly, can be judged in real time, improving the intelligence of helmet-wearing supervision.

Description

Helmet wearing identification method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, a computer device, a storage medium, and a computer program product for identifying wearing of a safety helmet.
Background
A safety helmet is a self-protection tool that must be worn in many special working environments; a person may not enter the corresponding hazardous work area without wearing one correctly. Supervision of helmet wearing has traditionally relied on workers' own awareness, and in many cases a helmet is worn but worn incorrectly, so that it can easily fall off. Identifying whether the helmet is worn correctly is therefore of great importance.
Disclosure of Invention
In view of the above, it is necessary to provide a method, an apparatus, a computer device, a computer readable storage medium and a computer program product for identifying the wearing of a safety helmet.
In a first aspect, the present application provides a method for headgear wear identification. The method comprises the following steps:
acquiring an image frame to be identified;
carrying out image segmentation processing on the image frame to be recognized to obtain a human face head image in the image frame to be recognized;
identifying the human face head image through the trained safety helmet wearing identification model to obtain an identification result of whether the person in the image frame to be identified correctly wears the safety helmet;
and sending the identification result to a prompter so that the prompter carries out information prompt when the wearing condition of the safety helmet of the person in the image frame to be identified does not accord with the preset condition.
In one embodiment, the acquiring the image frame to be identified includes:
extracting an initial image frame from a video and acquiring red, green and blue tristimulus values of the initial image frame;
determining a brightness value of the initial image frame based on the red, green and blue tristimulus values;
and when the brightness value of the initial image frame is greater than the brightness threshold value, taking the initial image frame as an image frame to be identified.
In one embodiment, after the brightness value of the initial image frame is greater than the brightness threshold, the method further includes:
when the brightness value of the initial image frame is within a preset brightness range, performing image enhancement processing on the initial image frame to obtain an enhanced image frame;
carrying out personnel identification processing on the enhanced image frame to obtain a personnel identification result of the enhanced image frame;
and when the person is identified to exist in the enhanced image frame, taking the initial image frame as an image frame to be identified.
In one embodiment, the performing the person identification process on the enhanced image frame to obtain a person identification result of the enhanced image frame includes:
acquiring a background environment image without people;
comparing the pixel values of the pixels of the enhanced image frame and the background environment image one by one to obtain the number of changed pixel points;
and when the proportion of the changed pixel number to the total pixel number is larger than a proportion threshold value, judging that personnel exists in the enhanced image frame.
In one embodiment, the image segmentation processing on the image frame to be recognized to obtain a face image in the image frame to be recognized includes:
cutting off the environment background image determined from the image frame to be identified to obtain a cut image frame;
determining a zero coordinate point of the cutting image frame;
and based on a coordinate graph formed by the zero point coordinate points, segmenting the cut image frame to obtain a human face head image in the image frame to be recognized.
In one embodiment, the recognizing the facial image through the trained helmet wearing recognition model to obtain a recognition result of whether the person in the image frame to be recognized correctly wears a helmet comprises:
when a person is identified to be not wearing a safety helmet in the image frame to be identified, determining that an identification result is a first-class identification result;
when the fact that a person wears a safety helmet in the image frame to be recognized but the face of the person does not have a safety helmet rope is recognized, determining that the recognition result is a second type of recognition result;
and when all the people in the image frame to be recognized are recognized to wear safety helmets and the face of the people has safety helmet ropes, determining that the recognition result is a third type recognition result.
In a second aspect, the present application further provides a headgear wearing identification device. The device comprises:
the acquisition module is used for acquiring an image frame to be identified;
the segmentation module is used for carrying out image segmentation processing on the image frame to be recognized to obtain a human face head image in the image frame to be recognized;
the recognition module is used for recognizing the face head image through the trained helmet wearing recognition model to obtain a recognition result of whether the person in the image frame to be recognized correctly wears the helmet or not;
and the sending module is used for sending the identification result to a prompter so as to prompt information when the wearing condition of the safety helmet of the person in the image frame to be identified does not accord with the preset condition.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the following steps when executing the computer program:
acquiring an image frame to be identified;
carrying out image segmentation processing on the image frame to be recognized to obtain a human face head image in the image frame to be recognized;
identifying the human face head image through the trained safety helmet wearing identification model to obtain an identification result of whether the person in the image frame to be identified correctly wears the safety helmet;
and sending the identification result to a prompter so that the prompter carries out information prompt when the wearing condition of the safety helmet of the person in the image frame to be identified does not accord with the preset condition.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring an image frame to be identified;
carrying out image segmentation processing on the image frame to be recognized to obtain a human face head image in the image frame to be recognized;
identifying the human face head image through the trained safety helmet wearing identification model to obtain an identification result of whether the person in the image frame to be identified correctly wears the safety helmet;
and sending the identification result to a prompter so that the prompter carries out information prompt when the wearing condition of the safety helmet of the person in the image frame to be identified does not accord with the preset condition.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprising a computer program which when executed by a processor performs the steps of:
acquiring an image frame to be identified;
carrying out image segmentation processing on the image frame to be recognized to obtain a human face head image in the image frame to be recognized;
identifying the human face head image through the trained safety helmet wearing identification model to obtain an identification result of whether the person in the image frame to be identified correctly wears the safety helmet;
and sending the identification result to a prompter so that the prompter carries out information prompt when the wearing condition of the safety helmet of the person in the image frame to be identified does not accord with the preset condition.
According to the method, the device, the computer equipment, the storage medium and the computer program product for identifying the wearing of the safety helmet, the image segmentation processing is carried out on the image frame to be identified, so that the face and head image in the image frame to be identified is obtained; identifying the image of the head of the face through the trained helmet wearing identification model to obtain an identification result of whether the person in the image frame to be identified correctly wears the helmet or not; and sending the identification result to a prompter so that the prompter carries out information prompt when the wearing condition of the safety helmet of the person in the image frame to be identified does not accord with the preset condition. The method can realize real-time judgment of whether the person in the camera monitoring range wears the safety helmet or not and whether the person wears the safety helmet correctly or not, thereby improving the monitoring intelligence of the correct wearing of the safety helmet and reducing safety accidents caused by the fact that the person does not wear the safety helmet or does not wear the safety helmet correctly.
Drawings
FIG. 1 is a block diagram of an overall framework of a system for identifying whether a person is wearing a crash helmet correctly in a multi-person scenario, in accordance with one embodiment;
FIG. 2 is a diagram illustrating the components of the smart recognition host, according to one embodiment;
FIG. 3 is a schematic flow chart illustrating a method for identifying the wearing of a protective helmet in one embodiment;
FIG. 4 is a software flowchart of an identification system for identifying whether a person is wearing a hard hat in a multiple person scenario in an alternative embodiment;
FIG. 5 is a diagram illustrating the structure of an intelligent recognition algorithm and a matching network in one embodiment;
FIG. 5.1 is a detailed exploded view of the Focus portion of FIG. 5;
FIG. 5.2 is a detailed exploded view of the CSP2_X portion of FIG. 5;
FIG. 5.3 is a detailed exploded view of the CSP1_X portion of FIG. 5;
FIG. 5.4 is a detailed exploded view of the SPP portion of FIG. 5;
FIG. 5.5 is a detailed exploded view of the CBL portion of FIG. 5;
FIG. 5.6 is a detailed exploded view of the final output in one embodiment;
FIG. 6 is a flowchart of the system for identifying whether a person is wearing a hard hat in a multi-person scenario, under an embodiment;
FIG. 7 is a block diagram showing the structure of a helmet wearing identification device according to an embodiment;
FIG. 8 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
With the development of artificial intelligence, image and video recognition and classification technologies have matured, and using an identification device with intelligent recognition capability to supervise whether persons entering a special working scene are wearing safety helmets correctly is a time-saving and labor-saving choice that avoids the laxity or personal bias of human supervision. Although artificial intelligence is already widely applied to image classification, face recognition, and similar fields, most existing helmet identification only judges whether a helmet is worn at all, and runs on large systems (e.g. Wintel-class platforms). Algorithms that judge both whether a person is wearing a helmet and whether it is worn correctly are rare, and no case of implementing such an algorithm on a small embedded system has been found. For reasons of cost and ease of installation, more and more special working environments require flexibly deployable, low-cost intelligent monitoring equipment built from small embedded systems. The market currently lacks a fully functional algorithm and device for identifying whether a person is wearing a safety helmet correctly, so an identification system for correct helmet wearing in multi-person scenes is needed to meet this new situation.
Therefore, a method, system, and apparatus are needed that can identify, in a multi-person scene and in near real time, whether each person within the camera's monitoring range is wearing a safety helmet and wearing it correctly, thereby improving the intelligence of helmet-wearing supervision and reducing safety accidents caused by helmets that are not worn or not worn correctly.
To achieve the above object, the present application provides a system for recognizing whether a person is correctly wearing a safety helmet, comprising: a video image acquisition camera, an intelligent recognition algorithm, a recognition matching framework model, a hardware platform that runs the intelligent recognition software, a display screen, and a voice prompt module; the overall architecture is shown in FIG. 1.
the video image acquisition camera is a high-definition camera, pixels can be selected to be more than 200 thousands, and the storage codes are as follows: h.265 or H.264; the video image acquisition camera is used for acquiring video images of people entering the monitoring area.
The hardware platform that runs the intelligent recognition software is an embedded platform whose structure is shown in FIG. 2. It mainly comprises an embedded ARM CPU, an embedded GPU/NPU artificial intelligence parallel processor, a memory unit, a flash storage unit, a USB interface, and a power interface.
The voice prompt module is an alarm horn; a horn with an amplifier power of 5 watts or more may be selected. The module has the further advantages of a simple structure and low cost.
The monitoring display screen is an optional module with a resolution of 1080p or higher, used to display the video images captured by the camera. The system is simple in structure, easy to operate, and low in cost; it can be deployed in a variety of indoor and outdoor work areas, greatly improves the accuracy of intelligently identifying whether a person is correctly wearing a safety helmet in environments that require one, and enforces the requirement that only persons wearing a helmet correctly may enter such environments.
The helmet wearing identification method provided by the embodiments of the application can be applied to the intelligent identification terminal in FIG. 1. A video image intelligent processing module in the terminal acquires an image frame to be identified from the high-definition camera, performs image segmentation on it to obtain the face and head images it contains, recognizes each face and head image with the trained helmet wearing identification model to determine whether the person is wearing a safety helmet correctly, and then sends the identification result to a prompter (the voice prompt module in FIG. 1), which issues an information prompt when a person's helmet wearing condition does not meet the preset condition.
In one embodiment, as shown in fig. 3, there is provided a method for identifying wearing of a helmet, the method comprising the steps of:
step 310, acquiring an image frame to be identified.
The image frame to be recognized may include a human face image of a person, or may include human face images of a plurality of persons.
In a specific implementation, the intelligent identification terminal samples image frames from the video recorded by the camera to obtain image frames to be identified. More specifically, to avoid wasting computing resources on frames that contain no person or are too dark, each sampled frame is screened first, and only frames that meet the preset conditions are passed on as image frames to be identified for helmet wearing identification.
And 320, performing image segmentation processing on the image frame to be recognized to obtain a human face head image in the image frame to be recognized.
In the specific implementation, before the image segmentation processing is performed on the image frame to be recognized, an environment background image in the image frame to be recognized may be cut out from the image frame to be recognized to obtain a cut image frame, then zero-point coordinate points of the cut image frame are determined, and the cut image frame is segmented based on a coordinate graph formed by the zero-point coordinate points to obtain at least one human face head image in the image frame to be recognized.
And 330, identifying the image of the head of the face through the trained helmet wearing identification model to obtain an identification result of whether the person in the image frame to be identified correctly wears the helmet.
The recognition results fall into three categories: a person is not wearing a safety helmet; a person is wearing a safety helmet but there is no helmet rope on the face; and all persons are wearing safety helmets with the helmet rope on the face.
In a specific implementation, sample sets of several types of face and head images can be collected in advance, for example: images of persons not wearing safety helmets, of persons wearing safety helmets but without the helmet rope on the face, and of persons wearing safety helmets with the helmet rope on the face. Each sample set is divided into training samples and test samples according to a certain proportion; the training samples are used to train the helmet wearing recognition model, and the test samples are used to test the trained model. When the model's accuracy reaches a preset value, the trained helmet wearing recognition model is obtained. After the face and head image of an image frame to be recognized is obtained, it can then be input into the trained model to obtain the recognition result of whether the person in the frame is wearing a safety helmet correctly.
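The sample-set preparation described above can be sketched as follows. The three label names, the 80/20 split ratio, the shuffle seed, and the 0.95 acceptance threshold are illustrative assumptions; the text fixes neither the proportion nor the preset precision value.

```python
import random

# Illustrative labels for the three sample-set types named in the text.
LABELS = ("no_helmet", "helmet_no_rope", "helmet_with_rope")

def split_samples(samples, train_ratio=0.8, seed=0):
    """Divide a labelled face-and-head image sample set into training and
    test samples 'according to a certain proportion' (80/20 assumed here)."""
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

def accept_model(test_accuracy, required=0.95):
    """The trained model is kept only once its accuracy on the test samples
    reaches the preset value (0.95 is an assumed placeholder)."""
    return test_accuracy >= required
```

A model trained on `split_samples(...)[0]` would be evaluated on the held-out portion and retrained until `accept_model` returns True.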
And 340, sending the identification result to a prompter so that the prompter carries out information prompt when the wearing condition of the safety helmet of the person in the image frame to be identified does not accord with the preset condition.
The prompter is a device with a prompting function, and can be a loudspeaker, a sound box, a signal lamp, a display and the like.
In a specific implementation, after the intelligent identification terminal obtains the recognition result of whether the person in the frame is correctly wearing a safety helmet, it sends the result to the prompter so that the prompter can issue an information prompt when a person's helmet wearing condition does not meet the preset condition. More specifically, the prompter can be configured to issue a prompt when a person in the frame is not wearing a safety helmet, or is wearing one but has no helmet rope on the face. When an alarm horn or speaker is used as the prompter, a voice broadcast can be issued, for example: "Safety helmet not worn; please wear your safety helmet correctly."
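The mapping from recognition result to prompt can be sketched as below; the class names and broadcast wording are illustrative, since the text only gives one example message for the not-worn case.

```python
RESULT_PROMPTS = {
    # Result class -> voice broadcast message; only the two
    # non-compliant classes trigger an information prompt.
    "no_helmet": "Safety helmet not worn; please wear your safety helmet correctly.",
    "helmet_no_rope": "Helmet rope not fastened; please wear your safety helmet correctly.",
    "helmet_with_rope": None,  # compliant: no prompt issued
}

def prompt_for(result):
    """Return the message the prompter should broadcast for a given
    recognition result class, or None when no prompt is needed."""
    return RESULT_PROMPTS[result]
```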
In the method for identifying the wearing of the safety helmet, the image segmentation processing is carried out on the image frame to be identified, so that the face and head image in the image frame to be identified is obtained; identifying the image of the head of the face through the trained helmet wearing identification model to obtain an identification result of whether the person in the image frame to be identified correctly wears the helmet or not; and sending the identification result to a prompter so that the prompter carries out information prompt when the wearing condition of the safety helmet of the person in the image frame to be identified does not accord with the preset condition. The method can realize real-time judgment of whether the person in the camera monitoring range wears the safety helmet or not and whether the person wears the safety helmet correctly or not, thereby improving the monitoring intelligence of the correct wearing of the safety helmet and reducing safety accidents caused by the fact that the person does not wear the safety helmet or does not wear the safety helmet correctly.
In an exemplary embodiment, acquiring an image frame to be identified includes: extracting an initial image frame from a video and acquiring red, green and blue tristimulus values of the initial image frame; determining a brightness value of an initial image frame based on the red, green and blue tristimulus values; and when the brightness value of the initial image frame is greater than the brightness threshold value, taking the initial image frame as an image frame to be identified.
In a specific implementation, the brightness threshold may be 50. After an initial image frame is sampled from the video, its brightness value Y is calculated as Y = (0.257 × R) + (0.504 × G) + (0.098 × B) + 16, where R, G, and B are the component values of the frame in RGB format. Only a sampled frame whose brightness value Y is greater than the threshold 50 is judged a usable, qualified image frame; if Y is below the threshold, the frame is discarded and the next frame is sampled. The sampling frequency is one image frame extracted for every 10 frames of video.
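The screening step can be sketched as follows, representing each frame as a list of (R, G, B) pixel tuples; the function names and the use of the mean luma over the frame are illustrative assumptions.

```python
def luma(r, g, b):
    """Brightness value Y from RGB, per the formula in the text
    (BT.601-style studio-swing coefficients)."""
    return (0.257 * r) + (0.504 * g) + (0.098 * b) + 16

def screen_frames(frames, brightness_threshold=50, sample_every=10):
    """Sample one frame in every ten and keep only frames whose mean
    luma exceeds the threshold; dimmer frames are discarded."""
    kept = []
    for idx, frame in enumerate(frames):
        if idx % sample_every != 0:
            continue  # extraction frequency: 1 frame per 10 frames
        y = sum(luma(r, g, b) for r, g, b in frame) / len(frame)
        if y > brightness_threshold:
            kept.append(frame)  # usable, qualified image frame
    return kept
```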
In the embodiment, the image frames extracted from the video are filtered and screened through the brightness values, so that the quality of the obtained image frames to be identified can be improved, and the waste of computing resources caused by the fact that the brightness of the image frames to be identified is too low and cannot be identified is avoided.
In an exemplary embodiment, after the brightness value of the initial image frame is greater than the brightness threshold, the method further includes: when the brightness value of the initial image frame is within a preset brightness range, performing image enhancement processing on the initial image frame to obtain an enhanced image frame; carrying out personnel identification processing on the enhanced image frame to obtain a personnel identification result of the enhanced image frame; and when the person is identified to exist in the enhanced image frame, taking the initial image frame as the image frame to be identified.
In a specific implementation, the preset brightness range may be 50-100: when the brightness value Y of the initial image frame satisfies 50 ≤ Y < 100, image enhancement is applied to obtain an enhanced image frame. More specifically, the enhancement formula is g(i, j) = a × f(i, j) + b, where f(i, j) is the pixel value at coordinate (i, j) in the initial frame and g(i, j) is the pixel value at the same position after enhancement; a may take the value 1.8 and b the value 12. After the enhanced frame is obtained, it is further checked for the presence of persons; image segmentation is performed only when a person is present in the enhanced frame, and the frame is discarded otherwise.
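The linear enhancement step can be sketched as follows on a 2D grayscale frame; the clipping to the 8-bit range [0, 255] is our addition, as the text gives only the linear formula.

```python
def enhance(frame, a=1.8, b=12):
    """Apply g(i, j) = a * f(i, j) + b to every pixel of a 2D frame,
    with a = 1.8 and b = 12 as in the text. Results are rounded and
    clipped to [0, 255] (the clipping is an assumption)."""
    return [[min(255, max(0, round(a * p + b))) for p in row] for row in frame]
```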
In this embodiment, when the brightness value of the initial image frame is within the preset brightness range, the initial image frame is subjected to image enhancement processing to improve the brightness of the initial image frame, so as to further perform personnel identification processing on the enhanced image frame, and whether the enhanced image frame needs to be subjected to helmet wearing identification is determined according to the identification result.
In an exemplary embodiment, the performing the person identification process on the enhanced image frame to obtain a person identification result of the enhanced image frame includes: acquiring a background environment image without people; comparing the pixel values of the pixels of the enhanced image frame and the background environment image one by one to obtain the number of changed pixel points; and when the proportion of the changed pixel number to the total pixel number is larger than a proportion threshold value, judging that personnel exists in the enhanced image frame.
In a specific implementation, after the initial image frame has been judged qualified and enhanced as required, whether persons are present in the enhanced image frame can be judged as follows. First, a person-free background environment image is captured in advance; it can be finely adjusted over time as the system runs. The enhanced frame is then compared with the background image pixel by pixel, counting the number X of changed pixel points, i.e. the pixels (i, j) for which f'(i, j) − f(i, j) ≠ 0, where f' denotes the enhanced frame and f the background image. When the ratio of X to the total number of pixels exceeds a proportional threshold, for example 15%, a person is judged to be present and the enhanced frame can be used as an image frame to be identified; otherwise the frame is discarded and the next frame is sampled.
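This changed-pixel test can be sketched as follows over flat pixel lists; the `tol` parameter, which lets small sensor noise count as unchanged, is our addition.

```python
def person_present(enhanced, background, ratio_threshold=0.15, tol=0):
    """Compare the enhanced frame against the person-free background
    image pixel by pixel; a person is judged present when the fraction
    of changed pixels exceeds the ratio threshold (15% in the text).
    Both images are equal-length flat lists of pixel values."""
    changed = sum(1 for p, q in zip(enhanced, background) if abs(p - q) > tol)
    return changed / len(enhanced) > ratio_threshold
```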
In this embodiment, person identification processing is performed on the enhanced image frame so that whether helmet wearing identification needs to be performed on it can be determined from the identification result, avoiding the waste of computing resources that would occur if helmet wearing identification were still performed when no person exists in the enhanced image frame.
In an exemplary embodiment, performing the image segmentation processing on the image frame to be identified to obtain a face and head image in the image frame to be identified includes: cutting out the environment background image determined from the image frame to be identified to obtain a cut image frame; determining the zero-point coordinate points of the cut image frame; and segmenting the cut image frame based on the coordinate graph formed by the zero-point coordinate points to obtain the face and head image in the image frame to be identified.
In a specific implementation, the segmentation method for performing image segmentation on the image frame to be identified may specifically be: first subtracting the environment background image from the selected image frame to be identified to obtain a cut image frame; determining the pixel coordinate points whose values are equal to zero in the cut image frame as zero-point coordinate points; then segmenting the cut image frame according to the block coordinate graph formed by these zero-value pixel coordinate points to obtain the face and head image in the image frame to be identified; and finally inputting the obtained face and head image into the helmet wearing identification model for identification.
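One simplified reading of the zero-point coordinate segmentation above can be sketched as follows: the background is subtracted, pixels whose difference is zero mark background, and the bounding box of the non-zero (changed) region is cut out as the candidate face and head image. The helper name and shapes are illustrative assumptions, not the embodiment's exact procedure.

```python
import numpy as np

def segment_foreground(frame, background):
    """Subtract the environment background from the frame; zero-valued
    difference pixels are treated as background, and the bounding box
    of the non-zero region is cropped out as the candidate region
    (a simplified sketch of the described segmentation)."""
    diff = frame.astype(np.int16) - background.astype(np.int16)
    rows, cols = np.nonzero(diff)
    if rows.size == 0:
        return None  # nothing changed: no person region to cut out
    top, bottom = rows.min(), rows.max() + 1
    left, right = cols.min(), cols.max() + 1
    return frame[top:bottom, left:right]

background = np.zeros((60, 60), dtype=np.uint8)
frame = background.copy()
frame[5:25, 10:40] = 180  # a changed region standing in for a head
crop = segment_foreground(frame, background)
print(crop.shape)  # (20, 30)
```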
In this embodiment, image segmentation processing is performed on the image frame to be identified to obtain the face and head image in it, and helmet wearing identification is performed on the face and head image; this greatly reduces the number of pixel points in the image to be identified and therefore improves the identification speed.
In an exemplary embodiment, identifying the face and head image through the trained helmet wearing identification model to obtain an identification result of whether the persons in the image frame to be identified correctly wear safety helmets includes the following steps: when it is identified that a person in the image frame to be identified does not wear a safety helmet, determining that the identification result is a first-type identification result; when it is identified that a person in the image frame to be identified wears a safety helmet but no helmet chin strap is present on the person's face, determining that the identification result is a second-type identification result; and when it is identified that all persons in the image frame to be identified wear safety helmets and helmet chin straps are present on their faces, determining that the identification result is a third-type identification result.
In a specific implementation, the face and head image frames used for training the helmet wearing identification model are labeled into 3 types: in the first type of image frame, a person does not wear a safety helmet; in the second type, all persons wear safety helmets, but a person does not wear the helmet correctly according to the specification; in the third type, all persons wear safety helmets correctly according to the specification. The second type of image frame is identified as a person wearing a safety helmet but not pulling the chin strap as required by the correct wearing specification, that is, a helmet is seen on the head but no chin strap is present on the face.
In this embodiment, the image frames used for training the helmet wearing identification model are labeled into three categories, so the identification results of the helmet wearing identification model are likewise divided into three categories, enabling helmet wearing identification in multi-person scenes.
In one embodiment, to facilitate understanding of embodiments of the present application by those skilled in the art, reference will now be made to the specific examples illustrated in the drawings.
Referring to fig. 4, a flow chart of the helmet wearing identification method is shown; the flow proceeds as follows:
(1) The camera collects video, and video image frames are decomposed from the video stream.
(2) Quality judgment is performed on the read camera video image frame (that is, the initial image frame) to determine whether its quality is qualified. Specifically, the luminance value Y of the initial image frame is calculated first, using the formula: Y = (0.257 × R) + (0.504 × G) + (0.098 × B) + 16, where the RGB values are the relative values of the RGB components of the original image. Only an image frame whose sampled luminance value Y is greater than 50 is judged to be a usable qualified image frame; if the luminance value Y of an extracted image frame is less than 50, the frame is discarded and the next image frame is sampled. The sampling frequency for extracting image frames from the video is 1 frame every 10 frames.
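The luminance gate and the 1-in-10 sampling above can be sketched as follows; applying the Y formula to the per-channel mean of the whole frame, and the >= comparison at exactly 50, are assumptions made for the sketch.

```python
import numpy as np

def frame_luminance(frame_rgb):
    """Mean luminance Y of an RGB frame using the formula above:
    Y = (0.257 * R) + (0.504 * G) + (0.098 * B) + 16."""
    r = frame_rgb[..., 0].mean()
    g = frame_rgb[..., 1].mean()
    b = frame_rgb[..., 2].mean()
    return 0.257 * r + 0.504 * g + 0.098 * b + 16

def sample_usable_frames(frames, stride=10, y_threshold=50):
    """Take every 10th frame and keep only those bright enough,
    mirroring the sampling and quality gate described above."""
    return [f for f in frames[::stride] if frame_luminance(f) >= y_threshold]

bright = np.full((4, 4, 3), 100, dtype=np.uint8)  # Y = 101.9
dark = np.full((4, 4, 3), 10, dtype=np.uint8)     # Y ≈ 24.6, discarded
print(round(frame_luminance(bright), 1))  # 101.9
```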
(3) After the video image frame is determined to be qualified, if the brightness of the qualified image frame is still insufficient, image enhancement processing is performed. The processing method is: if the luminance value Y is between 50 and 100 (that is, 50 <= Y < 100), image enhancement processing is first performed with the formula g(i, j) = a × f(i, j) + b, where f(i, j) is the pixel value of the source image at coordinate (i, j), g(i, j) is the pixel value after enhancement, a takes the value 1.8, and b takes the value 12.
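The linear enhancement can be sketched as follows with the a = 1.8, b = 12 values given above; clipping the result to the 8-bit range is an added practical detail, not stated in the text.

```python
import numpy as np

def enhance(frame, a=1.8, b=12):
    """Linear point transform g(i, j) = a * f(i, j) + b, clipped to
    the 0-255 range of an 8-bit image (clipping is an assumption)."""
    g = a * frame.astype(np.float32) + b
    return np.clip(g, 0, 255).astype(np.uint8)

frame = np.full((2, 2), 100, dtype=np.uint8)
print(enhance(frame)[0, 0])  # 192, i.e. 1.8 * 100 + 12
```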
(4) After the video image frame is judged to be qualified and image enhancement processing is performed as needed, whether a person exists in the enhanced image frame is judged. The judging method is: first, a background environment image containing no person is prefetched, and this background image can be finely adjusted continuously during operation; then each pixel point of the enhanced image frame is compared in turn with the corresponding pixel point of the background environment image, and the number of changed pixel points is counted as X = Σ(f′(i, j) ≠ f(i, j)), where the sum runs over all pixel points, f(i, j) is the pixel value of the background environment image and f′(i, j) is that of the enhanced image frame. When the ratio of the number of changed pixel points to the total number of pixel points is greater than a proportion threshold, for example 15%, it can be judged that a person exists in the enhanced image frame, which can then be used as the image frame to be identified; otherwise the enhanced image frame is discarded and the next image frame is sampled.
(5) When the image frame is determined to be qualified and a person exists in it, the image frame is taken as the image frame to be identified, and image segmentation is then performed on it. The segmentation method is: subtract the environment background image from the selected image frame to be identified to obtain a cut image frame; determine the pixel coordinate points whose values are equal to zero in the cut image frame as zero-point coordinate points; then segment the cut image frame according to the block coordinate graph formed by these zero-value pixel coordinate points to obtain the face and head image in the image frame to be identified.
(6) The face and head image is input into the helmet wearing identification model for identification and matching, which outputs one of three results: the first, a person does not wear a safety helmet; the second, a person wears a safety helmet but does not wear it correctly (that is, the fixing chin strap of the helmet is not pulled); the third, all persons wear safety helmets correctly. For the first and second types of results, the related images can be saved first and a voice prompt issued, for example, for the first type: "no safety helmet", and for the second type: "incorrect wearing of the safety helmet"; for the third type of results, no information prompt needs to be performed.
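The save-and-prompt behaviour above can be sketched as a simple mapping from the model's three output categories; the category numbering, function name, and prompt strings are illustrative, taken from the example prompts in the text.

```python
# Hypothetical mapping from the three output categories to the
# save/prompt behaviour described above.
PROMPTS = {
    1: "no safety helmet",                        # first type: someone has no helmet
    2: "incorrect wearing of the safety helmet",  # second type: chin strap not pulled
    3: None,                                      # third type: all correct, no prompt
}

def handle_result(category):
    """Return (save_image, prompt_message) for a recognition category:
    categories 1 and 2 save the image and prompt; category 3 does neither."""
    message = PROMPTS[category]
    return (message is not None, message)

print(handle_result(1))  # (True, 'no safety helmet')
print(handle_result(3))  # (False, None)
```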
Referring to fig. 5, a schematic diagram of the identification algorithm of the helmet wearing identification model is shown; the algorithm proceeds as follows:
Assuming that the input picture size is 416 × 416 × 3, the input passes through the Focus layer (as shown in fig. 5.1) before entering the Backbone. The specific operation is to sample the 416 × 416 pixels at intervals channel by channel and perform channel expansion. In the example of fig. 5.1, a single channel of 9 × 9 is expanded into 9 channels of 4 × 4; for the actual 416 × 416 × 3 input, the Focus operation yields a 208 × 208 × 12 feature layer, which is then output as a 208 × 208 × 32 feature layer after one convolution.
After the Focus layer, the output enters the first part of the Backbone, the CBL block (as shown in fig. 5.2). The CBL block consists mainly of a convolution layer, batch normalization, and an activation function. The subsequent CSP1_X layer (as shown in fig. 5.3) reduces information loss mainly through the upper and lower splicing shown in the figure, where X denotes the number of residual components it contains. After the first CSP1_3, a first feature layer of size 52 × 52 is output and stored temporarily; after the second CSP1_3, a second feature layer of size 26 × 26 is likewise output and stored. The main function of the SPP layer (as shown in fig. 5.4) is to effectively avoid the distortion problems caused by image cropping and scaling. The Backbone section then ends and the Neck section is entered. The difference between CSP2_X (as shown in fig. 5.5) and CSP1_X is that the residual components are replaced by CBL blocks. The next CBL outputs a third feature layer of size 13 × 13: one branch is input to the Prediction part, and the other is upsampled to a 26 × 26 feature layer and spliced with the previously stored 26 × 26 feature layer; of the result, one branch is input to the Prediction part and the other is upsampled to a 52 × 52 feature layer, spliced with the stored 52 × 52 feature layer, and input to the Prediction part. The Prediction part consists mainly of CSP2_1 and Conv. The three final output layers are 13 × 13 × 24, 26 × 26 × 24, and 52 × 52 × 24. The 24 channels represent 3 × (5 + 3): the first 3 indicates that each cell generates 3 prediction boxes at each of the three grid sizes 13 × 13, 26 × 26, and 52 × 52; the 5 denotes the center coordinates, width, and height of the prediction box (x, y, w, h) plus whether it contains an object (p); and the last 3 indicates the three object classes to be predicted. A single prediction-box vector is shown in fig. 5.6.
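The interval sampling of the Focus layer can be sketched in NumPy as stacking the four stride-2 sub-images along the channel axis, which matches the 3 → 12 channel expansion and the 416 × 416 × 3 → 208 × 208 × 12 shape change stated above (the function name is illustrative; the real layer is followed by a convolution to 32 channels).

```python
import numpy as np

def focus_slice(x):
    """Interval sampling of the Focus layer: take every other pixel in
    each spatial direction and stack the four sub-images along the
    channel axis, so an H x W x C input becomes H/2 x W/2 x 4C."""
    return np.concatenate(
        [x[0::2, 0::2], x[1::2, 0::2], x[0::2, 1::2], x[1::2, 1::2]],
        axis=-1,
    )

x = np.zeros((416, 416, 3), dtype=np.float32)
print(focus_slice(x).shape)  # (208, 208, 12)
```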
The three final output sizes are designed so that different grid sizes generate receptive fields of different sizes: the large receptive field is responsible for predicting large objects, and the small receptive field for predicting small objects.
Referring to fig. 6, a work flow chart of an identification system for identifying whether persons wear safety helmets in a multi-person scene is shown. As shown in fig. 6, after the device is powered on, the power-on condition of the device host is detected; if power-on fails, it is detected again. If the normal start prompt sound is heard, start-up is normal, and the host device then reads the video data of the camera. If reading fails, the working state of the camera is checked; if the intelligent identification processing fails, the host hardware and the camera are checked; and if reading and processing are normal, the system enters the cyclic intelligent detection working state.
The helmet wearing identification method provided by this embodiment is simple in structure, easy to operate, and low in cost; it can be applied to various indoor and outdoor work occasions, greatly improves the accuracy of intelligently identifying whether workers correctly wear safety helmets in special work occasions that require correct helmet wearing, and satisfies the requirement that workers may enter such occasions only when the helmet is correctly worn.
It should be understood that, although the steps in the flowcharts of the embodiments described above are displayed sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps need not be performed in the exact order shown and may be performed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time and may be performed at different times; their execution order is not necessarily sequential, and they may be performed in rotation or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the application also provides a helmet wearing identification device for realizing the helmet wearing identification method. The implementation scheme for solving the problem provided by the device is similar to the implementation scheme recorded in the method, so specific limitations in one or more embodiments of the helmet wearing identification device provided below can be referred to the limitations on the helmet wearing identification method in the above, and details are not described herein.
In one embodiment, as shown in fig. 7, there is provided a headgear wearing identification device including: an obtaining module 710, a segmenting module 720, an identifying module 730, and a sending module 740, wherein:
an obtaining module 710, configured to obtain an image frame to be identified;
a segmentation module 720, configured to perform image segmentation processing on the image frame to be recognized to obtain a face image in the image frame to be recognized;
the recognition module 730 is used for recognizing the face head image through the trained helmet wearing recognition model to obtain a recognition result of whether the person in the image frame to be recognized correctly wears the helmet;
and the sending module 740 is configured to send the identification result to a prompter, so that the prompter performs information prompt when the wearing condition of the safety helmet of the person in the image frame to be identified does not meet a preset condition.
In an embodiment, the obtaining module 710 is specifically configured to extract an initial image frame from a video and obtain a red, green, and blue tristimulus value of the initial image frame; determining a brightness value of the initial image frame based on the red, green and blue tristimulus values; and when the brightness value of the initial image frame is greater than the brightness threshold value, taking the initial image frame as an image frame to be identified.
In an embodiment, the obtaining module 710 is further configured to perform image enhancement processing on the initial image frame when the brightness value of the initial image frame is within a preset brightness range, so as to obtain an enhanced image frame; carrying out personnel identification processing on the enhanced image frame to obtain a personnel identification result of the enhanced image frame; and when the person is identified to exist in the enhanced image frame, taking the initial image frame as an image frame to be identified.
In an embodiment, the obtaining module 710 is further configured to obtain a background environment image containing no person; compare the pixel values of the enhanced image frame and the background environment image pixel by pixel to obtain the number of changed pixel points; and determine that a person exists in the enhanced image frame when the proportion of the changed pixel points to the total number of pixel points is greater than a proportion threshold.
In an embodiment, the segmentation module 720 is specifically configured to cut out an environmental background image determined from the image frame to be recognized, so as to obtain a cut image frame; determining a zero coordinate point of the cutting image frame; and based on a coordinate graph formed by the zero point coordinate points, segmenting the cut image frame to obtain a human face head image in the image frame to be recognized.
In one embodiment, the identifying module 730 is configured to determine that the identification result is a first-type identification result when it is identified that a person in the image frame to be identified does not wear a safety helmet; determine that the identification result is a second-type identification result when it is identified that a person in the image frame to be identified wears a safety helmet but no helmet chin strap is present on the person's face; and determine that the identification result is a third-type identification result when it is identified that all persons in the image frame to be identified wear safety helmets and helmet chin straps are present on their faces.
The modules in the above-mentioned helmet wearing identification device can be wholly or partially implemented by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 8. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a headgear wear identification method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In an embodiment, a computer program product is provided, comprising a computer program which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It should be noted that, the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high-density embedded nonvolatile Memory, resistive Random Access Memory (ReRAM), Magnetic Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene Memory, and the like. Volatile Memory can include Random Access Memory (RAM), external cache Memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others. The databases referred to in various embodiments provided herein may include at least one of relational and non-relational databases. The non-relational database may include, but is not limited to, a block chain based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum computing based data processing logic devices, etc., without limitation.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A headgear wear identification method, the method comprising:
acquiring an image frame to be identified;
carrying out image segmentation processing on the image frame to be recognized to obtain a human face head image in the image frame to be recognized;
identifying the human face head image through the trained safety helmet wearing identification model to obtain an identification result of whether the person in the image frame to be identified correctly wears the safety helmet;
and sending the identification result to a prompter so that the prompter carries out information prompt when the wearing condition of the safety helmet of the person in the image frame to be identified does not accord with the preset condition.
2. The method of claim 1, wherein the obtaining the image frame to be identified comprises:
extracting an initial image frame from a video and acquiring red, green and blue tristimulus values of the initial image frame;
determining a brightness value of the initial image frame based on the red, green and blue tristimulus values;
and when the brightness value of the initial image frame is greater than the brightness threshold value, taking the initial image frame as an image frame to be identified.
3. The method of claim 2, further comprising, after the brightness value of the initial image frame is greater than a brightness threshold:
when the brightness value of the initial image frame is within a preset brightness range, performing image enhancement processing on the initial image frame to obtain an enhanced image frame;
carrying out personnel identification processing on the enhanced image frame to obtain a personnel identification result of the enhanced image frame;
and when the person is identified to exist in the enhanced image frame, taking the initial image frame as an image frame to be identified.
4. The method according to claim 3, wherein the performing the person identification process on the enhanced image frame to obtain the person identification result of the enhanced image frame comprises:
acquiring a background environment image without people;
comparing the pixel values of the pixels of the enhanced image frame and the background environment image one by one to obtain the number of changed pixel points;
and when the proportion of the changed pixel points to the total number of pixel points is greater than a proportion threshold, determining that a person exists in the enhanced image frame.
5. The method according to claim 1, wherein the image segmentation processing on the image frame to be recognized to obtain a human face image in the image frame to be recognized comprises:
cutting off the environment background image determined from the image frame to be identified to obtain a cut image frame;
determining a zero coordinate point of the cutting image frame;
and based on a coordinate graph formed by the zero point coordinate points, segmenting the cut image frame to obtain a human face head image in the image frame to be recognized.
6. The method according to claim 1, wherein the recognizing the image of the human face head through the trained helmet wearing recognition model to obtain a recognition result of whether the person in the image frame to be recognized correctly wears a helmet comprises:
when it is identified that a person in the image frame to be identified does not wear a safety helmet, determining that the identification result is a first-type identification result;
when it is identified that a person in the image frame to be identified wears a safety helmet but no helmet chin strap is present on the person's face, determining that the identification result is a second-type identification result;
and when it is identified that all persons in the image frame to be identified wear safety helmets and helmet chin straps are present on their faces, determining that the identification result is a third-type identification result.
7. An apparatus for headgear wear identification, the apparatus comprising:
the acquisition module is used for acquiring an image frame to be identified;
the segmentation module is used for carrying out image segmentation processing on the image frame to be recognized to obtain a human face head image in the image frame to be recognized;
the recognition module is used for recognizing the face head image through the trained helmet wearing recognition model to obtain a recognition result of whether the person in the image frame to be recognized correctly wears the helmet or not;
and the sending module is used for sending the identification result to a prompter so as to prompt information when the wearing condition of the safety helmet of the person in the image frame to be identified does not accord with the preset condition.
8. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 6 when executing the computer program.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of any one of claims 1 to 6 when executed by a processor.
CN202111341753.5A 2021-11-12 2021-11-12 Helmet wearing identification method and device, computer equipment and storage medium Pending CN113963162A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111341753.5A CN113963162A (en) 2021-11-12 2021-11-12 Helmet wearing identification method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111341753.5A CN113963162A (en) 2021-11-12 2021-11-12 Helmet wearing identification method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113963162A true CN113963162A (en) 2022-01-21

Family

ID=79470341

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111341753.5A Pending CN113963162A (en) 2021-11-12 2021-11-12 Helmet wearing identification method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113963162A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116524395A (en) * 2023-04-04 2023-08-01 江苏智慧工场技术研究院有限公司 Intelligent factory-oriented video action recognition method and system
CN116524395B (en) * 2023-04-04 2023-11-07 江苏智慧工场技术研究院有限公司 Intelligent factory-oriented video action recognition method and system
CN116645782A (en) * 2023-07-19 2023-08-25 中国建筑第五工程局有限公司 Safety helmet belt detection method based on image recognition
CN116645782B (en) * 2023-07-19 2023-10-13 中国建筑第五工程局有限公司 Safety helmet belt detection method based on image recognition

Similar Documents

Publication Publication Date Title
CN109858371B (en) Face recognition method and device
CN110119673B (en) Non-inductive face attendance checking method, device, equipment and storage medium
US9268993B2 (en) Real-time face detection using combinations of local and global features
US9514225B2 (en) Video recording apparatus supporting smart search and smart search method performed using video recording apparatus
CN109409238B (en) Obstacle detection method and device and terminal equipment
CN110188807A (en) Tunnel pedestrian target detection method based on cascade super-resolution network and improvement Faster R-CNN
US20130258198A1 (en) Video search system and method
CN109740572B (en) Human face living body detection method based on local color texture features
CN113963162A (en) Helmet wearing identification method and device, computer equipment and storage medium
CN109740444B (en) People flow information display method and related product
WO2019109793A1 (en) Human head region recognition method, device and apparatus
CN110889334A (en) Personnel intrusion identification method and device
CN110096945B (en) Indoor monitoring video key frame real-time extraction method based on machine learning
CN114067431A (en) Image processing method, image processing device, computer equipment and storage medium
CN109002776B (en) Face recognition method, system, computer device and computer-readable storage medium
CN115205780A (en) Construction site violation monitoring method, system, medium and electronic equipment
JP7211428B2 (en) Information processing device, control method, and program
CN104270619B (en) A kind of security alarm method and device
CN112633179A (en) Farmer market aisle object occupying channel detection method based on video analysis
CN107316011A (en) Data processing method, device and storage medium
WO2019150649A1 (en) Image processing device and image processing method
CN116543333A (en) Target recognition method, training method, device, equipment and medium of power system
CN110796068A (en) Drowning detection method and system for community swimming pool
CN115995097A (en) Deep learning-based safety helmet wearing standard judging method
CN115294661A (en) Pedestrian dangerous behavior identification method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination