CN111476117A - Safety helmet wearing detection method and device and terminal - Google Patents

Safety helmet wearing detection method and device and terminal

Info

Publication number
CN111476117A
CN111476117A (application CN202010220020.5A)
Authority
CN
China
Prior art keywords
image
detected
helmet wearing
helmet
wearing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010220020.5A
Other languages
Chinese (zh)
Inventor
丁沛然
宋芳妍
苏世龙
雷俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Construction Science and Technology Group Co Ltd
China Construction Science and Technology Group Co Ltd Shenzhen Branch
Original Assignee
China Construction Science and Technology Co Ltd
China Construction Science and Technology Group Co Ltd Shenzhen Branch
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Construction Science and Technology Co Ltd, China Construction Science and Technology Group Co Ltd Shenzhen Branch filed Critical China Construction Science and Technology Co Ltd
Priority to CN202010220020.5A priority Critical patent/CN111476117A/en
Publication of CN111476117A publication Critical patent/CN111476117A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Abstract

The application is applicable to the technical field of safety management and provides a safety helmet wearing detection method, device and terminal. The safety helmet wearing detection method comprises the following steps: acquiring an image to be detected; segmenting the image to be detected to obtain a plurality of sub-images to be detected; inputting the image to be detected and the sub-images to be detected into a pre-established helmet wearing detection model, which performs helmet wearing detection on the image and the sub-images and outputs a marked image carrying helmet wearing mark information and marked sub-images carrying helmet wearing mark information; and mapping the helmet wearing mark information carried by the marked image and the marked sub-images onto the image to be detected, respectively, to obtain a first target image, corresponding to the image to be detected, that carries the helmet wearing mark information. The precision of safety helmet wearing detection is thereby improved.

Description

Safety helmet wearing detection method and device and terminal
Technical Field
The application belongs to the technical field of safety management, and particularly relates to a safety helmet wearing detection method, device and terminal.
Background
A safety helmet is headgear that protects the wearer's head from injury by falling objects and other specific hazards. In many working environments, construction enterprises require workers to wear safety helmets on the job, and safety helmet wearing detection has accordingly become an important part of construction safety management.
Existing safety helmet wearing detection methods usually complete the detection by feeding an image to be detected into a helmet wearing detection model. However, such methods have low precision: it is difficult to accurately identify whether a small-area target in the image to be detected is wearing a safety helmet, so the safety of workers during operation cannot be effectively guaranteed.
Disclosure of Invention
The embodiments of the application provide a safety helmet wearing detection method, device, terminal and computer-readable storage medium, which can improve the precision of safety helmet wearing detection.
In a first aspect, an embodiment of the present application provides a method for detecting wearing of a safety helmet, including:
acquiring an image to be detected;
segmenting the image to be detected to obtain a plurality of sub-images to be detected;
inputting the image to be detected and the sub-images to be detected into a pre-established helmet wearing detection model, performing helmet wearing detection on the image and the sub-images with the model, and outputting a marked image carrying helmet wearing mark information and marked sub-images carrying helmet wearing mark information;
and mapping the helmet wearing mark information carried by the marked image and the marked sub-images onto the image to be detected, respectively, to obtain a first target image, corresponding to the image to be detected, that carries the helmet wearing mark information.
A second aspect of the embodiments of the present application provides a safety helmet wearing detection device, including:
the acquisition unit is used for acquiring an image to be detected;
the segmentation unit is used for segmenting the image to be detected to obtain a plurality of sub-images to be detected;
the detection unit is used for inputting the image to be detected and the sub-images to be detected into a pre-established safety helmet wearing detection model, performing safety helmet wearing detection on the image and the sub-images with the model, and outputting a marked image carrying safety helmet wearing mark information and marked sub-images carrying safety helmet wearing mark information;
and the mapping unit is used for mapping the helmet wearing mark information carried by the mark image and the mark sub-image to the image to be detected respectively to obtain a first target image which is corresponding to the image to be detected and carries the helmet wearing mark information.
A third aspect of the embodiments of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the above method.
In a fifth aspect, embodiments of the present application provide a computer program product, which when run on a terminal device, causes the terminal device to perform the steps of the method.
In the embodiments of the application, the image to be detected is segmented to obtain a plurality of sub-images to be detected, and both the image and the sub-images are input into the pre-established helmet wearing detection model for helmet wearing detection. The terminal can therefore detect whether a large-area target in the image to be detected is wearing a helmet; and because a small-area target in the image to be detected occupies a proportionally larger area within its sub-image, the terminal can also detect whether such a small-area target is wearing a helmet. As a result, the first target image, obtained by mapping the helmet wearing mark information carried by the marked image and the marked sub-images output by the model back onto the image to be detected, contains helmet wearing mark information both for large-area targets and for small-area targets in the image to be detected, which improves the precision of helmet wearing detection.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart illustrating an implementation of a method for detecting wearing of a safety helmet according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of an implementation of training a detection model of wearing a crash helmet to be trained according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a helmet wearing detection provided by an embodiment of the present application;
fig. 4 is a schematic structural diagram of a safety helmet wearing detection device provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
A safety helmet is headgear that protects the wearer's head from injury by falling objects and other specific hazards. In many working environments, construction enterprises require workers to wear safety helmets on the job, and safety helmet wearing detection has accordingly become an important part of construction safety management.
Existing safety helmet wearing detection methods usually complete the detection by feeding an image to be detected into a helmet wearing detection model. However, such methods have low precision: it is difficult to accurately identify whether a small-area target in the image to be detected is wearing a safety helmet, so the safety of workers during operation cannot be effectively guaranteed.
Based on this, the embodiment of the application provides a method, a device, a terminal and a computer readable storage medium for detecting wearing of a safety helmet, which can improve the accuracy of the detection of wearing of the safety helmet.
In order to explain the technical means of the present application, the following description will be given by way of specific examples.
Fig. 1 shows a schematic implementation flow of a safety helmet wearing detection method provided in an embodiment of the present application. The method may be applied to a terminal, may be executed by a safety helmet wearing detection device configured on the terminal, and is suitable for situations where the precision of safety helmet wearing detection needs to be improved.
The above safety helmet wearing detection method may include steps 101 to 104.
Step 101, obtaining an image to be detected.
Here, the image to be detected is an image on which safety helmet wearing detection needs to be performed. Typically, the terminal acquires a surveillance video shot by monitoring equipment installed on a construction site and performs helmet wearing detection on each frame of the surveillance video as an image to be detected.
And 102, segmenting the image to be detected to obtain a plurality of sub-images to be detected.
In some embodiments of the present application, the image to be detected may be equally divided to obtain a plurality of sub-images to be detected.
For example, after an image to be detected with a resolution of 1920 × 1080 (1080P) is acquired, it may be equally divided into 12 sub-images to be detected, each 320 × 540 pixels.
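The equal-segmentation step can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the function name `split_into_tiles` and the row/column parameters are assumptions, and the per-tile offsets are kept because the later mapping step needs them.

```python
import numpy as np

def split_into_tiles(image, rows, cols):
    """Split an H x W x C image into rows*cols equal, non-overlapping tiles.

    Returns a list of (tile, (x_offset, y_offset)) pairs; the offsets are
    needed later to map detections on a tile back to the full image.
    """
    h, w = image.shape[:2]
    th, tw = h // rows, w // cols          # tile height / width
    tiles = []
    for r in range(rows):
        for c in range(cols):
            y0, x0 = r * th, c * tw
            tiles.append((image[y0:y0 + th, x0:x0 + tw], (x0, y0)))
    return tiles

# A 1920 x 1080 frame split into 2 rows x 6 columns gives the 12 tiles of
# 320 x 540 pixels from the example in the description.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
tiles = split_into_tiles(frame, rows=2, cols=6)
```

With a grid that divides the frame evenly, no pixels are lost at the borders; other grid shapes would simply change the tile count and size.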
In the embodiments of the application, the image to be detected and the plurality of sub-images obtained by segmenting it are used together for safety helmet wearing detection. Segmentation therefore does not prevent detecting whether a large-area target in the image to be detected is wearing a safety helmet, while a small-area target, which occupies a proportionally larger area within its sub-image, can also be detected.
Step 103, inputting the image to be detected and the sub-images to be detected into a pre-established helmet wearing detection model, performing helmet wearing detection on the image and the sub-images with the model, and outputting a marked image carrying helmet wearing mark information and marked sub-images carrying helmet wearing mark information.
The pre-established helmet wearing detection model may be a model based on the Single Shot MultiBox Detector (SSD), a model based on a Convolutional Neural Network (CNN), or a model based on another deep learning algorithm.
In some embodiments of the present application, before the image to be detected and the sub-images to be detected are input into the pre-established helmet wearing detection model, that model is obtained by training a to-be-trained helmet wearing detection model.
Specifically, as shown in fig. 2, the training of the helmet wearing detection model to be trained may include: step 201 to step 203.
Step 201, obtaining a plurality of sample images and, for each sample image, a corresponding standard image carrying pre-marked helmet wearing mark information.
In some embodiments of the present application, the sample images may be obtained by acquiring surveillance video shot by monitoring equipment installed on a construction site and using each frame as a sample image, or by using publicly available pictures of people wearing safety helmets found on the network. By marking helmet wearing mark information on the sample images, a standard image corresponding to each sample image is obtained.
For example, LabelImg software may be used to draw bounding boxes on the sample images, obtaining a standard image, corresponding to each sample image, that carries pre-marked helmet wearing mark information.
The safety helmet wearing mark information at least includes non-wearing helmet mark information; that is, it may include only non-wearing helmet mark information, or both non-wearing helmet mark information and wearing helmet mark information. The wearing helmet mark information may indicate that a red, yellow, blue, white and/or other colored safety helmet is worn.
Step 202, inputting a target sample image in the plurality of sample images into a to-be-trained helmet wearing detection model, and outputting an image to be confirmed, which carries helmet wearing mark information and corresponds to the target sample image, by the to-be-trained helmet wearing detection model.
The target sample image is any one of the plurality of sample images. In the embodiments of the application, the to-be-trained helmet wearing detection model is trained successively with a large number of sample images, so that the resulting pre-established helmet wearing detection model can perform helmet wearing recognition on images containing different targets.
Similarly, the helmet wearing mark information carried in the image to be confirmed at least includes the non-wearing helmet mark information.
Step 203, calculating the matching degree between the helmet wearing mark information in the image to be confirmed and the pre-marked helmet wearing mark information carried by the standard image corresponding to the target sample image. If the matching degree is smaller than a matching degree threshold, the parameters of the to-be-trained helmet wearing detection model are adjusted and the model is trained again with the target sample image, until either the number of times the target sample image has been used for training reaches a first count threshold, or the matching degree reaches the matching degree threshold. The model is then trained with the next target sample image among the plurality of sample images, and the pre-established helmet wearing detection model is obtained once the total number of training iterations reaches a second count threshold.
For example, after the terminal obtains 1000 sample images, any one of them may be input into the to-be-trained helmet wearing detection model, which outputs an image to be confirmed carrying helmet wearing mark information corresponding to that sample image. The matching degree between this mark information and the pre-marked helmet wearing mark information carried by the corresponding standard image is then calculated; if it exceeds the matching degree threshold, the next sample image is used for training, until the total number of training iterations reaches the second count threshold and the pre-established helmet wearing detection model is obtained.
The matching degree between the helmet wearing mark information in the image to be confirmed and the pre-marked helmet wearing mark information carried by the standard image corresponding to the target sample image may be calculated as the degree of overlap between the helmet wearing mark boxes in the two images.
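The degree of overlap between two mark boxes is commonly measured as intersection-over-union (IoU). The sketch below is a generic illustration of that measure, not the patent's specific formula; the `(x1, y1, x2, y2)` corner convention is an assumption.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0          # disjoint boxes overlap not at all
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

An IoU of 1.0 means the predicted box coincides with the pre-marked box; comparing the IoU against the matching degree threshold then decides whether the model's parameters need further adjustment.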
It should be noted that, in some embodiments of the present application, before the target sample images are input into the to-be-trained helmet wearing detection model, the sample images may be grouped into multiple batches, and the batches input in turn to train the model and obtain the pre-established helmet wearing detection model.
To accelerate the pre-established helmet wearing detection model and ensure real-time helmet wearing detection, in some embodiments of the present application the parameters of the pre-established model may be quantized to obtain a quantized helmet wearing detection model.
For example, the TensorRT framework can be used to convert the 32-bit floating-point parameters of the pre-established helmet wearing detection model into 8-bit integer parameters, yielding a quantized helmet wearing detection model. This reduces the computation required by the model, increases its inference speed, and improves the real-time performance of helmet wearing detection.
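The actual FP32-to-INT8 conversion is handled inside TensorRT's builder; the numpy sketch below only illustrates the underlying principle of symmetric per-tensor quantization, with the function names and the per-tensor scale being illustrative assumptions rather than TensorRT's API.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization: FP32 -> INT8 plus a scale factor."""
    # Map the largest-magnitude weight to 127; guard against all-zero tensors.
    scale = max(np.abs(weights).max() / 127.0, 1e-12)
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Approximate recovery of the FP32 weights, for error inspection."""
    return q.astype(np.float32) * scale
```

Each weight now occupies one byte instead of four, and integer arithmetic is cheaper on most inference hardware; the price is a small, bounded rounding error per weight.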
In this implementation of the application, the pre-established helmet wearing detection model is obtained by training the to-be-trained helmet wearing detection model; the image to be detected and the sub-images to be detected are input into the pre-established model, which performs helmet wearing detection on them and yields a marked image carrying helmet wearing mark information and marked sub-images carrying helmet wearing mark information.
Correspondingly, the helmet wearing mark information in the marked image and the marked sub-images corresponds to the helmet wearing mark information carried by the standard images used when training the to-be-trained helmet wearing detection model, and at least includes the non-wearing helmet mark information.
It should be noted that, because the pre-established helmet wearing detection model usually constrains the pixel size of its input, before inputting the image to be detected and the sub-images to be detected, the terminal may compress each of them to a preset resolution and input the compressed image to be detected and compressed sub-images to be detected instead, so that they satisfy the model's input size requirement.
For example, after the image to be detected is equally divided into 12 sub-images of 320 × 540 pixels, the image to be detected and the 12 sub-images may each be compressed, obtaining a compressed image to be detected of 300 × 300 pixels and 12 compressed sub-images to be detected of 300 × 300 pixels.
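The compression step can be illustrated with a minimal nearest-neighbour resize. In practice a library routine such as OpenCV's `cv2.resize` with a better interpolation mode would be used; the self-contained version below is only a sketch of the operation, and the function name is an assumption.

```python
import numpy as np

def resize_nearest(image, out_h, out_w):
    """Nearest-neighbour resize; stands in for the compression step that
    brings each image to the model's fixed input size (300 x 300 here)."""
    h, w = image.shape[:2]
    # For each output pixel, pick the nearest source row/column index.
    ys = np.arange(out_h) * h // out_h
    xs = np.arange(out_w) * w // out_w
    return image[ys[:, None], xs]

# Compress one 320 x 540 tile and the full 1920 x 1080 frame to 300 x 300.
tile = np.zeros((540, 320, 3), dtype=np.uint8)
net_input = resize_nearest(tile, 300, 300)
```

Note that the aspect ratio is not preserved by this step; the mapping stage later undoes the per-axis scaling, so detections still land in the right place.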
In some embodiments of the present application, if the sub-images to be detected obtained by segmenting the image to be detected already satisfy the pre-established helmet wearing detection model's input size requirement, only the image to be detected need be compressed.
And 104, mapping the helmet wearing mark information carried by the mark image and the mark sub-image to an image to be detected respectively to obtain a first target image which carries the helmet wearing mark information and corresponds to the image to be detected.
For example, as shown in fig. 3, after the image to be detected 31 is obtained, it may be segmented into a plurality of sub-images to be detected, and the image and the sub-images compressed to obtain a compressed image to be detected 305 and compressed sub-images to be detected 301, 302, 303 and 304. These are input into the pre-established helmet wearing detection model, which performs helmet wearing detection on them and outputs a marked image 310 and marked sub-images 306, 307, 308 and 309, each carrying helmet wearing mark information. The helmet wearing mark information carried by the marked image 310 and the marked sub-images 306, 307, 308 and 309 is then mapped onto the image to be detected 31, giving the first target image 32 that corresponds to the image 31 and carries the helmet wearing mark information.
In some embodiments of the present application, the image coordinates of the helmet wearing mark information in the marked image and the marked sub-images may be used to map that information into the image to be detected, obtaining the first target image carrying helmet wearing mark information corresponding to the image to be detected.
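The coordinate mapping can be sketched as follows, assuming each detection box is expressed in the 300 × 300 network-input coordinates of its tile and each tile's offset in the full image is known. All names and the box convention here are illustrative assumptions.

```python
def map_box_to_full_image(box, tile_offset, tile_size, net_size=300):
    """Map a box predicted on a net_size x net_size network input back to
    full-image coordinates.

    box:         (x1, y1, x2, y2) in network-input pixels
    tile_offset: (x0, y0) of the tile's top-left corner in the full image
                 (use (0, 0) for detections on the whole compressed frame)
    tile_size:   (tile_w, tile_h) of the uncompressed tile
    """
    x0, y0 = tile_offset
    tw, th = tile_size
    sx, sy = tw / net_size, th / net_size   # undo the compression scaling
    x1, y1, x2, y2 = box
    return (x0 + x1 * sx, y0 + y1 * sy, x0 + x2 * sx, y0 + y2 * sy)
```

The per-axis scale factors undo the aspect-ratio distortion introduced when each tile was squashed to 300 × 300, and the offset places the box in the tile's position within the original frame.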
Since the helmet wearing mark information carried by the marked image may duplicate that carried by the marked sub-images, in some embodiments of the present application, after the first target image carrying helmet wearing mark information is obtained, the method may further include: filtering the helmet wearing mark information carried in the first target image to obtain a second target image with redundant mark information removed.
For example, as shown in fig. 3, the helmet wearing mark information carried in the first target image 32 may be filtered to obtain a second target image 33 with redundant mark information removed.
Specifically, in some embodiments of the present application, a non-maximum suppression (NMS) algorithm may be used to filter the helmet wearing mark information carried in the first target image, obtaining a second target image with redundant mark information removed.
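A minimal greedy NMS over the merged boxes might look like the sketch below; this is a generic formulation, not the patent's specific implementation, and the 0.5 IoU threshold is an assumption.

```python
def _iou(a, b):
    """Intersection-over-union of (x1, y1, x2, y2) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    if inter == 0:
        return 0.0
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)          # highest-scoring remaining box wins
        keep.append(best)
        # Drop every remaining box that overlaps the winner too much.
        order = [i for i in order
                 if _iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```

Applied to the first target image's boxes, this keeps one detection per target where the full-frame pass and a tile pass both marked the same person.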
In the embodiments of the application, the image to be detected is segmented to obtain a plurality of sub-images to be detected, and both the image and the sub-images are input into the pre-established helmet wearing detection model for helmet wearing detection. The terminal can therefore detect whether a large-area target in the image to be detected is wearing a helmet; and because a small-area target in the image to be detected occupies a proportionally larger area within its sub-image, the terminal can also detect whether such a small-area target is wearing a helmet. As a result, the first target image, obtained by mapping the helmet wearing mark information carried by the marked image and the marked sub-images output by the model back onto the image to be detected, contains helmet wearing mark information both for large-area targets and for small-area targets in the image to be detected, which improves the precision of helmet wearing detection.
To facilitate further previewing, analyzing or processing of the second target image, in some embodiments of the present application, after the second target image with redundant mark information removed is obtained, the method may include: judging whether the helmet wearing mark information carried by the second target image contains non-wearing helmet mark information; and if it does, saving the second target image.
Further, the storage path of the saved second target image may be recorded in a JSON file, so that a front-end web page can retrieve the saved second target image according to the path recorded in the JSON file.
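Recording the storage paths might look like the following sketch. The file layout (a flat JSON list of paths) and the function name are assumptions, since the patent does not specify the schema.

```python
import json
import os

def record_saved_image(json_path, image_path):
    """Append the storage path of a saved violation image to a JSON file so
    a front-end page can locate the saved images later."""
    paths = []
    if os.path.exists(json_path):
        with open(json_path, "r", encoding="utf-8") as f:
            paths = json.load(f)
    paths.append(image_path)
    with open(json_path, "w", encoding="utf-8") as f:
        json.dump(paths, f, ensure_ascii=False, indent=2)
```

The front end then only needs to fetch and parse this one file to enumerate every saved second target image.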
According to the embodiments of the application, second target images whose helmet wearing mark information contains non-wearing helmet mark information are saved, so that safety managers can review the saved images regularly and promptly give safety education to personnel not wearing helmets, indirectly reducing the incidence of safety accidents.
In some further embodiments of the present application, the second target image may also be encoded as a headgear wearing detection video.
For example, Real-Time Messaging Protocol (RTMP) stream-pushing technology may be used to encode the second target images into a helmet wearing detection video and generate a corresponding RTMP address, so that a front-end web page can obtain the helmet wearing detection video from that address and play it for preview.
It should be noted that the safety helmet wearing detection method provided by the application performs detection with the pre-established helmet wearing detection model at high speed. Therefore, in practical applications, each frame of the surveillance video can be used for helmet wearing detection and the second target images encoded into a helmet wearing detection video in real time, enabling real-time display of the helmet wearing status of targets in the surveillance video.
It should be noted that, for simplicity of description, the foregoing method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts, as some steps may, in accordance with the present application, occur in other orders.
As shown in fig. 4, an embodiment of the present application provides a schematic structural diagram of a safety helmet wearing detection device 400, which includes: an acquisition unit 401, a segmentation unit 402, a detection unit 403 and a mapping unit 404.
an acquisition unit 401, configured to acquire an image to be detected;
a segmentation unit 402, configured to segment the image to be detected to obtain a plurality of sub-images to be detected;
a detection unit 403, configured to input the image to be detected and the sub-images to be detected into a pre-established safety helmet wearing detection model, which performs safety helmet wearing detection on them and outputs a marked image carrying safety helmet wearing mark information and marked sub-images carrying safety helmet wearing mark information;
a mapping unit 404, configured to map the safety helmet wearing mark information carried by the marked image and the marked sub-images onto the image to be detected, to obtain a first target image, corresponding to the image to be detected, that carries safety helmet wearing mark information.
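The segmentation and mapping units above can be sketched as follows: tile the image into a grid of sub-images, then translate a box detected inside one tile back to full-image coordinates. The 2x2 grid and the (x1, y1, x2, y2) box format are illustrative assumptions; the patent does not fix a particular grid or box representation.

```python
def tile_offsets(img_w, img_h, cols, rows):
    """Return (x_off, y_off, w, h) for each tile in row-major order."""
    tw, th = img_w // cols, img_h // rows
    return [(c * tw, r * th, tw, th) for r in range(rows) for c in range(cols)]

def map_box_to_image(box, offset):
    """Shift a tile-local box (x1, y1, x2, y2) by the tile's origin."""
    x1, y1, x2, y2 = box
    ox, oy, _, _ = offset
    return (x1 + ox, y1 + oy, x2 + ox, y2 + oy)

tiles = tile_offsets(1920, 1080, 2, 2)   # four sub-images to be detected
box_in_tile = (10, 20, 110, 140)         # a detection inside the last tile
mapped = map_box_to_image(box_in_tile, tiles[3])
```

Running the detector on both the full frame and each tile, then mapping every tile-local box through its tile's offset, yields the single coordinate frame in which the first target image is marked.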
In some embodiments of the present application, the mapping unit 404 is further specifically configured to: filter the safety helmet wearing mark information carried in the first target image, to obtain a second target image from which redundant mark information has been removed.
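One common way to realize the redundant-mark filtering described above is greedy non-maximum suppression: when the full image and its sub-images produce overlapping boxes for the same target, keep only the highest-confidence one. The IoU threshold of 0.5 is an illustrative choice, not a value specified by the patent.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def filter_redundant(marks, thresh=0.5):
    """marks: list of (box, score, label). Greedy NMS by descending score:
    a mark is kept only if it overlaps no already-kept mark above thresh."""
    kept = []
    for m in sorted(marks, key=lambda m: m[1], reverse=True):
        if all(iou(m[0], k[0]) < thresh for k in kept):
            kept.append(m)
    return kept

marks = [((0, 0, 100, 100), 0.9, "helmet"),
         ((5, 5, 105, 105), 0.6, "helmet"),       # duplicate of the first
         ((200, 200, 300, 300), 0.8, "no_helmet")]
kept = filter_redundant(marks)
```

Here the duplicate detection of the same head is suppressed, leaving one mark per target in the second target image.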
In some embodiments of the present application, the safety helmet wearing detection device further includes a storage unit configured to: determine whether the safety helmet wearing mark information carried by the second target image includes not-wearing-helmet mark information; and if it does, save the second target image.
In some embodiments of the present application, the storage unit is further specifically configured to: encode the second target image into a safety helmet wearing detection video.
In some embodiments of the present application, the detection unit 403 is further specifically configured to: compress the image to be detected and the sub-images to be detected, respectively, to obtain a compressed image to be detected and compressed sub-images to be detected at a preset resolution, and input these into the pre-established safety helmet wearing detection model.
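The compression step above can be sketched as an aspect-preserving resize to a preset detector input size (letterboxing). The 416x416 target below is an illustrative assumption; the patent only requires a "preset resolution".

```python
def letterbox_geometry(src_w, src_h, dst=416):
    """Return (new_w, new_h, pad_x, pad_y) for an aspect-preserving resize
    of a src_w x src_h image into a dst x dst canvas, centred with padding."""
    scale = min(dst / src_w, dst / src_h)    # shrink to fit the longer side
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x, pad_y = (dst - new_w) // 2, (dst - new_h) // 2
    return new_w, new_h, pad_x, pad_y

geom = letterbox_geometry(1920, 1080)
```

The actual pixel resampling would be done by an imaging library; computing the geometry separately also gives the inverse transform needed to map detections back onto the original frame.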
In some embodiments of the present application, the detection unit 403 is further specifically configured to: train a safety helmet wearing detection model to be trained, to obtain the pre-established safety helmet wearing detection model. The training of the safety helmet wearing detection model to be trained comprises: acquiring a plurality of sample images and, for each sample image, a corresponding standard image carrying pre-marked helmet wearing mark information; inputting a target sample image among the plurality of sample images into the model to be trained, which outputs an image to be confirmed, corresponding to the target sample image, carrying helmet wearing mark information; calculating a matching degree between the helmet wearing mark information in the image to be confirmed and the pre-marked helmet wearing mark information carried by the corresponding standard image; if the matching degree is smaller than a matching degree threshold, adjusting the parameters of the model to be trained and training it with the target sample image again; when the number of times the model has been trained with the target sample image reaches a first time threshold, or the matching degree reaches the matching degree threshold, training the model with the next target sample image among the plurality of sample images; and when the total number of training passes reaches a second time threshold, obtaining the pre-established safety helmet wearing detection model.
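The training schedule just described can be sketched with a stand-in model: each sample is trained on repeatedly until its output matches the standard image well enough or a per-sample retry cap is reached, and training stops once a total-pass cap is hit. The toy `step` and `toy_match` functions, and all thresholds, are illustrative stand-ins for the patent's detection model and matching-degree computation.

```python
def train(samples, standards, match, step,
          match_thresh=0.9, per_sample_cap=5, total_cap=100):
    """Run the per-sample retry schedule; return the total number of passes."""
    total = 0
    for sample, standard in zip(samples, standards):
        tries = 0
        while total < total_cap:
            predicted = step(sample)              # one forward + update pass
            total += 1
            tries += 1
            if match(predicted, standard) >= match_thresh:
                break                             # sample learned well enough
            if tries >= per_sample_cap:
                break                             # first time threshold hit
        if total >= total_cap:
            break                                 # second time threshold hit
    return total

# Toy "model": its output quality improves by 0.4 per pass, capped at 1.0.
state = {"q": 0.0}
def step(sample):
    state["q"] = min(1.0, state["q"] + 0.4)
    return state["q"]

def toy_match(predicted, standard):
    # Stand-in "matching degree": the toy model's output itself.
    return predicted

total_passes = train([1, 2], [None, None], toy_match, step)
```

The first sample needs three passes to reach the matching threshold, the second only one, so four passes are used in total.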
In some embodiments of the present application, the detection unit 403 is further specifically configured to: quantize the parameters of the pre-established safety helmet wearing detection model to obtain a quantized safety helmet wearing detection model.
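The parameter quantization step can be sketched as symmetric per-tensor int8 quantization, a common way to shrink a trained model for faster inference; the patent does not name a specific scheme, so the mapping below is an illustrative assumption.

```python
def quantize_int8(weights):
    """Map float weights to int8 with a per-tensor scale, so w ~= q * scale."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0                      # symmetric range [-127, 127]
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized values."""
    return [v * scale for v in q]

q, scale = quantize_int8([0.4, -1.0, 0.2])
restored = dequantize(q, scale)
```

Storing `q` plus one `scale` per tensor replaces 32-bit floats with 8-bit integers, roughly quartering model size at the cost of small rounding error.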
It should be noted that, for convenience and brevity of description, the specific working process of the safety helmet wearing detection device 400 may refer to the corresponding processes of the method described in fig. 1 to fig. 3, which are not repeated here.
Fig. 5 is a schematic diagram of a terminal according to an embodiment of the present application. The terminal 5 may include: a processor 50, a memory 51, and a computer program 52, such as a safety helmet wearing detection program, stored in the memory 51 and executable on the processor 50. When executing the computer program 52, the processor 50 implements the steps in the safety helmet wearing detection method embodiments described above, such as steps 101 to 104 shown in fig. 1; alternatively, it implements the functions of the modules/units in the device embodiments described above, such as units 401 to 404 shown in fig. 4.
The computer program may be divided into one or more modules/units, which are stored in the memory 51 and executed by the processor 50 to implement the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the segments being used to describe the execution of the computer program in the terminal. For example, the computer program may be divided into an acquisition unit, a segmentation unit, a detection unit and a mapping unit, whose specific functions are as follows: the acquisition unit acquires an image to be detected; the segmentation unit segments the image to be detected to obtain a plurality of sub-images to be detected; the detection unit inputs the image to be detected and the sub-images to be detected into a pre-established safety helmet wearing detection model, which performs safety helmet wearing detection on them and outputs a marked image and marked sub-images each carrying safety helmet wearing mark information; and the mapping unit maps the safety helmet wearing mark information carried by the marked image and the marked sub-images onto the image to be detected, to obtain a first target image, corresponding to the image to be detected, that carries safety helmet wearing mark information.
The terminal may be a computing device such as a smart television, a smart phone, a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The terminal may include, but is not limited to, the processor 50 and the memory 51. Those skilled in the art will appreciate that fig. 5 is only an example of a terminal and is not intended to be limiting: the terminal may include more or fewer components than those shown, some components may be combined, or different components may be used; for example, the terminal may also include input/output devices, network access devices, and buses.
The processor 50 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 51 may be an internal storage unit of the terminal, such as a hard disk or a memory of the terminal. The memory 51 may also be an external storage device of the terminal, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the terminal. Further, the memory 51 may include both an internal storage unit and an external storage device of the terminal. The memory 51 is used for storing the computer program and other programs and data required by the terminal, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal and method may be implemented in other ways. For example, the above-described apparatus/terminal embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the method embodiments described above may be realized by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be subject to appropriate additions or deletions as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, legislation and patent practice exclude electrical carrier signals and telecommunication signals from computer-readable media.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method for detecting wearing of a safety helmet, comprising:
acquiring an image to be detected;
segmenting the image to be detected to obtain a plurality of sub-images to be detected;
inputting the image to be detected and the subimage to be detected into a pre-established helmet wearing detection model, carrying out helmet wearing detection on the image to be detected and the subimage to be detected by the helmet wearing detection model, and outputting a marking image carrying helmet wearing marking information and a marking subimage carrying helmet wearing marking information;
and respectively mapping the helmet wearing mark information carried by the mark image and the mark sub-image to the image to be detected to obtain a first target image which is corresponding to the image to be detected and carries the helmet wearing mark information.
2. The method for detecting the wearing of the safety helmet according to claim 1, wherein after obtaining the first target image carrying the information of the wearing mark of the safety helmet corresponding to the image to be detected, the method comprises:
and filtering the helmet wearing mark information carried in the first target image to obtain a second target image with redundant mark information removed.
3. The method for detecting wearing of a helmet according to claim 2, wherein after filtering the helmet wearing mark information carried in the first target image to obtain a second target image with redundant mark information removed, the method includes:
determining whether the safety helmet wearing mark information carried by the second target image includes not-wearing-helmet mark information;
and if the safety helmet wearing mark information carried by the second target image includes not-wearing-helmet mark information, saving the second target image.
4. The method for detecting wearing of a helmet according to claim 2, wherein after filtering the helmet wearing mark information carried in the first target image to obtain a second target image with redundant mark information removed, the method further comprises:
encoding the second target image into a safety helmet wearing detection video.
5. The method for detecting the wearing of the safety helmet as claimed in claim 2, wherein the inputting the image to be detected and the sub-image to be detected into a pre-established safety helmet wearing detection model comprises:
and respectively compressing the image to be detected and the subimage to be detected to obtain a compressed image to be detected and a compressed subimage to be detected with preset resolution, and inputting the compressed image to be detected and the compressed subimage to be detected into a pre-established safety helmet wearing detection model.
6. The method for detecting the wearing of the safety helmet as claimed in claim 1, wherein before inputting the image to be detected and the sub-image to be detected into a pre-established safety helmet wearing detection model, the method comprises:
training a safety helmet wearing detection model to be trained to obtain a pre-established safety helmet wearing detection model;
the training of the helmet wearing detection model to be trained comprises the following steps:
acquiring a plurality of sample images and standard images which are respectively corresponding to each sample image and carry pre-marked helmet wearing mark information;
inputting a target sample image in the plurality of sample images into the helmet wearing detection model to be trained, and outputting an image to be confirmed, which carries helmet wearing mark information and corresponds to the target sample image, by the helmet wearing detection model to be trained;
calculating a matching degree between the helmet wearing mark information in the image to be confirmed and the pre-marked helmet wearing mark information carried by the standard image corresponding to the target sample image; if the matching degree is smaller than a matching degree threshold, adjusting parameters of the safety helmet wearing detection model to be trained and training it with the target sample image again; when the number of times the model has been trained with the target sample image is greater than or equal to a first time threshold, or the matching degree is greater than or equal to the matching degree threshold, training the model with the next target sample image among the plurality of sample images; and when the total number of times the safety helmet wearing detection model to be trained has been trained is greater than or equal to a second time threshold, obtaining the pre-established safety helmet wearing detection model.
7. The safety helmet wearing detection method of claim 6, wherein after the obtaining of the pre-established safety helmet wearing detection model, the method comprises:
and quantizing the parameters in the pre-established safety helmet wearing detection model to obtain a quantized safety helmet wearing detection model.
8. A safety helmet wearing detection device, comprising:
the acquisition unit is used for acquiring an image to be detected;
the segmentation unit is used for segmenting the image to be detected to obtain a plurality of sub-images to be detected;
the detection unit is used for inputting the image to be detected and the subimage to be detected into a pre-established safety helmet wearing detection model, carrying out safety helmet wearing detection on the image to be detected and the subimage to be detected by the safety helmet wearing detection model, and outputting a marked image carrying safety helmet wearing mark information and a marked subimage carrying safety helmet wearing mark information;
and the mapping unit is used for mapping the helmet wearing mark information carried by the mark image and the mark sub-image to the image to be detected respectively to obtain a first target image which is corresponding to the image to be detected and carries the helmet wearing mark information.
9. A terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1-7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202010220020.5A 2020-03-25 2020-03-25 Safety helmet wearing detection method and device and terminal Pending CN111476117A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010220020.5A CN111476117A (en) 2020-03-25 2020-03-25 Safety helmet wearing detection method and device and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010220020.5A CN111476117A (en) 2020-03-25 2020-03-25 Safety helmet wearing detection method and device and terminal

Publications (1)

Publication Number Publication Date
CN111476117A true CN111476117A (en) 2020-07-31

Family

ID=71748403

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010220020.5A Pending CN111476117A (en) 2020-03-25 2020-03-25 Safety helmet wearing detection method and device and terminal

Country Status (1)

Country Link
CN (1) CN111476117A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222672A (en) * 2019-06-19 2019-09-10 广东工业大学 The safety cap of construction site wears detection method, device, equipment and storage medium
CN110310264A (en) * 2019-06-25 2019-10-08 北京邮电大学 A kind of large scale object detection method, device based on DCNN
CN110443765A (en) * 2019-08-02 2019-11-12 厦门美图之家科技有限公司 Image processing method, device and electronic equipment
CN110766650A (en) * 2019-08-05 2020-02-07 南方科技大学 Biological detection early warning method, system, device, computer equipment and storage medium
CN110889376A (en) * 2019-11-28 2020-03-17 创新奇智(南京)科技有限公司 Safety helmet wearing detection system and method based on deep learning


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112487976A (en) * 2020-11-30 2021-03-12 中科院计算所西部高等技术研究院 Monitoring method and device based on image recognition and storage medium
CN112487976B (en) * 2020-11-30 2023-10-24 中科院计算所西部高等技术研究院 Monitoring method, device and storage medium based on image recognition

Similar Documents

Publication Publication Date Title
CN109670441B (en) Method, system, terminal and computer readable storage medium for realizing wearing recognition of safety helmet
CN109657564B (en) Personnel on-duty detection method and device, storage medium and terminal equipment
CN109409238B (en) Obstacle detection method and device and terminal equipment
CN110781853B (en) Crowd abnormality detection method and related device
EP3376431B1 (en) Method and apparatus for identifying pupil in image
CN111079764A (en) Low-illumination license plate image recognition method and device based on deep learning
CN111860277B (en) Safety warning method for airspeed tube sleeve of civil aircraft based on color histogram feature
CN110717542A (en) Emotion recognition method, device and equipment
CN113111844A (en) Operation posture evaluation method and device, local terminal and readable storage medium
CN108174198B (en) Video image quality diagnosis analysis detection device and application system
CN113015022A (en) Behavior recognition method and device, terminal equipment and computer readable storage medium
CN113139428A (en) Target identification method, edge device, frontier defense monitoring system and readable storage medium
CN111178241A (en) Intelligent monitoring system and method based on video analysis
CN110599520B (en) Open field experiment data analysis method, system and terminal equipment
CN111476117A (en) Safety helmet wearing detection method and device and terminal
CN113963162A (en) Helmet wearing identification method and device, computer equipment and storage medium
CN110633630B (en) Behavior identification method and device and terminal equipment
CN112966687A (en) Image segmentation model training method and device and communication equipment
CN115690747B (en) Vehicle blind area detection model test method and device, electronic equipment and storage medium
CN106407886A (en) Apparatus for establishing face model
CN107592297B (en) Method, system and terminal equipment for mobile detection
CN112348112B (en) Training method and training device for image recognition model and terminal equipment
CN114049619A (en) Insulator icing identification method and device, storage medium and equipment
CN113947795A (en) Mask wearing detection method, device, equipment and storage medium
CN111797922A (en) Text image classification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination