CN113469043A - Method and device for detecting wearing state of safety helmet, computer equipment and storage medium
- Publication number: CN113469043A
- Application number: CN202110739196.6A
- Authority: CN (China)
- Legal status: Granted
Classifications
- G06F18/214 — Pattern recognition; Analysing; Design or setup of recognition systems or techniques; Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/25 — Pattern recognition; Analysing; Fusion techniques
Abstract
The application relates to a method and device for detecting the wearing state of a safety helmet, a computer device, and a storage medium. The method comprises the following steps: acquiring a two-dimensional image and a three-dimensional image of a target detection person; determining a deflection angle of the two-dimensional image relative to a monocular camera according to a face reference point in the two-dimensional image; projecting the three-dimensional image onto a two-dimensional plane according to the deflection angle to obtain a corresponding projected two-dimensional image; and inputting the projected two-dimensional image into a pre-trained safety helmet wearing state discrimination model to obtain a safety helmet wearing state result for the target detection person. By adopting the method, the detection accuracy of the safety helmet wearing state can be improved.
Description
Technical Field
The application relates to the technical field of deep learning, and in particular to a method and device for detecting the wearing state of a safety helmet, a computer device, and a storage medium.
Background
With the development of computer technology, deep learning has emerged. Deep learning learns the intrinsic regularities and representation levels of sample data, and the information obtained during learning greatly assists the interpretation of data such as text, images, and sound. Its ultimate goal is to give machines the same analytical and learning ability as humans, enabling them to recognize data such as text, images, and sound. At present, deep learning is widely applied in intelligent supervision systems for infrastructure construction sites, industrial production sites, and various high-altitude operation environments, in order to identify field workers and monitor in real time whether safety helmets are worn.
However, the conventional method for detecting the wearing state of a safety helmet only performs recognition on a two-dimensional image of the worker, so its detection accuracy is low.
Disclosure of Invention
In view of the above, it is necessary to provide a method, an apparatus, a computer device, and a storage medium for detecting the wearing state of a safety helmet which can improve detection accuracy.
A safety helmet wearing state detection method, the method comprising:
acquiring a two-dimensional image and a three-dimensional image of a target detection person;
determining a deflection angle of the two-dimensional image relative to a monocular camera according to the face reference point in the two-dimensional image;
projecting the three-dimensional image onto a two-dimensional plane according to the deflection angle to obtain a corresponding projected two-dimensional image;
and inputting the projected two-dimensional image into a pre-trained safety helmet wearing state discrimination model to obtain a safety helmet wearing state result of the target detection personnel.
In one embodiment, the acquiring the two-dimensional image and the three-dimensional image of the target detection person includes:
the method comprises the steps of obtaining a two-dimensional image of a target detection person through a monocular camera, and obtaining a three-dimensional image of the target detection person through a target binocular camera.
In one embodiment, before the acquiring the three-dimensional image of the target detection person by the target binocular camera, the method further includes:
acquiring a light brightness value of the current environment;
when the light brightness value is larger than or equal to a preset light brightness value, taking the binocular color camera as a target binocular camera;
and when the light brightness value is smaller than the preset light brightness value, taking the binocular infrared camera as a target binocular camera.
In one embodiment, the acquiring a three-dimensional image of the target detection person by the target binocular camera includes:
respectively acquiring a left two-dimensional image and a right two-dimensional image of a target detection person through a left camera and a right camera in a target binocular camera;
and generating a three-dimensional image of the target detection personnel according to the left two-dimensional image and the right two-dimensional image.
In one embodiment, the generating a three-dimensional image of the target detection person according to the left two-dimensional image and the right two-dimensional image includes:
when the target binocular camera is a binocular color camera, generating a color three-dimensional image of the target detection personnel according to the left two-dimensional image and the right two-dimensional image;
and when the target binocular camera is a binocular infrared camera, generating a depth three-dimensional image of the target detection personnel according to the left two-dimensional image and the right two-dimensional image.
In one embodiment, the method further comprises:
extracting human face characteristic points and human face vertexes of the target detection personnel from the three-dimensional image;
aligning the human face vertex of the target detection personnel with the pre-stored human face vertex of the calibration detection personnel;
after the human face vertex of the target detection person is aligned with the human face vertex of the calibration detection person, determining a feature difference value between the human face feature point of the target detection person and the human face feature point of the calibration detection person;
and when the characteristic difference value is smaller than a preset characteristic difference value, taking the identity information of the corresponding calibration detection personnel as the identity information of the target detection personnel.
In one embodiment, the training step of the helmet wearing state discrimination model includes:
acquiring sample two-dimensional images of a calibration detection person wearing the safety helmet and not wearing the safety helmet through the monocular camera, and acquiring sample three-dimensional images of the calibration detection person wearing the safety helmet and not wearing the safety helmet through the target binocular camera; the sample two-dimensional images and the sample three-dimensional images are acquired by the monocular camera and the target binocular camera at each preset angle interval while the calibration detection person rotates the head within a preset angle range;
and training a helmet wearing state discrimination model through the sample two-dimensional image and the sample three-dimensional image.
In one embodiment, the method further comprises:
when the wearing state result of the safety helmet is that the safety helmet is not worn, generating alarm information;
and sending the alarm information to alarm equipment to indicate the alarm equipment to alarm and remind based on the alarm information.
A safety helmet wearing state detection apparatus, the apparatus comprising:
the acquisition module is used for acquiring a two-dimensional image and a three-dimensional image of a target detection person;
the determining module is used for determining the deflection angle of the two-dimensional image relative to the monocular camera according to the face reference point in the two-dimensional image;
the projection module is used for projecting the three-dimensional image onto a two-dimensional plane according to the deflection angle so as to obtain a corresponding projected two-dimensional image;
and the judging module is used for inputting the projected two-dimensional image into a pre-trained safety helmet wearing state judging model to obtain a safety helmet wearing state result of the target detection personnel.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring a two-dimensional image and a three-dimensional image of a target detection person;
determining a deflection angle of the two-dimensional image relative to a monocular camera according to the face reference point in the two-dimensional image;
projecting the three-dimensional image onto a two-dimensional plane according to the deflection angle to obtain a corresponding projected two-dimensional image;
and inputting the projected two-dimensional image into a pre-trained safety helmet wearing state discrimination model to obtain a safety helmet wearing state result of the target detection personnel.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a two-dimensional image and a three-dimensional image of a target detection person;
determining a deflection angle of the two-dimensional image relative to a monocular camera according to the face reference point in the two-dimensional image;
projecting the three-dimensional image onto a two-dimensional plane according to the deflection angle to obtain a corresponding projected two-dimensional image;
and inputting the projected two-dimensional image into a pre-trained safety helmet wearing state discrimination model to obtain a safety helmet wearing state result of the target detection personnel.
According to the method and the device for detecting the wearing state of the safety helmet, the computer equipment and the storage medium, the two-dimensional image and the three-dimensional image of the target detection personnel are obtained; determining the deflection angle of the two-dimensional image relative to the monocular camera according to the face reference point in the two-dimensional image; projecting the three-dimensional image onto a two-dimensional plane according to the deflection angle to obtain a corresponding projected two-dimensional image; and inputting the projected two-dimensional image into a pre-trained safety helmet wearing state discrimination model to obtain a safety helmet wearing state result of the target detection personnel. Therefore, the wearing state of the safety helmet of the target detection personnel is detected by fusing the two-dimensional image and the three-dimensional image of the target detection personnel, and compared with the traditional mode of detecting the wearing state of the safety helmet only based on the two-dimensional image of the target detection personnel, the detection accuracy of the wearing state of the safety helmet is improved.
Drawings
FIG. 1 is a diagram of an application environment of a method for detecting the wearing state of a safety helmet in one embodiment;
FIG. 2 is a schematic flow chart illustrating a method for detecting a wearing state of a helmet in one embodiment;
FIG. 3 is a schematic flowchart of the target binocular camera determination step in one embodiment;
FIG. 4 is a schematic flow chart diagram illustrating the steps for generating a three-dimensional image of a target inspector in one embodiment;
FIG. 5 is a schematic flow chart of a method for detecting the wearing state of a helmet in another embodiment;
FIG. 6 is a schematic structural diagram of a system for detecting a wearing state of a helmet in one embodiment;
FIG. 7 is a block diagram showing the structure of a device for detecting the wearing state of a helmet in one embodiment;
FIG. 8 is a block diagram showing the construction of a helmet wearing state detecting device in another embodiment;
FIG. 9 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The method for detecting the wearing state of the safety helmet can be applied to the application environment shown in fig. 1. The application environment includes a monocular camera 102, a target binocular camera 104, and a server 106. The monocular camera 102 and the target binocular camera 104 are respectively in communication with the server 106 through a network. The server 106 may be implemented as a stand-alone server or a server cluster composed of a plurality of servers. Those skilled in the art will understand that the application environment shown in fig. 1 is only a part of the scenario related to the present application, and does not constitute a limitation to the application environment of the present application.
The server 106 acquires a two-dimensional image of the target detection person through the monocular camera 102, and acquires a three-dimensional image of the target detection person through the target binocular camera 104; the server 106 determines the deflection angle of the two-dimensional image relative to the monocular camera according to the face reference point in the two-dimensional image; the server 106 projects the three-dimensional image onto a two-dimensional plane according to the deflection angle to obtain a corresponding projected two-dimensional image; the server 106 inputs the projected two-dimensional image into a pre-trained helmet wearing state discrimination model to obtain a helmet wearing state result of the target detection personnel.
In one embodiment, as shown in fig. 2, a method for detecting a wearing state of a helmet is provided, which is exemplified by applying the method to the server 106 in fig. 1, and includes the following steps:
s202, acquiring a two-dimensional image and a three-dimensional image of the target detection person.
The two-dimensional image is a planar image, and the three-dimensional image is a stereoscopic image.
In one embodiment, the third-party storage device stores two-dimensional images and three-dimensional images of the target detection person, and the server can communicate with the third-party storage device and directly acquire the two-dimensional images and the three-dimensional images of the target detection person from the third-party storage device.
In one embodiment, the server may directly acquire a two-dimensional image of the target detection person through the monocular camera. The server may also acquire two-dimensional images through the left camera and the right camera of the target binocular camera, and generate a three-dimensional image of the target detection person by triangulation based on those two images. The monocular camera is a camera device comprising one camera, and the binocular camera is a camera device comprising two cameras.
In one embodiment, the monocular camera may be a monocular color camera or a monocular infrared camera, and the target binocular camera may be a binocular color camera or a binocular infrared camera. It is understood that the image data of a two-dimensional image acquired by the monocular color camera may include the two-dimensional coordinates and the RGB (Red, Green, Blue) color values of each pixel in the image, while the image data acquired by the monocular infrared camera may include only the two-dimensional coordinates of each pixel. Likewise, the image data of a three-dimensional image acquired by the binocular color camera may include the three-dimensional coordinates and RGB color values of each pixel, while the image data acquired by the binocular infrared camera may include only the three-dimensional coordinates of each pixel.
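As a concrete illustration of these four data layouts, the following Python sketch shows one possible in-memory representation; the resolution, array shapes, and the one-point-per-pixel convention are assumptions for illustration rather than details fixed by the application.

```python
import numpy as np

h, w = 480, 640  # assumed image resolution

# Monocular color camera: 2-D pixel grid with RGB color values.
mono_color_2d = np.zeros((h, w, 3), dtype=np.uint8)
# Monocular infrared camera: 2-D pixel grid, intensity only.
mono_ir_2d = np.zeros((h, w), dtype=np.uint8)

n = h * w  # assume one 3-D point per matched pixel pair

# Binocular color camera: 3-D coordinates plus RGB per point.
color_cloud = {"xyz": np.zeros((n, 3), dtype=np.float32),
               "rgb": np.zeros((n, 3), dtype=np.uint8)}
# Binocular infrared camera: 3-D coordinates only (a depth point cloud).
depth_cloud = {"xyz": np.zeros((n, 3), dtype=np.float32)}
```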
And S204, determining the deflection angle of the two-dimensional image relative to the monocular camera according to the face reference point in the two-dimensional image.
The face reference point is a distinctive point from which a face can be quickly located; for example, it may be a face vertex or a face feature point. It should be noted that the notion and range of face vertices may differ across environments, application scenarios, and experimental conditions. Optionally, the face vertices may include at least one of the tip of the chin, the tip of the nose, the centers of the eyes, the corners of the ears, and so on; the face feature points may include at least one of the chin, nose, eyes, ears, mouth, and so on. It is understood that face vertices may overlap with, but are not identical to, the face feature points.
Specifically, the server may determine the deflection angle of the two-dimensional image relative to the monocular camera according to a face reference point, i.e., a face vertex or a face feature point, in the two-dimensional image.
It can be understood that a face image collected at the gate of a construction site captures the instantaneous deflection angle of the face relative to the camera; for example, the deflection angle is 0° when the face directly faces the camera and 90° for a side profile. The deflection angle of the face image can be used to reduce the computational load of face recognition: computation and comparison are performed only within a specific angle range, and a full-angle comparison is unnecessary.
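As one possible realisation, the yaw can be estimated from the horizontal offset of the nose tip relative to the inter-eye midpoint. This heuristic and the choice of reference points are illustrative assumptions; the application does not fix a formula.

```python
import numpy as np

def estimate_yaw_deg(left_eye, right_eye, nose_tip):
    """Rough yaw estimate from three 2-D face reference points.

    Returns roughly 0 deg for a frontal face, approaching +/-90 deg
    in profile. Illustrative heuristic only.
    """
    left_eye, right_eye, nose_tip = (np.asarray(p, dtype=float)
                                     for p in (left_eye, right_eye, nose_tip))
    eye_mid = (left_eye + right_eye) / 2.0
    inter_eye = np.linalg.norm(right_eye - left_eye)
    # Horizontal nose offset normalised by eye distance, mapped to an angle.
    offset = (nose_tip[0] - eye_mid[0]) / max(inter_eye, 1e-6)
    return float(np.degrees(np.arcsin(np.clip(offset, -1.0, 1.0))))

print(estimate_yaw_deg((100, 120), (160, 120), (130, 150)))  # ~0 deg, frontal
```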
S206, projecting the three-dimensional image onto a two-dimensional plane according to the deflection angle to obtain a corresponding projected two-dimensional image.
Specifically, the server may project the three-dimensional image onto a two-dimensional plane according to a deflection angle of the acquired two-dimensional image with respect to the monocular camera, so as to obtain a corresponding projected two-dimensional image.
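A minimal projection sketch, assuming an orthographic camera model and rotation about the vertical axis; a calibrated perspective projection would fit the described step equally well.

```python
import numpy as np

def project_to_plane(points_xyz, yaw_deg):
    """Rotate a 3-D point cloud by the estimated yaw, then drop depth."""
    theta = np.radians(yaw_deg)
    # Rotation about the vertical (y) axis.
    rot = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                    [ 0.0,           1.0, 0.0          ],
                    [-np.sin(theta), 0.0, np.cos(theta)]])
    rotated = points_xyz @ rot.T
    return rotated[:, :2]  # keep (x, y); z is the discarded depth

cloud = np.random.rand(1000, 3)          # stand-in for a face point cloud
pixels = project_to_plane(cloud, 30.0)   # projected 2-D coordinates
```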
And S208, inputting the projected two-dimensional image into a pre-trained safety helmet wearing state discrimination model to obtain a safety helmet wearing state result of the target detection personnel.
The helmet wearing state discrimination model is a discrimination model based on a convolutional neural network.
Specifically, the server can input the projected two-dimensional image into the pre-trained safety helmet wearing state discrimination model to obtain the safety helmet wearing state result for the target detection person. The result may be either that the safety helmet is worn or that it is not worn.
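As an illustration of this discrimination step, here is a minimal PyTorch sketch of a binary classifier; the layer sizes, input resolution, and class order are assumptions, since the application only states that the model is based on a convolutional neural network.

```python
import torch
import torch.nn as nn

class HelmetStateNet(nn.Module):
    """Minimal binary CNN classifier; the architecture is illustrative only."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # assumes 64x64 input

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = HelmetStateNet().eval()
with torch.no_grad():
    logits = model(torch.rand(1, 3, 64, 64))            # projected 2-D image
    result = ("not worn", "worn")[logits.argmax(1).item()]
```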
In the method for detecting the wearing state of the safety helmet, a two-dimensional image and a three-dimensional image of a target detection person are obtained; determining the deflection angle of the two-dimensional image relative to the monocular camera according to the face reference point in the two-dimensional image; projecting the three-dimensional image onto a two-dimensional plane according to the deflection angle to obtain a corresponding projected two-dimensional image; and inputting the projected two-dimensional image into a pre-trained safety helmet wearing state discrimination model to obtain a safety helmet wearing state result of the target detection personnel. Therefore, the wearing state of the safety helmet of the target detection personnel is detected by fusing the two-dimensional image and the three-dimensional image of the target detection personnel, and compared with the traditional mode of detecting the wearing state of the safety helmet only based on the two-dimensional image of the target detection personnel, the detection accuracy of the wearing state of the safety helmet is improved.
Moreover, compared with the traditional approach of directly acquiring a three-dimensional image of the target detection person through dedicated three-dimensional image acquisition equipment, this method places low demands on the computing performance of the server and improves data processing efficiency. Three-dimensional image acquisition equipment is also expensive, so acquiring the three-dimensional image through the target binocular camera greatly reduces the detection cost.
In one embodiment, as shown in fig. 3, before the step of acquiring the three-dimensional image of the target detection person by the target binocular camera in step S202, the method for detecting the wearing state of the helmet further includes the following steps:
s302, obtaining the light brightness value of the current environment.
Specifically, the photoelectric sensor can be in communication connection with the server, and the photoelectric sensor can acquire the light brightness value of the current environment in the detection area and send the light brightness value of the current environment to the server. The server can receive the light brightness value of the current environment sent by the photoelectric sensor.
And S304, when the light brightness value is greater than or equal to the preset light brightness value, taking the binocular color camera as a target binocular camera.
Specifically, the server can compare the light brightness value with the preset light brightness value, and when the light brightness value is greater than or equal to the preset light brightness value, the server can take the binocular color camera as a target binocular camera, namely, the server can acquire the three-dimensional image of the target detection personnel through the binocular color camera.
And S306, when the light brightness value is smaller than the preset light brightness value, taking the binocular infrared camera as a target binocular camera.
Specifically, the server can compare the light brightness value with the preset light brightness value; when the light brightness value is smaller than the preset light brightness value, the server can take the binocular infrared camera as the target binocular camera, that is, the server can acquire the three-dimensional image of the target detection person through the binocular infrared camera.
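A sketch of this selection rule, in which the threshold value and the camera identifiers are assumptions for illustration:

```python
BRIGHTNESS_THRESHOLD = 50.0  # preset light brightness value (assumed units/value)

def select_target_camera(brightness: float) -> str:
    """Choose the stereo rig according to ambient light, as in S304/S306."""
    if brightness >= BRIGHTNESS_THRESHOLD:
        return "binocular_color_camera"    # bright environment
    return "binocular_infrared_camera"     # dark environment
```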
In the above embodiment, the light brightness value of the current environment determines whether the binocular color camera or the binocular infrared camera is used to acquire the three-dimensional image of the target detection person. The two binocular cameras complement each other as ambient light changes, reducing the sensitivity of image acquisition to ambient light; whether the environment is dark or bright, a high-quality three-dimensional image of the target detection person can be acquired, further improving detection accuracy.
In one embodiment, as shown in fig. 4, the step of acquiring a three-dimensional image of the target detection person through the target binocular camera in step S202 specifically includes the following steps:
s402, respectively collecting a left two-dimensional image and a right two-dimensional image of the target detection personnel through a left camera and a right camera in the target binocular camera.
The left two-dimensional image is a two-dimensional image acquired by a left camera in the target binocular camera. The right two-dimensional image is a two-dimensional image acquired by a right camera in the target binocular camera.
Specifically, the target binocular camera may include a left camera and a right camera. The server can acquire a left two-dimensional image of the target detection personnel through a left camera in the target binocular camera, and acquire a right two-dimensional image of the target detection personnel through a right camera in the target binocular camera.
And S404, generating a three-dimensional image of the target detection personnel according to the left two-dimensional image and the right two-dimensional image.
Specifically, the left two-dimensional image and the right two-dimensional image are composed of a plurality of pixel points. The server can calculate and obtain the three-dimensional coordinate value corresponding to each pixel point based on the two-dimensional coordinate value of each pixel point in the left two-dimensional image and the right two-dimensional image, and therefore the three-dimensional image of the target detection personnel is obtained.
It is understood that both the left and right two-dimensional images may include face vertices and face feature points. The server can calculate three-dimensional coordinate values corresponding to the human face vertex and the human face characteristic point based on the two-dimensional coordinate values of the human face vertex and the human face characteristic point in the left two-dimensional image and the right two-dimensional image.
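A rectified-stereo triangulation along these lines would recover the three-dimensional coordinates; the focal length and baseline are calibration values assumed for the example, and the principal point is taken to be at the image origin for brevity.

```python
import numpy as np

def triangulate(pts_left, pts_right, focal_px, baseline_m):
    """Recover 3-D coordinates from matched left/right pixel pairs.

    Standard rectified-stereo relation: Z = f * B / disparity.
    Assumes rectified images and a principal point at the origin.
    """
    pts_left = np.asarray(pts_left, dtype=float)
    pts_right = np.asarray(pts_right, dtype=float)
    disparity = pts_left[:, 0] - pts_right[:, 0]           # horizontal shift
    z = focal_px * baseline_m / np.maximum(disparity, 1e-6)
    x = pts_left[:, 0] * z / focal_px
    y = pts_left[:, 1] * z / focal_px
    return np.stack([x, y, z], axis=1)

# One matched pixel pair; focal length and baseline are assumed calibration values.
xyz = triangulate([[320, 240]], [[300, 240]], focal_px=800.0, baseline_m=0.12)
```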
In the above embodiment, the left two-dimensional image and the right two-dimensional image are collected by the left camera and the right camera of the target binocular camera, and the three-dimensional image of the target detection person is generated based on the left two-dimensional image and the right two-dimensional image. Compared with the traditional mode that the three-dimensional image of the target detection personnel is directly acquired through the three-dimensional image acquisition equipment, the method has low requirement on the operation performance of the server and improves the data processing efficiency. Meanwhile, the three-dimensional image acquisition equipment is expensive, the target binocular camera is low in price, the three-dimensional image of the target detection personnel is acquired through the target binocular camera, and the detection cost is greatly reduced.
In one embodiment, the step S404 of generating a three-dimensional image of the target detection person according to the left two-dimensional image and the right two-dimensional image specifically includes: when the target binocular camera is a binocular color camera, generating a color three-dimensional image of the target detection personnel according to the left two-dimensional image and the right two-dimensional image; and when the target binocular camera is a binocular infrared camera, generating a depth three-dimensional image of the target detection personnel according to the left two-dimensional image and the right two-dimensional image.
Specifically, when the target binocular camera is a binocular color camera, the server may generate a color three-dimensional image of the target detection person according to the two-dimensional coordinate values of the pixel points in the left two-dimensional image and the right two-dimensional image, and the RGB color values. When the target binocular camera is a binocular infrared camera, the server can generate a depth three-dimensional image of the target detection personnel according to the two-dimensional coordinate values of all pixel points in the left two-dimensional image and the right two-dimensional image.
In the above embodiment, a color three-dimensional image of the target detection person is acquired through the binocular color camera, and a depth three-dimensional image is acquired through the binocular infrared camera. The two binocular cameras complement each other as ambient light changes, reducing the sensitivity of image acquisition to ambient light, guaranteeing the quality of the acquired three-dimensional image, and further improving the accuracy of face recognition and safety helmet wearing state detection.
In one embodiment, the method for detecting the wearing state of the safety helmet further includes: extracting human face characteristic points and human face vertexes of target detection personnel from the three-dimensional image; aligning the human face vertex of the target detection personnel with the pre-stored human face vertex of the calibration detection personnel; after the human face vertex of the target detection personnel is aligned with the human face vertex of the calibration detection personnel, determining a characteristic difference value between the human face characteristic point of the target detection personnel and the human face characteristic point of the calibration detection personnel; and when the characteristic difference value is smaller than the preset characteristic difference value, taking the identity information of the corresponding calibration detection personnel as the identity information of the target detection personnel.
Wherein the feature difference value is a difference value between the face feature points. The calibration detection personnel are the workers to be detected who need to pass through the detection area. For example, all workers at a worksite are calibration testers.
Specifically, the server may extract a face feature point and a face vertex of the target detection person from the three-dimensional image, and align the face vertex of the target detection person with a face vertex of a calibration detection person stored in advance. After the human face vertex of the target detection person is aligned with the human face vertex of the calibration detection person, the server can calculate a feature difference value between the human face feature point of the target detection person and the human face feature point of the calibration detection person, compare the feature difference value with a preset feature difference value, and when the feature difference value is smaller than the preset feature difference value, the server can use the identity information of the corresponding calibration detection person as the identity information of the target detection person, namely the identity of the target detection person can be recognized by the server.
Optionally, the feature difference value between the face feature points of the target detection person and those of the calibration detection person may be computed from the normals or curvatures of the three-dimensional point cloud data at the coordinates of the face feature points and face vertices in the three-dimensional image.
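A minimal sketch of the matching logic, assuming the enrolled database stores feature points and vertices in a consistent order and using a centroid shift on the face vertices for alignment; the application leaves the exact alignment method, difference metric, and threshold value open.

```python
import numpy as np

FEATURE_THRESHOLD = 0.5  # preset feature difference value (assumed)

def identify(target_feats, target_verts, enrolled):
    """Match a probe face against enrolled faces by feature-point distance.

    enrolled: dict mapping person id -> (feature_points, face_vertices),
    with feature points stored in the same order as the probe's.
    Returns the best-matching id, or None if no match is close enough.
    """
    # Align via the centroid of the face vertices (a full rigid ICP would also fit).
    probe = target_feats - target_verts.mean(axis=0)
    best_id, best_diff = None, np.inf
    for person_id, (feats, verts) in enrolled.items():
        aligned = feats - verts.mean(axis=0)
        diff = np.linalg.norm(probe - aligned, axis=1).mean()
        if diff < best_diff:
            best_id, best_diff = person_id, diff
    return best_id if best_diff < FEATURE_THRESHOLD else None
```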
In the above embodiment, after the human face vertex of the target detection person is aligned with the human face vertex of the calibration detection person, the feature difference value between the human face feature point of the target detection person and the human face feature point of the calibration detection person is calculated, and the identity information of the target detection person is identified through the feature difference value.
In one embodiment, the training step of the safety helmet wearing state discrimination model comprises: acquiring sample two-dimensional images of a calibration detection person wearing the safety helmet and not wearing the safety helmet through the monocular camera, and acquiring sample three-dimensional images of the calibration detection person wearing the safety helmet and not wearing the safety helmet through the target binocular camera, where the sample two-dimensional images and sample three-dimensional images are acquired by the monocular camera and the target binocular camera at each preset angle interval while the calibration detection person rotates the head within a preset angle range; and training the safety helmet wearing state discrimination model with the sample two-dimensional images and sample three-dimensional images.
The sample two-dimensional images and sample three-dimensional images are the data used to train the safety helmet wearing state discrimination model.
Specifically, a calibration detection person can be understood as the worker whose images are collected. The calibration detection person looks straight at the monocular camera and the binocular camera at eye level and rotates the head within a preset angle range, for example from -90° to 90°. At every preset angle interval, for example every 10°, sample two-dimensional images of the person wearing and not wearing the safety helmet are acquired through the monocular camera, and sample three-dimensional images of the person wearing and not wearing the safety helmet are acquired through the target binocular camera. The server can then train the safety helmet wearing state discrimination model with these sample two-dimensional and three-dimensional images.
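The acquisition schedule described above can be sketched as follows; `capture_2d` and `capture_3d` are hypothetical stand-ins for the monocular and binocular acquisition calls, and the -90° to 90° range with a 10° step simply reuses the example values from the text.

```python
def collect_training_samples(capture_2d, capture_3d):
    """Enumerate the capture schedule for one calibration detection person."""
    samples = []
    for wearing in (True, False):              # with and without the safety helmet
        for angle in range(-90, 91, 10):       # head rotated to each preset angle
            samples.append({
                "wearing_helmet": wearing,
                "head_angle_deg": angle,
                "image_2d": capture_2d(angle),  # monocular camera shot
                "image_3d": capture_3d(angle),  # target binocular camera shot
            })
    return samples
```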
In this embodiment, sample two-dimensional images and sample three-dimensional images of the calibration detection person, both wearing and not wearing the safety helmet, are acquired at every preset angle interval within the preset angle range and used to train the safety helmet wearing state discrimination model, which improves the discrimination accuracy of the model.
In one embodiment, the method for detecting the wearing state of the safety helmet further includes: when the wearing state result of the safety helmet is that the safety helmet is not worn, alarm information is generated; and sending the alarm information to alarm equipment to instruct the alarm equipment to alarm and remind based on the alarm information.
Specifically, the safety helmet wearing state result may be either that the safety helmet is worn or that it is not worn. When the result indicates that the safety helmet is not worn, the server can generate alarm information and send it to the alarm device, and the alarm device can receive the alarm information sent by the server and issue an alarm reminder based on it.
Optionally, the alarm device may include a flash lamp, an alarm bell, a loudspeaker, and the like, and the alarm prompt may specifically be that the flash lamp flashes, and/or the alarm bell rings, and/or the loudspeaker performs voice broadcast, and the like.
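A minimal sketch of dispatching the alarm information; the HTTP endpoint and payload schema are illustrative assumptions, since the application only requires that the alarm information reach the alarm device.

```python
import json
import urllib.request

def send_alarm(worker_id: str,
               endpoint: str = "http://alarm-device.local/alert"):
    """Push an alarm message to the alarm device (endpoint is hypothetical)."""
    payload = json.dumps({"worker": worker_id,
                          "event": "helmet_not_worn"}).encode()
    req = urllib.request.Request(endpoint, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)  # device flashes/rings on receipt
```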
In the embodiment, when the wearing state result of the safety helmet is that the safety helmet is not worn, the alarm device gives an alarm and informs related management personnel in time, so that the safety of the working personnel is further ensured.
In an embodiment, as shown in fig. 5, the process of detecting the wearing state of the safety helmet may specifically be: the server can obtain a two-dimensional image of the target detection person and a three-dimensional image of the target detection person, and projects the three-dimensional image based on the two-dimensional image to obtain a projected two-dimensional image. The server can input the projected two-dimensional image into the safety helmet wearing state discrimination model to obtain a safety helmet wearing state result. And when the wearing state result of the safety helmet shows that the safety helmet is not worn, the server can control the alarm device to give an alarm and store the obtained projection two-dimensional image into the database. When the safety helmet is worn as a result of the wearing state of the safety helmet, the server can directly store the obtained projected two-dimensional image into the database.
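Putting the pieces together, the FIG. 5 flow might be orchestrated as below; every attribute on the hypothetical `server` object (yaw estimation, projection, model, alarm device) is a placeholder for the components discussed earlier, not an API defined by the application.

```python
def run_detection(server, image_2d, image_3d, database):
    """One pass of the FIG. 5 flow (all server attributes are placeholders)."""
    yaw = server.estimate_yaw(image_2d)              # S204: deflection angle
    projected = server.project(image_3d, yaw)        # S206: 3-D -> 2-D projection
    worn = server.helmet_model.predict(projected)    # S208: discrimination model
    if not worn:
        server.alarm_device.trigger()                # alarm when helmet not worn
    database.save(projected)                         # projected image stored either way
    return worn
```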
In one embodiment, as shown in fig. 6, there is provided a safety helmet wearing state detection system comprising:
and the two-dimensional image acquisition unit 601 is used for acquiring a two-dimensional image of the target detection person through the monocular camera.
And the three-dimensional image acquisition unit 602 is configured to acquire a three-dimensional image of the target detection person through the target binocular camera.
A worker face information database 603 for storing two-dimensional images and three-dimensional images of the target detection person.
A control unit 604 for controlling the two-dimensional image capturing unit 601, the three-dimensional image capturing unit 602, the worker face information database 603, the recognition unit 605, the alarm unit 606, and the data storage unit 607.
The identification unit 605 is configured to determine a wearing state of the safety helmet of the target detection person, and identify identity information of the target detection person.
And the alarm unit 606 is used for giving an alarm when the safety helmet is not worn.
A data storage unit 607 for storing the projected two-dimensional image.
It should be understood that although the steps in FIGS. 2, 3, 4 and 5 are displayed in sequence, these steps are not necessarily performed in that order. Unless explicitly stated otherwise, their execution is not strictly ordered, and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 2, 3, 4 and 5 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different moments, and whose execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, there is provided a helmet wearing state detection apparatus 700 including: an obtaining module 701, a determining module 702, a projecting module 703 and a judging module 704, wherein:
the acquiring module 701 is configured to acquire a two-dimensional image and a three-dimensional image of a target detection person.
A determining module 702, configured to determine a deflection angle of the two-dimensional image relative to the monocular camera according to the face reference point in the two-dimensional image.
The projection module 703 is configured to project the three-dimensional image onto a two-dimensional plane according to the deflection angle, so as to obtain a corresponding projected two-dimensional image.
And the judging module 704 is used for inputting the projected two-dimensional image into a pre-trained safety helmet wearing state judging model to obtain a safety helmet wearing state result of the target detection personnel.
In one embodiment, the obtaining module 701 is further configured to obtain a two-dimensional image of the target detection person through a monocular camera, and obtain a three-dimensional image of the target detection person through a target binocular camera.
In one embodiment, the obtaining module 701 is further configured to obtain a light brightness value of the current environment; when the light brightness value is larger than or equal to the preset light brightness value, taking the binocular color camera as a target binocular camera; and when the light brightness value is smaller than the preset light brightness value, taking the binocular infrared camera as a target binocular camera.
In one embodiment, the obtaining module 701 is further configured to collect a left two-dimensional image and a right two-dimensional image of the target detection person through a left camera and a right camera of the target binocular camera, respectively; and generating a three-dimensional image of the target detection personnel according to the left two-dimensional image and the right two-dimensional image.
In one embodiment, the obtaining module 701 is further configured to generate a color three-dimensional image of the target detection person according to the left two-dimensional image and the right two-dimensional image when the target binocular camera is a binocular color camera; and when the target binocular camera is a binocular infrared camera, generating a depth three-dimensional image of the target detection personnel according to the left two-dimensional image and the right two-dimensional image.
Referring to fig. 8, in one embodiment, the helmet wearing state detection apparatus 700 further includes: an identification module 705, a training module 706, and an alarm module 707, wherein:
the recognition module 705 is used for extracting the human face characteristic points and the human face vertexes of the target detection personnel from the three-dimensional image; aligning the human face vertex of the target detection personnel with the pre-stored human face vertex of the calibration detection personnel; after the human face vertex of the target detection personnel is aligned with the human face vertex of the calibration detection personnel, determining a characteristic difference value between the human face characteristic point of the target detection personnel and the human face characteristic point of the calibration detection personnel; and when the characteristic difference value is smaller than the preset characteristic difference value, taking the identity information of the corresponding calibration detection personnel as the identity information of the target detection personnel.
The training module 706 is used for acquiring sample two-dimensional images of a calibration detection person wearing the safety helmet and not wearing the safety helmet through the monocular camera, and acquiring sample three-dimensional images of the calibration detection person wearing the safety helmet and not wearing the safety helmet through the target binocular camera, where the sample two-dimensional images and sample three-dimensional images are acquired by the monocular camera and the target binocular camera at each preset angle interval while the calibration detection person rotates the head within a preset angle range; and for training the safety helmet wearing state discrimination model with the sample two-dimensional images and sample three-dimensional images.
An alarm module 707, configured to generate alarm information when the result of the wearing state of the safety helmet indicates that the safety helmet is not worn; and sending the alarm information to alarm equipment to instruct the alarm equipment to alarm and remind based on the alarm information.
The safety helmet wearing state detection device acquires a two-dimensional image and a three-dimensional image of a target detection person; determining the deflection angle of the two-dimensional image relative to the monocular camera according to the face reference point in the two-dimensional image; projecting the three-dimensional image onto a two-dimensional plane according to the deflection angle to obtain a corresponding projected two-dimensional image; and inputting the projected two-dimensional image into a pre-trained safety helmet wearing state discrimination model to obtain a safety helmet wearing state result of the target detection personnel. Therefore, the wearing state of the safety helmet of the target detection personnel is detected by fusing the two-dimensional image and the three-dimensional image of the target detection personnel, and compared with the traditional mode of detecting the wearing state of the safety helmet only based on the two-dimensional image of the target detection personnel, the detection accuracy of the wearing state of the safety helmet is improved.
For specific limitations of the helmet wearing state detection device, reference may be made to the above limitations of the helmet wearing state detection method, which are not described herein again. The modules in the helmet wearing state detection device can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be the server 106 of fig. 1, and its internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing the detection data of the wearing state of the safety helmet. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a headgear wearing state detection method.
Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring a two-dimensional image and a three-dimensional image of a target detection person;
determining the deflection angle of the two-dimensional image relative to the monocular camera according to the face reference point in the two-dimensional image;
projecting the three-dimensional image onto a two-dimensional plane according to the deflection angle to obtain a corresponding projected two-dimensional image;
and inputting the projected two-dimensional image into a pre-trained safety helmet wearing state discrimination model to obtain a safety helmet wearing state result of the target detection personnel.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
the method comprises the steps of obtaining a two-dimensional image of a target detection person through a monocular camera, and obtaining a three-dimensional image of the target detection person through a target binocular camera.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring a light brightness value of the current environment;
when the light brightness value is larger than or equal to the preset light brightness value, taking the binocular color camera as a target binocular camera;
and when the light brightness value is smaller than the preset light brightness value, taking the binocular infrared camera as a target binocular camera.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
respectively acquiring a left two-dimensional image and a right two-dimensional image of a target detection person through a left camera and a right camera in a target binocular camera;
and generating a three-dimensional image of the target detection personnel according to the left two-dimensional image and the right two-dimensional image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
when the target binocular camera is a binocular color camera, generating a color three-dimensional image of the target detection personnel according to the left two-dimensional image and the right two-dimensional image;
and when the target binocular camera is a binocular infrared camera, generating a depth three-dimensional image of the target detection personnel according to the left two-dimensional image and the right two-dimensional image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
extracting human face characteristic points and human face vertexes of target detection personnel from the three-dimensional image;
aligning the human face vertex of the target detection personnel with the pre-stored human face vertex of the calibration detection personnel;
after the human face vertex of the target detection personnel is aligned with the human face vertex of the calibration detection personnel, determining a characteristic difference value between the human face characteristic point of the target detection personnel and the human face characteristic point of the calibration detection personnel;
and when the characteristic difference value is smaller than the preset characteristic difference value, taking the identity information of the corresponding calibration detection personnel as the identity information of the target detection personnel.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring sample two-dimensional images of a calibration detection person wearing the safety helmet and not wearing the safety helmet through the monocular camera, and acquiring sample three-dimensional images of the calibration detection person wearing the safety helmet and not wearing the safety helmet through the target binocular camera; the sample two-dimensional images and sample three-dimensional images are acquired by the monocular camera and the target binocular camera at each preset angle interval while the calibration detection person rotates the head within a preset angle range;
and training a helmet wearing state discrimination model through the sample two-dimensional image and the sample three-dimensional image.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
when the wearing state result of the safety helmet is that the safety helmet is not worn, alarm information is generated;
and sending the alarm information to alarm equipment to instruct the alarm equipment to alarm and remind based on the alarm information.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a two-dimensional image and a three-dimensional image of a target detection person;
determining the deflection angle of the two-dimensional image relative to the monocular camera according to the face reference point in the two-dimensional image;
projecting the three-dimensional image onto a two-dimensional plane according to the deflection angle to obtain a corresponding projected two-dimensional image;
and inputting the projected two-dimensional image into a pre-trained safety helmet wearing state discrimination model to obtain a safety helmet wearing state result of the target detection personnel.
In one embodiment, the computer program when executed by the processor further performs the steps of:
the method comprises the steps of obtaining a two-dimensional image of a target detection person through a monocular camera, and obtaining a three-dimensional image of the target detection person through a target binocular camera.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a light brightness value of the current environment;
when the light brightness value is larger than or equal to the preset light brightness value, taking the binocular color camera as a target binocular camera;
and when the light brightness value is smaller than the preset light brightness value, taking the binocular infrared camera as a target binocular camera.
In one embodiment, the computer program when executed by the processor further performs the steps of:
respectively acquiring a left two-dimensional image and a right two-dimensional image of a target detection person through a left camera and a right camera in a target binocular camera;
and generating a three-dimensional image of the target detection personnel according to the left two-dimensional image and the right two-dimensional image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
when the target binocular camera is a binocular color camera, generating a color three-dimensional image of the target detection personnel according to the left two-dimensional image and the right two-dimensional image;
and when the target binocular camera is a binocular infrared camera, generating a depth three-dimensional image of the target detection personnel according to the left two-dimensional image and the right two-dimensional image.
In one embodiment, the computer program, when executed by the processor, further performs the steps of:
extracting the face feature points and face vertices of the target detection person from the three-dimensional image;
aligning the face vertices of the target detection person with pre-stored face vertices of a calibration detection person;
after the face vertices of the target detection person are aligned with the face vertices of the calibration detection person, determining a feature difference between the face feature points of the target detection person and the face feature points of the calibration detection person;
and when the feature difference is smaller than a preset feature difference, taking the identity information of the corresponding calibration detection person as the identity information of the target detection person (an alignment sketch follows below).
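A sketch of this identity check, under the assumptions that the vertex sets are row-matched, that alignment is a rigid Kabsch fit, and that the feature difference is the mean landmark distance against an illustrative 4 mm threshold.

```python
import numpy as np

FEATURE_THRESHOLD = 4.0  # preset feature difference, in mm (assumed)

def rigid_align(src, dst):
    """Least-squares rigid alignment (Kabsch) of src onto dst; returns the
    aligned points and the rotation applied on the right to row vectors."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # no reflections
    M = U @ D @ Vt
    return src_c @ M + dst.mean(axis=0), M

def identify(feature_points, vertices, enrolled):
    """enrolled maps identity -> (template feature points, template vertices)."""
    for identity, (tpl_features, tpl_vertices) in enrolled.items():
        # Align on the dense vertices, then apply the same transform to the
        # sparse feature points before comparing them.
        _, M = rigid_align(vertices, tpl_vertices)
        moved = (feature_points - vertices.mean(axis=0)) @ M \
                + tpl_vertices.mean(axis=0)
        diff = float(np.mean(np.linalg.norm(moved - tpl_features, axis=1)))
        if diff < FEATURE_THRESHOLD:
            return identity  # identity information of the matched person
    return None
```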
In one embodiment, the computer program, when executed by the processor, further performs the steps of:
acquiring, through the monocular camera, sample two-dimensional images of the calibration detection person wearing the safety helmet and not wearing the safety helmet, and acquiring, through the target binocular camera, sample three-dimensional images of the calibration detection person wearing the safety helmet and not wearing the safety helmet; the sample two-dimensional images and the sample three-dimensional images are captured by the monocular camera and the target binocular camera at each preset angle interval while the calibration detection person rotates the head within a preset angle range;
and training the safety helmet wearing state discrimination model on the sample two-dimensional images and the sample three-dimensional images (a training sketch follows below).
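A training sketch under the assumption that the discriminator is a binary image classifier, here a ResNet-18 in PyTorch, fed both the monocular samples and the projected views of the stereo samples from the `samples/` layout used in the collection sketch earlier; the architecture, optimizer, and hyperparameters are illustrative, not specified by the patent.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Assumed layout: samples/helmet/*.png and samples/no_helmet/*.png, holding
# both the sample 2-D images and projected views of the sample 3-D images.
transform = transforms.Compose([transforms.Resize((224, 224)),
                                transforms.ToTensor()])
dataset = datasets.ImageFolder("samples", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=2)  # classes: worn / not worn
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
torch.save(model.state_dict(), "helmet_state_model.pt")
```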
In one embodiment, the computer program, when executed by the processor, further performs the steps of:
when the safety helmet wearing state result indicates that the safety helmet is not worn, generating alarm information;
and sending the alarm information to an alarm device to instruct the alarm device to raise an alert based on the alarm information (a sketch follows below).
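A sketch of the alarm path, assuming the alarm device exposes an HTTP endpoint; the URL and the JSON payload are invented for illustration, and any other channel (MQTT, a relay output, an SMS gateway) could stand in.

```python
import json
import urllib.request

def send_alarm(person_id, endpoint="http://alarm-device.local/alert"):
    """Push the alarm information to the alarm device (hypothetical API)."""
    payload = json.dumps({"event": "helmet_not_worn",
                          "person": person_id}).encode("utf-8")
    req = urllib.request.Request(endpoint, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status == 200  # device acknowledged the alert
```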
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not every possible combination of these technical features is described; nevertheless, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and while their description is specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A method for detecting a wearing state of a safety helmet, the method comprising:
acquiring a two-dimensional image and a three-dimensional image of a target detection person;
determining a deflection angle of the two-dimensional image relative to a monocular camera according to face reference points in the two-dimensional image;
projecting the three-dimensional image onto a two-dimensional plane according to the deflection angle to obtain a corresponding projected two-dimensional image;
and inputting the projected two-dimensional image into a pre-trained safety helmet wearing state discrimination model to obtain a safety helmet wearing state result of the target detection person.
2. The method of claim 1, wherein the acquiring of the two-dimensional image and the three-dimensional image of the target detection person comprises:
acquiring the two-dimensional image of the target detection person through the monocular camera, and acquiring the three-dimensional image of the target detection person through a target binocular camera.
3. The method of claim 2, wherein before the acquiring of the three-dimensional image of the target detection person through the target binocular camera, the method further comprises:
acquiring a light brightness value of the current environment;
when the light brightness value is greater than or equal to a preset light brightness value, taking a binocular color camera as the target binocular camera;
and when the light brightness value is smaller than the preset light brightness value, taking a binocular infrared camera as the target binocular camera.
4. The method of claim 3, wherein the acquiring of the three-dimensional image of the target detection person through the target binocular camera comprises:
respectively acquiring a left two-dimensional image and a right two-dimensional image of the target detection person through a left camera and a right camera of the target binocular camera;
and generating the three-dimensional image of the target detection person from the left two-dimensional image and the right two-dimensional image.
5. The method of claim 4, wherein the generating of the three-dimensional image of the target detection person from the left two-dimensional image and the right two-dimensional image comprises:
when the target binocular camera is the binocular color camera, generating a color three-dimensional image of the target detection person from the left two-dimensional image and the right two-dimensional image;
and when the target binocular camera is the binocular infrared camera, generating a depth three-dimensional image of the target detection person from the left two-dimensional image and the right two-dimensional image.
6. The method of claim 1, further comprising:
extracting face feature points and face vertices of the target detection person from the three-dimensional image;
aligning the face vertices of the target detection person with pre-stored face vertices of a calibration detection person;
after the face vertices of the target detection person are aligned with the face vertices of the calibration detection person, determining a feature difference between the face feature points of the target detection person and the face feature points of the calibration detection person;
and when the feature difference is smaller than a preset feature difference, taking identity information of the corresponding calibration detection person as identity information of the target detection person.
7. The method of claim 6, wherein the training of the safety helmet wearing state discrimination model comprises:
acquiring, through the monocular camera, sample two-dimensional images of the calibration detection person wearing the safety helmet and not wearing the safety helmet, and acquiring, through the target binocular camera, sample three-dimensional images of the calibration detection person wearing the safety helmet and not wearing the safety helmet, wherein the sample two-dimensional images and the sample three-dimensional images are captured by the monocular camera and the target binocular camera at each preset angle interval while the calibration detection person rotates the head within a preset angle range;
and training the safety helmet wearing state discrimination model on the sample two-dimensional images and the sample three-dimensional images.
8. The method according to any one of claims 1 to 7, further comprising:
when the safety helmet wearing state result indicates that the safety helmet is not worn, generating alarm information;
and sending the alarm information to an alarm device to instruct the alarm device to raise an alert based on the alarm information.
9. A safety helmet wearing state detection apparatus, characterized in that the apparatus comprises:
an acquisition module configured to acquire a two-dimensional image and a three-dimensional image of a target detection person;
a determining module configured to determine a deflection angle of the two-dimensional image relative to a monocular camera according to face reference points in the two-dimensional image;
a projection module configured to project the three-dimensional image onto a two-dimensional plane according to the deflection angle to obtain a corresponding projected two-dimensional image;
and a discrimination module configured to input the projected two-dimensional image into a pre-trained safety helmet wearing state discrimination model to obtain a safety helmet wearing state result of the target detection person.
10. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110739196.6A CN113469043B (en) | 2021-06-30 | 2021-06-30 | Method and device for detecting wearing state of helmet, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113469043A (en) | 2021-10-01 |
CN113469043B CN113469043B (en) | 2024-10-18 |
Family
ID=77876674
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110739196.6A Active CN113469043B (en) | 2021-06-30 | 2021-06-30 | Method and device for detecting wearing state of helmet, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113469043B (en) |
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106203400A (en) * | 2016-07-29 | 2016-12-07 | 广州国信达计算机网络通讯有限公司 | A kind of face identification method and device |
CN108564010A (en) * | 2018-03-28 | 2018-09-21 | 浙江大华技术股份有限公司 | A kind of detection method, device, electronic equipment and storage medium that safety cap is worn |
CN109344679A (en) * | 2018-07-25 | 2019-02-15 | 深圳云天励飞技术有限公司 | Building site monitoring method, device and readable storage medium storing program for executing based on image analysis |
WO2020056677A1 (en) * | 2018-09-20 | 2020-03-26 | 中建科技有限公司深圳分公司 | Violation detection method, system, and device for building construction site |
CN110163814A (en) * | 2019-04-16 | 2019-08-23 | 平安科技(深圳)有限公司 | The method, apparatus and computer equipment of modification picture based on recognition of face |
CN110188724A (en) * | 2019-06-05 | 2019-08-30 | 中冶赛迪重庆信息技术有限公司 | The method and system of safety cap positioning and color identification based on deep learning |
CN111199200A (en) * | 2019-12-27 | 2020-05-26 | 深圳供电局有限公司 | Wearing detection method and device based on electric protection equipment and computer equipment |
CN111368746A (en) * | 2020-03-06 | 2020-07-03 | 杭州宇泛智能科技有限公司 | Method and device for detecting wearing state of personal safety helmet in video and electronic equipment |
CN111414873A (en) * | 2020-03-26 | 2020-07-14 | 广州粤建三和软件股份有限公司 | Alarm prompting method, device and alarm system based on wearing state of safety helmet |
CN111523398A (en) * | 2020-03-30 | 2020-08-11 | 西安交通大学 | Method and device for fusing 2D face detection and 3D face recognition |
AU2020100711A4 (en) * | 2020-05-05 | 2020-06-11 | Chang, Cheng Mr | The retrieval system of wearing safety helmet based on deep learning |
CN111815577A (en) * | 2020-06-23 | 2020-10-23 | 深圳供电局有限公司 | Method, device, equipment and storage medium for processing safety helmet wearing detection model |
CN112613449A (en) * | 2020-12-29 | 2021-04-06 | 国网山东省电力公司建设公司 | Safety helmet wearing detection and identification method and system based on video face image |
Also Published As
Publication number | Publication date |
---|---|
CN113469043B (en) | 2024-10-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106372662B (en) | Detection method and device for wearing of safety helmet, camera and server | |
CN108764052B (en) | Image processing method, image processing device, computer-readable storage medium and electronic equipment | |
CN108805024B (en) | Image processing method, image processing device, computer-readable storage medium and electronic equipment | |
CN110850723B (en) | Fault diagnosis and positioning method based on transformer substation inspection robot system | |
CN111191567B (en) | Identity data processing method, device, computer equipment and storage medium | |
CN108549867B (en) | Image processing method, image processing device, computer-readable storage medium and electronic equipment | |
CN110516522B (en) | Inspection method and system | |
CN108711054B (en) | Image processing method, image processing device, computer-readable storage medium and electronic equipment | |
US8428313B2 (en) | Object image correction apparatus and method for object identification | |
CN112364715A (en) | Nuclear power operation abnormity monitoring method and device, computer equipment and storage medium | |
CN109167997A (en) | A kind of video quality diagnosis system and method | |
CN114894337B (en) | Temperature measurement method and device for outdoor face recognition | |
CN112595730A (en) | Cable breakage identification method and device and computer equipment | |
CN105022999A (en) | Man code company real-time acquisition system | |
CN111161202A (en) | Vehicle behavior information acquisition method and device, computer equipment and storage medium | |
CN110472574A (en) | A kind of nonstandard method, apparatus of detection dressing and system | |
CN113343854A (en) | Fire operation flow compliance detection method based on video monitoring | |
CN112184773A (en) | Helmet wearing detection method and system based on deep learning | |
CN111523499A (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
CN111325133A (en) | Image processing system based on artificial intelligence recognition | |
CN111307331A (en) | Temperature calibration method, device, equipment and storage medium | |
CN115171361A (en) | Dangerous behavior intelligent detection and early warning method based on computer vision | |
CN111064935B (en) | Intelligent construction site personnel posture detection method and system | |
CN116863297A (en) | Monitoring method, device, system, equipment and medium based on electronic fence | |
CN111383256A (en) | Image processing method, electronic device, and computer-readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||