CN115601699A - Safety helmet wearing detection method, electronic equipment and computer readable medium - Google Patents
Safety helmet wearing detection method, electronic equipment and computer readable medium
- Publication number
- CN115601699A (application CN202211265845.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- detection
- detected
- safety helmet
- human body
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Helmets And Other Head Coverings (AREA)
Abstract
The present disclosure provides a safety helmet wearing detection method, an electronic device, and a computer-readable medium. The safety helmet wearing detection method includes: acquiring an image to be detected in a detection area; performing human body detection on the image to be detected using a first target detection model to obtain a human body detection frame; when the human body detection frame lies within a preset detection area of the image to be detected, preprocessing the image to be detected to obtain a processed human body image; performing safety helmet wearing detection on the processed human body image using a second target detection model to obtain a preliminary detection result; and determining, according to the preliminary detection result, whether the person in the image to be detected is wearing a safety helmet. Both the first target detection model and the second target detection model are YOLOv5 models.
Description
Technical Field
The present disclosure relates to the field of image detection, and in particular, to a method for detecting wearing of a safety helmet, an electronic device, and a computer-readable medium.
Background
During construction, the safety helmet is the most basic protective measure on a building site and can effectively reduce injury when an accident occurs. In practice, however, violations such as not wearing a helmet or temporarily taking it off still occur from time to time, which increases the supervisory burden on safety officers and seriously threatens the safety of workers on the construction site. Under these circumstances, safety helmet wearing detection on construction sites becomes especially important.
Disclosure of Invention
The present disclosure provides a helmet wearing detection method, including:
acquiring an image to be detected of a detection area;
performing human body detection on the image to be detected using a first target detection model to obtain a human body detection frame; when the human body detection frame lies within a preset detection area of the image to be detected, preprocessing the image to be detected to obtain a processed human body image;
performing safety helmet wearing detection on the processed human body image using a second target detection model to obtain a preliminary detection result;
and determining, according to the preliminary detection result, whether the person in the image to be detected is wearing a safety helmet;
wherein both the first target detection model and the second target detection model are YOLOv5 models.
In some embodiments, preprocessing the image to be detected to obtain a processed human body image includes:
enlarging the human body detection frame by a preset ratio to obtain an enlarged detection frame;
and performing image defogging on the portion of the image to be detected that lies within the enlarged detection frame to obtain the processed human body image.
In some embodiments, performing image defogging on the portion of the image to be detected within the enlarged detection frame includes:
performing image defogging on the portion of the image to be detected within the enlarged detection frame using a dark channel prior defogging algorithm.
In some embodiments, the preset ratio is between 1.1 and 1.3 times.
In some embodiments, the preliminary detection result includes: whether the head of the person in the processed human body image is wearing a safety helmet, and the position of the head detection frame in which a safety helmet is worn;
determining, according to the preliminary detection result, whether the person in the image to be detected is wearing a safety helmet includes:
when the preliminary detection result indicates that the head of the person in the processed human body image is not wearing a safety helmet, determining that the person in the image to be detected is not wearing a safety helmet;
when the preliminary detection result indicates that the head of the person in the processed human body image is wearing a safety helmet, determining whether the head detection frame lies in the upper half of the processed human body image; if so, determining that the person in the image to be detected is wearing a safety helmet; otherwise, determining that the person in the image to be detected is not wearing a safety helmet.
In some embodiments, after determining whether the person in the image to be detected is wearing a safety helmet, the method further includes:
when the person in the image to be detected is wearing a safety helmet, outputting the image to be detected with a first mark;
and when the person in the image to be detected is not wearing a safety helmet, outputting the image to be detected with a second mark.
In some embodiments, the first mark includes a human body detection frame in a first color, and the second mark includes a human body detection frame in a second color.
In some embodiments, before acquiring the image to be detected in the detection area, the method further includes: acquiring a first data set including a plurality of first sample images;
inputting the first sample images into a first initial detection model for training to obtain the first target detection model;
acquiring a second data set including a plurality of second sample images;
and inputting the second sample images into a second initial detection model for training to obtain the second target detection model.
The present disclosure also provides a computer-readable medium having stored thereon a computer program which, when executed by a processor, implements a method as in any one of the above embodiments.
The present disclosure also provides an electronic device, comprising:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of the above embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
Fig. 1 is a flowchart of a safety helmet wearing detection method provided in some embodiments of the present disclosure.
Fig. 2 is a flowchart of a preprocessing method for an image to be detected according to some embodiments of the disclosure.
Fig. 3 is a flowchart of an alternative implementation of step S14 provided in some embodiments of the present disclosure.
Fig. 4 is a flowchart of another safety helmet wearing detection method provided in some embodiments of the present disclosure.
Detailed Description
The following detailed description of the embodiments of the disclosure refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
In this specification, terms indicating orientation or positional relationship, such as "middle", "upper", "lower", "front", "rear", "vertical", "horizontal", "top", "bottom", "inner", and "outer", describe the positional relationships of components with reference to the drawings only for convenience of description, and do not indicate or imply that the device or element referred to must have a specific orientation or must be constructed and operated in a specific orientation; they should therefore not be construed as limiting the present disclosure. The positional relationship of components changes as appropriate with the direction in which each component is described, so these terms are not limiting and may be replaced as appropriate depending on the circumstances.
Unless otherwise defined, technical or scientific terms used in the embodiments of the present disclosure have the ordinary meaning understood by those having ordinary skill in the art to which the present disclosure belongs. The words "first", "second", and the like used in the present disclosure do not indicate any order, quantity, or importance, but are only used to distinguish one element from another. Likewise, words such as "comprising" or "including" mean that the element or item preceding the word covers the elements or items listed after the word and their equivalents, without excluding other elements or items. Terms such as "mounted" and "connected" are to be construed broadly: a connection may be fixed, detachable, or integral; mechanical or of another form; and direct, indirect through an intermediate member, or an internal communication between two elements. The specific meanings of the above terms in the present disclosure can be understood by those of ordinary skill in the art according to the specific circumstances.
During construction, production safety comes before everything else. The safety helmet is the most basic protective measure on a building site and can effectively reduce injury when an accident occurs. In practice, however, violations such as not wearing a helmet or temporarily taking it off still occur from time to time, which increases the supervisory burden on safety officers and seriously threatens the safety of workers on the construction site. Safety helmet wearing detection on construction sites therefore becomes especially important. However, many existing methods of this kind are tested in a single scene with a simple background and a clear detection target, and the images are mostly shot head-on at close range. A real construction site is far more complicated: cameras usually shoot from a high angle and at a distance, so neither image sharpness nor shooting distance can be guaranteed; helmets come in different patterns and colors and are hard to distinguish at low resolution; and many sites contain confusing objects such as spherical lamps and piled building materials. All of these problems make the detection process more difficult.
Fig. 1 is a flowchart of a safety helmet wearing detection method provided in some embodiments of the present disclosure. As shown in Fig. 1, in an embodiment of the present disclosure, the safety helmet wearing detection method includes the following steps.
and S10, acquiring an image to be detected in the detection area.
In some embodiments, the image to be detected is an image from a video stream of the detection area captured by a camera. In one example, images to be detected are taken from the video stream at a certain frequency; for example, the video stream may be read frame by frame, with each frame serving as an image to be detected. The following steps S11 to S14 are performed for each frame of the image to be detected. The detection area is the target area to be monitored and is at least part of the camera's field of view.
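As a minimal sketch of this frame-acquisition step (the OpenCV-based approach, the stream source, and the sampling interval below are illustrative assumptions rather than part of the disclosure), the video stream can be read and every N-th frame handed to steps S11 to S14:

```python
import cv2

def sample_frames(stream_url, every_n_frames=5):
    """Yield every N-th frame of the detection-area video stream as an image to be detected."""
    cap = cv2.VideoCapture(stream_url)   # camera device index or RTSP/file URL (assumed)
    frame_idx = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % every_n_frames == 0:
            yield frame                  # one image to be detected, handed to steps S11-S14
        frame_idx += 1
    cap.release()
```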
S11, performing human body detection on the image to be detected using the first target detection model to obtain a human body detection frame.
The detection frame is the bounding box (anchor frame) obtained through target detection, that is, the smallest enclosing box that can surround the target object; the human body detection frame is the smallest enclosing box that can surround a human body.
In step S11, obtaining the human body detection frame at least includes obtaining the relative position of the human body detection frame in the image to be detected, that is, the coordinates of the human body detection frame in the image to be detected.
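A minimal sketch of step S11 is given below, assuming the first target detection model is a pretrained YOLOv5 model loaded through the public ultralytics/yolov5 torch.hub interface; the confidence threshold and the helper name detect_person_boxes are illustrative assumptions:

```python
import torch

# First target detection model: a pretrained YOLOv5 model; in the COCO label set, class 0 is "person".
person_model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
person_model.conf = 0.4  # assumed confidence threshold, not specified by the disclosure

def detect_person_boxes(image):
    """Return the (x1, y1, x2, y2) coordinates of each human body detection frame."""
    results = person_model(image)
    boxes = []
    for *xyxy, conf, cls in results.xyxy[0].tolist():
        if int(cls) == 0:                # keep only "person" detections
            boxes.append(tuple(xyxy))
    return boxes
```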
S12, when the human body detection frame lies within a preset detection area of the image to be detected, preprocessing the image to be detected to obtain a processed human body image.
The image to be detected may be an image of the camera's whole field of view, while the preset detection area may be only part of the image to be detected. For example, the image to be detected may show both the site entrance and part of the plant area, while the preset detection area covers only the site entrance.
In some embodiments, whether the human body detection frame lies within the preset detection area of the image to be detected may be determined from the coordinates of the human body detection frame in the image to be detected. When the human body detection frame lies within the preset detection area, the image to be detected is preprocessed, the preprocessing including defogging. When the human body detection frame is not within the preset detection area, no further processing of the image to be detected is needed.
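A minimal sketch of this region check, assuming the preset detection area is given as an axis-aligned rectangle in image coordinates and that the human body detection frame must be fully enclosed (the containment criterion is an illustrative assumption):

```python
def box_in_region(box, region):
    """Return True if a human body detection frame lies inside the preset detection area.

    box    -- (x1, y1, x2, y2) of the human body detection frame
    region -- (rx1, ry1, rx2, ry2) of the preset detection area, in image coordinates
    """
    x1, y1, x2, y2 = box
    rx1, ry1, rx2, ry2 = region
    return x1 >= rx1 and y1 >= ry1 and x2 <= rx2 and y2 <= ry2
```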
S13, performing safety helmet wearing detection on the processed human body image using the second target detection model to obtain a preliminary detection result.
S14, determining, according to the preliminary detection result, whether the person in the image to be detected is wearing a safety helmet.
In the embodiments of the present disclosure, both the first target detection model and the second target detection model are YOLOv5 models. "YOLO" is the name of an object detection algorithm that redefines object detection as a regression problem. It applies a single convolutional neural network (CNN) to the entire image, divides the image into a grid, and predicts class probabilities and bounding boxes for each grid cell. Because the detection problem is posed as regression, no complex pipeline is required; the YOLO algorithm is 1000 times faster than R-CNN and 100 times faster than Fast R-CNN. YOLOv5 is a more mature version of YOLO.
The embodiments of the present disclosure apply the YOLOv5 model twice, once for the person and once for the safety helmet: the person is detected first, and safety helmet detection is then performed within the smaller person image, which improves the detection result.
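A minimal sketch of step S13 follows, assuming the second target detection model is a YOLOv5 model fine-tuned on helmet data and loaded as custom weights through torch.hub; the weight file path, the class name 'helmet', and the helper name are illustrative assumptions:

```python
import torch

# Second target detection model: YOLOv5 weights fine-tuned on helmet data (path and class name assumed).
helmet_model = torch.hub.load('ultralytics/yolov5', 'custom', path='weights/helmet_yolov5s.pt')

def detect_helmet(processed_person_image):
    """Run safety helmet wearing detection on a processed human body image (step S13).

    Returns (worn, head_box): whether a helmet is detected on a head, and the
    (x1, y1, x2, y2) head detection frame when one is.
    """
    results = helmet_model(processed_person_image)
    names = results.names                          # class index -> class name
    for *xyxy, conf, cls in results.xyxy[0].tolist():
        if names[int(cls)] == 'helmet':            # assumed class name in the fine-tuned model
            return True, tuple(xyxy)
    return False, None
```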
Fig. 2 is a flowchart of a method for preprocessing an image to be detected according to some embodiments of the present disclosure. As shown in Fig. 2, preprocessing the image to be detected to obtain a processed human body image includes the following steps.
and S111, amplifying the human body detection frame according to a preset proportion to obtain an amplified detection frame.
And S112, carrying out image defogging treatment on the part of the image to be detected in the amplified detection frame to obtain the treated human body image.
As shown in Fig. 2, in some embodiments, the human body detection frame is enlarged by a preset ratio of 1.1 to 1.3 times; for example, the enlarged detection frame may be 1.1, 1.2, or 1.3 times the size of the human body detection frame. Note that the human body detection frame may be a rectangle; enlarging it by the preset ratio means scaling both the length and the width of the rectangle by the preset ratio while keeping the center of the rectangle fixed. Enlarging the human body detection frame helps the model perform a second screening and improves the accuracy of detecting persons who are not wearing a safety helmet, so the applicability and accuracy of the model are improved. Meanwhile, to avoid the image inside the enlarged detection frame having too low a resolution, the image inside the enlarged detection frame is defogged so that it becomes sharper, which improves the accuracy of the YOLOv5 model when it identifies the small target a second time.
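A minimal sketch of the enlarge-and-crop operation described above (clamping the enlarged frame to the image borders is an illustrative assumption not stated in the disclosure):

```python
def enlarge_and_crop(image, box, ratio=1.2):
    """Enlarge a human body detection frame about its center by `ratio` and crop that region."""
    h, w = image.shape[:2]
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2            # keep the center of the rectangle fixed
    bw, bh = (x2 - x1) * ratio, (y2 - y1) * ratio    # scale both width and height
    nx1, ny1 = max(0, int(cx - bw / 2)), max(0, int(cy - bh / 2))
    nx2, ny2 = min(w, int(cx + bw / 2)), min(h, int(cy + bh / 2))
    return image[ny1:ny2, nx1:nx2]
```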
In some embodiments, the image defogging may use a dark channel prior defogging algorithm, which can be combined with the strong learning ability of a neural network to find the mapping between the foggy image and certain coefficients of the physical image-restoration model, so as to restore the foggy image into a clear, fog-free image.
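A minimal sketch of the classic dark channel prior defogging step is given below; the patch size, the omega weight, and the atmospheric-light estimate follow the commonly used formulation and are illustrative assumptions, since the disclosure does not fix these parameters:

```python
import cv2
import numpy as np

def dark_channel_dehaze(img_bgr, patch=15, omega=0.95, t_min=0.1):
    """Defog a BGR image with the dark channel prior (parameter values are assumptions)."""
    img = img_bgr.astype(np.float64) / 255.0
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))

    # Dark channel: per-pixel minimum over channels, then a minimum filter over the patch.
    dark = cv2.erode(img.min(axis=2), kernel)

    # Atmospheric light A: mean color of the brightest 0.1% of dark-channel pixels.
    n_top = max(1, int(dark.size * 0.001))
    idx = np.argsort(dark.ravel())[-n_top:]
    A = img.reshape(-1, 3)[idx].mean(axis=0)

    # Transmission estimate t(x) = 1 - omega * dark_channel(I / A), clipped below by t_min.
    t = 1 - omega * cv2.erode((img / A).min(axis=2), kernel)
    t = np.clip(t, t_min, 1.0)[..., None]

    # Recover the scene radiance J = (I - A) / t + A.
    J = (img - A) / t + A
    return (np.clip(J, 0.0, 1.0) * 255).astype(np.uint8)
```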
In some embodiments, the preliminary detection result obtained in step S13 includes: whether the head of the person in the processed human body image is wearing a safety helmet and, if so, the position of the head detection frame in which the safety helmet is worn. Fig. 3 is a flowchart of an alternative implementation of step S14 provided in some embodiments of the present disclosure. As shown in Fig. 3, step S14 includes the following steps.
and S20, when the head of the person in the processed human body image does not wear the safety helmet in the initial detection result, determining that the person in the image to be detected does not wear the safety helmet.
S21, judging whether the head detection frame is positioned at the upper half part of the processed human body image or not when the head of the person in the processed human body image wears a safety helmet in the primary detection result, and if so, determining that the person in the image to be detected wears the safety helmet; otherwise, determining that the person in the image to be detected does not wear the safety helmet.
In practical applications, the second target detection model may occasionally misjudge whether a person is wearing a safety helmet; for example, it may decide that a helmet is worn on the head when the helmet is actually being held in the person's hand. In the embodiments of the present disclosure, step S21 therefore checks whether the head detection frame lies in the upper half of the processed human body image; if so, the person in the image to be detected is determined to be wearing a safety helmet, and otherwise not. This check effectively avoids misjudging a person who is merely holding a helmet as wearing one.
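A minimal sketch of the upper-half check in step S21, assuming the head detection frame is expressed in the coordinates of the processed human body image and the test is applied to the vertical center of the head frame (an illustrative assumption):

```python
def helmet_on_head(head_box, person_image_height):
    """Return True if the helmeted head detection frame lies in the upper half of the person crop."""
    _, y1, _, y2 = head_box
    head_center_y = (y1 + y2) / 2
    return head_center_y < person_image_height / 2
```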
Fig. 4 is a flowchart of another safety helmet wearing detection method provided in some embodiments of the present disclosure. As shown in Fig. 4, in some embodiments, the safety helmet wearing detection method includes steps S10 to S14 above and, after determining whether the person in the image to be detected is wearing a safety helmet, further includes:
S141, when the person in the image to be detected is wearing a safety helmet, outputting the image to be detected with a first mark; and when the person in the image to be detected is not wearing a safety helmet, outputting the image to be detected with a second mark. The first mark and the second mark may differ in shape or in color. By displaying different marks, supervisors can see at a glance whether a person is wearing a safety helmet.
Further, the first mark includes a human body detection frame in a first color, and the second mark includes a human body detection frame in a second color. For example, the first mark may be a red human body detection frame and the second mark a green human body detection frame, although the first mark and the second mark may use any other colors, which is not limited here. Marking the images to be output in this way helps supervisors distinguish the detection results effectively.
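A minimal sketch of annotating the output image with OpenCV; the specific BGR values follow the red/green example above and are illustrative assumptions, since the disclosure only requires that the two colors differ:

```python
import cv2

FIRST_COLOR = (0, 0, 255)    # red in BGR -- first mark: person wearing a safety helmet
SECOND_COLOR = (0, 255, 0)   # green in BGR -- second mark: person not wearing a safety helmet

def mark_output(image, person_box, wearing_helmet):
    """Draw the human body detection frame in a color that indicates the detection result."""
    x1, y1, x2, y2 = map(int, person_box)
    color = FIRST_COLOR if wearing_helmet else SECOND_COLOR
    cv2.rectangle(image, (x1, y1), (x2, y2), color, thickness=2)
    return image
```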
In some embodiments of the disclosure, the first target detection model and the second target detection model are trained detection models; before step S10 is performed, the first target detection model and the second target detection model are trained. The training process of the first target detection model includes the following steps.
S01, acquiring a first data set, where the first data set includes a plurality of first sample images.
The first data set may be the COCO data set.
S02, inputting the first sample images into a first initial detection model for training to obtain the first target detection model.
In step S02, a plurality of first sample images are used to train the first initial detection model. Each time a first sample image is input, the first initial detection model outputs a sample result, and the parameters of the first initial detection model are adjusted until the difference between the sample result and the corresponding target result meets the requirement. At that point, training of the first target detection model is complete.
The training process of the second target detection model comprises the following steps:
and S03, acquiring a second data set, wherein the second data set comprises a plurality of second sample images.
The second data set may include, among other things, a helmet data set (VOC 2028), as well as a self-created data set.
The VOC2028 data set is a safety helmet identification data set shared on the internet, and comprises high-definition construction site photos, classroom pictures and the like. The self-built data set mainly comes from partial images of pedestrians, construction workers and the like intercepted in ordinary video monitoring.
And S04, inputting the second sample image into a second initial detection model for training to obtain the second target detection model.
In step S04, a plurality of second sample images are used to train a second initial detection model, and each time a second sample image is input, the second initial detection model outputs a sample label, and the difference between the sample label and the corresponding target label meets the requirement by adjusting parameters of the second initial detection model. At this point, the training of the second target detection model is completed. Here, the "tag" is a probability value indicating whether or not the person wears a helmet, and when the model determines that the person wears a helmet on the head, the "1" may be output; otherwise, "0" is output. In addition, the model can also output the position of the head detection frame with the safety helmet when judging that the safety helmet is worn on the head of the person.
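As a minimal sketch of how such a YOLOv5 model could be trained, assuming the train.py entry point of the public ultralytics/yolov5 repository is used; the dataset configuration file helmet.yaml, the class names, and the hyperparameters below are illustrative assumptions, not values fixed by the disclosure:

```python
import subprocess

# Assumed dataset configuration (helmet.yaml) describing the second data set, e.g.:
#   train: datasets/helmet/images/train
#   val:   datasets/helmet/images/val
#   nc: 2
#   names: ['head', 'helmet']
subprocess.run([
    "python", "train.py",        # train.py from the ultralytics/yolov5 repository
    "--data", "helmet.yaml",     # assumed dataset configuration for the second data set
    "--weights", "yolov5s.pt",   # start from pretrained weights
    "--img", "640",              # assumed training image size
    "--epochs", "100",           # assumed number of epochs
    "--batch-size", "16",        # assumed batch size
], check=True)
```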
Embodiments of the present disclosure also provide a computer-readable medium on which a computer program is stored; the computer program, when executed by a processor, implements any of the methods mentioned in the above embodiments.
In addition, an embodiment of the present disclosure further provides an electronic device, including: one or more processors; and a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement any of the methods mentioned in the above embodiments.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as is well known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as is well known to those skilled in the art.
It will be understood that the above embodiments are merely exemplary embodiments adopted to illustrate the principles of the present disclosure, which is not limited thereto. It will be apparent to those skilled in the art that various modifications and improvements can be made without departing from the spirit and scope of the present disclosure, and such modifications and improvements are also considered to be within the scope of the present disclosure.
Claims (10)
1. A method for detecting wearing of a safety helmet, comprising:
acquiring an image to be detected in a detection area;
performing human body detection on the image to be detected using a first target detection model to obtain a human body detection frame;
when the human body detection frame is positioned in a preset detection area of the image to be detected, preprocessing the image to be detected to obtain a processed human body image;
performing safety helmet wearing detection on the processed human body image using a second target detection model to obtain a preliminary detection result;
determining, according to the preliminary detection result, whether the person in the image to be detected is wearing a safety helmet;
wherein both the first target detection model and the second target detection model are YOLOv5 models.
2. The method for detecting wearing of a safety helmet according to claim 1, wherein preprocessing the image to be detected to obtain a processed human body image comprises:
enlarging the human body detection frame by a preset ratio to obtain an enlarged detection frame;
and performing image defogging on the portion of the image to be detected that lies within the enlarged detection frame to obtain the processed human body image.
3. The method for detecting wearing of a safety helmet according to claim 2, wherein performing image defogging on the portion of the image to be detected within the enlarged detection frame comprises:
performing image defogging on the portion of the image to be detected within the enlarged detection frame using a dark channel prior defogging algorithm.
4. The method for detecting wearing of a safety helmet according to claim 2, wherein the preset ratio is between 1.1 and 1.3 times.
5. The safety helmet wearing detection method according to any one of claims 1 to 4, wherein the preliminary detection result comprises: whether the head of the person in the processed human body image is wearing a safety helmet, and the position of the head detection frame in which a safety helmet is worn;
and wherein determining, according to the preliminary detection result, whether the person in the image to be detected is wearing a safety helmet comprises:
when the preliminary detection result indicates that the head of the person in the processed human body image is not wearing a safety helmet, determining that the person in the image to be detected is not wearing a safety helmet;
when the preliminary detection result indicates that the head of the person in the processed human body image is wearing a safety helmet, determining whether the head detection frame lies in the upper half of the processed human body image; if so, determining that the person in the image to be detected is wearing a safety helmet; otherwise, determining that the person in the image to be detected is not wearing a safety helmet.
6. The method for detecting wearing of a safety helmet according to any one of claims 1 to 4, wherein determining whether or not a person in the image to be detected wears a safety helmet further includes:
when a person in the image to be detected wears a safety helmet, outputting the image to be detected with a first mark;
and when the person in the image to be detected does not wear a safety helmet, outputting the image to be detected with a second mark.
7. The safety helmet wearing detection method of claim 6, wherein the first mark comprises: a human body detection frame having a first color; and the second mark comprises: a human body detection frame having a second color.
8. The safety helmet wearing detection method according to any one of claims 1 to 4,
further comprising, before acquiring the image to be detected in the detection area: acquiring a first data set comprising a plurality of first sample images;
inputting the first sample image into a first initial detection model for training to obtain the first target detection model;
acquiring a second data set comprising a plurality of second sample images;
and inputting the second sample image into a second initial detection model for training to obtain the second target detection model.
9. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 8.
10. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211265845.4A CN115601699A (en) | 2022-10-17 | 2022-10-17 | Safety helmet wearing detection method, electronic equipment and computer readable medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211265845.4A CN115601699A (en) | 2022-10-17 | 2022-10-17 | Safety helmet wearing detection method, electronic equipment and computer readable medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115601699A true CN115601699A (en) | 2023-01-13 |
Family
ID=84847322
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211265845.4A Pending CN115601699A (en) | 2022-10-17 | 2022-10-17 | Safety helmet wearing detection method, electronic equipment and computer readable medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115601699A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116824723A (en) * | 2023-08-29 | 2023-09-29 | 山东数升网络科技服务有限公司 | Intelligent security inspection system and method for miner well-down operation based on video data |
- 2022-10-17: CN application CN202211265845.4A filed; patent CN115601699A (en), status: active, Pending
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |