CN110619324A - Pedestrian and safety helmet detection method, device and system - Google Patents
Pedestrian and safety helmet detection method, device and system
- Publication number
- CN110619324A (application CN201911163568.4A)
- Authority
- CN
- China
- Prior art keywords
- pedestrian
- detected
- safety helmet
- image
- detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/48—Matching video sequences
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a pedestrian and safety helmet detection method, device and system. The method comprises: obtaining a video image; performing pedestrian and safety helmet detection on the video image through a pedestrian-safety helmet detection model; matching a unique identity to each detected pedestrian by using a target tracking algorithm; and judging whether a pedestrian carrying an identity has been marked as wearing a safety helmet, and if so, marking the pedestrian as wearing a safety helmet and outputting the result. The method and device solve the problem of data becoming unusable when the training data set of a convolutional neural network is unbalanced and labels are missing, saving a large amount of manual labeling cost. Moreover, by adopting a target tracking method, the pedestrians in the video are treated as objects: a unique ID is assigned to every pedestrian appearing in the video, and whether each pedestrian wears a safety helmet is checked over multiple frames for a long time, which greatly reduces the false alarm rate.
Description
Technical Field
The invention belongs to the technical field of target detection and identification, and particularly relates to a pedestrian and safety helmet detection method, device and system.
Background
A safety helmet is head-protection equipment that guards the head against impact and falling objects; construction workers wear safety helmets to protect their heads from injury by falling objects. However, construction workers often fail to wear their safety helmets, so real-time monitoring of whether safety helmets are being worn is very important.
In the prior art, methods for detecting whether a worker wears a safety helmet rely on machine learning, but because the detection area is large, the features indicating whether a helmet is worn are relatively inconspicuous, which easily leads to inaccurate detection. In addition, existing detection algorithms cannot use data with missing labels: for example, when a user wants to detect safety helmets and pedestrians at the same time, neither the safety helmets nor the pedestrians in the training images may be left unlabeled, so most existing recognition networks cannot use such data.
Disclosure of Invention
In view of the above-mentioned deficiencies of the prior art, the present invention provides a pedestrian and safety helmet detection method comprising: obtaining a video image; performing pedestrian and safety helmet detection on the video image through a pedestrian-safety helmet detection model; matching a unique identity to each detected pedestrian by using a target tracking algorithm; and judging whether a pedestrian carrying an identity has been marked as wearing a safety helmet, and if so, marking the pedestrian as wearing a safety helmet and outputting the result.
In one possible embodiment, the method further comprises, if the pedestrian carrying the identification is not marked as wearing the safety helmet, matching the pedestrian carrying the identification with the identification of the safety helmet, and marking the pedestrian who is not successfully matched within a preset number of frames as not wearing the safety helmet.
In one possible embodiment, the preset number of frames is 15 frames.
In one possible embodiment, before acquiring the video image, the method further comprises: receiving a first training image comprising: a sample image containing only a first or a second object to be detected, a sample image containing both a first object to be detected and a second object to be detected, and a sample image containing neither object to be detected, wherein the first object to be detected and the second object to be detected are respectively one of the following: a pedestrian and a safety helmet; labeling the first training image to obtain a second training image; and performing learning training on the second training image to obtain a pedestrian-safety helmet detection model.
In a possible embodiment, labeling the first training image includes performing classification labeling according to the first object to be detected or the second object to be detected included in the sample image.
In one possible embodiment, the matching the detected pedestrian with a unique identity using a target tracking algorithm includes: carrying out target tracking on the detected pedestrian to obtain the predicted position of the pedestrian; performing feature extraction on the detected pedestrian through a convolutional neural network; and matching the identity of the pedestrian according to the predicted position and the feature similarity.
In one possible embodiment, the method further comprises deleting pedestrians that are not successfully matched with an identity within a preset number of frames.
In one possible embodiment, before acquiring the video image, the method further comprises reading the video image from a video stream and storing it in an image pool.
The embodiment of the invention also discloses a pedestrian and safety helmet detection device, which comprises an acquisition unit, a detection unit, a tracking unit and a judging unit, wherein the acquisition unit is used for acquiring a video image; the detection unit is used for performing pedestrian and safety helmet detection on the video image through a pedestrian-safety helmet detection model; the tracking unit is used for matching a unique identity to each detected pedestrian by using a target tracking algorithm; and the judging unit is used for judging whether a pedestrian carrying an identity has been marked as wearing a safety helmet, and if so, marking the pedestrian as wearing a safety helmet and outputting the result.
The embodiment of the invention also discloses a pedestrian and safety helmet detection system which comprises a camera and the detection device connected with the camera.
The invention has the beneficial effects that:
the scheme solves the problem of data becoming unusable when the training data set of a convolutional neural network is unbalanced and labels are missing, and saves a large amount of manual labeling cost; it also improves the generalization and accuracy of safety helmet detection.
Secondly, a Deep SORT target tracking method is adopted to treat the pedestrians in the video as objects: a unique ID is assigned to each pedestrian appearing in the video, and whether each pedestrian wears a safety helmet is checked over multiple frames for a long time, which greatly reduces the false alarm rate.
Drawings
FIG. 1 is a schematic diagram of a system in accordance with an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a method in an embodiment of the present invention;
FIG. 3 is a flow chart of a method in an embodiment of the present invention;
FIG. 4 is a flow chart of yet another method in an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a detecting device according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a neural network according to an embodiment of the present invention.
Detailed Description
In order to facilitate understanding of those skilled in the art, the present invention will be further described with reference to the following examples and drawings, which are not intended to limit the present invention.
Referring to fig. 1, one embodiment of the invention discloses a pedestrian and safety helmet detection system 10, which comprises a camera 100, a detection device 200 and a display device 300.
The camera 100 is connected to the detection device 200 and collects real-time pictures of a site, such as a construction site. In one embodiment, the camera 100 may include an input device, a processor, a storage medium, a memory and an interface. The input device is configured to capture video images; the processor provides computing and control capability, and in one embodiment can perform detection analysis on the video images through a detection model; the storage medium can store a pedestrian and safety helmet detection model used to implement a pedestrian and safety helmet detection method suitable for the camera; the memory provides an environment for running the pedestrian and safety helmet detection method stored in the storage medium; and the interface is used for data transmission with the detection device, and may be a wired data interface, a wireless interface or the like.
The detection device 200 is connected to the camera 100 and the display device 300, respectively, and is used for processing and analyzing the video image input by the camera 100.
In one embodiment, the detection device 200 may be provided at a worksite or the like for analyzing the video images transmitted by the camera 100.
In one embodiment, the detection apparatus 200 may be a server or a cluster of servers; the server comprises a processor, a storage medium, a memory and a network interface connected through a system bus. The storage medium of the server stores an operating system, a database and the pedestrian-safety helmet detection model; the database is used for storing data, such as the live video images. In one embodiment, the detection device 200 is used to implement the pedestrian and helmet detection method described in detail below. The processor of the server provides computing and control capability and supports the operation of the whole server. The memory of the server provides an environment for running the pedestrian and safety helmet detection method stored in the storage medium. The network interface of the server is used for connecting and communicating with an external display device and the camera through a network.
The display device 300 is connected to the detection device 200 for displaying the output result of the detection device 200. In one embodiment, the display device 300 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, and the like. The display device 300 may be provided at a video image capturing site such as a worksite.
Referring to fig. 2, the embodiment of the invention discloses a pedestrian and safety helmet detection method, which can be applied to the detection device 200 side. The method comprises the following specific steps:
s100, acquiring a video image;
the camera 100 can acquire a video image of a scene in real time, and the detection device 200 acquires the video image. The video image may be a video image that has been pre-processed by the camera 100.
Referring to fig. 3, the detection apparatus 200 includes a memory, and the memory may include an image pool IN and an image pool OUT, which store the images captured by the camera 100 and the images carrying label information after target identification and target detection, respectively. During video image processing, images X are read sequentially from image pool IN, input to the detection device 200 for processing and analysis, and then stored in image pool OUT, until all images in image pool IN have been read.
Information about the detected and tracked images can be sequentially read from the image pool OUT and output to the display device 300, and the display information can include a pedestrian mark, a wearing helmet mark, an unworn helmet mark, and the like.
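As a minimal illustration of the IN/OUT image-pool arrangement described above (not taken from the patent), the following sketch assumes OpenCV is used to decode the camera stream; the names image_pool_in, image_pool_out and fill_image_pool are hypothetical:

```python
# Minimal sketch of the IN/OUT image pools (illustrative names, assumes OpenCV).
import queue

import cv2

image_pool_in = queue.Queue()    # frames waiting for detection and tracking
image_pool_out = queue.Queue()   # frames carrying label information after detection

def fill_image_pool(stream_url, pool, max_frames=None):
    """Read frames from the video stream and store them in the IN pool."""
    cap = cv2.VideoCapture(stream_url)
    count = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok or (max_frames is not None and count >= max_frames):
            break
        pool.put(frame)
        count += 1
    cap.release()
```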
In one embodiment, before the video image is acquired, a pedestrian-safety helmet detection model is obtained by learning and training on a large number of sample images, specifically including:
first, a first training image is received, comprising at least one of: a sample image containing only a first or a second object to be detected, a sample image containing both a first object to be detected and a second object to be detected, and a sample image containing neither object to be detected, wherein the first object to be detected and the second object to be detected are respectively one of the following: a pedestrian and a safety helmet.
For example, the sample images include: pedestrian picture data in which only pedestrians are labeled (more than 60,000 pictures); and 9,000 pictures in which both safety helmets and pedestrians are labeled. Because the sample images contain no pictures in which only the safety helmet is labeled, the data set is unbalanced and labels are missing.
To solve this problem, the embodiment of the present invention introduces a new label: the first training image is labeled to obtain a second training image.
Labeling the first training image comprises classifying and labeling it according to the first object to be detected and/or the second object to be detected contained in the sample image. Specifically, the pedestrian detection task and the safety helmet detection task are separated, and a label is added to each piece of training data indicating whether that data is annotated for the pedestrian detection task and/or the safety helmet detection task, as shown in table 1.
Finally, learning and training are performed on the second training image, i.e., the training image with the newly added label, to obtain the pedestrian-safety helmet detection model.
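As an illustration of the labeling scheme, the hypothetical records below show how a second training image could carry both its box annotations and the newly added task-availability label; the field names and values are assumptions, not the patent's data format:

```python
# Hypothetical "second training image" records: original boxes plus a per-task
# flag indicating whether the source data set labels that task at all.
pedestrian_only_sample = {
    "image": "site_000123.jpg",
    "boxes": {"pedestrian": [[34, 50, 180, 420]], "helmet": []},
    "task_labeled": {"pedestrian": 1, "helmet": 0},  # T: helmet task unlabeled
}

fully_labeled_sample = {
    "image": "site_000456.jpg",
    "boxes": {"pedestrian": [[60, 40, 200, 430]], "helmet": [[95, 40, 150, 90]]},
    "task_labeled": {"pedestrian": 1, "helmet": 1},  # T: both tasks labeled
}
```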
The method of sample data collection and acquisition of the detection model by the convolutional neural network is described in detail below.
Let $T \in \{0,1\}^{t \times B}$, where $t$ is the number of detection tasks (in the embodiment of the invention $t = 2$, i.e., the two categories of safety helmets and workers are detected) and $B$ is the number of pictures fed into the model at one time (the batch size) when training the convolutional neural network. $T$ describes whether the data set to which a picture belongs is annotated for each detection task, i.e., an attribute recording the labeling status of its data set is added as a new label for each picture.

When $t = 2$ and $B = 1$, $T$ records the labeling status of the single picture for the 2 categories. $T[0,0] = 1$ indicates that the data set to which the picture belongs has been labeled for the first category, while $T[0,0] = 0$ indicates that it has not. Likewise, $T[1,0] = 1$ indicates that the data set to which the picture belongs has been labeled for the second category, while $T[1,0] = 0$ indicates that it has not.

For each picture there is a loss for each task, where $l[n,b]$ denotes the loss value of the $n$-th task on the $b$-th picture. The final loss of the network is given by formula (1):

$$L = \sum_{n=1}^{t} \sum_{b=1}^{B} T[n,b]\, l[n,b] \tag{1}$$

The update of a network weight parameter $w$ with learning rate $\eta$ is shown in formula (2):

$$w \leftarrow w - \eta \frac{\partial L}{\partial w} \tag{2}$$

Expanding the gradient according to formula (3),

$$\frac{\partial L}{\partial w} = \sum_{n=1}^{t} \sum_{b=1}^{B} T[n,b]\, \frac{\partial l[n,b]}{\partial w} \tag{3}$$

the contribution of the $n$-th task of the $b$-th picture to the weight update can be expressed by formula (4):

$$\Delta w_{n,b} = -\eta\, T[n,b]\, \frac{\partial l[n,b]}{\partial w} \tag{4}$$

When $T[n,b] = 0$, the loss of the $n$-th task of the $b$-th picture contributes nothing to the update of the weight parameter, as shown in formula (5):

$$\Delta w_{n,b} = 0 \tag{5}$$

When $T[n,b] = 1$, the loss of the $n$-th task of the $b$-th picture updates the network weight parameters normally, as shown in formula (6):

$$\Delta w_{n,b} = -\eta\, \frac{\partial l[n,b]}{\partial w} \tag{6}$$
By adding these labels to the sample image data set, the loss of a task is allowed to update the network weight parameters, i.e., the task is learned, only when the sample image has been labeled for that task. When the sample image is not labeled for the task, the loss of that task does not update the network weight parameters, i.e., the task is not learned from that image.
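A minimal PyTorch-style sketch of the masked loss of formula (1), assuming the per-task, per-picture losses have already been computed elsewhere; the function and tensor names are illustrative:

```python
import torch

def masked_multitask_loss(per_task_loss, task_labeled):
    """Formula (1): per_task_loss and task_labeled both have shape (t, B);
    task_labeled is the label T, so unlabeled tasks contribute no loss or gradient."""
    return (task_labeled * per_task_loss).sum()

# Example: 2 tasks (pedestrian, helmet) over a batch of 2 pictures.
losses = torch.tensor([[0.8, 0.5],    # pedestrian-task loss per picture
                       [0.3, 0.9]],   # helmet-task loss per picture
                      requires_grad=True)
T = torch.tensor([[1.0, 1.0],         # both pictures labeled for pedestrians
                  [0.0, 1.0]])        # only the second picture labeled for helmets
total = masked_multitask_loss(losses, T)  # 0.8 + 0.5 + 0.9 = 2.2
total.backward()                          # gradient is zero wherever T is zero
```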
According to this loss function design method and label design method, the network structure and loss function of the yolov3 target detection algorithm are redesigned. The redesigned network structure is shown in fig. 6. Each output tensor is produced by 2 parallel convolution kernels, corresponding respectively to the detection outputs for pedestrians and safety helmets.
Due to the task separation, each task is equivalent to a single-class object detector. The classification loss in the original yolov3 will have no impact on the target detection task for the corresponding class. Therefore, the classification loss function in the original yolov3 loss function is removed as shown in the formulas (7) and (8).
$$L_{\mathrm{yolov3}} = l_{\mathrm{coord}} + l_{\mathrm{iou}} + l_{\mathrm{cls}} \tag{7}$$

$$L' = \sum_{n=1}^{t} \sum_{b=1}^{B} T[n,b]\,\big(l_{\mathrm{coord}}[n,b] + l_{\mathrm{iou}}[n,b]\big) \tag{8}$$

where $l_{\mathrm{coord}}$ is the bounding-box (frame) error, $l_{\mathrm{iou}}$ is the bounding-box IoU error, and $l_{\mathrm{cls}}$ is the classification error; the right-hand side of formula (8) sums the bounding-box error and the bounding-box IoU error of the $t$ tasks over the $B$ pictures. $L_{\mathrm{yolov3}}$ is the original yolov3 loss function, and $L'$ is the redesigned yolov3 loss function.
Although the tasks are separated, the network is still trained end-to-end in a single training run and does not need to be trained separately for each task. Through this training method, the pedestrian-safety helmet detection model is obtained.
The redesigned network structure and loss function solve the problem of an unbalanced data set through task separation, and solve the problem of missing labels by setting the new label T for each image in the training set. This structure and loss-function design method can easily be used in any conventional convolutional neural network task. The method also reduces the high cost of data labeling: data from different databases can be collected and fused by adding the new labels, without having to re-label a large number of samples to train the network.
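The parallel-head idea of fig. 6 can be sketched as follows, assuming a yolov3-style head with 3 anchors and 5 outputs (4 box offsets plus objectness) per anchor and no class scores, since each head is a single-class detector; the channel counts and module names are assumptions, as the patent does not give exact dimensions:

```python
import torch
import torch.nn as nn

class ParallelTaskHeads(nn.Module):
    """One 1x1 convolution head per task on a shared backbone feature map."""
    def __init__(self, in_channels, num_anchors=3, num_tasks=2):
        super().__init__()
        out_channels = num_anchors * 5  # 4 box offsets + 1 objectness, no class scores
        self.heads = nn.ModuleList(
            [nn.Conv2d(in_channels, out_channels, kernel_size=1) for _ in range(num_tasks)]
        )

    def forward(self, feature_map):
        # One output tensor per task, e.g. [pedestrian_out, helmet_out].
        return [head(feature_map) for head in self.heads]

# Example: a 13x13 feature map with 256 channels from the backbone.
heads = ParallelTaskHeads(in_channels=256)
outputs = heads(torch.randn(1, 256, 13, 13))  # two tensors of shape (1, 15, 13, 13)
```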
S101, detecting pedestrians and safety helmets on the video image through a pedestrian-safety helmet detection model.
Referring to fig. 4, the obtained live image is input to the pedestrian-helmet detection model for recognition, and each recognized pedestrian and helmet is marked, i.e., with a pedestrian target rectangular frame [coord] and a helmet rectangular frame [helmet_coord] respectively.
And S102, matching the detected pedestrian with a unique identity by adopting a target tracking algorithm.
Specifically, target tracking is carried out on the pedestrian in the identified pedestrian target rectangular frame to obtain the predicted position of the pedestrian; feature extraction is carried out on the pedestrian in the identified pedestrian target rectangular frame through a convolutional neural network; and the identity of the pedestrian is matched according to the predicted position and the feature similarity. The position prediction step and the feature extraction step are not ordered and can be carried out simultaneously.
In one embodiment, the Deep SORT target tracking algorithm can be adopted for target tracking, i.e., extracting pedestrian features with a convolutional neural network and comparing their similarity is combined with Kalman filtering, which improves the tracking accuracy and avoids jumps in the pedestrian identity ID.
For example, referring to fig. 4, on the one hand, the position of each pedestrian tracker already being tracked is predicted by a Kalman filter algorithm, and the corresponding parameter is denoted [prediction_coord]; on the other hand, feature similarity extraction is carried out on the image content of each detected target pedestrian rectangular frame [coord] through a convolutional neural network algorithm.
Each tracked pedestrian tracker is then matched with a target pedestrian [coord] according to the predicted position [prediction_coord] and the feature similarity, and is assigned a unique identity ID.
In one embodiment, when a tracked pedestrian tracker is not successfully matched within a preset number of frames, for example 30 frames, the information related to that tracker is deleted.
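A simplified sketch of the matching step described above, assuming each tracker provides a Kalman-predicted box and an appearance feature and each detection provides a box [coord] and a feature extracted by the convolutional neural network; the cost combines position overlap (IoU) and cosine feature similarity and is solved with the Hungarian algorithm, with the weight w_pos being an illustrative value rather than one from the patent:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(box_a, box_b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match_trackers(pred_boxes, track_feats, det_boxes, det_feats, w_pos=0.5):
    """Assign detections to trackers using predicted position plus appearance."""
    cost = np.zeros((len(pred_boxes), len(det_boxes)))
    for i, (pb, tf) in enumerate(zip(pred_boxes, track_feats)):
        for j, (db, df) in enumerate(zip(det_boxes, det_feats)):
            pos_cost = 1.0 - iou(pb, db)
            cos_sim = np.dot(tf, df) / (np.linalg.norm(tf) * np.linalg.norm(df) + 1e-9)
            cost[i, j] = w_pos * pos_cost + (1.0 - w_pos) * (1.0 - cos_sim)
    rows, cols = linear_sum_assignment(cost)  # Hungarian matching
    return list(zip(rows, cols))              # (tracker_index, detection_index) pairs
```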
S103, judging whether the pedestrian carrying the identity has been marked as wearing a safety helmet; if so, marking the pedestrian as wearing a safety helmet and outputting the result.
For example, referring to fig. 4, it is determined whether the tracked pedestrian, i.e., the tracker, has been marked as wearing a helmet; if so, the pedestrian is marked as wearing a helmet and the result is output to the display device 300. The output result includes, for example, the identified pedestrian rectangular frame, the wearing-helmet mark, the helmet rectangular frame, and other information.
In one embodiment, the method further comprises: if the pedestrian carrying the identity is not marked as wearing a safety helmet, matching the pedestrian carrying the identity with a detected safety helmet, and marking a pedestrian who is not successfully matched within a preset number of frames as not wearing a safety helmet. For example, referring to fig. 4, the helmet rectangular frame parameter [helmet_coord] is matched with each tracked pedestrian tracker; a pedestrian not matched to any [helmet_coord] within 15 frames is marked as not wearing a helmet, and a result including the recognized pedestrian rectangular frame, the not-wearing-helmet mark, and other information is output.
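A hedged sketch of this helmet-to-pedestrian matching and frame-counting step: the head-region heuristic and the dictionary-based tracker state are assumptions, since the patent only states that [helmet_coord] is matched against the tracked pedestrian and that 15 unmatched frames trigger the not-wearing mark:

```python
def helmet_in_head_region(pedestrian_box, helmet_box, head_fraction=0.3):
    """Assumed heuristic: the helmet centre lies in the upper part of the pedestrian box."""
    px1, py1, px2, py2 = pedestrian_box
    hx = (helmet_box[0] + helmet_box[2]) / 2.0
    hy = (helmet_box[1] + helmet_box[3]) / 2.0
    return px1 <= hx <= px2 and py1 <= hy <= py1 + head_fraction * (py2 - py1)

def update_helmet_status(tracker, pedestrian_box, helmet_boxes, max_unmatched=15):
    """Mark a tracked pedestrian as not wearing a helmet after 15 unmatched frames."""
    if tracker.get("wearing_helmet"):
        return tracker  # already confirmed as wearing a helmet
    if any(helmet_in_head_region(pedestrian_box, hb) for hb in helmet_boxes):
        tracker["wearing_helmet"] = True
        tracker["unmatched_frames"] = 0
    else:
        tracker["unmatched_frames"] = tracker.get("unmatched_frames", 0) + 1
        if tracker["unmatched_frames"] >= max_unmatched:
            tracker["not_wearing_helmet"] = True
    return tracker
```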
The target tracking algorithm described above is important for objectifying the pedestrians in the video (assigning a unique ID to each person appearing in the video). After Deep SORT is combined with the redesigned target detection network, whether a pedestrian with a given ID wears a safety helmet is judged comprehensively over multiple frames, which greatly reduces the false alarm rate for workers not wearing safety helmets in the video.
The embodiment of the invention also discloses a pedestrian and safety helmet detection device 200, as shown in fig. 5, comprising:
an acquisition unit 2001 for acquiring a video image;
a detection unit 2002 for performing pedestrian and helmet detection on the video image through a pedestrian-helmet detection model;
a tracking unit 2003, configured to match a unique identity for the detected pedestrian by using a target tracking algorithm;
and a judging unit 2004, configured to judge whether the pedestrian carrying the identity has been marked as wearing a safety helmet, and if so, mark the pedestrian as wearing a safety helmet and output the result.
The judging unit 2004 is further configured to, if the pedestrian carrying the identity is not marked as wearing a safety helmet, match the pedestrian carrying the identity with a detected safety helmet, and mark a pedestrian who is not successfully matched within a preset number of frames as not wearing a safety helmet.
The detection apparatus 200 further comprises a training unit for receiving a first training image comprising at least one of: a sample image containing only a first or a second object to be detected, a sample image containing both a first object to be detected and a second object to be detected, and a sample image containing neither object to be detected, wherein the first object to be detected and the second object to be detected are respectively one of the following: a pedestrian and a safety helmet; labeling the first training image to obtain a second training image; and performing learning training on the second training image to obtain the pedestrian-safety helmet detection model.
The tracking unit 2003 is further configured to perform target tracking on the detected pedestrian to obtain a predicted position of the pedestrian; performing feature extraction on the detected pedestrian through a convolutional neural network; and matching the identity of the pedestrian according to the predicted position and the feature similarity.
The above function of the detection apparatus 200 corresponds to the detection method, and specific embodiments can refer to the method examples.
The foregoing is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make modifications without departing from the principle of the invention, and such modifications should also be considered to fall within the protection scope of the invention.
Claims (14)
1. A pedestrian and headgear detection method, comprising:
acquiring a video image;
performing pedestrian and safety helmet detection on the video image through a pedestrian-safety helmet detection model;
matching a unique identity for the detected pedestrian by adopting a target tracking algorithm;
and judging whether the pedestrian carrying the identity has been marked as wearing a safety helmet, and if so, marking the pedestrian as wearing a safety helmet and outputting the result.
2. The method of claim 1, further comprising, if the pedestrian carrying the identity is not marked as wearing a safety helmet, matching the pedestrian carrying the identity with a detected safety helmet, and marking a pedestrian who has not been successfully matched within a preset number of frames as not wearing a safety helmet.
3. The method of claim 2, wherein the preset number of frames is 15 frames.
4. The method of claim 1, further comprising, prior to acquiring the video image:
receiving a first training image comprising: a sample image containing only a first or a second object to be detected, a sample image containing both a first object to be detected and a second object to be detected, and a sample image containing neither object to be detected, wherein the first object to be detected and the second object to be detected are respectively one of the following: a pedestrian and a safety helmet;
labeling the first training image to obtain a second training image;
and performing learning training on the second training image to obtain a pedestrian-safety helmet detection model.
5. The method of claim 4, wherein labeling the first training image comprises performing classification labeling according to the first object or the second object included in the sample image.
6. The method of claim 1, wherein the matching the detected pedestrian with a unique identification using a target tracking algorithm comprises: carrying out target tracking on the detected pedestrian to obtain the predicted position of the pedestrian; performing feature extraction on the detected pedestrian through a convolutional neural network; and matching the identity of the pedestrian according to the predicted position and the feature similarity.
7. The method of claim 1 or 6, further comprising deleting pedestrians that do not match a successful identity within a preset number of frames.
8. The method of claim 1, further comprising reading the video image from the video stream and storing the video image in an image pool prior to obtaining the video image.
9. A pedestrian and headgear detection arrangement, comprising:
an acquisition unit configured to acquire a video image;
the detection unit is used for detecting pedestrians and safety helmets on the video image through a pedestrian-safety helmet detection model;
the tracking unit is used for matching a unique identity for the detected pedestrian by adopting a target tracking algorithm;
and the judging unit is used for judging whether the pedestrian carrying the identity has been marked as wearing a safety helmet, and if so, marking the pedestrian as wearing a safety helmet and outputting the result.
10. The apparatus of claim 9, wherein the determining unit is further configured to, if the pedestrian carrying the identification is not marked to wear the safety helmet, match the pedestrian carrying the identification with the identification of the safety helmet, and mark the pedestrian who has not successfully matched within a preset number of frames as not wearing the safety helmet.
11. The apparatus of claim 9, further comprising a training unit to receive a first training image comprising: a sample image containing only a first or a second object to be detected, a sample image containing both a first object to be detected and a second object to be detected, and a sample image containing neither object to be detected, wherein the first object to be detected and the second object to be detected are respectively one of the following: a pedestrian and a safety helmet; label the first training image to obtain a second training image; and perform learning training on the second training image to obtain a pedestrian-safety helmet detection model.
12. The apparatus according to claim 9, wherein the tracking unit is further configured to perform target tracking on the detected pedestrian to obtain a predicted position of the pedestrian; performing feature extraction on the detected pedestrian through a convolutional neural network; and matching the identity of the pedestrian according to the predicted position and the feature similarity.
13. A pedestrian and headgear detection system comprising a camera and a detection device according to any one of claims 9 to 12 connected to the camera.
14. The system of claim 13, further comprising a display device coupled to the detection device.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911163568.4A CN110619324A (en) | 2019-11-25 | 2019-11-25 | Pedestrian and safety helmet detection method, device and system |
CN202010588789.2A CN111914636B (en) | 2019-11-25 | 2020-06-24 | Method and device for detecting whether pedestrian wears safety helmet |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911163568.4A CN110619324A (en) | 2019-11-25 | 2019-11-25 | Pedestrian and safety helmet detection method, device and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110619324A true CN110619324A (en) | 2019-12-27 |
Family
ID=68927575
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911163568.4A Pending CN110619324A (en) | 2019-11-25 | 2019-11-25 | Pedestrian and safety helmet detection method, device and system |
CN202010588789.2A Active CN111914636B (en) | 2019-11-25 | 2020-06-24 | Method and device for detecting whether pedestrian wears safety helmet |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010588789.2A Active CN111914636B (en) | 2019-11-25 | 2020-06-24 | Method and device for detecting whether pedestrian wears safety helmet |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN110619324A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111931652A (en) * | 2020-08-11 | 2020-11-13 | 沈阳帝信人工智能产业研究院有限公司 | Dressing detection method and device and monitoring terminal |
CN113255422A (en) * | 2020-12-29 | 2021-08-13 | 四川隧唐科技股份有限公司 | Process connection target identification management method and system based on deep learning |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112487969B (en) * | 2020-11-30 | 2023-06-30 | 苏州热工研究院有限公司 | Position acquisition method for inspection target of inspection robot of steam generator |
CN112668508B (en) * | 2020-12-31 | 2023-08-15 | 中山大学 | Pedestrian labeling, detecting and gender identifying method based on vertical depression angle |
CN113052107B (en) * | 2021-04-01 | 2023-10-24 | 北京华夏启信科技有限公司 | Method for detecting wearing condition of safety helmet, computer equipment and storage medium |
CN113553963A (en) * | 2021-07-27 | 2021-10-26 | 广联达科技股份有限公司 | Detection method and device of safety helmet, electronic equipment and readable storage medium |
CN113592014A (en) * | 2021-08-06 | 2021-11-02 | 广联达科技股份有限公司 | Method and device for identifying safety helmet, computer equipment and storage medium |
CN114120358B (en) * | 2021-11-11 | 2024-04-26 | 国网江苏省电力有限公司技能培训中心 | Super-pixel-guided deep learning-based personnel head-mounted safety helmet recognition method |
CN114387542A (en) * | 2021-12-27 | 2022-04-22 | 广州市奔流电力科技有限公司 | Video acquisition unit abnormity identification system based on portable ball arrangement and control |
CN116152863B (en) * | 2023-04-19 | 2023-07-21 | 尚特杰电力科技有限公司 | Personnel information identification method and device, electronic equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109670441A (en) * | 2018-12-14 | 2019-04-23 | 广东亿迅科技有限公司 | A kind of realization safety cap wearing knows method for distinguishing, system, terminal and computer readable storage medium |
CN110070033A (en) * | 2019-04-19 | 2019-07-30 | 山东大学 | Safety cap wearing state detection method in a kind of power domain dangerous work region |
CN110287804A (en) * | 2019-05-30 | 2019-09-27 | 广东电网有限责任公司 | A kind of electric operating personnel's dressing recognition methods based on mobile video monitor |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108319926A (en) * | 2018-02-12 | 2018-07-24 | 安徽金禾软件股份有限公司 | A kind of the safety cap wearing detecting system and detection method of building-site |
CN109447168A (en) * | 2018-11-05 | 2019-03-08 | 江苏德劭信息科技有限公司 | A kind of safety cap wearing detection method detected based on depth characteristic and video object |
CN110119686B (en) * | 2019-04-17 | 2020-09-25 | 电子科技大学 | Safety helmet real-time detection method based on convolutional neural network |
CN110263665A (en) * | 2019-05-29 | 2019-09-20 | 朗坤智慧科技股份有限公司 | Safety cap recognition methods and system based on deep learning |
CN110222672B (en) * | 2019-06-19 | 2022-10-21 | 广东工业大学 | Method, device and equipment for detecting wearing of safety helmet in construction site and storage medium |
CN110458075B (en) * | 2019-08-05 | 2023-08-25 | 北京泰豪信息科技有限公司 | Method, storage medium, device and system for detecting wearing of safety helmet |
-
2019
- 2019-11-25 CN CN201911163568.4A patent/CN110619324A/en active Pending
-
2020
- 2020-06-24 CN CN202010588789.2A patent/CN111914636B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN111914636B (en) | 2021-04-20 |
CN111914636A (en) | 2020-11-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110619324A (en) | Pedestrian and safety helmet detection method, device and system | |
US11367219B2 (en) | Video analysis apparatus, person retrieval system, and person retrieval method | |
US9141184B2 (en) | Person detection system | |
CN108319926A (en) | A kind of the safety cap wearing detecting system and detection method of building-site | |
CN112163469B (en) | Smoking behavior recognition method, system, equipment and readable storage medium | |
CN110222672A (en) | The safety cap of construction site wears detection method, device, equipment and storage medium | |
WO2014050518A1 (en) | Information processing device, information processing method, and information processing program | |
US10037467B2 (en) | Information processing system | |
CN110222582B (en) | Image processing method and camera | |
CN110688980B (en) | Human body posture classification method based on computer vision | |
CN108229289B (en) | Target retrieval method and device and electronic equipment | |
WO2020171066A1 (en) | Image search device and training data extraction method | |
CN112580552A (en) | Method and device for analyzing behavior of rats | |
CN112507860A (en) | Video annotation method, device, equipment and storage medium | |
CN110245564A (en) | A kind of pedestrian detection method, system and terminal device | |
KR20200112681A (en) | Intelligent video analysis | |
KR20190088087A (en) | method of providing categorized video processing for moving objects based on AI learning using moving information of objects | |
US20230186634A1 (en) | Vision-based monitoring of site safety compliance based on worker re-identification and personal protective equipment classification | |
CN111881320A (en) | Video query method, device, equipment and readable storage medium | |
CN111753587A (en) | Method and device for detecting falling to ground | |
CN111898418A (en) | Human body abnormal behavior detection method based on T-TINY-YOLO network | |
CN111539257A (en) | Personnel re-identification method, device and storage medium | |
CN111126112B (en) | Candidate region determination method and device | |
US11995915B2 (en) | Systems and methods for collecting video clip evidence from a plurality of video streams of a video surveillance system | |
CN115131826A (en) | Article detection and identification method, and network model training method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20191227 | |