CN112257620A - Safe wearing condition identification method - Google Patents

Safe wearing condition identification method

Info

Publication number
CN112257620A
CN112257620A (application CN202011169611.0A; granted publication CN112257620B)
Authority
CN
China
Prior art keywords: safety, wearing, box, frame, data
Prior art date
Legal status
Granted
Application number
CN202011169611.0A
Other languages
Chinese (zh)
Other versions
CN112257620B (en)
Inventor
李静
王荣秋
李朝辉
Current Assignee
Guangzhou Huawei Mingtian Software Technology Co ltd
Original Assignee
Guangzhou Huawei Mingtian Software Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Huawei Mingtian Software Technology Co ltd filed Critical Guangzhou Huawei Mingtian Software Technology Co ltd
Priority to CN202011169611.0A priority Critical patent/CN112257620B/en
Publication of CN112257620A publication Critical patent/CN112257620A/en
Application granted granted Critical
Publication of CN112257620B publication Critical patent/CN112257620B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06V 40/10: Recognition of human or animal bodies in image or video data, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06F 18/23: Pattern recognition; clustering techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/045: Neural network architectures; combinations of networks
    • G06N 3/049: Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N 3/08: Neural network learning methods
    • G06V 20/52: Scene-specific elements; surveillance or monitoring of activities, e.g. for recognising suspicious objects


Abstract

A safety wearing condition identification method that, through six steps, judges from collected video stream data whether an object (the monitored person) has worn the required safety articles during a time period. Through a long-short memory algorithm, the long memory structure or the short memory structure is activated according to the object's wearing condition over a period of time, avoiding false detections and missed detections of safety suits and safety helmets caused by target deformation, sudden motion, background clutter, occlusion, video frame loss and the like. The number of alarms produced by false and missed detections of safety suits and helmets is greatly reduced, alarm information is output reasonably, and the method has the advantage of accurate judgment.

Description

Safe wearing condition identification method
Technical Field
The invention relates to the technical field of image recognition, and in particular to a method for identifying the safety wearing condition.
Background
At petrochemical work sites, construction sites, power plants, railway interiors and similar locations, the scene environment is complex and many factors threaten personal safety, so workers in such settings must wear safety helmets and safety suits.
Intelligently detecting whether workers wear safety helmets and safety suits is significant for both on-site safety management and intelligent information management. It can effectively improve supervisors' efficiency in monitoring how helmets and safety suits are worn on site, greatly reduce the labour cost of manual patrols, provide a safety guarantee for workers, and to some extent reduce the occurrence of safety accidents. However, when prior-art image recognition technology is applied to target detection, false detections and missed detections easily occur.
It is therefore necessary to provide a safety wearing condition identification method that remedies these deficiencies of the prior art.
Disclosure of Invention
One purpose of the invention is to provide a safety wearing condition identification method that avoids the defects of the prior art. The method can effectively avoid false detection or missed detection of safety wearing articles caused by target deformation, sudden motion, background clutter, occlusion, video frame loss and the like, and has the characteristic of accurate judgment.
The above object of the present invention is achieved by the following technical measures:
The safety wearing condition identification method comprises the following steps:
Step one, collecting a plurality of material images of safety wearing articles;
Step two, labelling the standard wearing-article regions in each material image obtained in step one to obtain a plurality of pieces of standard wearing-article region frame information;
Step three, re-clustering the standard wearing-article region frame information obtained in step two to obtain re-clustered grouping data;
Step four, training on the re-clustered grouping data obtained in step three and the standard wearing-article region frame information obtained in step two with the YOLOv3 deep-learning neural network algorithm of the Darknet framework to obtain an optimal model;
Step five, parsing the video stream data of the acquisition area into multiple frame images, inputting the frame images in sequence into the optimal model of step four to obtain the frame information of the safety wearing article and the frame information of the object (the monitored person), and then obtaining the safety wearing condition data of each image from these two pieces of frame information;
Step six, counting the duration of the safety wearing condition data of step five according to a long-short memory algorithm to obtain the wearing judgment result for the object in the time period.
Preferably, step six records, according to the long-short memory algorithm, the accumulated duration t of the step-five safety wearing condition data within a time period T. When t ≥ αT, the safety wearing article is judged present; when t < αT, it is judged absent, where α is a time threshold with 0.1 < α < 0.9 and T ≥ t > 0.
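The step-six judgment can be sketched as follows. This is a minimal illustration rather than the patented implementation; the function name `judge_wearing`, the per-frame boolean representation and the fixed frame rate are assumptions of the sketch.

```python
def judge_wearing(wear_flags, fps, alpha, T):
    """Judge presence of the safety wearing article over a window of T seconds.

    wear_flags: per-frame booleans (True = article detected in that frame)
    fps:        frames per second of the analysed stream
    alpha:      time threshold, 0.1 < alpha < 0.9
    T:          length of the judgment window in seconds (T >= t > 0)
    """
    t = sum(wear_flags) / fps   # accumulated wearing duration t within the window
    return t >= alpha * T       # True: article present; False: article absent

# 30 positive frames out of 50 at 10 fps gives t = 3 s; with alpha = 0.5 and
# T = 5 s the threshold is 2.5 s, so the article is judged present.
print(judge_wearing([True] * 30 + [False] * 20, fps=10, alpha=0.5, T=5))
```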
Preferably, the standard wearing-article region frame information comprises the centre-point coordinates, frame height and frame width of the standard wearing-article region frame.
Preferably, the long and short memory algorithm includes a long memory structure and a short memory structure.
Preferably, the long memory structure records the safety wearing condition data within a time period T1, and the short memory structure records it within a time period T2, where T1 > T2 > 0.
When the duration of the video stream data is less than T1, the accumulated duration t of the step-five safety wearing condition data within a T2 time period is recorded; when t ≥ αT2, the judgment result is that the object wears the safety wearing article, and when t < αT2, the judgment result is that the object does not wear it and an alarm is issued.
When video stream data exists but the duration of frame information containing the object is less than T1, the accumulated duration t within a T2 time period is likewise recorded and judged against αT2 in the same way.
When the duration of the video stream data is greater than or equal to T1, the accumulated duration t of the step-five safety wearing condition data within a T1 time period is recorded; when t ≥ αT1, the judgment result is that the object wears the safety wearing article, and when t < αT1, the judgment result is that it does not and an alarm is issued.
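The choice between the two memory structures can be sketched as below; `select_window` and its arguments are illustrative names, and the preferred values T1 = 10 s and T2 = 5 s from the embodiments are used as defaults.

```python
def select_window(stream_duration, object_visible_duration, T1=10.0, T2=5.0):
    """Return the judgment window length in seconds (T1 > T2 > 0).

    The short memory structure (T2) is used while the stream is younger than
    T1, or when the object's frame information has been present for less than
    T1 (e.g. a person who has just reappeared); otherwise the long memory
    structure (T1) is used.
    """
    if stream_duration < T1 or object_visible_duration < T1:
        return T2
    return T1
```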
Preferably, step three specifically performs k-means re-clustering on the plurality of pieces of standard wearing-article region frame information obtained in step two, with K set to 9, to obtain the re-clustered grouping data.
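A plain k-means over the labelled (width, height) pairs, as step three specifies with K = 9, might look like the sketch below. The deterministic initialisation and Euclidean distance are simplifications (YOLOv3 practice often clusters with an IoU-based distance instead), and all names are illustrative.

```python
def kmeans_boxes(wh, k=9, iters=50):
    """Group (width, height) pairs of labelled region frames into k clusters."""
    centers = [[float(w), float(h)] for w, h in wh[:k]]  # deterministic init
    labels = [0] * len(wh)
    for _ in range(iters):
        # assignment step: nearest centre by squared Euclidean distance
        for i, (w, h) in enumerate(wh):
            labels[i] = min(range(k),
                            key=lambda j: (w - centers[j][0]) ** 2
                                        + (h - centers[j][1]) ** 2)
        # update step: recompute each centre as the mean of its members
        for j in range(k):
            members = [wh[i] for i in range(len(wh)) if labels[i] == j]
            if members:
                centers[j] = [sum(w for w, _ in members) / len(members),
                              sum(h for _, h in members) / len(members)]
    return centers, labels
```

The resulting k centres play the role of the re-clustered grouping data (the anchor-box groups) fed to training in step four.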
Preferably, step four specifically comprises training, according to the training parameters, on the re-clustered grouping data obtained in step three and the plurality of pieces of standard wearing-article region frame information obtained in step two with the YOLOv3 deep-learning neural network algorithm of the Darknet framework to obtain the optimal model.
Preferably, the training parameters are: the initial learning rate is set to β, the maximum training round is B, the learning rate from round C is γ·β, and the learning rate from round D is δ·β, where B > D > C > 1000 and β, γ and δ are all positive numbers.
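The stepped schedule described by these training parameters can be written out as follows; this sketches the schedule itself, not Darknet's configuration syntax, and the defaults are the preferred values from the embodiments.

```python
def learning_rate(rnd, beta=0.001, gamma=0.1, delta=0.01, C=12000, D=16000):
    """Learning rate at training round rnd: beta before round C, gamma*beta
    from round C to round D, delta*beta from round D on (B > D > C > 1000)."""
    if rnd < C:
        return beta
    if rnd < D:
        return gamma * beta
    return delta * beta

# the rate steps down by 10x at round 12000 and by 100x (relative to the
# initial value) at round 16000
print(learning_rate(0), learning_rate(13000), learning_rate(18000))
```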
When the average loss value output by training becomes constant at some training round, that round is taken as the starting point and the weights saved within ±V rounds of it are tested to select the optimal model, where V > 0.
Preferably, the wearing article is a safety garment or a safety helmet.
Preferably, the fifth step includes:
step 5.1, analyzing the video stream data of the object in the acquisition area to obtain a plurality of frames of images;
Step 5.2, inputting the frame images in sequence into the optimal model of step four to obtain the class, score and box corresponding to the safety suit, the safety helmet and the object, where class is the category information, score is the confidence of the recognized target, and box is the frame information (x, y, w, h) of the recognized target, with x the x-axis coordinate of the frame centre point, y the y-axis coordinate of the frame centre point, w the frame width and h the frame height;
Step 5.3, comparing score with a confidence threshold θ; when score < θ, the target is judged a false detection and the corresponding box is deleted; when score ≥ θ, the target is judged a valid detection and step 5.4 is entered, where 0.5 ≤ θ ≤ 0.8;
Step 5.4, matching the box of the detected safety suit with the box of the object to obtain the safety-suit wearing condition data, and matching the box of the detected safety helmet with the box of the object to obtain the safety-helmet wearing condition data.
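Step 5.3's confidence filtering is straightforward; the sketch below uses illustrative (class, score, box) tuples and the preferred θ = 0.65.

```python
def filter_detections(detections, theta=0.65):
    """Keep detections whose confidence meets the threshold (0.5 <= theta <= 0.8);
    boxes with score < theta are treated as false detections and deleted."""
    return [d for d in detections if d[1] >= theta]

detections = [("coverall", 0.9641, (526, 361, 185, 432)),
              ("hat",      0.40,   (526, 98, 106, 81))]
# only the safety-suit detection survives; the low-confidence hat box is dropped
print(filter_detections(detections))
```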
Preferably, matching the box of the detected safety suit with the box of the object specifically comprises comparing the intersection area S(box_p ∩ box_c) of the object box and the safety-suit box with the product η1·S(box_c) of an area threshold η1 and the safety-suit box area: when S(box_p ∩ box_c) ≥ η1·S(box_c), the safety-suit wearing condition data is judged present; when S(box_p ∩ box_c) < η1·S(box_c), it is judged absent.
Preferably, matching the box of the detected safety helmet with the box of the object specifically comprises expanding the height of the object's frame by λ times in the positive direction of the y axis to obtain an expanded object box, then comparing the intersection area S(box_p ∩ box_h) of the expanded object box and the helmet box with the product η2·S(box_h) of an area threshold η2 and the helmet box area: when S(box_p ∩ box_h) ≥ η2·S(box_h), the helmet wearing condition data is judged present; when S(box_p ∩ box_h) < η2·S(box_h), it is judged absent.
Here 0.5 ≤ η1 ≤ 0.8, 0.5 ≤ η2 ≤ 0.8 and 0.2 ≤ λ ≤ 0.5.
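The two matching rules can be sketched as below. Boxes are taken in the (x, y, w, h) centre-point form of step 5.2. The sign used when expanding the object box toward the head depends on the image coordinate convention, so the λ-expansion here (toward smaller y, i.e. upward in typical image coordinates) is an interpretation, and all function names are illustrative.

```python
def intersection_area(a, b):
    """Overlap area of two axis-aligned boxes in (x, y, w, h) centre form."""
    ax0, ay0 = a[0] - a[2] / 2, a[1] - a[3] / 2
    ax1, ay1 = a[0] + a[2] / 2, a[1] + a[3] / 2
    bx0, by0 = b[0] - b[2] / 2, b[1] - b[3] / 2
    bx1, by1 = b[0] + b[2] / 2, b[1] + b[3] / 2
    w = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    h = max(0.0, min(ay1, by1) - max(ay0, by0))
    return w * h

def suit_worn(box_p, box_c, eta1=0.6):
    """Suit rule: intersection of the object box and the suit box must cover
    at least eta1 of the suit box area (0.5 <= eta1 <= 0.8)."""
    return intersection_area(box_p, box_c) >= eta1 * box_c[2] * box_c[3]

def helmet_worn(box_p, box_h, eta2=0.6, lam=0.3):
    """Helmet rule: expand the object box height by lam toward the head,
    then require the overlap with the helmet box to cover at least eta2
    of the helmet box area (0.5 <= eta2 <= 0.8, 0.2 <= lam <= 0.5)."""
    x, y, w, h = box_p
    expanded = (x, y - lam * h / 2, w, h * (1 + lam))  # grows the top edge by lam*h
    return intersection_area(expanded, box_h) >= eta2 * box_h[2] * box_h[3]
```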
Preferably, B is 20000, D is 16000, C is 12000, β is 0.001, γ is 0.1 and δ is 0.01.
Preferably, θ is 0.65.
Preferably, T1 is 10 seconds, T2 is 5 seconds and α is 0.5.
Preferably, η1 and η2 are both 0.6.
Preferably, λ is 0.3.
Preferably, V is 2000.
According to this safety wearing condition identification method, the wearing judgment result for the object in a time period is obtained from collected video stream data through six steps. Through the long-short memory algorithm, the long memory structure or the short memory structure is activated according to the object's wearing condition over a period of time, avoiding false detections and missed detections of safety suits and safety helmets caused by target deformation, sudden motion, background clutter, occlusion, video frame loss and the like. The number of alarms produced by false and missed detections is greatly reduced, alarm information is output reasonably, and the method has the characteristic of accurate judgment.
Drawings
The invention is further illustrated by the accompanying drawings, whose content does not limit the invention in any way.
Fig. 1 is a flowchart of a method for identifying a safety wearing condition.
Detailed Description
The technical solution of the present invention is further illustrated by the following examples.
Example 1.
A safety wearing condition identification method, as shown in fig. 1, comprises the following steps:
Step one, collecting a plurality of material images of safety wearing articles;
Step two, labelling the standard wearing-article regions in each material image obtained in step one to obtain a plurality of pieces of standard wearing-article region frame information;
Step three, re-clustering the standard wearing-article region frame information obtained in step two to obtain re-clustered grouping data;
Step four, training on the re-clustered grouping data obtained in step three and the standard wearing-article region frame information obtained in step two with the YOLOv3 deep-learning neural network algorithm of the Darknet framework to obtain an optimal model;
Step five, parsing the video stream data of the acquisition area into multiple frame images, inputting the frame images in sequence into the optimal model of step four to obtain the frame information of the safety wearing article and the frame information of the object, and then obtaining the safety wearing condition data of each image from these two pieces of frame information;
Step six, counting the duration of the safety wearing condition data of step five according to a long-short memory algorithm to obtain the wearing judgment result for the object in the time period.
The wearing articles of the invention are safety suits and safety helmets. The material images include images of objects wearing safety suits and images of objects wearing safety helmets; each material image is then labelled with the region frame information of the object, the safety suit and the safety helmet in the image. The video stream data may be acquired by a law-enforcement camera worn on the worker's helmet or another body part, or by camera equipment fixed at a set position.
According to the long-short memory algorithm, the accumulated duration t of the step-five safety wearing condition data within a time period T is recorded. When t ≥ αT, the safety wearing article is judged present; when t < αT, it is judged absent, where α is a time threshold with 0.1 < α < 0.9 and T ≥ t > 0.
The standard wearing object region frame information comprises a center point coordinate, a frame height and a frame width of the standard wearing object region frame.
The long-short memory algorithm comprises a long memory structure and a short memory structure. The long memory structure records the safety wearing condition data within a T1 time period and the short memory structure records it within a T2 time period, where T1 > T2 > 0.
When the duration of the video stream data is less than T1, the accumulated duration t of the step-five safety wearing condition data within a T2 time period is recorded; when t ≥ αT2, the judgment result is that the object wears the safety wearing article, and when t < αT2, the judgment result is that the object does not wear it and an alarm is issued.
When video stream data exists but the duration of frame information containing the object is less than T1, the accumulated duration t within a T2 time period is likewise recorded and judged against αT2 in the same way.
When the duration of the video stream data is greater than or equal to T1, the accumulated duration t of the step-five safety wearing condition data within a T1 time period is recorded; when t ≥ αT1, the judgment result is that the object wears the safety wearing article, and when t < αT1, the judgment result is that it does not and an alarm is issued.
Step three specifically performs k-means re-clustering on the plurality of pieces of standard wearing-article region frame information obtained in step two, with K set to 9, to obtain the re-clustered grouping data.
Step four specifically comprises training, according to the training parameters, on the re-clustered grouping data obtained in step three and the plurality of pieces of standard wearing-article region frame information obtained in step two with the YOLOv3 deep-learning neural network algorithm of the Darknet framework to obtain the optimal model.
The training parameters are: the initial learning rate is set to β, the maximum training round is B, the learning rate from round C is γ·β, and the learning rate from round D is δ·β, where B > D > C > 1000 and β, γ and δ are all positive numbers. In this example, β is specifically 0.001, γ is 0.1, δ is 0.01, B is 20000, D is 16000 and C is 12000.
Because later rounds are closer to the true value, the learning rate of later rounds is made smaller, which yields an optimal model closer to the true value. Repeated experiments show that with β = 0.001, γ = 0.1, δ = 0.01, B = 20000, D = 16000 and C = 12000, the optimal model obtained is comparatively faithful.
When the average loss value output by training becomes constant at some training round, that round is taken as the starting point and the weights saved within ±V rounds of it are tested to select the optimal model, where V > 0; in this embodiment, V is specifically 2000.
In this example, the average loss value no longer decreases, i.e. becomes constant, after 14000 training rounds. Taking round 14000 as the starting point and a range of ±2000 rounds, the weights saved between rounds 12000 and 16000 are tested and the best is selected as the optimal model.
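Assuming weights are saved every 1000 rounds (the saving interval is not specified in the text, so `save_every` is an assumption), the candidate weights to test around the plateau round can be enumerated as:

```python
def candidate_rounds(plateau_round, V=2000, save_every=1000, max_round=20000):
    """Rounds whose saved weights are tested for the optimal model:
    within plateau_round +/- V, clipped to the training range."""
    lo = max(0, plateau_round - V)
    hi = min(max_round, plateau_round + V)
    return [r for r in range(0, max_round + 1, save_every) if lo <= r <= hi]

# with the plateau at round 14000 and V = 2000, weights saved between
# rounds 12000 and 16000 are tested, as in the embodiment
print(candidate_rounds(14000))
```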
It should be noted that the YOLOv3 deep-learning neural network algorithm of the Darknet framework adopted by the invention is common knowledge in the art; once the parameters are set, training on the re-clustered grouping data and the plurality of pieces of standard wearing-article region frame information yields the optimal model of the invention, so the specific settings and operation of the algorithm are not repeated here. Likewise, k-means re-clustering is a known algorithm whose parameters and settings those skilled in the art understand; once the parameters are set, performing k-means re-clustering on the standard wearing-article region frame information yields the re-clustered grouping data of the invention, so its specific settings and operation are likewise not described one by one.
According to this safety wearing condition identification method, the wearing judgment result for the object in a time period is obtained from collected video stream data through six steps. Through the long-short memory algorithm, the long memory structure or the short memory structure is activated according to the object's wearing condition over a period of time, avoiding false detections and missed detections of safety suits and safety helmets caused by target deformation, sudden motion, background clutter, occlusion, video frame loss and the like. The number of alarms produced by false and missed detections is greatly reduced, alarm information is output reasonably, and the method has the advantage of accurate judgment.
Example 2.
The other features of this safety wearing condition identification method are the same as in embodiment 1, with the following additional features. Step five comprises the following sub-steps:
step 5.1, analyzing the video stream data of the object in the acquisition area to obtain a plurality of frames of images;
Step 5.2, inputting the frame images in sequence into the optimal model of step four to obtain the class, score and box corresponding to the safety suit, the safety helmet and the object, where class is the category information, score is the confidence of the recognized target, and box is the frame information (x, y, w, h) of the recognized target, with x the x-axis coordinate of the frame centre point, y the y-axis coordinate of the frame centre point, w the frame width and h the frame height;
Step 5.3, comparing score with a confidence threshold θ; when score < θ, the target is judged a false detection and the corresponding box is deleted; when score ≥ θ, the target is judged a valid detection and step 5.4 is entered, where 0.5 ≤ θ ≤ 0.8;
Step 5.4, matching the box of the detected safety suit with the box of the object to obtain the safety-suit wearing condition data, and matching the box of the detected safety helmet with the box of the object to obtain the safety-helmet wearing condition data.
The confidence threshold θ of the invention is specifically 0.65; that is, when score is less than 0.65, the target is judged a false detection.
In step 5.1 of the invention, FFmpeg is used to parse the stream into multi-frame RGB-format image data; every frame, or every i-th frame (i can be set freely), may be passed into the optimal model according to the required detection efficiency and accuracy.
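Passing every i-th decoded frame to the model, as step 5.1 allows, is a simple stride over the decoded sequence; the generator below is illustrative (the actual FFmpeg decoding is outside the sketch).

```python
def sample_frames(frames, i=1):
    """Yield every i-th frame of a decoded sequence; i = 1 yields all frames.
    Larger i lowers per-second model cost at the price of detection latency."""
    for idx, frame in enumerate(frames):
        if idx % i == 0:
            yield frame

# with i = 3, frames 0, 3, 6 and 9 of a 10-frame run reach the model
print(list(sample_frames(range(10), i=3)))
```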
Step 5.2 is illustrated with one of the frame images: inputting it into the optimal model gives the class, score and box of the object, of the safety suit and of the safety helmet. For example, the safety-suit result is (coverall, 0.9641, 526, 361, 185, 432) and the safety-helmet result is (hat, 0.9268, 526, 98, 106, 81), where 0.9641 and 0.9268 are the confidences, (526, 361, 185, 432) and (526, 98, 106, 81) are the frame information (x, y, w, h), and coverall and hat are the category information of the safety suit and the safety helmet respectively.
Specifically, matching the box of the detected safety suit with the box of the object comprises comparing the intersection area S(box_p ∩ box_c) of the object box and the safety-suit box with the product η1·S(box_c) of the area threshold η1 and the safety-suit box area: when S(box_p ∩ box_c) ≥ η1·S(box_c), the safety-suit wearing condition data is judged present; when S(box_p ∩ box_c) < η1·S(box_c), it is judged absent, where 0.5 ≤ η1 ≤ 0.8.
Specifically, matching the box of the detected safety helmet with the box of the object comprises expanding the height of the object's frame by λ times in the positive direction of the y axis to obtain an expanded object box, then comparing the intersection area S(box_p ∩ box_h) of the expanded object box and the helmet box with the product η2·S(box_h) of the area threshold η2 and the helmet box area: when S(box_p ∩ box_h) ≥ η2·S(box_h), the helmet wearing condition data is judged present; when S(box_p ∩ box_h) < η2·S(box_h), it is judged absent, where 0.5 ≤ η2 ≤ 0.8 and 0.2 ≤ λ ≤ 0.5.
In this embodiment, η1 and η2 are both specifically 0.6 and λ is 0.3.
According to this safety wearing condition identification method, the intersection areas of the helmet box and of the safety-suit box with the object box are computed separately to judge the wearing condition data, which accurately reflects the real wearing condition.
Example 3.
A safety wearing condition identification method whose other features are the same as in embodiment 1, differing in that: in this example, α is specifically 0.5, T1 is 10 seconds and T2 is 5 seconds.
When the duration of the video stream data is less than 10 seconds, the accumulated duration t of the step-five safety wearing condition data within a 5-second period is recorded; when t ≥ 2.5 seconds, the object is judged to wear the safety wearing article, and when t < 2.5 seconds, it is judged not to and an alarm is issued. That is, the wearing of the safety suit and helmet within 5 seconds is recorded, and an alarm is generated if they are worn for less than half of those 5 seconds.
When video stream data exists but the duration of frame information containing the object is less than 10 seconds, the accumulated duration t within a 5-second period is recorded and judged in the same way. That is, if no person is detected for 10 consecutive seconds of video analysis, then when the person reappears, the wearing condition within 5 seconds is memorised, and an alarm is generated if the suit and helmet are worn for less than half of that time.
When the duration of the video stream data is greater than or equal to 10 seconds, the accumulated duration t within a 10-second period is recorded; when t ≥ 5 seconds, the object is judged to wear the safety wearing article, and when t < 5 seconds, it is judged not to and an alarm is issued. That is, once the analysed duration exceeds 10 seconds, the wearing condition within 10 seconds is memorised, and an alarm is generated if the cumulative wearing time is less than half of those 10 seconds.
By means of this long-short memory algorithm, the safety wearing condition identification method effectively avoids false or missed detections of safety gear caused by target deformation, sudden movement, background clutter, occlusion, video frame loss and the like, and therefore delivers accurate judgments.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit its scope of protection. Although the present invention has been described in detail with reference to preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to these technical solutions without departing from their spirit and scope.

Claims (10)

1. A safety wearing condition identification method, characterized by comprising the following steps:
step one, collecting a plurality of material images of the safety wearing object;
step two, labeling the standard wearing-object regions in each material image obtained in step one to obtain a plurality of pieces of standard wearing-object region frame information;
step three, re-clustering the plurality of pieces of standard wearing-object region frame information obtained in step two to obtain re-clustered grouped data;
step four, training on the re-clustered grouped data obtained in step three and the standard wearing-object region frame information obtained in step two with the deep-learning neural network YOLOv3 algorithm under the Darknet framework to obtain an optimal model;
step five, parsing the video stream data of the acquisition area into multiple frame images, inputting the frames in sequence into the optimal model of step four to obtain frame information for the safety wearing objects and for the person target, and then deriving the safety wearing condition data of each image from that frame information;
step six, accumulating the duration of the safety wearing condition data from step five according to a long-short memory algorithm to obtain the wearing judgment result for the target within the time period.
2. The safety wearing condition identification method according to claim 1, characterized in that: the accumulated time t of the safety wearing condition data from step five within a time period T is recorded according to the long-short memory algorithm; when t ≥ αT, the safety wearing object is judged to be present; when t < αT, it is judged to be absent, wherein α is a time threshold, 0.1 < α < 0.9, and T ≥ t > 0.
3. The safety wearing condition identification method according to claim 2, characterized in that: the standard wearing-object region frame information comprises the centre-point coordinates, frame height and frame width of the standard wearing-object region frame;
the long-short memory algorithm comprises a long memory structure and a short memory structure, the long memory structure recording the safety wearing condition data within a time period T1, and the short memory structure recording the safety wearing condition data within a time period T2, where T1 > T2 > 0.
4. The safety wearing condition identification method according to claim 3, characterized in that: when the duration of the video stream data is less than T1, the accumulated time t of the safety wearing condition data from step five within a T2 time period is recorded; when t ≥ αT2, the target is judged to be wearing the safety wearing object; when t < αT2, the target is judged not to be wearing it and an alarm is raised;
when video stream data exists but the duration of frames containing the target is less than T1, the accumulated time t of the safety wearing condition data from step five within a T2 time period is recorded; when t ≥ αT2, the target is judged to be wearing the safety wearing object; when t < αT2, the target is judged not to be wearing it and an alarm is raised;
when the duration of the video stream data is greater than or equal to T1, the accumulated time t of the safety wearing condition data from step five within a T1 time period is recorded; when t ≥ αT1, the target is judged to be wearing the safety wearing object; when t < αT1, the target is judged not to be wearing it and an alarm is raised.
5. The safety wearing condition identification method according to claim 4, characterized in that: step three specifically comprises performing k-means re-clustering on the plurality of pieces of standard wearing-object region frame information obtained in step two with K set to 9, to obtain the re-clustered grouped data.
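The K = 9 re-clustering in claim 5 corresponds to the standard YOLOv3 practice of clustering the labeled box dimensions into nine anchor sizes. A minimal pure-Python sketch is given below (names are my own; YOLOv3 conventionally uses a 1 − IoU distance, whereas plain Euclidean distance is used here for brevity):

```python
import random

def kmeans_boxes(boxes, k=9, iters=50, seed=0):
    """Cluster labeled (w, h) box sizes into k anchor groups (Lloyd's algorithm)."""
    rng = random.Random(seed)
    centers = rng.sample(boxes, k)  # initial centers drawn from the data
    for _ in range(iters):
        # Assign each box to its nearest center.
        groups = [[] for _ in range(k)]
        for w, h in boxes:
            i = min(range(k),
                    key=lambda j: (w - centers[j][0]) ** 2 + (h - centers[j][1]) ** 2)
            groups[i].append((w, h))
        # Recompute each center as the mean of its group (keep old center if empty).
        centers = [
            (sum(w for w, _ in g) / len(g), sum(h for _, h in g) / len(g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return sorted(centers)
```

The nine sorted (w, h) centers would then serve as the anchor boxes for the three YOLOv3 detection scales.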
6. The safety wearing condition identification method according to claim 5, characterized in that: step four specifically comprises training on the re-clustered grouped data obtained in step three and the plurality of pieces of standard wearing-object region frame information obtained in step two, according to training parameters, with the deep-learning neural network YOLOv3 algorithm under the Darknet framework to obtain the optimal model;
the initial learning rate is set to β and the maximum training round to B; the learning rate is set to γβ at round C and to δβ at round D, where B > D > C > 1000 and β, γ and δ are positive numbers;
when the average loss value output during training becomes constant, the current training round is taken as a starting point and the weights saved within a range of ±V rounds are tested to select the optimal model, where V > 0.
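The step schedule in claims 6 and 10 matches Darknet's piecewise-constant learning-rate decay. Using the concrete values from claim 10 as defaults (an illustrative sketch, not code from the patent):

```python
def learning_rate(round_i, beta=0.001, gamma=0.1, delta=0.01, C=12000, D=16000):
    """Piecewise-constant schedule: beta until round C, gamma*beta until round D,
    delta*beta thereafter (claim 10: beta=0.001, gamma=0.1, delta=0.01)."""
    if round_i < C:
        return beta
    if round_i < D:
        return gamma * beta
    return delta * beta
```

With these defaults the rate steps from 1e-3 to 1e-4 at round 12000 and to 1e-5 at round 16000, up to the maximum round B = 20000.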
7. The safety wearing condition identification method according to claim 6, characterized in that: the safety wearing objects are a safety suit and a safety helmet;
step five comprises the following steps:
step 5.1, parsing the video stream data of the target in the acquisition area to obtain multiple frame images;
step 5.2, inputting the frames in sequence into the optimal model of step four to obtain, for the safety suit, the safety helmet and the person target, the corresponding class, score and box, where class is the class information, score is the confidence of the recognized target, and box is the frame information (x, y, w, h) of the recognized target, with x the x-axis coordinate of the frame centre point, y the y-axis coordinate of the frame centre point, w the frame width and h the frame height;
step 5.3, comparing score with a confidence threshold θ; when score < θ, the detection is judged to be false and the corresponding box is deleted; when score ≥ θ, the detection is judged valid and the method proceeds to step 5.4, where 0.5 ≤ θ ≤ 0.8;
step 5.4, matching the box of the detected safety suit with the box of the person target to obtain the safety suit wearing condition data, and matching the box of the detected safety helmet with the box of the person target to obtain the safety helmet wearing condition data.
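The confidence filtering of step 5.3 amounts to a simple post-processing pass over the model outputs. A sketch under the detection format described in step 5.2 (the dict layout is my own assumption; θ = 0.65 per claim 10):

```python
def filter_detections(detections, theta=0.65):
    """Drop false detections whose confidence score falls below theta.

    Each detection is assumed to be a dict with keys 'class', 'score', and
    'box' = (x, y, w, h), where (x, y) is the box centre.
    """
    return [d for d in detections if d["score"] >= theta]
```

The surviving boxes are then passed to the matching step 5.4.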
8. The safety wearing condition identification method according to claim 7, characterized in that: matching the box of the detected safety suit with the box of the person target to obtain the safety suit wearing condition data specifically comprises comparing the intersection area S_(box_p)∩(box_c) of the person box and the safety suit box with the product η1·S_box_c of the area threshold η1 and the area S_box_c of the safety suit box; when S_(box_p)∩(box_c) ≥ η1·S_box_c, the safety suit wearing condition data is judged to exist; when S_(box_p)∩(box_c) < η1·S_box_c, it is judged not to exist, where 0.5 ≤ η1 ≤ 0.8.
9. The safety wearing condition identification method according to claim 8, characterized in that: matching the box of the detected safety helmet with the box of the person target to obtain the safety helmet wearing condition data specifically comprises expanding the height of the person box by λ times in the positive y-axis direction to obtain an expanded person box, and comparing the intersection area S_(box_p)∩(box_h) of the expanded person box and the safety helmet box with the product η2·S_box_h of the area threshold η2 and the area S_box_h of the safety helmet box; when S_(box_p)∩(box_h) ≥ η2·S_box_h, the safety helmet wearing condition data is judged to exist; when S_(box_p)∩(box_h) < η2·S_box_h, it is judged not to exist, where 0.5 ≤ η2 ≤ 0.8 and 0.2 ≤ λ ≤ 0.5.
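Claims 8 and 9 define the wearing test as the overlap of the gear box with the (possibly upward-expanded) person box, normalized by the gear box's own area. A sketch under those definitions, with boxes in centre format (x, y, w, h); the helper names and the assumption that positive y points upward are my own:

```python
def to_corners(box):
    """Convert centre-format (x, y, w, h) to (x1, y1, x2, y2)."""
    x, y, w, h = box
    return (x - w / 2, y - h / 2, x + w / 2, y + h / 2)

def intersection_area(a, b):
    """Overlap area of two centre-format boxes (0 when disjoint)."""
    ax1, ay1, ax2, ay2 = to_corners(a)
    bx1, by1, bx2, by2 = to_corners(b)
    iw = min(ax2, bx2) - max(ax1, bx1)
    ih = min(ay2, by2) - max(ay1, by1)
    return max(iw, 0) * max(ih, 0)

def wears_suit(person_box, suit_box, eta1=0.6):
    """Claim 8: the suit counts as worn when its overlap with the person box
    covers at least eta1 of the suit box's own area."""
    _, _, w, h = suit_box
    return intersection_area(person_box, suit_box) >= eta1 * w * h

def wears_helmet(person_box, helmet_box, eta2=0.6, lam=0.3):
    """Claim 9: the person box is first expanded by lam times its height in
    the positive y direction, then tested against the helmet box."""
    x, y, w, h = person_box
    expanded = (x, y + lam * h / 2, w, h * (1 + lam))  # grow on the +y side only
    _, _, hw, hh = helmet_box
    return intersection_area(expanded, helmet_box) >= eta2 * hw * hh
```

With the claim 10 values (η1 = η2 = 0.6, λ = 0.3), a helmet box sitting just above the person box is matched only thanks to the upward expansion.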
10. The safety wearing condition identification method according to claim 9, characterized in that: B is 20000, D is 16000, C is 12000, β is 0.001, γ is 0.1, δ is 0.01;
θ is 0.65;
T1 is 10 seconds, T2 is 5 seconds, α is 0.5;
η1 and η2 are both 0.6;
λ is 0.3;
V is 2000.
CN202011169611.0A 2020-10-27 2020-10-27 Safe wearing condition identification method Active CN112257620B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011169611.0A CN112257620B (en) 2020-10-27 2020-10-27 Safe wearing condition identification method


Publications (2)

Publication Number Publication Date
CN112257620A true CN112257620A (en) 2021-01-22
CN112257620B CN112257620B (en) 2021-10-26

Family

ID=74262532

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011169611.0A Active CN112257620B (en) 2020-10-27 2020-10-27 Safe wearing condition identification method

Country Status (1)

Country Link
CN (1) CN112257620B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102136076A (en) * 2011-03-14 2011-07-27 徐州中矿大华洋通信设备有限公司 Method for positioning and tracing underground personnel of coal mine based on safety helmet detection
CN108319926A (en) * 2018-02-12 2018-07-24 安徽金禾软件股份有限公司 A kind of the safety cap wearing detecting system and detection method of building-site
CN109101922A (en) * 2018-08-10 2018-12-28 广东电网有限责任公司 Operating personnel device, assay, device and electronic equipment
CN109255298A (en) * 2018-08-07 2019-01-22 南京工业大学 Safety helmet detection method and system in dynamic background
CN110287804A (en) * 2019-05-30 2019-09-27 广东电网有限责任公司 A kind of electric operating personnel's dressing recognition methods based on mobile video monitor
CN110399905A (en) * 2019-07-03 2019-11-01 常州大学 The detection and description method of safety cap wear condition in scene of constructing
CN110717466A (en) * 2019-10-15 2020-01-21 中国电建集团成都勘测设计研究院有限公司 Method for returning position of safety helmet based on face detection frame
CN110852283A (en) * 2019-11-14 2020-02-28 南京工程学院 Helmet wearing detection and tracking method based on improved YOLOv3
CN111091069A (en) * 2019-11-27 2020-05-01 云南电网有限责任公司电力科学研究院 Power grid target detection method and system guided by blind image quality evaluation
CN111160440A (en) * 2019-12-24 2020-05-15 广东省智能制造研究所 Helmet wearing detection method and device based on deep learning
CN111192426A (en) * 2020-01-14 2020-05-22 中兴飞流信息科技有限公司 Railway perimeter intrusion detection method based on anthropomorphic visual image analysis video cruising
CN111539276A (en) * 2020-04-14 2020-08-14 国家电网有限公司 Method for detecting safety helmet in real time in power scene
CN111598066A (en) * 2020-07-24 2020-08-28 之江实验室 Helmet wearing identification method based on cascade prediction
CN111639552A (en) * 2020-05-14 2020-09-08 上海闪马智能科技有限公司 Method and system for detecting wearing of safety helmet on construction site


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HAO WU et al.: "Automated visual helmet identification based on deep convolutional neural networks", Elsevier *
ZHANG ZHICHAO: "Research on Safety Helmet Wearing Detection Methods", China Masters' Theses Full-text Database, Engineering Science and Technology I *

Also Published As

Publication number Publication date
CN112257620B (en) 2021-10-26

Similar Documents

Publication Publication Date Title
CN109117827B (en) Video-based method for automatically identifying wearing state of work clothes and work cap and alarm system
CN111898514B (en) Multi-target visual supervision method based on target detection and action recognition
CN110263686A (en) A kind of construction site safety of image cap detection method based on deep learning
CN110044486A (en) Method, apparatus, the equipment of repetition of alarms are avoided for human body inspection and quarantine system
Li et al. Toward efficient safety helmet detection based on YoloV5 with hierarchical positive sample selection and box density filtering
CN110738127A (en) Helmet identification method based on unsupervised deep learning neural network algorithm
CN111539276B (en) Method for detecting safety helmet in real time in power scene
CN112396658A (en) Indoor personnel positioning method and positioning system based on video
CN111325133B (en) Image processing system based on artificial intelligent recognition
CN113807240A (en) Intelligent transformer substation personnel dressing monitoring method based on uncooperative face recognition
CN111062303A (en) Image processing method, system and computer storage medium
CN112434669A (en) Multi-information fusion human behavior detection method and system
CN111079694A (en) Counter assistant job function monitoring device and method
CN111476160A (en) Loss function optimization method, model training method, target detection method, and medium
CN113111771A (en) Method for identifying unsafe behaviors of power plant workers
CN112800975A (en) Behavior identification method in security check channel based on image processing
CN116259002A (en) Human body dangerous behavior analysis method based on video
CN113505770B (en) Method and system for detecting clothes and hair ornament abnormity in express industry and electronic equipment
CN112257620B (en) Safe wearing condition identification method
CN113487166A (en) Chemical fiber floating filament quality detection method and system based on convolutional neural network
KR100543706B1 (en) Vision-based humanbeing detection method and apparatus
CN112183532A (en) Safety helmet identification method based on weak supervision collaborative learning algorithm and storage medium
CN110705453A (en) Real-time fatigue driving detection method
CN114495150A (en) Human body tumbling detection method and system based on time sequence characteristics
CN115995093A (en) Safety helmet wearing identification method based on improved YOLOv5

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant