CN109670441A - Method, system, terminal and computer-readable storage medium for recognizing safety-helmet wearing - Google Patents
Method, system, terminal and computer-readable storage medium for recognizing safety-helmet wearing
- Publication number
- CN109670441A (application CN201811538958.0A)
- Authority
- CN
- China
- Prior art keywords
- human body
- safety cap
- wearing
- cap
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
Abstract
The invention discloses a method, system, terminal and computer-readable storage medium for recognizing whether a safety helmet is being worn, in the technical field of applied computer vision. Deep learning is applied to recognize the human bodies and safety helmets in a detection picture; from each recognized human body, its posture is obtained and used to determine the precise region in which the helmet should be worn, thereby avoiding the error that arises because the planar relations presented by an image differ from the actual spatial relations of the people in it. The recognition rate in a real production environment reaches 90%, and the recognition speed reaches up to 15 frames per second. The invention effectively reduces the difficulty of manual supervision in a production environment: once abnormal wearing behaviour is detected, supervisors are notified, and they need only take further management action based on the images and information fed back from the scene, without having to watch site conditions at every moment.
Description
Technical field
The present invention relates to the technical field of applied computer vision, and in particular to a method, system, terminal and computer-readable storage medium for recognizing safety-helmet wearing.
Background art
With the development of China's economy, construction sites are becoming more numerous and national requirements for the construction industry ever stricter; nevertheless, safety accidents still occur from time to time. These not only cause serious economic losses to enterprises, but also bring disaster to the victims' families and exert a certain influence on social stability.
China's Work Safety Law stipulates that a safety helmet must be worn on entering a construction site, but whether workers really wear helmets while working is difficult to guarantee. Not wearing a helmet is a safety hazard that can cause great harm when an accident occurs, and such hazards are often difficult for safety managers to discover. If they can be found in time and early warning given, the harm caused by safety accidents will be reduced.
With the development of artificial intelligence, video surveillance is used ever more widely in all fields of society, especially in production safety, and the need to analyse intelligently from video whether helmets are worn is pressing. Helmet-wearing detection is a kind of object detection: object detection uses techniques such as deep learning, image processing and machine vision to identify specific targets. Because of illumination changes, occlusion, varying target sizes and other factors, detection is difficult and recognition performance tends to be poor. Object detection against complex backgrounds has been a hotspot of theoretical and applied research in recent years.
Traditional helmet-detection algorithms judge by the RGB components of the image; they are strongly affected by illumination and achieve low recognition rates in real production environments. Early object-detection algorithms learned shallow image features and, through carefully designed operations such as normalization and pooling, obtained good results when shape and illumination changes were small, but their computational load is heavy and their applicability narrow.
Summary of the invention
The technical problem to be solved by the present invention is to improve the recognition rate and shorten the recognition time against complex backgrounds, and to provide a detection method that can analyse video images in real time and determine whether the human bodies in an image are wearing safety helmets correctly.
To solve the above-mentioned problems, the present invention proposes the following technical scheme.
In a first aspect, the present invention proposes a detection method for safety-helmet-wearing recognition, comprising the following steps:
S1: obtaining a detection picture;
S2: judging whether the detection picture contains a human body and a safety helmet;
S3: if the detection picture contains a human body and a safety helmet, obtaining the posture of the human body and determining the correct wearing region according to the posture;
S4: judging whether the safety helmet is within the correct wearing region;
S5: if so, determining that the human body is wearing the helmet correctly; if not, executing an alarm operation.
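The decision flow of steps S1-S5 can be sketched as below; `detect_people`, `detect_helmets` and `wearing_region` are hypothetical stand-ins for the detection models and posture rule described later, and the `(x, y, w, h)` box layout is an assumption for illustration only.

```python
def inside(box, region):
    """True if box lies entirely within region; both are (x, y, w, h)."""
    bx, by, bw, bh = box
    rx, ry, rw, rh = region
    return rx <= bx and ry <= by and bx + bw <= rx + rw and by + bh <= ry + rh

def check_frame(frame, detect_people, detect_helmets, wearing_region):
    """Steps S2-S5 for one detection picture: returns (ok, violators)."""
    people = detect_people(frame)    # S2: person boxes
    helmets = detect_helmets(frame)  # S2: helmet boxes
    violators = []
    for person in people:
        region = wearing_region(person)  # S3: posture-dependent region
        if not any(inside(h, region) for h in helmets):  # S4
            violators.append(person)     # S5: alarm path
    return len(violators) == 0, violators
```

A frame passes only when every detected person has some helmet box inside their correct-wearing region, which matches the multi-person rule of the further solutions below.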
In a further technical solution, step S1 comprises:
parsing the live video stream to obtain pictures to be analysed;
obtaining the optical-flow component value of each picture to be analysed;
if the optical-flow component value of a picture to be analysed is greater than a first preset threshold, selecting that picture as the detection picture.
In a further technical solution, step S2 comprises:
nominating, on the convolutional feature layer of the detection picture, candidate regions that may contain a human body and/or a safety helmet, using k different rectangular boxes, where k is a positive integer;
extracting human-body and helmet features from the candidate regions;
classifying and regressing the features with a human-body feature model to identify whether the detection picture contains a human body;
if so, applying border-position regression to the human body to obtain a human-body box;
classifying and regressing the features with a helmet feature model to identify whether the detection picture contains a safety helmet;
if so, applying border-position regression to the helmet to obtain a helmet box.
In a further technical solution, before step S2 the method further comprises:
collecting, classifying, labelling and training on helmet and non-helmet image samples to obtain the helmet feature model;
collecting, classifying, labelling and training on human-body and non-human-body image samples to obtain the human-body feature model.
In a further technical solution, determining the correct wearing region according to the posture of the human body comprises:
obtaining the aspect ratio α (height to width) of the human-body box;
if α is greater than a second preset threshold, determining that the posture of the human body is standing;
if α is less than the second preset threshold, determining that the posture is squatting.
In a further technical solution, if the posture of the human body is standing, the correct wearing region is determined to be a first wearing region: a rectangular region of a first preset height by a first preset width at the top of the human-body box. If the posture is squatting, the correct wearing region is a second wearing region: a rectangular region of a second preset height by a second preset width at the top of the human-body box.
In a further technical solution, step S4 comprises:
taking as the target helmet a detected helmet whose box overlaps the human-body box, with the overlap at the head of the human body;
if the posture of the human body is standing, judging whether the helmet box of the target helmet lies within the first wearing region; if so, determining that the target helmet is within the correct wearing region;
if the posture is squatting, judging whether the helmet box lies within the second wearing region; if so, determining that the target helmet is within the correct wearing region.
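The target-helmet selection above can be sketched as follows; the `head_fraction` parameter, the `(x, y, w, h)` box layout and the choice of the largest-overlap candidate are illustrative assumptions, not values from the patent.

```python
def overlap_area(a, b):
    """Intersection area of two (x, y, w, h) boxes; 0 if disjoint."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    return max(0.0, w) * max(0.0, h)

def target_helmet(person, helmets, head_fraction=0.25):
    """Pick the helmet box overlapping the head part of the person box."""
    px, py, pw, ph = person
    head = (px, py, pw, ph * head_fraction)  # top of the box, y grows downward
    candidates = [h for h in helmets if overlap_area(h, head) > 0]
    return max(candidates, key=lambda h: overlap_area(h, head)) if candidates else None
```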
In a further technical solution, the method further comprises:
if the detection picture contains multiple human bodies, repeating steps S3-S5 to judge whether every human body in the picture is correctly wearing a helmet;
if all human bodies are wearing helmets correctly, determining that the detection picture contains no violation;
if any human body is not wearing a helmet correctly, determining that the detection picture contains a violation, and executing an alarm operation.
In a second aspect, the present invention proposes a detection system for safety-helmet-wearing recognition, comprising units for executing the method of the first aspect.
In a third aspect, an embodiment of the invention provides a terminal comprising a processor, an input device, an output device and a memory, interconnected with one another, wherein the memory stores application program code supporting the terminal in executing the above method, and the processor is configured to execute the method of the first aspect.
In a fourth aspect, an embodiment of the invention provides a computer-readable storage medium storing a computer program comprising program instructions which, when executed by a processor, cause the processor to execute the method of the first aspect.
Compared with the prior art, the technical effects attainable by the present invention include the following. By applying deep-learning technology, the human bodies and safety helmets in the detection picture are recognized; from each recognized human body its posture is judged and used to determine the precise region in which a helmet should be worn, thereby avoiding the error that arises because the planar relations presented by an image differ from the actual spatial relations of the people. The recognition rate in a real production environment reaches 90%, and the recognition speed reaches up to 15 frames per second. The invention effectively reduces the difficulty of manual supervision in a production environment: once abnormal wearing behaviour is detected, supervisors are notified, and they need only take further management action based on the images and information fed back from the scene, without watching site conditions at every moment.
Detailed description of the invention
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those of ordinary skill in the art may obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of a detection method for safety-helmet-wearing recognition provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of a detection system for safety-helmet-wearing recognition provided by another embodiment of the present invention;
Fig. 3 is a schematic diagram of a terminal provided by another embodiment of the present invention;
Fig. 4 is a schematic flow chart of detection-model training provided by an embodiment of the present invention;
Fig. 5 is a schematic flow chart of the specific implementation of step S101;
Fig. 6 is a schematic flow chart of the specific implementation of step S102;
Fig. 7 is a schematic flow chart of the specific implementation of steps S103-S105.
Specific embodiment
The technical solutions in the embodiments are described clearly and completely below with reference to the drawings, in which similar reference numerals denote similar components. Obviously, the embodiments described below are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art without creative effort, based on these embodiments, fall within the scope of protection of the present invention.
It should be understood that, as used in this specification and the appended claims, the terms "comprises" and "comprising" indicate the presence of the described features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or sets thereof.
It should also be understood that the terminology used in the description of the embodiments is for describing particular embodiments only and is not intended to limit the embodiments of the invention. As used in the description and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms unless the context clearly indicates otherwise.
Embodiment
Referring to Fig. 1, a flow chart of a detection method for safety-helmet-wearing recognition provided by an embodiment of the present invention, the method comprises the following steps:
S101: obtaining a detection picture.
Referring to Fig. 5, in a specific implementation the quality of the detection picture determines the accuracy and efficiency of the subsequent recognition and analysis. To reduce the system's computational load and shorten detection time, this embodiment obtains the detection picture through the following detailed steps:
parsing the live video stream to obtain pictures to be analysed.
In general, a video stream contains three kinds of coded frames: intra-coded frames (I-frames), predictive-coded frames (P-frames) and bidirectionally coded frames (B-frames). An I-frame is compressed using only the spatial correlation within that frame; a P-frame is temporally predicted from a forward reference frame; a B-frame is temporally predicted from both forward and backward reference frames. Under normal conditions the I-frame has the lowest compression ratio and slightly better picture quality, and it is the basis of the P- and B-frames, so I-frames should be chosen preferentially for analysis, followed by the more informative P-frames.
Because the time between two adjacent I-frames is long, several P-frames between adjacent I-frames are also selected as pictures to be analysed, preventing key information from being missed.
obtaining the optical-flow component value of each picture to be analysed.
This example uses the variation of the pixels of an image sequence in the time domain and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame, and thereby computes the motion of objects between adjacent frames. An optical-flow field derives the motion speed and motion direction of every pixel in the image from a sequence of pictures. This embodiment obtains the optical-flow component value of the picture to be analysed with a sparse-optical-flow calculation, in which the optical-flow component value denotes the motion offset of the corner points in the x direction: the larger the value, the larger the x-direction motion offset and the larger the difference between the two frames.
if the optical-flow component value of a picture to be analysed is greater than the first preset threshold, selecting that picture as the detection picture.
In a specific implementation, the first preset threshold is set to 1.2. If the optical-flow component value of a picture to be analysed is greater than 1.2, the difference between this picture and the previous frame is judged to be large, and to avoid missed detections the picture should be selected as a detection picture for the next analysis step.
It should be noted that those skilled in the art may also determine the difference between a picture to be analysed and the previous frame from the motion offset in the y direction, and filter the pictures to be analysed accordingly, balancing detection accuracy against detection efficiency.
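The frame-selection rule can be sketched as follows. In practice corner points would be tracked between consecutive frames with, for example, a pyramidal Lucas-Kanade tracker; here the matched corner coordinates are taken as given, and aggregating the per-corner x offsets by their mean absolute value is an assumption about how the single optical-flow component value is formed. The 1.2 threshold is the embodiment's value.

```python
import numpy as np

def select_for_detection(prev_pts, curr_pts, threshold=1.2):
    """prev_pts, curr_pts: (N, 2) arrays of matched corner coordinates
    from consecutive frames. Returns (selected, flow_x)."""
    dx = np.abs(curr_pts[:, 0] - prev_pts[:, 0])  # x-direction motion offsets
    flow_x = float(dx.mean())                     # optical-flow component value
    return flow_x > threshold, flow_x
```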
S102: judging whether the detection picture contains a human body and a safety helmet.
Referring to Fig. 4, in a specific implementation the human-body feature model and the helmet feature model must first be trained: helmet and non-helmet image samples are collected, classified, labelled and trained to obtain the helmet feature model, and human-body and non-human-body image samples are collected, classified, labelled and trained to obtain the human-body feature model.
A large number of human-body and non-human-body image samples, and helmet and non-helmet image samples, are collected and labelled with dedicated tools, and then divided into training samples, used to train the model parameters, and test samples, used to test the model's performance. This example selects 20,000 human-body image samples and 20,000 non-human-body image samples, together with 8,000 helmet image samples and 8,000 non-helmet image samples, labels regions and classes with the ImgLabel tool, and splits them into a training set and a test set in a 7:3 ratio.
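The 7:3 split can be sketched as below; the shuffle-then-cut approach and the fixed seed are assumptions for reproducibility, not details from the patent.

```python
import random

def split_samples(samples, train_ratio=0.7, seed=0):
    """Shuffle labelled samples and split them into training and test sets."""
    rng = random.Random(seed)     # fixed seed so the split is reproducible
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```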
Referring to Fig. 6, candidate regions that may contain a human body and/or a safety helmet are nominated on the convolutional feature layer of the detection picture using k different rectangular boxes, where k is a positive integer.
In a specific implementation, since a target may appear at any position in the image and its size and aspect ratio are uncertain, the original strategy was to traverse the entire image with sliding windows at different scales and aspect ratios. Although this exhaustive strategy covers all positions where a target might appear, its time complexity is high and it produces far too many redundant windows, seriously affecting the speed and performance of subsequent feature extraction and classification.
This embodiment instead makes nominations on the final convolutional feature layer of the picture using k different rectangular boxes (anchor boxes). Experiments show that k = 9 is the best compromise between time efficiency and detection accuracy, guaranteeing both real-time detection and accurate detection. When k is less than 9, detection speed improves but accuracy declines, some human bodies are not detected, and misses easily occur; when k is greater than 9, accuracy improves but detection speed drops.
Human-body and helmet features are then extracted from the candidate regions.
In a specific implementation, the quality of feature extraction directly affects the speed and accuracy of the next recognition step. The image features usually extracted mainly include the texture features, edge features and motion features of the image. (1) Texture features mainly include the grey-level histogram, edge-orientation histogram and grey-level co-occurrence matrix of the image. (2) Edge features directly reflect the contour of the image and mainly include its perimeter, area, aspect ratio (major-axis rate), dispersion and compactness. (3) Motion features relate to motion behaviour and generally include the motion centroid, speed, displacement and gradient.
In a specific implementation, the whole picture is fed in and its features are obtained automatically with the Darknet51 network.
The features are classified and regressed with the human-body feature model to identify whether the detection picture contains a human body; if so, border-position regression is applied to the human body to obtain the human-body box. The features are likewise classified and regressed with the helmet feature model to identify whether the picture contains a safety helmet; if so, border-position regression is applied to the helmet to obtain the helmet box.
In a specific implementation, a binary classification on the feature map obtained after feature extraction judges whether it contains a human body and/or a helmet; k regression models (one per anchor box) then adjust the positions and sizes of the candidate boxes, and finally human-body and helmet classification is performed.
This example further processes the model's box predictions for each cell of the feature map (relative centre coordinates and relative box width and height) into actual values: the predicted centre coordinates are normalized with a sigmoid function and added to the offset of the current cell relative to the top-left corner of the picture to give the actual coordinates; the predicted sizes are transformed with the exp function and multiplied by the prior box width and height of the current anchor box to give the actual width and height. The prior box width and height corresponding to each anchor box are obtained by clustering the data set.
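The box decoding described above can be sketched as follows, in the style of YOLO-family detectors; the argument layout and units (cells versus pixels, with `stride` mapping between them) are assumptions for illustration.

```python
import math

def decode_box(tx, ty, tw, th, cx, cy, pw, ph, stride):
    """Decode one raw prediction (tx, ty, tw, th) at feature cell (cx, cy)
    with anchor prior (pw, ph) into a pixel-space centre and size."""
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    bx = (cx + sig(tx)) * stride  # sigmoid-normalized centre + cell offset
    by = (cy + sig(ty)) * stride
    bw = pw * math.exp(tw)        # exp-scaled prior width
    bh = ph * math.exp(th)        # exp-scaled prior height
    return bx, by, bw, bh
```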
This embodiment judges whether each region contains a target of a particular class with a detection and classification model on a GPU: the detection model is Darknet, and the classifier is a YOLOv3 model based on the focal-loss function. Post-processing by border-position regression on the detected human bodies and helmets then yields the final human-body and helmet boxes.
This example shares the convolution features among region nomination, classification and regression, which improves the speed of human detection while guaranteeing its accuracy. Other object-detection algorithms, such as R-CNN, SPP-Net, SSD and YOLO, can also be used in this example.
S103: if the detection picture contains a human body and a safety helmet, obtaining the posture of the human body and determining the correct wearing region according to the posture.
In a specific implementation, after a human body and a helmet are detected, it is further judged whether the human body is wearing the helmet correctly. Because of the shooting angle, a human body near the camera and a human body far from it differ in size, and likewise for helmets; the embodiment of the present invention therefore uses the posture of the human body to simulate the data of a correctly worn helmet and judge whether the helmet is worn correctly. The specific steps are as follows:
obtaining the aspect ratio α (height to width) of the human-body box;
if α is greater than a second preset threshold, determining that the posture of the human body is standing;
if α is less than the second preset threshold, determining that the posture is squatting.
In a specific implementation, the posture of the human body is judged from the aspect ratio of the human-body box. This embodiment sets the second preset threshold to 1.8: if α is greater than 1.8 the posture is judged to be standing, and if α is less than 1.8 it is judged to be squatting. Those skilled in the art may also set the value of the second preset threshold themselves according to the shooting angle of the picture or other factors; the present invention does not specifically limit this.
If the posture is standing, the correct wearing region is determined to be the first wearing region, a rectangular region of the first preset height by the first preset width at the top of the human-body box; if the posture is squatting, the correct wearing region is the second wearing region, a rectangular region of the second preset height by the second preset width at the top of the human-body box.
In a particular embodiment, the width of the human-body box is obtained as A and its height as B. If B/A > 1.8, the posture of the corresponding human body is judged to be standing, and the correct wearing region is the rectangular region of the first preset height by the first preset width at the top of the human-body box, where the first preset height is B*0.2 and the first preset width is A*1.2. If B/A < 1.8, the posture is judged to be squatting, and the correct wearing region is the rectangular region of the second preset height by the second preset width at the top of the human-body box, where the second preset height is B*0.3 and the second preset width is A*1.1.
It should be noted that the embodiment of the present invention simulates, for the standing and squatting postures of the human body, the region occupied when the safety helmet is correctly worn. When the helmet is correctly worn, the helmet shell may extend beyond the person frame, which is particularly evident when the person leans to one side. Therefore, by setting the first wearing region to the first preset height by the first preset width and the second wearing region to the second preset height by the second preset width, the region for a correctly worn helmet is delimited. If the values of the first preset height, first preset width, second preset height, or second preset width are too small, the judgment condition becomes too strict, and a leaning or squatting person is easily misjudged as not wearing the helmet correctly; if the values are too large, a distant person may be matched with a nearby helmet (an oversized helmet on an undersized person). Those skilled in the art may also determine the ranges of the first and second wearing regions according to the shooting angle of the picture or other factors; the present invention does not specifically limit them.
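With the numeric values of this embodiment, the correct wearing region can be sketched as below. Centering the region horizontally on the human frame is an assumption on my part (the patent fixes only the region's size and that it sits at the top of the frame), and all names are illustrative.

```python
def correct_wearing_region(x, y, w, h, posture):
    """Return (x0, y0, x1, y1) of the correct wearing region at the top
    of a human frame with top-left corner (x, y), width w, height h.

    Sizes follow the embodiment: standing -> 0.2*h by 1.2*w; squatting
    -> 0.3*h by 1.1*w. Horizontal centering on the frame is an assumed
    convention, not stated in the patent.
    """
    if posture == "standing":
        region_h, region_w = 0.2 * h, 1.2 * w
    else:  # squatting
        region_h, region_w = 0.3 * h, 1.1 * w
    cx = x + w / 2.0  # horizontal center of the human frame
    return (cx - region_w / 2.0, y, cx + region_w / 2.0, y + region_h)
```

Note that the region is deliberately wider than the human frame itself (factors 1.2 and 1.1), matching the remark above that the helmet shell may extend beyond the person frame.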
S104: judge whether the safety helmet is in the correct wearing region.
In a specific implementation, a safety helmet whose frame has an overlapping region with the human body frame, the overlapping region being at the head of the human body, is taken as the targeted safety helmet.
If the posture of the human body is standing, judge whether the helmet frame of the targeted safety helmet is located in the first wearing region; if so, determine that the targeted safety helmet is in the correct wearing region.
If the posture of the human body is squatting, judge whether the helmet frame of the targeted safety helmet is located in the second wearing region; if so, determine that the targeted safety helmet is in the correct wearing region.
In a specific implementation, after the targeted safety helmet is obtained, the judgment proceeds according to the posture of the human body. If the posture is judged as standing, first judge whether the targeted safety helmet is within the first preset height at the top of the human body frame; if so, judge whether it is within the first preset width at the top of the frame. If both hold, the targeted safety helmet is determined to be in the correct wearing region, the human body is marked as wearing, and the corresponding targeted safety helmet is marked as worn. If either judgment fails, the targeted safety helmet is judged not to be in the correct wearing region, and the human body is determined not to be wearing the safety helmet correctly.
If the posture is judged as squatting, judge whether the targeted safety helmet is within the second preset height at the top of the human body frame; if so, judge whether it is within the second preset width at the top of the frame. If both hold, the targeted safety helmet is determined to be in the correct wearing region, the human body is marked as wearing, and the corresponding targeted safety helmet is marked as worn. If either judgment fails, the targeted safety helmet is determined not to be in the correct wearing region.
It should be noted that those skilled in the art may also adjust, as needed, the order of the specific judgments of whether the targeted safety helmet is located in the correct wearing region.
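The check of step S104 can be sketched as two box tests: the helmet must overlap the human frame (making it the targeted safety helmet) and its frame must lie within the correct wearing region. Reading "located in the region" as full containment of the helmet box is my interpretation; the names below are illustrative.

```python
def boxes_overlap(a, b):
    """True if axis-aligned boxes a and b, each (x0, y0, x1, y1), intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def correctly_worn(helmet_box, human_box, region):
    """Sketch of step S104: the helmet frame must overlap the human frame
    (targeted safety helmet) and lie inside the correct wearing region.
    Full containment is one reading of 'located in the region'."""
    if not boxes_overlap(helmet_box, human_box):
        return False  # not the targeted helmet for this person
    hx0, hy0, hx1, hy1 = helmet_box
    rx0, ry0, rx1, ry1 = region
    return rx0 <= hx0 and ry0 <= hy0 and hx1 <= rx1 and hy1 <= ry1
```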
S105: if so, determine that the human body is correctly wearing the safety helmet; if not, execute an alarm operation.
In a specific implementation, if the safety helmet is in the correct wearing region, it is determined that the human body is correctly wearing the safety helmet and the detection picture contains no violation; if the safety helmet is not in the correct wearing region, it is determined that the human body is not correctly wearing the safety helmet, the detection picture contains a violation, and an alarm operation is executed.
Referring to Fig. 7, in another embodiment, if there are multiple human bodies in the detection picture, steps S103-S105 are repeated to judge whether all human bodies in the detection picture are correctly wearing safety helmets.
If all human bodies are correctly wearing safety helmets, it is determined that the detection picture contains no violation.
If any human body is not correctly wearing a safety helmet, it is determined that the detection picture contains a violation, and an alarm operation is executed.
As shown in Fig. 7, a specific embodiment of steps S103-S105 for judging whether a human body is correctly wearing a safety helmet comprises the following steps:
STEP1: obtain the width-to-height ratio α of the human body frame.
STEP2: judge the posture of the human body according to the value of α and set the correct wearing region of the safety helmet.
STEP3: traverse all safety helmets and judge whether a helmet overlaps the region of the person. If so, go to STEP4; otherwise continue to judge the next unworn safety helmet.
STEP4: if the posture is standing, judge whether the safety helmet is within the first preset height at the top of the person frame; if so, go to STEP5, otherwise return to STEP3 and judge the next unworn safety helmet. If the posture is squatting, judge whether the safety helmet is within the second preset height at the top of the person frame; if so, go to STEP5, otherwise return to STEP3 and judge the next unworn safety helmet.
STEP5: if the posture is standing, judge whether the safety helmet is within the first preset width at the top of the person frame; if so, judge it as correctly worn, mark the person and the helmet as wearing and worn respectively, and record the matching information. If the posture is squatting, judge whether the safety helmet is within the second preset width at the top of the person frame; if so, judge it as correctly worn, mark the person and the helmet as wearing and worn respectively, and record the matching information.
STEP6: repeat STEP1-STEP5 until every person is marked.
STEP7: judge whether every person is marked as correctly wearing a safety helmet; if not, go to STEP8.
STEP8: run the matching of STEP1-STEP5 between the persons not correctly wearing helmets and the helmets already marked as worn; for each successfully matched helmet, run the matching of STEP1-STEP5 again between its former wearer and the unworn helmets.
STEP9: if someone is still marked as not correctly wearing a safety helmet, the picture is considered to contain a violation; otherwise the picture is considered free of violations.
If the picture contains a violation, warning information is generated and sent to a terminal; if there is no violation, no warning signal is generated.
It should be noted that in certain embodiments the steps STEP4 and STEP5, which judge whether the targeted safety helmet is in the correct wearing region, are interchangeable.
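The greedy person-to-helmet matching of STEP1-STEP6 can be sketched as below. The predicate `in_region` stands in for the overlap and height/width checks of STEP3-STEP5 and is a hypothetical caller-supplied function; the re-matching pass of STEP8 is omitted for brevity.

```python
def match_helmets(people, helmets, in_region):
    """Greedy matching of STEP1-STEP6: for each person, find the first
    unworn helmet judged to be in that person's correct wearing region.

    `in_region(person, helmet)` is a caller-supplied predicate covering
    STEP3-STEP5 (overlap plus height/width checks). Returns (worn_by,
    unmatched), where worn_by maps person index -> helmet index.
    """
    worn_by = {}
    used = set()  # indices of helmets already marked as worn
    for pi, person in enumerate(people):
        for hi, helmet in enumerate(helmets):
            if hi in used:
                continue  # STEP3: only unworn helmets are candidates
            if in_region(person, helmet):
                worn_by[pi] = hi  # STEP5: mark both and record the match
                used.add(hi)
                break
    unmatched = [pi for pi in range(len(people)) if pi not in worn_by]
    return worn_by, unmatched
```

Any person left in `unmatched` after this pass would be handed to the re-matching of STEP8, and a non-empty `unmatched` at STEP9 means the picture contains a violation.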
Referring to Fig. 2, there is shown a schematic block diagram of a human-detection-based area precaution system provided by an embodiment of the present invention. As shown, the system 200 in this embodiment may include an acquiring unit 201, a first judging unit 202, a determination unit 203, a second judgment unit 204, and an alarm unit 205, wherein:
the acquiring unit 201 is used for obtaining a detection picture;
the first judging unit 202 is used for judging whether a human body and a safety helmet exist in the detection picture;
the determination unit 203 is used for obtaining the posture of the human body if the detection picture contains a human body and a safety helmet, and determining the correct wearing region according to the posture of the human body;
the second judgment unit 204 is used for judging whether the safety helmet is in the correct wearing region;
the alarm unit 205 is used for determining that the human body is correctly wearing the safety helmet if the safety helmet is in the correct wearing region, and executing an alarm operation if the safety helmet is not in the correct wearing region.
In one embodiment, the acquiring unit 201 is further used to parse a live video stream to obtain a picture to be analyzed, obtain the optical-flow component value of the picture to be analyzed, and, if the optical-flow component value is greater than a first preset threshold, select the picture to be analyzed as the detection picture.
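The frame-selection logic of the acquiring unit can be sketched as follows. Here the "optical-flow component value" is taken to be the mean magnitude of a dense flow field between consecutive frames, which is one plausible reading; `flow_fn` is a hypothetical caller-supplied flow routine (for example OpenCV's `cv2.calcOpticalFlowFarneback`), not something the patent specifies.

```python
import numpy as np

def mean_flow_magnitude(flow):
    """Mean magnitude of a dense optical-flow field of shape (H, W, 2)."""
    flow = np.asarray(flow, dtype=float)
    return float(np.mean(np.hypot(flow[..., 0], flow[..., 1])))

def select_detection_frames(frames, flow_fn, threshold):
    """Keep only frames whose optical-flow component value (here the mean
    flow magnitude against the previous frame) exceeds the first preset
    threshold, as in the frame-selection step of the acquiring unit.

    `flow_fn(prev, cur)` must return a dense flow field; it is a
    placeholder for whatever optical-flow method is used.
    """
    selected = []
    for prev, cur in zip(frames, frames[1:]):
        if mean_flow_magnitude(flow_fn(prev, cur)) > threshold:
            selected.append(cur)
    return selected
```

Skipping low-motion frames this way avoids running the comparatively expensive detection network on a static scene.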
In one embodiment, the system 200 further includes a model training unit 206, which is used to collect safety helmet image samples and non-helmet image samples and classify, label, and train them to obtain a safety helmet feature model, and to collect human body image samples and non-human image samples and classify, label, and train them to obtain a human body feature model.
In one embodiment, the determination unit 203 is specifically used for obtaining the width-to-height ratio α of the human body frame; if the value of α is greater than a second preset threshold, determining that the posture of the human body is standing; if the value of α is less than the second preset threshold, determining that the posture of the human body is squatting; if the posture of the human body is standing, determining that the correct wearing region is the first wearing region, which is a rectangular region of the first preset height by the first preset width at the top of the human body frame; and, if the posture of the human body is squatting, determining that the correct wearing region is the second wearing region, which is a rectangular region of the second preset height by the second preset width at the top of the human body frame.
In one embodiment, the first judging unit 202 is specifically used to nominate k different rectangular boxes on a convolutional feature layer of the detection picture, where k is a positive integer, and filter out candidate regions that may contain a human body and/or a safety helmet; perform feature extraction of the human body and the safety helmet respectively on the candidate regions; classify and regress the features using the human body feature model to identify whether the detection picture contains a human body; if a human body is identified, perform frame-position regression processing on the human body to obtain a human body frame; classify and regress the features using the safety helmet feature model to identify whether the detection picture contains a safety helmet; and, if a safety helmet is identified, perform frame-position regression processing on the safety helmet to obtain a safety helmet frame.
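The "k different rectangular boxes nominated on the convolutional feature layer" corresponds to the anchor boxes used by region-proposal detectors. A generic sketch of such anchor generation is given below; the grid layout, sizes, and ratios are illustrative assumptions, not values from the patent.

```python
import numpy as np

def make_anchors(feat_h, feat_w, stride, sizes, ratios):
    """Generate k = len(sizes) * len(ratios) candidate rectangular boxes
    per feature-map cell, the usual way region-proposal detectors nominate
    candidate regions on a convolutional feature layer."""
    anchors = []
    for i in range(feat_h):
        for j in range(feat_w):
            # center of this feature-map cell, in input-image pixels
            cy, cx = (i + 0.5) * stride, (j + 0.5) * stride
            for size in sizes:
                for ratio in ratios:  # ratio = height / width
                    h = size * ratio ** 0.5
                    w = size / ratio ** 0.5
                    anchors.append((cx - w / 2, cy - h / 2,
                                    cx + w / 2, cy + h / 2))
    return np.array(anchors)
```

Each anchor would then be scored and regressed against the human body and safety helmet feature models to produce the human body frame and safety helmet frame used in steps S103-S104.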
In one embodiment, the second judgment unit 204 is used for taking as the targeted safety helmet a safety helmet whose frame has an overlapping region with the human body frame, the overlapping region being at the head of the human body; if the posture of the human body is standing, judging whether the helmet frame of the targeted safety helmet is located in the first wearing region, and if so, determining that the targeted safety helmet is in the correct wearing region; and, if the posture of the human body is squatting, judging whether the helmet frame of the targeted safety helmet is located in the second wearing region, and if so, determining that the targeted safety helmet is in the correct wearing region.
Referring to Fig. 3, there is shown a schematic block diagram of a terminal 300 provided by another embodiment of the present invention. As shown, the terminal 300 in this embodiment may include one or more processors 301, one or more input devices 302, one or more output devices 303, and a memory 304. The processor 301, input device 302, output device 303, and memory 304 are connected by a bus 305. The memory 304 is used for storing instructions, and the processor 301 is used to execute the instructions stored in the memory 304. The processor 301 is used to execute: obtaining a detection picture; judging whether a human body and a safety helmet exist in the detection picture; if they do, obtaining the posture of the human body and determining the correct wearing region according to the posture of the human body; judging whether the safety helmet is in the correct wearing region; and, if so, determining that the human body is correctly wearing the safety helmet, and if not, executing an alarm operation.
In one embodiment, the processor 301 is further used to execute: parsing a live video stream to obtain a picture to be analyzed; obtaining the optical-flow component value of the picture to be analyzed; and, if the optical-flow component value of the picture to be analyzed is greater than a first preset threshold, selecting the picture to be analyzed as the detection picture.
In one embodiment, the processor 301 is further used to execute: collecting sample pictures containing human bodies, labeling the sample pictures, and training to obtain the parameters of a human body classifier model.
In one embodiment, the processor 301 is further used to execute: collecting safety helmet image samples and non-helmet image samples, and classifying, labeling, and training them to obtain a safety helmet feature model; collecting human body image samples and non-human image samples, and classifying, labeling, and training them to obtain a human body feature model; nominating k different rectangular boxes on a convolutional feature layer of the detection picture, and filtering out candidate regions that may contain a human body and/or a safety helmet, where k is a positive integer; performing feature extraction of the human body and the safety helmet respectively on the candidate regions; classifying and regressing the features using the human body feature model to identify whether the detection picture contains a human body; if a human body is identified, performing frame-position regression processing on the human body to obtain a human body frame; classifying and regressing the features using the safety helmet feature model to identify whether the detection picture contains a safety helmet; and, if a safety helmet is identified, performing frame-position regression processing on the safety helmet to obtain a safety helmet frame.
In one embodiment, the processor 301 is further used to execute: obtaining the width-to-height ratio α of the human body frame; if the value of α is greater than a second preset threshold, determining that the posture of the human body is standing; if the value of α is less than the second preset threshold, determining that the posture of the human body is squatting; if the posture of the human body is standing, determining that the correct wearing region is the first wearing region, which is a rectangular region of the first preset height by the first preset width at the top of the human body frame; and, if the posture of the human body is squatting, determining that the correct wearing region is the second wearing region, which is a rectangular region of the second preset height by the second preset width at the top of the human body frame.
In one embodiment, the processor 301 is further used to execute: taking as the targeted safety helmet a safety helmet whose frame has an overlapping region with the human body frame, the overlapping region being at the head of the human body; if the posture of the human body is standing, judging whether the helmet frame of the targeted safety helmet is located in the first wearing region, and if so, determining that the targeted safety helmet is in the correct wearing region; and, if the posture of the human body is squatting, judging whether the helmet frame of the targeted safety helmet is located in the second wearing region, and if so, determining that the targeted safety helmet is in the correct wearing region.
In one embodiment, the processor 301 is further used to execute: if there are multiple human bodies in the detection picture, repeating steps S3-S5 to judge whether all human bodies in the detection picture are correctly wearing safety helmets; if all human bodies are correctly wearing safety helmets, determining that the detection picture contains no violation; and, if any human body is not correctly wearing a safety helmet, determining that the detection picture contains a violation and executing an alarm operation.
It should be appreciated that in embodiments of the present invention the processor 301 may be a central processing unit (Central Processing Unit, CPU); the processor may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input device 302 may include a trackpad, a fingerprint acquisition sensor (for acquiring fingerprint information and fingerprint orientation information of a user), a microphone, and the like; the output device 303 may include a display (such as an LCD), a loudspeaker, and the like.
The memory 304 may include a read-only memory and a random access memory, and provides instructions and data to the processor 301. A part of the memory 304 may also include a non-volatile random access memory; for example, the memory 304 may also store information on device types.
In a specific implementation, the processor 301, the input device 302, and the output device 303 described in the embodiments of the present invention may execute the implementations described in the embodiments of the parameter adjustment method provided by the embodiments of the present invention, and may also execute the implementation of the terminal 300 described in the embodiments of the present invention, which is not repeated here.
Another embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements: obtaining a detection picture; judging whether a human body and a safety helmet exist in the detection picture; if they do, determining the correct wearing region according to the posture of the human body; judging whether the safety helmet is in the correct wearing region; and, if so, determining that the human body is correctly wearing the safety helmet, and if not, executing an alarm operation.
The computer-readable storage medium may be an internal storage unit of the terminal described in any of the foregoing embodiments, such as a hard disk or memory of the terminal. The computer-readable storage medium may also be an external storage device of the terminal, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) equipped on the terminal. Further, the computer-readable storage medium may include both an internal storage unit of the terminal and an external storage device. The computer-readable storage medium is used to store the computer program and the other programs and data required by the terminal, and may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art may appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the terminal and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided by the present invention, it should be understood that the disclosed terminal and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division of the units is only a logical functional division, and there may be other divisions in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may also be electrical, mechanical, or other forms of connection.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the embodiments of the present invention.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
The above are specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed by the present invention, and such modifications or substitutions shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A detection method for safety helmet wearing recognition, characterized by comprising the following steps:
S1: obtaining a detection picture;
S2: judging whether a human body and a safety helmet exist in the detection picture;
S3: if a human body and a safety helmet exist in the detection picture, obtaining the posture of the human body and determining a correct wearing region according to the posture of the human body;
S4: judging whether the safety helmet is in the correct wearing region;
S5: if so, determining that the human body is correctly wearing the safety helmet, and if not, executing an alarm operation.
2. The detection method for safety helmet wearing recognition according to claim 1, characterized in that step S1 comprises:
parsing a live video stream to obtain a picture to be analyzed;
obtaining the optical-flow component value of the picture to be analyzed;
if the optical-flow component value of the picture to be analyzed is greater than a first preset threshold, selecting the picture to be analyzed as the detection picture.
3. The detection method for safety helmet wearing recognition according to claim 1, characterized in that step S2 comprises:
collecting safety helmet image samples and non-helmet image samples, and classifying, labeling, and training them to obtain a safety helmet feature model;
collecting human body image samples and non-human image samples, and classifying, labeling, and training them to obtain a human body feature model;
nominating k different rectangular boxes on a convolutional feature layer of the detection picture, and filtering out candidate regions that may contain a human body and/or a safety helmet, where k is a positive integer;
performing feature extraction of the human body and the safety helmet respectively on the candidate regions;
classifying and regressing the features using the human body feature model to identify whether the detection picture contains a human body;
if a human body is identified in the detection picture, performing frame-position regression processing on the human body to obtain a human body frame;
classifying and regressing the features using the safety helmet feature model to identify whether the detection picture contains a safety helmet;
if a safety helmet is identified in the detection picture, performing frame-position regression processing on the safety helmet to obtain a safety helmet frame.
4. The detection method for safety helmet wearing recognition according to claim 3, characterized in that obtaining the posture of the human body comprises:
obtaining the width-to-height ratio α of the human body frame;
if the value of α is greater than a second preset threshold, determining that the posture of the human body is standing;
if the value of α is less than the second preset threshold, determining that the posture of the human body is squatting.
5. The detection method for safety helmet wearing recognition according to claim 4, characterized in that determining the correct wearing region according to the posture of the human body comprises:
if the posture of the human body is standing, determining that the correct wearing region is a first wearing region, the first wearing region being a rectangular region of a first preset height by a first preset width at the top of the human body frame;
if the posture of the human body is squatting, determining that the correct wearing region is a second wearing region, the second wearing region being a rectangular region of a second preset height by a second preset width at the top of the human body frame.
6. The detection method for safety helmet wearing recognition according to claim 5, characterized in that step S4 comprises:
taking as the targeted safety helmet a safety helmet whose frame has an overlapping region with the human body frame, the overlapping region being at the head of the human body;
if the posture of the human body is standing, judging whether the helmet frame of the targeted safety helmet is located in the first wearing region;
if so, determining that the targeted safety helmet is in the correct wearing region;
if the posture of the human body is squatting, judging whether the helmet frame of the targeted safety helmet is located in the second wearing region;
if so, determining that the targeted safety helmet is in the correct wearing region.
7. The detection method for safety helmet wearing recognition according to claim 6, characterized by further comprising:
if there are multiple human bodies in the detection picture, repeating steps S3-S5 to judge whether all human bodies in the detection picture are correctly wearing safety helmets;
if all human bodies are correctly wearing safety helmets, determining that the detection picture contains no violation;
if any human body is not correctly wearing a safety helmet, determining that the detection picture contains a violation and executing an alarm operation.
8. A detection system for safety helmet wearing recognition, characterized by comprising units for executing the method according to any one of claims 1-7.
9. A terminal, comprising a processor, an input device, an output device, and a memory, the processor, input device, output device, and memory being connected to one another, characterized in that the memory is used for storing application program code supporting the terminal in executing the method according to any one of claims 1-7, and the processor is configured to execute the method according to any one of claims 1-7.
10. A computer-readable storage medium storing a computer program, the computer program comprising program instructions which, when executed by a processor, cause the processor to execute the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811538958.0A CN109670441B (en) | 2018-12-14 | 2018-12-14 | Method, system, terminal and computer readable storage medium for realizing wearing recognition of safety helmet |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811538958.0A CN109670441B (en) | 2018-12-14 | 2018-12-14 | Method, system, terminal and computer readable storage medium for realizing wearing recognition of safety helmet |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109670441A true CN109670441A (en) | 2019-04-23 |
CN109670441B CN109670441B (en) | 2024-02-06 |
Family
ID=66144377
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811538958.0A Active CN109670441B (en) | 2018-12-14 | 2018-12-14 | Method, system, terminal and computer readable storage medium for realizing wearing recognition of safety helmet |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109670441B (en) |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110188724A (en) * | 2019-06-05 | 2019-08-30 | 中冶赛迪重庆信息技术有限公司 | The method and system of safety cap positioning and color identification based on deep learning |
CN110263665A (en) * | 2019-05-29 | 2019-09-20 | 朗坤智慧科技股份有限公司 | Safety cap recognition methods and system based on deep learning |
CN110443976A (en) * | 2019-08-14 | 2019-11-12 | 深圳市沃特沃德股份有限公司 | Safety prompt function method, apparatus and storage medium based on safety cap |
CN110458075A (en) * | 2019-08-05 | 2019-11-15 | 北京泰豪信息科技有限公司 | Detection method, storage medium, detection device and the detection system that safety cap is worn |
CN110502965A (en) * | 2019-06-26 | 2019-11-26 | 哈尔滨工业大学 | A kind of construction safety helmet wearing monitoring method based on the estimation of computer vision human body attitude |
CN110619324A (en) * | 2019-11-25 | 2019-12-27 | 南京桂瑞得信息科技有限公司 | Pedestrian and safety helmet detection method, device and system |
CN111062429A (en) * | 2019-12-12 | 2020-04-24 | 上海点泽智能科技有限公司 | Chef cap and mask wearing detection method based on deep learning |
CN111199200A (en) * | 2019-12-27 | 2020-05-26 | 深圳供电局有限公司 | Wearing detection method and device based on electric protection equipment and computer equipment |
CN111275058A (en) * | 2020-02-21 | 2020-06-12 | 上海高重信息科技有限公司 | Safety helmet wearing and color identification method and device based on pedestrian re-identification |
CN112101288A (en) * | 2020-09-25 | 2020-12-18 | 北京百度网讯科技有限公司 | Method, device and equipment for detecting wearing of safety helmet and storage medium |
CN112257570A (en) * | 2020-10-20 | 2021-01-22 | 江苏濠汉信息技术有限公司 | Method and device for detecting whether safety helmet of constructor is not worn based on visual analysis |
CN112861751A (en) * | 2021-02-22 | 2021-05-28 | 中国中元国际工程有限公司 | Airport luggage room personnel management method and device |
CN112949354A (en) * | 2019-12-10 | 2021-06-11 | 顺丰科技有限公司 | Method and device for detecting wearing of safety helmet, electronic equipment and computer-readable storage medium |
CN113283296A (en) * | 2021-04-20 | 2021-08-20 | 晋城鸿智纳米光机电研究院有限公司 | Helmet wearing detection method, electronic device and storage medium |
CN113361347A (en) * | 2021-05-25 | 2021-09-07 | 东南大学成贤学院 | Job site safety detection method based on YOLO algorithm |
CN114283485A (en) * | 2022-03-04 | 2022-04-05 | 杭州格物智安科技有限公司 | Safety helmet wearing detection method and device, storage medium and safety helmet |
CN114332738A (en) * | 2022-01-18 | 2022-04-12 | 浙江高信技术股份有限公司 | Safety helmet detection system for intelligent construction site |
CN115150552A (en) * | 2022-06-23 | 2022-10-04 | 中国华能集团清洁能源技术研究院有限公司 | Constructor safety monitoring method, system and device based on deep learning self-adaption |
CN116824723A (en) * | 2023-08-29 | 2023-09-29 | 山东数升网络科技服务有限公司 | Intelligent security inspection system and method for miners' underground operations based on video data |
CN116958702A (en) * | 2023-08-01 | 2023-10-27 | 浙江钛比科技有限公司 | Hotel guard personnel wearing detection method and system based on edge artificial intelligence |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150131856A1 (en) * | 2013-11-14 | 2015-05-14 | Omron Corporation | Monitoring device and monitoring method |
CN106571014A (en) * | 2016-10-24 | 2017-04-19 | 上海伟赛智能科技有限公司 | Method for identifying abnormal motion in video and system thereof |
CN107103617A (en) * | 2017-03-27 | 2017-08-29 | 国机智能科技有限公司 | Method and system for recognizing safety helmet wearing state based on optical flow |
WO2017197308A1 (en) * | 2016-05-12 | 2017-11-16 | One Million Metrics Corp. | System and method for monitoring safety and productivity of physical tasks |
KR20180082856A (en) * | 2017-01-11 | 2018-07-19 | 금오공과대학교 산학협력단 | A safety helmet to prevent accidents and send information of the wearer in real time |
CN108319934A (en) * | 2018-03-20 | 2018-07-24 | 武汉倍特威视系统有限公司 | Safety helmet wearing state detection method based on video stream data |
CN108460358A (en) * | 2018-03-20 | 2018-08-28 | 武汉倍特威视系统有限公司 | Safety helmet recognition method based on video stream data |
CN108535683A (en) * | 2018-04-08 | 2018-09-14 | 安徽宏昌机电装备制造有限公司 | Intelligent safety helmet and localization method based on NB-IoT cellular Internet-of-Things technology |
CN108537256A (en) * | 2018-03-26 | 2018-09-14 | 北京智芯原动科技有限公司 | Safety helmet wearing recognition method and device |
CN108921004A (en) * | 2018-04-27 | 2018-11-30 | 淘然视界(杭州)科技有限公司 | Safety helmet wearing recognition method, electronic device, storage medium and system |
2018-12-14: Application CN201811538958.0A filed in China (CN); granted as patent CN109670441B, status Active
Non-Patent Citations (2)
Title |
---|
LIU Xinyu (刘鑫昱): "Research on Key Technologies of Pedestrian Re-identification for Surveillance Images", China Master's Theses Full-text Database, Information Science and Technology, 15 February 2018 (2018-02-15), pages 138 - 2459 *
LIU Xinyu (刘鑫昱): "Research on Key Technologies of Pedestrian Re-identification for Surveillance Images", China Master's Theses Full-text Database, Information Science and Technology, pages 138 - 2359 *
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110263665A (en) * | 2019-05-29 | 2019-09-20 | 朗坤智慧科技股份有限公司 | Safety helmet recognition method and system based on deep learning |
CN110188724B (en) * | 2019-06-05 | 2023-02-28 | 中冶赛迪信息技术(重庆)有限公司 | Method and system for helmet positioning and color recognition based on deep learning |
CN110188724A (en) * | 2019-06-05 | 2019-08-30 | 中冶赛迪重庆信息技术有限公司 | Method and system for safety helmet positioning and color recognition based on deep learning |
CN110502965A (en) * | 2019-06-26 | 2019-11-26 | 哈尔滨工业大学 | Construction safety helmet wearing monitoring method based on computer vision human body posture estimation |
CN110502965B (en) * | 2019-06-26 | 2022-05-17 | 哈尔滨工业大学 | Construction safety helmet wearing monitoring method based on computer vision human body posture estimation |
CN110458075B (en) * | 2019-08-05 | 2023-08-25 | 北京泰豪信息科技有限公司 | Method, storage medium, device and system for detecting wearing of safety helmet |
CN110458075A (en) * | 2019-08-05 | 2019-11-15 | 北京泰豪信息科技有限公司 | Method, storage medium, device and system for detecting wearing of safety helmet |
CN110443976A (en) * | 2019-08-14 | 2019-11-12 | 深圳市沃特沃德股份有限公司 | Safety reminding method, device and storage medium based on safety helmet |
CN110443976B (en) * | 2019-08-14 | 2021-05-28 | 深圳市沃特沃德股份有限公司 | Safety reminding method and device based on safety helmet and storage medium |
CN111914636A (en) * | 2019-11-25 | 2020-11-10 | 南京桂瑞得信息科技有限公司 | Method and device for detecting whether pedestrian wears safety helmet |
CN111914636B (en) * | 2019-11-25 | 2021-04-20 | 南京桂瑞得信息科技有限公司 | Method and device for detecting whether pedestrian wears safety helmet |
CN110619324A (en) * | 2019-11-25 | 2019-12-27 | 南京桂瑞得信息科技有限公司 | Pedestrian and safety helmet detection method, device and system |
CN112949354A (en) * | 2019-12-10 | 2021-06-11 | 顺丰科技有限公司 | Method and device for detecting wearing of safety helmet, electronic equipment and computer-readable storage medium |
CN111062429A (en) * | 2019-12-12 | 2020-04-24 | 上海点泽智能科技有限公司 | Chef cap and mask wearing detection method based on deep learning |
CN111199200A (en) * | 2019-12-27 | 2020-05-26 | 深圳供电局有限公司 | Wearing detection method and device based on electric protection equipment and computer equipment |
CN111275058A (en) * | 2020-02-21 | 2020-06-12 | 上海高重信息科技有限公司 | Safety helmet wearing and color identification method and device based on pedestrian re-identification |
CN112101288B (en) * | 2020-09-25 | 2024-02-13 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for detecting wearing of safety helmet |
CN112101288A (en) * | 2020-09-25 | 2020-12-18 | 北京百度网讯科技有限公司 | Method, device and equipment for detecting wearing of safety helmet and storage medium |
CN112257570A (en) * | 2020-10-20 | 2021-01-22 | 江苏濠汉信息技术有限公司 | Method and device for detecting whether safety helmet of constructor is not worn based on visual analysis |
CN112257570B (en) * | 2020-10-20 | 2021-07-27 | 江苏濠汉信息技术有限公司 | Method and device for detecting whether safety helmet of constructor is not worn based on visual analysis |
CN112861751A (en) * | 2021-02-22 | 2021-05-28 | 中国中元国际工程有限公司 | Airport luggage room personnel management method and device |
CN112861751B (en) * | 2021-02-22 | 2024-01-12 | 中国中元国际工程有限公司 | Airport luggage room personnel management method and device |
CN113283296A (en) * | 2021-04-20 | 2021-08-20 | 晋城鸿智纳米光机电研究院有限公司 | Helmet wearing detection method, electronic device and storage medium |
CN113361347A (en) * | 2021-05-25 | 2021-09-07 | 东南大学成贤学院 | Job site safety detection method based on YOLO algorithm |
CN114332738A (en) * | 2022-01-18 | 2022-04-12 | 浙江高信技术股份有限公司 | Safety helmet detection system for intelligent construction site |
CN114332738B (en) * | 2022-01-18 | 2023-08-04 | 浙江高信技术股份有限公司 | Safety helmet detection system for intelligent construction site |
CN114283485B (en) * | 2022-03-04 | 2022-10-14 | 杭州格物智安科技有限公司 | Safety helmet wearing detection method and device, storage medium and safety helmet |
CN114283485A (en) * | 2022-03-04 | 2022-04-05 | 杭州格物智安科技有限公司 | Safety helmet wearing detection method and device, storage medium and safety helmet |
CN115150552A (en) * | 2022-06-23 | 2022-10-04 | 中国华能集团清洁能源技术研究院有限公司 | Constructor safety monitoring method, system and device based on deep learning self-adaption |
CN116958702A (en) * | 2023-08-01 | 2023-10-27 | 浙江钛比科技有限公司 | Hotel guard personnel wearing detection method and system based on edge artificial intelligence |
CN116958702B (en) * | 2023-08-01 | 2024-05-24 | 浙江钛比科技有限公司 | Hotel guard personnel wearing detection method and system based on edge artificial intelligence |
CN116824723A (en) * | 2023-08-29 | 2023-09-29 | 山东数升网络科技服务有限公司 | Intelligent security inspection system and method for miners' underground operations based on video data |
Also Published As
Publication number | Publication date |
---|---|
CN109670441B (en) | 2024-02-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109670441A (en) | Method, system, terminal and computer readable storage medium for realizing wearing recognition of safety helmet | |
CN109508688B (en) | Skeleton-based behavior detection method, terminal equipment and computer storage medium | |
CN110210302B (en) | Multi-target tracking method, device, computer equipment and storage medium | |
CN105631439B (en) | Face image processing method and device | |
CN110188724A (en) | Method and system for safety helmet positioning and color recognition based on deep learning | |
CN108629791A (en) | Pedestrian tracting method and device and across camera pedestrian tracting method and device | |
CN110390229B (en) | Face picture screening method and device, electronic equipment and storage medium | |
CN108171158B (en) | Living body detection method, living body detection device, electronic apparatus, and storage medium | |
CN104166841A (en) | Rapid detection identification method for specified pedestrian or vehicle in video monitoring network | |
CN110298297A (en) | Flame identification method and device | |
CN109711322A (en) | Pedestrian-vehicle separation method based on RFCN | |
CN107194361A (en) | Two-dimentional pose detection method and device | |
CN111414858B (en) | Face recognition method, target image determining device and electronic system | |
CN113012383B (en) | Fire detection alarm method, related system, related equipment and storage medium | |
CN110674680B (en) | Living body identification method, living body identification device and storage medium | |
CN107316029A (en) | Liveness verification method and device | |
CN108875474A (en) | Assess the method, apparatus and computer storage medium of face recognition algorithms | |
CN112183472A (en) | Method for detecting whether test field personnel wear work clothes or not based on improved RetinaNet | |
CN108171135A (en) | Method for detecting human face, device and computer readable storage medium | |
CN112990057A (en) | Human body posture recognition method and device and electronic equipment | |
CN115223204A (en) | Method, device, equipment and storage medium for detecting illegal wearing of personnel | |
CN116863297A (en) | Monitoring method, device, system, equipment and medium based on electronic fence | |
CN115862113A (en) | Stranger abnormity identification method, device, equipment and storage medium | |
CN109583396A (en) | Region protection method, system and terminal based on CNN two-stage human body detection | |
CN102867214B (en) | Counting management method for people within area range |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||