CN106446926A - Transformer station worker helmet wear detection method based on video analysis - Google Patents
- Publication number
- CN106446926A CN106446926A CN201610546587.5A CN201610546587A CN106446926A CN 106446926 A CN106446926 A CN 106446926A CN 201610546587 A CN201610546587 A CN 201610546587A CN 106446926 A CN106446926 A CN 106446926A
- Authority
- CN
- China
- Prior art keywords
- classifier
- sample
- safety cap
- matrix
- positive
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2148—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
Abstract
The invention discloses a transformer station worker helmet wear detection method based on video analysis. Based on the substation scene, the method uses the ViBe algorithm to detect moving target regions and HSV color features to preliminarily locate the helmet region; Haar features and HSV color space features are selected and fused; positive and negative helmet samples from the field scene are collected and a classifier is trained with the Adaboost algorithm; the fused features of the located helmet region are extracted and fed to the trained helmet classifier for feature-matching detection, thereby achieving precise helmet identification and localization. Compared with traditional methods, the method offers high accuracy and good robustness: it can rapidly and accurately locate worker regions, determine whether personnel targets appearing in the camera region are wearing helmets, raise timely alarms on workers without helmets, and thus eliminate hidden safety risks.
Description
Technical field
This invention relates to the field of intelligent video surveillance.
Background technology
With the frequent occurrence of worker safety accidents at transformer station sites, detecting, tracking and alarming on operating personnel's safety helmets is of great significance for safe production in the power industry. An effective monitoring method is needed that promptly detects and raises an alarm when personnel at a substation site start work without wearing a safety helmet. As the power industry is the lifeline of economic development and the transformer station is an important link in power production, the country pays increasing attention to substation safety work, and investment in monitoring and control continues to grow. Most substation sites nationwide have adopted corresponding monitoring measures, which have effectively contained the occurrence of serious accidents.
At present, monitoring whether operating personnel at substation sites wear safety helmets relies mainly on video surveillance technology. Through the monitoring system, administrators in the control center can watch real-time scene images from the substation video surveillance site, intuitively monitor and record the on-site production safety situation, and provide effective data for post-accident analysis. Substation monitoring systems also make truly unattended substations possible, providing a powerful guarantee for staff reduction and efficiency, power grid visualization monitoring and scheduling, and making grid operation safer and more reliable. However, although current unattended substation monitoring systems do have monitoring and alarm functions, they focus mainly on sensor-based alarm analysis; video detection is mostly an auxiliary means. In use, monitoring personnel must continuously observe the scene pictures and make decisions by interpreting the video information they obtain. Staff in this working state for long periods easily suffer decreased attention and miss due warning information; when observing multiple monitors at the same time, comprehensive and complete monitoring is virtually impossible. If the monitoring staff leave their post for some reason while on-site operating personnel are working without safety helmets, the violation cannot be discovered in time and the monitoring loses its meaning.
Detection techniques specifically for safety helmets are currently few. It has been proposed to collect samples of substation-specific safety helmets and of personnel not wearing helmets, convert them to the HSV color space, compute the V-channel color histogram of each sample, and identify helmets by statistically comparing the histogram differences of positive and negative samples. The shortcoming of this method is that the color threshold used for helmet detection is fixed and cannot adapt; when the environment is poor or illumination is dark, color-based recognition produces many false alarms and works badly. It has also been proposed to collect positive and negative samples manually and identify helmets in video frames by extracting Haar features and fusing them with an SVM or Adaboost classifier, but with this method the choice of samples has a large impact on the final detection results.
Summary of the invention
The object of the invention is to provide a safety helmet detection method that fuses HSV color space features, Haar features and an Adaboost cascade classifier, raising timely alarms on abnormal moving targets and staff not wearing safety helmets at the substation site, thereby solving the problem of the currently low efficiency of helmet recognition.
The technical scheme adopted to achieve the object of the invention is a transformer station worker helmet wear detection method based on video analysis, characterised in that it comprises the following steps:
1) Prepare positive and negative samples:
Positive sample collection: collect helmet positive sample pictures and normalize them to a unified size of W × H. Negative sample collection: collect images of the actual scene.
2) Feature extraction:
2-1) Pre-process the collected helmet video images; pre-processing includes illumination correction, noise filtering, geometric normalization and size normalization.
2-2) Describe the helmet with the two-rectangle edge feature; the feature value of an edge feature is defined as the pixel sum of the white rectangle minus the pixel sum of the black rectangle.
2-3) Construct a detection window of size W × H. An edge feature of size 2 × 1 can slide W−1 steps horizontally and H steps vertically within the window, giving (W−1) × H rectangle features; for an edge feature of size 1 × 2 the feature number is H × (W−1). Both kinds of edge feature are also scaled along the horizontal and vertical directions: the 2 × 1 edge feature is enlarged horizontally to i × 1, i = 4, 6, 8, …, W, and vertically to 2 × j, j = 1, 2, 3, …, H, so each feature has X × Y scaling modes. For the new feature matrices, the number of rectangle features in the detection window is:

count = X·Y·(W + 1 − w·(X + 1)/2)·(H + 1 − h·(Y + 1)/2)

where W × H is the detection window size, w × h is the edge feature size, and X = ⌊W/w⌋ and Y = ⌊H/h⌋ are the maximum scaling coefficients of the rectangle feature in the horizontal and vertical directions respectively;
2-4) Use the integral image to calculate the feature values of the various rectangle features obtained from the two kinds of edge features, generating the Haar feature vector matrix [a1, a2, …, an] ∈ R^(m×n), where n is the Haar feature dimension;
2-5) Convert the positive and negative sample pictures from the RGB color space to the HSV color space, generating the 3-dimensional feature vector matrix [b1, b2, b3] ∈ R^(m×3) of the positive and negative sample sets;
2-6) Fuse the Haar features and HSV color features to generate the final feature vector matrix [a1, a2, …, an, b1, b2, b3] ∈ R^(m×n'), where n' = n + 3 is the feature dimension.
3) The Adaboost algorithm builds a multi-level cascaded screening classifier: candidate detection windows pass through the detectors in sequence, and helmet positive samples are finally separated from non-helmet negative samples. Each layer is a strong classifier, and each strong classifier is composed of several weak classifiers. The training steps of the helmet Adaboost detection classifier are therefore:
3-1) Taking the positive and negative sample sets collected in step 1) as input, calculate the feature vector matrix of the sample sets. Each feature vector in the feature vector matrix corresponds to a basic weak classifier; the optimal weak classifiers are selected with the Adaboost algorithm as follows:
A. Given the positive and negative training sample set of size N, let T be the number of training iterations;
B. Initialize each sample weight to 1/N, i.e. the initial probability distribution of the training samples;
C. In each round of iteration:
a) According to the current probability distribution of the training samples, train a weak classifier for each feature vector f;
b) Calculate the classification error rate of each basic weak classifier;
c) Select the weak classifier with the minimum classification error rate as the optimal weak classifier of this round;
D. Increase the weights of misclassified samples, i.e. update the sample weights, and continue iterating. After T rounds of iteration, T optimal weak classifiers are obtained, one optimal weak classifier Gt(x) per round;
1. Calculate the accumulated classification error et of the samples misclassified in round t over the iteration sample set, t = 1, 2, …, T:

et = Σ_{i: Gt(xi) ≠ yi} wt,i

2. Calculate the weight ratio αt of the optimal weak classifier Gt(x) of round t in the final strong classifier:

αt = (1/2)·ln((1 − et)/et)

3. After round t, update the training set weights to obtain the new sample probability distribution Dt+1 for the next round, where

Dt+1 = (wt+1,1, wt+1,2, …, wt+1,N)
wt+1,i = (wt,i / Zt)·exp(−αt·yi·Gt(xi))

where wt+1,i are the updated sample weights and Zt is the normalization factor:

Zt = Σ_{i=1}^{N} wt,i·exp(−αt·yi·Gt(xi))
E. Combine the T optimal weak classifiers into a strong classifier as follows:

G(x) = sign(Σ_{t=1}^{T} αt·Gt(x))

where G(x) is the strong classifier, αt represents the importance of Gt(x) in the final classifier, and Gt(x) is the t-th of the T basic optimal weak classifiers;
3-2) The final cascade classifier algorithm is as follows:
Step1: Set the number of cascade layers S, the per-layer minimum detection rate d and maximum false-positive rate f of each strong classifier, and the target false-positive rate F of the cascade classifier; the cascade detection rate is Di and the cascade false-positive rate is Fi, where i is the current layer of the cascade;
Step2: P = helmet training positive samples, N = helmet training negative samples;
Step3: Initialize the training layer i = 1, D1 = 1.0, F1 = 1.0;
Step4: Loop iteration: train the i-th layer strong classifier, containing ni final rectangle features, with the Adaboost algorithm; adjust the threshold of the i-th strong classifier so that the current layer's false-positive rate Fi is below f × Fi−1 and its detection rate Di is above d × Di−1, i = 1, 2, …, S; if the false-positive rate Fi is still greater than F, run the current classifier on the sample images and put the falsely detected negative samples into N;
Step5: If the false-positive rate Fi falls to F or below, or the number of cascade layers i reaches S, the iteration ends.
4) Operating personnel positioning:
4-1) Arrange several IP cameras in the substation operating area and obtain video frame images of the operating area from the cameras;
4-2) Use a moving object detection algorithm to detect the moving targets of operating personnel in the video images, and mark the moving target regions with rectangular frames;
4-3) According to the aspect ratio and pixel area of operating personnel, obtain the upper sub-region of the rectangular frame region from step 4-2), then convert this upper sub-region image from the RGB model to the HSV model;
4-4) Set the helmet color value range thresholds in the HSV space model, converting the standard helmet color ranges under the HSV model into the HSV helmet color ranges used by the OpenCV function library, to suit the concrete application;
4-5) According to the helmet color ranges under the HSV model, binarize the HSV model image obtained in 4-3), then apply morphological erosion and dilation to eliminate irrelevant regions. Finally search for connected domains in the binary image: if a connected domain is found, there is a target-color region in the region to be detected; if not, the moving target is not wearing a safety helmet and an alarm is raised;
5) Take the rectangular frame region image in which the target color was detected in 4-5), extract the Haar features and HSV color space features of this region image, and send them to the final cascade classifier obtained in step 3). Feature-matching detection is performed on the rectangular frame target region with a dynamic sliding window to judge whether the operating personnel wear safety helmets (if the features match, the operator is judged to be wearing a helmet; otherwise an alarm is raised).
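The dynamic sliding window in step 5) can be sketched as a minimal Python generator that scans a frame region at several scales. The 24 × 24 base window matches the training sample size used later in the embodiment; the scale factor and step fraction are illustrative assumptions, not values fixed by the patent.

```python
def sliding_windows(img_w, img_h, base=24, scale=1.25, step_frac=0.25):
    """Yield (x, y, size) detection windows over an img_w x img_h region.

    base matches the 24 x 24 training window; scale and step_frac are
    illustrative choices, not parameters stated in the patent text."""
    size = base
    while size <= min(img_w, img_h):
        step = max(1, int(size * step_frac))
        for y in range(0, img_h - size + 1, step):
            for x in range(0, img_w - size + 1, step):
                yield x, y, size
        size = int(size * scale)  # grow the window for the next scale pass

# scan a 96 x 72 candidate rectangle region
windows = list(sliding_windows(96, 72))
```

Each window would then be classified by the trained cascade; only windows passing every stage are reported as helmets.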
The beneficial effects of the invention are unmistakable. Addressing the shortcomings of traditional monitoring systems in helmet detection, the method obtains moving foreground targets with a visual background extraction algorithm (ViBe), then preliminarily locates the helmet region based on a normal person's edge-contour aspect-ratio threshold (1/3) and a pixel-area threshold for operating personnel ([1000, 10000]). The helmet region is converted to the HSV color space, and the helmet target rectangle is further located and segmented according to the HSV color space thresholds of red, blue and white helmets in practice. From the segmented helmet region image, Haar features and HSV color features are extracted and fused, then sent for feature matching to the helmet classifier trained with the Adaboost algorithm, realizing helmet detection. The method can not only discover abnormal moving targets in a specific region in time, but can also detect staff not wearing helmets in real time and raise timely alarms. Experiments show that compared with traditional methods, this method achieves a considerable improvement in recognition rate and false-alarm rate, with good robustness.
Brief description of the drawings
Fig. 1 is the integral image used to calculate edge rectangle feature values, provided by an embodiment of the present invention;
Fig. 2 is the helmet edge feature template diagram provided by an embodiment of the present invention;
Fig. 3 is the overall helmet recognition flow chart provided by an embodiment of the present invention;
Fig. 4 is the helmet recognition classifier training flow chart provided by an embodiment of the present invention;
Fig. 5 shows personnel target detection results based on the ViBe algorithm, provided by an embodiment of the present invention;
Fig. 6 is the personnel target detection flow chart based on the ViBe algorithm, provided by an embodiment of the present invention;
Fig. 7 is the helmet detection flow chart provided by an embodiment of the present invention;
Fig. 8 shows helmet recognition results provided by an embodiment of the present invention.
Specific embodiment
The invention is further described below with reference to embodiments, but the claimed scope of the invention should not be construed as limited to the following embodiments. Without departing from the idea of the invention described above, various replacements and changes made according to ordinary technical knowledge and customary means in the art shall all fall within the scope of the invention.
A transformer station worker helmet wear detection method based on video analysis comprises the following steps:
1) Prepare positive and negative samples:
Positive sample collection: helmet positive samples are normalized and contain little background; sizes are unified to W × H = 24 × 24. Negative sample collection: collect background pictures of the actual substation scene that contain no helmets; negative samples need not be scaled, but must be larger than the positive sample size. A positive-to-negative sample ratio of 1:2 to 1:3 is preferred. A positive sample description file pos.vec is generated.
2) Feature extraction:
2-1) Pre-process the collected helmet video images; pre-processing includes illumination correction, noise filtering, geometric normalization and size normalization.
2-2) Describe the helmet with the edge features of the Haar feature set; the feature value of an edge feature is defined as the pixel sum of the white rectangle minus the pixel sum of the black rectangle. In the embodiment, the helmet is described with the two-rectangle edge features (as in Fig. 2).
2-3) Calculate the Haar feature number:
Construct a detection window of size W × H = 24 × 24. An edge feature of size 2 × 1 can slide W−1 steps horizontally and H steps vertically within the window, i.e. (W−1) × H features; for an edge feature of size 1 × 2 the feature number is H × (W−1). Both kinds of edge feature are scaled along the horizontal and vertical directions to generate new rectangle features: the 2 × 1 edge feature is enlarged horizontally to i × 1, i = 4, 6, 8, …, W, and vertically to 2 × j, j = 1, 2, 3, …, H; the 1 × 2 edge feature is enlarged horizontally to 2 × i, i = 1, 2, 3, …, W, and vertically to i × 1, i = 4, 6, 8, …, H. Each feature thus has X × Y scaling modes. For each new rectangle feature, the number of rectangle features in the detection window is recalculated as:

count = X·Y·(W + 1 − w·(X + 1)/2)·(H + 1 − h·(Y + 1)/2)

where W × H is the detection window size, w × h is the edge feature size (here 2 × 1 and 1 × 2), and X = ⌊W/w⌋ and Y = ⌊H/h⌋ are the maximum scaling coefficients of the rectangle feature in the horizontal and vertical directions respectively. Counting over the sliding detection window gives 86400 rectangle features per sample picture.
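The figure of 86400 features per 24 × 24 sample can be checked against the counting formula above with a short Python sketch:

```python
def haar_feature_count(W, H, w, h):
    # X, Y: maximum horizontal / vertical scaling coefficients of a w x h template
    X, Y = W // w, H // h
    # count of all positions and scales of the template inside the W x H window
    return int(X * Y * (W + 1 - w * (X + 1) / 2) * (H + 1 - h * (Y + 1) / 2))

n_2x1 = haar_feature_count(24, 24, 2, 1)
n_1x2 = haar_feature_count(24, 24, 1, 2)
print(n_2x1 + n_1x2)  # 86400
```

Each of the two edge-feature orientations contributes 43200 features, reproducing the total stated in the embodiment.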
2-4) Use the integral image idea to calculate the feature values of the rectangle features formed from the two kinds of edge features. The specific process is as follows:
a) The concept of the integral image is shown in Fig. 1(a): the integral image at coordinate A(x, y) is the sum of all pixels above and to its left (the shaded part of the figure):

ii(x, y) = Σ_{x'≤x, y'≤y} i(x', y')

where ii(x, y) is the integral image value, i(x', y') is the original gray image pixel value, and S(x, y) denotes the cumulative sum of original pixels of point (x, y) in the y direction. The integral image in Fig. 1 is then computed with the recurrences:

S(x, y) = S(x, y−1) + i(x, y)
ii(x, y) = ii(x−1, y) + S(x, y)
b) With the integral image, the feature value of any rectangular image region can be computed quickly, as shown in Fig. 1(b):

pixel sum of region D = ii(4) + ii(1) − ii(2) − ii(3)

where ii(1) is the pixel sum of region A, ii(2) the pixel sum of region A+B, ii(3) the pixel sum of region A+C, and ii(4) the pixel sum of region A+B+C+D. The pixel sum of any region can therefore be computed from the integral image values at its corner points. Taking the 1 × 2 and 2 × 1 edge rectangle features of Fig. 1 as examples, the feature value calculation is illustrated below:
For the 1 × 2 edge feature shown in Fig. 1(c), the feature value is:

pixel difference of regions A and B = [ii(5) − ii(4)] + [ii(3) − ii(2)] − [ii(2) − ii(1)] − [ii(6) − ii(5)]

For the 2 × 1 edge feature shown in Fig. 1(d), the feature value is:

pixel difference of regions A and B = [ii(4) − ii(3)] − [ii(2) − ii(1)] + [ii(4) − ii(3)] − [ii(6) − ii(5)]
When detecting an image at multiple scales, the integral image of Fig. 1(a) can still be used for the calculation. The detection of the whole image therefore requires only a single scan of the original picture, after which the feature values of all scaled 1 × 2 and 2 × 1 edge rectangle features can be computed conveniently and efficiently, which drastically improves detection speed.
c) Generate the Haar feature vector matrix [a1, a2, …, an] ∈ R^(m×n), where n is the Haar feature dimension.
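The recurrences and the four-corner lookup above can be sketched in a few lines of NumPy: `integral_image` is a plain cumulative sum, and `rect_sum` reproduces the ii(4) + ii(1) − ii(2) − ii(3) corner arithmetic for an arbitrary rectangle.

```python
import numpy as np

def integral_image(img):
    # ii(x, y): sum of all pixels above and to the left of (x, y), inclusive
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x0, y0, x1, y1):
    """Sum over img[y0:y1+1, x0:x1+1] from at most four integral-image
    lookups, the same corner arithmetic as ii(4) + ii(1) - ii(2) - ii(3)."""
    total = ii[y1, x1]
    if x0 > 0:
        total -= ii[y1, x0 - 1]
    if y0 > 0:
        total -= ii[y0 - 1, x1]
    if x0 > 0 and y0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
# a 2 x 1 edge-style feature value: white rectangle sum minus black rectangle sum
feature = rect_sum(ii, 0, 0, 3, 1) - rect_sum(ii, 0, 2, 3, 3)
```

However large the rectangle, its sum costs at most four lookups once the integral image is built, which is why a single scan of the picture suffices.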
2-5) Extract HSV color space features:
Convert the positive and negative sample pictures from RGB model images to HSV model images. HSV (Hue, Saturation, Value) is a color space that describes color by its intuitive properties, also called the hexagonal cone model. Calculate the HSV color space pixel values of the sample sets and generate the HSV-based 3-dimensional vector [b1, b2, b3] ∈ R^(m×3).
2-6) Fuse the Haar features and HSV color features to generate the final feature vector matrix [a1, a2, …, an, b1, b2, b3] ∈ R^(m×n'), where n' is the feature dimension.
3) The Adaboost algorithm builds a multi-level cascaded screening classifier: candidate detection windows pass through the detectors in sequence, and helmet positive samples are finally separated from non-helmet negative samples. Each layer is a strong classifier, and each strong classifier is composed of several weak classifiers. The training steps of the helmet Adaboost detection classifier are:
3-1) Take as input a series of helmet training samples (x1, y1), (x2, y2), …, (xn, yn) collected in step 1), where yi = −1 denotes a non-helmet negative sample and yi = 1 a helmet positive sample. In this example the training sample number is N = 4200, with 1200 positive samples and 3000 negative samples. Calculate the feature vector matrix of the positive and negative sample sets; each feature vector f corresponds to a weak classifier h(x, f, p, θ):

h(x, f, p, θ) = 1 if p·f(x) < p·θ, and −1 otherwise

where f is the feature, θ is the threshold separating positive and negative samples, p indicates the direction of the inequality, and x is the detection window. A weak classifier whose accuracy in separating helmet positive and negative samples exceeds 50% is an effective weak classifier; experimental results show that in helmet images most Haar features have only a very weak ability to distinguish helmet positive from negative samples.
3-2) Select the optimal weak classifiers with the Adaboost algorithm. The detailed process is as follows:
A. Given the positive and negative training sample set of size N, let T be the number of training iterations;
B. Initialize each sample weight to 1/N, i.e. the initial probability distribution of the training samples;
C. In the first round of iteration:
a) According to the training sample probability distribution D1, train a weak classifier for each feature vector f;
b) Calculate the classification error rate of each weak classifier; the optimal weak classifier is obtained as follows:
Step1: For each feature vector f, calculate the feature value of all training samples and sort them.
Step2: Scan the sorted feature values; for each element of the sorted table, calculate the following four values:
I. the total weight of all positive samples, t1;
II. the total weight of all negative samples, t0;
III. the cumulative weight of positive samples before this element, s1;
IV. the cumulative weight of negative samples before this element, s0;
Step3: Obtain the helmet classification error of each element:

r = min(s1 + (t0 − s0), s0 + (t1 − s1))

Step4: Find the element of the table that minimises r; its feature value is the optimal threshold, constituting the optimal weak classifier.
c) Update the sample weights and carry out the next round of iteration. After T rounds of iteration, T optimal weak classifiers are obtained, where the iteration number T is related to the minimum detection rate and maximum false-alarm rate of each stage classifier.
1. For the optimal weak classifier Gt(x) generated in each round, compute the accumulated classification error et of the misclassified samples, t = 1, 2, …, T, and obtain the weight ratio αt of Gt(x) in the final strong classifier:

αt = (1/2)·ln((1 − et)/et)

From the formula above, when et ≤ 1/2 we have αt ≥ 0, and αt increases as et decreases, so base classifiers with a lower classification error rate play a bigger role in the final classifier.
2. After each round of iteration, update the training set weights to obtain the new sample probability distribution Dt+1 for the next round of iteration, where

Dt+1 = (wt+1,1, wt+1,2, …, wt+1,N)
wt+1,i = (wt,i / Zt)·exp(−αt·yi·Gt(xi))

so that the weights of samples misclassified by the weak classifier are increased and the weights of correctly classified samples are reduced, where wt+1,i are the updated sample weights and Zt is the normalization factor:

Zt = Σ_{i=1}^{N} wt,i·exp(−αt·yi·Gt(xi))
3-3) Combine the T optimal weak classifiers into a strong classifier as follows:

G(x) = sign(Σ_{t=1}^{T} αt·Gt(x) − R)

where G(x) is the strong classifier, αt represents the importance of Gt(x) in the final classifier, Gt(x) is the t-th of the T basic optimal weak classifiers, and R is a manually set threshold meeting the positive sample error rate; the confidence of G(x) is the absolute value of the weighted sum.
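One training round of the procedure above can be sketched in NumPy, under the simplifying assumption of a single scalar feature: the sorted scan with t1/t0/s1/s0 from Step2-Step3, the weight αt = ½·ln((1 − et)/et), and the exponential weight update. The toy feature values and the stump's predictions are hypothetical.

```python
import numpy as np

def best_stump(feat, y, w):
    """Sorted-scan threshold search: t1/t0 are total positive/negative weight,
    s1/s0 the running sums up to each element, and the error is
    r = min(s1 + (t0 - s0), s0 + (t1 - s1)) as in Step3 above."""
    order = np.argsort(feat)
    f, ys, ws = feat[order], y[order], w[order]
    t1, t0 = ws[ys == 1].sum(), ws[ys == -1].sum()
    s1 = np.cumsum(np.where(ys == 1, ws, 0.0))
    s0 = np.cumsum(np.where(ys == -1, ws, 0.0))
    r = np.minimum(s1 + (t0 - s0), s0 + (t1 - s1))
    k = int(np.argmin(r))
    return float(f[k]), float(r[k])

feat = np.array([0.1, 0.2, 0.8, 0.9])   # toy feature values
y = np.array([-1, 1, -1, 1])            # deliberately not separable
w = np.full(4, 0.25)                    # initial weights 1/N

theta, e_t = best_stump(feat, y, w)     # e_t = 0.25 here
alpha = 0.5 * np.log((1 - e_t) / e_t)   # alpha_t = 1/2 ln((1 - e_t)/e_t)

# weight update: misclassified samples gain weight, correct ones lose it
pred = np.array([-1, -1, -1, 1])        # a stump that misclassifies sample 1
w_new = w * np.exp(-alpha * y * pred)
w_new /= w_new.sum()                    # division by the normalising factor Z_t
```

After the update the single misclassified sample carries half of the total weight, which is the standard AdaBoost behaviour of forcing later rounds to concentrate on hard samples.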
3-4) To further improve detection efficiency and meet real-time detection requirements, Viola et al. proposed a multi-level cascade of strong classifiers. A cascade classifier is formed by connecting in series the N strong classifiers with the highest classification accuracy, as shown in Fig. 3. Each layer is a strong classifier trained with the Adaboost algorithm. The final cascade classifier algorithm is as follows:
Step1: Set the number of cascade layers S, the per-layer minimum detection rate d and maximum false-positive rate f of each strong classifier, and the target false-positive rate F of the cascade classifier; the cascade detection rate is Di and the cascade false-positive rate is Fi, where i is the current layer of the cascade;
Step2: P = helmet training positive samples, N = helmet training negative samples;
Step3: Initialize the training layer i = 1, D1 = 1.0, F1 = 1.0;
Step4: Loop iteration: train the i-th layer strong classifier, containing ni final rectangle features, with the Adaboost algorithm; adjust the threshold of the i-th strong classifier so that the current layer's false-positive rate Fi is below f × Fi−1 and its detection rate Di is above d × Di−1, i = 1, 2, …, S; if the false-positive rate Fi is still greater than F, run the current classifier on the sample images and put the falsely detected negative samples into N;
Step5: If the false-positive rate Fi falls to F or below, or the number of cascade layers i reaches S, the iteration ends.
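Because each layer must satisfy Fi ≤ f × Fi−1 and Di ≥ d × Di−1, the cascade's overall rates compound multiplicatively: after S layers, F ≤ f^S and D ≥ d^S. A small sketch of this bookkeeping, with illustrative values (f = 0.5, d = 0.995 and the 10⁻⁵ target are assumptions, not figures from the patent):

```python
import math

def layers_needed(f, F_target):
    # each layer multiplies the cascade false-positive rate by at most f,
    # so F_S <= f**S; solve f**S <= F_target for the smallest integer S
    return math.ceil(math.log(F_target) / math.log(f))

f, d = 0.5, 0.995           # per-layer max false-positive / min detection rate
S = layers_needed(f, 1e-5)  # layers needed to push F below 1e-5
D = d ** S                  # worst-case overall detection rate after S layers
```

With these numbers, 17 layers suffice, at the cost of the overall detection rate dropping to roughly d^17 ≈ 0.92, which is why d must be kept very close to 1 per layer.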
4) Operating personnel positioning:
4-1) Arrange several IP cameras in the substation operating area and obtain video frame images of the operating area from the cameras;
4-2) Use a moving object detection algorithm to detect the moving targets of operating personnel in the video images and mark the moving target regions with rectangular frames, as follows:
a) Input the substation video frame images obtained in step 4-1);
b) Filter the original image with median filtering to suppress high-frequency noise. As for frame-to-frame jitter, the neighborhood correlation assumed by default in the ViBe algorithm can be relied on to eliminate the influence of slight jitter;
c) The steps of detecting moving foreground targets with the ViBe algorithm are as follows:
Step1 Model initialization: initialize with one frame from the substation camera. For each pixel of the frame, the ViBe algorithm randomly draws N pixel values from the pixel's m-neighborhood as the model sample values of the current pixel, so each pixel's model consists of N pixel values:

M = {M1, M2, …, MN}

This exploits the similar spatio-temporal distribution of neighboring pixels and effectively overcomes the repeated-sampling problem of ViBe initialization.
Step2 Foreground extraction:
After the background model is established, each new frame can be segmented into foreground and background according to a decision criterion. The specific method is: for each pixel of the new image, calculate the pixel distance between its value and each of its N corresponding sample values. Let the N samples of the background model be M = {M1, M2, …, MN}, and let # = {#1, #2, …} be the set of samples whose distance to the current pixel value x is less than a given threshold r. Count the number P of elements in the intersection of M and #; if the intersection is large enough, the pixel is considered background. This is a foreground decision based on two-dimensional spatial distance:

P = |M ∩ #|, M = {M1, M2, …, MN}, # = {#1, #2, …}
An adaptive threshold is adopted here. Because the background in a substation is relatively stable, the demand for handling dynamic background is not high; performance is concentrated mainly on the ability to handle shadows, gradually changing illumination and occlusion. For gradual illumination change and occlusion, a varying threshold is used: the global illumination change is counted per time period, and the dynamic threshold r is derived from it. This adjustment strategy based on prior knowledge is more robust. Set the initial threshold r and count the overall gray level G0; at each later period count the new overall gray value Gt and adjust r according to the overall gray change:

r ← r·(Gt/G0)
Step3: background update and neighbor diffusion
Background update: pixels identified as background are updated, once confirmed, with a random-selection strategy, which has achieved excellent results in experimental verification: one of the N samples is chosen at random and replaced with the current pixel value.
Neighbor diffusion: when a pixel x is judged to be background, one pixel in its m-neighborhood is likewise chosen at random and updated: one of the N samples of that neighboring pixel is randomly replaced with the value of x.
This random strategy ensures that the survival time of each sample follows a reasonable probability distribution, instead of every sample being kept for a fixed time as in earlier algorithms. Moreover, since not all background pixels are updated, the algorithm is considerably faster.
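The random update and neighbor diffusion can be sketched as follows. This is a simplified illustration: the 1-in-16 subsampling factor and the dict-based model layout are assumptions, not specified by the text.

```python
import random

def update_background(model, x, y, value, n_subsample=16):
    """ViBe conservative update for a pixel just classified as background:
    with probability 1/n_subsample replace one random sample of (x, y),
    and, independently with the same probability, one random sample of a
    random neighbor (neighbor diffusion).  `model` maps (x, y) -> list of
    N sample values."""
    if random.randrange(n_subsample) == 0:
        samples = model[(x, y)]
        samples[random.randrange(len(samples))] = value
    if random.randrange(n_subsample) == 0:
        nx = x + random.choice([-1, 0, 1])
        ny = y + random.choice([-1, 0, 1])
        if (nx, ny) in model:
            neigh = model[(nx, ny)]
            neigh[random.randrange(len(neigh))] = value
```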
4-3) Using the constraints that a worker's height-to-width ratio exceeds 1/3 and that the pixel area lies in the range [1000, 10000], segment from the target region of step 4-2) the rectangular area covering the upper third, near the head;
4-4) Set the helmet color threshold ranges in the HSV space model, converting the standard helmet color ranges under the HSV model into the HSV helmet color ranges used by the OpenCV function library, to suit the concrete application;
4-5) Convert the RGB image of the rectangular area obtained in 4-3) into an HSV color-space image. The HSV threshold ranges for red, blue, and white color recognition in the OpenCV library are:
Blue safety cap:H:100-124;S:43-255;V:46-255
White safety cap:H:0-180;S:0-30;V:221-255
Red safety cap:H:0-10||156-180;S:43-255;V:46-255
4-6) Using the helmet color ranges under the HSV model, binarize the HSV image obtained in 4-5), then apply morphological erosion and dilation to eliminate irrelevant regions. Finally, search the binary image for connected components: if one is found, a region of the target color is present in the area under test; if none exists, the moving target is not wearing a helmet, and an alarm is raised;
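The HSV thresholds listed in 4-5) can be checked per pixel as below. This is a sketch: the erosion/dilation and connected-component search are approximated by a plain area count, and `min_area` is an illustrative parameter.

```python
def in_helmet_range(h, s, v):
    """True when one HSV pixel falls in the blue, white, or red helmet
    range given above (OpenCV convention: H in 0-180, S and V in 0-255)."""
    blue  = 100 <= h <= 124 and 43 <= s <= 255 and 46 <= v <= 255
    white = 0 <= h <= 180 and 0 <= s <= 30 and 221 <= v <= 255
    red   = (0 <= h <= 10 or 156 <= h <= 180) and 43 <= s <= 255 and 46 <= v <= 255
    return blue or white or red

def region_has_helmet_colour(hsv_pixels, min_area=1):
    """Binarize the region and report whether at least `min_area` pixels
    of helmet color survive."""
    return sum(1 for p in hsv_pixels if in_helmet_range(*p)) >= min_area
```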
5) Save the rectangular area image in which the target color was detected in 4-6), extract its Haar and HSV color-space features, and feed them to the final cascade classifier obtained in step 3-4). Feature-matching detection is performed on the rectangular target-color region with a dynamic sliding window to decide whether the worker is wearing a helmet (if the features match, the worker is judged to be wearing one; otherwise, an alarm is raised).
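The dynamic sliding window can be sketched as the enumeration below; the step size, scale factor, and scale count are illustrative assumptions.

```python
def sliding_windows(width, height, win_w, win_h, step=4, scale=1.25, n_scales=3):
    """Enumerate (x, y, w, h) detection windows over a region at several
    scales; each window would then be fed to the cascade classifier for
    feature matching."""
    windows = []
    w, h = float(win_w), float(win_h)
    for _ in range(n_scales):
        y = 0
        while y + h <= height:
            x = 0
            while x + w <= width:
                windows.append((x, y, int(w), int(h)))
                x += step
            y += step
        w, h = w * scale, h * scale
    return windows
```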
Claims (1)
1. A method for detecting whether substation workers are wearing safety helmets, based on video analysis, characterized by comprising the following steps:
1) Prepare positive and negative samples:
Positive samples: collect safety-helmet sample images and normalize them to a uniform size of W × H. Negative samples: collect images of the actual scene;
2) Feature extraction:
2-1) Preprocess the collected helmet video images; preprocessing comprises illumination correction, noise filtering, geometric normalization, and size normalization;
2-2) Describe the helmet with the two-rectangle edge feature; the feature value of an edge feature is defined as the pixel sum of the white rectangle minus the pixel sum of the black rectangle;
2-3) Construct a detection window of size W × H. An edge-feature rectangle of size 2 × 1 can slide W − 1 steps horizontally and H steps vertically within the window, giving (W − 1) × H features; an edge-feature rectangle of size 1 × 2 gives W × (H − 1) features. Both edge-feature rectangles are also scaled along the horizontal and vertical directions: the 2 × 1 edge feature scales horizontally to i × 1 with i = 4, 6, 8, …, W, and vertically to 2 × j with j = 1, 2, 3, …, H. Each feature thus has X × Y scaling modes, and for the scaled feature rectangles the number of features in the detection window is:
count = X · Y · (W + 1 − w(X + 1)/2) · (H + 1 − h(Y + 1)/2)
where W × H is the detection window, w × h is the size of the edge-feature rectangle, and X = ⌊W/w⌋ and Y = ⌊H/h⌋ are the maximum scale factors of the rectangle feature in the horizontal and vertical directions, respectively;
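The counting formula can be checked numerically; this is a sketch, and `haar_feature_count` is an illustrative name.

```python
def haar_feature_count(W, H, w, h):
    """Number of placements and scalings of a w-by-h rectangle feature in
    a W-by-H detection window, with maximum scale factors X = W // w and
    Y = H // h (the standard Viola-Jones counting formula)."""
    X, Y = W // w, H // h
    return int(X * Y * (W + 1 - w * (X + 1) / 2) * (H + 1 - h * (Y + 1) / 2))
```

For the classic 24 × 24 window, each two-rectangle edge feature (2 × 1 or 1 × 2) yields 43200 instances.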
2-4) Using the integral-image technique, compute the feature values of the various rectangle features formed from the two edge-feature types, generating the Haar feature-vector matrix [a1, a2, …, an] ∈ R^(m×n), where n is the Haar feature dimension;
2-5) Convert the positive and negative sample images from the RGB to the HSV color space, generating the 3-dimensional feature-vector matrix [b1, b2, b3] ∈ R^(m×3) of the sample sets;
2-6) Fuse the Haar features and the HSV color features into the final feature-vector matrix [a1, a2, …, an, b1, b2, b3] ∈ R^(m×n'), where n' is the fused feature dimension;
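Fusion in 2-6) is plain concatenation per sample. A minimal sketch follows; taking per-channel means as the 3-dimensional HSV feature is an assumption, since the claim only fixes its dimensionality.

```python
def hsv_colour_feature(hsv_pixels):
    """3-dimensional color feature b1, b2, b3: mean H, S, and V over the
    sample image."""
    n = len(hsv_pixels)
    return [sum(p[c] for p in hsv_pixels) / n for c in range(3)]

def fuse_features(haar_vec, hsv_pixels):
    """Fused vector [a1..an, b1, b2, b3] of dimension n' = n + 3."""
    return list(haar_vec) + hsv_colour_feature(hsv_pixels)
```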
3) The Adaboost algorithm builds a multi-stage cascaded screening classifier: candidate detection windows pass through the detectors in sequence, ultimately separating helmet positives from non-helmet negatives. Each stage is a strong classifier, and each strong classifier is composed of several weak classifiers. The Adaboost helmet detector is therefore trained as follows:
3-1) Take the positive and negative sample sets collected in step 1) as input, and compute their feature-vector matrices. Each feature vector in the matrix corresponds to one basic weak classifier; the optimal weak classifiers are selected with the Adaboost algorithm as follows:
A. Given the positive and negative training sets, let N be the number of training samples and T the number of training iterations;
B. Initialize every sample weight to 1/N, the initial probability distribution of the training samples;
C. First round of iteration:
a) According to the probability distribution of the training samples, train one weak classifier for each feature vector f;
b) Compute the classification error rate of each basic weak classifier;
c) Select the weak classifier with the minimum classification error rate as the optimal weak classifier;
D. Increase the weights of the misclassified samples, i.e. update the sample weights, and continue iterating; after T rounds, T optimal weak classifiers are obtained, each round t generating one optimal weak classifier Gt(x):
1. Compute the weighted classification error et of round t, i.e. the accumulated weight of the samples misclassified by Gt(x) in the iteration sample set, t = 1, 2, …, T:
et = Σi wt,i · I(Gt(xi) ≠ yi)
2. Compute the weight ratio αt of the round-t optimal weak classifier Gt(x) in the final strong classifier:
αt = (1/2) · ln((1 − et) / et)
3. After round t, update the training-set weights to obtain the new sample distribution Dt+1 for the next round, where
Dt+1 = (wt+1,1, wt+1,2, …, wt+1,N)
wt+1,i = (wt,i / Zt) · exp(−αt · yi · Gt(xi))
with wt+1,i the updated sample weight and Zt = Σi wt,i · exp(−αt · yi · Gt(xi)) the normalization factor;
E. Combine the T optimal weak classifiers into the strong classifier:
G(x) = sign( Σt=1..T αt · Gt(x) )
where G(x) is the strong classifier, αt denotes the importance of Gt(x) in the final classifier, and Gt(x) is the t-th of the T basic optimal weak classifiers obtained;
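The update rules in 3-1) can be exercised on a toy one-dimensional problem. This is a hedged sketch: threshold "stumps" stand in for the per-feature weak classifiers, and labels are ±1.

```python
import math

def train_adaboost(xs, ys, T=5):
    """Minimal AdaBoost with 1-D threshold stumps as weak classifiers,
    following the rules above: pick the minimum-error stump, weight it by
    alpha_t = 0.5*ln((1-e_t)/e_t), reweight samples and normalize by Z_t."""
    N = len(xs)
    w = [1.0 / N] * N
    ensemble = []  # list of (alpha, threshold, polarity)
    for _ in range(T):
        best = None
        for thr in xs:
            for pol in (1, -1):
                preds = [pol if x >= thr else -pol for x in xs]
                err = sum(wi for wi, p, y in zip(w, preds, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, thr, pol, preds)
        err, thr, pol, preds = best
        err = max(err, 1e-10)  # guard against a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thr, pol))
        # misclassified samples gain weight; normalize by Z_t
        w = [wi * math.exp(-alpha * p * y) for wi, p, y in zip(w, preds, ys)]
        Z = sum(w)
        w = [wi / Z for wi in w]
    return ensemble

def predict(ensemble, x):
    """Strong classifier G(x) = sign(sum_t alpha_t * G_t(x))."""
    s = sum(a * (pol if x >= thr else -pol) for a, thr, pol in ensemble)
    return 1 if s >= 0 else -1
```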
3-2) The final cascade classifier is trained as follows:
Step1: Set the number of cascade stages S, the minimum detection rate d and maximum false-positive rate f of each strong-classifier stage, and the target false-positive rate F of the cascade classifier; let Di and Fi be the detection rate and false-positive rate of the cascade classifier, where i is the stage index;
Step2: P = the helmet positive training samples, N = the negative training samples;
Step3: Initialize the stage index i = 1, D1 = 1.0, F1 = 1.0;
Step4: Loop:
Train the i-th strong classifier, containing ni final rectangle features, with the Adaboost algorithm; adjust the threshold of the i-th strong classifier so that its false-positive rate Fi is below f × Fi−1 and its detection rate Di is above d × Di−1, i = 1, 2, …, S. If the false-positive rate Fi of the cascade still exceeds the target F, run the current stage's classifier over the sample images and add the misclassified negatives to N;
Step5: Iteration ends when the false-positive rate Fi drops below the target F or the number of stages i reaches S.
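Two properties of the cascade in 3-2) are easy to state in code. This is a sketch in which stage classifiers are modeled as plain predicates.

```python
def cascade_accepts(stages, window):
    """A candidate window is accepted only if every stage (strong
    classifier) accepts it; the first rejection exits immediately, which
    is what makes the cascade cheap on the overwhelmingly negative
    windows."""
    return all(stage(window) for stage in stages)

def overall_rates(d, f, S):
    """With per-stage detection rate >= d and false-positive rate <= f,
    an S-stage cascade achieves detection >= d**S and false positives
    <= f**S (e.g. d=0.99, f=0.5, S=10 gives roughly 0.90 and 0.001)."""
    return d ** S, f ** S
```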
4) Worker positioning:
4-1) Install IP cameras in the substation work area and use them to capture video frames of the area;
4-2) Detect moving worker targets in the video images with a moving-object detection algorithm, and mark each moving-target region with a rectangle;
4-3) Using the worker's aspect ratio and pixel area, take the upper sub-region of the rectangle obtained in step 4-2), then convert that sub-region from the RGB model to the HSV model;
4-4) Set the helmet color threshold ranges in the HSV space model, converting the standard helmet color ranges under the HSV model into the HSV helmet color ranges used by the OpenCV function library, to suit the concrete application;
4-5) Using the helmet color ranges under the HSV model, binarize the HSV image obtained in 4-3), then apply morphological erosion and dilation to eliminate irrelevant regions. Finally, search the binary image for connected components: if one is found, a region of the target color is present in the area under test; if none exists, the moving target is not wearing a helmet, and an alarm is raised;
5) Take the rectangular area image in which the target color was detected in 4-5), extract its Haar and HSV color-space features, and feed them to the final cascade classifier obtained in step 3). Perform feature-matching detection on the rectangular target region with a dynamic sliding window to decide whether the worker is wearing a helmet (if the features match, the worker is judged to be wearing one; otherwise, an alarm is raised).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610546587.5A CN106446926A (en) | 2016-07-12 | 2016-07-12 | Transformer station worker helmet wear detection method based on video analysis |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106446926A true CN106446926A (en) | 2017-02-22 |
Family
ID=58183580
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610546587.5A Pending CN106446926A (en) | 2016-07-12 | 2016-07-12 | Transformer station worker helmet wear detection method based on video analysis |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106446926A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103745226A (en) * | 2013-12-31 | 2014-04-23 | 国家电网公司 | Dressing safety detection method for worker on working site of electric power facility |
CN104063722A (en) * | 2014-07-15 | 2014-09-24 | 国家电网公司 | Safety helmet identification method integrating HOG human body target detection and SVM classifier |
CN104298969A (en) * | 2014-09-25 | 2015-01-21 | 电子科技大学 | Crowd scale statistical method based on color and HAAR feature fusion |
CN104504369A (en) * | 2014-12-12 | 2015-04-08 | 无锡北邮感知技术产业研究院有限公司 | Wearing condition detection method for safety helmets |
Non-Patent Citations (3)
Title |
---|
Feng Jie, "Research on the Application of Image Recognition Technology in Converter Station Monitoring Systems", China Master's Theses Full-text Database, Information Science and Technology Series * |
Wu Yong, "Research on Target Detection Algorithms in Intelligent Video Surveillance", China Master's Theses Full-text Database, Information Science and Technology Series * |
Song Wei et al., "Research on ViBe-based Intelligent Monitoring Technology for Substations", Chinese Journal of Scientific Instrument (《仪器仪表学报》) * |
Cited By (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106981163A (en) * | 2017-03-26 | 2017-07-25 | 天津普达软件技术有限公司 | A kind of personnel invade abnormal event alarming method |
CN106981163B (en) * | 2017-03-26 | 2018-11-27 | 天津普达软件技术有限公司 | A kind of personnel's invasion abnormal event alarming method |
CN107666594A (en) * | 2017-09-18 | 2018-02-06 | 广东电网有限责任公司东莞供电局 | A kind of video monitoring monitors the method operated against regulations in real time |
CN107679524A (en) * | 2017-10-31 | 2018-02-09 | 天津天地伟业信息系统集成有限公司 | A kind of detection method of the safety cap wear condition based on video |
CN108009574A (en) * | 2017-11-27 | 2018-05-08 | 成都明崛科技有限公司 | A kind of rail clip detection method |
CN108052900A (en) * | 2017-12-12 | 2018-05-18 | 成都睿码科技有限责任公司 | A kind of method by monitor video automatic decision dressing specification |
CN110119656A (en) * | 2018-02-07 | 2019-08-13 | 中国石油化工股份有限公司 | Intelligent monitor system and the scene monitoring method violating the regulations of operation field personnel violating the regulations |
CN108416289A (en) * | 2018-03-06 | 2018-08-17 | 陕西中联电科电子有限公司 | A kind of working at height personnel safety band wears detection device and detection method for early warning |
CN108961314A (en) * | 2018-06-29 | 2018-12-07 | 北京微播视界科技有限公司 | Moving image generation method, device, electronic equipment and computer readable storage medium |
CN108961314B (en) * | 2018-06-29 | 2021-09-17 | 北京微播视界科技有限公司 | Moving image generation method, moving image generation device, electronic device, and computer-readable storage medium |
CN108873791A (en) * | 2018-08-21 | 2018-11-23 | 朱明增 | A kind of substation field violation detection early warning system |
CN109271903A (en) * | 2018-09-02 | 2019-01-25 | 杭州晶智能科技有限公司 | Infrared image human body recognition method based on probability Estimation |
CN109190710A (en) * | 2018-09-13 | 2019-01-11 | 东北大学 | Detection method of leaving post based on Haar-NMF feature and cascade Adaboost classifier |
CN109190710B (en) * | 2018-09-13 | 2022-04-08 | 东北大学 | off-Shift detection method based on Haar-NMF characteristics and cascade Adaboost classifier |
CN109522838A (en) * | 2018-11-09 | 2019-03-26 | 大连海事大学 | A kind of safety cap image recognition algorithm based on width study |
CN109785361A (en) * | 2018-12-22 | 2019-05-21 | 国网内蒙古东部电力有限公司 | Substation's foreign body intrusion detection system based on CNN and MOG |
CN109697430A (en) * | 2018-12-28 | 2019-04-30 | 成都思晗科技股份有限公司 | The detection method that working region safety cap based on image recognition is worn |
CN109858367A (en) * | 2018-12-29 | 2019-06-07 | 华中科技大学 | The vision automated detection method and system that worker passes through support unsafe acts |
CN109859236A (en) * | 2019-01-02 | 2019-06-07 | 广州大学 | Mobile object detection method, calculates equipment and storage medium at system |
CN109978916A (en) * | 2019-03-11 | 2019-07-05 | 西安电子科技大学 | Vibe moving target detecting method based on gray level image characteristic matching |
CN109978916B (en) * | 2019-03-11 | 2021-09-03 | 西安电子科技大学 | Vibe moving target detection method based on gray level image feature matching |
CN110062202A (en) * | 2019-03-12 | 2019-07-26 | 国网浙江省电力有限公司杭州供电公司 | A kind of power system information acquisition radio alarm method based on image recognition |
CN110046601A (en) * | 2019-04-24 | 2019-07-23 | 南京邮电大学 | For the pedestrian detection method of crossroad scene |
CN110046601B (en) * | 2019-04-24 | 2023-04-07 | 南京邮电大学 | Pedestrian detection method for crossroad scene |
CN111982415A (en) * | 2019-05-24 | 2020-11-24 | 杭州海康威视数字技术股份有限公司 | Pipeline leakage detection method and device |
CN110427812A (en) * | 2019-06-21 | 2019-11-08 | 武汉倍特威视系统有限公司 | Colliery industry driving not pedestrian detection method based on video stream data |
CN110399905A (en) * | 2019-07-03 | 2019-11-01 | 常州大学 | The detection and description method of safety cap wear condition in scene of constructing |
CN110399905B (en) * | 2019-07-03 | 2023-03-24 | 常州大学 | Method for detecting and describing wearing condition of safety helmet in construction scene |
CN110517291A (en) * | 2019-08-27 | 2019-11-29 | 南京邮电大学 | A kind of road vehicle tracking based on multiple feature spaces fusion |
CN111368727B (en) * | 2020-03-04 | 2023-04-18 | 西安咏圣达电子科技有限公司 | Dressing detection method, storage medium, system and device for inspection personnel in power distribution room |
CN111368727A (en) * | 2020-03-04 | 2020-07-03 | 西安咏圣达电子科技有限公司 | Dressing detection method, storage medium, system and device for power distribution room inspection personnel |
CN111382726A (en) * | 2020-04-01 | 2020-07-07 | 浙江大华技术股份有限公司 | Engineering operation detection method and related device |
CN111382726B (en) * | 2020-04-01 | 2023-09-01 | 浙江大华技术股份有限公司 | Engineering operation detection method and related device |
CN112084986A (en) * | 2020-09-16 | 2020-12-15 | 国网福建省电力有限公司营销服务中心 | Real-time safety helmet detection method based on image feature extraction |
CN112364925B (en) * | 2020-11-16 | 2021-06-04 | 哈尔滨市科佳通用机电股份有限公司 | Deep learning-based rolling bearing oil shedding fault identification method |
CN112364925A (en) * | 2020-11-16 | 2021-02-12 | 哈尔滨市科佳通用机电股份有限公司 | Deep learning-based rolling bearing oil shedding fault identification method |
CN112488031A (en) * | 2020-12-11 | 2021-03-12 | 华能华家岭风力发电有限公司 | Safety helmet detection method based on color segmentation |
CN112613449A (en) * | 2020-12-29 | 2021-04-06 | 国网山东省电力公司建设公司 | Safety helmet wearing detection and identification method and system based on video face image |
CN113343818A (en) * | 2021-05-31 | 2021-09-03 | 湖北微特传感物联研究院有限公司 | Helmet identification method and device, computer equipment and readable storage medium |
CN113553979B (en) * | 2021-07-30 | 2023-08-08 | 国电汉川发电有限公司 | Safety clothing detection method and system based on improved YOLO V5 |
CN113553979A (en) * | 2021-07-30 | 2021-10-26 | 国电汉川发电有限公司 | Safety clothing detection method and system based on improved YOLO V5 |
CN113971829A (en) * | 2021-10-28 | 2022-01-25 | 广东律诚工程咨询有限公司 | Intelligent detection method, device, equipment and storage medium for wearing condition of safety helmet |
CN115631160A (en) * | 2022-10-19 | 2023-01-20 | 武汉海微科技有限公司 | LED lamp fault detection method, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106446926A (en) | Transformer station worker helmet wear detection method based on video analysis | |
CN102521565B (en) | Garment identification method and system for low-resolution video | |
CN108053427B (en) | Improved multi-target tracking method, system and device based on KCF and Kalman | |
CN108009473B (en) | Video structuralization processing method, system and storage device based on target behavior attribute | |
CN108052859B (en) | Abnormal behavior detection method, system and device based on clustering optical flow characteristics | |
CN103824070B (en) | A kind of rapid pedestrian detection method based on computer vision | |
CN104751634B (en) | The integrated application method of freeway tunnel driving image acquisition information | |
CN104951784B (en) | A kind of vehicle is unlicensed and license plate shading real-time detection method | |
CN102163290B (en) | Method for modeling abnormal events in multi-visual angle video monitoring based on temporal-spatial correlation information | |
CN102043945B (en) | License plate character recognition method based on real-time vehicle tracking and binary index classification | |
CN104504369B (en) | A kind of safety cap wear condition detection method | |
CN102855622B (en) | A kind of infrared remote sensing image sea ship detection method based on significance analysis | |
CN105160297B (en) | Masked man's event automatic detection method based on features of skin colors | |
CN110188724A (en) | The method and system of safety cap positioning and color identification based on deep learning | |
CN112396658B (en) | Indoor personnel positioning method and system based on video | |
US20130070969A1 (en) | Method and system for people flow statistics | |
CN109918971B (en) | Method and device for detecting number of people in monitoring video | |
CN103646250A (en) | Pedestrian monitoring method and device based on distance image head and shoulder features | |
CN106339657B (en) | Crop straw burning monitoring method based on monitor video, device | |
CN106128022A (en) | A kind of wisdom gold eyeball identification violent action alarm method and device | |
CN105844245A (en) | Fake face detecting method and system for realizing same | |
CN104732220A (en) | Specific color human body detection method oriented to surveillance videos | |
CN102915433A (en) | Character combination-based license plate positioning and identifying method | |
CN107133563A (en) | A kind of video analytic system and method based on police field | |
CN106548131A (en) | A kind of workmen's safety helmet real-time detection method based on pedestrian detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20170222 |