CN106156765A - safety detection method based on computer vision - Google Patents
Safety detection method based on computer vision
- Publication number
- CN106156765A CN106156765A CN201610782779.6A CN201610782779A CN106156765A CN 106156765 A CN106156765 A CN 106156765A CN 201610782779 A CN201610782779 A CN 201610782779A CN 106156765 A CN106156765 A CN 106156765A
- Authority
- CN
- China
- Prior art keywords
- image
- pedestrian
- computer vision
- safety detection
- safety
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
- G06V10/507—Summing image-intensity values; Histogram projection analysis
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computational Linguistics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Evolutionary Biology (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a computer-vision-based safety detection method, aimed at safety-detection tasks that demand both long-duration operation and high accuracy. The method first converts the original video frames to grayscale and normalizes the color space of the input image, adjusting image contrast, reducing the effects of local shadows and illumination changes, and suppressing noise. It then describes pedestrian features with a histogram of oriented gradients and classifies them with an SVM; once a pedestrian is found, the pedestrian's behavior is analyzed with deep learning, and a safety alert is issued if the behavior does not meet the safety standard. The invention can make full use of existing hardware and minimizes changes to the original system. Deep learning can capture finer image detail and achieves a higher recognition rate; a deep neural network is relatively insensitive to environmental noise and lighting, and once trained it can run in a variety of environments with good generalization.
Description
Technical field
The present invention relates to an image recognition method in the applied field of computer-vision-based activity recognition. Its purpose is to obtain stable and accurate detection results.
Background technology
Safety detection is an emerging technology in which a computer system replaces manual target detection. In safety detection, the computer acquires video, processes and analyzes the video information, finds independent targets in the video images, and detects and marks the target regions in subsequent frames. Visual analysis is an important processing step of a computer vision system. Computer vision draws on image processing, machine learning, pattern recognition, and other fields; its ultimate goal is to emulate human visual ability and complete recognition tasks. In academia, many scholars at institutions such as MIT, ETH Zurich, WSU, Intel, and Microsoft Research are actively exploring visual-analysis problems. In industry, computer-vision-based safety detection has already been applied to screen monitoring, driver assistance, intelligent robots, and other applications.
With the development of public safety and of science and technology, there is a growing need to monitor scenes by video and react to the information in the video images. Most monitoring tasks, however, require long duration and high accuracy. To complete such tasks more efficiently, computer vision methods have become an important research direction; more and more scholars have turned to the detection of target behavior, and substantial progress has been made.
Traditional pattern recognition typically uses two steps: feature extraction and feature classification. The raw input first enters a feature extraction module, which transforms the input pattern into features. Because feature extraction is tied directly to the specific application and does not interpret the image from the computer's point of view, it relies heavily on manual engineering, which greatly increases the difficulty. A convolutional neural network, as a feedforward neural network, can automatically learn from large amounts of labeled data and extract complex features from it; with only a small amount of preprocessing it can recognize visual patterns directly from pixels, generalizes well to varied recognition targets, and its recognition ability is not affected by image distortion or simple geometric transformations. Using a convolutional neural network for safety detection therefore yields stable and accurate detection results, and the problems found during detection can also be analyzed and classified.
Summary of the invention
The object of the present invention is to provide a detection method that combines computer vision with deep learning, to meet safety-detection requirements for both long-duration operation and high accuracy. The method describes pedestrian features with a histogram of oriented gradients (HOG) and classifies them with an SVM. After a pedestrian is found, the pedestrian's behavior is analyzed with deep learning and an analysis result is produced. The invention significantly improves precision while reducing the required manpower and hardware cost.
To solve the above problem, the technical scheme proposed by the present invention is a computer-vision-based safety detection method, comprising the following steps:
Step 1: capture original video data with a camera;
Step 2: convert the original image to grayscale and normalize the color space of the input image, adjusting image contrast, reducing the effects of local shadows and illumination changes, and suppressing noise;
Step 3: compute the gradient of each image pixel to capture contour information and further weaken the interference of illumination;
Step 4: divide the image into small cells and group every few cells into a block; concatenate the feature descriptors of all cells in a block to produce the block's HOG feature descriptor;
Step 5: concatenate the HOG feature descriptors of all blocks in the image to generate the image's HOG feature descriptor, and filter out pedestrian information in the image with a trained SVM (support vector machine) classifier;
Step 6: input the pedestrian image information filtered out above into a deep convolutional neural network to analyze the pedestrian's behavior; if the behavior does not meet the safety standard, send a safety alert.
Further, the color-space normalization in step 2 above is implemented with the Gamma correction method.
Further, the gradient in step 3 above includes both magnitude and direction information.
Further, building the convolutional-neural-network classifier of step 6 above comprises the following steps:
1. Build the data set.
2. Build the convolutional neural network, comprising the sub-steps:
1) determine the number of layers and the structure;
2) select the loss function;
3) determine the Dropout layer and the Dropout ratio;
4) select the output-layer equation.
3. Train the neural network, comprising the sub-steps:
1) initialize the weights;
2) set the iteration termination condition.
Compared with the prior art, the beneficial effects of the present invention are:
1. The invention can make full use of existing hardware, such as cameras and servers, so the scheme can be embedded in the original system as a plug-in, minimizing changes to the original system.
2. Compared with shallow machine-learning algorithms, deep learning can capture finer image detail and can understand the relationship between safety helmets of various colors and the human body, giving a higher recognition rate.
3. A deep neural network is relatively insensitive to environmental noise and lighting; once trained it can run in a variety of environments, with good generalization.
Accompanying drawing explanation
Fig. 1 is the structure chart of the convolutional neural network.
Fig. 2 is the flow chart of the method of the present invention.
Detailed description of the invention
Specific embodiments of the present invention are described in further detail below with reference to the accompanying drawings. In the computer-vision-based safety detection method proposed by the present invention, a camera first captures image information of the detection region in real time, and the gradient of each pixel of the captured image is computed. A trained support-vector-machine (SVM) classifier traverses the entire image to obtain the position of the pedestrian in the picture; the upper half of the pedestrian image is cropped and input into a trained convolutional neural network, which produces the final safety-behavior recognition result.
Method flow:
The present invention assumes that a pedestrian on site is in one of two states: correctly wearing a safety helmet, or not correctly wearing one. Not correctly wearing a safety helmet includes the case in which the helmet is held in the hand.
Refined further, the computer-vision-based safety detection method comprises the following steps:
Step 1:
Capture original video data with a camera.
Step 2:
Convert the original image to grayscale, and normalize the color space of the input image with the Gamma correction method, adjusting image contrast and reducing the effects of local shadows and illumination changes while suppressing noise.
Step 3:
Compute the gradient (including magnitude and direction) of each image pixel, primarily to capture contour information while further weakening the interference of illumination.
Step 4:
Divide the image into small cells (e.g., 6*6 pixels/cell) and group every few cells into a block (e.g., 3*3 cells/block); concatenating the feature descriptors of all cells in a block yields the block's HOG feature descriptor.
Step 5:
Concatenating the HOG feature descriptors of all blocks in the image yields the image's HOG feature descriptor; pedestrians are then filtered out with the trained SVM classifier.
Step 6:
Input the pedestrian image information into the deep convolutional neural network and analyze the pedestrian's behavior. If the behavior does not meet the safety standard, send a safety alert.
As shown in Fig. 1, one embodiment of the present invention comprises the following steps:
1. Extract features with HOG.
A. Grayscale conversion: the RGB color image collected by the camera is converted to a grayscale image with values 0-255.
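Step A above can be sketched as follows. This is an illustrative NumPy sketch, not the patent's code; it assumes the common ITU-R BT.601 luminance weights, which the patent does not specify:

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 RGB image (0-255) to a single-channel
    0-255 grayscale image using standard luminance weights."""
    rgb = np.asarray(rgb, dtype=np.float64)
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return gray.astype(np.uint8)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = [255, 255, 255]      # one white pixel, rest black
g = to_grayscale(img)
```

Because the three weights sum to 1.0, a pure white pixel maps to 255 and black stays 0.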
B. Use the Gamma correction algorithm (I(x, y) = I(x, y)^gamma) to normalize the color space of the input image, adjusting image contrast and reducing the effects of local shadows and illumination changes while suppressing noise.
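A minimal sketch of the power-law correction in step B, assuming intensities are first scaled to [0, 1] before the exponent is applied (the patent does not specify the scaling):

```python
import numpy as np

def gamma_correct(gray, gamma=0.5):
    """Apply Gamma correction I(x,y)^gamma to a 0-255 grayscale image.
    Normalizing to [0,1] first keeps the output in range; gamma < 1
    brightens shadows, gamma > 1 darkens them."""
    g = np.asarray(gray, dtype=np.float64) / 255.0
    return np.uint8(np.clip(g ** gamma * 255.0, 0, 255))

gray = np.array([[0, 64], [128, 255]], dtype=np.uint8)
out = gamma_correct(gray, gamma=0.5)   # mid-tones are lifted
```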
C. image gradient is calculated.
Calculate image abscissa and the gradient in vertical coordinate direction, and calculate the gradient direction value of each location of pixels accordingly.
Pixel in image (x, gradient y) is:
Gx(x, y)=H (and x+1, y)-H (x-1, y)
Gy(x, y)=H (x, y+1)-H (x, y-1)
G in formulax=(x, y), Gy(x, y), (x y) represents pixel (x, y) horizontal direction at place in input picture to H respectively
Gradient, vertical gradient and pixel value.(x, y) gradient magnitude and the gradient direction at place is respectively as follows: pixel
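The central-difference formulas above can be implemented directly. A NumPy sketch (border pixels are left at zero, an assumption the patent does not address):

```python
import numpy as np

def image_gradients(H):
    """Gx(x,y) = H(x+1,y) - H(x-1,y), Gy(x,y) = H(x,y+1) - H(x,y-1),
    with H indexed as H[row, col] = H[y, x]."""
    H = np.asarray(H, dtype=np.float64)
    Gx = np.zeros_like(H)
    Gy = np.zeros_like(H)
    Gx[:, 1:-1] = H[:, 2:] - H[:, :-2]    # horizontal difference
    Gy[1:-1, :] = H[2:, :] - H[:-2, :]    # vertical difference
    mag = np.hypot(Gx, Gy)                # gradient magnitude
    ang = np.degrees(np.arctan2(Gy, Gx))  # gradient direction
    return Gx, Gy, mag, ang

# A ramp that brightens left to right: a pure horizontal gradient.
H = np.tile(np.arange(5) * 10.0, (5, 1))
Gx, Gy, mag, ang = image_gradients(H)
```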
D. Build the image descriptor.
Divide the image into small cells (e.g., 6*6 pixels/cell) and accumulate the gradient information of each cell's 6*6 pixels in a 9-direction histogram, i.e., divide the cell's gradient directions into 9 direction bins.
Combine cells into larger blocks and normalize the gradient histograms within each block. Concatenating the feature vectors of all cells in a block yields the block's feature vector, and these are used for SVM classification.
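A simplified sketch of the cell-histogram and block-normalization steps described above, using the 6*6-pixel cell and 3*3-cell block examples. It omits the bilinear vote interpolation a full HOG implementation would use, so it is illustrative only:

```python
import numpy as np

def cell_histograms(mag, ang, cell=6, bins=9):
    """9-bin unsigned (0-180 degree) orientation histogram per cell,
    with each pixel's vote weighted by its gradient magnitude."""
    h, w = mag.shape
    ch, cw = h // cell, w // cell
    hist = np.zeros((ch, cw, bins))
    bin_idx = np.minimum(((ang % 180.0) / (180.0 / bins)).astype(int), bins - 1)
    for i in range(ch):
        for j in range(cw):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            hist[i, j] = np.bincount(b.ravel(), weights=m.ravel(), minlength=bins)
    return hist

def block_descriptor(hist, block=3):
    """Concatenate and L2-normalize every block x block group of cells,
    then chain all block vectors into the image's HOG descriptor."""
    ch, cw, bins = hist.shape
    feats = []
    for i in range(ch - block + 1):
        for j in range(cw - block + 1):
            v = hist[i:i+block, j:j+block].ravel()
            feats.append(v / (np.linalg.norm(v) + 1e-6))
    return np.concatenate(feats)

mag = np.ones((18, 18))            # toy image: uniform gradients
ang = np.zeros((18, 18))           # all pointing in direction 0
hist = cell_histograms(mag, ang)   # 3 x 3 cells, 9 bins each
desc = block_descriptor(hist)      # one 3x3 block -> 81 values
```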
2. Build the SVM classifier.
Choose 1000 pedestrian images as positive samples and 1000 non-pedestrian images as negative samples, and normalize them to 64*128 pixels. Obtaining the feature descriptor of every picture by the method in step 1 gives a two-class training set of 2000 samples. The labeled training set is fed to the SVM for training; after several rounds of training, a trained model is obtained.
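The SVM training step might be sketched as follows. In practice a library SVM would be used on the real HOG descriptors; this hand-rolled hinge-loss sub-gradient trainer on toy separable data is only a stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the 2000-sample HOG training set: two separable
# 5-dimensional clusters ("pedestrian" vs "non-pedestrian").
X = np.vstack([rng.normal(2.0, 0.5, (100, 5)),
               rng.normal(-2.0, 0.5, (100, 5))])
y = np.hstack([np.ones(100), -np.ones(100)])

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Primal linear SVM trained by sub-gradient descent on the
    hinge loss plus an L2 penalty on the weights."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        mask = y * (X @ w + b) < 1        # margin-violating samples
        grad_w = lam * w - (y[mask][:, None] * X[mask]).sum(axis=0) / n
        grad_b = -y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = train_linear_svm(X, y)
acc = (np.sign(X @ w + b) == y).mean()    # training accuracy
```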
3. Build the convolutional-neural-network classifier.
A. Build the data set.
Choose 1200 photos of pedestrians wearing safety helmets, crop the upper body, and cut them into square 128*128 pictures as positive samples; then choose 1200 photos of pedestrians without safety helmets and crop them in the same way as negative samples. Horizontally reflecting the samples doubles the data set. All photos are labeled: positive samples 1, negative samples 0. All samples are shuffled, and 800 labeled samples are selected at random as test samples.
B. Overall structure of the convolutional neural network.
As shown in Fig. 1, the network has 10 layers: three convolutional layers, each followed by a pooling layer; the last pooling layer is followed by a 1*1 convolutional layer and then a fully connected hidden layer, whose output is sent to a 2-dimensional Softmax layer that produces a probability distribution over the two labels. The network maximizes the multinomial logistic-regression objective, which is equivalent to maximizing the average log-probability of the correct label under the prediction distribution over the training samples.
C. Selection of the loss function.
The loss is the L2-regularized cross-entropy (reconstructed here from the symbols the text defines):
C = -(1/n) * Σ [ y ln a + (1 - y) ln(1 - a) ] + (λ / 2n) * Σ w²
where n is the number of training examples, λ > 0 is the regularization parameter, y is the sample's true value, x is the input, w are the weights of the current network, and a is the value computed by the network's forward propagation. Shrinking the weights in proportion (weight decay) encourages smaller weights and reduces the effect of noise.
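The loss above can be evaluated directly. An illustrative NumPy sketch with toy values (the variable names follow the symbols in the text):

```python
import numpy as np

def regularized_cross_entropy(a, y, w, lam, n):
    """C = -(1/n) * sum[y ln a + (1-y) ln(1-a)] + (lam/2n) * sum w^2,
    where a is the forward-pass output, y the label, w all weights,
    and lam the regularization parameter."""
    a = np.clip(a, 1e-12, 1.0 - 1e-12)   # guard against log(0)
    ce = -np.mean(y * np.log(a) + (1.0 - y) * np.log(1.0 - a))
    l2 = (lam / (2.0 * n)) * np.sum(w ** 2)
    return ce + l2

a = np.array([0.9, 0.1])     # network outputs for two samples
y = np.array([1.0, 0.0])     # their labels
w = np.array([1.0, -1.0])    # toy weight vector
loss = regularized_cross_entropy(a, y, w, lam=0.1, n=2)
```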
D. Weight initialization.
For each layer of the Fig. 1 network, build an RBM (restricted Boltzmann machine) with the same input dimension and depth. The data set is fed to the RBMs for training; after 100 rounds of training, the trained weights and biases of each layer are extracted and placed in the corresponding positions of the convolutional neural network.
E. Dropout is applied to the layer before the output layer to reduce overfitting.
While the iteration condition holds:
Step 1: randomly select half of the hidden-layer neurons and delete them;
Step 2: perform a forward pass and a backward update on the pruned network;
Step 3: restore the deleted neurons, and repeat Steps 1-2 until the iteration condition no longer holds.
Each neuron of the final network has thus been trained in the presence of only half of the other neurons. When iteration ends, all hidden-layer weights are halved. Because Dropout randomly discards half of the hidden neurons each time, it is equivalent to training many different networks; it reduces co-adaptation, so no neuron can rely on a particular few others, forcing the network to learn features that are more robust in combination with other neurons.
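The Dropout procedure above, applied to a layer's activations, can be sketched as follows (a simplified NumPy illustration, not the patent's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

def dropout_forward(h, p=0.5, train=True):
    """During training, randomly delete a fraction p of the hidden
    activations; at test time keep every unit but scale by (1 - p),
    which for p = 0.5 is the 'halve all hidden-layer weights' rule."""
    if train:
        mask = rng.random(h.shape) >= p   # True = neuron kept
        return h * mask
    return h * (1.0 - p)

h = np.ones(1000)                          # toy hidden-layer activations
train_out = dropout_forward(h, train=True)
test_out = dropout_forward(h, train=False)
```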
F. Softmax output equation of the output layer.
The final output must be divided into two classes, i.e., safe behavior and unsafe behavior. Given the outputs z_j of the layer before the output layer, the output layer computes the safe and unsafe probabilities with the softmax equation
a_j = e^{z_j} / Σ_k e^{z_k}
and the class with the largest probability is taken as the detection result.
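The two-class softmax output can be sketched as follows (the raw scores here are hypothetical):

```python
import numpy as np

def softmax(z):
    """a_j = e^{z_j} / sum_k e^{z_k}; subtracting the max first is a
    standard numerical-stability trick that leaves the result unchanged."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

z = np.array([2.0, 0.5])    # hypothetical raw scores for [safe, unsafe]
p = softmax(z)
pred = int(np.argmax(p))    # index of the most probable class
```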
G. Training process.
Stochastic gradient descent is used with batch size 128, momentum 0.9, and weight decay 0.0005. The update rule for a weight w is
v_{i+1} = 0.9 · v_i - 0.0005 · ε · w_i - ε · ∂L/∂w_i
w_{i+1} = w_i + v_{i+1}
where i is the iteration number, v is the momentum variable, and ε is the learning rate.
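One such update step can be sketched numerically. The velocity form below follows the AlexNet-style rule with the momentum and weight-decay values stated in step G; since the original text shows only w_{i+1} = w_i + v_{i+1}, the exact velocity equation is an assumption:

```python
import numpy as np

def sgd_momentum_step(w, v, grad, lr=0.01, momentum=0.9, decay=0.0005):
    """One SGD update with momentum 0.9 and weight decay 0.0005:
    v <- momentum*v - decay*lr*w - lr*grad, then w <- w + v."""
    v = momentum * v - decay * lr * w - lr * grad
    return w + v, v

w = np.array([1.0])
v = np.zeros(1)
grad = np.array([2.0])       # hypothetical gradient of the loss w.r.t. w
w, v = sgd_momentum_step(w, v, grad)
```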
Stage 1, forward propagation:
Take a sample (X, Y_p) from the sample set and input X into the network;
Compute the corresponding actual output O_p. In this stage, information is transformed step by step from the input layer to the output layer. This is also the process the network performs in normal operation after training. In this process the network computes (in effect, dot products of the input with each layer's weight matrix, giving the final output result):
O_p = F_n( … ( F_2( F_1( X_p · w(1) ) · w(2) ) … ) · w(n) )
Stage 2, back propagation:
a) Compute the difference between the actual output O_p and the corresponding ideal output Y_p;
b) Adjust the weights by back-propagating so as to minimize the error.
Termination condition:
Training of the neural network ends when either of two conditions is met: 1. the iteration reaches a set number of rounds; 2. the learning rate falls below a set value and remains stable for a set number of rounds.
The foregoing is only a specific embodiment of the present invention and is not intended to limit the present invention; any modification, equivalent substitution, or improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (4)
1. A computer-vision-based safety detection method, characterized in that the method comprises the following steps:
Step 1: capture original video data with a camera;
Step 2: convert the original image to grayscale and normalize the color space of the input image, adjusting image contrast, reducing the effects of local shadows and illumination changes, and suppressing noise;
Step 3: compute the gradient of each image pixel to capture contour information and further weaken the interference of illumination;
Step 4: divide the image into small cells and group every few cells into a block; concatenate the feature descriptors of all small cells in a block to produce the block's HOG feature descriptor;
Step 5: concatenate the HOG feature descriptors of all blocks in the image to generate the image's HOG feature descriptor, and filter out pedestrian information in the image with a trained support vector machine classifier;
Step 6: input the pedestrian image information filtered out above into a deep convolutional neural network to analyze the pedestrian's behavior; if the behavior does not meet the safety standard, send a safety alert.
2. The computer-vision-based safety detection method according to claim 1, characterized in that the color-space normalization in step 2 is implemented with the gamma correction method.
3. The computer-vision-based safety detection method according to claim 1, characterized in that the gradient in step 3 includes magnitude and direction information.
4. The computer-vision-based safety detection method according to claim 1, characterized in that building the convolutional-neural-network classifier of step 6 comprises the following steps:
(1) build the data set;
(2) build the convolutional neural network, comprising the sub-steps:
a. determine the number of layers and the structure;
b. select the loss function;
c. determine the Dropout layer and the Dropout ratio;
d. select the output-layer equation;
(3) train the neural network, comprising the sub-steps:
a. initialize the weights;
b. set the iteration termination condition.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610782779.6A CN106156765A (en) | 2016-08-30 | 2016-08-30 | safety detection method based on computer vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610782779.6A CN106156765A (en) | 2016-08-30 | 2016-08-30 | safety detection method based on computer vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106156765A true CN106156765A (en) | 2016-11-23 |
Family
ID=57344710
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610782779.6A Pending CN106156765A (en) | 2016-08-30 | 2016-08-30 | safety detection method based on computer vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106156765A (en) |
-
2016
- 2016-08-30 CN CN201610782779.6A patent/CN106156765A/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150117760A1 (en) * | 2013-10-30 | 2015-04-30 | Nec Laboratories America, Inc. | Regionlets with Shift Invariant Neural Patterns for Object Detection |
CN105279485A (en) * | 2015-10-12 | 2016-01-27 | 江苏精湛光电仪器股份有限公司 | Detection method for monitoring abnormal behavior of target under laser night vision |
CN105590099A (en) * | 2015-12-22 | 2016-05-18 | 中国石油大学(华东) | Multi-user behavior identification method based on improved convolutional neural network |
CN105631415A (en) * | 2015-12-25 | 2016-06-01 | 中通服公众信息产业股份有限公司 | Video pedestrian recognition method based on convolution neural network |
Non-Patent Citations (2)
Title |
---|
Yang Xiao, "Loss Function", China Master's Theses Full-text Database, Information Science and Technology Series *
Deng Li et al., "Deep Learning: Methods and Applications", 31 March 2016, Beijing: China Machine Press *
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106781198A (en) * | 2016-12-31 | 2017-05-31 | 马宏林 | A kind of kitchen pre-alarm system |
CN106909887A (en) * | 2017-01-19 | 2017-06-30 | 南京邮电大学盐城大数据研究院有限公司 | A kind of action identification method based on CNN and SVM |
CN107454364A (en) * | 2017-06-16 | 2017-12-08 | 国电南瑞科技股份有限公司 | The distributed real time image collection and processing system of a kind of field of video monitoring |
CN107454364B (en) * | 2017-06-16 | 2020-04-24 | 国电南瑞科技股份有限公司 | Distributed real-time image acquisition and processing system in video monitoring field |
CN110832505A (en) * | 2017-07-04 | 2020-02-21 | 罗伯特·博世有限公司 | Image analysis processing with target-specific preprocessing |
CN108021926A (en) * | 2017-09-28 | 2018-05-11 | 东南大学 | A kind of vehicle scratch detection method and system based on panoramic looking-around system |
CN108012121A (en) * | 2017-12-14 | 2018-05-08 | 安徽大学 | A kind of edge calculations and the real-time video monitoring method and system of cloud computing fusion |
CN110119656A (en) * | 2018-02-07 | 2019-08-13 | 中国石油化工股份有限公司 | Intelligent monitor system and the scene monitoring method violating the regulations of operation field personnel violating the regulations |
CN108319934A (en) * | 2018-03-20 | 2018-07-24 | 武汉倍特威视系统有限公司 | Safety cap wear condition detection method based on video stream data |
CN108460358A (en) * | 2018-03-20 | 2018-08-28 | 武汉倍特威视系统有限公司 | Safety cap recognition methods based on video stream data |
CN109726652A (en) * | 2018-12-19 | 2019-05-07 | 杭州叙简科技股份有限公司 | A method of based on convolutional neural networks detection operator on duty's sleep behavior |
CN109919182A (en) * | 2019-01-24 | 2019-06-21 | 国网浙江省电力有限公司电力科学研究院 | A kind of terminal side electric power safety operation image-recognizing method |
CN109919182B (en) * | 2019-01-24 | 2021-10-22 | 国网浙江省电力有限公司电力科学研究院 | Terminal side electric power safety operation image identification method |
CN110135514A (en) * | 2019-05-22 | 2019-08-16 | 国信优易数据有限公司 | A kind of workpiece classification method, device, equipment and medium |
CN110717466A (en) * | 2019-10-15 | 2020-01-21 | 中国电建集团成都勘测设计研究院有限公司 | Method for returning position of safety helmet based on face detection frame |
CN110717466B (en) * | 2019-10-15 | 2023-06-20 | 中国电建集团成都勘测设计研究院有限公司 | Method for returning to position of safety helmet based on face detection frame |
CN112465028A (en) * | 2020-11-27 | 2021-03-09 | 南京邮电大学 | Perception vision security assessment method and system |
CN112465028B (en) * | 2020-11-27 | 2023-11-14 | 南京邮电大学 | Perception visual safety assessment method and system |
CN114913619A (en) * | 2022-04-08 | 2022-08-16 | 华能苏州热电有限责任公司 | Intelligent mobile inspection method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106156765A (en) | safety detection method based on computer vision | |
CN109359559B (en) | Pedestrian re-identification method based on dynamic shielding sample | |
CN110532900B (en) | Facial expression recognition method based on U-Net and LS-CNN | |
CN105825511B (en) | A kind of picture background clarity detection method based on deep learning | |
CN108830188A (en) | Vehicle checking method based on deep learning | |
CN107506722A (en) | One kind is based on depth sparse convolution neutral net face emotion identification method | |
CN107767405A (en) | A kind of nuclear phase for merging convolutional neural networks closes filtered target tracking | |
CN106815566A (en) | A kind of face retrieval method based on multitask convolutional neural networks | |
CN107480730A (en) | Power equipment identification model construction method and system, the recognition methods of power equipment | |
CN106845351A (en) | It is a kind of for Activity recognition method of the video based on two-way length mnemon in short-term | |
CN106682697A (en) | End-to-end object detection method based on convolutional neural network | |
CN106803069A (en) | Crowd's level of happiness recognition methods based on deep learning | |
CN107945153A (en) | A kind of road surface crack detection method based on deep learning | |
CN106446930A (en) | Deep convolutional neural network-based robot working scene identification method | |
CN106096602A (en) | A kind of Chinese licence plate recognition method based on convolutional neural networks | |
CN108960189A (en) | Image recognition methods, device and electronic equipment again | |
CN102831411B (en) | A kind of fast face detecting method | |
CN107301376B (en) | Pedestrian detection method based on deep learning multi-layer stimulation | |
CN106709528A (en) | Method and device of vehicle reidentification based on multiple objective function deep learning | |
CN111507227B (en) | Multi-student individual segmentation and state autonomous identification method based on deep learning | |
CN110956158A (en) | Pedestrian shielding re-identification method based on teacher and student learning frame | |
CN110909672A (en) | Smoking action recognition method based on double-current convolutional neural network and SVM | |
CN105404865A (en) | Probability state restricted Boltzmann machine cascade based face detection method | |
CN105809119A (en) | Sparse low-rank structure based multi-task learning behavior identification method | |
CN114492634B (en) | Fine granularity equipment picture classification and identification method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20161123 |