CN106228137A - ATM abnormal face detection method based on key point localization - Google Patents
ATM abnormal face detection method based on key point localization
- Publication number
- CN106228137A CN106228137A CN201610593931.6A CN201610593931A CN106228137A CN 106228137 A CN106228137 A CN 106228137A CN 201610593931 A CN201610593931 A CN 201610593931A CN 106228137 A CN106228137 A CN 106228137A
- Authority
- CN
- China
- Prior art keywords
- face
- key point
- image
- atm
- point location
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07F—COIN-FREED OR LIKE APPARATUS
- G07F19/00—Complete banking systems; Coded card-freed arrangements adapted for dispensing or receiving monies or the like and posting such transactions to existing accounts, e.g. automatic teller machines
- G07F19/20—Automatic teller machines [ATMs]
- G07F19/209—Monitoring, auditing or diagnose of functioning of ATMs
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Business, Economics & Management (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Accounting & Taxation (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Finance (AREA)
- Image Analysis (AREA)
Abstract
The present invention discloses an ATM abnormal face detection method based on key point localization, the method comprising the following steps: (1) image preprocessing: acquire a real-time surveillance frame, remove noise from the image, and apply enhancement to video images captured under complex illumination conditions; (2) face detection: detect whether the image contains a face, and obtain the position and size of the face; (3) key point localization: detect the positions of the facial key points and obtain a response value for each key point; (4) abnormal face discrimination: feed the vector of key point response values into a classifier, which determines from that vector whether the face is abnormal. The method monitors the frontal-face video feed of a bank ATM in real time, detects whether a customer is disguising his or her face, and raises a timely alarm.
Description
Technical field
The present invention relates to face detection methods, and more particularly to an ATM abnormal face detection method based on key point localization.
Background technology
With China's rapid economic development, the banking industry has expanded quickly and the number of ATMs keeps growing; they are now found throughout the country and are part of everyday life. ATMs offer functions such as automatic withdrawal, deposit, and remittance, and operate unattended. While this makes cash access convenient and efficient, it has also led to a steady increase in disputes and financial crimes around ATMs, which on the one hand cause cardholders heavy property losses and on the other hand disrupt the normal working order of financial institutions.
Video surveillance is an important component of security systems and offers strong preventive capability. Because it is intuitive, accurate, timely, and information-rich, it is widely used in many settings. Most ATMs today are equipped with cameras that monitor the machine and its surroundings. To evade this surveillance, however, suspects often disguise their faces: wearing motorcycle helmets, sunglasses, or masks, or otherwise covering their facial features. When evidence is gathered after the fact, no clear, recognizable image of the suspect's face can be obtained, which greatly hampers the investigation.
If it could be detected in time whether a customer is disguising his or her face while using an ATM, the customer could be prompted to remove the covering; if the customer refuses, the ATM could deny service. This would not only significantly deter such crimes, but would also provide a clear, recognizable facial image of the suspect for the investigation after a case occurs, easing the burden of solving it.
Existing ATM surveillance systems, however, generally rely on humans to watch the monitoring video, and human monitoring capacity is limited: studies have shown that one person can watch at most 16 video channels at once. A single bank may operate dozens or hundreds of ATMs, with hundreds or even thousands of corresponding video channels, so achieving good coverage by manually watching the video is impossible. Moreover, watching surveillance video for long periods causes visual fatigue and lapses of attention, making it easy to miss important information. This places higher demands on ATM surveillance systems: automated means of analyzing the video and promptly detecting facial disguise are urgently needed in order to make ATM surveillance more intelligent.
Summary of the invention
The technical problem to be solved by the present invention is to provide an ATM abnormal face detection method based on key point localization that monitors the frontal-face video feed of a bank ATM in real time, detects whether a customer is disguising his or her face, and raises a timely alarm.
To solve the above technical problem, the present invention adopts the following technical scheme: an ATM abnormal face detection method based on key point localization, the method comprising the following steps:
(1) image preprocessing: acquire a real-time surveillance frame, remove noise from the image, and apply enhancement to video images captured under complex illumination conditions;
(2) face detection: detect whether the image contains a face, and obtain the position and size of the face;
(3) key point localization: detect the positions of the facial key points and obtain a response value for each key point;
(4) abnormal face discrimination: feed the vector of key point response values into a classifier, which determines from that vector whether the face is abnormal.
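For illustration only, the four-stage flow above can be sketched as a chain of stage functions. Every function body below (the fixed face box, the constant response values, the simple threshold rule) is a placeholder assumption standing in for the patent's actual networks and classifier, not the real implementation:

```python
def preprocess(frame):
    # Stage 1: denoising and illumination normalization (identity placeholder)
    return frame

def detect_faces(frame):
    # Stage 2: return a list of (x, y, w, h) face boxes; one fixed box here
    return [(0, 0, len(frame[0]), len(frame))]

def locate_keypoints(frame, box):
    # Stage 3: per-keypoint response values for the five facial landmarks
    return {"left_eye": 0.9, "right_eye": 0.9, "nose": 0.8,
            "left_mouth": 0.7, "right_mouth": 0.7}

def classify(responses, threshold=0.5):
    # Stage 4: call a face "abnormal" when any key landmark responds weakly
    # (the patent feeds the response vector to an SVM instead of thresholding)
    return any(v < threshold for v in responses.values())

def check_frame(frame):
    """Run all four stages on one surveillance frame; one verdict per face."""
    frame = preprocess(frame)
    verdicts = []
    for box in detect_faces(frame):
        verdicts.append(classify(locate_keypoints(frame, box)))
    return verdicts
```

The point of the sketch is the data flow: each stage consumes the previous stage's output, so any component (filter, detector, localizer, classifier) can be swapped out independently, which matches the substitutions listed in Embodiment 2.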
Further, step (1) comprises:
(11) applying a median-filter local smoothing technique to remove isolated noise points from the image;
(12) applying histogram equalization, which changes the distribution of pixel gray levels so as to balance the image's gray-level distribution, dimming locally over-bright regions and brightening over-dark ones.
Further, step (2) uses a cascaded convolutional neural network (CNN) to detect faces in the image, and comprises:
(21) inputting an image to be detected and densely scanning the whole image with windows of different scales, producing many candidate face boxes;
(22) resizing all candidate face boxes to 12*12 resolution and passing them through the 12-scale discrimination network, which binary-classifies the candidates and rejects 90% of the false face boxes;
(23) passing the remaining boxes through the 12-scale calibration network, which adjusts their size and position to obtain better face boxes near the originals, while a non-maximum suppression (NMS) algorithm removes boxes that overlap excessively;
(24) resizing the remaining boxes to 24*24 resolution and passing them through the 24-scale discrimination network, rejecting close to 90% of the remaining false boxes;
(25) passing the remaining boxes through the 24-scale calibration network to adjust their size and position, again applying NMS to remove excessively overlapping boxes;
(26) resizing the remaining boxes to 48*48 resolution and passing them through the 48-scale discrimination network, again applying NMS to remove excessively overlapping boxes;
(27) passing the remaining boxes through the 48-scale calibration network and outputting them as the final detection result.
Further, step (3) comprises:
(31) feeding the entire image into a trunk network that handles all key point types and outputs low-resolution response images, coarsely locating the key points;
(32) extracting, for each key point coarsely located in the response image, the corresponding image patch from the original image, centered on that key point;
(33) feeding each patch into a branch network that outputs a high-resolution response image, precisely locating the key point.
Further, the key points comprise five points: the left eye, right eye, nose tip, left mouth corner, and right mouth corner.
Further, the trunk network is formed by alternately stacking three convolutional layers and three max-pooling layers, and each branch network consists of four convolutional layers.
Preferably, the classifier is a support vector machine (SVM) whose parameters are trained on samples from the AR and ORL databases.
Preferably, step (1) uses mean filtering, Gaussian smoothing, or wavelet filtering to remove noise from the image, and uses gray-level transformation, gray-level correction, or histogram specification to enhance the image.
Preferably, step (2) performs face detection with a method based on geometric features, template matching (predefined templates), or statistical theory; and step (3) detects the facial key point positions and obtains the key point response values with a method based on prior rules, geometric shape information, appearance information, or gray-level information.
Preferably, step (4) discriminates abnormal faces with a nearest-neighbor classifier, a linear classifier, naive Bayes, logistic regression, a decision tree, or Adaboost.
The beneficial effects of the invention are as follows: a key point's response value is high when its feature is distinct and low when it is not, so the method can quickly and accurately detect whether an abnormal face appears in the real-time video, i.e. whether a customer is disguising his or her face, and raise an alarm if so.
Brief description of the drawings
The detailed embodiments of the present invention are described below with reference to the accompanying drawings, in which:
Fig. 1 is a schematic diagram of the steps of the ATM abnormal face detection method based on key point localization of the present invention;
Fig. 2 is a schematic diagram of the face detection network structure of the ATM abnormal face detection method based on key point localization of the present invention.
Detailed description of the invention
Embodiment 1
Image preprocessing: many ATMs are installed in complex environments with numerous sources of interference, so the video captured by the ATM's camera contains shadows, noise, and other unfavorable artifacts caused by illumination conditions, background, and similar factors. Preprocessing the captured video is therefore necessary, and must address both the noise contained in the video and the interference caused by environmental factors such as illumination. Accordingly, the preprocessing of the ATM surveillance video in the present invention comprises two parts: first, video denoising; second, enhancement of video images captured under complex illumination conditions.
(1) Noise in video images has many causes. One class is spatial noise, a random error distributed in space: two pixels given identical color and luminance end up with different pixel values. Another class is temporal noise, a random error distributed in time: the pixel at the same position is given different values in different time periods despite receiving the same color and luminance, and the resulting difference between the true value and the pixel value is the temporal noise. Many methods exist for filtering noise, such as mean filtering, median filtering, Gaussian smoothing, and wavelet filtering; the present invention uses median filtering.
Median filtering is a nonlinear local smoothing technique that can not only remove isolated noise points but also preserve the local edges of the original image. Its basic principle is to replace each pixel's gray value with the median of the gray values of the pixels in its neighborhood. The procedure is as follows: for the pixel matrix of an image, take a matrix window centered on the target pixel (typically 3 × 3), sort the gray values of the pixels in the window, and use the middle value as the new gray value of the target pixel. Because noise points are generally random outliers, they end up at the head or tail of the sorted order, while the middle value is a normal pixel value; median filtering can therefore remove such random noise points effectively to a considerable degree without damaging the image's edge information.
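The 3 × 3 median-filter procedure described above can be sketched in a few lines of plain Python. This is a minimal sketch, not the patent's implementation; handling of border pixels (left unchanged here) is an assumption of this example:

```python
def median_filter_3x3(img):
    """3x3 median filter; img is a list of equal-length rows of gray values.
    Border pixels are left unchanged for simplicity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Collect the 9 gray values of the window centered on (y, x)
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            window.sort()
            out[y][x] = window[4]   # middle of the 9 sorted values
    return out
```

A single bright outlier in a flat region sorts to the tail of the window and is replaced by a normal neighboring value, which is exactly the behavior the text describes.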
(2) For images captured in real-world surroundings, illumination is one of the key factors affecting face detection: the same feature may look very different under different lighting, which can cause false detections and similar problems. The traditional way to reduce the adverse effect of illumination on subsequent processing is to enlarge the sample set by adding samples taken under various lighting conditions, but this increases the computational load, and making the sample library cover all lighting conditions is hard to achieve. To better handle changes in illumination, the present invention preprocesses the image with histogram equalization.
Histogram equalization changes the distribution of pixel gray levels so as to balance the image's gray-level distribution, dimming locally over-bright regions and brightening over-dark ones. The gray-level histogram is an explicit account of the gray values in an image: the frequency, or frequency statistics, with which each gray value occurs. As the simplest direct tool in image processing, the histogram records not only the gray levels present in an image but also the number of pixels at each gray value. The histogram of any image carries rich information, and some special images can be characterized by their histogram alone. The goal of histogram equalization is to flatten the histogram so that every gray level occurs with roughly the same frequency, i.e. with a more uniform probability distribution, thereby improving the subjective quality of the image. After equalization, the influence of illumination on the image is weakened: the gray-level range widens and becomes evenly distributed, removing the interference caused by over-bright or over-dark regions.
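The equalization described above is the standard cumulative-distribution-function remapping; the following is a minimal sketch of that textbook algorithm in plain Python, not the patent's code:

```python
def equalize_histogram(img, levels=256):
    """Histogram equalization via the cumulative distribution function (CDF).
    img is a list of equal-length rows of integer gray values in [0, levels)."""
    h, w = len(img), len(img[0])
    n = h * w
    hist = [0] * levels
    for row in img:                      # count occurrences of each gray level
        for v in row:
            hist[v] += 1
    cdf, total = [0] * levels, 0
    for g in range(levels):              # running sum = cumulative distribution
        total += hist[g]
        cdf[g] = total
    cdf_min = next(c for c in cdf if c > 0)
    scale = (levels - 1) / max(n - cdf_min, 1)
    # Look-up table mapping each old gray level to its equalized level
    lut = [round((cdf[g] - cdf_min) * scale) for g in range(levels)]
    return [[lut[v] for v in row] for row in img]
```

A dark, low-contrast image whose values cluster in a narrow band gets stretched across the full gray range, which is the "flattened histogram" effect the text describes.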
Face detection: face detection determines whether the input image contains a face and, if so, returns information such as the position and size of the face in the image.
The present invention uses a cascaded convolutional neural network (CNN) for face detection. Compared with the hand-designed face features of traditional algorithms, a convolutional neural network can learn face features automatically from complex visual variation by training on large amounts of data, and its computation can be accelerated with GPUs and multithreading.
To improve detection, the image is divided at multiple scales, yielding many candidate face boxes. Detection then proceeds in two stages: a discrimination stage and a calibration stage. After each candidate box is discriminated, a calibration stage improves the accuracy of the box and reduces the total number of boxes. CNNs at three scales are used: 12, 24, and 48. Each scale comprises two CNNs, a discrimination network and a calibration network. False face boxes are rejected quickly at the small, low-resolution scale and verified carefully at the large, high-resolution scale. The network architecture is shown in the accompanying drawing; the detailed steps are as follows:
(1) Input an image to be detected and densely scan the whole image with windows of different scales, producing many candidate face boxes.
(2) Resize all candidate boxes to 12*12 resolution and pass them through the 12-scale discrimination network, which binary-classifies the candidates and rejects 90% of the false face boxes.
(3) Pass the remaining boxes through the 12-scale calibration network, which adjusts their size and position to obtain better face boxes near the originals, while a non-maximum suppression (NMS) algorithm removes boxes that overlap excessively.
(4) Resize the remaining boxes to 24*24 resolution and pass them through the 24-scale discrimination network, rejecting close to 90% of the remaining false boxes.
(5) Pass the remaining boxes through the 24-scale calibration network to adjust their size and position, again applying NMS to remove excessively overlapping boxes.
(6) Resize the remaining boxes to 48*48 resolution and pass them through the 48-scale discrimination network, again applying NMS to remove excessively overlapping boxes.
(7) Pass the remaining boxes through the 48-scale calibration network and output them as the final detection result.
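The NMS step used after each calibration stage is standard greedy non-maximum suppression over intersection-over-union (IoU). A minimal sketch, assuming (x1, y1, x2, y2) boxes and a 0.5 overlap threshold (the patent does not specify its threshold):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes overlapping it
    by more than `thresh`, and repeat on the remainder. Returns kept indices."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thresh]
    return keep
```

Two near-duplicate detections of the same face collapse into the higher-scoring one, while a distant box for a second face survives, which is the "remove excessively overlapping boxes" behavior in steps (3), (5), and (6).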
Key point localization: key point localization, also called feature point detection, locates the key regions of a face, including the eyebrows, eyes, nose, mouth, facial contour, and so on.
The present invention uses a CNN with a trunk and branches for key point localization. The trunk network quickly produces a coarse, global, low-resolution response map that roughly locates the key point positions; the branch networks produce fine, local, high-resolution response maps that locate them precisely. The invention uses 5 branch networks, each producing the response map for one type of key point. The detailed steps are as follows:
(1) Feed the entire image into the trunk network, which handles all key point types and outputs low-resolution response images, coarsely locating the key points.
(2) For each key point coarsely located in the response image, extract the corresponding image patch from the original image, centered on that key point.
(3) Feed each patch into a branch network, which outputs a high-resolution response image, precisely locating the key point.
The trunk network is formed by alternately stacking 3 convolutional layers and 3 max-pooling layers; each branch network contains only 4 convolutional layers.
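The coarse-to-fine logic of steps (1)-(3) can be sketched without any neural network: take the peak of the low-resolution map, scale it up, cut a patch around that guess, and take the peak of a fine response over the patch. The `fine_fn` callable stands in for the branch network, and the patch half-width is an assumption of this example:

```python
def argmax2d(resp):
    """(row, col) of the peak of a 2-D response map (list of lists)."""
    best = max((v, y, x) for y, row in enumerate(resp)
                         for x, v in enumerate(row))
    return best[1], best[2]

def coarse_to_fine(image, coarse_resp, scale, fine_fn, half=2):
    """Coarse stage: peak of the low-resolution map, scaled up to image
    coordinates.  Fine stage: re-evaluate a small patch around that guess
    with `fine_fn` (a stand-in for the branch network) and take its peak."""
    cy, cx = argmax2d(coarse_resp)
    cy, cx = cy * scale, cx * scale          # map to full resolution
    y0, x0 = max(cy - half, 0), max(cx - half, 0)
    patch = [row[x0:cx + half + 1] for row in image[y0:cy + half + 1]]
    fy, fx = argmax2d(fine_fn(patch))        # refine inside the patch
    return y0 + fy, x0 + fx
```

The coarse peak only has to land within `half` pixels of the true key point; the fine stage then recovers the exact position, which is why a cheap low-resolution trunk pass suffices for the first stage.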
Abnormal face discrimination: after key point localization, the response values of the 5 key points are available. This response-value vector is then fed into a classifier, which determines from the vector whether the face is abnormal.
Many classifiers exist; this method chooses the support vector machine (SVM).
The SVM is a classic supervised kernel-based learning method and a machine learning technique built on structural risk minimization. By learning from a certain number of samples, it seeks the best trade-off between model complexity and learning capacity so as to achieve optimal performance, and it can solve classification problems with small sample sizes. The SVM is based on linear separation. Not all data are linearly separable in a low-dimensional space; the principle of the SVM is to map points from the low-dimensional space into a higher-dimensional space, where linearly inseparable data become linearly separable. A linear learning machine in that higher-dimensional space then handles the nonlinear classification, finding an optimal linear separating surface that divides the classes well.
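At inference time, a trained linear SVM reduces to a signed dot product over the 5-element response vector. A minimal sketch of that decision rule; the weights and bias below are hand-set illustrations (here encoding "abnormal when the average response is low"), not trained AR/ORL parameters:

```python
def svm_decide(response_vec, weights, bias):
    """Linear SVM decision: sign of w.x + b.  True means 'abnormal face'.
    In the patent, weights/bias come from training on labeled samples."""
    score = sum(w * x for w, x in zip(weights, response_vec)) + bias
    return score > 0
```

With weights of -1 per key point and a bias of 2.5, a face whose five landmarks all respond strongly (values near 1) scores negative and is judged normal, while a covered face with weak responses scores positive and triggers the alarm, matching the stated link between disguise and low response values.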
Training sets of the present invention: the parameters of the cascaded CNN for face detection are trained on the AFLW (Annotated Facial Landmarks in the Wild) database. AFLW, released in 2011, is a large-scale, multi-view database of face images in natural scenes; its images come from Flickr, and it contains more than 26,000 images with over 380,000 faces in total.
The parameters of the trunk-and-branch CNN for key point localization are trained on thousands of face images and natural images collected from the web, with the 5 key points of each face image annotated by hand.
The parameters of the SVM classifier for abnormal face discrimination are trained on samples from the AR and ORL databases. The AR database contains more than 4,000 face images of 126 people (70 men, 56 women) under varying illumination, with different expressions and partial occlusions; it is a face database commonly used today to test an algorithm's robustness to occlusion. The ORL database contains 40 people with 10 images each, covering different expressions, different poses, slight occlusion, and varying illumination.
Beneficial effects: by analyzing the surveillance requirements of bank ATMs, the method proposes an application of and a detection method for abnormal face detection in the frontal-face surveillance video of ATMs, fully taking into account the scene characteristics and variations of such video, and achieving a degree of accuracy, real-time performance, and robustness. It can detect abnormal faces quickly and accurately in real time and raise an alarm to alert the monitoring staff, preventing criminal cases from occurring.
Embodiment 2
Image preprocessing: many methods exist for removing image noise; besides the median filtering used in the present invention, there are mean filtering, Gaussian smoothing, wavelet filtering, and so on. Many image enhancement methods also exist; besides the histogram equalization used in the present invention, there are gray-level transformation, gray-level correction, histogram specification, and so on.
Face detection: the face detection model used in the present invention is a cascaded CNN, but many other models can perform face detection. Methods based on geometric features were the earliest face detection methods: they abstract people's basic cognitive knowledge of faces and detect faces using typical facial rules. Template matching methods divide into predefined templates and deformable templates; a predefined-template method uses hierarchical template matching, first extracting edges from the image and then matching the face boundary template against the image's edge features. Methods based on statistical theory are the current mainstream: a face classifier is trained on a large volume of data to extract and describe the essential features of faces.
Key point localization: the key point localization model used in the present invention is a trunk-and-branch CNN, but many other models can perform key point localization. Conventional facial feature point localization methods fall roughly into the following categories: methods based on prior rules, on geometric shape information, on appearance information, and on gray-level information.
1) Methods based on prior rules summarize empirical generalizations about the common characteristics of facial features; they mainly include geometric projection, mosaic-image methods, generalized symmetry, and binarization-based localization.
2) Geometric shape information refers to the geometric characteristics of facial objects; these methods have clear advantages in intuitiveness, comprehensibility, and applicability. The main algorithms include the Snake model, deformable templates, point distribution models (PDM), active shape models (ASM), and active appearance models (AAM).
3) Appearance-based methods treat an image window as a random variable and map it to a point in a high-dimensional space; a facial feature of a given type is described as a point set in that space, its statistical distribution is modeled, and the match between a test region and the model measures whether the region contains the target facial feature. Such methods mainly include artificial neural networks (ANN), principal component analysis (PCA), and support vector machines (SVM).
4) Gray-level features comprise the facial contour and the gray-level distribution of each feature region. Such methods mainly include geometric projection and valley analysis: exploiting the distinctive gray-level distributions of the eyebrows, eyes, nose, and mouth, projection functions can localize the feature points quite successfully.
Abnormal face discrimination: many classifiers exist; besides the SVM classifier chosen in the present invention, there are nearest-neighbor classifiers, linear classifiers, naive Bayes, logistic regression, decision trees, Adaboost, and so on.
The beneficial effect of this embodiment is that it proposes a technical scheme for the ATM abnormal face problem in which every part of the scheme has many substitutable algorithms achieving the same effect, so the scheme is highly extensible.
The above are only detailed embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any change or substitution conceivable without creative work shall fall within the protection scope of the present invention.
Claims (10)
1. An ATM abnormal face detection method based on key point localization, characterized in that the method comprises the following steps:
(1) image preprocessing: acquiring a real-time surveillance frame, removing noise from the image, and applying enhancement to video images captured under complex illumination conditions;
(2) face detection: detecting whether the image contains a face, and obtaining the position and size of the face;
(3) key point localization: detecting the positions of the facial key points and obtaining a response value for each key point;
(4) abnormal face discrimination: feeding the vector of key point response values into a classifier, which determines from that vector whether the face is abnormal.
2. The ATM abnormal face detection method based on key point localization according to claim 1, characterized in that step (1) further comprises:
(11) applying a median-filter local smoothing technique to remove isolated noise points from the image;
(12) applying histogram equalization, which changes the distribution of pixel gray levels so as to balance the image's gray-level distribution, dimming locally over-bright regions and brightening over-dark ones.
3. The ATM abnormal face detection method based on key point localization according to claim 1, characterized in that step (2) uses a cascaded convolutional neural network (CNN) to detect faces in the image, and further comprises:
(21) inputting an image to be detected and densely scanning the whole image with windows of different scales, producing many candidate face boxes;
(22) resizing all candidate face boxes to 12*12 resolution and passing them through the 12-scale discrimination network, which binary-classifies the candidates and rejects 90% of the false face boxes;
(23) passing the remaining boxes through the 12-scale calibration network, which adjusts their size and position to obtain better face boxes near the originals, while a non-maximum suppression (NMS) algorithm removes boxes that overlap excessively;
(24) resizing the remaining boxes to 24*24 resolution and passing them through the 24-scale discrimination network, rejecting close to 90% of the remaining false boxes;
(25) passing the remaining boxes through the 24-scale calibration network to adjust their size and position, again applying NMS to remove excessively overlapping boxes;
(26) resizing the remaining boxes to 48*48 resolution and passing them through the 48-scale discrimination network, again applying NMS to remove excessively overlapping boxes;
(27) passing the remaining boxes through the 48-scale calibration network and outputting them as the final detection result.
4. The ATM abnormal face detection method based on key point location according to claim 1, wherein step (3) further comprises:
(31) inputting the entire image into a backbone network that processes all key point types and outputs low-resolution response maps, coarsely locating the key points;
(32) extracting from the original image, according to the coarsely located key point positions in the response maps, the image block centered on each key point;
(33) inputting the image blocks into branch networks that output high-resolution response maps, precisely locating the key points.
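The coarse-to-fine localization of steps (31)–(33) can be illustrated with a minimal sketch. The backbone and branch networks are replaced by stand-ins: `coarse_resp` is a low-resolution response map assumed to be at 1/`stride` of the original resolution, and `fine_resp_fn` stands in for the branch network that returns a high-resolution response over an image block.

```python
def argmax2d(resp):
    """Position (row, col) of the maximum in a 2-D response map."""
    best = max((v, r, c) for r, row in enumerate(resp) for c, v in enumerate(row))
    return best[1], best[2]

def locate_keypoint(coarse_resp, fine_resp_fn, stride=4, patch=8):
    """Coarse-to-fine localization: the backbone's low-resolution response
    map gives a rough position; an image block centered there is passed to
    the branch network (here a stand-in callable) whose high-resolution
    response refines the position."""
    r, c = argmax2d(coarse_resp)
    # Map the coarse position back to original-image coordinates and
    # place the block so the coarse key point sits at its center.
    y0, x0 = r * stride - patch // 2, c * stride - patch // 2
    fine = fine_resp_fn(y0, x0, patch)          # branch-network response
    fr, fc = argmax2d(fine)
    return y0 + fr, x0 + fc
```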
5. The ATM abnormal face detection method based on key point location according to claim 1 or 4, wherein the key point positions comprise five key points: the left eye, the right eye, the nose, the left mouth corner and the right mouth corner.
6. The ATM abnormal face detection method based on key point location according to claim 4, wherein the backbone network is formed by alternately stacking three convolutional layers and three max-pooling layers, and the branch networks each consist of four convolutional layers.
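Under the assumption that each convolutional layer preserves spatial size (e.g. 3x3 kernels with stride 1 and padding 1, a common choice not stated in the claim) and each max-pooling layer is 2x2 with stride 2, the low resolution of the backbone's response maps follows from simple arithmetic:

```python
def backbone_output_size(h, w, conv_pool_pairs=3, pool=2):
    """Spatial size after alternately stacking `conv_pool_pairs` pairs of
    a size-preserving convolutional layer and a pool x pool max-pooling
    layer with stride `pool`: each pair halves the resolution (for the
    assumed pool=2), so three pairs reduce it eightfold."""
    for _ in range(conv_pool_pairs):
        h, w = h // pool, w // pool   # conv keeps size; pooling shrinks it
    return h, w
```

For a 48*48 input (the last resolution used in the detection cascade) the backbone would thus emit 6*6 response maps, which is why a branch network is needed to recover a precise key point position.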
7. The ATM abnormal face detection method based on key point location according to claim 1, wherein the classifier is a support vector machine (SVM) whose parameters are trained on samples from the AR and ORL databases.
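A minimal linear SVM trained by sub-gradient descent on the hinge loss can illustrate the classifier of claim 7. In practice a library implementation (e.g. LIBSVM) trained on the AR and ORL face-database samples would be used; the learning rate, regularization constant and toy data below are assumptions.

```python
def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Linear SVM via sub-gradient descent on the hinge loss.
    Labels y must be +1 / -1."""
    dim = len(X[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:                      # hinge-loss violation
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:                               # only the regularizer acts
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def svm_predict(w, b, x):
    """Sign of the decision function."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```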
8. The ATM abnormal face detection method based on key point location according to claim 1, wherein step (1) eliminates noise in the image using mean filtering, Gaussian smoothing filtering or wavelet filtering, and enhances the image using gray-scale transformation, gray-level correction or histogram specification.
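The mean filtering named in step (1) can be sketched as a box filter over a gray-scale image stored as nested lists; the 3x3 window size and the border handling (averaging over the in-image part of the window) are illustrative choices, not specified by the claim.

```python
def mean_filter(img, k=3):
    """k x k mean (box) filter. Border pixels are averaged over the part
    of the window that lies inside the image, so output size equals
    input size."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out
```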
9. The ATM abnormal face detection method based on key point location according to claim 1, wherein step (2) performs face detection using a method based on geometric features, template matching, predefined templates or statistics; and step (3) detects the face key points and obtains the key point response values using a method based on prior rules, geometric shape information, appearance information or gray-level information.
10. The ATM abnormal face detection method based on key point location according to claim 1, wherein step (4) discriminates abnormal faces using a nearest-neighbor classifier, a linear classifier, naive Bayes, logistic regression, a decision tree or AdaBoost.
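The nearest-neighbor option of claim 10 amounts to assigning the label of the closest training sample; the feature vectors below (standing in for key-point response values) and the labels are hypothetical.

```python
def nearest_neighbor_classify(x, samples, labels):
    """1-nearest-neighbor discrimination: return the label of the
    training sample closest to x in squared Euclidean distance."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    best = min(range(len(samples)), key=lambda i: dist2(x, samples[i]))
    return labels[best]
```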
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610593931.6A CN106228137A (en) | 2016-07-26 | 2016-07-26 | A kind of ATM abnormal human face detection based on key point location |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610593931.6A CN106228137A (en) | 2016-07-26 | 2016-07-26 | A kind of ATM abnormal human face detection based on key point location |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106228137A true CN106228137A (en) | 2016-12-14 |
Family
ID=57532993
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610593931.6A Pending CN106228137A (en) | 2016-07-26 | 2016-07-26 | A kind of ATM abnormal human face detection based on key point location |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106228137A (en) |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106874868A (en) * | 2017-02-14 | 2017-06-20 | 北京飞搜科技有限公司 | A kind of method for detecting human face and system based on three-level convolutional neural networks |
CN106874883A (en) * | 2017-02-27 | 2017-06-20 | 中国石油大学(华东) | A kind of real-time face detection method and system based on deep learning |
CN106897738A (en) * | 2017-01-22 | 2017-06-27 | 华南理工大学 | A kind of pedestrian detection method based on semi-supervised learning |
CN107403197A (en) * | 2017-07-31 | 2017-11-28 | 武汉大学 | A kind of crack identification method based on deep learning |
CN107666573A (en) * | 2017-10-13 | 2018-02-06 | 北京奇虎科技有限公司 | The method for recording of object video and device, computing device under camera scene |
CN107679504A (en) * | 2017-10-13 | 2018-02-09 | 北京奇虎科技有限公司 | Face identification method, device, equipment and storage medium based on camera scene |
CN107679462A (en) * | 2017-09-13 | 2018-02-09 | 哈尔滨工业大学深圳研究生院 | A kind of depth multiple features fusion sorting technique based on small echo |
CN107808129A (en) * | 2017-10-17 | 2018-03-16 | 南京理工大学 | A kind of facial multi-characteristic points localization method based on single convolutional neural networks |
CN108205677A (en) * | 2017-09-21 | 2018-06-26 | 北京市商汤科技开发有限公司 | Method for checking object, device, computer program, storage medium and electronic equipment |
CN108509894A (en) * | 2018-03-28 | 2018-09-07 | 北京市商汤科技开发有限公司 | Method for detecting human face and device |
CN108500992A (en) * | 2018-04-09 | 2018-09-07 | 中山火炬高新企业孵化器有限公司 | A kind of multi-functional mobile security robot |
CN108537208A (en) * | 2018-04-24 | 2018-09-14 | 厦门美图之家科技有限公司 | A kind of multiple dimensioned method for detecting human face and computing device |
CN108537990A (en) * | 2018-04-13 | 2018-09-14 | 深圳壹账通智能科技有限公司 | All-in-one machine cheats judgment method, device, equipment and computer readable storage medium |
CN108764048A (en) * | 2018-04-28 | 2018-11-06 | 中国科学院自动化研究所 | Face critical point detection method and device |
CN108961369A (en) * | 2018-07-11 | 2018-12-07 | 厦门幻世网络科技有限公司 | The method and apparatus for generating 3D animation |
CN109190646A (en) * | 2018-06-25 | 2019-01-11 | 北京达佳互联信息技术有限公司 | A kind of data predication method neural network based, device and nerve network system |
CN109344802A (en) * | 2018-10-29 | 2019-02-15 | 重庆邮电大学 | A kind of human-body fatigue detection method based on improved concatenated convolutional nerve net |
CN109446953A (en) * | 2018-10-17 | 2019-03-08 | 福州大学 | A kind of recognition methods again of the pedestrian based on lightweight convolutional neural networks |
CN109816035A (en) * | 2019-01-31 | 2019-05-28 | 北京字节跳动网络技术有限公司 | Image processing method and device |
CN109815843A (en) * | 2018-12-29 | 2019-05-28 | 深圳云天励飞技术有限公司 | Object detection method and Related product |
CN110472573A (en) * | 2019-08-14 | 2019-11-19 | 北京思图场景数据科技服务有限公司 | A kind of human body behavior analysis method, equipment and computer storage medium based on body key point |
CN110688929A (en) * | 2019-09-20 | 2020-01-14 | 北京华捷艾米科技有限公司 | Human skeleton joint point positioning method and device |
CN110717425A (en) * | 2019-09-26 | 2020-01-21 | 深圳市商汤科技有限公司 | Case association method and device, electronic equipment and storage medium |
CN111209819A (en) * | 2019-12-30 | 2020-05-29 | 新大陆数字技术股份有限公司 | Rotation-invariant face detection method, system equipment and readable storage medium |
CN111918126A (en) * | 2019-05-10 | 2020-11-10 | Tcl集团股份有限公司 | Audio and video information processing method and device, readable storage medium and terminal equipment |
CN112232359A (en) * | 2020-09-29 | 2021-01-15 | 中国人民解放军陆军炮兵防空兵学院 | Visual tracking method based on mixed level filtering and complementary characteristics |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102831472A (en) * | 2012-08-03 | 2012-12-19 | 无锡慧眼电子科技有限公司 | People counting method based on video flowing image processing |
CN104850825A (en) * | 2015-04-18 | 2015-08-19 | 中国计量学院 | Facial image face score calculating method based on convolutional neural network |
CN104966066A (en) * | 2015-06-26 | 2015-10-07 | 武汉大学 | Traffic block port monitoring oriented in-car human face detection method and system |
CN105005774A (en) * | 2015-07-28 | 2015-10-28 | 中国科学院自动化研究所 | Face relative relation recognition method based on convolutional neural network and device thereof |
CN105447432A (en) * | 2014-08-27 | 2016-03-30 | 北京千搜科技有限公司 | Face anti-fake method based on local motion pattern |
2016-07-26: Application CN201610593931.6A filed (published as CN106228137A), legal status: Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102831472A (en) * | 2012-08-03 | 2012-12-19 | 无锡慧眼电子科技有限公司 | People counting method based on video flowing image processing |
CN105447432A (en) * | 2014-08-27 | 2016-03-30 | 北京千搜科技有限公司 | Face anti-fake method based on local motion pattern |
CN104850825A (en) * | 2015-04-18 | 2015-08-19 | 中国计量学院 | Facial image face score calculating method based on convolutional neural network |
CN104966066A (en) * | 2015-06-26 | 2015-10-07 | 武汉大学 | Traffic block port monitoring oriented in-car human face detection method and system |
CN105005774A (en) * | 2015-07-28 | 2015-10-28 | 中国科学院自动化研究所 | Face relative relation recognition method based on convolutional neural network and device thereof |
Non-Patent Citations (1)
Title |
---|
SACHIN SUDHAKAR FARFADE et al.: "Multi-view Face Detection Using Deep Convolutional Neural Networks", ICMR '15: Proceedings of the 5th ACM International Conference on Multimedia Retrieval * |
Cited By (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106897738A (en) * | 2017-01-22 | 2017-06-27 | 华南理工大学 | A kind of pedestrian detection method based on semi-supervised learning |
CN106897738B (en) * | 2017-01-22 | 2019-07-16 | 华南理工大学 | A kind of pedestrian detection method based on semi-supervised learning |
CN106874868A (en) * | 2017-02-14 | 2017-06-20 | 北京飞搜科技有限公司 | A kind of method for detecting human face and system based on three-level convolutional neural networks |
CN106874868B (en) * | 2017-02-14 | 2020-09-18 | 北京飞搜科技有限公司 | Face detection method and system based on three-level convolutional neural network |
CN106874883A (en) * | 2017-02-27 | 2017-06-20 | 中国石油大学(华东) | A kind of real-time face detection method and system based on deep learning |
CN107403197A (en) * | 2017-07-31 | 2017-11-28 | 武汉大学 | A kind of crack identification method based on deep learning |
CN107403197B (en) * | 2017-07-31 | 2020-01-24 | 武汉大学 | Crack identification method based on deep learning |
CN107679462A (en) * | 2017-09-13 | 2018-02-09 | 哈尔滨工业大学深圳研究生院 | A kind of depth multiple features fusion sorting technique based on small echo |
CN107679462B (en) * | 2017-09-13 | 2021-10-19 | 哈尔滨工业大学深圳研究生院 | Depth multi-feature fusion classification method based on wavelets |
CN108205677A (en) * | 2017-09-21 | 2018-06-26 | 北京市商汤科技开发有限公司 | Method for checking object, device, computer program, storage medium and electronic equipment |
CN107679504A (en) * | 2017-10-13 | 2018-02-09 | 北京奇虎科技有限公司 | Face identification method, device, equipment and storage medium based on camera scene |
CN107666573A (en) * | 2017-10-13 | 2018-02-06 | 北京奇虎科技有限公司 | The method for recording of object video and device, computing device under camera scene |
CN107808129A (en) * | 2017-10-17 | 2018-03-16 | 南京理工大学 | A kind of facial multi-characteristic points localization method based on single convolutional neural networks |
CN107808129B (en) * | 2017-10-17 | 2021-04-16 | 南京理工大学 | Face multi-feature point positioning method based on single convolutional neural network |
CN108509894A (en) * | 2018-03-28 | 2018-09-07 | 北京市商汤科技开发有限公司 | Method for detecting human face and device |
CN108500992A (en) * | 2018-04-09 | 2018-09-07 | 中山火炬高新企业孵化器有限公司 | A kind of multi-functional mobile security robot |
CN108537990A (en) * | 2018-04-13 | 2018-09-14 | 深圳壹账通智能科技有限公司 | All-in-one machine cheats judgment method, device, equipment and computer readable storage medium |
CN108537208A (en) * | 2018-04-24 | 2018-09-14 | 厦门美图之家科技有限公司 | A kind of multiple dimensioned method for detecting human face and computing device |
CN108764048A (en) * | 2018-04-28 | 2018-11-06 | 中国科学院自动化研究所 | Face critical point detection method and device |
CN108764048B (en) * | 2018-04-28 | 2021-03-16 | 中国科学院自动化研究所 | Face key point detection method and device |
CN109190646B (en) * | 2018-06-25 | 2019-08-20 | 北京达佳互联信息技术有限公司 | A kind of data predication method neural network based, device and nerve network system |
CN109190646A (en) * | 2018-06-25 | 2019-01-11 | 北京达佳互联信息技术有限公司 | A kind of data predication method neural network based, device and nerve network system |
CN108961369B (en) * | 2018-07-11 | 2023-03-17 | 厦门黑镜科技有限公司 | Method and device for generating 3D animation |
CN108961369A (en) * | 2018-07-11 | 2018-12-07 | 厦门幻世网络科技有限公司 | The method and apparatus for generating 3D animation |
CN109446953A (en) * | 2018-10-17 | 2019-03-08 | 福州大学 | A kind of recognition methods again of the pedestrian based on lightweight convolutional neural networks |
CN109344802A (en) * | 2018-10-29 | 2019-02-15 | 重庆邮电大学 | A kind of human-body fatigue detection method based on improved concatenated convolutional nerve net |
CN109344802B (en) * | 2018-10-29 | 2021-09-10 | 重庆邮电大学 | Human body fatigue detection method based on improved cascade convolution neural network |
CN109815843B (en) * | 2018-12-29 | 2021-09-14 | 深圳云天励飞技术有限公司 | Image processing method and related product |
CN109815843A (en) * | 2018-12-29 | 2019-05-28 | 深圳云天励飞技术有限公司 | Object detection method and Related product |
CN109816035A (en) * | 2019-01-31 | 2019-05-28 | 北京字节跳动网络技术有限公司 | Image processing method and device |
CN109816035B (en) * | 2019-01-31 | 2022-10-11 | 北京字节跳动网络技术有限公司 | Image processing method and device |
CN111918126A (en) * | 2019-05-10 | 2020-11-10 | Tcl集团股份有限公司 | Audio and video information processing method and device, readable storage medium and terminal equipment |
CN110472573A (en) * | 2019-08-14 | 2019-11-19 | 北京思图场景数据科技服务有限公司 | A kind of human body behavior analysis method, equipment and computer storage medium based on body key point |
CN110688929A (en) * | 2019-09-20 | 2020-01-14 | 北京华捷艾米科技有限公司 | Human skeleton joint point positioning method and device |
CN110688929B (en) * | 2019-09-20 | 2021-11-30 | 北京华捷艾米科技有限公司 | Human skeleton joint point positioning method and device |
CN110717425A (en) * | 2019-09-26 | 2020-01-21 | 深圳市商汤科技有限公司 | Case association method and device, electronic equipment and storage medium |
CN111209819A (en) * | 2019-12-30 | 2020-05-29 | 新大陆数字技术股份有限公司 | Rotation-invariant face detection method, system equipment and readable storage medium |
CN112232359A (en) * | 2020-09-29 | 2021-01-15 | 中国人民解放军陆军炮兵防空兵学院 | Visual tracking method based on mixed level filtering and complementary characteristics |
CN112232359B (en) * | 2020-09-29 | 2022-10-21 | 中国人民解放军陆军炮兵防空兵学院 | Visual tracking method based on mixed level filtering and complementary characteristics |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106228137A (en) | A kind of ATM abnormal human face detection based on key point location | |
Zhang et al. | Driver fatigue detection based on eye state recognition | |
CN107133943B (en) | A kind of visible detection method of stockbridge damper defects detection | |
CN104794491B (en) | Based on the fuzzy clustering Surface Defects in Steel Plate detection method presorted | |
CN105893946B (en) | A kind of detection method of front face image | |
CN101142584B (en) | Method for facial features detection | |
CN104809463B (en) | A kind of high-precision fire disaster flame detection method for converting dictionary learning based on intensive scale invariant feature | |
Gowsikhaa et al. | Suspicious Human Activity Detection from Surveillance Videos. | |
CN103902962B (en) | One kind is blocked or the adaptive face identification method of light source and device | |
CN106951867A (en) | Face identification method, device, system and equipment based on convolutional neural networks | |
CN106960202A (en) | A kind of smiling face's recognition methods merged based on visible ray with infrared image | |
CN107833221A (en) | A kind of water leakage monitoring method based on multi-channel feature fusion and machine learning | |
Jing et al. | Yarn-dyed fabric defect classification based on convolutional neural network | |
CN107368778A (en) | Method for catching, device and the storage device of human face expression | |
CN110298297A (en) | Flame identification method and device | |
CN106682578A (en) | Human face recognition method based on blink detection | |
CN106874929B (en) | Pearl classification method based on deep learning | |
CN108537143B (en) | A kind of face identification method and system based on key area aspect ratio pair | |
CN109598681A (en) | The reference-free quality evaluation method of image after a kind of symmetrical Tangka repairs | |
CN112926522B (en) | Behavior recognition method based on skeleton gesture and space-time diagram convolution network | |
Liang et al. | Methods of moving target detection and behavior recognition in intelligent vision monitoring. | |
Roa’a et al. | Automated Cheating Detection based on Video Surveillance in the Examination Classes | |
CN114463843A (en) | Multi-feature fusion fish abnormal behavior detection method based on deep learning | |
CN113221655A (en) | Face spoofing detection method based on feature space constraint | |
CN117115147B (en) | Textile detection method and system based on machine vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20161214 |