CN108062518A - Face detection method and device - Google Patents
Face detection method and device
- Publication number: CN108062518A (application CN201711284540.7A)
- Authority
- CN
- China
- Prior art keywords
- classifier
- face
- image
- classification results
- face detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
This disclosure relates to a face detection method and device. The method includes: inputting a face image to be detected into a first classifier for processing to obtain a first classification result; and inputting the first classification result into a second classifier for processing to obtain a face detection result, where the first classifier is different from the second classifier and the processing speed of the first classifier is greater than that of the second classifier. The fast first classifier processes the image first; after the first classification result is obtained, it is input into the second classifier, which has a higher detection success rate, to obtain the final face detection result. This guarantees the success rate of face detection while improving the face detection speed.
Description
Technical field
This disclosure relates to the field of image recognition, and in particular to a face detection method and device.
Background technology
In face detection technology, a classifier is usually used to perform face detection on an image to be recognized, and the classifier usually uses the Adaboost algorithm. Adaboost is an iterative algorithm whose core idea is to train different classifiers (weak classifiers) on the same training set and then combine these weak classifiers into a stronger final classifier (a strong classifier). In traditional face detection based on the Adaboost algorithm, multiple strong classifiers are cascaded to form a common cascade classifier. After detection starts, a common cascade classifier needs to filter a large number of windows to be detected, which is time-consuming, so its detection speed is slow.
Summary of the invention
In view of this, the present disclosure proposes a face detection method and device to solve the problem of slow detection speed in face detection.
According to one aspect of the disclosure, a face detection method is provided. The method includes:
inputting a face image to be detected into a first classifier for processing to obtain a first classification result; and
inputting the first classification result into a second classifier for processing to obtain a face detection result, where the first classifier is different from the second classifier and the processing speed of the first classifier is greater than the processing speed of the second classifier.
In one possible implementation, inputting the face image to be detected into the first classifier for processing includes:
the first classifier extracting Haar features of the face image to be detected; and
the first classifier processing the face image to be detected according to the extracted Haar features.
Inputting the first classification result into the second classifier for processing includes:
the second classifier extracting Haar features and local binary pattern (LBP) features of the first classification result; and
the second classifier processing the first classification result according to the extracted Haar features and LBP features.
The first classifier may be a nested cascade classifier, and the second classifier may be a common cascade classifier or a convolutional neural network classifier.
Alternatively, the first classifier may be a common cascade classifier, and the second classifier may be a convolutional neural network classifier.
According to another aspect of the present disclosure, a face detection device is provided, including:
a first classifier module, configured to input a face image to be detected into a first classifier for processing to obtain a first classification result; and
a second classifier module, configured to input the first classification result into a second classifier for processing to obtain a face detection result, where the first classifier is different from the second classifier and the processing speed of the first classifier is greater than the processing speed of the second classifier.
In one possible implementation, the first classifier module includes:
a first feature extraction submodule, configured to extract Haar features of the face image to be detected; and
a first processing submodule, configured to process the face image to be detected according to the extracted Haar features.
In one possible implementation, the second classifier module includes:
a second feature extraction submodule, configured to extract Haar features and local binary pattern (LBP) features of the first classification result; and
a second processing submodule, configured to process the first classification result according to the extracted Haar features and LBP features.
In one possible implementation, the first classifier is a nested cascade classifier, and the second classifier is a common cascade classifier or a convolutional neural network classifier.
In one possible implementation, the first classifier is a common cascade classifier, and the second classifier is a convolutional neural network classifier.
According to another aspect of the present disclosure, a face detection device is provided, including:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the face detection method described above.
According to another aspect of the present disclosure, a non-volatile computer-readable storage medium is provided, on which computer program instructions are stored, wherein the computer program instructions, when executed by a processor, implement the face detection method described above.
In the disclosure, the face image to be detected is processed by the first classifier and then the second classifier: the fast first classifier processes the image to obtain the first classification result, and the first classification result is then input into the second classifier, which has a higher detection success rate, to obtain the final face detection result. This guarantees the success rate of face detection while improving the face detection speed.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features and aspects of the disclosure together with the specification and serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of a face detection method according to an exemplary embodiment.
Fig. 2 is a flowchart of a face detection method according to an exemplary embodiment.
Fig. 3 is a flowchart of a face detection method according to an exemplary embodiment.
Fig. 4 is a flowchart of a face detection method according to an exemplary embodiment.
Fig. 5 is a flowchart of a face detection method according to an exemplary embodiment.
Fig. 6 shows a face image to be detected according to an exemplary embodiment.
Fig. 7 shows a preprocessed image according to an exemplary embodiment.
Fig. 8 shows a first classification result according to an exemplary embodiment.
Fig. 9 shows a face detection result according to an exemplary embodiment.
Fig. 10 is a processing flowchart of a classifier based on the Adaboost algorithm according to an exemplary embodiment.
Fig. 11 is a processing flowchart of a convolutional neural network classifier according to an exemplary embodiment.
Fig. 12 is a flowchart of obtaining a face detection result according to an exemplary embodiment.
Fig. 13 is a block diagram of a face detection device according to an exemplary embodiment.
Fig. 14 is a block diagram of a face detection device according to an exemplary embodiment.
Fig. 15 is a block diagram of a face detection device according to an exemplary embodiment.
Specific embodiment
Various exemplary embodiments, features and aspects of the present disclosure are described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless specifically noted.
The word "exemplary" as used herein means "serving as an example, embodiment or illustration". Any embodiment described herein as "exemplary" should not be construed as preferred over or advantageous over other embodiments.
In addition, numerous specific details are given in the following detailed description in order to better illustrate the present disclosure. Those skilled in the art will understand that the present disclosure can be practiced without some of these details. In some instances, methods, means, elements and circuits well known to those skilled in the art are not described in detail in order to highlight the gist of the present disclosure.
Fig. 1 is a flowchart of a face detection method according to an exemplary embodiment. As shown in Fig. 1, the face detection method includes the following steps:
Step 10: input the face image to be detected into a first classifier for processing to obtain a first classification result.
In one possible implementation, the face image to be detected may include a human face image to be detected, and may also include a face image of another animal. Fig. 6 shows a face image to be detected according to an exemplary embodiment. The image to be detected shown in Fig. 6 is a large group photo containing multiple face images. Before the face image to be detected in Fig. 6 is input into the first classifier for face recognition, it is first preprocessed. Fig. 7 shows a preprocessed image according to an exemplary embodiment. As shown in Fig. 7, the image to be detected is repeatedly scaled by a certain factor to obtain several preprocessed images of different resolutions. The preprocessed images are then input into the first classifier, which exhaustively searches windows of m*n pixels (m and n being pixel counts) at all positions of each image and keeps the image windows whose detection result is a face image. Fig. 8 is a schematic diagram of a first classification result according to an exemplary embodiment. In Fig. 8, the first classifier discards the image windows whose detection result is not a face image, and the image windows whose detection result is a face image are determined as the first classification result. The first classifier performs a preliminary screening of face images, which guarantees the detection speed of the face image processing.
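For illustration only, the following Python sketch shows the pyramid-plus-sliding-window scan described above, assuming a generic `first_classifier(window)` callable that returns True for face windows; the window size, scale factor and stride are illustrative defaults rather than values taken from this patent.

```python
import cv2

def pyramid_window_scan(image, first_classifier, win=(24, 24), scale=1.25, stride=4):
    """Scale the image repeatedly and slide an m*n window over every position,
    keeping only the windows that the fast first-stage classifier accepts."""
    m, n = win                      # window height and width in pixels
    candidates = []
    factor = 1.0
    current = image
    while True:
        h, w = current.shape[:2]
        if h < m or w < n:
            break
        for y in range(0, h - m + 1, stride):
            for x in range(0, w - n + 1, stride):
                window = current[y:y + m, x:x + n]
                if first_classifier(window):
                    # map the hit back to coordinates in the original image
                    candidates.append((int(x * factor), int(y * factor),
                                       int(n * factor), int(m * factor)))
        factor *= scale             # next, scan a smaller (coarser) version
        new_w = int(image.shape[1] / factor)
        new_h = int(image.shape[0] / factor)
        if new_h < m or new_w < n:
            break
        current = cv2.resize(image, (new_w, new_h))
    return candidates               # list of (x, y, w, h) face-window candidates
```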
Step 20: input the first classification result into a second classifier for processing to obtain a face detection result, where the first classifier is different from the second classifier and the processing speed of the first classifier is greater than the processing speed of the second classifier.
In one possible implementation, the first classification result in Fig. 8 is further face-detected by the second classifier, which is different from the first classifier. To guarantee the detection success rate of the face image processing, the processing speed of the second classifier is lower than that of the first classifier, but its detection success rate is higher.
Fig. 9 shows a face detection result according to an exemplary embodiment. As shown in Fig. 9, according to the output of the second classifier, the recognized face images in the group photo of Fig. 6 are marked as the detected faces.
In this embodiment, the face image to be detected is processed by the first classifier and then the second classifier: the fast first classifier processes the image to obtain the first classification result, and the first classification result is then input into the second classifier, which has a higher detection success rate, to obtain the final face detection result. This guarantees the success rate of face detection while improving the face detection speed.
Fig. 2 is a flowchart of a face detection method according to an exemplary embodiment. In the method shown in Fig. 2, on the basis of the above embodiment, step 10 includes the following steps:
Step 11: the first classifier extracts Haar features of the face image to be detected.
In one possible implementation, the first classifier performs face detection processing using Haar features. Haar features are used for face detection and include edge features, line features, center features and diagonal features, which are combined into feature templates. A feature template contains white and black rectangles, and the feature value of the template is the sum of the white-rectangle pixels minus the sum of the black-rectangle pixels. Haar feature values reflect the gray-level changes of an image. For example, some features of a face can be described simply by rectangle features: the eyes are darker than the cheeks, the two sides of the nose bridge are darker than the nose bridge, and the mouth is darker than its surroundings. Rectangle features are sensitive to simple graphical structures such as edges and line segments, and can describe structures of particular orientations (horizontal, vertical, diagonal). Therefore, a first classifier that performs face detection processing using Haar features can easily divide and extract facial features from the face image to be detected, and the extracted feature values are easy to compute.
Step 12: the first classifier processes the face image to be detected according to the extracted Haar features.
In this embodiment, the Haar features extracted by the first classifier from the image to be detected can be computed quickly in the face recognition algorithm, which guarantees the processing speed of the first classifier.
In this embodiment, the first classifier uses Haar features to extract and process image features, which guarantees the processing speed of the first classifier and thus improves the overall detection speed of face detection.
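As a minimal illustration of how a Haar feature value (white-rectangle pixel sum minus black-rectangle pixel sum) can be computed quickly, the sketch below uses an integral image (summed-area table); the two-rectangle "edge" layout is an assumed example, not a template prescribed by the patent.

```python
import numpy as np

def integral_image(gray):
    """Summed-area table with a zero row/column so rectangle sums need no bounds checks."""
    ii = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(gray.astype(np.int64), axis=0), axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left corner (x, y), width w, height h."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_edge_feature(gray, x, y, w, h):
    """Two-rectangle edge feature: white (top half) minus black (bottom half).
    A large positive value means the top half is brighter than the bottom half."""
    ii = integral_image(gray)
    white = rect_sum(ii, x, y, w, h // 2)
    black = rect_sum(ii, x, y + h // 2, w, h // 2)
    return white - black
```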
Fig. 3 is a flowchart of a face detection method according to an exemplary embodiment. In the method shown in Fig. 3, on the basis of the above embodiment, step 20 includes:
Step 21: the second classifier extracts Haar features and LBP features of the first classification result.
In one possible implementation, the second classifier uses a combination of Haar features and LBP features to further process the first classification result. In image processing, the LBP (Local Binary Pattern) feature is an operator used to describe the local texture of an image; it has significant advantages such as rotation invariance and gray-scale invariance, and is used to extract the local texture features of an image. The LAB (Locally Assembled Binary) feature combines the binary coding pattern of the LBP feature with the rectangular-region brightness-sum feature of Haar features; it is very expressive for face patterns with strong local brightness patterns and lends itself to fixed-point implementation. Therefore, LBP features provide a high face recognition accuracy.
Step 22: the second classifier processes the first classification result according to the extracted Haar features and LBP features.
In this embodiment, combining Haar features and LBP features guarantees the recognition accuracy of the second classifier when it performs face recognition.
In this embodiment, the second classifier processes the first classification result using a combination of Haar features and LBP features, which guarantees the face recognition accuracy of the result output by the second classifier.
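A minimal sketch of the basic LBP operator follows, assuming an 8-neighbour, radius-1 pattern on a grayscale image; the neighbour ordering is an arbitrary choice for illustration.

```python
import numpy as np

def lbp_code(gray, y, x):
    """8-neighbour local binary pattern of the pixel at (y, x): each neighbour
    that is at least as bright as the centre contributes one bit."""
    center = gray[y, x]
    # neighbours visited clockwise starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if gray[y + dy, x + dx] >= center:
            code |= 1 << bit
    return code  # value in [0, 255]

def lbp_histogram(gray):
    """Texture descriptor: histogram of LBP codes over the interior pixels."""
    h, w = gray.shape
    hist = np.zeros(256, dtype=np.int64)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hist[lbp_code(gray, y, x)] += 1
    return hist
```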
Fig. 4 is a flowchart of a face detection method according to an exemplary embodiment. In this embodiment, the first classifier is a nested cascade classifier, and the second classifier is a common cascade classifier or a convolutional neural network classifier. As shown in Fig. 4, the method includes the following steps:
Step 101: input the face image to be detected into the nested cascade classifier for processing to obtain the first classification result.
Step 201: input the first classification result into the common cascade classifier or the convolutional neural network classifier for processing to obtain the face detection result.
In one possible implementation, both the nested cascade classifier and the common cascade classifier may be built from strong classifiers based on the Adaboost algorithm. Adaboost is an iterative algorithm whose core idea is to train different classifiers (weak classifiers) on the same training set and then combine these weak classifiers into a stronger final classifier (a strong classifier). The algorithm itself works by changing the data distribution: it determines the weight of each sample in the training set according to whether that sample was classified correctly in the previous round and according to the overall classification accuracy of the previous round. The re-weighted data set is then given to the next classifier for training, and the classifiers obtained from each round of training are finally fused into the final decision classifier. An Adaboost classifier can exclude some unnecessary training data features and focus on the key training data.
When image processing is performed with classifiers based on the Adaboost algorithm, multiple strong classifiers are usually connected in series; every stage of a nested cascade classifier and of a common cascade classifier is a strong classifier. Fig. 10 is a processing flowchart of a classifier based on the Adaboost algorithm according to an exemplary embodiment. As shown in Fig. 10, stages 1, 2, ..., N are classifiers based on the Adaboost algorithm connected in series. Each classifier passes the face windows whose result is T (true) on to the next strong classifier and discards the non-face windows whose result is F (false).
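For illustration, a strong classifier assembled by Adaboost can be sketched as a weighted vote of weak classifiers; the `(alpha, weak_fn)` representation below is an assumed interface, not one defined by the patent.

```python
def strong_classify(window, weak_classifiers, threshold):
    """AdaBoost strong classifier: a weighted vote of weak classifiers.
    weak_classifiers is a list of (alpha, weak_fn) pairs, where weak_fn(window)
    returns 1 for 'face' and 0 otherwise, and alpha is the weight derived from
    that weak classifier's training error."""
    score = sum(alpha * weak_fn(window) for alpha, weak_fn in weak_classifiers)
    return score >= threshold, score  # (is_face, confidence value A)
```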
In a nested cascade classifier, each strong classifier inherits the output result of the preceding strong classifier: the second classifier computes a value A2 and also inherits the value A1 of the previous classifier, so the output value of the nested second strong classifier is A2 + A1. This output value is then compared with a nested threshold. The nested threshold is related to the nested connection between classifiers: during training, each layer of strong classifier inherits the output result of the strong classifier of the preceding layer. As a result, the processing speed of the nested cascade classifier is fast.
A common cascade classifier, by contrast, consists of multiple cascaded strong classifiers. As soon as any one strong classifier decides that a window is not a face image, that window is no longer passed to the subsequent strong classifiers for detection. The final face detection result is obtained only after the first classification result has passed through all classifiers in the common cascade classifier. In a common cascade classifier based on the Adaboost algorithm, each strong classifier computes its own value A and compares it with the threshold of that strong classifier; a value above the threshold is considered a face, and a value below it is considered a non-face. The first strong classifier computes A1, the second strong classifier computes A2, and the thresholds of the strong classifiers are independent of one another. The detection speed of a common cascade classifier is lower than that of a nested cascade classifier, but its detection accuracy is higher.
In one possible implementation, the first classifier is a nested cascade classifier and the second classifier is a common cascade classifier: the detection speed of the first classifier is fast and the detection accuracy of the second classifier is high.
In one possible implementation, the first classifier is a nested cascade classifier and the second classifier is a convolutional neural network classifier. The recognition accuracy of a convolutional neural network is higher than that of a nested cascade classifier, but its detection speed is lower.
A CNN (Convolutional Neural Network) is a feedforward neural network whose artificial neurons respond to surrounding units within a local receptive field; it performs outstandingly for large-scale image processing. A CNN includes convolutional layers, pooling layers and fully connected layers. The second classifier may include one or more convolutional neural network classifiers. Fig. 11 is a processing flowchart of a convolutional neural network classifier according to an exemplary embodiment. In Fig. 11, the second classifier includes three convolutional neural network classifiers: a 12-layer convolutional neural network classifier (12-net), a 24-layer convolutional neural network classifier (24-net) and a 48-layer convolutional neural network classifier (48-net). As shown in Fig. 11, the image to be detected (input image) is input into each convolutional neural network classifier, where each convolutional neural network classifier includes a convolutional layer, a max-pooling layer, a normalization layer and a fully connected layer. The labels output by the convolutional neural network include two classes (2class): face and non-face. The 3-channel (3 channels) input image to be detected is input into each convolutional neural network classifier after its resolution is adjusted: as shown in the figure, the image to be detected with a resolution of 12*12 is input into 12-net, the image with a resolution of 24*24 is input into 24-net, and the image with a resolution of 48*48 is input into 48-net.
In one possible implementation, after the resolution of the first classification result is adjusted, it is input into the 12-layer convolutional neural network classifier; the processing result of the 12-layer convolutional neural network classifier is input into the 24-layer convolutional neural network classifier; and after the processing result of the 24-layer convolutional neural network classifier is input into the 48-layer convolutional neural network classifier, the final face detection result is obtained. Fig. 12 is a flowchart of obtaining a face detection result according to an exemplary embodiment. Fig. 12 shows, in turn, the recognition result of the 12-layer convolutional neural network classifier (After 12-net), the recognition result of the 24-layer convolutional neural network classifier (After 24-net), the recognition result of the 48-layer convolutional neural network classifier (After 48-net), and finally the recognition result output after image calibration (Output detections).
In one possible implementation, the 12-layer, 24-layer and 48-layer convolutional neural network classifiers can be used alone, or any two of them can be used in series; when used in series, the convolutional neural network classifier with fewer layers comes first.
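A minimal PyTorch-style sketch of one such stage and of chaining the three stages is shown below; the layer sizes and the assumption that class index 1 means "face" are illustrative, and the 24-net and 48-net are assumed to be analogous modules built for their own input resolutions (not shown).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net12(nn.Module):
    """A small 12x12-input network in the spirit of the 12-net stage: one
    convolution, max-pooling, and a fully connected head with two outputs
    (face / non-face)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3)        # 12x12 -> 10x10
        self.pool = nn.MaxPool2d(kernel_size=3, stride=2)   # 10x10 -> 4x4
        self.fc1 = nn.Linear(16 * 4 * 4, 16)
        self.fc2 = nn.Linear(16, 2)

    def forward(self, x):
        x = self.pool(F.relu(self.conv(x)))
        x = torch.flatten(x, start_dim=1)
        return self.fc2(F.relu(self.fc1(x)))

def cnn_cascade(candidates, net12, net24, net48):
    """Run candidate windows through the 12-net, 24-net and 48-net in order of
    increasing resolution; a window survives only if every stage labels it 'face'."""
    kept = candidates
    for net, size in ((net12, 12), (net24, 24), (net48, 48)):
        survivors = []
        for window in kept:  # window: HxWx3 float tensor with values in [0, 1]
            x = F.interpolate(window.permute(2, 0, 1).unsqueeze(0),
                              size=(size, size), mode='bilinear',
                              align_corners=False)
            logits = net(x)
            if logits.argmax(dim=1).item() == 1:  # class 1 = face (assumed)
                survivors.append(window)
        kept = survivors
    return kept
```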
In this embodiment, when the first classifier is a nested cascade classifier, the second classifier is a common cascade classifier or a convolutional neural network classifier. By exploiting the fast detection speed of the nested cascade classifier and the high detection accuracy of the common cascade classifier or convolutional neural network classifier, the success rate of face detection can be guaranteed while keeping the detection speed of face detection fast.
Fig. 5 is a flowchart of a face detection method according to an exemplary embodiment. In this embodiment, the first classifier is a common cascade classifier and the second classifier is a convolutional neural network classifier. As shown in Fig. 5, the method includes the following steps:
Step 102: input the face image to be detected into the common cascade classifier for processing to obtain the first classification result.
Step 202: input the first classification result into the convolutional neural network classifier for processing to obtain the face detection result.
In one possible implementation, the first classifier is a common cascade classifier and the second classifier is a convolutional neural network classifier. The detection speed of the common cascade classifier is greater than that of the convolutional neural network classifier, but the detection accuracy of the common cascade classifier is lower.
In this embodiment, the common cascade classifier is connected to the convolutional neural network classifier: the common cascade classifier guarantees the detection speed, and the convolutional neural network classifier guarantees the detection accuracy. The detection accuracy of face detection can therefore be improved while improving the face detection speed.
Fig. 13 is a block diagram of a face detection device according to an exemplary embodiment. As shown in Fig. 13, the device includes:
a first classifier module 41, configured to input a face image to be detected into a first classifier for processing to obtain a first classification result; and
a second classifier module 42, configured to input the first classification result into a second classifier for processing to obtain a face detection result, where the first classifier is different from the second classifier and the processing speed of the first classifier is greater than the processing speed of the second classifier.
In one possible implementation, the first classifier is a nested cascade classifier, and the second classifier is a common cascade classifier or a convolutional neural network classifier.
In one possible implementation, the first classifier is a common cascade classifier, and the second classifier is a convolutional neural network classifier.
Fig. 14 is a block diagram of a face detection device according to an exemplary embodiment. The device shown in Fig. 14 is based on the device shown in Fig. 13.
In one possible implementation, the first classifier module 41 includes:
a first feature extraction submodule 411, configured to extract Haar features of the face image to be detected; and
a first processing submodule 412, configured to process the face image to be detected according to the extracted Haar features.
In one possible implementation, the second classifier module 42 includes:
a second feature extraction submodule 421, configured to extract Haar features and local binary pattern (LBP) features of the first classification result; and
a second processing submodule 422, configured to process the first classification result according to the extracted Haar features and LBP features.
Regarding the devices in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method, and will not be elaborated here.
Fig. 15 is a block diagram of a face detection device 800 according to an exemplary embodiment. For example, the device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, medical equipment, fitness equipment, a personal digital assistant, or the like.
Referring to Fig. 15, the device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operation of the device 800, such as operations associated with display, telephone calls, data communication, camera operation and recording operations. The processing component 802 may include one or more processors 820 to execute instructions so as to perform all or part of the steps of the methods described above. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support the operation of the device 800. Examples of such data include instructions for any application or method operating on the device 800, contact data, phonebook data, messages, pictures, videos, and so on. The memory 804 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk or an optical disk.
The power supply component 806 provides power for the various components of the device 800. The power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the device 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the device 800 is in an operating mode such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each of the front camera and the rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the device 800 is in an operating mode such as a call mode, a recording mode or a speech recognition mode. The received audio signals may be further stored in the memory 804 or sent via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to: a home button, a volume button, a start button and a lock button.
The sensor component 814 includes one or more sensors for providing state assessments of various aspects of the device 800. For example, the sensor component 814 can detect the open/closed state of the device 800 and the relative positioning of components (for example, the display and the keypad of the device 800), and can also detect a change in position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a temperature change of the device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the device 800 and other devices. The device 800 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the device 800 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components, for performing the methods described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 804 including instructions; the instructions can be executed by the processor 820 of the device 800 to complete the methods described above. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 1932 including instructions; the instructions can be executed by the processing component 1922 of the device 1900 to complete the methods described above. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Other embodiments of the disclosure will readily occur to those skilled in the art after considering the specification and practicing the invention disclosed herein. This application is intended to cover any variations, uses or adaptations of the disclosure that follow its general principles and include common knowledge or conventional techniques in the art not disclosed in the disclosure. The specification and embodiments are to be regarded as exemplary only, and the true scope and spirit of the disclosure are indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
Claims (12)
1. A face detection method, characterized in that the method includes:
inputting a face image to be detected into a first classifier for processing to obtain a first classification result; and
inputting the first classification result into a second classifier for processing to obtain a face detection result, the first classifier being different from the second classifier and the processing speed of the first classifier being greater than the processing speed of the second classifier.
2. The method according to claim 1, characterized in that inputting the face image to be detected into the first classifier for processing includes:
the first classifier extracting Haar features of the face image to be detected; and
the first classifier processing the face image to be detected according to the extracted Haar features.
3. The method according to claim 1, characterized in that inputting the first classification result into the second classifier for processing includes:
the second classifier extracting Haar features and local binary pattern (LBP) features of the first classification result; and
the second classifier processing the first classification result according to the extracted Haar features and LBP features.
4. The method according to any one of claims 1 to 3, characterized in that the first classifier is a nested cascade classifier, and the second classifier is a common cascade classifier or a convolutional neural network classifier.
5. The method according to any one of claims 1 to 3, characterized in that the first classifier is a common cascade classifier, and the second classifier is a convolutional neural network classifier.
6. A face detection device, characterized by including:
a first classifier module, configured to input a face image to be detected into a first classifier for processing to obtain a first classification result; and
a second classifier module, configured to input the first classification result into a second classifier for processing to obtain a face detection result, the first classifier being different from the second classifier and the processing speed of the first classifier being greater than the processing speed of the second classifier.
7. The device according to claim 6, characterized in that the first classifier module includes:
a first feature extraction submodule, configured to extract Haar features of the face image to be detected; and
a first processing submodule, configured to process the face image to be detected according to the extracted Haar features.
8. The device according to claim 6, characterized in that the second classifier module includes:
a second feature extraction submodule, configured to extract Haar features and local binary pattern (LBP) features of the first classification result; and
a second processing submodule, configured to process the first classification result according to the extracted Haar features and LBP features.
9. The device according to any one of claims 6 to 8, characterized in that the first classifier is a nested cascade classifier, and the second classifier is a common cascade classifier or a convolutional neural network classifier.
10. The device according to any one of claims 6 to 8, characterized in that the first classifier is a common cascade classifier, and the second classifier is a convolutional neural network classifier.
11. A face detection device, characterized by including:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the method according to any one of claims 1 to 5.
12. A non-volatile computer-readable storage medium on which computer program instructions are stored, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 5.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201711284540.7A | 2017-12-07 | 2017-12-07 | Face detection method and device |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201711284540.7A | 2017-12-07 | 2017-12-07 | Face detection method and device |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN108062518A | 2018-05-22 |
Family

ID=62135410

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201711284540.7A (pending) | Face detection method and device | 2017-12-07 | 2017-12-07 |
Country Status (1)

| Country | Link |
|---|---|
| CN (1) | CN108062518A (en) |
Patent Citations (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101350063A (en) * | 2008-09-03 | 2009-01-21 | | Method and apparatus for locating human face characteristic point |
| US20130294642A1 (en) * | 2012-05-01 | 2013-11-07 | Hulu Llc | Augmenting video with facial recognition |
| CN104268584A (en) * | 2014-09-16 | 2015-01-07 | | Human face detection method based on hierarchical filtration |
| CN105718868A (en) * | 2016-01-18 | 2016-06-29 | | Face detection system and method for multi-pose faces |
Non-Patent Citations (1)

| Title |
|---|
| 李颖颖: "基于特征融合的AdaBoost人脸检测研究" (Research on AdaBoost face detection based on feature fusion), 《中国优秀硕士学位论文》 (China Excellent Master's Dissertations) * |
Cited By (5)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| GB2575852A (en) * | 2018-07-26 | 2020-01-29 | Advanced Risc Mach Ltd | Image processing |
| GB2575852B (en) * | 2018-07-26 | 2021-06-09 | Advanced Risc Mach Ltd | Image processing |
| US11423645B2 (en) | 2018-07-26 | 2022-08-23 | Apical Limited | Image processing |
| CN109345553A (en) * | 2018-08-31 | 2019-02-15 | 厦门中控智慧信息技术有限公司 | Palm and key point detection method and device, and terminal device |
| CN109446893A (en) * | 2018-09-14 | 2019-03-08 | 百度在线网络技术(北京)有限公司 | Face recognition method, device, computer equipment and storage medium |
Similar Documents

| Publication | Title |
|---|---|
| CN106295566B | Facial expression recognition method and device |
| CN110602527B | Video processing method, device and storage medium |
| CN106295515B | Method and device for determining the face region in an image |
| CN105528607B | Region extraction method, model training method and device |
| CN105426867B | Face recognition verification method and device |
| CN106548468B | Image sharpness discrimination method and device |
| CN106295511B | Face tracking method and device |
| CN105469356B | Face image processing method and device |
| CN106339680A | Face key point positioning method and device |
| CN106548145A | Image recognition method and device |
| CN109543714A | Data feature acquisition method and device, electronic device and storage medium |
| CN106228556B | Image quality analysis method and device |
| CN108010060A | Object detection method and device |
| CN106980840A | Face shape matching method, device and storage medium |
| CN105528078B | Method and device for controlling an electronic device |
| CN107368810A | Face detection method and device |
| CN105678242B | Focusing method and device in handheld certificate mode |
| CN104077597B | Image classification method and device |
| CN107463903A | Face key point positioning method and device |
| CN107392166A | Skin color detection method, device and computer-readable storage medium |
| CN106971164A | Face shape matching method and device |
| CN106295499A | Age estimation method and device |
| CN109360197A | Image processing method and device, electronic device and storage medium |
| CN110399934A | Video classification method, device and electronic device |
| CN108062518A | Face detection method and device |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 2018-05-22 |