CN108985159A - Human-eye model training method, eye recognition method, apparatus, equipment and medium - Google Patents
- Publication number
- CN108985159A (application CN201810585092.2A)
- Authority
- CN
- China
- Prior art keywords
- eye
- sample data
- facial image
- human
- sample
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Multimedia (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Geometry (AREA)
- Ophthalmology & Optometry (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a human-eye model training method, an eye recognition method, and a corresponding apparatus, device and medium. The method comprises: obtaining facial image samples and labeling them to obtain facial image sample data; extracting the feature vector of each facial image sample and dividing the facial image sample data into training sample data and verification sample data; training a support vector machine classifier with the training sample data to obtain the critical surface of the classifier; calculating the vector distance between the feature vector of each verification sample in the verification sample data and the critical surface; obtaining a preset true positive rate or a preset false positive rate, obtaining a classification threshold from the vector distances and the label data corresponding to the verification samples, and obtaining a human-eye judgment model according to the classification threshold. With this human-eye model training method, a human-eye judgment model with high accuracy in judging whether the eyes are occluded can be obtained.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to a human-eye model training method, an eye recognition method, and a corresponding apparatus, device and medium.
Background technique
With the rapid development of artificial intelligence, human eye detection and recognition has attracted wide attention and become a hot topic in the field. Existing facial feature point recognition algorithms can mark the positions of different organs in a face picture, such as the eyes, ears, mouth or nose. Even if the corresponding position is occluded (by glasses, hair, a hand covering the mouth, and so on), such an algorithm can still identify the relative positions of the different parts and output a corresponding picture. In some picture-processing tasks, however, unoccluded eye images are required, and the eye pictures identified by a conventional facial feature point recognition algorithm cannot screen out occluded pictures; this introduces errors and hinders subsequent processing.
Summary of the invention
Accordingly, in view of the above technical problems, it is necessary to provide a human-eye model training method, apparatus, computer device and storage medium that can improve model training efficiency.
In addition, an eye recognition method is needed which, after training according to the human-eye model training method, uses the trained model to identify eye pictures, so as to improve the accuracy of eye recognition.
A human-eye model training method comprises:
obtaining facial image samples, labeling the facial image samples to obtain facial image sample data, and extracting the feature vector of each facial image sample in the facial image sample data, wherein the facial image sample data includes the facial image samples and the label data;
dividing the facial image sample data into training sample data and verification sample data;
training a support vector machine classifier with the training sample data to obtain the critical surface of the support vector machine classifier;
calculating the vector distance between the feature vector of each verification sample in the verification sample data and the critical surface;
obtaining a preset true positive rate or a preset false positive rate, and obtaining a classification threshold from the vector distances and the label data corresponding to the verification sample data;
obtaining a human-eye judgment model according to the classification threshold.
A human-eye model training apparatus comprises:
a facial image sample data obtaining module, configured to obtain facial image samples, label the facial image samples to obtain facial image sample data, and extract the feature vector of each facial image sample in the facial image sample data, wherein the facial image sample data includes the facial image samples and the label data;
a facial image sample data division module, configured to divide the facial image sample data into training sample data and verification sample data;
a critical surface obtaining module, configured to train a support vector machine classifier with the training sample data and obtain the critical surface of the support vector machine classifier;
a vector distance computing module, configured to calculate the vector distance between the feature vector of each verification sample in the verification sample data and the critical surface;
a classification threshold obtaining module, configured to obtain a preset true positive rate or a preset false positive rate, and obtain a classification threshold from the vector distances and the label data corresponding to the verification sample data;
a human-eye judgment model obtaining module, configured to obtain a human-eye judgment model according to the classification threshold.
An eye recognition method comprises:
obtaining a face picture to be identified, and obtaining a forward-facing eye-region image using a facial feature point detection algorithm;
normalizing the forward-facing eye-region image to obtain an eye image to be identified;
inputting the eye image to be identified into the human-eye judgment model trained by the above human-eye model training method for identification, and obtaining a recognition result.
An eye recognition apparatus comprises:
a face picture obtaining module, configured to obtain a face picture to be identified and obtain a forward-facing eye-region image using a facial feature point detection algorithm;
an eye image obtaining module, configured to normalize the forward-facing eye-region image to obtain an eye image to be identified;
a recognition result obtaining module, configured to input the eye image to be identified into the human-eye judgment model trained by the above human-eye model training method for identification, and obtain a recognition result.
A computer device comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above human-eye model training method, or the steps of the above eye recognition method, when executing the computer program.
A computer-readable storage medium stores a computer program, wherein the computer program, when executed by a processor, implements the steps of the above human-eye model training method or the steps of the above eye recognition method.
In the above human-eye model training method, apparatus, device and medium, facial image samples are first obtained and labeled to obtain facial image sample data, and the feature vector of each facial image sample is extracted, wherein the facial image sample data includes the facial image samples and the label data. The facial image sample data is then divided into training sample data and verification sample data. A support vector machine classifier is trained with the training sample data to obtain its critical surface, which simplifies the classification process of the support vector machine classifier. The vector distance between the feature vector of each verification sample and the critical surface is calculated, allowing an intuitive comparison of how close each verification sample is to its own class. A preset true positive rate or preset false positive rate is obtained, a classification threshold is derived from the vector distances and the label data corresponding to the verification samples, and a human-eye judgment model is obtained according to the classification threshold. After a facial image to be identified is input into the human-eye judgment model, a yes/no classification result can be given directly according to the classification threshold, so repeated training is avoided and the efficiency of human-eye model training is improved.
In the above eye recognition method, apparatus, device and medium, a face picture to be identified is first obtained; a forward-facing eye-region image is obtained using a facial feature point detection algorithm and then normalized to obtain an eye image to be identified, which is input into the human-eye judgment model for identification to obtain a recognition result. When the human-eye judgment model identifies the eye image to be identified, it can quickly determine whether the eyes in the face picture are occluded, improving recognition efficiency.
Detailed description of the invention
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings used in the description of the embodiments are briefly introduced below. It is apparent that the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is the application environment schematic diagram of human-eye model training method provided in an embodiment of the present invention, eye recognition method;
Fig. 2 is the implementation flow chart of human-eye model training method provided in an embodiment of the present invention;
Fig. 3 is the implementation flow chart of step S10 in human-eye model training method provided in an embodiment of the present invention;
Fig. 4 is the implementation flow chart of step S30 in human-eye model training method provided in an embodiment of the present invention;
Fig. 5 is the implementation flow chart of step S15 in human-eye model training method provided in an embodiment of the present invention;
Fig. 6 is the implementation flow chart of step S50 in human-eye model training method provided in an embodiment of the present invention;
Fig. 7 is the schematic diagram of human-eye model training device provided in an embodiment of the present invention;
Fig. 8 is the implementation flow chart of eye recognition method provided in an embodiment of the present invention;
Fig. 9 is the schematic diagram of Eye recognition device provided in an embodiment of the present invention;
Figure 10 is the schematic diagram of computer equipment provided in an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The human-eye model training method provided by the present application can be applied in the application environment shown in Fig. 1, where a client communicates with a server through a network. The server receives training sample data sent by the client and establishes a human-eye judgment model, and then receives verification samples sent by the client to complete the training of the human-eye judgment model. The client may be, but is not limited to, a personal computer, a laptop, a smartphone, a tablet computer or a portable wearable device. The server may be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in Fig. 2, the method is illustrated as applied to the server in Fig. 1 and includes the following steps:
S10: obtaining facial image samples, labeling the facial image samples to obtain facial image sample data, and extracting the feature vector of each facial image sample in the facial image sample data, wherein the facial image sample data includes the facial image samples and the label data.
Here, the facial image sample data is the eye image data used for model training. The feature vector of a facial image sample is the vector that characterizes the image information of that sample, for example: a HOG (Histogram of Oriented Gradients) feature vector, an LBP (Local Binary Patterns) feature vector, or a PCA (Principal Component Analysis) feature vector. Feature vectors characterize the image information with compact data, so repeated extraction in the subsequent training process can be avoided.
Preferably, the HOG feature vector of each facial image sample is extracted in this embodiment. Since the HOG feature vector describes a facial image sample by the gradients of its local information, extracting HOG feature vectors avoids the influence of factors such as geometric deformation and lighting variation on human-eye model training. Labeling the facial image samples means dividing them by content into positive samples (unoccluded eye images) and negative samples (occluded eye images); after both kinds of samples are labeled, the facial image sample data is obtained. The facial image samples thus include positive and negative samples and, understandably, the facial image sample data includes the facial image samples and the label data. Preferably, the number of negative samples is 2-3 times the number of positive samples, which makes the sample information more comprehensive and improves the accuracy of model training.
In this embodiment, facial image sample data is obtained for subsequent model training, and occluded eye images are included as training samples, so the false detection rate can be reduced.
Optionally, the facial image samples include, but are not limited to, facial image samples gathered in advance and facial image samples from commonly used face databases stored in memory in advance.
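As an illustration of the HOG extraction preferred above, the following is a minimal, simplified sketch: plain per-cell orientation histograms with no overlapping blocks. A real pipeline would use a full HOG implementation (e.g. from scikit-image or OpenCV); the cell size, bin count and normalization here are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def hog_vector(img, cell=8, bins=9):
    """Very simplified HOG: per-cell histograms of gradient orientation,
    concatenated and L2-normalized (no overlapping blocks, for brevity)."""
    gy, gx = np.gradient(img.astype(float))       # local gradients
    mag = np.hypot(gx, gy)                        # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    v = np.concatenate(feats).astype(float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

# a 32x48 eye crop with 8-pixel cells yields 4*6 cells * 9 bins = 216 features
vec = hog_vector(np.random.rand(32, 48))
print(vec.shape)
```

Because the descriptor is built from local gradients, uniform brightness shifts cancel out, which is the robustness to lighting variation that the text attributes to HOG.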
S20: dividing the facial image sample data into training sample data and verification sample data.
Here, the training sample data is the sample data used for learning: a classifier is established by fitting parameters, i.e., the facial image samples in the training sample data are used to train the machine learning model and determine its parameters. The verification sample data is the sample data used to verify the discrimination ability (e.g., the recognition rate) of the trained machine learning model. Optionally, 70%-75% of the facial image sample data is used as training sample data and the remainder as verification sample data. In one specific embodiment, 300 positive samples and 700 negative samples, 1000 facial image samples in total, form the facial image sample data; 260 of them serve as verification sample data and 740 as training sample data.
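The worked split above (1000 labeled samples into 740 training and 260 verification) can be sketched with scikit-learn; the stratified option and random seed are assumptions for illustration, and the random features stand in for real HOG vectors.

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 216)        # stand-in feature vectors
y = np.array([1] * 300 + [-1] * 700)  # 300 positive, 700 negative samples

# hold out exactly 260 samples for verification, keeping class proportions
X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=260, stratify=y, random_state=0)
print(len(X_tr), len(X_val))  # 740 260
```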
S30: training a support vector machine classifier with the training sample data to obtain the critical surface of the support vector machine classifier.
A support vector machine (SVM) classifier is a discriminative classifier defined by a separating critical surface, used for classification or regression analysis of data. The critical surface is the classifying surface that correctly separates the positive and negative samples while maximizing the margin between the two classes. Specifically, a suitable kernel function is chosen according to the characteristics of the facial image sample data, and the feature vectors of the training sample data are passed through the kernel function so that they are mapped into a high-dimensional feature space in which they become linearly separable. The critical surface obtained there serves as the classifying surface that classifies the training sample data, separating the positive samples from the negative samples. In practice, the training sample data is input and the support vector machine classifier outputs a critical surface that classifies the training sample data. Obtaining the critical surface simplifies the classification process of the support vector machine classifier.
In this embodiment, training the support vector machine classifier on the feature vectors of the facial image samples yields a critical surface with good classification ability, improving the efficiency of human-eye model training.
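A minimal sketch of this training step using scikit-learn: the linear kernel, the toy feature dimension and the well-separated synthetic clusters are assumptions for illustration (the patent leaves the kernel choice open), but the output, a fitted critical surface g(x) = w·x + b, is exactly what step S30 describes.

```python
import numpy as np
from sklearn.svm import SVC

# synthetic, linearly separable stand-ins for the 740 training vectors
rng = np.random.default_rng(1)
X_train = np.vstack([rng.normal(1.0, 0.3, (222, 5)),    # 222 positives
                     rng.normal(-1.0, 0.3, (518, 5))])  # 518 negatives
y_train = np.array([1] * 222 + [-1] * 518)

clf = SVC(kernel="linear").fit(X_train, y_train)

# the fitted critical surface is g(x) = w.x + b
w, b = clf.coef_[0], clf.intercept_[0]
print(w.shape)  # (5,)
```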
S40: calculating the vector distance between the feature vector of each verification sample in the verification sample data and the critical surface.
Here, the verification sample data is the pre-stored facial image sample data used for verification, which includes positive sample data (unoccluded eye images) and negative sample data (occluded eye images); after both kinds of samples are labeled, the verification samples are obtained. The feature vector of a verification sample is the vector obtained by performing feature extraction on that sample, and includes, but is not limited to, HOG, LBP and PCA feature vectors.
The vector distance between the feature vector of a verification sample and the critical surface is the distance, in the mathematical sense, between the directed line segment corresponding to the feature vector and the plane corresponding to the critical surface, i.e., a point-to-plane distance; this distance is a numerical value, which is the vector distance. Suppose the critical surface is expressed as g(x) = w·x + b, where w is a multi-dimensional vector that can be written as w = [w1, w2, w3, ..., wn]. Then the vector distance from a feature vector x to the critical surface is d = g(x)/||w|| = (w·x + b)/||w||, where ||w|| denotes the norm of w, i.e., ||w|| = sqrt(w1² + w2² + ... + wn²).
By calculating the vector distance between the feature vector of each verification sample and the critical surface, the closeness of each verification sample to its own class can be compared intuitively.
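The distance formula d = (w·x + b)/||w|| can be checked numerically. For a linear kernel, scikit-learn's `decision_function` returns g(x) directly, so dividing it by ||w|| gives the vector distance; the 2-D toy data below stands in for real verification feature vectors.

```python
import numpy as np
from sklearn.svm import SVC

# two linearly separable toy clusters as stand-in verification samples
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.0, 0.5, (50, 2)),
               rng.normal(-2.0, 0.5, (50, 2))])
y = np.array([1] * 50 + [-1] * 50)

clf = SVC(kernel="linear").fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

# signed vector distance of every sample to the critical surface
d = (X @ w + b) / np.linalg.norm(w)
print(d.shape)
```

The sign of d indicates the side of the critical surface, and its magnitude is exactly the "closeness to its own class" the text refers to.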
S50: obtaining a preset true positive rate or a preset false positive rate, and obtaining a classification threshold from the vector distances and the label data corresponding to the verification sample data.
The preset true positive rate is the preset proportion, among all positive samples, of samples correctly judged as positive; the preset false positive rate is the preset proportion, among all negative samples, of samples incorrectly judged as positive. In this embodiment, the true positive rate is the proportion of unoccluded eye images judged as unoccluded among all unoccluded eye-image samples, and the false positive rate is the proportion of occluded eye images incorrectly judged as unoccluded among all occluded eye-image samples.
Understandably, the higher the true positive rate or the lower the false positive rate, the stricter the classification requirement, and the more applications the model can adapt to. Preferably, a preset true positive rate of 95%, or a preset false positive rate of 5%, achieves a good classification effect in this embodiment and can adapt to a variety of different applications; by setting the true positive rate or false positive rate appropriately, the adaptability of the support vector machine classifier is extended.
It should be understood that these preset values are a preferred range of the present invention and may be configured according to the needs of the actual application, without limitation here.
The classification threshold is the critical value for classifying samples. Specifically, when a sample is classified, a distance below the classification threshold is judged a positive sample and a distance above it is judged a negative sample.
Specifically, the label data corresponding to the verification sample data refers to the labels of the verification samples, for example: positive samples labeled 1 and negative samples labeled -1. After the vector distances between the feature vectors of the verification samples and the critical surface, and the label data of the verification samples, are obtained, the classification threshold is calculated according to the preset true positive rate or preset false positive rate.
For example, with a preset false positive rate of 10% and 15 verification samples S1, S2, ..., S15, of which 5 are positive and 10 negative, if the vector distances of the 10 negative samples to the critical surface are 1, 2, ..., 10, the classification threshold lies in the interval [1, 2]; taking a threshold of 1.5 satisfies the 10% preset false positive rate.
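The worked example, 10 negative samples at distances 1..10 and a preset false positive rate of 10%, can be reproduced in a few lines. Choosing the midpoint 1.5 of the feasible interval [1, 2] is one reasonable convention, not something the text mandates.

```python
import numpy as np

# distances of the 10 negative (occluded) verification samples to the
# critical surface, from the worked example; positives are assumed to
# lie well below the threshold on the other side
neg_d = np.arange(1, 11, dtype=float)
target_fpr = 0.10

# allow at most 10% of negatives to fall on the "positive" side
# (below the threshold), then place the threshold at the midpoint
k = int(np.floor(target_fpr * len(neg_d)))          # = 1 sample allowed
sorted_d = np.sort(neg_d)
threshold = (sorted_d[k - 1] + sorted_d[k]) / 2 if k > 0 else sorted_d[0] - 1
print(threshold)  # 1.5, inside the interval [1, 2] from the text
```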
S60: obtaining a human-eye judgment model according to the classification threshold.
Specifically, the human-eye judgment model is the model that judges whether the eye region in a facial image sample is occluded. Once the classification threshold is determined, the vector distance between the feature vector of a facial image and the critical surface of the support vector machine classifier is compared with the classification threshold, the image is classified according to the comparison result, and the eye region in the facial image is thereby determined to be either occluded or unoccluded. Therefore, once the classification threshold is given, the human-eye judgment model is complete: after a facial image to be identified is input into it, the result can be given directly according to the classification threshold, so repeated training is avoided and the efficiency of human-eye model training is improved.
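Putting S40-S60 together, the finished judgment model reduces to a single distance comparison. The function below is a hypothetical sketch that follows the text's convention that distances below the threshold are judged positive (unoccluded); the toy surface and threshold values are illustrative only.

```python
import numpy as np

def eye_judgment(feature_vec, w, b, threshold):
    """Hypothetical human-eye judgment model: signed distance to the
    critical surface g(x) = w.x + b compared with the learned threshold.
    Returns True for an unoccluded eye (positive class)."""
    d = (np.asarray(feature_vec) @ w + b) / np.linalg.norm(w)
    return bool(d < threshold)

# toy critical surface: w = [1, 0], b = 0, threshold = 1.5
print(eye_judgment([1.0, 0.0], np.array([1.0, 0.0]), 0.0, 1.5))  # True
print(eye_judgment([3.0, 0.0], np.array([1.0, 0.0]), 0.0, 1.5))  # False
```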
In this embodiment, facial image samples are first obtained and labeled to obtain facial image sample data, and the feature vector of each facial image sample is extracted; the facial image sample data is then divided into training sample data and verification sample data. The support vector machine classifier is trained with the training sample data to obtain its critical surface, simplifying the classification process. The vector distance between the feature vector of each verification sample and the critical surface of the support vector machine classifier is then calculated, allowing an intuitive comparison of how close each verification sample is to its own class. A preset true positive rate or preset false positive rate is obtained, extending the adaptability of the support vector machine classifier; a classification threshold is obtained from the vector distances and the label data corresponding to the verification sample data; and finally the human-eye judgment model is obtained, avoiding repeated training and improving the efficiency of human-eye model training.
In one embodiment, as shown in Fig. 3, extracting the feature vectors of the facial image samples in step S10 specifically includes the following steps:
S11: obtaining facial feature points using a facial feature point detection algorithm, the facial feature points including a left eye corner point, a right eye corner point and a between-the-eyebrows point, where the left eye corner point, right eye corner point and between-the-eyebrows point are feature points belonging to the same eye region.
Here, a facial feature point detection algorithm is an algorithm for detecting facial features and marking their location information. Facial feature points are points such as eye corner points, nose wing points and mouth corner points that indicate the contours of facial organs such as the eyes, nose and mouth. Specifically, facial feature point detection algorithms include, but are not limited to, algorithms based on deep learning, algorithms based on models, and algorithms based on cascaded shape regression.
Optionally, the facial feature points can be obtained using the Viola-Jones algorithm based on Haar features, which is built into OpenCV. OpenCV is a cross-platform computer vision library that runs on Linux, Windows, Android and Mac OS; it consists of a series of C functions and a small number of C++ classes, provides interfaces for languages such as Python, Ruby and MATLAB, and implements many general-purpose algorithms in image processing and computer vision, among which the Viola-Jones algorithm based on Haar features is one facial feature point detection algorithm. A Haar feature is a feature that reflects the gray-level variation of an image, i.e., the differences between pixel sub-modules, and falls into three classes: edge features, linear features, and center-diagonal features. The Viola-Jones algorithm performs face detection based on the Haar feature values of a face.
Specifically, the input facial image sample data is obtained and preprocessed; skin-color region segmentation, facial feature region segmentation and facial feature region classification are then performed in turn; finally, matching calculation is carried out with the Haar-feature-based Viola-Jones algorithm and the facial feature region classification, and the facial feature point information of the facial image is obtained.
In this embodiment, the left eye corner point, right eye corner point and between-the-eyebrows point of a facial image sample are obtained using a facial feature point detection algorithm, so that the eye region of the facial image sample can be determined from the location information of these feature points. It should be appreciated that the left eye corner point, right eye corner point and between-the-eyebrows point referred to in this step are three feature points belonging to the same eye region, for example the three feature points corresponding to the left eye or the three corresponding to the right eye. In one embodiment, only the image of one eye (left or right) is acquired from a facial image sample. When both eyes need to be processed, after the image of one eye is acquired, a mirror image of it can serve as the image of the other eye in the same facial image sample, saving acquisition time and improving data-processing efficiency.
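The mirroring shortcut described above amounts to a horizontal flip of the single acquired eye crop; a minimal numpy sketch (the tiny integer array stands in for a real eye image):

```python
import numpy as np

left_eye = np.arange(12).reshape(3, 4)  # stand-in for a left-eye crop
right_eye = np.fliplr(left_eye)         # mirrored copy serves as the right eye
print(right_eye[0])  # [3 2 1 0]
```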
S12: adjusting the facial image sample to the forward orientation according to the left eye corner point and the right eye corner point.
Here, forward adjustment standardizes the orientation of the facial feature points. In this embodiment, forward adjustment means adjusting the image so that the left and right eye corner points lie on the same horizontal line (i.e., their ordinates are equal), standardizing the eye feature points to the same orientation and thereby avoiding the influence of orientation differences among training samples on model training. This improves the robustness of the facial image samples to orientation differences.
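The forward adjustment can be sketched as computing the tilt angle from the two eye corner points and then rotating the image by its negative (in a real pipeline e.g. via cv2.warpAffine); the coordinates below are illustrative, not taken from the patent.

```python
import numpy as np

# illustrative (x, y) coordinates of the two eye corner points
left_corner = np.array([10.0, 22.0])
right_corner = np.array([40.0, 30.0])

# tilt of the line through the corners; rotating by -angle levels the eyes
angle = np.degrees(np.arctan2(right_corner[1] - left_corner[1],
                              right_corner[0] - left_corner[0]))
print(round(angle, 2))
```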
S13: Construct the eye rectangular region according to the left eye corner point, the right eye corner point and the glabella point.
Here, the eye rectangular region is a rectangular region containing the eye image. In a specific embodiment, the position coordinates of the left eye corner point, the right eye corner point and the glabella point are located with the facial feature point detection algorithm. The left edge of the eye rectangular region is the abscissa of the left eye corner point, the right edge is the abscissa of the right eye corner point, the upper edge is the ordinate of the glabella point, and the lower edge is the ordinate of the left eye corner point (or of the right eye corner point) plus the vertical distance from the glabella point to the left eye corner point. The rectangle bounded by these four coordinates (left, right, upper and lower edges) is the eye rectangular region.
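The rectangle construction of step S13 can be sketched directly from the three feature points; the helper name and the image-coordinate convention (y axis pointing down, glabella above the eye corners) are illustrative assumptions:

```python
def eye_rectangle(left_corner, right_corner, glabella):
    """Build the eye rectangle from the three feature points of step S13,
    with the image y axis pointing down."""
    lx, ly = left_corner
    rx, _ = right_corner
    _, gy = glabella
    left, right = lx, rx
    top = gy                  # upper edge: ordinate of the glabella point
    bottom = ly + (ly - gy)   # lower edge: corner ordinate + glabella-to-corner distance
    return left, top, right, bottom

# Corners at y=60, glabella at y=44 -> a 48x32 rectangle.
rect = eye_rectangle((100, 60), (148, 60), (124, 44))
```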
S14: Perform image normalization on the eye rectangular region to obtain the normalized eye rectangular region.
Here, normalization refers to a series of transformations that convert the image to be processed into a corresponding canonical form, such as image size normalization or image grayscale normalization. Preferably, normalization here refers to size normalization of the eye rectangular region. Specifically, the eye rectangular region is rescaled from the resolution of the facial image sample to a fixed size; for example, it can be set to a Size(48, 32) rectangle, i.e. a rectangular region 48 pixels long and 32 pixels wide. Setting the eye rectangular region to a fixed size reduces the complexity of the subsequent feature vector extraction.
It is readily understood that performing image normalization on the eye rectangular region benefits the subsequent training of the support vector machine model: it prevents attributes with large numerical ranges from dominating those with small numerical ranges, and also avoids numerical complexity in the calculation process.
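A minimal sketch of the size normalization, using a dependency-free nearest-neighbour resize (in practice cv2.resize with the target Size(48, 32) would typically be used instead):

```python
import numpy as np

def normalize_size(region: np.ndarray, width: int = 48, height: int = 32) -> np.ndarray:
    """Nearest-neighbour resize of the eye rectangle to a fixed 48x32 size."""
    h, w = region.shape[:2]
    rows = np.arange(height) * h // height   # source row for each output row
    cols = np.arange(width) * w // width     # source column for each output column
    return region[rows][:, cols]

patch = np.arange(96 * 64).reshape(96, 64)   # a 96x64 eye crop
fixed = normalize_size(patch)                # -> 32 rows x 48 columns
```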
S15: Extract the HOG feature vector from the normalized eye rectangular region.
A HOG (Histogram of Oriented Gradients) feature vector is a vector describing the gradient direction information of local image regions. This feature is sensitive to variations such as image size and position, so fixing the input image range makes the computed HOG feature vectors more uniform: during model training the model can focus on the difference between unoccluded and occluded eye images rather than on variations in eye position, which makes training more convenient. At the same time, the HOG feature vector itself captures image gradient features rather than color features, so it is little affected by illumination changes and geometric variations; extracting HOG feature vectors therefore provides a simple and efficient way to extract feature vectors from facial image samples. The choice of feature depends on the classification and detection target; color, texture and shape are commonly used as target features. Given the accuracy required for detecting eye images, the present embodiment selects a shape feature, namely the HOG feature vector of the training samples.
In the present embodiment, the left eye corner point, right eye corner point and glabella point of the facial feature points are obtained using the facial feature point detection algorithm; orientation adjustment is then performed on the image samples to improve the robustness of the face pictures to direction changes; the eye rectangular region is then constructed and image normalization is performed on it to obtain the normalized eye rectangular region, which benefits the subsequent training of the support vector machine model; finally, the HOG feature vector of the normalized eye rectangular region is extracted, so that feature vectors are extracted simply and efficiently from the facial image samples in the facial image sample data.
In one embodiment, as shown in Fig. 4, step S30 — training the support vector machine classifier with the training sample data to obtain the critical surface of the support vector machine classifier — specifically comprises the following steps:
S31: Obtain the kernel function and the penalty parameter of the support vector machine classifier, and solve for the Lagrange multipliers α* and the decision threshold b using the following optimization problem:

$$\min_{\alpha}\ \frac{1}{2}\sum_{i=1}^{l}\sum_{j=1}^{l}\alpha_i\alpha_j y_i y_j K(x_i,x_j)-\sum_{i=1}^{l}\alpha_i$$

$$\text{s.t.}\quad \sum_{i=1}^{l}y_i\alpha_i=0,\qquad 0\le\alpha_i\le C,\quad i=1,\dots,l$$

In the formula, s.t. abbreviates the constraint conditions, and min denotes minimizing the objective over α subject to those constraints; K(x_i, x_j) is the kernel function of the support vector machine classifier, C (with C > 0) is its penalty parameter, the α_i are the Lagrange multipliers and α* = (α_1*, …, α_l*)^T denotes the optimal solution, x_i is the feature vector of the i-th training sample, l is the number of training sample feature vectors, and y_i is the label of the i-th training sample.
Here, the kernel function is the kernel used inside the support vector machine classifier to perform the kernel operation on the feature vectors of the training samples input during training. Kernel functions for support vector machine classifiers include, but are not limited to, linear kernel functions, polynomial kernel functions, Gaussian kernel functions and radial basis kernel functions. Because the support vector machine classifier in the present embodiment deals with a linearly separable case, a linear kernel is preferably used as the kernel function, so that K(x_i, x_j) = (x_i · x_j); the linear kernel has few parameters and fast computation, and is suitable for the linearly separable case. y_i is the label of the i-th training sample: since the support vector machine classifier solves a two-class problem, y_i takes one of the two values 1 or −1, with y_i = 1 if the facial image sample is a positive sample and y_i = −1 if it is a negative sample.
The penalty parameter C is a fixed numerical value used to tune the support vector machine classifier. It can address the problem of sample skew, i.e. the situation in which the sample sizes of the two classes participating in classification (this can also apply to multiple classes) differ greatly — for example 10,000 positive samples against 100 negative samples — so that the positive samples occupy a wide distribution range. To address sample skew, the value of C can be increased appropriately according to the ratio of the number of positive samples to the number of negative samples; the larger C is, the smaller the fault tolerance of the classifier. The decision threshold b is the critical value used for decision classification by the support vector machine classifier, and is a real number.
Specifically, after a suitable kernel function K(x_i, x_j) is obtained and a suitable penalty parameter C is set, the kernel operation is applied to the feature vectors of the training sample data, and the optimization problem above is solved: the Lagrange multipliers that minimize the objective after the kernel operation are sought, yielding the optimal solution α* = (α_1*, …, α_l*)^T. A component α_j* of α* lying in the open interval (0, C) is then selected, and the decision threshold is calculated according to

$$b^*=y_j-\sum_{i=1}^{l}y_i\alpha_i^*K(x_i,x_j)$$
Solving for the Lagrange multipliers α* and the decision threshold b of the support vector machine classifier thus yields good parameters for constructing an efficient support vector machine classifier.
S32: Obtain the critical surface g(x) of the support vector machine classifier from the Lagrange multipliers α* and the decision threshold b* using the following formula:

$$g(x)=\sum_{i=1}^{l}\alpha_i^*y_iK(x_i,x)+b^*$$

After training the support vector machine classifier yields the Lagrange multipliers α* and the decision threshold b* — that is, after these two training parameters have been adjusted — substituting them into the formula gives the critical surface of the support vector machine classifier.
It is readily understood that once the critical surface is obtained by calculation, subsequent facial image samples can be classified against it. In the training procedure, the feature vectors of the samples are extracted and saved first, so that when the training parameters are adjusted over repeated training runs the time for re-extracting features is saved and satisfactory training parameters are reached sooner. The critical surface can then be adjusted to tune the false alarm rate and accuracy of a given class without frequently retraining the model, which improves model training efficiency.
In the present embodiment, a suitable kernel function K(x_i, x_j) is first obtained and a suitable penalty parameter C is set; the kernel operation is applied to the feature vectors of the training sample data and the decision threshold b of the support vector machine classifier is solved for, yielding good parameters for constructing the support vector machine classifier; the Lagrange multipliers α* and the decision threshold b are then substituted into the formula of step S32 to obtain the critical surface g(x). Subsequent facial image samples can then be classified against the critical surface without frequently retraining the model, which improves the efficiency of model training.
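Once the multipliers and threshold are known, evaluating the critical surface g(x) with a linear kernel is straightforward. The sketch below uses toy values for the support vectors and multipliers and is not the patent's solver (the dual problem itself would be solved by an SMO-style optimizer or a library such as scikit-learn):

```python
import numpy as np

def critical_surface(x, support_vectors, alphas, labels, b):
    """Evaluate g(x) = sum_i alpha_i * y_i * K(x_i, x) + b with the linear
    kernel K(x_i, x) = <x_i, x>, as in the formula of step S32."""
    k = support_vectors @ x                  # linear kernel values K(x_i, x)
    return float(np.sum(alphas * labels * k) + b)

# Toy 1-D classifier: support vectors at -1 and +1, alpha = 1 each, b = 0.
sv = np.array([[-1.0], [1.0]])
alpha = np.array([1.0, 1.0])
y = np.array([-1.0, 1.0])
g = critical_surface(np.array([2.0]), sv, alpha, y, b=0.0)   # positive side
```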
In one embodiment, as shown in Fig. 5, step S15 — extracting the HOG feature vector from the normalized eye rectangular region — specifically comprises the following steps:
S151: Divide the normalized eye rectangular region into cell units, and calculate the magnitude and direction of the gradient at each pixel of each cell unit.
Specifically, the way the normalized eye rectangular region is divided differs according to actual needs and the requirements placed on the support vector machine classifier; sub-regions may or may not overlap. A cell unit is a connected region of the image, and each sub-region is composed of multiple cell units. For example, for a normalized eye rectangular region of 48*32, assuming one cell unit is 4*4 pixels and 2*2 cells form one sub-region, this normalized eye rectangular region has 6*4 sub-regions. The gradient direction interval of each cell unit, 0° to 180°, is divided into 9 bins, so one cell unit can be described by a 9-dimensional vector.
The detailed process for obtaining the magnitude and direction of the gradient at each pixel of the normalized eye rectangular region is as follows. First, the gradient of each pixel is obtained; for a pixel (x, y), the gradient calculation formulas are:

$$G_x(x,y)=H(x+1,y)-H(x-1,y)$$
$$G_y(x,y)=H(x,y+1)-H(x,y-1)$$

where G_x(x, y) is the horizontal gradient of pixel (x, y), G_y(x, y) is the vertical gradient of pixel (x, y), and H(x, y) is the gray value of pixel (x, y). The gradient magnitude of the pixel is then calculated using the following formula:

$$G(x,y)=\sqrt{G_x(x,y)^2+G_y(x,y)^2}$$

where G(x, y) is the magnitude of the pixel gradient. Finally, the direction of the pixel gradient is calculated using the following formula:

$$\alpha(x,y)=\arctan\left(\frac{G_y(x,y)}{G_x(x,y)}\right)$$

where α(x, y) is the direction angle of the pixel gradient.
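The gradient formulas above can be sketched with NumPy as follows; the border handling (edge replication) is an illustrative assumption, and the direction is folded into the unsigned 0°–180° range used by the 9-bin histograms:

```python
import numpy as np

def pixel_gradients(H: np.ndarray):
    """Central-difference gradients, magnitude and direction per pixel,
    matching G_x = H(x+1,y)-H(x-1,y) and G_y = H(x,y+1)-H(x,y-1)."""
    Hp = np.pad(H.astype(float), 1, mode="edge")
    Gx = Hp[1:-1, 2:] - Hp[1:-1, :-2]   # horizontal difference
    Gy = Hp[2:, 1:-1] - Hp[:-2, 1:-1]   # vertical difference
    mag = np.sqrt(Gx ** 2 + Gy ** 2)
    ang = np.degrees(np.arctan2(Gy, Gx)) % 180.0   # unsigned 0..180 degrees
    return mag, ang

img = np.tile(np.arange(5.0), (5, 1))   # intensity rises left to right
mag, ang = pixel_gradients(img)         # interior: magnitude 2, direction 0 deg
```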
S152: Compile the histogram of gradients from the magnitudes and directions of the pixel gradients of each cell unit.
Here, the histogram of gradients is a histogram over the magnitudes and directions of the pixel gradients, used to characterize the gradient information of each cell unit. Specifically, the gradient direction range of each cell unit, from 0° to 180°, is first divided evenly into 9 direction bins: 0°–20° is the first bin, 20°–40° the second, and so on, with 160°–180° the ninth. For each pixel of the cell unit, the bin containing the direction of its gradient is determined, and the magnitude of that pixel's gradient is added to that bin. For example, if the gradient direction of a certain pixel of a cell unit falls in 40°–60°, the magnitude of that pixel's gradient is added to the value of the third bin of the histogram. The histogram of gradients of the cell unit is obtained in this way.
S153: Concatenate the histograms of gradients to obtain the HOG feature vector.
Here, concatenation means merging the histograms of gradients of all cell units in order, from left to right and from top to bottom, to obtain the HOG feature vector of the normalized eye rectangular region.
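Steps S151 to S153 can be sketched end to end as below; for brevity this illustrative version concatenates the per-cell histograms directly and omits the sub-region (block) grouping mentioned in S151:

```python
import numpy as np

def cell_histogram(mag, ang, bins=9):
    """9-bin orientation histogram of one cell: each pixel's gradient
    magnitude is added to the 20-degree-wide bin holding its direction."""
    hist = np.zeros(bins)
    idx = np.minimum((ang // 20).astype(int), bins - 1)   # 0..180 deg -> bins 0..8
    np.add.at(hist, idx.ravel(), mag.ravel())
    return hist

def hog_vector(mag, ang, cell=4):
    """Concatenate the per-cell histograms row by row into one HOG vector."""
    h, w = mag.shape
    hists = [cell_histogram(mag[r:r + cell, c:c + cell],
                            ang[r:r + cell, c:c + cell])
             for r in range(0, h, cell)
             for c in range(0, w, cell)]
    return np.concatenate(hists)

mag = np.ones((32, 48))                   # 48x32 region, unit magnitudes
ang = np.full((32, 48), 50.0)             # all directions in the 40-60 deg bin
vec = hog_vector(mag, ang)                # 8*12 cells x 9 bins = 864 values
```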
In the present embodiment, the normalized eye rectangular region is divided into several small regions, the histogram of gradients of each small region is calculated, and finally the histograms of gradients corresponding to the small regions are concatenated to obtain the histogram of gradients of the whole normalized eye rectangular region, which serves as the feature vector describing the facial image sample. Since the HOG feature vector captures image gradient features rather than color features, it is little affected by illumination changes, and extracting HOG feature vectors allows eye images to be recognized simply and efficiently.
In one embodiment, as shown in Fig. 6, step S50 — obtaining the preset true positive rate or preset false positive rate and obtaining the classification threshold from the vector distances and the labeled data corresponding to the verification sample data — specifically comprises the following steps:
S51: Draw the ROC curve according to the vector distances and the labeled data corresponding to the verification sample data.
Here, the ROC curve is the receiver operating characteristic curve, a comprehensive indicator reflecting the sensitivity and specificity of a continuous variable; it reveals the relationship between sensitivity and specificity graphically. In the present embodiment, the ROC curve shows the relationship between the true positive rate and the false positive rate of the support vector machine classifier; the closer the curve approaches the upper-left corner, the higher the accuracy of the classifier.
The samples in the verification sample data have been labeled as positive samples (positive) or negative samples (negative). When classifying the facial image data in the verification sample data, four situations can occur: if a facial image datum is a positive sample and is also predicted as a positive sample, it is a true positive (TP); if it is a negative sample but is predicted as a positive sample, it is a false positive (FP); correspondingly, if it is a negative sample and is predicted as a negative sample, it is a true negative (TN); and if a positive sample is predicted as a negative sample, it is a false negative (FN).
The true positive rate (TPR) describes the proportion of all positive examples that the classifier identifies as positive; its calculation formula is TPR = TP / (TP + FN). The false positive rate (FPR) describes the proportion of all negative examples that the classifier mistakes for positive samples; its calculation formula is FPR = FP / (FP + TN).
The ROC curve is drawn as follows: from the vector distances between the feature vectors of the verification sample data and the feature vector of the critical surface, together with the labels of the corresponding verification sample data, the true positive rates and false positive rates at the various operating points are obtained. The ROC curve takes the false positive rate as the horizontal axis and the true positive rate as the vertical axis; connecting the points — the (FPR, TPR) pairs of the verification samples — draws the curve. The area under the curve is then calculated; the larger the area, the higher the discriminative value.
In a specific embodiment, the curve can be drawn with an ROC drawing tool; specifically, the ROC curve is drawn by the plotSVMroc(true_labels, predict_labels, classnumber) function of matlab, where true_labels are the correct labels, predict_labels are the labels decided by the classifier, and classnumber is the number of classification categories; since the present embodiment is a two-class positive/negative sample problem, classnumber = 2. Specifically, after the vector distances between the feature vectors of the verification sample data and the feature vector of the critical surface are calculated, the true positive rate and false positive rate of the verification sample data can be obtained from the distribution of the vector distances — i.e. how close each verification sample lies to the critical surface — together with the labels of the corresponding verification sample data, and the ROC curve is then drawn from these true positive rates and false positive rates.
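A NumPy sketch of steps S51/S52 under illustrative assumptions (the patent itself uses matlab's plotSVMroc; here the ROC points are computed by sweeping a threshold over the signed distances, and the classification threshold is then read off at a preset false positive rate):

```python
import numpy as np

def roc_points(distances, labels):
    """(FPR, TPR) pairs obtained by sweeping a threshold over the signed
    distances of the verification samples to the critical surface."""
    order = np.argsort(-distances)           # most positive first
    lab = labels[order]
    tp = np.cumsum(lab == 1)
    fp = np.cumsum(lab == -1)
    tpr = tp / max((lab == 1).sum(), 1)
    fpr = fp / max((lab == -1).sum(), 1)
    return fpr, tpr, distances[order]

def threshold_at_fpr(distances, labels, preset_fpr):
    """Smallest threshold whose false positive rate stays within the preset."""
    fpr, _, thr = roc_points(distances, labels)
    ok = fpr <= preset_fpr
    return float(thr[ok][-1])

d = np.array([2.0, 1.0, 0.5, -0.5, -1.0, -2.0])   # signed distances
y = np.array([1, 1, -1, 1, -1, -1])               # verification labels
t = threshold_at_fpr(d, y, preset_fpr=0.0)        # strictest setting -> 1.0
```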
S52: Obtain the classification threshold from the ROC curve according to the preset true positive rate or the preset false positive rate.
Specifically, the preset true positive rate or preset false positive rate is configured according to actual needs. After the server side obtains the preset true positive rate or preset false positive rate, it compares the false positive rate represented on the horizontal axis and the true positive rate represented on the vertical axis of the ROC curve with the preset value; that is, the preset true positive rate or preset false positive rate serves as the standard for classifying the test sample data, and the classification threshold is determined from the ROC curve according to this standard. In subsequent model training, different classification thresholds can thus be chosen for different scenarios via the ROC curve, avoiding the need for retraining and improving the efficiency of model training.
In the present embodiment, the vector distances between the feature vectors of the verification sample data and the feature vector of the critical surface are first calculated, the true positive rates and false positive rates of the verification sample data are obtained from the labels of the corresponding verification sample data, and the ROC curve is then drawn from these rates. The classification threshold is obtained from the ROC curve via the preset true positive rate or preset false positive rate, so that in subsequent model training different classification thresholds can be chosen for different scenarios via the ROC curve, avoiding the need for retraining and improving the efficiency of model training.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and does not constitute any limitation on the implementation process of the embodiments of the present invention.
Fig. 7 shows a functional block diagram of the human-eye model training device corresponding one-to-one to the human-eye model training method in the embodiment. As shown in Fig. 7, the human-eye model training device includes a facial image sample data obtaining module 10, a facial image sample data dividing module 20, a critical surface obtaining module 30, a vector distance computing module 40, a classification threshold obtaining module 50 and a human eye judgment model obtaining module 60. The functions realized by these modules correspond one-to-one to the steps of the human-eye model training method in the embodiment. Each functional module is described in detail as follows:
The facial image sample data obtaining module 10 is configured to obtain facial image samples, label the facial image samples to obtain the facial image sample data, and extract the feature vectors of the facial image samples in the facial image sample data, where the facial image sample data includes the facial image samples and the labeled data;
the facial image sample data dividing module 20 is configured to divide the facial image sample data into training sample data and verification sample data;
the critical surface obtaining module 30 is configured to train the support vector machine classifier with the training sample data to obtain the critical surface of the support vector machine classifier;
the vector distance computing module 40 is configured to calculate the vector distances between the feature vectors of the verification samples in the verification sample data and the critical surface;
the classification threshold obtaining module 50 is configured to obtain the preset true positive rate or preset false positive rate, and obtain the classification threshold according to the vector distances and the labeled data corresponding to the verification sample data;
the human eye judgment model obtaining module 60 is configured to obtain the human eye judgment model according to the classification threshold.
Specifically, the facial image sample data obtaining module 10 includes a facial feature point acquiring unit 11, an orientation adjustment unit 12, an eye rectangular region construction unit 13, an eye rectangular region acquiring unit 14 and a feature vector extraction unit 15.
The facial feature point acquiring unit 11 is configured to obtain the facial feature points using the facial feature point detection algorithm, the facial feature points including the left eye corner point, the right eye corner point and the glabella point, where the left eye corner point, right eye corner point and glabella point are feature points belonging to the same eye region;
the orientation adjustment unit 12 is configured to perform orientation adjustment on the facial image sample according to the left eye corner point and the right eye corner point;
the eye rectangular region construction unit 13 is configured to construct the eye rectangular region according to the left eye corner point, the right eye corner point and the glabella point;
the eye rectangular region acquiring unit 14 is configured to perform image normalization on the eye rectangular region to obtain the normalized eye rectangular region;
the feature vector extraction unit 15 is configured to extract the HOG feature vector from the normalized eye rectangular region.
Specifically, the feature vector extraction unit 15 includes a pixel gradient obtaining subunit 151, a histogram of gradients obtaining subunit 152 and an HOG feature vector obtaining subunit 153.
The pixel gradient obtaining subunit 151 is configured to divide the normalized eye rectangular region into cell units and calculate the magnitude and direction of the gradient at each pixel of each cell unit;
the histogram of gradients obtaining subunit 152 is configured to compile the histogram of gradients from the magnitudes and directions of the pixel gradients of each cell unit;
the HOG feature vector obtaining subunit 153 is configured to concatenate the histograms of gradients to obtain the HOG feature vector.
Specifically, the critical surface obtaining module 30 includes a parameter acquiring unit 31 and a critical surface acquiring unit 32.
The parameter acquiring unit 31 is configured to obtain the kernel function and the penalty parameter of the support vector machine classifier, and solve for the Lagrange multipliers α* and the decision threshold b using the following optimization problem:

$$\min_{\alpha}\ \frac{1}{2}\sum_{i=1}^{l}\sum_{j=1}^{l}\alpha_i\alpha_j y_i y_j K(x_i,x_j)-\sum_{i=1}^{l}\alpha_i$$

$$\text{s.t.}\quad \sum_{i=1}^{l}y_i\alpha_i=0,\qquad 0\le\alpha_i\le C,\quad i=1,\dots,l$$

In the formula, s.t. abbreviates the constraint conditions, and min denotes minimizing the objective over α subject to those constraints; K(x_i, x_j) is the kernel function of the support vector machine classifier, C (with C > 0) is its penalty parameter, the α_i are the Lagrange multipliers and α* denotes the optimal solution, x_i is the feature vector of the i-th training sample, l is the number of training sample feature vectors, and y_i is the label of the i-th training sample.
The critical surface acquiring unit 32 is configured to obtain the critical surface g(x) of the support vector machine classifier from the Lagrange multipliers α* and the decision threshold b using the following formula:

$$g(x)=\sum_{i=1}^{l}\alpha_i^*y_iK(x_i,x)+b^*$$
Specifically, the classification threshold obtaining module 50 includes an ROC curve drawing unit 51 and a classification threshold acquiring unit 52.
The ROC curve drawing unit 51 is configured to draw the ROC curve according to the vector distances and the labeled data corresponding to the verification sample data;
the classification threshold acquiring unit 52 is configured to obtain the classification threshold from the ROC curve according to the preset true positive rate or preset false positive rate.
For the specific limitations of the human-eye model training device, reference may be made to the limitations of the human-eye model training method above, which are not repeated here. Each module in the above human-eye model training device can be realized in whole or in part by software, hardware and combinations thereof. The above modules can be embedded in or independent of the processor in the computer device in hardware form, or stored in the memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to the above modules.
In one embodiment, an eye recognition method is provided; this eye recognition method can also be applied in the application environment of Fig. 1, in which a client communicates with a server side through a network. The server side receives a face picture to be recognized sent by the client and performs eye recognition on it. The client can be, but is not limited to, a personal computer, laptop, smartphone, tablet computer or portable wearable device; the server side can be realized by an independent server or by a server cluster composed of multiple servers.
In one embodiment, as shown in Fig. 8, the method is described taking its application to the server side in Fig. 1 as an example, and includes the following steps:
S70: Obtain the face picture to be recognized, and obtain the orientation-adjusted eye region image using the facial feature point detection algorithm.
Here, the face picture to be recognized is a face picture on which eye recognition needs to be performed. Specifically, the facial image can be obtained by acquiring face pictures in advance, or directly from a face database such as an AR face database.
In the present embodiment, the face pictures to be recognized include unoccluded eye pictures and occluded eye pictures, and the orientation-adjusted eye region image is obtained using the facial feature point detection algorithm. The process of obtaining the orientation-adjusted eye region image with the facial feature point detection algorithm is the same as the method of steps S11 to S13 and is not repeated here.
S80: Normalize the orientation-adjusted eye region image to obtain the eye image to be recognized.
Here, the eye image to be recognized is the orientation-adjusted eye region image after normalization; normalizing the orientation-adjusted eye region image improves recognition efficiency. Specifically, because the normalized eye image to be recognized has been transformed into a unified canonical form, attributes with large numerical ranges in the support vector machine classifier are prevented from dominating those with small numerical ranges, and numerical complexity in the calculation process is also avoided. Optionally, the process of normalizing the orientation-adjusted eye region image is the same as step S14 and is not repeated here.
S90: Input the eye image to be recognized into the human eye judgment model trained by the human-eye model training method of steps S10 to S60 for recognition, and obtain the recognition result.
Here, the recognition result is the result obtained by recognizing the eye image to be recognized with the human eye judgment model, and covers two cases: the eye image to be recognized is an unoccluded eye image, or the eye image to be recognized is an occluded eye image. Specifically, the eye image to be recognized is input into the human eye judgment model for recognition, yielding the recognition result.
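The decision in S90 can be sketched as comparing the critical-surface value of the feature vector against the classification threshold; the assumption that the positive class corresponds to the unoccluded eye, and all names below, are illustrative:

```python
import numpy as np

def recognize(feature, weights, b, threshold):
    """Classify one eye-image feature vector: the signed value of the
    (linear-kernel) critical surface is compared against the chosen
    classification threshold."""
    g = float(feature @ weights + b)
    return "unoccluded eye" if g >= threshold else "occluded eye"

w = np.array([1.0, -0.5])                       # toy critical-surface weights
result = recognize(np.array([2.0, 1.0]), w, b=0.0, threshold=0.2)
```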
In the present embodiment, the face picture to be recognized is first obtained, the orientation-adjusted eye region image is normalized to obtain the eye image to be recognized, and the normalized image is input into the human eye judgment model for recognition to obtain the recognition result. Whether the eyes in the face picture are occluded or unoccluded is thus recognized quickly, improving recognition efficiency and avoiding interference with subsequent image processing.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and does not constitute any limitation on the implementation process of the embodiments of the present invention.
Fig. 9 shows a functional block diagram of the eye recognition device corresponding one-to-one to the eye recognition method in the embodiment. As shown in Fig. 9, the eye recognition device includes a face-picture-to-be-recognized obtaining module 70, an eye-image-to-be-recognized obtaining module 80 and a recognition result obtaining module 90. The functions realized by these modules correspond one-to-one to the steps of the eye recognition method in the embodiment. Each functional module is described in detail as follows:
The face-picture-to-be-recognized obtaining module 70 is configured to obtain the face picture to be recognized and obtain the orientation-adjusted eye region image using the facial feature point detection algorithm;
the eye-image-to-be-recognized obtaining module 80 is configured to normalize the orientation-adjusted eye region image to obtain the eye image to be recognized;
the recognition result obtaining module 90 is configured to input the eye image to be recognized into the human eye judgment model trained by the human-eye model training method for recognition, and obtain the recognition result.
For the specific limitations of the eye recognition device, reference may be made to the limitations of the eye recognition method above, which are not repeated here. Each module in the above eye recognition device can be realized in whole or in part by software, hardware and combinations thereof. The above modules can be embedded in or independent of the processor in the computer device in hardware form, or stored in the memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided; the computer device can be a server, and its internal structure can be as shown in Fig. 10. The computer device includes a processor, a memory, a network interface and a database connected through a system bus. The processor of the computer device provides computing and control capability. The memory of the computer device includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system, a computer program and a database, and the internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store the feature vectors of the facial image sample data and the human-eye model training data in the human-eye model training method. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, realizes a human-eye model training method; alternatively, when executed by the processor, the computer program realizes the functions of each module/unit of the eye recognition device in the embodiment.
In one embodiment, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor. When executing the computer program, the processor implements the steps of the human-eye model training method of the above embodiment, such as steps S10 to S60 shown in Fig. 2, or implements the steps of the eye recognition method of the above embodiment, such as steps S70 to S90 shown in Fig. 7. Alternatively, when executing the computer program, the processor implements the functions of the modules/units of the human-eye model training device of the above embodiment, such as modules 10 to 60 shown in Fig. 7, or implements the functions of the modules/units of the eye recognition device of the above embodiment, such as modules 70 to 90 shown in Fig. 9. To avoid repetition, details are not described here again.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When executed by a processor, the computer program implements the steps of the human-eye model training method of the above embodiment, or implements the steps of the eye recognition method of the above embodiment. Alternatively, when executed by a processor, the computer program implements the functions of the modules/units of the human-eye model training device of the above embodiment, or the functions of the modules/units of the eye recognition device of the above embodiment. To avoid repetition, details are not described here again.
A person of ordinary skill in the art can understand that all or part of the processes of the methods of the above embodiments may be completed by instructing related hardware through a computer program. The computer program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided by the present invention may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It is apparent to those skilled in the art that, for convenience and brevity of description, the division into the above functional units and modules is used only as an example. In practical applications, the above functions may be allocated to different functional units or modules as needed; that is, the internal structure of the device may be divided into different functional units or modules to complete all or part of the functions described above.
The above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention is described in detail with reference to the foregoing embodiments, a person skilled in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all fall within the protection scope of the present invention.
Claims (10)
1. A human-eye model training method, comprising:
obtaining facial image samples, labeling the facial image samples to obtain facial image sample data, and extracting feature vectors of the facial image samples in the facial image sample data, wherein the facial image sample data includes the facial image samples and label data;
dividing the facial image sample data into training sample data and verification sample data;
training a support vector machine classifier using the training sample data to obtain a critical surface of the support vector machine classifier;
calculating vector distances between the feature vectors of the verification samples in the verification sample data and the critical surface;
obtaining a preset true positive rate or a preset false positive rate, and obtaining a classification threshold according to the vector distances and the label data corresponding to the verification sample data; and
obtaining a human eye judgment model according to the classification threshold.
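The flow of claim 1 can be sketched end to end. The following is a minimal, illustrative numpy sketch only: the support vector machine step is replaced by a simple mean-difference hyperplane that stands in for the trained critical surface, and the function names (`fit_critical_surface`, `vector_distance`, `pick_threshold`) are invented for illustration, not taken from the patent.

```python
import numpy as np

def fit_critical_surface(X, y):
    """Stand-in for SVM training: hyperplane between the two class means."""
    mu_pos = X[y == 1].mean(axis=0)
    mu_neg = X[y == -1].mean(axis=0)
    w = mu_pos - mu_neg
    b = -w @ (mu_pos + mu_neg) / 2.0
    return w, b

def vector_distance(w, b, X):
    """Signed distance of each feature vector to the critical surface."""
    return (X @ w + b) / np.linalg.norm(w)

def pick_threshold(dist, y, preset_tpr=0.9):
    """Smallest threshold whose true positive rate reaches preset_tpr."""
    pos = np.sort(dist[y == 1])[::-1]  # positive-class distances, descending
    k = int(np.ceil(preset_tpr * len(pos))) - 1
    return pos[k]

# Toy "feature vectors" for two well-separated classes.
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(2, 0.5, (20, 2)), rng.normal(-2, 0.5, (20, 2))])
y_train = np.array([1] * 20 + [-1] * 20)
X_ver = np.vstack([rng.normal(2, 0.5, (10, 2)), rng.normal(-2, 0.5, (10, 2))])
y_ver = np.array([1] * 10 + [-1] * 10)

w, b = fit_critical_surface(X_train, y_train)      # "critical surface"
dist = vector_distance(w, b, X_ver)                # vector distances
thr = pick_threshold(dist, y_ver, preset_tpr=0.9)  # classification threshold
pred = np.where(dist >= thr, 1, -1)                # human eye judgment model
```

A real implementation would obtain `w` and `b` (or a kernelized decision function) from an actual SVM solver, as claim 3 describes.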
2. The human-eye model training method according to claim 1, wherein extracting the feature vectors of the facial image samples in the facial image sample data specifically comprises:
obtaining facial feature points using a facial feature point detection algorithm, the facial feature points including a left eye corner point, a right eye corner point, and a glabella (between-the-eyebrows) point, wherein the left eye corner point, the right eye corner point, and the glabella point are feature points belonging to the same eye region;
performing positive adjustment on the facial image sample according to the left eye corner point and the right eye corner point;
constructing an eye rectangular region according to the left eye corner point, the right eye corner point, and the glabella point;
performing image normalization on the eye rectangular region to obtain a normalized eye rectangular region; and
extracting a HOG feature vector from the normalized eye rectangular region.
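The positive-adjustment and rectangle-construction steps above reduce to plane geometry. Below is a hedged numpy sketch with made-up landmark coordinates: it computes the in-plane rotation that brings the two eye corner points onto a horizontal line, then builds an axis-aligned bounding rectangle from the rotated corner and glabella points. The function names are illustrative, not the patent's.

```python
import numpy as np

def rotation_to_horizontal(left_corner, right_corner):
    """Rotation matrix that undoes the tilt of the eye-corner line."""
    dx, dy = np.subtract(right_corner, left_corner)
    angle = np.arctan2(dy, dx)            # angle of the eye line
    c, s = np.cos(-angle), np.sin(-angle)
    return np.array([[c, -s], [s, c]])    # rotate by -angle

def eye_rectangle(left_corner, right_corner, glabella):
    """Axis-aligned eye rectangle in the positively adjusted frame."""
    R = rotation_to_horizontal(left_corner, right_corner)
    pts = np.array([left_corner, right_corner, glabella]) @ R.T
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    return (x0, y0, x1, y1)

# Hypothetical landmark coordinates (pixels) for demonstration only.
box = eye_rectangle((10.0, 22.0), (40.0, 28.0), (50.0, 18.0))
```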
3. The human-eye model training method according to claim 1, wherein training the support vector machine classifier using the training sample data to obtain the critical surface of the support vector machine classifier specifically comprises:
obtaining a kernel function of the support vector machine classifier and a penalty parameter of the support vector machine classifier, and solving for the Lagrange multipliers α* and the decision threshold b using the following equations:
min_α (1/2) Σ_{i=1..l} Σ_{j=1..l} α_i α_j y_i y_j K(x_i, x_j) − Σ_{i=1..l} α_i
s.t. Σ_{i=1..l} α_i y_i = 0, 0 ≤ α_i ≤ C, i = 1, …, l
where s.t. abbreviates the constraint conditions, min denotes taking the minimum of the expression under the constraints, K(x_i, x_j) is the kernel function of the support vector machine classifier, C is the penalty parameter of the support vector machine classifier with C > 0, α_i is the i-th component of the Lagrange multipliers α*, x_i is a feature vector of the training sample data, l is the number of feature vectors of the training sample data, and y_i is the label of the training sample data; and
obtaining the critical surface g(x) of the support vector machine classifier according to the Lagrange multipliers α* and the decision threshold b using the following formula:
g(x) = Σ_{i=1..l} α_i* y_i K(x_i, x) + b
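The SVM decision function g(x) = Σ_i α_i y_i K(x_i, x) + b is rarely evaluated by hand, but for a symmetric two-sample training set x₁ = (1, 0) with y₁ = +1 and x₂ = (−1, 0) with y₂ = −1 under a linear kernel, the dual solution can be worked out on paper as α = (0.5, 0.5) and b = 0. A small Python check of that worked instance:

```python
import numpy as np

X = np.array([[1.0, 0.0], [-1.0, 0.0]])  # training feature vectors x_i
y = np.array([1.0, -1.0])                # labels y_i
alpha = np.array([0.5, 0.5])             # hand-solved Lagrange multipliers
b = 0.0                                  # decision threshold from the KKT conditions

def K(u, v):
    return u @ v                         # linear kernel K(x_i, x_j) = x_i . x_j

def g(x):
    """Critical surface value for a new feature vector x."""
    return sum(alpha[i] * y[i] * K(X[i], x) for i in range(len(X))) + b

value = g(np.array([2.0, 3.0]))          # 0.5*1*2 + 0.5*(-1)*(-2) + 0 = 2.0
```

Both training points sit on the margin (g = ±1), and the sign of g classifies new feature vectors.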
4. The human-eye model training method according to claim 2, wherein extracting the HOG feature vector from the normalized eye rectangular region specifically comprises:
dividing the normalized eye rectangular region into cell units, and calculating the magnitude and direction of the gradient at each pixel of each cell unit;
counting a gradient histogram of the magnitudes and directions of the pixel gradients of each cell unit; and
concatenating the gradient histograms to obtain the HOG feature vector.
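The three steps above can be sketched in a few lines of numpy. This is a minimal HOG-style illustration, not a full HOG descriptor (no block normalization); the cell size and bin count are assumptions, not values from the patent.

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Per-cell orientation histograms, concatenated into one vector."""
    gy, gx = np.gradient(img.astype(float))          # pixel gradients
    mag = np.hypot(gx, gy)                           # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0     # unsigned direction
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):           # divide into cell units
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            # magnitude-weighted histogram of gradient directions
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)                     # HOG feature vector

eye = np.tile(np.arange(32, dtype=float), (32, 1))   # toy 32x32 "eye region"
vec = hog_features(eye)                              # 4*4 cells * 9 bins = 144 values
```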
5. The human-eye model training method according to claim 1, wherein obtaining the preset true positive rate or the preset false positive rate and obtaining the classification threshold according to the vector distances and the label data corresponding to the verification sample data specifically comprises:
drawing an ROC curve according to the vector distances and the label data corresponding to the verification sample data; and
obtaining the classification threshold on the ROC curve according to the preset true positive rate or the preset false positive rate.
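The ROC-based threshold selection above can be sketched directly: sweep candidate thresholds over the verification-set vector distances, trace (false positive rate, true positive rate) points, and keep the first threshold whose TPR reaches the preset value. A minimal numpy sketch with made-up distances and labels:

```python
import numpy as np

def roc_points(dist, labels):
    """(fpr, tpr, threshold) triples, one per candidate threshold."""
    thresholds = np.sort(np.unique(dist))[::-1]
    pos, neg = (labels == 1).sum(), (labels == -1).sum()
    pts = []
    for t in thresholds:
        pred = dist >= t
        tpr = (pred & (labels == 1)).sum() / pos
        fpr = (pred & (labels == -1)).sum() / neg
        pts.append((fpr, tpr, t))
    return pts

def threshold_for_tpr(dist, labels, preset_tpr):
    """First threshold reaching the preset true positive rate."""
    for fpr, tpr, t in roc_points(dist, labels):  # thresholds descend, TPR rises
        if tpr >= preset_tpr:
            return t
    return None

# Hypothetical verification-set signed distances and labels.
dist = np.array([2.1, 1.8, 1.2, 0.4, -0.3, -0.9, -1.5, -2.2])
labels = np.array([1, 1, 1, 1, -1, 1, -1, -1])
thr = threshold_for_tpr(dist, labels, preset_tpr=0.8)
```

Selecting by a preset FPR instead would walk the same points but test `fpr <= preset_fpr`.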
6. An eye recognition method, comprising:
obtaining a to-be-identified face picture, and obtaining a positively adjusted eye region image using a facial feature point detection algorithm;
normalizing the positively adjusted eye region image to obtain a to-be-identified eye image; and
inputting the to-be-identified eye image into a human eye judgment model trained by the human-eye model training method according to any one of claims 1-5 for recognition, and obtaining a recognition result.
7. A human-eye model training device, comprising:
a facial image sample data acquisition module, configured to obtain facial image samples, label the facial image samples to obtain facial image sample data, and extract feature vectors of the facial image samples in the facial image sample data, wherein the facial image sample data includes the facial image samples and label data;
a facial image sample data division module, configured to divide the facial image sample data into training sample data and verification sample data;
a critical surface acquisition module, configured to train a support vector machine classifier using the training sample data and obtain a critical surface of the support vector machine classifier;
a vector distance calculation module, configured to calculate vector distances between the feature vectors of the verification samples in the verification sample data and the critical surface;
a classification threshold acquisition module, configured to obtain a preset true positive rate or a preset false positive rate, and obtain a classification threshold according to the vector distances and the label data corresponding to the verification sample data; and
a human eye judgment model acquisition module, configured to obtain a human eye judgment model according to the classification threshold.
8. An eye recognition device, comprising:
a to-be-identified face picture acquisition module, configured to obtain a to-be-identified face picture and obtain a positively adjusted eye region image using a facial feature point detection algorithm;
a to-be-identified eye image acquisition module, configured to normalize the positively adjusted eye region image to obtain a to-be-identified eye image; and
a recognition result acquisition module, configured to input the to-be-identified eye image into a human eye judgment model trained by the human-eye model training method according to any one of claims 1-5 for recognition, and obtain a recognition result.
9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the human-eye model training method according to any one of claims 1 to 5; or the processor, when executing the computer program, implements the steps of the eye recognition method according to claim 6.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the human-eye model training method according to any one of claims 1 to 5; or the computer program, when executed by a processor, implements the steps of the eye recognition method according to claim 6.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810585092.2A CN108985159A (en) | 2018-06-08 | 2018-06-08 | Human-eye model training method, eye recognition method, apparatus, equipment and medium |
PCT/CN2018/094341 WO2019232866A1 (en) | 2018-06-08 | 2018-07-03 | Human eye model training method, human eye recognition method, apparatus, device and medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810585092.2A CN108985159A (en) | 2018-06-08 | 2018-06-08 | Human-eye model training method, eye recognition method, apparatus, equipment and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108985159A true CN108985159A (en) | 2018-12-11 |
Family
ID=64541049
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810585092.2A Withdrawn CN108985159A (en) | 2018-06-08 | 2018-06-08 | Human-eye model training method, eye recognition method, apparatus, equipment and medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108985159A (en) |
WO (1) | WO2019232866A1 (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109858024A (en) * | 2019-01-04 | 2019-06-07 | 中山大学 | A kind of source of houses term vector training method and device based on word2vec |
CN109919029A (en) * | 2019-01-31 | 2019-06-21 | 深圳和而泰数据资源与云技术有限公司 | Black eye kind identification method, device, computer equipment and storage medium |
CN110211094A (en) * | 2019-05-06 | 2019-09-06 | 平安科技(深圳)有限公司 | Black eye intelligent determination method, device and computer readable storage medium |
CN110222724A (en) * | 2019-05-15 | 2019-09-10 | 平安科技(深圳)有限公司 | A kind of picture example detection method, apparatus, computer equipment and storage medium |
CN110222571A (en) * | 2019-05-06 | 2019-09-10 | 平安科技(深圳)有限公司 | Black eye intelligent determination method, device and computer readable storage medium |
CN110276333A (en) * | 2019-06-28 | 2019-09-24 | 上海鹰瞳医疗科技有限公司 | Eyeground identification model training method, eyeground personal identification method and equipment |
CN110414588A (en) * | 2019-07-23 | 2019-11-05 | 广东小天才科技有限公司 | Picture mask method, device, computer equipment and storage medium |
CN110569826A (en) * | 2019-09-18 | 2019-12-13 | 深圳市捷顺科技实业股份有限公司 | Face recognition method, device, equipment and medium |
CN111401440A (en) * | 2020-03-13 | 2020-07-10 | 重庆第二师范学院 | Target classification recognition method and device, computer equipment and storage medium |
CN111429409A (en) * | 2020-03-13 | 2020-07-17 | 深圳市雄帝科技股份有限公司 | Method and system for identifying glasses worn by person in image and storage medium thereof |
CN111626371A (en) * | 2020-05-29 | 2020-09-04 | 歌尔科技有限公司 | Image classification method, device and equipment and readable storage medium |
CN111931617A (en) * | 2020-07-29 | 2020-11-13 | 中国工商银行股份有限公司 | Human eye image recognition method and device based on image processing and self-service terminal |
CN112883774A (en) * | 2020-12-31 | 2021-06-01 | 厦门易仕特仪器有限公司 | Pedestrian re-identification data enhancement method, device and equipment and readable storage medium |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110991641B (en) * | 2019-12-17 | 2024-03-05 | 合肥鼎盛锦业科技有限公司 | Oil reservoir type analysis method and device and electronic equipment |
CN111126347B (en) * | 2020-01-06 | 2024-02-20 | 腾讯科技(深圳)有限公司 | Human eye state identification method, device, terminal and readable storage medium |
CN111259743B (en) * | 2020-01-09 | 2023-11-24 | 中山大学中山眼科中心 | Training method and system for myopia image deep learning recognition model |
CN111444860A (en) * | 2020-03-30 | 2020-07-24 | 东华大学 | Expression recognition method and system |
CN111582068B (en) * | 2020-04-22 | 2023-07-07 | 北京交通大学 | Method for detecting wearing state of mask for personnel |
CN111583093B (en) * | 2020-04-27 | 2023-12-22 | 西安交通大学 | Hardware implementation method for ORB feature point extraction with good real-time performance |
CN111611910B (en) * | 2020-05-19 | 2023-04-28 | 黄河水利委员会黄河水利科学研究院 | Yellow river ice dam image feature recognition method |
CN111783598B (en) * | 2020-06-24 | 2023-08-08 | 北京百度网讯科技有限公司 | Face recognition model training method, device, equipment and medium |
CN114005151B (en) * | 2020-07-28 | 2024-05-03 | 北京君正集成电路股份有限公司 | Face angle sample collection and labeling method |
CN111967436B (en) * | 2020-09-02 | 2024-03-19 | 北京猿力未来科技有限公司 | Image processing method and device |
CN112116525B (en) * | 2020-09-24 | 2023-08-04 | 百度在线网络技术(北京)有限公司 | Face recognition method, device, equipment and computer readable storage medium |
CN112733795B (en) * | 2021-01-22 | 2022-10-11 | 腾讯科技(深圳)有限公司 | Method, device and equipment for correcting sight of face image and storage medium |
CN114609602B (en) * | 2022-03-09 | 2023-04-07 | 电子科技大学 | Feature extraction-based target detection method under sea clutter background |
CN116311553B (en) * | 2023-05-17 | 2023-08-15 | 武汉利楚商务服务有限公司 | Human face living body detection method and device applied to semi-occlusion image |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107292225B (en) * | 2016-08-18 | 2020-11-20 | 北京师范大学珠海分校 | Face recognition method |
CN107633204B (en) * | 2017-08-17 | 2019-01-29 | 平安科技(深圳)有限公司 | Face occlusion detection method, apparatus and storage medium |
CN107590506B (en) * | 2017-08-17 | 2018-06-15 | 北京航空航天大学 | A kind of complex device method for diagnosing faults of feature based processing |
-
2018
- 2018-06-08 CN CN201810585092.2A patent/CN108985159A/en not_active Withdrawn
- 2018-07-03 WO PCT/CN2018/094341 patent/WO2019232866A1/en active Application Filing
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109858024A (en) * | 2019-01-04 | 2019-06-07 | 中山大学 | A kind of source of houses term vector training method and device based on word2vec |
CN109858024B (en) * | 2019-01-04 | 2023-04-11 | 中山大学 | Word2 vec-based room source word vector training method and device |
CN109919029A (en) * | 2019-01-31 | 2019-06-21 | 深圳和而泰数据资源与云技术有限公司 | Black eye kind identification method, device, computer equipment and storage medium |
CN110211094B (en) * | 2019-05-06 | 2023-05-26 | 平安科技(深圳)有限公司 | Intelligent judging method and device for black eye and computer readable storage medium |
CN110211094A (en) * | 2019-05-06 | 2019-09-06 | 平安科技(深圳)有限公司 | Black eye intelligent determination method, device and computer readable storage medium |
CN110222571A (en) * | 2019-05-06 | 2019-09-10 | 平安科技(深圳)有限公司 | Black eye intelligent determination method, device and computer readable storage medium |
CN110222571B (en) * | 2019-05-06 | 2023-04-07 | 平安科技(深圳)有限公司 | Intelligent judgment method and device for black eye and computer readable storage medium |
CN110222724A (en) * | 2019-05-15 | 2019-09-10 | 平安科技(深圳)有限公司 | A kind of picture example detection method, apparatus, computer equipment and storage medium |
CN110222724B (en) * | 2019-05-15 | 2023-12-19 | 平安科技(深圳)有限公司 | Picture instance detection method and device, computer equipment and storage medium |
CN110276333A (en) * | 2019-06-28 | 2019-09-24 | 上海鹰瞳医疗科技有限公司 | Eyeground identification model training method, eyeground personal identification method and equipment |
CN110276333B (en) * | 2019-06-28 | 2021-10-15 | 上海鹰瞳医疗科技有限公司 | Eye ground identity recognition model training method, eye ground identity recognition method and equipment |
CN110414588A (en) * | 2019-07-23 | 2019-11-05 | 广东小天才科技有限公司 | Picture mask method, device, computer equipment and storage medium |
CN110569826A (en) * | 2019-09-18 | 2019-12-13 | 深圳市捷顺科技实业股份有限公司 | Face recognition method, device, equipment and medium |
CN110569826B (en) * | 2019-09-18 | 2022-05-24 | 深圳市捷顺科技实业股份有限公司 | Face recognition method, device, equipment and medium |
CN111401440B (en) * | 2020-03-13 | 2023-03-31 | 重庆第二师范学院 | Target classification recognition method and device, computer equipment and storage medium |
CN111429409A (en) * | 2020-03-13 | 2020-07-17 | 深圳市雄帝科技股份有限公司 | Method and system for identifying glasses worn by person in image and storage medium thereof |
CN111401440A (en) * | 2020-03-13 | 2020-07-10 | 重庆第二师范学院 | Target classification recognition method and device, computer equipment and storage medium |
CN111626371A (en) * | 2020-05-29 | 2020-09-04 | 歌尔科技有限公司 | Image classification method, device and equipment and readable storage medium |
CN111626371B (en) * | 2020-05-29 | 2023-10-31 | 歌尔科技有限公司 | Image classification method, device, equipment and readable storage medium |
CN111931617A (en) * | 2020-07-29 | 2020-11-13 | 中国工商银行股份有限公司 | Human eye image recognition method and device based on image processing and self-service terminal |
CN111931617B (en) * | 2020-07-29 | 2023-11-21 | 中国工商银行股份有限公司 | Human eye image recognition method and device based on image processing and self-service terminal |
CN112883774A (en) * | 2020-12-31 | 2021-06-01 | 厦门易仕特仪器有限公司 | Pedestrian re-identification data enhancement method, device and equipment and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2019232866A1 (en) | 2019-12-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108985159A (en) | Human-eye model training method, eye recognition method, apparatus, equipment and medium | |
CN108985155A (en) | Mouth model training method, mouth recognition methods, device, equipment and medium | |
US11775056B2 (en) | System and method using machine learning for iris tracking, measurement, and simulation | |
CN106897658B (en) | Method and device for identifying human face living body | |
CN109697416B (en) | Video data processing method and related device | |
WO2019096029A1 (en) | Living body identification method, storage medium and computer device | |
US20200134868A1 (en) | Gaze point determination method and apparatus, electronic device, and computer storage medium | |
US9031317B2 (en) | Method and apparatus for improved training of object detecting system | |
CN105205480B (en) | Human-eye positioning method and system in a kind of complex scene | |
CN108229330A (en) | Face fusion recognition methods and device, electronic equipment and storage medium | |
CN106778450B (en) | Face recognition method and device | |
CN108229297A (en) | Face identification method and device, electronic equipment, computer storage media | |
CN109670441A (en) | A kind of realization safety cap wearing knows method for distinguishing, system, terminal and computer readable storage medium | |
CN112215180B (en) | Living body detection method and device | |
CN109711297A (en) | Risk Identification Method, device, computer equipment and storage medium based on facial picture | |
TW201137768A (en) | Face recognition apparatus and methods | |
CN110390229B (en) | Face picture screening method and device, electronic equipment and storage medium | |
CN110889355A (en) | Face recognition verification method, system and storage medium | |
CN109697719A (en) | A kind of image quality measure method, apparatus and computer readable storage medium | |
CN110032970A (en) | Biopsy method, device, computer equipment and the storage medium of high-accuracy | |
CN110826372A (en) | Method and device for detecting human face characteristic points | |
Hernandez-Ortega et al. | FaceQvec: Vector quality assessment for face biometrics based on ISO compliance | |
CN108416304B (en) | Three-classification face detection method using context information | |
KR101782575B1 (en) | Image Processing Method and System For Extracting Distorted Circular Image Elements | |
EP3176726A1 (en) | Method and device for positioning human eyes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
Application publication date: 20181211 |