CN106407911A - Image-based eyeglass recognition method and device - Google Patents
- Publication number
- CN106407911A (application number CN201610795999.2A)
- Authority
- CN
- China
- Prior art keywords
- face
- image
- classification
- probability value
- glasses
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
Abstract
The present invention discloses an image-based eyeglass recognition method and device. The method includes the following steps: a priori model in a preset deep convolutional neural network is used to perform classification discrimination on a face image to be detected, obtaining the facial category of the face image and the probability value corresponding to that category; if the probability value of the face category is larger than a preset limit value, a face priori model in the deep convolutional neural network is used to perform classification discrimination on the eye region image in the face image, obtaining the probability value corresponding to each category; if the category with the largest probability value is the glasses-wearing category, it is determined that the face in the face image wears glasses; and if the category with the largest probability value is the non-glasses-wearing category, it is determined that the face does not wear glasses. With this method and device, the eye region image in the face image can be recognized even under complex imaging conditions, whether the face wears glasses can be recognized accurately, and the accuracy of glasses-wearing recognition is increased.
Description
Technical field
The invention belongs to the technical field of image recognition, and more particularly relates to an image-based eyeglass recognition method and device.
Background technology
Face recognition is a biometric identification technology that identifies a person based on facial feature information. A video camera or camera collects an image or video stream containing a face, and the face is automatically detected and tracked in the image; this is also commonly called portrait recognition or facial recognition. Face recognition products are widely used in finance, justice, the military, public security, border inspection, government, aerospace, electric power, factories, education, medical care, and many other enterprises and institutions — for example, face recognition access control and attendance systems, face recognition anti-theft doors, and computer login, e-government, and e-commerce applications concerned with information security. As the technology matures further and social acceptance grows, face recognition is being applied in ever more fields. In particular, recognizing whether a face wears glasses can accurately assist functions such as face verification and face search.
In the prior art, eyeglass recognition in an image extracts features from the whole image and then uses a classifier to recognize whether the face in the image wears glasses. Owing to the imaging device, an image does not necessarily reflect a person's complete face clearly: many images are blurred, over-exposed, or dark, or the subject bows the head or turns the face so that the whole face is not visible. Under such complex imaging conditions, simply extracting features from the whole image, as in the prior art, cannot accurately discriminate whether glasses are worn, which increases the error rate of the recognition result.
Summary of the invention
The embodiments of the present invention provide an image-based eyeglass recognition method and device, intended to solve the problem that changes in external imaging factors make it impossible to accurately discriminate whether glasses are worn.
An image-based eyeglass recognition method provided by an embodiment of the present invention includes: performing classification discrimination on a face image to be detected through a priori model in a preset deep convolutional neural network, obtaining the facial category of the face image and the probability value corresponding to that facial category, where the facial categories include a face category and a non-face category; if the probability value corresponding to the face category is greater than a preset limit value, performing classification discrimination on the eye region image in the face image through a face priori model in the deep convolutional neural network, obtaining the probability value corresponding to each category; if the category with the largest probability value is the glasses-wearing category, determining that the face in the face image wears glasses; and if the category with the largest probability value is the non-glasses-wearing category, determining that the face in the face image does not wear glasses.
An image-based eyeglass recognition device provided by an embodiment of the present invention includes: a discrimination processing module, configured to perform classification discrimination on the face image to be detected through the priori model in the preset deep convolutional neural network, obtaining the facial category of the face image and the probability value corresponding to that facial category; the discrimination processing module is further configured to, if the probability value corresponding to the face category is greater than the preset limit value, perform classification discrimination on the eye region image in the face image through the face priori model in the deep convolutional neural network, obtaining the probability value corresponding to each category; and a determination module, configured to determine that the face in the face image wears glasses if the category with the largest probability value is the glasses-wearing category, and further configured to determine that the face in the face image does not wear glasses if the category with the largest probability value is the non-glasses-wearing category.
With the image-based eyeglass recognition method and device provided by the embodiments of the present invention, classification discrimination is performed on the face image to be detected through the priori model in the preset deep convolutional neural network, obtaining the facial category of the face image and the probability value corresponding to that category, where the facial categories include a face category and a non-face category; if the probability value corresponding to the face category is greater than the preset limit value, classification discrimination is performed on the eye region image in the face image through the face priori model in the deep convolutional neural network, obtaining the probability value corresponding to each category; if the category with the largest probability value is the glasses-wearing category, it is determined that the face in the face image wears glasses; and if the category with the largest probability value is the non-glasses-wearing category, it is determined that the face does not wear glasses. In this way, under complex imaging conditions, the preset deep convolutional neural network first discriminates whether the face image is the image of a face, and only when it is discriminated to be a face is the eye region image in the face image recognized. Whether glasses are worn can thus be identified accurately, the accuracy of glasses-wearing recognition is increased, and functions such as face verification and face search are accurately assisted.
Brief description of the drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention or in the prior art, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention.
Fig. 1 is a schematic flowchart of the image-based eyeglass recognition method provided by the first embodiment of the present invention;
Fig. 2 is a schematic flowchart of the image-based eyeglass recognition method provided by the second embodiment of the present invention;
Fig. 3 is a schematic diagram of face detection in an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of the image-based eyeglass recognition device provided by the third embodiment of the present invention;
Fig. 5 is a schematic structural diagram of the image-based eyeglass recognition device provided by the fourth embodiment of the present invention;
Fig. 6 is a schematic diagram of the hardware structure of an electronic device that executes the image-based eyeglass recognition method, provided by the fifth embodiment of the present invention.
Specific embodiment
To make the object, features, and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative work fall within the protection scope of the present invention.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of the image-based eyeglass recognition method provided by the first embodiment of the present invention, which can be applied to terminals that recognize facial images, such as face attendance devices, face anti-theft systems, and computers. The image-based eyeglass recognition method shown in Fig. 1 mainly includes the following steps:
S101. Perform classification discrimination on the face image to be detected through the priori model in the preset deep convolutional neural network, obtaining the facial category of the face image and the probability value corresponding to that facial category.
S102. If the probability value corresponding to the face category is greater than the preset limit value, perform classification discrimination on the eye region image in the face image through the face priori model in the deep convolutional neural network, obtaining the probability value corresponding to each category.
A deep convolutional neural network is a convolutional neural network (CNN, Convolutional Neural Network) that includes at least two nonlinear trainable convolutional layers, two nonlinear fixed convolutional layers, and a fully connected layer — at least five hidden layers in total — and is mainly used in speech analysis and image recognition.
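The convolutional layers mentioned above slide small filters over the input and sum element-wise products. As a hedged illustration only — a pure-Python sketch of one "valid" convolution, not the patented network itself:

```python
def conv2d_valid(image, kernel):
    """Minimal 2-D 'valid' convolution (strictly cross-correlation, as in
    most CNN frameworks): slide the kernel over the image and sum the
    element-wise products at each position."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            s = sum(image[y + i][x + j] * kernel[i][j]
                    for i in range(kh) for j in range(kw))
            row.append(s)
        out.append(row)
    return out
```

A real deep CNN stacks several such layers with nonlinearities, then a fully connected layer on top, as the text describes.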
The priori model can be obtained by training image samples with the preset deep convolutional neural network, where the training image samples are classified according to the preset facial categories in the course of obtaining the priori model; that is, in this embodiment, the facial categories include a face category and a non-face category. The priori model can be used as a model for performing classification discrimination on images.
Through classification discrimination, the facial category of the face image and the probability value corresponding to that category can be obtained, namely the probability value that the face image belongs to the face category and the probability value that the face image belongs to the non-face category.
The preset limit value is a value greater than or equal to 60%, and the preferred preset limit value in this embodiment is 60%. When the probability value that the face image belongs to the face category is greater than 60%, classification discrimination is performed on the eye region image in the face image through the face priori model in the deep convolutional neural network, obtaining the probability value corresponding to each category; if the probability value of the face category for the face image is less than the preset limit value, the process ends.
The face priori model can be obtained by training image samples of faces with the deep convolutional neural network, where the face image samples are classified according to the preset categories in the course of obtaining the face priori model; the face priori model can be used as a model for performing classification discrimination on images. In this embodiment there are two preset categories: a glasses-wearing category and a non-glasses-wearing category. After classification discrimination through the face priori model, the probability value that the eyes in the eye region image wear glasses and the probability value that they do not can be obtained, namely the probability value that the eye region image in the face image belongs to the glasses-wearing category and the probability value that it belongs to the non-glasses-wearing category.
S103. If the category with the largest probability value is the glasses-wearing category, it is determined that the face in the face image wears glasses.
S104. If the category with the largest probability value is the non-glasses-wearing category, it is determined that the face in the face image does not wear glasses.
In practical applications, if the probability value that the eye region image belongs to the glasses-wearing category is the largest, it is determined that the face in the face image wears glasses; if the probability value that the eye region image belongs to the non-glasses-wearing category is the largest, it is determined that the face in the face image does not wear glasses.
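The decision logic of steps S101–S104 — a face/non-face gate at the preset limit value, then an argmax over the eye-region categories — can be sketched as follows. The probability dictionaries stand in for the outputs of the two priori models and are assumptions of this sketch:

```python
def recognize_glasses(face_probs, eye_probs, face_limit=0.60):
    """Two-stage discrimination: first check the face/non-face gate
    (S101/S102), then pick the eye-region category with the largest
    probability (S103/S104). Returns 'glasses', 'no_glasses', or None
    when the image is not confidently a face (process ends)."""
    if face_probs.get('face', 0.0) <= face_limit:
        return None  # face-category probability did not exceed the limit
    return max(eye_probs, key=eye_probs.get)
```

For example, `recognize_glasses({'face': 0.9, 'non_face': 0.1}, {'glasses': 0.7, 'no_glasses': 0.3})` selects the glasses-wearing category.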
In the embodiment of the present invention, classification discrimination is performed on the face image to be detected through the priori model in the preset deep convolutional neural network, obtaining the facial category of the face image and the probability value corresponding to that category, where the facial categories include a face category and a non-face category. If the probability value corresponding to the face category is greater than the preset limit value, classification discrimination is performed on the eye region image in the face image through the face priori model in the deep convolutional neural network, obtaining the probability value corresponding to each category. If the category with the largest probability value is the glasses-wearing category, it is determined that the face in the face image wears glasses; if the category with the largest probability value is the non-glasses-wearing category, it is determined that the face does not wear glasses. In this way, under complex imaging conditions, the preset deep convolutional neural network first discriminates whether the face image is the image of a face, and only when it is discriminated to be a face is the eye region image recognized. Whether glasses are worn can thus be identified accurately, the accuracy of glasses-wearing recognition is increased, and functions such as face verification and face search are accurately assisted.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of the image-based eyeglass recognition method provided by the second embodiment of the present invention, which can be applied to terminals that recognize facial images, such as face attendance devices, face anti-theft systems, and computers. The method mainly includes the following steps:
S201. Determine the face image in the image to be recognized through face detection and face key point localization, and set the face image in the image to be recognized as the detection region.
Face detection is performed on the input image by a Haar (wavelet) classifier or the DLIB (C++ library) algorithms; face key point localization is then performed on the detected image by the supervised descent method (SDM, Supervised Descent Method). The face key points located by the SDM algorithm include the eyebrows, eyes, nose, mouth, and face contour. Of course, face detection and face key point localization can also be realized by other algorithms.
The Haar classifier incorporates the adaptive boosting (Adaboost) algorithm; in the field of image recognition, a classifier refers to an algorithm that classifies faces versus non-faces.
DLIB is a C++ algorithm library applicable to face detection and face key point localization.
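As a hedged aside not spelled out in the patent text: Haar/Adaboost classifiers of this kind typically evaluate rectangular features in constant time using an integral image (summed-area table). A minimal pure-Python sketch of that building block:

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img over rows [0, y) and
    columns [0, x). Any rectangle sum then needs only four lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom][left:right] from the integral image."""
    return ii[bottom][right] - ii[top][right] - ii[bottom][left] + ii[top][left]
```

This constant-time rectangle sum is what makes scanning many Haar features over an image practical.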
Fig. 3 is a schematic diagram of face detection. As shown in Fig. 3, the dashed square box is the face detection box, the circle represents a human face, the triangle represents an animal, and the polygon represents trees; the face image can be extracted from the picture through face detection.
S202. Expand the detection region by a preset multiple, so that the face image includes the image of the entire face contour area.
Expanding the detection region by the preset multiple extends it to the entire face contour area in the face image. The detection region may be expanded as a whole, or its top, bottom, left, and right may all be expanded by the preset multiple, or only its left and right sides may be expanded. The preset multiple is related to the width of the face contour; in this embodiment the preferred value of the preset multiple is 0.1, i.e. the right and left sides of the detection region are each expanded by 0.1 times. When detecting glasses, the frame of the glasses may be wider than the face, so expanding the detection region allows the whole pair of glasses to appear in the detection region, which increases recognition accuracy.
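The left/right expansion by 0.1 times described above can be sketched as follows; the `(left, top, right, bottom)` box convention and clamping to the image width are assumptions of this sketch:

```python
def expand_detection_box(box, img_width, multiple=0.1):
    """Expand the face detection box left and right by `multiple` of its
    width, so a spectacle frame wider than the face still fits inside,
    clamping the result to the image bounds."""
    left, top, right, bottom = box
    margin = (right - left) * multiple
    new_left = max(0, left - margin)
    new_right = min(img_width, right + margin)
    return (new_left, top, new_right, bottom)
```

For a 100-pixel-wide detection box, each side grows by 10 pixels, matching the 0.1-times expansion in the text.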
S203. Correct the eye region in the detection region, so that the two eyes in the eye region of the face image are on the same horizontal line.
The way of correcting the eye region is not limited: the eye region may be corrected by a geometric transformation, or by changing the angle between the two eyes. The final purpose is that the two eyes in the eye region are on the same horizontal line.
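One common geometric correction — a sketch under the assumption that the two eye centers are known from the key points, not a formula the patent spells out — rotates the image by the angle of the line through the eyes:

```python
import math

def eye_roll_angle(left_eye, right_eye):
    """Angle (in degrees) of the line through the two eye centers, with
    (x, y) pixel coordinates. Rotating the image by the negative of this
    angle puts both eyes on the same horizontal line."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))
```

An image library's rotation routine would then be applied around the midpoint between the eyes.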
S204. Determine the position of the eye region through the face key point localization, and extract the eye region image from the face image according to the position of the eye region.
The eye region image includes the facial zone image from the outer corner of the eye to the top of the ear on the same side.
S205. Perform classification discrimination on the face image to be detected through the priori model in the preset deep convolutional neural network, obtaining the facial category of the face image and the probability value corresponding to that facial category.
A deep convolutional neural network is a deep neural network with a convolutional structure that includes at least two nonlinear trainable convolutional layers, two nonlinear fixed convolutional layers, and a fully connected layer, at least five hidden layers in total, and is mainly used in speech analysis and image recognition.
The operation of detecting the face is performed in step S201, but the Haar classifier cannot determine with great accuracy that the detected face image is actually the image of a face; for blurred images in particular, it is difficult for the Haar classifier to accurately recognize a face in the picture. Therefore the priori model can be obtained by training image samples with the preset deep convolutional neural network, where the training image samples are classified according to the preset facial categories in the course of obtaining the priori model; that is, in this embodiment, the facial categories include a face category and a non-face category. The priori model can be used as a model for performing classification discrimination on images.
Through classification discrimination, the facial category of the face image and the probability value corresponding to that category can be obtained, namely the probability value that the face image belongs to the face category and the probability value that the face image belongs to the non-face category.
S206. If the probability value corresponding to the face category is greater than the preset limit value, perform classification discrimination on the eye region image in the face image through the face priori model in the deep convolutional neural network, obtaining the probability value corresponding to each category.
The preset limit value is a value greater than or equal to 60%, and the preferred preset limit value in this embodiment is 60%. If the probability value of the face category for the face image is less than the preset limit value, the process ends. If the probability value corresponding to the face category is greater than the preset limit value, it is determined that the face image is the image of a face; after this determination, classification discrimination is performed on the eye region image in the face image through the face priori model in the deep convolutional neural network, obtaining the probability value corresponding to each category.
The face priori model can be obtained by training image samples of faces with the deep convolutional neural network, where the face image samples are classified according to the preset categories in the course of obtaining the face priori model; the face priori model can be used as a model for performing classification discrimination on images. In this embodiment there are two preset categories: a glasses-wearing category and a non-glasses-wearing category. After classification discrimination through the face priori model, the probability value that the eyes in the eye region image wear glasses and the probability value that they do not can be obtained, namely the probability value that the eye region image in the face image belongs to the glasses-wearing category and the probability value that it belongs to the non-glasses-wearing category.
It should be noted that the face priori model and the priori model may be the same model or different models; the above description covers the case where the two are different. If the face priori model and the priori model are the same model, the image samples used to train the model through the deep convolutional neural network include face image samples and non-face image samples, where the face image samples include samples wearing various glasses and samples not wearing glasses. In this way only one model needs to be stored, which saves storage space; at the same time a single model can both discriminate the facial category and recognize glasses, which improves discrimination efficiency.
S207. If the category with the largest probability value is the glasses-wearing category, it is determined that the face in the face image wears glasses.
In practical applications, if the probability value that the eye region image belongs to the glasses-wearing category is the largest, it is determined that the face in the face image wears glasses.
S208. Perform classification discrimination on the type of glasses worn in the eye region image through the face priori model in the deep convolutional neural network, obtaining the probability value corresponding to each glasses type.
The categories divided by the face priori model in step S206 include the glasses-wearing category and the non-glasses-wearing category, where the glasses-wearing category further includes glasses types such as myopia glasses, goggles, sunglasses, and glasses frames. The types of glasses are not limited here; those skilled in the art can add to or replace the types listed in this embodiment.
A probability value is calculated for each glasses type, and the probability value corresponding to the glasses type with the largest probability is compared with a preset probability value: if it is greater than the preset probability value, step S209 is executed; if it is less than the preset probability value, step S210 is executed.
S209. If the probability value corresponding to the glasses type with the largest probability is greater than the preset probability value, it is determined that the type of glasses worn by the face in the face image is that glasses type.
The preset probability value may be the same as or different from the aforementioned preset limit value; the preferred preset probability value in this embodiment is 80%. If the glasses type with the largest probability value is sunglasses, it is determined that the type of glasses worn by the face in the face image is sunglasses.
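Steps S209/S210 reduce to an argmax with an 80% confidence gate. A minimal sketch — the category names and the fallback marker are illustrative assumptions, not the patent's identifiers:

```python
def decide_glasses_type(type_probs, preset_prob=0.80):
    """If the most probable glasses type clears the preset probability
    value, return it (S209); otherwise signal that the cosine-comparison
    path of S210 should be taken instead."""
    best = max(type_probs, key=type_probs.get)
    if type_probs[best] > preset_prob:
        return best           # S209: confident type prediction
    return 'cosine_fallback'  # S210: compare features by cosine similarity
```

For example, a sunglasses probability of 0.85 is accepted directly, while a flat distribution falls through to the cosine comparison.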
S210. If the probability value corresponding to the glasses type with the largest probability is less than the preset probability value, identify the type of glasses worn by the face in the face image through the cosine comparison result between the features of the eye region image in the face image and the features of preset glasses image samples.
In the field of image recognition, the "feature" of an image is a technical term; image feature extraction is a concept in computer vision and image processing.
Before the cosine comparison, optionally, the features of the eye region image in the face image can be extracted through the fully connected layer of the deep convolutional neural network, and the features of multiple glasses image samples can be extracted at the same time. The features of the eye region image extracted by the fully connected layer of the deep convolutional neural network can effectively describe the edges, texture, and color of the glasses in the eye region image, increasing the accuracy of glasses type recognition.
The features of the glasses image samples can be stored in a built-in storage module of the terminal, or stored on a cloud server, where the features of the glasses image samples include features of glasses image samples of different glasses types.
Optionally, identifying the type of glasses worn by the face in the face image through the cosine comparison result between the features of the eye region image in the face image and the features of the preset glasses image samples is specifically:
comparing the features of the eye region image with the features of the glasses image samples by cosine similarity, calculating the cosine similarity value between the eye region image and each glasses image sample;
according to the calculated cosine similarity values, in order of similarity from high to low, selecting a preset number of target glasses image samples from the glasses image samples;
counting the glasses types among the target glasses image samples, and choosing the glasses type with the largest sample count as the target glasses type;
choosing the similar glasses image samples corresponding to the target glasses type among the target glasses image samples, and taking the cosine similarity values between the eye region image and the similar glasses image samples as target cosine similarity values;
calculating the mean of the target cosine similarity values;
and, if the mean is greater than a default value, taking the target glasses type as the type of glasses worn by the face in the face image.
Cosine similarity uses the cosine of the angle between two vectors in a vector space as a measure of the difference between two individuals; the larger the cosine similarity value, the higher the similarity. In practical applications, the glasses image samples are first arranged from high similarity to low according to the calculated cosine similarity values, and then the preset number of target glasses image samples are selected in order from high to low. The preset number can be chosen arbitrarily; of course, the more samples are chosen, the higher the final discrimination accuracy. The preferred preset number in this embodiment is 20. The default value is a value greater than or equal to 50%, and the preferred default value in this embodiment is 50%.
The process of determining the species of glasses worn by the face in the face image is illustrated as follows:
Suppose the preset number is 20. Counting the 20 target glasses image samples, the number of sunglasses samples is 11, the number of goggles samples is 1, and the number of myopia-glasses samples is 8, so the target glasses species is sunglasses. The 11 image samples whose glasses species is sunglasses are then chosen as the similar glasses image samples, and the cosine similarity values between the eye region image and these 11 similar glasses image samples are extracted as target cosine similarity values; each similar glasses image sample corresponds to one target cosine similarity value, giving 11 target cosine similarity values in total. The mean of these 11 target cosine similarity values is computed as value A; when A is greater than the preset value, the species of glasses worn by the face in the face image is determined to be sunglasses.
In step S211, if the class with the largest probability value is the not-wearing-glasses class, it is determined that the face in the face image is not wearing glasses.
After it is determined that the face in the face image is not wearing glasses, the process ends.
In the embodiment of the present invention, the face image in the image to be recognized is determined by face detection and face key-point localization, and the face image in the image to be recognized is set as the detection region; the detection region is expanded by a preset factor so that the face image contains the whole face contour region; the eye region in the detection region is rectified so that the eyes in the eye region of the face image lie on the same horizontal line; the position of the eye region is determined by face key-point localization, and the eye region image is extracted from the face image according to that position. The face image to be detected is classified by the prior model in the preset deep convolutional neural network, yielding the face class of the face image and the probability value corresponding to that class. If the probability value corresponding to the human-face class is greater than a preset threshold, the eye region image in the face image is classified by the facial prior model in the deep convolutional neural network, yielding a probability value for each class. If the class with the largest probability value is the wearing-glasses class, it is determined that the face in the face image is wearing glasses, and the species of glasses worn in the eye region image is classified by the facial prior model in the deep convolutional neural network, yielding a probability value for each glasses species. If the probability value of the most probable glasses species is greater than a preset probability value, the species of glasses worn by the face in the face image is determined to be that most probable species; if it is less than the preset probability value, the species of glasses worn by the face in the face image is identified by the cosine comparison result between the features of the eye region image in the face image and the features of preset glasses image samples. If the class with the largest probability value is the not-wearing-glasses class, it is determined that the face in the face image is not wearing glasses. Thus, under complex imaging conditions, the preset deep convolutional neural network recognizes only the eye region image in the face image, so it can accurately identify whether glasses are worn and at the same time identify the species of the glasses, which increases the accuracy of glasses recognition and in turn accurately assists functions such as face verification and face search.
Referring to Fig. 4, Fig. 4 is a schematic structural diagram of the image-based glasses recognition device provided by the third embodiment of the present invention; for ease of description, only the parts related to the embodiment of the present invention are shown. The image-based glasses recognition device of the example of Fig. 4 may be the executing body of the image-based glasses recognition method provided by the embodiment shown in Fig. 1, and may be a terminal or one of the control modules of a terminal. The image-based glasses recognition device of the example of Fig. 4 mainly includes a discrimination processing module 401 and a determining module 402. The functional modules are described in detail as follows:
The discrimination processing module 401 is configured to classify the face image to be detected by the prior model in the preset deep convolutional neural network, obtaining the face class of the face image and the probability value corresponding to that class.
The discrimination processing module 401 is further configured, if the probability value corresponding to the human-face class is greater than a preset threshold, to classify the eye region image in the face image by the facial prior model in the deep convolutional neural network, obtaining a probability value for each class.
A deep convolutional neural network is a convolutional neural network that contains at least two non-linear trainable convolutional layers, two non-linear fixed convolutional layers and a fully connected layer, i.e. at least five hidden layers in total, and is mainly used in the fields of speech analysis and image recognition.
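The layer structure described above can be illustrated with a toy forward pass in NumPy. This is a sketch under stated assumptions: real layers would be trained on image samples, the kernel sizes, input size and two-class output are invented for illustration, and average pooling stands in for the patent's fixed (non-trainable) layers.

```python
import numpy as np

def conv2d(x, k):
    # "valid" 2-D cross-correlation of one feature map with one kernel
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def avg_pool2x2(x):
    # a fixed (non-trainable) 2x2 average-pooling layer
    H, W = x.shape
    x = x[:H // 2 * 2, :W // 2 * 2]
    return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 32))      # toy grayscale input patch
k1 = rng.standard_normal((5, 5))       # trainable convolutional kernel 1
k2 = rng.standard_normal((5, 5))       # trainable convolutional kernel 2

h1 = np.maximum(conv2d(x, k1), 0.0)    # non-linear trainable conv layer: 32 -> 28
p1 = avg_pool2x2(h1)                   # fixed layer: 28 -> 14
h2 = np.maximum(conv2d(p1, k2), 0.0)   # second trainable conv layer: 14 -> 10
p2 = avg_pool2x2(h2)                   # second fixed layer: 10 -> 5

w = rng.standard_normal((2, p2.size))  # fully connected layer -> 2 classes
logits = w @ p2.ravel()
probs = np.exp(logits - logits.max())
probs /= probs.sum()                   # probability value for each class
```

The five hidden stages (conv, pool, conv, pool, fully connected) end in a normalized two-class output, matching the two-class discriminations (face / non-face, wearing / not wearing glasses) used throughout the embodiments.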
The discrimination processing module 401 can obtain the prior model by training the preset deep convolutional neural network on image samples, where the process of training on image samples to obtain the prior model is a process of classifying the image samples according to preset face classes; that is, in this embodiment, the face classes include the human-face class and the non-face class. The prior model is a model that can be used to classify images.
By discriminant classification, the discrimination processing module 401 can obtain the face class of the face image and the probability value corresponding to that class, i.e. the probability value that the face image belongs to the human-face class and the probability value that the face image belongs to the non-face class.
The preset threshold is a value greater than or equal to 60%; in this embodiment the preferred preset threshold is 60%. Then, when the probability value that the face image belongs to the human-face class is greater than 60%, the discrimination processing module 401 classifies the eye region image in the face image by the facial prior model in the deep convolutional neural network, obtaining a probability value for each class; if the probability value of the human-face class of the face image is less than the preset threshold, the process ends.
The discrimination processing module 401 can obtain the facial prior model by training the deep convolutional neural network on image samples of faces, where the process of training on face image samples to obtain the facial prior model is a process of classifying the image samples according to preset classes; the facial prior model is a model that can be used to classify images. In this embodiment there are two preset classes: one is the wearing-glasses class and the other is the not-wearing-glasses class. After classification by the facial prior model, the discrimination processing module 401 can obtain the probability that the eyes in the eye region image of the face image are wearing glasses and the probability that they are not, i.e. the probability value that the eye region image belongs to the wearing-glasses class and the probability value that the eye region image belongs to the not-wearing-glasses class.
The determining module 402 is configured, if the class with the largest probability value is the wearing-glasses class, to determine that the face in the face image is wearing glasses.
The determining module 402 is further configured, if the class with the largest probability value is the not-wearing-glasses class, to determine that the face in the face image is not wearing glasses.
In practice, if the probability value that the eye region image belongs to the wearing-glasses class is the largest, the determining module 402 determines that the face in the face image is wearing glasses; if the probability value that the eye region image belongs to the not-wearing-glasses class is the largest, the determining module 402 determines that the face in the face image is not wearing glasses.
It should be noted that, in the embodiment of the image-based glasses recognition device of the example of Fig. 4 above, the division of the functional modules is only illustrative; in practical applications the above functions may be assigned to different functional modules as needed, for example according to the configuration requirements of the corresponding hardware or the convenience of software implementation. Moreover, in practical applications, the corresponding functional modules of this embodiment may be implemented by corresponding hardware, or by corresponding hardware executing corresponding software. Each embodiment provided in this specification can apply the principle described above, which will not be repeated below.
In the embodiment of the present invention, the discrimination processing module 401 classifies the face image to be detected by the prior model in the preset deep convolutional neural network, obtaining the face class of the face image and the probability value corresponding to that class, where the face classes include the human-face class and the non-face class; if the discrimination processing module 401 finds that the probability value corresponding to the human-face class is greater than the preset threshold, it classifies the eye region image in the face image by the facial prior model in the deep convolutional neural network, obtaining a probability value for each class; if the determining module 402 finds that the class with the largest probability value is the wearing-glasses class, it determines that the face in the face image is wearing glasses; if the class with the largest probability value is the not-wearing-glasses class, it determines that the face in the face image is not wearing glasses. Thus, under complex imaging conditions, the discrimination processing module 401 first discriminates, through the preset deep convolutional neural network, whether the face image is an image of a human face, and only when it is does it recognize the eye region image in the face image, so that it can accurately identify whether glasses are worn, which increases the accuracy of glasses recognition and in turn accurately assists functions such as face verification and face search.
Referring to Fig. 5, which is a schematic structural diagram of the image-based glasses recognition device provided by the fourth embodiment of the present invention; for ease of description, only the parts related to the embodiment of the present invention are shown. The image-based glasses recognition device of the example of Fig. 5 may be the executing body of the image-based glasses recognition method provided by the embodiment shown in Fig. 2, and may be a terminal or one of the control modules of a terminal. The image-based glasses recognition device of the example of Fig. 5 mainly includes a determining module 501, an expansion module 502, a rectification module 503, an extraction processing module 504, a discrimination processing module 505 and a matching recognition module 506. The functional modules are described in detail as follows:
The determining module 501 is configured to determine the face image in the image to be recognized by face detection and face key-point localization, and to set the face image in the image to be recognized as the detection region.
Face detection is performed on the input image by a Haar classifier or the DLIB algorithm, and face key-point localization is then performed on the detected image by the SDM algorithm, where the face key points located by the SDM algorithm include the eyebrows, eyes, nose, mouth and face contour. Face detection and face key-point localization can of course also be implemented by other algorithms.
Fig. 3 is a schematic diagram of face detection. As shown in Fig. 3, the dashed square box is the face detection box, the circle represents a human face, the triangle represents an animal, and the polygon represents trees; the image of the face can be extracted from the picture through face detection.
The expansion module 502 is configured to expand the detection region by a preset factor so that the face image contains the image of the whole face contour region.
Expanding the detection region by a preset factor extends the detection region to the whole face contour region in the face image. The detection region may be expanded as a whole, or its top, bottom, left and right sides may all be expanded by the preset factor, or only its left and right sides may be expanded. The preset factor is related to the width of the face contour; in this embodiment the preferred value of the preset factor is 0.1, i.e. the right and left sides of the detection region are each expanded by 0.1 times. Because, when detecting glasses, the frame of the glasses may be wider than the face, expanding the detection region can bring the whole pair of glasses into the detection region, which increases the accuracy of recognition.
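The preferred expansion (left and right sides each widened by 0.1 times the box width, clamped to the image bounds) can be sketched as follows; the box format and the function name are illustrative assumptions, not part of the patent:

```python
def expand_detection_box(left, top, right, bottom, img_w, img_h, factor=0.1):
    """Widen a face detection box left and right by `factor` of its width,
    so that a spectacle frame wider than the face still falls inside it.
    Coordinates are clamped to the image bounds."""
    pad = int(round(factor * (right - left)))
    return max(0, left - pad), top, min(img_w, right + pad), bottom
```

For example, `expand_detection_box(100, 50, 200, 180, 640, 480)` widens a 100-pixel-wide box by 10 pixels on each side, returning `(90, 50, 210, 180)`.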
The rectification module 503 is configured to rectify the eye region in the detection region so that the eyes in the eye region of the face image lie on the same horizontal line.
The way the eye region is rectified is not limited: the eye region may be rectified by a geometric transformation, or by changing the angle between the two eyes; the final purpose is that the eyes in the eye region lie on the same horizontal line.
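One common geometric transformation for this rectification is to rotate the face by the angle of the line joining the two eye centers. The sketch below only computes that angle; the point format and function name are assumptions, and the actual rotation could then be done with any image library (e.g. an affine warp about the midpoint of the eyes).

```python
import math

def eye_alignment_angle(left_eye, right_eye):
    """Angle, in degrees, of the line through the two eye centers relative
    to the horizontal. Rotating the image by this angle about the midpoint
    of the eyes places both eyes on the same horizontal line."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))
```

An already-level pair of eyes yields an angle of 0, so the rectification is a no-op in that case.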
The extraction processing module 504 is configured to determine the position of the eye region by face key-point localization, and to extract the eye region image from the face image according to the position of the eye region.
The eye region image includes the facial region image between the outer corner of the eye and the top of the ear on the same side.
The discrimination processing module 505 is configured to classify the face image to be detected by the prior model in the preset deep convolutional neural network, obtaining the face class of the face image and the probability value corresponding to that class.
The determining module 501 may not always accurately determine, by the Haar classifier, that the face image is an image of a human face; for some relatively blurred images, the Haar classifier has difficulty accurately identifying the face in the image. Therefore, the discrimination processing module 505 obtains the prior model by training the preset deep convolutional neural network on image samples, where the process of training on image samples to obtain the prior model is a process of classifying the image samples according to preset face classes; that is, in this embodiment, the face classes include the human-face class and the non-face class. The prior model is a model that can be used to classify images.
By discriminant classification, the discrimination processing module 505 can obtain the face class of the face image and the probability value corresponding to that class, i.e. the probability value that the face image belongs to the human-face class and the probability value that the face image belongs to the non-face class.
The discrimination processing module 505 is further configured, if the probability value corresponding to the human-face class is greater than the preset threshold, to classify the eye region image in the face image by the facial prior model in the deep convolutional neural network, obtaining a probability value for each class.
The preset threshold is a value greater than or equal to 60%; in this embodiment the preferred preset threshold is 60%. If the probability value of the human-face class of the face image is less than the preset threshold, the process ends.
The discrimination processing module 505 can obtain the facial prior model by training the deep convolutional neural network on image samples of faces, where the process of training on face image samples to obtain the facial prior model is a process of classifying the image samples according to preset classes; the facial prior model is a model that can be used to classify images. In this embodiment there are two preset classes: one is the wearing-glasses class and the other is the not-wearing-glasses class. After classification by the facial prior model, the discrimination processing module 505 can obtain the probability that the eyes in the eye region image of the face image are wearing glasses and the probability that they are not, i.e. the probability value that the eye region image belongs to the wearing-glasses class and the probability value that the eye region image belongs to the not-wearing-glasses class.
The determining module 501 is further configured, if the class with the largest probability value is the wearing-glasses class, to determine that the face in the face image is wearing glasses.
In practice, if the probability value that the eye region image belongs to the wearing-glasses class is the largest, the determining module 501 determines that the face in the face image is wearing glasses.
The discrimination processing module 505 is further configured to classify the species of glasses worn in the eye region image by the facial prior model in the deep convolutional neural network, obtaining a probability value for each glasses species.
The classes divided by the facial prior model include the wearing-glasses class and the not-wearing-glasses class, where the wearing-glasses class includes glasses species such as myopia glasses, goggles, sunglasses and spectacle frames; the species of glasses are not limited here, and those skilled in the art may add to or replace the glasses species enumerated in this embodiment. A probability value is computed for each glasses species, and the probability value of the most probable glasses species is compared with the preset probability value.
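The comparison of the most probable glasses species against the preset probability value (80% in this embodiment) can be sketched as follows. The dictionary representation of the per-species probability values and the function name are assumptions for illustration:

```python
def decide_glasses_species(species_probs, preset_prob=0.8):
    """Return the most probable glasses species if its probability value
    exceeds the preset probability value; otherwise return None, signalling
    a fall-back to cosine comparison against the glasses image samples."""
    best = max(species_probs, key=species_probs.get)
    return best if species_probs[best] > preset_prob else None
```

A confident prediction such as 90% sunglasses is accepted directly; an ambiguous one (e.g. 50%/50%) is handed to the sample-matching step instead.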
It should be noted that the facial prior model and the prior model may be the same model or different models; the above embodiments describe the case where they differ. If the facial prior model and the prior model are the same model, the image samples used to train the model through the deep convolutional neural network include facial image samples and non-facial image samples, where the facial image samples include samples of faces wearing various glasses and of faces not wearing glasses. In this way only one model needs to be stored, which saves storage space, and a single model can both discriminate the face class and recognize the glasses, which in turn improves the efficiency of the discrimination.
The matching recognition module 506 is configured, if the probability value of the most probable glasses species is greater than the preset probability value, to determine that the species of glasses worn by the face in the face image is that most probable glasses species.
The preset probability value may be the same as the aforementioned preset threshold or different; in this embodiment the preferred preset probability value is 80%. If the most probable glasses species is sunglasses, it is determined that the species of glasses worn by the face in the face image is sunglasses.
The matching recognition module 506 is further configured, if the probability value of the most probable glasses species is less than the preset probability value, to identify the species of glasses worn by the face in the face image by the cosine comparison result between the features of the eye region image in the face image and the features of the preset glasses image samples.
Before the cosine comparison, optionally, the extraction processing module 504 is further configured to extract the features of the eye region image in the face image through the fully connected layer of the deep convolutional neural network, and at the same time to extract the features of the multiple glasses image samples. The features of the eye region image extracted by the extraction processing module 504 through the fully connected layer of the deep convolutional neural network can effectively describe the edges, texture and color of the glasses in the eye region image, increasing the accuracy of recognizing the glasses species. The features of the glasses image samples may be stored in a built-in storage module of the terminal or on a cloud server, where the features of the glasses image samples include features of glasses image samples of different glasses species.
Optionally, the matching recognition module 506 includes a comparing module, a selection module, a statistics module, a computing module and a recognition module;
The comparing module is configured to perform a cosine similarity comparison between the features of the eye region image and the features of the glasses image samples, and to calculate the cosine similarity value between the eye region image and each glasses image sample;
The selection module is configured to select, according to the computed cosine similarity values and in order of similarity from high to low, a preset number of target glasses image samples from the glasses image samples;
The statistics module is configured to count the glasses species among the target glasses image samples, and to choose the glasses species with the largest sample count as the target glasses species;
The selection module is further configured to select, from the target glasses image samples, the similar glasses image samples corresponding to the target glasses species, and to extract the cosine similarity values between the eye region image and the similar glasses image samples as target cosine similarity values;
The computing module is configured to calculate the mean of the target cosine similarity values;
The recognition module is configured, if the mean is greater than a preset value, to take the target glasses species as the species of glasses worn by the face in the face image.
The preset value is a number greater than or equal to 50; in this embodiment the preferred preset value is 50. The larger the cosine similarity value, the higher the similarity. In practice, the selection module sorts the glasses image samples from high similarity to low according to the computed cosine similarity values, and selects a preset number of target glasses image samples in that order. The preset number can be chosen freely; naturally, the more samples are chosen, the higher the accuracy of the final decision. In this embodiment the preferred preset number is 20.
The determining module 501 is further configured, if the class with the largest probability value is the not-wearing-glasses class, to determine that the face in the face image is not wearing glasses.
After it is determined that the face in the face image is not wearing glasses, the process ends.
In the embodiment of the present invention, the determining module 501 determines the face image in the image to be recognized by face detection and face key-point localization and sets the face image in the image to be recognized as the detection region; the expansion module 502 expands the detection region by a preset factor so that the face image contains the image of the whole face contour region; the rectification module 503 rectifies the eye region in the detection region so that the eyes in the eye region of the face image lie on the same horizontal line; the extraction processing module 504 determines the position of the eye region by face key-point localization and extracts the eye region image from the face image according to that position; the discrimination processing module 505 classifies the face image to be detected by the prior model in the preset deep convolutional neural network, obtaining the face class of the face image and the probability value corresponding to that class; if the discrimination processing module 505 finds that the probability value corresponding to the human-face class is greater than the preset threshold, it classifies the eye region image in the face image by the facial prior model in the deep convolutional neural network, obtaining a probability value for each class; if the determining module 501 finds that the class with the largest probability value is the wearing-glasses class, it determines that the face in the face image is wearing glasses, and the discrimination processing module 505 classifies the species of glasses worn in the eye region image by the facial prior model in the deep convolutional neural network, obtaining a probability value for each glasses species; if the matching recognition module 506 finds that the probability value of the most probable glasses species is greater than the preset probability value, it determines that the species of glasses worn by the face in the face image is that most probable species; if the probability value of the most probable glasses species is less than the preset probability value, it identifies the species of glasses worn by the face in the face image by the cosine comparison result between the features of the eye region image in the face image and the features of the preset glasses image samples; if the determining module 501 finds that the class with the largest probability value is the not-wearing-glasses class, it determines that the face in the face image is not wearing glasses. Thus, under complex imaging conditions, the preset deep convolutional neural network recognizes only the eye region image in the face image, so it can accurately identify whether glasses are worn and at the same time identify the species of the glasses, which increases the accuracy of glasses recognition and in turn accurately assists functions such as face verification and face search.
Fig. 6 is a schematic diagram of the hardware structure of the electronic device that executes the image-based glasses recognition method provided by the fifth embodiment of the present invention; for ease of description, only the parts related to the embodiment of the present invention are shown, and for undisclosed technical details please refer to the method part of the present invention. The electronic device may be an electronic device that recognizes face images, such as a computer, a tablet computer, a personal digital assistant (PDA), a face attendance device or a face anti-theft system.
The electronic device includes one or more processors 610, a memory 620 and one or more programs (modules), where the one or more programs (modules) are stored in the memory 620. The electronic device also includes an input device 630 and an output device 640; the processor 610, the memory 620, the input device 630 and the output device 640 may be connected by a bus or in other ways — in Fig. 6, connection by a bus 650 is taken as an example.
When the one or more programs are executed by the one or more processors 610, the processor 610 performs the following operations:
classifying the face image to be detected by the prior model in the preset deep convolutional neural network, obtaining the face class of the face image and the probability value corresponding to that class, where the face classes include the human-face class and the non-face class;
if the probability value corresponding to the human-face class is greater than a preset threshold, classifying the eye region image in the face image by the facial prior model in the deep convolutional neural network, obtaining a probability value for each class;
if the class with the largest probability value is the wearing-glasses class, determining that the face in the face image is wearing glasses;
if the class with the largest probability value is the not-wearing-glasses class, determining that the face in the face image is not wearing glasses.
In the embodiment of the present invention, the processor 610 is configured to classify the face image to be detected by the prior model in the preset deep convolutional neural network, obtaining the face class of the face image and the probability value corresponding to that class, where the face classes include the human-face class and the non-face class; if the probability value corresponding to the human-face class is greater than the preset threshold, to classify the eye region image in the face image by the facial prior model in the deep convolutional neural network, obtaining a probability value for each class; if the class with the largest probability value is the wearing-glasses class, to determine that the face in the face image is wearing glasses; and if the class with the largest probability value is the not-wearing-glasses class, to determine that the face in the face image is not wearing glasses. Thus, under complex imaging conditions, the preset deep convolutional neural network first discriminates whether the face image is an image of a human face, and only when it is does it recognize the eye region image in the face image, so that it can accurately identify whether glasses are worn, which increases the accuracy of glasses recognition and in turn accurately assists functions such as face verification and face search.
It should be understood that the systems, terminals and methods disclosed in the several embodiments provided in this application may be implemented in other ways. For example, the image-based glasses recognition method and device embodiments described above are only schematic; for example, the division of the modules is only a division by logical function, and there may be other ways of division in actual implementation: multiple modules or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection of modules through some interfaces, and may be electrical, mechanical or in other forms.
Modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules; they may be located in one place, or they may be distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of the present invention may be integrated into one processing module, each module may exist physically on its own, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software function module.
If the integrated module is implemented in the form of a software function module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a portable hard drive, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.
It should be noted that, for ease of description, each of the foregoing method embodiments is expressed as a series of action combinations; however, those skilled in the art should know that the present invention is not limited by the described order of actions, because according to the present invention some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in this specification are preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
In the above embodiments, the description of each embodiment has its own emphasis; for a part not described in detail in a certain embodiment, reference may be made to the related descriptions of the other embodiments.
The above is a description of the image-based glasses recognition method and device provided by the present invention. Those skilled in the art, following the idea of the embodiments of the present invention, may make changes in specific implementations and application scope. In summary, the content of this specification should not be construed as limiting the present invention.
Claims (8)
1. An image-based glasses recognition method, characterized by comprising:
performing discriminant classification on a face image to be detected by a prior model in a preset deep convolutional neural network, to obtain a face category of the face image and a probability value corresponding to the face category, wherein the face categories include a face category and a non-face category;
if the probability value corresponding to the face category is greater than a preset limit value, performing discriminant classification on an eye-region image in the face image by a face prior model in the deep convolutional neural network, to obtain a probability value corresponding to each category; and
if the category with the largest probability value is a wearing-glasses category, determining that the face in the face image is wearing glasses.
2. The method according to claim 1, characterized in that, after determining that the face in the face image is wearing glasses when the category with the largest probability value is the wearing-glasses category, the method further comprises:
performing discriminant classification on the type of glasses worn in the eye-region image by the face prior model in the deep convolutional neural network, to obtain a probability value corresponding to each glasses type;
if the probability value corresponding to the glasses type with the largest probability value is greater than a preset probability value, determining that the type of glasses worn by the face in the face image is the glasses type with the largest probability value; and
if the probability value corresponding to the glasses type with the largest probability value is less than the preset probability value, identifying the type of glasses worn by the face in the face image by a cosine comparison result between a feature of the eye-region image and a feature of a preset glasses image sample.
3. The method according to claim 2, characterized by:
determining the face image in an image to be recognized by face detection and face key point location, and setting the face image as a detection region;
expanding the detection region by a preset multiple, so that the face image includes the entire face contour region; and
correcting the eye region in the detection region, so that the eyes in the eye region lie on the same horizontal line.
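The correction step above — rotating so both eyes lie on one horizontal line — can be sketched with two eye keypoints. This is an illustrative geometry-only sketch under the assumption that the eye keypoints come from the key point location step; applying the rotation to actual pixels is elided.

```python
import math

def eye_alignment_angle(left_eye, right_eye):
    """Angle (degrees) of the line through the two eye keypoints.
    Rotating the image by this angle puts both eyes on one horizontal line."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def rotate_point(pt, center, angle_deg):
    """Apply the correction rotation (by -angle_deg) to a point about center."""
    theta = math.radians(-angle_deg)
    x, y = pt[0] - center[0], pt[1] - center[1]
    return (center[0] + x * math.cos(theta) - y * math.sin(theta),
            center[1] + x * math.sin(theta) + y * math.cos(theta))
```

After rotating every keypoint (or pixel) by the computed angle, the two eyes share the same y coordinate, which normalizes the eye region before classification.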
4. The method according to claim 3, characterized by:
determining the position of the eye region by the face key point location, and extracting the eye-region image in the face image according to the position of the eye region, wherein the eye-region image includes the facial region between the outer corner of the eye and the top of the ear on the same side.
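The extraction in the claim above amounts to cropping a box spanned by two keypoints. A minimal sketch, assuming pixel coordinates for the outer eye corner and the top of the ear on the same side (the keypoint names are illustrative):

```python
def eye_region_box(outer_eye_corner, ear_top):
    """Axis-aligned crop box (x_min, y_min, x_max, y_max) covering the facial
    region between the outer corner of the eye and the top of the ear."""
    (x1, y1), (x2, y2) = outer_eye_corner, ear_top
    return (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))
```

The resulting box is what would be fed to the eye-region classifier in claim 1; spanning out to the ear keeps the temple arms of the glasses inside the crop.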
5. An image-based glasses recognition device, characterized by comprising:
a discrimination processing module, configured to perform discriminant classification on a face image to be detected by a prior model in a preset deep convolutional neural network, to obtain a face category of the face image and a probability value corresponding to the face category;
the discrimination processing module being further configured to, if the probability value corresponding to the face category is greater than a preset limit value, perform discriminant classification on an eye-region image in the face image by a face prior model in the deep convolutional neural network, to obtain a probability value corresponding to each category; and
a determining module, configured to determine, if the category with the largest probability value is a wearing-glasses category, that the face in the face image is wearing glasses;
the determining module being further configured to determine, if the category with the largest probability value is a not-wearing-glasses category, that the face in the face image is not wearing glasses.
6. The device according to claim 5, characterized in that:
the discrimination processing module is further configured to perform discriminant classification on the type of glasses worn in the eye-region image by the face prior model in the deep convolutional neural network, to obtain a probability value corresponding to each glasses type;
the device further comprises:
a matching recognition module, configured to determine, if the probability value corresponding to the glasses type with the largest probability value is greater than a preset probability value, that the type of glasses worn by the face in the face image is the glasses type with the largest probability value;
the matching recognition module being further configured to, if the probability value corresponding to the glasses type with the largest probability value is less than the preset probability value, identify the type of glasses worn by the face in the face image by a cosine comparison result between a feature of the eye-region image in the face image and a feature of a preset glasses image sample.
7. The device according to claim 5, characterized in that:
the determining module is further configured to determine the face image in an image to be recognized by face detection and face key point location, and set the face image in the image to be recognized as a detection region;
the device further comprises:
an expansion module, configured to expand the detection region by a preset multiple, so that the face image includes an image of the entire face contour region; and
a correction module, configured to correct the eye region in the detection region, so that the eyes in the eye region of the face image lie on the same horizontal line.
8. The device according to claim 7, characterized in that the device further comprises:
an extraction processing module, configured to determine the position of the eye region by the face key point location, and extract the eye-region image in the face image according to the position of the eye region, wherein the eye-region image includes a facial region image between the outer corner of the eye and the top of the ear on the same side.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610795999.2A CN106407911A (en) | 2016-08-31 | 2016-08-31 | Image-based eyeglass recognition method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610795999.2A CN106407911A (en) | 2016-08-31 | 2016-08-31 | Image-based eyeglass recognition method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106407911A true CN106407911A (en) | 2017-02-15 |
Family
ID=58001978
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610795999.2A Pending CN106407911A (en) | 2016-08-31 | 2016-08-31 | Image-based eyeglass recognition method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106407911A (en) |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107153072A (en) * | 2017-06-21 | 2017-09-12 | 苏州卡睿知光电科技有限公司 | A kind of eyeglass flaw inspection method and device |
CN107464253A (en) * | 2017-07-10 | 2017-12-12 | 北京小米移动软件有限公司 | eyebrow location method and device |
CN107464253B (en) * | 2017-07-10 | 2020-11-20 | 北京小米移动软件有限公司 | Eyebrow positioning method and device |
WO2019061658A1 (en) * | 2017-09-30 | 2019-04-04 | 平安科技(深圳)有限公司 | Method and device for positioning eyeglass, and storage medium |
US10635946B2 (en) | 2017-09-30 | 2020-04-28 | Ping An Technology (Shenzhen) Co., Ltd. | Eyeglass positioning method, apparatus and storage medium |
CN107808142A (en) * | 2017-11-09 | 2018-03-16 | 北京小米移动软件有限公司 | Eyeglass detection method and device |
US11270100B2 (en) | 2017-11-14 | 2022-03-08 | Huawei Technologies Co., Ltd. | Face image detection method and terminal device |
CN107992815A (en) * | 2017-11-28 | 2018-05-04 | 北京小米移动软件有限公司 | Eyeglass detection method and device |
CN107992835A (en) * | 2017-12-11 | 2018-05-04 | 浙江大学 | A kind of glasses image-recognizing method |
CN109934062A (en) * | 2017-12-18 | 2019-06-25 | 比亚迪股份有限公司 | Training method, face identification method, device and the equipment of eyeglasses removal model |
CN108090450A (en) * | 2017-12-20 | 2018-05-29 | 深圳和而泰数据资源与云技术有限公司 | Face identification method and device |
CN108288024A (en) * | 2017-12-20 | 2018-07-17 | 深圳和而泰数据资源与云技术有限公司 | Face identification method and device |
CN108090450B (en) * | 2017-12-20 | 2020-11-13 | 深圳和而泰数据资源与云技术有限公司 | Face recognition method and device |
CN108573219A (en) * | 2018-03-27 | 2018-09-25 | 上海电力学院 | A kind of eyelid key point accurate positioning method based on depth convolutional neural networks |
CN108573219B (en) * | 2018-03-27 | 2022-03-29 | 上海电力学院 | Eyelid key point accurate positioning method based on deep convolutional neural network |
CN111814815A (en) * | 2019-04-11 | 2020-10-23 | 苏州工其器智能科技有限公司 | Intelligent glasses placement state distinguishing method based on lightweight neural network |
CN111814815B (en) * | 2019-04-11 | 2023-08-22 | 浙江快奇控股有限公司 | Intelligent judging method for glasses placement state based on lightweight neural network |
CN110222608A (en) * | 2019-05-24 | 2019-09-10 | 山东海博科技信息系统股份有限公司 | A kind of self-service examination machine eyesight detection intelligent processing method |
CN112733570A (en) * | 2019-10-14 | 2021-04-30 | 北京眼神智能科技有限公司 | Glasses detection method and device, electronic equipment and storage medium |
CN111429409A (en) * | 2020-03-13 | 2020-07-17 | 深圳市雄帝科技股份有限公司 | Method and system for identifying glasses worn by person in image and storage medium thereof |
CN112418138B (en) * | 2020-12-04 | 2022-08-19 | 兰州大学 | Glasses try-on system |
CN112418138A (en) * | 2020-12-04 | 2021-02-26 | 兰州大学 | Glasses try-on system and program |
CN113723308A (en) * | 2021-08-31 | 2021-11-30 | 上海西井信息科技有限公司 | Detection method, system, equipment and storage medium of epidemic prevention suite based on image |
CN113723308B (en) * | 2021-08-31 | 2023-08-22 | 上海西井科技股份有限公司 | Image-based epidemic prevention kit detection method, system, equipment and storage medium |
CN116343312A (en) * | 2023-05-29 | 2023-06-27 | 深圳市优友互联股份有限公司 | Method and equipment for identifying wearing object in face image |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106407911A (en) | Image-based eyeglass recognition method and device | |
CN106469298A | Age recognition method and device based on facial image | |
KR20230021043A (en) | Method and apparatus for recognizing object, and method and apparatus for learning recognizer | |
CN112016464B (en) | Method and device for detecting face shielding, electronic equipment and storage medium | |
CN101558431B (en) | Face authentication device | |
CN103136504B (en) | Face identification method and device | |
CN106295591A (en) | Gender identification method based on facial image and device | |
CN106326857A (en) | Gender identification method and gender identification device based on face image | |
CN108985135A (en) | A kind of human-face detector training method, device and electronic equipment | |
CN108427921A (en) | A kind of face identification method based on convolutional neural networks | |
CN105243374A (en) | Three-dimensional human face recognition method and system, and data processing device applying same | |
CN105354565A (en) | Full convolution network based facial feature positioning and distinguishing method and system | |
CN113239907B (en) | Face recognition detection method and device, electronic equipment and storage medium | |
CN107316029A (en) | A kind of live body verification method and equipment | |
CN106650670A (en) | Method and device for detection of living body face video | |
US20220406090A1 (en) | Face parsing method and related devices | |
CN106778489A (en) | The method for building up and equipment of face 3D characteristic identity information banks | |
CN113449704B (en) | Face recognition model training method and device, electronic equipment and storage medium | |
CN106599785A (en) | Method and device for building human body 3D feature identity information database | |
CN108446672A (en) | A kind of face alignment method based on the estimation of facial contours from thick to thin | |
CN112200176B (en) | Method and system for detecting quality of face image and computer equipment | |
CN106611158A (en) | Method and equipment for obtaining human body 3D characteristic information | |
Hebbale et al. | Real time COVID-19 facemask detection using deep learning | |
CN112257665A (en) | Image content recognition method, image recognition model training method, and medium | |
CN107368803A (en) | A kind of face identification method and system based on classification rarefaction representation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20170215 |