CN106326857A - Gender identification method and gender identification device based on face image - Google Patents

Gender identification method and gender identification device based on face image

Info

Publication number
CN106326857A
CN106326857A (application CN201610698494.4A)
Authority
CN
China
Prior art keywords
facial image
image
target
sex
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610698494.4A
Other languages
Chinese (zh)
Inventor
公绪超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LeTV Holding Beijing Co Ltd
LeTV Cloud Computing Co Ltd
Original Assignee
LeTV Holding Beijing Co Ltd
LeTV Cloud Computing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LeTV Holding Beijing Co Ltd, LeTV Cloud Computing Co Ltd filed Critical LeTV Holding Beijing Co Ltd
Priority to CN201610698494.4A priority Critical patent/CN106326857A/en
Publication of CN106326857A publication Critical patent/CN106326857A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155 Bayesian classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/247 Aligning, centring, orientation detection or correction of the image by affine transforms, e.g. correction due to perspective effects; Quadrilaterals, e.g. trapezoids

Abstract

The invention discloses a gender identification method and a gender identification device based on a face image. The method comprises: performing gender classification on a target face image by means of a prior model in a preset deep convolutional neural network, to obtain a gender determination result corresponding to the target face image; comparing the features of the target face image with the features of image samples to obtain cosine similarities, thereby obtaining a cosine comparison result corresponding to the target face image; and performing a Bayesian classifier operation on the gender determination result and the cosine comparison result, to obtain the gender of the subject in the face image. Thus, under complicated imaging conditions, by stitching together the key regions of the face image and combining deep convolutional neural network classification with image feature comparison, the features, deep texture, edges and color characteristics of each key facial region can assist recognition, improving the accuracy of gender identification.

Description

Gender identification method and device based on facial image
Technical field
The invention belongs to the technical field of image recognition, and in particular relates to a gender identification method and device based on a facial image.
Background art
Face recognition is a biometric identification technology that identifies people based on their facial feature information. Images or video streams containing faces are collected with a video camera or still camera, and faces are detected and tracked in the images; the technology is also commonly called portrait recognition or facial recognition. Face recognition products are widely used in finance, justice, the military, public security, border inspection, government, aerospace, electric power, factories, education, medical care and many enterprises and institutions, for example in face recognition access control and attendance systems, face recognition anti-theft doors, and in computer login, e-government and e-commerce applications involving information security. As the technology matures and social acceptance grows, face recognition is being applied in ever more fields. Gender identification based on images can effectively assist face recognition, and the accuracy of gender identification directly affects the accuracy of the final face recognition.
In the prior art, the flow of facial-image gender identification is: first perform face detection, then extract features from the facial image, and finally identify the gender of the facial image with a classifier according to the extracted features. Owing to limitations of the imaging device, an image does not necessarily show a person's whole face clearly: many images are blurred, over-exposed or too dark, or the subject is looking down or showing a profile so that the whole face is not visible. Under such complicated imaging conditions, the simple facial-image features extracted by the prior art cannot accurately determine the person's gender, which increases the error rate of the recognition result.
Summary of the invention
The embodiments of the present invention provide a gender identification method and device based on a facial image, intended to solve the problem that changes in external imaging factors prevent accurate extraction of facial-image features and thus increase the error rate of recognition.
A gender identification method based on a facial image provided by an embodiment of the present invention includes: performing gender classification on a target facial image by means of a prior model in a preset deep convolutional neural network, to obtain a gender determination result corresponding to the target facial image, wherein the target facial image is an image composed of the key regions of a facial image; obtaining the cosine similarities between the features of the target facial image and the features of image samples by comparison, to obtain a cosine comparison result corresponding to the target facial image; and performing a Bayesian classifier operation on the gender determination result and the cosine comparison result, to obtain the gender of the subject in the facial image.
A gender identification device based on a facial image provided by an embodiment of the present invention includes: a gender determination module, configured to perform gender classification on a target facial image by means of a prior model in a preset deep convolutional neural network, to obtain a gender determination result corresponding to the target facial image, wherein the target facial image is an image composed of the key regions of a facial image; a comparison processing module, configured to obtain the cosine similarities between the features of the target facial image and the features of image samples by comparison, to obtain a cosine comparison result corresponding to the target facial image; and an identification module, configured to perform a Bayesian classifier operation on the gender determination result and the cosine comparison result, to obtain the gender of the subject in the facial image.
With the gender identification method and device based on a facial image provided by the embodiments of the present invention, gender classification is performed on a target facial image by means of a prior model in a preset deep convolutional neural network to obtain a corresponding gender determination result, the target facial image being an image composed of the key regions of a facial image; the cosine similarities between the features of the target facial image and the features of image samples are obtained by comparison to yield a corresponding cosine comparison result; and a Bayesian classifier operation is performed on the gender determination result and the cosine comparison result to obtain the gender of the subject in the facial image. Thus, under complicated imaging conditions, the key regions of the facial image are re-stitched, and deep convolutional neural network classification is combined with image feature comparison, so that the features, deep texture, edges and color characteristics of each key facial region can assist recognition, improving the accuracy of gender identification.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention.
Fig. 1 is a schematic flowchart of the implementation of the gender identification method based on a facial image provided by the first embodiment of the present invention;
Fig. 2 is a schematic flowchart of the implementation of the gender identification method based on a facial image provided by the second embodiment of the present invention;
Fig. 3 is a schematic diagram of face detection in an embodiment of the present invention;
Fig. 4 is a schematic diagram of a facial image in an embodiment of the present invention;
Fig. 5 is a schematic diagram of a target facial image obtained by reconstructing the key regions in an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of the gender identification device based on a facial image provided by the third embodiment of the present invention;
Fig. 7 is a schematic structural diagram of the gender identification device based on a facial image provided by the fourth embodiment of the present invention;
Fig. 8 is a schematic diagram of the hardware structure of an electronic device for the gender identification method based on a facial image provided by the fifth embodiment of the present invention.
Detailed description of the invention
To make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art on the basis of the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of the implementation of the gender identification method based on a facial image provided by the first embodiment of the present invention, which can be applied in terminals that recognize facial images, such as face attendance devices, face anti-theft systems and computers. The gender identification method based on a facial image shown in Fig. 1 mainly includes the following steps:
S101: perform gender classification on a target facial image by means of a prior model in a preset deep convolutional neural network, to obtain a gender determination result corresponding to the target facial image.
The target facial image is an image composed of the key regions of a facial image. The key regions may be the regions of the individual facial organs and the facial contour region; the key regions of the target facial image are not limited here. It should be noted that the relationship between the facial image and the target facial image is: the target facial image is an image composed of the key regions of the facial image, whereas the facial image represents the face as a whole.
A deep convolutional neural network (CNN, Convolutional Neural Network) is a deep neural network with a convolutional structure. It includes at least two nonlinear trainable convolutional layers, two nonlinear fixed convolutional layers and a fully connected layer, for a total of at least five hidden layers, and is mainly used in speech analysis and image recognition.
The prior model can be obtained by training the deep convolutional neural network on image samples. Training on image samples to obtain the prior model is the process of classifying the image samples according to preset classes; the prior model is a model that can be used to classify images.
S102: obtain the cosine similarities between the features of the target facial image and the features of image samples by comparison, to obtain a cosine comparison result corresponding to the target facial image.
In the field of image recognition, the "feature" of an image is a technical term; image feature extraction is a concept in computer vision and image processing.
The features of the image samples may be stored in a storage module built into the terminal, or on a cloud server. Cosine similarity uses the cosine of the angle between two vectors in a vector space as a measure of the difference between two individuals.
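As an illustration, the cosine similarity between two feature vectors can be sketched in plain Python. The feature vectors themselves would be produced by the feature extraction step; the function below only illustrates the metric, not the patent's feature extractor:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between feature vectors a and b:
    dot(a, b) / (|a| * |b|). Ranges from -1 to 1; larger means
    more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Identical directions give 1.0 and orthogonal vectors give 0.0, which is why a larger value indicates a closer match between the target facial image and an image sample.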
S103: perform a Bayesian classifier operation on the gender determination result and the cosine comparison result, to obtain the gender of the subject in the facial image.
It should be noted that the subject in the facial image is the person depicted in it. A Bayesian classifier is an algorithm that uses Bayes' formula to calculate the probability that an object belongs to each class: from the prior probability of an object, Bayes' formula gives the posterior probability of the object, i.e. the probability that the object belongs to a certain class, and the class with the maximum posterior probability is selected as the class of the object.
In the embodiment of the present invention, the gender determination result and the cosine comparison result serve as prior evidence for the subject in the facial image; Bayes' formula is used to calculate the posterior probabilities of the subject belonging to the male class and to the female class, and the gender class with the maximum posterior probability is selected as the gender class of the subject.
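The patent does not disclose the exact likelihood model used in the Bayesian classifier operation. The following is a minimal naive-Bayes-style sketch in which the CNN result and the cosine comparison result are treated as independent pieces of evidence; the class names and all probability values are hypothetical:

```python
def bayes_fuse(priors, likelihoods):
    """Naive-Bayes fusion: posterior(c) is proportional to
    prior(c) * product of P(evidence_i | c) over all evidence sources,
    then normalized so the posteriors sum to 1."""
    posts = {}
    for c, p in priors.items():
        v = p
        for lik in likelihoods:
            v *= lik[c]
        posts[c] = v
    total = sum(posts.values())
    return {c: v / total for c, v in posts.items()}

# Hypothetical example: uniform prior, CNN favours male (0.8),
# cosine comparison also favours male (0.7).
# bayes_fuse({"male": 0.5, "female": 0.5},
#            [{"male": 0.8, "female": 0.2},
#             {"male": 0.7, "female": 0.3}])
# gives "male" the maximum posterior (0.28 / 0.31, about 0.90).
```

The class with the maximum fused posterior is then reported as the subject's gender, matching the maximum-a-posteriori selection described above.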
In the embodiment of the present invention, gender classification is performed on a target facial image by means of a prior model in a preset deep convolutional neural network to obtain a corresponding gender determination result, the target facial image being an image composed of the key regions of a facial image; the cosine similarities between the features of the target facial image and the features of image samples are then obtained by comparison to yield a corresponding cosine comparison result; and finally a Bayesian classifier operation is performed on the gender determination result and the cosine comparison result to obtain the gender of the subject in the facial image. Thus, under complicated imaging conditions, the key regions of the facial image are re-stitched, and deep convolutional neural network classification is combined with image feature comparison, so that the features, deep texture, edges and color characteristics of each key facial region can assist recognition, improving the accuracy of gender identification.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of the implementation of the gender identification method based on a facial image provided by the second embodiment of the present invention, which can be applied in terminals that recognize facial images, such as face attendance devices, face anti-theft systems and computers, and mainly includes the following steps:
S201: extract the key regions of the facial image according to the position of each key point of the facial image within it.
S202: reconstruct an image of the face from the key regions of the facial image, to obtain the target facial image.
The target facial image is an image composed of the key regions of the facial image.
Optionally, before step S201 the method further includes: determining the facial image in an image to be identified by face detection and facial key point localization; and setting the facial image in the image to be identified as the detection region.
Face detection is performed on the input image with a Haar (wavelet) cascade classifier or with DLIB (a C++ library); facial key point localization is then performed on the detected image with the supervised descent method (SDM, Supervised Descent Method). The facial key points localized by the SDM algorithm include: eyebrows, eyes, nose, mouth and facial contour. Of course, face detection and facial key point localization can also be realized by other algorithms.
The Haar classifier incorporates the adaptive boosting (AdaBoost) algorithm; in the field of image recognition, a classifier here refers to an algorithm that classifies regions as face or non-face. DLIB is a C++ algorithm library applicable to face detection and facial key point localization.
Fig. 3 is a schematic diagram of face detection. As shown in Fig. 3, the hollow square box is the face detection box; circles represent faces, triangles represent animals and polygons represent trees. The facial image can be extracted from the picture through face detection and placed in the face detection box.
Optionally, the detection region is expanded by a preset factor so that the facial image includes the hair region. Expanding the detection region by a preset factor brings the hair region into the facial image. The expansion may enlarge the whole detection region, expand each of the top, bottom, left and right of the detection region by the preset factor, or expand only the top and bottom of the detection region. The preset factor is related to hair length and head height; in this embodiment the preferred value of the preset factor is 0.15, i.e. the top and the bottom of the detection region are each expanded by 0.15 times. Fig. 4 is a schematic diagram of a facial image, in which the rectangular boxes in Fig. 4(a) and Fig. 4(b) are the detection regions: the detection region in Fig. 4(a) contains the facial image before expansion, and the detection region in Fig. 4(b) contains the facial image after expansion.
Since the color, length and style of the hair are important clues for identifying gender, a facial image that includes the hair region allows gender to be identified more accurately.
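Under the stated preference of expanding only the top and bottom of the detection box, each by 0.15 times its height, the expansion can be sketched as follows. The coordinate convention (top-left corner, width, height) is an assumption, and clamping to the image border is omitted:

```python
def expand_detection_region(x, y, w, h, factor=0.15):
    """Expand the top and bottom of a detection box (x, y, w, h) by
    `factor` times its height, so hair above (and below) the face is
    included. y is the top edge; smaller y is higher in the image."""
    dy = int(round(factor * h))
    return x, y - dy, w, h + 2 * dy

# e.g. a 50x100 box with its top-left corner at (10, 100):
# expand_detection_region(10, 100, 50, 100) returns (10, 85, 50, 130)
```

In practice the expanded box would also be intersected with the image bounds before cropping, which is left out here for brevity.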
Optionally, the eye region in the detection region is rectified so that the eyes in the eye region of the facial image lie on the same horizontal line. The way the eye region is rectified is not limited: it may be rectified by a geometric transformation, or by changing the angle between the two eyes; the final aim is that the eyes in the eye region lie on the same horizontal line.
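One common way to realize such a rectification, sketched here under the assumption that the two eye centres have already been localized as (x, y) points, is to compute the angle of the line through the eyes and then rotate the region by its negative (e.g. with an affine warp); the patent itself does not fix the transformation:

```python
import math

def eye_alignment_angle(left_eye, right_eye):
    """Rotation angle in degrees of the line joining the two eye
    centres relative to the horizontal. Rotating the image by the
    negative of this angle brings the eyes onto the same horizontal
    line."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))
```

For example, eyes at (0, 0) and (10, 10) give a 45-degree tilt, so the face would be rotated by -45 degrees about a point between the eyes.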
The facial key points include: eyes, nose, mouth, eyebrows, hair and facial contour. The key regions are extracted from the facial image according to the position of each key point within it. The key regions of the facial image include: the hair region, eyebrow region, eye region, nose region and mouth region. The target facial image obtained by reconstructing the key regions can more accurately describe local facial regions, such as hair style, skin and facial contour, that differ between the genders.
Fig. 5 is a schematic diagram of the target facial image obtained by reconstructing the key regions. For ease of explanation and display, the key regions in the schematic diagram of Fig. 5 include only the eye region, eyebrow region, nose region and mouth region; the other key regions are not shown. The schematic diagram of Fig. 5 is only an example and does not limit the key regions of the present invention.
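A minimal sketch of extracting key regions and reconstructing a target facial image, assuming images are stored as lists of pixel rows and that equally wide regions are simply stacked vertically; the patent does not fix the exact stitching layout, so this arrangement is only illustrative:

```python
def crop(img, top, left, height, width):
    """Crop a rectangular key region out of an image stored as a
    list of rows."""
    return [row[left:left + width] for row in img[top:top + height]]

def stack_regions(regions):
    """Reconstruct a target facial image by stacking equally wide key
    regions (e.g. hair, eyebrows, eyes, nose, mouth) vertically."""
    out = []
    for region in regions:
        out.extend(region)
    return out
```

In a real pipeline each region would first be resized to a common width before stacking; that resampling step is omitted here.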
S203: perform gender classification on the target facial image by means of the prior model in the preset deep convolutional neural network, to obtain a gender determination result corresponding to the target facial image.
A deep convolutional neural network is a deep neural network with a convolutional structure, including at least two nonlinear trainable convolutional layers, two nonlinear fixed convolutional layers and a fully connected layer, for a total of at least five hidden layers, and is mainly used in speech analysis and image recognition. For the explanation of the convolutional neural network and the prior model, refer to the related description in the first embodiment of the present invention, which is not repeated here.
Optionally, performing gender classification on the target facial image by means of the prior model in the preset deep convolutional neural network to obtain the corresponding gender determination result is specifically:
classifying the target facial image by means of the prior model in the deep convolutional neural network, to obtain the maximum probability value corresponding to the target facial image and the first target gender class corresponding to that maximum probability value;
taking the maximum probability value and the first target gender class as the gender determination result.
The gender classes include the female class and the male class. Multiple key-region image samples are trained in the deep convolutional neural network, and the prior model is obtained after training: training the image samples to obtain the prior model is the process of classifying them according to the preset gender classes, and the prior model is a model that can be used to classify images. The first target gender class is the male class or the female class.
S204: obtain the cosine similarities between the features of the target facial image and the features of image samples by comparison, to obtain a cosine comparison result corresponding to the target facial image.
Each image sample is composed of the key regions of a whole-face image sample, where the key regions of the whole-face image sample include: the hair region, eyebrow region, eye region, nose region and mouth region. The key regions of the whole-face image sample correspond to the key regions included in the facial image: if the whole-face image sample includes the eyebrow region, the facial image includes the eyebrow region; if the whole-face image sample includes the eye region, the facial image includes the eye region. The composition of the image sample obtained after reconstruction is similar to the image shown in Fig. 5. Key regions are extracted from the whole-face image sample in the same way as from the facial image, and the image samples are reconstructed in the same way as the target facial image, so neither is repeated here.
The cosine similarity uses the cosine of the angle between two vectors in a vector space as a measure of the difference between two individuals.
Optionally, obtaining the cosine similarities between the features of the target facial image and the features of the image samples by comparison, to obtain the cosine comparison result corresponding to the target facial image, is specifically:
performing cosine similarity comparison between the features of the target facial image and the features of the image samples, to obtain multiple cosine similarity values corresponding to the target facial image;
selecting a preset number of target facial image samples in descending order of cosine similarity value;
counting the gender classes of the target facial image samples, and taking the gender class with the largest count as the second target gender class;
selecting, from the target facial image samples, the similar facial image samples whose gender class is the second target gender class, and extracting the cosine similarity values between the similar facial image samples and the target facial image as target cosine similarity values;
calculating the mean of the target cosine similarity values;
taking the second target gender class and the mean as the cosine comparison result.
The larger the cosine similarity value, the higher the similarity. First, the image samples are sorted from high to low according to the calculated cosine similarity values; then the preset number of target facial image samples are selected in order of descending similarity. The preset number may be chosen arbitrarily; of course, the more samples are chosen, the higher the accuracy of the final determination. The preferred preset number in this embodiment is 20.
The gender classes of the target facial image samples are counted, and the gender class with the most samples is taken as the target gender class: if the target facial image samples contain more male-class samples, the target gender class is male; if they contain more female-class samples, the target gender class is female. If the target gender class is male, the male image samples among the target facial image samples are chosen as the similar facial image samples; if the target gender class is female, the female image samples are chosen as the similar facial image samples.
A concrete example of how the gender of the subject is identified by cosine similarity comparison between the features of the target facial image and the features of the image samples is described below.
For convenience of description, suppose the features of 5 image samples are: sample 1, sample 2, sample 3, sample 4 and sample 5; the target facial image is image 1; and the preset number is 3. Suppose the gender class of sample 1 is male, of sample 2 female, of sample 3 male, of sample 4 female, and of sample 5 male.
By the feature of image 1, feature with sample 1-5 carries out cosine similarity comparison respectively;
The cosine similarity value calculated between image 1 and sample 1 is numerical value 1, calculates the cosine between image 1 and sample 2 Similarity value is numerical value 2, and the cosine similarity value calculated between image 1 and sample 3 is numerical value 3, calculate image 1 and sample 4 it Between cosine similarity value be numerical value 4, calculating the cosine similarity value between image 1 and sample 5 is numerical value 5, wherein numerical value 5 > Numerical value 3 > numerical value 4 > numerical value 1 > numerical value 2;
According to the above-mentioned cosine similarity value calculated order from high to low, choose the target facial image sample of preset number This is sample 5, sample 3 and sample 4;
The sex classification of this target face image pattern is carried out quantity statistics, and the number of samples of male's classification is 2, female The number of samples of property classification is 1, then male's classification is the second target gender classification;
The target face image pattern determining male's classification is sample 5 and sample 3, and calculates the average of numerical value 5 and numerical value 3 Value 1;
Using meansigma methods 1 and the second target gender classification as this cosine comparison result.
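The selection-and-voting procedure described above can be sketched in Python as follows. The function and parameter names, and the feature-vector representation, are illustrative assumptions rather than taken from the patent:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def cosine_comparison(target_feat, sample_feats, sample_genders, preset_number=3):
    """Return (majority gender, mean similarity of the majority samples)."""
    sims = [cosine_similarity(target_feat, f) for f in sample_feats]
    # Select the preset number of most similar samples, in descending order.
    top = sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)[:preset_number]
    males = [i for i in top if sample_genders[i] == "male"]
    females = [i for i in top if sample_genders[i] == "female"]
    majority = males if len(males) >= len(females) else females
    gender = "male" if majority is males else "female"
    # Mean of the target cosine similarity values of the majority category.
    mean_sim = sum(sims[i] for i in majority) / len(majority)
    return gender, mean_sim
```

With five samples labeled as in the example (male, female, male, female, male) and a preset number of 3, the two most similar male samples dominate the vote and their mean similarity forms the cosine comparison result.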
S205: Bayes classifier computation is performed on the gender discrimination result and the cosine comparison result, to obtain the gender of the subject in the face image.
A Bayes classifier is an algorithm that uses the Bayes formula to compute the probability that an object belongs to each class. The principle of the Bayes classifier computation is to take the prior probability of an object, use the Bayes formula to compute the posterior probability of the object, that is, the probability that the object belongs to a certain class, and select the class with the maximum posterior probability as the class to which the object belongs.
In the embodiment of the present invention, the gender discrimination result and the cosine comparison result are taken as the prior probabilities of the subject in the face image; the posterior probabilities that the subject belongs to the male category and the female category are computed by the Bayes formula, and the gender category with the maximum posterior probability is chosen as the gender category of the subject.
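The patent does not give the exact fusion formula, so the following is only one plausible naive-Bayes-style reading: each result is treated as a confidence for the gender it voted for, the two cues are assumed conditionally independent, and the products are normalized into posteriors. All names and the probability mapping are hypothetical:

```python
def bayes_fuse(cnn_score, cnn_gender, cos_score, cos_gender, prior_male=0.5):
    """Naive-Bayes-style fusion of the CNN discrimination result and the
    cosine comparison result. Each score is read as a confidence for the
    gender that cue voted for (an assumption, not the patent's formula)."""
    p_male_cnn = cnn_score if cnn_gender == "male" else 1.0 - cnn_score
    p_male_cos = cos_score if cos_gender == "male" else 1.0 - cos_score
    # Treat the two cues as conditionally independent given the gender.
    male = prior_male * p_male_cnn * p_male_cos
    female = (1.0 - prior_male) * (1.0 - p_male_cnn) * (1.0 - p_male_cos)
    total = male + female
    posterior_male = male / total if total > 0 else 0.5
    if posterior_male >= 0.5:
        return "male", posterior_male
    return "female", 1.0 - posterior_male
```

When both cues agree with high confidence, the fused posterior is sharper than either cue alone, which matches the stated goal of the combination.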
In the embodiment of the present invention, the key position regions in the face image are extracted according to the position of each key point of the face image, the image of the face is reconstructed from those key position regions to obtain the target face image, gender discriminant classification is performed on the target face image by the prior model in the preset deep convolutional neural network to obtain the gender discrimination result corresponding to the target face image, the cosine similarity between the feature of the target face image and the features of the image samples is compared to obtain the cosine comparison result corresponding to the target face image, and finally Bayes classifier computation is performed on the gender discrimination result and the cosine comparison result to obtain the gender of the subject in the face image. Thus, under complex imaging conditions, by re-splicing the key positions of the face image and combining deep convolutional neural network discrimination with image feature comparison, the features, deep textures, edges and color features of each key position region of the face can assist recognition, which increases the accuracy of gender identification.
Referring to Fig. 6, Fig. 6 is a structural schematic diagram of the face-image-based gender identification device provided by the third embodiment of the present invention; for ease of description, only the parts relevant to the embodiment of the present invention are shown. The gender identification device of Fig. 6 may be the execution body of the face-image-based gender identification method provided by the embodiment of Fig. 1, and may be a terminal or a control module in a terminal. The device of Fig. 6 mainly includes: a gender discrimination module 601, a comparison processing module 602 and an identification module 603. Each functional module is described in detail as follows:
The gender discrimination module 601 is configured to perform gender discriminant classification on the target face image by the prior model in the preset deep convolutional neural network, to obtain the gender discrimination result corresponding to the target face image.
The comparison processing module 602 is configured to compare the cosine similarity between the feature of the target face image and the features of the image samples, to obtain the cosine comparison result corresponding to the target face image.
The identification module 603 is configured to perform Bayes classifier computation on the gender discrimination result and the cosine comparison result, to obtain the gender of the subject in the face image.
The target face image is an image composed of the key position regions of the face image. The key position regions may be the regions of the facial organs and the face contour region; the key position regions in the target face image are not limited here. It should be noted that the relationship between the face image and the target face image is that the target face image is composed of the key position regions of the face image, while the face image is an image representing the face as a whole.
A deep convolutional neural network is a deep neural network with a convolutional structure; it includes at least two nonlinear trainable convolutional layers, two nonlinear fixed convolutional layers and a fully connected layer, for a total of at least five hidden layers, and is mainly used in the fields of speech analysis and image recognition.
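As an illustration of the stated structure (two trainable convolutional layers, two fixed layers, one fully connected layer), a minimal NumPy forward pass is sketched below. Reading the "fixed convolutional layers" as max pooling, and all kernel sizes and dimensions, are assumptions not taken from the patent:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution (cross-correlation) of a single-channel image."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, k=2):
    """Fixed (parameter-free) 2x2 max-pooling layer."""
    h, w = x.shape[0] // k, x.shape[1] // k
    return x[:h * k, :w * k].reshape(h, k, w, k).max(axis=(1, 3))

def forward(img, k1, k2, fc_w):
    """Two trainable conv layers, two fixed pooling layers, one FC layer."""
    x = np.maximum(conv2d(img, k1), 0)   # trainable conv 1 + ReLU
    x = max_pool(x)                      # fixed layer 1
    x = np.maximum(conv2d(x, k2), 0)     # trainable conv 2 + ReLU
    x = max_pool(x)                      # fixed layer 2
    return x.ravel() @ fc_w              # fully connected: two gender scores
```

A 16x16 input passes through 14x14, 7x7, 5x5 and 2x2 feature maps before the fully connected layer; in a real deployment the kernels `k1`, `k2` and weights `fc_w` would of course be learned, not hand-set.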
The prior model can be obtained by training the deep convolutional neural network on image samples; training on the image samples to obtain the prior model is a process of classifying the image samples according to preset categories, and the prior model is a model that can be used to perform discriminant classification on images.
In the field of image recognition, the "feature" of an image is a term of art, and extraction of image features is a concept in computer vision and image processing.
The features of the image samples may be stored in a storage module built into the terminal, or stored on a cloud server. Cosine similarity uses the cosine of the angle between two vectors in a vector space as a measure of the difference between two individuals. It should be noted that the subject in the face image is the person depicted in the face image.
A Bayes classifier is an algorithm that uses the Bayes formula to compute the probability that an object belongs to each class: taking the prior probability of an object, the Bayes formula is used to compute the posterior probability of the object, that is, the probability that the object belongs to a certain class, and the class with the maximum posterior probability is selected as the class to which the object belongs. In the embodiment of the present invention, the gender discrimination result and the cosine comparison result are taken as the prior probabilities of the subject in the face image; the posterior probabilities that the subject belongs to the male category and the female category are computed by the Bayes formula, and the gender category with the maximum posterior probability is chosen as the gender category of the subject.
It should be noted that in the embodiment of the face-image-based gender identification device illustrated in Fig. 6 above, the division of the functional modules is merely illustrative; in practical applications, the above functions may be distributed among different functional modules as required, for example to suit the configuration of the corresponding hardware or the convenience of software implementation. Moreover, in practical applications, a corresponding functional module in this embodiment may be implemented by corresponding hardware, or by corresponding hardware executing corresponding software. The same principle applies to every embodiment provided in this specification and is not repeated below.
In the embodiment of the present invention, the gender discrimination module 601 performs gender discriminant classification on the target face image by the prior model in the preset deep convolutional neural network, to obtain the gender discrimination result corresponding to the target face image, where the target face image is an image composed of the key position regions of the face image; the comparison processing module 602 compares the cosine similarity between the feature of the target face image and the features of the image samples, to obtain the cosine comparison result corresponding to the target face image; and the identification module 603 performs Bayes classifier computation on the gender discrimination result and the cosine comparison result, to obtain the gender of the subject in the face image. Thus, under complex imaging conditions, by re-splicing the key positions of the face image and combining deep convolutional neural network discrimination with image feature comparison, the features, deep textures, edges and color features of each key position region of the face can assist recognition, which increases the accuracy of gender identification.
Referring to Fig. 7, a structural schematic diagram of the face-image-based gender identification device provided by the fourth embodiment of the present invention; for ease of description, only the parts relevant to the embodiment of the present invention are shown. The gender identification device of Fig. 7 may be the execution body of the face-image-based gender identification method provided by the embodiment of Fig. 2, and may be a terminal or a control module in a terminal. The device of Fig. 7 mainly includes: an extraction module 701, a reconstruction module 702, a gender discrimination module 703, a comparison processing module 704 and an identification module 705, where the gender discrimination module 703 includes a discrimination module 7031 and a setting module 7032, and the comparison processing module 704 includes a comparing module 7041, a selection module 7042, a statistics module 7043, a calculation module 7044 and a setting module 7045. Each functional module is described in detail as follows:
The extraction module 701 is configured to extract the key position regions in the face image according to the position of each key point of the face image in the face image.
The reconstruction module 702 is configured to reconstruct the image of the face from the key position regions in the face image, to obtain the target face image.
The target face image is an image composed of the key position regions of the face image.
Optionally, the device also includes a determining module, configured to determine the face image in the image to be identified by face detection and face key point location; the determining module is further configured to set the face image in the image to be identified as the detection region.
Face detection is performed on the input image by a HAAR classifier or the DLIB algorithm, and face key point location is then performed on the detected image by the SDM algorithm, where the face key points located by the SDM algorithm include: eyebrows, eyes, nose, mouth and face contour. Face detection and face key point location may of course also be realized by other algorithms.
The HAAR classifier incorporates the Adaboost algorithm; in the field of image recognition, a classifier refers to an algorithm that classifies faces and non-faces. DLIB is a C++ algorithm library that can be applied to face detection and face key point location.
Fig. 3 is a schematic diagram of face detection. As shown in Fig. 3, the hollow square box is the face detection box, the circle represents a face, the triangle represents an animal, and the polygon represents trees; the face image can be extracted from the picture by face detection and placed in the face detection box.
Optionally, the device also includes an expansion module, configured to expand the detection region by a preset multiple so that the face image includes the image of the hair region. The expansion module expands the detection region by the preset multiple in order to extend the hair region into the face image. The detection region may be enlarged as a whole, or the top, bottom, left and right of the detection region may each be expanded by the preset multiple, or only the top and bottom of the detection region may be expanded. The preset multiple is related to the hair length and the top of the head; in this embodiment the preferred value of the preset multiple is 0.15, i.e. the top and bottom of the detection region are each expanded by 0.15 times. Fig. 4 is a schematic diagram of the face image, in which the interior of the rectangular box in Fig. 4(a) and Fig. 4(b) is the detection region; the detection region in Fig. 4(a) contains the face image before the detection region is expanded, and the detection region in Fig. 4(b) contains the face image after the detection region is expanded.
Since the color, length and style of the hair are important bases for identifying gender, including the hair region in the face image allows gender to be identified more accurately.
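The top-and-bottom expansion by the preferred multiple of 0.15 can be sketched as follows, with the result clamped to the image bounds; the (x, y, w, h) box representation and the function name are assumptions, since the patent does not fix a representation:

```python
def expand_detection_region(box, img_h, img_w, factor=0.15):
    """Expand a face detection box vertically by `factor` of its height on
    both top and bottom, clamped to the image bounds, so that the hair
    region falls inside the face image. Box format: (x, y, w, h)."""
    x, y, w, h = box
    pad = int(round(h * factor))
    top = max(0, y - pad)
    bottom = min(img_h, y + h + pad)
    return (x, top, w, bottom - top)
```

For a 100-pixel-tall box the padding is 15 pixels above and below; a box already near the image border is simply clipped rather than extended outside the image.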
Optionally, the device also includes a correction module, configured to correct the eye region in the detection region so that the two eyes in the eye region of the face image lie on the same horizontal line. The way the eye region is corrected is not limited: it may be corrected by a geometric transformation, or by changing the angle between the two eyes; the final purpose is to bring the two eyes in the eye region onto the same horizontal line.
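One such geometric transformation computes the in-plane rotation angle from the two eye centers; rotating the image by the negative of this angle brings the eyes onto one horizontal line. The (x, y) pixel-coordinate convention here is an assumption:

```python
import math

def eye_alignment_angle(left_eye, right_eye):
    """Angle in degrees by which the line through the two eye centers
    deviates from horizontal. Eye centers are (x, y) pixel coordinates."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))
```

In practice the image would then be rotated about the midpoint between the eyes by the negative of this angle, for example with an affine warp, so that the corrected eye region satisfies the same-horizontal-line condition.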
The face key points include: eyes, nose, mouth, eyebrows, hair and face contour. The extraction module 701 extracts the key position regions in the face image according to the position of each key point in the face image, where the key position regions in the face image include: the hair region, the eyebrow region, the eye region, the nose region and the mouth region. The target face image obtained by the reconstruction module 702 reconstructing the key position regions can more accurately describe the local face areas that differ between the genders, such as hair style, skin and face contour.
As shown in Fig. 5, Fig. 5 is a schematic diagram of the target face image obtained by reconstructing the key position regions. For ease of explanation and display, the key position regions in the schematic diagram of Fig. 5 only include the eye region, the eyebrow region, the nose region and the mouth region, and the other key position regions are not displayed; the schematic diagram of Fig. 5 is only an example and does not limit the key position regions in the present invention.
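The patent does not specify the splicing layout, so the sketch below simply stacks equal-width region crops vertically as one illustrative reconstruction; the box format, the stacking order and the equal-width requirement are all assumptions:

```python
import numpy as np

def reconstruct_target_face(img, regions):
    """Splice the key position regions into one target face image by
    stacking the cropped regions vertically. `regions` is a list of
    (x, y, w, h) boxes of equal width, e.g. hair, eyebrow, eye, nose
    and mouth regions located by the face key points."""
    crops = [img[y:y + h, x:x + w] for (x, y, w, h) in regions]
    return np.vstack(crops)
```

The resulting spliced image, rather than the whole face image, is what the deep network and the cosine comparison then operate on.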
The gender discrimination module 703 is configured to perform gender discriminant classification on the target face image by the prior model in the preset deep convolutional neural network, to obtain the gender discrimination result corresponding to the target face image.
Optionally, the gender discrimination module 703 includes: a discrimination module 7031 and a setting module 7032.
The discrimination module 7031 is configured to perform discriminant classification on the target face image by the prior model in the deep convolutional neural network, to obtain the maximum probability value corresponding to the target face image and the first target gender category corresponding to the maximum probability value.
The setting module 7032 is configured to take the maximum probability value and the first target gender category as the gender discrimination result.
The first target gender category is the male category or the female category. A deep convolutional neural network is a deep neural network with a convolutional structure; it includes at least two nonlinear trainable convolutional layers, two nonlinear fixed convolutional layers and a fully connected layer, for a total of at least five hidden layers, and is mainly used in the fields of speech analysis and image recognition. For the explanation of the convolutional neural network and the prior model, refer to the relevant description in the first embodiment of the present invention, which is not repeated here.
The comparison processing module 704 is configured to compare the cosine similarity between the feature of the target face image and the features of the image samples, to obtain the cosine comparison result corresponding to the target face image.
The image samples are image samples composed of the key position regions of whole face image samples, where the key position regions of a whole face image sample include: the hair region, the eyebrow region, the eye region, the nose region and the mouth region. The key position regions of the whole face image sample correspond to the key position regions of the face image; that is, if the whole face image sample includes the eyebrow region then the face image includes the eyebrow region, and if the whole face image sample includes the eye region then the face image includes the eye region. The composition of the image sample obtained after reconstruction is similar to the image shown in Fig. 5. The way the key position regions are extracted from the whole face image sample is the same as the way the key position regions are extracted from the face image, and is not repeated; likewise, the way the image samples are reconstructed is the same as the way the target face image is reconstructed, and is not repeated.
The cosine similarity uses the cosine of the angle between two vectors in a vector space as a measure of the difference between two individuals.
Optionally, the comparison processing module 704 includes: a comparing module 7041, a selection module 7042, a statistics module 7043, a calculation module 7044 and a setting module 7045.
The comparing module 7041 is configured to perform cosine similarity comparison between the feature of the target face image and the features of the image samples, to obtain multiple cosine similarity values corresponding to the target face image.
The selection module 7042 is configured to select a preset number of target face image samples in descending order of the cosine similarity values.
The statistics module 7043 is configured to perform a quantity statistic on the gender categories of the target face image samples, and to take the gender category with the largest quantity as the second target gender category.
The selection module 7042 is further configured to select, among the target face image samples, the similar face image samples whose gender category is the second target gender category, and to extract the cosine similarity values between the similar face image samples and the target face image as the target cosine similarity values.
The calculation module 7044 is configured to calculate the mean value of the target cosine similarity values.
The setting module 7045 is configured to take the second target gender category and the mean value as the cosine comparison result.
The larger the cosine similarity value, the higher the similarity. The preset number may be chosen arbitrarily; the more samples are chosen, the higher the accuracy of the final discrimination. The preferred preset number of this embodiment is 20.
The statistics module 7043 performs a quantity statistic on the gender categories of the target face image samples, and takes the gender category with the largest sample quantity as the target gender category: if male samples are in the majority among the target face image samples, the target gender category is male; if female samples are in the majority, the target gender category is female. If the target gender category is male, the selection module 7042 selects the male image samples among the target face image samples as the similar face image samples; if the target gender category is female, the selection module 7042 selects the female image samples among the target face image samples as the similar face image samples.
The identification module 705 is configured to perform Bayes classifier computation on the gender discrimination result and the cosine comparison result, to obtain the gender of the subject in the face image.
A Bayes classifier is an algorithm that uses the Bayes formula to compute the probability that an object belongs to each class: taking the prior probability of an object, the Bayes formula is used to compute the posterior probability of the object, that is, the probability that the object belongs to a certain class, and the class with the maximum posterior probability is selected as the class to which the object belongs.
In the embodiment of the present invention, the gender discrimination result and the cosine comparison result are taken as the prior probabilities of the subject in the face image; the identification module 705 computes, by the Bayes formula, the posterior probabilities that the subject belongs to the male category and the female category, and chooses the gender category with the maximum posterior probability as the gender category of the subject.
In the embodiment of the present invention, the extraction module 701 extracts the key position regions in the face image according to the position of each key point of the face image in the face image, where the key position regions in the face image include: the hair region, the eyebrow region, the eye region, the nose region and the mouth region; the reconstruction module 702 reconstructs the image of the face from the key position regions in the face image to obtain the target face image; the gender discrimination module 703 performs gender discriminant classification on the target face image by the prior model in the preset deep convolutional neural network to obtain the gender discrimination result corresponding to the target face image, where the target face image is an image composed of the key position regions of the face image; the comparison processing module 704 then compares the cosine similarity between the feature of the target face image and the features of the image samples to obtain the cosine comparison result corresponding to the target face image; and finally the identification module 705 performs Bayes classifier computation on the gender discrimination result and the cosine comparison result to obtain the gender of the subject in the face image. Thus, under complex imaging conditions, by re-splicing the key positions of the face image and combining deep convolutional neural network discrimination with image feature comparison, the features, deep textures, edges and color features of each key position region of the face can assist recognition, which increases the accuracy of gender identification.
Fig. 8 is a hardware structural schematic diagram of the electronic device for the face-image-based gender identification method provided by the fifth embodiment of the present invention. As shown in Fig. 8, the electronic device includes:
one or more processors 810 and a memory 820, with one processor 810 taken as an example in Fig. 8.
The electronic device may also include: an input device 830 and an output device 840.
The processor 810, memory 820, input device 830 and output device 840 may be connected by a bus or in other ways; connection by a bus 850 is taken as an example in Fig. 8.
The memory 820, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the method in the embodiments of the present application (for example, the gender discrimination module 601, comparison processing module 602 and identification module 603 shown in Fig. 6). The processor 810 runs the non-volatile software programs, instructions and modules stored in the memory 820, thereby executing the various functional applications and data processing of the server, i.e. realizing the face-image-based gender identification method of the above method embodiments.
The memory 820 may include a program storage area and a data storage area, where the program storage area may store the operating system and the application programs required for at least one function, and the data storage area may store data created according to the use of the face-image-based gender identification device, etc. In addition, the memory 820 may include high-speed random access memory, and may also include non-volatile memory, for example at least one magnetic disk memory device, flash memory device or other non-volatile solid-state memory component. In some embodiments, the memory 820 optionally includes memory located remotely from the processor 810; such remote memories may be connected to the face-image-based gender identification device through a network. Examples of the above network include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
The input device 830 can receive input numeric or character information, and produce key signal inputs related to the user settings and function control of the face-image-based gender identification device. The output device 840 may include a display device such as a display screen.
The one or more modules are stored in the memory 820; when executed by the one or more processors 810, they perform the face-image-based gender identification method of any of the above method embodiments.
The above product can execute the method provided by the embodiments of the present application, and possesses the corresponding functional modules and beneficial effects of executing the method. For technical details not described in detail here, refer to the method provided by the embodiments of the present application.
The electronic device of the embodiments of the present invention exists in various forms, including but not limited to:
(1) Mobile communication devices: characterized by mobile communication functions, with voice and data communication as the main goal. This type of terminal includes: smart phones, multimedia phones, feature phones, low-end phones, etc.
(2) Ultra-mobile personal computer devices: belonging to the category of personal computers, with computing and processing functions, and generally also with mobile Internet access. This type of terminal includes: PDAs (Personal Digital Assistants), mobile Internet devices (MID, Mobile Internet Device) and ultra-mobile personal computers (UMPC, Ultra-mobile Personal Computer), etc.
(3) Portable entertainment devices: capable of displaying and playing multimedia content. This type of device includes: audio and video players, handheld game devices, e-books, intelligent toys and portable vehicle-mounted navigation devices.
(4) Servers: devices providing computing services. The composition of a server includes a processor, hard disk, memory, system bus, etc.; a server is similar to a general-purpose computer architecture, but because highly reliable services need to be provided, the requirements on processing capability, stability, reliability, security, scalability, manageability, etc. are higher.
(5) Other electronic devices with a data interaction function.
In the several embodiments provided by this application, it should be understood that the disclosed systems, terminals and methods may be realized in other ways. For example, the terminal embodiments described above are only illustrative; the division of the modules is only a division by logical function, and there may be other ways of dividing in actual implementation: multiple modules or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the couplings, direct couplings or communication links shown or discussed between the parts may be indirect couplings or communication links of modules through some interfaces, and may be electrical, mechanical or in other forms.
The modules described as separate components may or may not be physically separate, and the parts shown as modules may or may not be physical modules, i.e. they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to realize the purpose of the scheme of this embodiment.
In addition, the functional modules in each embodiment of the present invention may be integrated in one processing module, or each module may exist physically alone, or two or more modules may be integrated in one module. The above integrated module may be realized in the form of hardware, or in the form of a software functional module.
If the integrated module is realized in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical scheme of the present invention in essence, or the part contributing to the prior art, or all or part of the technical scheme, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes some instructions used to make a computer device (which may be a personal computer, a server, a network device, etc.) execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a portable hard drive, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), a magnetic disk or an optical disk.
It should be noted that, for the sake of simple description, the aforementioned method embodiments are all expressed as a series of action combinations, but those skilled in the art should know that the present invention is not limited by the described order of actions, because according to the present invention some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in this specification all belong to preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
In the above embodiments, the description of each embodiment has its own emphasis; for a part not described in detail in a certain embodiment, refer to the relevant descriptions of other embodiments.
The above is a description of the face-image-based gender identification method and device provided by the present invention. For those skilled in the art, according to the idea of the embodiments of the present invention, changes may be made in the specific implementation and application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A face-image-based gender identification method, characterized by including:
performing gender discriminant classification on a target face image by a prior model in a preset deep convolutional neural network, to obtain a gender discrimination result corresponding to the target face image, the target face image being an image composed of the key position regions of a face image;
comparing the cosine similarity between the feature of the target face image and the features of image samples, to obtain a cosine comparison result corresponding to the target face image;
performing Bayes classifier computation on the gender discrimination result and the cosine comparison result, to obtain the gender of a subject in the face image.
2. The method according to claim 1, characterized in that the key position regions in the face image include: a hair region, an eyebrow region, an eye region, a nose region and a mouth region.
3. The method according to claim 2, characterized in that performing gender discriminant classification on the target face image by means of the prior model in the preset deep convolutional neural network, to obtain the gender discrimination result corresponding to the target face image, specifically comprises:
performing discriminant classification on the target face image by means of the prior model in the deep convolutional neural network, to obtain a maximum probability value corresponding to the target face image and a first target gender class corresponding to the maximum probability value;
taking the maximum probability value and the first target gender class as the gender discrimination result.
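The step of claim 3 amounts to taking the largest class probability produced by the network and the gender class it indexes. A hedged sketch, with class names assumed for illustration:

```python
# Sketch of claim 3: pick the maximum probability from the CNN's class
# probabilities and the gender class it corresponds to. The class names
# and their ordering are assumptions, not part of the patent text.
def max_prob_class(softmax_probs, class_names=("female", "male")):
    """Return (maximum probability value, first target gender class)."""
    idx = max(range(len(softmax_probs)), key=lambda i: softmax_probs[i])
    return softmax_probs[idx], class_names[idx]
```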
4. The method according to claim 2, characterized in that the image samples are image samples composed of the regions of key parts in overall face image samples, wherein the regions of key parts in the overall face image samples comprise: a hair region, an eyebrow region, an eye region, a nose region, and a mouth region.
5. The method according to claim 4, characterized in that comparing the cosine similarity between the features of the target face image and the features of the image samples, to obtain the cosine comparison result corresponding to the target face image, specifically comprises:
obtaining a plurality of cosine similarity values by performing cosine similarity comparison between the features of the target face image and the features of the image samples;
selecting a preset number of target face image samples in descending order of the cosine similarity values;
performing quantity statistics on the gender classes of the target face image samples, and taking the gender class with the largest quantity as a second target gender class;
selecting, from the target face image samples, the similar face image samples whose gender class is the second target gender class, and extracting the cosine similarity values between the similar face image samples and the target face image as target cosine similarity values;
calculating the average value of the target cosine similarity values;
taking the second target gender class and the average value as the cosine comparison result.
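The selection-and-voting procedure of claims 4 and 5 can be sketched as a top-k cosine-similarity vote. This is an illustrative reconstruction, not the patented implementation; the feature representation, label strings, and function names are all assumptions:

```python
import numpy as np

def cosine_vote(target_feat, sample_feats, sample_labels, k):
    """Claim 5 sketch: majority gender among the k most similar samples,
    plus the average similarity of the samples in that majority class."""
    # Normalize so the dot product equals cosine similarity
    t = target_feat / np.linalg.norm(target_feat)
    s = sample_feats / np.linalg.norm(sample_feats, axis=1, keepdims=True)
    sims = s @ t                              # similarity to every sample
    top = np.argsort(sims)[::-1][:k]          # preset number of best matches
    labels = [sample_labels[i] for i in top]
    majority = max(set(labels), key=labels.count)   # second target gender class
    target_sims = [sims[i] for i in top if sample_labels[i] == majority]
    return majority, float(np.mean(target_sims))    # cosine comparison result
```

The returned pair corresponds to the "second target gender class" and the "average value" that claim 5 feeds into the Bayes fusion step.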
6. A gender identification device based on a face image, characterized by comprising:
a gender discrimination module, configured to perform gender discriminant classification on a target face image by means of a prior model in a preset deep convolutional neural network, to obtain a gender discrimination result corresponding to the target face image, wherein the target face image is an image composed of the regions of key parts in the face image;
a comparison processing module, configured to compare the cosine similarity between features of the target face image and features of image samples, to obtain a cosine comparison result corresponding to the target face image;
an identification module, configured to perform a Bayes classifier operation on the gender discrimination result and the cosine comparison result, to obtain the gender of the object in the face image.
7. The device according to claim 6, characterized in that the regions of key parts in the face image comprise: a hair region, an eyebrow region, an eye region, a nose region, and a mouth region.
8. The device according to claim 7, characterized in that the gender discrimination module comprises:
a discrimination module, configured to perform discriminant classification on the target face image by means of the prior model in the deep convolutional neural network, to obtain a maximum probability value corresponding to the target face image and a first target gender class corresponding to the maximum probability value;
a setting module, configured to take the maximum probability value and the first target gender class as the gender discrimination result.
9. The device according to claim 7, characterized in that the image samples are image samples composed of the regions of key parts in overall face image samples, wherein the regions of key parts in the overall face image samples comprise: a hair region, an eyebrow region, an eye region, a nose region, and a mouth region.
10. The device according to claim 8, characterized in that the comparison processing module comprises:
a comparing module, configured to obtain a plurality of cosine similarity values by performing cosine similarity comparison between the features of the target face image and the features of the image samples;
a selecting module, configured to select a preset number of target face image samples in descending order of the cosine similarity values;
a statistics module, configured to perform quantity statistics on the gender classes of the target face image samples, and to take the gender class with the largest quantity as a second target gender class;
the selecting module, further configured to select, from the target face image samples, the similar face image samples whose gender class is the second target gender class, and to extract the cosine similarity values between the similar face image samples and the target face image as target cosine similarity values;
a computing module, configured to calculate the average value of the target cosine similarity values;
a setting module, configured to take the second target gender class and the average value as the cosine comparison result.
CN201610698494.4A 2016-08-19 2016-08-19 Gender identification method and gender identification device based on face image Pending CN106326857A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610698494.4A CN106326857A (en) 2016-08-19 2016-08-19 Gender identification method and gender identification device based on face image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610698494.4A CN106326857A (en) 2016-08-19 2016-08-19 Gender identification method and gender identification device based on face image

Publications (1)

Publication Number Publication Date
CN106326857A true CN106326857A (en) 2017-01-11

Family

ID=57741455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610698494.4A Pending CN106326857A (en) 2016-08-19 2016-08-19 Gender identification method and gender identification device based on face image

Country Status (1)

Country Link
CN (1) CN106326857A (en)


Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107578029A (en) * 2017-09-21 2018-01-12 北京邮电大学 Method, apparatus, electronic equipment and the storage medium of area of computer aided picture certification
CN107578029B (en) * 2017-09-21 2020-03-27 北京邮电大学 Computer-aided picture authentication method and device, electronic equipment and storage medium
CN107886474A (en) * 2017-11-22 2018-04-06 北京达佳互联信息技术有限公司 Image processing method, device and server
CN108154169A (en) * 2017-12-11 2018-06-12 北京小米移动软件有限公司 Image processing method and device
CN108122001A (en) * 2017-12-13 2018-06-05 北京小米移动软件有限公司 Image-recognizing method and device
CN108122001B (en) * 2017-12-13 2022-03-11 北京小米移动软件有限公司 Image recognition method and device
CN107958245A (en) * 2018-01-12 2018-04-24 上海正鹏信息科技有限公司 A kind of gender classification method and device based on face characteristic
CN108664948A (en) * 2018-05-21 2018-10-16 北京京东尚科信息技术有限公司 Method and apparatus for generating information
CN110880011A (en) * 2018-09-05 2020-03-13 宏达国际电子股份有限公司 Image segmentation method, device and non-transitory computer readable medium thereof
CN109271957B (en) * 2018-09-30 2020-10-20 厦门市巨龙信息科技有限公司 Face gender identification method and device
CN109271957A (en) * 2018-09-30 2019-01-25 厦门市巨龙信息科技有限公司 Face gender identification method and device
CN110210293A (en) * 2018-10-30 2019-09-06 上海市服装研究所有限公司 A kind of gender identification method based on three-dimensional data and face-image
CN109584417A (en) * 2018-11-27 2019-04-05 电卫士智能电器(北京)有限公司 Door-access control method and device
CN109493490A (en) * 2018-11-27 2019-03-19 电卫士智能电器(北京)有限公司 Electricity consumption user right judgment method and device
CN111353943A (en) * 2018-12-20 2020-06-30 杭州海康威视数字技术股份有限公司 Face image recovery method and device and readable storage medium
CN111353943B (en) * 2018-12-20 2023-12-26 杭州海康威视数字技术股份有限公司 Face image recovery method and device and readable storage medium
CN110751215A (en) * 2019-10-21 2020-02-04 腾讯科技(深圳)有限公司 Image identification method, device, equipment, system and medium
CN110826525A (en) * 2019-11-18 2020-02-21 天津高创安邦技术有限公司 Face recognition method and system
CN110826525B (en) * 2019-11-18 2023-05-26 天津高创安邦技术有限公司 Face recognition method and system
CN111951267A (en) * 2020-09-08 2020-11-17 南方科技大学 Gender judgment method, device, server and storage medium based on neural network
CN112882042A (en) * 2021-01-14 2021-06-01 天津市水产研究所 Marine ranching seabed telemetering and identifying method based on acoustic data
CN112882042B (en) * 2021-01-14 2022-06-21 天津市水产研究所 Marine ranching seabed telemetering and identifying method based on acoustic data

Similar Documents

Publication Publication Date Title
CN106326857A (en) Gender identification method and gender identification device based on face image
CN106295591A (en) Gender identification method based on facial image and device
CN106469298A (en) Age recognition methodss based on facial image and device
US10867405B2 (en) Object learning and recognition method and system
US9965705B2 (en) Systems and methods for attention-based configurable convolutional neural networks (ABC-CNN) for visual question answering
EP3885965B1 (en) Image recognition method based on micro facial expressions, apparatus and related device
CN106068514B (en) System and method for identifying face in free media
CN109815826B (en) Method and device for generating face attribute model
Yan et al. Ranking with uncertain labels
Li et al. Universal sketch perceptual grouping
CN104616316B (en) Personage's Activity recognition method based on threshold matrix and Fusion Features vision word
CN108388876A (en) A kind of image-recognizing method, device and relevant device
CN106407911A (en) Image-based eyeglass recognition method and device
US20140143183A1 (en) Hierarchical model for human activity recognition
CN108229268A (en) Expression Recognition and convolutional neural networks model training method, device and electronic equipment
CN108182409A (en) Biopsy method, device, equipment and storage medium
CN109145871B (en) Psychological behavior recognition method, device and storage medium
CN106909938B (en) Visual angle independence behavior identification method based on deep learning network
CN104517097A (en) Kinect-based moving human body posture recognition method
CN107609563A (en) Picture semantic describes method and device
CN104915658B (en) A kind of emotion component analyzing method and its system based on emotion Distributed learning
CN110705428B (en) Facial age recognition system and method based on impulse neural network
US20230095182A1 (en) Method and apparatus for extracting biological features, device, medium, and program product
CN110287848A (en) The generation method and device of video
CN113205017A (en) Cross-age face recognition method and device

Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170111
