CN103034840B - Gender identification method - Google Patents

Gender identification method

Info

Publication number
CN103034840B
CN103034840B (application CN201210515116.XA)
Authority
CN
China
Prior art keywords
sub-feature
sex
probability
man
identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210515116.XA
Other languages
Chinese (zh)
Other versions
CN103034840A (en)
Inventor
张传锋
许野平
方亮
曹杰
刘辰飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Synthesis Electronic Technology Co Ltd
Original Assignee
SHANDONG SYNTHESIS ELECTRONIC TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANDONG SYNTHESIS ELECTRONIC TECHNOLOGY Co Ltd
Priority to CN201210515116.XA
Publication of CN103034840A
Application granted
Publication of CN103034840B
Current legal status: Active

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a gender identification method. Based on samples of a common specification, classifiers are trained according to several selected sub-features, and each outputs a gender recognition result based on its sub-feature. A picture to be identified is input and normalized to said specification, the gender recognition result of each sub-feature of the normalized picture is obtained, and information fusion consists of summing the gender recognition results over the picture's sub-features and outputting the gender corresponding to the dominant sum. The present invention is a new gender identification method with a higher recognition rate.

Description

Gender identification method
Technical field
The present invention relates to a gender identification method and belongs to the technical field of image processing.
Background art
In recent years, with the deepening of face recognition research, gender identification based on facial images has become one of the hot research topics in the field of computer biometrics. Gender identification is the process of having a computer judge a person's gender from input image information; it has important prospects in artificial intelligence, system monitoring, pattern recognition, and related areas.
It should be appreciated that gender identification can serve as a "filter" in identification and verification: using the detected gender information significantly reduces the number of pictures searched during identification and improves the speed and precision of identity recognition.
Gender recognition was among the earliest identification problems proposed in the field of facial image recognition, and a large amount of research has been carried out on it at home and abroad. Commonly used gender recognition methods include:
1. Gender identification based on artificial neural networks. First proposed by B.A. Golomb and subsequently improved by S.C. Yen et al., it achieves a correct recognition rate of up to 88.7%. A neural network is trained on extracted facial features, and the trained network is then used to recognize a person's gender.
2. Gender identification based on support vector machines. The acquired facial image is reduced to a 21 × 21 image to train a support vector machine, or the support vector machine is trained on features extracted with AAM; the trained machine is then used to recognize a person's gender.
3. Gender identification based on AdaBoost, also referred to as the AdaBoost classification method. Haar features are extracted from the image, an AdaBoost classifier is trained, and the trained classifier is then used to recognize a person's gender.
All of the methods above use only partial information from the image; when that partial information is disturbed, the recognition rate inevitably suffers. These methods therefore share the shortcomings that the recognition rate is not high and is easily disturbed. An image, however, carries multiple lines of information, and when one part is disturbed, combining all of the information can eliminate the interference relatively effectively. Thus, where a single piece of information cannot meet the needs of identification, the inventors believe that fusing information should be considered in order to improve the gender recognition rate.
Summary of the invention
Therefore, the present invention starts from the angle of fusing multiple lines of information, adopts a new path to gender identification, and proposes a gender identification method with an acceptable recognition rate.
The present invention is by the following technical solutions:
A gender identification method: based on samples of a common specification, classifiers are trained according to several selected sub-features, and the gender recognition result based on each sub-feature is output;
a picture to be identified is input and normalized to said specification; the gender recognition result of each sub-feature of the normalized picture is obtained; information fusion consists of summing the gender recognition results of the picture's sub-features and outputting the gender corresponding to the dominant sum.
According to the above gender identification method of the present invention, the multiple pieces of identification information obtained undergo a second-stage fusion to give the final recognition result. Information fusion makes full use of every recognition result and forms complementary information, effectively eliminating interference, with good effect. Its application can greatly improve the robustness of gender recognition and gives good adaptability to interference such as illumination, thereby obtaining a better recognition rate.
In the above gender identification method, said specification is that the intercepted face region has a size of M × N and a fixed distance between the two eyes; the face is divided equally into rows and columns accordingly, and a grid is generated to obtain a matching number of grid points;
a face sub-feature is extracted at each grid point; using the information of each sub-feature together with the known male/female labels, a learning algorithm is applied for training and the training result is output.
In the above gender identification method, the method of extracting a face sub-feature is to first intercept a predetermined neighborhood of the corresponding grid point, forming an M1 × N1 subregion, and then obtain a vector of M1 × N1 columns.
In the above gender identification method, the value range of M1 and N1 is [10, 15].
In the above gender identification method, said gender recognition result is y ∈ {0, 1}, where 0 represents female and 1 represents male, or the output is y = {p, q}, where p is the probability of being female, q is the probability of being male, 0 ≤ p ≤ 1, 0 ≤ q ≤ 1;
correspondingly, when the output is y ∈ {0, 1}, the information fusion (classification) method compares the number of outputs classified as 0 with the number classified as 1;
when the output is y = {p, q}, the classification method adopts probability weighting:
P = {p1, p2, p3, p4, …, pK} and Q = {q1, q2, q3, q4, …, qK}, where K is the number of grid points, P is the set of probabilities of being judged female, and Q is the set of probabilities of being judged male;
the probability of being female is calculated as p = sum(P)/K and the probability of being male as q = sum(Q)/K, where sum denotes summation; if p > q, the person is judged female, otherwise male.
In the above gender identification method, said grid division method divides the width into m equal parts and the height into n equal parts, where m and n are natural numbers, m ∈ [4, 10], and n ∈ [3, 8].
In the above gender identification method, said specification includes intercepting a face region, said face region being the whole face or a selection of one or more of the eyes, eyebrows, nose, and mouth.
The present invention will be described in more detail below in conjunction with specific embodiments, from which the above and other objects and advantages of the present invention will become more apparent.
Detailed description of the invention
The gender identification method involved in the following shows information fusion over multiple pieces of picture information. Its principle is to perform a second-stage fusion of the multiple identification results obtained and to take the fused result as the final recognition result. When some identification result is affected by interference while the other identification results remain correct, the fusion in effect weeds out the unusable result; information fusion thus makes full use of every recognition result and forms complementary information, giving good effect. It can in turn greatly improve the robustness of gender identification, adapts well to interference such as illumination, and achieves a very high recognition rate.
According to the foregoing, a gender identification method is provided, comprising the steps:
1. Normalize the geometric size of the picture and intercept the face region, obtaining a face region of size M × N;
normalizing the samples and the pictures to be processed gives the relevant information a relatively uniform basis.
1.1. In certain embodiments, geometric size normalization can use a support vector machine: through learning from a large number of samples, the positions of the two pupils are located. In preferred embodiments, the distance between the two pupils is set to 64 pixels, the midpoint of the two pupils is set as the center of the picture, and the picture is scaled to a size of 240 × 320; in the preferred embodiments a better recognition rate can be obtained while the amount of computation stays relatively small, as sketched below;
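As an illustration of this alignment step, the following is a minimal sketch assuming OpenCV and NumPy; the function name, signature, and the idea that pupil coordinates come from an external detector (such as the SVM-based locator described above) are illustrative assumptions, not the patent's prescribed implementation:

```python
import cv2
import numpy as np

def normalize_geometry(image, left_pupil, right_pupil,
                       pupil_dist=64, out_size=(240, 320)):
    """Rotate and scale so the pupils lie pupil_dist pixels apart on a
    horizontal line, with their midpoint at the center of an
    out_size = (width, height) picture, as in the preferred embodiment."""
    (lx, ly), (rx, ry) = left_pupil, right_pupil
    scale = pupil_dist / np.hypot(rx - lx, ry - ly)   # bring pupils 64 px apart
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))  # eye line -> horizontal
    mid = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    M = cv2.getRotationMatrix2D(mid, angle, scale)
    M[0, 2] += out_size[0] / 2.0 - mid[0]             # move pupil midpoint
    M[1, 2] += out_size[1] / 2.0 - mid[1]             # to the output center
    return cv2.warpAffine(image, M, out_size)
```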
1.2. In certain embodiments, the intercepted face region can be the whole face, or key features of the face such as the eyes, eyebrows, nose, and mouth. When local features are selected, positions with greater gender discriminability are chosen; when the discriminability of a region is relatively small, several positions can be combined, to meet the needs of information fusion;
1.2.1. In some preferred embodiments, the intercepted face region can be the whole face, with a region width of 100-150 and a region height of 150-200; this method makes comparatively full use of all the information in the face;
1.2.2. In certain embodiments, the intercepted face region can be the eyebrow-eye region, and the size of the intercepted region can be 100 × 40; this method uses only a local area of the face, but because the eyebrows and eyes have good discriminability, it can still achieve a very good recognition effect;
2. Divide the height and width of the selected M × N region into equidistant grids, obtaining several grid points;
2.1. In certain embodiments, the width of the M × N region can be divided into m equal parts, where m is a natural number, generally 1/10 to 1/4 of M. A large number of experiments show that if m is too large, too much invalid information is easily included and the recognition rate declines, while if m is too small, key information is easily omitted and the recognition rate likewise declines; within this interval a better recognition rate can be obtained;
2.2. In other embodiments, the height of the M × N region can be divided into n equal parts, where n is a natural number, generally 1/8 to 1/3 of N; verification shows that an n that is too large or too small will also lower the recognition rate to varying degrees. A sketch of the grid-point generation follows.
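The grid-point generation of step 2 can be sketched as follows; the helper name and the rounding choice are illustrative assumptions, and the edges of the region are excluded so that (m − 1) × (n − 1) interior points remain, as in the embodiments below:

```python
def grid_points(M, N, m, n):
    """Interior grid points of a region M wide and N high whose width is
    divided into m equal parts and height into n equal parts; the edge
    lines are excluded, leaving (m - 1) * (n - 1) points."""
    xs = [M * i // m for i in range(1, m)]   # m - 1 interior vertical lines
    ys = [N * j // n for j in range(1, n)]   # n - 1 interior horizontal lines
    return [(x, y) for y in ys for x in xs]
```

For example, grid_points(120, 160, 6, 8) returns the 5 × 7 = 35 interior points of a 120 × 160 face region.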
3. Choose 2K normalized facial images of the given size, where K is a natural number, generally 100-300, which gives reasonably representative data: K male images and K female images. Extract a face sub-feature at each grid point and, using the information of each sub-feature together with the known male/female labels, train with a learning algorithm. The input is the grid-point information; the output is y ∈ {0, 1}, where 0 represents female and 1 represents male, or y = {p, q}, where p is the probability of being female, q is the probability of being male, 0 ≤ p ≤ 1, 0 ≤ q ≤ 1;
3.1. In certain embodiments, the sub-feature of each grid point can be extracted by directly intercepting the M1 × N1 region around the grid point, forming a neighborhood centered on the grid point and arranging it row by row into a vector of 1 row and M1 × N1 columns, where M1 and N1 are natural numbers; this method is simple, clear, and easy to operate;
3.2. In some further embodiments, the sub-feature of each grid point can be extracted by first intercepting the M1 × N1 region around the grid point, then normalizing the gray values in the M1 × N1 region, and finally arranging them row by row into a vector of 1 row and M1 × N1 columns; by mapping the face gray information to the same metric space, the normalization can improve the recognition rate to a certain extent;
3.3. Preferably, the sub-feature of each grid point is extracted by first intercepting the M1 × N1 region around the grid point and calculating the gray mean AveGray and gray variance VariGray of A(i, j), where 0 ≤ i < M1, 0 ≤ j < N1 and A(i, j) is the gray value at the corresponding position in the M1 × N1 region; the standardized gray value is As(i, j) = (A(i, j) − AveGray) / VariGray, and the standardized region is finally arranged row by row into a vector of 1 row and M1 × N1 columns. This method guarantees that the mean pixel value of the processed region is 0 and that the pixel information in the region approximately follows a standard normal distribution;
M1 and N1 in 3.1-3.3 can all take values between 10 and 15; values that are too large or too small both degrade the learning effect. Verification shows that the above range achieves the expected recognition rate, and that the peak can be reached within it;
it should be appreciated that the value ranges of M1 and N1 essentially follow the curve of a certain parameter effect, so extended values with poorer effect are simple transformations thereof and should fall within the scope of protection. A sketch of the sub-feature extraction of 3.3 follows.
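A minimal sketch of the preferred sub-feature extraction of 3.3, assuming NumPy, odd M1 and N1 (so the neighborhood is centered), and a small epsilon to guard flat regions. Note that the patent divides by the gray "variance" VariGray, while this sketch divides by the standard deviation, which is the choice that makes the result approximately standard normal:

```python
import numpy as np

def extract_subfeature(gray, cx, cy, M1=11, N1=11):
    """Intercept the M1 x N1 neighborhood centered on grid point (cx, cy)
    of a grayscale face image, standardize it as
    As = (A - AveGray) / VariGray, and arrange it row by row into a
    1 x (M1 * N1) vector."""
    half_w, half_h = M1 // 2, N1 // 2
    region = gray[cy - half_h: cy + half_h + 1,
                  cx - half_w: cx + half_w + 1].astype(np.float64)
    ave_gray = region.mean()
    vari_gray = region.std() + 1e-8   # std, not variance; avoids divide-by-zero
    return ((region - ave_gray) / vari_gray).reshape(1, -1)
```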
3.4. Per step 3, the machine learning method can be Bayesian learning; the recognition result can then be real numbers p and q, where p is the probability of being female and q the probability of being male, 0 ≤ p ≤ 1, 0 ≤ q ≤ 1; the recognition result can also be y ∈ {0, 1}, where 0 represents female and 1 represents male;
3.5. Per step 3, the machine learning method can be support vector machine learning; the recognition result can then be 0 or 1, where 0 represents female and 1 represents male;
3.6. Per step 3, the machine learning method can be neural network learning; the recognition result can then be 0 or 1, where 0 represents female and 1 represents male. A training sketch over the grid points follows.
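The per-grid-point training of step 3 could look like the following sketch, which assumes scikit-learn and reuses extract_subfeature from above; SVC merely stands in for any of the learners of 3.4-3.6 (probability=True yields the {p, q} style of output, predict() the {0, 1} style):

```python
import numpy as np
from sklearn.svm import SVC

def train_gridpoint_classifiers(images, labels, points, M1=11, N1=11):
    """Train one classifier per grid point on the 2K normalized training
    images (labels: 0 = female, 1 = male); each classifier sees only the
    sub-feature vector extracted at its own grid point."""
    classifiers = []
    for cx, cy in points:
        X = np.vstack([extract_subfeature(img, cx, cy, M1, N1)
                       for img in images])
        classifiers.append(SVC(probability=True).fit(X, labels))
    return classifiers
```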
4. For an image to be identified, preprocess the image by the above steps, take the information of the several grid points obtained as the input of the trained classifiers, obtain their recognition results separately, and finally obtain the final judgment by means of information fusion. The steps in more detail are as follows:
4.1. If the output of the classifiers is y ∈ {0, 1}, information fusion can adopt majority voting. For convenience of judgment, the number of grid points NUM can be chosen odd, generally an odd number between 15 and 35. Count the number NUMQ1 of classifiers outputting 0 and the number NUMQ2 outputting 1; if NUMQ1 > NUMQ2, the person is female, otherwise male;
4.2. In certain embodiments, if the output of the classifiers is y = {p, q}, information fusion can adopt probability weighting. Let the grid-point outputs be P = {p1, p2, p3, p4, …, pK} and Q = {q1, q2, q3, q4, …, qK}, where K is the number of grid points, P is the set of probabilities of being judged female, and Q is the set of probabilities of being judged male. The final probability of being female is calculated as p = sum(P)/K and that of being male as q = sum(Q)/K, where sum denotes summation; if p > q, the person is judged female, otherwise male. Both fusion modes are sketched below.
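Both fusion modes can be sketched as follows, reusing the helpers above; the function name and the use of scikit-learn's predict/predict_proba are illustrative assumptions:

```python
import numpy as np

def fuse(classifiers, image, points, M1=11, N1=11, use_probability=True):
    """Fuse per-grid-point results for one normalized image: probability
    weighting (4.2) when use_probability is True, majority voting (4.1)
    otherwise. Returns 0 for female, 1 for male."""
    feats = [extract_subfeature(image, cx, cy, M1, N1) for cx, cy in points]
    if use_probability:
        # Rows are the per-point [p_female, p_male]; p = sum(P)/K, q = sum(Q)/K.
        probs = np.vstack([clf.predict_proba(f)[0]
                           for clf, f in zip(classifiers, feats)])
        p, q = probs.mean(axis=0)
        return 0 if p > q else 1
    votes = [int(clf.predict(f)[0]) for clf, f in zip(classifiers, feats)]
    return 1 if sum(votes) > len(votes) / 2 else 0   # odd K avoids ties
```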
The above scheme is described more concretely below in conjunction with specific embodiments.
Embodiment one:
1. Normalize the geometric size of the image and intercept the face region;
1.1. Set the sample size S = 100000 and use support vector machine learning to locate the positions of the two pupils; set the distance between the two pupils to 64 pixels and the midpoint of the two pupils as the center of the picture, and scale the picture to a size of 240 × 320;
1.2. For n = 100 face images, roughly estimate in turn the rectangle Aj that intercepts the face region, obtaining its upper-left coordinate (x1j, y1j) and lower-right coordinate (x2j, y2j); then calculate the averages (x̄1, ȳ1) and (x̄2, ȳ2) of the two corner points, where x̄i = (1/n)·Σj xij and ȳi = (1/n)·Σj yij (i = 1, 2);
1.3. Calculate the size M × N of the average rectangular region, where M = ȳ2 − ȳ1 + 1 and N = x̄2 − x̄1 + 1;
1.4. For each image, intercept the face region with an M × N rectangle.
2. Partition the length and width of the M × N region equidistantly: divide the horizontal-axis length M of the image into m equal parts and the vertical-axis length N into n equal parts, with m > 2, n > 2 and m, n even; grid division in this way yields (m − 1) × (n − 1) grid points (excluding the edge portions of the image).
3. Extract the pixels of the M1 × N1 region around each grid point, where M1 and N1 are odd numbers (M1 > 5, N1 > 5), and arrange them row by row into a vector of 1 row and M1 × N1 columns as the input. Set the learning sample Sp = 60000 and train the classifier with support vector machine learning; the classifier outputs y ∈ {0, 1}, where 0 represents female and 1 represents male.
4. For an image to be identified, preprocess the image by steps 1 and 2 to obtain (m − 1) × (n − 1) vectors of 1 row and M1 × N1 columns; take each of these (m − 1) × (n − 1) vectors as the input of the classifier trained in step 3 to obtain (m − 1) × (n − 1) classification results, and finally use voting to obtain the final judgment: if the output is 0, the person is female; if 1, male. An illustrative end-to-end run follows.
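Tying the sketches together, a hypothetical end-to-end run of embodiment one might read as follows; all sizes, file names, pupil coordinates, and the stand-in random training data are illustrative only:

```python
import cv2
import numpy as np

# Stand-ins for the 2K labeled M x N training crops (120 wide, 160 high).
train_images = [np.random.randint(0, 256, (160, 120), dtype=np.uint8)
                for _ in range(200)]
train_labels = [i % 2 for i in range(200)]          # 0 = female, 1 = male

points = grid_points(120, 160, 6, 8)                # 35 interior grid points
clfs = train_gridpoint_classifiers(train_images, train_labels, points)

img = cv2.imread("test_face.png", cv2.IMREAD_GRAYSCALE)
img = normalize_geometry(img, left_pupil=(88, 130), right_pupil=(152, 130))
face = img[80:240, 60:180]                          # M x N crop around the center
print("female" if fuse(clfs, face, points, use_probability=False) == 0
      else "male")
```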
Using this method, 200 facial images under various uneven illumination conditions were chosen, 100 male and 100 female, and the test recognition rate was 98%; on 100 low-resolution facial images read from identity cards, the recognition rate was 93%; on 300 facial images with various pose changes (angle change not exceeding 15 degrees), the recognition rate was 96%. From this group of data it can be seen that this gender identification method is quite robust to factors that affect the recognition rate, such as low resolution and pose, has extremely strong adaptability to illumination, and achieves a fairly ideal recognition rate.
Embodiment two:
1. Normalize the geometric size of the image and intercept the face region;
1.1. Set the sample size S = 100000 and use support vector machine learning to locate the positions of the two pupils; set the distance between the two pupils to 64 pixels and the midpoint of the two pupils as the center of the picture, and scale the picture to a size of 240 × 320;
1.2. For n = 100 face images, roughly estimate in turn the rectangle Aj that intercepts the eyebrow and eye regions, obtaining its upper-left coordinate (x1j, y1j) and lower-right coordinate (x2j, y2j); then calculate the averages (x̄1, ȳ1) and (x̄2, ȳ2) of the two corner points, where x̄i = (1/n)·Σj xij and ȳi = (1/n)·Σj yij (i = 1, 2);
1.3. Calculate the size M × N of the average rectangular region, where M = ȳ2 − ȳ1 + 1 and N = x̄2 − x̄1 + 1;
1.4. For each image, intercept the eyebrow and eye regions with an M × N rectangle.
2. Partition the length and width of the M × N region equidistantly: divide the horizontal-axis length M of the image into m equal parts and the vertical-axis length N into n equal parts, with m > 2, n > 2 and m, n even; grid division in this way yields (m − 1) × (n − 1) grid points (excluding the edge portions of the image).
3. Extract the pixels of the M1 × N1 region around each grid point, where M1 and N1 are odd numbers (M1 > 5, N1 > 5); calculate the gray mean AveGray and gray variance VariGray of A(i, j), where 0 ≤ i < M1, 0 ≤ j < N1 and A(i, j) is the gray value at the corresponding position in the M1 × N1 region; standardize the gray values as As(i, j) = (A(i, j) − AveGray) / VariGray; finally arrange the standardized region row by row into a vector of 1 row and M1 × N1 columns as the input. Set the learning sample Sp = 100000 and train the classifier with naive Bayesian learning; the classifier outputs y ∈ {0, 1}, where 0 represents female and 1 represents male.
4. For an image to be identified, preprocess the image by steps 1 and 2 to obtain (m − 1) × (n − 1) vectors of 1 row and M1 × N1 columns; take each of these (m − 1) × (n − 1) vectors as the input of the Bayes classifier trained in step 3 to obtain (m − 1) × (n − 1) classification results, and finally use voting to obtain the final judgment: if the output is 0, the person is female; if 1, male.
Using this method, 200 facial images under various uneven illumination conditions were chosen, 100 male and 100 female, and the test recognition rate was 99.5%; on 100 low-resolution facial images read from identity cards, the recognition rate was 93%; on 300 facial images with various pose changes (angle change not exceeding 15 degrees), the recognition rate was 97%. This method intercepts the eyebrow and eye regions, the face sub-feature with the greatest discriminability, applies standardization preprocessing to the pixels around each grid point, and enlarges the learning samples, adopting Bayesian learning, which handles large learning samples better; it thus obtains a very high recognition rate.

Claims (3)

1. A gender identification method, characterized in that classifiers are trained on samples of a common specification according to several selected sub-features, and the gender recognition result based on each sub-feature is output;
a picture to be identified is input and normalized to said specification; the gender recognition result of each sub-feature of the normalized picture is obtained; information fusion consists of summing the gender recognition results of the picture's sub-features and outputting the gender corresponding to the dominant sum;
said specification is that the intercepted face region has a size of M × N and a fixed distance between the two eyes; the face is divided equally into rows and columns accordingly, and a grid is generated to obtain a matching number of grid points;
a face sub-feature is extracted at each grid point; using the information of each sub-feature together with the known male/female labels, a learning algorithm is applied for training and the training result is output;
the method of extracting a face sub-feature is to first intercept a predetermined neighborhood of the corresponding grid point, forming an M1 × N1 subregion, and then obtain a vector of M1 × N1 columns;
the value range of M1 and N1 is [10, 15];
said gender recognition result is y ∈ {0, 1}, where 0 represents female and 1 represents male, or the output is y = {p, q}, where p is the probability of being female, q is the probability of being male, 0 ≤ p ≤ 1, 0 ≤ q ≤ 1;
correspondingly, when the output is y ∈ {0, 1}, the information fusion (classification) method compares the number of outputs classified as 0 with the number classified as 1;
when the output is y = {p, q}, the classification method adopts probability weighting:
P = {p1, p2, p3, p4, …, pK} and Q = {q1, q2, q3, q4, …, qK}, where K is the number of grid points, P is the set of probabilities of being judged female, and Q is the set of probabilities of being judged male;
the probability of being female is calculated as p = sum(P)/K and the probability of being male as q = sum(Q)/K, where sum denotes summation; if p > q, the person is judged female, otherwise male.
2. The gender identification method according to claim 1, characterized in that the grid division method divides the width into m equal parts and the height into n equal parts, where m and n are natural numbers, m ∈ [4, 10], and n ∈ [3, 8].
3. The gender identification method according to claim 1, characterized in that said specification includes intercepting a face region, said face region being the whole face or a selection of one or more of the eyes, eyebrows, nose, and mouth.
CN201210515116.XA 2012-12-05 2012-12-05 Gender identification method Active CN103034840B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210515116.XA CN103034840B (en) 2012-12-05 2012-12-05 Gender identification method


Publications (2)

Publication Number Publication Date
CN103034840A (en) 2013-04-10
CN103034840B (en) 2016-05-04

Family

ID=48021719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210515116.XA Active CN103034840B (en) Gender identification method

Country Status (1)

Country Link
CN (1) CN103034840B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103324946B * 2013-07-11 2016-08-17 广州广电运通金融电子股份有限公司 Method and system of paper money recognition and classification
CN104680118B * 2013-11-29 2018-06-15 华为技术有限公司 Face attribute detection model generation method and system
CN103714316B * 2013-12-10 2017-03-01 小米科技有限责任公司 Image recognition method, device and electronic equipment
CN105373777B * 2015-10-30 2019-01-08 中国科学院自动化研究所 Method and device for face recognition
CN108765014A * 2018-05-30 2018-11-06 中海云智慧(北京)物联网科技有限公司 Intelligent advertisement placement method based on an access control system
CN108681928A * 2018-05-30 2018-10-19 中海云智慧(北京)物联网科技有限公司 Intelligent advertisement placement method


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102637251A (en) * 2012-03-20 2012-08-15 华中科技大学 Face recognition method based on reference features

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Gender Recognition and Age Estimation Based on Face Images; 陆丽 (Lu Li); China Doctoral Dissertations Full-text Database, Information Science and Technology; 2010-10-15; 1-126 *

Also Published As

Publication number Publication date
CN103034840A (en) 2013-04-10

Similar Documents

Publication Publication Date Title
CN108921100B (en) Face recognition method and system based on visible light image and infrared image fusion
Zhang et al. Driver fatigue detection based on eye state recognition
CN102360421B (en) Face identification method and system based on video streaming
CN103034840B (en) Gender identification method
CN106096538B (en) Face identification method and device based on sequencing neural network model
CN105138954B (en) Automatic image screening, query and identification system
Nabatchian et al. Illumination invariant feature extraction and mutual-information-based local matching for face recognition under illumination variation and occlusion
CN108268859A (en) Facial expression recognition method based on deep learning
CN102024145B (en) Layered recognition method and system for disguised face
CN105335732B (en) Occluded face recognition method based on block partitioning and discriminative non-negative matrix factorization
CN106980852B (en) Medicine identification system based on corner detection and matching, and its recognition method
CN105956578A (en) Face verification method based on identity document information
CN108829900A (en) Face image retrieval method, device and terminal based on deep learning
CN101710383A (en) Method and device for identity authentication
CN102799870A (en) Single-training sample face recognition method based on blocking consistency LBP (Local Binary Pattern) and sparse coding
CN102902986A (en) Automatic gender identification system and method
CN103020602B (en) Face identification method based on neural network
Rouhi et al. A review on feature extraction techniques in face recognition
CN106570447B (en) Automatic removal method for sunglasses in face photos based on gray-level histogram matching
CN103136516A (en) Face recognition method and system fusing visible light and near-infrared information
CN108629336A (en) Facial attractiveness calculation method based on facial feature point recognition
CN104143091B (en) Single-sample face recognition method based on improved mLBP
CN113963032A (en) Twin network structure target tracking method fusing target re-identification
CN106529504A (en) Dual-mode video emotion recognition method with composite spatial-temporal characteristic
Paul et al. Extraction of facial feature points using cumulative histogram

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C56 Change in the name or address of the patentee
CP01 Change in the name or title of a patent holder

Address after: No. 699 Shunhua West Road, High-tech Zone, Ji'nan City, Shandong Province 250101, China

Patentee after: SYNTHESIS ELECTRONIC TECHNOLOGY CO., LTD.

Address before: No. 699 Shunhua West Road, High-tech Zone, Ji'nan City, Shandong Province 250101, China

Patentee before: Shandong Synthesis Electronic Technology Co., Ltd.