CN103914683A - Gender identification method and system based on face image - Google Patents
- Publication number: CN103914683A (application CN201310753988.4A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses a gender identification method and system based on a face image. The method comprises the following steps. First, the face image is preprocessed: it is converted into a gray-level image, the face region is cropped from the whole image according to the positions of the eyes to obtain the face region image, and the face region is then denoised by histogram equalization. Second, a composite LBP is used for feature extraction: a scaled LBP zooms the image by a set of magnification factors, a multi-scale LBP then divides the face image into several regions according to different LBP histogram block sizes, LBP features are extracted from the whole image and from each divided region, and all LBP histogram features are concatenated to form the multi-scale LBP feature of the image. Third, gender identification is performed with an SVM model. The gender identification method and system can identify the gender of a face from its image and improve identification accuracy.
Description
Technical field
The invention belongs to the field of face recognition technology and relates to a face identification system, in particular to a gender identification method based on a face image; the invention further relates to a gender identification system based on a face image.
Background technology
With the wide application of biometric technology in human-computer interaction, identity authentication, demographics and other fields, gender identification, as an important branch of it, has attracted increasing study from scholars at home and abroad. The face carries important human biometric information and conveys interpersonal social cues, and gender identification based on face images is a typical application of face image classification. With the popularity of smartphones and the improving quality of phone cameras, the mobile phone, as the most convenient image acquisition device, will play a vital role in gender identification from face images.
Early research on gender identification from face images mainly addressed images captured under a single controlled condition: frontal views with clean backgrounds, no occlusion and consistent illumination. Images taken with a mobile phone, however, are always affected by the environment, such as pose changes and illumination variation, especially in images of children. For the gender identification problem, given that the classification model is a support vector machine (Support Vector Machine, SVM), the most critical part is the processing of the image.
However, the lack of a good processing method keeps the accuracy of existing gender identification low. In view of this, there is an urgent need to design a new gender identification method to overcome the above defects of existing methods.
Summary of the invention
The technical problem to be solved by the invention is to provide a gender identification method based on a face image that can identify the gender of a face from the face image and improve identification accuracy.
In addition, the invention also provides a gender identification system based on a face image that can likewise identify the gender of a face from the face image and improve identification accuracy.
To solve the above technical problem, the invention adopts the following technical solution:
A gender identification method based on a face image, the method comprising the following steps:
Step S1: preprocess the face image. First convert it into a gray-level image; then, according to the positions of the eyes, crop the face region from the whole image to obtain the face region image; finally, denoise the face region by histogram equalization.
Step S2: extract features with a composite LBP. A scaled LBP first zooms the image by a set of magnification factors; a multi-scale LBP then divides the face image into several regions according to different LBP histogram block sizes; LBP features are extracted from the whole image and from each divided region; and all LBP histogram features are concatenated to form the multi-scale LBP feature of the image.
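The LBP extraction of step S2 can be sketched in plain Python: a basic 3x3 LBP operator, a 256-bin histogram over a region, and the concatenation of per-region histograms into one multi-scale feature. The grid sizes in `multiscale_lbp_feature` (1x1 and 2x2) are illustrative assumptions; the patent does not fix the region sizes.

```python
def lbp_code(img, y, x):
    """Basic 3x3 LBP: compare the 8 neighbours of pixel (y, x) with the
    centre value and pack the comparison bits into one byte."""
    c = img[y][x]
    nbrs = [img[y-1][x-1], img[y-1][x], img[y-1][x+1], img[y][x+1],
            img[y+1][x+1], img[y+1][x], img[y+1][x-1], img[y][x-1]]
    return sum(1 << i for i, v in enumerate(nbrs) if v >= c)

def lbp_histogram(img):
    """256-bin histogram of LBP codes over the interior pixels of a region."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist

def multiscale_lbp_feature(img, grids=(1, 2)):
    """Concatenate the LBP histogram of the whole image (grid 1x1) with
    the histograms of each cell of coarser grids (here 2x2)."""
    h, w = len(img), len(img[0])
    feat = []
    for g in grids:
        for gy in range(g):
            for gx in range(g):
                block = [row[gx * w // g:(gx + 1) * w // g]
                         for row in img[gy * h // g:(gy + 1) * h // g]]
                feat.extend(lbp_histogram(block))
    return feat
```

On a uniform region every neighbour ties with the centre, so all interior pixels get code 255; the feature length is 256 bins times the total number of regions.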
Step S3: perform gender identification with an SVM model;
Wherein, step S1 specifically comprises:
Step S11: image graying step.
A real-life image is composed of the three primary colors red (R), green (G) and blue (B) in different proportions. Image graying traverses the image: for each pixel, the RGB value is obtained, the red, green and blue components are extracted, and the gray value of the pixel is determined by the gray conversion formula:
Gray = (9798R + 19235G + 3735B)/32768  (1)
where Gray is the gray value after conversion, and R, G and B are the red, green and blue components of each pixel in the image;
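Formula (1) is a fixed-point approximation of the usual luminance weights 0.299, 0.587, 0.114 scaled by 2^15 = 32768, so the division can be a bit shift; a minimal sketch of the per-pixel traversal:

```python
def to_gray(r, g, b):
    """Convert one RGB pixel to gray using the fixed-point weights of
    Eq. (1): Gray = (9798*R + 19235*G + 3735*B) / 32768. The divisor is
    2**15, so integer hardware can use a right shift."""
    return (9798 * r + 19235 * g + 3735 * b) >> 15

def grayscale(image):
    """Traverse an image given as rows of (R, G, B) tuples and return
    rows of gray values, as described in step S11."""
    return [[to_gray(r, g, b) for (r, g, b) in row] for row in image]
```

Pure white maps exactly to 255 because the three weights sum to 32768.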
Step S12: geometric normalization step.
Scale correction, translation and rotation remove geometric distortion in the image and standardize the face image.
The positions of the eyes and the distance between them serve as the basis for geometric normalization of the face image. According to the positions of the two eyes, a two-dimensional affine transform first rotates the face image so that the line between the eyes is horizontal, while shrinking or enlarging the image, so that the face images in the same face database are aligned with respect to the eyes; alternatively, with the eye positions as the reference, the other facial parts are also scaled to reasonable positions, thereby achieving alignment of all faces;
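The alignment of step S12 reduces to a rotation that levels the inter-eye line plus a uniform scale; a minimal sketch, where the target eye distance of 60 pixels is an illustrative assumption not taken from the patent:

```python
import math

def eye_alignment(left_eye, right_eye, target_dist=60.0):
    """Given the two eye centres as (x, y) tuples, return the rotation
    angle (radians) that makes the inter-eye line horizontal and the
    uniform scale that brings the eye distance to target_dist, i.e. the
    parameters of the 2-D affine "alignment" of step S12."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = -math.atan2(dy, dx)                 # rotating by this levels the eyes
    scale = target_dist / math.hypot(dx, dy)    # shrink or enlarge to a common size
    return angle, scale
```

An already-level pair of eyes at the target distance yields a zero rotation and unit scale, i.e. the identity transform.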
Step S13: cropping step.
To remove interference from the image background, the face image is cropped. Taking the distance between the eyes as the basis and the midpoint between the two eyes as the reference center, the crop extends from the reference center in the up, down, left and right directions to cut out a face region image of a certain size. The face region is determined by the following formulas:
w_f = h_f = d_e/10 × 24  (2)
l = (x_l + x_r)/2 - w_f/2  (3)
r = l + w_f  (4)
t = y_e - h_f/3.5  (5)
b = t + h_f  (6)
where w_f is the width of the face region, h_f is the height of the face region, d_e is the distance between the two eyes, l, r, t and b determine the left, right, top and bottom boundaries of the face region in the face image, and x_l, x_r and y_e are the position coordinates of the two eyes in the face image;
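Equations (2)-(6) can be implemented directly; a sketch that assumes the eyes have already been levelled by step S12 so that both share the vertical coordinate y_e:

```python
def face_region(x_l, x_r, y_e):
    """Compute the crop rectangle of step S13 from the eye positions
    (x_l, y_e) and (x_r, y_e), following Eqs. (2)-(6): a square region
    2.4x the inter-eye distance, centred horizontally between the eyes,
    with its top h_f/3.5 above the eye line."""
    d_e = x_r - x_l                 # eye distance (eyes assumed level)
    w_f = h_f = d_e / 10 * 24       # Eq. (2): region width = height
    l = (x_l + x_r) / 2 - w_f / 2   # Eq. (3): left boundary
    r = l + w_f                     # Eq. (4): right boundary
    t = y_e - h_f / 3.5             # Eq. (5): top boundary
    b = t + h_f                     # Eq. (6): bottom boundary
    return l, r, t, b
```

For eyes at x = 40 and x = 60 the region is 48 pixels square, so the crop always scales with the detected face size.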
Step S14: histogram equalization step.
To reduce the influence of illumination on the gray distribution of the face image, histogram equalization is applied to the face sample images. Its purpose is to transform the gray histogram of the original image from a relatively concentrated gray interval into a uniform distribution over the whole gray range;
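A minimal histogram-equalization routine in the spirit of step S14: each gray value is remapped through the normalized cumulative histogram so that a concentrated gray interval spreads over the full 0-255 range (a flat list of 8-bit values stands in for the image):

```python
def equalize(gray, levels=256):
    """Histogram equalization for a flat list of gray values: build the
    histogram, accumulate it into a CDF, and remap each value so the
    output intensities spread over the whole gray range."""
    n = len(gray)
    hist = [0] * levels
    for v in gray:
        hist[v] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    # Remap each value through the normalized cumulative histogram.
    return [round(cdf[v] * (levels - 1) / n) for v in gray]
```

Four equally frequent values spread to evenly spaced output levels, which is exactly the flattening the step aims for.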
Wherein, step S3 specifically comprises:
An optimal hyperplane is constructed in the sample input space or feature space so that the distance from the hyperplane to the two classes of sample sets is maximized, thereby achieving the best generalization ability. In the gender identification problem for face images there are only two classes, male and female, with class labels {1, -1}, and there exists:
f(x) = a·x_i + b  (7)
where x_i is the gender feature data of a training-sample face image; when f(x) ≥ 1, x_i is classified as male, and when f(x) ≤ -1, x_i is classified as female.
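Evaluating the decision rule of Eq. (7) is a dot product plus a bias; a sketch, where the weight vector a and bias b are placeholders that would come from SVM training on labelled gender feature vectors, and samples inside the margin are reported as undecided (an illustrative choice made here; the patent only states the two labelled cases):

```python
def classify_gender(x, a, b):
    """Linear SVM decision of Eq. (7), f(x) = a.x + b: f(x) >= 1 is
    labelled male (+1), f(x) <= -1 female (-1); values in (-1, 1) fall
    inside the margin and are returned as 0 (undecided) here."""
    f = sum(ai * xi for ai, xi in zip(a, x)) + b
    if f >= 1:
        return 1      # male
    if f <= -1:
        return -1     # female
    return 0          # inside the margin
```

In practice a and b are fitted by an SVM trainer on the multi-scale LBP features of step S2.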
A gender identification method based on a face image, the method comprising the following steps:
Step S1: preprocess the face image;
Step S2: extract features with a composite LBP. A scaled LBP first zooms the image by a set of magnification factors; a multi-scale LBP then divides the face image into several regions according to different LBP histogram block sizes; LBP features are extracted from the whole image and from each divided region; and all LBP histogram features are concatenated to form the multi-scale LBP feature of the image.
Step S3: perform gender identification with an SVM model.
As a preferred embodiment of the invention, step S1 comprises: first converting the face image into a gray-level image; then cropping the face region from the whole image according to the positions of the eyes to obtain the face region image; and finally denoising the face region by histogram equalization.
As a preferred embodiment of the invention, step S1 specifically comprises:
Step S11: image graying step.
A real-life image is composed of the three primary colors red (R), green (G) and blue (B) in different proportions. Image graying traverses the image: for each pixel, the RGB value is obtained, the red, green and blue components are extracted, and the gray value of the pixel is determined by the gray conversion formula:
Gray = (9798R + 19235G + 3735B)/32768  (1)
where Gray is the gray value after conversion, and R, G and B are the red, green and blue components of each pixel in the image;
Step S12: geometric normalization step.
Scale correction, translation and rotation remove geometric distortion in the image and standardize the face image.
The positions of the eyes and the distance between them serve as the basis for geometric normalization of the face image. According to the positions of the two eyes, a two-dimensional affine transform first rotates the face image so that the line between the eyes is horizontal, while shrinking or enlarging the image, so that the face images in the same face database are aligned with respect to the eyes; alternatively, with the eye positions as the reference, the other facial parts are also scaled to reasonable positions, thereby achieving alignment of all faces;
Step S13: cropping step.
To remove interference from the image background, the face image is cropped. Taking the distance between the eyes as the basis and the midpoint between the two eyes as the reference center, the crop extends from the reference center in the up, down, left and right directions to cut out a face region image of a certain size. The face region is determined by the following formulas:
w_f = h_f = d_e/10 × 24  (2)
l = (x_l + x_r)/2 - w_f/2  (3)
r = l + w_f  (4)
t = y_e - h_f/3.5  (5)
b = t + h_f  (6)
where w_f is the width of the face region, h_f is the height of the face region, d_e is the distance between the two eyes, l, r, t and b determine the left, right, top and bottom boundaries of the face region in the face image, and x_l, x_r and y_e are the position coordinates of the two eyes in the face image;
Step S14: histogram equalization step.
To reduce the influence of illumination on the gray distribution of the face image, histogram equalization is applied to the face sample images. Its purpose is to transform the gray histogram of the original image from a relatively concentrated gray interval into a uniform distribution over the whole gray range.
As a preferred embodiment of the invention, step S3 specifically comprises:
An optimal hyperplane is constructed in the sample input space or feature space so that the distance from the hyperplane to the two classes of sample sets is maximized, thereby achieving the best generalization ability. In the gender identification problem for face images there are only two classes, male and female, with class labels {1, -1}, and there exists:
f(x) = a·x_i + b  (7)
where x_i is the gender feature data of a training-sample face image; when f(x) ≥ 1, x_i is classified as male, and when f(x) ≤ -1, x_i is classified as female.
A gender identification system based on a face image, the system comprising:
an image preprocessing module for preprocessing the face image;
a feature extraction module for extracting features with a composite LBP: a scaled LBP first zooms the image by a set of magnification factors; a multi-scale LBP then divides the face image into several regions according to different LBP histogram block sizes; LBP features are extracted from the whole image and from each divided region; and all LBP histogram features are concatenated to form the multi-scale LBP feature of the image;
a gender identification module for performing gender identification with an SVM model.
As a preferred embodiment of the invention, the image preprocessing module first converts the face image into a gray-level image; then crops the face region from the whole image according to the positions of the eyes to obtain the face region image; and finally denoises the face region by histogram equalization.
As a preferred embodiment of the invention, the image preprocessing module specifically comprises an image graying unit, a geometric normalization unit, a cropping unit and a histogram equalization unit;
- the image graying unit performs graying by traversing the image: for each pixel, the RGB value is obtained, the red, green and blue components are extracted, and the gray value of the pixel is determined by the gray conversion formula:
Gray = (9798R + 19235G + 3735B)/32768  (1)
where Gray is the gray value after conversion, and R, G and B are the red, green and blue components of each pixel in the image;
- the geometric normalization unit removes geometric distortion in the image by scale correction, translation and rotation, and standardizes the face image;
the positions of the eyes and the distance between them serve as the basis for geometric normalization of the face image: according to the positions of the two eyes, a two-dimensional affine transform first rotates the face image so that the line between the eyes is horizontal, while shrinking or enlarging the image, so that the face images in the same face database are aligned with respect to the eyes; alternatively, with the eye positions as the reference, the other facial parts are also scaled to reasonable positions, thereby achieving alignment of all faces;
- the cropping unit crops the face image to remove interference from the image background: taking the distance between the eyes as the basis and the midpoint between the two eyes as the reference center, it extends from the reference center in the up, down, left and right directions and cuts out a face region image of a certain size; the face region is determined by the following formulas:
w_f = h_f = d_e/10 × 24  (2)
l = (x_l + x_r)/2 - w_f/2  (3)
r = l + w_f  (4)
t = y_e - h_f/3.5  (5)
b = t + h_f  (6)
where w_f is the width of the face region, h_f is the height of the face region, d_e is the distance between the two eyes, l, r, t and b determine the left, right, top and bottom boundaries of the face region in the face image, and x_l, x_r and y_e are the position coordinates of the two eyes in the face image;
- the histogram equalization unit applies histogram equalization to the face sample images to reduce the influence of illumination on the gray distribution of the face image; its purpose is to transform the gray histogram of the original image from a relatively concentrated gray interval into a uniform distribution over the whole gray range.
As a preferred embodiment of the invention, the gender identification module constructs an optimal hyperplane in the sample input space or feature space so that the distance from the hyperplane to the two classes of sample sets is maximized, thereby achieving the best generalization ability. In the gender identification problem for face images there are only two classes, male and female, with class labels {1, -1}, and there exists:
f(x) = a·x_i + b  (7)
where x_i is the gender feature data of a training-sample face image; when f(x) ≥ 1, x_i is classified as male, and when f(x) ≤ -1, x_i is classified as female.
The beneficial effects of the invention are as follows: the gender identification method and system based on a face image proposed by the invention can identify a person's gender from the face image. After preprocessing the image, the design adopts a new face detection method and a composite LBP feature extraction scheme to extract facial features, and finally performs gender identification with an SVM model, which improves recognition efficiency and accuracy.
Brief description of the drawings
Fig. 1 is a flow chart of the gender identification method based on a face image according to the invention.
Fig. 2 is a schematic diagram of the composition of the gender identification system based on a face image according to the invention.
Fig. 3 is a schematic flow chart of the identification method of the invention in Embodiment 2.
Fig. 4 is a schematic diagram, in Embodiment 2, of a basic LBP recording the comparison between a pixel and its surrounding pixels.
Embodiments
The preferred embodiments of the invention are described in detail below with reference to the accompanying drawings.
Embodiment 1
Referring to Fig. 1, the invention discloses a gender identification method based on a face image, the method comprising the following steps:
[Step S1] Face detection step.
In previous research, most face detection algorithms were proposed on the basis of frontal face images. To better meet the demands of real life, the invention adopts an Adaboost face detector with a pyramid structure, which realizes multi-pose face detection with a coarse-to-fine strategy.
In this pyramid structure, the detector at the top layer can detect faces at all poses, while the detectors at the remaining layers handle only faces at certain poses. The lower the layer, the smaller the pose range each detector handles and the more complex its structure; the higher the layer, the larger the pose range and the fewer the detectors. In this way, each layer can reject most of the background, which improves detection efficiency.
The pyramid structure of the invention is designed with three layers: the top layer has one detector; the middle layer has three detectors, for faces in the ranges [-90°, -40°], [-30°, 30°] and [40°, 90°], respectively; the bottom layer has five detectors, for faces in the ranges [-90°, -60°], [-60°, -20°], [-20°, 20°], [20°, 60°] and [60°, 90°], respectively.
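The three-layer pose pyramid can be written as a small routing table; the yaw ranges below are taken from the text, while the routing function is an assumed illustration of how a candidate window is passed only to the detectors whose pose range covers it:

```python
# Three-layer coarse-to-fine pose pyramid: one full-pose detector on
# top, three in the middle layer, five at the bottom (yaw in degrees).
PYRAMID = [
    [(-90, 90)],                                             # top layer
    [(-90, -40), (-30, 30), (40, 90)],                       # middle layer
    [(-90, -60), (-60, -20), (-20, 20), (20, 60), (60, 90)], # bottom layer
]

def detectors_for_pose(yaw):
    """Return, per layer, the indices of the detectors whose yaw range
    covers the given pose; a window descends only through detectors
    that accept it, so most background is rejected in the upper layers."""
    return [[i for i, (lo, hi) in enumerate(layer) if lo <= yaw <= hi]
            for layer in PYRAMID]
```

A frontal face (yaw 0°) is routed to exactly one detector per layer, matching the coarse-to-fine narrowing described above.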
[Step S1] Preprocess the face image. First convert it into a gray-level image; then, according to the positions of the eyes, crop the face region from the whole image to obtain the face region image; finally, denoise the face region by histogram equalization. This step specifically comprises:
Step S11: image graying step.
A real-life image is composed of the three primary colors red (R), green (G) and blue (B) in different proportions. Image graying traverses the image: for each pixel, the RGB value is obtained, the red, green and blue components are extracted, and the gray value of the pixel is determined by the gray conversion formula:
Gray = (9798R + 19235G + 3735B)/32768  (1)
where Gray is the gray value after conversion, and R, G and B are the red, green and blue components of each pixel in the image.
Step S12: geometric normalization step.
Scale correction, translation and rotation remove geometric distortion in the image and standardize the face image.
The positions of the eyes and the distance between them serve as the basis for geometric normalization of the face image. According to the positions of the two eyes, a two-dimensional affine transform first rotates the face image so that the line between the eyes is horizontal, while shrinking or enlarging the image, so that the face images in the same face database are aligned with respect to the eyes; alternatively, with the eye positions as the reference, the other facial parts are also scaled to reasonable positions, thereby achieving alignment of all faces.
Step S13: cropping step.
To remove interference from the image background, the face image is cropped. Taking the distance between the eyes as the basis and the midpoint between the two eyes as the reference center, the crop extends from the reference center in the up, down, left and right directions to cut out a face region image of a certain size. The face region is determined by the following formulas:
w_f = h_f = d_e/10 × 24  (2)
l = (x_l + x_r)/2 - w_f/2  (3)
r = l + w_f  (4)
t = y_e - h_f/3.5  (5)
b = t + h_f  (6)
where w_f is the width of the face region, h_f is the height of the face region, d_e is the distance between the two eyes, l, r, t and b determine the left, right, top and bottom boundaries of the face region in the face image, and x_l, x_r and y_e are the position coordinates of the two eyes in the face image.
Step S14: histogram equalization step.
To reduce the influence of illumination on the gray distribution of the face image, histogram equalization is applied to the face sample images. Its purpose is to transform the gray histogram of the original image from a relatively concentrated gray interval into a uniform distribution over the whole gray range.
[Step S2] Extract features with a composite LBP. A scaled LBP first zooms the image by a set of magnification factors; a multi-scale LBP then divides the face image into several regions according to different LBP histogram block sizes; LBP features are extracted from the whole image and from each divided region; and all LBP histogram features are concatenated to form the multi-scale LBP feature of the image.
[Step S3] Perform gender identification with an SVM model.
Wherein, step S3 specifically comprises:
An optimal hyperplane is constructed in the sample input space or feature space so that the distance from the hyperplane to the two classes of sample sets is maximized, thereby achieving the best generalization ability. In the gender identification problem for face images there are only two classes, male and female, with class labels {1, -1}, and there exists:
f(x) = a·x_i + b  (7)
where x_i is the gender feature data of a training-sample face image; when f(x) ≥ 1, x_i is classified as male, and when f(x) ≤ -1, x_i is classified as female.
The invention also discloses a gender identification system based on a face image, the system comprising: a face detection module 1, an image preprocessing module 2, a feature extraction module 3 and a gender identification module 4.
The face detection module 1 adopts an Adaboost face detector with a pyramid structure, which realizes multi-pose face detection with a coarse-to-fine strategy. In this pyramid structure, the detector at the top layer can detect faces at all poses, while the detectors at the remaining layers handle only faces at certain poses. The lower the layer, the smaller the pose range each detector handles and the more complex its structure; the higher the layer, the larger the pose range and the fewer the detectors. In this way, each layer can reject most of the background, which improves detection efficiency.
The pyramid structure of the invention is designed with three layers: the top layer has one detector; the middle layer has three detectors, for faces in the ranges [-90°, -40°], [-30°, 30°] and [40°, 90°], respectively; the bottom layer has five detectors, for faces in the ranges [-90°, -60°], [-60°, -20°], [-20°, 20°], [20°, 60°] and [60°, 90°], respectively.
The image preprocessing module 2 preprocesses the face image: it first converts the image into a gray-level image; then crops the face region from the whole image according to the positions of the eyes to obtain the face region image; and finally denoises the face region by histogram equalization.
The image preprocessing module 2 specifically comprises an image graying unit 21, a geometric normalization unit 22, a cropping unit 23 and a histogram equalization unit 24.
The image graying unit 21 performs graying by traversing the image: for each pixel, the RGB value is obtained, the red, green and blue components are extracted, and the gray value of the pixel is determined by the gray conversion formula:
Gray = (9798R + 19235G + 3735B)/32768  (1)
where Gray is the gray value after conversion, and R, G and B are the red, green and blue components of each pixel in the image.
The geometric normalization unit 22 removes geometric distortion in the image by scale correction, translation and rotation, and standardizes the face image.
The positions of the eyes and the distance between them serve as the basis for geometric normalization of the face image. According to the positions of the two eyes, a two-dimensional affine transform first rotates the face image so that the line between the eyes is horizontal, while shrinking or enlarging the image, so that the face images in the same face database are aligned with respect to the eyes; alternatively, with the eye positions as the reference, the other facial parts are also scaled to reasonable positions, thereby achieving alignment of all faces.
The cropping unit 23 crops the face image to remove interference from the image background. Taking the distance between the eyes as the basis and the midpoint between the two eyes as the reference center, it extends from the reference center in the up, down, left and right directions and cuts out a face region image of a certain size. The face region is determined by the following formulas:
w_f = h_f = d_e/10 × 24  (2)
l = (x_l + x_r)/2 - w_f/2  (3)
r = l + w_f  (4)
t = y_e - h_f/3.5  (5)
b = t + h_f  (6)
where w_f is the width of the face region, h_f is the height of the face region, d_e is the distance between the two eyes, l, r, t and b determine the left, right, top and bottom boundaries of the face region in the face image, and x_l, x_r and y_e are the position coordinates of the two eyes in the face image.
The histogram equalization unit 24 applies histogram equalization to the face sample images to reduce the influence of illumination on the gray distribution of the face image; its purpose is to transform the gray histogram of the original image from a relatively concentrated gray interval into a uniform distribution over the whole gray range.
The feature extraction module 3 extracts features with a composite LBP: a scaled LBP first zooms the image by a set of magnification factors; a multi-scale LBP then divides the face image into several regions according to different LBP histogram block sizes; LBP features are extracted from the whole image and from each divided region; and all LBP histogram features are concatenated to form the multi-scale LBP feature of the image.
The gender identification module 4 performs gender identification with an SVM model. In this embodiment, the gender identification module constructs an optimal hyperplane in the sample input space or feature space so that the distance from the hyperplane to the two classes of sample sets is maximized, thereby achieving the best generalization ability. In the gender identification problem for face images there are only two classes, male and female, with class labels {1, -1}, and there exists:
f(x) = a·x_i + b  (7)
where x_i is the gender feature data of a training-sample face image; when f(x) ≥ 1, x_i is classified as male, and when f(x) ≤ -1, x_i is classified as female.
Embodiment 2
In the gender identification system of the invention, gender identification from a face image can be divided into four parts, a face detection module, an image preprocessing module, a facial feature extraction module and a gender classification module, as shown in Fig. 3. In the invention, the face image is first detected and calibrated, then median filtering removes noise, a composite LBP extracts features, and finally an SVM model performs gender identification.
The core of gender identification based on face images is the choice of feature extraction and classification method. In previous research, gender identification mostly relied on the feature extraction and classification techniques of face recognition applied to facial features.
Because the actual environment strongly affects the correctness of gender classification, the acquired image must be preprocessed. For any given image, the system first detects whether a face is present in the image; if so, it extracts the face region from the image, applies graying, geometric normalization, cropping and histogram equalization, extracts facial features, and finally classifies the face image with the gender recognition algorithm to obtain the classification result.
2.1 face detection modules
In previous research, most face detection algorithms were proposed on the basis of frontal face images. To better meet the demands of real life, the invention adopts an Adaboost face detector with a pyramid structure, which realizes multi-pose face detection with a coarse-to-fine strategy.
In this pyramid structure, the detector at the top layer can detect faces at all poses, while the detectors at the remaining layers handle only faces at certain poses. The lower the layer, the smaller the pose range each detector handles and the more complex its structure; the higher the layer, the larger the pose range and the fewer the detectors. In this way, each layer can reject most of the background, which improves detection efficiency.
The pyramid structure of the invention is designed with three layers: the top layer has one detector; the middle layer has three detectors, for faces in the ranges [-90°, -40°], [-30°, 30°] and [40°, 90°], respectively; the bottom layer has five detectors, for faces in the ranges [-90°, -60°], [-60°, -20°], [-20°, 20°], [20°, 60°] and [60°, 90°], respectively.
2.2 Image preprocessing module
In real life, interference from the environment means that captured images, and face images in particular, often carry a great deal of noise, which strongly affects gender identification from face images. Preprocessing the digital image before extracting features from the face image is therefore essential. Because the high-frequency detail of a face image is mixed with the noise, low-pass filtering may destroy some of that detail. Median filtering, by contrast, is very effective at eliminating salt-and-pepper noise while protecting edge information, so we adopt median filtering: it suppresses the noise while changing the facial texture very little.
A given face image is first converted to a grayscale image. Then, based on the positions of the eyes, the facial region is cropped from the whole image to obtain a face-region image. Finally, histogram equalization is applied to the facial region for denoising.
The image preprocessing module comprises an image grayscale unit, a geometric normalization unit, a cropping unit, and a histogram equalization unit.
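As a concrete illustration of the denoising step, here is a minimal 3×3 median filter in plain Python. The function name and the choice to leave border pixels unchanged are ours; a practical system would use an optimized library routine:

```python
from statistics import median

def median_filter_3x3(img):
    """Apply a 3x3 median filter to a list-of-lists gray image.

    Salt-and-pepper outliers are replaced by the median of their 3x3
    neighbourhood; border pixels are left unchanged for simplicity.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(window)
    return out

# A flat patch with one salt-noise pixel (255) in the centre:
img = [[10, 10, 10],
       [10, 255, 10],
       [10, 10, 10]]
print(median_filter_3x3(img)[1][1])  # the outlier is replaced by 10
```

Because the median of the neighbourhood is an actual neighbouring gray value, edges survive this filter better than they would a low-pass (averaging) filter, which is the property the text relies on.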
2.2.1 Image grayscale unit
Real-life images are composed of the three primary colors R (red), G (green), and B (blue) in varying proportions. The basic idea of image grayscaling is to traverse the image: first obtain the RGB value of each pixel, then extract the red, green, and blue components, and finally determine the gray value of each pixel through a grayscale conversion formula. The conversion formula is:
Gray = (9798R + 19235G + 3735B) / 32768    (1)
where Gray is the gray value after conversion, and R, G, and B are the red, green, and blue components of each pixel in the image.
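Formula (1) is an integer approximation of the usual luminance weights (roughly 0.299R + 0.587G + 0.114B); the coefficients sum to 32768 = 2^15, so the division can be a 15-bit shift. A minimal sketch, with a function name of our own choosing:

```python
def to_gray(r, g, b):
    """Integer grayscale conversion per formula (1):
    Gray = (9798*R + 19235*G + 3735*B) / 32768."""
    return (9798 * r + 19235 * g + 3735 * b) >> 15  # >> 15 divides by 32768

print(to_gray(255, 255, 255))  # pure white stays 255
print(to_gray(0, 0, 0))        # pure black stays 0
print(to_gray(255, 0, 0))      # pure red maps to 76, close to 0.299 * 255
```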
2.2.2 Geometric normalization unit
Geometric normalization removes positional deformation from the image by methods such as scale correction, translation, and rotation, standardizing the face image.
The face is non-rigid, and samples of the same individual can differ considerably with changes of expression, but experience shows that the distance between the two eyes varies least among such changes, so the positions of and distance between the eyes can serve as the basis for geometric normalization of the face image.
Based on the positions of the two eyes, a two-dimensional affine transform first rotates the face image so that the line between the eyes is horizontal, while simultaneously shrinking or enlarging the image, so that the face images in the same face database are "aligned" with respect to the eyes. The eye positions can also serve as the reference for scaling the other facial parts, such as the mouth and nose, to reasonable positions, thereby "aligning" all faces.
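The parameters of that similarity transform can be derived directly from the two eye positions. A minimal sketch; the 48-pixel target inter-eye distance and the function name are illustrative choices of ours, not values fixed by the patent:

```python
import math

def eye_alignment_transform(left_eye, right_eye, target_dist=48.0):
    """Return (angle_rad, scale, center) of the 2D similarity transform that
    levels the eye line and normalizes the inter-eye distance.

    Rotating the image by -angle_rad about `center` makes the eye line
    horizontal; multiplying by `scale` sets the eye distance to target_dist.
    """
    (xl, yl), (xr, yr) = left_eye, right_eye
    dx, dy = xr - xl, yr - yl
    angle = math.atan2(dy, dx)           # current tilt of the eye line
    dist = math.hypot(dx, dy)            # current inter-eye distance
    scale = target_dist / dist
    center = ((xl + xr) / 2.0, (yl + yr) / 2.0)
    return angle, scale, center

# Eyes already level and 48 px apart: no rotation, no scaling needed.
angle, scale, center = eye_alignment_transform((100, 120), (148, 120))
print(angle, scale, center)  # 0.0 1.0 (124.0, 120.0)
```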
2.2.3 Cropping unit
To remove the interference of the image background, the face image must be cropped. The present invention uses the distance between the eyes as its basis: taking the midpoint between the two eyes as the reference center and extending up, down, left, and right from it, a face-region image of a certain size is cropped out. The face region is determined by the following formulas:
w_f = h_f = d_e / 10 × 24    (2)
l = (x_l + x_r) / 2 - w_f / 2    (3)
r = l + w_f    (4)
t = y_e - h_f / 3.5    (5)
b = t + h_f    (6)
where w_f is the width of the face region, h_f is its height, d_e is the distance between the two eyes, l, r, t, and b determine the left, right, top, and bottom boundaries of the face region in the face image, and x_l, x_r, and y_e are the position coordinates of the two eyes in the face image.
2.2.4 Histogram equalization unit
To reduce the influence of illumination on the gray-level distribution of face images, histogram equalization is applied to the face sample images. The central idea of histogram equalization is to transform the gray histogram of the original image from a relatively concentrated range of gray levels into a uniform distribution over the whole gray range.
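The classic equalization maps each gray level through the normalized cumulative distribution. A minimal sketch over a flat list of pixels (the function name is our own):

```python
def equalize_histogram(pixels, levels=256):
    """Histogram equalization: map each gray level through the normalized
    CDF so a concentrated histogram spreads over the full gray range."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first non-zero CDF value
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1))
           if n > cdf_min else 0
           for c in cdf]
    return [lut[p] for p in pixels]

# Four gray levels bunched between 100 and 103 spread toward 0..255:
print(equalize_histogram([100, 101, 102, 103]))  # [0, 85, 170, 255]
```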
2.3 Facial feature extraction module
2.3.1 The LBP algorithm
The basic LBP records the comparison between a pixel and its surrounding pixels, as shown in Figure 4. The leftmost grid (Example) is the original patch; the value of its center cell is the threshold: a neighbor greater than or equal to the center pixel is set to 1, a smaller one to 0. Finally the 8 binary digits around the center pixel, 11110001, are converted to a decimal number, which is the LBP value. Here Pattern = 11110001 and LBP = 128 + 64 + 32 + 16 + 1 = 241.
However, the descriptive power of the basic LBP is limited, and it is not rotation invariant. The algorithm has since been extended in two ways. First, neighborhoods of different sizes are used to capture salient features at different scales: the notation (P, R) describes P evenly spaced sampling points on a circle of radius R. Second, a small subset of the 2^P patterns generated by LBP(P, R) is used to describe the image texture. These patterns, called uniform patterns, contain at most two transitions from 0 to 1 (or vice versa) when viewed as a circular binary string. Observation shows that most texture information is contained in the uniform patterns. Giving each uniform pattern its own label and all patterns with more than two transitions a single shared label, denoted LBP(P, R, u2), avoids redundant patterns without losing too much information. After an image has been labeled with LBP, the histogram of the labeled image can be used as a texture descriptor.
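The basic operator is a few lines of code. The sketch below walks the 8 neighbors clockwise from the top-left; the patch values are our own, chosen so the pattern reproduces the 11110001 → 241 example from the text (the exact bit ordering is a convention and may differ between implementations):

```python
def lbp_3x3(patch):
    """Basic LBP of the centre pixel of a 3x3 patch: each neighbour
    contributes 1 if it is >= the centre value, walked clockwise from the
    top-left; the 8-bit string is then read as a decimal number."""
    c = patch[1][1]
    # clockwise order: TL, T, TR, R, BR, B, BL, L
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    bits = ''.join('1' if v >= c else '0' for v in neighbours)
    return bits, int(bits, 2)

# A patch whose pattern is 11110001, matching the 241 example in the text:
patch = [[90, 95, 99],
         [85, 80, 82],
         [10, 20, 30]]
print(lbp_3x3(patch))  # ('11110001', 241)
```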
2.3.2 Compound LBP
The present invention proposes a new feature extraction method that merges two normalized LBPs into a single compound LBP. First, the image is scaled by some factor; for example, to reduce the image to a quarter of its original size, the mean intensity of each group of 4 pixels is computed, yielding a new matrix with 1/4 of the original entries. Next, the face image is divided into several regions according to different LBP histogram region sizes; LBP features are extracted from the whole image and from each region, and all the LBP histograms are concatenated to form the multi-scale LBP feature of the image. This fuses the local texture and the global information of the face image well and reduces the loss of information.
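The two building blocks of this step, downscaling by block averaging and concatenating per-region LBP histograms, can be sketched as follows. This is our own minimal interpretation of the description, not the patent's reference implementation; `lbp_img` is assumed to already hold per-pixel LBP labels in 0..255:

```python
def downscale_2x2(img):
    """Average each 2x2 block, producing an image with 1/4 of the pixels,
    as in the scaling step of the compound LBP."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x+1] + img[y+1][x] + img[y+1][x+1]) / 4.0
             for x in range(0, w - 1, 2)]
            for y in range(0, h - 1, 2)]

def block_histograms(lbp_img, blocks):
    """Split an LBP-labelled image into blocks x blocks regions and
    concatenate the per-region 256-bin histograms into one feature vector."""
    h, w = len(lbp_img), len(lbp_img[0])
    feats = []
    for by in range(blocks):
        for bx in range(blocks):
            hist = [0] * 256
            for y in range(by * h // blocks, (by + 1) * h // blocks):
                for x in range(bx * w // blocks, (bx + 1) * w // blocks):
                    hist[lbp_img[y][x]] += 1
            feats.extend(hist)
    return feats

print(downscale_2x2([[1, 3], [5, 7]]))        # [[4.0]]
print(len(block_histograms([[0, 1], [2, 3]], 1)))  # 256 bins for 1 region
```

The final feature is the concatenation of such histogram vectors computed at each scale and block size, which is what gives the descriptor both local and global coverage.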
2.4 Gender identification module (using an SVM classifier)
The other key issue a gender classification system must solve is the choice of classifier. The present invention uses a support vector machine (SVM) for gender identification. The SVM is a general learning algorithm based on structural risk minimization (SRM); its basic idea is to construct an optimal hyperplane in the sample input space or feature space that maximizes the distance from the hyperplane to the two classes of samples, thereby achieving the best generalization ability.
In the gender identification problem for face images there are only two classes, male and female, with class labels {1, -1}, and there exists:
f(x) = a·x_i + b    (7)
where x_i is the gender feature data of a training-sample face image. When f(x) ≥ 1, x_i is classified as male; correspondingly, when f(x) ≤ -1, x_i is classified as female.
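The decision rule of formula (7) can be sketched as follows; the weight vector, bias, and toy feature vectors below are hand-picked illustrations (a real system would learn them by SVM training), and the 0 return value for points inside the margin is our own convention:

```python
def svm_decision(w, x, b):
    """Linear SVM decision function f(x) = w.x + b, as in formula (7)."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def classify_gender(w, x, b):
    """f(x) >= 1 -> male (+1); f(x) <= -1 -> female (-1);
    otherwise the point lies inside the margin (no confident decision)."""
    f = svm_decision(w, x, b)
    if f >= 1:
        return 1
    if f <= -1:
        return -1
    return 0

# Toy 2-D "feature vectors" against a hand-picked hyperplane:
w, b = [1.0, -1.0], 0.0
print(classify_gender(w, [3.0, 1.0], b))  # f = 2  -> +1 (male)
print(classify_gender(w, [1.0, 3.0], b))  # f = -2 -> -1 (female)
```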
In summary, the gender identification method and system based on face images proposed by the present invention can identify a person's gender from a face image. After preprocessing the image, the design uses a new face detection scheme and compound LBP feature extraction to extract the facial features, and finally performs gender identification with an SVM model, which improves recognition efficiency and accuracy.
The description and application of the invention herein are illustrative and are not intended to limit the scope of the invention to the embodiments described above. Variations of and changes to the disclosed embodiments are possible, and alternative and equivalent components of the embodiments are known to those of ordinary skill in the art. Those skilled in the art will appreciate that the invention may be realized in other forms, structures, arrangements, and proportions, and with other assemblies, materials, and parts, without departing from the spirit or essential characteristics of the invention. Other variations and changes may be made to the embodiments disclosed herein without departing from the scope and spirit of the invention.
Claims (9)
1. A gender identification method based on a face image, characterized in that the method comprises the following steps:
Step S1: preprocess the face image; first convert it into a grayscale image; then, based on the positions of the eyes, crop the facial region from the whole image to obtain a face-region image; finally apply histogram equalization to the facial region for denoising;
Step S2: extract features with the compound LBP; use the scaling LBP to scale the image by a set factor; then use the multi-scale LBP to divide the face image into several regions according to different LBP histogram region sizes, extract LBP features from the whole image and from each region, and concatenate all the LBP histograms as the multi-scale LBP feature of the image;
Step S3: perform gender identification with an SVM model;
wherein step S1 specifically comprises:
Step S11: an image grayscaling step;
a real-life image is composed of the three primary colors red R, green G, and blue B in varying proportions; image grayscaling traverses the image, first obtaining the RGB value of each pixel, then extracting the red, green, and blue components, and finally determining the gray value of each pixel through a grayscale conversion formula; the conversion formula is:
Gray = (9798R + 19235G + 3735B) / 32768    (1)
wherein Gray is the gray value after conversion, and R, G, and B are the red, green, and blue components of each pixel in the image;
Step S12: a geometric normalization step;
remove positional deformation from the image by scale correction, translation, and rotation, standardizing the face image;
use the positions of and distance between the eyes as the basis for geometric normalization of the face image; based on the positions of the two eyes, first rotate the face image by a two-dimensional affine transform so that the line between the eyes is horizontal, while shrinking or enlarging the image, so that the face images in the same face database are "aligned" with respect to the eyes; or, with the eye positions as the reference, also scale the other facial parts to reasonable positions, thereby "aligning" all faces;
Step S13: a cropping step;
crop the face image to remove the interference of the image background; using the distance between the eyes as the basis and the midpoint between the two eyes as the reference center, extend up, down, left, and right from the reference center and crop out a face-region image of a certain size; the face region is determined by the following formulas:
w_f = h_f = d_e / 10 × 24    (2)
l = (x_l + x_r) / 2 - w_f / 2    (3)
r = l + w_f    (4)
t = y_e - h_f / 3.5    (5)
b = t + h_f    (6)
wherein w_f is the width of the face region, h_f is its height, d_e is the distance between the two eyes, l, r, t, and b determine the left, right, top, and bottom boundaries of the face region in the face image, and x_l, x_r, and y_e are the position coordinates of the two eyes in the face image;
Step S14: a histogram equalization step;
to reduce the influence of illumination on the gray-level distribution of the face image, apply histogram equalization to the face sample images; the purpose of histogram equalization is to transform the gray histogram of the original image from a relatively concentrated range of gray levels into a uniform distribution over the whole gray range;
wherein step S3 specifically comprises:
constructing an optimal hyperplane in the sample input space or feature space that maximizes the distance from the hyperplane to the two classes of samples, thereby achieving the best generalization ability; in the gender identification problem for face images there are only two classes, male and female, with class labels {1, -1}, and there exists:
f(x) = a·x_i + b    (7)
wherein x_i is the gender feature data of a training-sample face image; when f(x) ≥ 1, x_i is classified as male; correspondingly, when f(x) ≤ -1, x_i is classified as female.
2. A gender identification method based on a face image, characterized in that the method comprises the following steps:
Step S1: preprocess the face image;
Step S2: extract features with the compound LBP; use the scaling LBP to scale the image by a set factor; then use the multi-scale LBP to divide the face image into several regions according to different LBP histogram region sizes, extract LBP features from the whole image and from each region, and concatenate all the LBP histograms as the multi-scale LBP feature of the image;
Step S3: perform gender identification with an SVM model.
3. The gender identification method based on a face image according to claim 2, characterized in that:
said step S1 comprises: first converting the face image into a grayscale image; then, based on the positions of the eyes, cropping the facial region from the whole image to obtain a face-region image; and finally applying histogram equalization to the facial region for denoising.
4. The gender identification method based on a face image according to claim 3, characterized in that:
said step S1 specifically comprises:
Step S11: an image grayscaling step;
a real-life image is composed of the three primary colors red R, green G, and blue B in varying proportions; image grayscaling traverses the image, first obtaining the RGB value of each pixel, then extracting the red, green, and blue components, and finally determining the gray value of each pixel through a grayscale conversion formula; the conversion formula is:
Gray = (9798R + 19235G + 3735B) / 32768    (1)
wherein Gray is the gray value after conversion, and R, G, and B are the red, green, and blue components of each pixel in the image;
Step S12: a geometric normalization step;
remove positional deformation from the image by scale correction, translation, and rotation, standardizing the face image;
use the positions of and distance between the eyes as the basis for geometric normalization of the face image; based on the positions of the two eyes, first rotate the face image by a two-dimensional affine transform so that the line between the eyes is horizontal, while shrinking or enlarging the image, so that the face images in the same face database are "aligned" with respect to the eyes; or, with the eye positions as the reference, also scale the other facial parts to reasonable positions, thereby "aligning" all faces;
Step S13: a cropping step;
crop the face image to remove the interference of the image background; using the distance between the eyes as the basis and the midpoint between the two eyes as the reference center, extend up, down, left, and right from the reference center and crop out a face-region image of a certain size; the face region is determined by the following formulas:
w_f = h_f = d_e / 10 × 24    (2)
l = (x_l + x_r) / 2 - w_f / 2    (3)
r = l + w_f    (4)
t = y_e - h_f / 3.5    (5)
b = t + h_f    (6)
wherein w_f is the width of the face region, h_f is its height, d_e is the distance between the two eyes, l, r, t, and b determine the left, right, top, and bottom boundaries of the face region in the face image, and x_l, x_r, and y_e are the position coordinates of the two eyes in the face image;
Step S14: a histogram equalization step;
to reduce the influence of illumination on the gray-level distribution of the face image, apply histogram equalization to the face sample images; the purpose of histogram equalization is to transform the gray histogram of the original image from a relatively concentrated range of gray levels into a uniform distribution over the whole gray range.
5. The gender identification method based on a face image according to claim 2, characterized in that:
said step S3 specifically comprises:
constructing an optimal hyperplane in the sample input space or feature space that maximizes the distance from the hyperplane to the two classes of samples, thereby achieving the best generalization ability; in the gender identification problem for face images there are only two classes, male and female, with class labels {1, -1}, and there exists:
f(x) = a·x_i + b    (7)
wherein x_i is the gender feature data of a training-sample face image; when f(x) ≥ 1, x_i is classified as male; correspondingly, when f(x) ≤ -1, x_i is classified as female.
6. A gender identification system based on a face image, characterized in that the system comprises:
an image preprocessing module for preprocessing the face image;
a feature extraction module for extracting features with the compound LBP; it uses the scaling LBP to scale the image by a set factor; then uses the multi-scale LBP to divide the face image into several regions according to different LBP histogram region sizes, extracts LBP features from the whole image and from each region, and concatenates all the LBP histograms as the multi-scale LBP feature of the image;
a gender identification module for performing gender identification with an SVM model.
7. The gender identification system based on a face image according to claim 6, characterized in that:
said image preprocessing module first converts the face image into a grayscale image; then, based on the positions of the eyes, crops the facial region from the whole image to obtain a face-region image; and finally applies histogram equalization to the facial region for denoising.
8. The gender identification system based on a face image according to claim 7, characterized in that:
said image preprocessing module specifically comprises an image grayscale unit, a geometric normalization unit, a cropping unit, and a histogram equalization unit;
- said image grayscale unit performs grayscale processing by traversing the image; it first obtains the RGB value of each pixel, then extracts the red, green, and blue components, and finally determines the gray value of each pixel through a grayscale conversion formula; the conversion formula is:
Gray = (9798R + 19235G + 3735B) / 32768    (1)
wherein Gray is the gray value after conversion, and R, G, and B are the red, green, and blue components of each pixel in the image;
- said geometric normalization unit removes positional deformation from the image by scale correction, translation, and rotation, standardizing the face image;
it uses the positions of and distance between the eyes as the basis for geometric normalization of the face image; based on the positions of the two eyes, it first rotates the face image by a two-dimensional affine transform so that the line between the eyes is horizontal, while shrinking or enlarging the image, so that the face images in the same face database are "aligned" with respect to the eyes; or, with the eye positions as the reference, it also scales the other facial parts to reasonable positions, thereby "aligning" all faces;
- said cropping unit crops the face image to remove the interference of the image background; using the distance between the eyes as the basis and the midpoint between the two eyes as the reference center, it extends up, down, left, and right from the reference center and crops out a face-region image of a certain size; the face region is determined by the following formulas:
w_f = h_f = d_e / 10 × 24    (2)
l = (x_l + x_r) / 2 - w_f / 2    (3)
r = l + w_f    (4)
t = y_e - h_f / 3.5    (5)
b = t + h_f    (6)
wherein w_f is the width of the face region, h_f is its height, d_e is the distance between the two eyes, l, r, t, and b determine the left, right, top, and bottom boundaries of the face region in the face image, and x_l, x_r, and y_e are the position coordinates of the two eyes in the face image;
- said histogram equalization unit applies histogram equalization to the face sample images to reduce the influence of illumination on the gray-level distribution of the face image; the purpose of histogram equalization is to transform the gray histogram of the original image from a relatively concentrated range of gray levels into a uniform distribution over the whole gray range.
9. The gender identification system based on a face image according to claim 6, characterized in that:
said gender identification module constructs an optimal hyperplane in the sample input space or feature space that maximizes the distance from the hyperplane to the two classes of samples, thereby achieving the best generalization ability; in the gender identification problem for face images there are only two classes, male and female, with class labels {1, -1}, and there exists:
f(x) = a·x_i + b    (7)
wherein x_i is the gender feature data of a training-sample face image; when f(x) ≥ 1, x_i is classified as male; correspondingly, when f(x) ≤ -1, x_i is classified as female.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310753988.4A CN103914683A (en) | 2013-12-31 | 2013-12-31 | Gender identification method and system based on face image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103914683A true CN103914683A (en) | 2014-07-09 |
Family
ID=51040353
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310753988.4A Pending CN103914683A (en) | 2013-12-31 | 2013-12-31 | Gender identification method and system based on face image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103914683A (en) |
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104156720A (en) * | 2014-07-26 | 2014-11-19 | 佳都新太科技股份有限公司 | Face image denoising method on basis of noise evaluation model |
CN104463142A (en) * | 2014-12-26 | 2015-03-25 | 中科创达软件股份有限公司 | Gender identification method and device based on facial images |
CN104601817A (en) * | 2015-01-20 | 2015-05-06 | 电子科技大学 | User base attribute forecasting method based on smart phone acceleration sensor |
CN105320948A (en) * | 2015-11-19 | 2016-02-10 | 北京文安科技发展有限公司 | Image based gender identification method, apparatus and system |
CN105447492A (en) * | 2015-11-13 | 2016-03-30 | 重庆邮电大学 | Image description method based on 2D local binary pattern |
CN105740864A (en) * | 2016-01-22 | 2016-07-06 | 大连楼兰科技股份有限公司 | LBP-based image feature extraction method |
CN105894050A (en) * | 2016-06-01 | 2016-08-24 | 北京联合大学 | Multi-task learning based method for recognizing race and gender through human face image |
CN106339661A (en) * | 2015-07-17 | 2017-01-18 | 阿里巴巴集团控股有限公司 | Method nand device for detecting text region in image |
CN106446821A (en) * | 2016-09-20 | 2017-02-22 | 北京金山安全软件有限公司 | Method and device for identifying gender of user and electronic equipment |
CN106599834A (en) * | 2016-12-13 | 2017-04-26 | 浙江省公众信息产业有限公司 | Information pushing method and system |
CN106778518A (en) * | 2016-11-24 | 2017-05-31 | 汉王科技股份有限公司 | A kind of human face in-vivo detection method and device |
CN106897662A (en) * | 2017-01-06 | 2017-06-27 | 北京交通大学 | The localization method of the face key feature points based on multi-task learning |
CN107729891A (en) * | 2017-12-01 | 2018-02-23 | 旗瀚科技有限公司 | Face characteristic region partitioning method in the case of a kind of non-alignment |
CN107742094A (en) * | 2017-09-22 | 2018-02-27 | 江苏航天大为科技股份有限公司 | Improve the image processing method of testimony of a witness comparison result |
CN107995982A (en) * | 2017-09-15 | 2018-05-04 | 达闼科技(北京)有限公司 | A kind of target identification method, device and intelligent terminal |
CN108009491A (en) * | 2017-11-29 | 2018-05-08 | 深圳火眼智能有限公司 | A kind of object recognition methods solved in fast background movement and system |
CN108268859A (en) * | 2018-02-08 | 2018-07-10 | 南京邮电大学 | A kind of facial expression recognizing method based on deep learning |
CN108269342A (en) * | 2018-03-29 | 2018-07-10 | 成都惠网远航科技有限公司 | Automobile door control automatic induction method |
CN108334870A (en) * | 2018-03-21 | 2018-07-27 | 四川意高汇智科技有限公司 | The remote monitoring system of AR device data server states |
CN108394378A (en) * | 2018-03-29 | 2018-08-14 | 成都惠网远航科技有限公司 | The autocontrol method of vehicle switch door sensing device |
CN108446642A (en) * | 2018-03-23 | 2018-08-24 | 四川意高汇智科技有限公司 | A kind of Distributive System of Face Recognition |
CN108446639A (en) * | 2018-03-21 | 2018-08-24 | 四川意高汇智科技有限公司 | Low-power consumption augmented reality equipment |
CN108491791A (en) * | 2018-03-21 | 2018-09-04 | 四川意高汇智科技有限公司 | Distributed AR data transmission methods |
CN108491798A (en) * | 2018-03-23 | 2018-09-04 | 四川意高汇智科技有限公司 | Face identification method based on individualized feature |
CN108520582A (en) * | 2018-03-29 | 2018-09-11 | 成都惠网远航科技有限公司 | Vehicle switch door automatic induction system |
CN109297417A (en) * | 2018-08-30 | 2019-02-01 | 蒋丽英 | Intelligent track train height adjustment system |
CN109934047A (en) * | 2017-12-15 | 2019-06-25 | 浙江舜宇智能光学技术有限公司 | Face identification system and its face identification method based on deep learning |
CN110414428A (en) * | 2019-07-26 | 2019-11-05 | 厦门美图之家科技有限公司 | A method of generating face character information identification model |
CN110785769A (en) * | 2019-09-29 | 2020-02-11 | 京东方科技集团股份有限公司 | Face gender identification method, and training method and device of face gender classifier |
CN111738927A (en) * | 2020-03-23 | 2020-10-02 | 阳光暖果(北京)科技发展有限公司 | Face recognition feature enhancement and denoising method and system based on histogram equalization |
CN113409187A (en) * | 2021-06-30 | 2021-09-17 | 深圳市斯博科技有限公司 | Cartoon style image conversion method and device, computer equipment and storage medium |
CN114037541A (en) * | 2021-11-05 | 2022-02-11 | 湖南创研科技股份有限公司 | Fixed-point medicine institution supervision method based on biological feature recognition and related equipment |
2013-12-31: application CN201310753988.4A filed in CN; patent CN103914683A status: pending
Non-Patent Citations (1)
Title |
---|
Zhang Ning, "Research on Gender Classification Based on Face Images", China Master's Theses Full-text Database, Information Science and Technology Series *
Cited By (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104156720A (en) * | 2014-07-26 | 2014-11-19 | 佳都新太科技股份有限公司 | Face image denoising method on basis of noise evaluation model |
CN104463142A (en) * | 2014-12-26 | 2015-03-25 | 中科创达软件股份有限公司 | Gender identification method and device based on facial images |
CN104463142B (en) * | 2014-12-26 | 2018-10-16 | 中科创达软件股份有限公司 | A kind of gender identification method and device based on facial image |
CN104601817A (en) * | 2015-01-20 | 2015-05-06 | 电子科技大学 | User base attribute forecasting method based on smart phone acceleration sensor |
CN106339661A (en) * | 2015-07-17 | 2017-01-18 | 阿里巴巴集团控股有限公司 | Method nand device for detecting text region in image |
CN105447492A (en) * | 2015-11-13 | 2016-03-30 | 重庆邮电大学 | Image description method based on 2D local binary pattern |
CN105447492B (en) * | 2015-11-13 | 2018-10-12 | 重庆邮电大学 | A kind of Image Description Methods based on two-dimentional local binary patterns |
CN105320948A (en) * | 2015-11-19 | 2016-02-10 | 北京文安科技发展有限公司 | Image based gender identification method, apparatus and system |
CN105740864A (en) * | 2016-01-22 | 2016-07-06 | 大连楼兰科技股份有限公司 | LBP-based image feature extraction method |
CN105894050A (en) * | 2016-06-01 | 2016-08-24 | 北京联合大学 | Multi-task learning based method for recognizing race and gender through human face image |
CN106446821A (en) * | 2016-09-20 | 2017-02-22 | 北京金山安全软件有限公司 | Method and device for identifying gender of user and electronic equipment |
CN106778518A (en) * | 2016-11-24 | 2017-05-31 | 汉王科技股份有限公司 | A kind of human face in-vivo detection method and device |
CN106599834A (en) * | 2016-12-13 | 2017-04-26 | 浙江省公众信息产业有限公司 | Information pushing method and system |
CN106897662B (en) * | 2017-01-06 | 2020-03-10 | 北京交通大学 | Method for positioning key feature points of human face based on multi-task learning |
CN106897662A (en) * | 2017-01-06 | 2017-06-27 | 北京交通大学 | The localization method of the face key feature points based on multi-task learning |
CN107995982A (en) * | 2017-09-15 | 2018-05-04 | 达闼科技(北京)有限公司 | A kind of target identification method, device and intelligent terminal |
CN107742094A (en) * | 2017-09-22 | 2018-02-27 | 江苏航天大为科技股份有限公司 | Improve the image processing method of testimony of a witness comparison result |
CN108009491A (en) * | 2017-11-29 | 2018-05-08 | 深圳火眼智能有限公司 | A kind of object recognition methods solved in fast background movement and system |
CN107729891A (en) * | 2017-12-01 | 2018-02-23 | 旗瀚科技有限公司 | Face characteristic region partitioning method in the case of a kind of non-alignment |
CN109934047A (en) * | 2017-12-15 | 2019-06-25 | 浙江舜宇智能光学技术有限公司 | Face identification system and its face identification method based on deep learning |
CN108268859A (en) * | 2018-02-08 | 2018-07-10 | 南京邮电大学 | A kind of facial expression recognizing method based on deep learning |
CN108334870A (en) * | 2018-03-21 | 2018-07-27 | 四川意高汇智科技有限公司 | The remote monitoring system of AR device data server states |
CN108446639A (en) * | 2018-03-21 | 2018-08-24 | 四川意高汇智科技有限公司 | Low-power consumption augmented reality equipment |
CN108491791A (en) * | 2018-03-21 | 2018-09-04 | 四川意高汇智科技有限公司 | Distributed AR data transmission methods |
CN108491798A (en) * | 2018-03-23 | 2018-09-04 | 四川意高汇智科技有限公司 | Face identification method based on individualized feature |
CN108446642A (en) * | 2018-03-23 | 2018-08-24 | 四川意高汇智科技有限公司 | A kind of Distributive System of Face Recognition |
CN108269342A (en) * | 2018-03-29 | 2018-07-10 | 成都惠网远航科技有限公司 | Automobile door control automatic induction method |
CN108520582A (en) * | 2018-03-29 | 2018-09-11 | 成都惠网远航科技有限公司 | Vehicle switch door automatic induction system |
CN108394378A (en) * | 2018-03-29 | 2018-08-14 | 成都惠网远航科技有限公司 | The autocontrol method of vehicle switch door sensing device |
CN108394378B (en) * | 2018-03-29 | 2020-08-14 | 荣成名骏户外休闲用品股份有限公司 | Automatic control method of automobile door opening and closing induction device |
CN109297417A (en) * | 2018-08-30 | 2019-02-01 | 蒋丽英 | Intelligent track train height adjustment system |
CN109297417B (en) * | 2018-08-30 | 2019-06-11 | 汕头市昊哲网络科技有限公司 | Intelligent track train height adjustment system |
CN110414428A (en) * | 2019-07-26 | 2019-11-05 | 厦门美图之家科技有限公司 | A method of generating face character information identification model |
CN110785769A (en) * | 2019-09-29 | 2020-02-11 | 京东方科技集团股份有限公司 | Face gender identification method, and training method and device of face gender classifier |
WO2021056531A1 (en) * | 2019-09-29 | 2021-04-01 | 京东方科技集团股份有限公司 | Face gender recognition method, face gender classifier training method and device |
CN111738927A (en) * | 2020-03-23 | 2020-10-02 | 阳光暖果(北京)科技发展有限公司 | Face recognition feature enhancement and denoising method and system based on histogram equalization |
CN113409187A (en) * | 2021-06-30 | 2021-09-17 | 深圳市斯博科技有限公司 | Cartoon style image conversion method and device, computer equipment and storage medium |
CN113409187B (en) * | 2021-06-30 | 2023-08-15 | 深圳万兴软件有限公司 | Cartoon style image conversion method, device, computer equipment and storage medium |
CN114037541A (en) * | 2021-11-05 | 2022-02-11 | 湖南创研科技股份有限公司 | Fixed-point medicine institution supervision method based on biological feature recognition and related equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---
CN103914683A (en) | Gender identification method and system based on face image | |
CN109829443B (en) | Video behavior identification method based on image enhancement and 3D convolution neural network | |
CN109154978B (en) | System and method for detecting plant diseases | |
CN108121991B (en) | Deep learning ship target detection method based on edge candidate region extraction | |
CN104751142B (en) | A kind of natural scene Method for text detection based on stroke feature | |
CN104050471B (en) | Natural scene character detection method and system | |
Sharma et al. | Recent advances in video based document processing: a review | |
CN106372648A (en) | Multi-feature-fusion-convolutional-neural-network-based plankton image classification method | |
CN104408449A (en) | Intelligent mobile terminal scene character processing method | |
CN109325507B (en) | Image classification method and system combining super-pixel saliency features and HOG features | |
CN107527054B (en) | Automatic foreground extraction method based on multi-view fusion | |
CN102496157A (en) | Image detection method based on Gaussian multi-scale transform and color complexity | |
CN103218605A (en) | Quick eye locating method based on integral projection and edge detection | |
CN101976114A (en) | System and method for realizing information interaction between computer and pen and paper based on camera | |
CN105426890A (en) | Method for identifying graphic verification code with twisty and adhesion characters | |
CN110046544A (en) | Digital gesture identification method based on convolutional neural networks | |
CN104484652A (en) | Method for fingerprint recognition | |
CN112906550A (en) | Static gesture recognition method based on watershed transformation | |
Xia et al. | Cervical cancer cell detection based on deep convolutional neural network | |
CN106203448A (en) | A kind of scene classification method based on Nonlinear Scale Space Theory | |
CN109272522B (en) | A kind of image thinning dividing method based on local feature | |
Zhang et al. | Residual attentive feature learning network for salient object detection | |
WO2022121025A1 (en) | Certificate category increase and decrease detection method and apparatus, readable storage medium, and terminal | |
Zhang et al. | License plate recognition model based on CNN+ LSTM+ CTC | |
CN108564020B (en) | Micro-gesture recognition method based on panoramic 3D image |
Legal Events
Date | Code | Title | Description |
---|---|---|---
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20140709 |