CN102799872A - Image processing method based on face image characteristics - Google Patents


Info

Publication number
CN102799872A
CN102799872A (application CN201210247479.XA; granted publication CN102799872B)
Authority
CN
China
Prior art keywords
image
face
feature value
calculate
state
Prior art date
Legal status
Granted
Application number
CN201210247479XA
Other languages
Chinese (zh)
Other versions
CN102799872B (en)
Inventor
周秦武
张军
张啸宇
强敢峯
张凤华
蔡云丽
Current Assignee
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date
Filing date
Publication date
Application filed by Xian Jiaotong University
Priority to CN201210247479.XA
Publication of CN102799872A
Application granted
Publication of CN102799872B
Legal status: Expired - Fee Related

Abstract

The invention discloses an image processing method based on face image characteristics. The method comprises the steps of: obtaining, from the HSI-space components of face images of a number of subjects taken in the morning and at night, the largest characteristic region and the color and color difference of that region in the morning and night states, so as to form a color feature vector; obtaining a texture feature vector of the largest characteristic region by using a co-occurrence matrix; and constructing a set of judgment criteria from the overall structure of the color and texture feature vectors of the database data, thereby providing a state judgment method. The method can conveniently be used by anyone to check their state at any moment, can be applied on a smartphone, is convenient and fast, and has a promising application prospect.

Description

Image processing method based on face image characteristics
Technical field:
The present invention relates to the field of image processing and is used to measure the state of the human face; it specifically relates to an image processing method based on face image characteristics.
Background technology:
In the field of image processing there are many methods for processing facial features; the most common are described below.
1. Basic algorithms
(1) Methods based on geometric features
Geometric-feature methods (Geometrical Features Based) are early face recognition algorithms. The chosen feature vectors must have a certain uniqueness, reflecting the differences between different faces, yet also a certain elasticity, so as to reduce or eliminate influences such as illumination differences. The geometric feature vector is based on the shape of the facial organs and their geometric relationships; its components typically include the Euclidean distances between specified point pairs on the face, curvatures, angles, and so on.
(2) Methods based on neural networks
Neural networks were applied to face recognition early on (Neural Network Based). The early network used for face recognition was mainly the Kohonen self-organizing associative map; when a face image is severely polluted by noise or partially missing, a Kohonen network recovers the complete face comparatively well. Cottrell et al. used cascaded BP neural networks for face recognition, also with good recognition of partially damaged face images under changing illumination. Neural-network methods characterize the face directly by its gray-level map (a two-dimensional matrix); the characteristics of the pattern are implicit in the structure and parameters of the network, that is, a purpose-built network is trained to serve as the decision classifier. Networks that have been used include the back-propagation neural network (BP), the convolutional neural network (CNN), the support vector machine (SVM), and others.
(3) Methods based on algebraic features
These algorithms use algebraic feature vectors, i.e. the projection of the face image onto the reduced-dimension subspace spanned by the "eigenfaces". The principle of algebraic-feature recognition is to extract features by statistical methods and thus form a subspace in which recognition is carried out. The basic process is: regard the image as a numerical matrix, apply a singular value decomposition (SVD) to it, and take the resulting singular values as a description of the face image. Because the singular value vector corresponds one-to-one with the image, is comparatively stable, and is invariant under various transformations, algebraic features reflect the essence of the image and can serve as a description of the facial features. For face images, the Karhunen-Loeve transform gives the optimal representation of this kind.
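The singular-value description above can be sketched as follows (a minimal illustration, not code from the patent; the toy image, function name and choice of k are ours):

```python
import numpy as np

def singular_value_features(image, k=10):
    """Return the k largest singular values of a gray-level image matrix;
    they are stable under small perturbations and can serve as a compact
    algebraic description of the face."""
    # np.linalg.svd returns the singular values in descending order.
    s = np.linalg.svd(np.asarray(image, dtype=float), compute_uv=False)
    return s[:k]

# Toy rank-1 "image": one dominant singular value, the rest near zero.
img = np.outer(np.linspace(1.0, 0.0, 8), np.linspace(0.0, 1.0, 8))
feats = singular_value_features(img, k=3)
```

In practice the image would be a cropped gray-level face rather than this synthetic matrix.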
2. Current mainstream algorithms
(1) The eigenface algorithm
Advantages of the eigenface method:
1. The original gray-level image data are used directly for learning and recognition, without any low- or mid-level preprocessing;
2. No knowledge of the geometry or reflectance of the face is needed;
3. High-dimensional data can be effectively compressed to a low dimension;
4. Compared with other matching methods, recognition is simple and effective.
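A minimal sketch of the eigenface idea (PCA via SVD on flattened gray-level images; the data and helper names below are illustrative, not the patent's):

```python
import numpy as np

def eigenfaces(face_rows, n_components):
    """Top principal components ('eigenfaces') of a set of flattened
    gray-level face images, one image per row."""
    X = np.asarray(face_rows, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centered data: rows of Vt are unit-norm eigenfaces.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def encode(face, mean, components):
    """Low-dimensional code of a face in the eigenface subspace."""
    return components @ (np.asarray(face, dtype=float) - mean)

# Toy data whose centered rows have rank 1, so one component suffices.
X = np.arange(24.0).reshape(4, 6)
mean, comps = eigenfaces(X, n_components=1)
code = encode(X[0], mean, comps)
reconstruction = mean + comps.T @ code
```

Recognition then compares the low-dimensional codes of faces instead of the raw pixels, which is the compression advantage listed above.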
(2) Algorithms based on Fisher linear discriminant analysis
The Fisher linear decision rule is a classic algorithm of pattern recognition. The Fisher criterion assumes that the different classes are linearly separable in the model space, and that the main cause of their separability is the difference between different faces. It can be applied to fatigue judgment and can represent the facial differences of the same person under different environments and health conditions.
(3) Elastic graph matching
Elastic Graph Matching is a method based on the Dynamic Link Architecture (DLA). It represents the face as a grid-like sparse graph (a topological graph); each node of the graph is labeled with the feature vector obtained by a Gabor wavelet decomposition at its image position, and each edge of the graph is labeled with the distance vector of the nodes it connects.
(4) Local feature analysis
Local Feature Analysis (LFA) exploits prior structural knowledge of the face together with the gray-level distribution of the face image: it first locates the feature points of the face roughly, then adjusts them using a facial elastic graph, and finally computes the set of Gabor transform coefficients at each feature point, representing the facial features with these coefficients.
(5) Non-linear subspace algorithms
Non-linear subspace analysis (Non-Linear SubSpace) represents a mainstream development trend. It mainly includes kernel-machine methods (e.g. K-SVM, K-PCA, K-LDA), locally linear embedding (LLE), Laplacian eigenmaps (LE), and so on. The main idea is to describe a sample with a small number of features, use a non-linear mapping to reduce the dimension and construct the facial feature subspace, and then carry out face recognition and feature tracking with a classifier.
3. Knowledge-based face detection methods
Knowledge-based methods use knowledge of facial features to establish a set of rules, converting the face detection problem into a hypothesize-and-verify problem.
Organ location: although faces vary greatly in appearance, they obey some universal rules, such as the spatial distribution of the facial organs. Detecting whether an image contains a face amounts to testing whether the image contains blocks that satisfy these rules. Generally, the possible positions of several organs are detected directly first; these candidate positions are then combined, and the combinations are screened by a classifier based on the geometric relationships of the organ layout to find the faces that may be present.
Contour extraction: the outline of a face can be regarded as an approximate ellipse, so face detection can be accomplished by ellipse detection. For any image, edge detection is carried out first, curve features are extracted from the thinned edges, and an evaluation function of each curve combination forming a face is then computed to detect the face. The Hough transform can be used for this: its basic idea is to transform the detection problem from the image space into a parameter space, where the detection task is completed by simple accumulation statistics; it describes the region boundary curves of the image in a parametric form satisfied by most of the boundary points, and it has good fault tolerance and robustness for image boundaries affected by noise or discontinuities. However, the computational load of the Hough transform is very large, which seriously affects the recognition speed.
Methods using color and texture: skin color is strongly affected by brightness but only weakly by chrominance, and its distribution in color space is relatively concentrated, so color information can distinguish the face from the background to some extent. Because skin color can be represented by a few simple parameters, the computation is fast; compared with other detection methods, however, the face regions detected from color knowledge alone are not accurate enough.
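A sketch of such color-based screening (a classic explicit RGB skin rule with a few simple parameters; the thresholds are common illustrative values, not taken from the patent):

```python
import numpy as np

def skin_mask(rgb):
    """Rough per-pixel skin mask for an H x W x 3 uint8 RGB image.
    A pixel counts as 'skin' when it is bright enough, clearly reddish,
    and not gray -- a handful of simple thresholds, hence the speed."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    spread = np.maximum.reduce([r, g, b]) - np.minimum.reduce([r, g, b])
    return ((r > 95) & (g > 40) & (b > 20)            # bright enough
            & (spread > 15)                            # not gray
            & (abs(r - g) > 15) & (r > g) & (r > b))   # reddish

# One skin-like pixel and one blue pixel.
img = np.array([[[220, 170, 140], [30, 60, 200]]], dtype=np.uint8)
mask = skin_mask(img)
```

As the text notes, a mask like this locates candidate face regions quickly but not accurately; it is typically followed by a more precise verifier.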
Methods using motion: if the input is a dynamic image sequence, the motion of the face or of its organs relative to the background can be used to detect the face, for example by detecting actions such as blinking or speaking, thereby separating the face from the background.
Symmetry: the face has a certain axial symmetry, and each organ is also roughly symmetric. Some researchers have proposed symmetry-detection methods that test the symmetry of a circular region to decide whether it is a face.
Knowledge-based methods work top-down. One of their difficulties is how to convert human knowledge into effective rules: if the rules are made too strict, many faces will fail the rule verification; if they are made too loose, many non-faces will be mistaken for faces.
However, these methods have the following shortcomings:
Current image processing algorithms are slow on high-definition face images and easily lose their detailed information; the amount of facial information is reduced, so much useful information is lost when the facial features are processed. In addition, face images are easily affected by factors such as external illumination, and the above algorithms have difficulty filtering out such interference. The purpose of the invention is to propose a convenient and efficient method for analyzing and processing face images, used to judge the state of the face.
Summary of the invention:
In view of the above defects or deficiencies, the object of the present invention is to propose a novel image processing method based on face image characteristics, built on the HSI space and texture images. The details are as follows:
The method comprises the following steps:
1) Collect several groups of facial photos in two different states, morning and night; read in the facial RGB images and convert each image to the HSI space; obtain the three HSI parameters hue, saturation and intensity; compute a feature value for each of the three parameters, and obtain the mean value of each group of feature values;
2) Compute the difference percentage of each parameter, D = abs(P − Z)/max(P, Z), where P represents the mean feature value in the night state and Z represents the mean feature value in the morning state; sum the difference percentages of the three parameters to obtain sum = ΣD as the color standard;
3) Collect a facial photo in some state; compute the feature values of the three HSI parameters hue, saturation and intensity; compute the difference percentage of each parameter with the above formula for D, where P is now the feature value in that state and Z is the mean feature value in the morning state; sum the difference percentages of the three parameters to obtain testsum; compare with sum to obtain the body-state judgment parameter c1 = testsum/sum. The larger c1 is, the closer the body state is to the night state.
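The three steps above can be sketched as follows (a minimal illustration; the function names and numeric feature values are ours, not data from the patent):

```python
def diff_percentage(p, z):
    """Step 2): D = abs(P - Z) / max(P, Z)."""
    return abs(p - z) / max(p, z)

def state_parameter(test_feats, morning_means, night_means):
    """Step 3): c1 = testsum / sum, where sum compares the night means to
    the morning means and testsum compares the test photo to the morning
    means.  Features are the (hue, saturation, intensity) feature values."""
    total = sum(diff_percentage(n, m)
                for n, m in zip(night_means, morning_means))
    testsum = sum(diff_percentage(t, m)
                  for t, m in zip(test_feats, morning_means))
    return testsum / total

# Illustrative mean feature values only.
morning = (0.30, 0.40, 0.50)
night = (0.36, 0.50, 0.60)
c1_night = state_parameter(night, morning, night)      # photo matches night
c1_morning = state_parameter(morning, morning, night)  # photo matches morning
```

A test photo identical to the night means yields c1 = 1, and one identical to the morning means yields c1 = 0, matching the stated interpretation of c1.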
Further, in step 1) the composite variance of the three parameters and the mean of the composite standard deviation are also computed, and in step 2) the difference percentages of the composite variance and the composite standard deviation are computed; the sum of the difference percentages of hue, saturation and intensity together with those of the composite variance and composite standard deviation is taken as the color standard.
Further, the method for converting the RGB image to the HSI space is as follows:
I = (R + G + B) / 3
H = (1/360)·[90 − arctan(F/√3) + {0, G > B; 180, G < B}], where F = (2R − G − B)/(G − B)
S = (I − min(R, G, B)) / I
where I is the intensity, H is the hue, and S is the color saturation.
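A per-pixel sketch of this conversion (our own helper, not the patent's code; the G = B branch, where F is undefined, is handled as H = 0, an assumption the formulas leave open):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert one RGB pixel with components in [0, 1] to (H, S, I),
    following the formulas above; H is normalized to [0, 1]."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    if g == b:
        h = 0.0  # F = (2R - G - B)/(G - B) is undefined when G == B
    else:
        f = (2.0 * r - g - b) / (g - b)
        h = (90.0 - math.degrees(math.atan(f / math.sqrt(3.0)))
             + (0.0 if g > b else 180.0)) / 360.0
    return h, s, i

gray = rgb_to_hsi(0.5, 0.5, 0.5)  # achromatic: zero saturation
red = rgb_to_hsi(1.0, 0.0, 0.0)   # fully saturated
```

For a whole image the same arithmetic would be vectorized over the three channel planes.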
Further, the method also comprises the following steps:
Convert the RGB images of the two different states, morning and night, into gray-level images, and generate a gray-level co-occurrence matrix for each:
P(i,j)=#{(x1,y1),(x2,y2)∈M×N|f(x1,y1)=i,f(x2,y2)=j}
where (x1, y1) and (x2, y2) are the coordinates of any two points in the M × N image, f(x, y) is a two-dimensional digital image of size M × N, #{x} denotes the number of elements in the set x, and P is an Ng × Ng matrix, Ng being a positive integer;
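Building P can be sketched directly for a single displacement (the offset (dx, dy) and the helper name are ours; the patent does not fix a displacement):

```python
import numpy as np

def glcm(gray, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix: P[i, j] counts the pixel pairs
    (x, y) and (x + dx, y + dy) whose gray levels are i and j."""
    g = np.asarray(gray)
    P = np.zeros((levels, levels), dtype=int)
    rows, cols = g.shape
    for y in range(rows - dy):
        for x in range(cols - dx):
            P[g[y, x], g[y + dy, x + dx]] += 1
    return P

# 2 x 2 image with levels {0, 1}: horizontal pairs (0,1) and (1,0).
P = glcm([[0, 1], [1, 0]], levels=2)
```

In practice several displacements (and often a symmetric, normalized matrix) are accumulated before the texture parameters are computed.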
According to the gray-level co-occurrence matrix, compute the energy, contrast, entropy, correlation and inertia moment of the gray-level image, together with the standard deviation of each parameter; compute the feature values of these five parameters and the mean of each standard deviation;
Compute the difference percentage of the standard deviation of each parameter between the morning and night states, D = abs(P − Z)/max(P, Z), where P represents the mean standard deviation in the night state and Z represents the mean standard deviation in the morning state; the sum of the difference percentages of the five standard deviations, sum = ΣD, is taken as the texture standard;
Collect a facial photo in some state; compute the energy, contrast, entropy, correlation and inertia moment of its gray-level image, and compute the feature values of these five parameters; compute the sum of the difference percentages of the feature values, testsum, with the above formula for D, where P is now the feature value in that state and Z is the mean feature value in the morning state; compare with sum to obtain the body-state judgment parameter c2 = testsum/sum. The larger c2 is, the closer the body state is to the night state. c1 and c2 together serve as the basis for judging the body state in that state.
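The texture parameters can be sketched from a normalized co-occurrence matrix as follows (the patent does not spell out the formulas; these are the usual Haralick definitions, and in this common form "contrast" and "inertia moment" coincide):

```python
import numpy as np

def texture_features(P):
    """Energy, contrast/inertia moment, entropy and correlation of a
    co-occurrence matrix P (raw counts; normalized to probabilities)."""
    p = P / P.sum()
    i, j = np.indices(p.shape)
    energy = float((p ** 2).sum())
    contrast = float((((i - j) ** 2) * p).sum())  # a.k.a. inertia moment
    nz = p[p > 0]                                 # skip zero cells in the log
    entropy = float(-(nz * np.log2(nz)).sum())
    mu_i, mu_j = float((i * p).sum()), float((j * p).sum())
    sd_i = float(np.sqrt((((i - mu_i) ** 2) * p).sum()))
    sd_j = float(np.sqrt((((j - mu_j) ** 2) * p).sum()))
    correlation = float(((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j))
    return {"energy": energy, "contrast": contrast,
            "entropy": entropy, "correlation": correlation}

# Checkerboard-like matrix: all mass on the off-diagonal cells.
feats = texture_features(np.array([[0, 1], [1, 0]]))
```

For this perfectly anti-correlated matrix the features come out as energy 0.5, contrast 1, entropy 1 bit and correlation −1, which is a quick sanity check on the definitions.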
Further, the HSI spatial component feature values of the image in step 2) are computed using mask processing.
The beneficial effects of the invention are as follows:
The present invention proposes a novel face image processing method. Mask processing of the facial color image in the HSI color space can filter out the influence of external factors such as illumination on the detailed information of the picture, and the hue and saturation components can extract the detailed information of the face image; in addition, the gray-level co-occurrence matrix is used to compute the texture information of the face image. The method takes both the color and the texture characteristics into account, can describe the facial detail information completely, and is used to measure the body state. The present invention can be applied to platforms such as smartphones, making it convenient for the public to measure their condition at any time.
Description of drawings:
Fig. 1 is a flowchart of the implementation of the present invention.
Embodiment:
The present invention is described in detail below with reference to the accompanying drawing.
As shown in Fig. 1, the present invention includes the following steps:
I. Color image
1. Collect several groups of facial photos in the two different states, morning and night; read in the facial RGB images and convert each to the HSI space; for each group compute a feature value for hue, saturation and intensity, together with the composite variance and composite standard deviation of the three, and compute the mean of each. The data are shown in Tables 1 and 2:
Table 1: Facial color feature quantities, morning state
[table data available only as an image in the original]
Table 2: Facial color feature quantities, night state
[table data available only as an image in the original]
2. Compute the difference percentages and their sum: for each group, compute the difference percentage D = abs(P − Z)/max(P, Z) of the hue, saturation, intensity, composite variance and composite standard deviation between the morning and night states, where P is the mean in the night state and Z is the mean in the morning state; sum these five difference percentages to obtain sum = ΣD = 45%, which is taken as the color standard, as shown in Table 3:
Table 3: Comparison of color features
[table data available only as an image in the original]
3. When testing, randomly select a facial picture in some state and compute the color feature vector and feature values of its HSI space. Following D = abs(P − Z)/max(P, Z), substitute the current feature values for P, with Z the mean feature value in the morning state, to obtain the difference percentages of hue, saturation and intensity at this moment; sum these three difference percentages to obtain testsum. The value of c1 = testsum/sum serves as a basis for judging the body state: the larger c1 is, the closer the state is to the night state.
II. Gray-level (texture) image
1. Convert the RGB images of the two states, morning and night, into gray-level images and generate a gray-level co-occurrence matrix for each: P(i, j) = #{(x1, y1), (x2, y2) ∈ M × N | f(x1, y1) = i, f(x2, y2) = j}, where (x1, y1) and (x2, y2) are the coordinates of any two points in the M × N image, f(x, y) is a two-dimensional digital image of size M × N, #{x} denotes the number of elements in the set x, and P is an Ng × Ng matrix;
2. According to the co-occurrence matrix, compute the energy, contrast, entropy, correlation and inertia moment of the gray-level image and the standard deviation of each parameter, and compute the means of the feature values and standard deviations of these five parameters, as shown in Tables 4 and 5 (the tables list only four parameters: energy a1, b1; entropy a2, b2; inertia moment a3, b3; correlation a4, b4):
Table 4: Facial texture features, morning state
Table 5: Facial texture features, night state
[table data available only as images in the original]
3. In the same way as for the color image above, compute the difference percentages and their sum; to reduce error, the difference percentages of the standard deviations are also computed. As shown in Table 6, the result is 113.42%, which is taken here as the texture standard.
Table 6: Comparison of texture features
[table data available only as an image in the original]
4. When testing, randomly select a facial picture in some state and compute its texture feature vector and feature values. Following D = abs(P − Z)/max(P, Z), substitute the current energy, contrast, entropy, correlation and inertia moment for P, with Z the mean feature value in the morning state, to obtain the difference percentages of these five parameters at this moment; sum the five difference percentages to obtain testsum. The value of c2 = testsum/sum serves as a basis for judging the body state: the larger c2 is, the closer the state is to the night state.

Claims (5)

1. An image processing method based on face image characteristics, characterized in that it comprises the following steps:
1) Collect several groups of facial photos in two different states, morning and night; read in the facial RGB images and convert each image to the HSI space; obtain the three HSI parameters hue, saturation and intensity; compute a feature value for each of the three parameters, and obtain the mean value of each group of feature values;
2) Compute the difference percentage of each parameter, D = abs(P − Z)/max(P, Z), where P represents the mean feature value in the night state and Z represents the mean feature value in the morning state; sum the difference percentages of the three parameters to obtain sum = ΣD as the color standard;
3) Collect a facial photo in some state; compute the feature values of the three HSI parameters hue, saturation and intensity; compute the difference percentage of each parameter with the above formula for D, where P is now the feature value in that state and Z is the mean feature value in the morning state; sum the difference percentages of the three parameters to obtain testsum; compare with sum to obtain the body-state judgment parameter c1 = testsum/sum. The larger c1 is, the closer the body state is to the night state.
2. The image processing method based on face image characteristics according to claim 1, characterized in that: in step 1) the composite variance of the three parameters and the mean of the composite standard deviation are also computed, and in step 2) the difference percentages of the composite variance and the composite standard deviation are computed; the sum of the difference percentages of hue, saturation and intensity together with those of the composite variance and composite standard deviation is taken as the color standard.
3. The image processing method based on face image characteristics according to claim 1, characterized in that the method for converting the RGB image to the HSI space is as follows:
I = (R + G + B) / 3
H = (1/360)·[90 − arctan(F/√3) + {0, G > B; 180, G < B}], where F = (2R − G − B)/(G − B)
S = (I − min(R, G, B)) / I
where I is the intensity, H is the hue, and S is the color saturation.
4. The image processing method based on face image characteristics according to claim 1, characterized in that it also comprises the following steps:
1) Convert the RGB images of the two different states, morning and night, into gray-level images, and generate a gray-level co-occurrence matrix for each:
P(i,j)=#{(x1,y1),(x2,y2)∈M×N|f(x1,y1)=i,f(x2,y2)=j}
where (x1, y1) and (x2, y2) are the coordinates of any two points in the M × N image, f(x, y) is a two-dimensional digital image of size M × N, #{x} denotes the number of elements in the set x, and P is an Ng × Ng matrix, Ng being a positive integer;
2) According to the gray-level co-occurrence matrix, compute the energy, contrast, entropy, correlation and inertia moment of the gray-level image, together with the standard deviation of each parameter; compute the feature values of these five parameters and the mean of each standard deviation;
Compute the difference percentage of the standard deviation of each parameter between the morning and night states, D = abs(P − Z)/max(P, Z), where P represents the mean standard deviation in the night state and Z represents the mean standard deviation in the morning state; the sum of the difference percentages of the five standard deviations, sum = ΣD, is taken as the texture standard;
3) Collect a facial photo in some state; compute the energy, contrast, entropy, correlation and inertia moment of its gray-level image, and compute the feature values of these five parameters; compute the sum of the difference percentages of the feature values, testsum, with the above formula for D, where P is now the feature value in that state and Z is the mean feature value in the morning state; compare with sum to obtain the body-state judgment parameter c2 = testsum/sum. The larger c2 is, the closer the body state is to the night state. c1 and c2 together serve as the basis for judging the body state in that state.
5. The image processing method based on face image characteristics according to claim 1, characterized in that: the HSI spatial component feature values of the image in step 2) are computed using mask processing.
CN201210247479.XA 2012-07-17 2012-07-17 Image processing method based on face image characteristics Expired - Fee Related CN102799872B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210247479.XA CN102799872B (en) 2012-07-17 2012-07-17 Image processing method based on face image characteristics


Publications (2)

Publication Number Publication Date
CN102799872A (application publication) 2012-11-28
CN102799872B (granted publication) 2015-04-29

Family

ID=47198972

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210247479.XA Expired - Fee Related CN102799872B (en) 2012-07-17 2012-07-17 Image processing method based on face image characteristics

Country Status (1)

Country Link
CN (1) CN102799872B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184413A (en) * 2011-05-16 2011-09-14 浙江大华技术股份有限公司 Automatic vehicle body color recognition method of intelligent vehicle monitoring system
CN102298702A (en) * 2010-06-28 2011-12-28 北京中星微电子有限公司 Method and device for detecting body postures
CN102547132A (en) * 2011-12-31 2012-07-04 蔡静 Method and device for carrying out shooting under backlighting condition, and camera


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104820770B * 2015-03-25 2018-11-13 百度在线网络技术(北京)有限公司 Method and apparatus for providing a health indication
CN104820770A * 2015-03-25 2015-08-05 百度在线网络技术(北京)有限公司 Method and device for providing a health indication
CN106295496A * 2015-06-24 2017-01-04 三星电子株式会社 Face recognition method and apparatus
CN106295496B * 2015-06-24 2021-09-14 三星电子株式会社 Face recognition method and device
CN106372603A * 2016-08-31 2017-02-01 重庆大学 Occluded-face recognition method and device
CN106983493A * 2017-03-04 2017-07-28 武汉嫦娥医学抗衰机器人股份有限公司 Skin image processing method based on three spectra
CN106983493B * 2017-03-04 2020-08-18 武汉嫦娥医学抗衰机器人股份有限公司 Skin image processing method based on three spectra
CN107977640A * 2017-12-12 2018-05-01 成都电科海立科技有限公司 Acquisition method based on a vehicle-mounted face recognition image acquisition device
CN107862308A * 2017-12-12 2018-03-30 成都电科海立科技有限公司 Face recognition method based on a vehicle-mounted face recognition device
CN109325959A * 2018-11-09 2019-02-12 南京邮电大学 Method for extracting infrared image details based on the Hough transform, and application thereof
CN111131328A * 2020-01-09 2020-05-08 周钰 Safe blockchain-based financial settlement method and system
CN111131328B * 2020-01-09 2021-02-26 周钰 Safe blockchain-based financial settlement method and system
CN113854961A * 2021-09-13 2021-12-31 珀莱雅化妆品股份有限公司 Method for evaluating the human efficacy of a blackhead-removing cosmetic

Also Published As

Publication number Publication date
CN102799872B (en) 2015-04-29

Similar Documents

Publication Publication Date Title
CN102799872B (en) Image processing method based on face image characteristics
CN106096538B (en) Face identification method and device based on sequencing neural network model
CN101763503B (en) Face recognition method of attitude robust
CN110348319B (en) Face anti-counterfeiting method based on face depth information and edge image fusion
Xie et al. Scut-fbp: A benchmark dataset for facial beauty perception
US20210027048A1 (en) Human face image classification method and apparatus, and server
CN102270308B (en) Facial feature location method based on five sense organs related AAM (Active Appearance Model)
CN106295694B (en) A kind of face identification method of iteration weight set of constraints rarefaction representation classification
CN103839041B (en) The recognition methods of client features and device
CN104850825A (en) Facial image face score calculating method based on convolutional neural network
CN103279768B (en) A kind of video face identification method based on incremental learning face piecemeal visual characteristic
CN109902590A (en) Pedestrian re-identification method based on deep multi-view feature distance learning
CN104700076A (en) Face image virtual sample generating method
CN103839042B (en) Face identification method and face identification system
CN105447441A (en) Face authentication method and device
CN105956570B (en) Smiling face recognition method based on lip features and deep learning
CN105160299A (en) Human face emotion identifying method based on Bayes fusion sparse representation classifier
Rouhi et al. A review on feature extraction techniques in face recognition
CN106529395B (en) Signature image identification method based on depth confidence network and k mean cluster
Omara et al. Learning pairwise SVM on deep features for ear recognition
CN103839033A (en) Face identification method based on fuzzy rule
CN103971106A (en) Multi-view human facial image gender identification method and device
CN105893916A (en) New method for detection of face pretreatment, feature extraction and dimensionality reduction description
CN103714340B (en) Self-adaptation feature extracting method based on image partitioning
CN109543637A (en) A kind of face identification method, device, equipment and readable storage medium storing program for executing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150429

Termination date: 20180717