CN106570459A - Face image processing method - Google Patents

Face image processing method

Info

Publication number
CN106570459A
CN106570459A (Application CN201610905128.1A)
Authority
CN
China
Prior art keywords
face
feature vector
coordinate
state
singular features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610905128.1A
Other languages
Chinese (zh)
Inventor
付昕军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201610905128.1A priority Critical patent/CN106570459A/en
Publication of CN106570459A publication Critical patent/CN106570459A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/169Holistic features and representations, i.e. based on the facial image taken as a whole

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face image processing method that processes a face image in four main steps to obtain a formalized model. As the face image is processed through these steps, the correction of the image position is taken into account, and a formalized model of the face image is generated that embodies the features of the face image and facilitates storage of the image data. The formalized model can strictly represent the face image and can be used for subsequent face image processing.

Description

A face image processing method
Technical field
The present invention relates to the field of image processing, and in particular to a face image processing method.
Background technology
As the most important expressive organ of the human body, the face provides us with important information such as sex, race, emotion, age, and personality, so face image processing technology has necessarily become an important means of human-computer interaction. In addition, studying the rules of the face in depth helps us deepen our grasp of the general rules of human cognition and effectively promotes the research and development of machine vision. Existing face image processing methods only start from the image itself and process image quality to make the image purer; what is still needed is a processing method whose result can strictly represent the face image and facilitate subsequent face image processing.
Summary of the invention
To solve the above problem, the present invention provides a face image processing method. To achieve the above technical effect, the technical scheme of the present invention is as follows. The steps of the face image processing method are:
1) A training set of color face images is given. Each color face image in the training set is converted into a gray face image using an HLS color model conversion algorithm; the HLS color model conversion is nonlinear and has the advantages of low edge luminance noise and a good smoothing effect. The gray face images form the corresponding gray face image training set;
2) The gray face image training set is sampled to obtain a sample window set; the sample window set is digitized to obtain sample window matrices, which are converted into a singular feature vector set according to the singular value decomposition theorem. The singular feature vector set is composed of feature points, and the mean point of the feature points is calculated by the formula

P = \frac{1}{M}\sum_{i=1}^{M} s_i

where i = 1, 2, ..., M, s_i is the coordinate value of the i-th feature point, P is the mean point of the feature points, and the variable M records the number of feature points;
3) According to the mean point of the feature points, the calculated coordinate value is taken as the new coordinate origin. In order to eliminate differences in face position, the face is translated and then rotated: first, every singular feature vector in the set is reduced by the new coordinate origin value, giving the translated singular feature vector set. The coordinates of the eyebrows on the face are then observed, and the coordinates of the two eyebrows (x_1, y_1), (x_2, y_2) are joined into a line to obtain the tilt angle α of the face, computed as

\alpha = \arccos\left|\frac{y_2 - y_1}{\left\|(x_1, y_1) - (x_2, y_2)\right\|}\right|

The translated singular feature vector set is then rotated about the new coordinate origin by the rotation angle determined from α, with the corresponding rotation parameter; the coordinates of all translated singular feature vectors are multiplied by the rotation parameter to obtain the singular feature vector set after the rotation transformation;
4) The singular feature vector set after the rotation transformation is evenly partitioned and a state matrix is correspondingly established. The states in the state matrix correspond one-to-one with the coordinates of the rotated singular feature vector set, and the transition probabilities between the states of the state matrix are calculated. A probabilistic state model is built by this process: its basic units are the states in the state matrix, and each state corresponds to the transition probabilities to the other states. The probabilistic state model finally obtained is a kind of formalized model.
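Steps 2) to 4) can be illustrated with a short sketch. The following Python fragment is a minimal sketch only, assuming NumPy, treating the singular feature vectors simply as an (M, 2) array of feature-point coordinates, and choosing both the 2x2 rotation matrix and the even partition into states as illustrative assumptions rather than the patent's exact parameters:

```python
import numpy as np

def formalize(points, brow_left, brow_right, n_states=16):
    """Illustrative sketch of steps 2-4: points is an (M, 2) array of
    feature-point coordinates s_i extracted from the sample windows."""
    # Step 2: mean point P = (1/M) * sum(s_i), used as the new origin.
    P = points.mean(axis=0)
    translated = points - P                      # step 3: translation

    # Step 3: tilt angle of the line joining the two eyebrows.
    (x1, y1), (x2, y2) = brow_left, brow_right
    alpha = np.arccos(abs(y2 - y1) / np.hypot(x2 - x1, y2 - y1))
    # Rotate about the new origin (2x2 rotation matrix as an assumption).
    R = np.array([[np.cos(alpha), -np.sin(alpha)],
                  [np.sin(alpha),  np.cos(alpha)]])
    rotated = translated @ R.T

    # Step 4: evenly partition the rotated coordinates into states and
    # estimate transition probabilities between consecutive feature points
    # (partitioning on the x-coordinate only, as a simplification).
    edges = np.linspace(rotated.min(), rotated.max(), n_states + 1)
    states = np.clip(np.digitize(rotated[:, 0], edges) - 1, 0, n_states - 1)
    T = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        T[a, b] += 1
    T = T / np.maximum(T.sum(axis=1, keepdims=True), 1)  # row-normalize
    return P, alpha, rotated, T
```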
Specific embodiment
In order to make the technical problem to be solved, the technical scheme, and the beneficial effects of the present invention clearer, the present invention is described in detail below with reference to the embodiments. It should be noted that the specific embodiments described herein are only used to explain the present invention and are not intended to limit it; products that can realize the same functions are equivalents and improvements and fall within the protection scope of the present invention. The specific method is as follows:
Embodiment one: Real face images are in color, and color can provide richer information than a gray face image. However, gray images are easy to handle, and most classical image processing methods are based on gray images; the first step of this method therefore converts the color face image into a gray face image through a suitable transformation, so that the gray image contains most of the feature information of the original color image. Subsequent processing can then use classical image processing methods, which greatly reduces the amount of computation.
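As a minimal sketch of this first step, the fragment below converts an RGB face image to gray by taking the lightness channel of the HLS color model, L = (max(R, G, B) + min(R, G, B)) / 2; the function name and the assumption that the lightness channel serves as the gray value are illustrative, since the patent does not spell out the exact HLS conversion algorithm:

```python
import numpy as np

def rgb_to_gray_hls(rgb):
    """Convert an RGB face image (H x W x 3, values in [0, 1]) to a gray
    image using the lightness channel of the HLS color model, where
    lightness L = (max(R, G, B) + min(R, G, B)) / 2."""
    rgb = np.asarray(rgb, dtype=np.float64)
    return (rgb.max(axis=2) + rgb.min(axis=2)) / 2.0

# Hypothetical usage: convert every image in a color training set.
# color_train_set = [...]            # list of H x W x 3 arrays
# gray_train_set = [rgb_to_gray_hls(img) for img in color_train_set]
```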
In order to obtain the principal-component feature gray image of a color face image, an optimal-basis approximation of the Karhunen-Loeve transform can be used, giving a principal-component feature image that contains most of the characteristic information of the color image. Singular value decomposition is an effective algebraic feature extraction method for images. Singular value features are stable when describing an image and have important properties such as invariance to transposition, rotation, translation, and mirror transformation, so they can serve as an effective algebraic feature of the image. Singular value decomposition is widely applied in image data compression, signal processing, and pattern analysis. After the singular values have been obtained, the face is analyzed with a face recognition method based on principal component analysis. The method unfolds the face image by rows (or columns) into a high-dimensional vector, which is regarded as a random vector, so its orthogonal K-L basis can be obtained by the Karhunen-Loeve transform. The basis vectors corresponding to the larger eigenvalues have a shape similar to a human face and are therefore called eigenfaces. Each face image is then described with a relatively small set of eigenfaces and corresponds to a weight vector of low dimension, so face recognition can be carried out in the reduced space. The shortcoming of the method, however, is that the features obtained are optimal for representation rather than for classification. The features are therefore further projected with a linear discriminant method so that the between-class scatter matrix of the projected pattern samples is maximized while the within-class scatter matrix is minimized; after projection, the pattern samples simultaneously have the largest between-class distance and the smallest within-class distance in the new space, i.e. the samples have optimal separability in this space. The feature vectors extracted by Fisher linear discriminant analysis emphasize the differences between different faces rather than changes in illumination, facial expression, and orientation, so the method is less sensitive to changes in illumination conditions and face pose, which helps improve recognition performance. However, because face recognition is usually a small-sample problem, the within-class scatter matrix is often singular, which makes the solution of the method very difficult.
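A minimal sketch of the two feature extraction ideas discussed above — singular value features of a sample window and an eigenface (K-L / PCA) basis of the gray training set — is given below; the function names, the number of retained singular values and components, and the use of np.linalg.svd are illustrative assumptions:

```python
import numpy as np

def singular_value_features(window, k=16):
    """Singular value feature of one sample-window matrix: the k largest
    singular values, which are stable under transposition, rotation,
    translation, and mirroring of the image block."""
    s = np.linalg.svd(np.asarray(window, dtype=np.float64), compute_uv=False)
    return s[:k]

def eigenfaces(gray_faces, n_components=20):
    """PCA ("eigenface") basis for a gray face training set.
    Each face is unfolded row by row into a high-dimensional vector;
    the K-L basis is obtained from the SVD of the centered data matrix."""
    X = np.stack([f.ravel() for f in gray_faces]).astype(np.float64)
    mean_face = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean_face, full_matrices=False)
    basis = Vt[:n_components]            # rows are eigenfaces
    weights = (X - mean_face) @ basis.T  # low-dimensional weight vectors
    return mean_face, basis, weights
```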
Embodiment two: Because of the interference of external factors such as shooting angle, distance, and posture, and because manual calibration of the feature points contains a certain amount of error, the gray face images in the training set may appear at different positions and with different sizes and rotation angles. The face point distribution model (PDM) obtained by manual marking therefore contains many "non-shape" factors, i.e. a lot of redundant information other than the shape of the face, and these redundant factors adversely affect the subsequent extraction of face features. To eliminate this influence, the gray face image training set must be initialized, that is, aligned, before statistical analysis is carried out. Many alignment methods are available: through appropriate translation, rotation, and scaling transformations that do not change the overall shape of the point distribution model, the samples are aligned to the same reference frame. This removes the disordered state of the raw data and reduces the influence of redundant factors on the analysis of the face point distribution model data.
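A minimal sketch of one possible alignment is shown below; it fits a similarity transform (translation, rotation, uniform scale) of one shape to a reference shape in the Procrustes style, which is an assumption about the alignment method since the patent only states that translation, rotation, and scaling are used:

```python
import numpy as np

def align_shape(shape, reference):
    """Align one face shape (N x 2 array of feature-point coordinates) to a
    reference shape with a similarity transform (translation, rotation,
    uniform scale) -- a simple Procrustes-style alignment."""
    shape = np.asarray(shape, dtype=np.float64)
    ref = np.asarray(reference, dtype=np.float64)
    # Remove translation by centering both shapes on their mean point.
    s_c = shape - shape.mean(axis=0)
    r_c = ref - ref.mean(axis=0)
    # Optimal rotation from the SVD of the cross-covariance matrix.
    U, _, Vt = np.linalg.svd(s_c.T @ r_c)
    R = U @ Vt
    if np.linalg.det(R) < 0:             # avoid a reflection
        U[:, -1] *= -1
        R = U @ Vt
    # Uniform scale that best matches the reference.
    scale = np.trace((s_c @ R).T @ r_c) / np.trace(s_c.T @ s_c)
    return scale * (s_c @ R) + ref.mean(axis=0)
```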
The feature point search of the face is carried out in a discrete space, so the point found is not the true extreme point of the continuous space. The discrete scale-space points already obtained are therefore interpolated to obtain the extreme point of the continuous space; such extreme points often lie between pixels and between scales. D(x) is expanded to the quadratic term by its Taylor series:

D(\mathbf{x}) = D + \frac{\partial D}{\partial \mathbf{x}}^{T}\mathbf{x} + \frac{1}{2}\mathbf{x}^{T}\frac{\partial^{2} D}{\partial \mathbf{x}^{2}}\mathbf{x}

where x = (x, y, σ) is the space-scale coordinate and D(x) is the approximation of the value at the continuous-space extreme point; the coefficients of the quadratic terms are obtained approximately by finite differences of the adjacent discrete scale-space points already acquired. The extreme value of D(x) is then found by requiring ∂D/∂x = 0, which gives the extreme point with sub-pixel and sub-scale accuracy:

\hat{\mathbf{x}} = -\left(\frac{\partial^{2} D}{\partial \mathbf{x}^{2}}\right)^{-1}\frac{\partial D}{\partial \mathbf{x}}
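A minimal sketch of this sub-pixel, sub-scale refinement is given below, assuming a difference-of-Gaussians stack indexed as dog[scale, row, col] and central finite differences for the gradient and Hessian; the indexing convention and function name are illustrative:

```python
import numpy as np

def refine_extremum(dog, x, y, s):
    """Sub-pixel / sub-scale refinement of a discrete extremum of a
    difference-of-Gaussian stack dog[scale, row, col]: the offset solves
    (d2D/dx2) * offset = -dD/dx from the quadratic Taylor expansion."""
    D = dog.astype(np.float64)
    # First derivatives (x, y, sigma) by central finite differences.
    g = 0.5 * np.array([
        D[s, y, x + 1] - D[s, y, x - 1],
        D[s, y + 1, x] - D[s, y - 1, x],
        D[s + 1, y, x] - D[s - 1, y, x],
    ])
    # Second derivatives (Hessian) by finite differences.
    dxx = D[s, y, x + 1] + D[s, y, x - 1] - 2 * D[s, y, x]
    dyy = D[s, y + 1, x] + D[s, y - 1, x] - 2 * D[s, y, x]
    dss = D[s + 1, y, x] + D[s - 1, y, x] - 2 * D[s, y, x]
    dxy = 0.25 * (D[s, y + 1, x + 1] - D[s, y + 1, x - 1]
                  - D[s, y - 1, x + 1] + D[s, y - 1, x - 1])
    dxs = 0.25 * (D[s + 1, y, x + 1] - D[s + 1, y, x - 1]
                  - D[s - 1, y, x + 1] + D[s - 1, y, x - 1])
    dys = 0.25 * (D[s + 1, y + 1, x] - D[s + 1, y - 1, x]
                  - D[s - 1, y + 1, x] + D[s - 1, y - 1, x])
    H = np.array([[dxx, dxy, dxs],
                  [dxy, dyy, dys],
                  [dxs, dys, dss]])
    offset = -np.linalg.solve(H, g)        # sub-pixel / sub-scale offset
    value = D[s, y, x] + 0.5 * g @ offset  # interpolated extreme value
    return offset, value
```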
A reference direction is determined for each feature point so that the feature description vector can be constructed relative to that direction, which gives the feature rotational invariance. The direction of a feature point is determined according to the gradient distribution of the pixels in its local neighborhood. If the scale of the feature point is σ and the corresponding Gaussian scale image is L(x, y, σ), finite differences are used to calculate the gradient magnitude and direction angle of the image in a region of radius 3 × 1.5σ centered on the feature point. The magnitude m(x, y, σ) and direction angle θ(x, y, σ) are calculated by the following formulas:

m(x, y, \sigma) = \sqrt{\left(L(x+1, y, \sigma) - L(x-1, y, \sigma)\right)^{2} + \left(L(x, y+1, \sigma) - L(x, y-1, \sigma)\right)^{2}}

\theta(x, y, \sigma) = \arctan\frac{L(x, y+1, \sigma) - L(x, y-1, \sigma)}{L(x+1, y, \sigma) - L(x-1, y, \sigma)}
A histogram is used for the statistical analysis of the gradient distribution in the neighborhood window: the gradient direction angle is divided into 36 bins of 10 degrees each, the bin index forming the horizontal axis of the histogram, and the vertical axis being the weighted accumulation of the gradient magnitudes whose direction angle falls in that bin. The standard deviation of the Gaussian weighting window is 1.5 times the scale of the feature point; adding the Gaussian window enhances the influence of the gradient magnitudes near the feature point. The main peak of the histogram generated in this way reflects the main direction of the image gradients in the local neighborhood of the feature point, i.e. the principal direction of the feature point. When there is another peak whose energy exceeds 80% of the main peak, that direction is regarded as an auxiliary direction of the feature point; such a feature point may then produce several feature points with the same coordinate position and scale but different directions, which strengthens the robustness of matching.
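The orientation histogram can be sketched as below, assuming a Gaussian scale image L given as a 2-D array at the scale of the feature point; the 36 bins, the Gaussian weight with standard deviation 1.5σ, and the 80% auxiliary-peak rule follow the description above, while the function name and loop-based implementation are illustrative:

```python
import numpy as np

def dominant_orientations(L, cx, cy, sigma):
    """Orientation histogram of a feature point in Gaussian scale image L.
    Gradients within a radius of 3 * 1.5 * sigma of (cx, cy) are accumulated
    into 36 bins of 10 degrees, weighted by magnitude and a Gaussian window
    of standard deviation 1.5 * sigma; peaks above 80% of the main peak are
    kept as auxiliary directions."""
    radius = int(round(3 * 1.5 * sigma))
    hist = np.zeros(36)
    h, w = L.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = cx + dx, cy + dy
            if x < 1 or x >= w - 1 or y < 1 or y >= h - 1:
                continue
            gx = L[y, x + 1] - L[y, x - 1]          # finite differences
            gy = L[y + 1, x] - L[y - 1, x]
            mag = np.hypot(gx, gy)
            angle = np.degrees(np.arctan2(gy, gx)) % 360.0
            weight = np.exp(-(dx * dx + dy * dy) / (2 * (1.5 * sigma) ** 2))
            hist[int(angle // 10) % 36] += weight * mag
    peak = hist.max()
    # Return the bin-center angles of the main peak and auxiliary peaks.
    return [b * 10.0 + 5.0 for b in range(36) if hist[b] >= 0.8 * peak]
```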
The above embodiments describe the present invention in detail. It should be noted that the specific embodiments described herein are only used to explain the present invention and are not intended to limit it; products that can realize the same functions are equivalents and improvements and are included within the protection scope of the present invention.
The beneficial effects of the invention are as follows: the present invention processes the face image, and in this course not only is the correction of the image position taken into account, but a formalized model of the face image is generated that embodies the features of the face image and facilitates storage of the image data. The formalized model can strictly represent the face image and can be used for subsequent face image processing.

Claims (1)

1. A face image processing method, characterized in that the steps of the method are as follows:
1) A color face image training set is given, and the CPU of a computer performs the calculation using an HLS color model conversion algorithm; the HLS color model conversion algorithm is nonlinear and gives low edge luminance noise and a good smoothing effect. Each color face image in the color face image training set is converted into a gray face image, and the gray face images form the corresponding gray face image training set;
2) The gray face image training set is sampled to obtain a sample window set; the sample window set is digitized to obtain sample window matrices, which are converted into a singular feature vector set according to the singular value decomposition theorem. The singular feature vector set is composed of feature points, and the mean point P of the feature points is calculated by the following formula:
P = \frac{1}{M}\sum_{i=1}^{M} s_i
where i = 1, 2, ..., M, s_i is the coordinate value of the i-th feature point, P is the mean point of the feature points, and the variable M records the number of feature points;
3) According to the mean point of the feature points, the calculated coordinate value is taken as the new coordinate origin. In order to eliminate differences in face position, the face is translated and then rotated: first, every singular feature vector in the singular feature vector set is reduced by the new coordinate origin value, giving the translated singular feature vector set; then the coordinates of the eyebrows on the face are observed, and the coordinates of the two eyebrows (x_1, y_1), (x_2, y_2) are joined into a line to obtain the tilt angle α of the face, the formula for the tilt angle α being:
\alpha = \arccos\left|\frac{y_2 - y_1}{\left\|(x_1, y_1) - (x_2, y_2)\right\|}\right|
The singular feature vector set after the translation is rotated about the new coordinate origin by the rotation angle determined from α, with the corresponding rotation parameter; the coordinates of all translated singular feature vectors are multiplied by the rotation parameter to obtain the singular feature vector set after the rotation transformation;
4) The singular feature vector set after the rotation transformation is evenly partitioned and a state matrix is correspondingly established; the states in the state matrix correspond one-to-one with the coordinates of the singular feature vector set after the rotation transformation, and the transition probabilities between the states in the state matrix are calculated. A probabilistic state model is built by this process; the basic units of the probabilistic state model are the states in the state matrix, and each state corresponds to the transition probabilities to the other states. The probabilistic state model finally obtained is a kind of formalized model.
CN201610905128.1A 2016-10-11 2016-10-11 Face image processing method Pending CN106570459A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610905128.1A CN106570459A (en) 2016-10-11 2016-10-11 Face image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610905128.1A CN106570459A (en) 2016-10-11 2016-10-11 Face image processing method

Publications (1)

Publication Number Publication Date
CN106570459A true CN106570459A (en) 2017-04-19

Family

ID=58533067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610905128.1A Pending CN106570459A (en) 2016-10-11 2016-10-11 Face image processing method

Country Status (1)

Country Link
CN (1) CN106570459A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106650606A (en) * 2016-10-21 2017-05-10 江苏理工学院 Matching and processing method of face image and face image model construction system
CN108073914A (en) * 2018-01-10 2018-05-25 成都品果科技有限公司 A kind of animal face key point mask method
CN109948397A (en) * 2017-12-20 2019-06-28 Tcl集团股份有限公司 A kind of face image correcting method, system and terminal device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101593272A (en) * 2009-06-18 2009-12-02 电子科技大学 A kind of human face characteristic positioning method based on the ASM algorithm
CN105718906A (en) * 2016-01-25 2016-06-29 宁波大学 Living body face detection method based on SVD-HMM

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101593272A (en) * 2009-06-18 2009-12-02 电子科技大学 A kind of human face characteristic positioning method based on the ASM algorithm
CN105718906A (en) * 2016-01-25 2016-06-29 宁波大学 Living body face detection method based on SVD-HMM

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
白冬辉 (Bai Donghui): "Research and Application of Face Recognition Technology" (人脸识别技术的研究与应用), China Master's Theses Full-text Database, Information Science and Technology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106650606A (en) * 2016-10-21 2017-05-10 江苏理工学院 Matching and processing method of face image and face image model construction system
CN109948397A (en) * 2017-12-20 2019-06-28 Tcl集团股份有限公司 A kind of face image correcting method, system and terminal device
CN108073914A (en) * 2018-01-10 2018-05-25 成都品果科技有限公司 A kind of animal face key point mask method
CN108073914B (en) * 2018-01-10 2022-02-18 成都品果科技有限公司 Animal face key point marking method

Similar Documents

Publication Publication Date Title
US10929649B2 (en) Multi-pose face feature point detection method based on cascade regression
CN104834922B (en) Gesture identification method based on hybrid neural networks
Rekha et al. Shape, texture and local movement hand gesture features for indian sign language recognition
CN110458192B (en) Hyperspectral remote sensing image classification method and system based on visual saliency
CN108681725A (en) A kind of weighting sparse representation face identification method
CN103207986A (en) Face recognition method based on local binary pattern-histogram Fourier (LBP-HF)
CN107341510A (en) Image clustering method based on sparse orthogonal digraph Non-negative Matrix Factorization
CN107862680B (en) Target tracking optimization method based on correlation filter
CN107886558A (en) A kind of human face expression cartoon driving method based on RealSense
Saval-Calvo et al. 3D non-rigid registration using color: color coherent point drift
CN106570459A (en) Face image processing method
CN103714331A (en) Facial expression feature extraction method based on point distribution model
CN102495999A (en) Face recognition method
CN108154176B (en) 3D human body posture estimation algorithm aiming at single depth image
CN109543637A (en) A kind of face identification method, device, equipment and readable storage medium storing program for executing
CN106599833A (en) Field adaptation and manifold distance measurement-based human face identification method
CN105844667A (en) Structural target tracking method of compact color coding
CN105740838A (en) Recognition method in allusion to facial images with different dimensions
CN110598719A (en) Method for automatically generating face image according to visual attribute description
Avraam Static gesture recognition combining graph and appearance features
CN111783526B (en) Cross-domain pedestrian re-identification method using posture invariance and graph structure alignment
Wang et al. The facial expression recognition based on KPCA
CN112800882A (en) Mask face posture classification method based on weighted double-flow residual error network
CN106952287A (en) A kind of video multi-target dividing method expressed based on low-rank sparse
CN109918998A (en) A kind of big Method of pose-varied face

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170419

WD01 Invention patent application deemed withdrawn after publication
DD01 Delivery of document by public notice

Addressee: He Zhengdi

Document name: Notification that Application Deemed to be Withdrawn

DD01 Delivery of document by public notice