CN106650606A - Matching and processing method for face image and face image model construction system - Google Patents
- Publication number
- CN106650606A CN106650606A CN201610921354.9A CN201610921354A CN106650606A CN 106650606 A CN106650606 A CN 106650606A CN 201610921354 A CN201610921354 A CN 201610921354A CN 106650606 A CN106650606 A CN 106650606A
- Authority
- CN
- China
- Prior art keywords
- face
- facial image
- image
- grey
- feature vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The invention relates to a matching and processing method for face images and a face image model construction system. The matching method for a face image comprises the following steps: step S1, acquiring a face image with image acquisition equipment and preprocessing it to obtain a processed face image; step S2, performing feature extraction on the processed face image; and step S3, comparing face images. The matching process is refined: the large amount of information carried by the distinctive features of the face image is processed to extract multiple features, and detailed image matching is performed, which improves the recognition rate, stability and matching degree of face image matching.
Description
Technical field
The present invention relates to the field of image processing, and in particular to a processing method for face images.
Background technology
With the continuous development of information technology, people's requirements for convenient identity verification and recognition systems keep rising. Face recognition technology, being direct, friendly, fast and convenient, is easily accepted by users; it has become one of the most desirable bases for identity verification and has long been a research focus in the field of pattern recognition. Most existing face image matching methods suffer from shortcomings such as a low recognition rate and poor stability and cannot meet practical needs. A face image matching method that overcomes these shortcomings is therefore needed.
The content of the invention
It is an object of the invention to provide a matching method for face images, so as to improve the recognition rate and stability of face image matching.
To solve the above technical problem, the invention provides a matching method for face images, characterized by comprising the following steps:
Step S1: acquiring a face image with image acquisition equipment and preprocessing the face image to obtain a processed face image;
Step S2: performing feature extraction on the processed face image; and
Step S3: comparing face images.
Further, in step S1, the method of acquiring the face image with image acquisition equipment and preprocessing it to obtain the processed face image includes:
converting the face image into a gray face image using binarization technology; determining the approximate region of the eyes in the gray face image from its horizontal gray-level projection, then further narrowing this to a definite eye region using the gray-level ratio distribution characteristics of the eyes; locating the centers of the eyes within that region by a point transformation; taking the perpendicular bisector of the line connecting the two eye centers as the dividing line between the left and right halves of the gray face image; adjusting the gray levels of the two halves until they are consistent, yielding a gray-adjusted gray face image; and scanning the gray-adjusted image with a region threshold method, setting noise regions below a certain threshold as background and removing them, to obtain the processed gray face image.
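The preprocessing chain above can be sketched in Python with NumPy. This is a minimal illustration under stated assumptions, not the patent's implementation: the function names (`eye_band`, `suppress_noise_regions`), the use of the darkest run of rows as the eye region, and the simple per-pixel threshold are stand-ins for the horizontal gray-projection and region-threshold steps.

```python
import numpy as np

def horizontal_projection(gray):
    """Mean intensity of each row; the eye rows show a dark dip."""
    return gray.mean(axis=1)

def eye_band(gray, band_height=2):
    """Rough vertical region of the eyes: the darkest row in the upper
    half of the face, widened by band_height rows on each side (an
    assumed simplification of the horizontal gray-projection step)."""
    proj = horizontal_projection(gray)
    upper = proj[: gray.shape[0] // 2]
    center = int(np.argmin(upper))
    top = max(0, center - band_height)
    bottom = min(gray.shape[0], center + band_height + 1)
    return top, bottom

def suppress_noise_regions(gray, threshold):
    """Region-threshold step: pixels below the threshold are treated
    as background (set to 0), removing dark noise speckles."""
    out = gray.copy()
    out[out < threshold] = 0
    return out

# toy 6x6 "face": darker rows 1-2 stand in for the eyes
face = np.full((6, 6), 200, dtype=np.uint8)
face[1:3, :] = 40
top, bottom = eye_band(face)
print(top, bottom)
```

A real pipeline would follow this with the eye-center localization and the left/right gray-level balancing described above.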
Further, in step S2, the method of performing feature extraction on the processed face image includes:
applying median filtering and Gabor filtering to decompose the processed gray face image over 10 scales and 8 orientations into 80 filtered gray face images; extracting the texture information of the filtered gray face images with a wavelet transform; further partitioning that texture information into 320 non-overlapping rectangular regions; computing a frequency distribution histogram in each rectangular region and extracting 32 pattern types from the histograms as face features; at the same time, training on the texture information of the filtered gray face images with principal component analysis to obtain a set of transformation matrices; projecting the texture information of the gray face image into a subspace with the transformation matrices to obtain feature detection faces on the subspace; computing the projection coefficients of the texture information of the gray face image and the distribution law of the face features over that texture information; and combining the two to obtain the key features of the processed face image.
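The subspace side of this step can be sketched as follows. The Gabor kernel and the SVD-based principal-component transform are generic textbook forms; the function names, kernel parameters, and the toy 20-sample data are illustrative assumptions, not the patent's actual 10-scale, 8-orientation bank.

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam):
    """Real part of a Gabor filter: a Gaussian-windowed cosine grating,
    one member of a multi-scale, multi-orientation bank."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def pca_subspace(X, k):
    """Principal-component transform matrix from a stack of texture
    vectors X (one row per image), via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k]          # k x d transformation matrix

def project(x, W, mean):
    """Projection coefficients of one texture vector on the subspace."""
    return W @ (x - mean)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 64))      # 20 fake texture vectors
W = pca_subspace(X, 5)
coeffs = project(X[0], W, X.mean(axis=0))
print(coeffs.shape)  # (5,)
```

The rows of `W` are orthonormal principal directions, so projecting onto them and combining the coefficients with the histogram features mirrors the "combine the two" step in the text.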
Further, in step S3, the method of comparing face images includes:
extracting a typical face image from a face database and setting a comparison window simultaneously on the typical face image and on the feature detection face on the subspace, the initial size of the comparison window being the smallest rectangle containing the eye centers; gradually enlarging the comparison window, each enlargement scaling it by a ratio of 1.1, until the face in the typical face image or in the subspace feature detection face is contained in the window; extracting the portions of the typical face image and of the subspace feature detection face inside the comparison window to form a comparison image group; enlarging any image in the group that does not yet contain the face until it does; and applying a dyadic wavelet transform to the comparison image group for comparison detection, obtaining the face similarity.
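The window-growth schedule and a generic similarity score can be sketched as below. Only the 1.1 enlargement ratio comes from the text; the function names and the cosine measure are assumptions standing in for the wavelet-based comparison detection.

```python
import numpy as np

def grow_window(w0, h0, max_w, max_h, ratio=1.1):
    """Sequence of comparison-window sizes: starting from the minimal
    rectangle around the eye centers, each step enlarges the window by
    the ratio 1.1 until it spans the whole image."""
    sizes = [(w0, h0)]
    w, h = float(w0), float(h0)
    while w < max_w or h < max_h:
        w, h = min(w * ratio, max_w), min(h * ratio, max_h)
        sizes.append((int(round(w)), int(round(h))))
    return sizes

def cosine_similarity(a, b):
    """Similarity of two face feature vectors (a generic stand-in for
    the comparison detection on the comparison image group)."""
    a, b = np.ravel(a).astype(float), np.ravel(b).astype(float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sizes = grow_window(20, 12, 100, 100)
print(sizes[0], sizes[-1])  # (20, 12) (100, 100)
```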
The beneficial effect of the invention is that it matches face images through a refined sequence of steps: the large amount of information carried by the distinctive features of the face image is processed to extract multiple features, and detailed image matching is performed, improving the recognition rate, stability and matching degree of face image matching.
In a second aspect, the invention also provides a processing method for face images and a face image model construction system, which build a formalized model to represent the face image and facilitate subsequent face image processing.
To solve the above technical problem, the invention provides a processing method for face images, comprising the following steps:
Step S1: building a gray face image training set;
Step S2: calculating the average feature point position for the gray face image training set;
Step S3: translating and rotating the face, and obtaining the corresponding singular feature vector set; and
Step S4: obtaining the formalized model.
Further, in step S1, the method of building the gray face image training set includes:
given a training set of color face images, converting each color face image in the set into a gray face image using an HLS model conversion algorithm, the gray face images so obtained constituting the corresponding gray face image training set.
Further, in step S2, the method of calculating the average feature point position for the gray face image training set includes:
sampling the gray face image training set to obtain a set of sample windows; digitizing the sample window set to obtain a sample window matrix; converting the sample window matrix into a singular feature vector set according to the singular value decomposition theorem, the singular feature vector set being composed of individual feature points; and calculating the average position P of the feature points by the formula

P = (1/M) Σ s_i, i = 1, 2, …, M    (1)

where s_i is the coordinate value of a feature point, P is the average feature point position, and the variable M records the number of feature points.
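Formula (1) is simply the centroid of the feature points; a one-line NumPy version (the function name is illustrative):

```python
import numpy as np

def mean_feature_position(points):
    """Formula (1): P = (1/M) * sum of s_i over the M feature points."""
    s = np.asarray(points, dtype=float)
    return s.mean(axis=0)

P = mean_feature_position([(0, 0), (2, 0), (1, 3)])
print(P)  # [1. 1.]
```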
Further, in step S3, the method of translating and rotating the face and obtaining the corresponding singular feature vector set is as follows.
Taking the calculated average feature point position as the origin of a new coordinate axis, the face is translated and then rotated:
first, the new coordinate-axis origin is subtracted from each singular feature vector in the singular feature vector set, giving the translated singular feature vector set;
next, the coordinates of the eyebrows on the face are observed, and the coordinates (x1, y1), (x2, y2) of the two eyebrows are joined by a line to obtain the tilt angle α of the face, computed as

α = arctan((y2 − y1) / (x2 − x1))    (2)

the translated singular feature vector set is then rotated about the new coordinate origin by the angle α, with the rotation matrix corresponding to α as the rotation parameter; multiplying the coordinates of every translated singular feature vector by this rotation parameter yields the singular feature vector set after the rotation transformation.
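The translate-then-rotate alignment can be sketched as below. The rotation direction (by −α, so the brow line becomes horizontal) and the use of `arctan2` are assumptions consistent with formula (2); the function name is illustrative.

```python
import numpy as np

def align(points, origin, brow_left, brow_right):
    """Translate feature vectors to the new origin, then rotate by the
    face tilt angle alpha = arctan((y2 - y1) / (x2 - x1)) taken from
    the two brow coordinates, leveling the brow line."""
    pts = np.asarray(points, dtype=float) - np.asarray(origin, dtype=float)
    (x1, y1), (x2, y2) = brow_left, brow_right
    alpha = np.arctan2(y2 - y1, x2 - x1)
    c, s = np.cos(alpha), np.sin(alpha)
    R = np.array([[c, s], [-s, c]])   # rotation by -alpha
    return pts @ R.T
```

With brows at (0, 0) and (1, 1) the tilt is 45°, so a point at (1, 1) lands on the horizontal axis at distance √2 from the origin.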
Further, in step S4, the method of obtaining the formalized model includes:
uniformly partitioning the singular feature vector set after the rotation transformation and building a state matrix accordingly, the states in the state matrix corresponding one-to-one with the coordinates of the rotated singular feature vector set; calculating the transition probabilities between the states of the state matrix; and building a probability state model from this process.
The basic constituent units of the probability state model are the states in the state matrix, and each state corresponds to its transition probabilities to the other states, from which the formalized model is obtained.
In a third aspect, to solve the same technical problem, the invention further provides a face image model construction system, comprising, connected in sequence: a training set building module, an average feature point position calculation module, a singular feature vector set calculation module, and a formalized model building module.
Further, the training set building module is adapted to build the gray face image training set: given a training set of color face images, each color face image in the set is converted into a gray face image using an HLS model conversion algorithm, and the gray face images so obtained constitute the corresponding gray face image training set.
Further, the average feature point position calculation module is adapted to calculate the average feature point position for the gray face image training set: the gray face image training set is sampled to obtain a set of sample windows; the sample window set is digitized to obtain a sample window matrix; the sample window matrix is converted into a singular feature vector set according to the singular value decomposition theorem, the set being composed of individual feature points; and the average position P of the feature points is calculated by formula (1),

P = (1/M) Σ s_i, i = 1, 2, …, M    (1)

where s_i is the coordinate value of a feature point, P is the average feature point position, and the variable M records the number of feature points.
Further, the singular feature vector set calculation module is adapted to translate and rotate the face and obtain the corresponding singular feature vector set: taking the calculated average feature point position as the origin of a new coordinate axis, the face is translated and then rotated. First, the new coordinate-axis origin is subtracted from each singular feature vector in the set, giving the translated singular feature vector set. Next, the coordinates (x1, y1), (x2, y2) of the two eyebrows on the face are joined by a line to obtain the tilt angle α of the face,

α = arctan((y2 − y1) / (x2 − x1))    (2)

The translated singular feature vector set is then rotated about the new coordinate origin by the angle α, with the rotation matrix corresponding to α as the rotation parameter; multiplying the coordinates of every translated vector by this parameter yields the singular feature vector set after the rotation transformation.
Further, the formalized model building module is adapted to obtain the formalized model: the singular feature vector set after the rotation transformation is uniformly partitioned and a state matrix is built accordingly, the states in the state matrix corresponding one-to-one with the coordinates of the rotated singular feature vector set; the transition probabilities between the states of the state matrix are calculated, and a probability state model is built from this process. The basic constituent units of the probability state model are the states in the state matrix, and each state corresponds to its transition probabilities to the other states, from which the formalized model is obtained.
The beneficial effect of the invention is that, in processing face images, it not only accounts for the correction of image position but also generates a formalized model of the face image that embodies the face's features and facilitates storage of the image data; the formalized model can represent the face image strictly and accurately and can be used in subsequent face image processing.
Description of the drawings
The invention is further described below with reference to the accompanying drawings and embodiments.
Fig. 1 is a flow chart of the face image matching method in embodiment 1 of the invention;
Fig. 2 is a flow chart of the face image processing method in embodiment 3 of the invention;
Fig. 3 is a schematic block diagram of the face image model construction system of the invention.
Specific embodiments
The invention is explained in further detail below in conjunction with the accompanying drawings. The drawings are simplified schematic diagrams that illustrate only the basic structure of the invention, and therefore show only the components relevant to the invention.
Embodiment 1
As shown in Fig. 1, embodiment 1 provides a matching method for face images, comprising the following steps:
Step S1: acquiring a face image with image acquisition equipment and preprocessing the face image to obtain a processed face image;
Step S2: performing feature extraction on the processed face image; and
Step S3: comparing face images.
Specifically, in step S1 of embodiment 1, the image is collected with image acquisition equipment. During collection, the result may be affected by many factors such as posture, occlusion, lighting, expression, age and imaging conditions, so the collected face image may not meet the requirements of matching. Some factors, such as posture, occlusion and the surrounding environment, can be mitigated by non-technical means: several cameras can be installed at different angles so that a frontal face image is captured, or several fluorescent lamps can be installed to supplement the light. Factors such as expression, imaging conditions and age span, however, must be addressed by image processing and matching techniques. Image preprocessing is therefore an indispensable step in image recognition. Its purpose is to remove noise, enhance the discriminative information useful for face recognition, and extract the useful face information.
Face images obtained by collection are generally color images. Each pixel of a color image is a mixture of the red (R), green (G) and blue (B) primary colors, and combining the R, G and B components in different proportions yields millions of colors. A gray face image, by contrast, is a single-channel image whose pixel values span 256 levels from black to white (0–255). For the human eye, a color image carries rich information that helps characterize and recognize a face; for a computer, however, matching face images by the color information of a color image is affected by skin color and complex backgrounds, so useful discriminative information cannot be extracted from it. Moreover, the data volume of a color image is much larger than that of a gray face image, which would greatly inconvenience the feature extraction of step S2. A gray face image has a comparatively small data volume and is easy to process, and most image preprocessing methods are based on gray-level images, so the object of face discrimination is usually a gray image.
In the concrete execution of step S1, the color face image must first be converted into a gray face image. In a color face image, the R, G and B values of each pixel differ and so display different colors; in a gray-level image, the R, G and B values of each pixel are equal and so display gray. Because each of the R, G and B primaries ranges over 0–255, there are only 256 gray levels: (0, 0, 0) is pure black, (255, 255, 255) is pure white, and the values between them are gray. A gray image can therefore display only 256 gray shades. Image graying is the process of changing the R, G and B component values of each color image pixel by a conversion formula so that the three components become equal. The conversion between the R, G, B components of a color image and the gray image follows the standard luminance weighting:

Gray = 0.299·R + 0.587·G + 0.114·B
According to the above formula, embodiment 1 converts a color face image into a gray face image. The concrete execution of step S1 also removes noise from the face image: a face image is usually disturbed by noise during sampling and transmission, which strongly affects the face feature extraction of step S2, so to ease further image matching it is necessary to filter the original image for noise reduction.
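The graying step can be written in a few lines of NumPy. The 0.299/0.587/0.114 luminance weights are the standard values assumed above (the patent's own formula image is not reproduced in the text); the function name is illustrative.

```python
import numpy as np

def to_gray(rgb):
    """RGB -> gray with the standard luminance weights, so that the
    resulting pixels satisfy R = G = B as the passage describes.
    Input: H x W x 3 array; output: H x W uint8 array in 0-255."""
    w = np.array([0.299, 0.587, 0.114])
    return np.clip(rgb.astype(float) @ w, 0, 255).astype(np.uint8)
```

A pure-red pixel (255, 0, 0) maps to gray level 76, and black stays 0, matching the weighting above.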
Embodiment 2
A matching method for face images, comprising: step S1, acquiring a face image with image acquisition equipment and preprocessing it to obtain a processed face image; step S2, performing feature extraction on the processed face image; and step S3, comparing face images.
The region threshold method applied to the gray face image in step S1 performs threshold segmentation as follows: first a gray threshold within the range of the image's gray levels is determined, then the gray value of every pixel in the gray face image is compared with this threshold, and the pixels are divided into two classes according to the result: pixels whose gray value exceeds the threshold belong to one class, and pixels whose gray value falls below it belong to the other. The two classes of pixels occupy two classes of regions in the image, so classifying the pixels by the gray threshold achieves region segmentation. A thresholded image segmentation algorithm thus has two main steps: 1) determine the required segmentation threshold; 2) compare the threshold with the gray values of the pixels to segment the pixels of the gray face image. The threshold is the key to segmentation: if a suitable threshold can be determined, the gray face image can be separated easily. Once the threshold is determined, comparing pixel values with the threshold and dividing the pixels can be carried out in parallel, and the segmentation result directly yields the image regions.
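The two-class split described above reduces to a single vectorized comparison; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def threshold_segment(gray, t):
    """Two-class threshold segmentation: True where the pixel's gray
    value exceeds threshold t (object class), False otherwise
    (background class)."""
    return np.asarray(gray) > t

mask = threshold_segment([[10, 200], [90, 130]], 128)
print(mask)  # [[False  True] [False  True]]
```

Because the comparison is element-wise, every pixel is classified independently, which is exactly the parallelism the passage notes.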
In step S2, median filtering is used in the feature extraction on the processed face image. Median filtering is a nonlinear smoothing technique whose principle is to set the gray value of each pixel to the median of all the pixel gray values in a neighborhood window around that point, thereby eliminating isolated noise points. While filtering out noise, median filtering also preserves the edges of the gray face image well, and these edges play a positive role in face image matching.
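A direct (unoptimized) median filter matching that description, as a minimal sketch with an assumed edge-replication border:

```python
import numpy as np

def median_filter(gray, k=3):
    """k x k median filter: each output pixel is the median of its
    neighborhood window, which removes isolated noise points while
    keeping edges sharper than linear smoothing would."""
    g = np.asarray(gray, dtype=float)
    pad = k // 2
    padded = np.pad(g, pad, mode='edge')   # replicate the border
    out = np.empty_like(g)
    h, w = g.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out
```

A single bright spike in an otherwise flat region is replaced by the neighborhood median, while a flat region stays unchanged.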
In the concrete steps of step S3, the comparison image group is compared using a dyadic wavelet transform: the sub-band image grid of the low-frequency component of the comparison image group is divided into small cells, and several feature values, such as the variance and a direction variable of each cell, are computed. The feature values are normalized and stored in the face feature library, and similarity is judged by a classifier. Because the different frequency bands obtained by wavelet decomposition contain the face information of the different comparison image groups, different facial characteristics can be extracted from each wavelet packet. Wavelet decomposition is a multi-resolution analysis method; in this embodiment it reduces the dimension of the comparison image group, so the amount of computation during face feature extraction decreases and the processing speed improves, while localization is retained in both the frequency and time domains, every detail of the face in the comparison image group is progressively decomposed, and a good description of the face texture is obtained.
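As a minimal sketch of this step, the following uses a one-level 2-D Haar transform (a simple assumed stand-in for the dyadic wavelet transform) and computes per-cell variances of the low-frequency sub-band; all names and the 2x2 cell grid are illustrative.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet transform: returns the
    low-frequency approximation LL and the three detail sub-bands."""
    a = np.asarray(img, dtype=float)
    # pairwise averages/differences along rows
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # then along columns
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

def cell_variances(LL, cells=2):
    """Divide the low-frequency sub-band into a cells x cells grid and
    return the variance of each cell, as in the comparison step."""
    h, w = LL.shape
    hs, ws = h // cells, w // cells
    return np.array([[LL[i*hs:(i+1)*hs, j*ws:(j+1)*ws].var()
                      for j in range(cells)] for i in range(cells)])
```

Note how the transform halves each dimension, which is the dimension reduction the passage credits for the speed-up.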
Embodiment 3
As shown in Fig. 2, embodiment 3 provides a processing method for face images, comprising the following steps:
Step S1: building a gray face image training set;
Step S2: calculating the average feature point position for the gray face image training set;
Step S3: translating and rotating the face, and obtaining the corresponding singular feature vector set; and
Step S4: obtaining the formalized model.
Specifically, in step S1 the method of building the gray face image training set includes: given a training set of color face images, the CPU of a computer converts each color face image in the set into a gray face image using an HLS model conversion algorithm; the HLS model conversion algorithm is nonlinear, produces little edge brightness noise and smooths well; the gray face images so obtained constitute the corresponding gray face image training set.
Specifically, in step S2 the method of calculating the average feature point position for the gray face image training set includes: sampling the gray face image training set to obtain a set of sample windows; digitizing the sample window set to obtain a sample window matrix; converting the sample window matrix into a singular feature vector set according to the singular value decomposition theorem, the set being composed of individual feature points; and calculating the average position P of the feature points by formula (1),

P = (1/M) Σ s_i, i = 1, 2, …, M    (1)

where s_i is the coordinate value of a feature point, P is the average feature point position, and the variable M records the number of feature points.
Specifically, in step S3 the method of translating and rotating the face and obtaining the corresponding singular feature vector set is as follows: taking the calculated average feature point position as the origin of a new coordinate axis, the face is translated and rotated in order to eliminate differences in face position. First, the new coordinate-axis origin is subtracted from each singular feature vector in the set, giving the translated singular feature vector set. Next, the coordinates (x1, y1), (x2, y2) of the two eyebrows on the face are joined by a line to obtain the tilt angle α of the face,

α = arctan((y2 − y1) / (x2 − x1))    (2)

The translated singular feature vector set is then rotated about the new coordinate origin by the angle α, with the rotation matrix corresponding to α as the rotation parameter; multiplying the coordinates of every translated vector by this parameter yields the singular feature vector set after the rotation transformation.
Specifically, in step S4 the method of obtaining the formalized model includes: uniformly partitioning the singular feature vector set after the rotation transformation and building a state matrix accordingly, the states in the state matrix corresponding one-to-one with the coordinates of the rotated singular feature vector set; calculating the transition probabilities between the states of the state matrix; and building a probability state model from this process. The basic constituent units of the probability state model are the states in the state matrix, and each state corresponds to its transition probabilities to the other states, from which the formalized model is obtained.
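One minimal reading of this probability state model is a row-normalized transition matrix estimated from an observed state sequence; the sketch below assumes that reading (the patent does not give the estimation details), and the function name is illustrative.

```python
import numpy as np

def transition_matrix(states, n_states):
    """Transition probabilities between states: count each observed
    transition in the state sequence, then normalize each row so it
    sums to 1 (rows of never-visited states are left as zeros)."""
    counts = np.zeros((n_states, n_states), dtype=float)
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0
    return counts / row_sums

T = transition_matrix([0, 1, 0, 1, 1], 2)
print(T)  # [[0.  1. ] [0.5 0.5]]
```

Each row of `T` is the distribution over next states from one state, so a state together with its row is exactly a "basic constituent unit" in the sense of the text.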
Embodiment 4
As shown in Fig. 3, on the basis of embodiment 3, embodiment 4 provides a face image model construction system. The face image model construction system comprises, connected in sequence: a training set building module, an average feature point position calculation module, a singular feature vector set calculation module, and a formalized model building module.
Specifically, the training set building module is adapted to build the gray face image training set: given a training set of color face images, the CPU of a computer converts each color face image in the set into a gray face image using an HLS model conversion algorithm; the HLS model conversion algorithm is nonlinear, produces little edge brightness noise and smooths well; the gray face images so obtained constitute the corresponding gray face image training set.
Specifically, the average feature point position calculation module is adapted to calculate the average feature point position for the gray face image training set: the gray face image training set is sampled to obtain a set of sample windows; the sample window set is digitized to obtain a sample window matrix; the sample window matrix is converted into a singular feature vector set according to the singular value decomposition theorem, the set being composed of individual feature points; and the average position P of the feature points is calculated by formula (1),

P = (1/M) Σ s_i, i = 1, 2, …, M    (1)

where s_i is the coordinate value of a feature point, P is the average feature point position, and the variable M records the number of feature points.
Specifically, the singularity characteristics vector set calculation module is suitable to face translation, rotation transformation, and obtains corresponding
Singularity characteristics vector set, i.e., it is former using calculated coordinate value as new reference axis according to the average mark implantation of characteristic point
Point, in order to eliminate the difference of face location, carries out the translation of face and carries out rotation transformation, and its method includes:First will be unusual
Each singularity characteristics vector deducts new reference axis origin value in characteristic vector set, the singularity characteristics vector after being translated
Set;
then observing the coordinates of the eyebrows on the face, and connecting the coordinates (x1, y1), (x2, y2) of the two eyebrows into a line to obtain the tilt angle α of the face, calculated as

α = arctan((y2 − y1) / (x2 − x1));

and finally rotating the translated singular feature vector set about the new coordinate origin by the angle −α, the rotation parameter being the corresponding rotation matrix; the coordinates of all translated singular feature vectors are multiplied by the rotation parameter to obtain the rotated singular feature vector set.
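The translation and rotation steps can be sketched together. Rotating by −α (so that the eyebrow line becomes horizontal) is an assumed reading of the rotation parameter, since the original formula image is not reproduced in the text, and `normalize_pose` is a hypothetical name:

```python
import math

def normalize_pose(vectors, origin, left_brow, right_brow):
    """Translate the singular feature vectors so that `origin` (the
    average distribution value P) becomes the new coordinate origin,
    then rotate by -alpha to undo the face tilt.  alpha is the angle of
    the line joining the two eyebrow points; the rotation convention is
    an assumption."""
    (x1, y1), (x2, y2) = left_brow, right_brow
    alpha = math.atan2(y2 - y1, x2 - x1)      # tilt angle of the face
    cos_a, sin_a = math.cos(alpha), math.sin(alpha)
    ox, oy = origin
    result = []
    for x, y in vectors:
        tx, ty = x - ox, y - oy               # translation step
        # rotation by -alpha: multiply by the matrix
        # [ cos a   sin a ]
        # [-sin a   cos a ]
        result.append((tx * cos_a + ty * sin_a, -tx * sin_a + ty * cos_a))
    return result
```

For example, a face tilted 45° (eyebrows at (0, 0) and (1, 1)) is rotated so that the eyebrow direction maps onto the x-axis.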
Specifically, the formalized model building module is adapted to obtain the formalized model: the rotated singular feature vector set is evenly partitioned and a state matrix is built accordingly, with a one-to-one correspondence between the states in the state matrix and the coordinates of the rotated singular feature vector set; the transition probability between states in the state matrix is calculated, and through this process a probabilistic state model is built. The basic constituent unit of the probabilistic state model is the state in the state matrix, and each state corresponds to a transition probability with every other state, thereby obtaining the formalized model.
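The transition probabilities between states could be estimated as below. The patent does not say how they are computed, so maximum-likelihood counting over an observed state sequence is an assumed estimator, and the function name is illustrative:

```python
from collections import Counter, defaultdict

def transition_probabilities(state_sequence):
    """Estimate the transition probability between states of the state
    matrix from an observed sequence of state labels: count consecutive
    pairs and normalize by the number of departures from each state.
    This maximum-likelihood estimator is an assumption; the patent only
    states that each state carries transition probabilities to the
    other states."""
    pair_counts = Counter(zip(state_sequence, state_sequence[1:]))
    out_counts = Counter(state_sequence[:-1])
    probs = defaultdict(dict)
    for (a, b), n in pair_counts.items():
        probs[a][b] = n / out_counts[a]
    return dict(probs)
```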
Taking the above preferred embodiments of the present invention as guidance, and through the above description, those skilled in the art can make various changes and modifications without departing from the technical idea of the invention. The technical scope of the invention is not limited to the content of the specification; its technical scope must be determined according to the scope of the claims.
Claims (10)
1. A matching method for a face image, characterized by comprising the steps of:
Step S1, acquiring a face image with an image acquisition device and pre-processing it to obtain a processed face image;
Step S2, performing feature extraction on the processed face image; and
Step S3, comparing face images.
2. The matching method for a face image according to claim 1, characterized in that
the method in step S1 of acquiring the face image with an image acquisition device and pre-processing it to obtain the processed face image comprises:
converting the face image into a grayscale face image using a binarization technique; determining the approximate region of the eyes in the grayscale face image using its horizontal gray projection; further determining the definite range of the eyes in the grayscale face image using the gray-level ratio distribution characteristics of the eyes; finding the centers of the eyes within that definite range using a point transformation; taking the perpendicular bisector of the line connecting the centers of the two eyes as the dividing line between the left and right halves of the grayscale face image; adjusting the gray levels of the left and right half faces to be consistent, thereby obtaining a gray-adjusted grayscale face image; and searching the gray-adjusted grayscale face image with a region threshold method, setting noise regions below a certain threshold as background and removing them, to obtain the processed grayscale face image.
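The horizontal gray projection used in claim 2 to locate the approximate eye region can be sketched as follows; the band-selection rule (eye rows are dark, so the darkest row is a first guess) is an assumption, and both function names are illustrative:

```python
def horizontal_gray_projection(gray_image):
    """Row-wise sum of gray values of a grayscale image given as nested
    lists.  Claim 2 uses the horizontal gray projection to find the
    approximate vertical band containing the eyes."""
    return [sum(row) for row in gray_image]

def approximate_eye_row(gray_image):
    """Index of the darkest row: a crude stand-in for the eye band,
    assuming eye pixels are darker than the surrounding skin."""
    projection = horizontal_gray_projection(gray_image)
    return min(range(len(projection)), key=projection.__getitem__)
```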
3. The matching method for a face image according to claim 2, characterized in that
the method in step S2 of performing feature extraction on the processed face image comprises:
decomposing the processed grayscale face image, by median filtering and Gabor filtering, into 80 filtered grayscale face images of 10 scales and 8 orientations; extracting the texture information of the filtered grayscale face images using the wavelet transform method; further dividing that texture information into 320 mutually non-overlapping rectangular regions; computing a frequency distribution histogram in each rectangular region and extracting 32 pattern types from the histograms as face features; meanwhile training on the texture information of the filtered grayscale face images using the method of principal component analysis to obtain a set of transformation matrices; projecting the texture information of the grayscale face image onto a subspace using the transformation matrices to obtain feature detection faces on the subspace; calculating the projection coefficients of the texture information of the grayscale face image; calculating the distribution law of the face features over the texture information of the grayscale face image; and combining the two to obtain the key features of the processed face image.
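The filter bank of claim 3 (10 scales × 8 orientations = 80 filtered images) could be enumerated as below. The claim fixes only the counts, so the dyadic frequency spacing and the evenly spaced angles are assumptions, and `gabor_bank_parameters` is a hypothetical name:

```python
import math

def gabor_bank_parameters(scales=10, orientations=8):
    """Enumerate the (frequency, orientation) pairs of a Gabor filter
    bank with the counts given in claim 3.  The dyadic frequency
    spacing (pi/2 divided by sqrt(2)^scale) is a common convention,
    assumed here; the patent does not specify it."""
    params = []
    for s in range(scales):
        frequency = (math.pi / 2) / (math.sqrt(2) ** s)  # assumed spacing
        for o in range(orientations):
            theta = o * math.pi / orientations           # evenly spaced angles
            params.append((frequency, theta))
    return params
```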
4. The matching method for a face image according to claim 3, characterized in that
the method in step S3 of comparing face images comprises:
extracting a typical face image from a face database; setting a contrast window simultaneously on the typical face image and on the feature detection faces on the subspace, the initial size of the contrast window being the minimal rectangle containing the centers of the eyes; enlarging the contrast window step by step, by a ratio of 1.1 per step, until the face in the typical face image or in the feature detection face on the subspace is contained in the contrast window; extracting the contrast windows to form a contrast image group; expanding any image in the contrast image group that does not contain a face so that it does; and applying the dyadic wavelet transform to the contrast image group for comparison detection, to obtain the face similarity.
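The stepwise enlargement of the contrast window by a ratio of 1.1 can be sketched as follows; the stopping condition (window no longer fits the image) and the function name are assumptions made for the sketch:

```python
def window_sizes(initial_width, initial_height, max_width, max_height, ratio=1.1):
    """Successive sizes of the contrast window from claim 4: start from
    the minimal rectangle containing the eye centers and multiply both
    dimensions by 1.1 per step while the window still fits inside the
    (max_width, max_height) image bounds."""
    w, h = float(initial_width), float(initial_height)
    sizes = []
    while w <= max_width and h <= max_height:
        sizes.append((round(w), round(h)))
        w *= ratio
        h *= ratio
    return sizes
```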
5. A processing method for a face image, characterized by comprising the steps of:
Step S1, building a grayscale face image training set;
Step S2, calculating the feature point average distribution value corresponding to the grayscale face image training set;
Step S3, translating and rotating the face and obtaining the corresponding singular feature vector set; and
Step S4, obtaining a formalized model.
6. The processing method according to claim 5, characterized in that
the method in step S1 of building the grayscale face image training set comprises:
given a training set of color face images, calculating with the HLS model conversion algorithm to convert each color face image in the color face image training set into a grayscale face image, the grayscale face images forming the corresponding grayscale face image training set;
and the method in step S2 of calculating the feature point average distribution value corresponding to the grayscale face image training set comprises:
sampling the grayscale face image training set to obtain a sample window set; digitizing the sample window set to obtain a sample window matrix; converting the sample window matrix into a singular feature vector set according to the singular value decomposition theorem, the singular feature vector set being composed of individual feature points; and calculating the average distribution value P of the feature points by the formula

P = (1/M) · Σ s_i, i = 1, 2, …, M (1)

where in formula (1) s_i is the coordinate value of a feature point, P is the average distribution value of the feature points, and the variable M records the number of feature points.
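Claim 6 invokes the singular value decomposition theorem to turn the sample window matrix into singular feature vectors. As a minimal stand-in, the leading right singular vector can be approximated by power iteration on AᵀA; this is an illustrative sketch of the decomposition, not the patent's full algorithm:

```python
def leading_singular_vector(matrix, iterations=100):
    """Approximate the leading right singular vector of `matrix` (nested
    lists of floats) by power iteration on A^T A.  A full SVD would
    yield all singular vectors; this sketch recovers only the dominant
    one to illustrate the idea."""
    rows, cols = len(matrix), len(matrix[0])
    v = [1.0] * cols
    for _ in range(iterations):
        # w = A v
        w = [sum(matrix[i][j] * v[j] for j in range(cols)) for i in range(rows)]
        # v = A^T w, then normalize
        v = [sum(matrix[i][j] * w[i] for i in range(rows)) for j in range(cols)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return v
```

For a diagonal matrix diag(3, 1), the dominant right singular vector is ±(1, 0).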
7. The processing method according to claim 6, characterized in that
the method in step S3 of translating and rotating the face and obtaining the corresponding singular feature vector set comprises: taking the coordinate value calculated from the average distribution value of the feature points as the new coordinate axis origin, and translating and rotating the face, by:
first subtracting the new coordinate axis origin value from each singular feature vector in the singular feature vector set, to obtain the translated singular feature vector set;
then observing the coordinates of the eyebrows on the face, and connecting the coordinates (x1, y1), (x2, y2) of the two eyebrows into a line to obtain the tilt angle α of the face, calculated as α = arctan((y2 − y1) / (x2 − x1));
and rotating the translated singular feature vector set about the new coordinate origin by the angle −α, the rotation parameter being the corresponding rotation matrix, the coordinates of all translated singular feature vectors being multiplied by the rotation parameter to obtain the rotated singular feature vector set;
and the method in step S4 of obtaining the formalized model comprises:
evenly partitioning the rotated singular feature vector set and building a state matrix accordingly, with a one-to-one correspondence between the states in the state matrix and the coordinates of the rotated singular feature vector set; calculating the transition probability between states in the state matrix, and building a probabilistic state model through this process;
the basic constituent unit of the probabilistic state model being the state in the state matrix, each state corresponding to a transition probability with every other state, thereby obtaining the formalized model.
8. A face image model construction system, characterized by comprising:
a training set building module, a feature point average distribution value computing module, a singular feature vector set calculation module and a formalized model building module, connected in sequence.
9. The face image model construction system according to claim 8, characterized in that
the training set building module is adapted to build a grayscale face image training set, i.e.
given a training set of color face images, calculating with the HLS model conversion algorithm to convert each color face image in the color face image training set into a grayscale face image, the grayscale face images forming the corresponding grayscale face image training set;
and the feature point average distribution value computing module is adapted to calculate the feature point average distribution value corresponding to the grayscale face image training set, i.e.
sampling the grayscale face image training set to obtain a sample window set; digitizing the sample window set to obtain a sample window matrix; converting the sample window matrix into a singular feature vector set according to the singular value decomposition theorem, the singular feature vector set being composed of individual feature points; and calculating the average distribution value P of the feature points by the formula

P = (1/M) · Σ s_i, i = 1, 2, …, M (1)

where in formula (1) s_i is the coordinate value of a feature point, P is the average distribution value of the feature points, and the variable M records the number of feature points.
10. The face image model construction system according to claim 9, characterized in that
the singular feature vector set calculation module is adapted to translate and rotate the face and obtain the corresponding singular feature vector set, i.e.
taking the coordinate value calculated from the average distribution value of the feature points as the new coordinate axis origin, and translating and rotating the face, by:
first subtracting the new coordinate axis origin value from each singular feature vector in the singular feature vector set, to obtain the translated singular feature vector set;
then observing the coordinates of the eyebrows on the face, and connecting the coordinates (x1, y1), (x2, y2) of the two eyebrows into a line to obtain the tilt angle α of the face, calculated as α = arctan((y2 − y1) / (x2 − x1));
and rotating the translated singular feature vector set about the new coordinate origin by the angle −α, the rotation parameter being the corresponding rotation matrix, the coordinates of all translated singular feature vectors being multiplied by the rotation parameter to obtain the rotated singular feature vector set;
and the formalized model building module is adapted to obtain the formalized model, i.e.
evenly partitioning the rotated singular feature vector set and building a state matrix accordingly, with a one-to-one correspondence between the states in the state matrix and the coordinates of the rotated singular feature vector set; calculating the transition probability between states in the state matrix, and building a probabilistic state model through this process;
the basic constituent unit of the probabilistic state model being the state in the state matrix, each state corresponding to a transition probability with every other state, thereby obtaining the formalized model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610921354.9A CN106650606A (en) | 2016-10-21 | 2016-10-21 | Matching and processing method for face image and face image model construction system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106650606A true CN106650606A (en) | 2017-05-10 |
Family
ID=58856172
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610921354.9A Pending CN106650606A (en) | 2016-10-21 | 2016-10-21 | Matching and processing method for face image and face image model construction system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106650606A (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106570459A (en) * | 2016-10-11 | 2017-04-19 | 付昕军 | Face image processing method |
Non-Patent Citations (5)
Title |
---|
LIN KEZHENG ET AL.: "Using 2DGabor values and kernel fisher discriminant analysis for face recognition", 《PROCEEDINGS OF THE 2ND INTERNATIONAL CONFERENCE ON INFORMATION SCIENCE AND ENGINEERING》 * |
WANG XIAN ET AL.: "Face recognition based on Gabor wavelet transform and block PCA", 《Computer Engineering and Applications》 * |
HU WEIPING: "Illumination compensation algorithm for face images based on symmetric blocks", 《Journal of Guangxi Academy of Sciences》 * |
CHEN XIUDUAN: "Research on face recognition based on a fused Gabor and PCA algorithm", 《Wanfang Data》 * |
YU BIN ET AL.: "Image Processing Based on MATLAB and Genetic Algorithms", 30 September 2015, Xidian University Press * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107883541A (en) * | 2017-10-24 | 2018-04-06 | 珠海格力电器股份有限公司 | Air conditioning control method and device |
CN107944427A (en) * | 2017-12-14 | 2018-04-20 | 厦门市美亚柏科信息股份有限公司 | Dynamic human face recognition methods and computer-readable recording medium |
CN107944427B (en) * | 2017-12-14 | 2020-11-06 | 厦门市美亚柏科信息股份有限公司 | Dynamic face recognition method and computer readable storage medium |
CN109196438A (en) * | 2018-01-23 | 2019-01-11 | 深圳市大疆创新科技有限公司 | A kind of flight control method, equipment, aircraft, system and storage medium |
CN108297797A (en) * | 2018-02-09 | 2018-07-20 | 安徽江淮汽车集团股份有限公司 | A kind of vehicle mirrors regulating system and adjusting method |
CN109740423A (en) * | 2018-11-22 | 2019-05-10 | 霍尔果斯奇妙软件科技有限公司 | Ethnic recognition methods and system based on face and wavelet packet analysis |
CN110471194A (en) * | 2019-01-23 | 2019-11-19 | 上海理工大学 | The measurement method of human eye top rake |
CN110266940A (en) * | 2019-05-29 | 2019-09-20 | 昆明理工大学 | A kind of face-video camera active pose collaboration face faces image acquiring method |
CN110706263A (en) * | 2019-09-30 | 2020-01-17 | 武汉工程大学 | Image processing method, device, equipment and computer readable storage medium |
CN110706263B (en) * | 2019-09-30 | 2023-06-06 | 武汉工程大学 | Image processing method, device, equipment and computer readable storage medium |
CN112101058A (en) * | 2020-08-17 | 2020-12-18 | 武汉诺必答科技有限公司 | Method and device for automatically identifying test paper bar code |
CN112101058B (en) * | 2020-08-17 | 2023-05-09 | 武汉诺必答科技有限公司 | Automatic identification method and device for test paper bar code |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20170510 |