CN101958000B - Face image-picture generating method based on sparse representation - Google Patents


Info

Publication number
CN101958000B
CN101958000B CN2010102893309A CN201010289330A
Authority
CN
China
Prior art keywords
portrait
photo
piece
pseudo
dictionary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2010102893309A
Other languages
Chinese (zh)
Other versions
CN101958000A (en)
Inventor
高新波
王楠楠
李洁
王斌
张杰伟
邓成
韩冠
肖冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN2010102893309A priority Critical patent/CN101958000B/en
Publication of CN101958000A publication Critical patent/CN101958000A/en
Application granted granted Critical
Publication of CN101958000B publication Critical patent/CN101958000B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a face sketch-photo generation method based on sparse representation, which mainly addresses the low definition and blurred details of the pseudo-sketches and pseudo-photos generated by existing methods. The method is implemented as follows: generate an initial pseudo-sketch or initial pseudo-photo with an existing generation method; divide all images into blocks, then train a sketch-block dictionary and a photo-block dictionary on the training sample set; use the two dictionaries to synthesize high-definition feature information from each input test photo block or test sketch block; add the synthesized high-definition feature information to the corresponding initial pseudo-sketch block or initial pseudo-photo block to obtain a final high-definition pseudo-sketch block or pseudo-photo block; and fuse all the high-definition blocks into a complete pseudo-sketch or pseudo-photo. Compared with the prior art, the generated pseudo-sketches and pseudo-photos have high definition and distinct details, and can be used for face recognition and face retrieval.

Description

Face sketch-photo generation method based on sparse representation
Technical field
The invention belongs to the technical field of image processing and relates to face sketch-photo generation; it can be used for face recognition and retrieval in fields such as criminal investigation and counter-terrorism.
Background technology
With the arrival of the information age, people increasingly appreciate the importance of information security. Identity recognition and authentication are effective means of ensuring information security and have developed rapidly in recent years. Face-based recognition and authentication is one of the most convenient and effective identity verification technologies, so face recognition has attracted wide attention in recent years. Because of differences in imaging mechanisms, a face image can take multiple forms, such as a photo or a sketch; face recognition is therefore not limited to the recognition of face photos, and example-based face recognition correspondingly falls into two modes: photo-based recognition and sketch-based recognition. Photo-based face recognition has been applied in many fields, such as access control systems, search engines, and video surveillance. In many situations, however, for example in criminal investigation and in the pursuit of terrorist suspects, no photo of the suspect is available; there is only a sketch produced through the cooperation of an artist and an eyewitness, and identification must then be carried out by matching the sketch against the existing police photo database. Conversely, when a suspect is arrested and a face photo of that suspect can be obtained, the photo can be used to search the sketch database built up by the police in past cases, so as to determine from the retrieval results whether the suspect has committed crimes before and, from the number of sketches retrieved, how many times.
In summary, sketch-based face recognition is mainly applied in two situations: first, determining a suspect's identity; second, verifying whether someone has a criminal history and, further, determining the number of offenses. In the first case, face recognition can be achieved by using the sketch as the test image and matching it against the existing police photo database. In the second case, the face photo can be used as the test image and matched against the existing police sketch database. However, because their generation mechanisms differ, sketches and photos are heterogeneous, and conventional photo-to-photo recognition algorithms — see "Chellappa R, Wilson C, Sirohey S. Human and Machine Recognition of Faces: a Survey. Proceedings of the IEEE, 83(5): 705-741, 1995", "Zhao W, Chellappa R, Rosenfeld A, Phillips P J. Face Recognition: a Survey. ACM Computing Surveys, 35(4): 399-458, 2003", and "Phillips P J, Flynn P J, Scruggs T, Bowyer K W, Chang J, Hoffman K, Marques J, Min J, Worek W. Overview of the Face Recognition Grand Challenge. In Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20-25 June 2005" — all match photos against a photo database, and encounter difficulties when a sketch must be matched against a photo database or a photo against a sketch database. To overcome these difficulties, the sketch or photo should first be converted into the same representation space, after which recognition can be carried out within that single space (the photo space or the sketch space) with a standard face recognition method. Heterogeneous image conversion has therefore become a problem demanding a prompt solution.
To address the above problem, two strategies can convert heterogeneous images into homogeneous ones: the first converts a sketch into a photo, and the second converts a photo into a sketch. In existing methods, "face image" refers to a face photo or a face sketch, and "pseudo-image" refers to a pseudo-sketch or a pseudo-photo. Sketches can be divided into two broad classes. The first is simple line drawings and caricatures, which contain almost no complex information such as texture detail. The second is complex sketches, which, compared with line drawings and caricatures, have complicated texture detail and carry much more information. Existing research on complex sketches proceeds mainly along two lines: converting a sketch containing less texture information into a photo containing rich texture information, and conversely converting a photo containing rich texture information into a complex sketch.
1. Techniques for converting a photo into a sketch mainly comprise three categories:
The first is linear methods. A principal component analysis (PCA) algorithm is used to train separate eigen-subspaces in the photo space and the sketch space; the projection coefficients of the photo to be converted are computed in the photo eigen-subspace, the reconstruction coefficients with which the bases of the photo eigen-subspace reconstruct this photo are obtained from these projection coefficients, and the pseudo-sketch is then reconstructed in the sketch space from the sketch bases corresponding to the photo bases together with these reconstruction coefficients. This method assumes that the mapping between photos and sketches is linear and cannot truly reflect the nonlinear relationship between the two, so the results are noisy, with low definition and blurred details.
The second is pseudo-nonlinear methods, which approximate the nonlinearity piecewise-linearly. Specifically, the photo-sketch pairs in the training set and the photo to be converted are divided uniformly into blocks; for each block of the photo to be converted, the K most similar blocks are found among all training photo blocks; a pseudo-sketch block is then produced by linearly weighting the sketch blocks corresponding to these K photo blocks; finally, all the resulting pseudo-sketch blocks are combined into a complete pseudo-sketch. This method approximates the global nonlinear relationship through local linear combinations, but is still not a truly nonlinear method; see "Liu Q S, Tang X O. A Nonlinear Approach for Face Sketch Synthesis and Recognition. IEEE International Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20-26 June 2005" and "Gao X B, Zhong J J, Tao D C, Li X L. Local Face Sketch Synthesis Learning. Neurocomputing, 71(10-12): 1921-1930, 2008". Because the K neighbour blocks are nearest in Euclidean distance, they are not necessarily the K blocks most relevant to the photo block to be converted, and because the number of nearest neighbours is fixed, the results have low definition and blurred details.
The third is nonlinear methods, mainly based on the embedded hidden Markov model (E-HMM). An E-HMM is used to model the nonlinear relationship between photos and sketches, and the photo to be converted is turned into a pseudo-sketch according to the learned model. Considering that a single model cannot fully depict the complex nonlinear relationship between photos and sketches, the idea of selective ensembles was introduced: an individual sketch generator is obtained for each photo-sketch pair, and a subset of the individual generators is selected and fused, thereby mapping the photo to be converted to the corresponding pseudo-sketch. On this basis, the images are further divided into blocks, each pair of training photo block and sketch block is modeled with the above method, each photo block to be converted is turned into a pseudo-sketch block according to the model, and the pseudo-sketch blocks are fused into a pseudo-sketch; see "Gao X B, Zhong J J, Tao D C, Li X L. Local Face Sketch Synthesis Learning. Neurocomputing, 71(10-12): 1921-1930, 2008". This method also models with the K nearest neighbours of the photo to be converted, which likewise leads to results with low definition and blurred details.
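The pseudo-nonlinear, patch-level K-nearest-neighbour strategy described above can be sketched as follows. This is a minimal illustration, not the exact method of any cited reference: the patch dimension, the value of K, the toy data, and the use of least-squares combination weights are all illustrative assumptions.

```python
import numpy as np

def knn_patch_synthesis(test_patch, train_photo_patches, train_sketch_patches, K=5):
    """Pseudo-nonlinear synthesis sketch: find the K training photo patches
    nearest (Euclidean) to the test patch, then apply the least-squares
    reconstruction weights to the corresponding sketch patches."""
    dists = np.linalg.norm(train_photo_patches - test_patch, axis=1)
    idx = np.argsort(dists)[:K]                 # indices of the K nearest patches
    A = train_photo_patches[idx].T              # (dim, K) matrix of neighbours
    w, *_ = np.linalg.lstsq(A, test_patch, rcond=None)  # combination weights
    return train_sketch_patches[idx].T @ w      # weighted sketch-patch combination

# toy data: 50 "photo" patches with a linear photo-to-sketch mapping
rng = np.random.default_rng(0)
photos = rng.normal(size=(50, 16))
sketches = photos * 0.5                         # toy linear mapping for the demo
patch = photos[3] + 0.01 * rng.normal(size=16)  # noisy copy of training patch 3
pseudo_sketch = knn_patch_synthesis(patch, photos, sketches, K=5)
```

Because K is fixed and the neighbours are chosen by Euclidean distance alone, the selected blocks are not necessarily the most relevant ones; this is the limitation the invention addresses by selecting neighbours adaptively through sparse representation.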
2. Techniques for converting a sketch into a photo mainly comprise the following two methods:
The first is subspace-based methods, which perform feature analysis on a concatenated space. The photo space and the sketch space are first concatenated into a joint space; a PCA algorithm is trained on the joint space to construct a global sketch-photo subspace, which is then split into a photo eigen-subspace and a sketch eigen-subspace; the projection coefficients of the sketch to be converted are computed in the sketch eigen-subspace; finally a face image vector is reconstructed in the global subspace from these projection coefficients, and the first half of this vector is the pseudo-photo. This method assumes that the mapping between photos and sketches is linear, whereas the actual relationship is far more complex, so the results are noisy, with low definition and blurred details.
The second is methods based on the embedded hidden Markov model. This method is symmetric to the E-HMM pseudo-sketch generation described above: exchanging the roles of sketch and photo yields the symmetric counterpart; see "Xiao B, Gao X B, Li X L, Tao D C. A New Approach for Face Recognition by Sketches in Photos. Signal Processing, 89(8): 1531-1539, 2009". Likewise, because of the K-nearest-neighbour step, the generated pseudo-photos have low definition and blurred details.
Summary of the invention
The object of the invention is to overcome the deficiencies of the above existing methods by proposing a face sketch-photo generation method based on sparse representation, so as to improve the definition of the generated pseudo-sketches and pseudo-photos and make their details more distinct.
The technical scheme that realizes the object of the invention comprises:
1. A method for generating a sketch from a face photo based on sparse representation, comprising the steps of:
(1) Divide the sketch-photo pair set into a training sample set and a test sample set, and choose a test photo P from the test sample set;
(2) Using an existing pseudo-sketch generation method with the training sample set, generate an initial pseudo-sketch S^0 corresponding to the test photo P;
(3) Divide the initial pseudo-sketch S^0 and the test photo P into blocks of identical size and identical overlap, where S^0 = {S^0_1, S^0_2, ..., S^0_M}, P = {P_1, P_2, ..., P_M}, and M is the total number of blocks; extract the feature vector f of each block of the test photo;
(4) Jointly learn a sketch-block dictionary D_s and a photo-block dictionary D_p from the training sample set:
(4a) Randomly select 20000 blocks from the photo-block set and the sketch-block set of the training samples, where each photo block corresponds to a sketch block; extract the first and second derivatives of each photo block as its features, and take the pixel values of each sketch block minus the block mean as its features; concatenate the resulting sketch-block feature and photo-block feature into a single column, and normalize it;
(4b) Using the normalized combined features, solve the following formula by alternating iteration to obtain the coupled dictionary D:

min_{D,C} ||I − DC||₂² + β||C||₁,

where I is the matrix formed by the normalized combined features, each column being one normalized combined feature, C is the sparse representation coefficient matrix to be solved, and β is the sparse representation penalty factor, taken as 0.05 in the experiments;
(4c) Decompose the coupled dictionary D obtained in (4b) according to D = [D_p^T, D_s^T]^T into two dictionaries: the sketch-block dictionary D_s and the photo-block dictionary D_p, and normalize each column of the two dictionaries, where the superscript T denotes matrix transpose;
(5) Using the feature vector f obtained in step (3) and the photo-block dictionary D_p obtained in step (4), find its sparse representation according to the following formula, obtaining the sparse representation coefficients w:

min_w ||f − D_p w||₂² + β||w||₁,

where β is the sparse representation penalty factor, taken as 0.05 in the experiments;
(6) Using the sketch-block dictionary D_s obtained in step (4) and the sparse representation coefficients w obtained in step (5), synthesize the high-definition detail feature block S̃_i according to the following formula:

S̃_i = D_s w,

where i = 1, 2, ..., M and M is the total number of feature blocks;
(7) Add the high-definition detail feature block S̃_i obtained in step (6) to the corresponding block S^0_i of the initial pseudo-sketch obtained in step (2), so as to enhance definition and detail, obtaining the final pseudo-sketch block;
(8) Repeat steps (5)-(7) until the M final pseudo-sketch blocks are obtained, and combine these final pseudo-sketch blocks into the pseudo-sketch corresponding to the test photo.
2. A method for generating a photo from a face sketch based on sparse representation, comprising the steps of:
1) Divide the sketch-photo pair set into a training sample set and a test sample set, and choose a test sketch S from the test sample set;
2) Using an existing pseudo-photo generation method with the training sample set, generate an initial pseudo-photo P^0 corresponding to the test sketch S;
3) Divide the initial pseudo-photo P^0 and the test sketch S into blocks of identical size and identical overlap, where P^0 = {P^0_1, P^0_2, ..., P^0_M}, S = {S_1, S_2, ..., S_M}, and M is the total number of blocks; extract the feature vector f of each block of the test sketch;
4) Jointly learn the sketch-block dictionary D_s and the photo-block dictionary D_p from the training sample set:
4a) Randomly select 20000 blocks from the photo-block set and the sketch-block set of the training samples, where each photo block corresponds to a sketch block; extract the first and second derivatives of each sketch block as its features, and take the pixel values of each photo block minus the block mean as its features; concatenate the resulting sketch-block feature and photo-block feature into a single column, and normalize it;
4b) Using the normalized combined features, solve the following formula by alternating iteration to obtain the coupled dictionary D:

min_{D,C} ||I − DC||₂² + β||C||₁,

where I is the matrix formed by the normalized combined features, each column being one normalized combined feature, C is the sparse representation coefficient matrix to be solved, and β is the sparse representation penalty factor, taken as 0.05 in the experiments;
4c) Decompose the coupled dictionary D obtained in 4b) according to D = [D_p^T, D_s^T]^T into two dictionaries: the sketch-block dictionary D_s and the photo-block dictionary D_p, and normalize each column of the two dictionaries, where the superscript T denotes matrix transpose;
5) Using the feature vector f obtained in step 3) and the sketch-block dictionary D_s obtained in step 4), find its sparse representation according to the following formula, obtaining the sparse representation coefficients w:

min_w ||f − D_s w||₂² + β||w||₁,

where β is the sparse representation penalty factor, taken as 0.05 in the experiments;
6) Using the photo-block dictionary D_p obtained in step 4) and the sparse representation coefficients w obtained in step 5), synthesize the high-definition detail feature block P̃_i according to the following formula:

P̃_i = D_p w,

where i = 1, 2, ..., M and M is the total number of feature blocks;
7) Add the high-definition detail feature block P̃_i obtained in step 6) to the corresponding block P^0_i of the initial pseudo-photo obtained in step 2), so as to enhance definition and detail, obtaining the final pseudo-photo block;
8) Repeat steps 5)-7) until the M final pseudo-photo blocks are obtained, and combine these final pseudo-photo blocks into the pseudo-photo corresponding to the test sketch.
Because the invention establishes a model that jointly learns the photo-block dictionary D_p and the sketch-block dictionary D_s, the sparse representation coefficients of a test image block are identical to those of the image block to be synthesized. At the same time, because the sparse representation coefficients implicitly and adaptively select the neighbour blocks most relevant to the test image block, the number of neighbours used to synthesize a target image block is not fixed. As a result, the generated images have high definition and distinct details, overcoming the defect of existing methods, in which selecting a fixed number of neighbours leads to indistinct details and low definition.
Description of drawings
Fig. 1 is a flowchart of the photo-to-sketch generation method of the invention based on sparse representation;
Fig. 2 is a flowchart of the sketch-to-photo generation method of the invention based on sparse representation;
Fig. 3 compares the pseudo-sketches generated by the invention and two existing methods on the CUHK student database;
Fig. 4 compares the pseudo-photos generated by the invention and one existing method on the CUHK student database;
Fig. 5 compares the pseudo-sketches generated by the invention and two existing methods on the VIPS database;
Fig. 6 compares the pseudo-photos generated by the invention and one existing method on the VIPS database.
Embodiment
The core idea of the invention is as follows: existing pseudo-sketch and pseudo-photo generation methods use a fixed number K of nearest-neighbour blocks when generating an image block, which causes the generated pseudo-sketches and pseudo-photos to have low definition and blurred details. To remedy this deficiency, a face sketch-photo generation method based on sparse representation is proposed, which selects nearest-neighbour blocks adaptively, so that the generated pseudo-sketches and pseudo-photos have high definition and distinct details. Two embodiments are given below:
1. Generating a pseudo-sketch from a face photo based on sparse representation
With reference to Fig. 1, the concrete steps of the pseudo-sketch generation method of the invention are as follows:
Step 1: divide the training sample set and the test sample set.
Divide the sketch-photo pair set into a training sample set and a test sample set, where the training sample set comprises a training sketch set and a training photo set, and the test sample set refers to the test photo set; choose one photo from the test photo set as the test photo P.
Step 2: according to the training sample set and the test photo P, generate an initial pseudo-sketch S^0 using either the pseudo-nonlinear method or the embedded hidden Markov model method; for the concrete procedures see "Liu Q S, Tang X O. A Nonlinear Approach for Face Sketch Synthesis and Recognition. IEEE International Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20-26 June 2005" and "Gao X B, Zhong J J, Tao D C, Li X L. Local Face Sketch Synthesis Learning. Neurocomputing, 71(10-12): 1921-1930, 2008" respectively. The concrete steps are as follows:
(2a) Divide the training sample set and the test sample set into blocks;
(2b) For each test photo block of the test photo, train k embedded hidden Markov models using the sketch set and the photo set of the training samples;
(2c) Use the k trained embedded hidden Markov models to synthesize k pseudo-sketch blocks, and take the mean of these k pseudo-sketch blocks as the initial pseudo-sketch block corresponding to the test photo block.
Step 3: divide the test photo and the initial pseudo-sketch into blocks with 75% overlap, and extract the features of the test photo blocks.
(3a) Divide the initial pseudo-sketch S^0 obtained in step 2 into the pseudo-sketch block set {S^0_1, S^0_2, ..., S^0_M}, keeping 75% overlap between blocks, where M is the total number of blocks;
(3b) Divide the test photo into the photo block set {P_1, P_2, ..., P_M}, where M is the total number of blocks in the test photo; the block size and overlap must be identical to those of the initial pseudo-sketch in step (3a);
(3c) For each test photo block, extract the first and second derivatives of the block in the horizontal and vertical directions, where the linear operators used to extract the derivatives are f_1 = [-1, 0, 1] and f_2 = [1, 0, -2, 0, 1] respectively; combine the extracted first and second derivatives column-wise into one vector as the feature of the block.
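The feature extraction of step (3c) can be sketched as follows. Applying the operators by convolution along every row and column is a minimal interpretation, the standard first-derivative stencil [-1, 0, 1] is assumed for f_1, and the block size is illustrative.

```python
import numpy as np

f1 = np.array([-1.0, 0.0, 1.0])             # first-derivative operator (assumed sign)
f2 = np.array([1.0, 0.0, -2.0, 0.0, 1.0])   # second-derivative operator

def patch_feature(patch):
    """Stack the horizontal and vertical first and second derivatives
    of a 2-D block into one feature vector."""
    def filt(img, k, axis):
        # convolve every row (axis=1) or column (axis=0) with kernel k
        return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), axis, img)
    feats = [filt(patch, f1, 1), filt(patch, f1, 0),   # horizontal / vertical 1st
             filt(patch, f2, 1), filt(patch, f2, 0)]   # horizontal / vertical 2nd
    return np.concatenate([m.ravel() for m in feats])

block = np.arange(25, dtype=float).reshape(5, 5)  # toy 5x5 test-photo block
f = patch_feature(block)                          # feature vector of length 4*25
```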
Step 4: using the blocked training sample set, train the two dictionaries: the sketch-block dictionary D_s and the photo-block dictionary D_p.
4.1) Randomly select 20000 blocks from the photo-block set and the sketch-block set of the training samples, where each photo block corresponds to a sketch block; extract the first and second derivatives of each photo block as its features, where the linear operators used are f_1 = [-1, 0, 1] and f_2 = [1, 0, -2, 0, 1]; take the pixel values of each sketch block minus the block mean as the sketch-block feature; concatenate the resulting sketch-block feature and photo-block feature into a single column, and normalize it;
4.2) Using the normalized combined features, solve the following formula by alternating iteration to obtain the coupled dictionary D:

min_{D,C} ||I − DC||₂² + β||C||₁,

where I is the matrix formed by the normalized combined features, each column of the matrix being one combined feature, C is the sparse representation coefficient matrix to be solved, and β is the sparse representation penalty factor, taken as 0.05 in the experiments.
The alternating iterative solution proceeds as follows:
(4.2a) Randomly initialize the matrix D;
(4.2b) With D fixed, the optimization over C becomes min_C ||I − DC||₂² + β||C||₁; solve this to obtain C;
(4.2c) With C fixed, the optimization over D becomes min_D ||I − DC||₂²; solve this to obtain D;
(4.2d) Iterate (4.2b) and (4.2c) until the objective no longer decreases or a preset number of iterations is reached, obtaining the matrices D and C.
4.3) Decompose the coupled dictionary D obtained in 4.2) according to D = [D_p^T, D_s^T]^T into two sub-dictionaries: the sketch-block dictionary D_s and the photo-block dictionary D_p, and normalize each column of the two dictionaries, where the superscript T denotes matrix transpose.
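The alternating iteration of step 4.2 can be sketched as follows. The patent does not fix particular inner solvers, so the sparse-coding step (4.2b) is solved here by iterative soft-thresholding and the dictionary step (4.2c) by least squares with column normalization; the data shapes, atom count, and iteration counts are illustrative assumptions.

```python
import numpy as np

def ista(D, I, beta, n_iter=100):
    """Step (4.2b): minimize ||I - D C||_2^2 + beta*||C||_1 over C with D
    fixed, by iterative soft-thresholding (gradient step + shrinkage)."""
    L = 2.0 * np.linalg.norm(D, 2) ** 2 + 1e-8     # Lipschitz constant of gradient
    C = np.zeros((D.shape[1], I.shape[1]))
    for _ in range(n_iter):
        G = C - 2.0 * D.T @ (D @ C - I) / L        # gradient step on quadratic term
        C = np.sign(G) * np.maximum(np.abs(G) - beta / L, 0.0)  # soft threshold
    return C

def learn_coupled_dictionary(I, n_atoms=8, beta=0.05, n_outer=10, seed=0):
    """Steps (4.2a)-(4.2d): alternate between the codes C and the atoms D."""
    rng = np.random.default_rng(seed)
    D = rng.normal(size=(I.shape[0], n_atoms))      # (4.2a) random initialization
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_outer):
        C = ista(D, I, beta)                        # (4.2b) codes with D fixed
        D = I @ C.T @ np.linalg.pinv(C @ C.T)       # (4.2c) least-squares atoms
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-8)  # keep unit columns
    C = ista(D, I, beta)                            # codes for the final dictionary
    return D, C

# toy coupled features: photo part (8 dims) stacked on sketch part (8 dims)
rng = np.random.default_rng(1)
I = rng.normal(size=(16, 40))                       # 40 combined feature columns
D, C = learn_coupled_dictionary(I)
D_p, D_s = D[:8], D[8:]                             # row-wise split of D (step 4.3)
```

Because a photo feature and its sketch counterpart occupy the two halves of the same column of I, splitting D row-wise as in step 4.3 gives two dictionaries that share one set of sparse coefficients, which is what later lets a code computed under D_p be reused under D_s.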
Step 5: for each block of the test photo, extract its first and second derivatives in the horizontal and vertical directions as the feature vector f, and obtain its sparse representation coefficients w under the photo-block dictionary D_p according to the following formula:

min_w λ||w||₁ + 0.5||D_p w − f||₂²,

where λ = 0.1 is the penalty factor and f is the extracted photo-block feature.
Step 6: using the sketch-block dictionary D_s obtained in step 4 and the sparse representation coefficients w obtained in step 5, synthesize the high-definition detail feature block:

S̃_i = D_s w,

where S̃_i (i = 1, ..., M) is the i-th high-definition detail feature block and M is the number of such blocks.
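Steps 5 and 6 together amount to sparse coding of the block feature under D_p followed by a matrix-vector product with D_s. A minimal sketch with toy unit-column dictionaries follows; the l1 solver (iterative soft-thresholding) and all shapes are illustrative assumptions, since the patent does not prescribe a particular solver.

```python
import numpy as np

def sparse_code(D_p, f, lam=0.1, n_iter=200):
    """Step 5: solve min_w lam*||w||_1 + 0.5*||D_p w - f||_2^2 by iterative
    soft-thresholding; the nonzero entries of w select relevant atoms
    (neighbours) adaptively rather than using a fixed K."""
    L = np.linalg.norm(D_p, 2) ** 2 + 1e-8      # Lipschitz constant of gradient
    w = np.zeros(D_p.shape[1])
    for _ in range(n_iter):
        g = w - D_p.T @ (D_p @ w - f) / L       # gradient step
        w = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return w

# toy coupled dictionaries with unit columns (8-dim features, 20 atoms)
rng = np.random.default_rng(2)
D_p = rng.normal(size=(8, 20)); D_p /= np.linalg.norm(D_p, axis=0)
D_s = rng.normal(size=(8, 20)); D_s /= np.linalg.norm(D_s, axis=0)

f = 1.5 * D_p[:, 3] + 0.8 * D_p[:, 7]  # block feature built from two photo atoms
w = sparse_code(D_p, f)                # step 5: sparse representation coefficients
s_tilde = D_s @ w                      # step 6: detail block, S~_i = D_s w
```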
Step 7: add the high-definition detail feature block S̃_i obtained in step 6 to the corresponding initial pseudo-sketch block obtained in step 3, obtaining the final high-definition pseudo-sketch block:

S_i = S^0_i + S̃_i,

where i = 1, ..., M and M is the total number of pseudo-sketch blocks.
Step 8: repeat steps 5 to 7 to obtain the M final high-definition pseudo-sketch blocks {S_1, ..., S_M}, and merge these pseudo-sketch blocks into a complete high-definition pseudo-sketch corresponding to the test photo P, where, when the final pseudo-sketch blocks are merged, the pixel values in the overlapping region of two adjacent final pseudo-sketch blocks are taken as the mean of the pixel values of the two blocks.
2. Generating a pseudo-photo from a face sketch based on sparse representation
With reference to Fig. 2, the concrete steps of the pseudo-photo generation method of the invention are as follows:
Step 1: divide the training sample set and the test sample set.
Divide the sketch-photo pair set into a training sample set and a test sample set, where the training sample set comprises a training sketch set and a training photo set, and the test sample set refers to the test sketch set; choose one sketch from the test sketch set as the test sketch S.
Step 2: according to the training sample set and the test sketch S, generate an initial pseudo-photo P^0 with the embedded hidden Markov model method; for the concrete procedure see "Xiao B, Gao X B, Li X L, Tao D C. A New Approach for Face Recognition by Sketches in Photos. Signal Processing, 89(8): 1531-1539, 2009". The concrete steps are as follows:
(2a). training sample set and test sample book collection are carried out piecemeal;
(2b). for each the test portrait piece on the test portrait, portrait collection and k built-in type hidden Markov model of photograph collection training of utilizing training sample to concentrate;
(2c). k the built-in type hidden Markov model that utilizes training to obtain synthesizes k pseudo-photo piece, and the mean value of getting this k pseudo-photo piece obtains testing the corresponding initial pseudo-photo piece of portrait piece.
Step 3: partition the test portrait and the initial pseudo-photo into blocks with 75% overlap, and extract the test portrait block features.

(3a). divide the initial pseudo-photo P̃^0 obtained in Step 2 into a pseudo-photo block set {P̃^0_1, ..., P̃^0_M}, keeping 75% overlap between blocks, where M is the total number of blocks;

(3b). divide the test portrait into a portrait block set {S_1, S_2, ..., S_M} with the same block size and overlap as the initial pseudo-photo in step (3a), where M is the total number of blocks in the test portrait;

(3c). for each test portrait block, extract its first- and second-order derivatives in the horizontal and vertical directions, using the linear filters f_1 = [-1, 0, 1] and f_2 = [1, 0, -2, 0, 1] respectively, and concatenate the extracted first- and second-order derivatives row by row into one vector as the feature of the block.
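Step (3c) can be sketched as below. This is an illustrative version, not the patent's code; in particular the sign convention f_1 = [-1, 0, 1] (the standard first-derivative filter, with the minus sign apparently lost in the source's OCR) and the zero-padded "same"-size filtering are assumptions.

```python
def conv1d(signal, filt):
    """'Same'-size correlation of a 1-D signal with a short filter,
    zero-padded at the borders."""
    k = len(filt) // 2
    padded = [0.0] * k + list(signal) + [0.0] * k
    return [sum(filt[j] * padded[i + j] for j in range(len(filt)))
            for i in range(len(signal))]

def patch_features(patch):
    """First- and second-derivative responses of a patch (list of rows),
    in the horizontal and vertical directions, concatenated into one vector."""
    f1 = [-1, 0, 1]        # first-derivative filter (sign convention assumed)
    f2 = [1, 0, -2, 0, 1]  # second-derivative filter
    cols = [list(c) for c in zip(*patch)]   # transpose for vertical filtering
    feat = []
    for filt in (f1, f2):
        for row in patch:                    # horizontal responses
            feat.extend(conv1d(row, filt))
        for col in cols:                     # vertical responses
            feat.extend(conv1d(col, filt))
    return feat
```

For a 3x3 patch this yields 2 filters x (3 rows + 3 columns) x 3 samples = 36 feature values.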
Step 4: train two dictionaries on the blocked training sample set: the portrait block dictionary D_s and the photo block dictionary D_p.

4.1) randomly select 20000 pairs of blocks from the photo block set and portrait block set of the training samples, each photo block corresponding to one portrait block; extract the first- and second-order derivatives of each portrait block as its feature, using the linear filters f_1 = [-1, 0, 1] and f_2 = [1, 0, -2, 0, 1]; take the pixel values of each photo block minus the block mean as the photo block feature; concatenate the obtained portrait block feature and photo block feature into one column, and normalize it;

4.2) using the normalized combined features, solve the following formula by alternating iteration to obtain the coupled dictionary D:

min_{D,C} ||I - DC||_2^2 + β||C||_1,

where I is the matrix formed by the normalized combined features, each column being one combined feature, C is the sparse representation coefficient matrix to be solved, and β is the sparse representation penalty factor, taken as 0.05 in the experiments.
The alternating iterative solution proceeds as follows:

(4.2a). randomly initialize the matrix D;

(4.2b). substitute the matrix D into the optimization function min_C ||I - DC||_2^2 + β||C||_1 and solve this function for the matrix C;

(4.2c). substitute the matrix C into the optimization function min_D ||I - DC||_2^2 and solve this function for the matrix D;

(4.2d). iterate (4.2b) to (4.2c) until the objective no longer decreases or a preset number of iterations is reached, obtaining the matrix D and the matrix C;
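The alternating iteration of steps (4.2a)-(4.2d) can be sketched as a toy implementation. The patent does not specify the sub-solvers; here the C-step uses iterative soft-thresholding (ISTA) and the D-step uses plain gradient descent, which are common but assumed choices, as are all function names below.

```python
import random

def matmul(A, B):
    cols = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in cols] for row in A]

def frob2(M):
    return sum(v * v for row in M for v in row)

def objective(I_mat, D, C, beta):
    """||I - DC||_F^2 + beta * ||C||_1"""
    R = matmul(D, C)
    fit = sum((I_mat[i][j] - R[i][j]) ** 2
              for i in range(len(I_mat)) for j in range(len(I_mat[0])))
    return fit + beta * sum(abs(v) for row in C for v in row)

def alt_dict_learn(I_mat, n_atoms, beta=0.05, outer=20, inner=50):
    """Toy alternating minimisation of ||I - DC||_F^2 + beta*||C||_1."""
    random.seed(0)                      # reproducible initialisation (4.2a)
    m, p = len(I_mat), len(I_mat[0])
    D = [[random.gauss(0.0, 1.0) for _ in range(n_atoms)] for _ in range(m)]
    C = [[0.0] * p for _ in range(n_atoms)]
    for _ in range(outer):
        # C-step (4.2b): ISTA with D fixed
        L = 2.0 * frob2(D) + 1e-12      # safe bound on the gradient Lipschitz constant
        for _ in range(inner):
            R = matmul(D, C)
            for k in range(n_atoms):
                for j in range(p):
                    g = 2.0 * sum(D[i][k] * (R[i][j] - I_mat[i][j]) for i in range(m))
                    z = C[k][j] - g / L          # gradient step
                    t = beta / L                 # soft-threshold level
                    C[k][j] = (abs(z) - t) * (1 if z > 0 else -1) if abs(z) > t else 0.0
        # D-step (4.2c): gradient descent with C fixed
        Lc = 2.0 * frob2(C) + 1e-12
        for _ in range(inner):
            R = matmul(D, C)
            for i in range(m):
                for k in range(n_atoms):
                    g = 2.0 * sum((R[i][j] - I_mat[i][j]) * C[k][j] for j in range(p))
                    D[i][k] -= g / Lc
    return D, C
```

Because each sub-step uses a step size below the inverse Lipschitz bound, the objective is non-increasing across iterations, matching the stopping rule of (4.2d).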
4.3) decompose the coupled dictionary D obtained in 4.2) according to D = [D_s^T, D_p^T]^T into two sub-dictionaries: the portrait block dictionary D_s and the photo block dictionary D_p, and normalize each column of these two dictionaries, where the superscript T denotes matrix transposition.
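Step 4.3) amounts to splitting the rows of the coupled dictionary at the boundary between the portrait-feature rows and the photo-feature rows, then L2-normalizing each column of each sub-dictionary. A minimal sketch, assuming the caller knows how many rows belong to the portrait features:

```python
import math

def split_and_normalize(D, n_sketch_rows):
    """Split a coupled dictionary D (list of rows) into D_s (first
    n_sketch_rows rows) and D_p (remaining rows), then L2-normalize each
    column of each sub-dictionary independently."""
    def normalize_columns(M):
        out_cols = []
        for col in zip(*M):
            n = math.sqrt(sum(v * v for v in col))
            out_cols.append([v / n if n else 0.0 for v in col])
        return [list(r) for r in zip(*out_cols)]   # transpose back to rows
    Ds = normalize_columns(D[:n_sketch_rows])
    Dp = normalize_columns(D[n_sketch_rows:])
    return Ds, Dp
```

Note that normalizing D_s and D_p separately means an atom's portrait part and photo part no longer share a common scale; the patent prescribes exactly this per-dictionary normalization.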
Step 5: for each block of the test portrait, extract its first- and second-order derivatives in the horizontal and vertical directions as its feature vector, and obtain its sparse representation coefficient w under the portrait block dictionary D_s from the following formula:

min_w λ||w||_1 + 0.5||D_s w - f||_2^2,

where λ = 0.1 is a penalty factor and f is the extracted portrait block feature.
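The lasso problem of Step 5 can be solved, for example, by iterative soft-thresholding (ISTA); the patent does not name a solver, so this is one standard choice rather than the patented procedure, and `ista` is an assumed name.

```python
def ista(D, f, lam=0.1, iters=500):
    """Solve min_w lam*||w||_1 + 0.5*||D w - f||_2^2 by iterative
    soft-thresholding (proximal gradient descent)."""
    m, n = len(D), len(D[0])
    # conservative step size: 1/||D||_F^2 lower-bounds 1/||D^T D||_2
    step = 1.0 / sum(D[i][j] ** 2 for i in range(m) for j in range(n))
    w = [0.0] * n
    for _ in range(iters):
        # residual r = D w - f and gradient g = D^T r of the smooth term
        r = [sum(D[i][j] * w[j] for j in range(n)) - f[i] for i in range(m)]
        g = [sum(D[i][j] * r[i] for i in range(m)) for j in range(n)]
        for j in range(n):
            z = w[j] - step * g[j]               # gradient step
            t = step * lam                       # soft-threshold level
            w[j] = (abs(z) - t) * (1 if z > 0 else -1) if abs(z) > t else 0.0
    return w
```

With an orthonormal dictionary the solution reduces to soft-thresholding the correlations D^T f at level λ, which makes the solver easy to sanity-check.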
Step 6: use the photo block dictionary D_p obtained in Step 4 and the sparse representation coefficient w obtained in Step 5 to synthesize the high-definition detail feature information:

P̃_i = D_p w,

where P̃_i is the i-th (i = 1, ..., M) high-definition detail feature information block and M is the total number of such blocks.
Step 7: add the high-definition detail feature information P̃_i obtained in Step 6 to the corresponding initial pseudo-photo block obtained in Step 3, yielding the final high-definition pseudo-photo block, where i = 1, ..., M and M is the total number of initial pseudo-photo blocks.
Step 8: repeat Steps 5 to 7 to obtain M final high-definition pseudo-photo blocks, and fuse these blocks into one complete high-definition pseudo-photo corresponding to the test portrait S. When the final pseudo-photo blocks are fused, the pixel value of the overlapping part of two adjacent final pseudo-photo blocks is taken as the mean of the pixel values of the two blocks.
The effect of the present invention is further illustrated by the following experiments.

The inventive method is compared experimentally with the locally linear embedding method (LLE) and the embedded hidden Markov model method (EHMM): pseudo-portraits and pseudo-photos are first generated with each method, and the generated pseudo-portraits and pseudo-photos are then subjected to subjective quality evaluation and face recognition experiments.
1. Experiment conditions and notation
The software environment is MATLAB 2009a from MathWorks (USA), running on a 2 GHz personal computer. Notation: depending on which method produces the initial pseudo-portrait or initial pseudo-photo, the combination of the LLE method with the inventive method is denoted SR-LLE, and the combination of the embedded hidden Markov model method with the inventive method is denoted SR-EHMM.
The following experiments are all carried out on two databases: the public CUHK student database of the Chinese University of Hong Kong, and the newly built VIPS database of the VIPS laboratory of Xidian University.
2. Experiment content

Experiment 1: generation of pseudo-portraits and pseudo-photos
As described in embodiment 1 of the inventive method, pseudo-portraits are synthesized on the CUHK student database with the SR-LLE and SR-EHMM methods and, for comparison, with the LLE method and the EHMM method. The experimental results are compared in Fig. 3: Fig. 3(a) is the original photo, Fig. 3(b) the pseudo-portrait generated by LLE, Fig. 3(c) the pseudo-portrait generated by EHMM, Fig. 3(d) the pseudo-portrait generated by SR-LLE, and Fig. 3(e) the pseudo-portrait generated by SR-EHMM.

As described in embodiment 2 of the inventive method, pseudo-photos are synthesized on the CUHK student database with the SR-EHMM method and, for comparison, with the EHMM method. The experimental results are compared in Fig. 4: Fig. 4(a) is the original portrait, Fig. 4(b) the pseudo-photo generated by EHMM, and Fig. 4(c) the pseudo-photo generated by SR-EHMM.

The same pseudo-portrait comparison is carried out on the VIPS database, with results in Fig. 5: Fig. 5(a) is the original photo, Fig. 5(b) the pseudo-portrait generated by LLE, Fig. 5(c) the pseudo-portrait generated by EHMM, Fig. 5(d) the pseudo-portrait generated by SR-LLE, and Fig. 5(e) the pseudo-portrait generated by SR-EHMM.

The same pseudo-photo comparison is carried out on the VIPS database, with results in Fig. 6: Fig. 6(a) is the original portrait, Fig. 6(b) the pseudo-photo generated by EHMM, and Fig. 6(c) the pseudo-photo generated by SR-EHMM.
The results of experiment 1 show that the sparse representation coefficients implicitly perform an adaptive selection of the neighbor blocks most relevant to the test image block, so the number of neighbors used to synthesize a target image block is not fixed. Consequently, the generated images have high definition and clear details, overcoming the blurred-detail, low-definition defect caused by selecting a fixed number of neighbors in existing methods.
Experiment 2: subjective image quality evaluation
In this experiment, 20 volunteers were invited to score the experimental results, and the effectiveness of each algorithm is described by the mean opinion score (MOS) computed from these scores. Taking pseudo-portraits as an example: the original portrait serves as the reference image, and the pseudo-portraits generated by the LLE, EHMM, SR-LLE and SR-EHMM methods serve as the images to be evaluated. For each image to be evaluated, the mean of the scores given by the 20 volunteers is taken as its final score, and the effectiveness of an algorithm is measured by the mean of the final scores of all pseudo-portraits it generates. That is, for one image the MOS value is computed as

MOS(l) = (1/20) Σ_{i=1}^{20} A(i, l),

where A(i, l) denotes the score given by the i-th volunteer to the l-th image. The final experimental results are given in Table 1.
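The MOS formula is a plain column average; a minimal sketch (the experiment fixes 20 volunteers, but the helper below averages over however many score rows it is given, and `mos` is an assumed name):

```python
def mos(scores, l):
    """MOS(l): mean of the volunteers' scores A(i, l) for the l-th image.
    scores[i][l] is volunteer i's score for image l."""
    column = [row[l] for row in scores]
    return sum(column) / len(column)
```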
Table 1. Subjective image quality evaluation values of the different algorithms
In Table 1, "No" indicates that in the literature the LLE method is used only to generate pseudo-portraits, not pseudo-photos, so SR-LLE is likewise only used to generate pseudo-portraits. Table 1 shows that the subjective image quality evaluation values of the inventive method are clearly superior to those of the existing methods.
Experiment 3: face recognition based on sparse representation
As described in the background art, in practical criminal investigation, fugitive pursuit and anti-terrorism applications, example-based face recognition usually involves two situations: recognition against a face photo gallery given a portrait, and recognition against a face portrait gallery given a photo. For the former, the test portrait can first be converted into a pseudo-photo and then recognized, or the face photo gallery can be converted into a pseudo-portrait gallery and then matched. Likewise, for the latter, the photo can be converted into a pseudo-portrait, or the portrait gallery can be converted into a pseudo-photo gallery, before recognition. This gives four recognition modes, as shown in Table 2.
Table 2. The four face recognition modes
The recognition method used in this experiment is the robust face recognition algorithm based on sparse representation. The CUHK student database contains 188 face photos, each with one portrait drawn by an artist. The VIPS database contains 200 face photos, each corresponding to 5 portraits drawn by 5 artists. To examine the effect of multiple images on recognition, the experiments on the VIPS database use 1, 3 and 5 training images per face to build the dictionary. The experimental results are shown in Tables 3, 4, 5, 6 and 7.
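The patent does not spell out the "robust face recognition algorithm based on sparse representation". A common formulation in that spirit codes the test feature over a dictionary whose columns are training samples and assigns the class with the smallest reconstruction residual; the toy version below is a sketch under that assumption (the names `src_classify` and `ista`, and the solver choice, are mine, not the patent's).

```python
def ista(D, f, lam=0.05, iters=400):
    """Solve min_w lam*||w||_1 + 0.5*||D w - f||_2^2 by soft-thresholding."""
    m, n = len(D), len(D[0])
    step = 1.0 / sum(D[i][j] ** 2 for i in range(m) for j in range(n))
    w = [0.0] * n
    for _ in range(iters):
        r = [sum(D[i][j] * w[j] for j in range(n)) - f[i] for i in range(m)]
        g = [sum(D[i][j] * r[i] for i in range(m)) for j in range(n)]
        for j in range(n):
            z = w[j] - step * g[j]
            t = step * lam
            w[j] = (abs(z) - t) * (1 if z > 0 else -1) if abs(z) > t else 0.0
    return w

def src_classify(D, labels, f, lam=0.05):
    """Code f over the training dictionary D (one column per training sample,
    labels[j] = class of column j), then pick the class whose coefficients
    reconstruct f with the smallest residual."""
    w = ista(D, f, lam)
    m, n = len(D), len(D[0])
    best_cls, best_res = None, float("inf")
    for cls in set(labels):
        # keep only the coefficients belonging to this class
        wc = [w[j] if labels[j] == cls else 0.0 for j in range(n)]
        res = sum((sum(D[i][j] * wc[j] for j in range(n)) - f[i]) ** 2
                  for i in range(m))
        if res < best_res:
            best_cls, best_res = cls, res
    return best_cls
```

In the four recognition modes of Table 2, only what populates the dictionary columns changes (photos, portraits, pseudo-photos or pseudo-portraits); the classification rule stays the same.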
Table 3. Face recognition rates (%) on the CUHK student database
Table 4. Face recognition rates (%) in the VIPS database photo-gallery mode
Table 5. Face recognition rates (%) in the VIPS database portrait-gallery mode
Table 6. Face recognition rates (%) in the VIPS database pseudo-portrait-gallery mode
Table 7. Face recognition rates (%) in the VIPS database pseudo-photo-gallery mode
Tables 4, 5, 6 and 7 show that the proposed method improves the face recognition rate over the original methods. In particular, on the VIPS database the recognition rate of the present invention is clearly higher than that of the existing pseudo-portrait and pseudo-photo generation methods: SR-LLE improves significantly over LLE, and SR-EHMM improves significantly over EHMM, which demonstrates the effectiveness of the invention.

Claims (3)

1. A method for generating a face portrait from a face photo based on sparse representation, comprising the steps of:
(1) dividing a portrait-photo pair set into a training sample set and a test sample set, and choosing a test photo P from the test sample set;
(2) generating, from the training sample set and the test photo, an initial pseudo-portrait S̃^0 corresponding to the test photo by an existing pseudo-portrait generation method;
(3) dividing the initial pseudo-portrait S̃^0 and the test photo P into blocks of identical size and identical overlap, where P = {P_1, P_2, ..., P_M} and M is the total number of blocks, and extracting the feature vector f of each block of the test photo;
(4) jointly learning from the training sample set a portrait block dictionary D_s and a photo block dictionary D_p:
(4a). randomly selecting 20000 corresponding pairs of blocks from the photo block set and portrait block set of the training samples; extracting the first-order and second-order derivatives of each photo block as its feature; taking the pixel values of each portrait block minus the block mean as its feature; concatenating the obtained portrait block feature and photo block feature into one column, and normalizing it;
(4b). using the normalized combined features, solving the following formula by alternating iteration to obtain a coupled dictionary D:
min_{D,C} ||I - DC||_2^2 + β||C||_1,
where I is the matrix formed by the normalized combined features, each column being one normalized combined feature, C is the sparse representation coefficient matrix to be solved, and β is the sparse representation penalty factor, taken as 0.05 in the experiments;
(4c). decomposing the coupled dictionary D obtained in (4b) according to D = [D_s^T, D_p^T]^T into two dictionaries: the portrait block dictionary D_s and the photo block dictionary D_p, and normalizing each column of these two dictionaries, where the superscript T denotes matrix transposition;
(5) using the feature vector f obtained in step (3) and the photo block dictionary D_p obtained in step (4), finding the sparse representation coefficient w according to the following formula:
min_w β||w||_1 + 0.5||D_p w - f||_2^2,
where β is the sparse representation penalty factor, taken as 0.05 in the experiments;
(6) using the portrait block dictionary D_s obtained in step (4) and the sparse representation coefficient w obtained in step (5), synthesizing the high-definition detail feature information block S̃_i = D_s w, where i = 1, 2, ..., M and M is the total number of information blocks;
(7) adding the high-definition detail feature information block S̃_i obtained in step (6) to the corresponding block of the initial pseudo-portrait obtained in step (3) to enhance definition and detail, obtaining the final pseudo-portrait block;
(8) repeating steps (5)-(7) until M final pseudo-portrait blocks are obtained, and combining the obtained final pseudo-portrait blocks into the pseudo-portrait corresponding to the test photo.
2. The method for generating a face portrait from a face photo based on sparse representation according to claim 1, wherein the alternating iterative solution for the coupled dictionary D described in step (4b) comprises the following concrete steps:
(2a). randomly initializing the matrix D;
(2b). substituting the matrix D into the optimization function min_C ||I - DC||_2^2 + β||C||_1 and solving this function for the matrix C;
(2c). substituting the matrix C into the optimization function min_D ||I - DC||_2^2 and solving this function for the matrix D;
(2d). iterating (2b) to (2c) until the objective no longer decreases or a preset number of iterations is reached, obtaining the matrix D and the matrix C.
3. A method for generating a face photo from a face portrait based on sparse representation, comprising the steps of:
1) dividing a portrait-photo pair set into a training sample set and a test sample set, and choosing a test portrait S from the test sample set;
2) generating, from the training sample set and the test portrait, an initial pseudo-photo P̃^0 corresponding to the test portrait by an existing pseudo-photo generation method;
3) dividing the initial pseudo-photo P̃^0 and the test portrait S into blocks of identical size and identical overlap, where S = {S_1, S_2, ..., S_M} and M is the total number of blocks, and extracting the feature vector f of each block of the test portrait;
4) jointly learning from the training sample set a portrait block dictionary D_s and a photo block dictionary D_p:
4a). randomly selecting 20000 corresponding pairs of blocks from the photo block set and portrait block set of the training samples; extracting the first-order and second-order derivatives of each portrait block as its feature; taking the pixel values of each photo block minus the block mean as its feature; concatenating the obtained portrait block feature and photo block feature into one column, and normalizing it;
4b). using the normalized combined features, solving the following formula by alternating iteration to obtain a coupled dictionary D:
min_{D,C} ||I - DC||_2^2 + β||C||_1,
where I is the matrix formed by the normalized combined features, each column being one normalized combined feature, C is the sparse representation coefficient matrix to be solved, and β is the sparse representation penalty factor, taken as 0.05 in the experiments;
4c). decomposing the coupled dictionary D obtained in 4b) according to D = [D_s^T, D_p^T]^T into two dictionaries: the portrait block dictionary D_s and the photo block dictionary D_p, and normalizing each column of these two dictionaries, where the superscript T denotes matrix transposition;
5) using the feature vector f obtained in step 3) and the portrait block dictionary D_s obtained in step 4), finding the sparse representation coefficient w according to the following formula:
min_w β||w||_1 + 0.5||D_s w - f||_2^2,
where β is the sparse representation penalty factor, taken as 0.05 in the experiments;
6) using the photo block dictionary D_p obtained in step 4) and the sparse representation coefficient w obtained in step 5), synthesizing the high-definition detail feature information block P̃_i = D_p w, where i = 1, 2, ..., M and M is the total number of information blocks;
7) adding the high-definition detail feature information block P̃_i obtained in step 6) to the corresponding block of the initial pseudo-photo obtained in step 3) to enhance definition and detail, obtaining the final pseudo-photo block;
8) repeating steps 5)-7) until M final pseudo-photo blocks are obtained, and combining the obtained final pseudo-photo blocks into the pseudo-photo corresponding to the test portrait.
CN2010102893309A 2010-09-24 2010-09-24 Face image-picture generating method based on sparse representation Expired - Fee Related CN101958000B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010102893309A CN101958000B (en) 2010-09-24 2010-09-24 Face image-picture generating method based on sparse representation


Publications (2)

Publication Number Publication Date
CN101958000A CN101958000A (en) 2011-01-26
CN101958000B true CN101958000B (en) 2012-08-15

Family

ID=43485317

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010102893309A Expired - Fee Related CN101958000B (en) 2010-09-24 2010-09-24 Face image-picture generating method based on sparse representation

Country Status (1)

Country Link
CN (1) CN101958000B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105608451A (en) * 2016-03-14 2016-05-25 西安电子科技大学 Face sketch generation method based on subspace ridge regression

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103793695B (en) * 2014-02-10 2017-11-28 天津大学 A kind of method of the sub- dictionary joint training of multiple feature spaces for recognition of face
CN103984954B (en) * 2014-04-23 2017-06-13 西安电子科技大学宁波信息技术研究院 Image combining method based on multi-feature fusion
CN103902991A (en) * 2014-04-24 2014-07-02 西安电子科技大学 Face recognition method based on forensic sketches
CN104517274B (en) * 2014-12-25 2017-06-16 西安电子科技大学 Human face portrait synthetic method based on greedy search
CN104700380B (en) * 2015-03-12 2017-08-15 陕西炬云信息科技有限公司 Based on single photo with portrait to human face portrait synthetic method
CN105869134B (en) * 2016-03-24 2018-11-30 西安电子科技大学 Human face portrait synthetic method based on direction graph model
CN106056561A (en) * 2016-04-12 2016-10-26 西安电子科技大学 Face portrait compositing method based on Bayesian inference
CN106778811B (en) * 2016-11-21 2020-12-25 西安电子科技大学 Image dictionary generation method, image processing method and device
CN109145135A (en) * 2018-08-03 2019-01-04 厦门大学 A kind of human face portrait aging method based on principal component analysis

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101571950A (en) * 2009-03-25 2009-11-04 湖南大学 Image restoring method based on isotropic diffusion and sparse representation
CN101640541A (en) * 2009-09-04 2010-02-03 西安电子科技大学 Reconstruction method of sparse signal
CN101719142A (en) * 2009-12-10 2010-06-02 湖南大学 Method for detecting picture characters by sparse representation based on classifying dictionary

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006106508A2 (en) * 2005-04-04 2006-10-12 Technion Research & Development Foundation Ltd. System and method for designing of dictionaries for sparse representation
US8290251B2 (en) * 2008-08-21 2012-10-16 Adobe Systems Incorporated Image stylization using sparse representation


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105608451A (en) * 2016-03-14 2016-05-25 西安电子科技大学 Face sketch generation method based on subspace ridge regression
CN105608451B (en) * 2016-03-14 2019-11-26 西安电子科技大学 Human face portrait generation method based on subspace ridge regression

Also Published As

Publication number Publication date
CN101958000A (en) 2011-01-26


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120815

Termination date: 20170924