CN101958000A - Face image-picture generating method based on sparse representation - Google Patents

Face image-picture generating method based on sparse representation

Info

Publication number
CN101958000A
CN101958000A, CN201010289330A, CN101958000B
Authority
CN
China
Prior art keywords
photo
portrait
block
pseudo
dictionary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201010289330
Other languages
Chinese (zh)
Other versions
CN101958000B (en)
Inventor
高新波
王楠楠
李洁
王斌
张杰伟
邓成
韩冠
肖冰
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN2010102893309A priority Critical patent/CN101958000B/en
Publication of CN101958000A publication Critical patent/CN101958000A/en
Application granted granted Critical
Publication of CN101958000B publication Critical patent/CN101958000B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Processing Or Creating Images (AREA)

Abstract



The invention discloses a face portrait-photo generation method based on sparse representation, which mainly addresses the low definition and blurred details of the pseudo-portraits and pseudo-photos produced by existing methods. The procedure is: generate an initial pseudo-portrait or pseudo-photo with an existing generation method; after dividing all images into blocks, train a portrait-block dictionary and a photo-block dictionary on the training sample set; use these two dictionaries to synthesize high-definition feature information from each input test photo block or test portrait block; add the synthesized high-definition feature information to the corresponding initial pseudo-portrait or pseudo-photo block to obtain the final high-definition pseudo-portrait or pseudo-photo block; and fuse all the high-definition blocks into a complete pseudo-portrait or pseudo-photo. Compared with existing methods, the pseudo-portraits and pseudo-photos generated by the method have high definition and distinct details, and can be used for face recognition and face retrieval.


Description

Face portrait-photo generation method based on sparse representation
Technical field
The invention belongs to the technical field of image processing and relates to face portrait-photo generation. It can be used in fields such as criminal investigation and anti-terrorism for face retrieval and recognition.
Background
With the arrival of the information age, people increasingly appreciate the importance of information security. Identification and authentication techniques are an effective means of ensuring it, and they have developed rapidly in recent years. Identification and authentication based on the human face are among the most convenient and effective identity-verification technologies, so face recognition has attracted much attention. Because of differences in imaging, a face image can exist in several forms, such as a photograph or a portrait, and face recognition is therefore not limited to photographs; accordingly, example-based face recognition comes in two main modes: photo-based and portrait-based. Photo-based face recognition has been applied in many fields, such as access-control systems, search engines, and video surveillance. But in many situations, for example in criminal investigation or anti-terrorist manhunts, there is often no photograph of a suspect, only a portrait drawn by an artist in cooperation with an eyewitness; face recognition must then establish identity by matching the portrait against an existing police database. Conversely, when a suspect is arrested, his or her photograph can be obtained and searched against the portrait library built by the police over past cases, and from the number of portraits retrieved it can be determined whether, and how many times, the suspect has offended before.
In summary, portrait-based face recognition is applied mainly in two situations: first, determining a suspect's identity; second, verifying whether someone has a criminal history and, if so, how many offences. In the first situation, a portrait serves as the test image and is matched for identification against the existing police photo database. In the second, a face photograph serves as the test image and is matched against the existing police portrait database. But because of their different production mechanisms, portraits and photos are heterogeneous, and conventional photo-to-photo recognition algorithms, see "Chellappa R, Wilson C, Sirobey S. Human and Machine Recognition of Faces: a Survey. Proceedings of the IEEE, 83(5): 705-741, 1995", "Zhao W, Chellappa R, Rosenfeld A, Phillips P. Face Recognition: a Survey. ACM Computing Surveys, 34(4): 399-458, 2003", and "Phillips P, Flynn P, Scruggs T, Bowyer K, Chang J, Hoffman K, Marques J, Min J, Worek W. Overview of the Face Recognition Grand Challenge. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20-25 June 2005", all match photos against a photo database; they run into difficulty when a portrait must be matched against a photo database or a photo against a portrait database. To overcome this difficulty, the portrait or photo should first be converted into the same representation space, after which recognition can proceed with the face recognition methods of that space, be it the photo space or the portrait space. Heterogeneous image conversion has therefore become a problem demanding a prompt solution.
For the above problem, there are two strategies for converting a heterogeneous image into a homogeneous one: conversion from portrait to photo, and conversion from photo to portrait. In existing methods, "face image" refers to a face photo or a face portrait, and "pseudo-image" refers to a pseudo-portrait or pseudo-photo. Portraits fall into two broad classes: simple line drawings and caricatures, which carry almost no complex information such as texture detail; and complex portraits, which compared with line drawings and caricatures have intricate texture detail and carry much more information. Existing research on complex portraits proceeds mainly along two lines: converting a photo rich in texture information into a portrait containing less texture information, and converting a portrait containing less texture information into a complex photo.
1. Techniques for converting a photo to a portrait fall into three major classes:
The first is based on linear methods. Principal component analysis (PCA) is used to train separate eigen-subspaces in the photo space and the portrait space; the projection coefficients of the photo to be converted are computed in the photo feature space, the reconstruction coefficients over the photo-space basis are obtained from these projections, and the pseudo-portrait is then reconstructed in the portrait space from the portraits corresponding to the photo-space basis together with those reconstruction coefficients. This method assumes the mapping between photos and portraits is linear and cannot truly reflect the nonlinear relationship between the two, so the results are noisy, of low definition, and blurred in detail.
The second is based on pseudo-nonlinear methods, which approximate the nonlinearity piecewise-linearly. Specifically, the photo-portrait pairs in the training set and the photo to be converted are divided into uniform blocks; for each block of the photo to be converted, the K most similar blocks are found among all training photo blocks, a pseudo-portrait block is produced by a linear weighting of the portrait blocks corresponding to those K photo blocks, and finally all pseudo-portrait blocks are combined into the complete pseudo-portrait. This method approximates the global nonlinear relationship by local linear combinations, but it is still not a truly nonlinear method; see "Liu Q S, Tang X O. A nonlinear approach for face sketch synthesis and recognition. IEEE International Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20-26 June 2005" and "Gao X B, Zhong J J, Tao D C, Li X L. Local face sketch synthesis learning. Neurocomputing, 71(10-12): 1921-1930, 2008". Because the K neighbor blocks are nearest neighbors in Euclidean distance, they are not necessarily the K blocks most relevant to the photo block to be converted, and because the number of nearest neighbors is fixed, the results suffer from low definition and blurred detail.
The third is based on nonlinear methods, chiefly the embedded hidden Markov model (EHMM). An EHMM is used to model the nonlinear relationship between photos and portraits, and the photo to be converted is turned into a pseudo-portrait according to the learned model. Considering that a single model cannot fully capture the complex nonlinear relationship between photos and portraits, the idea of selective ensembles was introduced: an individual portrait generator is obtained for each photo-portrait pair, and a subset of these individual generators is fused, thereby mapping the photo to be converted to its pseudo-portrait. On this basis, the images are further divided into blocks, a model is trained for every pair of training photo block and portrait block, each photo block to be converted is turned into a pseudo-portrait block by the models, and the pseudo-portrait blocks are fused into the pseudo-portrait; see "Gao X B, Zhong J J, Tao D C, Li X L. Local face sketch synthesis learning. Neurocomputing, 71(10-12): 1921-1930, 2008". This method also models each block with its K nearest neighbors, which likewise leads to results of low definition and blurred detail.
2. Techniques for converting a portrait to a photo mainly comprise the following two methods:
The first is the subspace-based method, which performs feature analysis on a joint space. The photo space and portrait space are first concatenated into a joint space; PCA is applied to the joint space to train a global portrait-photo subspace, which is then split into a photo eigen-subspace and a portrait eigen-subspace; the projection coefficients of the portrait to be converted are computed in the portrait eigen-subspace; finally the face-image vector is reconstructed in the global subspace from those projection coefficients, and the first half of this vector is the pseudo-photo. This method assumes the mapping between photos and portraits is linear, while the actual relationship is far more complex, so the results are noisy, of low definition, and blurred in detail.
The second is the method based on the embedded hidden Markov model, whose procedure is symmetric to the EHMM-based pseudo-portrait generation method described above: swapping the roles of portrait and photo yields the symmetric method; see "Xiao B, Gao X B, Li X L, Tao D C. A New Approach for Face Recognition by Sketches in Photos. Signal Processing, 89(8): 1531-1539, 2009". Again because of the fixed K-nearest-neighbor scheme, the generated pseudo-photos are of low definition and blurred in detail.
Summary of the invention
The object of the invention is to overcome the deficiencies of the existing methods above by proposing a face portrait-photo generation method based on sparse representation, so as to raise the definition of the generated pseudo-portraits and pseudo-photos and make their details more distinct.
The technical schemes that realize the object of the invention comprise:
1. A face photo-to-portrait generation method based on sparse representation, comprising the steps of:
(1) Divide the portrait-photo pair set into a training sample set and a test sample set, and choose a test photo P from the test sample set;
(2) Using an existing pseudo-portrait generation method, generate from the training sample set and the test photo an initial pseudo-portrait S' corresponding to the test photo;
(3) Divide the initial pseudo-portrait S' and the test photo P into blocks of identical size and identical overlap, where S' = {S'_1, S'_2, …, S'_M}, P = {P_1, P_2, …, P_M}, and M is the total number of blocks; extract the feature vector f of each block of the test photo;
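The blocking in step (3) amounts to sliding a fixed-size window over the image with a fixed stride. A minimal sketch in Python (NumPy assumed; the 8-pixel patch size is illustrative, while the 75% overlap follows the embodiments described later):

```python
import numpy as np

def extract_patches(img, patch_size=8, overlap=0.75):
    """Divide a 2-D image into uniformly spaced square patches.

    Adjacent patches share `overlap` of their width, so the stride is
    patch_size * (1 - overlap). The top-left coordinates are returned
    as well, so the blocks can be fused back into an image later.
    """
    step = max(1, int(round(patch_size * (1.0 - overlap))))
    h, w = img.shape
    patches, coords = [], []
    for y in range(0, h - patch_size + 1, step):
        for x in range(0, w - patch_size + 1, step):
            patches.append(img[y:y + patch_size, x:x + patch_size].copy())
            coords.append((y, x))
    return patches, coords
```

Both the initial pseudo-portrait and the test photo would be passed through the same call so their blocks align, as the step requires.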
(4) Jointly learn from the training sample set a portrait-block dictionary D_s and a photo-block dictionary D_p:
(4a) Randomly select 20000 corresponding block pairs from the photo-block set and the portrait-block set of the training samples. Extract the first- and second-order derivatives of each photo block as its feature, and subtract the block mean from the pixel values of each portrait block as its feature; concatenate each portrait-block feature with its photo-block feature into a single column and normalize it;
(4b) Using the normalized combined features, solve the following problem by alternating iteration to obtain the coupled dictionary D:

    min_{D,C} ||I - DC||_2^2 + β ||C||_1,

where I is the matrix whose columns are the normalized combined features, C is the sparse-representation coefficient matrix to be solved, and β is the sparsity penalty factor, set to 0.05 in the experiments;
(4c) Decompose the coupled dictionary D obtained in (4b) as D = [D_s^T, D_p^T]^T into two dictionaries, the portrait-block dictionary D_s and the photo-block dictionary D_p, and normalize each column of the two dictionaries; the superscript T denotes matrix transpose;
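The decomposition and column normalization of step (4c) reduce to a row partition of the coupled dictionary. A sketch, where `sketch_dim` (the number of portrait-feature rows) is an assumed parameter:

```python
import numpy as np

def split_coupled_dictionary(D, sketch_dim):
    """Split a coupled dictionary D = [D_s; D_p], stacked row-wise, into
    the portrait-block dictionary D_s and the photo-block dictionary D_p,
    then normalize each column of both to unit L2 norm."""
    D_s, D_p = D[:sketch_dim, :], D[sketch_dim:, :]

    def normalize_cols(M):
        norms = np.linalg.norm(M, axis=0)
        norms[norms == 0] = 1.0  # guard: leave all-zero atoms unchanged
        return M / norms

    return normalize_cols(D_s), normalize_cols(D_p)
```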
(5) Using the feature vector f obtained in step (3) and the photo-block dictionary D_p obtained in step (4), find its sparse representation by solving the following problem for the sparse-representation coefficient w:

    min_w ||f - D_p w||_2^2 + β ||w||_1,

where β is the sparsity penalty factor, taken as 0.05 in the experiments;
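The l1-regularized problem of step (5) admits many standard solvers; the sketch below uses ISTA (iterative soft-thresholding), which is one common choice rather than a solver prescribed by the patent:

```python
import numpy as np

def sparse_code(f, D_p, beta=0.05, n_iter=200):
    """Minimize ||f - D_p w||_2^2 + beta * ||w||_1 over w via ISTA."""
    # Step size from the Lipschitz constant of the smooth term's gradient.
    L = 2.0 * np.linalg.norm(D_p, 2) ** 2
    w = np.zeros(D_p.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * D_p.T @ (D_p @ w - f)
        z = w - grad / L
        w = np.sign(z) * np.maximum(np.abs(z) - beta / L, 0.0)  # soft threshold
    return w
```

The nonzero entries of the returned w mark the dictionary atoms that act as the adaptively selected "neighbors" discussed later in the description.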
(6) Using the portrait-block dictionary D_s obtained in step (4) and the sparse-representation coefficient w obtained in step (5), synthesize the high-definition detail feature block S_i^h according to

    S_i^h = D_s w,

where i = 1, 2, …, M and M is the total number of feature blocks;
(7) Add the high-definition detail feature block S_i^h obtained in step (6) to the corresponding block of the initial pseudo-portrait obtained in step (2), enhancing definition and detail, to obtain the final pseudo-portrait block;
(8) Repeat steps (5)-(7) until all M final pseudo-portrait blocks are obtained, then combine these blocks into the pseudo-portrait corresponding to the test photo.
2. A face portrait-to-photo generation method based on sparse representation, comprising the steps of:
1) Divide the portrait-photo pair set into a training sample set and a test sample set, and choose a test portrait S from the test sample set;
2) Using an existing pseudo-photo generation method, generate from the training sample set and the test portrait an initial pseudo-photo P' corresponding to the test portrait;
3) Divide the initial pseudo-photo P' and the test portrait S into blocks of identical size and identical overlap, where P' = {P'_1, P'_2, …, P'_M}, S = {S_1, S_2, …, S_M}, and M is the total number of blocks; extract the feature vector f of each block of the test portrait;
4) Jointly learn from the training sample set a portrait-block dictionary D_s and a photo-block dictionary D_p:
4a) Randomly select 20000 corresponding block pairs from the photo-block set and the portrait-block set of the training samples. Extract the first- and second-order derivatives of each portrait block as its feature, and subtract the block mean from the pixel values of each photo block as its feature; concatenate each portrait-block feature with its photo-block feature into a single column and normalize it;
4b) Using the normalized combined features, solve the following problem by alternating iteration to obtain the coupled dictionary D:

    min_{D,C} ||I - DC||_2^2 + β ||C||_1,

where I is the matrix whose columns are the normalized combined features, C is the sparse-representation coefficient matrix to be solved, and β is the sparsity penalty factor, set to 0.05 in the experiments;
4c) Decompose the coupled dictionary D obtained in 4b) as D = [D_s^T, D_p^T]^T into two dictionaries, the portrait-block dictionary D_s and the photo-block dictionary D_p, and normalize each column of the two dictionaries; the superscript T denotes matrix transpose;
5) Using the feature vector f obtained in step 3) and the portrait-block dictionary D_s obtained in step 4), find its sparse representation by solving the following problem for the sparse-representation coefficient w:

    min_w ||f - D_s w||_2^2 + β ||w||_1,

where β is the sparsity penalty factor, taken as 0.05 in the experiments;
6) Using the photo-block dictionary D_p obtained in step 4) and the sparse-representation coefficient w obtained in step 5), synthesize the high-definition detail feature block P_i^h according to

    P_i^h = D_p w,

where i = 1, 2, …, M and M is the total number of feature blocks;
7) Add the high-definition detail feature block P_i^h obtained in step 6) to the corresponding block of the initial pseudo-photo obtained in step 2), enhancing definition and detail, to obtain the final pseudo-photo block;
8) Repeat steps 5)-7) until all M final pseudo-photo blocks are obtained, then combine these blocks into the pseudo-photo corresponding to the test portrait.
Because the invention jointly learns the photo-block dictionary D_p and the portrait-block dictionary D_s in one model, the sparse representation of a test image block shares the same sparse coefficients as the image block to be synthesized. At the same time, because the sparse coefficients implicitly select, adaptively, the neighbor blocks most relevant to the test image block, the number of neighbors used to synthesize a target image block is not fixed; the generated images therefore have high definition and distinct details, overcoming the low-definition, blurred-detail defects caused in existing methods by selecting a fixed number of neighbors.
Description of drawings
Fig. 1 is a flowchart of the photo-to-portrait generation method of the invention based on sparse representation;
Fig. 2 is a flowchart of the portrait-to-photo generation method of the invention based on sparse representation;
Fig. 3 compares the pseudo-portraits generated on the CUHK student database by the invention and by two existing methods;
Fig. 4 compares the pseudo-photos generated on the CUHK student database by the invention and by one existing method;
Fig. 5 compares the pseudo-portraits generated on the VIPS database by the invention and by two existing methods;
Fig. 6 compares the pseudo-photos generated on the VIPS database by the invention and by one existing method.
Embodiments
The core idea of the invention is as follows. Existing pseudo-portrait and pseudo-photo generation methods use a fixed number K of nearest-neighbor blocks when generating each image block, which is why the pseudo-portraits and pseudo-photos they generate are of low definition with blurred detail. The invention proposes a face portrait-photo generation method based on sparse representation that selects nearest-neighbor blocks adaptively, so that the generated pseudo-portraits and pseudo-photos have high definition and distinct detail. Two examples are given below.
1. Generating a pseudo-portrait from a face photo based on sparse representation
With reference to Fig. 1, the concrete steps of the pseudo-portrait generation method of the invention are as follows.
Step 1. Divide the training sample set and the test sample set.
Divide the portrait-photo pair set into a training sample set and a test sample set, where the training sample set comprises a training portrait set and a training photo set, and the test sample set is the test photo set; choose one photo from the test photo set as the test photo P.
Step 2. From the training sample set and the test photo P, generate an initial pseudo-portrait S' with the pseudo-nonlinear method or the embedded hidden Markov model method. The concrete methods are given in "Liu Q S, Tang X O. A nonlinear approach for face sketch synthesis and recognition. IEEE International Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20-26 June 2005" and "Gao X B, Zhong J J, Tao D C, Li X L. Local face sketch synthesis learning. Neurocomputing, 71(10-12): 1921-1930, 2008" respectively; the concrete steps are as follows:
(2a) Divide the training sample set and the test sample set into blocks;
(2b) For each test photo block of the test photo, train k embedded hidden Markov models using the portrait set and photo set of the training samples;
(2c) Use the k trained embedded hidden Markov models to synthesize k pseudo-portrait blocks, and take the mean of these k pseudo-portrait blocks as the initial pseudo-portrait block corresponding to the test photo block.
Step 3. Divide the test photo and the initial pseudo-portrait into blocks with 75% overlap, and extract the test photo block features.
(3a) Divide the initial pseudo-portrait S' obtained in Step 2 into the pseudo-portrait block set {S'_1, S'_2, …, S'_M}, keeping 75% overlap between blocks, where M is the total number of blocks;
(3b) Divide the test photo into the photo block set {P_1, P_2, …, P_M}, where M is the total number of blocks in the test photo; the block size and overlap must be identical to those of the initial pseudo-portrait in step (3a);
(3c) For each test photo block, extract the first- and second-order derivatives of the block in the horizontal and vertical directions, where the linear operators used to extract the derivatives are f1 = [-1, 0, 1] and f2 = [1, 0, -2, 0, 1] respectively; concatenate the extracted first- and second-order derivatives column-wise into one vector as the feature of the block.
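The derivative features of step (3c) reduce to 1-D convolutions along rows and columns. A sketch follows; note that the sign convention of the first-order operator, written here as [-1, 0, 1], is an assumption, since a minus sign appears to have been lost in the text:

```python
import numpy as np

def derivative_features(block):
    """Horizontal and vertical first- and second-order derivatives of a
    block, concatenated into one feature vector (step (3c))."""
    f1 = np.array([-1.0, 0.0, 1.0])            # first-order operator (assumed sign)
    f2 = np.array([1.0, 0.0, -2.0, 0.0, 1.0])  # second-order operator
    feats = []
    for op in (f1, f2):
        # same-size filtering along each axis
        feats.append(np.apply_along_axis(
            lambda r: np.convolve(r, op, mode='same'), 1, block))  # horizontal
        feats.append(np.apply_along_axis(
            lambda c: np.convolve(c, op, mode='same'), 0, block))  # vertical
    return np.concatenate([f.ravel() for f in feats])
```

The resulting vector has four times as many entries as the block has pixels: two derivative orders in each of two directions.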
Step 4. Train on the blocked training sample set to obtain the two dictionaries: the portrait-block dictionary D_s and the photo-block dictionary D_p.
4.1) Randomly select 20000 corresponding block pairs from the photo-block set and the portrait-block set of the training samples. Extract the first- and second-order derivatives of each photo block as its feature, using the linear operators f1 = [-1, 0, 1] and f2 = [1, 0, -2, 0, 1]; subtract the block mean from the pixel values of each portrait block as the portrait-block feature; concatenate each portrait-block feature with its photo-block feature into a single column and normalize it;
4.2) Using the normalized combined features, solve the following problem by alternating iteration to obtain the coupled dictionary D:

    min_{D,C} ||I - DC||_2^2 + β ||C||_1,

where I is the matrix whose columns are the normalized combined features, C is the sparse-representation coefficient matrix to be solved, and β is the sparsity penalty factor, taken as 0.05 in the experiments.
The alternating iterative solution proceeds as follows:
(4.2a) Randomly initialize the matrix D;
(4.2b) Substitute D into the objective to obtain the optimization problem in C, min_C ||I - DC||_2^2 + β ||C||_1, and solve it for C;
(4.2c) Substitute C into the objective to obtain the optimization problem in D, min_D ||I - DC||_2^2, and solve it for D;
(4.2d) Iterate (4.2b) and (4.2c) until the objective ||I - DC||_2^2 + β ||C||_1 no longer decreases or a preset number of iterations is reached, yielding the matrices D and C;
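The alternating iteration of (4.2a)-(4.2d) can be sketched as below; the C-update uses ISTA-style soft thresholding and the D-update uses the least-squares solution of the D-subproblem, both generic choices rather than solvers specified by the patent:

```python
import numpy as np

def learn_coupled_dictionary(I, n_atoms, beta=0.05, n_outer=10, n_inner=50, seed=0):
    """Alternate between sparse coding C (D fixed) and a least-squares
    dictionary update D (C fixed) for min ||I - DC||_F^2 + beta ||C||_1."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((I.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)                      # (4.2a) random init
    C = np.zeros((n_atoms, I.shape[1]))
    for _ in range(n_outer):
        # (4.2b) update C by iterative soft thresholding with D fixed
        L = max(2.0 * np.linalg.norm(D, 2) ** 2, 1e-12)  # guard against D = 0
        for _ in range(n_inner):
            C = C - 2.0 * D.T @ (D @ C - I) / L
            C = np.sign(C) * np.maximum(np.abs(C) - beta / L, 0.0)
        # (4.2c) update D by least squares with C fixed
        D = I @ np.linalg.pinv(C)
    return D, C
```

A convergence test on the objective, as in (4.2d), could replace the fixed `n_outer` loop count used here for brevity.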
4.3) Decompose the coupled dictionary D obtained in 4.2) as D = [D_s^T, D_p^T]^T into two sub-dictionaries, the portrait-block dictionary D_s and the photo-block dictionary D_p, and normalize each column of the two dictionaries; the superscript T denotes matrix transpose.
Step 5. For each block of the test photo, extract its first- and second-order derivatives in the horizontal and vertical directions as the feature vector f, and obtain its sparse-representation coefficient w under the photo-block dictionary D_p according to

    min_w λ ||w||_1 + 0.5 ||D_p w - f||_2^2,

where λ = 0.1 is the penalty factor and f is the extracted photo-block feature.
Step 6. Using the portrait-block dictionary D_s obtained in Step 4 and the sparse-representation coefficient w obtained in Step 5, synthesize the high-definition detail feature block:

    S_i^h = D_s w,

where S_i^h (i = 1, …, M) is the i-th high-definition detail feature block and M is the number of such blocks.
Step 7. Add the high-definition detail feature block S_i^h obtained in Step 6 to the corresponding initial pseudo-portrait block S'_i obtained in Step 3, giving the final high-definition pseudo-portrait block:

    S_i^F = S'_i + S_i^h,

where i = 1, …, M and M is the total number of pseudo-portrait blocks.
Step 8. Repeat Steps 5 to 7 to obtain the M final high-definition pseudo-portrait blocks {S_1^F, …, S_M^F}, and fuse these blocks into the complete high-definition pseudo-portrait corresponding to the test photo P. When the final pseudo-portrait blocks are fused, the pixel values in the overlapping region of two adjacent blocks are taken as the mean of the two blocks' pixel values.
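The fusion rule of Step 8 (averaging the pixel values where adjacent final blocks overlap) can be sketched by accumulating per-pixel sums and coverage counts; the block coordinates are assumed to have been recorded when the image was divided:

```python
import numpy as np

def fuse_patches(patches, coords, out_shape):
    """Recombine overlapping blocks into one image, taking the mean of
    all block values covering each pixel (the fusion rule of Step 8)."""
    acc = np.zeros(out_shape)
    cnt = np.zeros(out_shape)
    for patch, (y, x) in zip(patches, coords):
        ph, pw = patch.shape
        acc[y:y + ph, x:x + pw] += patch
        cnt[y:y + ph, x:x + pw] += 1
    cnt[cnt == 0] = 1.0                 # uncovered pixels stay zero
    return acc / cnt
```

Averaging in the overlaps suppresses block-boundary artifacts, which is why the blocks are extracted with 75% overlap in the first place.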
Two, generate the method for pseudo-photo based on rarefaction representation by human face portrait
With reference to Fig. 2, the concrete steps of pseudo-photograph generation method of the present invention are as follows:
Step 1 is divided training sample set and test sample book collection.
To draw a portrait-photo is divided into training sample set and test sample book collection to collection, and wherein training sample set comprises training portrait collection and training photograph collection, and the test sample book collection is meant test portrait collection, chooses a portrait that the test portrait concentrates as test portrait S.
Step 2: From the training sample set and the test portrait S, generate an initial pseudo-photo P̃ with the embedded hidden Markov model (EHMM) method. For the concrete method see "Xiao B, Gao X B, Li X L, Tao D C. A New Approach for Face Recognition by Sketches in Photos. Signal Processing, 89(8): 1531-1539, 2009". The concrete steps are as follows:
(2a) Divide the training sample set and the test sample set into blocks;
(2b) For each test portrait block of the test portrait, train k embedded hidden Markov models using the training portrait set and the training photo set of the training sample set;
(2c) Use the k trained embedded hidden Markov models to synthesize k pseudo-photo blocks, and take the mean of these k pseudo-photo blocks as the initial pseudo-photo block corresponding to the test portrait block.
Step 3: Divide the test portrait and the initial pseudo-photo into blocks with 75% overlap, and extract the features of the test portrait blocks.

(3a) Divide the initial pseudo-photo P̃ obtained in step 2 into the pseudo-photo block set {P̃_1, P̃_2, ..., P̃_M}, keeping 75% overlap between adjacent blocks, where M is the total number of blocks;
(3b) Divide the test portrait into the portrait block set {S_1, S_2, ..., S_M}, keeping the same block size and degree of overlap as for the initial pseudo-photo in step (3a), where M is the total number of blocks in the test portrait;
(3c) For each test portrait block, extract its first- and second-order derivatives in the horizontal and vertical directions, where the linear operators used to extract the derivatives are f_1 = [-1, 0, 1] and f_2 = [1, 0, -2, 0, 1], respectively, and stack the extracted first- and second-order derivatives into a single column vector as the feature of the block.
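The feature extraction in (3c) amounts to convolving the block with the two derivative operators along each axis and concatenating the four resulting maps. A minimal sketch (the block content here is an arbitrary example, and the filter signs follow the standard gradient operators named above):

```python
import numpy as np
from scipy.ndimage import convolve1d

def block_features(block):
    """First- and second-order derivative features of an image block,
    computed in both the horizontal and vertical directions and
    stacked into one column vector."""
    f1 = np.array([-1.0, 0.0, 1.0])            # first-derivative operator
    f2 = np.array([1.0, 0.0, -2.0, 0.0, 1.0])  # second-derivative operator
    maps = [
        convolve1d(block, f1, axis=1),  # horizontal first derivative
        convolve1d(block, f1, axis=0),  # vertical   first derivative
        convolve1d(block, f2, axis=1),  # horizontal second derivative
        convolve1d(block, f2, axis=0),  # vertical   second derivative
    ]
    return np.concatenate([m.ravel() for m in maps])

block = np.arange(16, dtype=np.float64).reshape(4, 4)
f = block_features(block)
print(f.shape)  # four derivative maps of a 4x4 block, stacked into one vector
```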
Step 4: Train two dictionaries from the block-divided training sample set: the portrait-block dictionary D_s and the photo-block dictionary D_p.

4.1) Randomly select 20000 blocks from the photo-block set and the portrait-block set of the training sample set, where each photo block corresponds to a portrait block. Extract the first- and second-order derivatives of each portrait block as its features, where the linear operators used are f_1 = [-1, 0, 1] and f_2 = [1, 0, -2, 0, 1]; subtract the block mean from the pixel values of each photo block to form the photo-block feature. Stack each obtained portrait-block feature and the corresponding photo-block feature into a single column, and normalize it;

4.2) Using the normalized combined features, obtain the coupled dictionary D by solving the following problem with an alternating iteration method:

min_{D,C} ||I - DC||²₂ + β||C||₁,

where I is the matrix formed by the normalized combined features, each column being one combined feature; C is the sparse representation coefficient matrix to be solved; and β is the sparse-representation penalty factor, taken as 0.05 in the experiments.
The concrete steps of the alternating iteration method are:

(4.2a) Randomly initialize the matrix D;
(4.2b) Substitute the matrix D into min_{D,C} ||I - DC||²₂ + β||C||₁ to obtain the optimization problem in C, min_C ||I - DC||²₂ + β||C||₁, and solve it to obtain the matrix C;
(4.2c) Substitute the matrix C into the same objective to obtain the optimization problem in D, min_D ||I - DC||²₂, and solve it to obtain the matrix D;
(4.2d) Iterate (4.2b) and (4.2c) until the objective ||I - DC||²₂ + β||C||₁ no longer decreases or a preset number of iterations is reached, yielding the matrices D and C;
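The alternating scheme of (4.2a)-(4.2d) can be sketched as follows. The sparse-coding substep uses a simple ISTA loop and the dictionary update a least-squares solve with column renormalization; this is a minimal illustration under those assumptions, not the patent's exact solver, and the matrix dimensions and iteration counts are chosen arbitrarily.

```python
import numpy as np

def ista(D, I, beta, n_iter=100):
    """Sparse coding: min_C ||I - DC||_2^2 + beta*||C||_1 via ISTA."""
    L = 2.0 * np.linalg.norm(D, 2) ** 2 + 1e-8  # Lipschitz constant of the gradient
    C = np.zeros((D.shape[1], I.shape[1]))
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ C - I)
        G = C - grad / L                               # gradient step
        C = np.sign(G) * np.maximum(np.abs(G) - beta / L, 0.0)  # soft threshold
    return C

def learn_dictionary(I, n_atoms, beta=0.05, n_outer=20, seed=0):
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((I.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)         # start from unit-norm atoms
    for _ in range(n_outer):
        C = ista(D, I, beta)               # (4.2b): solve for C with D fixed
        D = I @ np.linalg.pinv(C)          # (4.2c): least-squares update of D
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D, C

I = np.random.default_rng(1).standard_normal((10, 50))  # 50 combined features
D, C = learn_dictionary(I, n_atoms=20)
print(D.shape, C.shape)
```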
4.3) Decompose the coupled dictionary D obtained in 4.2) according to D = [D_s^T, D_p^T]^T into two sub-dictionaries, the portrait-block dictionary D_s and the photo-block dictionary D_p, and normalize each column of the two dictionaries, where the superscript T denotes matrix transpose.
Step 5: For each block of the test portrait, extract its first- and second-order derivatives in the horizontal and vertical directions as the feature vector f, and obtain its sparse representation coefficient w under the portrait-block dictionary D_s according to:

min_w λ||w||₁ + 0.5 ||D_s w - f||²₂,

where λ = 0.1 is the penalty factor and f is the extracted portrait-block feature.
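The sparse-coding problem of step 5 is a standard lasso and can be solved with off-the-shelf tools; a minimal sketch with scikit-learn's `Lasso` follows. The dictionary and feature vector here are random stand-ins, and the `alpha = lam / n` rescaling accounts for scikit-learn's 1/(2n) objective normalization.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_code(D, f, lam=0.1):
    """Solve min_w lam*||w||_1 + 0.5*||D w - f||_2^2.

    scikit-learn's Lasso minimizes (1/(2n))*||f - D w||^2 + alpha*||w||_1,
    so alpha = lam / n reproduces the objective above (up to the factor n)."""
    n = f.shape[0]
    model = Lasso(alpha=lam / n, fit_intercept=False, max_iter=10000)
    model.fit(D, f)
    return model.coef_

rng = np.random.default_rng(0)
D = rng.standard_normal((40, 100))       # stand-in dictionary (40-dim features, 100 atoms)
w_true = np.zeros(100)
w_true[[3, 17, 58]] = [1.5, -2.0, 0.7]   # a 3-sparse ground-truth code
f = D @ w_true
w = sparse_code(D, f)
print(np.count_nonzero(np.abs(w) > 1e-3))  # only a few atoms stay active
```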
Step 6: Using the photo-block dictionary D_p obtained in step 4 and the sparse representation coefficient w obtained in step 5, synthesize the high-definition, detail-rich feature information block:

P̃_hi^i = D_p w,

where P̃_hi^i (i = 1, ..., M) is the i-th high-definition feature information block and M is the total number of feature information blocks.
Step 7: Add the high-definition feature information block P̃_hi^i obtained in step 6 to the corresponding initial pseudo-photo block P̃_i obtained in step 3 to obtain the final high-definition pseudo-photo block:

P_i = P̃_i + P̃_hi^i,

where i = 1, ..., M and M is the total number of initial pseudo-photo blocks.
Step 8: Repeat steps 5 to 7 to obtain the M final high-definition pseudo-photo blocks {P_1, ..., P_M}, and fuse these blocks into the complete high-definition pseudo-photo corresponding to the test portrait S. When the final pseudo-photo blocks are fused, each pixel in the overlapping region of two adjacent blocks is taken as the mean of the pixel values of the two blocks.
The effect of the present invention can be further illustrated by the following experiments.

The method of the present invention is compared experimentally with the locally linear embedding (LLE) method and the embedded hidden Markov model (EHMM) method. In the experiments, pseudo-portraits and pseudo-photos are first generated with each method, and the generated pseudo-portraits and pseudo-photos are then subjected to subjective quality evaluation and face recognition experiments.
1. Experiment conditions and notation

The software environment is MATLAB 2009a, developed by MathWorks, USA, running on a personal computer with a 2 GHz CPU. Notation: depending on the method used to produce the initial pseudo-portrait or initial pseudo-photo, the combination of the LLE method with the method of the present invention is denoted SR-LLE, and the combination of the embedded hidden Markov model method with the method of the present invention is denoted SR-EHMM.
The following experiments are all carried out on two databases: the public CUHK student database of the Chinese University of Hong Kong, and the newly built VIPS database of the VIPS laboratory of Xidian University.
2. Experiment content

Experiment 1: generation of pseudo-portraits and pseudo-photos

As described in Embodiment 1 of the method of the present invention, pseudo-portraits are synthesized on the CUHK student database with the SR-LLE and SR-EHMM methods, and pseudo-portraits are also generated on the same database with the LLE method and the EHMM method. The experimental results are compared in Fig. 3, where Fig. 3(a) is the original photo, Fig. 3(b) the pseudo-portrait generated by the LLE method, Fig. 3(c) the pseudo-portrait generated by the EHMM method, Fig. 3(d) the pseudo-portrait generated by SR-LLE (LLE combined with the method of the present invention), and Fig. 3(e) the pseudo-portrait generated by SR-EHMM (EHMM combined with the method of the present invention).
As described in Embodiment 2 of the method of the present invention, pseudo-photos are synthesized on the CUHK student database with the SR-EHMM method, and pseudo-photos are also generated on the same database with the EHMM method. The experimental results are compared in Fig. 4, where Fig. 4(a) is the original portrait, Fig. 4(b) the pseudo-photo generated by the EHMM method, and Fig. 4(c) the pseudo-photo generated by SR-EHMM.

As described in Embodiment 1, pseudo-portraits are synthesized on the VIPS database with the SR-LLE and SR-EHMM methods, and pseudo-portraits are also generated on the same database with the LLE method and the EHMM method. The experimental results are compared in Fig. 5, where Fig. 5(a) is the original photo, Fig. 5(b) the pseudo-portrait generated by the LLE method, Fig. 5(c) the pseudo-portrait generated by the EHMM method, Fig. 5(d) the pseudo-portrait generated by SR-LLE, and Fig. 5(e) the pseudo-portrait generated by SR-EHMM.

As described in Embodiment 2, pseudo-photos are synthesized on the VIPS database with the SR-EHMM method, and pseudo-photos are also generated on the same database with the EHMM method. The experimental results are compared in Fig. 6, where Fig. 6(a) is the original portrait, Fig. 6(b) the pseudo-photo generated by the EHMM method, and Fig. 6(c) the pseudo-photo generated by SR-EHMM.
From the results of Experiment 1 it can be seen that, because the sparse representation coefficients implicitly perform an adaptive selection of the neighbor blocks most relevant to the test image block, the number of neighbors used to synthesize each target image block is not fixed. As a result the generated images have high definition and clear details, overcoming the blurred details and low definition caused by selecting a fixed number of neighbors in the existing methods.
Experiment 2: subjective image quality evaluation

In this experiment, 20 volunteers were invited to score the experimental results, and the effectiveness of the algorithm is described by the mean opinion score (MOS) computed from these scores. Taking pseudo-portraits as an example, the original portrait is used as the reference image, and the pseudo-portraits generated by the LLE, EHMM, SR-LLE, and SR-EHMM methods are used as the images to be evaluated. For each image to be evaluated, the mean of the scores given by the 20 volunteers is first computed as the final score of that portrait; the effectiveness of an algorithm is then measured by the mean of the final scores of all pseudo-portraits generated by that algorithm. That is, for one image the MOS value is computed as

MOS(l) = (1/20) Σ_{i=1}^{20} A(i, l),

where A(i, l) denotes the score given by the i-th volunteer to the l-th image. The final experimental results are listed in Table 1.
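The MOS computation above is a per-image average followed by a per-algorithm average; as a sketch with made-up scores for 4 volunteers and 3 images:

```python
import numpy as np

# A[i, l]: score given by volunteer i to image l (made-up example data)
A = np.array([[4, 3, 5],
              [5, 4, 4],
              [4, 4, 5],
              [5, 3, 4]], dtype=float)

mos_per_image = A.mean(axis=0)          # MOS(l) = (1/N) * sum_i A(i, l)
algorithm_score = mos_per_image.mean()  # mean over all images of one algorithm
print(mos_per_image)
print(algorithm_score)
```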
Table 1. Subjective image quality evaluation scores of the different algorithms

In Table 1, "No" means that in the literature the LLE method is only used to generate pseudo-portraits, not pseudo-photos, so SR-LLE is likewise only used to generate pseudo-portraits. As can be seen from Table 1, the subjective image quality scores of the method of the present invention are clearly better than those of the existing methods.
Experiment 3: face recognition based on sparse representation

As described in the background art, when example-based face recognition is applied in criminal investigation and anti-terrorism manhunts, there are usually two situations: recognition against a face photo gallery given a portrait, and recognition against a face portrait gallery given a photo. For the former, the test portrait can first be converted into a pseudo-photo and then recognized, or the face photo gallery can be converted into a pseudo-portrait gallery before matching. Likewise, for the latter, the photo can be converted into a pseudo-portrait, or the portrait gallery can be converted into a pseudo-photo gallery, before recognition. There are thus four recognition modes, as shown in Table 2.
Table 2. Four face recognition modes
The recognition method used in this experiment is the robust sparse-representation-based face recognition algorithm. The CUHK student database contains 188 face photos, each with one portrait drawn by an artist. The VIPS database contains 200 face photos, where each face photo corresponds to 5 portraits drawn by 5 artists. To examine the effect of multiple images on recognition, experiments are carried out on the VIPS database with the number of training images per face used to build the dictionary set to 1, 3, and 5, respectively. The experimental results are shown in Tables 3, 4, 5, 6, and 7.
Table 3. Face recognition rates (%) on the CUHK student database
Table 4. Face recognition rates (%) on the VIPS database in photo-gallery mode
Table 5. Face recognition rates (%) on the VIPS database in portrait-gallery mode
Table 6. Face recognition rates (%) on the VIPS database in pseudo-portrait-gallery mode
Table 7. Face recognition rates (%) on the VIPS database in pseudo-photo-gallery mode
As can be seen from Tables 4, 5, 6, and 7, the method proposed by the present invention improves the face recognition rate over the original methods. In particular, on the VIPS database the recognition rate of the present invention is clearly higher than those of the existing pseudo-portrait and pseudo-photo generation methods: SR-LLE improves significantly over LLE, and SR-EHMM improves significantly over EHMM, which illustrates the effectiveness of the present invention.

Claims (3)

1. A method for generating a portrait from a face photo based on sparse representation, comprising the following steps:

(1) dividing the portrait-photo pair set into a training sample set and a test sample set, and selecting a test photo P from the test sample set;

(2) generating, from the training sample set and the test photo, an initial pseudo-portrait S̃ corresponding to the test photo by a pseudo-portrait generation method;

(3) re-dividing the initial pseudo-portrait S̃ and the test photo P into blocks of the same size and the same degree of overlap, where S̃ = {S̃_1, S̃_2, ..., S̃_M}, P = {P_1, P_2, ..., P_M}, M being the total number of blocks, and extracting the feature vector f of each block of the test photo;

(4) jointly learning a portrait-block dictionary D_s and a photo-block dictionary D_p from the training sample set:

(4a) randomly selecting 20000 blocks from the photo-block set and the portrait-block set of the training sample set, where each photo block corresponds to a portrait block; extracting the first- and second-order derivatives of each photo block as its features; subtracting the block mean from the pixel values of each portrait block as the portrait-block feature; stacking each obtained portrait-block feature and photo-block feature into a single column, and normalizing it;

(4b) using the normalized combined features, obtaining the coupled dictionary D by solving the following problem with an alternating iteration method:

min_{D,C} ||I - DC||²₂ + β||C||₁,

where I is the matrix formed by the normalized combined features, each column being one normalized combined feature, C is the sparse representation coefficient matrix to be solved, and β is the sparse-representation penalty factor, taken as 0.05 in the experiments;

(4c) decomposing the coupled dictionary D obtained in (4b) according to D = [D_s^T, D_p^T]^T into two dictionaries, the portrait-block dictionary D_s and the photo-block dictionary D_p, and normalizing each column of the two dictionaries, where the superscript T denotes matrix transpose;

(5) using the feature vector f obtained in step (3) and the photo-block dictionary D_p obtained in step (4), finding the sparse representation of f and its coefficient w according to:

min_w ||f - D_p w||²₂ + β||w||₁,

where β is the sparse-representation penalty factor, taken as 0.05 in the experiments;

(6) using the portrait-block dictionary D_s obtained in step (4) and the sparse representation coefficient w obtained in (5), synthesizing the high-definition, detail-rich feature information block according to S̃_hi = D_s w, where i = 1, 2, ..., M and M is the total number of information blocks;

(7) adding the high-definition feature information block S̃_hi obtained in step (6) to the corresponding block of the initial pseudo-portrait obtained in step (2), so as to enhance definition and detail and obtain the final pseudo-portrait block;

(8) repeating steps (5)-(7) until the M final pseudo-portrait blocks are obtained, and combining these final pseudo-portrait blocks into the pseudo-portrait corresponding to the test photo.
2. The method for generating a portrait from a face photo based on sparse representation according to claim 1, wherein the solving of min_{D,C} ||I - DC||²₂ + β||C||₁ by the alternating iteration method in step (4b) to obtain the coupled dictionary D comprises the following concrete steps:

(2a) randomly initializing the matrix D;

(2b) substituting the matrix D into min_{D,C} ||I - DC||²₂ + β||C||₁ to obtain the optimization problem in C, min_C ||I - DC||²₂ + β||C||₁, and solving it to obtain the matrix C;

(2c) substituting the matrix C into min_{D,C} ||I - DC||²₂ + β||C||₁ to obtain the optimization problem in D, min_D ||I - DC||²₂, and solving it to obtain the matrix D;

(2d) iteratively performing (2b) to (2c) until the objective ||I - DC||²₂ + β||C||₁ no longer decreases or a preset number of iterations is reached, obtaining the matrix D and the matrix C.
3. A method for generating a photo from a face portrait based on sparse representation, comprising the following steps:

1) dividing the portrait-photo pair set into a training sample set and a test sample set, and selecting a test portrait S from the test sample set;

2) generating, from the training sample set and the test portrait, an initial pseudo-photo P̃ corresponding to the test portrait by a pseudo-photo generation method;

3) re-dividing the initial pseudo-photo P̃ and the test portrait S into blocks of the same size and the same degree of overlap, where P̃ = {P̃_1, P̃_2, ..., P̃_M}, S = {S_1, S_2, ..., S_M}, M being the total number of blocks, and extracting the feature vector f of each block of the test portrait;

4) jointly learning a portrait-block dictionary D_s and a photo-block dictionary D_p from the training sample set:

4a) randomly selecting 20000 blocks from the photo-block set and the portrait-block set of the training sample set, where each photo block corresponds to a portrait block; extracting the first- and second-order derivatives of each portrait block as its features; subtracting the block mean from the pixel values of each photo block as the photo-block feature; stacking each obtained portrait-block feature and photo-block feature into a single column, and normalizing it;

4b) using the normalized combined features, obtaining the coupled dictionary D by solving the following problem with an alternating iteration method:

min_{D,C} ||I - DC||²₂ + β||C||₁,

where I is the matrix formed by the normalized combined features, each column being one normalized combined feature, C is the sparse representation coefficient matrix to be solved, and β is the sparse-representation penalty factor, taken as 0.05 in the experiments;

4c) decomposing the coupled dictionary D obtained in 4b) according to D = [D_s^T, D_p^T]^T into two dictionaries, the portrait-block dictionary D_s and the photo-block dictionary D_p, and normalizing each column of the two dictionaries, where the superscript T denotes matrix transpose;

5) using the feature vector f obtained in step 3) and the portrait-block dictionary D_s obtained in step 4), finding the sparse representation of f and its coefficient w according to:

min_w ||f - D_s w||²₂ + β||w||₁,

where β is the sparse-representation penalty factor, taken as 0.05 in the experiments;

6) using the photo-block dictionary D_p obtained in step 4) and the sparse representation coefficient w obtained in 5), synthesizing the high-definition, detail-rich feature information block according to P̃_hi = D_p w, where i = 1, 2, ..., M and M is the total number of information blocks;

7) adding the high-definition feature information block P̃_hi obtained in step 6) to the corresponding block of the initial pseudo-photo obtained in step 2), so as to enhance definition and detail and obtain the final pseudo-photo block;

8) repeating steps 5)-7) until the M final pseudo-photo blocks are obtained, and combining these final pseudo-photo blocks into the pseudo-photo corresponding to the test portrait.
CN2010102893309A 2010-09-24 2010-09-24 Face image-picture generating method based on sparse representation Expired - Fee Related CN101958000B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010102893309A CN101958000B (en) 2010-09-24 2010-09-24 Face image-picture generating method based on sparse representation


Publications (2)

Publication Number Publication Date
CN101958000A true CN101958000A (en) 2011-01-26
CN101958000B CN101958000B (en) 2012-08-15

Family

ID=43485317

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010102893309A Expired - Fee Related CN101958000B (en) 2010-09-24 2010-09-24 Face image-picture generating method based on sparse representation

Country Status (1)

Country Link
CN (1) CN101958000B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103902991A (en) * 2014-04-24 2014-07-02 西安电子科技大学 Face recognition method based on forensic sketches
CN103984954A (en) * 2014-04-23 2014-08-13 西安电子科技大学宁波信息技术研究院 Image synthesis method based on multi-feature fusion
CN104517274A (en) * 2014-12-25 2015-04-15 西安电子科技大学 Face portrait synthesis method based on greedy search
CN104700380A (en) * 2015-03-12 2015-06-10 陕西炬云信息科技有限公司 Face portrait synthesis method based on single photo and portrait pair
CN105608451A (en) * 2016-03-14 2016-05-25 西安电子科技大学 Face portrait generation method based on subspace ridge regression
CN105869134A (en) * 2016-03-24 2016-08-17 西安电子科技大学 Face portrait synthesis method based on directional graph model
CN106056561A (en) * 2016-04-12 2016-10-26 西安电子科技大学 A Face Portrait Synthesis Method Based on Bayesian Inference
CN106778811A (en) * 2016-11-21 2017-05-31 西安电子科技大学 A kind of image dictionary generation method, image processing method and device
CN103793695B (en) * 2014-02-10 2017-11-28 天津大学 A kind of method of the sub- dictionary joint training of multiple feature spaces for recognition of face
CN109145135A (en) * 2018-08-03 2019-01-04 厦门大学 A kind of human face portrait aging method based on principal component analysis

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080170623A1 (en) * 2005-04-04 2008-07-17 Technion Resaerch And Development Foundation Ltd. System and Method For Designing of Dictionaries For Sparse Representation
CN101571950A (en) * 2009-03-25 2009-11-04 湖南大学 Image restoring method based on isotropic diffusion and sparse representation
CN101640541A (en) * 2009-09-04 2010-02-03 西安电子科技大学 Reconstruction method of sparse signal
US20100046829A1 (en) * 2008-08-21 2010-02-25 Adobe Systems Incorporated Image stylization using sparse representation
CN101719142A (en) * 2009-12-10 2010-06-02 湖南大学 Method for detecting picture characters by sparse representation based on classifying dictionary



Also Published As

Publication number Publication date
CN101958000B (en) 2012-08-15

Similar Documents

Publication Publication Date Title
Nhan Duong et al. Temporal non-volume preserving approach to facial age-progression and age-invariant face recognition
CN101958000A (en) Face image-picture generating method based on sparse representation
Yang et al. Learning face age progression: A pyramid architecture of gans
Kolouri et al. Optimal mass transport: Signal processing and machine-learning applications
Gao et al. A review of active appearance models
CN100520807C (en) Face recognition method based on independent component analysis of multi-scale total variation quotient images
WO2022000420A1 (en) Human body action recognition method, human body action recognition system, and device
TWI419059B (en) Method and system for example-based face hallucination
CN107085716A (en) Cross-view gait recognition method based on multi-task generative adversarial network
JP6207210B2 (en) Information processing apparatus and method
Wallace et al. Cross-pollination of normalization techniques from speaker to face authentication using Gaussian mixture models
CN102915527A (en) Face image super-resolution reconstruction method based on morphological component analysis
CN113538608B (en) Controllable character image generation method based on generative adversarial network
CN114937298B (en) A micro-expression recognition method based on feature decoupling
CN116975602A (en) AR interactive emotion recognition method and system based on multi-modal information double fusion
JP2007072620A (en) Image recognition device and its method
Chandaliya et al. Child face age progression and regression using self-attention multi-scale patch gan
Ding et al. Sequential convolutional network for behavioral pattern extraction in gait recognition
CN107563319A (en) Face similarity measurement computational methods between a kind of parent-offspring based on image
Que et al. Denoising diffusion probabilistic model for face sketch-to-photo synthesis
CN111008570A (en) Video understanding method based on compression-excitation pseudo-three-dimensional network
Banitalebi-Dehkordi et al. Face recognition using a new compressive sensing-based feature extraction method
CN112766157B (en) Cross-age face image recognition method based on disentanglement representation learning
Teo et al. 2.5 D Face Recognition System using EfficientNet with Various Optimizers
Bian et al. Conditional adversarial consistent identity autoencoder for cross-age face synthesis

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
    Granted publication date: 20120815
    Termination date: 20170924