CN101958000A - Face image-picture generating method based on sparse representation - Google Patents
- Publication number
- CN101958000A (application number CN201010289330A)
- Authority
- CN
- China
- Prior art keywords
- portrait
- photo
- piece
- pseudo
- dictionary
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a face sketch-photo generation method based on sparse representation, which mainly solves the problems of low sharpness and blurred details in the pseudo-sketches and pseudo-photos generated by traditional methods. The implementation process is as follows: generate an initial pseudo-sketch or initial pseudo-photo with a traditional pseudo-sketch or pseudo-photo generation method; divide all images into blocks and train a sketch-block dictionary and a photo-block dictionary on the training sample set; use the two dictionaries to synthesize high-definition feature information from each input test photo block or test sketch block; add the obtained high-definition feature information to the corresponding initial pseudo-sketch block or initial pseudo-photo block to obtain a final high-definition pseudo-sketch block or pseudo-photo block; and fuse all the high-definition pseudo-sketch or pseudo-photo blocks into a complete pseudo-sketch or pseudo-photo. Compared with the prior art, the generated pseudo-sketches and pseudo-photos have high sharpness and distinct details and can be used for face recognition and face retrieval.
Description
Technical field
The invention belongs to the technical field of image processing and relates to face sketch-photo generation; it can be used to retrieve and identify faces in fields such as criminal investigation and anti-terrorism.
Background technology
With the arrival of the information age, people increasingly appreciate the importance of information security. Identification and authentication are effective means of ensuring information security and have developed rapidly in recent years. Face-based identification and authentication is one of the most convenient and effective identity-verification technologies, so face recognition has received much attention in recent years. Because of differences in imaging modality, a face image can have multiple instances, such as a photo or a sketch, so face recognition is not limited to the recognition of face photos; accordingly, instance-based face recognition mainly takes two forms: photo-based face recognition and sketch-based face recognition. Photo-based face recognition has been applied in many fields, such as access-control systems, search engines and video surveillance. In many cases, however, for example in criminal investigation and anti-terrorism manhunts, no photo of the suspect is available; there is only a sketch drawn by an artist in cooperation with an eyewitness, and face recognition must then identify the suspect by matching the sketch against an existing police photo database. In addition, when a suspect is arrested, a face photo of the suspect can be obtained and retrieved against the sketch database built by the police, to determine whether the suspect has committed crimes before, or to determine the number of previous offences from the number of sketches retrieved.
In summary, sketch-based face recognition is mainly applied in two situations: first, determining a suspect's identity; second, verifying whether someone has a criminal history and, if so, determining the number of offences. In the first situation, a sketch is used as the test image and matched against an existing police photo database. In the second situation, a face photo is used as the test image and matched against an existing police sketch database. However, because of their different generation mechanisms, sketches and photos are heterogeneous, and general photo-photo recognition algorithms (see "Chellappa R, Wilson C, Sirohey S. Human and Machine Recognition of Faces: a Survey. Proceedings of the IEEE, 83(5): 705-741, 1995", "Zhao W, Chellappa R, Rosenfeld A, Phillips P. Face Recognition: a Literature Survey. ACM Computing Surveys, 34(4): 399-458, 2003" and "Phillips P, Flynn P, Scruggs T, Bowyer K, Chang J, Hoffman K, Marques J, Min J, Worek W. Overview of the Face Recognition Grand Challenge. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20-25 June 2005") all match photos against a photo database; they run into difficulty when a sketch must be matched against a photo database or a photo against a sketch database. To overcome this difficulty, the sketch or photo should first be converted into the same representation space, after which recognition can be carried out in that common space (the photo space or the sketch space) with ordinary face recognition methods. Heterogeneous image conversion therefore becomes a problem demanding a prompt solution.
For the above problem, there are two strategies for converting heterogeneous images into homogeneous ones: one converts a sketch into a photo, the other converts a photo into a sketch. In the existing methods, "face image" refers to either a face photo or a face sketch, and "pseudo-image" refers to a pseudo-sketch or a pseudo-photo. Sketches fall into two broad classes: simple line drawings and caricatures, which contain almost no complex information such as texture details, and complex sketches, which, compared with line drawings and caricatures, contain richer texture details and more information. Existing research on complex sketches proceeds mainly in two directions: converting a photo that contains rich texture information into a sketch that contains less texture information, and converting a sketch that contains less texture information into a complex photo.
1. Photo-to-sketch conversion techniques mainly comprise three classes:
The first is linear: a principal component analysis (PCA) algorithm is used to train separate eigen-subspaces in the photo space and in the sketch space; the photo to be converted is projected into the photo eigenspace to obtain its projection coefficients, the reconstruction coefficients over the photo-space bases are then obtained from these projection coefficients, and the pseudo-sketch is reconstructed in the sketch space from the sketches corresponding to the photo-space bases and the same reconstruction coefficients. This method assumes that the mapping between photos and sketches is linear and cannot truly reflect the nonlinear relationship between the two, so the results are noisy, with low sharpness and blurred details;
The second is pseudo-nonlinear: the nonlinear mapping is approximated piecewise-linearly. Specifically, the photo-sketch pairs in the training set and the photo to be converted are divided into uniform blocks; for each block of the photo to be converted, the K most similar blocks are found among all training photo blocks, the sketch blocks corresponding to these K photo blocks are linearly weighted to produce a pseudo-sketch block, and all the resulting pseudo-sketch blocks are finally combined into a complete pseudo-sketch (a minimal sketch of this baseline is given after this list). This approximates the global nonlinear relationship by local linear combinations but is still not a truly nonlinear method; see "Liu Q S, Tang X O. A Nonlinear Approach for Face Sketch Synthesis and Recognition. IEEE International Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20-26 Jun 2005" and "Gao X B, Zhong J J, Tao D C and Li X L. Local Face Sketch Synthesis Learning. Neurocomputing, 71(10-12): 1921-1930, 2008". Because the K neighbouring blocks are nearest neighbours in Euclidean distance, they are not necessarily the K blocks most relevant to the photo block being converted, and the number of nearest neighbours is fixed, which leads to results with low sharpness and blurred details;
The third is nonlinear and is mainly based on the embedded hidden Markov model (EHMM). An EHMM is used to model the nonlinear relationship between photos and sketches, and the photo to be converted is mapped to a pseudo-sketch according to the learned model. Since a single model cannot fully describe the complex nonlinear relationship between photos and sketches, the idea of selective ensembles was introduced: an individual sketch generator is obtained for each photo-sketch pair, and a subset of the individual generators is selected and fused, so that the photo to be converted is mapped to the corresponding pseudo-sketch. Building on this, the images are further divided into blocks, a model is trained for every pair of training photo block and sketch block, the photo block to be converted is turned into a pseudo-sketch block according to the models, and the pseudo-sketch blocks are fused into a pseudo-sketch; see "Gao X B, Zhong J J, Tao D C and Li X L. Local Face Sketch Synthesis Learning. Neurocomputing, 71(10-12): 1921-1930, 2008". This method also models each block with the K nearest neighbours of the photo block to be converted, which again leads to results with low sharpness and blurred details.
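As referenced above, a minimal Python sketch of the pseudo-nonlinear baseline (not the method of this invention) is given below, assuming vectorized grayscale blocks; the function name, the value of K and the LLE-style weight computation are illustrative assumptions rather than details from the cited documents.

```python
import numpy as np

def synthesize_sketch_block_knn(test_block, train_photo_blocks, train_sketch_blocks, K=5):
    """Pseudo-nonlinear baseline (illustrative): weight the K nearest training
    photo blocks and combine the corresponding sketch blocks.

    test_block          : (d,) vectorized test photo block
    train_photo_blocks  : (N, d) vectorized training photo blocks
    train_sketch_blocks : (N, d) corresponding training sketch blocks
    """
    # K nearest training photo blocks by Euclidean distance
    dists = np.linalg.norm(train_photo_blocks - test_block, axis=1)
    idx = np.argsort(dists)[:K]

    # LLE-style reconstruction weights: solve G w = 1, then normalize to sum to 1
    neighbors = train_photo_blocks[idx]           # (K, d)
    diff = neighbors - test_block                 # (K, d)
    G = diff @ diff.T + 1e-6 * np.eye(K)          # regularized local Gram matrix
    w = np.linalg.solve(G, np.ones(K))
    w /= w.sum()

    # The pseudo-sketch block is the same weighted combination of the sketch blocks
    return w @ train_sketch_blocks[idx]
```

With a fixed K, dissimilar neighbours can be forced into the combination, which is the source of the blurring discussed above.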
2. Sketch-to-photo conversion techniques mainly comprise the following two methods:
The first is subspace-based, a method that performs feature analysis on a joint space: the photo space and the sketch space are first concatenated into a joint space, a PCA algorithm is trained on the joint space to construct a global sketch-photo subspace, this global subspace is then split into a photo eigen-subspace and a sketch eigen-subspace, the projection coefficients of the sketch to be converted are obtained in the sketch eigen-subspace, and the face image vector is finally reconstructed in the global subspace from these projection coefficients; the first half of this vector is the pseudo-photo. This method again assumes that the mapping between photos and sketches is linear, whereas the true relationship is far more complex, so the results are noisy, with low sharpness and blurred details;
The second is based on the embedded hidden Markov model and is symmetric to the EHMM-based pseudo-sketch generation method described above: exchanging the roles of sketch and photo yields the symmetric method; see "Xiao B, Gao X B, Li X L, Tao D C. A New Approach for Face Recognition by Sketches in Photos. Signal Processing, 89(8): 1531-1539, 2009". Because of the same use of K nearest neighbours, the generated pseudo-photos also have low sharpness and blurred details.
Summary of the invention
The objective of the invention is to overcome the deficiencies of the above existing methods and to propose a face sketch-photo generation method based on sparse representation, so as to improve the sharpness of the generated pseudo-sketches and pseudo-photos and make their details more distinct.
The technical scheme that realizes the objective of the invention comprises the following:
1. A face photo-to-sketch generation method based on sparse representation, comprising the steps of:
(1) dividing a set of sketch-photo pairs into a training sample set and a test sample set, and choosing a test photo P from the test sample set;
(2) utilizing an existing pseudo-sketch generation method to generate, from the training sample set and the test photo, an initial pseudo-sketch corresponding to the test photo;
(3) dividing the initial pseudo-sketch and the test photo P into blocks of identical size and identical overlap, where P = {P_1, P_2, ..., P_M} and M is the total number of blocks, and extracting the feature vector f of each block of the test photo;
(4) jointly learning a sketch-block dictionary D_s and a photo-block dictionary D_p from the training sample set:
(4a) randomly selecting 20000 blocks each from the photo-block set and the sketch-block set of the training samples, each photo block corresponding to a sketch block; extracting the first- and second-order derivatives of each photo block as its features; taking the pixel values of each sketch block minus the block mean as its features; concatenating the resulting sketch-block features and photo-block features into one column vector and normalizing it;
(4b) using the normalized combined features, solving the following formula by alternating iteration to obtain the coupled dictionary D:
min_{D,C} ||I - DC||_F^2 + β||C||_1,
where I is the matrix formed by the normalized combined features, each column being one normalized combined feature, C is the sparse representation coefficient matrix to be solved, and β is the sparse representation penalty factor, taken as 0.05 in the experiments;
(4c) decomposing the coupled dictionary D obtained in (4b) according to D = [D_s^T, D_p^T]^T into two dictionaries, the sketch-block dictionary D_s and the photo-block dictionary D_p, and normalizing each column of the two dictionaries, the superscript T denoting matrix transposition;
(5) using the feature vector f obtained in step (3) and the photo-block dictionary D_p obtained in step (4), seeking its sparse representation according to the following formula to obtain the sparse representation coefficients w:
min_w ||f - D_p w||_2^2 + β||w||_1,
where β is the sparse representation penalty factor, taken as 0.05 in the experiments;
(6) using the sketch-block dictionary D_s obtained in step (4) and the sparse representation coefficients w obtained in (5), synthesizing a high-definition, detail-rich feature information block x_i according to x_i = D_s w, where i = 1, 2, ..., M and M is the total number of information blocks;
(7) adding the high-definition, detail-rich feature information block x_i obtained in step (6) to the corresponding block of the initial pseudo-sketch obtained in step (2), to enhance sharpness and detail and obtain a final pseudo-sketch block;
(8) repeating steps (5)-(7) until the M final pseudo-sketch blocks are obtained, and combining these final pseudo-sketch blocks into the pseudo-sketch corresponding to the test photo.
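For orientation, the following hypothetical Python outline strings steps (3)-(8) above together; every helper name (extract_blocks, block_feature, sparse_code, fuse_blocks) is an assumed placeholder for the operations defined in those steps, and concrete sketches of these helpers are given in the detailed description below.

```python
def photo_to_sketch(test_photo, initial_pseudo_sketch, D_s, D_p,
                    block_size=16, overlap=0.75, beta=0.05):
    """Illustrative driver for steps (3)-(8): block both images, sparse-code each
    photo-block feature over D_p, synthesize detail with D_s, add it to the
    corresponding initial pseudo-sketch block, and fuse the blocks back together."""
    photo_blocks, positions = extract_blocks(test_photo, block_size, overlap)   # step (3)
    sketch_blocks, _ = extract_blocks(initial_pseudo_sketch, block_size, overlap)

    final_blocks = []
    for P_i, S0_i in zip(photo_blocks, sketch_blocks):
        f = block_feature(P_i)                    # derivative feature of the test photo block
        w = sparse_code(f, D_p, beta)             # step (5): min ||f - D_p w||^2 + beta*||w||_1
        x_i = D_s @ w                             # step (6): high-definition detail block
        final_blocks.append(S0_i.ravel() + x_i)   # step (7): add detail to the initial block
    return fuse_blocks(final_blocks, positions, test_photo.shape, block_size)   # step (8)
```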
2. A face sketch-to-photo generation method based on sparse representation, comprising the steps of:
1) dividing a set of sketch-photo pairs into a training sample set and a test sample set, and choosing a test sketch S from the test sample set;
2) utilizing an existing pseudo-photo generation method to generate, from the training sample set and the test sketch, an initial pseudo-photo corresponding to the test sketch;
3) dividing the initial pseudo-photo and the test sketch S into blocks of identical size and identical overlap, where S = {S_1, S_2, ..., S_M} and M is the total number of blocks, and extracting the feature vector f of each block of the test sketch;
4) jointly learning a sketch-block dictionary D_s and a photo-block dictionary D_p from the training sample set:
4a) randomly selecting 20000 blocks each from the photo-block set and the sketch-block set of the training samples, each photo block corresponding to a sketch block; extracting the first- and second-order derivatives of each sketch block as its features; taking the pixel values of each photo block minus the block mean as its features; concatenating the resulting sketch-block features and photo-block features into one column vector and normalizing it;
4b) using the normalized combined features, solving the following formula by alternating iteration to obtain the coupled dictionary D:
min_{D,C} ||I - DC||_F^2 + β||C||_1,
where I is the matrix formed by the normalized combined features, each column being one normalized combined feature, C is the sparse representation coefficient matrix to be solved, and β is the sparse representation penalty factor, taken as 0.05 in the experiments;
4c) decomposing the coupled dictionary D obtained in 4b) according to D = [D_s^T, D_p^T]^T into two dictionaries, the sketch-block dictionary D_s and the photo-block dictionary D_p, and normalizing each column of the two dictionaries, the superscript T denoting matrix transposition;
5) using the feature vector f obtained in step 3) and the sketch-block dictionary D_s obtained in step 4), seeking its sparse representation according to the following formula to obtain the sparse representation coefficients w:
min_w ||f - D_s w||_2^2 + β||w||_1,
where β is the sparse representation penalty factor, taken as 0.05 in the experiments;
6) using the photo-block dictionary D_p obtained in step 4) and the sparse representation coefficients w obtained in 5), synthesizing a high-definition, detail-rich feature information block x_i according to x_i = D_p w, where i = 1, 2, ..., M and M is the total number of information blocks;
7) adding the high-definition, detail-rich feature information block x_i obtained in step 6) to the corresponding block of the initial pseudo-photo obtained in step 2), to enhance sharpness and detail and obtain a final pseudo-photo block;
8) repeating steps 5)-7) until the M final pseudo-photo blocks are obtained, and combining these final pseudo-photo blocks into the pseudo-photo corresponding to the test sketch.
Because the invention jointly learns the photo-block dictionary D_p and the sketch-block dictionary D_s, the sparse representation coefficients of a test image block and of the image block to be synthesized are identical; at the same time, because the sparse representation coefficients implicitly and adaptively select the neighbouring blocks most relevant to the test image block, the number of neighbours used to synthesize a target image block is not fixed. The generated images therefore have high sharpness and distinct details, overcoming the defect of existing methods in which selecting a fixed number of neighbours leads to indistinct details and low sharpness.
Description of drawings
Fig. 1 is a flowchart of the photo-to-sketch generation method based on sparse representation according to the invention;
Fig. 2 is a flowchart of the sketch-to-photo generation method based on sparse representation according to the invention;
Fig. 3 compares the pseudo-sketches generated by the invention and by two existing methods on the CUHK student database;
Fig. 4 compares the pseudo-photos generated by the invention and by one existing method on the CUHK student database;
Fig. 5 compares the pseudo-sketches generated by the invention and by two existing methods on the VIPS database;
Fig. 6 compares the pseudo-photos generated by the invention and by one existing method on the VIPS database.
Embodiment
The core idea of the invention is as follows: existing pseudo-sketch and pseudo-photo generation methods use a fixed number K of nearest-neighbour blocks when generating an image block, which causes the pseudo-sketches and pseudo-photos generated by existing face sketch-photo generation methods to have low sharpness and blurred details. The invention proposes a face sketch-photo generation method based on sparse representation that selects nearest-neighbour blocks adaptively, so that the generated pseudo-sketches and pseudo-photos have high sharpness and distinct details. Two examples are given below:
One, the method of generating a pseudo-sketch from a face photo based on sparse representation
With reference to Fig. 1, the concrete steps of the pseudo-sketch generation method of the invention are as follows:
The first step: divide the training sample set and the test sample set.
The set of sketch-photo pairs is divided into a training sample set and a test sample set, where the training sample set comprises a training sketch set and a training photo set and the test sample set is the test photo set; one photo from the test photo set is chosen as the test photo P.
In the second step, an initial pseudo-sketch is generated from the training sample set and the test photo P by the pseudo-nonlinear method or by the embedded hidden Markov model method.
The concrete methods are given in "Liu Q S, Tang X O. A Nonlinear Approach for Face Sketch Synthesis and Recognition. IEEE International Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20-26 Jun 2005" and "Gao X B, Zhong J J, Tao D C and Li X L. Local Face Sketch Synthesis Learning. Neurocomputing, 71(10-12): 1921-1930, 2008" respectively; the concrete steps are as follows:
(2a) The training sample set and the test sample set are divided into blocks;
(2b) for each test photo block of the test photo, k embedded hidden Markov models are trained with the sketch set and the photo set of the training samples;
(2c) k pseudo-sketch blocks are synthesized with the k trained embedded hidden Markov models, and the mean of these k pseudo-sketch blocks is taken as the initial pseudo-sketch block corresponding to the test photo block.
In the third step, the test photo and the initial pseudo-sketch are divided into blocks with 75% overlap, and features are extracted from the test photo blocks.
(3a) The initial pseudo-sketch obtained in the second step is divided into a set of pseudo-sketch blocks, keeping 75% overlap between blocks, where M is the total number of blocks obtained;
(3b) the test photo is divided into a photo-block set {P_1, P_2, ..., P_M}, where M is the total number of blocks in the test photo, the block size and overlap being kept identical to those of the initial pseudo-sketch in step (3a);
(3c) for each test photo block, the first- and second-order derivatives of the block in the horizontal and vertical directions are extracted, the linear operators used to extract the derivatives being f_1 = [-1, 0, 1] and f_2 = [1, 0, -2, 0, 1] respectively, and the extracted first- and second-order derivatives are concatenated into one vector as the feature of the block.
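A minimal sketch of steps (3a)-(3c), assuming grayscale images stored as 2-D numpy arrays; the block size of 16 pixels is an assumed parameter (the description fixes only the 75% overlap), while the derivative operators follow f_1 and f_2 above.

```python
import numpy as np

def extract_blocks(image, block_size=16, overlap=0.75):
    """Divide an image into square blocks with the given overlap ratio (75% here
    means a step of block_size/4). Returns the blocks and their top-left corners."""
    step = max(1, int(round(block_size * (1.0 - overlap))))
    H, W = image.shape
    blocks, positions = [], []
    for r in range(0, H - block_size + 1, step):
        for c in range(0, W - block_size + 1, step):
            blocks.append(image[r:r + block_size, c:c + block_size])
            positions.append((r, c))
    return blocks, positions

def block_feature(block):
    """Step (3c): first- and second-order derivatives in the horizontal and
    vertical directions, concatenated into one feature vector."""
    f1 = np.array([-1.0, 0.0, 1.0])             # first-derivative operator
    f2 = np.array([1.0, 0.0, -2.0, 0.0, 1.0])   # second-derivative operator
    feats = []
    for op in (f1, f2):
        # horizontal: filter each row; vertical: filter each column
        feats.append(np.apply_along_axis(lambda v: np.convolve(v, op, mode='same'), 1, block))
        feats.append(np.apply_along_axis(lambda v: np.convolve(v, op, mode='same'), 0, block))
    return np.concatenate([f.ravel() for f in feats])
```

With block_size = 16 and 75% overlap, the step between adjacent blocks is 4 pixels, so every interior pixel is covered by several blocks, which is what makes the averaging fusion of the eighth step possible.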
In the fourth step, the two dictionaries, the sketch-block dictionary D_s and the photo-block dictionary D_p, are learned from the block-divided training sample set.
4.1) 20000 blocks each are randomly selected from the photo-block set and the sketch-block set of the training samples, each photo block corresponding to a sketch block; the first- and second-order derivatives of each photo block are extracted as its features, the linear operators used for feature extraction being f_1 = [-1, 0, 1] and f_2 = [1, 0, -2, 0, 1]; the pixel values of each sketch block minus the block mean are taken as the sketch-block feature; the resulting sketch-block features and photo-block features are concatenated into one vector and normalized;
4.2) using the normalized combined features, the following formula is solved by alternating iteration to obtain the coupled dictionary D:
min_{D,C} ||I - DC||_F^2 + β||C||_1,
where I is the matrix formed by the normalized combined features, each column of the matrix being one combined feature, C is the sparse representation coefficient matrix to be solved, and β is the sparse representation penalty factor, taken as 0.05 in the experiments.
The alternating iteration proceeds as follows:
(4.2a) randomly initialize the matrix D;
(4.2b) substitute the matrix D into the objective to obtain an optimization problem in C alone, min_C ||I - DC||_F^2 + β||C||_1, and solve it to obtain the matrix C;
(4.2c) substitute the matrix C into the objective to obtain an optimization problem in D alone, min_D ||I - DC||_F^2, and solve it to obtain the matrix D;
(4.2d) iterate (4.2b) and (4.2c) until the objective no longer decreases or a preset number of iterations is reached, obtaining the matrices D and C;
4.3) the coupled dictionary D obtained in 4.2) is decomposed according to D = [D_s^T, D_p^T]^T into two sub-dictionaries, the sketch-block dictionary D_s and the photo-block dictionary D_p, and each column of the two dictionaries is normalized, the superscript T denoting matrix transposition.
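A simplified Python sketch of steps 4.2 and 4.3 under stated assumptions: the lasso sub-problem is solved by the sparse_code routine shown after the fifth step, the dictionary sub-problem by regularized least squares, and the number of atoms, the ridge term and the iteration count are illustrative choices not fixed by the description.

```python
import numpy as np

def learn_coupled_dictionary(I, n_atoms=512, beta=0.05, n_iter=20, seed=0):
    """Alternating minimization of ||I - D C||_F^2 + beta * ||C||_1.

    I : (d, n) matrix whose columns are the normalized combined
        sketch-block / photo-block features.
    Returns the coupled dictionary D (d, n_atoms) and the codes C (n_atoms, n).
    """
    rng = np.random.default_rng(seed)
    d, n = I.shape
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)          # (4.2a) random, column-normalized init

    for _ in range(n_iter):
        # (4.2b) fix D, solve the lasso for each column of C
        C = np.column_stack([sparse_code(I[:, j], D, beta) for j in range(n)])
        # (4.2c) fix C, solve for D in closed form (small ridge term for numerical stability)
        D = I @ C.T @ np.linalg.inv(C @ C.T + 1e-6 * np.eye(n_atoms))
        # re-normalize atoms (the description normalizes the sub-dictionaries in step 4.3;
        # keeping unit-norm atoms during the iteration also keeps the codes well scaled)
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    return D, C

def split_dictionary(D, d_s):
    """Step 4.3, assuming the sketch feature occupies the first d_s rows of each
    combined feature: split the coupled dictionary and normalize each column."""
    D_s, D_p = D[:d_s, :], D[d_s:, :]
    D_s = D_s / (np.linalg.norm(D_s, axis=0, keepdims=True) + 1e-12)
    D_p = D_p / (np.linalg.norm(D_p, axis=0, keepdims=True) + 1e-12)
    return D_s, D_p
```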
In the fifth step, for each block of the test photo, its first- and second-order derivatives in the horizontal and vertical directions are extracted as the feature vector, and its sparse representation coefficients w under the photo-block dictionary D_p are obtained according to:
min_w ||f - D_p w||_2^2 + λ||w||_1,
where λ = 0.1 is the penalty factor and f is the extracted photo-block feature.
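A self-contained sketch of the sparse coding problem min_w ||f - D_p w||_2^2 + λ||w||_1, solved here with plain iterative soft-thresholding (ISTA); any lasso solver could be substituted, and the iteration count and tolerance are illustrative.

```python
import numpy as np

def sparse_code(f, D, lam=0.1, n_iter=200, tol=1e-6):
    """Solve min_w ||f - D w||_2^2 + lam * ||w||_1 by ISTA (proximal gradient)."""
    w = np.zeros(D.shape[1])
    # Step size from the Lipschitz constant of the gradient of the quadratic term
    L = 2.0 * np.linalg.norm(D, 2) ** 2
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ w - f)
        w_new = w - grad / L
        # Soft-thresholding: proximal operator of the l1 penalty
        w_new = np.sign(w_new) * np.maximum(np.abs(w_new) - lam / L, 0.0)
        if np.linalg.norm(w_new - w) < tol:
            w = w_new
            break
        w = w_new
    return w
```

Because the l1 penalty drives most entries of w to zero, only the dictionary atoms (and hence training blocks) actually relevant to the test block receive non-zero weight, which is the adaptive neighbour selection the invention relies on.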
In the sixth step, the sketch-block dictionary D_s obtained in the fourth step and the sparse representation coefficients w obtained in the fifth step are used to synthesize the high-definition, detail-rich feature information:
x_i = D_s w,
where x_i (i = 1, ..., M) is the i-th high-definition detail feature information block and M is the number of such blocks.
In the seventh step, the high-definition detail feature information x_i obtained in the sixth step is added to the corresponding initial pseudo-sketch block obtained in the third step; the i-th final high-definition, detail-rich pseudo-sketch block is the i-th initial pseudo-sketch block plus x_i, for i = 1, ..., M, where M is the total number of pseudo-sketch blocks.
In the eighth step, the fifth to seventh steps are repeated to obtain the M high-definition, detail-rich pseudo-sketch blocks, and these pseudo-sketch blocks are fused into the complete high-definition pseudo-sketch corresponding to the test photo P; when the final pseudo-sketch blocks are fused, the pixel value in the overlapping region of two adjacent final pseudo-sketch blocks is taken as the mean of the pixel values of the two blocks.
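A sketch of the sixth to eighth steps under the same assumptions as the earlier snippets: the detail block D_s w is added to the initial pseudo-sketch block, and overlapping final blocks are fused by averaging the pixel values in the overlap, as described in the eighth step.

```python
import numpy as np

def final_sketch_block(initial_block, D_s, w):
    """Steps 6-7: synthesize the high-definition detail x = D_s w and add it to
    the corresponding (vectorized) initial pseudo-sketch block."""
    return initial_block.ravel() + D_s @ w

def fuse_blocks(final_blocks, positions, image_shape, block_size=16):
    """Step 8: place every final block back at its position and average the
    pixel values wherever adjacent blocks overlap."""
    acc = np.zeros(image_shape, dtype=float)
    count = np.zeros(image_shape, dtype=float)
    for block, (r, c) in zip(final_blocks, positions):
        patch = np.asarray(block, dtype=float).reshape(block_size, block_size)
        acc[r:r + block_size, c:c + block_size] += patch
        count[r:r + block_size, c:c + block_size] += 1.0
    return acc / np.maximum(count, 1.0)
```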
Two, the method of generating a pseudo-photo from a face sketch based on sparse representation
With reference to Fig. 2, the concrete steps of the pseudo-photo generation method of the invention are as follows:
Step 1: divide the training sample set and the test sample set.
The set of sketch-photo pairs is divided into a training sample set and a test sample set, where the training sample set comprises a training sketch set and a training photo set and the test sample set is the test sketch set; one sketch from the test sketch set is chosen as the test sketch S.
Step 2: generate an initial pseudo-photo from the training sample set and the test sketch S by the embedded hidden Markov model method.
The concrete method is given in "Xiao B, Gao X B, Li X L, Tao D C. A New Approach for Face Recognition by Sketches in Photos. Signal Processing, 89(8): 1531-1539, 2009"; the concrete steps are as follows:
(2a) The training sample set and the test sample set are divided into blocks;
(2b) for each test sketch block of the test sketch, k embedded hidden Markov models are trained with the sketch set and the photo set of the training samples;
(2c) k pseudo-photo blocks are synthesized with the k trained embedded hidden Markov models, and the mean of these k pseudo-photo blocks is taken as the initial pseudo-photo block corresponding to the test sketch block.
Step 3: divide the test sketch and the initial pseudo-photo into blocks with 75% overlap, and extract features from the test sketch blocks.
(3a) The initial pseudo-photo obtained in step 2 is divided into a set of pseudo-photo blocks, keeping 75% overlap between blocks, where M is the total number of blocks obtained;
(3b) the test sketch is divided into a sketch-block set {S_1, S_2, ..., S_M}, the block size and overlap being kept identical to those of the initial pseudo-photo in step (3a), where M is the total number of blocks in the test sketch;
(3c) for each test sketch block, the first- and second-order derivatives of the block in the horizontal and vertical directions are extracted, the linear operators used to extract the derivatives being f_1 = [-1, 0, 1] and f_2 = [1, 0, -2, 0, 1] respectively, and the extracted first- and second-order derivatives are concatenated into one vector as the feature of the block.
Step 4: learn the two dictionaries, the sketch-block dictionary D_s and the photo-block dictionary D_p, from the block-divided training sample set.
4.1) 20000 blocks each are randomly selected from the photo-block set and the sketch-block set of the training samples, each photo block corresponding to a sketch block; the first- and second-order derivatives of each sketch block are extracted as its features, the linear operators used for feature extraction being f_1 = [-1, 0, 1] and f_2 = [1, 0, -2, 0, 1]; the pixel values of each photo block minus the block mean are taken as the photo-block feature; the resulting sketch-block features and photo-block features are concatenated into one vector and normalized;
4.2) using the normalized combined features, the following formula is solved by alternating iteration to obtain the coupled dictionary D:
min_{D,C} ||I - DC||_F^2 + β||C||_1,
where I is the matrix formed by the normalized combined features, each column being one combined feature, C is the sparse representation coefficient matrix to be solved, and β is the sparse representation penalty factor, taken as 0.05 in the experiments.
The alternating iteration proceeds as follows:
(4.2a) randomly initialize the matrix D;
(4.2b) substitute the matrix D into the objective to obtain an optimization problem in C alone, min_C ||I - DC||_F^2 + β||C||_1, and solve it to obtain the matrix C;
(4.2c) substitute the matrix C into the objective to obtain an optimization problem in D alone, min_D ||I - DC||_F^2, and solve it to obtain the matrix D;
(4.2d) iterate (4.2b) and (4.2c) until the objective no longer decreases or a preset number of iterations is reached, obtaining the matrices D and C;
4.3) the coupled dictionary D obtained in 4.2) is decomposed according to D = [D_s^T, D_p^T]^T into two sub-dictionaries, the sketch-block dictionary D_s and the photo-block dictionary D_p, and each column of the two dictionaries is normalized, the superscript T denoting matrix transposition.
Step 5: for each block of the test sketch, its first- and second-order derivatives in the horizontal and vertical directions are extracted as the feature vector, and its sparse representation coefficients w under the sketch-block dictionary D_s are obtained according to:
min_w ||f - D_s w||_2^2 + λ||w||_1,
where λ = 0.1 is the penalty factor and f is the extracted sketch-block feature.
Step 6: the photo-block dictionary D_p obtained in step 4 and the sparse representation coefficients w obtained in step 5 are used to synthesize the high-definition, detail-rich feature information:
x_i = D_p w,
where x_i (i = 1, ..., M) is the i-th high-definition detail feature information block and M is the number of such blocks.
Step 7: the high-definition detail feature information x_i obtained in step 6 is added to the corresponding initial pseudo-photo block obtained in step 3; the i-th final high-definition, detail-rich pseudo-photo block is the i-th initial pseudo-photo block plus x_i, for i = 1, ..., M, where M is the total number of initial pseudo-photo blocks.
Step 8: steps 5 to 7 are repeated to obtain the M high-definition, detail-rich pseudo-photo blocks, and these blocks are fused into the complete high-definition pseudo-photo corresponding to the test sketch S; when the final pseudo-photo blocks are fused, the pixel value in the overlapping region of two adjacent final pseudo-photo blocks is taken as the mean of those two blocks.
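Because the two embodiments are symmetric, the sketch-to-photo direction can reuse the illustrative helpers introduced for the first embodiment with the roles of the two dictionaries swapped; all helper names below remain assumed placeholders.

```python
def sketch_to_photo(test_sketch, initial_pseudo_photo, D_s, D_p,
                    block_size=16, overlap=0.75, lam=0.1):
    """Steps 3-8 of the second embodiment: code each sketch-block feature over
    D_s and synthesize the photo detail with D_p (roles swapped w.r.t. Fig. 1)."""
    sketch_blocks, positions = extract_blocks(test_sketch, block_size, overlap)
    photo_blocks, _ = extract_blocks(initial_pseudo_photo, block_size, overlap)

    final_blocks = []
    for S_i, P0_i in zip(sketch_blocks, photo_blocks):
        f = block_feature(S_i)                  # step 3: derivative feature of the sketch block
        w = sparse_code(f, D_s, lam)            # step 5: sparse code under the sketch dictionary
        x_i = D_p @ w                           # step 6: high-definition photo detail
        final_blocks.append(P0_i.ravel() + x_i) # step 7: add detail to the initial pseudo-photo block
    return fuse_blocks(final_blocks, positions, initial_pseudo_photo.shape, block_size)
```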
The effect of the invention can be further illustrated by the following experiments:
The inventive method is compared with the pseudo-nonlinear method LLE and with the method based on the embedded hidden Markov model EHMM. In the experiments, pseudo-sketches and pseudo-photos are first generated with each of these methods, and the generated pseudo-sketches and pseudo-photos are then subjected to subjective quality evaluation and to face recognition experiments.
1. Experimental conditions and description
The software environment used to implement the invention is MATLAB 2009a, developed by MathWorks (USA), and the computer used is a 2 GHz personal computer. Notation: depending on the method used to produce the initial pseudo-sketch or initial pseudo-photo, the combination of the pseudo-nonlinear method with the inventive method is denoted SR-LLE, and the combination of the embedded hidden Markov model (for generating both the initial pseudo-sketch and the initial pseudo-photo) with the inventive method is denoted SR-EHMM.
The following experiments are all carried out on two databases: the CUHK student database released by the Chinese University of Hong Kong (CUHK), and the VIPS database newly built by the VIPS laboratory of Xidian University.
2. Experiment contents
Experiment 1: generation of pseudo-sketches and pseudo-photos
As described in embodiment 1 of the inventive method, pseudo-sketches are synthesized on the CUHK student database of the Chinese University of Hong Kong with the SR-LLE and SR-EHMM methods, and pseudo-sketches are also generated on the CUHK student database with the pseudo-nonlinear method LLE and the embedded hidden Markov model method EHMM. The experimental results are compared in Fig. 3, where Fig. 3(a) is the original photo, Fig. 3(b) is the pseudo-sketch generated by the pseudo-nonlinear method LLE, Fig. 3(c) is the pseudo-sketch generated by the embedded hidden Markov model method EHMM, Fig. 3(d) is the pseudo-sketch generated by SR-LLE (pseudo-nonlinear method combined with the inventive method), and Fig. 3(e) is the pseudo-sketch generated by SR-EHMM (embedded hidden Markov model method combined with the inventive method).
As described in embodiment 2 of the inventive method, pseudo-photos are synthesized on the CUHK student database with the SR-EHMM method, and pseudo-photos are also generated on the CUHK student database with the embedded hidden Markov model method EHMM. The experimental results are compared in Fig. 4, where Fig. 4(a) is the original sketch, Fig. 4(b) is the pseudo-photo generated by the embedded hidden Markov model method EHMM, and Fig. 4(c) is the pseudo-photo generated by SR-EHMM.
As described in embodiment 1 of the inventive method, pseudo-sketches are synthesized on the VIPS database with the SR-LLE and SR-EHMM methods, and pseudo-sketches are also generated on the VIPS database with the pseudo-nonlinear method LLE and the embedded hidden Markov model method EHMM. The experimental results are compared in Fig. 5, where Fig. 5(a) is the original photo, Fig. 5(b) is the pseudo-sketch generated by the pseudo-nonlinear method LLE, Fig. 5(c) is the pseudo-sketch generated by the embedded hidden Markov model method EHMM, Fig. 5(d) is the pseudo-sketch generated by SR-LLE, and Fig. 5(e) is the pseudo-sketch generated by SR-EHMM.
As described in embodiment 2 of the inventive method, pseudo-photos are synthesized on the VIPS database with the SR-EHMM method, and pseudo-photos are also generated on the VIPS database with the embedded hidden Markov model method EHMM. The experimental results are compared in Fig. 6, where Fig. 6(a) is the original sketch, Fig. 6(b) is the pseudo-photo generated by the embedded hidden Markov model method EHMM, and Fig. 6(c) is the pseudo-photo generated by SR-EHMM.
The results of experiment 1 show that, because the sparse representation coefficients implicitly and adaptively select the neighbouring blocks most relevant to the test image block, the number of neighbours used to synthesize a target image block is not fixed, so the generated images have high sharpness and distinct details, overcoming the defect of existing methods in which selecting a fixed number of neighbours leads to indistinct details and low sharpness.
Experiment 2: subjective image quality evaluation
In this experiment, 20 volunteers were invited to score the experimental results, and the effectiveness of the algorithms is described by the mean opinion score (MOS) computed from these scores. Taking pseudo-sketches as an example, the original sketch is used as the reference image, and the pseudo-sketches generated by the LLE, EHMM, SR-LLE and SR-EHMM methods are used as the images to be evaluated. For each image to be evaluated, the mean of the scores given by the 20 volunteers is first computed as the final score of that sketch; the effectiveness of an algorithm is then measured by the mean of the final scores of all pseudo-sketches generated by that algorithm. That is, for one image, the MOS value is computed as
MOS(l) = (1/20) Σ_{i=1}^{20} A(i, l),
where A(i, l) denotes the score given by the i-th volunteer to the l-th image. The final experimental results are shown in Table 1.
Table 1. Subjective image quality evaluation values of the different algorithms
In Table 1, "No" means that in the literature the pseudo-nonlinear method is used only to generate pseudo-sketches, not pseudo-photos, so SR-LLE is used only to generate pseudo-sketches. As can be seen from Table 1, the subjective image quality evaluation values of the inventive method are clearly better than those of the existing methods.
Experiment 3: face recognition based on sparse representation
As described in the background art, in practical applications such as criminal investigation and anti-terrorism manhunts, instance-based face recognition usually involves two situations: recognition against a face photo database using a sketch, and recognition against a face sketch database using a photo. For the former, the test sketch can first be converted into a pseudo-photo and recognition performed afterwards, or the face photo database can be converted into a pseudo-sketch database and matching performed there. Likewise, for the latter, the photo can be converted into a pseudo-sketch, or the sketch database can be converted into a pseudo-photo database, before recognition. There are thus four recognition modes, as shown in Table 2.
Table 2. The four face recognition modes
The recognition method used in this experiment is the robust face recognition algorithm based on sparse representation. The CUHK student database contains 188 face photos, and for each photo there is one sketch drawn by an artist. The VIPS database contains 200 face photos, and each face photo corresponds to 5 sketches drawn by 5 artists. To examine the effect of multiple images on recognition, the experiments on the VIPS database use 1, 3 and 5 training images per face to build the dictionary. The experimental results are shown in Tables 3, 4, 5, 6 and 7.
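For reference, a minimal sketch of sparse-representation-based classification of the kind referred to above is given below, reusing the sparse_code routine from the fifth step; the column-per-training-image layout and the parameter value are assumptions, not details given in this description.

```python
import numpy as np

def src_classify(y, A, labels, lam=0.01):
    """Sparse-representation-based classification: code the probe y over the
    training matrix A (columns = training images), then assign the class whose
    atoms reconstruct y with the smallest residual.

    y      : (d,) probe image feature (e.g., a pseudo-photo or pseudo-sketch)
    A      : (d, n) column-normalized training images
    labels : (n,) class label of each column of A
    """
    w = sparse_code(y, A, lam)          # l1-regularized coding of the probe
    labels = np.asarray(labels)
    best_label, best_res = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        residual = np.linalg.norm(y - A[:, mask] @ w[mask])   # class-wise reconstruction error
        if residual < best_res:
            best_label, best_res = c, residual
    return best_label
```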
Table 3. Face recognition rate (%) on the CUHK student database
Table 4. Face recognition rate (%) on the VIPS database in photo-database mode
Table 5. Face recognition rate (%) on the VIPS database in sketch-database mode
Table 6. Face recognition rate (%) on the VIPS database in pseudo-sketch-database mode
Table 7. Face recognition rate (%) on the VIPS database in pseudo-photo-database mode
From Tables 4, 5, 6 and 7 it can be seen that the method proposed by the invention improves the face recognition rate over the original methods. On the VIPS database in particular, the recognition rate of the invention is clearly higher than that of the existing pseudo-sketch and pseudo-photo generation methods: the recognition rate of SR-LLE is markedly higher than that of the LLE method, and the recognition rate of SR-EHMM is markedly higher than that of EHMM, which demonstrates the effectiveness of the invention.
Claims (3)
1. A face photo-to-sketch generation method based on sparse representation, comprising the steps of:
(1) dividing a set of sketch-photo pairs into a training sample set and a test sample set, and choosing a test photo P from the test sample set;
(2) utilizing an existing pseudo-sketch generation method to generate, from the training sample set and the test photo, an initial pseudo-sketch corresponding to the test photo;
(3) dividing the initial pseudo-sketch and the test photo P into blocks of identical size and identical overlap, where P = {P_1, P_2, ..., P_M} and M is the total number of blocks, and extracting the feature vector f of each block of the test photo;
(4) jointly learning a sketch-block dictionary D_s and a photo-block dictionary D_p from the training sample set:
(4a) randomly selecting 20000 blocks each from the photo-block set and the sketch-block set of the training samples, each photo block corresponding to a sketch block; extracting the first- and second-order derivatives of each photo block as its features; taking the pixel values of each sketch block minus the block mean as its features; concatenating the resulting sketch-block features and photo-block features into one column vector and normalizing it;
(4b) using the normalized combined features, solving the following formula by alternating iteration to obtain the coupled dictionary D:
min_{D,C} ||I - DC||_F^2 + β||C||_1,
where I is the matrix formed by the normalized combined features, each column being one normalized combined feature, C is the sparse representation coefficient matrix to be solved, and β is the sparse representation penalty factor, taken as 0.05 in the experiments;
(4c) decomposing the coupled dictionary D obtained in (4b) according to D = [D_s^T, D_p^T]^T into two dictionaries, the sketch-block dictionary D_s and the photo-block dictionary D_p, and normalizing each column of the two dictionaries, the superscript T denoting matrix transposition;
(5) using the feature vector f obtained in step (3) and the photo-block dictionary D_p obtained in step (4), seeking its sparse representation according to the following formula to obtain the sparse representation coefficients w:
min_w ||f - D_p w||_2^2 + β||w||_1,
where β is the sparse representation penalty factor, taken as 0.05 in the experiments;
(6) using the sketch-block dictionary D_s obtained in step (4) and the sparse representation coefficients w obtained in (5), synthesizing a high-definition, detail-rich feature information block x_i according to x_i = D_s w, where i = 1, 2, ..., M and M is the total number of information blocks;
(7) adding the high-definition, detail-rich feature information block x_i obtained in step (6) to the corresponding block of the initial pseudo-sketch obtained in step (2), to enhance sharpness and detail and obtain a final pseudo-sketch block;
(8) repeating steps (5)-(7) until the M final pseudo-sketch blocks are obtained, and combining these final pseudo-sketch blocks into the pseudo-sketch corresponding to the test photo.
2. The face photo-to-sketch generation method based on sparse representation according to claim 1, wherein the alternating iteration described in step (4b) for solving the coupled dictionary D comprises the following concrete steps:
(2a) randomly initialize the matrix D;
(2b) substitute the matrix D into the objective to obtain an optimization problem in C alone, min_C ||I - DC||_F^2 + β||C||_1, and solve it to obtain the matrix C;
(2c) substitute the matrix C into the objective to obtain an optimization problem in D alone, min_D ||I - DC||_F^2, and solve it to obtain the matrix D;
(2d) iterate (2b) and (2c) until the objective no longer decreases or a preset number of iterations is reached, obtaining the matrices D and C.
3. A face sketch-to-photo generation method based on sparse representation, comprising the steps of:
1) dividing a set of sketch-photo pairs into a training sample set and a test sample set, and choosing a test sketch S from the test sample set;
2) utilizing an existing pseudo-photo generation method to generate, from the training sample set and the test sketch, an initial pseudo-photo corresponding to the test sketch;
3) dividing the initial pseudo-photo and the test sketch S into blocks of identical size and identical overlap, where S = {S_1, S_2, ..., S_M} and M is the total number of blocks, and extracting the feature vector f of each block of the test sketch;
4) jointly learning a sketch-block dictionary D_s and a photo-block dictionary D_p from the training sample set:
4a) randomly selecting 20000 blocks each from the photo-block set and the sketch-block set of the training samples, each photo block corresponding to a sketch block; extracting the first- and second-order derivatives of each sketch block as its features; taking the pixel values of each photo block minus the block mean as its features; concatenating the resulting sketch-block features and photo-block features into one column vector and normalizing it;
4b) using the normalized combined features, solving the following formula by alternating iteration to obtain the coupled dictionary D:
min_{D,C} ||I - DC||_F^2 + β||C||_1,
where I is the matrix formed by the normalized combined features, each column being one normalized combined feature, C is the sparse representation coefficient matrix to be solved, and β is the sparse representation penalty factor, taken as 0.05 in the experiments;
4c) decomposing the coupled dictionary D obtained in 4b) according to D = [D_s^T, D_p^T]^T into two dictionaries, the sketch-block dictionary D_s and the photo-block dictionary D_p, and normalizing each column of the two dictionaries, the superscript T denoting matrix transposition;
5) using the feature vector f obtained in step 3) and the sketch-block dictionary D_s obtained in step 4), seeking its sparse representation according to the following formula to obtain the sparse representation coefficients w:
min_w ||f - D_s w||_2^2 + β||w||_1,
where β is the sparse representation penalty factor, taken as 0.05 in the experiments;
6) using the photo-block dictionary D_p obtained in step 4) and the sparse representation coefficients w obtained in 5), synthesizing a high-definition, detail-rich feature information block x_i according to x_i = D_p w, where i = 1, 2, ..., M and M is the total number of information blocks;
7) adding the high-definition, detail-rich feature information block x_i obtained in step 6) to the corresponding block of the initial pseudo-photo obtained in step 2), to enhance sharpness and detail and obtain a final pseudo-photo block;
8) repeating steps 5)-7) until the M final pseudo-photo blocks are obtained, and combining these final pseudo-photo blocks into the pseudo-photo corresponding to the test sketch.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2010102893309A CN101958000B (en) | 2010-09-24 | 2010-09-24 | Face image-picture generating method based on sparse representation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101958000A true CN101958000A (en) | 2011-01-26 |
CN101958000B CN101958000B (en) | 2012-08-15 |
Family
ID=43485317
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2010102893309A Expired - Fee Related CN101958000B (en) | 2010-09-24 | 2010-09-24 | Face image-picture generating method based on sparse representation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101958000B (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103902991A (en) * | 2014-04-24 | 2014-07-02 | 西安电子科技大学 | Face recognition method based on forensic sketches |
CN103984954A (en) * | 2014-04-23 | 2014-08-13 | 西安电子科技大学宁波信息技术研究院 | Image synthesis method based on multi-feature fusion |
CN104517274A (en) * | 2014-12-25 | 2015-04-15 | 西安电子科技大学 | Face portrait synthesis method based on greedy search |
CN104700380A (en) * | 2015-03-12 | 2015-06-10 | 陕西炬云信息科技有限公司 | Face portrait compositing method based on single photos and portrait pairs |
CN105608451A (en) * | 2016-03-14 | 2016-05-25 | 西安电子科技大学 | Face sketch generation method based on subspace ridge regression |
CN105869134A (en) * | 2016-03-24 | 2016-08-17 | 西安电子科技大学 | Directional diagram model based human face sketch synthesis method |
CN106056561A (en) * | 2016-04-12 | 2016-10-26 | 西安电子科技大学 | Face portrait compositing method based on Bayesian inference |
CN106778811A (en) * | 2016-11-21 | 2017-05-31 | 西安电子科技大学 | A kind of image dictionary generation method, image processing method and device |
CN103793695B (en) * | 2014-02-10 | 2017-11-28 | 天津大学 | A kind of method of the sub- dictionary joint training of multiple feature spaces for recognition of face |
CN109145135A (en) * | 2018-08-03 | 2019-01-04 | 厦门大学 | A kind of human face portrait aging method based on principal component analysis |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080170623A1 (en) * | 2005-04-04 | 2008-07-17 | Technion Resaerch And Development Foundation Ltd. | System and Method For Designing of Dictionaries For Sparse Representation |
US20100046829A1 (en) * | 2008-08-21 | 2010-02-25 | Adobe Systems Incorporated | Image stylization using sparse representation |
CN101571950A (en) * | 2009-03-25 | 2009-11-04 | 湖南大学 | Image restoring method based on isotropic diffusion and sparse representation |
CN101640541A (en) * | 2009-09-04 | 2010-02-03 | 西安电子科技大学 | Reconstruction method of sparse signal |
CN101719142A (en) * | 2009-12-10 | 2010-06-02 | 湖南大学 | Method for detecting picture characters by sparse representation based on classifying dictionary |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103793695B (en) * | 2014-02-10 | 2017-11-28 | 天津大学 | A kind of method of the sub- dictionary joint training of multiple feature spaces for recognition of face |
CN103984954A (en) * | 2014-04-23 | 2014-08-13 | 西安电子科技大学宁波信息技术研究院 | Image synthesis method based on multi-feature fusion |
CN103902991A (en) * | 2014-04-24 | 2014-07-02 | 西安电子科技大学 | Face recognition method based on forensic sketches |
CN104517274B (en) * | 2014-12-25 | 2017-06-16 | 西安电子科技大学 | Human face portrait synthetic method based on greedy search |
CN104517274A (en) * | 2014-12-25 | 2015-04-15 | 西安电子科技大学 | Face portrait synthesis method based on greedy search |
CN104700380B (en) * | 2015-03-12 | 2017-08-15 | 陕西炬云信息科技有限公司 | Based on single photo with portrait to human face portrait synthetic method |
CN104700380A (en) * | 2015-03-12 | 2015-06-10 | 陕西炬云信息科技有限公司 | Face portrait compositing method based on single photos and portrait pairs |
CN105608451A (en) * | 2016-03-14 | 2016-05-25 | 西安电子科技大学 | Face sketch generation method based on subspace ridge regression |
CN105608451B (en) * | 2016-03-14 | 2019-11-26 | 西安电子科技大学 | Human face portrait generation method based on subspace ridge regression |
CN105869134A (en) * | 2016-03-24 | 2016-08-17 | 西安电子科技大学 | Directional diagram model based human face sketch synthesis method |
CN105869134B (en) * | 2016-03-24 | 2018-11-30 | 西安电子科技大学 | Human face portrait synthetic method based on direction graph model |
CN106056561A (en) * | 2016-04-12 | 2016-10-26 | 西安电子科技大学 | Face portrait compositing method based on Bayesian inference |
CN106778811A (en) * | 2016-11-21 | 2017-05-31 | 西安电子科技大学 | A kind of image dictionary generation method, image processing method and device |
CN106778811B (en) * | 2016-11-21 | 2020-12-25 | 西安电子科技大学 | Image dictionary generation method, image processing method and device |
CN109145135A (en) * | 2018-08-03 | 2019-01-04 | 厦门大学 | A kind of human face portrait aging method based on principal component analysis |
Also Published As
Publication number | Publication date |
---|---|
CN101958000B (en) | 2012-08-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101958000B (en) | Face image-picture generating method based on sparse representation | |
Nhan Duong et al. | Temporal non-volume preserving approach to facial age-progression and age-invariant face recognition | |
Yang et al. | Learning face age progression: A pyramid architecture of gans | |
CN110348319B (en) | Face anti-counterfeiting method based on face depth information and edge image fusion | |
CN103116763B (en) | A kind of living body faces detection method based on hsv color Spatial Statistical Character | |
CN104700087B (en) | The method for mutually conversing of visible ray and near-infrared facial image | |
CN102103690A (en) | Method for automatically portioning hair area | |
CN104239856B (en) | Face identification method based on Gabor characteristic and self adaptable linear regression | |
CN105447532A (en) | Identity authentication method and device | |
CN113255557B (en) | Deep learning-based video crowd emotion analysis method and system | |
CN104715266B (en) | The image characteristic extracting method being combined based on SRC DP with LDA | |
CN114329034A (en) | Image text matching discrimination method and system based on fine-grained semantic feature difference | |
CN103902991A (en) | Face recognition method based on forensic sketches | |
CN111008570B (en) | Video understanding method based on compression-excitation pseudo-three-dimensional network | |
Xie et al. | Hand detection using robust color correction and gaussian mixture model | |
CN103745242A (en) | Cross-equipment biometric feature recognition method | |
Fu et al. | Personality trait detection based on ASM localization and deep learning | |
Qian et al. | Exploring deep gradient information for biometric image feature representation | |
CN105844605A (en) | Face image synthesis method based on adaptive expression | |
CN108154107B (en) | Method for determining scene category to which remote sensing image belongs | |
Li et al. | Intelligent terminal face spoofing detection algorithm based on deep belief network | |
Shi et al. | Transformer-Based adversarial network for semi-supervised face sketch synthesis | |
Shylaja et al. | Illumination Invariant Novel Approaches for Face Recognition | |
Peng et al. | End-to-end efficient cascade license plate recognition system in unconstrained scenarios | |
CN116109877B (en) | Combined zero-sample image classification method, system, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20120815; Termination date: 20170924 |