CN103984954B - Image combining method based on multi-feature fusion - Google Patents
- Publication number: CN103984954B
- Application number: CN201410165469.0A
- Authority
- CN
- China
- Prior art keywords
- block
- portrait
- photo
- training
- test
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The present invention relates to an image synthesis method based on multi-feature fusion, for synthesizing a photo into a portrait or a portrait into a photo. The implementation steps are: first, partition the database sample set; after filtering all images, divide the images into blocks and extract block features, obtaining a training portrait block dictionary and a training photo block dictionary; using these two dictionaries, find the nearest-neighbour blocks for each input test photo block or test portrait block; establish a Markov network model to obtain the portrait blocks or photo blocks to be synthesized; finally, fuse all blocks to be synthesized to obtain the synthesized portrait or synthesized photo. Compared with conventional methods, the synthesis results of the present invention have higher definition and fewer structural omissions, and can be used for face retrieval and recognition.
Description
Technical field
The invention belongs to the technical field of image processing, and further relates to pattern recognition and computer vision. It provides an image synthesis method based on multi-feature fusion, which can be used for face retrieval and recognition in criminal investigation and case detection.
Background technology
With the development of science and technology, how to accurately identify and authenticate a person's identity has become one of the problems in urgent need of solution. Among identification techniques, face recognition is direct, friendly, and convenient, and has therefore received extensive research and application. One important application of face recognition technology is assisting the police in solving cases. In many circumstances, however, a photo of the suspect is very hard to obtain; the police instead draw a portrait of the suspect from the descriptions of eyewitnesses, and then retrieve and recognize it against the police photo database. Because face photos and portraits differ considerably in imaging mechanism, shape, and texture, directly applying existing face recognition methods gives poor results. For this problem, one solution is to convert the photos in the police face database into synthesized portraits and then recognize the portrait in question against the synthesized-portrait database; another is to convert the portrait in question into a synthesized photo and recognize it against the police photo database. Current face portrait-photo synthesis is typically based on three kinds of methods: first, portrait-photo synthesis based on local linear approximation; second, portrait-photo synthesis based on Markov network models; third, portrait-photo synthesis based on sparse representation.
Liu et al., in the document "Q. S. Liu and X. O. Tang, A nonlinear approach for face sketch synthesis and recognition, in Proc. IEEE Int. Conference on Computer Vision, San Diego, CA, pp. 1005-1010, 20-26 Jun. 2005", proposed a method that approximates a global nonlinearity by local linearity to convert photos into synthesized portraits. The method proceeds as follows: first divide the portrait-photo pairs in the training set and the photo to be transformed into image blocks of equal size and equal overlap; for each block of the photo to be transformed, find its K nearest-neighbour photo blocks among the training photo blocks; then take the weighted combination of the portrait blocks corresponding to those K photo blocks as the portrait block to be synthesized; finally, fuse all portrait blocks to be synthesized into the synthesized portrait. The shortcoming of this method is that, because the number of neighbours is fixed, the synthesis results suffer from low definition and blurred details.
Wang et al., in the document "X. Wang and X. Tang, Face Photo-Sketch Synthesis and Recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(11), 1955-1967, 2009", proposed a face portrait-photo synthesis method based on a Markov network model. The method proceeds as follows: first divide the sketch-photo pairs in the training set and the test photo into blocks; then, according to the relations between the test photo blocks and the training photo blocks, and between the portrait blocks at adjacent positions, establish a Markov network model that selects, for each test photo block, one optimal training portrait block as the portrait block to be synthesized; finally, fuse all portrait blocks to be synthesized into the synthesized portrait. The shortcoming of this method is that, because only one training portrait block is selected at each block position, the synthesis results suffer from blocking artifacts and missing details.
The patent "Sketch-photo generation method based on sparse representation" applied for by Gao et al. (application number: 201010289330.9, filing date: 2010-09-24, publication number: CN 101958000 A) discloses a face portrait-photo synthesis method based on sparse representation. The method proceeds as follows: first generate an initial estimate of the synthesized portrait or synthesized photo using an existing method; then synthesize the detail information using sparse representation; finally merge the initial estimate with the detail information. The shortcoming of this method is that it ignores the relations between image blocks at adjacent positions, so the synthesis results suffer from blur and blocking artifacts.
Summary of the invention
The technical problem to be solved by the invention is to overcome the shortcomings of the above existing methods by proposing an image synthesis method based on multi-feature fusion that improves the image quality of the synthesized portrait or synthesized photo.
The technical scheme adopted by the present invention to solve the above technical problem is an image synthesis method based on multi-feature fusion, for synthesizing a photo into a portrait or a portrait into a photo, characterized as follows:
When a photo is to be synthesized into a portrait, the method comprises the following steps:
(1a) Select M training portrait-photo pairs as the training basis: the M training portraits form the training portrait sample set, and the M training photos corresponding to them form the training photo sample set; in addition, choose one test photo P.
(2a) Apply difference-of-Gaussians filtering, center-surround normalization filtering, and Gaussian filtering to each of the M training photos in the training photo sample set and to the test photo P. This yields M first-class filtered photos (the training photos after difference-of-Gaussians filtering), M second-class filtered photos (the training photos after center-surround normalization filtering), M third-class filtered photos (the training photos after Gaussian filtering), one fourth filtered photo (the test photo after difference-of-Gaussians filtering), one fifth filtered photo (the test photo after center-surround normalization filtering), and one sixth filtered photo (the test photo after Gaussian filtering).
(3a) Combine the M training photos in the training photo sample set with the M first-class, M second-class, and M third-class filtered photos into a photo set containing 4M photos. Divide each photo in this set into N blocks of equal size and equal overlap; these blocks are called the original training photo blocks, and there are 4M*N of them. Then extract SURF features and LBP features from each original training photo block: the blocks described by their SURF features are called first-class training photo blocks (4M*N in number), and the blocks described by their LBP features are called second-class training photo blocks (also 4M*N in number). The original, first-class, and second-class training photo blocks together give 4M*N*3 = 12*M*N training photo blocks, which constitute the training photo block dictionary, denoted Dp.
(4a) Divide each of the M training portraits in the training portrait sample set into N blocks of equal size and equal overlap, giving M*N training portrait blocks, which constitute the training portrait block dictionary, denoted Ds.
(5a) Divide each of the four photos (the test photo and the fourth, fifth, and sixth filtered photos) into N blocks of equal size and equal overlap; these blocks are called the original test photo blocks, and there are 4*N of them. Extract SURF features and LBP features from each original test photo block: the blocks described by their SURF features are called first-class test photo blocks (4*N in number), and the blocks described by their LBP features are called second-class test photo blocks (also 4*N in number). The original, first-class, and second-class test photo blocks together give 4*N*3 = 12*N test photo blocks, which constitute the test photo block dictionary, denoted Dt.
(6a) For each original test photo block in the test photo block dictionary Dt, concatenate it column-wise with its corresponding first-class and second-class test photo blocks into a single vector; the N vectors so obtained form the original test photo block vector dictionary Dtv. Likewise, concatenate each original training photo block in the training photo block dictionary Dp column-wise with its corresponding first-class and second-class training photo blocks into a single vector; the M*N vectors so obtained form the original training photo block vector dictionary Dpv.
(7a) For each vector in the original test photo block vector dictionary Dtv, compute its Euclidean distance to every vector in the original training photo block vector dictionary Dpv, giving M*N distance values; select the K smallest of them and find the K corresponding vectors in Dpv. For each of these K vectors, retrieve its original, first-class, and second-class training photo blocks. The K original training photo blocks so obtained are called first-class candidate photo blocks, and the K original training portrait blocks corresponding to them are called candidate portrait blocks; the K first-class training photo blocks are called second-class candidate photo blocks, and the K second-class training photo blocks are called third-class candidate photo blocks.
(8a) Using the first-class, second-class, and third-class candidate photo blocks, the candidate portrait blocks, and the original, first-class, and second-class test photo blocks, solve a Markov network model by the method of alternating iteration, obtaining the weights μ1, μ2, μ3 of the first-class, second-class, and third-class candidate photo blocks respectively, together with the weight w of the candidate portrait blocks.
(9a) Multiply the candidate portrait blocks obtained in step (7a) by the weight w obtained in step (8a) to obtain a synthesized portrait block.
(10a) Repeat steps (8a)-(9a) until N synthesized portrait blocks are obtained; combining these N blocks yields the synthesized portrait corresponding to the test photo P.
When a portrait is to be synthesized into a photo, the method comprises the following steps:
(1b) Select M training portrait-photo pairs as the training basis: the M training portraits form the training portrait sample set, and the M training photos corresponding to them form the training photo sample set; in addition, choose one test portrait S.
(2b) Apply difference-of-Gaussians filtering, center-surround normalization filtering, and Gaussian filtering to each of the M training portraits in the training portrait sample set and to the test portrait S. This yields M first-class filtered portraits (the training portraits after difference-of-Gaussians filtering), M second-class filtered portraits (the training portraits after center-surround normalization filtering), M third-class filtered portraits (the training portraits after Gaussian filtering), one fourth filtered portrait (the test portrait after difference-of-Gaussians filtering), one fifth filtered portrait (the test portrait after center-surround normalization filtering), and one sixth filtered portrait (the test portrait after Gaussian filtering).
(3b) Combine the M training portraits in the training portrait sample set with the M first-class, M second-class, and M third-class filtered portraits into a portrait set containing 4M portraits. Divide each portrait in this set into N blocks of equal size and equal overlap; these blocks are called the original training portrait blocks, and there are 4M*N of them. Then extract SURF features and LBP features from each original training portrait block: the blocks described by their SURF features are called first-class training portrait blocks (4M*N in number), and the blocks described by their LBP features are called second-class training portrait blocks (also 4M*N in number). The original, first-class, and second-class training portrait blocks together give 4M*N*3 = 12*M*N training portrait blocks, which constitute the training portrait block dictionary, denoted Ds'.
(4b) Divide each of the M training photos in the training photo sample set into N blocks of equal size and equal overlap, giving M*N training photo blocks, which constitute the training photo block dictionary, denoted Dp'.
(5b) Divide each of the four portraits (the test portrait and the fourth, fifth, and sixth filtered portraits) into N blocks of equal size and equal overlap; these blocks are called the original test portrait blocks, and there are 4*N of them. Extract SURF features and LBP features from each original test portrait block: the blocks described by their SURF features are called first-class test portrait blocks (4*N in number), and the blocks described by their LBP features are called second-class test portrait blocks (also 4*N in number). The original, first-class, and second-class test portrait blocks together give 4*N*3 = 12*N test portrait blocks, which constitute the test portrait block dictionary, denoted Dt'.
(6b) For each original test portrait block in the test portrait block dictionary Dt', concatenate it column-wise with its corresponding first-class and second-class test portrait blocks into a single vector; the N vectors so obtained form the original test portrait block vector dictionary Dtv'. Likewise, concatenate each original training portrait block in the training portrait block dictionary Ds' column-wise with its corresponding first-class and second-class training portrait blocks into a single vector; the M*N vectors so obtained form the original training portrait block vector dictionary Dsv'.
(7b) For each vector in the original test portrait block vector dictionary Dtv', compute its Euclidean distance to every vector in the original training portrait block vector dictionary Dsv', giving M*N distance values; select the K smallest of them and find the K corresponding vectors in Dsv'. For each of these K vectors, retrieve its original, first-class, and second-class training portrait blocks. The K original training portrait blocks so obtained are called first-class candidate portrait blocks, and the K original training photo blocks corresponding to them are called candidate photo blocks; the K first-class training portrait blocks are called second-class candidate portrait blocks, and the K second-class training portrait blocks are called third-class candidate portrait blocks.
(8b) Using the first-class, second-class, and third-class candidate portrait blocks, the candidate photo blocks, and the original, first-class, and second-class test portrait blocks, solve a Markov network model by the method of alternating iteration, obtaining the weights μ1', μ2', μ3' of the first-class, second-class, and third-class candidate portrait blocks respectively, together with the weight w' of the candidate photo blocks.
(9b) Multiply the candidate photo blocks obtained in step (7b) by the weight w' obtained in step (8b) to obtain a synthesized photo block.
(10b) Repeat steps (8b)-(9b) until N synthesized photo blocks are obtained; combining these N blocks yields the synthesized photo corresponding to the test portrait S.
Compared with the prior art, the advantages of the invention are:
First, the present invention takes into account the relations between image blocks at adjacent positions, while selecting K nearest-neighbour image blocks at each block position for reconstruction, so that the synthesis results are clearer.
Second, the present invention uses multi-feature fusion to measure the distance between two image blocks, which improves the quality of the synthesis results and effectively avoids the problem of missing structure.
Brief description of the drawings
Fig. 1 is the flow chart of the photo-to-portrait synthesis method based on multi-feature fusion of the present invention;
Fig. 2 is the flow chart of the portrait-to-photo synthesis method based on multi-feature fusion of the present invention;
Fig. 3 shows the comparison of the synthesized portraits of the present invention with those of two existing methods on the CUHK student database;
Fig. 4 shows the comparison of the synthesized photos of the present invention with those of two existing methods on the CUHK student database.
Specific embodiment
The present invention is described in further detail below with reference to the accompanying drawings and embodiments.
The image synthesis method based on multi-feature fusion provided by the present invention can synthesize a portrait from a photo, or a photo from a portrait. When a photo is to be synthesized into a portrait, the method comprises the following steps, as shown in Fig. 1:
(1a) Select M training portrait-photo pairs as the training basis: the M training portraits form the training portrait sample set, and the M training photos corresponding to them form the training photo sample set; in addition, choose one test photo P.
(2a) Apply difference-of-Gaussians filtering, center-surround normalization filtering, and Gaussian filtering to each of the M training photos in the training photo sample set and to the test photo P. This yields M first-class filtered photos (the training photos after difference-of-Gaussians filtering), M second-class filtered photos (the training photos after center-surround normalization filtering), M third-class filtered photos (the training photos after Gaussian filtering), one fourth filtered photo (the test photo after difference-of-Gaussians filtering), one fifth filtered photo (the test photo after center-surround normalization filtering), and one sixth filtered photo (the test photo after Gaussian filtering). In this step, difference-of-Gaussians filtering, center-surround normalization filtering, and Gaussian filtering are all existing conventional techniques.
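The three filters of step (2a) are standard operations. A minimal sketch in Python follows; the sigma values and the exact form of the center-surround normalization are illustrative assumptions, since the patent does not fix them here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def three_filters(img, sigma1=1.0, sigma2=2.0, eps=1e-6):
    """Apply the three filters of step (2a) to a grayscale image.

    Returns (DoG-filtered, center-surround-normalized, Gaussian-filtered).
    sigma1/sigma2 and the normalization form are illustrative, not from the patent.
    """
    img = img.astype(np.float64)
    g1 = gaussian_filter(img, sigma1)
    g2 = gaussian_filter(img, sigma2)
    dog = g1 - g2  # difference-of-Gaussians: fine scale minus coarse scale
    # Center-surround normalization: divide each pixel by local surround energy.
    surround = np.sqrt(gaussian_filter(img ** 2, sigma2)) + eps
    csn = img / surround
    return dog, csn, g1
```

Each training photo and the test photo would be passed through this once, producing the first- through sixth-class filtered images described above.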
(3a) Combine the M training photos in the training photo sample set with the M first-class, M second-class, and M third-class filtered photos into a photo set containing 4M photos. Divide each photo in this set into N blocks of equal size and equal overlap; these blocks are called the original training photo blocks, and there are 4M*N of them. Then extract SURF features and LBP features from each original training photo block: the blocks described by their SURF features are called first-class training photo blocks (4M*N in number), and the blocks described by their LBP features are called second-class training photo blocks (also 4M*N in number). The original, first-class, and second-class training photo blocks together give 4M*N*3 = 12*M*N training photo blocks, which constitute the training photo block dictionary, denoted Dp.
In this step, the extraction of SURF features and of LBP features are routine techniques; see, respectively, "H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool. SURF: Speeded Up Robust Features. Computer Vision and Image Understanding, 110(3):346-359, 2008" and "T. Ojala, M. Pietikäinen, and T. Mäenpää. Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7):971-987, 2002".
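For concreteness, a minimal version of the LBP descriptor cited above can be sketched as follows; the patent does not specify which LBP variant is used, so the basic 8-neighbour, radius-1 operator shown here is an assumption:

```python
import numpy as np

def lbp_8_1(img):
    """Basic 8-neighbour, radius-1 LBP code map for a grayscale image
    (after Ojala et al., 2002). The patent's exact variant is unspecified."""
    img = img.astype(np.float64)
    c = img[1:-1, 1:-1]  # center pixels (border excluded)
    # Offsets of the 8 neighbours, clockwise from top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: img.shape[0] - 1 + dy,
                 1 + dx: img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)  # set bit if neighbour >= center
    return code

def lbp_histogram(block, bins=256):
    """Describe an image block by the normalized histogram of its LBP codes."""
    h, _ = np.histogram(lbp_8_1(block), bins=bins, range=(0, bins))
    return h / max(h.sum(), 1)
```

The second-class training and test blocks would then be these LBP histograms computed per block; the SURF descriptors of the first-class blocks would come from a library implementation such as OpenCV's.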
(4a) Divide each of the M training portraits in the training portrait sample set into N blocks of equal size and equal overlap, giving M*N training portrait blocks, which constitute the training portrait block dictionary, denoted Ds.
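The equal-size, equal-overlap block division used throughout steps (3a)-(5a) can be sketched as follows; the block size and overlap are illustrative parameters, since the patent only requires that all blocks share the same size and degree of overlap:

```python
import numpy as np

def divide_into_blocks(img, block=8, overlap=4):
    """Divide a grayscale image into equally sized, equally overlapping blocks.

    block and overlap are illustrative values, not fixed by the patent.
    Returns the list of blocks and their top-left (y, x) positions.
    """
    step = block - overlap  # stride between consecutive block origins
    H, W = img.shape
    blocks, positions = [], []
    for y in range(0, H - block + 1, step):
        for x in range(0, W - block + 1, step):
            blocks.append(img[y:y + block, x:x + block])
            positions.append((y, x))
    return blocks, positions
```

Applied to each of the 4M photos (or M portraits), this yields the N blocks per image referred to in the steps above; keeping the positions allows the synthesized blocks to be placed back at the same locations in step (10a).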
(5a) Divide each of the four photos (the test photo and the fourth, fifth, and sixth filtered photos) into N blocks of equal size and equal overlap; these blocks are called the original test photo blocks, and there are 4*N of them. Extract SURF features and LBP features from each original test photo block: the blocks described by their SURF features are called first-class test photo blocks (4*N in number), and the blocks described by their LBP features are called second-class test photo blocks (also 4*N in number). The original, first-class, and second-class test photo blocks together give 4*N*3 = 12*N test photo blocks, which constitute the test photo block dictionary, denoted Dt.
(6a) For each original test photo block in the test photo block dictionary Dt, concatenate it column-wise with its corresponding first-class and second-class test photo blocks into a single vector; the N vectors so obtained form the original test photo block vector dictionary Dtv. Likewise, concatenate each original training photo block in the training photo block dictionary Dp column-wise with its corresponding first-class and second-class training photo blocks into a single vector; the M*N vectors so obtained form the original training photo block vector dictionary Dpv.
(7a) For each vector in the original test photo block vector dictionary Dtv, compute its Euclidean distance to every vector in the original training photo block vector dictionary Dpv, giving M*N distance values; select the K smallest of them and find the K corresponding vectors in Dpv. For each of these K vectors, retrieve its original, first-class, and second-class training photo blocks. The K original training photo blocks so obtained are called first-class candidate photo blocks, and the K original training portrait blocks corresponding to them are called candidate portrait blocks; the K first-class training photo blocks are called second-class candidate photo blocks, and the K second-class training photo blocks are called third-class candidate photo blocks.
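The search in step (7a) is a plain Euclidean K-nearest-neighbour query over the two vector dictionaries; a sketch (the value of K is a free parameter of the method, not fixed here):

```python
import numpy as np

def k_nearest(test_vec, train_vecs, K=10):
    """Find the K training-block vectors nearest to one test-block vector.

    test_vec: concatenated feature vector of one test block, shape (d,).
    train_vecs: matrix of M*N training-block vectors, shape (M*N, d).
    Returns the indices of the K nearest vectors (nearest first) and
    their Euclidean distances. K=10 is illustrative.
    """
    dists = np.linalg.norm(train_vecs - test_vec, axis=1)  # M*N distances
    idx = np.argsort(dists)[:K]
    return idx, dists[idx]
```

The returned indices identify both the candidate photo blocks and, via the paired training set, the candidate portrait blocks used in step (8a).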
(8a) Using the first-class, second-class, and third-class candidate photo blocks, the candidate portrait blocks, and the original, first-class, and second-class test photo blocks, solve a Markov network model by the method of alternating iteration, obtaining the weights μ1, μ2, μ3 of the first-class, second-class, and third-class candidate photo blocks respectively, together with the weight w of the candidate portrait blocks.
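The patent solves the Markov network by alternating iteration but does not spell out the update equations in this passage. As a rough, hypothetical stand-in for the per-block weight solve, one common choice in this literature is to fit non-negative reconstruction weights for the K candidate blocks against the test block and normalize them:

```python
import numpy as np
from scipy.optimize import nnls

def candidate_weights(test_vec, candidates):
    """Hypothetical proxy for the weight solve of step (8a).

    test_vec: flattened feature vector of the test block, shape (d,).
    candidates: (K, d) matrix, one flattened candidate block per row.
    Fits non-negative weights reconstructing test_vec from the candidates
    (this substitutes for the patent's unstated alternating iteration),
    then normalizes them to sum to 1.
    """
    w, _ = nnls(candidates.T, test_vec)
    s = w.sum()
    return w / s if s > 0 else np.full(len(w), 1.0 / len(w))
```

In the actual method, these weights would additionally be coupled across adjacent block positions through the Markov network's compatibility terms, which this local sketch omits.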
(9a) Multiply the candidate portrait blocks obtained in step (7a) by the weight w obtained in step (8a) to obtain a synthesized portrait block.
(10a) Repeat steps (8a)-(9a) until N synthesized portrait blocks are obtained; combining these N blocks yields the synthesized portrait corresponding to the test photo P.
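The final combination of step (10a) places each synthesized block back at its position; since the blocks overlap, the patent's "combining" has to resolve overlapping pixels somehow. Averaging them is a common convention and is assumed in this sketch:

```python
import numpy as np

def fuse_blocks(blocks, positions, shape):
    """Merge the N synthesized blocks into one image (step 10a).

    Overlapping pixels are averaged -- an assumption; the patent only
    says the blocks are combined.
    """
    acc = np.zeros(shape, dtype=np.float64)  # accumulated pixel sums
    cnt = np.zeros(shape, dtype=np.float64)  # how many blocks cover each pixel
    for b, (y, x) in zip(blocks, positions):
        h, w = b.shape
        acc[y:y + h, x:x + w] += b
        cnt[y:y + h, x:x + w] += 1
    return acc / np.maximum(cnt, 1)
```

The same routine, applied to synthesized photo blocks, gives the final image of step (10b).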
When a portrait is to be synthesized into a photo, the method comprises the following steps, as shown in Fig. 2:
(1b) Select M training portrait-photo pairs as the training basis: the M training portraits form the training portrait sample set, and the M training photos corresponding to them form the training photo sample set; in addition, choose one test portrait S.
(2b) Apply difference-of-Gaussians filtering, center-surround normalization filtering, and Gaussian filtering to each of the M training portraits in the training portrait sample set and to the test portrait S. This yields M first-class filtered portraits (the training portraits after difference-of-Gaussians filtering), M second-class filtered portraits (the training portraits after center-surround normalization filtering), M third-class filtered portraits (the training portraits after Gaussian filtering), one fourth filtered portrait (the test portrait after difference-of-Gaussians filtering), one fifth filtered portrait (the test portrait after center-surround normalization filtering), and one sixth filtered portrait (the test portrait after Gaussian filtering).
(3b) Combine the M training portraits in the training portrait sample set, the M first-class filtered portraits, the M second-class filtered portraits and the M third-class filtered portraits into a portrait set containing 4M portraits. Divide every portrait in this set into N training portrait blocks of identical size and identical overlap; these are the original training portrait blocks, 4M*N in number. Then extract SURF and LBP features from each original training portrait block: the blocks obtained by extracting SURF features are called the first-class training portrait blocks (also 4M*N in number), and the blocks obtained by extracting LBP features are called the second-class training portrait blocks (also 4M*N in number). Combining the original, first-class and second-class training portrait blocks gives 4M*N*3 = 12*M*N training portrait blocks, which form the training portrait block dictionary, denoted Ds';
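The division into size- and overlap-identical blocks in step (3b) can be sketched like this; the 8-pixel block size and 4-pixel step are assumed values, since the patent fixes only that all N blocks share one size and one overlap.

```python
import numpy as np

def to_patches(img, patch=8, step=4):
    # Split an image into equal-size blocks; adjacent blocks overlap by
    # (patch - step) pixels in each direction.
    h, w = img.shape
    return [img[r:r + patch, c:c + patch]
            for r in range(0, h - patch + 1, step)
            for c in range(0, w - patch + 1, step)]

img = np.arange(256, dtype=float).reshape(16, 16)
blocks = to_patches(img)
print(len(blocks))  # 9 blocks (3 x 3 positions) for this 16x16 image
```

Each block would then be described by SURF and LBP features, e.g. via an OpenCV SURF extractor and scikit-image's `local_binary_pattern`; the choice of implementation is not prescribed by the patent.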
(4b) Divide each of the M training photos in the training photo sample set into N training photo blocks of identical size and identical overlap, obtaining M*N training photo blocks; these form the training photo block dictionary, denoted Dp';
(5b) Divide each of the four portraits (the test portrait and the fourth, fifth and sixth filtered portraits) into N test portrait blocks of identical size and identical overlap; these are the original test portrait blocks, 4*N in number. Extract SURF and LBP features from each original test portrait block: the blocks obtained by extracting SURF features are called the first-class test portrait blocks (also 4*N in number), and the blocks obtained by extracting LBP features are called the second-class test portrait blocks (also 4*N in number). Combining the original, first-class and second-class test portrait blocks gives 4*N*3 = 12*N test portrait blocks, which form the test portrait block dictionary, denoted Dt';
(6b) For each original test portrait block in the test portrait block dictionary Dt', concatenate it column-wise with its corresponding first-class and second-class test portrait blocks into one vector; Dt' thus yields N such vectors, called the original test portrait block vector dictionary Dtv'. Likewise, concatenate each original training portrait block in the training portrait block dictionary Ds' column-wise with its corresponding first-class and second-class training portrait blocks into one vector; Ds' thus yields M*N vectors, called the original training portrait block vector dictionary Dsv';
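The per-position vectors of step (6b) are simply the column-wise stack of a raw block with its two feature blocks; a minimal sketch, assuming all three blocks share one (made-up) 8x8 size:

```python
import numpy as np

def patch_vector(raw, surf_feat, lbp_feat):
    # Stack the raw block and its SURF- and LBP-feature blocks column-wise
    # ('F' order) into a single vector, as step (6b) prescribes.
    return np.concatenate([raw.flatten('F'),
                           surf_feat.flatten('F'),
                           lbp_feat.flatten('F')])

raw = np.ones((8, 8))          # original block
surf = np.zeros((8, 8))        # first-class (SURF) feature block
lbp = np.full((8, 8), 0.5)     # second-class (LBP) feature block
v = patch_vector(raw, surf, lbp)
print(v.shape)  # (192,)
```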
(7b) For each vector in the original test portrait block vector dictionary Dtv', compute its Euclidean distance to every vector in the original training portrait block vector dictionary Dsv', obtaining M*N distance values; select the K smallest values and take the K corresponding vectors in Dsv'. For each of these K vectors, recover its original training portrait block, first-class training portrait block and second-class training portrait block: the K original training portrait blocks are called the first-class candidate portrait blocks, and the K original training photo blocks corresponding to them are called the candidate photo blocks; the K first-class training portrait blocks are called the second-class candidate portrait blocks, and the K second-class training portrait blocks are called the third-class candidate portrait blocks;
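The neighbour search of step (7b) is a plain K-nearest-neighbour lookup under Euclidean distance. A sketch with made-up data (K = 5 and the dictionary size are assumed values; the patent leaves K as a parameter):

```python
import numpy as np

def k_nearest(query, dictionary, k=5):
    # Indices of the k dictionary vectors closest to the query under
    # Euclidean distance.
    d = np.linalg.norm(dictionary - query, axis=1)  # the M*N distance values
    return np.argsort(d)[:k]                        # positions of the K smallest

rng = np.random.default_rng(0)
D = rng.random((100, 192))  # stand-in for the M*N training block vectors
q = rng.random(192)         # one test block vector
idx = k_nearest(q, D)       # positions of the K candidate blocks
```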
(8b) Using the first-class, second-class and third-class candidate portrait blocks, the candidate photo blocks, and the original, first-class and second-class test portrait blocks, solve the Markov network model by alternating iteration, obtaining the weights μ1', μ2', μ3' of the first-class, second-class and third-class candidate portrait blocks, together with the weight w' of the candidate photo blocks;
(9b) Multiply the candidate photo blocks obtained in step (7b) by the weight w' obtained in step (8b) to obtain a synthesized photo block;
(10b) Repeat steps (8b)-(9b) until N synthesized photo blocks have been obtained, then combine the N synthesized photo blocks to obtain the synthesized photo corresponding to the original test portrait S.
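The final combination in step (10b) must reconcile the overlapping regions of adjacent synthesized blocks; averaging the overlaps, as sketched below, is one common choice (the patent does not spell out the fusion rule, so this is an assumption):

```python
import numpy as np

def stitch(blocks, positions, shape, patch=8):
    # Sum overlapping blocks into place, then divide by the coverage count
    # so overlapped pixels become the average of their blocks.
    out = np.zeros(shape)
    cnt = np.zeros(shape)
    for (r, c), b in zip(positions, blocks):
        out[r:r + patch, c:c + patch] += b
        cnt[r:r + patch, c:c + patch] += 1
    return out / np.maximum(cnt, 1)

positions = [(r, c) for r in (0, 4, 8) for c in (0, 4, 8)]  # 3x3 grid, step 4
blocks = [np.ones((8, 8))] * len(positions)                 # made-up blocks
result = stitch(blocks, positions, (16, 16))
```

Constant blocks stitch back to a constant image, which is a quick sanity check that the overlap averaging is weight-preserving.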
The effect of the invention can be further illustrated by the following simulation experiments.
1. Simulation conditions
The simulations were run in MATLAB 2012b (MathWorks, USA) under the WINDOWS 7 operating system, on a machine with an Intel(R) Core(TM) i5-3470 3.20 GHz CPU and 8 GB of memory; the database is the CUHK student database of the Chinese University of Hong Kong.
2. Simulation content
Experiment 1: photo-to-portrait synthesis
Following the first specific embodiment of the method of the invention, photo-to-portrait synthesis was carried out on the CUHK student database of the Chinese University of Hong Kong using the multi-feature-fusion method; the locally-linear-embedding-based method LLE and the Markov-network-model-based method MRF were run on the same database for comparison. The experimental comparison is shown in Fig. 3, where Fig. 3(a) is the original photo, Fig. 3(b) is the portrait synthesized by LLE, Fig. 3(c) is the portrait synthesized by MRF, and Fig. 3(d) is the portrait synthesized by the method of the invention;
Experiment 2: portrait-to-photo synthesis
Following the second specific embodiment of the method of the invention, portrait-to-photo synthesis was carried out on the CUHK student database of the Chinese University of Hong Kong using the multi-feature-fusion method; LLE and MRF were run on the same database for comparison. The experimental comparison is shown in Fig. 4, where Fig. 4(a) is the original portrait, Fig. 4(b) is the photo synthesized by LLE, Fig. 4(c) is the photo synthesized by MRF, and Fig. 4(d) is the photo synthesized by the method of the invention.
The results of experiments 1 and 2 show that, by virtue of the multi-feature-fusion idea, the distance relation between two image blocks is weighed more accurately, so the synthesis results are better than those of the other face portrait-photo synthesis methods, demonstrating the advance of the invention.
Claims (1)
1. An image synthesis method based on multi-feature fusion, for synthesizing a photo into a portrait or synthesizing a portrait into a photo, characterized in that:
When a photo needs to be synthesized into a portrait, the method comprises the following steps:
(1a) Select M training portraits and the M training photos corresponding to them as the training basis: the M training portraits form the training portrait sample set and the M corresponding training photos form the training photo sample set; in addition, choose one test photo P;
(2a) Apply difference-of-Gaussians filtering, center-surround normalization filtering and Gaussian filtering to each of the M training photos in the training photo sample set and to the test photo P, obtaining M first-class filtered photos (the training photos after difference-of-Gaussians filtering), M second-class filtered photos (after center-surround normalization filtering), M third-class filtered photos (after Gaussian filtering), one fourth filtered photo (the test photo after difference-of-Gaussians filtering), one fifth filtered photo (the test photo after center-surround normalization filtering) and one sixth filtered photo (the test photo after Gaussian filtering);
(3a) Combine the M training photos in the training photo sample set, the M first-class filtered photos, the M second-class filtered photos and the M third-class filtered photos into a photo set containing 4M photos. Divide every photo in this set into N training photo blocks of identical size and identical overlap; these are the original training photo blocks, 4M*N in number. Then extract SURF and LBP features from each original training photo block: the blocks obtained by extracting SURF features are called the first-class training photo blocks (also 4M*N in number), and the blocks obtained by extracting LBP features are called the second-class training photo blocks (also 4M*N in number). Combining the original, first-class and second-class training photo blocks gives 4M*N*3 = 12*M*N training photo blocks, which form the training photo block dictionary, denoted Dp;
(4a) Divide each of the M training portraits in the training portrait sample set into N training portrait blocks of identical size and identical overlap, obtaining M*N training portrait blocks; these form the training portrait block dictionary, denoted Ds;
(5a) Divide each of the four photos (the test photo and the fourth, fifth and sixth filtered photos) into N test photo blocks of identical size and identical overlap; these are the original test photo blocks, 4*N in number. Extract SURF and LBP features from each original test photo block: the blocks obtained by extracting SURF features are called the first-class test photo blocks (also 4*N in number), and the blocks obtained by extracting LBP features are called the second-class test photo blocks (also 4*N in number). Combining the original, first-class and second-class test photo blocks gives 4*N*3 = 12*N test photo blocks, which form the test photo block dictionary, denoted Dt;
(6a) For each original test photo block in the test photo block dictionary Dt, concatenate it column-wise with its corresponding first-class and second-class test photo blocks into one vector; Dt thus yields N such vectors, called the original test photo block vector dictionary Dtv. Likewise, concatenate each original training photo block in the training photo block dictionary Dp column-wise with its corresponding first-class and second-class training photo blocks into one vector; Dp thus yields M*N vectors, called the original training photo block vector dictionary Dpv;
(7a) For each vector in the original test photo block vector dictionary Dtv, compute its Euclidean distance to every vector in the original training photo block vector dictionary Dpv, obtaining M*N distance values; select the K smallest values and take the K corresponding vectors in Dpv. For each of these K vectors, recover its original training photo block, first-class training photo block and second-class training photo block: the K original training photo blocks are called the first-class candidate photo blocks, and the K original training portrait blocks corresponding to them are called the candidate portrait blocks; the K first-class training photo blocks are called the second-class candidate photo blocks, and the K second-class training photo blocks are called the third-class candidate photo blocks;
(8a) Using the first-class, second-class and third-class candidate photo blocks, the candidate portrait blocks, and the original, first-class and second-class test photo blocks, solve the Markov network model by alternating iteration, obtaining the weights μ1, μ2, μ3 of the first-class, second-class and third-class candidate photo blocks, together with the weight w of the candidate portrait blocks;
(9a) Multiply the candidate portrait blocks obtained in step (7a) by the weight w obtained in step (8a) to obtain a synthesized portrait block;
(10a) Repeat steps (8a)-(9a) until N synthesized portrait blocks have been obtained, then combine the N synthesized portrait blocks to obtain the synthesized portrait corresponding to the original test photo P;
When a portrait needs to be synthesized into a photo, the method comprises the following steps:
(1b) Select M training portraits and the M training photos corresponding to them as the training basis: the M training portraits form the training portrait sample set and the M corresponding training photos form the training photo sample set; in addition, choose one test portrait S;
(2b) Apply difference-of-Gaussians filtering, center-surround normalization filtering and Gaussian filtering to each of the M training portraits in the training portrait sample set and to the test portrait S, obtaining M first-class filtered portraits (the training portraits after difference-of-Gaussians filtering), M second-class filtered portraits (after center-surround normalization filtering), M third-class filtered portraits (after Gaussian filtering), one fourth filtered portrait (the test portrait after difference-of-Gaussians filtering), one fifth filtered portrait (the test portrait after center-surround normalization filtering) and one sixth filtered portrait (the test portrait after Gaussian filtering);
(3b) Combine the M training portraits in the training portrait sample set, the M first-class filtered portraits, the M second-class filtered portraits and the M third-class filtered portraits into a portrait set containing 4M portraits. Divide every portrait in this set into N training portrait blocks of identical size and identical overlap; these are the original training portrait blocks, 4M*N in number. Then extract SURF and LBP features from each original training portrait block: the blocks obtained by extracting SURF features are called the first-class training portrait blocks (also 4M*N in number), and the blocks obtained by extracting LBP features are called the second-class training portrait blocks (also 4M*N in number). Combining the original, first-class and second-class training portrait blocks gives 4M*N*3 = 12*M*N training portrait blocks, which form the training portrait block dictionary, denoted Ds';
(4b) Divide each of the M training photos in the training photo sample set into N training photo blocks of identical size and identical overlap, obtaining M*N training photo blocks; these form the training photo block dictionary, denoted Dp';
(5b) Divide each of the four portraits (the test portrait and the fourth, fifth and sixth filtered portraits) into N test portrait blocks of identical size and identical overlap; these are the original test portrait blocks, 4*N in number. Extract SURF and LBP features from each original test portrait block: the blocks obtained by extracting SURF features are called the first-class test portrait blocks (also 4*N in number), and the blocks obtained by extracting LBP features are called the second-class test portrait blocks (also 4*N in number). Combining the original, first-class and second-class test portrait blocks gives 4*N*3 = 12*N test portrait blocks, which form the test portrait block dictionary, denoted Dt';
(6b) For each original test portrait block in the test portrait block dictionary Dt', concatenate it column-wise with its corresponding first-class and second-class test portrait blocks into one vector; Dt' thus yields N such vectors, called the original test portrait block vector dictionary Dtv'. Likewise, concatenate each original training portrait block in the training portrait block dictionary Ds' column-wise with its corresponding first-class and second-class training portrait blocks into one vector; Ds' thus yields M*N vectors, called the original training portrait block vector dictionary Dsv';
(7b) For each vector in the original test portrait block vector dictionary Dtv', compute its Euclidean distance to every vector in the original training portrait block vector dictionary Dsv', obtaining M*N distance values; select the K smallest values and take the K corresponding vectors in Dsv'. For each of these K vectors, recover its original training portrait block, first-class training portrait block and second-class training portrait block: the K original training portrait blocks are called the first-class candidate portrait blocks, and the K original training photo blocks corresponding to them are called the candidate photo blocks; the K first-class training portrait blocks are called the second-class candidate portrait blocks, and the K second-class training portrait blocks are called the third-class candidate portrait blocks;
(8b) Using the first-class, second-class and third-class candidate portrait blocks, the candidate photo blocks, and the original, first-class and second-class test portrait blocks, solve the Markov network model by alternating iteration, obtaining the weights μ1', μ2', μ3' of the first-class, second-class and third-class candidate portrait blocks, together with the weight w' of the candidate photo blocks;
(9b) Multiply the candidate photo blocks obtained in step (7b) by the weight w' obtained in step (8b) to obtain a synthesized photo block;
(10b) Repeat steps (8b)-(9b) until N synthesized photo blocks have been obtained, then combine the N synthesized photo blocks to obtain the synthesized photo corresponding to the original test portrait S.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410165469.0A CN103984954B (en) | 2014-04-23 | 2014-04-23 | Image combining method based on multi-feature fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103984954A CN103984954A (en) | 2014-08-13 |
CN103984954B true CN103984954B (en) | 2017-06-13 |
Family
ID=51276916
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410165469.0A Expired - Fee Related CN103984954B (en) | 2014-04-23 | 2014-04-23 | Image combining method based on multi-feature fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | | Granted publication date: 20170613 |