CN103984954A - Image synthesis method based on multi-feature fusion

Image synthesis method based on multi-feature fusion

Info

Publication number: CN103984954A (application CN201410165469.0A)
Authority: CN (China)
Legal status: Granted; Expired - Fee Related
Other languages: Chinese (zh)
Other versions: CN103984954B (en)
Inventors: 李洁, 彭春蕾, 王楠楠, 高新波, 任文君, 张铭津, 张声传, 胡彦婷
Original and current assignee: XIDIAN-NINGBO INFORMATION TECHNOLOGY INSTITUTE
Filing date: 2014-04-23
Publication date of CN103984954A (application): 2014-08-13
Publication date of CN103984954B (grant): 2017-06-13


Abstract

The invention relates to an image synthesis method based on multi-feature fusion, used to synthesize a photo into a portrait or to synthesize a portrait into a photo. The method comprises: dividing the database into sample sets; filtering all images, dividing the images into blocks, and extracting features of the image blocks to obtain a training portrait block dictionary and a training photo block dictionary; for the input test photo blocks or test portrait blocks, finding neighbouring blocks with the two dictionaries; building a Markov network model to obtain the portrait blocks or photo blocks to be synthesized; and fusing all portrait blocks or photo blocks to be synthesized into the synthesized portrait or synthesized photo. Compared with the prior art, the synthesized result is sharper and loses less structure, and the method can be used for face retrieval and recognition.

Description

Image synthesis method based on multi-feature fusion
Technical field
The invention belongs to the technical field of image processing and further relates to an image synthesis method based on multi-feature fusion in the fields of pattern recognition and computer vision; it can be used for face retrieval and recognition in criminal investigation and case solving.
Background art
With the development of science and technology, how to accurately discriminate and authenticate a person's identity has become one of the problems urgently in need of a solution. Face recognition, being direct, friendly, and convenient, has been widely studied and applied. One important application of face recognition technology is to assist the police in solving cases. In many situations, however, a photo of the suspect is very hard to obtain; the police can only have a portrait of the suspect drawn from the descriptions of eyewitnesses at the scene and then search and identify it in the police photo database. Because face photos and portraits differ considerably in imaging mechanism, shape, and texture, directly applying existing face recognition methods gives poor recognition results. For this problem, one solution is to convert the photos in the police face database into synthesized portraits and then identify the portrait to be recognized in the synthesized portrait database; another solution is to convert the portrait to be recognized into a synthesized photo and then identify it in the police photo database. Face portrait-photo synthesis is currently based on three kinds of methods: first, face portrait-photo synthesis methods based on local linearity; second, face portrait-photo synthesis methods based on the Markov network model; third, face portrait-photo synthesis methods based on sparse representation.
Liu et al., in the document "Q. S. Liu and X. O. Tang, A nonlinear approach for face sketch synthesis and recognition, in Proc. IEEE Int. Conference on Computer Vision, San Diego, CA, pp. 1005-1010, 20-26 Jun. 2005", proposed a method that approximates a global nonlinearity through local linearity to convert a photo into a synthesized portrait. The method works as follows: first, the portrait-photo pairs in the training set and the photo to be converted are divided into image blocks of the same size with the same overlap; for each block of the photo to be converted, its K nearest neighbour photo blocks are found among the training photo blocks; the K portrait blocks corresponding to those photo blocks are then combined with weights to obtain the portrait block to be synthesized; finally all portrait blocks to be synthesized are fused into the synthesized portrait. The shortcoming of this method is that, because the number of neighbours is fixed, the synthesized result suffers from low sharpness and blurred details.
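To make the locally-linear idea concrete, the sketch below shows one way the reconstruction weights for a single test photo block could be computed from its K nearest training photo blocks; the function name and the regularized least-squares solver are illustrative assumptions, not the published implementation of Liu et al.

```python
import numpy as np

def lle_style_weights(test_block, neighbour_blocks, reg=1e-3):
    """Locally-linear reconstruction weights for one test block (hypothetical sketch).

    test_block:       1-D array, the flattened test photo block
    neighbour_blocks: (K, d) array, its K nearest training photo blocks, flattened
    """
    K = neighbour_blocks.shape[0]
    diff = neighbour_blocks - test_block                # local differences, shape (K, d)
    G = diff @ diff.T                                   # local Gram matrix, shape (K, K)
    G += reg * np.trace(G) * np.eye(K) / K              # regularize for numerical stability
    w = np.linalg.solve(G, np.ones(K))                  # solve G w = 1
    return w / w.sum()                                  # weights constrained to sum to one

# The portrait block to be synthesized is then the same weighted combination of the
# K training portrait blocks paired with those photo blocks:
#   portrait_block = weights @ neighbour_portrait_blocks
```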
Wang et al., in the document "X. Wang and X. Tang, Face Photo-Sketch Synthesis and Recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(11), 1955-1967, 2009", proposed a face portrait-photo synthesis method based on the Markov network model. The method works as follows: first, the sketch-photo pairs in the training set and the test photo are divided into blocks; then, according to the relations between the test photo blocks and the training photo blocks and between portrait blocks at adjacent positions, a Markov network model is built, and for each test photo block a single best training portrait block is selected as the portrait block to be synthesized; finally all portrait blocks to be synthesized are fused into the synthesized portrait. The shortcoming of this method is that only one training portrait block is selected at each photo block position, so the synthesized result suffers from blocking artifacts and missing details.
The patent "Sketch-photo generation method based on sparse representation" applied for by Gao Xinbo et al. (application number 201010289330.9, filing date 2010-09-24, application publication number CN101958000A) discloses a face portrait-photo synthesis method based on sparse representation. The method works as follows: first, an existing method is used to generate an initial estimate of the synthesized portrait or synthesized photo; then sparse representation is used to synthesize the detail information; finally the initial estimate and the detail information are fused. The shortcoming of this method is that it ignores the relations between image blocks at adjacent positions, so the synthesized result suffers from blur and blocking artifacts.
Summary of the invention
The technical problem to be solved by the invention is to overcome the shortcomings of the existing methods described above and to propose an image synthesis method based on multi-feature fusion that improves the image quality of the synthesized portrait or synthesized photo.
The technical scheme adopted by the invention to solve the above technical problem is an image synthesis method based on multi-feature fusion, used to synthesize a photo into a portrait or to synthesize a portrait into a photo, characterized in that:
When a photo is to be synthesized into a portrait, the method comprises the following steps:
(1a) Select M training portraits and the M training photos corresponding to these portraits as the training basis; take the M training portraits as the training portrait sample set and the M corresponding training photos as the training photo sample set; in addition, choose one test photo P.
(2a) Apply difference-of-Gaussians (DoG) filtering, center-surround normalization filtering, and Gaussian filtering to each of the M training photos in the training photo sample set and to the test photo P. This yields M first-type filtered photos (the training photos after DoG filtering), M second-type filtered photos (the training photos after center-surround normalization filtering), M third-type filtered photos (the training photos after Gaussian filtering), a fourth filtered photo (the test photo after DoG filtering), a fifth filtered photo (the test photo after center-surround normalization filtering), and a sixth filtered photo (the test photo after Gaussian filtering).
(3a) Combine the M training photos in the training photo sample set, the M first-type filtered photos, the M second-type filtered photos, and the M third-type filtered photos into a photo set containing 4M photos. Divide every photo in this set into N blocks of identical size and identical overlap; these blocks are called original training photo blocks, and there are 4M*N of them. Then extract the SURF feature and the LBP feature of each original training photo block: the result of extracting the SURF feature is called a first-type training photo block (4M*N of them), and the result of extracting the LBP feature is called a second-type training photo block (also 4M*N of them). Combining the original, first-type, and second-type training photo blocks gives 4M*N*3 = 12*M*N training photo blocks, which form the training photo block dictionary, denoted D_p.
(4a) Divide each of the M portraits in the training portrait sample set into N blocks of identical size and identical overlap, obtaining M*N training portrait blocks; these M*N training portrait blocks form the training portrait block dictionary, denoted D_s.
(5a) Divide each of the four photos, namely the test photo and the fourth, fifth, and sixth filtered photos, into N blocks of identical size and identical overlap; these blocks are called original test photo blocks, and there are 4*N of them. Extract the SURF feature and the LBP feature of each original test photo block: the SURF results are called first-type test photo blocks (4*N of them) and the LBP results are called second-type test photo blocks (also 4*N of them). Combining the original, first-type, and second-type test photo blocks gives 4*N*3 = 12*N test photo blocks, which form the test photo block dictionary, denoted D_t.
(6a) For each original test photo block in the test photo block dictionary D_t, concatenate it column-wise with its corresponding first-type and second-type test photo blocks into one vector; N such vectors are obtained from D_t, and they are called the original test photo block vector dictionary D_tv. Likewise, for each original training photo block in the training photo block dictionary D_p, concatenate it column-wise with its corresponding first-type and second-type training photo blocks into one vector; the M*N vectors obtained from D_p are called the original training photo block vector dictionary D_pv.
(7a) For any vector in the original test photo block vector dictionary D_tv, compute its Euclidean distance to every vector in the original training photo block vector dictionary D_pv, obtaining M*N distance values; select the K smallest distance values and the K corresponding vectors in D_pv. Then retrieve the original, first-type, and second-type training photo blocks of each of these K vectors. The K original training photo blocks obtained in this way are called first-type candidate photo blocks, and the K original training portrait blocks corresponding to these K original training photo blocks are called candidate portrait blocks; the K first-type training photo blocks are called second-type candidate photo blocks, and the K second-type training photo blocks are called third-type candidate photo blocks.
(8a) Using the first-type candidate photo blocks, the second-type candidate photo blocks, the third-type candidate photo blocks, the candidate portrait blocks, the original test photo block, the first-type test photo block, and the second-type test photo block, solve the Markov network model by alternating iteration to obtain the weights μ1, μ2, and μ3 of the first-, second-, and third-type candidate photo blocks and, at the same time, the weight w of the candidate portrait blocks.
(9a) Multiply the candidate portrait blocks obtained in step (7a) by the weight w obtained in step (8a) to obtain a synthesized portrait block.
(10a) Repeat steps (8a)-(9a) until N synthesized portrait blocks are obtained; finally, combine the N synthesized portrait blocks to obtain the synthesized portrait corresponding to the original test photo P.
When a portrait is to be synthesized into a photo, the method comprises the following steps:
(1b) Select M training portraits and the M training photos corresponding to these portraits as the training basis; take the M training portraits as the training portrait sample set and the M corresponding training photos as the training photo sample set; in addition, choose one test portrait S.
(2b) Apply difference-of-Gaussians (DoG) filtering, center-surround normalization filtering, and Gaussian filtering to each of the M training portraits in the training portrait sample set and to the test portrait S. This yields M first-type filtered portraits (the training portraits after DoG filtering), M second-type filtered portraits (the training portraits after center-surround normalization filtering), M third-type filtered portraits (the training portraits after Gaussian filtering), a fourth filtered portrait (the test portrait after DoG filtering), a fifth filtered portrait (the test portrait after center-surround normalization filtering), and a sixth filtered portrait (the test portrait after Gaussian filtering).
(3b) Combine the M training portraits in the training portrait sample set, the M first-type filtered portraits, the M second-type filtered portraits, and the M third-type filtered portraits into a portrait set containing 4M portraits. Divide every portrait in this set into N blocks of identical size and identical overlap; these blocks are called original training portrait blocks, and there are 4M*N of them. Then extract the SURF feature and the LBP feature of each original training portrait block: the SURF results are called first-type training portrait blocks (4M*N of them) and the LBP results are called second-type training portrait blocks (also 4M*N of them). Combining the original, first-type, and second-type training portrait blocks gives 4M*N*3 = 12*M*N training portrait blocks, which form the training portrait block dictionary, denoted D_s'.
(4b) Divide each of the M photos in the training photo sample set into N blocks of identical size and identical overlap, obtaining M*N training photo blocks; these M*N training photo blocks form the training photo block dictionary, denoted D_p'.
(5b) Divide each of the four portraits, namely the test portrait and the fourth, fifth, and sixth filtered portraits, into N blocks of identical size and identical overlap; these blocks are called original test portrait blocks, and there are 4*N of them. Extract the SURF feature and the LBP feature of each original test portrait block: the SURF results are called first-type test portrait blocks (4*N of them) and the LBP results are called second-type test portrait blocks (also 4*N of them). Combining the original, first-type, and second-type test portrait blocks gives 4*N*3 = 12*N test portrait blocks, which form the test portrait block dictionary, denoted D_t'.
(6b) For each original test portrait block in the test portrait block dictionary D_t', concatenate it column-wise with its corresponding first-type and second-type test portrait blocks into one vector; N such vectors are obtained from D_t', and they are called the original test portrait block vector dictionary D_tv'. Likewise, for each original training portrait block in the training portrait block dictionary D_s', concatenate it column-wise with its corresponding first-type and second-type training portrait blocks into one vector; the M*N vectors obtained from D_s' are called the original training portrait block vector dictionary D_sv'.
(7b) For any vector in the original test portrait block vector dictionary D_tv', compute its Euclidean distance to every vector in the original training portrait block vector dictionary D_sv', obtaining M*N distance values; select the K smallest distance values and the K corresponding vectors in D_sv'. Then retrieve the original, first-type, and second-type training portrait blocks of each of these K vectors. The K original training portrait blocks obtained in this way are called first-type candidate portrait blocks, and the K original training photo blocks corresponding to these K original training portrait blocks are called candidate photo blocks; the K first-type training portrait blocks are called second-type candidate portrait blocks, and the K second-type training portrait blocks are called third-type candidate portrait blocks.
(8b) Using the first-type candidate portrait blocks, the second-type candidate portrait blocks, the third-type candidate portrait blocks, the candidate photo blocks, the original test portrait block, the first-type test portrait block, and the second-type test portrait block, solve the Markov network model by alternating iteration to obtain the weights μ1', μ2', and μ3' of the first-, second-, and third-type candidate portrait blocks and, at the same time, the weight w' of the candidate photo blocks.
(9b) Multiply the candidate photo blocks obtained in step (7b) by the weight w' obtained in step (8b) to obtain a synthesized photo block.
(10b) Repeat steps (8b)-(9b) until N synthesized photo blocks are obtained; finally, combine the N synthesized photo blocks to obtain the synthesized photo corresponding to the original test portrait S.
Compared with the prior art, the invention has the following advantages:
First, the invention takes the relations between image blocks at adjacent positions into account and selects several neighbouring image blocks simultaneously for reconstruction at each block position, which makes the synthesized result clearer.
Second, the invention measures the distance between two image blocks by multi-feature fusion, which improves the quality of the synthesized result and effectively avoids loss of structure.
Brief description of the drawings
Fig. 1 is the flowchart of the photo-to-portrait synthesis of the invention based on multi-feature fusion;
Fig. 2 is the flowchart of the portrait-to-photo synthesis of the invention based on multi-feature fusion;
Fig. 3 compares the portraits synthesized by the invention and by two existing methods on the CUHK student database;
Fig. 4 compares the photos synthesized by the invention and by two existing methods on the CUHK student database.
Embodiment
The invention is described in further detail below with reference to the accompanying drawings.
The image synthesis method based on multi-feature fusion provided by the invention can synthesize a portrait from a photo or a photo from a portrait. When a photo is to be synthesized into a portrait, the method comprises the following steps (see Fig. 1):
(1a) Select M training portraits and the M training photos corresponding to these portraits as the training basis; take the M training portraits as the training portrait sample set and the M corresponding training photos as the training photo sample set; in addition, choose one test photo P.
(2a) Apply difference-of-Gaussians (DoG) filtering, center-surround normalization filtering, and Gaussian filtering to each of the M training photos in the training photo sample set and to the test photo P. This yields M first-type filtered photos (the training photos after DoG filtering), M second-type filtered photos (the training photos after center-surround normalization filtering), M third-type filtered photos (the training photos after Gaussian filtering), a fourth filtered photo (the test photo after DoG filtering), a fifth filtered photo (the test photo after center-surround normalization filtering), and a sixth filtered photo (the test photo after Gaussian filtering). In this step, difference-of-Gaussians filtering, center-surround normalization filtering, and Gaussian filtering are all existing conventional techniques.
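As an illustration of step (2a), the sketch below applies the three filters with SciPy; the Gaussian sigmas and the specific local mean/variance form of the center-surround normalization are assumptions made for the example, since the patent does not specify them.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_filter(img, sigma1=1.0, sigma2=2.0):
    """Difference-of-Gaussians: subtract a wider Gaussian blur from a narrower one."""
    img = np.asarray(img, dtype=float)
    return gaussian_filter(img, sigma1) - gaussian_filter(img, sigma2)

def center_surround_normalize(img, sigma=4.0, eps=1e-6):
    """One common center-surround normalization: remove the local (surround) mean and
    divide by the local standard deviation, both estimated over a Gaussian window."""
    img = np.asarray(img, dtype=float)
    local_mean = gaussian_filter(img, sigma)
    centered = img - local_mean
    local_var = gaussian_filter(centered ** 2, sigma)
    return centered / np.sqrt(local_var + eps)

def smooth(img, sigma=1.0):
    """Plain Gaussian filtering."""
    return gaussian_filter(np.asarray(img, dtype=float), sigma)

# For each training photo (and the test photo P), the three filtered copies of step (2a):
#   filtered = [dog_filter(p), center_surround_normalize(p), smooth(p)]
```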
(3a) Combine the M training photos in the training photo sample set, the M first-type filtered photos, the M second-type filtered photos, and the M third-type filtered photos into a photo set containing 4M photos. Divide every photo in this set into N blocks of identical size and identical overlap; these blocks are called original training photo blocks, and there are 4M*N of them. Then extract the SURF feature and the LBP feature of each original training photo block: the result of extracting the SURF feature is called a first-type training photo block (4M*N of them), and the result of extracting the LBP feature is called a second-type training photo block (also 4M*N of them). Combining the original, first-type, and second-type training photo blocks gives 4M*N*3 = 12*M*N training photo blocks, which form the training photo block dictionary, denoted D_p.
In this step, the extraction of SURF features and LBP features follows conventional techniques; see respectively "H. Bay, A. Ess, T. Tuytelaars, L. Van Gool. SURF: Speeded Up Robust Features. Computer Vision and Image Understanding, 110(3): 346-359, 2008" and "T. Ojala, M. Pietikäinen, T. Mäenpää. Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7): 971-987, 2002".
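For the per-block features of step (3a), a minimal LBP sketch with scikit-image is shown below; dense SURF descriptors can be computed analogously (for example with OpenCV's contrib SURF implementation). The neighbourhood parameters P and R are illustrative assumptions, not values taken from the patent.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_feature(block, P=8, R=1):
    """Uniform LBP histogram of one image block (illustrative parameters)."""
    codes = local_binary_pattern(block, P, R, method="uniform")
    n_bins = P + 2                                    # uniform LBP has P + 2 distinct codes
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist
```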
(4a) Divide each of the M portraits in the training portrait sample set into N blocks of identical size and identical overlap, obtaining M*N training portrait blocks; these M*N training portrait blocks form the training portrait block dictionary, denoted D_s.
(5a) Divide each of the four photos, namely the test photo and the fourth, fifth, and sixth filtered photos, into N blocks of identical size and identical overlap; these blocks are called original test photo blocks, and there are 4*N of them. Extract the SURF feature and the LBP feature of each original test photo block: the SURF results are called first-type test photo blocks (4*N of them) and the LBP results are called second-type test photo blocks (also 4*N of them). Combining the original, first-type, and second-type test photo blocks gives 4*N*3 = 12*N test photo blocks, which form the test photo block dictionary, denoted D_t.
(6a) For each original test photo block in the test photo block dictionary D_t, concatenate it column-wise with its corresponding first-type and second-type test photo blocks into one vector; N such vectors are obtained from D_t, and they are called the original test photo block vector dictionary D_tv. Likewise, for each original training photo block in the training photo block dictionary D_p, concatenate it column-wise with its corresponding first-type and second-type training photo blocks into one vector; the M*N vectors obtained from D_p are called the original training photo block vector dictionary D_pv.
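A sketch of how the vectors of step (6a) could be assembled: each block's raw pixels and its SURF and LBP features are flattened and stacked into a single vector. The helper name and the stacking order are hypothetical.

```python
import numpy as np

def block_vector(raw_block, surf_feat, lbp_feat):
    """Concatenate one block's raw pixels with its SURF and LBP features into the
    single vector used for the dictionary search."""
    return np.concatenate([raw_block.ravel(), surf_feat.ravel(), lbp_feat.ravel()])

# D_tv: one vector per test block position; D_pv: M*N vectors from the training blocks.
#   D_tv = np.stack([block_vector(b, s, l) for b, s, l in test_blocks])
#   D_pv = np.stack([block_vector(b, s, l) for b, s, l in train_blocks])
```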
(7a) For any vector in the original test photo block vector dictionary D_tv, compute its Euclidean distance to every vector in the original training photo block vector dictionary D_pv, obtaining M*N distance values; select the K smallest distance values and the K corresponding vectors in D_pv. Then retrieve the original, first-type, and second-type training photo blocks of each of these K vectors. The K original training photo blocks obtained in this way are called first-type candidate photo blocks, and the K original training portrait blocks corresponding to these K original training photo blocks are called candidate portrait blocks; the K first-type training photo blocks are called second-type candidate photo blocks, and the K second-type training photo blocks are called third-type candidate photo blocks.
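The K-nearest-neighbour search of step (7a) reduces to Euclidean distances over the vector dictionaries; K is a free parameter that the patent leaves unspecified, so the default below is only illustrative.

```python
import numpy as np

def k_nearest(test_vec, D_pv, K=10):
    """Return the indices of the K vectors in D_pv (shape (M*N, d)) closest to
    test_vec in Euclidean distance."""
    dists = np.linalg.norm(D_pv - test_vec, axis=1)   # M*N distance values
    return np.argsort(dists)[:K]

# The returned indices select the first/second/third-type candidate photo blocks and
# the paired candidate portrait blocks used in steps (7a)-(8a).
```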
(8a) Using the first-type candidate photo blocks, the second-type candidate photo blocks, the third-type candidate photo blocks, the candidate portrait blocks, the original test photo block, the first-type test photo block, and the second-type test photo block, solve the Markov network model by alternating iteration to obtain the weights μ1, μ2, and μ3 of the first-, second-, and third-type candidate photo blocks and, at the same time, the weight w of the candidate portrait blocks.
(9a) Multiply the candidate portrait blocks obtained in step (7a) by the weight w obtained in step (8a) to obtain a synthesized portrait block.
(10a) Repeat steps (8a)-(9a) until N synthesized portrait blocks are obtained; finally, combine the N synthesized portrait blocks to obtain the synthesized portrait corresponding to the original test photo P.
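Once step (8a) has produced the weight vector w for the K candidate portrait blocks at a position, steps (9a) and (10a) amount to a weighted combination per position followed by fusing the overlapping blocks. Averaging the overlapping regions is a common choice and is only an assumption here, since the patent does not state the fusion rule.

```python
import numpy as np

def synthesize_portrait(candidate_blocks, weights, positions, out_shape):
    """candidate_blocks: list of (K, h, w) arrays, one per block position
    weights:           list of length-K weight vectors w from the MRF solve
    positions:         list of (row, col) top-left coordinates of each block
    out_shape:         (H, W) of the synthesized portrait"""
    acc = np.zeros(out_shape)
    cnt = np.zeros(out_shape)
    for blocks, w, (r, c) in zip(candidate_blocks, weights, positions):
        block = np.tensordot(w, blocks, axes=1)        # step (9a): weighted candidate block
        h, wd = block.shape
        acc[r:r + h, c:c + wd] += block                # accumulate overlapping blocks
        cnt[r:r + h, c:c + wd] += 1.0
    return acc / np.maximum(cnt, 1.0)                  # average in the overlap regions
```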
When a portrait is to be synthesized into a photo, the method comprises the following steps (see Fig. 2):
(1b) Select M training portraits and the M training photos corresponding to these portraits as the training basis; take the M training portraits as the training portrait sample set and the M corresponding training photos as the training photo sample set; in addition, choose one test portrait S.
(2b) Apply difference-of-Gaussians (DoG) filtering, center-surround normalization filtering, and Gaussian filtering to each of the M training portraits in the training portrait sample set and to the test portrait S. This yields M first-type filtered portraits (the training portraits after DoG filtering), M second-type filtered portraits (the training portraits after center-surround normalization filtering), M third-type filtered portraits (the training portraits after Gaussian filtering), a fourth filtered portrait (the test portrait after DoG filtering), a fifth filtered portrait (the test portrait after center-surround normalization filtering), and a sixth filtered portrait (the test portrait after Gaussian filtering).
(3b) Combine the M training portraits in the training portrait sample set, the M first-type filtered portraits, the M second-type filtered portraits, and the M third-type filtered portraits into a portrait set containing 4M portraits. Divide every portrait in this set into N blocks of identical size and identical overlap; these blocks are called original training portrait blocks, and there are 4M*N of them. Then extract the SURF feature and the LBP feature of each original training portrait block: the SURF results are called first-type training portrait blocks (4M*N of them) and the LBP results are called second-type training portrait blocks (also 4M*N of them). Combining the original, first-type, and second-type training portrait blocks gives 4M*N*3 = 12*M*N training portrait blocks, which form the training portrait block dictionary, denoted D_s'.
(4b) Divide each of the M photos in the training photo sample set into N blocks of identical size and identical overlap, obtaining M*N training photo blocks; these M*N training photo blocks form the training photo block dictionary, denoted D_p'.
(5b) Divide each of the four portraits, namely the test portrait and the fourth, fifth, and sixth filtered portraits, into N blocks of identical size and identical overlap; these blocks are called original test portrait blocks, and there are 4*N of them. Extract the SURF feature and the LBP feature of each original test portrait block: the SURF results are called first-type test portrait blocks (4*N of them) and the LBP results are called second-type test portrait blocks (also 4*N of them). Combining the original, first-type, and second-type test portrait blocks gives 4*N*3 = 12*N test portrait blocks, which form the test portrait block dictionary, denoted D_t'.
(6b) For each original test portrait block in the test portrait block dictionary D_t', concatenate it column-wise with its corresponding first-type and second-type test portrait blocks into one vector; N such vectors are obtained from D_t', and they are called the original test portrait block vector dictionary D_tv'. Likewise, for each original training portrait block in the training portrait block dictionary D_s', concatenate it column-wise with its corresponding first-type and second-type training portrait blocks into one vector; the M*N vectors obtained from D_s' are called the original training portrait block vector dictionary D_sv'.
(7b) For any vector in the original test portrait block vector dictionary D_tv', compute its Euclidean distance to every vector in the original training portrait block vector dictionary D_sv', obtaining M*N distance values; select the K smallest distance values and the K corresponding vectors in D_sv'. Then retrieve the original, first-type, and second-type training portrait blocks of each of these K vectors. The K original training portrait blocks obtained in this way are called first-type candidate portrait blocks, and the K original training photo blocks corresponding to these K original training portrait blocks are called candidate photo blocks; the K first-type training portrait blocks are called second-type candidate portrait blocks, and the K second-type training portrait blocks are called third-type candidate portrait blocks.
(8b) Using the first-type candidate portrait blocks, the second-type candidate portrait blocks, the third-type candidate portrait blocks, the candidate photo blocks, the original test portrait block, the first-type test portrait block, and the second-type test portrait block, solve the Markov network model by alternating iteration to obtain the weights μ1', μ2', and μ3' of the first-, second-, and third-type candidate portrait blocks and, at the same time, the weight w' of the candidate photo blocks.
(9b) Multiply the candidate photo blocks obtained in step (7b) by the weight w' obtained in step (8b) to obtain a synthesized photo block.
(10b) Repeat steps (8b)-(9b) until N synthesized photo blocks are obtained; finally, combine the N synthesized photo blocks to obtain the synthesized photo corresponding to the original test portrait S.
The effect of the invention is further illustrated by the following simulation experiments.
1. Simulation conditions
The simulations were run on a machine with an Intel(R) Core(TM) i5-3470 3.20 GHz CPU, 8 GB of memory, and the Windows 7 operating system, using MATLAB R2012b developed by MathWorks (USA); the database used is the CUHK student database of the Chinese University of Hong Kong.
2. Simulation content
Experiment 1: photo-to-portrait synthesis
Following the embodiment of the method of the invention, the multi-feature-fusion method was used to synthesize portraits from photos on the CUHK student database of the Chinese University of Hong Kong, and the locally-linear method LLE and the Markov-network-based method MRF were used to synthesize portraits from photos on the same database. The experimental comparison is shown in Fig. 3, where Fig. 3(a) is the original photo, Fig. 3(b) is the portrait synthesized by the locally-linear method LLE, Fig. 3(c) is the portrait synthesized by the Markov-network-based method MRF, and Fig. 3(d) is the portrait synthesized by the method of the invention.
Experiment 2: portrait-to-photo synthesis
Following the second embodiment of the method of the invention, the multi-feature-fusion method was used to synthesize photos from portraits on the CUHK student database of the Chinese University of Hong Kong, and the locally-linear method LLE and the Markov-network-based method MRF were used to synthesize photos from portraits on the same database. The experimental comparison is shown in Fig. 4, where Fig. 4(a) is the original portrait, Fig. 4(b) is the photo synthesized by the locally-linear method LLE, Fig. 4(c) is the photo synthesized by the Markov-network-based method MRF, and Fig. 4(d) is the photo synthesized by the method of the invention.
The results of Experiments 1 and 2 show that, through multi-feature fusion, the distance between two image blocks can be measured more accurately, so the synthesized results are better than those of the other face portrait-photo synthesis methods, which verifies the advancement of the invention.

Claims (1)

1. An image synthesis method based on multi-feature fusion, used to synthesize a photo into a portrait or to synthesize a portrait into a photo, characterized in that:
When a photo is to be synthesized into a portrait, the method comprises the following steps:
(1a) Select M training portraits and the M training photos corresponding to these portraits as the training basis; take the M training portraits as the training portrait sample set and the M corresponding training photos as the training photo sample set; in addition, choose one test photo P.
(2a) Apply difference-of-Gaussians (DoG) filtering, center-surround normalization filtering, and Gaussian filtering to each of the M training photos in the training photo sample set and to the test photo P. This yields M first-type filtered photos (the training photos after DoG filtering), M second-type filtered photos (the training photos after center-surround normalization filtering), M third-type filtered photos (the training photos after Gaussian filtering), a fourth filtered photo (the test photo after DoG filtering), a fifth filtered photo (the test photo after center-surround normalization filtering), and a sixth filtered photo (the test photo after Gaussian filtering).
(3a) Combine the M training photos in the training photo sample set, the M first-type filtered photos, the M second-type filtered photos, and the M third-type filtered photos into a photo set containing 4M photos. Divide every photo in this set into N blocks of identical size and identical overlap; these blocks are called original training photo blocks, and there are 4M*N of them. Then extract the SURF feature and the LBP feature of each original training photo block: the result of extracting the SURF feature is called a first-type training photo block (4M*N of them), and the result of extracting the LBP feature is called a second-type training photo block (also 4M*N of them). Combining the original, first-type, and second-type training photo blocks gives 4M*N*3 = 12*M*N training photo blocks, which form the training photo block dictionary, denoted D_p.
(4a) Divide each of the M portraits in the training portrait sample set into N blocks of identical size and identical overlap, obtaining M*N training portrait blocks; these M*N training portrait blocks form the training portrait block dictionary, denoted D_s.
(5a) Divide each of the four photos, namely the test photo and the fourth, fifth, and sixth filtered photos, into N blocks of identical size and identical overlap; these blocks are called original test photo blocks, and there are 4*N of them. Extract the SURF feature and the LBP feature of each original test photo block: the SURF results are called first-type test photo blocks (4*N of them) and the LBP results are called second-type test photo blocks (also 4*N of them). Combining the original, first-type, and second-type test photo blocks gives 4*N*3 = 12*N test photo blocks, which form the test photo block dictionary, denoted D_t.
(6a) For each original test photo block in the test photo block dictionary D_t, concatenate it column-wise with its corresponding first-type and second-type test photo blocks into one vector; N such vectors are obtained from D_t, and they are called the original test photo block vector dictionary D_tv. Likewise, for each original training photo block in the training photo block dictionary D_p, concatenate it column-wise with its corresponding first-type and second-type training photo blocks into one vector; the M*N vectors obtained from D_p are called the original training photo block vector dictionary D_pv.
(7a) For any vector in the original test photo block vector dictionary D_tv, compute its Euclidean distance to every vector in the original training photo block vector dictionary D_pv, obtaining M*N distance values; select the K smallest distance values and the K corresponding vectors in D_pv. Then retrieve the original, first-type, and second-type training photo blocks of each of these K vectors. The K original training photo blocks obtained in this way are called first-type candidate photo blocks, and the K original training portrait blocks corresponding to these K original training photo blocks are called candidate portrait blocks; the K first-type training photo blocks are called second-type candidate photo blocks, and the K second-type training photo blocks are called third-type candidate photo blocks.
(8a) Using the first-type candidate photo blocks, the second-type candidate photo blocks, the third-type candidate photo blocks, the candidate portrait blocks, the original test photo block, the first-type test photo block, and the second-type test photo block, solve the Markov network model by alternating iteration to obtain the weights μ1, μ2, and μ3 of the first-, second-, and third-type candidate photo blocks and, at the same time, the weight w of the candidate portrait blocks.
(9a) Multiply the candidate portrait blocks obtained in step (7a) by the weight w obtained in step (8a) to obtain a synthesized portrait block.
(10a) Repeat steps (8a)-(9a) until N synthesized portrait blocks are obtained; finally, combine the N synthesized portrait blocks to obtain the synthesized portrait corresponding to the original test photo P;
When a portrait is to be synthesized into a photo, the method comprises the following steps:
(1b) Select M training portraits and the M training photos corresponding to these portraits as the training basis; take the M training portraits as the training portrait sample set and the M corresponding training photos as the training photo sample set; in addition, choose one test portrait S.
(2b) Apply difference-of-Gaussians (DoG) filtering, center-surround normalization filtering, and Gaussian filtering to each of the M training portraits in the training portrait sample set and to the test portrait S. This yields M first-type filtered portraits (the training portraits after DoG filtering), M second-type filtered portraits (the training portraits after center-surround normalization filtering), M third-type filtered portraits (the training portraits after Gaussian filtering), a fourth filtered portrait (the test portrait after DoG filtering), a fifth filtered portrait (the test portrait after center-surround normalization filtering), and a sixth filtered portrait (the test portrait after Gaussian filtering).
(3b) Combine the M training portraits in the training portrait sample set, the M first-type filtered portraits, the M second-type filtered portraits, and the M third-type filtered portraits into a portrait set containing 4M portraits. Divide every portrait in this set into N blocks of identical size and identical overlap; these blocks are called original training portrait blocks, and there are 4M*N of them. Then extract the SURF feature and the LBP feature of each original training portrait block: the SURF results are called first-type training portrait blocks (4M*N of them) and the LBP results are called second-type training portrait blocks (also 4M*N of them). Combining the original, first-type, and second-type training portrait blocks gives 4M*N*3 = 12*M*N training portrait blocks, which form the training portrait block dictionary, denoted D_s'.
(4b) Divide each of the M photos in the training photo sample set into N blocks of identical size and identical overlap, obtaining M*N training photo blocks; these M*N training photo blocks form the training photo block dictionary, denoted D_p'.
(5b) Divide each of the four portraits, namely the test portrait and the fourth, fifth, and sixth filtered portraits, into N blocks of identical size and identical overlap; these blocks are called original test portrait blocks, and there are 4*N of them. Extract the SURF feature and the LBP feature of each original test portrait block: the SURF results are called first-type test portrait blocks (4*N of them) and the LBP results are called second-type test portrait blocks (also 4*N of them). Combining the original, first-type, and second-type test portrait blocks gives 4*N*3 = 12*N test portrait blocks, which form the test portrait block dictionary, denoted D_t'.
(6b) For each original test portrait block in the test portrait block dictionary D_t', concatenate it column-wise with its corresponding first-type and second-type test portrait blocks into one vector; N such vectors are obtained from D_t', and they are called the original test portrait block vector dictionary D_tv'. Likewise, for each original training portrait block in the training portrait block dictionary D_s', concatenate it column-wise with its corresponding first-type and second-type training portrait blocks into one vector; the M*N vectors obtained from D_s' are called the original training portrait block vector dictionary D_sv'.
(7b) For any vector in the original test portrait block vector dictionary D_tv', compute its Euclidean distance to every vector in the original training portrait block vector dictionary D_sv', obtaining M*N distance values; select the K smallest distance values and the K corresponding vectors in D_sv'. Then retrieve the original, first-type, and second-type training portrait blocks of each of these K vectors. The K original training portrait blocks obtained in this way are called first-type candidate portrait blocks, and the K original training photo blocks corresponding to these K original training portrait blocks are called candidate photo blocks; the K first-type training portrait blocks are called second-type candidate portrait blocks, and the K second-type training portrait blocks are called third-type candidate portrait blocks.
(8b) Using the first-type candidate portrait blocks, the second-type candidate portrait blocks, the third-type candidate portrait blocks, the candidate photo blocks, the original test portrait block, the first-type test portrait block, and the second-type test portrait block, solve the Markov network model by alternating iteration to obtain the weights μ1', μ2', and μ3' of the first-, second-, and third-type candidate portrait blocks and, at the same time, the weight w' of the candidate photo blocks.
(9b) Multiply the candidate photo blocks obtained in step (7b) by the weight w' obtained in step (8b) to obtain a synthesized photo block.
(10b) Repeat steps (8b)-(9b) until N synthesized photo blocks are obtained; finally, combine the N synthesized photo blocks to obtain the synthesized photo corresponding to the original test portrait S.
CN201410165469.0A 2014-04-23 2014-04-23 Image combining method based on multi-feature fusion Expired - Fee Related CN103984954B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410165469.0A CN103984954B (en) 2014-04-23 2014-04-23 Image combining method based on multi-feature fusion

Publications (2)

Publication Number Publication Date
CN103984954A true CN103984954A (en) 2014-08-13
CN103984954B CN103984954B (en) 2017-06-13

Family

ID=51276916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410165469.0A Expired - Fee Related CN103984954B (en) 2014-04-23 2014-04-23 Image combining method based on multi-feature fusion

Country Status (1)

Country Link
CN (1) CN103984954B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050180626A1 (en) * 2004-02-12 2005-08-18 Nec Laboratories Americas, Inc. Estimating facial pose from a sparse representation
CN101169830A (en) * 2007-11-30 2008-04-30 西安电子科技大学 Human face portrait automatic generation method based on embedded type hidden markov model and selective integration
CN101482925A (en) * 2009-01-16 2009-07-15 西安电子科技大学 Photograph generation method based on local embedding type hidden Markov model
CN102013020A (en) * 2009-09-08 2011-04-13 王晓刚 Method and system for synthesizing human face image
CN103080979A (en) * 2010-09-03 2013-05-01 王晓刚 System and method for synthesizing portrait sketch from photo
CN101958000A (en) * 2010-09-24 2011-01-26 西安电子科技大学 Face image-picture generating method based on sparse representation
CN102110303A (en) * 2011-03-10 2011-06-29 西安电子科技大学 Method for synthesizing face fake portrait\fake photo based on support vector regression

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104517274A (en) * 2014-12-25 2015-04-15 西安电子科技大学 Face portrait synthesis method based on greedy search
CN104517274B (en) * 2014-12-25 2017-06-16 西安电子科技大学 Face portrait synthesis method based on greedy search
CN105989584A (en) * 2015-01-29 2016-10-05 北京大学 Image stylized reconstruction method and device
CN105989584B (en) * 2015-01-29 2019-05-14 北京大学 Image stylized reconstruction method and device
CN104700380A (en) * 2015-03-12 2015-06-10 陕西炬云信息科技有限公司 Face portrait compositing method based on single photos and portrait pairs
CN104700439A (en) * 2015-03-12 2015-06-10 陕西炬云信息科技有限公司 Single target portrait-based face portrait compositing method
CN104700439B (en) * 2015-03-12 2017-08-15 陕西炬云信息科技有限公司 Face portrait compositing method based on a single target portrait
CN104700380B (en) * 2015-03-12 2017-08-15 陕西炬云信息科技有限公司 Face portrait compositing method based on single photos and portrait pairs
CN106023079A (en) * 2016-05-19 2016-10-12 西安电子科技大学 Two-stage face sketch generation method capable of combining local and global characteristics
CN106023079B (en) * 2016-05-19 2019-05-24 西安电子科技大学 Two-stage face sketch generation method combining local and global characteristics
CN107392213A (en) * 2017-07-21 2017-11-24 西安电子科技大学 Human face portrait synthetic method based on the study of the depth map aspect of model
CN107392213B (en) * 2017-07-21 2020-04-07 西安电子科技大学 Face portrait synthesis method based on depth map model feature learning

Also Published As

Publication number Publication date
CN103984954B (en) 2017-06-13


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170613