CN105844605B - Face portrait synthesis method based on adaptive representation - Google Patents

Face portrait synthesis method based on adaptive representation

Info

Publication number
CN105844605B
CN105844605B (application CN201610152915.3A, also published as CN201610152915A)
Authority
CN
China
Prior art keywords
block
photo
portrait
test
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610152915.3A
Other languages
Chinese (zh)
Other versions
CN105844605A (en)
Inventor
王楠楠
于昕晔
高新波
彭春蕾
李洁
査文锦
孙雷雨
张宇航
朱明瑞
曹兵
马卓奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Aimo Technology Co ltd
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201610152915.3A priority Critical patent/CN105844605B/en
Publication of CN105844605A publication Critical patent/CN105844605A/en
Application granted granted Critical
Publication of CN105844605B publication Critical patent/CN105844605B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20224Image subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a face portrait synthesis method based on adaptive representation, which mainly addresses the low clarity and incomplete detail of portraits synthesized by existing methods. The implementation steps are: first, process the database by filtering all images, dividing them into image blocks, and extracting image block features, yielding one training portrait-block dictionary and two photo-block dictionaries; second, select a different dictionary for each test photo block depending on whether the block contains edge information or facial feature-point information, and find its nearest-neighbor blocks; finally, obtain the portrait blocks to be synthesized with a Markov network model and fuse all of them into the synthesized portrait. Compared with conventional methods, the synthesis result of the invention has higher clarity and more complete detail, and can be used for face retrieval and recognition.

Description

Face portrait synthesis method based on adaptive representation
Technical field
The present invention belongs to the technical field of image processing, and further relates to face portrait synthesis methods in the fields of pattern recognition and computer vision. It can be used for face retrieval and recognition in criminal investigation and case detection.
Background technology
With the development of modern society and multimedia, more and more video images of people are recorded, and identifying and authenticating a person's identity from existing images has become a pressing problem. Face recognition, being direct, friendly, and convenient, has received extensive research and application; one of its important applications is assisting the police in criminal investigation and case detection. In many major cases, however, a photo of the suspect is very hard to obtain, or the available photo is non-frontal or unevenly illuminated. The police can then draw a portrait of the suspect from eyewitness descriptions and later retrieve and identify it in the police photo database. Since face photos and portraits differ considerably in imaging mechanism, shape, and texture, applying existing face recognition methods directly to portraits performs poorly. Two kinds of solutions address this problem: one converts the photos in the police face database into synthesized portraits and then identifies the query portrait in the synthesized-portrait database; the other converts the query portrait into a synthesized photo and then identifies it in the police photo database. Current face portrait synthesis is usually based on three kinds of methods: first, face portrait synthesis based on local linearity; second, face portrait synthesis based on Markov network models; third, face portrait synthesis based on sparse representation.
Liu et al., in "Q. S. Liu and X. O. Tang, A nonlinear approach for face sketch synthesis and recognition, in Proc. IEEE Int. Conference on Computer Vision, San Diego, CA, pp. 1005-1010, 20-26 Jun. 2005", proposed a method that approximates a global nonlinearity with local linearity to convert photos into synthesized portraits. The method first divides the photo-portrait pairs in the training set and the photo to be transformed into image blocks of the same size and the same overlap; for each block of the photo to be transformed, it finds its K nearest-neighbor blocks among the training photo blocks, then weights and combines the portrait blocks corresponding to those K photo blocks to obtain the portrait block to be synthesized, and finally fuses all portrait blocks to be synthesized into the synthesized portrait. The shortcoming of this method is that, because the number of neighbors is fixed, the synthesis result suffers from low clarity and blurred detail.
Wang et al., in "X. Wang and X. Tang, Face Photo-Sketch Synthesis and Recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(11):1955-1967, 2009", proposed a face portrait synthesis method based on a Markov network model. The method first divides the sketch-photo pairs in the training set and the test photo into blocks, then establishes a Markov network model from the relationship between test photo blocks and training photo blocks and the relationship between portrait blocks at adjacent positions, finds for each test photo block a single best training portrait block as the portrait block to be synthesized, and finally fuses all portrait blocks to be synthesized into the synthesized portrait. The shortcoming of this method is that, because only one training portrait block is selected at each photo block position, the synthesis result suffers from blocking artifacts and missing detail.
The patent "Sketch-photo generation method based on sparse representation" applied for by Gao Xinbo et al. (application number 201010289330.9, filed 2010-09-24, publication number CN101958000A) discloses a face portrait synthesis method based on sparse representation. The method first generates an initial estimate of the synthesized portrait or synthesized photo with an existing method, then synthesizes detail information with a sparse-representation method, and finally fuses the initial estimate with the detail information. The shortcoming of this method is that it ignores the relationship between image blocks at adjacent positions, so the synthesis result exhibits blurring and blocking artifacts.
Summary of the invention
The object of the present invention is to overcome the shortcomings of the above existing methods by proposing a face portrait synthesis method based on adaptive representation, so as to improve the image quality of the synthesized portrait.
The technical solution for realizing the object of the invention includes:
1. A face portrait synthesis method based on adaptive representation, including the following steps:
(1) dividing a sketch-photo pair database into a training library and a test library, and choosing a test photo PTe from the test library;
(2) applying difference-of-Gaussians filtering to the photos in the training library, and dividing the photos in the training library and the corresponding filtered images into blocks of identical size and identical overlap, PTr = {PTr,1, PTr,2, ..., PTr,i, ..., PTr,N}, 1 ≤ i ≤ N, where N is the total number of blocks;
(3) using the training photo blocks and the corresponding filtered image blocks as two features to obtain the first training photo-block dictionary Dp1, and extracting speeded-up robust features (SURF) and local binary pattern (LBP) features from the training photo blocks and the corresponding filtered image blocks respectively, using these four features as the second training photo-block dictionary Dp2;
(4) dividing the portraits in the training library into blocks of identical size and identical overlap, STr = {STr,1, STr,2, ..., STr,i, ..., STr,N}, to obtain the training portrait-block dictionary DS;
(5) applying edge detection and feature-point detection to the test photo to obtain the edge information and feature-point information of the test photo;
(6) applying difference-of-Gaussians filtering to the photos in the test library, dividing the test photo and the filtered photo into blocks of identical size and identical overlap, PTe = {PTe,1, PTe,2, ..., PTe,i, ..., PTe,N}, and judging whether each test photo block PTe,i contains edge information or feature-point information:
if the photo block contains edge information or feature-point information, extracting SURF and LBP features from the test photo block PTe,i, finding K similar photo blocks in the training photo-block dictionary Dp2 by feature distance as candidate photo blocks, and simultaneously selecting from the training portrait-block dictionary DS the portrait blocks corresponding to these neighbor photo blocks as candidate portrait blocks;
if the photo block contains neither edge information nor feature-point information, using the test photo block PTe,i and the corresponding filtered image block as features, finding K similar photo blocks in the training photo-block dictionary Dp1 by feature distance as candidate photo blocks, and simultaneously selecting from the training portrait-block dictionary DS the portrait blocks corresponding to these neighbor photo blocks as candidate portrait blocks;
(7) using the extracted image block features, solving the Markov network model by alternating iteration to obtain, for each test photo block PTe,i, the weights μi = {μi,1, μi,2, ..., μi,l, ..., μi,L} between its multiple features, 1 ≤ l ≤ L, where L is the total number of features, and simultaneously the weights ωi = {ωi,1, ωi,2, ..., ωi,j, ..., ωi,K} of the candidate photo blocks corresponding to each test photo block;
(8) using the candidate portrait blocks obtained in step (6) and the weights ωi = {ωi,1, ωi,2, ..., ωi,j, ..., ωi,K} obtained in step (7) to compute, for each test photo block PTe,i, the corresponding portrait block to be synthesized Si as the ωi-weighted sum of its candidate portrait blocks;
(9) iterating steps (7)-(8) until the final N portrait blocks to be synthesized are obtained, and combining these portrait blocks to obtain the synthesized portrait corresponding to the test photo.
Compared with conventional methods, the present invention has the following advantages:
First, the invention considers the relationship between image blocks at adjacent positions and selects K neighbor blocks for reconstruction at each block position, so that the synthesis result is clearer;
Second, the invention adopts an adaptive-representation approach, synthesizing different regions with different features and measuring the distance between two image blocks with different features, which improves the quality of the synthesis result and keeps details more complete.
Description of the drawings
Fig. 1 is the implementation flow chart of the present invention;
Fig. 2 compares the portraits synthesized on the CUHK student database by the present invention and by four existing methods.
Detailed description of the embodiments
The core idea of the invention is to propose a face portrait synthesis method built on the idea of adaptive representation, so that different face regions are synthesized with different features, improving the image quality of the synthesis result.
Referring to Fig. 1, the implementation steps of the invention are as follows:
Step 1: choose the test photo PTe.
Divide the N sketch-photo pairs in the database into a training library and a test library, and choose one photo from the test library as the test photo PTe.
Step 2: obtain the two training photo-block dictionaries Dp1 and Dp2.
2a) Apply difference-of-Gaussians filtering to the photos in the training library:
2a1) construct Gaussian functions at two different scale values σ: G(x, y, σ) = 1/(2πσ²)·exp(−(x² + y²)/(2σ²)),
where G(x, y, σ) denotes the Gaussian function at scale value σ, and x and y denote the horizontal and vertical coordinates of a pixel in the photo, respectively;
2a2) convolve the photo with the Gaussian functions at the two different scales to obtain two convolved photos;
2a3) subtract the two convolved photos; the resulting image is the difference-of-Gaussians filtering result of the photo.
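The patent provides no code; a minimal Python sketch of sub-steps 2a1)-2a3) might look like the following (the kernel radius of 3 and the scale values σ1 = 1.0, σ2 = 2.0 are illustrative assumptions, not parameters stated in the patent):

```python
import math

def gaussian_kernel(sigma, radius):
    # 2-D Gaussian G(x, y, sigma) sampled on a (2*radius+1)^2 grid, normalized to sum to 1
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-radius, radius + 1)]
         for y in range(-radius, radius + 1)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]

def convolve(img, kernel):
    # same-size convolution with zero padding at the borders
    h, w = len(img), len(img[0])
    r = len(kernel) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx] * kernel[dy + r][dx + r]
            out[y][x] = acc
    return out

def difference_of_gaussians(img, sigma1=1.0, sigma2=2.0, radius=3):
    # 2a2)-2a3): blur at two scales, then subtract the two convolved images
    b1 = convolve(img, gaussian_kernel(sigma1, radius))
    b2 = convolve(img, gaussian_kernel(sigma2, radius))
    return [[a - b for a, b in zip(r1, r2)] for r1, r2 in zip(b1, b2)]
```

On a constant image the two blurred versions coincide away from the borders, so the DoG response there is zero, which is the expected band-pass behavior.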
2b) Divide the photos in the training library and the corresponding filtered images into blocks of the same size and the same overlap, and use the training photo blocks and the corresponding filtered image blocks as two features to obtain the first training photo-block dictionary Dp1.
2c) Extract speeded-up robust features (SURF) and local binary pattern (LBP) features from the training photo blocks and the corresponding filtered image blocks respectively, and use these four features as the second training photo-block dictionary Dp2.
For the extraction of SURF and LBP features, see respectively "H. Bay, A. Ess, T. Tuytelaars, L. Van Gool. SURF: Speeded Up Robust Features. Computer Vision and Image Understanding, 110(3):346-359, 2008" and "T. Ojala, M. Pietikäinen, T. Mäenpää. Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7):971-987, 2002".
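SURF extraction is too involved to sketch here, but the LBP half of the feature pair in 2c) is compact. A minimal sketch of the basic (non-multiresolution, non-rotation-invariant) LBP variant, not the exact descriptor of the cited paper:

```python
def lbp_code(img, y, x):
    # classic 8-neighbour local binary pattern code for pixel (y, x):
    # each neighbour whose value is >= the centre contributes one bit
    c = img[y][x]
    nbrs = [img[y-1][x-1], img[y-1][x], img[y-1][x+1], img[y][x+1],
            img[y+1][x+1], img[y+1][x], img[y+1][x-1], img[y][x-1]]
    code = 0
    for bit, v in enumerate(nbrs):
        if v >= c:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    # 256-bin histogram of LBP codes over interior pixels,
    # usable as a texture feature for an image block
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist
```

On a flat region every neighbour equals the centre, so the code is 255; a strict local maximum yields code 0, which is why the histogram separates smooth from textured blocks.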
Step 3: obtain the training portrait-block dictionary DS.
Divide the portraits in the training library into blocks of the same size and the same overlap to obtain the training portrait-block dictionary DS.
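Every dictionary above relies on the same division into blocks of identical size and identical overlap. A sketch of that division (block size and overlap are free parameters; the patent does not fix them):

```python
def to_patches(img, patch, overlap):
    # slide a patch x patch window over the image; the stride is patch - overlap,
    # so consecutive blocks share `overlap` rows/columns
    stride = patch - overlap
    h, w = len(img), len(img[0])
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append([row[x:x + patch] for row in img[y:y + patch]])
    return patches
```

For a 6x6 image with 4x4 blocks overlapping by 2 pixels, the window starts at rows/columns 0 and 2, giving four blocks in total.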
Step 4: obtain the information of the test photo.
Apply edge detection and feature-point detection to the test photo to obtain its edge information and feature-point information.
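The patent does not name a particular edge detector; a common choice such as thresholded Sobel gradient magnitude could serve as the per-pixel edge test, sketched here under that assumption:

```python
def sobel_edges(img, thresh):
    # gradient magnitude via the 3x3 Sobel operators;
    # interior pixels whose magnitude exceeds thresh are marked as edges
    gx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    gy = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(img), len(img[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            sx = sum(gx[j][i] * img[y-1+j][x-1+i] for j in range(3) for i in range(3))
            sy = sum(gy[j][i] * img[y-1+j][x-1+i] for j in range(3) for i in range(3))
            edges[y][x] = (sx * sx + sy * sy) ** 0.5 > thresh
    return edges
```

A test photo block then "contains edge information" if any of its pixels is marked; feature points (eyes, nose, mouth) would come from a separate facial-landmark detector.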
Step 5: obtain the candidate photo blocks and the candidate portrait blocks.
Apply difference-of-Gaussians filtering to the photos in the test library, divide the test photo and the filtered photo into blocks of identical size and identical overlap, PTe = {PTe,1, PTe,2, ..., PTe,i, ..., PTe,N}, and judge whether each test photo block PTe,i contains edge information or feature-point information:
if the photo block contains edge information or feature-point information, extract SURF and LBP features from the test photo block PTe,i, find K similar photo blocks in the training photo-block dictionary Dp2 by feature distance as candidate photo blocks, and simultaneously select from the training portrait-block dictionary DS the portrait blocks corresponding to these neighbor photo blocks as candidate portrait blocks;
if the photo block contains neither edge information nor feature-point information, use the test photo block PTe,i and the corresponding filtered image block as features, find K similar photo blocks in the training photo-block dictionary Dp1 by feature distance as candidate photo blocks, and simultaneously select from the training portrait-block dictionary DS the portrait blocks corresponding to these neighbor photo blocks as candidate portrait blocks.
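The adaptive dictionary choice plus K-nearest-neighbor search in step 5 can be sketched as follows. The flat feature vectors, dictionary layout, and K are placeholders; in the patent each dictionary entry carries the block's multiple features and its paired portrait block:

```python
def k_nearest(query, dictionary, k):
    # dictionary: list of (feature_vector, portrait_block_id) pairs;
    # returns the k entries with the smallest Euclidean feature distance
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    return sorted(dictionary, key=lambda entry: dist(query, entry[0]))[:k]

def candidates(test_feat, has_edge_or_landmark, dict_p1, dict_p2, k):
    # adaptive choice: blocks with edge/feature-point information are matched
    # in the SURF+LBP dictionary Dp2, smooth blocks in the intensity+DoG
    # dictionary Dp1
    d = dict_p2 if has_edge_or_landmark else dict_p1
    return k_nearest(test_feat, d, k)
```

The returned entries give both the K candidate photo blocks and, through the paired ids, the K candidate portrait blocks drawn from DS.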
Step 6: solve for the weights μi between the multiple features and the weights ωi of the candidate photo blocks.
Using the extracted image block features, solve the Markov network model by alternating iteration to obtain, for each test photo block, the weights μi = {μi,1, μi,2, ..., μi,l, ..., μi,L} between its multiple features, 1 ≤ l ≤ L, where L is the total number of features, and simultaneously the weights ωi = {ωi,1, ωi,2, ..., ωi,j, ..., ωi,K} of its candidate photo blocks.
The alternating-iteration solution proceeds as follows:
6a) for each test photo block, randomly initialize the weights between its multiple features and the initial weights of its candidate photo blocks;
6b) compute, by the Euclidean distance formula d = √((x1 − x2)² + (y1 − y2)²), the Euclidean distances between the multiple features of the candidate photo blocks and of the test photo block, obtaining the relationship between the test photo block and the candidate photo blocks, where d is the distance between two features, x1 and x2 are the abscissas of the two feature vectors, and y1 and y2 are their ordinates;
6c) by the Euclidean distance formula in 6b), compute the Euclidean distances between the pixel values of candidate portrait blocks at adjacent positions, obtaining the relationship between candidate portrait blocks at adjacent positions;
6d) substitute the results of 6b) and 6c) into the Markov model;
6e) using the Markov model, optimize the initial weights of the candidate photo blocks according to the initial weights between the multiple features, obtaining the optimized candidate photo-block weights ωi;
6f) according to the Markov model and the candidate photo-block weights ωi optimized in 6e), optimize the initial weights between the multiple features, obtaining the optimized feature weights μi;
6g) iterate 6b) to 6f) until the candidate photo-block weights ωi of every test photo block no longer change or the preset number of iterations is reached, obtaining the weights μi between the multiple features of each test photo block and the candidate photo-block weights ωi.
Step 7: solve for the portrait block to be synthesized Si.
Using the candidate portrait blocks obtained in step 5 and the weights ωi obtained in step 6, compute for each test photo block the corresponding portrait block to be synthesized Si as the ωi-weighted sum of its candidate portrait blocks.
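Step 7's weighted combination of candidate portrait blocks is a pixel-wise weighted sum, sketched as:

```python
def synthesize_patch(candidate_patches, omega):
    # S_i = sum_j omega_{i,j} * (candidate portrait block j), pixel by pixel
    h, w = len(candidate_patches[0]), len(candidate_patches[0][0])
    out = [[0.0] * w for _ in range(h)]
    for wgt, p in zip(omega, candidate_patches):
        for y in range(h):
            for x in range(w):
                out[y][x] += wgt * p[y][x]
    return out
```

With normalized ω the result stays within the value range of the candidate blocks.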
Step 8: iterate to obtain the final synthesized portrait.
Iterate step 6 to step 7 until all portrait blocks to be synthesized are obtained, and fuse these blocks to obtain the synthesized portrait corresponding to the test photo.
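The patent does not specify the fusion rule for step 8; averaging overlapping pixels is a common choice, assumed in this sketch:

```python
def fuse_patches(patches, positions, shape, patch):
    # accumulate every synthesized block at its (row, col) origin,
    # then average wherever blocks overlap
    h, w = shape
    acc = [[0.0] * w for _ in range(h)]
    cnt = [[0] * w for _ in range(h)]
    for p, (y0, x0) in zip(patches, positions):
        for dy in range(patch):
            for dx in range(patch):
                acc[y0 + dy][x0 + dx] += p[dy][dx]
                cnt[y0 + dy][x0 + dx] += 1
    return [[acc[y][x] / max(cnt[y][x], 1) for x in range(w)] for y in range(h)]
```

Averaging the overlap regions is what suppresses the blocking artifacts that single-block selection methods suffer from.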
The effect of the present invention is further illustrated by the following simulation experiments.
1. Simulation conditions
The simulations were run in MATLAB 2012b (MathWorks, USA) under the WINDOWS 7 operating system, on an Intel(R) Core(TM) i5-3470 3.20 GHz CPU with 16 GB of memory. The database is the CUHK student database of the Chinese University of Hong Kong.
2. Simulation content
Experiment 1: photo-to-portrait synthesis
Photo-to-portrait synthesis was carried out on the CUHK student database with the method of the present invention and with four existing methods: LLE, based on local linearity; MRF, based on Markov random fields; MWF, based on Markov weight fields; and MrFSPS, based on multi-feature fusion. The experimental results are shown in Fig. 2, where:
Fig. 2 (a) is original photo;
Fig. 2 (b) is the portrait synthesized by the local-linearity method LLE;
Fig. 2 (c) is the portrait synthesized by the Markov-random-field method MRF;
Fig. 2 (d) is the portrait synthesized by the Markov-weight-field method MWF;
Fig. 2 (e) is the portrait synthesized by the multi-feature-fusion method MrFSPS;
Fig. 2 (f) is the portrait synthesized by the method of the present invention.
The results of experiment 1 show that, by means of the adaptive-representation idea, the invention can represent different regions of the face image with different features and thus measure the distance between two image blocks more accurately, so that its synthesis result surpasses the other face portrait synthesis methods, demonstrating the advancement of the present invention.

Claims (3)

1. A face portrait synthesis method based on adaptive representation, including the following steps:
(1) dividing a sketch-photo pair database into a training library and a test library, and choosing a test photo from the test library;
(2) applying difference-of-Gaussians filtering to the photos in the training library, and dividing the photos in the training library and the corresponding filtered images into blocks of identical size and identical overlap, PTr = {PTr,1, PTr,2, ..., PTr,i, ..., PTr,N}, 1 ≤ i ≤ N, where N is the total number of blocks;
(3) using the training photo blocks and the corresponding filtered image blocks as two features to obtain the first training photo-block dictionary Dp1, and extracting speeded-up robust features (SURF) and local binary pattern (LBP) features from the training photo blocks and the corresponding filtered image blocks respectively, using these four features as the second training photo-block dictionary Dp2;
(4) dividing the portraits in the training library into blocks of identical size and identical overlap, STr = {STr,1, STr,2, ..., STr,i, ..., STr,N}, to obtain the training portrait-block dictionary DS;
(5) applying edge detection and feature-point detection to the test photo to obtain the edge information and feature-point information of the test photo;
(6) applying difference-of-Gaussians filtering to the photos in the test library, dividing the test photo and the filtered photo into blocks of identical size and identical overlap, PTe = {PTe,1, PTe,2, ..., PTe,i, ..., PTe,N}, and judging whether each test photo block PTe,i contains edge information or feature-point information:
if the photo block contains edge information or feature-point information, extracting SURF and LBP features from the test photo block PTe,i, finding K similar photo blocks in the training photo-block dictionary Dp2 by feature distance as candidate photo blocks, and simultaneously selecting from the training portrait-block dictionary DS the portrait blocks corresponding to these neighbor photo blocks as candidate portrait blocks;
if the photo block contains neither edge information nor feature-point information, using the test photo block PTe,i and the corresponding filtered image block as features, finding K similar photo blocks in the training photo-block dictionary Dp1 by feature distance as candidate photo blocks, and simultaneously selecting from the training portrait-block dictionary DS the portrait blocks corresponding to these neighbor photo blocks as candidate portrait blocks;
(7) using the extracted image block features, solving the Markov network model by alternating iteration to obtain, for each test photo block PTe,i, the weights μi = {μi,1, μi,2, ..., μi,l, ..., μi,L} between its multiple features, 1 ≤ l ≤ L, where L is the total number of features, and simultaneously the weights ωi = {ωi,1, ωi,2, ..., ωi,j, ..., ωi,K} of the candidate photo blocks corresponding to each test photo block;
(8) using the candidate portrait blocks obtained in step (6) and the weights ωi = {ωi,1, ωi,2, ..., ωi,j, ..., ωi,K} obtained in step (7) to compute, for each test photo block PTe,i, the corresponding portrait block to be synthesized Si as the ωi-weighted sum of its candidate portrait blocks;
(9) iterating steps (7)-(8) until the final N portrait blocks to be synthesized are obtained, and combining these portrait blocks to obtain the synthesized portrait corresponding to the test photo.
2. The face portrait synthesis method based on adaptive representation according to claim 1, wherein the difference-of-Gaussians filtering of the photos in the training library in step (2) is carried out as follows:
(2a) construct Gaussian functions at two different scale values σ: G(x, y, σ) = 1/(2πσ²)·exp(−(x² + y²)/(2σ²)),
where G(x, y, σ) denotes the Gaussian function at scale value σ, and x and y denote the horizontal and vertical coordinates of a pixel in the photo, respectively;
(2b) convolve the photo with the Gaussian functions at the two different scales to obtain two convolved photos;
(2c) subtract the two convolved photos; the resulting image is the difference-of-Gaussians filtering result of the photo.
3. The face portrait synthesis method based on adaptive representation according to claim 1, wherein the Markov network model in step (7) is solved by alternating iteration as follows:
(3a) for each test photo block PTe,i, randomly initialize the weights between its multiple features and the initial weights of its candidate photo blocks;
(3b) compute the relationship between the test photo block and the candidate photo blocks according to the distances between their multiple features;
(3c) compute the relationship between candidate portrait blocks at adjacent positions according to the distances between their pixel values;
(3d) substitute the results of (3b) and (3c) into the Markov model;
(3e) using the Markov model, optimize the initial weights of the candidate photo blocks according to the initial weights between the multiple features, obtaining the optimized candidate photo-block weights ωi;
(3f) according to the Markov model and the candidate photo-block weights ωi optimized in (3e), optimize the initial weights between the multiple features, obtaining the optimized feature weights μi;
(3g) iterate (3b) to (3f) until the candidate photo-block weights of every test photo block no longer change or the preset number of iterations is reached, obtaining the weights μi between the multiple features of each test photo block and the candidate photo-block weights ωi.
CN201610152915.3A 2016-03-17 2016-03-17 Face portrait synthesis method based on adaptive representation Active CN105844605B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610152915.3A CN105844605B (en) 2016-03-17 2016-03-17 Face portrait synthesis method based on adaptive representation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610152915.3A CN105844605B (en) 2016-03-17 2016-03-17 Face portrait synthesis method based on adaptive representation

Publications (2)

Publication Number Publication Date
CN105844605A CN105844605A (en) 2016-08-10
CN105844605B true CN105844605B (en) 2018-08-10

Family

ID=56588264

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610152915.3A Active CN105844605B (en) Face portrait synthesis method based on adaptive representation

Country Status (1)

Country Link
CN (1) CN105844605B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108154133B (en) * 2018-01-10 2020-04-14 西安电子科技大学 Face portrait-photo recognition method based on asymmetric joint learning
CN108932536B (en) * 2018-07-18 2021-11-09 电子科技大学 Face posture reconstruction method based on deep neural network
CN110069992B (en) * 2019-03-18 2021-02-09 西安电子科技大学 Face image synthesis method and device, electronic equipment and storage medium
CN111179178B (en) * 2019-12-31 2023-06-13 深圳云天励飞技术有限公司 Face image stitching method and related product

Citations (2)

Publication number Priority date Publication date Assignee Title
CN103279936A (en) * 2013-06-21 2013-09-04 重庆大学 Human face fake photo automatic combining and modifying method based on portrayal
CN104517274A (en) * 2014-12-25 2015-04-15 西安电子科技大学 Face portrait synthesis method based on greedy search

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN103279936A (en) * 2013-06-21 2013-09-04 重庆大学 Human face fake photo automatic combining and modifying method based on portrayal
CN104517274A (en) * 2014-12-25 2015-04-15 西安电子科技大学 Face portrait synthesis method based on greedy search

Non-Patent Citations (5)

Title
Xinbo Gao et al., Face Sketch-Photo Synthesis and Retrieval Using Sparse Representation, IEEE Transactions on Circuits and Systems for Video Technology, 22(8):1213-1226, Aug. 2012 *
Nannan Wang et al., Face Sketch-Photo Synthesis under Multi-Dictionary Sparse Representation Framework, 2011 Sixth International Conference on Image and Graphics, pp. 82-87, 2011 *
Shengchuan Zhang et al., Robust Face Sketch Style Synthesis, IEEE Transactions on Image Processing, 25(1):220-232, Jan. 2016 *
Li Weihong, Pseudo-photo synthesis and correction based on face sketches, Optics and Precision Engineering, 22(5):1371-1378, May 2014 *
Hu Yanting, Face sketch-photo synthesis based on locally constrained neighbor embedding, Journal of Computer Applications, 35(2):535-539, 2015 *

Also Published As

Publication number Publication date
CN105844605A (en) 2016-08-10

Similar Documents

Publication Publication Date Title
Walia et al. Digital image forgery detection: a systematic scrutiny
Chen et al. Facial expression recognition in video with multiple feature fusion
Cho et al. A probabilistic image jigsaw puzzle solver
Li et al. Linestofacephoto: Face photo generation from lines with conditional self-attention generative adversarial networks
Güçlütürk et al. Convolutional sketch inversion
CN105844605B (en) Face portrait synthesis method based on adaptive representation
CN103544504B (en) Scene character recognition method based on multi-scale map matching core
CN104794479B (en) Chinese text detection method for natural scene images based on local stroke width transform
Wang et al. A benchmark for clothes variation in person re‐identification
CN108154133B (en) Face portrait-photo recognition method based on asymmetric joint learning
CN106980825B (en) Human face posture classification method based on normalized pixel difference features
CN103984954B (en) Image combining method based on multi-feature fusion
CN112017162B (en) Pathological image processing method, pathological image processing device, storage medium and processor
WO2023165616A1 (en) Method and system for detecting concealed backdoor of image model, storage medium, and terminal
Gangan et al. Distinguishing natural and computer generated images using Multi-Colorspace fused EfficientNet
CN115240280A (en) Construction method of human face living body detection classification model, detection classification method and device
das Neves et al. HU‐PageScan: a fully convolutional neural network for document page crop
Shivakumara et al. A new RGB based fusion for forged IMEI number detection in mobile images
Zhu et al. BDGAN: Image blind denoising using generative adversarial networks
CN105869134B (en) Face portrait synthesis method based on directional graph model
Satwashil et al. Integrated natural scene text localization and recognition
CN102110303B (en) Method for synthesizing face pseudo-portrait/pseudo-photo based on support vector regression
Sharrma et al. Vision based static hand gesture recognition techniques
CN109685076A (en) A kind of image-recognizing method based on SIFT and sparse coding
CN106023120B (en) Face portrait synthesis method based on coupled nearest-neighbor indexing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220712

Address after: 518057 2304, block a, building 2, Shenzhen International Innovation Valley, Dashi 1st Road, Xili community, Xili street, Nanshan District, Shenzhen, Guangdong Province

Patentee after: SHENZHEN AIMO TECHNOLOGY Co.,Ltd.

Address before: 710071 No. 2 Taibai South Road, Shaanxi, Xi'an

Patentee before: XIDIAN University