CN107392213A - Face portrait synthesis method based on deep graph model feature learning - Google Patents

Face portrait synthesis method based on deep graph model feature learning

Info

Publication number
CN107392213A
Authority
CN
China
Prior art keywords
block
photo
human face
portrait
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710602696.9A
Other languages
Chinese (zh)
Other versions
CN107392213B (en)
Inventor
Wang Nannan
Zhu Mingrui
Li Jie
Gao Xinbo
Zha Wenjin
Zhang Yuqian
Hao Yi
Cao Bing
Ma Zhuoqi
Liu Decheng
Xin Jingwei
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Aimo Technology Co., Ltd.
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201710602696.9A priority Critical patent/CN107392213B/en
Publication of CN107392213A publication Critical patent/CN107392213A/en
Application granted granted Critical
Publication of CN107392213B publication Critical patent/CN107392213B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Molecular Biology (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A face portrait synthesis method based on deep graph model feature learning, whose steps are: (1) generate the sample sets; (2) generate the image patch sets; (3) extract deep features; (4) solve for the face portrait patch reconstruction coefficients; (5) reconstruct the face portrait patches; (6) synthesize the face portrait. The invention extracts the deep features of face photo patches with a deep convolutional network, solves for the deep feature map coefficients and the face portrait patch reconstruction coefficients with a Markov graph model, obtains the reconstructed face portrait patches as weighted sums of neighbour training portrait patches using the reconstruction coefficients, and stitches the reconstructed patches to obtain the synthesized face portrait. Because the invention replaces the raw pixel values of the image patches with deep features extracted from a deep convolutional network, it is more robust to environmental noise such as illumination and can synthesize face portraits of high quality.

Description

Face portrait synthesis method based on deep graph model feature learning
Technical field
The invention belongs to the technical field of image processing, and further relates to a face portrait synthesis method based on deep graph model feature learning within the fields of pattern recognition and computer vision. The invention can be used for face retrieval and recognition in the public security field.
Background technology
In criminal investigation, public security departments hold databases of citizens' photos and use face recognition technology to determine a suspect's identity. In practice, however, a photo of the suspect is usually hard to obtain, whereas a sketch of the suspect can be drawn by an artist working with witnesses and then used for face retrieval and recognition. Because such portraits differ greatly from ordinary face photos, traditional face recognition methods applied directly yield unsatisfactory results. Converting the photos in the citizen photo database into portraits effectively reduces the texture gap between the two modalities and thereby improves the recognition rate.
In the paper "Local face sketch synthesis learning" (Neurocomputing, vol. 71, no. 10-12, pp. 1921-1930, Jun. 2008), X. Gao, J. Zhou, D. Tao, and X. Li propose generating pseudo-portraits with an embedded hidden Markov model. The method first divides the photos and portraits in the training set into patches, then models each pair of corresponding photo and portrait patches with an embedded hidden Markov model. An arbitrary input photo is divided into patches in the same way; for each patch, following the idea of selective ensemble, the models generated for that patch are selected and their pseudo-portrait outputs fused to obtain the final pseudo-portrait. The shortcoming of this method is that, because of the selective ensemble, the generated pseudo-portrait is a weighted average, which leaves the background unclean and the details blurred, lowering the quality of the generated portrait.
In the paper "Markov Weight Fields for Face Sketch Synthesis" (In Proc. IEEE Int. Conference on Computer Vision, pp. 1091-1097, 2012), H. Zhou, Z. Kuang, and K. Wong propose a face portrait synthesis method based on Markov weight fields. The method first divides the training images and the input test image into uniform patches and, for each test patch, searches for several nearest neighbours to obtain candidate patches of the modality to be synthesized. It then models the test patches, neighbour patches and candidate patches with a Markov graph model and solves for the reconstruction weights. Finally, the synthesized portrait patches are reconstructed from the reconstruction weights and candidate portrait patches, and stitching yields the synthesized portrait. The shortcoming of this method is that the patch features are raw pixel values, whose representational capacity is insufficient and which are strongly affected by environmental noise such as illumination.
The patent application of Xidian University, "Face portrait synthesis method based on direction graph model" (application number CN201610171867.2, filing date 2016.03.24, publication number CN105869134A), discloses a face portrait synthesis method based on a direction graph model. The method first divides the training images and the input test image into uniform patches and, for each test photo patch, searches for its nearest-neighbour photo patches and the corresponding neighbour portrait patches. Direction features are then extracted from the test photo patch and the neighbour photo patches. These direction features and the neighbour portrait patches are modelled with a Markov graph model, and the weights for reconstructing the synthesized portrait patch from the neighbour portrait patches are solved for. Finally, the synthesized portrait patches are reconstructed from the weights and neighbour portrait patches, and stitching yields the synthesized portrait. The shortcoming of this method is that the patch features are hand-designed high-frequency features whose adaptive capacity is insufficient, so the features are not fully learned.
Summary of the invention
The purpose of the invention is to overcome the shortcomings of the above prior art by proposing a face portrait synthesis method based on deep graph model feature learning that can synthesize high-quality portraits even from photos affected by environmental noise such as illumination.
The specific steps for realizing the purpose of the invention are as follows:
(1) Generate the sample sets:
(1a) Take M face photos out of the face photo sample set to form the training face photo sample set, where 2 ≤ M ≤ U-1 and U is the total number of face photos in the sample set;
(1b) Form the test face photo set from the remaining face photos in the face photo sample set;
(1c) Take out of the face portrait sample set the face portraits in one-to-one correspondence with the photos of the training face photo sample set, forming the training face portrait sample set;
(2) Generate the image patch sets:
(2a) Choose one test face photo at random from the test face photo set and divide it into photo patches of identical size and identical overlap, forming the test photo patch set;
(2b) Divide each photo in the training face photo sample set into photo patches of identical size and identical overlap, forming the training photo patch sample set;
(2c) Divide each portrait in the training face portrait sample set into portrait patches of identical size and identical overlap, forming the training portrait patch sample set;
(3) Extract deep features:
(3a) Input all photo patches of the training photo patch set and the test photo patch set into the deep convolutional network VGG trained for object recognition on the object recognition database ImageNet, and carry out forward propagation;
(3b) Take the 128 feature maps output by the intermediate layer of the deep convolutional network VGG as the deep feature of each photo patch; the coefficient of the l-th feature map is u_{i,l}, with Σ_l u_{i,l} = 1, where Σ denotes summation, i denotes the index of a test photo patch, i = 1, 2, ..., N, N is the total number of test photo patches, and l denotes the feature map index, l = 1, ..., 128;
(4) Solve for the face portrait patch reconstruction coefficients:
(4a) Use the k-nearest-neighbour search algorithm to find, in the training photo patch sample set, the 10 neighbour training photo patches most similar to each test photo patch, and at the same time select from the training portrait patch sample set the 10 neighbour training portrait patches in one-to-one correspondence with those neighbour training photo patches; the coefficient of each neighbour training image patch is w_{i,k}, with Σ_k w_{i,k} = 1, where k denotes the training image patch index, k = 1, ..., 10;
(4b) Use the Markov graph model formula to model all test photo patch deep features, the deep features of all neighbour training photo patches, all neighbour training portrait patches, the feature map coefficients u_{i,l} and the neighbour training image patch coefficients w_{i,k};
(4c) Solve the Markov graph model formula to obtain the face portrait patch reconstruction coefficients w_{i,k};
(5) Reconstruct the face portrait patches:
Multiply the 10 neighbour training portrait patches corresponding to each test photo patch by their respective coefficients w_{i,k} and sum the products; the result is the reconstructed face portrait patch corresponding to each test photo patch;
(6) Synthesize the face portrait:
Stitch together the reconstructed face portrait patches corresponding to all test photo patches to obtain the synthesized face portrait.
Compared with the prior art, the invention has the following advantages:
1. Because the invention replaces the raw pixel values of the image patches with deep features extracted from a deep convolutional network, it overcomes the insufficient representational capacity of the features used in the prior art and their sensitivity to environmental noise such as illumination, giving the invention the advantage of robustness to such noise.
2. Because the invention uses a Markov graph model to jointly model the deep feature map coefficients and the face portrait patch reconstruction coefficients, it overcomes the unclean backgrounds and blurred details of the portraits synthesized by the prior art, giving the synthesized portraits clean backgrounds and clear details.
Brief description of the drawings
Fig. 1 is the flow chart of the invention;
Fig. 2 shows the simulation results of the invention.
Detailed description of the embodiments
The invention is further described below in conjunction with the accompanying drawings.
With reference to Fig. 1, the specific steps of the invention are as follows.
Step 1: generate the sample sets.
Take M face photos out of the face photo sample set to form the training face photo sample set, where 2 ≤ M ≤ U-1 and U is the total number of face photos in the sample set.
Form the test face photo set from the remaining face photos in the face photo sample set.
Take out of the face portrait sample set the face portraits in one-to-one correspondence with the photos of the training face photo sample set, forming the training face portrait sample set.
Step 2: generate the image patch sets.
Choose one test face photo at random from the test face photo set and divide it into photo patches of identical size and identical overlap, forming the test photo patch set.
Divide each photo in the training face photo sample set into photo patches of identical size and identical overlap, forming the training photo patch sample set.
Divide each portrait in the training face portrait sample set into portrait patches of identical size and identical overlap, forming the training portrait patch sample set.
The overlap means that the overlapping region between two adjacent image patches occupies 1/2 of each patch.
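As an illustration only (this sketch is not part of the original disclosure), the patch division of step 2 can be written in a few lines of Python; the function name and the NumPy array representation of the image are assumptions:

```python
import numpy as np

def extract_patches(image, patch_size, overlap=0.5):
    """Divide an image into equally sized patches; adjacent patches
    overlap by the given fraction (1/2 in this method)."""
    step = int(patch_size * (1 - overlap))  # stride between patch origins
    h, w = image.shape[:2]
    patches, positions = [], []
    for y in range(0, h - patch_size + 1, step):
        for x in range(0, w - patch_size + 1, step):
            patches.append(image[y:y + patch_size, x:x + patch_size])
            positions.append((y, x))
    # Border handling (padding so the patches tile the whole face) is
    # omitted for brevity.
    return patches, positions
```

The same routine serves the test photos, training photos and training portraits, since all three are divided with the same patch size and the same 1/2 overlap.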
Step 3: extract the deep features.
Input all photo patches of the training photo patch set and the test photo patch set into the deep convolutional network VGG trained for object recognition on the object recognition database ImageNet, and carry out forward propagation.
Take the 128 feature maps output by the intermediate layer of the deep convolutional network VGG as the deep feature of each photo patch; the coefficient of the l-th feature map is u_{i,l}, with Σ_l u_{i,l} = 1, where Σ denotes summation, i denotes the index of a test photo patch, i = 1, 2, ..., N, N is the total number of test photo patches, and l denotes the feature map index, l = 1, ..., 128.
The intermediate layer refers to the ReLU2_2 activation layer of the deep convolutional network VGG.
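As an illustrative sketch (not part of the original disclosure), the forward propagation of step 3 can be done with torchvision. The patent only names "VGG" and its ReLU2_2 layer; the choice of VGG19 and the truncation index are assumptions, chosen because the relu2_2 activation of VGG19 indeed outputs 128 feature maps:

```python
import torch
from torchvision import models, transforms

# Truncate an ImageNet-pretrained VGG19 right after the relu2_2
# activation (features[8]); its output has 128 channels, i.e. the
# 128 feature maps used as the deep feature of a patch.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features[:9].eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def deep_feature(patch_rgb):
    """Forward-propagate one RGB photo patch (PIL image or H x W x 3
    uint8 array) and return its 128 relu2_2 feature maps, one row per map."""
    with torch.no_grad():
        fmap = vgg(preprocess(patch_rgb).unsqueeze(0))  # shape (1, 128, h', w')
    return fmap.squeeze(0).reshape(128, -1).numpy()
```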
Step 4: solve for the face portrait patch reconstruction coefficients.
Use the k-nearest-neighbour search algorithm to find, in the training photo patch sample set, the 10 neighbour training photo patches most similar to each test photo patch, and at the same time select from the training portrait patch sample set the 10 neighbour training portrait patches in one-to-one correspondence with those neighbour training photo patches; the coefficient of each neighbour training image patch is w_{i,k}, with Σ_k w_{i,k} = 1, where k denotes the training image patch index, k = 1, ..., 10.
The specific steps of the k-nearest-neighbour search algorithm are as follows:
First, compute the Euclidean distance between the deep feature vector of each test photo patch and the deep feature vectors of all training photo patches;
Second, sort all training photo patches in ascending order of Euclidean distance;
Third, select the first 10 training photo patches as the neighbour training photo patches.
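A minimal sketch of this nearest-neighbour search (not part of the original disclosure), assuming each patch's deep feature has been flattened into one row of a matrix:

```python
import numpy as np

def knn_indices(test_feat, train_feats, k=10):
    """Return the indices of the k training photo patches whose deep
    feature vectors are closest in Euclidean distance to the test patch."""
    dists = np.linalg.norm(train_feats - test_feat, axis=1)  # one distance per training patch
    return np.argsort(dists)[:k]  # indices of the k smallest distances
```

The neighbour training portrait patches are then taken at the same indices, since the training photo patches and training portrait patches correspond one to one.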
Use the Markov graph model formula to model all test photo patch deep features, the deep features of all neighbour training photo patches, all neighbour training portrait patches, the feature map coefficients u_{i,l} and the neighbour training image patch coefficients w_{i,k}.
The Markov graph model formula is as follows:

$$\min\ \sum_{i=1}^{N}\Big\|\sum_{k=1}^{K}\big(w_{i,k}\,o_{i,k}-w_{j,k}\,o_{j,k}\big)\Big\|^{2}+\sum_{i=1}^{N}\sum_{l=1}^{128}u_{i,l}\Big\|d_{l}(x_{i})-\sum_{k=1}^{K}w_{i,k}\,d_{l}(x_{i,k})\Big\|^{2}+\sum_{i=1}^{N}\big\|u_{i,l}\big\|^{2}$$

where min denotes the minimization operation, Σ denotes the summation operation, ||·||² denotes the squared modulus, w_{i,k} denotes the coefficient of the k-th neighbour training portrait patch of the i-th test photo patch, o_{i,k} denotes the pixel value vector of the overlapping part of the k-th neighbour training portrait patch of the i-th test photo patch, w_{j,k} denotes the coefficient of the k-th neighbour training portrait patch of the adjacent j-th test photo patch, o_{j,k} denotes the pixel value vector of the overlapping part of the k-th neighbour training portrait patch of the j-th test photo patch, u_{i,l} denotes the coefficient of the l-th deep feature map of the i-th test photo patch, d_l(x_i) denotes the l-th feature map of the deep feature of the i-th test photo patch, and d_l(x_{i,k}) denotes the l-th feature map of the deep feature of the k-th neighbour training photo patch of the i-th test photo patch; the feature map index l is the same throughout.
Solve the Markov graph model formula to obtain the face portrait patch reconstruction coefficients w_{i,k}.
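The full model couples adjacent patches through the overlap term and couples the weights with the feature map coefficients, so it is solved jointly, for example as a quadratic program as in the Markov weight field method. As a simplified illustration only, and not the patent's actual solver, the sketch below solves just the per-patch data term under the sum-to-one constraint, which has a closed form analogous to locally linear embedding:

```python
import numpy as np

def patch_weights(test_feat, neighbor_feats, reg=1e-5):
    """Least-squares reconstruction weights for one test patch subject to
    sum(w) = 1; the inter-patch compatibility term and the feature map
    coefficients of the full Markov graph model are ignored here."""
    diff = neighbor_feats - test_feat        # (k, d) feature differences
    G = diff @ diff.T                        # local Gram matrix
    G += reg * np.trace(G) * np.eye(len(G))  # regularize for numerical stability
    w = np.linalg.solve(G, np.ones(len(G)))  # Lagrange-multiplier solution
    return w / w.sum()                       # enforce the sum-to-one constraint
```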
Step 5: reconstruct the face portrait patches.
Multiply the 10 neighbour training portrait patches corresponding to each test photo patch by their respective coefficients w_{i,k} and sum the products; the result is the reconstructed face portrait patch corresponding to each test photo patch.
Step 6: synthesize the face portrait.
Stitch together the reconstructed face portrait patches corresponding to all test photo patches to obtain the synthesized face portrait.
The method for stitching the reconstructed portrait patches corresponding to all test photo patches is as follows:
First, place the reconstructed portrait patch corresponding to each test photo patch at its position in the portrait;
Second, take the average of the pixel values of the overlapping part between every two adjacent reconstructed face portrait patches;
Third, replace the pixel values of the overlapping part between the two adjacent reconstructed face portrait patches with that average, obtaining the synthesized face portrait.
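Steps 5 and 6 together amount to a weighted sum per patch followed by averaging over the overlaps. A compact sketch under the same illustrative assumptions as above (all names hypothetical):

```python
import numpy as np

def synthesize_portrait(positions, weights, neighbor_sketches, out_shape, p):
    """Reconstruct each portrait patch as the weighted sum of its 10
    neighbour training portrait patches, then stitch the patches,
    averaging pixel values wherever reconstructed patches overlap."""
    acc = np.zeros(out_shape)  # running sum of patch pixel values
    cnt = np.zeros(out_shape)  # number of patches covering each pixel
    for (y, x), w, sketches in zip(positions, weights, neighbor_sketches):
        block = np.tensordot(w, sketches, axes=1)  # weighted sum of (k, p, p) patches
        acc[y:y + p, x:x + p] += block
        cnt[y:y + p, x:x + p] += 1
    return acc / np.maximum(cnt, 1)  # average over the overlapping parts
```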
The effect of the invention is further illustrated by the following simulation experiment.
1. Simulation conditions:
The simulation experiment ran on an Intel(R) Core i7-4790 at 3.6 GHz with 16 GB of memory under a Linux operating system; the programming language was Python, and the database was the CUHK student database of the Chinese University of Hong Kong.
The prior-art methods compared against in the simulation experiment are the following two:
The first is the method based on locally linear embedding, denoted LLE in the experiments; reference: Q. Liu, X. Tang, H. Jin, H. Lu, and S. Ma, "A Nonlinear Approach for Face Sketch Synthesis and Recognition" (In Proc. IEEE Int. Conference on Computer Vision, pp. 1005-1010, 2005).
The second is the method based on the Markov weight field model, denoted MWF in the experiments; reference: H. Zhou, Z. Kuang, and K. Wong, "Markov Weight Fields for Face Sketch Synthesis" (In Proc. IEEE Int. Conference on Computer Vision, pp. 1091-1097, 2012).
2. Simulation content:
The invention was evaluated in one group of simulation experiments.
Portraits were synthesized on the CUHK student database and compared with the portraits synthesized by the locally linear embedding (LLE) method and the Markov weight field (MWF) method.
3. Simulation results and analysis:
The simulation results of the invention are shown in Fig. 2, where Fig. 2(a) is a test photo taken at random from the test photo sample set, Fig. 2(b) is the portrait synthesized by the prior-art locally linear embedding (LLE) method, Fig. 2(c) is the portrait synthesized by the prior-art Markov weight field (MWF) method, and Fig. 2(d) is the portrait synthesized by the method of the invention.
As can be seen from Fig. 2, because the invention replaces the raw pixel values of the image patches with deep features, it is more robust to illumination; for photos strongly affected by environmental noise such as illumination, the invention therefore synthesizes portraits of higher quality and with less noise than the locally linear embedding (LLE) and Markov weight field (MWF) methods.

Claims (6)

1. A face portrait synthesis method based on deep graph model feature learning, comprising the following steps:
(1) Generate the sample sets:
(1a) Take M face photos out of the face photo sample set to form the training face photo sample set, where 2 ≤ M ≤ U-1 and U is the total number of face photos in the sample set;
(1b) Form the test face photo set from the remaining face photos in the face photo sample set;
(1c) Take out of the face portrait sample set the face portraits in one-to-one correspondence with the photos of the training face photo sample set, forming the training face portrait sample set;
(2) Generate the image patch sets:
(2a) Choose one test face photo at random from the test face photo set and divide it into photo patches of identical size and identical overlap, forming the test photo patch set;
(2b) Divide each photo in the training face photo sample set into photo patches of identical size and identical overlap, forming the training photo patch sample set;
(2c) Divide each portrait in the training face portrait sample set into portrait patches of identical size and identical overlap, forming the training portrait patch sample set;
(3) Extract deep features:
(3a) Input all photo patches of the training photo patch set and the test photo patch set into the deep convolutional network VGG trained for object recognition on the object recognition database ImageNet, and carry out forward propagation;
(3b) Take the 128 feature maps output by the intermediate layer of the deep convolutional network VGG as the deep feature of each photo patch; the coefficient of the l-th feature map is u_{i,l}, with Σ_l u_{i,l} = 1, where Σ denotes summation, i denotes the index of a test photo patch, i = 1, 2, ..., N, N is the total number of test photo patches, and l denotes the feature map index, l = 1, ..., 128;
(4) Solve for the face portrait patch reconstruction coefficients:
(4a) Use the k-nearest-neighbour search algorithm to find, in the training photo patch sample set, the 10 neighbour training photo patches most similar to each test photo patch, and at the same time select from the training portrait patch sample set the 10 neighbour training portrait patches in one-to-one correspondence with those neighbour training photo patches; the coefficient of each neighbour training image patch is w_{i,k}, with Σ_k w_{i,k} = 1, where k denotes the training image patch index, k = 1, ..., 10;
(4b) Use the Markov graph model formula to model all test photo patch deep features, the deep features of all neighbour training photo patches, all neighbour training portrait patches, the feature map coefficients u_{i,l} and the neighbour training image patch coefficients w_{i,k};
(4c) Solve the Markov graph model formula to obtain the face portrait patch reconstruction coefficients w_{i,k};
(5) Reconstruct the face portrait patches:
Multiply the 10 neighbour training portrait patches corresponding to each test photo patch by their respective coefficients w_{i,k} and sum the products; the result is the reconstructed face portrait patch corresponding to each test photo patch;
(6) Synthesize the face portrait:
Stitch together the reconstructed face portrait patches corresponding to all test photo patches to obtain the synthesized face portrait.
2. The face portrait synthesis method based on deep graph model feature learning according to claim 1, characterized in that: the overlap described in steps (2a), (2b) and (2c) means that the overlapping region between two adjacent image patches occupies 1/2 of each patch.
3. The face portrait synthesis method based on deep graph model feature learning according to claim 1, characterized in that: the intermediate layer described in step (3b) refers to the ReLU2_2 activation layer of the deep convolutional network VGG.
4. The face portrait synthesis method based on deep graph model feature learning according to claim 1, characterized in that: the specific steps of the k-nearest-neighbour search algorithm described in step (4a) are as follows:
First, compute the Euclidean distance between the deep feature vector of each test photo patch and the deep feature vectors of all training photo patches;
Second, sort all training photo patches in ascending order of Euclidean distance;
Third, select the first 10 training photo patches as the neighbour training photo patches.
5. The face portrait synthesis method based on deep graph model feature learning according to claim 1, characterized in that: the Markov graph model formula described in step (4b) is as follows:
$$\min\ \sum_{i=1}^{N}\Big\|\sum_{k=1}^{K}\big(w_{i,k}\,o_{i,k}-w_{j,k}\,o_{j,k}\big)\Big\|^{2}+\sum_{i=1}^{N}\sum_{l=1}^{128}u_{i,l}\Big\|d_{l}(x_{i})-\sum_{k=1}^{K}w_{i,k}\,d_{l}(x_{i,k})\Big\|^{2}+\sum_{i=1}^{N}\big\|u_{i,l}\big\|^{2}$$
where min denotes the minimization operation, Σ denotes the summation operation, ||·||² denotes the squared modulus, w_{i,k} denotes the coefficient of the k-th neighbour training portrait patch of the i-th test photo patch, o_{i,k} denotes the pixel value vector of the overlapping part of the k-th neighbour training portrait patch of the i-th test photo patch, w_{j,k} denotes the coefficient of the k-th neighbour training portrait patch of the adjacent j-th test photo patch, o_{j,k} denotes the pixel value vector of the overlapping part of the k-th neighbour training portrait patch of the j-th test photo patch, u_{i,l} denotes the coefficient of the l-th deep feature map of the i-th test photo patch, d_l(x_i) denotes the l-th feature map of the deep feature of the i-th test photo patch, and d_l(x_{i,k}) denotes the l-th feature map of the deep feature of the k-th neighbour training photo patch of the i-th test photo patch; the feature map index l is the same throughout.
6. The face portrait synthesis method based on deep graph model feature learning according to claim 1, characterized in that: the method for stitching the reconstructed portrait patches corresponding to all test photo patches described in step (6) is as follows:
First, place the reconstructed portrait patch corresponding to each test photo patch at its position in the portrait;
Second, take the average of the pixel values of the overlapping part between every two adjacent reconstructed face portrait patches;
Third, replace the pixel values of the overlapping part between the two adjacent reconstructed face portrait patches with that average, obtaining the synthesized face portrait.
CN201710602696.9A 2017-07-21 2017-07-21 Face portrait synthesis method based on deep graph model feature learning Active CN107392213B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710602696.9A CN107392213B (en) 2017-07-21 2017-07-21 Face portrait synthesis method based on deep graph model feature learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710602696.9A CN107392213B (en) 2017-07-21 2017-07-21 Face portrait synthesis method based on deep graph model feature learning

Publications (2)

Publication Number Publication Date
CN107392213A 2017-11-24
CN107392213B CN107392213B (en) 2020-04-07

Family

ID=60335789

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710602696.9A Active 2017-07-21 2017-07-21 Face portrait synthesis method based on deep graph model feature learning CN107392213B (en)

Country Status (1)

Country Link
CN (1) CN107392213B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103984954A (en) * 2014-04-23 2014-08-13 Ningbo Information Technology Research Institute of Xidian University Image synthesis method based on multi-feature fusion
US20150310263A1 (en) * 2014-04-29 2015-10-29 Microsoft Corporation Facial expression tracking
CN104700380A (en) * 2015-03-12 2015-06-10 Shaanxi Juyun Information Technology Co., Ltd. Face portrait compositing method based on single photos and portrait pairs
CN105608450A (en) * 2016-03-01 2016-05-25 Tianjin Zhongke Intelligent Identification Industry Technology Research Institute Co., Ltd. Heterogeneous face identification method based on deep convolutional neural network

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108154133A (en) * 2018-01-10 2018-06-12 Xidian University Face portrait-photo recognition method based on asymmetric joint learning
CN108154133B (en) * 2018-01-10 2020-04-14 Xidian University Face portrait-photo recognition method based on asymmetric joint learning
CN109145704A (en) * 2018-06-14 2019-01-04 Xidian University Face portrait recognition method based on facial attributes
CN109145704B (en) * 2018-06-14 2022-02-22 Xidian University Face portrait recognition method based on facial attributes
CN109920021A (en) * 2019-03-07 2019-06-21 East China University of Science and Technology Face sketch synthesis method based on a regularized broad learning network
CN110069992A (en) * 2019-03-18 2019-07-30 Xidian University Face image synthesis method and apparatus, electronic device and storage medium
US11270101B2 (en) 2019-11-01 2022-03-08 Industrial Technology Research Institute Imaginary face generation method and system, and face recognition method and system using the same
CN115034957A (en) * 2022-05-06 2022-09-09 Xidian University Face sketch portrait editing method based on text description

Also Published As

Publication number Publication date
CN107392213B (en) 2020-04-07

Similar Documents

Publication Publication Date Title
CN107392213A (en) Human face portrait synthetic method based on the study of the depth map aspect of model
Chen et al. The face image super-resolution algorithm based on combined representation learning
Kadam et al. Detection and localization of multiple image splicing using MobileNet V1
Cao et al. Ancient mural restoration based on a modified generative adversarial network
CN108154133B (en) Face portrait-photo recognition method based on asymmetric joint learning
CN115966010A (en) Expression recognition method based on attention and multi-scale feature fusion
CN113269224A (en) Scene image classification method, system and storage medium
CN111062329A (en) Unsupervised pedestrian re-identification method based on augmented network
Xu et al. Generative image completion with image-to-image translation
Bodavarapu et al. Facial expression recognition for low resolution images using convolutional neural networks and denoising techniques
Liu et al. Component semantic prior guided generative adversarial network for face super-resolution
CN105844605A (en) Face image synthesis method based on adaptive expression
Li et al. A review of advances in image inpainting research
Hou et al. Super‐resolution reconstruction of vertebrate microfossil computed tomography images based on deep learning
CN113011506A (en) Texture image classification method based on depth re-fractal spectrum network
CN110210562B (en) Image classification method based on depth network and sparse Fisher vector
Wu et al. Deep texture exemplar extraction based on trimmed T-CNN
CN106023079B Joint local and global property two-stage face portrait generation method
CN113191367B (en) Semantic segmentation method based on dense scale dynamic network
Hussein et al. Semantic segmentation of aerial images using u-net architecture
CN110211162A (en) It is a kind of based on the homologous identification platform of image being finely registrated and its implementation
CN117635973B (en) Clothing changing pedestrian re-identification method based on multilayer dynamic concentration and local pyramid aggregation
Rehman et al. Investigation and Morphing Attack Detection Techniques in Multimedia: A Detail Review
Stoean et al. Study on Semantic Inpainting Deep Learning Models for Artefacts with Traditional Motifs
CN118154576B (en) Intelligent detection method for subway tunnel joint leakage water

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220711

Address after: 518057 2304, block a, building 2, Shenzhen International Innovation Valley, Dashi 1st Road, Xili community, Xili street, Nanshan District, Shenzhen, Guangdong Province

Patentee after: SHENZHEN AIMO TECHNOLOGY Co.,Ltd.

Address before: 710071 No. 2 Taibai South Road, Yanta District, Xi'an, Shaanxi Province

Patentee before: XIDIAN University
