CN107392213A - Face portrait synthesis method based on deep graph model feature learning - Google Patents

Face portrait synthesis method based on deep graph model feature learning

Info

Publication number
CN107392213A
CN107392213A (application CN201710602696.9A)
Authority
CN
China
Prior art keywords
photo
face
blocks
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710602696.9A
Other languages
Chinese (zh)
Other versions
CN107392213B (en)
Inventor
王楠楠
朱明瑞
李洁
高新波
查文锦
张玉倩
郝毅
曹兵
马卓奇
刘德成
辛经纬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Aimo Technology Co ltd
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201710602696.9A priority Critical patent/CN107392213B/en
Publication of CN107392213A publication Critical patent/CN107392213A/en
Application granted granted Critical
Publication of CN107392213B publication Critical patent/CN107392213B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Molecular Biology (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A face portrait synthesis method based on deep graph model feature learning, comprising the steps of: (1) generate a sample set; (2) generate image block sets; (3) extract deep features; (4) solve for the face portrait block reconstruction coefficients; (5) reconstruct the face portrait blocks; (6) synthesize the face portrait. The present invention uses a deep convolutional network to extract deep features from face photo blocks, uses a Markov graph model to jointly solve for the deep-feature-map coefficients and the face portrait block reconstruction coefficients, weights and sums the candidate portrait blocks with the reconstruction coefficients to obtain the reconstructed face portrait blocks, and stitches the reconstructed blocks into the synthesized face portrait. By replacing the raw pixel values of image blocks with deep features extracted from a deep convolutional network, the method is more robust to environmental noise such as illumination and can synthesize face portraits of very high quality.

Description

Face portrait synthesis method based on deep graph model feature learning

Technical Field

The present invention belongs to the technical field of image processing, and more particularly relates to a face portrait synthesis method based on deep graph model feature learning in the fields of pattern recognition and computer vision. The invention can be used for face retrieval and recognition in the field of public security.

Background Art

In criminal investigation, public security departments maintain databases of citizens' photos that, combined with face recognition technology, are used to determine the identity of criminal suspects. In practice, however, a photo of a suspect is usually hard to obtain, whereas a sketch portrait of the suspect can be produced through the cooperation of an artist and eyewitnesses and then used for subsequent face retrieval and recognition. Because sketch portraits differ greatly from ordinary face photos, it is difficult to obtain satisfactory results by applying traditional face recognition methods directly. Converting the photos in the citizen photo database into portraits effectively reduces the texture gap between the two modalities and thereby improves the recognition rate.

In the paper "Local face sketch synthesis learning" (X. Gao, J. Zhou, D. Tao, and X. Li, Neurocomputing, vol. 71, no. 10-12, pp. 1921-1930, Jun. 2008), X. Gao et al. proposed using an embedded hidden Markov model to generate pseudo-sketches. The method first divides the photos and sketches in the training library into blocks and models the corresponding photo blocks and sketch blocks with an embedded hidden Markov model. Any given photo is likewise divided into blocks; for each block, following the idea of selective ensembles, the models generated from a subset of blocks are selected to generate pseudo-sketch blocks, which are then fused into the final pseudo-sketch. The shortcoming of this method is that, because it relies on selective ensembles, the generated pseudo-sketches must be weighted and averaged, which yields unclean backgrounds and blurred details and thus degrades the quality of the generated sketch.

In the paper "Markov Weight Fields for Face Sketch Synthesis" (H. Zhou, Z. Kuang, and K. Wong, in Proc. IEEE Int. Conference on Computer Vision, pp. 1091-1097, 2012), H. Zhou et al. proposed a face sketch synthesis method based on Markov weight fields. The method first divides the training images and the input test image into uniform blocks; for each test image block it searches for several nearest neighbors to obtain candidate blocks in the target modality. It then models the test image blocks, neighbor blocks, and candidate blocks with a Markov graph model to obtain reconstruction weights. Finally, the synthesized sketch blocks are reconstructed from the reconstruction weights and candidate sketch blocks and stitched into the synthesized sketch. The shortcoming of this method is that the image block features use raw pixel information, whose representational power is limited and which is strongly affected by environmental noise such as illumination.

The patent application "Face portrait synthesis method based on directional graph model" filed by Xidian University (Application No. CN201610171867.2, filed 2016.03.24, published as CN105869134A) discloses a face portrait synthesis method based on a directional graph model. The method first divides the training images and the input test image into uniform blocks; for each test photo block it searches for several neighbor photo blocks and the corresponding neighbor portrait blocks. It then extracts directional features from the test photo block and the neighbor photo blocks and uses a Markov graph model to model the directional features of the test photo blocks and neighbor photo blocks together with the neighbor portrait blocks, obtaining the reconstruction weights with which the synthesized portrait blocks are reconstructed from the neighbor portrait blocks. Finally, the synthesized portrait blocks are reconstructed from the weights and the neighbor portrait blocks and stitched into the synthesized portrait. The shortcoming of this method is that the image block features are hand-crafted high-frequency features with limited adaptability; the features are not fully learned.

Summary of the Invention

The object of the present invention is to overcome the deficiencies of the above prior art by proposing a face portrait synthesis method based on deep graph model feature learning, which can synthesize high-quality portraits unaffected by environmental noise such as illumination.

The specific steps for achieving the object of the present invention are as follows:

(1) Generate a sample set:

(1a) Take M face photos from the face photo sample set to form the training face photo sample set, where 2 ≤ M ≤ U-1 and U denotes the total number of face photos in the sample set;

(1b) Form the test face photo set from the face photos remaining in the face photo sample set;

(1c) Take from the face portrait sample set the face portraits in one-to-one correspondence with the photos of the training face photo sample set, forming the training face portrait sample set;

(2) Generate image block sets:

(2a) Arbitrarily select one test face photo from the test face photo set and divide it into photo blocks of equal size and equal overlap, forming the test photo block set;

(2b) Divide each photo in the training face photo sample set into photo blocks of equal size and equal overlap, forming the training photo sample block set;

(2c) Divide each portrait in the training face portrait sample set into portrait blocks of equal size and equal overlap, forming the training portrait sample block set;

(3) Extract deep features:

(3a) Feed all photo blocks of the training photo block set and the test photo block set into the deep convolutional network VGG, pre-trained for object recognition on the ImageNet database, and perform a forward pass;

(3b) Take the 128-channel feature map output by an intermediate layer of the VGG network as the deep feature of each photo block, where the coefficient of the l-th feature-map layer is u_{i,l} with ∑_{l=1}^{128} u_{i,l} = 1; here ∑ denotes summation, i denotes the index of the test photo block, i = 1, 2, ..., N, N denotes the total number of test photo blocks, and l denotes the index of the feature-map layer, l = 1, ..., 128;

(4) Solve for the face portrait block reconstruction coefficients:

(4a) Use the K-nearest-neighbor search algorithm to find, in the training photo sample block set, the 10 neighbor training photo blocks most similar to each test photo block, and select from the training portrait sample block set the 10 neighbor training portrait blocks in one-to-one correspondence with those photo blocks; the coefficient of each neighbor training portrait block is w_{i,k}, where k denotes the index of the training portrait block, k = 1, ..., 10;

(4b) Use the Markov graph model formula to jointly model the deep features of all test photo blocks, the deep features of all neighbor training photo blocks, all neighbor training portrait blocks, the feature-map coefficients u_{i,l}, and the portrait block coefficients w_{i,k};

(4c) Solve the Markov graph model formula to obtain the face portrait block reconstruction coefficients w_{i,k};

(5) Reconstruct the face portrait blocks:

Multiply the 10 neighbor training portrait blocks corresponding to each test photo block by their respective coefficients w_{i,k} and sum the products, yielding the reconstructed face portrait block corresponding to that test photo block;

(6) Synthesize the face portrait:

Stitch together the reconstructed face portrait blocks corresponding to all test photo blocks to obtain the synthesized face portrait.

Compared with the prior art, the present invention has the following advantages:

First, because the present invention replaces the raw pixel values of image blocks with deep features extracted from a deep convolutional network, it overcomes the limited representational power of the features used in the prior art and their sensitivity to environmental noise such as illumination, so the invention is robust to such noise.

Second, because the present invention uses a Markov graph model to jointly model the deep-feature-map coefficients and the face portrait block reconstruction coefficients, it overcomes the unclean backgrounds and blurred details of face portraits synthesized by the prior art, so the portraits synthesized by the invention have clean backgrounds and clear details.

Brief Description of the Drawings

Fig. 1 is a flowchart of the present invention;

Fig. 2 shows the simulation results of the present invention.

Detailed Description

The present invention is further described below in conjunction with the accompanying drawings.

Referring to Fig. 1, the specific steps of the present invention are as follows.

Step 1: Generate the sample sets.

Take M face photos from the face photo sample set to form the training face photo sample set, where 2 ≤ M ≤ U-1 and U denotes the total number of face photos in the sample set.

Form the test face photo set from the face photos remaining in the face photo sample set.

Take from the face portrait sample set the face portraits in one-to-one correspondence with the photos of the training face photo sample set, forming the training face portrait sample set.

Step 2: Generate the image block sets.

Arbitrarily select one test face photo from the test face photo set and divide it into photo blocks of equal size and equal overlap, forming the test photo block set.

Divide each photo in the training face photo sample set into photo blocks of equal size and equal overlap, forming the training photo sample block set.

Divide each portrait in the training face portrait sample set into portrait blocks of equal size and equal overlap, forming the training portrait sample block set.

Equal overlap means that the overlapping region between two adjacent image blocks covers 1/2 of each block.
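As an illustration of this block division, a minimal Python sketch follows, assuming square blocks with 1/2 overlap and a NumPy image array; the block side length `size` is a free parameter not fixed by this section.

```python
import numpy as np

def split_blocks(img, size):
    """Return overlapping blocks of side `size` and their (top, left) positions."""
    stride = size // 2                            # 1/2 overlap between adjacent blocks
    h, w = img.shape[:2]
    blocks, positions = [], []
    for top in range(0, h - size + 1, stride):
        for left in range(0, w - size + 1, stride):
            blocks.append(img[top:top + size, left:left + size])
            positions.append((top, left))
    return blocks, positions
```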

Step 3: Extract the deep features.

Feed all photo blocks of the training photo block set and the test photo block set into the deep convolutional network VGG, pre-trained for object recognition on the ImageNet database, and perform a forward pass.

Take the 128-channel feature map output by an intermediate layer of the VGG network as the deep feature of each photo block, where the coefficient of the l-th feature-map layer is u_{i,l} with ∑_{l=1}^{128} u_{i,l} = 1; here ∑ denotes summation, i denotes the index of the test photo block, i = 1, 2, ..., N, N denotes the total number of test photo blocks, and l denotes the index of the feature-map layer, l = 1, ..., 128.

The intermediate layer refers to the relu2_2 activation layer of the VGG network.
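A minimal sketch of this extraction, assuming a recent PyTorch/torchvision stack (the patent does not name a framework): in torchvision's VGG16 the relu2_2 activation is features[8], and conv2_2 outputs 128 channels, matching the 128 feature maps above.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
extractor = vgg.features[:9].eval()               # conv1_1 ... relu2_2
for p in extractor.parameters():
    p.requires_grad_(False)

preprocess = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def deep_feature(patch_rgb):
    """Return the 128-channel relu2_2 feature map of one photo block."""
    with torch.no_grad():
        x = preprocess(patch_rgb).unsqueeze(0)    # (1, 3, h, w)
        return extractor(x).squeeze(0)            # (128, h/2, w/2)
```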

Step 4: Solve for the face portrait block reconstruction coefficients.

Use the K-nearest-neighbor search algorithm to find, in the training photo sample block set, the 10 neighbor training photo blocks most similar to each test photo block, and select from the training portrait sample block set the 10 neighbor training portrait blocks in one-to-one correspondence with those photo blocks; the coefficient of each neighbor training portrait block is w_{i,k}, where k denotes the index of the training portrait block, k = 1, ..., 10.

The specific steps of the K-nearest-neighbor search algorithm are as follows (a sketch follows the list):

First, compute the Euclidean distance between the deep feature vector of each test photo block and the deep feature vectors of all training photo blocks;

Second, sort all training photo blocks in ascending order of Euclidean distance;

Third, select the first 10 training photo blocks as the neighbor training photo blocks.
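A minimal sketch of this neighbor search, assuming each block's deep feature has been flattened into a row vector; the names test_feats and train_feats are illustrative.

```python
import numpy as np

def knn_indices(test_feats, train_feats, k=10):
    """Indices of the k training blocks nearest to each test block (ascending)."""
    # squared Euclidean distances via ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2
    d2 = (np.sum(test_feats ** 2, axis=1)[:, None]
          - 2.0 * test_feats @ train_feats.T
          + np.sum(train_feats ** 2, axis=1)[None, :])
    return np.argsort(d2, axis=1)[:, :k]          # shape (N_test, k)
```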

Use the Markov graph model formula to jointly model the deep features of all test photo blocks, the deep features of all neighbor training photo blocks, all neighbor training portrait blocks, the feature-map coefficients u_{i,l}, and the portrait block coefficients w_{i,k}.

The Markov graph model formula is as follows:

$$\min \sum_{i=1}^{N} \Big\| \sum_{k=1}^{K} w_{i,k}\, o_{i,k} - w_{j,k}\, o_{j,k} \Big\|^{2} + \sum_{i=1}^{N} \sum_{l=1}^{128} u_{i,l} \Big\| d_{l}(x_{i}) - \sum_{k=1}^{K} w_{i,k}\, d_{l}(x_{i,k}) \Big\|^{2} + \sum_{i=1}^{N} \big\| u_{i,l} \big\|^{2}$$

where min denotes minimization, ∑ denotes summation, ||·||² denotes the squared norm, w_{i,k} denotes the coefficient of the k-th neighbor training portrait block of the i-th test photo block, o_{i,k} denotes the pixel-value vector of the overlapping part of the k-th neighbor training portrait block of the i-th test photo block, w_{j,k} and o_{j,k} denote the corresponding coefficient and pixel-value vector for the j-th test photo block adjacent to the i-th, d_l(x_i) denotes the l-th layer of the deep feature map of the i-th test photo block, and d_l(x_{i,k}) denotes the l-th layer of the deep feature map of the k-th neighbor training photo block of the i-th test photo block.

Solving the Markov graph model formula yields the face portrait block reconstruction coefficients w_{i,k}.
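A simplified, illustrative solver for the block-level subproblem follows, assuming the compatibility term between adjacent blocks is dropped so each test block decouples (the full model couples blocks through their overlaps and is solved jointly); it alternates between the portrait-block weights w and the feature-map coefficients u under the constraint ∑_l u_l = 1. All names are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def solve_block(feats, nbr_feats, iters=5):
    """feats[l]: d_l(x_i) flattened; nbr_feats[l][k]: d_l(x_{i,k}) flattened."""
    L, K = len(feats), len(nbr_feats[0])
    u = np.full(L, 1.0 / L)                       # start from uniform coefficients
    for _ in range(iters):
        # w-step: nonnegative least squares over the u-weighted stacked layers
        A = np.vstack([np.sqrt(u[l]) * np.stack(nbr_feats[l], axis=1) for l in range(L)])
        b = np.concatenate([np.sqrt(u[l]) * feats[l] for l in range(L)])
        w, _ = nnls(A, b)
        # u-step: minimize sum_l u_l r_l + ||u||^2 subject to sum_l u_l = 1
        r = np.array([np.sum((feats[l] - np.stack(nbr_feats[l], axis=1) @ w) ** 2)
                      for l in range(L)])
        lam = 2.0 / L + r.mean()                  # Lagrange multiplier for the sum constraint
        u = np.maximum((lam - r) / 2.0, 0.0)      # closed form, clipped to keep u >= 0
        u /= u.sum()
    return w, u
```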

Step 5: Reconstruct the face portrait blocks.

Multiply the 10 neighbor training portrait blocks corresponding to each test photo block by their respective coefficients w_{i,k} and sum the products, yielding the reconstructed face portrait block corresponding to that test photo block.
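A one-line sketch of this weighted sum, assuming the neighbor portrait blocks are stacked into a single NumPy array of identical shapes; names are illustrative.

```python
import numpy as np

def reconstruct_block(nbr_portrait_blocks, w):
    """nbr_portrait_blocks: (K, h, w) array; w: (K,) reconstruction coefficients."""
    return np.tensordot(w, nbr_portrait_blocks, axes=1)   # sum_k w_k * block_k
```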

Step 6: Synthesize the face portrait.

Stitch together the reconstructed face portrait blocks corresponding to all test photo blocks to obtain the synthesized face portrait.

The method for stitching the reconstructed portrait blocks corresponding to all test photo blocks is as follows (a sketch follows the list):

First, place the reconstructed portrait blocks corresponding to all test photo blocks at their respective positions in the portrait;

Second, take the average of the pixel values over the overlapping region between each pair of adjacent reconstructed face portrait blocks;

Third, replace the pixel values of the overlapping region between adjacent reconstructed face portrait blocks with that average, yielding the synthesized face portrait.
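A minimal sketch of this stitching rule: accumulating per-pixel sums and counts and then dividing reproduces the overlap averaging for blocks produced by split_blocks above.

```python
import numpy as np

def stitch(blocks, positions, out_shape):
    """blocks: list of (bh, bw) arrays; positions: matching (top, left) pairs."""
    acc = np.zeros(out_shape, dtype=np.float64)
    cnt = np.zeros(out_shape, dtype=np.float64)
    for blk, (top, left) in zip(blocks, positions):
        bh, bw = blk.shape[:2]
        acc[top:top + bh, left:left + bw] += blk
        cnt[top:top + bh, left:left + bw] += 1.0
    return acc / np.maximum(cnt, 1.0)             # average where blocks overlap
```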

The effects of the present invention are further illustrated by the following simulation experiments.

1. Simulation conditions:

The simulation experiments of the present invention were run on an Intel(R) Core i7-4790 at 3.6 GHz with 16 GB of memory under a Linux operating system; the programming language was Python, and the database was the CUHK student database of the Chinese University of Hong Kong.

Two prior-art methods were used for comparison in the simulation experiments:

One is the method based on locally linear embedding, denoted LLE in the experiments; the reference is Q. Liu, X. Tang, H. Jin, H. Lu, and S. Ma, "A Nonlinear Approach for Face Sketch Synthesis and Recognition," in Proc. IEEE Int. Conference on Computer Vision, pp. 1005-1010, 2005;

The other is the method based on the Markov weight field model, denoted MWF in the experiments; the reference is H. Zhou, Z. Kuang, and K. Wong, "Markov Weight Fields for Face Sketch Synthesis," in Proc. IEEE Int. Conference on Computer Vision, pp. 1091-1097, 2012.

2. Simulation content:

The present invention comprises one group of simulation experiments.

Portraits were synthesized on the CUHK student database and compared with the portraits synthesized by the locally linear embedding (LLE) and Markov weight field model (MWF) methods.

3. Simulation results and analysis:

The simulation results of the present invention are shown in Fig. 2, where Fig. 2(a) is a test photo taken arbitrarily from the test photo sample set, Fig. 2(b) is a portrait synthesized by the prior-art locally linear embedding (LLE) method, Fig. 2(c) is a portrait synthesized by the prior-art Markov weight field model (MWF) method, and Fig. 2(d) is a portrait synthesized by the method of the present invention.

As can be seen from Fig. 2, because the present invention replaces the raw pixel values of image blocks with deep features, it is more robust to environmental noise such as illumination; for photos strongly affected by illumination, the portraits synthesized by the present invention are of higher quality and contain less noise than those of the LLE and MWF methods.

Claims (6)

1. A face portrait synthesis method based on deep graph model feature learning, comprising the following steps:

(1) Generate a sample set:

(1a) take M face photos from the face photo sample set to form the training face photo sample set, where 2 ≤ M ≤ U-1 and U denotes the total number of face photos in the sample set;

(1b) form the test face photo set from the face photos remaining in the face photo sample set;

(1c) take from the face portrait sample set the face portraits in one-to-one correspondence with the photos of the training face photo sample set, forming the training face portrait sample set;

(2) Generate image block sets:

(2a) arbitrarily select one test face photo from the test face photo set and divide it into photo blocks of equal size and equal overlap, forming the test photo block set;

(2b) divide each photo in the training face photo sample set into photo blocks of equal size and equal overlap, forming the training photo sample block set;

(2c) divide each portrait in the training face portrait sample set into portrait blocks of equal size and equal overlap, forming the training portrait sample block set;

(3) Extract deep features:

(3a) feed all photo blocks of the training photo block set and the test photo block set into the deep convolutional network VGG, pre-trained for object recognition on the ImageNet database, and perform a forward pass;

(3b) take the 128-channel feature map output by an intermediate layer of the VGG network as the deep feature of each photo block, where the coefficient of the l-th feature-map layer is u_{i,l} with ∑_{l=1}^{128} u_{i,l} = 1; here ∑ denotes summation, i denotes the index of the test photo block, i = 1, 2, ..., N, N denotes the total number of test photo blocks, and l denotes the index of the feature-map layer, l = 1, ..., 128;

(4) Solve for the face portrait block reconstruction coefficients:

(4a) use the K-nearest-neighbor search algorithm to find, in the training photo sample block set, the 10 neighbor training photo blocks most similar to each test photo block, and select from the training portrait sample block set the 10 neighbor training portrait blocks in one-to-one correspondence with those photo blocks, the coefficient of each neighbor training portrait block being w_{i,k}, where k denotes the index of the training portrait block, k = 1, ..., 10;

(4b) use the Markov graph model formula to jointly model the deep features of all test photo blocks, the deep features of all neighbor training photo blocks, all neighbor training portrait blocks, the feature-map coefficients u_{i,l}, and the portrait block coefficients w_{i,k};

(4c) solve the Markov graph model formula to obtain the face portrait block reconstruction coefficients w_{i,k};

(5) Reconstruct the face portrait blocks:

multiply the 10 neighbor training portrait blocks corresponding to each test photo block by their respective coefficients w_{i,k} and sum the products, yielding the reconstructed face portrait block corresponding to that test photo block;

(6) Synthesize the face portrait:

stitch together the reconstructed face portrait blocks corresponding to all test photo blocks to obtain the synthesized face portrait.

2. The face portrait synthesis method based on deep graph model feature learning according to claim 1, wherein the degree of overlap in steps (2a), (2b), and (2c) means that the overlapping region between two adjacent image blocks covers 1/2 of each block.

3. The face portrait synthesis method based on deep graph model feature learning according to claim 1, wherein the intermediate layer in step (3b) refers to the relu2_2 activation layer of the deep convolutional network VGG.

4. The face portrait synthesis method based on deep graph model feature learning according to claim 1, wherein the K-nearest-neighbor search algorithm in step (4a) comprises the following steps: first, computing the Euclidean distance between the deep feature vector of each test photo block and the deep feature vectors of all training photo blocks; second, sorting all training photo blocks in ascending order of Euclidean distance; third, selecting the first 10 training photo blocks as the neighbor training photo blocks.

5. The face portrait synthesis method based on deep graph model feature learning according to claim 1, wherein the Markov graph model formula in step (4b) is:

$$\min \sum_{i=1}^{N} \Big\| \sum_{k=1}^{K} w_{i,k}\, o_{i,k} - w_{j,k}\, o_{j,k} \Big\|^{2} + \sum_{i=1}^{N} \sum_{l=1}^{128} u_{i,l} \Big\| d_{l}(x_{i}) - \sum_{k=1}^{K} w_{i,k}\, d_{l}(x_{i,k}) \Big\|^{2} + \sum_{i=1}^{N} \big\| u_{i,l} \big\|^{2}$$

where min denotes minimization, ∑ denotes summation, ||·||² denotes the squared norm, w_{i,k} denotes the coefficient of the k-th neighbor training portrait block of the i-th test photo block, o_{i,k} denotes the pixel-value vector of the overlapping part of the k-th neighbor training portrait block of the i-th test photo block, w_{j,k} and o_{j,k} denote the corresponding coefficient and pixel-value vector for the j-th test photo block adjacent to the i-th, d_l(x_i) denotes the l-th layer of the deep feature map of the i-th test photo block, and d_l(x_{i,k}) denotes the l-th layer of the deep feature map of the k-th neighbor training photo block of the i-th test photo block.

6. The face portrait synthesis method based on deep graph model feature learning according to claim 1, wherein the method for stitching the reconstructed portrait blocks corresponding to all test photo blocks in step (6) comprises: first, placing the reconstructed portrait blocks corresponding to all test photo blocks at their respective positions in the portrait; second, taking the average of the pixel values over the overlapping region between each pair of adjacent reconstructed face portrait blocks; third, replacing the pixel values of the overlapping region between adjacent reconstructed face portrait blocks with that average, yielding the synthesized face portrait.
CN201710602696.9A 2017-07-21 2017-07-21 Face portrait synthesis method based on deep graph model feature learning Active CN107392213B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710602696.9A CN107392213B (en) 2017-07-21 Face portrait synthesis method based on deep graph model feature learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710602696.9A CN107392213B (en) 2017-07-21 Face portrait synthesis method based on deep graph model feature learning

Publications (2)

Publication Number Publication Date
CN107392213A true CN107392213A (en) 2017-11-24
CN107392213B CN107392213B (en) 2020-04-07

Family

ID=60335789

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710602696.9A Active CN107392213B (en) 2017-07-21 Face portrait synthesis method based on deep graph model feature learning

Country Status (1)

Country Link
CN (1) CN107392213B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103984954A (en) * 2014-04-23 2014-08-13 西安电子科技大学宁波信息技术研究院 Image synthesis method based on multi-feature fusion
US20150310263A1 (en) * 2014-04-29 2015-10-29 Microsoft Corporation Facial expression tracking
CN104700380A (en) * 2015-03-12 2015-06-10 陕西炬云信息科技有限公司 Face portrait synthesis method based on single photo and portrait pair
CN105608450A (en) * 2016-03-01 2016-05-25 天津中科智能识别产业技术研究院有限公司 Heterogeneous face identification method based on deep convolutional neural network

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108154133A (en) * 2018-01-10 2018-06-12 西安电子科技大学 Face portrait-photo recognition method based on asymmetric joint learning
CN108154133B (en) * 2018-01-10 2020-04-14 西安电子科技大学 Face portrait-photo recognition method based on asymmetric joint learning
CN109145704A (en) * 2018-06-14 2019-01-04 西安电子科技大学 Face portrait recognition method based on face attributes
CN109145704B (en) * 2018-06-14 2022-02-22 西安电子科技大学 A face portrait recognition method based on face attributes
CN109920021A (en) * 2019-03-07 2019-06-21 华东理工大学 A face sketch synthesis method based on regularized width learning network
CN110069992A (en) * 2019-03-18 2019-07-30 西安电子科技大学 Face image synthesis method and apparatus, electronic device, and storage medium
US11270101B2 (en) 2019-11-01 2022-03-08 Industrial Technology Research Institute Imaginary face generation method and system, and face recognition method and system using the same
CN115034957A (en) * 2022-05-06 2022-09-09 西安电子科技大学 Face sketch portrait editing method based on text description

Also Published As

Publication number Publication date
CN107392213B (en) 2020-04-07

Similar Documents

Publication Publication Date Title
Gurrola-Ramos et al. A residual dense u-net neural network for image denoising
CN107392213A (en) 2017-11-24 Face portrait synthesis method based on deep graph model feature learning
CN109635882B (en) A salient object detection method based on multi-scale convolutional feature extraction and fusion
CN111292264B (en) A high dynamic range image reconstruction method based on deep learning
CN112884758B (en) Defect insulator sample generation method and system based on style migration method
CN111161158B (en) Image restoration method based on generated network structure
CN107358576A (en) Depth map super resolution ratio reconstruction method based on convolutional neural networks
CN111563418A (en) A Saliency Detection Method for Asymmetric Multimodal Fusion Based on Attention Mechanism
CN114373094B (en) A gated feature attention equivariant segmentation method based on weakly supervised learning
CN113011357A (en) Depth fake face video positioning method based on space-time fusion
Wei et al. A-ESRGAN: Training real-world blind super-resolution with attention U-Net Discriminators
CN108154133B (en) Face portrait-photo recognition method based on asymmetric joint learning
CN113689382B (en) Tumor postoperative survival prediction method and system based on medical images and pathological images
CN106022363A (en) Method for recognizing Chinese characters in natural scene
CN114663685A (en) A method, device and device for training a person re-identification model
CN103985104B (en) Multi-focusing image fusion method based on higher-order singular value decomposition and fuzzy inference
CN117058059A (en) Progressive restoration framework cloud removal method for fusion of optical remote sensing image and SAR image
Li et al. A review of advances in image inpainting research
Nair et al. Image Outpainting using Wasserstein generative adversarial network with gradient penalty
CN114581789A (en) Hyperspectral image classification method and system
CN109583406B (en) Facial Expression Recognition Method Based on Feature Attention Mechanism
Zhu et al. Neighboring-part dependency mining and feature fusion network for person re-identification
CN106023079A (en) A Two-Stage Face Portrait Generation Method Combined with Local and Global Features
Ibrahim et al. Re-designing cities with conditional adversarial networks
Kumari et al. Video forgery detection and localization using optimized attention squeezenet adversarial network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220711

Address after: 518057 2304, block a, building 2, Shenzhen International Innovation Valley, Dashi 1st Road, Xili community, Xili street, Nanshan District, Shenzhen, Guangdong Province

Patentee after: SHENZHEN AIMO TECHNOLOGY Co.,Ltd.

Address before: 710071 Taibai South Road, Yanta District, Xi'an, Shaanxi Province, No. 2

Patentee before: XIDIAN University