CN107392213B - Face portrait synthesis method based on depth map model feature learning


Info

Publication number
CN107392213B
CN107392213B (application CN201710602696.9A)
Authority
CN
China
Prior art keywords
face
photo
training
blocks
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710602696.9A
Other languages
Chinese (zh)
Other versions
CN107392213A (en)
Inventor
王楠楠
朱明瑞
李洁
高新波
查文锦
张玉倩
郝毅
曹兵
马卓奇
刘德成
辛经纬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Aimo Technology Co ltd
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201710602696.9A
Publication of CN107392213A
Application granted
Publication of CN107392213B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions

Abstract

A face portrait synthesis method based on depth map model feature learning. The method comprises the following steps: (1) generating a sample set; (2) generating an image block set; (3) extracting depth features; (4) solving the face portrait block reconstruction coefficients; (5) reconstructing face portrait blocks; (6) synthesizing the face portrait. The method extracts depth features of face photo blocks with a deep convolutional network, solves the depth feature map coefficients and the face portrait block reconstruction coefficients with a Markov graph model, weights and sums the face portrait blocks with the reconstruction coefficients to obtain reconstructed face portrait blocks, and stitches the reconstructed blocks into the synthesized face portrait. Because the invention uses depth features extracted by the deep convolutional network in place of the raw pixel values of the image blocks, it is more robust to environmental noise such as illumination and can synthesize face portraits of very high quality.

Description

Face portrait synthesis method based on depth map model feature learning
Technical Field
The invention belongs to the technical field of image processing, and further relates to a face portrait synthesis method based on depth map model feature learning in the technical field of pattern recognition and computer vision. The invention can be used for face retrieval and identification in the field of public security.
Background
In criminal investigation, public security departments maintain a database of citizen photographs and combine it with face recognition technology to determine the identity of a criminal suspect. In practice, a photograph of the suspect is difficult to obtain, but a sketch portrait of the suspect can be produced by an artist working with eyewitnesses, enabling subsequent face retrieval and recognition. Because a sketch portrait differs greatly from an ordinary face photograph, directly applying traditional face recognition methods rarely yields a satisfactory recognition result. Synthesizing portraits from the photographs in the citizen photo database effectively reduces the texture difference between the two modalities and thereby improves the recognition rate.
Gao et al., in their published paper (X. Gao, J. Zhou, D. Tao, and X. Li, Neurocomputing, vol. 71, no. 10-12, pp. 1921-1930, Jun. 2008), propose using an embedded hidden Markov model to generate pseudo-portraits. The method first divides the photos and portraits in a training library into blocks, then models the corresponding photo blocks and portrait blocks with embedded hidden Markov models. Given an arbitrary photo, the photo is divided into blocks; for each block, following a selective-ensemble idea, the models generated from a subset of training blocks are selected to generate pseudo-portrait blocks, which are fused to obtain the final pseudo-portrait. The disadvantage of this method is that the selective-ensemble technique takes a weighted average of the generated pseudo-portraits, so the background is not clean and the details are blurred, which lowers the quality of the generated portrait.
Zhou et al., in the published paper "Markov Weight Fields for Face Sketch Synthesis" (H. Zhou, Z. Kuang, and K. Wong, in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 1091-1097, 2012), propose a face sketch synthesis method based on a Markov weight field. The method uniformly divides the training images and the input test image into blocks and, for each test image block, searches for several nearest neighbors to obtain candidate blocks in the modality of the image to be synthesized. A Markov graph model then jointly models the test image blocks, the neighbor blocks, and the candidate blocks to obtain reconstruction weights. Finally, synthesized image blocks are reconstructed from the reconstruction weights and candidate blocks and stitched into the synthesized image. The disadvantage of this method is that the image block features use raw pixel information, whose representational power is insufficient and which is strongly affected by environmental noise such as illumination.
The patent application "Face portrait synthesis method based on a direction graph model" filed by Xidian University (application number CN201610171867.2, filing date 2016.03.24, publication number CN105869134A) discloses a face portrait synthesis method based on a direction graph model. The method uniformly divides the training images and the input test photo into blocks and, for each test photo block, searches for several neighbor photo blocks and the corresponding neighbor portrait blocks. Direction features are then extracted from the test photo block and the neighbor photo blocks. A Markov graph model models these direction features to obtain the reconstruction weights for reconstructing a synthesized portrait block from the neighbor portrait blocks. Finally, synthesized portrait blocks are reconstructed from the reconstruction weights and the neighbor portrait blocks and stitched into the synthesized portrait. The disadvantage of this method is that the image block features are hand-designed high-frequency features with insufficient adaptive capacity, so the features are not fully learned.
Disclosure of Invention
The present invention aims to overcome the above-mentioned deficiencies of the prior art by providing a face portrait synthesis method based on depth map model feature learning, which can synthesize high-quality portraits that are robust to environmental noise such as illumination.
The specific steps for realizing the purpose of the invention are as follows:
(1) generating a sample set:
(1a) taking M face photos from the face photo sample set to form a training face photo sample set, where 2 ≤ M ≤ U-1 and U denotes the total number of face photos in the sample set;
(1b) forming a testing face photo set by the remaining face photos in the face photo sample set;
(1c) taking from the face portrait sample set the face portraits corresponding one-to-one to the photos in the training face photo sample set, to form a training face portrait sample set;
(2) generating an image block set:
(2a) randomly selecting a test face photo from the test face photo set, dividing the test face photo into photo blocks with the same size and the same overlapping degree, and forming a test photo block set;
(2b) dividing each photo in the training face photo sample set into photo blocks with the same size and the same overlapping degree to form a training photo sample block set;
(2c) dividing each portrait in a training face portrait sample set into portrait blocks with the same size and the same overlapping degree to form a training portrait sample block set;
(3) extracting depth features:
(3a) inputting all photo blocks in the training photo block set and the test photo block set into a deep convolution network VGG for object recognition which is trained on an object recognition database ImageNet, and carrying out forward propagation;
(3b) taking the 128 feature maps output by an intermediate layer of the deep convolutional network VGG as the depth features of each photo block, where the coefficient of the l-th feature map is $u_{i,l}$, subject to

$$\sum_{l=1}^{128} u_{i,l} = 1,$$

where $\sum$ denotes the summation operation, $i$ denotes the index of a test photo block, $i = 1, 2, \ldots, N$, $N$ denotes the total number of test photo blocks, and $l = 1, \ldots, 128$ denotes the index of a feature map;
(4) solving the face portrait block reconstruction coefficients:
(4a) using the K-nearest-neighbor search algorithm, finding, for each test photo block, the 10 most similar neighbor training photo blocks in the training photo sample block set, and simultaneously selecting from the training portrait sample block set the 10 neighbor training portrait blocks corresponding one-to-one to those photo blocks, where the coefficient of the k-th neighbor training portrait block is $w_{i,k}$, subject to

$$\sum_{k=1}^{10} w_{i,k} = 1,$$

where $k = 1, \ldots, 10$ denotes the index of a neighbor training portrait block;
(4b) using the Markov graph model formula to jointly model the depth features of all test photo blocks, the depth features of all neighbor training photo blocks, all neighbor training portrait blocks, the depth feature map coefficients $u_{i,l}$, and the neighbor training portrait block coefficients $w_{i,k}$;
(4c) solving the Markov graph model formula to obtain the face portrait block reconstruction coefficients $w_{i,k}$;
(5) reconstructing face portrait blocks:
multiplying the 10 neighbor training portrait blocks corresponding to each test photo block by their respective coefficients $w_{i,k}$ and summing the products, taking the result as the reconstructed face portrait block corresponding to that test photo block;
(6) synthesizing the face portrait:
stitching the reconstructed face portrait blocks corresponding to all test photo blocks to obtain the synthesized face portrait.
Compared with the prior art, the invention has the following advantages:
1. Because the invention uses depth features extracted by a deep convolutional network in place of the raw pixel values of the image blocks, it overcomes the insufficient representational power of the features used in the prior art and their sensitivity to environmental noise such as illumination, and is therefore robust to such noise.
2. Because the invention uses a Markov graph model to jointly model the depth feature map coefficients and the face portrait block reconstruction coefficients, it overcomes the unclean background and unclear details of face portraits synthesized by the prior art; the synthesized face portrait has a clean background and clear details.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a diagram of the simulation results of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to fig. 1, the specific steps of the present invention are as follows.
Step 1, generating a sample set.
Taking M face photos from the face photo sample set to form a training face photo sample set, where 2 ≤ M ≤ U-1 and U denotes the total number of face photos in the sample set.
Forming a test face photo set from the remaining face photos in the face photo sample set.
Taking from the face portrait sample set the face portraits corresponding one-to-one to the photos in the training face photo sample set, to form a training face portrait sample set.
Step 2, generating an image block set.
Randomly selecting a test face photo from the test face photo set and dividing it into photo blocks of the same size and the same overlap degree to form a test photo block set.
Dividing each photo in the training face photo sample set into photo blocks with the same size and the same overlapping degree to form a training photo sample block set.
Each face image in the training face image sample set is divided into image blocks with the same size and the same overlapping degree to form a training image sample block set.
The overlapping degree means that the area of the overlapping area between two adjacent image blocks is 1/2 of the area of each image block.
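As a concrete illustration of this rule, the following Python sketch (helper name hypothetical) cuts a grayscale image into square blocks whose stride is half the block side, which yields exactly the 1/2-area overlap defined above; edge remainders are ignored for brevity:

```python
import numpy as np

def split_into_blocks(image, block_size):
    """Cut a 2-D image into square blocks; stride = half the block side,
    so adjacent blocks share half their area (the 1/2 overlap above)."""
    stride = block_size // 2
    h, w = image.shape
    blocks, positions = [], []
    for y in range(0, h - block_size + 1, stride):
        for x in range(0, w - block_size + 1, stride):
            blocks.append(image[y:y + block_size, x:x + block_size])
            positions.append((y, x))
    return np.stack(blocks), positions

# Example: a 250x200 photo cut into 20x20 blocks with a 10-pixel stride.
photo = np.random.rand(250, 200).astype(np.float32)
blocks, positions = split_into_blocks(photo, block_size=20)
print(blocks.shape)   # (456, 20, 20): one entry per photo block
```

The recorded `positions` are reused later when the reconstructed portrait blocks are stitched back together.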
Step 3, extracting depth features.
Inputting all photo blocks in the training photo sample block set and the test photo block set into a deep convolutional network VGG for object recognition, pre-trained on the object recognition database ImageNet, and performing forward propagation.
Taking the 128 feature maps output by an intermediate layer of the deep convolutional network VGG as the depth features of each photo block, where the coefficient of the l-th feature map is $u_{i,l}$, subject to $\sum_{l=1}^{128} u_{i,l} = 1$, where $i = 1, 2, \ldots, N$ denotes the index of a test photo block, $N$ denotes the total number of test photo blocks, and $l = 1, \ldots, 128$ denotes the index of a feature map.
The middle layer refers to the activation function layer of the deep convolutional network VGG.
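The patent names only VGG, ImageNet pre-training, and an activation layer producing 128 feature maps, not a specific variant or layer index; the PyTorch sketch below (helper name hypothetical) assumes VGG-19 truncated after relu2_2, whose output happens to have exactly 128 channels:

```python
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

# VGG-19 pre-trained on ImageNet, truncated after relu2_2, an activation
# layer whose output has exactly 128 feature maps (assumed layer choice).
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features[:9].eval()

@torch.no_grad()
def extract_depth_features(block):
    """block: (H, W) grayscale numpy array scaled to [0, 1].
    Returns a (128, H/2, W/2) tensor of depth feature maps."""
    x = torch.from_numpy(block).float().unsqueeze(0).repeat(3, 1, 1)  # fake RGB
    x = TF.normalize(x, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    return vgg(x.unsqueeze(0)).squeeze(0)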
Step 4, solving the face portrait block reconstruction coefficients.
Using the K-nearest-neighbor search algorithm, finding, for each test photo block, the 10 most similar neighbor training photo blocks in the training photo sample block set, and simultaneously selecting from the training portrait sample block set the 10 neighbor training portrait blocks corresponding one-to-one to those photo blocks, where the coefficient of the k-th neighbor training portrait block is $w_{i,k}$, subject to

$$\sum_{k=1}^{10} w_{i,k} = 1,$$

where $k = 1, \ldots, 10$ denotes the index of a neighbor training portrait block.
The K-nearest-neighbor search algorithm comprises the following specific steps (see the sketch after these steps):
First, computing the Euclidean distance between the depth feature vector of each test photo block and the depth feature vectors of all training photo blocks.
Second, sorting all training photo blocks in ascending order of Euclidean distance.
Third, selecting the first 10 training photo blocks as the neighbor training photo blocks.
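A minimal numpy sketch of these three steps, assuming the depth feature maps of each block have been flattened into row vectors (helper name hypothetical):

```python
import numpy as np

def find_neighbors(test_features, train_features, K=10):
    """test_features: (N, D), train_features: (M, D), flattened depth features.
    Returns (N, K) indices of each test block's K nearest training blocks."""
    # Step 1: squared Euclidean distances via ||a-b||^2 = ||a||^2 - 2ab + ||b||^2.
    d2 = (np.sum(test_features ** 2, axis=1, keepdims=True)
          - 2.0 * test_features @ train_features.T
          + np.sum(train_features ** 2, axis=1))
    # Steps 2-3: sort each row ascending and keep the first K columns.
    return np.argsort(d2, axis=1)[:, :K]
```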
Using the Markov graph model formula to jointly model the depth features of all test photo blocks, the depth features of all neighbor training photo blocks, all neighbor training portrait blocks, the depth feature map coefficients $u_{i,l}$, and the neighbor training portrait block coefficients $w_{i,k}$.
The Markov graph model formula is as follows:

$$\min_{w,u}\;\sum_{(i,j)}\Big\|\sum_{k=1}^{10}w_{i,k}\,o_{i,k}-\sum_{k=1}^{10}w_{j,k}\,o_{j,k}\Big\|^{2}+\sum_{i=1}^{N}\sum_{l=1}^{128}u_{i,l}\Big\|d_{l}(x_{i})-\sum_{k=1}^{10}w_{i,k}\,d_{l}(x_{i,k})\Big\|^{2}$$

subject to $\sum_{k=1}^{10}w_{i,k}=1$ and $\sum_{l=1}^{128}u_{i,l}=1$, where $\min$ denotes the minimization operation, $\sum$ denotes the summation operation, $\|\cdot\|^{2}$ denotes the squared modulus, the first sum runs over all pairs $(i,j)$ of adjacent test photo blocks, $w_{i,k}$ denotes the coefficient of the k-th neighbor training portrait block of the i-th test photo block, $o_{i,k}$ denotes the pixel value vector of the overlapping portion of the k-th neighbor training portrait block of the i-th test photo block, $w_{j,k}$ and $o_{j,k}$ denote the corresponding quantities for the j-th test photo block, $u_{i,l}$ denotes the coefficient of the l-th depth feature map of the i-th test photo block, $d_{l}(x_{i})$ denotes the l-th feature map of the depth features of the i-th test photo block, and $d_{l}(x_{i,k})$ denotes the l-th feature map of the depth features of the k-th neighbor training photo block of the i-th test photo block.
Solving the Markov graph model formula yields the face portrait block reconstruction coefficients $w_{i,k}$.
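The full model above couples all test photo blocks through the overlap term and optimizes the w and u coefficients jointly, which amounts to a quadratic program. The sketch below is a deliberately simplified approximation, not the patent's solver: it drops the overlap-compatibility term and fixes uniform feature-map coefficients u_{i,l} = 1/128, under which each block's reconstruction coefficients reduce to the classic sum-to-one constrained least squares:

```python
import numpy as np

def reconstruction_weights(test_feat, neighbor_feats, reg=1e-6):
    """test_feat: (D,) flattened depth features of one test photo block.
    neighbor_feats: (K, D) features of its K neighbor training photo blocks.
    Returns w of shape (K,) minimizing ||test_feat - w @ neighbor_feats||^2
    subject to sum(w) = 1 (closed-form constrained least squares)."""
    diff = neighbor_feats - test_feat                   # (K, D) local differences
    G = diff @ diff.T                                   # (K, K) local Gram matrix
    G += reg * (np.trace(G) + 1e-12) * np.eye(len(G))   # stabilize the solve
    w = np.linalg.solve(G, np.ones(len(G)))             # unnormalized solution
    return w / w.sum()                                  # enforce sum-to-one
```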
Step 5, reconstructing face portrait blocks.
Multiplying the 10 neighbor training portrait blocks corresponding to each test photo block by their respective coefficients $w_{i,k}$ and summing the products to obtain the reconstructed face portrait block corresponding to that test photo block.
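In terms of the arrays used in the sketches above (names hypothetical: `w` from `reconstruction_weights`, `neighbor_portrait_blocks` of shape (10, b, b)), this weighted sum is a single numpy expression:

```python
# Weighted sum of the 10 neighbor training portrait blocks of one test block:
# w has shape (10,), neighbor_portrait_blocks has shape (10, b, b).
reconstructed_block = (w[:, None, None] * neighbor_portrait_blocks).sum(axis=0)
```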
Step 6, synthesizing the face portrait.
Stitching the reconstructed face portrait blocks corresponding to all test photo blocks to obtain the synthesized face portrait.
The method for stitching the reconstructed portrait blocks corresponding to all test photo blocks is as follows (see the sketch after these steps):
First, placing the reconstructed portrait blocks corresponding to all test photo blocks at their respective positions in the portrait.
Second, computing the average of the pixel values in the overlapping region between every two adjacent reconstructed face portrait blocks.
Third, replacing the pixel values of each overlapping region with that average to obtain the synthesized face portrait.
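A minimal Python sketch of this stitching procedure, reusing the block `positions` recorded by the partition sketch earlier; averaging the accumulated pixel values over every block that covers a pixel realizes the overlap averaging described above:

```python
import numpy as np

def stitch_blocks(blocks, positions, out_shape):
    """Place each reconstructed portrait block at its recorded position
    and average pixel values wherever blocks overlap."""
    acc = np.zeros(out_shape, dtype=np.float64)   # running sum of pixel values
    cnt = np.zeros(out_shape, dtype=np.float64)   # blocks covering each pixel
    b = blocks.shape[1]                           # block side length
    for block, (y, x) in zip(blocks, positions):
        acc[y:y + b, x:x + b] += block
        cnt[y:y + b, x:x + b] += 1.0
    return acc / np.maximum(cnt, 1.0)             # mean over overlapping blocks
```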
The effects of the present invention are further illustrated by the following simulation experiments.
1. Simulation experiment conditions are as follows:
The computer configuration for the simulation experiment is an Intel(R) Core i7-4790 3.6 GHz CPU with 16 GB of memory running a Linux operating system; the programming language is Python, and the database is the CUHK student database of the Chinese University of Hong Kong.
The prior-art comparison methods used in the simulation experiments are the following two:
The first is a method based on locally linear embedding, denoted LLE in the experiments; the reference is Q. Liu, X. Tang, H. Jin, H. Lu, and S. Ma, "A Nonlinear Approach for Face Sketch Synthesis and Recognition," in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 1005-1010, 2005.
The second is a method based on the Markov weight field model, denoted MWF in the experiments; the reference is H. Zhou, Z. Kuang, and K. Wong, "Markov Weight Fields for Face Sketch Synthesis," in Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 1091-1097, 2012.
2. Simulation experiment contents:
the invention has a group of simulation experiments.
And (3) synthesizing an image on a CUHK student database, and comparing the image with an image synthesized by a local linear embedded LLE and Markov weight field model MWF method.
3. Simulation experiment results and analysis:
the results of the simulation experiment of the present invention are shown in FIG. 2, in which FIG. 2(a) is a test photograph taken arbitrarily from a sample set of test photographs, FIG. 2(b) is a picture synthesized using the prior art local linear embedding LLE method, FIG. 2(c) is a picture synthesized using the prior art Markov weight field model MWF method, and FIG. 2(d) is a picture synthesized using the method of the present invention.
As can be seen from FIG. 2, because depth features replace the raw pixel value information of the image blocks, the method is more robust to environmental noise such as illumination; hence, for photos strongly affected by illumination, the portrait synthesized by the invention is of higher quality and contains less noise than those of the locally linear embedding (LLE) and Markov weight field model (MWF) methods.

Claims (6)

1. A human face portrait synthesis method based on depth map model feature learning, comprising the following steps:
(1) generating a sample set:
(1a) taking M face photos from the face photo sample set to form a training face photo sample set, where 2 ≤ M ≤ U-1 and U denotes the total number of face photos in the sample set;
(1b) forming a testing face photo set by the remaining face photos in the face photo sample set;
(1c) taking from the face portrait sample set the face portraits corresponding one-to-one to the photos in the training face photo sample set, to form a training face portrait sample set;
(2) generating an image block set:
(2a) randomly selecting a test face photo from the test face photo set, dividing the test face photo into photo blocks with the same size and the same overlapping degree, and forming a test photo block set;
(2b) dividing each photo in the training face photo sample set into photo blocks with the same size and the same overlapping degree to form a training photo sample block set;
(2c) dividing each portrait in a training face portrait sample set into portrait blocks with the same size and the same overlapping degree to form a training portrait sample block set;
(3) extracting depth features:
(3a) inputting all photo blocks in the training photo block set and the test photo block set into a deep convolution network VGG for object recognition which is trained on an object recognition database ImageNet, and carrying out forward propagation;
(3b) taking the 128 feature maps output by an intermediate layer of the deep convolutional network VGG as the depth features of each photo block, where the coefficient of the l-th feature map is $u_{i,l}$, subject to

$$\sum_{l=1}^{128} u_{i,l} = 1,$$

where $\sum$ denotes the summation operation, $i$ denotes the index of a test photo block, $i = 1, 2, \ldots, N$, $N$ denotes the total number of test photo blocks, and $l = 1, \ldots, 128$ denotes the index of a feature map;
(4) solving the face portrait block reconstruction coefficients:
(4a) using the K-nearest-neighbor search algorithm, finding, for each test photo block, the 10 most similar neighbor training photo blocks in the training photo sample block set, and simultaneously selecting from the training portrait sample block set the 10 neighbor training portrait blocks corresponding one-to-one to those photo blocks, where the coefficient of the k-th neighbor training portrait block is $w_{i,k}$, subject to

$$\sum_{k=1}^{10} w_{i,k} = 1,$$

where $k = 1, \ldots, 10$ denotes the index of a neighbor training portrait block;
(4b) using the Markov graph model formula to jointly model the depth features of all test photo blocks, the depth features of all neighbor training photo blocks, all neighbor training portrait blocks, the depth feature map coefficients $u_{i,l}$, and the neighbor training portrait block coefficients $w_{i,k}$;
(4c) solving the Markov graph model formula to obtain the face portrait block reconstruction coefficients $w_{i,k}$;
(5) Reconstructing the face image block:
multiplying the 10 neighbor training portrait blocks corresponding to each test photo block by their respective coefficients $w_{i,k}$ and summing the products, taking the result as the reconstructed face portrait block corresponding to that test photo block;
(6) synthesizing a face portrait:
stitching the reconstructed face portrait blocks corresponding to all test photo blocks to obtain the synthesized face portrait.
2. The method for synthesizing a human face portrait based on feature learning of a depth map model as claimed in claim 1, wherein: the overlapping degree in the step (2a), the step (2b), and the step (2c) means that the area of the overlapping region between two adjacent image blocks is 1/2 of the area of each image block.
3. The method for synthesizing a human face portrait based on feature learning of a depth map model as claimed in claim 1, wherein: the middle layer in the step (3b) refers to an activation function layer of a deep convolutional network VGG.
4. The method for synthesizing a human face portrait based on feature learning of a depth map model as claimed in claim 1, wherein: the K neighbor search algorithm in the step (4a) comprises the following specific steps:
step one, computing the Euclidean distance between the depth feature vector of each test photo block and the depth feature vectors of all training photo blocks;
step two, sorting all training photo blocks in ascending order of Euclidean distance;
step three, selecting the first 10 training photo blocks as the neighbor training photo blocks.
5. The method for synthesizing a human face portrait based on feature learning of a depth map model as claimed in claim 1, wherein: the Markov graph model formula in step (4b) is as follows:

$$\min_{w,u}\;\sum_{(i,j)}\Big\|\sum_{k=1}^{10}w_{i,k}\,o_{i,k}-\sum_{k=1}^{10}w_{j,k}\,o_{j,k}\Big\|^{2}+\sum_{i=1}^{N}\sum_{l=1}^{128}u_{i,l}\Big\|d_{l}(x_{i})-\sum_{k=1}^{10}w_{i,k}\,d_{l}(x_{i,k})\Big\|^{2}$$

subject to $\sum_{k=1}^{10}w_{i,k}=1$ and $\sum_{l=1}^{128}u_{i,l}=1$, where $\min$ denotes the minimization operation, $\sum$ denotes the summation operation, $\|\cdot\|^{2}$ denotes the squared modulus, the first sum runs over all pairs $(i,j)$ of adjacent test photo blocks, $w_{i,k}$ denotes the coefficient of the k-th neighbor training portrait block of the i-th test photo block, $o_{i,k}$ denotes the pixel value vector of the overlapping portion of the k-th neighbor training portrait block of the i-th test photo block, $w_{j,k}$ and $o_{j,k}$ denote the corresponding quantities for the j-th test photo block, $u_{i,l}$ denotes the coefficient of the l-th depth feature map of the i-th test photo block, $d_{l}(x_{i})$ denotes the l-th feature map of the depth features of the i-th test photo block, and $d_{l}(x_{i,k})$ denotes the l-th feature map of the depth features of the k-th neighbor training photo block of the i-th test photo block.
6. The method for synthesizing a face portrait based on feature learning of a depth map model as claimed in claim 1, wherein: the method for stitching the reconstructed portrait blocks corresponding to all test photo blocks in step (6) comprises the following steps:
first, placing the reconstructed portrait blocks corresponding to all test photo blocks at their respective positions in the portrait;
second, computing the average of the pixel values in the overlapping region between every two adjacent reconstructed face portrait blocks;
third, replacing the pixel values of each overlapping region with that average, to obtain the synthesized face portrait.
CN201710602696.9A 2017-07-21 2017-07-21 Face portrait synthesis method based on depth map model feature learning Active CN107392213B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710602696.9A CN107392213B (en) 2017-07-21 2017-07-21 Face portrait synthesis method based on depth map model feature learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710602696.9A CN107392213B (en) 2017-07-21 2017-07-21 Face portrait synthesis method based on depth map model feature learning

Publications (2)

Publication Number Publication Date
CN107392213A CN107392213A (en) 2017-11-24
CN107392213B true CN107392213B (en) 2020-04-07

Family

ID=60335789

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710602696.9A Active CN107392213B (en) 2017-07-21 2017-07-21 Face portrait synthesis method based on depth map model feature learning

Country Status (1)

Country Link
CN (1) CN107392213B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108154133B (en) * 2018-01-10 2020-04-14 西安电子科技大学 Face portrait-photo recognition method based on asymmetric joint learning
CN109145704B (en) * 2018-06-14 2022-02-22 西安电子科技大学 Face portrait recognition method based on face attributes
CN109920021B (en) * 2019-03-07 2023-05-23 华东理工大学 Face sketch synthesis method based on regularized width learning network
CN110069992B (en) * 2019-03-18 2021-02-09 西安电子科技大学 Face image synthesis method and device, electronic equipment and storage medium
TWI775006B (en) 2019-11-01 2022-08-21 財團法人工業技術研究院 Imaginary face generation method and system, and face recognition method and system using the same

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103984954A (en) * 2014-04-23 2014-08-13 西安电子科技大学宁波信息技术研究院 Image synthesis method based on multi-feature fusion
CN104700380A (en) * 2015-03-12 2015-06-10 陕西炬云信息科技有限公司 Face portrait compositing method based on single photos and portrait pairs
CN105608450A (en) * 2016-03-01 2016-05-25 天津中科智能识别产业技术研究院有限公司 Heterogeneous face identification method based on deep convolutional neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9672416B2 (en) * 2014-04-29 2017-06-06 Microsoft Technology Licensing, Llc Facial expression tracking


Also Published As

Publication number Publication date
CN107392213A (en) 2017-11-24

Similar Documents

Publication Publication Date Title
CN107392213B (en) Face portrait synthesis method based on depth map model feature learning
Gurrola-Ramos et al. A residual dense u-net neural network for image denoising
Yeh et al. Lightweight deep neural network for joint learning of underwater object detection and color conversion
Zhu et al. MetaIQA: Deep meta-learning for no-reference image quality assessment
Li et al. Linestofacephoto: Face photo generation from lines with conditional self-attention generative adversarial networks
Güçlütürk et al. Convolutional sketch inversion
CN108154133B (en) Face portrait-photo recognition method based on asymmetric joint learning
CN106599863A (en) Deep face identification method based on transfer learning technology
CN110363068B (en) High-resolution pedestrian image generation method based on multiscale circulation generation type countermeasure network
CN104517274B (en) Human face portrait synthetic method based on greedy search
EP3905194A1 (en) Pose estimation method and apparatus
CN110728629A (en) Image set enhancement method for resisting attack
Song et al. Image forgery detection based on motion blur estimated using convolutional neural network
CN114663685B (en) Pedestrian re-recognition model training method, device and equipment
Yang et al. Towards automatic embedding cost learning for JPEG steganography
CN111476727B (en) Video motion enhancement method for face-changing video detection
Huang et al. Flexible gait recognition based on flow regulation of local features between key frames
CN110069992B (en) Face image synthesis method and device, electronic equipment and storage medium
Zheng et al. Feater: An efficient network for human reconstruction via feature map-based transformer
CN114550110A (en) Vehicle weight identification method and system based on unsupervised domain adaptation
CN110490915A (en) A kind of point cloud registration method being limited Boltzmann machine based on convolution
Zhou et al. Photomat: A material generator learned from single flash photos
CN110503157B (en) Image steganalysis method of multitask convolution neural network based on fine-grained image
Shahreza et al. Template inversion attack against face recognition systems using 3d face reconstruction
CN114418003B (en) Double-image recognition and classification method based on attention mechanism and multi-size information extraction

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
TR01: Transfer of patent right
Effective date of registration: 2022-07-11
Address after: 518057, 2304, Block A, Building 2, Shenzhen International Innovation Valley, Dashi 1st Road, Xili Community, Xili Street, Nanshan District, Shenzhen, Guangdong Province
Patentee after: SHENZHEN AIMO TECHNOLOGY Co., Ltd.
Address before: No. 2 Taibai South Road, Yanta District, Xi'an, Shaanxi Province, 710071
Patentee before: XIDIAN University